This page represents a project proposal; information about the completed project can be found at:

http://www.stanford.edu/~neel/cs223bfinalproj


Objective:

Given the current trend towards miniaturization of electronics, it is feasible that in the near future a person will easily carry any number of powerful electronic devices on his or her body. The availability of powerful computers in increasingly small packages will allow algorithms in computer vision - a field that has traditionally been constrained to high-end desktops - to run on laptops and handhelds.

Consequently, advances in computer vision will soon be able to contribute to a traditionally "low-tech" field: assistive devices for the blind. The goal of this project is to develop a system that extracts information from a moving camera and presents important or especially salient visual features in alternative sensory modalities.


Proposal:

Using a laptop, a digital video camera, and a haptic display, we aim to create a system that captures images and renders key features via haptic and auditory feedback. We have several specific feedback mechanisms that we hope to implement; if they are successful, we will explore other features that can be computed in real time.
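
As a rough sketch of the kind of loop we have in mind (OpenCV for capture, an edge-density "saliency" measure, and a print call in place of haptic/audio rendering are all placeholder assumptions, not design decisions):

# Rough sketch of the capture -> feature extraction -> feedback loop.
# OpenCV (cv2) and the edge-density "saliency" measure are placeholder
# choices; the print call stands in for the haptic/audio rendering.
import cv2
import numpy as np

def frame_saliency(frame, grid=(3, 3)):
    # Crude saliency proxy: fraction of edge pixels in each cell of a coarse grid.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    h, w = edges.shape
    rows, cols = grid
    cells = np.zeros(grid)
    for r in range(rows):
        for c in range(cols):
            cell = edges[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            cells[r, c] = cell.mean() / 255.0
    return cells

def render_feedback(cells):
    # Stub: report the strongest region; the real system would map this to a
    # haptic force or an audio cue instead of printing it.
    r, c = np.unravel_index(np.argmax(cells), cells.shape)
    print("most salient region: row %d, col %d, strength %.2f" % (r, c, cells[r, c]))

cap = cv2.VideoCapture(0)  # default camera; interrupt with Ctrl+C
while True:
    ok, frame = cap.read()
    if not ok:
        break
    render_feedback(frame_saliency(frame))
cap.release()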

The project will initially focus on developing the architecture that will allow these systems to interact in real time. Due to the hardware-dependent nature of the APIs involved, this will likely be a difficult task in and of itself.
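
One plausible structure, sketched here purely as an assumption rather than a committed design, decouples camera capture from processing with a single-slot queue so that slow vision code never stalls the feedback loop:

# One assumed way to keep the system responsive: a capture thread feeds a
# single-slot queue, so the processing/feedback loop always sees the most
# recent frame and slow vision code never blocks the camera.
import queue
import threading
import cv2

frames = queue.Queue(maxsize=1)

def capture_loop(device=0):
    cap = cv2.VideoCapture(device)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frames.full():
            try:
                frames.get_nowait()  # drop the stale frame
            except queue.Empty:
                pass
        frames.put(frame)

def processing_loop():
    while True:
        frame = frames.get()
        # Placeholder for feature extraction and haptic/audio updates.
        print("processing frame of size", frame.shape)

threading.Thread(target=capture_loop, daemon=True).start()
processing_loop()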

Once the basic architecture is in place, we hope to explore the psychological aspects of the project, with a particular focus on which visual features are most useful to a user and which methods of presentation convey those features most effectively.


Group Members:

Dan Morris (agentmorris@gmail.com)

Neel Joshi (neel@stanford.edu)


Resources: