This page represents a project proposal; information about the completed project can be found at:
http://www.stanford.edu/~neel/cs223bfinalproj
Objective:
Given the current trend toward miniaturization of electronics, it is likely that in the near future a person will be able to carry any number of powerful electronic devices on his or her body. The availability of powerful computers in increasingly small packages will allow algorithms in computer vision - a field that has traditionally been confined to high-end desktops - to run on laptops and handhelds.
Consequently, advances in computer vision will soon be able to contribute to a traditionally "low-tech" field: assistive devices for the blind. The goal of this project is to develop a system that extracts information from a moving camera and presents the most salient visual features in alternative sensory modalities.
Using a laptop, a digital video camera, and a haptic display, we aim to create a system that captures images and renders key features via haptic and auditory feedback. We have several specific feedback mechanisms that we hope to implement; if they are successful, we will explore other features that can be computed in real time.
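To make the intended capture-process-render loop concrete, the following is a minimal sketch, not the project's actual implementation: it assumes OpenCV for camera capture, uses per-region edge density as a crude stand-in for whichever salience features prove useful, and stubs the haptic/auditory drivers with a hypothetical render_feedback() function.

import time

import cv2
import numpy as np


def render_feedback(grid):
    # Hypothetical placeholder for the haptic/auditory display driver;
    # here it simply prints one intensity value per grid cell.
    print(" ".join(f"{v:.2f}" for v in grid.ravel()))


def salience_grid(frame, rows=3, cols=3):
    # Crude salience stand-in: the fraction of edge pixels in each
    # cell of a coarse grid laid over the image.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    h, w = edges.shape
    grid = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            cell = edges[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            grid[r, c] = cell.mean() / 255.0
    return grid


cap = cv2.VideoCapture(0)  # default camera
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        render_feedback(salience_grid(frame))
        time.sleep(0.1)  # roughly 10 Hz feedback refresh
except KeyboardInterrupt:
    pass
finally:
    cap.release()

In a real system the print stub would be replaced by calls into the haptic display's driver and an audio synthesis layer, and the edge-density measure by whichever features turn out to matter most to users.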
Once the basic architecture is in place, we hope to explore the psychological aspects of the project, focusing on which visual features are most useful to a user and which methods of presentation convey those features most effectively.
Group Members:
Dan Morris (agentmorris@gmail.com)
Neel Joshi (neel@stanford.edu)