EyeTracking Intro


First Phase: In this phase of the project, we focus on software development. We will use an infrared camera to capture images of a human eye in LabVIEW. We will employ template matching to locate the center of the pupil, using a small patch of dark pixels as the template. We will also develop adaptive search methods, in which the search space for each frame is chosen based on the pupil-location estimates from previous frames. This adaptive search greatly reduces the computational complexity of the algorithm, which is essential for real-time tracking. In addition, we will work on hybrid methods that combine template matching with feature selection, to obtain robust and computationally inexpensive algorithms that are insensitive to image noise and to the orientation of the camera used to capture the images.
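To illustrate the template matching and adaptive search described above, the following is a minimal sketch in Python using OpenCV rather than the LabVIEW implementation planned for this phase. The template size, search-window margin, and the use of a uniformly dark patch matched with a sum-of-squared-differences score are assumptions for illustration only.

<syntaxhighlight lang="python">
import cv2
import numpy as np

TEMPLATE_SIZE = 21   # assumed side length (pixels) of the dark-pixel template
SEARCH_MARGIN = 40   # assumed half-width of the adaptive search window

# Template: a uniformly dark patch approximating the pupil's appearance.
template = np.zeros((TEMPLATE_SIZE, TEMPLATE_SIZE), dtype=np.uint8)

def locate_pupil(frame, prev_center=None):
    """Return an (x, y) pupil-center estimate for a grayscale frame.

    When prev_center (the estimate from the previous frame) is available,
    only a small window around it is searched, which is what keeps the
    per-frame computational cost low.
    """
    h, w = frame.shape
    if prev_center is None:
        x0, y0, x1, y1 = 0, 0, w, h          # first frame: search everywhere
    else:
        px, py = prev_center
        x0 = max(0, px - SEARCH_MARGIN)
        y0 = max(0, py - SEARCH_MARGIN)
        x1 = min(w, px + SEARCH_MARGIN)
        y1 = min(h, py + SEARCH_MARGIN)
    roi = frame[y0:y1, x0:x1]

    # Fall back to a full-frame search if the clipped window is too small.
    if roi.shape[0] < TEMPLATE_SIZE or roi.shape[1] < TEMPLATE_SIZE:
        roi, x0, y0 = frame, 0, 0

    # Sum-of-squared-differences against the dark template: the minimum
    # lies over the darkest (most pupil-like) region of the search window.
    scores = cv2.matchTemplate(roi, template, cv2.TM_SQDIFF)
    _, _, min_loc, _ = cv2.minMaxLoc(scores)
    cx = x0 + min_loc[0] + TEMPLATE_SIZE // 2
    cy = y0 + min_loc[1] + TEMPLATE_SIZE // 2
    return cx, cy

# Typical tracking loop: feed each frame the estimate from the previous one.
# center = None
# for frame in frames:          # frames: sequence of grayscale images
#     center = locate_pupil(frame, center)
</syntaxhighlight>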

Second Phase: In the second phase of the project, we will build the hardware system, which records images using a camera mounted on a pair of sunglasses. We will port the software developed during the first phase onto a suitable microcontroller, which performs the processing and generates a control signal that can be used to move prosthetic limbs. We will also develop hardware to transmit the recorded images to the microcontroller wirelessly.
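As a purely hypothetical illustration of the control-signal step, the short sketch below maps a pupil-center estimate to a coarse directional command on the microcontroller side. The dead-zone threshold, command names, and gaze-to-command mapping are illustrative assumptions, not the project's actual interface.

<syntaxhighlight lang="python">
# Hypothetical mapping from a pupil-center estimate to a coarse directional
# command; the threshold and command names are illustrative assumptions only.
DEAD_ZONE = 15           # pixels of displacement treated as "no movement"

def pupil_to_command(center, rest_center):
    """Map a pupil position to one of five example commands."""
    dx = center[0] - rest_center[0]
    dy = center[1] - rest_center[1]
    if abs(dx) < DEAD_ZONE and abs(dy) < DEAD_ZONE:
        return "HOLD"
    if abs(dx) >= abs(dy):
        return "RIGHT" if dx > 0 else "LEFT"
    return "DOWN" if dy > 0 else "UP"
</syntaxhighlight>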

The eye tracking project would help patients with various tasks such as communication, writing emails, drawing, and making music. More advanced applications of this work include cognitive studies, laser refractive surgery, computer usability, translation process research, infant research, sports training, and commercial eye tracking.