Overview
FingerSpark is a program that tracks a user’s hand and individual fingers in a video feed, then interprets a gesture from the movement of the user’s fingertips. The user wears a glove with differently colored fingertips, making it easier for the camera to pinpoint the two-dimensional location of each finger. The user positions their hand 2-3 feet in front of the camera, where a video feed is recorded and interpreted on the Raspberry Pi B+’s CPU. The final product is mounted on a tripod for the user’s convenience. To detect the movement of brightly colored points approximately 2-3 feet from the camera, we use the Raspberry Pi Camera Module’s slow-motion video mode, which captures 90 frames per second at a resolution of 640x480.
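As a concrete illustration of this capture mode, here is a minimal sketch assuming the standard picamera Python library; the output file name and the two-second recording window are illustrative placeholders, not values from our project.

```python
# Minimal capture sketch (assumes the picamera library on a Raspberry Pi).
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)          # VGA, the mode that supports 90 fps
    camera.framerate = 90                   # slow-motion video mode
    camera.start_recording('gesture.h264')  # placeholder file name
    camera.wait_recording(2)                # placeholder 2-second gesture window
    camera.stop_recording()
```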
To achieve the desired functionality, we wrote an image-processing program that analyzes individual frames from the camera’s video feed. We mask each frame to isolate the colored fingertips, then combine the masked frames with a bitwise OR to create a composite mask. We then compare this composite image against a series of templates, using cropping techniques and an adapted form of the Hooke-Jeeves algorithm. This pattern-search optimization finds the template with the highest degree of similarity to the composite image, thereby determining which gesture the user performed.
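To make the pipeline concrete, here is a minimal sketch assuming OpenCV (cv2) and NumPy; the HSV color bounds, the helper names, and the simplified compass-style pattern search standing in for our adapted Hooke-Jeeves variant are all illustrative assumptions, not our exact code.

```python
# Sketch of the masking, compositing, and template-matching pipeline
# (assumes OpenCV and NumPy; color bounds and step sizes are placeholders).
import cv2
import numpy as np

def composite_mask(frames, lower_hsv, upper_hsv):
    """OR together per-frame color masks into one composite mask."""
    composite = None
    for frame in frames:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, lower_hsv, upper_hsv)  # isolate one fingertip color
        composite = mask if composite is None else cv2.bitwise_or(composite, mask)
    return composite

def score(mask, template, dx, dy):
    """Fraction of pixels that agree after shifting the template by (dx, dy)."""
    shifted = np.roll(template, (dy, dx), axis=(0, 1))
    return np.count_nonzero(mask == shifted) / mask.size

def best_alignment(mask, template, step=16):
    """Pattern search (in the spirit of Hooke-Jeeves) for the best offset."""
    x = y = 0
    best = score(mask, template, x, y)
    while step >= 1:
        moved = False
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            s = score(mask, template, x + dx, y + dy)
            if s > best:
                best, x, y, moved = s, x + dx, y + dy, True
        if not moved:
            step //= 2  # refine the step when no exploratory move improves the score
    return best

# The recognized gesture is the template with the highest aligned similarity:
# gesture = max(templates, key=lambda t: best_alignment(composite, templates[t]))
```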
Team Members
- David Battel
- Connor Goggins
- Kjartan Brownell (TA)
Objective
Our goal in creating FingerSpark is to work towards eliminating the barriers to perfectly natural user control of electronic devices. We believe that our product will be an essential next step in developing three-dimensional operating systems, creating robots that can flawlessly mimic the fine motor skills of humans, and producing interactive augmented reality technologies.
Our demonstration at the end of the semester will consist of a user wearing the glove with colored fingertips and making a gesture of their choice in front of the camera. After the video is recorded, our program will process the input, match it against our set of templates, and output which gesture the user made.
Budget
- Raspberry Pi B+ - $29.95 (Will likely use classroom kit)
- Raspberry Pi Camera Module - $24.99 (Need to purchase)
- Set of comfortable black gloves - 2 pairs: $7.89 x 2 = $15.78 (Need to purchase)
- Spray Paint Set: $15.99 (Need to purchase)
- Tripod: $19.99 (Need to purchase)
- White Backdrop: $9.50 (Need to purchase)
TOTAL: $86.25 (sum of the items to purchase: $24.99 + $15.78 + $15.99 + $19.99 + $9.50; the Raspberry Pi B+ will likely come from the classroom kit and is excluded)
Gantt Chart
[Gantt_Final_Image.png: The Gantt Chart for our project.]