AmazonRekognition


Overview

This tutorial covers how to create an Amazon Web Services account, how to set up a camera on the Raspberry Pi, and how to run Amazon Rekognition's facial recognition on a picture taken from the Pi. Before we can run facial recognition we must create an AWS user, create an S3 bucket, and upload pictures to this S3 bucket. The final product will be able to calculate the similarity between a picture taken on the Pi and various images saved in the S3 bucket.

Materials/Prerequisites

  • Atom, or another source code editor
  • A Raspberry Pi
  • A Pi camera

Process

Setting Up AWS Account

1.) Visit the following URL, click "Sign In to the Console," create a new AWS account, and proceed by entering your email, password, and account name.
https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/

2.) Follow the link below to download the AWS SDK for Python (Boto3).
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#installation
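
If pip is available on the Pi, the installation from that quickstart is typically a single command (a sketch; see the linked guide for alternatives such as installing into a virtual environment):

    pip install boto3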

3.) After completing the registration process and confirming your account, sign in to the console, open the IAM console, and choose Users, then Add user.

  • Give your user a name, and proceed with programmatic access.
  • Add the following permissions to the user: AmazonS3FullAccess, AmazonRekognitionFullAccess, and AmazonS3ReadOnlyAccess.
  • After creating the user, click on Security credentials, download the access key ID and secret access key, and store them in a safe file. These credentials should not be shared or pushed to GitHub.

4.) Next, go to Services, then S3, then Create bucket; note the bucket's region and name for later use. Enable static website hosting and the ACL permissions.
5.) Next, we want to upload images from your computer to the bucket; these will be tested for facial similarity against the picture you will take. Click Upload and choose an image file (preferably saved as a .jpg) from your computer.

Setting Up Pi

1.) Now that AWS is set up, we are going to set up the Pi side of the facial recognition. First, we must connect the camera module to the Raspberry Pi: pull up on the camera clasp, insert the camera's ribbon cable so that the metal contacts face away from the USB ports, and then push the clasp back into place.

[Image: Raspberry Pi camera module]


2.) Run sudo raspi-config, enable the camera interface, then press Finish.
3.) Install the camera library with sudo apt-get install python-picamera, then reboot your Raspberry Pi.
4.) In Python, run the following code (from https://medium.com/@petehouston/capture-images-from-raspberry-pi-camera-module-using-picamera-505e9788d609) to ensure the camera is functioning properly.

[Image: test camera code screenshot]
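
The screenshot is not reproduced here; a minimal sketch of an equivalent camera test, assuming the python-picamera library installed above and a hypothetical output path of /home/pi/test.jpg:

    # Quick camera test: preview for a couple of seconds, then capture one frame.
    from time import sleep
    from picamera import PiCamera

    camera = PiCamera()
    camera.start_preview()
    sleep(2)                             # give the sensor time to adjust exposure
    camera.capture('/home/pi/test.jpg')  # hypothetical path; change as needed
    camera.stop_preview()

If the camera is working, test.jpg should appear at the chosen path.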

Connect AWS and Raspberry Pi

1.) First, you need to import boto3, which allows Python to communicate with Amazon Web Services, and picamera, which provides a Python interface to the Raspberry Pi camera.
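
A sketch of those imports (the time import is an assumption, used later for the camera delay):

    import boto3                   # AWS SDK for Python
    from picamera import PiCamera  # Raspberry Pi camera interface
    from time import sleep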

2.) Next, you need to connect Python to your AWS account by entering the AWS access key ID and the secret access key (which were downloaded while setting up the IAM user). The region name, which can be found in the S3 console, should also be included here.
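
The original code block is not shown; a hedged sketch, assuming the credentials are pasted directly into the script (acceptable for a class project, but never commit them to GitHub):

    # Credentials downloaded while creating the IAM user; placeholders below.
    AWS_ACCESS_KEY_ID = 'YOUR_ACCESS_KEY_ID'
    AWS_SECRET_ACCESS_KEY = 'YOUR_SECRET_ACCESS_KEY'
    REGION = 'us-east-1'   # replace with your bucket's region

    s3 = boto3.client(
        's3',
        aws_access_key_id=AWS_ACCESS_KEY_ID,
        aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
        region_name=REGION,
    )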

3.) Insert the following code, again entering your region in the (enter region) blank and replacing bucket with the name of your S3 bucket.
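
That code is not reproduced here; a sketch of what this step most likely sets up (a Rekognition client and the bucket name), assuming the same credentials are reused:

    # Rekognition client in the same region as the bucket.
    rekognition = boto3.client(
        'rekognition',
        aws_access_key_id=AWS_ACCESS_KEY_ID,
        aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
        region_name=REGION,   # the '(enter region)' blank in the original
    )

    bucket = 'your-bucket-name'   # replace with the name of your S3 bucket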

4.) Next, we want to instantiate our list of faces that were uploaded to our bucket. These images will be the faces that our code tries to match with the picture taken by the Pi.
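
A sketch of that list; the filenames are placeholders for the images you uploaded during the AWS setup:

    # Keys (filenames) of the known-face images already in the S3 bucket.
    known_faces = ['alice.jpg', 'bob.jpg']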

5.) Next, we create key_target, which will be the filename of the picture uploaded from the Pi. We also define two boolean variables, which we will use later to distinguish an image with a person from an image with no person, and a face that is recognized from a face that is not. Our min_sim variable sets the baseline similarity we require to consider two faces a match.
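
A sketch of those variables; IS_FACE appears later in this tutorial, while MATCH_FOUND and the 80% threshold are assumed values:

    key_target = 'intruderface.jpg'  # filename the Pi's picture will get in S3
    IS_FACE = False                  # True once a face is detected in the picture
    MATCH_FOUND = False              # True once a known face matches (name assumed)
    min_sim = 80                     # minimum similarity (%) to call two faces a match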

6.) Now we want to use our Pi camera to take a picture and upload it to the S3 bucket with the filename intruderface.jpg.
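
A hedged sketch of that capture-and-upload step, assuming a local path of /home/pi/intruderface.jpg:

    camera = PiCamera()
    camera.start_preview()
    sleep(2)                                     # let the sensor adjust
    camera.capture('/home/pi/intruderface.jpg')  # local path is an assumption
    camera.stop_preview()

    # Upload the capture to the bucket under the key used as key_target.
    s3.upload_file('/home/pi/intruderface.jpg', bucket, key_target)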

7.) The detect_faces function runs and returns the following attributes: bounding box, confidence, facial landmarks, facial attributes, quality, pose, and emotions. We can use these attributes to determine facial similarities. Again, enter your region_name here.
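
A sketch of that call against the uploaded picture, using the rekognition client created above (how the original script is organized is an assumption):

    response = rekognition.detect_faces(
        Image={'S3Object': {'Bucket': bucket, 'Name': key_target}},
        Attributes=['ALL'],   # include landmarks, quality, pose, emotions, etc.
    )
    face_details = response['FaceDetails']   # one entry per detected face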

8.) The compare_faces function provides a similarity score indicating how closely a face in the taken image matches one from our list.
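
A sketch of a single comparison between one known face and the Pi's picture (the loop over the whole list is shown in step 10):

    comparison = rekognition.compare_faces(
        SourceImage={'S3Object': {'Bucket': bucket, 'Name': known_faces[0]}},
        TargetImage={'S3Object': {'Bucket': bucket, 'Name': key_target}},
        SimilarityThreshold=min_sim,  # only matches at or above min_sim are returned
    )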

9.) This operation uses our boolean, IS_FACE, marking it True if a face is detected and False if no face is detected.
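
One way to set that flag from the detect_faces response above (a sketch, not necessarily how the original code did it):

    IS_FACE = len(face_details) > 0   # at least one face found in the Pi's picture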

10.) If our boolean IS_FACE is True, we now want to determine whether this face matches a face in our list. This step compares the taken image against each face in our list, and if the minimum similarity threshold is met, it matches the image to that user.
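
A sketch of that loop, reusing the variables introduced above (matched_user is an assumed name):

    matched_user = None
    if IS_FACE:
        for face in known_faces:
            result = rekognition.compare_faces(
                SourceImage={'S3Object': {'Bucket': bucket, 'Name': face}},
                TargetImage={'S3Object': {'Bucket': bucket, 'Name': key_target}},
                SimilarityThreshold=min_sim,
            )
            if result['FaceMatches']:   # non-empty only when similarity >= min_sim
                MATCH_FOUND = True
                matched_user = face
                break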

11.) If the minimum similarity is not reached with any of the faces, we conclude it is an unknown user.
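
Continuing the sketch, the unknown-user case is simply the loop above finishing without setting MATCH_FOUND (the variable name is an assumption):

    # A face was detected, but no known face reached min_sim similarity.
    is_unknown = IS_FACE and not MATCH_FOUND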

12.) Finally, we want to print out our results with a final print statement.
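
A sketch of that final report; the exact wording of the original output is not reproduced:

    if not IS_FACE:
        print('No face detected in the picture taken by the Pi.')
    elif is_unknown:
        print('Face detected, but it does not match anyone in the list (unknown user).')
    else:
        print('Face recognized: ' + matched_user + ' (similarity threshold ' + str(min_sim) + '%).')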


Our results should look something like this…