
Challenge 2: Deploy an AWS Lambda Function that will perform Sentiment Analysis with Amazon Rekognition

Now that you have configured your DeepLens to send images to your S3 bucket, the next step is to process the face crops through Rekognition to extract emotion scores, store those scores in a DynamoDB table, and push them to CloudWatch so you can build a dashboard that tracks emotion metrics.

Create Rekognition Lambda

You are going to use AWS Lambda to create a script that does all three of these things every time a face crop is pushed to S3. Navigate to the Lambda console and, under the "Functions" dashboard, select "Create function" as you did before. This time you will author a function from scratch:


Next:

  • Under Name: Enter your rekognition lambda name.
  • Under Runtime: Select "Python 2.7".
  • Under Role: Select "Create new role from template(s)".
  • Under Role name: Enter your rekognition lambda role name.
  • Under Policy templates: Add "S3 object read-only permissions".

Then click "Create function."


You should now be on the lambda function screen. Before you start editing the lambda, you need to assign additional permissions to the lambda role. As you can tell from the services listed on the right, you currently only have access to "Amazon CloudWatch Logs" and "Amazon S3".


You need to add permissions for Rekognition and DynamoDB. Navigate to the IAM dashboard by searching for "IAM" in the "Services" drop-down. Once there, click "Roles" in the left sidebar and type in your rekognition lambda role name.


Select your role, and you should see two policies already created from the template.


Select "Attach Policy". First, search for "DynamDB", and select "AmazonDynamoDBFullAccess".


Next, search for "Rekognition" and select "AmazonRekognitionFullAccess".


Then, click "Attach policy", and you should now see these policies attached to your role.


Back on the lambda function page, you can now see the additional resources available to you on the right.


Since you want this lambda script to run every time a face crop is uploaded to S3, add an event trigger to the Lambda. On the left, you'll see a list of triggers; select "S3". At the bottom of the page, a configuration menu will open up:

  • Under Bucket: Select the bucket you created to store faces.
  • Under Event type: Select "PUT". You want the script to trigger when a PutObject call is made; the sketch after this list shows roughly the event your function will receive.
  • Under Prefix: Enter "faces". You want the script to trigger only on items uploaded under the "faces" prefix.
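For reference, here is an abbreviated sketch of the S3 event delivered to your function on each matching upload, trimmed to the fields the handler actually reads (the bucket name and key are placeholders):

# Abbreviated shape of the S3 "PutObject" event delivered to the lambda.
# The bucket name and object key below are placeholders.
{
    "Records": [
        {
            "s3": {
                "bucket": {"name": "your-faces-bucket"},
                "object": {"key": "faces/example-crop.jpg"}
            }
        }
    ]
}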


Then click "Add". Next, select the center box with your rekognition lambda's name. The menu at the bottom of the page will now let you manually enter the function code.


Next, copy and paste the code provided in this repo into the text editor in the lambda dashboard. You can find it under Challenge_2_Sentiment_Analysis as "rekognize-emotions.py", but it is included here as well for your convenience:

IMPORTANT: Remember to replace the DYNAMO_TABLE_NAME with your actual DynamoDB Table Name

You will want to download the file from that location, via the "raw" link.

from __future__ import print_function

import boto3
import urllib
import datetime

print('Loading function')

rekognition = boto3.client('rekognition')
cloudwatch = boto3.client('cloudwatch')
DYNAMO_TABLE_NAME = '<YOUR_DYNAMO_DB_TABLE>'

# --------------- Helper Function to call CloudWatch APIs ------------------

def push_to_cloudwatch(name, value):
    try:
        response = cloudwatch.put_metric_data(
            Namespace='string',  # note: the namespace is literally 'string'; you'll select it by this name in CloudWatch later
            MetricData=[
                {
                    'MetricName': name,
                    'Value': value,
                    'Unit': 'Percent'
                },
            ]
        )
        print("Metric pushed: {}".format(response))
    except Exception as e:
        print("Unable to push to cloudwatch\n e: {}".format(e))
        return True

# --------------- Helper Functions to call Rekognition APIs ------------------

def detect_faces(bucket, key):
    print("Key: {}".format(key))
    response = rekognition.detect_faces(Image={"S3Object":
                                               {"Bucket": bucket,
                                                "Name": key}},
                                        Attributes=['ALL'])

    if not response['FaceDetails']:
        print ("No Face Details Found!")
        return response

    push = False
    dynamo_obj = {}
    dynamo_obj['s3key'] = key

    for index, item in enumerate(response['FaceDetails'][0]['Emotions']):
        print("Item: {}".format(item))
        if int(item['Confidence']) > 10:
            push = True
            dynamo_obj[item['Type']] = str(round(item["Confidence"], 2))
            push_to_cloudwatch(item['Type'], round(item["Confidence"], 2))

    if push:  # Push only if at least one emotion was found
        table = boto3.resource('dynamodb').Table(DYNAMO_TABLE_NAME)
        table.put_item(Item=dynamo_obj)

    return response

# --------------- Main handler ------------------


def lambda_handler(event, context):
    '''Demonstrates S3 trigger that uses
    Rekognition APIs to detect faces, labels and index faces in S3 Object.
    '''

    # Get the object from the event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key'].encode('utf8'))
    try:
        # Calls rekognition DetectFaces API to detect faces in S3 object
        response = detect_faces(bucket, key)

        return response
    except Exception as e:
        print("Error processing object {} from bucket {}. ".format(key, bucket) +
              "Make sure your object and bucket exist and your bucket is in the same region as this function.")
        raise e

Please take a look at what this script does. The function lambda_handler runs when the lambda is triggered by an event, in this case a "PutObject" to your S3 bucket under the prefix "faces". The handler then calls detect_faces, which does the following:

  • Makes a detect_faces API call to Rekognition, handling an empty response (a sketch of the response shape follows this list)
  • Checks whether any emotion scores are greater than 10
  • If so, pushes the emotion type and confidence score to CloudWatch
  • If at least one significant emotion was found, stores the record in a DynamoDB table
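For context, the relevant portion of a DetectFaces response looks roughly like the following; the emotion types are real, but the confidence values here are illustrative:

# Abbreviated DetectFaces response; confidence values are illustrative.
{
    "FaceDetails": [
        {
            "Emotions": [
                {"Type": "HAPPY", "Confidence": 83.12},
                {"Type": "CALM", "Confidence": 11.45},
                {"Type": "CONFUSED", "Confidence": 1.02}
            ]
        }
    ]
}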

Once you've copied and pasted the code, you are almost ready to save the function so it can start triggering.


Before you do that, you need to create the DynamoDB table to store detected emotions and emotion scores.

Create DynamoDB Table

Navigate to the DynamoDB dashboard by searching for "DynamoDB" in the "Services" drop-down tab. Click "Create table" on the dashboard page (it may look different if you've created a table before).


Next:

  • Under Table name: Enter the table name specified in the lambda function, "rekognize-faces-your-name".
  • Under Primary key: Enter "s3key".
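Alternatively, you can create the same table programmatically; here is a minimal boto3 sketch, where the provisioned throughput values are just reasonable defaults:

import boto3

dynamodb = boto3.client('dynamodb')
dynamodb.create_table(
    TableName='rekognize-faces-your-name',  # must match DYNAMO_TABLE_NAME in the lambda
    KeySchema=[{'AttributeName': 's3key', 'KeyType': 'HASH'}],
    AttributeDefinitions=[{'AttributeName': 's3key', 'AttributeType': 'S'}],
    ProvisionedThroughput={'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5}
)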


Back in the console, click "Create". Once the table is created, return to your lambda function and click "Save" at the top right. Your lambda function is now active and should begin triggering on face crop uploads.

Emotion-tracking Dashboard using Cloudwatch

Now that you have created the lambda function for processing cropped faces and a DynamoDB table to record the results, you are going to build the dashboard that is the centerpiece of the application: real-time emotion tracking. At this point, your IoT device should be up and collecting face crops, if it isn't already.

Navigate to CloudWatch by searching for "CloudWatch" under the "Services" tab. Once at the dashboard, select "Dashboards" from the left sidebar.


Select "Create Dashboard", then enter your dashboard name.


Then select "Line" widget and click "Configure".


You'll then be able to configure your line widget.


  • Click on "Untitled graph" at the top left to name your graph.
  • In the "Metrics" menu, you should see a section that says "Custom Namespaces".
  • Click on "string" (if this is not there, your face crops haven't made it through the pipeline yet. Wait a few minutes).
  • Click on "Metrics with no dimensions"
  • Select all available metrics (there should be 7. These populate as the emotions are detected. If they're not all there, make some funny faces at your camera to get a range of detections.)

Then, click the "Graphed metrics" tab.


Here you can change how your metrics are displayed. By default, the points shown are averages over 5-minute periods. You may want to change the granularity to a smaller period to start seeing actual line graphs being drawn.

In addition, make sure to turn on "Auto refresh" and set the interval to 10 seconds.
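If you want to sanity-check that data points are arriving before you finish the dashboard, here is a quick boto3 sketch; the 'string' namespace is the one hard-coded in the lambda above, and 'HAPPY' is just one of the emotion metric names:

import datetime

import boto3

cloudwatch = boto3.client('cloudwatch')

# Pull the last 30 minutes of one emotion metric from the 'string' namespace
stats = cloudwatch.get_metric_statistics(
    Namespace='string',     # hard-coded in the lambda's push_to_cloudwatch
    MetricName='HAPPY',     # any emotion type that has been reported will work
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(minutes=30),
    EndTime=datetime.datetime.utcnow(),
    Period=60,
    Statistics=['Average'],
)
print(stats['Datapoints'])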


Finally, click "Create Widget" and you'll have successfully created the emotion tracking dashboard for your application! Now wait and see the results come in as your IoT device sends frames through your pipeline to detect emotions over time.


Back (Challenge 1: Facial Detection) | Next (Bonus Challenge: Custom Facial Detection with SageMaker)