diff --git a/identity-verification/module_1/1.0 SETUP - Gather images and setup S3 buckets for IDV modules.ipynb b/identity-verification/module_1/1.0 SETUP - Gather images and setup S3 buckets for IDV modules.ipynb index 83f0f13..4197df1 100644 --- a/identity-verification/module_1/1.0 SETUP - Gather images and setup S3 buckets for IDV modules.ipynb +++ b/identity-verification/module_1/1.0 SETUP - Gather images and setup S3 buckets for IDV modules.ipynb @@ -87,10 +87,7 @@ "source": [ "s3 = boto3.client('s3')\n", "try:\n", - " s3.create_bucket(\n", - " Bucket=bucket_name,\n", - " CreateBucketConfiguration={'LocationConstraint': aws_region}\n", - " )\n", + " s3.create_bucket(Bucket=bucket_name)\n", "except Exception as err:\n", " print(\"ERROR: {}\".format(err))" ] diff --git a/identity-verification/module_1/1.1 Getting Started.ipynb b/identity-verification/module_1/1.1 Getting Started.ipynb index d7ef779..09e2e2b 100644 --- a/identity-verification/module_1/1.1 Getting Started.ipynb +++ b/identity-verification/module_1/1.1 Getting Started.ipynb @@ -93,7 +93,6 @@ "metadata": {}, "outputs": [], "source": [ - "# there should be close to 1000 files in the bucket. \n", "def list_s3_files_using_client(bucket_name):\n", " \"\"\"\n", " This functions list all files in s3 bucket.\n", diff --git a/identity-verification/module_2/2.1 Rekognition IDV API Examples.ipynb b/identity-verification/module_2/2.1 Rekognition IDV API Examples.ipynb index b65130a..5a1acc6 100644 --- a/identity-verification/module_2/2.1 Rekognition IDV API Examples.ipynb +++ b/identity-verification/module_2/2.1 Rekognition IDV API Examples.ipynb @@ -14,7 +14,7 @@ "\n", "-------\n", "\n", - "In-person user identity verification is slow to scale, costly, and high friction for users. Machine learning powered facial biometrics can enable online user identity verification. Amazon Rekognition offers pre-trained facial recognition and analysis capabilities that you can quickly add to your user onboarding and authentication work flows to verify opted-in users' identity online. \n", + "In-person user identity verification is slow to scale, costly, and high friction for users. Machine learning powered facial biometrics can enable online user identity verification. Amazon Rekognition offers pre-trained facial recognition and analysis capabilities that you can quickly add to your user onboarding and authentication workflows to verify opted-in users' identity online. \n", "\n", "In this notebook, we'll use the Amazon Rekgonition's key APIs for Identity Verification. After running this notebook you should be able to use the following APIs:\n", "\n", @@ -317,7 +317,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## IndexFaces \n", + "## Index Multiple Faces \n", "-----\n", "\n", "The easy way to index multiple faces is to simply make a list of faces and loop over the list, in this case we are going to use the \"reference name\" as our external image id and the reference image will be the image we add to our collection. Of course you can use a variety of parallel processing methods to speed up indexing operations, however in this case here we'll keep it simple and just use a for loop. Note we are using about 100 images so this may take a few seconds. \n", @@ -477,7 +477,7 @@ "----\n", "Pass the FaceID above to search_faces, this will identify potential matches within the collection. 
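If you would rather not copy a FaceId by hand, one option is to list the faces already indexed and feed one of those ids into `search_faces`. This is a minimal sketch, assuming the `rek_client` and `collection_name` variables defined earlier in this notebook; the `MaxResults`, `FaceMatchThreshold`, and `MaxFaces` values are only illustrative.

```python
# Sketch: grab a FaceId from the collection instead of pasting one manually.
# Assumes rek_client and collection_name are defined earlier in this notebook.
faces = rek_client.list_faces(CollectionId=collection_name, MaxResults=10)["Faces"]

if not faces:
    print("No faces indexed yet -- run the IndexFaces cells above first.")
else:
    face_id = faces[0]["FaceId"]
    response = rek_client.search_faces(
        CollectionId=collection_name,
        FaceId=face_id,
        FaceMatchThreshold=90,  # illustrative threshold
        MaxFaces=5,
    )
    for match in response["FaceMatches"]:
        print("{} -> {:.2f}% similar".format(
            match["Face"]["ExternalImageId"], match["Similarity"]))
```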
\n", "\n", - "**Select a face id from the results above and enter it below**" + "**Select a face id from the results above and enter it below** " ] }, { @@ -486,7 +486,6 @@ "metadata": {}, "outputs": [], "source": [ - "\n", "face_id = \"\" # replace with your own face ID\n", "try:\n", " response = rek_client.search_faces(\n", @@ -664,10 +663,12 @@ "metadata": {}, "outputs": [], "source": [ - "response = rek_client.delete_collection(\n", - " CollectionId= collection_name\n", - ")\n", - "response" + "try:\n", + " rek_client.delete_collection(\n", + " CollectionId=collection_name\n", + " )\n", + "except Exception as err:\n", + " print(\"ERROR: {}\".format(err))" ] } ], diff --git a/identity-verification/module_2/2.2 Onboarding IDV API Example.ipynb b/identity-verification/module_2/2.2 Onboarding IDV API Example.ipynb index d3f9f4c..176b96f 100644 --- a/identity-verification/module_2/2.2 Onboarding IDV API Example.ipynb +++ b/identity-verification/module_2/2.2 Onboarding IDV API Example.ipynb @@ -95,7 +95,7 @@ "## Name your Collection \n", "\n", "# -- onboarded users collection --\n", - "collection_name = '' # name your collection something like \"onboarded-users\"\n", + "collection_name = ' ' # name your collection something like \"onboarded-users\"\n", "\n", "try:\n", " rek_client.create_collection(\n", @@ -279,7 +279,14 @@ " 'Bucket':bucket_name,\n", " 'Name':id_image}},\n", " Attributes=['ALL'])\n", - "response" + "print(\"-- 1st face found:\")\n", + "print(response['FaceDetails'][0]['BoundingBox'])\n", + "print(response['FaceDetails'][0]['Quality'])\n", + "\n", + "print(\"-- 2nd face found:\")\n", + "print(response['FaceDetails'][1]['BoundingBox'])\n", + "print(response['FaceDetails'][1]['Quality'])\n", + "\n" ] }, { @@ -324,7 +331,7 @@ "'Quality': {'Brightness': 94.17919921875, 'Sharpness': 46.02980041503906}\n", " \n", "# -- selfie quality --\n", - "'Quality': {'Brightness': 93.77082824707031,'Sharpness': 20.927310943603516}\n", + "'Quality': {'Brightness': 89.2042007446289, 'Sharpness': 53.330047607421875}\n", "```" ] }, @@ -339,7 +346,7 @@ " 'Bucket':bucket_name,\n", " 'Name':selfie_image}},\n", " Attributes=['ALL'])\n", - "response" + "print(response['FaceDetails'][0]['Quality'])" ] }, { @@ -545,7 +552,7 @@ "source": [ "## Clean up\n", "----\n", - "Here we simply need to delete our collections \n" + "As part of our cleanup, we can delete our two collections. This will delete the collections and all the face vectors contained within.\n" ] }, { @@ -560,17 +567,18 @@ " rek_client.delete_collection(\n", " CollectionId=collection_name\n", " )\n", - "except:\n", - " print(\"collection: {} NOT FOUND\".format(collection_name))\n", "\n", + "except Exception as err:\n", + " print(\"ERROR: {}\".format(err))\n", + " \n", "# fraudulent user collection \n", "\n", "try:\n", " rek_client.delete_collection(\n", " CollectionId=fraud_collection_name\n", " )\n", - "except:\n", - " print(\"collection: {} NOT FOUND\".format(fraud_collection_name))" + "except Exception as err:\n", + " print(\"ERROR: {}\".format(err))" ] } ], diff --git a/identity-verification/module_2/2.3 Authentication IDV API Example.ipynb b/identity-verification/module_2/2.3 Authentication IDV API Example.ipynb index 3f5e424..125711e 100644 --- a/identity-verification/module_2/2.3 Authentication IDV API Example.ipynb +++ b/identity-verification/module_2/2.3 Authentication IDV API Example.ipynb @@ -193,7 +193,7 @@ "----\n", "\n", "Here we want to do some basic checks:\n", - "1. 
that we can detect that there is only one face in the selie \n", + "1. that we can detect that there is only one face in the selfie \n", "2. the quality (sharpness and brightness) are sufficient to match with \n", "\n", "Note: we could do several other checks, but we'll see those in module 3.\n", @@ -347,7 +347,7 @@ "
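As a concrete illustration of those two checks, here is a small sketch that rejects a selfie when it does not contain exactly one face or when the reported quality looks too low. It assumes `rek_client`, `bucket_name`, and `selfie_image` as used in these notebooks, and the numeric thresholds are illustrative rather than recommended values.

```python
# Sketch of the two basic selfie checks. Assumes rek_client, bucket_name and
# selfie_image as used in these notebooks; thresholds are illustrative only.
response = rek_client.detect_faces(
    Image={"S3Object": {"Bucket": bucket_name, "Name": selfie_image}},
    Attributes=["ALL"],
)
face_details = response["FaceDetails"]

if len(face_details) != 1:
    print("Check 1 failed: expected 1 face, found {}".format(len(face_details)))
else:
    quality = face_details[0]["Quality"]
    if quality["Brightness"] < 40 or quality["Sharpness"] < 20:
        print("Check 2 failed: low quality {}".format(quality))
    else:
        print("Selfie passed both checks: {}".format(quality))
```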
Results \n",
    " \n",
    "\n",
-    "- Check out the similarity of each face found, it should range from 99.99 to 99.96 (Rekognition is extreemly accurate) \n",
+    "- Check out the similarity of each face found; it should range from 99.99 to 99.96 \n",
    "- Also note the ExternalImageId of each face, they should all match Entity_X_Cannon\n",
    " \n",
    "
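If you prefer to confirm this programmatically rather than by eyeballing the output, a short loop over the matches works. This sketch assumes `response` still holds the search result from the cell above, and the 99.0 cut-off is illustrative.

```python
# Sketch: print each match so the similarity range and ExternalImageId values
# can be checked against the expectations above. Assumes `response` is the
# search result from the preceding cell; 99.0 is an illustrative cut-off.
for match in response["FaceMatches"]:
    flag = "OK" if match["Similarity"] >= 99.0 else "REVIEW"
    print("{}: ExternalImageId={} Similarity={:.2f}%".format(
        flag, match["Face"]["ExternalImageId"], match["Similarity"]))
```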
" @@ -391,7 +391,7 @@ "## Search within Collection\n", "----- \n", "\n", - "What faces matches another face within the collection? well now we have several faces of Al Gore lets see what that looks like." + "What faces matches another face within the collection? We now have several faces of the same person, lets see what that looks like." ] }, { @@ -400,8 +400,8 @@ "metadata": {}, "outputs": [], "source": [ - "## Snag a FaceID from above\n", - "face_id = \"628d65f7-e96b-45fb-91f7-3841064c6fa7\"\n", + "## -- Snag a FaceID from above --\n", + "face_id = \" \" # enter FaceID from above\n", "try:\n", " response = rek_client.search_faces(\n", " CollectionId=collection_name,\n", @@ -420,7 +420,7 @@ "## Clean up\n", "------\n", "\n", - "now simply delete our collection " + "As part of our cleanup, we can delete our collection. This will delete the collection and all the face vectors contained within." ] }, { @@ -429,10 +429,12 @@ "metadata": {}, "outputs": [], "source": [ - "response = rek_client.delete_collection(\n", - " CollectionId= collection_name\n", - ")\n", - "response" + "try:\n", + " rek_client.delete_collection(\n", + " CollectionId=collection_name\n", + " )\n", + "except Exception as err:\n", + " print(\"ERROR: {}\".format(err))" ] } ], diff --git a/identity-verification/module_3/3.2 Fun with Rekognition DetectFaces and SearchFacesByImage.ipynb b/identity-verification/module_3/3.2 Fun with Rekognition DetectFaces and SearchFacesByImage.ipynb new file mode 100644 index 0000000..c9aadaa --- /dev/null +++ b/identity-verification/module_3/3.2 Fun with Rekognition DetectFaces and SearchFacesByImage.ipynb @@ -0,0 +1,418 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "64907692", + "metadata": {}, + "source": [ + "# 3.2 Fun with Rekognition DetectFaces and SearchFacesByImage\n", + "----\n", + "This is fun but optional lab. This lab shows how you can index faces into a collection then use detect faces and search faces by image to identify several faces found in a single image. To do this we'll create a collection, index several face images into the collection. Then we'll search the faces found in a single image against the collection of known faces in order to identify the faces in the image. \n", + "\n", + "## Steps \n", + "\n", + "1. Load packages \n", + "2. View existing collections \n", + "3. Create a new collection \n", + "4. Index faces into the collection \n", + "5. Search the collection to find and present faces found in an image\n", + "6. Clean up\n", + "\n", + "This notebook will guide you through on how to compare all faces detected in an image against your Amazon Rekognition Face Collection. " + ] + }, + { + "cell_type": "markdown", + "id": "ec7fb7d7", + "metadata": {}, + "source": [ + "## Step 1. Load Libraries " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ed871982", + "metadata": {}, + "outputs": [], + "source": [ + "import boto3, os, io\n", + "client=boto3.client('rekognition')" + ] + }, + { + "cell_type": "markdown", + "id": "ce603810", + "metadata": {}, + "source": [ + "## Step 2. 
View your existing collections" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ca7475f8", + "metadata": {}, + "outputs": [], + "source": [ + "def list_collections():\n", + "\n", + " max_results=10\n", + " \n", + " print('Displaying collections...')\n", + " response=client.list_collections(MaxResults=max_results)\n", + " collection_count=0\n", + " done=False\n", + " \n", + " while not done:\n", + " collections=response['CollectionIds']\n", + "\n", + " for collection in collections:\n", + " print (collection)\n", + " collection_count+=1\n", + " if 'NextToken' in response:\n", + " nextToken=response['NextToken']\n", + " response=client.list_collections(NextToken=nextToken,MaxResults=max_results)\n", + " \n", + " else:\n", + " done=True\n", + "\n", + " return collection_count \n", + "\n", + "collection_count=list_collections()\n", + "\n", + "print(\"There are: {} collections in your account \".format(collection_count))\n" + ] + }, + { + "cell_type": "markdown", + "id": "fce788a7", + "metadata": {}, + "source": [ + "## Step 3. Create a new collection\n", + "-----\n", + "\n", + "Remember you must use a unique name if you are creating a new collection" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8c27a89b", + "metadata": {}, + "outputs": [], + "source": [ + "collection_id=' ' # name your collection " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "25560fe4", + "metadata": {}, + "outputs": [], + "source": [ + "def create_collection(collection_id):\n", + " #Create a collection\n", + " print('Creating collection:' + collection_id)\n", + " try:\n", + " response=client.create_collection(CollectionId=collection_id)\n", + " except:\n", + " client.delete_collection(CollectionId=collection_id)\n", + " response=client.create_collection(CollectionId=collection_id)\n", + " print('Collection ARN: ' + response['CollectionArn'])\n", + " print('Status code: ' + str(response['StatusCode']))\n", + " print('Done.')\n", + " \n", + "create_collection(collection_id)" + ] + }, + { + "cell_type": "markdown", + "id": "1a22c079", + "metadata": {}, + "source": [ + "### Step 3a. Confirm your collection creation. \n", + "-----\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2b5e0bab", + "metadata": {}, + "outputs": [], + "source": [ + "collection_count=list_collections()\n", + "print(\"collections: \" + str(collection_count))" + ] + }, + { + "cell_type": "markdown", + "id": "42199e56", + "metadata": {}, + "source": [ + "## Step 4. Index faces (add faces to a collection) \n", + "-----\n", + "Here we are going to iterate over the files in the populate folder and index their faces. 
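Before indexing, note that ExternalImageId accepts only a restricted character set (letters, digits, and a few punctuation characters such as `_ . - :`, per the API's documented pattern), so filenames containing spaces or other characters will make `index_faces` fail. The helper below is a hedged sketch for normalizing filenames first; the regex and fallback name are assumptions for illustration, not part of the lab.

```python
import re

def to_external_image_id(filename):
    """Derive a Rekognition-friendly ExternalImageId from a filename.

    Keeps letters, digits and _ . - : and replaces anything else with an
    underscore; truncates to 255 characters (the documented maximum length).
    """
    candidate = re.sub(r"[^a-zA-Z0-9_.\-:]", "_", filename)
    return candidate[:255] or "unnamed_face"

# Example: a filename with spaces becomes a valid id
print(to_external_image_id("my photo (1).png"))  # -> my_photo__1_.png
```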
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "403168c7", + "metadata": {}, + "outputs": [], + "source": [ + "directory = 'media/populate'\n", + " \n", + "# iterate over files the populate directory\n", + "for filename in os.listdir(directory):\n", + " f = os.path.join(directory, filename)\n", + " # checking if it is a file\n", + " if os.path.isfile(f):\n", + " print(f)\n", + " file = open(f, \"rb\") # opening for [r]eading as [b]inary\n", + " data = file.read() \n", + " response=client.index_faces(CollectionId=collection_id,\n", + " Image={'Bytes':data},\n", + " ExternalImageId=f.split(\"/\")[2],\n", + " MaxFaces=1,\n", + " QualityFilter=\"AUTO\",\n", + " DetectionAttributes=['ALL'])\n", + " print ('Results for ' + f.split(\"/\")[2])\n", + " print('Faces indexed:')\n", + " for faceRecord in response['FaceRecords']:\n", + " print(' Face ID : {}'.format( faceRecord['Face']['FaceId']))\n", + " print(' Location: {}'.format(faceRecord['Face']['BoundingBox']))\n", + " \n", + " if len(response['UnindexedFaces']) > 0:\n", + " print('Faces not indexed:')\n", + " for unindexedFace in response['UnindexedFaces']:\n", + " print(' Location: {}'.format(unindexedFace['FaceDetail']['BoundingBox']))\n", + " print(' Reasons :')\n", + " for reason in unindexedFace['Reasons']:\n", + " print(' ' + reason)\n", + " file.close()\n" + ] + }, + { + "cell_type": "markdown", + "id": "2a5924ea", + "metadata": {}, + "source": [ + "### 4a. List faces in the collection" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "45c9428c", + "metadata": {}, + "outputs": [], + "source": [ + "def list_faces_in_collection(collection_id):\n", + "\n", + " maxResults=20\n", + " faces_count=0\n", + " tokens=True\n", + "\n", + " response=client.list_faces(CollectionId=collection_id,\n", + " MaxResults=maxResults)\n", + "\n", + " print('Faces in collection: {}'.format( collection_id))\n", + " \n", + " while tokens:\n", + "\n", + " faces=response['Faces']\n", + "\n", + " for face in faces:\n", + " print(\" FaceID: {}, ExternalImageId: {}\".format(face[\"FaceId\"],face[\"ExternalImageId\"].split('.')[0]))\n", + " faces_count+=1\n", + " if 'NextToken' in response:\n", + " nextToken=response['NextToken']\n", + " response=client.list_faces(CollectionId=collection_id,\n", + " NextToken=nextToken,MaxResults=maxResults)\n", + " else:\n", + " tokens=False\n", + " return faces_count \n", + "\n", + "faces_count=list_faces_in_collection(collection_id)\n", + "print(\"Number of faces in collection: {}\".format(faces_count))" + ] + }, + { + "cell_type": "markdown", + "id": "12a91336", + "metadata": {}, + "source": [ + "## Step 5. 
Find faces in photo\n", + "\n", + "----\n", + "Here we create a few functions that will be useful for transforming, detecting and extracting faces " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "40b4fdd9", + "metadata": {}, + "outputs": [], + "source": [ + "def transform_bounding(img, box):\n", + " imgWidth, imgHeight = img.size\n", + " l = (imgWidth * box['Left'])-5\n", + " t = (imgHeight * box['Top'])-5\n", + " w = (imgWidth * box['Width'])+10\n", + " h = (imgHeight * box['Height'])+10\n", + " return l,t,w,h" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f7b18bf3", + "metadata": {}, + "outputs": [], + "source": [ + "def detect_faces(file):\n", + " faces = []\n", + " f = open(file, \"rb\") # opening for [r]eading as [b]inary\n", + " data = f.read() \n", + " response = client.detect_faces(Image={'Bytes':data})\n", + " for face in response[\"FaceDetails\"]:\n", + " faces.append(face['BoundingBox'])\n", + " print(\"Faces detected: \" + str(len(response['FaceDetails']))) \n", + " return faces" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ece418cd", + "metadata": { + "scrolled": false + }, + "outputs": [], + "source": [ + "directory = 'media/test'\n", + "\n", + "from PIL import Image # to load images\n", + "from IPython.display import display # to display images\n", + "\n", + "for filename in os.listdir(directory):\n", + " f = os.path.join(directory, filename)\n", + " # checking if it is a file\n", + " if os.path.isfile(f):\n", + " img = Image.open(f)\n", + " display(img)\n", + " \n", + " faces = detect_faces(f) \n", + " for face in faces:\n", + " l,t,w,h = transform_bounding(img,face)\n", + " cropped = img.crop((l,t,l+w,t+h)) \n", + "\n", + " stream = io.BytesIO()\n", + " cropped.save(stream, format='PNG')\n", + " bin_img = stream.getvalue()\n", + "\n", + " response0 = client.detect_faces(\n", + " Image={'Bytes': bin_img},\n", + " )\n", + "\n", + " if len(response0['FaceDetails']) > 0:\n", + " print(\"face found\")\n", + " display(cropped)\n", + " response1=client.search_faces_by_image(CollectionId=collection_id,\n", + " Image={'Bytes': bin_img},\n", + " FaceMatchThreshold=50)\n", + " faceMatches=response1['FaceMatches']\n", + " if(len(faceMatches) > 0):\n", + " for match in faceMatches:\n", + " print ('Person : ' + match['Face']['ExternalImageId'].split('.')[0])\n", + " print ('Similarity : ' + \"{:.2f}\".format(match['Similarity']) + \"%\")\n", + " else:\n", + " print(\"but no match found\")\n", + " else:\n", + " print(\"face not found in the following crop\")\n", + " cropped.show()\n", + "\n", + " print(\"------------------------------\") " + ] + }, + { + "cell_type": "markdown", + "id": "36915da3", + "metadata": {}, + "source": [ + "## Clean up the resources" + ] + }, + { + "cell_type": "markdown", + "id": "86cf8c7e", + "metadata": {}, + "source": [ + "Delete your face collection, this will delete the collection and the face vectors stored in the collection." 
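The cleanup cell below catches botocore's `ClientError`, but Step 1 only imports `boto3`, `os`, and `io`, so that exception class needs to be imported before the cell will run. Here is a minimal sketch of the same cleanup with the extra import, using the `client` and `collection_id` variables from Steps 1 and 3.

```python
from botocore.exceptions import ClientError  # not among the Step 1 imports

try:
    response = client.delete_collection(CollectionId=collection_id)
    print("Status code: " + str(response["StatusCode"]))
except ClientError as e:
    if e.response["Error"]["Code"] == "ResourceNotFoundException":
        print("The collection " + collection_id + " was not found")
    else:
        print("Error: " + e.response["Error"]["Message"])
```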
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c008ce84", + "metadata": {}, + "outputs": [], + "source": [ + "def delete_collection(collection_id):\n", + "\n", + " print('Attempting to delete collection ' + collection_id)\n", + " status_code=0\n", + " try:\n", + " response=client.delete_collection(CollectionId=collection_id)\n", + " status_code=response['StatusCode']\n", + " \n", + " except ClientError as e:\n", + " if e.response['Error']['Code'] == 'ResourceNotFoundException':\n", + " print ('The collection ' + collection_id + ' was not found ')\n", + " else:\n", + " print ('Error other than Not Found occurred: ' + e.response['Error']['Message'])\n", + " status_code=e.response['ResponseMetadata']['HTTPStatusCode']\n", + " return(status_code)\n", + "\n", + "\n", + "status_code=delete_collection(collection_id)\n", + "print('Status code: ' + str(status_code))\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e755e2f0", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "instance_type": "ml.t3.medium", + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.7" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/identity-verification/module_3/media/populate/dani.png b/identity-verification/module_3/media/populate/dani.png new file mode 100644 index 0000000..586405e Binary files /dev/null and b/identity-verification/module_3/media/populate/dani.png differ diff --git a/identity-verification/module_3/media/populate/dano.png b/identity-verification/module_3/media/populate/dano.png new file mode 100644 index 0000000..7cbc303 Binary files /dev/null and b/identity-verification/module_3/media/populate/dano.png differ diff --git a/identity-verification/module_3/media/populate/fran.png b/identity-verification/module_3/media/populate/fran.png new file mode 100644 index 0000000..ffb5296 Binary files /dev/null and b/identity-verification/module_3/media/populate/fran.png differ diff --git a/identity-verification/module_3/media/test/friends.png b/identity-verification/module_3/media/test/friends.png new file mode 100644 index 0000000..e2f635c Binary files /dev/null and b/identity-verification/module_3/media/test/friends.png differ