
Short Name

Augmented Reality Résumé with Visual Recognition

Short Description

In this code pattern, we create augmented reality-based résumés with Visual Recognition. The iOS app recognizes a face and presents an AR view that displays the résumé of the person in the camera view. The app uses IBM Visual Recognition to classify the image and uses that classification to retrieve details about the person from an IBM Cloudant NoSQL database.

Offering Type

Cloud

Introduction

Augmented reality provides an enhanced version of reality by superimposing virtual objects over a user’s view of the real world. ARKit blends digital objects and information with the environment around you, taking apps far beyond the screen and freeing them to interact with the real world in entirely new ways. This pattern combines ARKit with IBM Visual Recognition and an IBM Cloudant database to deliver a complete augmented reality experience.

Author

Sanjeev Ghimire

Code

The GitHub source links can be found here.

Demo

N/A

Video

Overview

The easiest way to find and connect with people around the world is through social media apps like Facebook, Twitter, and LinkedIn. These, however, only provide text-based search capabilities. With the release of the iOS ARKit framework, search is now also possible using facial recognition. By combining on-device face detection with the iOS Vision API, classification with IBM Visual Recognition, and person identification based on the classification result and stored data, one can build an app that searches for and identifies faces. One such use case is an augmented reality-based résumé built with visual recognition.
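
As a rough illustration of the on-device face detection step, the sketch below uses the Vision framework's VNDetectFaceRectanglesRequest to find and crop faces in a captured frame. The function name, cropping logic, and threading are illustrative assumptions rather than the pattern's actual code.

```swift
import UIKit
import Vision

/// Detects faces in a captured frame and returns cropped face images.
/// A minimal sketch: the completion handler and cropping logic are
/// illustrative, not the pattern's actual implementation.
func detectFaces(in image: UIImage, completion: @escaping ([UIImage]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }

    let request = VNDetectFaceRectanglesRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNFaceObservation] else {
            completion([])
            return
        }

        // Vision returns normalized bounding boxes with a lower-left origin,
        // so convert each one to pixel coordinates before cropping.
        let width = CGFloat(cgImage.width)
        let height = CGFloat(cgImage.height)
        let faces: [UIImage] = observations.compactMap { observation in
            let box = observation.boundingBox
            let rect = CGRect(x: box.origin.x * width,
                              y: (1 - box.origin.y - box.height) * height,
                              width: box.width * width,
                              height: box.height * height)
            guard let cropped = cgImage.cropping(to: rect) else { return nil }
            return UIImage(cgImage: cropped)
        }
        completion(faces)
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```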

The main purpose of this code pattern is to demonstrate how to identify a person and their details using augmented reality and visual recognition.

Flow

  1. The user opens the app on a mobile device
  2. The app detects a face using the iOS Vision API
  3. The cropped face image is classified using IBM Visual Recognition (a hedged classification sketch follows this list)
  4. The classification result is used to retrieve the person's details from IBM Cloudant
  5. The details are overlaid on the camera view in front of the user
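
A minimal sketch of the classification step is shown below. It posts the cropped face to the Visual Recognition v3 `classify` REST endpoint; the gateway URL, API key, version date, classifier ID, and response parsing are placeholders and assumptions, and your service instance may require IAM authentication instead of an `api_key` query parameter.

```swift
import UIKit

/// Classifies a cropped face image with the Visual Recognition v3 `classify`
/// endpoint and passes back the first class name found. A hedged sketch: the
/// gateway URL, API key, version date, and classifier ID are placeholders.
func classifyFace(_ face: UIImage, completion: @escaping (String?) -> Void) {
    guard let imageData = face.jpegData(compressionQuality: 0.8) else {
        completion(nil)
        return
    }

    var components = URLComponents(
        string: "https://gateway.watsonplatform.net/visual-recognition/api/v3/classify")!
    components.queryItems = [
        URLQueryItem(name: "api_key", value: "YOUR_API_KEY"),
        URLQueryItem(name: "version", value: "2018-03-19"),
        URLQueryItem(name: "classifier_ids", value: "YOUR_CLASSIFIER_ID")
    ]

    // Build a multipart/form-data body with the image under `images_file`.
    let boundary = UUID().uuidString
    var body = Data()
    body.append(Data("--\(boundary)\r\n".utf8))
    body.append(Data("Content-Disposition: form-data; name=\"images_file\"; filename=\"face.jpg\"\r\n".utf8))
    body.append(Data("Content-Type: image/jpeg\r\n\r\n".utf8))
    body.append(imageData)
    body.append(Data("\r\n--\(boundary)--\r\n".utf8))

    var request = URLRequest(url: components.url!)
    request.httpMethod = "POST"
    request.setValue("multipart/form-data; boundary=\(boundary)",
                     forHTTPHeaderField: "Content-Type")
    request.httpBody = body

    URLSession.shared.dataTask(with: request) { data, _, _ in
        // The response nests results as images -> classifiers -> classes.
        guard let data = data,
              let json = (try? JSONSerialization.jsonObject(with: data)) as? [String: Any],
              let images = json["images"] as? [[String: Any]],
              let classifiers = images.first?["classifiers"] as? [[String: Any]],
              let classes = classifiers.first?["classes"] as? [[String: Any]],
              let className = classes.first?["class"] as? String else {
            completion(nil)
            return
        }
        completion(className)
    }.resume()
}
```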

Included components

  1. Swift mobile app
    1. Face recognition using the Vision API
    2. ARKit: an iOS augmented reality platform
  2. IBM Visual Recognition: an IBM service that analyzes the visual content of images or video frames to understand what is happening in a scene
  3. IBM Cloudant DB: a highly scalable and performant JSON database service (a hedged lookup sketch follows this list)
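
Below is a hedged sketch of the Cloudant lookup, using the standard CouchDB `_find` (Mango query) endpoint over HTTP. The account host, database name, credentials, and the `classificationId` field are placeholders, not values taken from the pattern's code.

```swift
import Foundation

/// Looks up résumé details in Cloudant for a given Visual Recognition class
/// name. A hedged sketch of the lookup step: the account host, database name,
/// credentials, and the "classificationId" field are placeholders.
func fetchResume(forClassification classification: String,
                 completion: @escaping ([String: Any]?) -> Void) {
    // Cloudant exposes the standard CouchDB Mango query endpoint: POST /{db}/_find
    let url = URL(string: "https://ACCOUNT.cloudant.com/resume-db/_find")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")

    // Basic auth with service credentials (placeholder values).
    let credentials = Data("API_USERNAME:API_PASSWORD".utf8).base64EncodedString()
    request.setValue("Basic \(credentials)", forHTTPHeaderField: "Authorization")

    // Select the document whose classification ID matches the detected class.
    let query: [String: Any] = ["selector": ["classificationId": classification]]
    request.httpBody = try? JSONSerialization.data(withJSONObject: query)

    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data,
              let json = (try? JSONSerialization.jsonObject(with: data)) as? [String: Any],
              let docs = json["docs"] as? [[String: Any]] else {
            completion(nil)
            return
        }
        completion(docs.first)
    }.resume()
}
```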

Featured technologies

  • ARKit: ARKit blends digital objects and information with the environment around you, taking apps far beyond the screen and freeing them to interact with the real world in entirely new ways.
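
The sketch below shows one way the retrieved résumé text could be anchored in the AR scene with SceneKit. The function, positioning offset, and scaling are illustrative assumptions on top of an already-running ARSCNView session, not the pattern's actual overlay code.

```swift
import ARKit
import SceneKit
import UIKit

/// Places a résumé summary as 3D text near a detected face in the AR scene.
/// A minimal sketch, assuming an ARSCNView is already running a session; the
/// positioning offset and text content are illustrative only.
func overlayResume(text: String, near worldPosition: SCNVector3, in sceneView: ARSCNView) {
    let textGeometry = SCNText(string: text, extrusionDepth: 0.5)
    textGeometry.font = UIFont.systemFont(ofSize: 4)
    textGeometry.firstMaterial?.diffuse.contents = UIColor.white

    let textNode = SCNNode(geometry: textGeometry)
    // SCNText units follow the font size, so scale the node down to scene scale.
    textNode.scale = SCNVector3(0.002, 0.002, 0.002)
    // Offset slightly above the detected face so the text does not cover it.
    textNode.position = SCNVector3(worldPosition.x,
                                   worldPosition.y + 0.1,
                                   worldPosition.z)

    // Keep the text facing the camera as the user moves around.
    textNode.constraints = [SCNBillboardConstraint()]
    sceneView.scene.rootNode.addChildNode(textNode)
}
```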

Blog

Links
