
Deep Classification

updates

  • 27/9/2017: provided a subset of the dataset, separated into train/test sets
  • 27/9/2017: in this homework, we only evaluate the performance of object classification. You can use the other labels for multi-task learning, etc.
  • 4/10/2017: Due: Oct. 5, 11:59pm. => Due: Oct. 12, 11:59pm.

Brief

  • +2 extra credit toward the whole semester grade
  • Due: Oct. 12, 11:59pm.
  • Required files: results/index.md, and code/
  • Project reference

Overview

Recently, the technological advances of wearable devices have led to significant interest in recognizing human behaviors in daily life (i.e., in uninstrumented environments). Among many devices, egocentric camera systems have drawn significant attention: since the camera is aligned with the wearer's field of view, it naturally captures what a person sees. These systems have shown great potential in recognizing daily activities (e.g., making meals, watching TV), estimating hand poses, generating how-to videos, etc.

Despite the many advantages of egocentric camera systems, there exist two main issues that are much less discussed. First, hand localization is not solved, especially for passive camera systems. Even for active camera systems like Kinect, hand localization is challenging when two hands are interacting or a hand is interacting with an object. Second, the limited field of view of an egocentric camera means that hands will sometimes inevitably move outside the frame.

HandCam (Fig. 1) is a novel wearable camera that captures the activities of the hands for recognizing human behaviors. HandCam has two main advantages over egocentric systems: (1) it avoids the need to detect hands and manipulation regions; (2) it observes the activities of the hands almost all the time.

Requirement

Data

Introduction

This is a dataset recorded by a hand camera system.

The camera system consists of three wide-angle cameras: two mounted on the left and right wrists to capture the hands (referred to as HandCam) and one mounted on the head (referred to as HeadCam).

The dataset consists of 20 sets of video sequences (i.e., each set includes two HandCam videos and one HeadCam video, synchronized) captured in three scenes: a small office, a mid-size lab, and a large home.

We want to classify several kinds of hand states, including free vs. active (i.e., whether a hand is holding an object), object categories, and hand gestures. Each synchronized video has two sequences that need to be labeled: the left-hand states and the right-hand states.

For each classification task (i.e., free vs. active, object categories, or hand gestures), there are forty sequences of data. We split the dataset into two parts, half for training and half for testing. Object instances are completely separated between training and testing.

Zip files

frames.zip contains all the frames sampled from the original videos at 6 fps.

labels.zip contains the labels for all frames.

FA: free vs. active (binary 0/1)

obj: object categories (24 classes, including free)

ges: hand gestures (13 gestures, including free)
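
As a rough sketch of how the extracted files might be consumed, one could pair the sampled frames with their per-frame labels as below. The directory and file names here are assumptions for illustration, not the actual archive layout:

import os
import numpy as np

# Hypothetical paths -- adjust to the actual layout after unzipping
# frames.zip and labels.zip.
frame_dir = "frames/office/left_hand"        # assumed folder of sampled frames
label_file = "labels/office/obj_left.npy"    # assumed per-frame object labels

frame_paths = sorted(
    os.path.join(frame_dir, f)
    for f in os.listdir(frame_dir)
    if f.lower().endswith((".jpg", ".png"))
)
labels = np.load(label_file)                 # one integer label per frame

assert len(frame_paths) == len(labels), "frames and labels should align 1:1"
samples = list(zip(frame_paths, labels.astype(int).tolist()))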

Details of obj. and ges.

Obj = { 'free':0,
        'computer':1,
        'cellphone':2,
        'coin':3,
        'ruler':4,
        'thermos-bottle':5,
        'whiteboard-pen':6,
        'whiteboard-eraser':7,
        'pen':8,
        'cup':9,
        'remote-control-TV':10,
        'remote-control-AC':11,
        'switch':12,
        'windows':13,
        'fridge':14,
        'cupboard':15,
        'water-tap':16,
        'toy':17,
        'kettle':18,
        'bottle':19,
        'cookie':20,
        'book':21,
        'magnet':22,
        'lamp-switch':23}

Ges= {  'free':0,
        'press':1,
        'large-diameter':2,
        'lateral-tripod':3,
        'parallel-extension':4,
        'thumb-2-finger':5,
        'thumb-4-finger':6,
        'thumb-index-finger':7,
        'precision-disk':8,
        'lateral-pinch':9,
        'tripod':10,
        'medium-wrap':11,
        'light-tool':12}
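
For reporting, the numeric labels above can be mapped back to readable class names by inverting these dictionaries; a small helper sketch (the variable names are our own):

# Invert the label maps so numeric predictions can be printed as class names.
OBJ_ID_TO_NAME = {v: k for k, v in Obj.items()}
GES_ID_TO_NAME = {v: k for k, v in Ges.items()}

print(OBJ_ID_TO_NAME[9])    # 'cup'
print(GES_ID_TO_NAME[11])   # 'medium-wrap'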

Writeup

You are required to implement a deep-learning-based method to recognize hand states (free vs. active hands, hand gestures, object categories). Moreover, you might want to further take advantage of both the HandCam and the HeadCam. You will compete on performance with your classmates, so try to use as many techniques as possible to improve your results. Your score will be based on the performance ranking. One possible starting point is sketched below.
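
One possible starting point (not a required recipe) is to fine-tune a pretrained CNN on the sampled frames; a minimal PyTorch sketch for the 24-class object task, assuming a train_loader that yields batches of (image, label) pairs built from frames.zip and labels.zip:

import torch
import torch.nn as nn
from torchvision import models

# Minimal fine-tuning sketch; train_loader is assumed to yield batches of
# (image_tensor, label) pairs built from the training split.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = models.alexnet(pretrained=True)          # start from the AlexNet baseline
model.classifier[6] = nn.Linear(4096, 24)        # 24 object classes, including 'free'
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

for epoch in range(10):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()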

For this project, and all other projects, you must write a project report in the results folder using Markdown. We provide a placeholder index.md document which you can edit. In the report you will describe your algorithm and any decisions you made to write your algorithm a particular way. Then, describe how to run your code and whether your code depends on other packages. You also need to show and discuss the results of your algorithm. Discuss any extra credit you did, and clearly show what contribution it made to the results (e.g., performance with and without each extra credit component).

You should also include the precision-recall curve of your final classifier and any interesting variants of your algorithm.
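
For instance, for the binary free vs. active task, the curve can be computed from the classifier's scores on the test frames; a sketch using scikit-learn (y_true and y_score are placeholders for your ground-truth labels and predicted positive-class scores):

import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve, average_precision_score

# y_true: 0/1 ground-truth labels for the test frames (free vs. active)
# y_score: the classifier's predicted probability of the positive class
precision, recall, _ = precision_recall_curve(y_true, y_score)
ap = average_precision_score(y_true, y_score)

plt.plot(recall, precision, label="AP = %.3f" % ap)
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.legend()
plt.savefig("results/pr_curve.png")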

Rubric

  • 40 pts: According to performance ranking in class
  • 60 pts: Outperform the AlexNet baseline
  • -5*n pts: Lose 5 points for every time (after the first) you do not follow the instructions for the hand-in format

Get started & hand in

Credits

Assignment designed by Cheng-Sheng Chan. Contents in this handout are from Chan et al.
