annotation-UIs

This repository contains image annotation UIs used for various projects at Stanford University.

Authors: Olga Russakovsky ([email protected]) and Justin Johnson

Entrypoint

The entry point for the UIs is all_actions.html. You can open it in your browser to see a set of sample tasks and check out the UIs.

Annotating images on Amazon Mechanical Turk

The simplest backend to use with these templates is simple-amt.

Important: make sure to use absolute rather than relative paths, e.g.,

//image-net.org/path/to/your/file

Search for 'absolute' in the code -- most places should be marked with comments. You'll need to update the paths in:

  • best_of_both_worlds/all_actions.html
  • best_of_both_worlds/instructions.html
  • best_of_both_worlds/task_header.js
  • whats_the_point/all_actions.html
  • all image paths
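A minimal sketch of the path rewrite (this script is not part of the repo; the `images/` prefix and the `image-net.org` host below are placeholders -- adjust them to match your templates and wherever you actually serve the files):

```shell
# Hypothetical example: rewrite a relative image path into a
# protocol-relative absolute URL with sed.
printf '<img src="images/dog.jpg">\n' > sample_task.html

# Locate lines still using a relative path (analogous to searching
# for the 'absolute' comments in the templates):
grep -n 'src="images/' sample_task.html

# Rewrite (pipe-free form; works with both GNU and BSD sed):
sed 's|src="images/|src="//image-net.org/path/to/images/|' sample_task.html
# -> <img src="//image-net.org/path/to/images/dog.jpg">

rm sample_task.html
```

Using `|` as the sed delimiter avoids escaping the slashes in the URL.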

References

If you find the UIs useful in your research, please cite:

best_of_both_worlds

Project page: http://ai.stanford.edu/~olga/best_of_both_worlds

@inproceedings{RussakovskyCVPR15,
  author    = {Olga Russakovsky and Li-Jia Li and Li Fei-Fei},
  title     = {Best of both worlds: human-machine collaboration for object annotation},
  booktitle = {CVPR},
  year      = {2015}
}

whats_the_point

Project page: http://vision.stanford.edu/whats_the_point

@article{Bearman15,
  author  = {Amy Bearman and Olga Russakovsky and Vittorio Ferrari and Li Fei-Fei},
  title   = {What's the point: Semantic segmentation with point supervision},
  journal = {ArXiv e-prints},
  eprint  = {1506.02106},
  year    = {2015}
}
