Web application and REST API that serves a DeepLabV3+ model to perform semantic segmentation on an input image.
- Backend - Django, Django Rest Framework
- Frontend - HTML, CSS, JavaScript, Bootstrap, Jinja Templating Engine
- To test APIs - Postman
- For virtual environments - pyenv
Web: Visit /predict, upload an image, and the app will segment it and display both the input and output images.
API: Send a POST request to /predict-api with the image in the 'img' field; the server responds with paths to the input & output images.
Output Format of API:
{ "input_image_url": "url_wrt_server", "output_image_url": "url_wrt_server" }
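A client can unpack this response with a few lines of Python. This is a minimal sketch: the helper name and the sample paths are illustrative, not part of the API; the documented contract is only the 'img' upload field and the two URL keys. (To actually call the endpoint, e.g. with the requests library: `requests.post("http://127.0.0.1:8000/predict-api", files={"img": open("sample.jpg", "rb")})`, assuming the default Django dev server address.)

```python
import json

def parse_prediction_response(body):
    """Pull the input/output image URLs out of a /predict-api JSON response."""
    data = json.loads(body)
    return data["input_image_url"], data["output_image_url"]

# Example body matching the documented format (paths are hypothetical)
sample = '{ "input_image_url": "/media/images/in.jpg", "output_image_url": "/media/images/out.jpg" }'
input_url, output_url = parse_prediction_response(sample)
print(input_url)   # /media/images/in.jpg
print(output_url)  # /media/images/out.jpg
```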
/predict
--> Returns a webpage with functionality to upload an image and get the inferred image.
/about
--> Returns the about page.
/predict-api
--> Takes POST requests with an image in the 'img' field and returns paths to the input & inferred images.
- Website front page
- After prediction
- API response in postman
- About page
- Running the Django server using VS Code
- Classes it is able to predict
- Python 3.6.8
- Django 3.0.6
- Django REST framework 3.11.0
- django-cors-headers 3.2.1
- tensorflow 1.15.2
- OS - Ubuntu 20.04 LTS
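The versions above can be pinned in a requirements.txt; this is a sketch using the packages' pip names (note tensorflow 1.15.2 only supports Python ≤ 3.7, consistent with the Python 3.6.8 listed above):

```
Django==3.0.6
djangorestframework==3.11.0
django-cors-headers==3.2.1
tensorflow==1.15.2
```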
/frontend
--> To test APIs
/media/images
--> Stores input images
/predict-with-deeplabv3
--> Contains the DeepLabV3 model and scripts
/prediction
--> App with the prediction logic
/sample-images
--> Sample images for testing
/semantic-segmentation-api
--> Main folder, contains settings
setup-env.sh
--> Bash script to set up the env required for the model
- Install pyenv
- Open a terminal at the root of this project
- Run:
./setup-env.sh
- Run:
pyenv local venvSSA
- Run:
python manage.py migrate
- Run:
python manage.py migrate --run-syncdb
- Run:
python manage.py runserver
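The steps above can be run in sequence as follows; this assumes pyenv is already installed and that setup-env.sh creates the venvSSA environment:

```bash
# From the project root: create the environment required for the model
./setup-env.sh
# Use the venvSSA environment in this directory
pyenv local venvSSA
# Apply Django migrations; --run-syncdb also creates tables for apps without migrations
python manage.py migrate
python manage.py migrate --run-syncdb
# Start the development server (defaults to http://127.0.0.1:8000)
python manage.py runserver
```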
- https://github.com/rishizek/tensorflow-deeplab-v3-plus
- https://github.com/DrSleep/tensorflow-deeplab-resnet
- Special thanks to the original author of DeepLabV3+ and its implementations
- Mr. Himanshu Mittal, for guidance on the major project
- The model takes up a lot of disk space, which makes the project large.
- It would be better to use Docker to run the model.
- It would be better to use a web server like nginx to serve the inferred images.
- Not for production (it is a simple college project)