- User enters the website (app.opencap.ai)
- The website calls the backend and creates a session
- The session generates a QR code displayed in the webapp
- User scans the code with the iOS app. App uses the code to connect directly to the backend
- User clicks record in the webapp -> this invokes the backend to change the session state to 'recording'
- iPhones poll the session state every second and see it's in the 'recording' state. They start recording
- User clicks stop recording, changing the state to 'upload'
- iPhones upload the videos
- When all videos are uploaded, the backend changes the state to 'processing' and adds the videos to the queue for processing
- The video processing pipeline polls for sessions in the 'processing' state and processes them
- After processing, results are sent to the backend and the backend changes the session state to 'done'
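The lifecycle above is effectively a small state machine. A minimal sketch of it in Python: the 'recording', 'upload', 'processing', and 'done' names come from the workflow, while the initial state name and this representation are assumptions, not the backend's actual code:

```python
from enum import Enum


class SessionState(Enum):
    """Session states named in the workflow above."""
    NEW = "new"              # assumed name for the initial state
    RECORDING = "recording"
    UPLOAD = "upload"
    PROCESSING = "processing"
    DONE = "done"


# Transitions implied by the workflow; how the backend actually
# encodes them is an assumption.
TRANSITIONS = {
    SessionState.NEW: SessionState.RECORDING,      # user clicks record
    SessionState.RECORDING: SessionState.UPLOAD,   # user clicks stop recording
    SessionState.UPLOAD: SessionState.PROCESSING,  # all videos uploaded
    SessionState.PROCESSING: SessionState.DONE,    # pipeline finished
}
```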
1. Clone this repo, then create and activate the environment and install the dependencies:
   `conda create -n opencap python=3.7`
   `conda activate opencap`
   `pip install -r requirements.txt`
2. Create the `.env` file with all env variables and credentials (a sketch follows these steps).
3. Run the development server: `python manage.py runserver`
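The specific variables the backend expects are defined by the project, not listed here. As a rough illustration only, Django settings typically read such credentials from the environment; every variable name in this sketch is hypothetical:

```python
# settings.py (illustrative only; check the repo for the real variable names)
import os

SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]        # hypothetical name
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",  # assumed database backend
        "NAME": os.environ.get("DB_NAME", ""),
        "USER": os.environ.get("DB_USER", ""),
        "PASSWORD": os.environ.get("DB_PASSWORD", ""),
        "HOST": os.environ.get("DB_HOST", "localhost"),
    }
}
```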
- Add fields to `mcserver/models.py`
- Run `python manage.py makemigrations`
- Run `python manage.py migrate` (be careful, this modifies the database)
- Add the fields we want to expose in the API to the `mcserver/serializers.py` file (see the sketch after this list)
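A minimal sketch of this flow, using a hypothetical `Session` model and `SessionSerializer` (the real class and field names live in `mcserver/models.py` and `mcserver/serializers.py`):

```python
# mcserver/models.py -- add the new field to the model
from django.db import models


class Session(models.Model):                 # hypothetical model for illustration
    status = models.CharField(max_length=32, default="new")
    notes = models.TextField(blank=True)     # the newly added field


# mcserver/serializers.py -- expose the new field through the API
from rest_framework import serializers


class SessionSerializer(serializers.ModelSerializer):
    class Meta:
        model = Session
        fields = ["id", "status", "notes"]   # list the new field here
```

After editing the model, `makemigrations` generates the schema change and `migrate` applies it to the database.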
Then, for deploying to production, we pull all the updated code and run step 3 again (with the production `.env` file).
Instructions in this link.
Note: you must also install gettext. After installing it, restart your IDE/terminal.
Inside the `mcserver` folder:

- Create files for a language: `django-admin makemessages -l <language-code>`
- Compile messages: `django-admin compilemessages`
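These commands only extract strings that are marked for translation in the code. A minimal Django example of such a mark (the message text is illustrative):

```python
from django.utils.translation import gettext as _


def status_message():
    # makemessages extracts this string into a .po file per language;
    # compilemessages then builds the .mo files Django uses at runtime
    return _("Session is recording")
```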
- `/sessions/new/` -> returns `session_id` and the QR code
- `/sessions/<session_id>/status/?device_id=<device_id>` <- devices use this link to register and get a `video_id`
- `/sessions/<session_id>/record/` -> server uses this link to start recording
- `/sessions/<session_id>/stop/` -> server uses this link to stop recording
- `/video/<video_id>/` <- devices use this link to upload the recorded video and parameters
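For illustration, a device-side loop against these endpoints could look like the sketch below. Only the paths come from the list above; the base URL, the response field names, and the recording hooks are assumptions:

```python
import time

import requests

BASE_URL = "https://api.opencap.ai"  # hypothetical backend URL


def start_recording() -> None:
    print("recording started")       # placeholder for the camera hook


def stop_recording() -> bytes:
    print("recording stopped")       # placeholder for the camera hook
    return b""                       # placeholder video bytes


def device_loop(session_id: str, device_id: str) -> None:
    """Poll the session once per second and react to state changes."""
    while True:
        resp = requests.get(
            f"{BASE_URL}/sessions/{session_id}/status/",
            params={"device_id": device_id},  # registers the device, returns a video_id
        )
        resp.raise_for_status()
        data = resp.json()
        state = data.get("status")            # response field name is an assumption
        if state == "recording":
            start_recording()
        elif state == "upload":
            requests.post(
                f"{BASE_URL}/video/{data['video_id']}/",
                files={"video": stop_recording()},
            )
        elif state == "done":
            break
        time.sleep(1)                         # matches the 1 s polling above
```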