This repository is the official implementation of [Re] IDOL: Inertial Deep Orientation-Estimation and Localization. The code is being prepared for submission to the [ML Reproducibility Challenge 2021 Fall Edition](https://paperswithcode.com/rc2021), and as a course project in CISC 867 Deep Learning, Queen's University.
- Quaternion Multiplication -> See here (see the sketch after this list)
- Yury Petrov's Ellipsoid Fitting (Python Version) -> See here
- Extended Kalman Filters -> See here
- 3Blue1Brown Quaternion Explanations -> See here
- IMUs and what they do -> See here
- What is a random walk? -> See here
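Since quaternion multiplication comes up throughout the orientation code, here is a minimal NumPy sketch of the Hamilton product in (w, x, y, z) order. This is illustrative only, not the repository's implementation:

```python
import numpy as np

def quat_multiply(q1, q2):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# The identity quaternion leaves any rotation unchanged:
q = np.array([0.7071, 0.7071, 0.0, 0.0])   # ~90 degrees about the x-axis
identity = np.array([1.0, 0.0, 0.0, 0.0])
assert np.allclose(quat_multiply(identity, q), q)
```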
Optional dependency: pip-tools, used to regenerate requirements.txt without dependency hassles (example below).
pip install pip-tools
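For illustration, a requirements.in file lists only your top-level dependencies; pip-compile then pins the full dependency tree into requirements.txt. The package names below are hypothetical examples, not the repository's actual requirements:

```
# requirements.in -- top-level dependencies only (illustrative)
numpy
tensorflow
```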
Steps:
- (Optional) To regenerate requirements.txt (after adding a new requirement to requirements.in). Note: this requires pip-tools; if you haven't installed it yet, run pip install pip-tools first.
pip-compile
- To set up a virtual environment:
python -m venv .venv
- To activate the virtual environment (Unix):
source .venv/bin/activate
- To install requirements:
pip install -r requirements.txt
- To set up the datasets:
  a. Create a folder called datasets.
  b. Create another folder within datasets called csvs, so that datasets/csvs is part of your folder structure.
  c. Download and extract the datasets from here. Extract each building into datasets.
- To install TensorFlow Graphics:
  a. Run git clone https://github.com/tensorflow/graphics.git
  b. cd to the directory where you cloned TensorFlow Graphics.
  c. Run python -m venv .venv
  d. Run source .venv/bin/activate (Bash), .\.venv\Scripts\activate.ps1 (Windows PowerShell), or .\.venv\Scripts\activate.bat (Windows Cmd).
  e. Run pip install wheel
  f. Run python setup.py bdist_wheel
  g. cd to Re-IDOL's location.
  h. Run pip install /path/to/tensorflow-graphics-location/dist/tensorflow_graphics-2021.12.11-py3-none-any.whl
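After completing the steps above, a minimal sanity check can confirm the install. This sketch assumes the datasets/csvs layout described in the dataset step; it is illustrative only:

```python
import os

# Should import cleanly if the locally built wheel was installed above.
import tensorflow_graphics  # noqa: F401

# Confirm the dataset layout described above exists.
assert os.path.isdir("datasets/csvs"), "expected datasets/csvs; see dataset setup above"
print("Setup looks good.")
```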
To train the model(s) in the paper, run these commands:
python main.py train_orient --option=<option number 1-3>
python main.py train_pos --option=<option number 1-3>
To test the model(s) in the paper, run these commands:
python main.py test_orient --option=<option number 1-3>
python main.py test_pos --option=<option number 1-3>
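For example, to train and then evaluate OrientNet with option 1 (we assume the option number selects a building; check main.py for the exact mapping):

python main.py train_orient --option=1
python main.py test_orient --option=1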
You can find pretrained models in the directory: pretrained/Buildings<number 1-3>/OrientNet
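If the pretrained weights are stored in TensorFlow's SavedModel format (an assumption on our part; consult main.py for the actual loading code), loading one might look like this sketch:

```python
import tensorflow as tf

# Hypothetical: assumes OrientNet was exported as a Keras SavedModel.
orient_net = tf.keras.models.load_model("pretrained/Buildings1/OrientNet")
orient_net.summary()
```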
Our models achieve the following performance, reported as known (training) set / unknown (test) set, on buildings 1-3:
| Model name | Building 1 | Building 2 | Building 3 |
|---|---|---|---|
| OrientNet (rad) | x / test | x / test | tbd |
| PosNet (meter) | y / test | y / test | tbd |
This project uses the Apache License; see LICENSE for more details.