If reticade is already installed, here are the steps to get the GUI running:
- Open a command prompt within the root directory of reticade
- Run `env\Scripts\activate.bat`
- Run `python -m reticade.run_ui`
- Navigate to http://127.0.0.1:5000
This project requires Python 3.8 or greater, and is tested against Python 3.8.4 on Windows.
python --version
python -m pip --version
python -m pip install --user virtualenv
python -m venv env
env\Scripts\activate.bat
NB: if you want to leave the virtual environment from this terminal window, run `deactivate`.
From within the virtual environment, run:
python -m pip install -r requirements_windows.txt
Next, run a quick sanity-check to test the self-contained components. It won't test connectivity to the Bruker imaging software or LabView instance! We'll look at how to test those connections in the next section.
python -m pytest -W ignore::DeprecationWarning
This project requires Python 3.8 or greater, and is tested against Python 3.10.2 on OS X and Linux.
After cloning this repository:
python3 --version
python3 -m pip --version
python3 -m pip install --user virtualenv
python3 -m venv env
source env/bin/activate
NB: if you want to leave the virtual environment from this terminal window, run `deactivate`.
From within the virtual environment, run:
python3 -m pip install -r requirements.txt
Next, run a quick sanity-check to test the self-contained components. It won't test connectivity to the Bruker imaging software or LabView instance! We'll look at how to test those connections in the next section.
python3 -m pytest -W ignore::DeprecationWarning
This standalone program retrieves image data through PrairieLink and can trigger the acquisition of timeseries or live viewing. In order for it to retrieve data, the PrairieView software must be running on the machine. It's essential that the version of PrairieView is 5.6 (earlier versions lack the APIs to retrieve raw data quickly enough).
From within the virtual environment, and from within the root directory, start up a Python shell with the `python` command on Windows (or `python3` on Linux/OSX).
Import the standalone link module:
from reticade import sapv_link
Create a standalone link object:
imaging = sapv_link.StandaloneImager()
To acquire a timeseries, using the current configuration in PrairieView, call:
imaging.run_timeseries(<TIME IN SECONDS>)
To populate the contents of the imaging link without saving a timeseries, use live mode instead:
imaging.run_liveview(<TIME IN SECONDS>)
When done using the imaging tool, remember to release the resources with:
imaging.close()
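Putting the steps above together, a complete standalone-imaging session might look like this (the 60-second durations are arbitrary examples):

from reticade import sapv_link

imaging = sapv_link.StandaloneImager()   # PrairieView 5.6 must already be running
imaging.run_liveview(60)                 # populate the link with 60 s of live viewing
# imaging.run_timeseries(60)             # alternatively, acquire a 60 s timeseries
imaging.close()                          # always release resources when done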
From within a new terminal in the virtual environment, and from within the root directory, start up a Python shell with the `python` command on Windows (or `python3` on Linux/OSX).
Note: to exit the Python shell and return to the command line, simply enter the `exit()` command.
Import the reticade interactive module:
from reticade import interactive
Create a new reticade harness:
my_harness = interactive.Harness()
This harness serves as the main interaction point.
By default, reticade reads from shared memory populated by the standalone imaging tool (above). Once a standalone imaging link object has been created (it doesn't need to be actively acquiring images), you can configure reticade to read from this memory with:
my_harness.init_imaging()
To verify that you are receiving data from the microscope, you can run:
my_harness.show_raw_image()
This will display the latest image at the time of running. Close the image window to regain control of the shell.
If you want to view a continuous stream of data from the microscope, instead run:
my_harness.show_live_view()
Close the window to regain control of the shell.
In order to send data to LabView, we need to tell the harness the IP address of the machine running LabView.
In this example, we'll say that the IP address is `123.123.123.123`.
my_harness.set_link_ip("123.123.123.123")
You can send some 'dummy' data to LabView and validate that it receives it as follows:
dummy_decoded_velocities = [1, 2, 3, 4]
my_harness.test_link(dummy_decoded_velocities)
Be sure to check that LabView actually receives the data correctly!
Decoders are saved as .json files. This repository includes a 'fake' decoder for validation purposes at `demo/fake_decoder.json`. For information on how to create a decoder from previous data, see the 'Training a decoder' section.
You can load a decoder into a harness as follows:
my_harness.load_decoder("path/to/decoder.json")
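For example, to do a dry run with the bundled fake decoder:

my_harness.load_decoder("demo/fake_decoder.json")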
Once you're happy that reticade is correctly reading from the microscope, sending data to LabView, and has the right decoder loaded, you can start it running:
my_harness.run(stop_after_seconds=300)
Setting the `stop_after_seconds` parameter will gracefully stop reticade after that duration.
When you're done with a harness, you should close it in order to:
- Release the shared memory allocated for PrairieView's use
- Release resources associated with the harness

You can do this with:
my_harness.close()
If you forget to close your harness before exiting the environment, you'll get a warning about leaked memory resources.
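Putting the harness steps together, a typical end-to-end session might look like the following (the IP address and five-minute duration are just examples; the standalone imager must already have been created, as described above):

from reticade import interactive

my_harness = interactive.Harness()
my_harness.init_imaging()                   # read from the standalone imager's shared memory
my_harness.show_raw_image()                 # sanity-check the microscope feed
my_harness.set_link_ip("123.123.123.123")   # machine running LabView
my_harness.test_link([0])                   # confirm LabView receives data
my_harness.load_decoder("demo/fake_decoder.json")
my_harness.run(stop_after_seconds=300)
my_harness.close()                          # release shared memory and other resources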
A decoder can be trained on another computer and then shared with the computer running the real-time decoding. To train a decoder with the default settings, run the following from your virtual environment (not your Python shell):
python3 -m reticade.train_decoder "<path to training data folder>"
Reticade expects the training data folder to be structured as follows:
training_folder
│ positions.csv
│ metadata.txt
│ 0.tif
│ 1.tif
│ ...
Where:
- `positions.csv` contains N positions on the linear track, one per line, arranged in chronological order.
- `metadata.txt` contains additional information about the data (e.g. which animal, the date, the training protocol) that will be attached to the decoder.
- The `.tif` files are N images, the alphabetical sorting of which yields the images in chronological order.
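As an illustrative sanity check before training (this helper is not part of reticade; it just encodes the layout described above), you can verify that a training folder is well-formed:

import os

def check_training_folder(path):
    files = os.listdir(path)
    # Alphabetical sorting of the .tif files gives chronological order
    tifs = sorted(f for f in files if f.endswith(".tif"))
    with open(os.path.join(path, "positions.csv")) as f:
        positions = [line.strip() for line in f if line.strip()]
    assert "metadata.txt" in files, "metadata.txt is missing"
    assert len(tifs) == len(positions), (
        f"found {len(tifs)} images but {len(positions)} positions")
    print(f"OK: {len(tifs)} frames with matching positions")

check_training_folder("path/to/training_folder")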
The decoder is a pipeline, where each stage in the pipeline can be broadly described as belonging to one of the following categories (see the sketch after this list):
- Signal processing: cleaning up the raw image data
- Pattern matching: machine-learned mappings between signal-processed images and position on the track
- Behavioural interpretation: matching up the animal's behaviour to where it is on the track
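To make the pipeline structure concrete, here is a purely illustrative sketch (the stage functions are hypothetical placeholders, not reticade's actual implementation): each stage's output feeds the next stage's input.

def clean_signal(frame):            # signal processing: clean up the raw image
    return frame                    # placeholder

def estimate_position(frame):       # pattern matching: image -> track position
    return 0.0                      # placeholder

def interpret_behaviour(position):  # behavioural interpretation
    return position                 # placeholder

pipeline = [clean_signal, estimate_position, interpret_behaviour]

def decode(frame):
    value = frame
    for stage in pipeline:
        value = stage(value)
    return value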
The most important part of the decoder to fine-tune on a per-animal basis is the signal processing layer, as different imaging settings will require different signal processing. The first things to look at are:
- The high and low frequencies of the spatial bandpass filter (the DoG filter; illustrated in the sketch below). A more zoomed-out field of view will have cells that are smaller, so the bandpass needs to be set smaller.
- The time-constants of the delta F filter. These operate on discrete frames, so if you're changing the imaging frequency you need to change these time constants accordingly.
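As an illustration of the DoG (difference-of-Gaussians) bandpass idea from the first point, here is a minimal sketch using SciPy (not reticade's actual implementation; the sigma values are arbitrary examples):

import numpy as np
from scipy.ndimage import gaussian_filter

def dog_bandpass(image, sigma_small, sigma_large):
    # Keeps spatial features between the two scales; smaller cells
    # (e.g. a more zoomed-out field of view) call for smaller sigmas.
    return gaussian_filter(image, sigma_small) - gaussian_filter(image, sigma_large)

frame = np.random.rand(512, 512)  # stand-in for a raw imaging frame
filtered = dog_bandpass(frame, sigma_small=1.0, sigma_large=4.0)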
You can visualise arbitrary signal processing pipelines (and test the effect of changes) by running:
python3 -m reticade.validation.sig_proc_validation <path to training data folder>
You can send arbitrary data to LabView from the test harness. If you want to initialise the LabView state, e.g. setting the BMI-controlled velocity to zero before running an experiment, you can send a packet to configure the state. For example:
my_harness.test_link([0])
The `legacy_scripts` directory contains a Krupic Lab MATLAB script to sync up galvo signals to positions.