This runs incredibly slow on my data. How to speed it up? #13
Comments
Hi @loomcode - can you post the log output so I can see where it is slow?
Quick follow-up: it can take a long time for two reasons: (i) you're assessing all combinations of track linkages, or (ii) the optimizer is not well configured for your data. The log would help to discern which is the case. If you're tracking lots of objects, you can also use the `APPROXIMATE` Bayesian update method.
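For reference, enabling the approximate update method looks roughly like this (a minimal sketch based on btrack's documented usage of that era; the config path, input file, and `step_size` / `max_search_radius` values are placeholders, not recommendations):

```python
import btrack
from btrack.constants import BayesianUpdates

# load localized objects; "objects.json" is a placeholder input file
objects = btrack.dataio.import_JSON("objects.json")

with btrack.BayesianTracker() as tracker:
    tracker.configure_from_file("cell_config.json")  # placeholder config path

    # use the approximate Bayesian update method and restrict the search
    # radius so each object is only compared against nearby candidates
    tracker.update_method = BayesianUpdates.APPROXIMATE
    tracker.max_search_radius = 100

    tracker.append(objects)
    tracker.track_interactive(step_size=100)
    tracker.optimize()
    tracks = tracker.tracks
```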
Thanks for the "APPROXIMATE" updates suggestion. Here's the output when I run it with approximate updates enabled and max_search_radius = 5:

    [INFO][2020/12/14 04:22:36 PM] Loading motion model: b'cell_motion'
    Perturbing LP to avoid stalling [308]...
It continues like this for a long time. I'll leave it overnight to see if it finishes, but my project requires an implementation that will finish in a matter of seconds.
It can take a long time to optimize if you have posed the problem poorly, i.e. you probably need to change some parameters in the config to better describe what you are trying to track and what you care about. Do you actually need to run the optimizer with only 10 frames of data? Do you care about identifying cell divisions? Seems like you could just comment that line out and get decent results:

    # tracker.optimize()

Are those tracks good enough? If not, then you'll need to optimize the config.
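In code, that suggestion amounts to running only the frame-by-frame Bayesian tracking step and skipping the global optimization (same sketch assumptions as above):

```python
with btrack.BayesianTracker() as tracker:
    tracker.configure_from_file("cell_config.json")  # placeholder config path
    tracker.update_method = BayesianUpdates.APPROXIMATE
    tracker.max_search_radius = 100
    tracker.append(objects)
    tracker.track_interactive(step_size=100)

    # tracker.optimize()  # commented out: skip the slow global track linking

    # without the optimizer you get the raw tracklets from the Bayesian
    # update step; divisions and merges will not be resolved
    tracks = tracker.tracks
```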
I'm trying to track about 5000 peaks (2D) per frame for 10 frames. Using trackpy it takes about 10-15 seconds to run, but btrack is still running after 20 minutes. I've tried reducing theta_time and theta_dist in the model ("cellmodel.json") and that doesn't seem to have an effect. What optimization parameters can I change to speed this up? Additionally, I'm running on a 20-core server. Can I take advantage of multiprocessing without writing my own implementation? A sketch of one possible approach follows.
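One generic way to use those cores without modifying btrack itself is to partition the field of view into strips and track each strip in its own process. This is a purely illustrative sketch, not a btrack feature: the `split_into_strips` helper is hypothetical, the partitioning ignores objects that cross strip borders, and tracks spanning strips would need to be stitched afterwards (not shown):

```python
import multiprocessing as mp

import btrack

def split_into_strips(objects, n_strips, x_max):
    """Hypothetical helper: partition objects into vertical strips by x coordinate."""
    strips = [[] for _ in range(n_strips)]
    for obj in objects:
        idx = min(int(obj.x / x_max * n_strips), n_strips - 1)
        strips[idx].append(obj)
    return strips

def track_strip(strip_objects):
    """Track one strip independently and return plain tuples (picklable)."""
    with btrack.BayesianTracker() as tracker:
        tracker.configure_from_file("cell_config.json")  # placeholder path
        tracker.append(strip_objects)
        tracker.track_interactive(step_size=100)
        tracker.optimize()
        # return coordinates rather than Tracklet objects, which may not pickle
        return [(t.ID, list(t.t), list(t.x), list(t.y)) for t in tracker.tracks]

if __name__ == "__main__":
    objects = btrack.dataio.import_JSON("objects.json")  # placeholder input file
    strips = split_into_strips(objects, n_strips=20, x_max=2048.0)
    with mp.Pool(processes=20) as pool:
        per_strip_tracks = pool.map(track_strip, strips)
```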