Chunked model application #2133
Conversation
Codecov Report
Base: 92.74% // Head: 92.74% // Decreases project coverage by -0.01%

@@            Coverage Diff             @@
##           master    #2133      +/-   ##
==========================================
- Coverage   92.74%   92.74%   -0.01%
==========================================
  Files         214      214
  Lines       17914    17941      +27
==========================================
+ Hits        16615    16639      +24
- Misses       1299     1302       +3
==========================================
This is successful in that it greatly reduces memory consumption, but it is also a lot slower, and I don't think it makes sense to merge this until we remove the bottlenecks in the table loader. I will try to incorporate that.
OK, the main problem was that naively adding the telescope trigger information was very slow, since it required loading the whole telescope trigger table for each chunk. Now, application is much faster (even a lot faster than non-chunked, since joining N small chunks is faster than joining the whole table).
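The approach described above can be sketched roughly as follows. This is a minimal, self-contained illustration of chunked model application with a per-chunk join of trigger information; all names (`iter_chunks`, `join_trigger_info`, `apply_model_chunked`, the row layout) are hypothetical and do not reflect the actual ctapipe `TableLoader` API.

```python
def iter_chunks(rows, chunk_size):
    """Yield successive chunks of the input table (here a list of dicts)."""
    for start in range(0, len(rows), chunk_size):
        yield rows[start:start + chunk_size]


def join_trigger_info(chunk, trigger_by_event):
    """Attach trigger information to each row of one chunk.

    Doing the join chunk-by-chunk against a precomputed lookup avoids
    re-reading and re-joining the full trigger table for every chunk,
    which was the bottleneck described in the comment above.
    """
    return [
        {**row, "trigger_time": trigger_by_event[row["event_id"]]}
        for row in chunk
    ]


def apply_model_chunked(rows, trigger_by_event, predict, chunk_size=1000):
    """Apply `predict` chunk by chunk, keeping peak memory bounded
    by the chunk size instead of the full table size."""
    predictions = []
    for chunk in iter_chunks(rows, chunk_size):
        chunk = join_trigger_info(chunk, trigger_by_event)
        predictions.extend(predict(chunk))
    return predictions
```

For example, with a toy `predict` that reads the joined trigger column, the chunked result matches what a whole-table application would produce, while only one chunk is resident at a time.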