Support sliced iteration for all Readers #1081
Comments
+1
It would just need something putting in the
Solve #314 (universal offsets) and you're done (and you would probably also solve some of #918 (pythonic slicing) by consolidating the whole trajectory slicing interface). I would be very much in favor of having reasonably performant random access instead of adding a crippled implementation. #314 is also important for parallel trajectory analysis (PR #618).
@orbeckst universal offsets would only speed up iterations for non-random-access readers. Backwards iteration would be a problem. But this doesn't have much to do with random access.
@kain88-de, I thought that as soon as you try to do any slicing on one of the non-random-access readers we would resort to jumping to individual frames. For those readers that is costly.
Or has this been improved lately?
Yeah, we would rewind (cheap) and then forward to the frame. It is a bit costly but would increase usability. I would then also add a warning that slices will be slow with that trajectory and suggest conversion to a random-access format.
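Sketched concretely, rewind-then-forward emulation of a seek on a forward-only reader might look like this (a toy class; all names here are illustrative, not the MDAnalysis API):

```python
class ForwardOnlyReader:
    """Toy stand-in for a trajectory reader that can only step forward."""

    def __init__(self, n_frames):
        self.n_frames = n_frames
        self._pos = 0

    def rewind(self):
        # Cheap: equivalent to reopening the file at frame 0.
        self._pos = 0

    def next(self):
        if self._pos >= self.n_frames:
            raise StopIteration
        frame = self._pos
        self._pos += 1
        return frame

    def seek(self, i):
        # Emulated random access: rewind, then read forward until
        # frame i. Costly for large i (O(i) reads instead of O(1)).
        self.rewind()
        frame = self.next()
        while frame != i:
            frame = self.next()
        return frame
```

Note that this also makes "backwards" seeks possible, since any seek starts from frame 0.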
... or as a band-aid for any kind of forward iteration:

```python
def _slice_iterator(self, trjslice):
    frames = np.arange(len(self))[trjslice]
    for ts in self:
        if ts.frame in frames:
            yield ts
```

EDIT: Add a check that
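For illustration, this band-aid generator can be exercised with a toy forward-only reader (toy classes, not the MDAnalysis API):

```python
import numpy as np


class ToyTimestep:
    """Minimal timestep carrying only a frame index."""

    def __init__(self, frame):
        self.frame = frame


class ToyReader:
    """Minimal stand-in for a forward-only trajectory reader."""

    def __init__(self, n_frames):
        self.n_frames = n_frames

    def __len__(self):
        return self.n_frames

    def __iter__(self):
        for i in range(self.n_frames):
            yield ToyTimestep(i)

    def _slice_iterator(self, trjslice):
        # Band-aid from the comment above: iterate the whole
        # trajectory, yield only frames selected by the slice.
        frames = np.arange(len(self))[trjslice]
        for ts in self:
            if ts.frame in frames:
                yield ts


reader = ToyReader(10)
selected = [ts.frame for ts in reader._slice_iterator(slice(2, 9, 3))]
# selected == [2, 5, 8]
```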
transfer_to_memory won't work because I would also like that function to accept general slicing, see #1030. Your band-aid solution is along the lines of what I thought.
Expected behaviour

```python
for ts in u.trajectory[start:stop:step]:
    ...
```

should always work.

Actual behaviour

This only works for readers that implement random access.
Solution
I know that this is only efficient to implement for a random-access reader. But we should still be able to iterate through the trajectory with a sliced iterator; we just have to pay the iteration cost for the full trajectory. There is nothing stopping us from yielding only the requested frames. To notify users that their operations are slow because of the trajectory format, we can print a warning telling them that computations can be sped up by switching to a random-access reader.
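A minimal sketch of the proposed behaviour, assuming only that the reader is iterable and has a length (the function name and `has_random_access` flag are hypothetical, not the MDAnalysis API):

```python
import warnings

import numpy as np


def sliced_iter(reader, trjslice, has_random_access=False):
    """Yield only the frames selected by ``trjslice``.

    For forward-only readers this falls back to iterating the full
    trajectory and filtering, after warning the user about the cost.
    """
    if not has_random_access:
        warnings.warn(
            "This trajectory format does not support random access; "
            "the whole trajectory will be read. Converting to a "
            "random-access format would speed this up."
        )
    # Resolve the slice against the trajectory length once, up front.
    wanted = set(np.arange(len(reader))[trjslice])
    for i, ts in enumerate(reader):
        if i in wanted:
            yield ts
```

Using a plain list as a stand-in trajectory, `list(sliced_iter(list(range(10)), slice(1, 8, 2)))` yields the frames at indices 1, 3, 5, and 7, and emits the warning once.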
I'm stumbling over this right now because we are writing a library on top of MDAnalysis at work that plans to use sliced iterators. We don't support this for all readers, though, so I'm left writing an MDAnalysis wrapper to support sliced iteration for all trajectory types.