
Support sliced iteration for all Readers #1081

Closed
kain88-de opened this issue Nov 21, 2016 · 8 comments

@kain88-de
Member

Expected behaviour

for ts in u.trajectory[start:stop:step] should always work.

Actual behaviour

This only works for readers that implement random access.

Solution

I know that this can only be implemented efficiently for a random-access reader. But we should still be able to iterate through the trajectory with a sliced iterator; we just have to pay the iteration cost for the full trajectory. There is nothing stopping us from yielding only the requested frames. To notify users that the operation is slow because of the trajectory format, we can print a warning telling them that the computation can be sped up by switching to a random-access reader.
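A minimal sketch of what such a fallback could look like (the function name and warning text are just illustrative, not existing MDAnalysis API):

import warnings

def fallback_sliced_iter(trajectory, frame_slice):
    # emulate trajectory[start:stop:step] on a reader without random
    # access: read every frame and yield only the requested ones
    wanted = set(range(*frame_slice.indices(len(trajectory))))
    warnings.warn("This trajectory format does not support random access; "
                  "sliced iteration will read every frame. Converting to a "
                  "random-access format will speed this up.")
    for ts in trajectory:
        if ts.frame in wanted:
            yield ts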

I'm running into this right now because at work we are writing a library on top of MDAnalysis that plans to use sliced iterators. Since this is not supported for all readers, I'm left writing an MDAnalysis wrapper to support sliced iteration for all trajectory types.

@mnmelo
Member

mnmelo commented Nov 21, 2016

+1
I totally agree with the need to consolidate this aspect. Last time I worked on that part of the code there was quite a bit of duplication and inconsistent behavior.

@richardjgowers
Member

https://github.com/MDAnalysis/mdanalysis/blob/develop/package/MDAnalysis/coordinates/base.py#L1250

It would just need something added to the except block here.
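Something along these lines, perhaps (a sketch of the idea only, not the actual base.py code; the helper names are placeholders, and the real exception type may differ):

def getitem_slice_with_fallback(reader, frame_slice):
    try:
        # existing path: readers with random access handle the slice directly
        return reader._sliced_iter(frame_slice)
    except TypeError:
        # reader cannot seek: fall back to a linear pass over the whole
        # trajectory that yields only the requested frames
        # (reusing the fallback_sliced_iter sketch from above)
        return fallback_sliced_iter(reader, frame_slice)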

@orbeckst
Member

Solve #314 (universal offsets) and you're done (and you would probably also solve some of #918 (pythonic slicing) by consolidating the whole trajectory slicing interface).

I would be very much in favor of having reasonably performant random access instead of adding a crippled implementation. #314 is also important for parallel trajectory analysis (PR #618).

@kain88-de
Member Author

@orbeckst universal offsets would only speed up iteration for non-random-access readers; backwards iteration would still be a problem. But this doesn't have much to do with random access.

@orbeckst
Member

@kain88-de, I thought that as soon as you try to do any slicing on one of the non-random-access readers we would resort to jumping to individual frames. For those readers that means:

  1. rewind
  2. forward to frame

Or has this been improved lately?

@kain88-de
Member Author

Yeah, we would rewind (cheap) and then forward to the frame. It is a bit costly but would improve usability. I would then also add a warning that slices will be slow with that trajectory format and suggest conversion to a random-access format.
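A rough sketch of that emulated jump (assuming the reader exposes the usual rewind()/next() calls and a ts.frame counter; this is illustration only, not a proposed implementation):

import warnings

def emulated_jump(reader, target_frame):
    # emulate random access on a forward-only reader: rewind (cheap),
    # then read forward one frame at a time until the target is reached
    warnings.warn("Random access is emulated by re-reading the trajectory "
                  "from the start; converting to a random-access format "
                  "will be much faster for sliced iteration.")
    reader.rewind()
    ts = reader.ts
    while ts.frame < target_frame:
        ts = reader.next()
    return ts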

@orbeckst
Member

orbeckst commented Nov 23, 2016

... or transfer_to_memory().

As a band-aid for any kind of forward iteration:

import numpy as np

def _slice_iterator(self, trjslice):
    # expand the slice into the explicit frame indices it selects
    frames = np.arange(len(self))[trjslice]
    # linear pass over the whole trajectory, yielding only the requested frames
    for ts in self:
        if ts.frame in frames:
            yield ts

EDIT: Add a check that frames is monotonically increasing.
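One way to add that check (a sketch that would slot in right after frames is computed, rejecting slices the linear pass cannot honour):

# reject slices that would require backwards iteration
if len(frames) > 1 and not np.all(np.diff(frames) > 0):
    raise TypeError("this reader only supports monotonically increasing "
                    "frame slices (positive step)")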

@kain88-de
Member Author

transfer_to_memory won't work because I would also like that function to accept general slicing; see #1030.

Your band-aid solution is along the lines of what I had in mind.
