Torchrl dataset problem. #13

Open

dwsmart32 opened this issue Jan 25, 2024 · 2 comments

@dwsmart32

pytorch/rl#1833 (comment)

Hi again! Thanks for forwarding this problem to the author of torchrl.

However, the author raised the point that it may be a problem with the vd4rl dataset itself (perhaps some omitted data).

According to the error message, it looks like one field has one entry more or less than the others (maybe within a single .npz file):

TensorDict(
    fields={
        action: MemoryMappedTensor(shape=torch.Size([500, 21]), device=cpu, dtype=torch.float32, is_shared=False),
        discount: MemoryMappedTensor(shape=torch.Size([500]), device=cpu, dtype=torch.float64, is_shared=False),
        image: MemoryMappedTensor(shape=torch.Size([500, 64, 64, 3]), device=cpu, dtype=torch.uint8, is_shared=False),
        is_first: MemoryMappedTensor(shape=torch.Size([501]), device=cpu, dtype=torch.bool, is_shared=False),
        is_last: MemoryMappedTensor(shape=torch.Size([501]), device=cpu, dtype=torch.bool, is_shared=False),
        is_terminal: MemoryMappedTensor(shape=torch.Size([501]), device=cpu, dtype=torch.bool, is_shared=False),
        reward: MemoryMappedTensor(shape=torch.Size([500]), device=cpu, dtype=torch.float64, is_shared=False)},
    batch_size=torch.Size([]),
    device=cpu,
    is_shared=False)
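
For reference, a minimal check one could run on a single downloaded episode (the field names come from the TensorDict above; the path is a placeholder, not the actual file) to see which field has the extra entry:

import numpy as np

# Placeholder path: point this at one of the downloaded vd4rl episode .npz files.
episode = np.load("path/to/episode.npz")

# Print the length of every stored array. In the dump above, action/discount/
# image/reward have 500 entries while is_first/is_last/is_terminal have 501.
lengths = {key: len(episode[key]) for key in episode.files}
print(lengths)

if len(set(lengths.values())) > 1:
    print("Field lengths disagree:", lengths)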

I'm wondering whether a single file might have failed to upload somewhere.

Could you please look into this matter once again?

Thanks for your time.

I think this dataset would be very powerful and much easier to access if it were fully supported by torchrl.

@conglu1997
Owner

I'm looking into it!

@conglu1997
Owner

Apologies, I haven't had time to look at this yet. A quick workaround would be to simply skip the single .npz file that has the issue until we can re-generate it. This shouldn't impact the eval or training in any measurable way.
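
For anyone who needs the workaround right now, a minimal sketch (the directory and file name below are placeholders, since the problematic file isn't identified in this thread): filter the episode list before passing it to whatever loader you use.

from pathlib import Path

# Placeholder values: substitute your local vd4rl download directory and the
# name of the .npz file that triggers the error.
DATA_DIR = Path("vd4rl/some_task/expert/64px")
BAD_FILES = {"problematic_episode.npz"}

# Keep every episode file except the known-bad one.
episode_files = sorted(
    p for p in DATA_DIR.glob("*.npz") if p.name not in BAD_FILES
)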
