Correctly handle splits for datasets.arrow_dataset.Dataset objects (#1504)

* Correctly handle splits for datasets.arrow_dataset.Dataset objects

  The `load_tokenized_prepared_datasets` function currently has logic for loading a dataset from a local path that always checks whether a split is present in the dataset. The problem is that if the dataset was loaded with `load_from_disk` and is an Arrow-based `Dataset`, there is no split information at all. Instead, evaluating `split in ds` presumably iterates through every row of the Arrow dataset object looking for, e.g., 'train' (assuming `split == 'train'`), which causes the program to hang. See https://chat.openai.com/share/0d567dbd-d60b-4079-9040-e1de58a4dff3 for context.

* chore: lint

---------

Co-authored-by: Wing Lian <[email protected]>
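The hang described above follows from Python's membership-test fallback: `datasets.Dataset` defines `__iter__` but no split keys, so `split in ds` degrades to a full row scan, while `DatasetDict` is mapping-like and answers the same test with a key lookup. A minimal sketch of the distinction and a guard, using a plain-Python `FakeDataset` stand-in rather than the real `datasets` classes (and assuming, as the library does, that the dict-of-splits type subclasses `dict`):

```python
class FakeDataset:
    """Stand-in for datasets.Dataset: iterable over rows, no split keys."""

    def __init__(self, rows):
        self.rows = rows

    def __iter__(self):
        # No __contains__ is defined, so `x in FakeDataset(...)` falls
        # back to iterating every row -- the source of the hang on
        # large Arrow datasets.
        return iter(self.rows)


def select_split(ds, split="train"):
    """Return the requested split only when the object actually has splits."""
    # Only mapping-like objects (DatasetDict subclasses dict) carry named
    # splits; for a bare Dataset, skip the membership test entirely.
    if isinstance(ds, dict) and split in ds:
        return ds[split]
    return ds


ds = FakeDataset([{"text": "a"}, {"text": "b"}])
print("train" in ds)            # False -- but only after scanning all rows
print("train" in {"train": ds})  # True -- an O(1) key lookup

print(select_split({"train": ds}, "train") is ds)  # True
print(select_split(ds, "train") is ds)             # True (no scan performed)
```

The guard mirrors the shape of the fix: branch on the container type first, and only then consult `split in ds`, so a split lookup never triggers row iteration on a bare `Dataset`.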