
Fix NonMatchingSplitsSizesError/ExpectedMoreSplits when passing data_dir/data_files in no-code Hub datasets #6925

Merged

@albertvillanova merged 2 commits into main from fix-6918 on May 31, 2024

Conversation

@albertvillanova (Member) commented May 28, 2024:

Fix the NonMatchingSplitsSizesError or ExpectedMoreSplits errors raised for no-code Hub datasets when the user passes:

  • data_dir
  • data_files

The proposed solution is to avoid using the exported dataset info (from the Parquet exports) in these cases. The exported info is likewise avoided if the user passes a revision other than "main" (so that no network requests are made).

This PR fixes a bug introduced by:

Fix #6918, fix #6939.
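
For illustration, a minimal sketch of the guard this change implies (the helper name and signature are hypothetical, not the actual datasets internals):

```python
def can_use_exported_dataset_info(data_dir, data_files, revision) -> bool:
    # Hypothetical helper: the exported dataset info (from the Parquet
    # exports) only describes the full dataset at the "main" revision, so it
    # must be ignored whenever the user overrides data_dir or data_files,
    # or asks for another revision.
    if data_dir is not None or data_files is not None:
        return False
    if revision is not None and revision != "main":
        return False
    return True
```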

@HuggingFaceDocBuilderDev commented:

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@albertvillanova albertvillanova changed the title Do not use exported dataset infos for no-script Hub datasets in some cases Do not use exported dataset infos for no-code Hub datasets in some cases May 28, 2024
@albertvillanova albertvillanova changed the title Do not use exported dataset infos for no-code Hub datasets in some cases Do not use exported dataset info for no-code Hub datasets in some cases May 28, 2024
@albertvillanova albertvillanova changed the title Do not use exported dataset info for no-code Hub datasets in some cases Fix NonMatchingSplitsSizesError in no-code Hub datasets when passing data_dir, data_files or revision May 28, 2024
@albertvillanova (Member, Author) commented:

Do you think this is worth making a patch release for?
CC: @huggingface/datasets

@albertvillanova albertvillanova changed the title Fix NonMatchingSplitsSizesError in no-code Hub datasets when passing data_dir, data_files or revision Fix NonMatchingSplitsSizesError in no-code Hub datasets when passing data_dir, data_files May 28, 2024
@albertvillanova albertvillanova changed the title Fix NonMatchingSplitsSizesError in no-code Hub datasets when passing data_dir, data_files Fix NonMatchingSplitsSizesError/ExpectedMoreSplits in no-code Hub datasets when passing data_dir/data_files May 31, 2024
@albertvillanova (Member, Author) commented:

I will add some regression tests before merging.

And I will make a patch release afterwards.
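
For illustration, a regression test in that spirit might look like the following sketch (the repo id is hypothetical; it only needs a no-code Hub dataset whose Parquet export records more splits than the data_files selection):

```python
from datasets import load_dataset

def test_load_dataset_with_data_files_ignores_exported_info():
    # Restricting data_files to a single split of a no-code Hub dataset
    # must not raise ExpectedMoreSplitsError or NonMatchingSplitsSizesError,
    # because the exported dataset info is no longer consulted in this case.
    ds = load_dataset(
        "hf-internal-testing/dataset-with-parquet-export",  # hypothetical repo id
        data_files={"train": "data/train-*"},
        split="train",
    )
    assert ds.num_rows > 0
```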

@albertvillanova merged commit 157585f into main on May 31, 2024 (12 checks passed).
@albertvillanova deleted the fix-6918 branch on May 31, 2024 at 17:10.
Benchmarks

PyArrow==8.0.0

Benchmark: benchmark_array_xd.json

| metric | new / old (diff) |
|---|---|
| read_batch_formatted_as_numpy after write_array2d | 0.004959 / 0.011353 (-0.006394) |
| read_batch_formatted_as_numpy after write_flattened_sequence | 0.003654 / 0.011008 (-0.007354) |
| read_batch_formatted_as_numpy after write_nested_sequence | 0.064087 / 0.038508 (0.025579) |
| read_batch_unformated after write_array2d | 0.031942 / 0.023109 (0.008833) |
| read_batch_unformated after write_flattened_sequence | 0.236830 / 0.275898 (-0.039068) |
| read_batch_unformated after write_nested_sequence | 0.265359 / 0.323480 (-0.058121) |
| read_col_formatted_as_numpy after write_array2d | 0.003108 / 0.007986 (-0.004878) |
| read_col_formatted_as_numpy after write_flattened_sequence | 0.002824 / 0.004328 (-0.001504) |
| read_col_formatted_as_numpy after write_nested_sequence | 0.049102 / 0.004250 (0.044852) |
| read_col_unformated after write_array2d | 0.046070 / 0.037052 (0.009017) |
| read_col_unformated after write_flattened_sequence | 0.248830 / 0.258489 (-0.009659) |
| read_col_unformated after write_nested_sequence | 0.283900 / 0.293841 (-0.009941) |
| read_formatted_as_numpy after write_array2d | 0.027799 / 0.128546 (-0.100747) |
| read_formatted_as_numpy after write_flattened_sequence | 0.010572 / 0.075646 (-0.065074) |
| read_formatted_as_numpy after write_nested_sequence | 0.223595 / 0.419271 (-0.195677) |
| read_unformated after write_array2d | 0.036951 / 0.043533 (-0.006582) |
| read_unformated after write_flattened_sequence | 0.238813 / 0.255139 (-0.016326) |
| read_unformated after write_nested_sequence | 0.253841 / 0.283200 (-0.029359) |
| write_array2d | 0.018471 / 0.141683 (-0.123212) |
| write_flattened_sequence | 1.131969 / 1.452155 (-0.320186) |
| write_nested_sequence | 1.173763 / 1.492716 (-0.318954) |

Benchmark: benchmark_getitem_100B.json

| metric | new / old (diff) |
|---|---|
| get_batch_of_1024_random_rows | 0.095504 / 0.018006 (0.077498) |
| get_batch_of_1024_rows | 0.301469 / 0.000490 (0.300979) |
| get_first_row | 0.000212 / 0.000200 (0.000012) |
| get_last_row | 0.000052 / 0.000054 (-0.000003) |

Benchmark: benchmark_indices_mapping.json

| metric | new / old (diff) |
|---|---|
| select | 0.019194 / 0.037411 (-0.018217) |
| shard | 0.062313 / 0.014526 (0.047787) |
| shuffle | 0.075852 / 0.176557 (-0.100704) |
| sort | 0.121996 / 0.737135 (-0.615140) |
| train_test_split | 0.076416 / 0.296338 (-0.219923) |

Benchmark: benchmark_iterating.json

| metric | new / old (diff) |
|---|---|
| read 5000 | 0.292465 / 0.215209 (0.077256) |
| read 50000 | 2.910234 / 2.077655 (0.832579) |
| read_batch 50000 10 | 1.479672 / 1.504120 (-0.024448) |
| read_batch 50000 100 | 1.332281 / 1.541195 (-0.208913) |
| read_batch 50000 1000 | 1.354095 / 1.468490 (-0.114395) |
| read_formatted numpy 5000 | 0.573438 / 4.584777 (-4.011339) |
| read_formatted pandas 5000 | 2.382406 / 3.745712 (-1.363307) |
| read_formatted tensorflow 5000 | 2.708289 / 5.269862 (-2.561572) |
| read_formatted torch 5000 | 1.739665 / 4.565676 (-2.826011) |
| read_formatted_batch numpy 5000 10 | 0.063514 / 0.424275 (-0.360761) |
| read_formatted_batch numpy 5000 1000 | 0.005008 / 0.007607 (-0.002599) |
| shuffled read 5000 | 0.350070 / 0.226044 (0.124025) |
| shuffled read 50000 | 3.475837 / 2.268929 (1.206909) |
| shuffled read_batch 50000 10 | 1.804639 / 55.444624 (-53.639985) |
| shuffled read_batch 50000 100 | 1.520472 / 6.876477 (-5.356005) |
| shuffled read_batch 50000 1000 | 1.658061 / 2.142072 (-0.484011) |
| shuffled read_formatted numpy 5000 | 0.648495 / 4.805227 (-4.156732) |
| shuffled read_formatted_batch numpy 5000 10 | 0.118394 / 6.500664 (-6.382270) |
| shuffled read_formatted_batch numpy 5000 1000 | 0.042557 / 0.075469 (-0.032912) |

Benchmark: benchmark_map_filter.json

| metric | new / old (diff) |
|---|---|
| filter | 0.960772 / 1.841788 (-0.881016) |
| map fast-tokenizer batched | 11.451629 / 8.074308 (3.377321) |
| map identity | 9.613331 / 10.191392 (-0.578061) |
| map identity batched | 0.130259 / 0.680424 (-0.550164) |
| map no-op batched | 0.015828 / 0.534201 (-0.518373) |
| map no-op batched numpy | 0.287581 / 0.579283 (-0.291702) |
| map no-op batched pandas | 0.266517 / 0.434364 (-0.167847) |
| map no-op batched pytorch | 0.327334 / 0.540337 (-0.213003) |
| map no-op batched tensorflow | 0.427881 / 1.386936 (-0.959055) |

PyArrow==latest

Benchmark: benchmark_array_xd.json

| metric | new / old (diff) |
|---|---|
| read_batch_formatted_as_numpy after write_array2d | 0.005364 / 0.011353 (-0.005989) |
| read_batch_formatted_as_numpy after write_flattened_sequence | 0.003723 / 0.011008 (-0.007285) |
| read_batch_formatted_as_numpy after write_nested_sequence | 0.049990 / 0.038508 (0.011482) |
| read_batch_unformated after write_array2d | 0.032023 / 0.023109 (0.008913) |
| read_batch_unformated after write_flattened_sequence | 0.258609 / 0.275898 (-0.017289) |
| read_batch_unformated after write_nested_sequence | 0.281250 / 0.323480 (-0.042230) |
| read_col_formatted_as_numpy after write_array2d | 0.004222 / 0.007986 (-0.003764) |
| read_col_formatted_as_numpy after write_flattened_sequence | 0.002799 / 0.004328 (-0.001529) |
| read_col_formatted_as_numpy after write_nested_sequence | 0.049546 / 0.004250 (0.045296) |
| read_col_unformated after write_array2d | 0.040298 / 0.037052 (0.003246) |
| read_col_unformated after write_flattened_sequence | 0.273552 / 0.258489 (0.015063) |
| read_col_unformated after write_nested_sequence | 0.304042 / 0.293841 (0.010201) |
| read_formatted_as_numpy after write_array2d | 0.030116 / 0.128546 (-0.098430) |
| read_formatted_as_numpy after write_flattened_sequence | 0.010792 / 0.075646 (-0.064855) |
| read_formatted_as_numpy after write_nested_sequence | 0.058427 / 0.419271 (-0.360845) |
| read_unformated after write_array2d | 0.033415 / 0.043533 (-0.010118) |
| read_unformated after write_flattened_sequence | 0.258794 / 0.255139 (0.003655) |
| read_unformated after write_nested_sequence | 0.275304 / 0.283200 (-0.007896) |
| write_array2d | 0.017944 / 0.141683 (-0.123739) |
| write_flattened_sequence | 1.109291 / 1.452155 (-0.342864) |
| write_nested_sequence | 1.156627 / 1.492716 (-0.336090) |

Benchmark: benchmark_getitem_100B.json

| metric | new / old (diff) |
|---|---|
| get_batch_of_1024_random_rows | 0.096700 / 0.018006 (0.078693) |
| get_batch_of_1024_rows | 0.301108 / 0.000490 (0.300618) |
| get_first_row | 0.000208 / 0.000200 (0.000008) |
| get_last_row | 0.000054 / 0.000054 (-0.000001) |

Benchmark: benchmark_indices_mapping.json

| metric | new / old (diff) |
|---|---|
| select | 0.022632 / 0.037411 (-0.014779) |
| shard | 0.075813 / 0.014526 (0.061287) |
| shuffle | 0.090302 / 0.176557 (-0.086254) |
| sort | 0.130375 / 0.737135 (-0.606760) |
| train_test_split | 0.089710 / 0.296338 (-0.206629) |

Benchmark: benchmark_iterating.json

| metric | new / old (diff) |
|---|---|
| read 5000 | 0.297091 / 0.215209 (0.081882) |
| read 50000 | 2.910379 / 2.077655 (0.832725) |
| read_batch 50000 10 | 1.570460 / 1.504120 (0.066340) |
| read_batch 50000 100 | 1.441619 / 1.541195 (-0.099576) |
| read_batch 50000 1000 | 1.442417 / 1.468490 (-0.026073) |
| read_formatted numpy 5000 | 0.570034 / 4.584777 (-4.014743) |
| read_formatted pandas 5000 | 0.952613 / 3.745712 (-2.793099) |
| read_formatted tensorflow 5000 | 2.659274 / 5.269862 (-2.610588) |
| read_formatted torch 5000 | 1.751013 / 4.565676 (-2.814663) |
| read_formatted_batch numpy 5000 10 | 0.064639 / 0.424275 (-0.359636) |
| read_formatted_batch numpy 5000 1000 | 0.005145 / 0.007607 (-0.002462) |
| shuffled read 5000 | 0.347478 / 0.226044 (0.121434) |
| shuffled read 50000 | 3.443862 / 2.268929 (1.174933) |
| shuffled read_batch 50000 10 | 1.897246 / 55.444624 (-53.547379) |
| shuffled read_batch 50000 100 | 1.609267 / 6.876477 (-5.267210) |
| shuffled read_batch 50000 1000 | 1.755116 / 2.142072 (-0.386956) |
| shuffled read_formatted numpy 5000 | 0.658982 / 4.805227 (-4.146245) |
| shuffled read_formatted_batch numpy 5000 10 | 0.117000 / 6.500664 (-6.383664) |
| shuffled read_formatted_batch numpy 5000 1000 | 0.041453 / 0.075469 (-0.034016) |

Benchmark: benchmark_map_filter.json

| metric | new / old (diff) |
|---|---|
| filter | 1.005843 / 1.841788 (-0.835944) |
| map fast-tokenizer batched | 12.101306 / 8.074308 (4.026998) |
| map identity | 10.370706 / 10.191392 (0.179314) |
| map identity batched | 0.139374 / 0.680424 (-0.541050) |
| map no-op batched | 0.015605 / 0.534201 (-0.518596) |
| map no-op batched numpy | 0.286978 / 0.579283 (-0.292305) |
| map no-op batched pandas | 0.122951 / 0.434364 (-0.311413) |
| map no-op batched pytorch | 0.331729 / 0.540337 (-0.208609) |
| map no-op batched tensorflow | 0.422088 / 1.386936 (-0.964848) |
@albertvillanova albertvillanova changed the title Fix NonMatchingSplitsSizesError/ExpectedMoreSplits in no-code Hub datasets when passing data_dir/data_files Fix NonMatchingSplitsSizesError/ExpectedMoreSplits when passing data_dir/data_files in no-code Hub datasets Jun 2, 2024
albertvillanova added a commit that referenced this pull request Jun 3, 2024
…asets when passing data_dir/data_files (#6925)

* Do not use exported dataset infos in some cases

* Add regression tests
albertvillanova added a commit that referenced this pull request Jun 3, 2024
…asets when passing data_dir/data_files (#6925)

* Do not use exported dataset infos in some cases

* Add regression tests
@meg-huggingface (Contributor) commented:

I'm hitting this error now, using Spaces. Here's what happens when I try to load just the 'validation' split:

code:

```python
import os
from huggingface_hub import HfApi
from datasets import Dataset, load_dataset, DownloadConfig


GATED_IMAGENET = os.environ.get("GATED_IMAGENET")
api = HfApi(token=GATED_IMAGENET)

ds = load_dataset('datacomp/imagenet-1k-random0.0', token=GATED_IMAGENET, data_files={'validation': 'data/val*'}, split='validation', trust_remote_code=True)
```

log:

```text
Generating validation split:   0%|          | 0/50000 [00:00<?, ? examples/s]
Generating validation split:  12%|█▏        | 6172/50000 [00:01<00:07, 5804.90 examples/s]
Generating validation split:  25%|██▌       | 12716/50000 [00:02<00:06, 6167.77 examples/s]
Generating validation split:  38%|███▊      | 19060/50000 [00:03<00:04, 6218.99 examples/s]
Generating validation split:  51%|█████     | 25603/50000 [00:04<00:03, 6126.35 examples/s]
Generating validation split:  64%|██████▍   | 32145/50000 [00:05<00:02, 6166.95 examples/s]
Generating validation split:  77%|███████▋  | 38716/50000 [00:06<00:01, 6272.66 examples/s]
Generating validation split:  90%|█████████ | 45158/50000 [00:07<00:00, 6307.44 examples/s]
Generating validation split: 100%|██████████| 50000/50000 [00:08<00:00, 6212.19 examples/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 12, in <module>
    ds = load_dataset('datacomp/imagenet-1k-random0.0', token=GATED_IMAGENET, data_files={'validation': 'data/val*'}, split='validation', trust_remote_code=True)
  File "/usr/local/lib/python3.10/site-packages/datasets/load.py", line 2154, in load_dataset
    builder_instance.download_and_prepare(
  File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 924, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 1018, in _download_and_prepare
    verify_splits(self.info.splits, split_dict)
  File "/usr/local/lib/python3.10/site-packages/datasets/utils/info_utils.py", line 68, in verify_splits
    raise ExpectedMoreSplitsError(str(set(expected_splits) - set(recorded_splits)))
datasets.exceptions.ExpectedMoreSplitsError: {'train', 'test'}
```
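
From the traceback, the failing check is essentially the following (a simplified sketch based only on the lines of datasets/utils/info_utils.py quoted above, not the full implementation):

```python
from datasets.exceptions import ExpectedMoreSplitsError

def verify_splits(expected_splits, recorded_splits):
    # The exported dataset info still expects 'train' and 'test', but only
    # 'validation' was built from data/val*, so the set difference
    # {'train', 'test'} is non-empty and the error is raised.
    missing = set(expected_splits) - set(recorded_splits)
    if missing:
        raise ExpectedMoreSplitsError(str(missing))
```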

@lhoestq (Member) commented Nov 7, 2024:

Hi Meg! Thanks for reporting, I'll see how I can fix this. In the meantime, feel free to pass verification_mode="no_checks" to load_dataset.
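
For reference, a sketch of that workaround applied to the snippet above; note that verification_mode="no_checks" also skips size and checksum verification, so it is best used only when intentionally loading a subset of a dataset's files:

```python
import os
from datasets import load_dataset

# Suggested workaround: skip split verification so that loading only the
# 'validation' files no longer trips ExpectedMoreSplitsError.
ds = load_dataset(
    "datacomp/imagenet-1k-random0.0",
    token=os.environ.get("GATED_IMAGENET"),
    data_files={"validation": "data/val*"},
    split="validation",
    verification_mode="no_checks",
)
```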

Successfully merging this pull request may close these issues:

  • ExpectedMoreSplits error when using data_dir
  • NonMatchingSplitsSizesError when using data_dir