
Batched filter inputs? #237

Open · stas00 opened this issue Jul 4, 2024 · 6 comments
@stas00

stas00 commented Jul 4, 2024

This is a very cool library! Kudos to the authors!

The Filter API seems to work on only a single item at a time.

Is there a way to filter in batches? Say you're using a filter that runs ML model inference. It'd be much more efficient to run inference on large batches than on one item at a time.

I looked through the examples and the code in case I had missed it, but I can't find any indication that batched input is supported.

I think the API would be similar to the HF Tokenizer's, which takes batches and returns batches: here, instead of returning a bool, it'd return a list of bools. If the input is a single sample, return a single bool; if a list, return a list.
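
To illustrate the shape I have in mind, here's a hypothetical sketch (the function and the length rule are made up for illustration, not datatrove's API):

    # Hypothetical sketch of the requested API shape: single sample in ->
    # single bool out, batch in -> batch out, the way HF tokenizers accept
    # both str and list[str].
    def keep(text_or_batch: str | list[str]) -> bool | list[bool]:
        if isinstance(text_or_batch, list):
            return [keep(text) for text in text_or_batch]  # batch in, batch out
        return len(text_or_batch.strip()) > 0  # placeholder single-sample rule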

Thanks a lot!

@guipenedo
Collaborator

Great suggestion, thanks! Added support for this in 7ba873f.
You can now override BaseFilter's

    def filter_batch(self, batch: List[Document]) -> List[bool | Tuple[bool, str]]:

method and pass batch_size to BaseFilter's __init__.
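
A minimal sketch of a batched subclass, assuming this new API (BatchedClassifierFilter and its _predict are made-up placeholders for a real batched model call):

    from datatrove.data import Document
    from datatrove.pipeline.filters.base_filter import BaseFilter


    class BatchedClassifierFilter(BaseFilter):
        name = "batched_classifier_filter"

        def __init__(self, batch_size: int = 64, **kwargs):
            # batch_size is forwarded to BaseFilter so the pipeline can group
            # documents before calling filter_batch
            super().__init__(batch_size=batch_size, **kwargs)

        def _predict(self, texts: list[str]) -> list[float]:
            # placeholder for a single batched model call, e.g. model.predict(texts)
            return [min(len(t) / 1000, 1.0) for t in texts]

        def filter(self, doc: Document) -> bool | tuple[bool, str]:
            # single-document path, routed through the batched one
            return self.filter_batch([doc])[0]

        def filter_batch(self, batch: list[Document]) -> list[bool | tuple[bool, str]]:
            # one model call for the whole batch instead of one per document
            scores = self._predict([doc.text for doc in batch])
            return [score > 0.5 for score in scores]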

@stas00
Author

stas00 commented Jul 9, 2024

Hmm, possibly a test is needed? If I replace filter with filter_batch, it now fails:

    TypeError: Can't instantiate abstract class ClassifierFilter with abstract method filter

so I think it still expects filter to be defined?

@stas00
Author

stas00 commented Jul 10, 2024

OK, so I sorted out how to get the classifier to work on the GPUs under pickle (#242 (comment)), and I got a 10x speed-up, but the GPUs are massively underutilized with one sample at a time.

If you have any insight into how to resolve the above, or perhaps an example I could adapt to use batches, that should allow me to finish classifying the FineWeb shards within the 24h Slurm job limit imposed on me. Otherwise it's still too slow, and I will have to re-shard FineWeb manually, which is exactly what I was hoping to avoid by using datatrove.

Thanks!

@stas00
Author

stas00 commented Jul 10, 2024

OK, I found a workaround: adding

    def filter(self, doc) -> bool | tuple[bool, str]:
        # no-op stub, only here to satisfy the abstract method
        pass

I suppose that if filter is required, then it should get a default definition in the base class, something like the sketch below?
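
For illustration, an assumption about what a possible fix could look like, not the actual datatrove code:

    from abc import ABC


    class SketchedBaseFilter(ABC):
        # sketch only: a concrete default instead of @abstractmethod, so that
        # subclasses overriding only filter_batch can be instantiated without
        # a no-op stub
        def filter(self, doc) -> bool | tuple[bool, str]:
            raise NotImplementedError("override filter() or filter_batch()")

        def filter_batch(self, batch: list) -> list[bool | tuple[bool, str]]:
            # default batched path falls back to the per-document filter
            return [self.filter(doc) for doc in batch]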

@stas00
Author

stas00 commented Jul 10, 2024

And now with the batched filter, what does the reported it/s mean? Is it batches/s or something else?

Is there a way to customize it to print samples/second, which would now be much more meaningful?

@nldxtd

nldxtd commented Aug 22, 2024

I was thinking about using batch inference with an ML model too, but I ran into a weird issue: when running in batch-inference mode, the actual time used is far more than in single-inference mode.

[Screenshot 2024-08-22 22:32:41: single mode]

[Screenshot 2024-08-22 22:33:04: batch mode]
