
fix: More robust token offset handling for Huggingface tokenizers #131

Merged · 3 commits into main · Apr 3, 2024

Conversation

benbrandt (Owner) commented:

Handles two issues:

  1. Previously, padding tokens were treated like any other token, which produced unexpected chunking behavior.
  2. I had made some wrong assumptions about how the offset ranges are represented (I thought they were inclusive, when in fact they are exclusive and simply don't include whitespace).

Prior to this, if a tokenizer was configured with padding, the chunk sizer based its count on the padded token sequence. For chunk sizing, however, we don't want to count padding tokens; we only want to know how many tokens the given text itself produces.
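For illustration (this is a sketch, not the crate's actual chunk-sizer code): one way to count only the real tokens with the Rust `tokenizers` crate, assuming padding has been configured on the tokenizer. Padded positions carry an attention mask of 0, so filtering on the mask skips them; the function name is hypothetical.

```rust
use tokenizers::Tokenizer;

/// Illustrative helper: count how many tokens `text` produces,
/// ignoring any padding the tokenizer was configured to add.
fn token_count(tokenizer: &Tokenizer, text: &str) -> usize {
    // Encode without adding special tokens; with padding enabled, the
    // encoding may be padded out to a fixed length.
    let encoding = tokenizer.encode(text, false).expect("encoding failed");
    // Padding tokens have an attention mask of 0, so counting the 1s
    // yields only the tokens the text itself produced.
    encoding
        .get_attention_mask()
        .iter()
        .filter(|&&mask| mask == 1)
        .count()
}
```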
I had previously assumed the offsets were exclusive and needed to be converted into inclusive ranges. That isn't actually the case, and such an adjustment may not even be consistent beyond ByteLevel pre-tokenizers. The offsets refer to the words themselves and don't need to be adjusted, and because they stay the same even for prefixed text, we don't have to adjust for prefixing either.
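A small sketch of what the encodings report, assuming a tokenizer loaded from a local `tokenizer.json` (the file name and sample text are illustrative). Each offset is a half-open `start..end` range into the text: the end is exclusive, and trailing whitespace simply falls outside the range, so no ±1 adjustment is needed.

```rust
use tokenizers::Tokenizer;

fn main() -> tokenizers::Result<()> {
    // Illustrative setup: assumes a serialized tokenizer file exists locally.
    let tokenizer = Tokenizer::from_file("tokenizer.json")?;
    let text = "hello world";
    let encoding = tokenizer.encode(text, false)?;

    // Offsets are half-open (start, end) ranges into the text. A token
    // covering "hello" reports (0, 5): the end is exclusive, and the
    // following space is simply not part of the range, so the offsets
    // can be used as-is without converting to inclusive bounds.
    for ((start, end), token) in encoding.get_offsets().iter().zip(encoding.get_tokens()) {
        println!("{token}: {start}..{end} -> {:?}", &text[*start..*end]);
    }
    Ok(())
}
```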
benbrandt linked an issue on Apr 3, 2024 that may be closed by this pull request.

codecov bot commented Apr 3, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

❗ No coverage uploaded for pull request base (main@e5e4f67).

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #131   +/-   ##
=======================================
  Coverage        ?   99.75%           
=======================================
  Files           ?        7           
  Lines           ?     1603           
  Branches        ?        0           
=======================================
  Hits            ?     1599           
  Misses          ?        4           
  Partials        ?        0           


benbrandt merged commit 47d86fb into main on Apr 3, 2024
24 checks passed
benbrandt deleted the 129-not-working-with-certain-tokenizers branch on April 3, 2024 at 15:52
Development

Successfully merging this pull request may close these issues:

Not working with certain tokenizer(s)