
llm always reports an error that the length exceeds the limit #10846

Open
Vampxgg opened this issue Nov 19, 2024 · 1 comment
Labels
🐞 bug Something isn't working

Comments

Vampxgg commented Nov 19, 2024

Self Checks

  • This is only for bug reports; if you would like to ask a question, please head to Discussions.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
  • Please do not modify this template :) and fill in all the required fields.

Dify version

0.11.2

Cloud or Self Hosted

Cloud

Steps to reproduce

4ec37fe3-fe7e-4b38-b823-9a2f073c4905

✔️ Expected Behavior

LLM analyzes the content extracted from my document and outputs the data in markdown format to me.

❌ Actual Behavior

Passing large amounts of text directly as regular text works fine, but when I use the document extractor to extract the uploaded file, the LLM node reports that MAX_TOKEN is exceeded. I would really like the document extractor's handling of large files to be optimized.

@dosubot added the 🐞 bug (Something isn't working) label on Nov 19, 2024

dosubot bot commented Nov 19, 2024

I found some similar discussions and issues related to token limits in Dify:

  1. Error using large language model - This discussion is unsolved and might provide insights into similar token limit issues.
  2. The maximum length should be in terms of token count rather than character count - Another unsolved discussion that touches on token limits.
  3. bug: max_token limit exceeded when change llm - This closed issue discusses a similar problem where the max_tokens parameter was set too high. The solution involved ensuring the max_tokens parameter does not exceed 4096 tokens, which is the operational limit for some models.

To address your issue, you might want to check the configuration of the max_tokens parameter in your document extractor settings. Ensure it aligns with the model's token limit, which is typically 4096 tokens for many models. Adjusting this setting might help optimize the document extractor's function [1][2][3].
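If the oversized input comes from the document-extractor output itself (as in this report) rather than from the max_tokens parameter, a common workaround is to pre-truncate the extracted text to the model's token budget before it reaches the LLM node. The sketch below illustrates the idea with the tiktoken tokenizer; the model name, the 4096-token budget, and the truncate_to_token_limit helper are illustrative assumptions, not Dify internals.

```python
# Minimal sketch (assumed helper, not Dify's API): keep extracted document
# text within a token budget before it is passed to the LLM node.
import tiktoken

MODEL_NAME = "gpt-3.5-turbo"  # illustrative; substitute your workflow's model
TOKEN_BUDGET = 4096           # the limit cited above; check your model's actual context size

def truncate_to_token_limit(text: str, budget: int = TOKEN_BUDGET) -> str:
    """Encode the text, keep at most `budget` tokens, and decode back to a string."""
    enc = tiktoken.encoding_for_model(MODEL_NAME)
    tokens = enc.encode(text)
    if len(tokens) <= budget:
        return text
    return enc.decode(tokens[:budget])

# Usage: run the document extractor's output through the helper first,
# then feed the truncated text to the LLM.
extracted_text = "...output of the document extractor..."
prompt_text = truncate_to_token_limit(extracted_text)
```

Note that hard truncation drops everything past the budget; for long documents, splitting the extracted text into chunks and summarizing each chunk usually preserves more content, but the sketch above is the quickest way to avoid the MAX_TOKEN error.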

To continue talking to Dosu, mention @dosu.
