Self Checks
I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
[FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
Please do not modify this template :) and fill in all the required fields.
Dify version
0.11.2
Cloud or Self Hosted
Cloud
Steps to reproduce
✔️ Expected Behavior
The LLM analyzes the content extracted from my document and returns the data to me in Markdown format.
❌ Actual Behavior
Passing large amounts of text into the prompt as regular text works fine, but when I use the Document Extractor to extract the uploaded file, the workflow reports that the content exceeds MAX_TOKEN. I would really like the Document Extractor's handling of large files to be improved.
A similar problem is discussed in the closed issue "bug: max_token limit exceeded when change llm", where the max_tokens parameter was set too high. The solution there was to ensure that max_tokens does not exceed 4096 tokens, which is the operational limit for some models.
To address your issue, check how the max_tokens parameter is configured for the model that processes the document extractor's output, and make sure it stays within the model's token limit, which is typically 4096 tokens for many models. Adjusting this setting may resolve the error [1][2][3].
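If the extracted document itself is larger than the model's context window, one workaround outside of Dify's UI is to split the extracted text into token-bounded chunks and send each chunk to the LLM separately. The sketch below is a minimal illustration of that idea, not part of Dify's codebase; it assumes a tiktoken-compatible tokenizer and uses a hypothetical MAX_INPUT_TOKENS budget that you would tune to your model.

```python
# Minimal sketch (not Dify code): split extracted document text into
# token-bounded chunks so each LLM call stays under the model's limit.
# MAX_INPUT_TOKENS is a hypothetical budget; leave headroom for the prompt
# and the completion (e.g. well under a 4096-token limit).
import tiktoken

MAX_INPUT_TOKENS = 3000  # hypothetical budget, adjust to your model

def chunk_by_tokens(text: str, limit: int = MAX_INPUT_TOKENS) -> list[str]:
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    # Slice the token stream into windows of at most `limit` tokens
    # and decode each window back into text.
    return [enc.decode(tokens[i:i + limit]) for i in range(0, len(tokens), limit)]

# Usage: feed each chunk to the model in its own request instead of
# one oversized call, then combine the partial Markdown outputs.
# for chunk in chunk_by_tokens(extracted_text):
#     ...call your model with `chunk`...
```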