[Bug] [dolphinscheduler-api] HDFS failed to upload large file #10340
Search before asking
What happened
After configuring HDFS, uploading a large file (90 MB) through the Resource Center fails, while small files upload fine. My guess is that files larger than the HDFS block size (64 MB) cannot be uploaded. The log shows an error along the lines of "no suitable decomposer".
What you expected to happen
In the previous version (2.0.5) this feature worked normally; after upgrading, neither 3.0.0-alpha nor 3.0.0-beta-1 works.
How to reproduce
After the cluster is deployed, configure HDFS (the Hadoop cluster NameNode is configured with HA), then upload a file larger than 128 MB (more than one HDFS data block).
Anything else
The problem occurs every time; the relevant logs are in the latest files under api-server/logs in the installation directory.
Version
3.0.0-beta-1
Are you willing to submit PR?
Code of Conduct
Thank you for your feedback, we have received your issue. Please wait patiently for a reply.
I encountered this error too. May I ask whether there are any updates or potential solutions to this issue?
Hello, has this been resolved? I ran into the same problem; it seems the upload is cancelled automatically once it takes longer than 15 seconds.
Debugging shows that the actual exception reported by Spring Boot is org.eclipse.jetty.io.EofException: Early EOF. The cause is that the frontend sets a 15 s timeout when submitting with axios, so the frontend times out and aborts the request mid-upload. See the snippet from the UI's service.ts below; after increasing the timeout, uploading large files works.
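The original snippet did not survive in this thread. Below is a minimal sketch of the change being described, assuming an axios instance created in dolphinscheduler-ui/src/service/service.ts; the baseURL and surrounding structure are illustrative, and only the timeout option is the point.

```typescript
// Sketch only: an axios instance whose `timeout` caps every request,
// including multipart uploads from the Resource Center.
import axios from 'axios'

const service = axios.create({
  baseURL: '/dolphinscheduler', // assumed API prefix
  // The behaviour reported in this thread corresponds to a 15 s timeout,
  // which aborts large uploads and shows up server-side as
  // org.eclipse.jetty.io.EofException: Early EOF.
  // timeout: 15 * 1000,
  timeout: 30 * 60 * 1000 // raised to 30 minutes so large uploads can finish
})

export default service
```

With a larger timeout the client keeps the connection open long enough for a 90 MB+ file to be transferred before axios cancels the request.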
related: #10509
I'm having the same problem. Which file configuration needs to be changed, please? I could not find the service.ts file.
@CriysHot @yaowj2 The file URL is https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-ui/src/service/service.ts.
Solved, the following 4 files need to be modified,
and find the following configuration.
@yangjf2019 Great job! Would you like to submit a PR to fix this?
Thanks, I can try.
Hi @EricGao888, is it possible to increase the value of this parameter to 30 minutes?
In general, and frankly, it's not recommended to use DolphinScheduler to upload files that are too big!
May I ask whether it is possible to make it configurable for users?
Yes, I think it should be done too. Please let me take another look, thanks.
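One way to make it configurable would be to read the timeout from a build-time setting instead of hard-coding it. The sketch below assumes a Vite-style build; VITE_APP_AXIOS_TIMEOUT is a hypothetical variable name used for illustration, not an existing DolphinScheduler option.

```typescript
// Hypothetical sketch: take the upload timeout from a build-time environment
// variable and fall back to the original 15 s default when it is unset.
import axios from 'axios'

const configured = Number(import.meta.env.VITE_APP_AXIOS_TIMEOUT)

const service = axios.create({
  baseURL: '/dolphinscheduler', // assumed API prefix
  timeout: Number.isFinite(configured) && configured > 0
    ? configured   // milliseconds supplied at build time
    : 15 * 1000    // previous hard-coded default
})

export default service
```

Operators could then set something like VITE_APP_AXIOS_TIMEOUT=1800000 when building the UI instead of patching the bundled js by hand.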
Do you still want to submit a pull request to fix this? @yangjf2019
I changed those four js files, but why does it still not work? Has anyone else run into the same situation?
The larger the file, the longer the upload takes, so it is recommended to set a larger timeout value.
Thanks, my problem is solved. After modifying the js files I also cleared the browser cache, and now uploads work normally and the backend no longer reports errors.
Hello @yangjf2019, may I ask whether you are still working on this issue? We have received feedback from many users that they are blocked by it. IMHO we could have a hot-fix as a first step, simply increasing the threshold. Then we could take it further and make it configurable for users. Thanks.
Hot fix: #11694
How about making this configurable? Modifying the code seems tricky.
My version is 3.1.2. Has this issue been fixed on the frontend?
Could you share a way to contact you?
Where is this file: /api-server/ui/assets/service.766f4632.js?
The Docker image has no files under ui/assets/, so ignore this.