
[Improvement] CI failed due to HDFS is not ready #2874

Closed
mchades opened this issue Apr 10, 2024 · 0 comments · Fixed by #2871
Assignees
Labels
improvement Improvements on everything

Comments


mchades commented Apr 10, 2024

What would you like to be improved?

Some CI failures occur because the Hive container readiness check does not pass, and the logs show "HDFS is not ready".

HDFS is started via the start-dfs.sh script, which launches the daemons over SSH. If SSH has any issues, the DataNode will fail to start.

How should we improve?

Start HDFS components individually.

@mchades mchades added the improvement Improvements on everything label Apr 10, 2024
@mchades mchades self-assigned this Apr 10, 2024
@mchades mchades added this to the Gravitino 0.5.0 milestone Apr 10, 2024
xunliu pushed a commit that referenced this issue Apr 11, 2024
### What changes were proposed in this pull request?

 - Remove SSH service from the startup script.
 - Use `hadoop-daemon.sh` to start HDFS services.
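The change above can be sketched as a startup-script fragment. This is a minimal illustration, not the exact script from the PR; the `HADOOP_HOME` layout, the readiness grep, and the retry loop are assumptions based on a typical Hadoop installation:

```shell
#!/bin/sh
# Start each HDFS daemon directly with hadoop-daemon.sh instead of
# start-dfs.sh, which avoids the SSH dependency entirely.
"${HADOOP_HOME}/sbin/hadoop-daemon.sh" --config "${HADOOP_HOME}/etc/hadoop" start namenode
"${HADOOP_HOME}/sbin/hadoop-daemon.sh" --config "${HADOOP_HOME}/etc/hadoop" start datanode

# Poll until the NameNode reports at least one live DataNode before
# letting the container health check pass.
until "${HADOOP_HOME}/bin/hdfs" dfsadmin -report | grep -q "Live datanodes"; do
  echo "HDFS is not ready yet, retrying..."
  sleep 2
done
echo "HDFS is ready"
```

Because each daemon is started in-process on the container itself, there is no sshd to configure or keep alive, which removes the flaky SSH step as a failure mode in CI.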

### Why are the changes needed?

Fix: #2874 

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

CI pass