
[Bug]: k8s spark optimizer release error #3401

Closed · 2 tasks done · lintingbin opened this issue Jan 9, 2025 · 2 comments · Fixed by #3402
Labels
type:bug Something isn't working

Comments

@lintingbin
Contributor

What happened?

When I release the Spark K8s optimizer, an error is reported. The namespace used by the kill command is not the one I configured on the resource group; it is "default".
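For context, a minimal sketch of the expected behavior (the class and method names below are hypothetical, not Amoro's actual internals): the "<namespace>:<appId>" target passed to spark-submit --kill should be built from the namespace configured on the resource group, falling back to "default" only when none is set.

    import java.util.Map;

    public class SparkKillNamespaceSketch {
        private static final String NAMESPACE_KEY = "spark.kubernetes.namespace";

        // Builds the "<namespace>:<appId>" target passed to `spark-submit --kill`.
        static String killTarget(Map<String, String> sparkConf, String appId) {
            // Expected behavior: resolve the namespace from the group's spark conf.
            // The reported bug is that the kill path effectively ignores this value
            // and always targets "default".
            String namespace = sparkConf.getOrDefault(NAMESPACE_KEY, "default");
            return namespace + ":" + appId;
        }

        public static void main(String[] args) {
            Map<String, String> groupConf = Map.of(NAMESPACE_KEY, "amoro");
            // Prints "amoro:amoro-optimizer-6i6ld3rup897p9dhdpf0ifgosh";
            // the buggy path produced "default:amoro-optimizer-..." instead.
            System.out.println(killTarget(groupConf, "amoro-optimizer-6i6ld3rup897p9dhdpf0ifgosh"));
        }
    }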

Affects Versions

0.7.1

What table formats are you seeing the problem on?

Iceberg

What engines are you seeing the problem on?

AMS

How to reproduce

No response

Relevant log output

2025-01-09 08:14:57,208 INFO [JettyServerThreadPool-1470070] [org.apache.amoro.server.manager.SparkOptimizerContainer] [] - Releasing spark optimizer using command: export SPARK_HOME=/opt/spark && export HADOOP_USER_NAME=hive && /opt/spark/bin/spark-submit --kill default:amoro-optimizer-6i6ld3rup897p9dhdpf0ifgosh --master k8s://https://kubernetes.default.svc --conf spark.kubernetes.container.image.pullPolicy=Always --conf spark.executor.memory=4g --conf spark.driver.memory=2g --conf spark.driver.cores=1 --conf spark.kubernetes.executor.podTemplateFile=gs://hdp-vvp/spark/executor_pod_template.yaml --conf spark.executor.cores=1 --conf spark.dynamicAllocation.maxExecutors=5 --conf spark.dynamicAllocation.shuffleTracking.enabled=true --conf spark.kubernetes.namespace=amoro --conf spark.kubernetes.container.image=asia.gcr.io/farlight-hadoop/amoro-spark-optimizer:0.7.1-incubating-spark3.3 --conf spark.kubernetes.authenticate.driver.serviceAccountName=amoro --conf spark.shuffle.service.enabled=false --conf spark.dynamicAllocation.enabled=true
root@amoro-76fcd56c4d-jrgjh:/usr/local/amoro# export SPARK_HOME=/opt/spark && export HADOOP_USER_NAME=hive && /opt/spark/bin/spark-submit --kill default:amoro-optimizer-6i6ld3rup897p9dhdpf0ifgosh --master k8s://https://kubernetes.default.svc --conf spark.kubernetes.container.image.pullPolicy=Always --conf spark.executor.memory=4g --conf spark.driver.memory=2g --conf spark.driver.cores=1 --conf spark.kubernetes.executor.podTemplateFile=gs://hdp-vvp/spark/executor_pod_template.yaml --conf spark.executor.cores=1 --conf spark.dynamicAllocation.maxExecutors=5 --conf spark.dynamicAllocation.shuffleTracking.enabled=true --conf spark.kubernetes.namespace=amoro --conf spark.kubernetes.container.image=asia.gcr.io/farlight-hadoop/amoro-spark-optimizer:0.7.1-incubating-spark3.3 --conf spark.kubernetes.authenticate.driver.serviceAccountName=amoro --conf spark.shuffle.service.enabled=false --conf spark.dynamicAllocation.enabled=true

Anything else

No response

Are you willing to submit a PR?

  • Yes I am willing to submit a PR!

Code of Conduct

  • I agree to follow this project's Code of Conduct
@lintingbin added the type:bug (Something isn't working) label on Jan 9, 2025
@tcodehuber
Contributor

@lintingbin Which K8s namespace is your AMS installed in?

@lintingbin
Contributor Author

@tcodehuber We specified the namespace on the resource group via spark-conf.spark.kubernetes.namespace=amoro, but the kill command still used the default namespace. The PR I submitted shows what is going on.
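To make the mismatch concrete (the app id and namespace are taken from the log above; the "expected" kill target is an assumption about how the fixed behavior should look, not output from the patched code):

    Resource group property:  spark-conf.spark.kubernetes.namespace=amoro
    Submit command conf:      --conf spark.kubernetes.namespace=amoro
    Kill target (buggy):      default:amoro-optimizer-6i6ld3rup897p9dhdpf0ifgosh
    Kill target (expected):   amoro:amoro-optimizer-6i6ld3rup897p9dhdpf0ifgosh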
