Change credentials for LocalStack between environments without replicating all of my environment properties. #28
Comments
Discussed DOD with @ewilkins-csi and @jacksondelametter.

Decided to use one secret ref in the base values. Then the contents of the

Adjusted DOD. LocalStack secret will now be included as part of the LocalStack V2 helm chart. Will update MLflow to utilize this secret as well.
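To make the decision above concrete, here is a minimal sketch of what a single secret ref in the base values could look like. Every name, key, and value below is hypothetical and illustrative only; none of it is taken from the actual aiSSEMBLE charts.

```yaml
# Hypothetical Secret holding the credentials. Per the adjusted DOD, the real
# secret would ship with the LocalStack V2 helm chart rather than be hand-written.
apiVersion: v1
kind: Secret
metadata:
  name: remote-auth-config   # hypothetical name
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: "example-access-key"       # placeholder value
  AWS_SECRET_ACCESS_KEY: "example-secret-key"   # placeholder value
---
# Hypothetical base-values fragment: one envFrom secret ref instead of
# duplicating every env entry across the base and dev values files.
sparkApp:                    # top-level key is a guess, not the real chart schema
  spec:
    driver:
      envFrom:
        - secretRef:
            name: remote-auth-config
    executor:
      envFrom:
        - secretRef:
            name: remote-auth-config
```

With this shape, the dev values would only need to point the same `envFrom` field at a LocalStack-provided secret, while all the shared `env` variables live in one place.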
carter-cundiff changed the title from "Change credentials for Spark between environments without replicating all of my environment properties." to "Change credentials for LocalStack between environments without replicating all of my environment properties." on May 2, 2024
aaron-gary added a commit that referenced this issue on May 2, 2024:
This is a combination of multiple squashed commits; the individual commit messages were:

- #2 Add Maven build workflow
- #2 update build workflow
- #2 add branch checkout to build workflow
- #2 Add on event to build workflow
- #2 Remove unused executions
- #2 Focus on build
- #2 Remove build execution where not needed
- #2 debug failing module
- #2 Remove unused target folder copy
- #2 Build spark and jenkins docker images
- #2 Retry full build
- #2 Omitting module
- #2 Fix for out of disk space
- #2 tagging docker images
- #2 Remove Temporarily remove Docker module
- #2 Build update
- #2 Build update
- #2 move chart dry-runs to IT profile
- #2 curl delta-hive assembly in docker build
- #2 cache m2 repo
- #2 prune docker build cache between images to save space
- #2 add maven build-cache to GH cache
- #2 run clean goal in build to clear docker cache
- #2 set maven caches to always save even if the build failed
- #2 adjust number of docker modules built
- #2 use the same cache for .m2 even if poms change
- #2 change from `save-always` flag to `if: always()` (see actions/cache#1315)
- #2 further reduce docker images being built
- #2 disable modules that depend on helm charts
- #2 use maven wrapper
- #2 restore modules to test build-cache
- #2 fix build of modules with intra-project chart dependencies
- #2 use explict .m2 repo cache so we can fall-back to older caches
- #2 save maven caches on build failure
aaron-gary added a commit that referenced this issue on May 2, 2024.
OTS looks good!
carter-cundiff added a commit that referenced this issue on May 9, 2024: …plicating all of my environment properties
carter-cundiff added a commit that referenced this issue on May 9, 2024: …plicating all of my environment properties
carter-cundiff added a commit that referenced this issue on May 10, 2024: …plicating all of my environment properties
carter-cundiff added a commit that referenced this issue on May 10, 2024: …plicating all of my environment properties
carter-cundiff added a commit that referenced this issue on May 10, 2024: …plicating all of my environment properties
carter-cundiff added a commit that referenced this issue on May 10, 2024: #28 Change credentials for LocalStack between environments without replicating all of my environment properties
Testing passed, closing issue.
In your Spark Application you are set up by default to utilize two sets of credentials: LocalStack credentials are provided in your dev values, and remote credentials pulled from a SealedSecret are provided in your base values. Currently these credentials are passed to your Spark Application as `env` variables. Unfortunately, due to the way lists are handled in YAML, there is no good way to append these `env` variables between YAML files, so you must duplicate all of your `env` variables in both your dev values and your base values. This could be improved by utilizing the `envFrom` field within our Spark Application to pass the different credentials through the dev and base values without duplicating all of the other `env` variables that we want regardless of the environment.

DOD

- `envFrom.secretRef` ✔️

Test Step Outline
Test New Project SparkApplication Generation

- `example-pipeline-models/src/main/resources/pipelines/`
- The `save_model` function within `example-pipelines/ml-pipeline-training/pipeline-training-step/src/pipeline_training_step/impl/ml_pipeline_training.py`:
- `example-docker/example-pipeline-training-step-docker/pom.xml`: and `example-docker/example-pipeline-training-step-docker/src/main/resources/docker/Dockerfile` (after the FROM command):
- Update the `example-deploy/src/main/resources/apps/s3-local/Chart.yaml` dependencies section to the following: (Note: this relative path assumes the aiSSEMBLE repo lives alongside this project in the same directory)
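The actual dependencies snippet was elided from this page and is not recoverable here. Purely as an illustration of the shape of a Helm dependencies section that points at a sibling checkout, it might look like the following; the name, version, and relative path are guesses, not the real values:

```yaml
# example-deploy/src/main/resources/apps/s3-local/Chart.yaml (illustrative only)
dependencies:
  - name: aissemble-localstack-chart   # hypothetical dependency name
    version: 1.7.0-SNAPSHOT            # hypothetical version
    # Illustrative relative path into the sibling aiSSEMBLE repo checkout:
    repository: "file://../../relative/path/to/aissemble-localstack-chart"
```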
- Replace `aissemble-localstack` with `aissemble-localstack-chart` in the `example-deploy/src/main/resources/apps/s3-local/values.yaml`
- `tilt up; tilt down`
- Navigate to `http://localhost:5005/#/experiments/1` and select the latest run
- Verify `TestLocalStack.json` with the following content exists in the artifacts tab:

Test upgrading a project
- `example-pipeline-models/src/main/resources/pipelines/`
- `<version>${version.clean.plugin}</version>` from `example-upgrade-deploy/pom.xml`
- Update the `pom.xml` build-parent version to `1.7.0-SNAPSHOT`
- Verify `example-upgrade-pipelines/pyspark-pipeline/src/pyspark_pipeline/resources/apps/pyspark-pipeline-base-values.yaml` and `example-upgrade-pipelines/spark-pipeline/src/main/resources/apps/spark-pipeline-base-values.yaml` now contain the following: (Note: the `spark-pipeline-base-values.yaml` will also contain `javaOptions:`)
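The verified snippet itself was lost when this page was extracted. Purely as an illustration of the kind of content being checked for, a base-values fragment wiring in `envFrom` might look like the following; the top-level key and secret name are hypothetical:

```yaml
# Illustrative only -- the real generated content was elided from this page.
sparkApp:                    # top-level key is a guess, not the real chart schema
  spec:
    driver:
      envFrom:
        - secretRef:
            name: remote-auth-config   # hypothetical secret name
      # the spark-pipeline file additionally carries a javaOptions entry
      # (its value is elided on this page)
    executor:
      envFrom:
        - secretRef:
            name: remote-auth-config
```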
9. Verify the `example-upgrade-deploy/src/main/resources/apps/mlflow-ui/values-dev.yaml` now has the following:
- Verify the `example-upgrade-deploy/src/main/resources/apps/mlflow-ui/values.yaml` now has the following:
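The elided values snippets are not recoverable from this page. As a rough sketch of the dev/base split the issue describes, with every name hypothetical, the two files might differ only in which secret the `envFrom` entry references:

```yaml
# values-dev.yaml (illustrative): reference LocalStack-provided credentials
mlflow:
  envFrom:
    - secretRef:
        name: localstack-credentials   # hypothetical, shipped with the LocalStack chart

# values.yaml (illustrative): reference the SealedSecret-managed remote credentials
# mlflow:
#   envFrom:
#     - secretRef:
#         name: remote-auth-config     # hypothetical
```

The point of the split is that only the secret reference changes per environment; the shared `env` variables no longer need to be duplicated in both files.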