
Commit 2711083

#28 Change credentials for LocalStack between environments without replicating all of my environment properties

carter-cundiff committed May 10, 2024
1 parent 23afe50 commit 2711083
Showing 40 changed files with 1,896 additions and 136 deletions.
4 changes: 3 additions & 1 deletion DRAFT_RELEASE_NOTES.md
@@ -103,8 +103,10 @@ To reduce the burden of upgrading aiSSEMBLE, the Baton project is used to automate t
| upgrade-v2-chart-files-aissemble-version-migration | Updates the helm chart dependencies within your project's deployment resources (<YOUR_PROJECT>-deploy/src/main/resources/apps/) to use the latest version of the aiSSEMBLE |
| upgrade-v1-chart-files-aissemble-version-migration | Updates the docker image tags within your project's deployment resources (<YOUR_PROJECT>-deploy/src/main/resources/apps/) to use the latest version of the aiSSEMBLE |
| upgrade-mlflow-v2-external-s3-migration | Updates the MLflow V2 deployment (if present) in your project to utilize LocalStack for local development and SealedSecrets for remote deployments |
| upgrade-spark-application-s3-migration | Updates the pipeline SparkApplication(s) (if present) in your project to utilize LocalStack for local development and SealedSecrets for remote deployments |
| upgrade-foundation-extension-python-package-migration | Updates the pyproject.toml files within your project's pipelines folder (<YOUR_PROJECT>-pipelines) to use the updated aiSSEMBLE foundation and extension Python packages with the latest naming convention |
- | upgrade-helm-chart-files-names-migration | Updates the Chart.yaml and values*.yaml files within your project's deploy folder (<YOUR_PROJECT>-deploy) to use the new Helm chart naming convention (aissemble-<chart-name>-chart) |
+ | upgrade-helm-chart-files-names-migration | Updates the Chart.yaml and values*.yaml files within your project's deploy folder (<YOUR_PROJECT>-deploy) to use the new Helm chart naming convention (aissemble-\<chart-name\>-chart) |
| upgrade-dockerfile-pip-install-migration | Updates dockerfiles such that python dependency installations fail during the build, rather than at runtime |

To deactivate any of these migrations, add the following configuration to the `baton-maven-plugin` within your root `pom.xml`:
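The configuration block itself is collapsed in this diff view. Based on Baton's plugin conventions, a deactivation entry might look like the following sketch; the element names and migration id are assumed for illustration rather than confirmed by this commit:

```xml
<plugin>
    <groupId>org.technologybrewery.baton</groupId>
    <artifactId>baton-maven-plugin</artifactId>
    <configuration>
        <deactivateMigrations>
            <!-- Migration name is illustrative; use any id from the table above -->
            <deactivateMigration>upgrade-dockerfile-pip-install-migration</deactivateMigration>
        </deactivateMigrations>
    </configuration>
</plugin>
```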

2 changes: 1 addition & 1 deletion build-parent/pom.xml
@@ -69,7 +69,7 @@
<version.hadoop>3.3.4</version.hadoop>
<version.neo4j>4.1.5_for_spark_3</version.neo4j>
<version.aws.sdk.bundle>1.12.262</version.aws.sdk.bundle>
- <version.baton>0.2.0</version.baton>
+ <version.baton>1.0.0</version.baton>

<!-- ***************** -->
<!-- Repository Deployment URLs - mostly in aissemble-root -->
@@ -37,6 +37,9 @@ RUN apt-get update -y && apt-get install --assume-yes \
&& apt-get clean \
&& ln -s /usr/bin/python${PYTHON_VERSION} /usr/bin/python

# PySpark uses `python3` to execute pipelines. This links our latest Python install to that command.
RUN ln -sf /usr/bin/python3.11 /usr/bin/python3

## Add spark configurations
COPY ./src/main/resources/conf/ ${SPARK_HOME}/conf/

@@ -25,6 +25,7 @@ https://github.com/localstack/helm-charts/tree/main/charts/localstack
|----------------------|-----------------------------------------------------------------------------------------------------|-------------------|--------------------------------------------------------------------------|
| fullnameOverride | String to fully override common.names.fullname | No | s3-local |
| startServices | Comma-separated list of AWS CLI service names which should be loaded right when starting LocalStack | No | s3 |
| service.type | Kubernetes Service type | No | LoadBalancer |
| enableStartupScripts | Enables execution of startup behaviors | No | true |
| startupScriptContent | Base script for triggering creation of localstack resources | No | Triggers creation of s3 buckets/keys |
| volumes | Creates required volumes | No | configMap `localstack-resources` -> `create-s3-resources.sh` |
@@ -34,9 +35,11 @@ https://github.com/localstack/helm-charts/tree/main/charts/localstack

The following properties are provided by the `aissemble-localstack-chart` chart

| Property | Description | Required Override | Default |
|----------|------------------------------------------------|-------------------|---------|
| buckets | Collection of buckets and keys to create in s3 | No | [] |
| Property | Description | Required Override | Default |
|--------------------------|-----------------------------------------------------|-------------------|--------------------|
| buckets | Collection of buckets and keys to create in s3 | No | [] |
| credentialSecret.enabled | Whether to use a secret to store the S3 credentials | No | true |
| credentialSecret.name | Name of the credential secret | No | remote-auth-config |
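Taken together, a consuming project can point its pipelines at the generated secret by overriding these values. The key nesting below is assumed from the chart's defaults, and the secret name shown is just the documented default:

```yaml
# Hypothetical values.yaml fragment for a project that depends on
# aissemble-localstack-chart (key nesting assumed from the defaults above)
aissemble-localstack-chart:
  credentialSecret:
    enabled: true                # render the Secret with local test credentials
    name: remote-auth-config     # name that SparkApplications reference via envFrom
```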

# Migration From v1 Structure

@@ -0,0 +1,9 @@
{{- if .Values.credentialSecret.enabled }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .Values.credentialSecret.name }}
stringData:
AWS_ACCESS_KEY_ID: "123"
AWS_SECRET_ACCESS_KEY: "456"
{{- end }}
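Because this Secret only renders when `credentialSecret.enabled` is true, credentials can change between environments without duplicating every environment property: keep the default for local development and, in a remote values file, disable it and supply a SealedSecret of the same name. A sketch, with the file name and nesting assumed:

```yaml
# values-prod.yaml (hypothetical): suppress the LocalStack test credentials;
# a SealedSecret named remote-auth-config supplies the real values instead
aissemble-localstack-chart:
  credentialSecret:
    enabled: false
```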
@@ -0,0 +1,34 @@
suite: localstack
templates:
- credential-secret.yaml
tests:
- it: Should contain correct default values when enabled
asserts:
- isKind:
of: Secret
- equal:
path: metadata.name
value: remote-auth-config
- equal:
path: stringData.AWS_ACCESS_KEY_ID
value: "123"
- equal:
path: stringData.AWS_SECRET_ACCESS_KEY
value: "456"
- it: Should be able to set the secret name
set:
credentialSecret:
name: test-name
asserts:
- isKind:
of: Secret
- equal:
path: metadata.name
value: test-name
- it: Should not exist when disabled
set:
credentialSecret:
enabled: false
asserts:
- hasDocuments:
count: 0
@@ -1,6 +1,8 @@
localstack:
fullnameOverride: s3-local
startServices: s3
service:
type: LoadBalancer
enableStartupScripts: true
startupScriptContent: |
#!/bin/sh
@@ -17,4 +19,7 @@ localstack:
- key: create-s3-resources.sh
path: create-s3-resources.sh

buckets: []
credentialSecret:
enabled: true
name: remote-auth-config
@@ -9,4 +9,4 @@ metadata:
name: remote-auth-config
stringData:
YOUR_SECRET_KEY: YOUR_SECRET_VALUE
@@ -31,4 +31,4 @@ appVersion: "1.0.0"
dependencies:
- name: aissemble-mlflow-chart
version: ${versionTag}
repository: oci://ghcr.io/boozallen
@@ -9,9 +9,8 @@ aissemble-mlflow-chart:
service:
type: LoadBalancer
externalS3:
-  existingSecret: ""
  host: "s3-local"
  port: 4566
-  accessKeyID: "123"
-  accessKeySecret: "456"
-  protocol: http
+  protocol: http
+  existingSecretAccessKeyIDKey: "AWS_ACCESS_KEY_ID"
+  existingSecretKeySecretKey: "AWS_SECRET_ACCESS_KEY"
@@ -70,42 +70,39 @@ sparkApp:
cores: 1
coreLimit: "1200m"
memory: "2048m"
+ #if (${useS3Local})
+ # Setup these secret key references within your SealedSecret
+ # See our guide for using SealedSecrets in your project to learn more
+ # TODO: LINK-TO-GUIDE-HERE
+ envFrom:
+ - secretRef:
+     name: remote-auth-config
+ #end
env:
- name: KRAUSENING_BASE
value: /opt/spark/krausening/base
#if (${enableDataLineageSupport})
- name: ENABLE_LINEAGE
value: "true"
#end
- #if (${useS3Local})
- - name: AWS_ACCESS_KEY_ID
-   value: "123"
- - name: AWS_SECRET_ACCESS_KEY
-   value: "456"
- - name: STORAGE_ENDPOINT
-   value: "http://s3-local:4566"
- #end
#if (${isJavaPipeline})
javaOptions: "-DKRAUSENING_BASE=/opt/spark/krausening/base"
#end
executor:
cores: 1
memory: "4096m"
+ #if (${useS3Local})
+ envFrom:
+ - secretRef:
+     name: remote-auth-config
+ #end
env:
- name: KRAUSENING_BASE
value: /opt/spark/krausening/base
#if (${enableDataLineageSupport})
- name: ENABLE_LINEAGE
value: "true"
#end
- #if (${useS3Local})
- - name: AWS_ACCESS_KEY_ID
-   value: "123"
- - name: AWS_SECRET_ACCESS_KEY
-   value: "456"
- - name: STORAGE_ENDPOINT
-   value: "http://s3-local:4566"
- #end
#if (${isJavaPipeline})
javaOptions: "-DKRAUSENING_BASE=/opt/spark/krausening/base"
#end
@@ -12,6 +12,7 @@
import com.boozallen.aissemble.upgrade.migration.AbstractAissembleMigration;
import com.boozallen.aissemble.upgrade.pojo.Chart;
import com.boozallen.aissemble.upgrade.pojo.MlflowValues;
import com.boozallen.aissemble.upgrade.pojo.MlflowValues.Mlflow;
import com.boozallen.aissemble.upgrade.pojo.Chart.Dependency;
import com.boozallen.aissemble.upgrade.util.FileUtils;
import com.boozallen.aissemble.upgrade.util.YamlUtils;
@@ -122,12 +123,11 @@ protected boolean performMigration(File chartFile) {

LinkedList<String> linesToAddValuesDev = new LinkedList<String>(Arrays.asList(
"externalS3:",
"existingSecret: \"\"",
"host: \"s3-local\"",
"port: 4566",
"accessKeyID: \"123\"",
"accessKeySecret: \"456\"",
"protocol: http"
"protocol: http",
"existingSecretAccessKeyIDKey: \"AWS_ACCESS_KEY_ID\"",
"existingSecretKeySecretKey: \"AWS_SECRET_ACCESS_KEY\""
));

return migrateValuesFile(this.valuesFile, this.valuesObject, linesToAddValues)
@@ -146,8 +146,8 @@ private boolean migrateValuesFile(File file, MlflowValues helmValuesObject, Link
logger.info("Migrating file: {}", file.getAbsolutePath());
List<String> newFileContents = new ArrayList<>();
List<String> originalFile;
try {
    originalFile = FileUtils.readAllFileLines(file);
logger.debug("Read in {} lines", originalFile.size());

int tabSize;
@@ -159,7 +159,8 @@ private boolean migrateValuesFile(File file, MlflowValues helmValuesObject, Link
newFileContents.add("aissemble-mlflow:");
newFileContents.add(SPACE.repeat(tabSize) + "mlflow:");

-                indentValues(linesToAdd, tabSize);
+                FileUtils.indentValues(linesToAdd.subList(0, 1), tabSize * 2);
+                FileUtils.indentValues(linesToAdd.subList(1, linesToAdd.size()), tabSize * 3);
newFileContents.addAll(linesToAdd);
}
else {
@@ -184,7 +185,8 @@ private boolean migrateValuesFile(File file, MlflowValues helmValuesObject, Link
// (leading space on current line) - (leading space on previous line)
tabSize = (line.length() - line.stripLeading().length()) -
(lineBeingAppended.length() - lineBeingAppended.stripLeading().length());
-                indentValues(linesToAdd, tabSize);
+                FileUtils.indentValues(linesToAdd.subList(0, 1), tabSize * 2);
+                FileUtils.indentValues(linesToAdd.subList(1, linesToAdd.size()), tabSize * 3);

if (addMlFlowHeader) {
linesToAdd.addFirst(SPACE.repeat(tabSize) + "mlflow:");
@@ -211,15 +213,4 @@ private boolean migrateValuesFile(File file, MlflowValues helmValuesObject, Link
}
return false;
}

-    private void indentValues(LinkedList<String> values, int tabSize) {
-        for (int i = 0; i < values.size(); i++) {
-            // indent the first header values 2 tabs and the nested elements 3 tabs
-            if (i < 1) {
-                values.set(i, SPACE.repeat(tabSize * 2) + values.get(i));
-            } else {
-                values.set(i, SPACE.repeat(tabSize * 3) + values.get(i));
-            }
-        }
-    }
}
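The deleted helper's behavior (first line indented two tab-widths, the rest three) presumably moved into `FileUtils.indentValues`, now driven by the two `subList` calls. A standalone sketch of that logic, with the class name and method signature assumed for illustration:

```java
import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;

public class IndentSketch {
    private static final String SPACE = " ";

    // Indent every value by the given number of spaces; mirrors the assumed
    // FileUtils.indentValues(List<String>, int) contract from the migration.
    static void indentValues(List<String> values, int indent) {
        for (int i = 0; i < values.size(); i++) {
            values.set(i, SPACE.repeat(indent) + values.get(i));
        }
    }

    public static void main(String[] args) {
        int tabSize = 2;
        LinkedList<String> lines = new LinkedList<>(Arrays.asList(
                "externalS3:",
                "host: \"s3-local\""));
        // Header gets 2 tab-widths, nested keys get 3 -- as in performMigration.
        // subList returns a view, so set() writes through to the backing list.
        indentValues(lines.subList(0, 1), tabSize * 2);
        indentValues(lines.subList(1, lines.size()), tabSize * 3);
        System.out.println(String.join("\n", lines));
    }
}
```

Splitting the indentation decision out of the helper keeps `indentValues` a pure "prefix N spaces" operation and leaves the 2-vs-3 tab policy at the call site.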