Description
When creating a combined data records PySpark pipeline project, the pipeline can't start because of a split data record dependency error.
Steps to Reproduce
Clear, specific, and detailed steps taken to enable reproduction of the bug for investigation.
1. Generate a test project from the foundation archetype:
mvn archetype:generate '-DarchetypeGroupId=com.boozallen.aissemble' \
  '-DarchetypeArtifactId=foundation-archetype' \
  '-DarchetypeVersion=1.10.0-SNAPSHOT' \
  '-DgroupId=org.test' \
  '-Dpackage=org.test' \
  '-DprojectGitUrl=test.org/test.git' \
  '-DprojectName=Test combine records' \
  '-DartifactId=test-combine-records' \
  && cd test-combine-records
2. Add the pipeline model in the -pipeline-models/src/main/ directory, then build:
mvn clean install
3. Apply the manual actions under the -docker/test-combine-record-spark-worker-docker/src/main/resources directory, then rebuild, skipping the build cache:
mvn clean install -Dmaven.build.cache.skipCache
4. In -shared/pom.xml, use the aissemble-data-records-separate-module profile for split records:
<configuration>
  <basePackage>com.boozallen</basePackage>
- <profile>aissemble-data-records-combined-module</profile>
+ <profile>aissemble-data-records-separate-module</profile>
</configuration>
5. In spark-pipeline/pom.xml, update the data-record artifact name:
<dependency>
  <groupId>${project.groupId}</groupId>
- <artifactId>test-combine-record-data-records-java</artifactId>
+ <artifactId>test-combine-record-data-records-spark-java</artifactId>
  <version>${project.version}</version>
</dependency>
6. In pyspark-pipeline/pom.xml, update the data-record artifact name:
<dependency>
  <groupId>${project.groupId}</groupId>
- <artifactId>test-combine-record-data-records-python</artifactId>
+ <artifactId>test-combine-record-data-records-spark-python</artifactId>
  <version>${project.version}</version>
</dependency>
7. In pyspark-pipeline/pyproject.toml, update the test-combine-record-data-records-python dependency package name to include spark, as follows:
test-combine-record-data-records-spark-python = {path = "../../test-combine-record-shared/test-combine-record-data-records-spark-python", develop = true}
Expected Behavior
All services are running in the ready state.
Actual Behavior
The spark-worker-image service failed with the error below.
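The worker's error log was not captured in this excerpt. A split data record dependency that the pipeline cannot resolve is the kind of failure that typically surfaces as a failed import at startup; a minimal illustration of that error class, with a hypothetical module name:

```python
# Illustration only: a pipeline whose data-records package is missing from the
# worker image aborts at import time. The module name here is hypothetical,
# not taken from the actual log.
import importlib

try:
    importlib.import_module("test_combine_record_data_records")
except ModuleNotFoundError as err:
    print(f"pipeline startup would abort: {err}")
```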
Additional Context