diff --git a/CHANGELOG.md b/CHANGELOG.md
index 10535728..81775200 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -24,9 +24,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- fix inputs for `bedrock-finetuning` module not working
- add `retention-type` argument for the bucket in the `bedrock-finetuning` module
- fix broken dependencies for `examples/airflow-dags`
-- use `add_dependency` to avoid deprecation warnings from CDK.
-- Various typo fixes.
-- Various clean-ups to the SageMaker Service Catalog templates.
+- use `add_dependency` to avoid deprecation warnings from CDK
+- various typo fixes
+- various clean-ups to the SageMaker Service Catalog templates
+- fix OpenSearch removal policy
## v1.2.0
diff --git a/README.md b/README.md
index ca73ad84..385e33a9 100644
--- a/README.md
+++ b/README.md
@@ -23,16 +23,16 @@ See deployment steps in the [Deployment Guide](DEPLOYMENT.md).
### SageMaker Modules
-| Type | Description |
-|---------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| [SageMaker Studio Module](modules/sagemaker/sagemaker-studio/README.md) | Provisions secure SageMaker Studio Domain environment, creates example User Profiles for Data Scientist and Lead Data Scientist linked to IAM Roles, and adds lifecycle config |
-| [SageMaker Endpoint Module](modules/sagemaker/sagemaker-endpoint/README.md) | Creates SageMaker real-time inference endpoint for the specified model package or latest approved model from the model package group |
-| [SageMaker Project Templates via Service Catalog Module](modules/sagemaker/sagemaker-templates-service-catalog/README.md) | Provisions SageMaker Project Templates for an organization. The templates are available using SageMaker Studio Classic or Service Catalog. Available templates:
- [Train a model on Abalone dataset using XGBoost](modules/sagemaker/sagemaker-templates-service-catalog/README.md#train-a-model-on-abalone-dataset-with-xgboost-template)
- [Perform batch inference](modules/sagemaker/sagemaker-templates-service-catalog/README.md#batch-inference-template)
- [Multi-account model deployment](modules/sagemaker/sagemaker-templates-service-catalog/README.md#multi-account-model-deployment-template)
- [HuggingFace model import template](modules/sagemaker/sagemaker-templates-service-catalog/README.md#huggingface-model-import-template) |
-| [SageMaker Notebook Instance Module](modules/sagemaker/sagemaker-notebook/README.md) | Creates secure SageMaker Notebook Instance for the Data Scientist, clones the source code to the workspace |
-| [SageMaker Custom Kernel Module](modules/sagemaker/sagemaker-custom-kernel/README.md) | Builds custom kernel for SageMaker Studio from a Dockerfile |
-| [SageMaker Model Package Group Module](modules/sagemaker/sagemaker-model-package-group/README.md) | Creates a SageMaker Model Package Group to register and version SageMaker Machine Learning (ML) models and setups an Amazon EventBridge Rule to send model package group state change events to an Amazon EventBridge Bus |
-| [SageMaker Model Package Promote Pipeline Module](modules/sagemaker/sagemaker-model-package-promote-pipeline/README.md) | Deploy a Pipeline to promote SageMaker Model Packages in a multi-account setup. The pipeline can be triggered through an EventBridge rule in reaction of a SageMaker Model Package Group state event change (Approved/Rejected). Once the pipeline is triggered, it will promote the latest approved model package, if one is found. |
-| [SageMaker Model Monitoring Module](modules/sagemaker/sagemaker-model-monitoring-module/README.md) | Deploy data quality, model quality, model bias, and model explainability monitoring jobs which run against a SageMaker Endpoint. |
+| Type | Description |
+|---------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| [SageMaker Studio Module](modules/sagemaker/sagemaker-studio/README.md) | Provisions secure SageMaker Studio Domain environment, creates example User Profiles for Data Scientist and Lead Data Scientist linked to IAM Roles, and adds lifecycle config |
+| [SageMaker Endpoint Module](modules/sagemaker/sagemaker-endpoint/README.md) | Creates SageMaker real-time inference endpoint for the specified model package or latest approved model from the model package group |
+| [SageMaker Project Templates via Service Catalog Module](modules/sagemaker/sagemaker-templates-service-catalog/README.md) | Provisions SageMaker Project Templates for an organization. The templates are available using SageMaker Studio Classic or Service Catalog. Available templates:
- [Train a model on Abalone dataset using XGBoost](modules/sagemaker/sagemaker-templates-service-catalog/README.md#train-a-model-on-abalone-dataset-with-xgboost-template)
- [Perform batch inference](modules/sagemaker/sagemaker-templates-service-catalog/README.md#batch-inference-template)
- [Multi-account model deployment](modules/sagemaker/sagemaker-templates-service-catalog/README.md#multi-account-model-deployment-template)
- [HuggingFace model import template](modules/sagemaker/sagemaker-templates-service-catalog/README.md#huggingface-model-import-template)
- [Perform LLM Evaluation](modules/sagemaker/sagemaker-templates-service-catalog/README.md#llm-evaluate-template) |
+| [SageMaker Notebook Instance Module](modules/sagemaker/sagemaker-notebook/README.md) | Creates secure SageMaker Notebook Instance for the Data Scientist, clones the source code to the workspace |
+| [SageMaker Custom Kernel Module](modules/sagemaker/sagemaker-custom-kernel/README.md) | Builds custom kernel for SageMaker Studio from a Dockerfile |
+| [SageMaker Model Package Group Module](modules/sagemaker/sagemaker-model-package-group/README.md)                          | Creates a SageMaker Model Package Group to register and version SageMaker Machine Learning (ML) models and sets up an Amazon EventBridge Rule to send model package group state change events to an Amazon EventBridge Bus                                                                                                                                                                                                                                                                                                                                                                          |
+| [SageMaker Model Package Promote Pipeline Module](modules/sagemaker/sagemaker-model-package-promote-pipeline/README.md)    | Deploys a Pipeline to promote SageMaker Model Packages in a multi-account setup. The pipeline can be triggered through an EventBridge rule in reaction to a SageMaker Model Package Group state change event (Approved/Rejected). Once the pipeline is triggered, it promotes the latest approved model package, if one is found.                                                                                                                                                                                                                                                                   |
+| [SageMaker Model Monitoring Module](modules/sagemaker/sagemaker-model-monitoring-module/README.md) | Deploy data quality, model quality, model bias, and model explainability monitoring jobs which run against a SageMaker Endpoint. |
### Mlflow Modules
diff --git a/manifests/fmops-qna-rag/storage-modules.yaml b/manifests/fmops-qna-rag/storage-modules.yaml
index 2966c8d8..26c1e674 100644
--- a/manifests/fmops-qna-rag/storage-modules.yaml
+++ b/manifests/fmops-qna-rag/storage-modules.yaml
@@ -4,7 +4,7 @@ parameters:
- name: encryption-type
value: SSE
- name: retention-type
- value: RETAIN
+ value: DESTROY
- name: vpc-id
valueFrom:
moduleMetadata:
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/README.md b/modules/sagemaker/sagemaker-templates-service-catalog/README.md
index dd3b6634..a6b046df 100644
--- a/modules/sagemaker/sagemaker-templates-service-catalog/README.md
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/README.md
@@ -18,6 +18,12 @@ The template contains an example SageMaker Pipeline to train a model on Abalone
![Abalone with XGBoost](docs/_static/abalone-xgboost-template.png "Abalone with XGBoost Template Architecture")
+#### LLM Evaluate Template
+
+This project template contains a SageMaker Pipeline that performs LLM evaluation.
+
+![LLM evaluate template](docs/_static/llm-evaluate.png "LLM Evaluate Template Architecture")
+
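+For orientation, an evaluation step in such a pipeline could be defined with the SageMaker Python SDK roughly as sketched below. This is illustrative only: the script name, image URI, role ARN, and instance type are placeholders, not values taken from this template.
+
+```python
+from sagemaker.processing import ScriptProcessor
+from sagemaker.workflow.pipeline import Pipeline
+from sagemaker.workflow.steps import ProcessingStep
+
+# Hypothetical processor that runs the evaluation script; the image URI,
+# role ARN, and instance type below are placeholders.
+evaluator = ScriptProcessor(
+    image_uri="<evaluation-image-uri>",
+    command=["python3"],
+    role="<sagemaker-execution-role-arn>",
+    instance_type="ml.m5.xlarge",
+    instance_count=1,
+)
+
+evaluate_step = ProcessingStep(
+    name="EvaluateLLM",
+    processor=evaluator,
+    code="evaluate.py",  # placeholder evaluation script
+)
+
+pipeline = Pipeline(name="llm-evaluation-pipeline", steps=[evaluate_step])
+```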
The template is based on basic multi-account template from [AWS Enterprise MLOps Framework](https://github.com/aws-samples/aws-enterprise-mlops-framework/blob/main/mlops-multi-account-cdk/mlops-sm-project-template/README.md#sagemaker-project-stack).
#### Batch Inference Template
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/docs/_static/llm-evaluate.png b/modules/sagemaker/sagemaker-templates-service-catalog/docs/_static/llm-evaluate.png
new file mode 100644
index 00000000..cf414d49
Binary files /dev/null and b/modules/sagemaker/sagemaker-templates-service-catalog/docs/_static/llm-evaluate.png differ
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/docs/_static/llm-evaluate.xml b/modules/sagemaker/sagemaker-templates-service-catalog/docs/_static/llm-evaluate.xml
new file mode 100644
index 00000000..ccacccd0
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/docs/_static/llm-evaluate.xml
@@ -0,0 +1 @@
+7V1rd5s4E/41+agcQOL20Y7bbk7bPXmb7WX3yx4BwqbBxgtyYvfXvyMDNtgYOw22wVaak1oD6K6ZZx6NxQ2+G88/xHQ6+hx5LLzRFG9+gwc3mmYbBP4KwSIVGHomGMaBl4rUteAx+MUyoZJJZ4HHktKNPIpCHkzLQjeaTJjLSzIax9FL+TY/CsulTumQbQkeXRpuS78HHh+lUksz1/I/WDAc5SWrhp1eGdP85qwlyYh60UtBhN/d4Ls4inj6aTy/Y6Hou7xfsucmbM7FlXvvGw1nWbWIscrhC0uiWeyyAUvcOJjyKIaH4kyY3v3vDe7ltWBxQMPgF+VBNEHPLE7g//Su5+wWmnVZXJFxVuYjG9MJD9wB5fQumnAaTFh8SO7p0zwOJsNPAWcxDdOx42zCS62extGUxTybNSPOxXj3brT38Au3R2E0XNwmzJ3FAV/c0jH9FU1uPfYMl/1oNvGWNYCEF9BhTMfoOUhmq5qBnGqartu6gVzs+Yi4moscYtmQVEzfsKjj2ukQv0/rfP/lfqtbX1UrmMXBcIKCSTKFeSr68r0bjafRBFqeQMIi1FIc3Ue6QTREqGoiW7d0xBzfcwzdJw5zm+2aZJFwNkZjsWRhPECiEB3bum0iolkYEd/QkaXYCqLMMH3bw8wmdrFT4EP1TMivVszN/FI23aun/mrVxeV58ZoJr3Zkwqtywl/dhI+cn8JYaUpIHbCXy3Zl0xUmhhuEfy2mWZ8PGeQfuGjVd+l9+Ur5qH0f9T/+p3z7rkW//oEBpuM//0DqauqtllbCF7lNi0WnMJGBcoP7LyOYl49TuhzkFzDiIBvxsaiVCh/9IAzvoIuhIYMJVABEHk1Gy8fF9bQ6MPM5m+9cvJko9vyvU5idaeEFQ50kjIsmfysuIFxoxAcWjRmPF3BzXoqerd5sWphZ8mVtpU0jk40KFhrrG6t+uMp6Pc4Hqpiazu+A4rH+c5WB8uPr6IP985vVt36GH370TqOMFN11day6yNOIiojtYlh3HmgkD5um6epEM3ypjPYoo6N3CvF11bZVH/nYNaBTCHSKoyhIUbFuKNSxNZ/s6pRVHV5eXm5f8G0UiyZCdrZoqWiEpiHQByhZwKSeowkUiLnQek22sNTjQ+GgoITHM5fPYibq4RDGqGshhTAdEQYDbhm2mIa64Ts2MR3TOWoDa9dnrXZpG1xoyjKK6qfGcaehfASf6zN9WqrBh2Aqlg94iNXW83nqHmIv8dqdqjKY0ygQKwI+6334Bf16p9zocOVOpG41fUOwmTbLAnU7JfIoCzbTZlmgbmavbpSvblawINhKlbJXNspXChWEX9yPZlz0+d3K5RYwQpj1AObuBlYoIIkqoOHDPMmcflXL01nHi1zBaV4O5Xi+XL239CXRlNshIJjpssx7d2n1Ki7/K4YeMuBx9MTySsES1IhlqaQa2AgUE4Dv3wtBL4KYR6IUmqVC5nORI7RALNplaoCVrNaFInq9vtm3ikBJOSNQIqpeAkoa3kZKxNC3kZJqHBsppctOQiUJlSRUklDpDVCpBhIcWh1NURRRHdC075d1cUegT6AiKfJ4G2jLFd0Vo7Z99MaIApoIDwJrtViNeUOWI4go5qNoGE1o+G4t7ZfpjwJA+ck4X2RohM54JKDOKodPkQADBcpDFLTPjhdXI9Wgj9Wn/6GHiERfEyX5db9o0tzHLIRp8Mw2ctg23dmjDwLUrmGCppT5lC3rz2k8ZDx7qjSFCtVoABN0BBLU1r5t61yCggpQIBHfYabf0FXdNDQFmcSBzjUVBlWxPeTYjq8bjuE65Limvx/SydOfMND3g0pFUqNZmxqr7V7xVN9VDIMhbBlixjMFWVghyHCZTwDtKTQjmltjaqmbqca1gd3ZczvNa+6Qz8Zhb5nf2mv+JGrwECUBT5WpE3EejeGGZdX61H0aLk1vwVP2lz87Pe+Cba7kHM7lUpsbprLCo8YVWw8N+tM1U771lvNqVeu2yaKWqyue5yF
KbQsRy2WIqtRFOrV1H1xKwsyGEUULfOlOe5onsDVX7dxp+rG8OzYP+I/C57/FLbd6lhrMsyeWiUUh8QBNADUt+i2VTUBl/ygmCjmJ5DqrZWpRTG1mdlyXc8cOUha7ljp0tYNxFv+0EWpZ0ztiDeur3zadIh1J6UheriNZry/b40v+XjuajjSsGGDVdBhYKeT6sG4JVVRETd1Dlso8TImp6Vg7G0IxQrF76wXQDmPIlzY1FTm54A569C4ajwPx7Bc2Fa5sJGxSeh+U4mw+u1x9hRwrYZAL+bpZvvsRUDrdXheLUBmPUBWTUBmXsB2bULptGS1QUcKmsEpmbgvV7dvyAINtYZWsKppi82m14ml14+ndsQwb+/nw770Yj60YB7hGBiZcLFwbBDFklMKAicByG6EG8Az8kL5dFZyw4kNKkQN7CZb6uAUXaiXWVCmqc0/cBV2aZ0j4wVzUY0cgRq560jCMPiQrAzIKC+B8pM3WBoe2zdpYFayNdewoiNy+dBSqNm0eJVRtM3kjAyFaTU/thDBnCIXokOfTTaqMHIsqOwsrZV0DK0U6beq1napbmvrWmfpL0c2SlarVl5KVunBWKo9nhSd74Ob71BUqyaBj4eNPnGRapqP2UlT7S3wEW6n0Z+4T2OG3cl8JPojzsiTn1R3Oy1B6GJuv47w001RV44o4rwS3iOvSiN4SrsvqNABu2uh2BgAzRoiDbYpc4jiIKABrHNtQEcAAi7mqjVk+ddoDgK+bCaqCDSXDLhmhi2OE8NGCp87BCOX7MwcwQmZnGSHc7TglLOOUJCMkGaFWMEJNH513Nkbo6PFWHWeExCZffxaEXiORSU6a0wH7TWtLIUma1pM0MjDpwMCkdP6f8XyWjYPsWhOXtDq4tpvQVJ4k2yFoKuOSLpuNKkAWyTtdHO+k1R+f95Yv7XX9y3arQ+v3k1hKZ0ksrSvnuNVXXyKFDiCFS1H0ksQ6UzjQiUiso5NxF0BirfcoG+CxpqvMDgAs9YfcSCpLUlldpLJWS+B8bBb0e0vZrK4cj3QauygxqmSzJJv1BjbroQQ3JKF1UYQWrj+DoWuBVAcf+JSfU91BDgp3+1v0WH6Lvjv2/VJ0s+SgzhSAdBoOqum3DlwcB1X12oi3f+MNMh2nmR5ARMnDnrpERPX6mt17HRGlqsYABu96iKj1/D8fC6VvntDdGhaq2yi1acsoUapkoSQL1eBLryQb1Wk2yvqbq9+i79N/jI/qF0Obe38lX0NEjvbGK3kmeu4qHRymlTk07abI6uZRB8BHffXbBj5+0zm/kECXfD20h2S4eFZLot0L5mRbRpE9xAxBF7ksgYmhiBqdliDDMlJLEmSSIGuaICNk44AosyUEWQ6L2oBRJYHSeQJFQqULhkr1519VAJczcGOncf6u5JWBO3iBo52D3hQ9dhZGS9tmtHb0XxfOU68b+naghd9htFp6nnpbmaCmGa369SAZLcloSZh2cYzWXzGUDo99FiNzYjJLk2SWJLMkmdV0tJfeVjIr93JaAE8lmSXJLImS2ouSatuxgVnaw2M17PJdDo/1OlwoX38jcaHEhY1vcqp6GRfibVyIK3AhPjou7PZbcPC1vgWni/hI4v7O4/5LgbjnR2W/s7t4tHfqXGHwff2GywFblVqDKOHEW5VdedFPffXbZvPbusV3oq3Ko2+5yq1KScJdDwl3UoSyf6vy3TMNZ5SzY+9W1ilJyUpJVkqyUg2yUoZx9t3KelDUBoQqWYvOsxYSKF0wUKptxzZsOcOG5Wkcv05uWO4Hnl/YMEj4Ul+fA3hqEnhK4CmBZ8PA0zS1lgLPnK6XwFMCzxY0UALPjgLPbdjSHuDZMFPfcuApujCKeOHWD0JRpKOC3/0f
\ No newline at end of file
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/__init__.py b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/__init__.py
new file mode 100644
index 00000000..b7a726ca
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/__init__.py
@@ -0,0 +1 @@
+# Adding a comment here - empty files create issues with zipping https://github.com/aws/aws-cdk/issues/19012
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/pipeline_constructs/__init__.py b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/pipeline_constructs/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/pipeline_constructs/build_pipeline_construct.py b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/pipeline_constructs/build_pipeline_construct.py
new file mode 100644
index 00000000..7a014f9e
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/pipeline_constructs/build_pipeline_construct.py
@@ -0,0 +1,278 @@
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+# SPDX-License-Identifier: Apache-2.0
+
+from typing import Any
+
+import aws_cdk
+from aws_cdk import Aws
+from aws_cdk import aws_cloudwatch as cloudwatch
+from aws_cdk import aws_codebuild as codebuild
+from aws_cdk import aws_codecommit as codecommit
+from aws_cdk import aws_codepipeline as codepipeline
+from aws_cdk import aws_codepipeline_actions as codepipeline_actions
+from aws_cdk import aws_iam as iam
+from aws_cdk import aws_s3 as s3
+from aws_cdk import aws_s3_assets as s3_assets
+from constructs import Construct
+
+
+class BuildPipelineConstruct(Construct):
+ def __init__(
+ self,
+ scope: Construct,
+ construct_id: str,
+ project_name: str,
+ project_id: str,
+ model_package_group_name: str,
+ model_bucket: s3.IBucket,
+ pipeline_artifact_bucket: s3.IBucket,
+ repo_asset: s3_assets.Asset,
+ **kwargs: Any,
+ ) -> None:
+ super().__init__(scope, construct_id, **kwargs)
+
+ # Define resource name
+ sagemaker_pipeline_name = f"{project_name}-{project_id}"
+ sagemaker_pipeline_description = f"{project_name} Model Build Pipeline"
+
+ # Create source repo from seed bucket/key
+ build_app_repository = codecommit.Repository(
+ self,
+ "Build App Code Repo",
+ repository_name=f"{project_name}-{construct_id}",
+ code=codecommit.Code.from_asset(
+ asset=repo_asset,
+ branch="main",
+ ),
+ )
+ aws_cdk.Tags.of(build_app_repository).add("sagemaker:project-id", project_id)
+ aws_cdk.Tags.of(build_app_repository).add("sagemaker:project-name", project_name)
+
+ sagemaker_seedcode_bucket = s3.Bucket.from_bucket_name(
+ self,
+ "SageMaker Seedcode Bucket",
+ f"sagemaker-servicecatalog-seedcode-{Aws.REGION}",
+ )
+
+ codebuild_role = iam.Role(
+ self,
+ "CodeBuild Role",
+ assumed_by=iam.ServicePrincipal("codebuild.amazonaws.com"),
+ path="/service-role/",
+ )
+
+ sagemaker_execution_role = iam.Role(
+ self,
+ "SageMaker Execution Role",
+ assumed_by=iam.ServicePrincipal("sagemaker.amazonaws.com"),
+ path="/service-role/",
+ )
+
+ # Create a policy statement for SM and ECR pull
+ sagemaker_policy = iam.Policy(
+ self,
+ "SageMaker Policy",
+ document=iam.PolicyDocument(
+ statements=[
+ iam.PolicyStatement(
+ actions=[
+ "logs:CreateLogGroup",
+ "logs:CreateLogStream",
+ "logs:PutLogEvents",
+ ],
+ resources=["*"],
+ ),
+ iam.PolicyStatement(
+ actions=[
+ "ecr:BatchCheckLayerAvailability",
+ "ecr:BatchGetImage",
+ "ecr:Describe*",
+ "ecr:GetAuthorizationToken",
+ "ecr:GetDownloadUrlForLayer",
+ ],
+ resources=["*"],
+ ),
+ iam.PolicyStatement(
+ actions=[
+ "kms:Encrypt",
+ "kms:ReEncrypt*",
+ "kms:GenerateDataKey*",
+ "kms:Decrypt",
+ "kms:DescribeKey",
+ ],
+ effect=iam.Effect.ALLOW,
+ resources=[f"arn:{Aws.PARTITION}:kms:{Aws.REGION}:{Aws.ACCOUNT_ID}:key/*"],
+ ),
+ ]
+ ),
+ )
+
+ cloudwatch.Metric.grant_put_metric_data(sagemaker_policy)
+ model_bucket.grant_read_write(sagemaker_policy)
+ sagemaker_seedcode_bucket.grant_read_write(sagemaker_policy)
+
+ sagemaker_execution_role.grant_pass_role(codebuild_role)
+ sagemaker_execution_role.grant_pass_role(sagemaker_execution_role)
+
+ # Attach the policy
+ sagemaker_policy.attach_to_role(sagemaker_execution_role)
+ sagemaker_policy.attach_to_role(codebuild_role)
+
+ # Grant extra permissions for the SageMaker role
+ sagemaker_execution_role.add_to_policy(
+ iam.PolicyStatement(
+ actions=[
+ "sagemaker:CreateModel",
+ "sagemaker:DeleteModel",
+ "sagemaker:DescribeModel",
+ "sagemaker:CreateProcessingJob",
+ "sagemaker:DescribeProcessingJob",
+ "sagemaker:StopProcessingJob",
+ "sagemaker:CreateTrainingJob",
+ "sagemaker:DescribeTrainingJob",
+ "sagemaker:StopTrainingJob",
+ "sagemaker:AddTags",
+ "sagemaker:DeleteTags",
+ "sagemaker:ListTags",
+ ],
+ resources=[
+ f"arn:{Aws.PARTITION}:sagemaker:{Aws.REGION}:{Aws.ACCOUNT_ID}:model/*",
+ f"arn:{Aws.PARTITION}:sagemaker:{Aws.REGION}:{Aws.ACCOUNT_ID}:processing-job/*",
+ f"arn:{Aws.PARTITION}:sagemaker:{Aws.REGION}:{Aws.ACCOUNT_ID}:training-job/*",
+ ],
+ )
+ )
+ sagemaker_execution_role.add_to_policy(
+ iam.PolicyStatement(
+ actions=[
+ "sagemaker:CreateModelPackageGroup",
+ "sagemaker:DeleteModelPackageGroup",
+ "sagemaker:DescribeModelPackageGroup",
+ "sagemaker:CreateModelPackage",
+ "sagemaker:DeleteModelPackage",
+ "sagemaker:UpdateModelPackage",
+ "sagemaker:DescribeModelPackage",
+ "sagemaker:ListModelPackages",
+ "sagemaker:AddTags",
+ "sagemaker:DeleteTags",
+ "sagemaker:ListTags",
+ ],
+ resources=[
+ f"arn:{Aws.PARTITION}:sagemaker:{Aws.REGION}:{Aws.ACCOUNT_ID}:model-package-group/"
+ f"{model_package_group_name}",
+ f"arn:{Aws.PARTITION}:sagemaker:{Aws.REGION}:{Aws.ACCOUNT_ID}:model-package/"
+ f"{model_package_group_name}/*",
+ ],
+ ),
+ )
+
+ # Grant extra permissions for the CodeBuild role
+ codebuild_role.add_to_policy(
+ iam.PolicyStatement(
+ actions=[
+ "sagemaker:DescribeModelPackage",
+ "sagemaker:ListModelPackages",
+ "sagemaker:UpdateModelPackage",
+ "sagemaker:AddTags",
+ "sagemaker:DeleteTags",
+ "sagemaker:ListTags",
+ ],
+ resources=[
+ f"arn:{Aws.PARTITION}:sagemaker:{Aws.REGION}:{Aws.ACCOUNT_ID}:model-package/"
+ f"{model_package_group_name}/*"
+ ],
+ )
+ )
+ codebuild_role.add_to_policy(
+ iam.PolicyStatement(
+ actions=[
+ "sagemaker:CreatePipeline",
+ "sagemaker:UpdatePipeline",
+ "sagemaker:DeletePipeline",
+ "sagemaker:StartPipelineExecution",
+ "sagemaker:StopPipelineExecution",
+ "sagemaker:DescribePipelineExecution",
+ "sagemaker:ListPipelineExecutionSteps",
+ "sagemaker:AddTags",
+ "sagemaker:DeleteTags",
+ "sagemaker:ListTags",
+ ],
+ resources=[
+ f"arn:{Aws.PARTITION}:sagemaker:{Aws.REGION}:{Aws.ACCOUNT_ID}:pipeline/"
+ f"{sagemaker_pipeline_name}",
+ f"arn:{Aws.PARTITION}:sagemaker:{Aws.REGION}:{Aws.ACCOUNT_ID}:pipeline/"
+ f"{sagemaker_pipeline_name}/execution/*",
+ ],
+ ),
+ )
+ codebuild_role.add_to_policy(
+ iam.PolicyStatement(
+ actions=[
+ "sagemaker:DescribeImageVersion",
+ ],
+ resources=[
+ f"arn:{Aws.PARTITION}:sagemaker:{Aws.REGION}:{Aws.ACCOUNT_ID}:image-version/*",
+ ],
+ )
+ )
+
+ # Create the CodeBuild project
+ sm_pipeline_build = codebuild.PipelineProject(
+ self,
+ "SM Pipeline Build",
+ project_name=f"{project_name}-{construct_id}",
+            role=codebuild_role,  # TODO: scope this role down to the permissions the build actually needs
+ build_spec=codebuild.BuildSpec.from_source_filename("buildspec.yml"),
+ environment=codebuild.BuildEnvironment(
+ build_image=codebuild.LinuxBuildImage.STANDARD_5_0,
+ environment_variables={
+ "SAGEMAKER_PROJECT_NAME": codebuild.BuildEnvironmentVariable(value=project_name),
+ "SAGEMAKER_PROJECT_ID": codebuild.BuildEnvironmentVariable(value=project_id),
+ "MODEL_PACKAGE_GROUP_NAME": codebuild.BuildEnvironmentVariable(value=model_package_group_name),
+ "AWS_REGION": codebuild.BuildEnvironmentVariable(value=Aws.REGION),
+ "SAGEMAKER_PIPELINE_NAME": codebuild.BuildEnvironmentVariable(
+ value=sagemaker_pipeline_name,
+ ),
+ "SAGEMAKER_PIPELINE_DESCRIPTION": codebuild.BuildEnvironmentVariable(
+ value=sagemaker_pipeline_description,
+ ),
+ "SAGEMAKER_PIPELINE_ROLE_ARN": codebuild.BuildEnvironmentVariable(
+ value=sagemaker_execution_role.role_arn,
+ ),
+ "ARTIFACT_BUCKET": codebuild.BuildEnvironmentVariable(value=model_bucket.bucket_name),
+ "ARTIFACT_BUCKET_KMS_ID": codebuild.BuildEnvironmentVariable(
+ value=model_bucket.encryption_key.key_id # type: ignore[union-attr]
+ ),
+ },
+ ),
+ )
+
+ source_artifact = codepipeline.Artifact(artifact_name="GitSource")
+
+ build_pipeline = codepipeline.Pipeline(
+ self,
+ "Pipeline",
+ pipeline_name=f"{project_name}-{construct_id}",
+ artifact_bucket=pipeline_artifact_bucket,
+ )
+
+ # add a source stage
+ source_stage = build_pipeline.add_stage(stage_name="Source")
+ source_stage.add_action(
+ codepipeline_actions.CodeCommitSourceAction(
+ action_name="Source",
+ output=source_artifact,
+ repository=build_app_repository,
+ branch="main",
+ )
+ )
+
+ # add a build stage
+ build_stage = build_pipeline.add_stage(stage_name="Build")
+ build_stage.add_action(
+ codepipeline_actions.CodeBuildAction(
+ action_name="SMPipeline",
+ input=source_artifact,
+ project=sm_pipeline_build,
+ )
+ )
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/product_stack.py b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/product_stack.py
new file mode 100644
index 00000000..6702599e
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/product_stack.py
@@ -0,0 +1,255 @@
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+# SPDX-License-Identifier: Apache-2.0
+
+from typing import Any
+
+import aws_cdk.aws_iam as iam
+import aws_cdk.aws_kms as kms
+import aws_cdk.aws_s3 as s3
+import aws_cdk.aws_s3_assets as s3_assets
+import aws_cdk.aws_sagemaker as sagemaker
+import aws_cdk.aws_servicecatalog as servicecatalog
+from aws_cdk import Aws, CfnOutput, CfnParameter, CfnTag, RemovalPolicy, Tags
+from constructs import Construct
+
+from templates.finetune_llm_evaluation.pipeline_constructs.build_pipeline_construct import (
+ BuildPipelineConstruct,
+)
+
+
+class Product(servicecatalog.ProductStack):
+ DESCRIPTION: str = (
+        "This template includes a model-building pipeline with a workflow to pre-process, "
+        "train, evaluate, and register a model. The deploy pipeline creates dev, preprod, and "
+        "production endpoints. The target DEV/PREPROD/PROD accounts are parameterized in this template."
+ )
+ TEMPLATE_NAME: str = "Fine-tune & Deploy LLMOps template (with HuggingFace) (multi-account)"
+
+ def __init__(
+ self,
+ scope: Construct,
+ id: str,
+ build_app_asset: s3_assets.Asset,
+ pre_prod_account_id: str,
+ prod_account_id: str,
+ **kwargs: Any,
+ ) -> None:
+ super().__init__(scope, id)
+
+ dev_account_id = Aws.ACCOUNT_ID
+ pre_prod_account_id = Aws.ACCOUNT_ID if not pre_prod_account_id else pre_prod_account_id
+ prod_account_id = Aws.ACCOUNT_ID if not prod_account_id else prod_account_id
+
+ sagemaker_project_name = CfnParameter(
+ self,
+ "SageMakerProjectName",
+ type="String",
+ description="Name of the project.",
+ ).value_as_string
+
+ sagemaker_project_id = CfnParameter(
+ self,
+ "SageMakerProjectId",
+ type="String",
+            description="Service-generated ID of the project.",
+ ).value_as_string
+
+ pre_prod_account_id = CfnParameter(
+ self,
+ "PreProdAccountId",
+ type="String",
+            description="Pre-prod AWS account id. Required for cross-account model registry permissions.",
+ default=pre_prod_account_id,
+ ).value_as_string
+
+ prod_account_id = CfnParameter(
+ self,
+ "ProdAccountId",
+ type="String",
+ description="Prod AWS account id. Required for cross-account model registry permissions.",
+ default=prod_account_id,
+ ).value_as_string
+
+ Tags.of(self).add("sagemaker:project-id", sagemaker_project_id)
+ Tags.of(self).add("sagemaker:project-name", sagemaker_project_name)
+
+ # create kms key to be used by the assets bucket
+ kms_key = kms.Key(
+ self,
+ "Artifacts Bucket KMS Key",
+ description="key used for encryption of data in Amazon S3",
+ enable_key_rotation=True,
+ policy=iam.PolicyDocument(
+ statements=[
+ iam.PolicyStatement(
+ actions=["kms:*"],
+ effect=iam.Effect.ALLOW,
+ resources=["*"],
+ principals=[iam.AccountRootPrincipal()],
+ ),
+ iam.PolicyStatement(
+ actions=[
+ "kms:Encrypt",
+ "kms:Decrypt",
+ "kms:ReEncrypt*",
+ "kms:GenerateDataKey*",
+ "kms:DescribeKey",
+ ],
+ resources=[
+ "*",
+ ],
+ principals=[
+ iam.AccountPrincipal(pre_prod_account_id),
+ iam.AccountPrincipal(prod_account_id),
+ ],
+ ),
+ ]
+ ),
+ )
+
+ model_bucket = s3.Bucket(
+ self,
+ "S3 Artifact",
+ bucket_name=f"mlops-{sagemaker_project_name}-{sagemaker_project_id}-{Aws.ACCOUNT_ID}",
+ encryption_key=kms_key,
+ versioned=True,
+ removal_policy=RemovalPolicy.DESTROY,
+ enforce_ssl=True, # Blocks insecure requests to the bucket
+ )
+
+ # DEV account access to objects in the bucket
+ model_bucket.add_to_resource_policy(
+ iam.PolicyStatement(
+ sid="AddDevPermissions",
+ actions=["s3:*"],
+ resources=[
+ model_bucket.arn_for_objects(key_pattern="*"),
+ model_bucket.bucket_arn,
+ ],
+ principals=[
+ iam.AccountRootPrincipal(),
+ ],
+ )
+ )
+
+ # PROD account access to objects in the bucket
+ model_bucket.add_to_resource_policy(
+ iam.PolicyStatement(
+ sid="AddCrossAccountPermissions",
+ actions=["s3:List*", "s3:Get*", "s3:Put*"],
+ resources=[
+ model_bucket.arn_for_objects(key_pattern="*"),
+ model_bucket.bucket_arn,
+ ],
+ principals=[
+ iam.AccountPrincipal(pre_prod_account_id),
+ iam.AccountPrincipal(prod_account_id),
+ ],
+ )
+ )
+
+ model_package_group_name = f"{sagemaker_project_name}-{sagemaker_project_id}"
+
+ # cross account model registry resource policy
+ model_package_group_policy = iam.PolicyDocument(
+ statements=[
+ iam.PolicyStatement(
+ sid="ModelPackageGroup",
+ actions=[
+ "sagemaker:DescribeModelPackageGroup",
+ ],
+ resources=[
+ (
+ f"arn:{Aws.PARTITION}:sagemaker:{Aws.REGION}:{Aws.ACCOUNT_ID}:model-package-group/"
+ f"{model_package_group_name}"
+ )
+ ],
+ principals=[
+ iam.ArnPrincipal(f"arn:{Aws.PARTITION}:iam::{dev_account_id}:root"),
+ iam.ArnPrincipal(f"arn:{Aws.PARTITION}:iam::{pre_prod_account_id}:root"),
+ iam.ArnPrincipal(f"arn:{Aws.PARTITION}:iam::{prod_account_id}:root"),
+ ],
+ ),
+ iam.PolicyStatement(
+ sid="ModelPackage",
+ actions=[
+ "sagemaker:DescribeModelPackage",
+ "sagemaker:ListModelPackages",
+ "sagemaker:UpdateModelPackage",
+ "sagemaker:CreateModel",
+ ],
+ resources=[
+ (
+ f"arn:{Aws.PARTITION}:sagemaker:{Aws.REGION}:{Aws.ACCOUNT_ID}:model-package/"
+ f"{model_package_group_name}/*"
+ )
+ ],
+ principals=[
+ iam.ArnPrincipal(f"arn:{Aws.PARTITION}:iam::{dev_account_id}:root"),
+ iam.ArnPrincipal(f"arn:{Aws.PARTITION}:iam::{pre_prod_account_id}:root"),
+ iam.ArnPrincipal(f"arn:{Aws.PARTITION}:iam::{prod_account_id}:root"),
+ ],
+ ),
+ ]
+ ).to_json()
+
+ sagemaker.CfnModelPackageGroup(
+ self,
+ "Model Package Group",
+ model_package_group_name=model_package_group_name,
+ model_package_group_description=f"Model Package Group for {sagemaker_project_name}",
+ model_package_group_policy=model_package_group_policy,
+ tags=[
+ CfnTag(key="sagemaker:project-id", value=sagemaker_project_id),
+ CfnTag(key="sagemaker:project-name", value=sagemaker_project_name),
+ ],
+ )
+
+ kms_key = kms.Key(
+ self,
+ "Pipeline Bucket KMS Key",
+ description="key used for encryption of data in Amazon S3",
+ enable_key_rotation=True,
+ policy=iam.PolicyDocument(
+ statements=[
+ iam.PolicyStatement(
+ actions=["kms:*"],
+ effect=iam.Effect.ALLOW,
+ resources=["*"],
+ principals=[iam.AccountRootPrincipal()],
+ )
+ ]
+ ),
+ )
+
+ pipeline_artifact_bucket = s3.Bucket(
+ self,
+ "Pipeline Bucket",
+ bucket_name=f"pipeline-{sagemaker_project_name}-{sagemaker_project_id}-{Aws.ACCOUNT_ID}",
+ encryption_key=kms_key,
+ versioned=True,
+ removal_policy=RemovalPolicy.DESTROY,
+ )
+
+ BuildPipelineConstruct(
+ self,
+ "build",
+ project_name=sagemaker_project_name,
+ project_id=sagemaker_project_id,
+ model_package_group_name=model_package_group_name,
+ model_bucket=model_bucket,
+ pipeline_artifact_bucket=pipeline_artifact_bucket,
+ repo_asset=build_app_asset,
+ )
+
+ CfnOutput(
+ self,
+ "Model Bucket Name",
+ value=model_bucket.bucket_name,
+ )
+
+ CfnOutput(
+ self,
+ "Model Package Group Name",
+ value=model_package_group_name,
+ )
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/.pre-commit-config.yaml b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/.pre-commit-config.yaml
new file mode 100644
index 00000000..7a9c7e1c
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/.pre-commit-config.yaml
@@ -0,0 +1,52 @@
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# SPDX-License-Identifier: MIT-0
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy of this
+# software and associated documentation files (the "Software"), to deal in the Software
+# without restriction, including without limitation the rights to use, copy, modify,
+# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
+# permit persons to whom the Software is furnished to do so.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
+# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
+# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+repos:
+- repo: https://github.com/pre-commit/pre-commit-hooks
+ rev: v4.3.0
+ hooks:
+ - id: check-added-large-files
+ - id: check-json
+ - id: check-merge-conflict
+ # - id: check-yaml
+ - id: end-of-file-fixer
+ - id: requirements-txt-fixer
+ - id: trailing-whitespace
+- repo: https://github.com/psf/black
+ rev: 22.6.0
+ hooks:
+ - id: black
+ args: ["--line-length=120"]
+- repo: https://gitlab.com/PyCQA/flake8
+ rev: 3.9.2
+ hooks:
+ - id: flake8
+ args: ["--ignore=E231,E501,F841,W503,F403,E266,W605,F541,F401,E302", "--exclude=app.py", "--max-line-length=120"]
+- repo: https://github.com/Lucas-C/pre-commit-hooks
+ rev: v1.2.0
+ hooks:
+ - id: forbid-crlf
+ - id: remove-crlf
+ - id: insert-license
+ files: \.(py|yaml)$
+- repo: local
+ hooks:
+ - id: clear-jupyter-notebooks
+ name: clear-jupyter-notebooks
+ entry: bash -c 'find . -type f -name "*.ipynb" -exec jupyter nbconvert --ClearOutputPreprocessor.enabled=True --inplace "{}" \; && git add . && exit 0'
+ language: system
+ pass_filenames: false
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/Makefile b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/Makefile
new file mode 100644
index 00000000..ce0bc7b2
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/Makefile
@@ -0,0 +1,102 @@
+.PHONY: lint init
+
+#################################################################################
+# GLOBALS #
+#################################################################################
+
+PROJECT_DIR := $(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))
+PROJECT_NAME = gfdtv-dataanalysis-data-models
+PYTHON_INTERPRETER = python3
+
+ifeq (,$(shell which conda))
+HAS_CONDA=False
+else
+HAS_CONDA=True
+endif
+
+#################################################################################
+# COMMANDS #
+#################################################################################
+
+## Lint using flake8
+lint:
+ flake8 src
+## Setup git hooks
+init:
+ git config core.hooksPath .githooks
+
+clean:
+ rm -f cdk.staging
+ rm -rf cdk.out
+ find . -name '*.egg-info' -exec rm -fr {} +
+ find . -name '.coverage' -exec rm -fr {} +
+ find . -name '.pytest_cache' -exec rm -fr {} +
+ find . -name '.tox' -exec rm -fr {} +
+ find . -name '__pycache__' -exec rm -fr {} +
+#################################################################################
+# PROJECT RULES #
+#################################################################################
+
+
+
+
+#################################################################################
+# Self Documenting Commands #
+#################################################################################
+
+.DEFAULT_GOAL := help
+
+# Inspired by
+# sed script explained:
+# /^##/:
+# * save line in hold space
+# * purge line
+# * Loop:
+# * append newline + line to hold space
+# * go to next line
+# * if line starts with doc comment, strip comment character off and loop
+# * remove target prerequisites
+# * append hold space (+ newline) to line
+# * replace newline plus comments by `---`
+# * print line
+# Separate expressions are necessary because labels cannot be delimited by
+# semicolon; see
+.PHONY: help
+help:
+ @echo "$$(tput bold)Available rules:$$(tput sgr0)"
+ @echo
+ @sed -n -e "/^## / { \
+ h; \
+ s/.*//; \
+ :doc" \
+ -e "H; \
+ n; \
+ s/^## //; \
+ t doc" \
+ -e "s/:.*//; \
+ G; \
+ s/\\n## /---/; \
+ s/\\n/ /g; \
+ p; \
+ }" ${MAKEFILE_LIST} \
+ | LC_ALL='C' sort --ignore-case \
+ | awk -F '---' \
+ -v ncol=$$(tput cols) \
+ -v indent=19 \
+ -v col_on="$$(tput setaf 6)" \
+ -v col_off="$$(tput sgr0)" \
+ '{ \
+ printf "%s%*s%s ", col_on, -indent, $$1, col_off; \
+ n = split($$2, words, " "); \
+ line_length = ncol - indent; \
+ for (i = 1; i <= n; i++) { \
+ line_length -= length(words[i]) + 1; \
+ if (line_length <= 0) { \
+ line_length = ncol - indent - length(words[i]) - 1; \
+ printf "\n%*s ", -indent, " "; \
+ } \
+ printf "%s ", words[i]; \
+ } \
+ printf "\n"; \
+ }' \
+ | more $(shell test $(shell uname) = Darwin && echo '--no-init --raw-control-chars')
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/README.md b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/README.md
new file mode 100644
index 00000000..e32aedd8
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/README.md
@@ -0,0 +1,27 @@
+# SageMaker Build - Train Pipelines
+
+This folder contains all the SageMaker Pipelines of your project.
+
+`buildspec.yml` defines how to run a pipeline after each commit to this repository.
+`ml_pipelines/` contains the SageMaker pipelines definitions.
+The expected output of your main pipeline (here `text2sql_finetune/pipeline.py`) is a model registered to the SageMaker Model Registry.
+
+`text2sql_finetune/source_scripts/` contains the underlying scripts run by the steps of your SageMaker Pipelines. For example, if your SageMaker Pipeline runs a Processing Job as part of a Processing Step, the code being run inside the Processing Job should be defined in this folder.
+
+A typical folder structure for `source_scripts/` can contain `helpers`, `preprocessing`, `training`, `postprocessing`, `evaluate`, depending on the nature of the steps run as part of the SageMaker Pipeline.
+
+We provide here an example for finetuning CodeLlama on the task text to SQL with Parameter Efficient Fine-Tuning (PEFT).
+
+Additionally, if you use custom containers, the Dockerfile definitions should be found in that folder.
+
+`tests/` contains the unit tests for your `source_scripts/`.
+
+`notebooks/` contains experimentation notebooks.
+
+## Run the pipeline from the command line (from this folder)
+
+```bash
+pip install -e .
+
+run-pipeline --module-name ml_pipelines.text2sql_finetune.pipeline --role-arn YOUR_SAGEMAKER_EXECUTION_ROLE_ARN --kwargs '{"region":"eu-west-1"}'
+```
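The `--kwargs` flag takes a Python dict literal as a string. A minimal sketch of how such a string is parsed into keyword arguments (mirroring the `convert_struct` helper in `ml_pipelines/_utils.py`; `parse_kwargs` is an illustrative name, not part of the template):

```python
import ast


def parse_kwargs(str_struct=None):
    """Safely evaluate a dict-literal string such as '{"region": "eu-west-1"}'.

    ast.literal_eval only accepts Python literals, so arbitrary code in the
    CLI argument cannot be executed. An empty/None argument yields {}.
    """
    return ast.literal_eval(str_struct) if str_struct else {}
```

The resulting dict is then splatted into `get_pipeline(**kwargs)` by the generic runner scripts.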
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/buildspec.yml b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/buildspec.yml
new file mode 100644
index 00000000..6b15115a
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/buildspec.yml
@@ -0,0 +1,19 @@
+version: 0.2
+
+phases:
+ install:
+ runtime-versions:
+ python: 3.11
+ commands:
+ - pip install --upgrade --force-reinstall . "awscli>1.20.30"
+
+ build:
+ commands:
+ - export PYTHONUNBUFFERED=TRUE
+ - export SAGEMAKER_PROJECT_NAME_ID="${SAGEMAKER_PROJECT_NAME}-${SAGEMAKER_PROJECT_ID}"
+ - |
+ run-pipeline --module-name ml_pipelines.text2sql_finetune.pipeline \
+ --role-arn $SAGEMAKER_PIPELINE_ROLE_ARN \
+ --tags "[{\"Key\":\"sagemaker:project-name\", \"Value\":\"${SAGEMAKER_PROJECT_NAME}\"}, {\"Key\":\"sagemaker:project-id\", \"Value\":\"${SAGEMAKER_PROJECT_ID}\"}]" \
+ --kwargs "{\"region\":\"${AWS_REGION}\",\"role\":\"${SAGEMAKER_PIPELINE_ROLE_ARN}\",\"default_bucket\":\"${ARTIFACT_BUCKET}\",\"pipeline_name\":\"${SAGEMAKER_PROJECT_NAME_ID}\",\"model_package_group_name\":\"${MODEL_PACKAGE_GROUP_NAME}\",\"base_job_prefix\":\"${SAGEMAKER_PROJECT_NAME_ID}\"}" #", \"bucket_kms_id\":\"${ARTIFACT_BUCKET_KMS_ID}\"}"
+ - echo "Create/Update of the SageMaker Pipeline and execution completed."
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/README.md b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/README.md
new file mode 100644
index 00000000..8e309f81
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/README.md
@@ -0,0 +1,7 @@
+# SageMaker Pipelines
+
+This folder contains SageMaker Pipeline definitions and helper scripts: `get_pipeline_definition.py` "gets" a SageMaker Pipeline definition (a JSON dictionary), while `run_pipeline.py` "runs" a SageMaker Pipeline from such a definition.
+
+Those files are generic and can be reused to call any SageMaker Pipeline.
+
+Each SageMaker Pipeline definition should be treated as a module inside its own folder, for example the "training" pipeline contained inside `training/`.
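The generic scripts only rely on one contract: each pipeline module exposes a module-level `get_pipeline(**kwargs)` function. A minimal sketch of that dispatch, using a stand-in module rather than a real import (`example_pipeline` and `demo` are hypothetical names for illustration):

```python
import types


def get_pipeline_driver(module, passed_kwargs):
    # Mirrors ml_pipelines/_utils.py: resolve the module-level
    # get_pipeline and call it with the parsed keyword arguments.
    return module.get_pipeline(**passed_kwargs)


# A stand-in module illustrating the required contract; a real pipeline
# module would return a sagemaker.workflow.pipeline.Pipeline instead.
example = types.ModuleType("example_pipeline")
example.get_pipeline = lambda region=None, pipeline_name="demo": f"{pipeline_name}@{region}"
```

Because the contract is just "a module with `get_pipeline`", the same driver can run any pipeline in this folder.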
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/__init__.py b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/__init__.py
new file mode 100644
index 00000000..ff79f21c
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/__init__.py
@@ -0,0 +1,30 @@
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# SPDX-License-Identifier: MIT-0
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy of this
+# software and associated documentation files (the "Software"), to deal in the Software
+# without restriction, including without limitation the rights to use, copy, modify,
+# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
+# permit persons to whom the Software is furnished to do so.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
+# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
+# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+# © 2021 Amazon Web Services, Inc. or its affiliates. All Rights Reserved. This
+# AWS Content is provided subject to the terms of the AWS Customer Agreement
+# available at http://aws.amazon.com/agreement or other written agreement between
+# Customer and either Amazon Web Services, Inc. or Amazon Web Services EMEA SARL
+# or both.
+#
+# Any code, applications, scripts, templates, proofs of concept, documentation
+# and other items provided by AWS under this SOW are "AWS Content," as defined
+# in the Agreement, and are provided for illustration purposes only. All such
+# AWS Content is provided solely at the option of AWS, and is subject to the
+# terms of the Addendum and the Agreement. Customer is solely responsible for
+# using, deploying, testing, and supporting any code and applications provided
+# by AWS under this SOW.
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/__version__.py b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/__version__.py
new file mode 100644
index 00000000..660d19ee
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/__version__.py
@@ -0,0 +1,26 @@
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# SPDX-License-Identifier: MIT-0
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy of this
+# software and associated documentation files (the "Software"), to deal in the Software
+# without restriction, including without limitation the rights to use, copy, modify,
+# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
+# permit persons to whom the Software is furnished to do so.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
+# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
+# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+"""Metadata for the ml pipelines package."""
+
+__title__ = "ml_pipelines"
+__description__ = "ml pipelines - template package"
+__version__ = "0.0.1"
+__author__ = ""
+__author_email__ = ""
+__license__ = "Apache 2.0"
+__url__ = ""
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/_utils.py b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/_utils.py
new file mode 100644
index 00000000..85022161
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/_utils.py
@@ -0,0 +1,92 @@
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# SPDX-License-Identifier: MIT-0
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy of this
+# software and associated documentation files (the "Software"), to deal in the Software
+# without restriction, including without limitation the rights to use, copy, modify,
+# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
+# permit persons to whom the Software is furnished to do so.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
+# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
+# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+# © 2021 Amazon Web Services, Inc. or its affiliates. All Rights Reserved. This
+# AWS Content is provided subject to the terms of the AWS Customer Agreement
+# available at http://aws.amazon.com/agreement or other written agreement between
+# Customer and either Amazon Web Services, Inc. or Amazon Web Services EMEA SARL
+# or both.
+#
+# Any code, applications, scripts, templates, proofs of concept, documentation
+# and other items provided by AWS under this SOW are "AWS Content," as defined
+# in the Agreement, and are provided for illustration purposes only. All such
+# AWS Content is provided solely at the option of AWS, and is subject to the
+# terms of the Addendum and the Agreement. Customer is solely responsible for
+# using, deploying, testing, and supporting any code and applications provided
+# by AWS under this SOW.
+
+# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"). You
+# may not use this file except in compliance with the License. A copy of
+# the License is located at
+#
+# http://aws.amazon.com/apache2.0/
+#
+# or in the "license" file accompanying this file. This file is
+# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
+# ANY KIND, either express or implied. See the License for the specific
+# language governing permissions and limitations under the License.
+"""Provides utilities for SageMaker Pipeline CLI."""
+
+from __future__ import absolute_import
+
+import ast
+
+
+def get_pipeline_driver(module_name, passed_args=None):
+ """Gets the driver for generating your pipeline definition.
+
+ Pipeline modules must define a get_pipeline() module-level method.
+
+ Args:
+ module_name: The module name of your pipeline.
+ passed_args: Optional passed arguments that your pipeline may be templated by.
+
+ Returns:
+ The SageMaker Workflow pipeline.
+ """
+ _imports = __import__(module_name, fromlist=["get_pipeline"])
+ kwargs = convert_struct(passed_args)
+ return _imports.get_pipeline(**kwargs)
+
+
+def convert_struct(str_struct=None):
+ """convert the string argument to it's proper type
+
+ Args:
+ str_struct (str, optional): string to be evaluated. Defaults to None.
+
+ Returns:
+        The string struct evaluated to its actual type.
+ """
+ return ast.literal_eval(str_struct) if str_struct else {}
+
+
+def get_pipeline_custom_tags(module_name, args, tags):
+ """Gets the custom tags for pipeline
+
+ Returns:
+ Custom tags to be added to the pipeline
+ """
+ try:
+ _imports = __import__(module_name, fromlist=["get_pipeline_custom_tags"])
+ kwargs = convert_struct(args)
+ return _imports.get_pipeline_custom_tags(tags, kwargs["region"], kwargs["sagemaker_project_arn"])
+ except Exception as e:
+ print(f"Error getting project tags: {e}")
+ return tags
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/get_pipeline_definition.py b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/get_pipeline_definition.py
new file mode 100644
index 00000000..da8245e3
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/get_pipeline_definition.py
@@ -0,0 +1,78 @@
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# SPDX-License-Identifier: MIT-0
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy of this
+# software and associated documentation files (the "Software"), to deal in the Software
+# without restriction, including without limitation the rights to use, copy, modify,
+# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
+# permit persons to whom the Software is furnished to do so.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
+# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
+# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+
+"""A CLI to get pipeline definitions from pipeline modules."""
+
+from __future__ import absolute_import
+
+import argparse
+import sys
+
+from ml_pipelines._utils import get_pipeline_driver
+
+
+def main(): # pragma: no cover
+ """The main harness that gets the pipeline definition JSON.
+
+ Prints the json to stdout or saves to file.
+ """
+ parser = argparse.ArgumentParser("Gets the pipeline definition for the pipeline script.")
+
+ parser.add_argument(
+ "-n",
+ "--module-name",
+ dest="module_name",
+ type=str,
+ help="The module name of the pipeline to import.",
+ )
+ parser.add_argument(
+ "-f",
+ "--file-name",
+ dest="file_name",
+ type=str,
+ default=None,
+ help="The file to output the pipeline definition json to.",
+ )
+ parser.add_argument(
+ "-kwargs",
+ "--kwargs",
+ dest="kwargs",
+ default=None,
+ help="Dict string of keyword arguments for the pipeline generation (if supported)",
+ )
+ args = parser.parse_args()
+
+ if args.module_name is None:
+ parser.print_help()
+ sys.exit(2)
+
+ try:
+ pipeline = get_pipeline_driver(args.module_name, args.kwargs)
+ content = pipeline.definition()
+ if args.file_name:
+ with open(args.file_name, "w") as f:
+ f.write(content)
+ else:
+ print(content)
+ except Exception as e: # pylint: disable=W0703
+ print(f"Exception: {e}")
+ sys.exit(1)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/run_pipeline.py b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/run_pipeline.py
new file mode 100644
index 00000000..25ab651f
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/run_pipeline.py
@@ -0,0 +1,112 @@
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# SPDX-License-Identifier: MIT-0
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy of this
+# software and associated documentation files (the "Software"), to deal in the Software
+# without restriction, including without limitation the rights to use, copy, modify,
+# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
+# permit persons to whom the Software is furnished to do so.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
+# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
+# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+"""A CLI to create or update and run pipelines."""
+
+from __future__ import absolute_import
+
+import argparse
+import json
+import sys
+
+from ml_pipelines._utils import convert_struct, get_pipeline_custom_tags, get_pipeline_driver
+
+
+def main(): # pragma: no cover
+ """The main harness that creates or updates and runs the pipeline.
+
+ Creates or updates the pipeline and runs it.
+ """
+ parser = argparse.ArgumentParser("Creates or updates and runs the pipeline for the pipeline script.")
+
+ parser.add_argument(
+ "-n",
+ "--module-name",
+ dest="module_name",
+ type=str,
+ help="The module name of the pipeline to import.",
+ )
+ parser.add_argument(
+ "-kwargs",
+ "--kwargs",
+ dest="kwargs",
+ default=None,
+ help="Dict string of keyword arguments for the pipeline generation (if supported)",
+ )
+ parser.add_argument(
+ "-role-arn",
+ "--role-arn",
+ dest="role_arn",
+ type=str,
+ help="The role arn for the pipeline service execution role.",
+ )
+ parser.add_argument(
+ "-description",
+ "--description",
+ dest="description",
+ type=str,
+ default=None,
+ help="The description of the pipeline.",
+ )
+ parser.add_argument(
+ "-tags",
+ "--tags",
+ dest="tags",
+ default=None,
+ help="""List of dict strings of '[{"Key": "string", "Value": "string"}, ..]'""",
+ )
+ args = parser.parse_args()
+
+ if args.module_name is None or args.role_arn is None:
+ parser.print_help()
+ sys.exit(2)
+ tags = convert_struct(args.tags)
+
+ try:
+ pipeline = get_pipeline_driver(args.module_name, args.kwargs)
+ print("###### Creating/updating a SageMaker Pipeline with the following definition:")
+ parsed = json.loads(pipeline.definition())
+ print(json.dumps(parsed, indent=2, sort_keys=True))
+
+ all_tags = get_pipeline_custom_tags(args.module_name, args.kwargs, tags)
+
+        upsert_response = pipeline.upsert(role_arn=args.role_arn, description=args.description, tags=all_tags)
+ print("\n###### Created/Updated SageMaker Pipeline: Response received:")
+ print(upsert_response)
+
+ execution = pipeline.start()
+ print(f"\n###### Execution started with PipelineExecutionArn: {execution.arn}")
+
+ print("Waiting for the execution to finish...")
+ # setting below values to wait for the execution to finish within 8 hrs.
+ delay_seconds: int = 30
+ max_attempts: int = 120 * 8
+ execution.wait(delay=delay_seconds, max_attempts=max_attempts)
+ print("\n#####Execution completed. Execution step details:")
+
+ print(execution.list_steps())
+ except Exception as e: # pylint: disable=W0703
+ print(f"Exception: {e}")
+ sys.exit(1)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/text2sql_finetune/README.md b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/text2sql_finetune/README.md
new file mode 100644
index 00000000..3561799f
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/text2sql_finetune/README.md
@@ -0,0 +1,4 @@
+## CodeLLama Fine-tune ModelBuild Project Template
+
+This is a sample code repository that demonstrates how the CodeLlama LLM can be fine-tuned with Amazon SageMaker Pipelines.
+The pipeline fine-tunes the model on the provided data. After training, the model is evaluated and, if the score exceeds a configured threshold, registered to the SageMaker Model Registry.
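The register-or-stop decision is implemented in the pipeline as a `ConditionStep` on the evaluation metric. Stripped of the SageMaker wiring, the gating logic reduces to a simple comparison (the threshold value here is illustrative, not taken from the template):

```python
def should_register(eval_score, threshold=0.8):
    # Sketch of the pipeline's ConditionStep: the model is registered
    # only when the evaluation metric meets or exceeds the threshold;
    # otherwise the pipeline stops without registering anything.
    return eval_score >= threshold
```

In the actual pipeline this comparison is expressed with `ConditionGreaterThanOrEqualTo` over a `JsonGet` of the evaluation report.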
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/text2sql_finetune/__init__.py b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/text2sql_finetune/__init__.py
new file mode 100644
index 00000000..ff79f21c
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/text2sql_finetune/__init__.py
@@ -0,0 +1,30 @@
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# SPDX-License-Identifier: MIT-0
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy of this
+# software and associated documentation files (the "Software"), to deal in the Software
+# without restriction, including without limitation the rights to use, copy, modify,
+# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
+# permit persons to whom the Software is furnished to do so.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
+# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
+# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+# © 2021 Amazon Web Services, Inc. or its affiliates. All Rights Reserved. This
+# AWS Content is provided subject to the terms of the AWS Customer Agreement
+# available at http://aws.amazon.com/agreement or other written agreement between
+# Customer and either Amazon Web Services, Inc. or Amazon Web Services EMEA SARL
+# or both.
+#
+# Any code, applications, scripts, templates, proofs of concept, documentation
+# and other items provided by AWS under this SOW are "AWS Content," as defined
+# in the Agreement, and are provided for illustration purposes only. All such
+# AWS Content is provided solely at the option of AWS, and is subject to the
+# terms of the Addendum and the Agreement. Customer is solely responsible for
+# using, deploying, testing, and supporting any code and applications provided
+# by AWS under this SOW.
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/text2sql_finetune/pipeline.py b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/text2sql_finetune/pipeline.py
new file mode 100644
index 00000000..619e30e2
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/ml_pipelines/text2sql_finetune/pipeline.py
@@ -0,0 +1,404 @@
+"""Example workflow pipeline script for Text-to-SQL pipeline using Hugging Face and CodeLlama.
+
+ . - ModelStep and Register Model
+ .
+ Process Data -> Train -> Evaluate -> Condition .
+ .
+ . - (stop)
+
+Implements a get_pipeline(**kwargs) method.
+"""
+
+import logging
+
+import boto3
+import sagemaker
+import sagemaker.session
+from sagemaker.huggingface import (
+ HuggingFace,
+ HuggingFaceModel,
+ HuggingFaceProcessor,
+ get_huggingface_llm_image_uri,
+)
+from sagemaker.inputs import TrainingInput
+from sagemaker.model_metrics import MetricsSource, ModelMetrics
+from sagemaker.processing import ProcessingInput, ProcessingOutput
+from sagemaker.workflow.condition_step import ConditionStep
+from sagemaker.workflow.conditions import ConditionGreaterThanOrEqualTo
+from sagemaker.workflow.functions import JsonGet
+from sagemaker.workflow.model_step import ModelStep
+from sagemaker.workflow.parameters import (
+ ParameterFloat,
+ ParameterInteger,
+ ParameterString,
+)
+from sagemaker.workflow.pipeline import Pipeline
+from sagemaker.workflow.pipeline_context import PipelineSession
+from sagemaker.workflow.properties import PropertyFile
+from sagemaker.workflow.steps import ProcessingStep, TrainingStep
+
+# from sagemaker.workflow.steps import CacheConfig  # Enable to debug specific steps if previous steps were successful
+
+
+logging.basicConfig(level=logging.INFO)
+logger = logging.getLogger()
+
+SCRIPTS_DIR_PATH = "source_scripts"
+
+
+def get_sagemaker_client(region):
+ """Gets the sagemaker client.
+
+    Args:
+        region: the aws region to start the session
+
+    Returns:
+        A boto3 SageMaker client.
+    """
+ boto_session = boto3.Session(region_name=region)
+ sagemaker_client = boto_session.client("sagemaker")
+ return sagemaker_client
+
+
+def get_session(region, default_bucket):
+ """Gets the sagemaker session based on the region.
+
+ Args:
+ region: the aws region to start the session
+ default_bucket: the bucket to use for storing the artifacts
+
+ Returns:
+        A `sagemaker.session.Session` instance.
+ """
+
+ boto_session = boto3.Session(region_name=region)
+
+ sagemaker_client = boto_session.client("sagemaker")
+ runtime_client = boto_session.client("sagemaker-runtime")
+ return sagemaker.session.Session(
+ boto_session=boto_session,
+ sagemaker_client=sagemaker_client,
+ sagemaker_runtime_client=runtime_client,
+ default_bucket=default_bucket,
+ )
+
+
+def get_pipeline_session(region, default_bucket):
+ """Gets the pipeline session based on the region.
+
+ Args:
+ region: the aws region to start the session
+ default_bucket: the bucket to use for storing the artifacts
+
+ Returns:
+ PipelineSession instance
+ """
+
+ boto_session = boto3.Session(region_name=region)
+ sagemaker_client = boto_session.client("sagemaker")
+
+ return PipelineSession(
+ boto_session=boto_session,
+ sagemaker_client=sagemaker_client,
+ default_bucket=default_bucket,
+ )
+
+
+def get_pipeline_custom_tags(new_tags, region, sagemaker_project_name=None):
+ try:
+ sm_client = get_sagemaker_client(region)
+ response = sm_client.describe_project(ProjectName=sagemaker_project_name)
+ sagemaker_project_arn = response["ProjectArn"]
+ response = sm_client.list_tags(ResourceArn=sagemaker_project_arn)
+ project_tags = response["Tags"]
+ for project_tag in project_tags:
+ new_tags.append(project_tag)
+ except Exception as e:
+ logger.error(f"Error getting project tags: {e}")
+ return new_tags
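The helper above appends the project's tags to whatever tags the caller supplies; the merge itself is plain list concatenation of `{"Key": ..., "Value": ...}` dicts. A standalone sketch with the `describe_project`/`list_tags` calls replaced by hypothetical values:

```python
# Standalone sketch of the tag-merging behaviour in get_pipeline_custom_tags.
# The tag values here are hypothetical; the real ones come from the SageMaker API.
def merge_project_tags(new_tags, project_tags):
    # SageMaker tags are dicts with "Key" and "Value" entries.
    for project_tag in project_tags:
        new_tags.append(project_tag)
    return new_tags

caller_tags = [{"Key": "team", "Value": "ml-platform"}]
project_tags = [{"Key": "sagemaker:project-name", "Value": "text2sql"}]
merged = merge_project_tags(caller_tags, project_tags)
print(len(merged))  # 2
```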
+
+
+def get_pipeline(
+ region,
+ sagemaker_project_name=None,
+ role=None,
+ default_bucket=None,
+ bucket_kms_id=None,
+ model_package_group_name="Text2SQLGenerationPackageGroup",
+ pipeline_name="Text2SQLGenerationPipeline",
+ base_job_prefix="Text2sql",
+ processing_instance_count=1,
+ processing_instance_type="ml.g4dn.xlarge", # small gpu instance for data preprocessing
+ training_instance_type="ml.g5.12xlarge", # "ml.g5.24xlarge", # larger instance type for training if needed
+ evaluation_instance_type="ml.g4dn.12xlarge",
+ transformers_version="4.28.1",
+ pytorch_version="2.0.0",
+ py_version="py310",
+):
+ """Gets a SageMaker ML Pipeline instance to fine-tune LLMs with HuggingFace scripts.
+
+ Args:
+ region: AWS region to create and run the pipeline.
+ sagemaker_project_name: sagemaker project name
+ role: IAM role to create and run steps and pipeline.
+ default_bucket: the bucket to use for storing the artifacts
+ bucket_kms_id: bucket kms id
+ model_package_group_name: model package group name
+ pipeline_name: sagemaker pipeline name
+ base_job_prefix: base job prefix
+ processing_instance_count: number of processing instances to use
+ processing_instance_type: processing instance type
+ training_instance_type: training instance type
+ evaluation_instance_type: evaluation instance type
+ transformers_version: Hugging Face transformers package version
+ pytorch_version: PyTorch version to use
+ py_version: Python version to use
+
+ Returns:
+ an instance of a pipeline
+ """
+ sagemaker_session = get_session(region, default_bucket)
+ if role is None:
+ role = sagemaker.session.get_execution_role(sagemaker_session)
+
+ pipeline_session = get_pipeline_session(region, default_bucket)
+
+ logger.info(
+ f"sagemaker_project_name : {sagemaker_project_name}, "
+ f"bucket_kms_id : {bucket_kms_id}, default_bucket : {default_bucket}, role : {role}"
+ )
+
+ # parameters for pipeline execution
+ processing_instance_count = ParameterInteger(name="ProcessingInstanceCount", default_value=1)
+
+ model_approval_status = ParameterString(name="ModelApprovalStatus", default_value="PendingManualApproval")
+
+ # condition step for evaluating model quality and branching execution
+ acc_score_threshold = ParameterFloat(name="AccuracyScoreThreshold", default_value=0.0) # Auto approve for test
+
+ hf_model_id = ParameterString(name="HuggingFaceModel", default_value="codellama/CodeLlama-7b-hf")
+
+ hf_dataset_name = ParameterString(name="HuggingFaceDataset", default_value="philikai/Spider-SQL-LLAMA2_train")
+
+ # This parameter is used to test the entire pipeline and is useful for development and debugging.
+ # If set to True, a small data sample will be selected to speed up pipeline execution.
+ dry_run = ParameterString(name="DryRun", default_value="True")
+
+ # cache_config = CacheConfig(enable_caching=True, expire_after="P1D")
+ # # Enable to debug specific steps if previous steps were successful
+
+ ########################################## PREPROCESSING STEP #################################################
+
+ hf_data_processor = HuggingFaceProcessor(
+ instance_type=processing_instance_type,
+ instance_count=processing_instance_count,
+ transformers_version=transformers_version,
+ pytorch_version=pytorch_version,
+ py_version=py_version,
+ base_job_name=f"{base_job_prefix}/preprocess-dataset",
+ sagemaker_session=pipeline_session,
+ role=role,
+ output_kms_key=bucket_kms_id,
+ )
+
+ step_args = hf_data_processor.run(
+ outputs=[
+ ProcessingOutput(output_name="train", source="/opt/ml/processing/train"),
+ ProcessingOutput(output_name="test", source="/opt/ml/processing/test"),
+ ],
+ code="preprocess.py",
+ source_dir=SCRIPTS_DIR_PATH,
+ arguments=[
+ "--dataset_name",
+ hf_dataset_name,
+ "--dry_run",
+ dry_run,
+ ],
+ )
+
+ step_process = ProcessingStep(
+ name="LoadPreprocessSplitDataset",
+ step_args=step_args,
+ # cache_config=cache_config, # Enable to debug specific steps if previous steps were successful
+ )
+
+ ########################################## TRAINING STEP #######################################################
+
+ hyperparameters = {
+ "model_id": hf_model_id, # pre-trained model
+ "epochs": 2, # number of training epochs --> 1 is for fast testing.
+ # Can be exposed as pipeline parameter for example.
+ "per_device_train_batch_size": 4, # batch size for training
+ "lr": 1e-4, # learning rate used during training
+ "merge_weights": True, # wether to merge LoRA into the model (needs more memory)
+ }
+
+ huggingface_estimator = HuggingFace(
+ entry_point="train.py", # train script
+ source_dir=SCRIPTS_DIR_PATH, # directory which includes all the files needed for training
+ instance_type=training_instance_type, # instances type used for the training job
+ instance_count=1, # the number of instances used for training
+ base_job_name=f"{base_job_prefix}/training", # the name of the training job
+ role=role, # IAM role used in the training job to access AWS resources, e.g. S3
+ volume_size=300, # the size of the EBS volume in GB
+ transformers_version=transformers_version, # the transformers version used in the training job
+ pytorch_version=pytorch_version, # the pytorch_version version used in the training job
+ py_version=py_version, # the python version used in the training job
+ hyperparameters=hyperparameters, # the hyperparameters passed to the training job
+ sagemaker_session=pipeline_session,
+ environment={"HUGGINGFACE_HUB_CACHE": "/tmp/.cache"}, # set env variable to cache models in /tmp
+ keep_alive_period_in_seconds=600,
+ output_kms_key=bucket_kms_id,
+ )
+
+ step_args = huggingface_estimator.fit(
+ inputs={
+ "training": TrainingInput(
+ s3_data=step_process.properties.ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri,
+ )
+ },
+ )
+
+ step_train = TrainingStep(
+ name="FinetuneLLMSQLModel",
+ step_args=step_args,
+ # cache_config=cache_config, # Enable to debug specific steps if previous steps were successful
+ )
+
+ ############ Evaluation Step ##############
+
+ hf_evaluator = HuggingFaceProcessor(
+ role=role,
+ instance_count=processing_instance_count,
+ instance_type=evaluation_instance_type,
+ transformers_version=transformers_version,
+ pytorch_version=pytorch_version,
+ py_version=py_version,
+ base_job_name=f"{base_job_prefix}/evaluation",
+ sagemaker_session=pipeline_session,
+ output_kms_key=bucket_kms_id,
+ )
+
+ # The evaluate.py script defines several parameters as input args.
+ # We are only passing the --dry_run parameter here as an example.
+ # If you want to change the other parameters at runtime,
+ # implement them as SageMaker pipeline parameters analogously to the
+ # --dry_run parameter.
+ step_args = hf_evaluator.run(
+ code="evaluate.py",
+ source_dir=SCRIPTS_DIR_PATH,
+ arguments=[
+ "--dry_run",
+ dry_run,
+ ],
+ inputs=[
+ ProcessingInput(
+ source=step_train.properties.ModelArtifacts.S3ModelArtifacts,
+ destination="/opt/ml/processing/model",
+ ),
+ ProcessingInput(
+ source=step_process.properties.ProcessingOutputConfig.Outputs["test"].S3Output.S3Uri,
+ destination="/opt/ml/processing/test",
+ ),
+ ],
+ outputs=[
+ ProcessingOutput(
+ output_name="evaluation",
+ source="/opt/ml/processing/evaluation",
+ ),
+ ],
+ )
+
+ evaluation_report = PropertyFile(
+ name="LLMEvaluationReport",
+ output_name="evaluation",
+ path="evaluation.json",
+ )
+
+ step_eval = ProcessingStep(
+ name="EvaluateSQLModel",
+ step_args=step_args,
+ property_files=[evaluation_report],
+ # cache_config=cache_config, # Enable to debug specific steps if previous steps were successful
+ )
+
+ # ########## MODEL CREATION & REGISTRATION STEP #####
+
+ model_metrics = ModelMetrics(
+ model_statistics=MetricsSource(
+ s3_uri="{}/evaluation.json".format(
+ step_eval.arguments["ProcessingOutputConfig"]["Outputs"][0]["S3Output"]["S3Uri"]
+ ),
+ content_type="application/json",
+ )
+ )
+
+ # Inference endpoint works with the tgi images which can be retrieved with
+ # the get_huggingface_llm_image_uri() method
+ llm_image = get_huggingface_llm_image_uri("huggingface", version="1.0.3")
+
+ env = {
+ "HF_MODEL_ID": "/opt/ml/model",
+ }
+
+ huggingface_model = HuggingFaceModel(
+ name="LLMModel",
+ image_uri=llm_image,
+ env=env,
+ model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
+ sagemaker_session=pipeline_session,
+ role=role,
+ model_kms_key=bucket_kms_id,
+ )
+
+ step_args = huggingface_model.register(
+ content_types=["application/json"],
+ response_types=["application/json"],
+ inference_instances=["ml.g4dn.xlarge", "ml.g4dn.8xlarge", "ml.g5.12xlarge"],
+ transform_instances=["ml.g4dn.xlarge", "ml.g4dn.8xlarge", "ml.g5.12xlarge"],
+ model_package_group_name=model_package_group_name,
+ approval_status=model_approval_status,
+ model_metrics=model_metrics,
+ )
+
+ step_register = ModelStep(
+ name="TextToSQLLLM",
+ step_args=step_args,
+ )
+
+ ########################################## CONDITION STEP #######################################################
+
+ cond_gte = ConditionGreaterThanOrEqualTo(
+ left=JsonGet(
+ step_name=step_eval.name,
+ property_file=evaluation_report,
+ json_path="metrics.accuracy.value",
+ ),
+ right=acc_score_threshold,
+ )
+
+ step_cond = ConditionStep(
+ name="CheckAccuracyScore",
+ conditions=[cond_gte],
+ if_steps=[step_register],
+ else_steps=[],
+ )
+
+ # Create pipeline instance
+ pipeline = Pipeline(
+ name=pipeline_name,
+ parameters=[
+ processing_instance_type,
+ processing_instance_count,
+ training_instance_type,
+ hf_model_id,
+ hf_dataset_name,
+ model_approval_status,
+ acc_score_threshold,
+ dry_run,
+ ],
+ steps=[step_process, step_train, step_eval, step_cond],
+ sagemaker_session=pipeline_session,
+ )
+ return pipeline
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/notebooks/README.md b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/notebooks/README.md
new file mode 100644
index 00000000..c0749333
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/notebooks/README.md
@@ -0,0 +1,4 @@
+# Jupyter Notebooks
+
+This folder is intended to store your experiment notebooks.
+Typically, the first step is to store your data science notebooks here and start defining example SageMaker pipelines. Once you are satisfied with a first iteration of a SageMaker pipeline, move the code into Python scripts inside the respective `ml_pipelines/` and `source_scripts/` folders.
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/notebooks/sm_pipelines_runbook.ipynb b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/notebooks/sm_pipelines_runbook.ipynb
new file mode 100644
index 00000000..247f44ce
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/notebooks/sm_pipelines_runbook.ipynb
@@ -0,0 +1,450 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import logging\n",
+ "\n",
+ "import boto3\n",
+ "import sagemaker\n",
+ "import sagemaker.session\n",
+ "from sagemaker.estimator import Estimator\n",
+ "from sagemaker.inputs import TrainingInput\n",
+ "from sagemaker.model_metrics import (\n",
+ " MetricsSource,\n",
+ " ModelMetrics,\n",
+ ")\n",
+ "from sagemaker.processing import (\n",
+ " ProcessingInput,\n",
+ " ProcessingOutput,\n",
+ " ScriptProcessor,\n",
+ ")\n",
+ "from sagemaker.workflow.condition_step import (\n",
+ " ConditionStep,\n",
+ ")\n",
+ "from sagemaker.workflow.conditions import ConditionLessThanOrEqualTo\n",
+ "from sagemaker.workflow.functions import (\n",
+ " JsonGet,\n",
+ ")\n",
+ "from sagemaker.workflow.parameters import (\n",
+ " ParameterInteger,\n",
+ " ParameterString,\n",
+ ")\n",
+ "from sagemaker.workflow.pipeline import Pipeline\n",
+ "from sagemaker.workflow.properties import PropertyFile\n",
+ "from sagemaker.workflow.step_collections import RegisterModel\n",
+ "from sagemaker.workflow.steps import (\n",
+ " ProcessingStep,\n",
+ " TrainingStep,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "logger = logging.getLogger(__name__)\n",
+ "\n",
+ "\"\"\"Environment Variables\"\"\"\n",
+ "proj_dir = \"TO_BE_DEFINED\"\n",
+ "region = \"TO_BE_DEFINED\"\n",
+ "model_artefact_bucket = \"TO_BE_DEFINED\"\n",
+ "role = \"TO_BE_DEFINED\"\n",
+ "project_name = \"TO_BE_DEFINED\"\n",
+ "stage = \"test\"\n",
+ "model_package_group_name = (\"AbalonePackageGroup\",)\n",
+ "pipeline_name = (\"AbalonePipeline\",)\n",
+ "base_job_prefix = (\"Abalone\",)\n",
+ "project_id = (\"SageMakerProjectId\",)\n",
+ "processing_image_uri = None\n",
+ "training_image_uri = None\n",
+ "inference_image_uri = None"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def get_session(region, default_bucket):\n",
+ " \"\"\"Gets the sagemaker session based on the region.\n",
+ "\n",
+ " Args:\n",
+ " region: the aws region to start the session\n",
+ " default_bucket: the bucket to use for storing the artifacts\n",
+ "\n",
+ " Returns:\n",
+ " `sagemaker.session.Session instance\n",
+ " \"\"\"\n",
+ "\n",
+ " boto_session = boto3.Session(region_name=region)\n",
+ "\n",
+ " sagemaker_client = boto_session.client(\"sagemaker\")\n",
+ " runtime_client = boto_session.client(\"sagemaker-runtime\")\n",
+ " return sagemaker.session.Session(\n",
+ " boto_session=boto_session,\n",
+ " sagemaker_client=sagemaker_client,\n",
+ " sagemaker_runtime_client=runtime_client,\n",
+ " default_bucket=default_bucket,\n",
+ " )"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sagemaker_session = get_session(region, model_artefact_bucket)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Feature Engineering\n",
+ "This section describes the different steps involved in feature engineering which includes loading and transforming different data sources to build the features needed for the ML Use Case"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "processing_instance_count = ParameterInteger(name=\"ProcessingInstanceCount\", default_value=1)\n",
+ "processing_instance_type = ParameterString(name=\"ProcessingInstanceType\", default_value=\"ml.m5.xlarge\")\n",
+ "training_instance_type = ParameterString(name=\"TrainingInstanceType\", default_value=\"ml.m5.xlarge\")\n",
+ "inference_instance_type = ParameterString(name=\"InferenceInstanceType\", default_value=\"ml.m5.xlarge\")\n",
+ "model_approval_status = ParameterString(name=\"ModelApprovalStatus\", default_value=\"PendingManualApproval\")\n",
+ "input_data = ParameterString(\n",
+ " name=\"InputDataUrl\",\n",
+ " default_value=f\"s3://sagemaker-servicecatalog-seedcode-{region}/dataset/abalone-dataset.csv\",\n",
+ ")\n",
+ "processing_image_name = \"sagemaker-{0}-processingimagebuild\".format(project_id)\n",
+ "training_image_name = \"sagemaker-{0}-trainingimagebuild\".format(project_id)\n",
+ "inference_image_name = \"sagemaker-{0}-inferenceimagebuild\".format(project_id)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# processing step for feature engineering\n",
+ "try:\n",
+ " processing_image_uri = sagemaker_session.sagemaker_client.describe_image_version(ImageName=processing_image_name)[\n",
+ " \"ContainerImage\"\n",
+ " ]\n",
+ "\n",
+ "except sagemaker_session.sagemaker_client.exceptions.ResourceNotFound:\n",
+ " processing_image_uri = sagemaker.image_uris.retrieve(\n",
+ " framework=\"xgboost\",\n",
+ " region=region,\n",
+ " version=\"1.0-1\",\n",
+ " py_version=\"py3\",\n",
+ " instance_type=processing_instance_type,\n",
+ " )"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Define Script Processor\n",
+ "script_processor = ScriptProcessor(\n",
+ " image_uri=processing_image_uri,\n",
+ " instance_type=processing_instance_type,\n",
+ " instance_count=processing_instance_count,\n",
+ " base_job_name=f\"{base_job_prefix}/sklearn-abalone-preprocess\",\n",
+ " command=[\"python3\"],\n",
+ " sagemaker_session=sagemaker_session,\n",
+ " role=role,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Define ProcessingStep\n",
+ "step_process = ProcessingStep(\n",
+ " name=\"PreprocessAbaloneData\",\n",
+ " processor=script_processor,\n",
+ " outputs=[\n",
+ " ProcessingOutput(output_name=\"train\", source=\"/opt/ml/processing/train\"),\n",
+ " ProcessingOutput(output_name=\"validation\", source=\"/opt/ml/processing/validation\"),\n",
+ " ProcessingOutput(output_name=\"test\", source=\"/opt/ml/processing/test\"),\n",
+ " ],\n",
+ " code=\"source_scripts/preprocessing/prepare_abalone_data/main.py\",\n",
+ " job_arguments=[\"--input-data\", input_data],\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Training an XGBoost model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# training step for generating model artifacts\n",
+ "model_path = f\"s3://{sagemaker_session.default_bucket()}/{base_job_prefix}/AbaloneTrain\"\n",
+ "\n",
+ "try:\n",
+ " training_image_uri = sagemaker_session.sagemaker_client.describe_image_version(ImageName=training_image_name)[\n",
+ " \"ContainerImage\"\n",
+ " ]\n",
+ "except sagemaker_session.sagemaker_client.exceptions.ResourceNotFound:\n",
+ " training_image_uri = sagemaker.image_uris.retrieve(\n",
+ " framework=\"xgboost\",\n",
+ " region=region,\n",
+ " version=\"1.0-1\",\n",
+ " py_version=\"py3\",\n",
+ " instance_type=training_instance_type,\n",
+ " )\n",
+ "\n",
+ "xgb_train = Estimator(\n",
+ " image_uri=training_image_uri,\n",
+ " instance_type=training_instance_type,\n",
+ " instance_count=1,\n",
+ " output_path=model_path,\n",
+ " base_job_name=f\"{base_job_prefix}/abalone-train\",\n",
+ " sagemaker_session=sagemaker_session,\n",
+ " role=role,\n",
+ ")\n",
+ "xgb_train.set_hyperparameters(\n",
+ " objective=\"reg:linear\",\n",
+ " num_round=50,\n",
+ " max_depth=5,\n",
+ " eta=0.2,\n",
+ " gamma=4,\n",
+ " min_child_weight=6,\n",
+ " subsample=0.7,\n",
+ " silent=0,\n",
+ ")\n",
+ "step_train = TrainingStep(\n",
+ " name=\"TrainAbaloneModel\",\n",
+ " estimator=xgb_train,\n",
+ " inputs={\n",
+ " \"train\": TrainingInput(\n",
+ " s3_data=step_process.properties.ProcessingOutputConfig.Outputs[\"train\"].S3Output.S3Uri,\n",
+ " content_type=\"text/csv\",\n",
+ " ),\n",
+ " \"validation\": TrainingInput(\n",
+ " s3_data=step_process.properties.ProcessingOutputConfig.Outputs[\"validation\"].S3Output.S3Uri,\n",
+ " content_type=\"text/csv\",\n",
+ " ),\n",
+ " },\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Evaluate the Model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# processing step for evaluation\n",
+ "script_eval = ScriptProcessor(\n",
+ " image_uri=training_image_uri,\n",
+ " command=[\"python3\"],\n",
+ " instance_type=processing_instance_type,\n",
+ " instance_count=1,\n",
+ " base_job_name=f\"{base_job_prefix}/script-abalone-eval\",\n",
+ " sagemaker_session=sagemaker_session,\n",
+ " role=role,\n",
+ ")\n",
+ "evaluation_report = PropertyFile(\n",
+ " name=\"AbaloneEvaluationReport\",\n",
+ " output_name=\"evaluation\",\n",
+ " path=\"evaluation.json\",\n",
+ ")\n",
+ "step_eval = ProcessingStep(\n",
+ " name=\"EvaluateAbaloneModel\",\n",
+ " processor=script_eval,\n",
+ " inputs=[\n",
+ " ProcessingInput(\n",
+ " source=step_train.properties.ModelArtifacts.S3ModelArtifacts,\n",
+ " destination=\"/opt/ml/processing/model\",\n",
+ " ),\n",
+ " ProcessingInput(\n",
+ " source=step_process.properties.ProcessingOutputConfig.Outputs[\"test\"].S3Output.S3Uri,\n",
+ " destination=\"/opt/ml/processing/test\",\n",
+ " ),\n",
+ " ],\n",
+ " outputs=[\n",
+ " ProcessingOutput(output_name=\"evaluation\", source=\"/opt/ml/processing/evaluation\"),\n",
+ " ],\n",
+ " code=\"source_scripts/evaluate/evaluate_xgboost/main.py\",\n",
+ " property_files=[evaluation_report],\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Conditional step to push model to SageMaker Model Registry"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# register model step that will be conditionally executed\n",
+ "model_metrics = ModelMetrics(\n",
+ " model_statistics=MetricsSource(\n",
+ " s3_uri=\"{}/evaluation.json\".format(\n",
+ " step_eval.arguments[\"ProcessingOutputConfig\"][\"Outputs\"][0][\"S3Output\"][\"S3Uri\"]\n",
+ " ),\n",
+ " content_type=\"application/json\",\n",
+ " )\n",
+ ")\n",
+ "\n",
+ "try:\n",
+ " inference_image_uri = sagemaker_session.sagemaker_client.describe_image_version(ImageName=inference_image_name)[\n",
+ " \"ContainerImage\"\n",
+ " ]\n",
+ "except sagemaker_session.sagemaker_client.exceptions.ResourceNotFound:\n",
+ " inference_image_uri = sagemaker.image_uris.retrieve(\n",
+ " framework=\"xgboost\",\n",
+ " region=region,\n",
+ " version=\"1.0-1\",\n",
+ " py_version=\"py3\",\n",
+ " instance_type=inference_instance_type,\n",
+ " )\n",
+ "step_register = RegisterModel(\n",
+ " name=\"RegisterAbaloneModel\",\n",
+ " estimator=xgb_train,\n",
+ " image_uri=inference_image_uri,\n",
+ " model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,\n",
+ " content_types=[\"text/csv\"],\n",
+ " response_types=[\"text/csv\"],\n",
+ " inference_instances=[\"ml.t2.medium\", \"ml.m5.large\"],\n",
+ " transform_instances=[\"ml.m5.large\"],\n",
+ " model_package_group_name=model_package_group_name,\n",
+ " approval_status=model_approval_status,\n",
+ " model_metrics=model_metrics,\n",
+ ")\n",
+ "\n",
+ "# condition step for evaluating model quality and branching execution\n",
+ "cond_lte = ConditionLessThanOrEqualTo(\n",
+ " left=JsonGet(step_name=step_eval.name, property_file=evaluation_report, json_path=\"regression_metrics.mse.value\"),\n",
+ " right=6.0,\n",
+ ")\n",
+ "step_cond = ConditionStep(\n",
+ " name=\"CheckMSEAbaloneEvaluation\",\n",
+ " conditions=[cond_lte],\n",
+ " if_steps=[step_register],\n",
+ " else_steps=[],\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Create and run the Pipeline"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# pipeline instance\n",
+ "pipeline = Pipeline(\n",
+ " name=pipeline_name,\n",
+ " parameters=[\n",
+ " processing_instance_type,\n",
+ " processing_instance_count,\n",
+ " training_instance_type,\n",
+ " model_approval_status,\n",
+ " input_data,\n",
+ " ],\n",
+ " steps=[step_process, step_train, step_eval, step_cond],\n",
+ " sagemaker_session=sagemaker_session,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import json\n",
+ "\n",
+ "definition = json.loads(pipeline.definition())\n",
+ "definition"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "pipeline.upsert(role_arn=role, description=f\"{stage} pipelines for {project_name}\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "pipeline.start()"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "conda_python3",
+ "language": "python",
+ "name": "conda_python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/setup.cfg b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/setup.cfg
new file mode 100644
index 00000000..6f878705
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/setup.cfg
@@ -0,0 +1,14 @@
+[tool:pytest]
+addopts =
+ -vv
+testpaths = tests
+
+[aliases]
+test=pytest
+
+[metadata]
+description-file = README.md
+license_file = LICENSE
+
+[wheel]
+universal = 1
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/setup.py b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/setup.py
new file mode 100644
index 00000000..144c84bc
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/setup.py
@@ -0,0 +1,89 @@
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# SPDX-License-Identifier: MIT-0
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy of this
+# software and associated documentation files (the "Software"), to deal in the Software
+# without restriction, including without limitation the rights to use, copy, modify,
+# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
+# permit persons to whom the Software is furnished to do so.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
+# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
+# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+import os
+
+import setuptools
+
+about = {}
+here = os.path.abspath(os.path.dirname(__file__))
+with open(os.path.join(here, "ml_pipelines", "__version__.py")) as f:
+ exec(f.read(), about)
+
+
+with open("README.md", "r") as f:
+ readme = f.read()
+
+
+required_packages = [
+ "sagemaker==2.194.0",
+ "scipy==1.9.3",
+ "diffusers==0.11.1",
+ "datasets==2.8.0",
+ "transformers==4.33.0",
+ "torch>=2.0.0",
+ "peft==0.4.0",
+ "accelerate==0.21.0",
+ "bitsandbytes==0.40.2",
+ "safetensors>=0.3.1",
+ "tokenizers>=0.13.3",
+]
+extras = {
+ "test": [
+ "black",
+ "coverage",
+ "flake8",
+ "mock",
+ "pydocstyle",
+ "pytest",
+ "pytest-cov",
+ "sagemaker",
+ "tox",
+ ]
+}
+setuptools.setup(
+ name=about["__title__"],
+ description=about["__description__"],
+ version=about["__version__"],
+ author=about["__author__"],
+ author_email=about["__author_email__"],
+ long_description=readme,
+ long_description_content_type="text/markdown",
+ url=about["__url__"],
+ license=about["__license__"],
+ packages=setuptools.find_packages(),
+ include_package_data=True,
+ python_requires=">=3.6",
+ install_requires=required_packages,
+ extras_require=extras,
+ entry_points={
+ "console_scripts": [
+ "get-pipeline-definition=ml_pipelines.get_pipeline_definition:main",
+ "run-pipeline=ml_pipelines.run_pipeline:main",
+ ]
+ },
+ classifiers=[
+ "Development Status :: 3 - Alpha",
+ "Intended Audience :: Developers",
+ "Natural Language :: English",
+ "Programming Language :: Python",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.6",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ ],
+)
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/source_scripts/data_processing.py b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/source_scripts/data_processing.py
new file mode 100644
index 00000000..9c388c18
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/source_scripts/data_processing.py
@@ -0,0 +1,109 @@
+import logging
+from functools import partial
+from itertools import chain
+
+from datasets import Dataset, load_dataset
+from transformers import AutoTokenizer
+
+logger = logging.getLogger()
+logger.setLevel(logging.INFO)
+logger.addHandler(logging.StreamHandler())
+
+
+# Code adapted from:
+
+
+class CodeLlamaDataProcessor:
+ PROMPT_TEMPLATE = """
+ ### Instruction
+ Given an input question, use SQLite syntax to generate a SQL query by choosing one or
+ multiple of the following tables.
+ The foreign and primary keys will be supplied. Write the query in between <SQL></SQL> tags.
+ Answer the following question with the context below:
+ {question}
+
+ ### Context
+ {schema} | {foreign_keys} | {primary_keys}
+
+ ### Answer
+ {query}
+ {eos_token}
+ """
+ REMAINDER = {"input_ids": [], "attention_mask": [], "token_type_ids": []}
+ MODEL_ID = "codellama/CodeLlama-7b-hf"
+ TOKENIZER = AutoTokenizer.from_pretrained(MODEL_ID)
+
+ def __init__(self, dataset: Dataset, is_training: bool):
+ self.dataset = dataset
+ self.is_training = is_training
+
+ @staticmethod
+ def load_hf_dataset(dataset_name: str):
+ if isinstance(dataset_name, str):
+ try:
+ dataset = load_dataset(dataset_name)
+ logger.info("Dataset loaded successfully.")
+ except Exception as e:
+ logger.info(f"Failed to load dataset: {e}")
+ raise RuntimeError(f"Failed to load dataset: {e}")
+ else:
+ raise TypeError("Dataset is not a string.")
+
+ return dataset
+
+ def _assemble_prompt(self, sample: dict) -> str:
+ prompt = self.PROMPT_TEMPLATE.format(
+ question=sample["question"],
+ schema=sample["schema"],
+ foreign_keys=sample["foreign_keys"],
+ primary_keys=sample["primary_keys"],
+ query=f" {sample['query']} " if self.is_training else "",
+ eos_token=self.TOKENIZER.eos_token if self.is_training else "",
+ )
+ return prompt
+
+ def template_dataset(self, sample: dict) -> dict:
+ prompt = self._assemble_prompt(sample=sample)
+ sample["prompt"] = prompt.strip()
+
+ if not self.is_training:
+ sample["answer"] = sample["query"]
+ return sample
+
+ @classmethod
+ def _chunk(cls, sample, chunk_length: int):
+ concatenated_examples = {k: list(chain(*sample[k])) for k in sample.keys()}
+ concatenated_examples = {k: cls.REMAINDER[k] + concatenated_examples[k] for k in concatenated_examples.keys()}
+
+ batch_total_length = len(concatenated_examples[list(sample.keys())[0]])
+ if batch_total_length >= chunk_length:
+ batch_chunk_length = (batch_total_length // chunk_length) * chunk_length
+ else:
+ raise ValueError("Batch length is less than chunk length.")
+
+ result = {
+ k: [t[i : i + chunk_length] for i in range(0, batch_chunk_length, chunk_length)]
+ for k, t in concatenated_examples.items()
+ }
+ cls.REMAINDER = {k: concatenated_examples[k][batch_chunk_length:] for k in concatenated_examples.keys()}
+ result["labels"] = result["input_ids"].copy()
+ return result
+
+ @classmethod
+ def chunk_and_tokenize(cls, prompt_dataset: Dataset, chunk_length: int):
+ chunked_tokenized_dataset = prompt_dataset.map(
+ lambda sample: cls.TOKENIZER(sample["prompt"]),
+ batched=True,
+ remove_columns=list(prompt_dataset.features),
+ ).map(
+ partial(cls._chunk, chunk_length=chunk_length),
+ batched=True,
+ )
+ return chunked_tokenized_dataset
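The `_chunk` helper packs tokenized samples into fixed-length blocks, carrying any leftover tokens over to the next batch via `REMAINDER`. The packing arithmetic can be sketched independently of the tokenizer (hypothetical token ids, chunk length of 4):

```python
from itertools import chain

def pack_into_chunks(batch, chunk_length, remainder):
    """Concatenate per-sample token lists, cut into fixed-size chunks,
    and return the chunks plus the leftover tail for the next batch."""
    concatenated = {k: remainder[k] + list(chain(*batch[k])) for k in batch}
    total = len(concatenated["input_ids"])
    usable = (total // chunk_length) * chunk_length
    chunks = {
        k: [t[i : i + chunk_length] for i in range(0, usable, chunk_length)]
        for k, t in concatenated.items()
    }
    new_remainder = {k: v[usable:] for k, v in concatenated.items()}
    return chunks, new_remainder

batch = {"input_ids": [[1, 2, 3], [4, 5, 6, 7]]}
chunks, rem = pack_into_chunks(batch, chunk_length=4, remainder={"input_ids": []})
print(chunks["input_ids"])  # [[1, 2, 3, 4]]
print(rem["input_ids"])     # [5, 6, 7]
```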
+
+ def _get_sample_prompts(self):
+ prompt_dataset = self.dataset.map(self.template_dataset, remove_columns=list(self.dataset.features))
+ return prompt_dataset
+
+ def prepare_data(self):
+ return self._get_sample_prompts()
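Prompt assembly is plain string formatting over the dataset columns; a tokenizer-free sketch with a hypothetical sample (the real template also inserts an EOS token during training):

```python
# Tokenizer-free sketch of the prompt assembly in CodeLlamaDataProcessor.
# Sample values are hypothetical; the real rows come from the Spider dataset.
TEMPLATE = (
    "### Instruction\n"
    "Answer the following question with the context below:\n"
    "{question}\n\n"
    "### Context\n"
    "{schema} | {foreign_keys} | {primary_keys}\n\n"
    "### Answer\n"
    "{query}"
)

sample = {
    "question": "How many singers are there?",
    "schema": "singer(id, name)",
    "foreign_keys": "[]",
    "primary_keys": "singer.id",
    "query": "<SQL> SELECT count(*) FROM singer </SQL>",
}
prompt = TEMPLATE.format(**sample)
print("### Answer" in prompt)  # True
```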
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/source_scripts/evaluate.py b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/source_scripts/evaluate.py
new file mode 100644
index 00000000..256eed23
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/source_scripts/evaluate.py
@@ -0,0 +1,199 @@
+import argparse
+import json
+import logging
+import os
+import re
+import tarfile
+
+import torch
+from datasets import load_from_disk
+from tqdm.auto import tqdm
+from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
+from transformers.pipelines.pt_utils import KeyDataset
+
+logger = logging.getLogger()
+logger.setLevel(logging.INFO)
+logger.addHandler(logging.StreamHandler())
+
+
+def clean_prediction(generated_response: str) -> str:
+ """
+ Extract SQL statements from generated response.
+ Args:
+ generated_response: response returned from the model
+
+ Returns:
+ str: generated SQL statement
+
+ """
+    # Define the pattern to capture everything up to the closing </SQL> tag, trimming surrounding whitespace
+ pattern = r"\s*(.*?)\s*<\/SQL>"
+
+ # Find all matches of the pattern in the long string
+ matches = re.findall(pattern, generated_response, re.DOTALL)
+
+ # Extract the SQL query from the first match (assuming there is only one match)
+ if matches:
+ return matches[0]
+ else:
+ return "" # Return empty string to avoid issues downstream
+
+
+def normalise_string(s: str) -> str:
+ """
+ Normalise string using pre-defined rules to allow for easier evaluation.
+ For example, if the prediction only differs in letter case or spaces.
+ Args:
+ s: input string to be transformed
+
+ Returns:
+ str: Normalised string
+
+ """
+ # Remove spaces and newlines, convert to lowercase
+ normalized = s.translate(str.maketrans("", "", " \n")).lower()
+ # Change single quotes to double quotes
+ normalized = normalized.replace("'", '"')
+ # Delete any trailing semicolons
+ normalized = normalized.rstrip(";")
+ # Strip leading and trailing whitespaces
+ normalized = normalized.strip()
+ return normalized
+
+
+def evaluate_model(args):
+ """
+ Evaluate the model performance.
+
+ Args:
+ args: input arguments from SageMaker pipeline.
+
+ Returns:
+
+ """
+ logger.info("Decompressing model assets.")
+ with tarfile.open(name=os.path.join(args.model_dir, "model.tar.gz"), mode="r:gz") as tar_file:
+ os.makedirs(args.model_dir, exist_ok=True)
+ tar_file.extractall(args.model_dir)
+
+    logger.info(f"Decompressed model assets: {os.listdir(args.model_dir)}")
+
+ # Load test dataset
+ test_dataset = load_from_disk(args.test_data_dir)
+    # String comparison is needed because of the way arguments are passed to a SageMaker Processing Job
+    if args.dry_run != "False":
+ test_dataset = test_dataset.select(range(8))
+    # ensure that we do not have any trailing/leading whitespace, which can cause issues during inference
+    test_data = test_dataset.map(lambda sample: {"prompt": sample["prompt"].strip()})
+    logger.info("Loaded test dataset")
+    logger.info(f"Test dataset has {len(test_data)} samples")
+
+ model = AutoModelForCausalLM.from_pretrained(args.model_dir, device_map="auto", torch_dtype=torch.float16)
+
+    # Load the tokenizer from the same model_dir
+ tokenizer = AutoTokenizer.from_pretrained(args.model_dir)
+
+ logger.info("Successfully loaded the model and tokenizer")
+
+ predictions = []
+ gt_queries = []
+ pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, return_full_text=False)
+ pipe.tokenizer.pad_token_id = model.config.eos_token_id
+ for i, prediction in enumerate(
+ tqdm(
+ pipe(
+ KeyDataset(test_data, "prompt"),
+ max_new_tokens=args.max_new_tokens,
+ do_sample=args.do_sample,
+ temperature=args.temperature,
+ top_k=args.top_k,
+ top_p=args.top_p,
+ repetition_penalty=args.repetition_penalty,
+ batch_size=args.batch_size,
+ pad_token_id=tokenizer.pad_token_id,
+ ),
+ desc="Generating Predictions",
+ )
+ ):
+ predictions.extend(prediction)
+ gt_queries.append(test_data["answer"][i])
+
+ cleaned_predictions = [clean_prediction(prediction["generated_text"]) for prediction in predictions]
+
+ prediction_result = [
+ 1 if normalise_string(prediction) == normalise_string(query) else 0
+ for prediction, query in zip(cleaned_predictions, gt_queries)
+ ]
+
+ # compute accuracy
+ accuracy = sum(prediction_result) / len(prediction_result)
+
+ logger.info(f"Accuracy: {accuracy * 100:.2f}%")
+
+ eval_report_dict = {"metrics": {"accuracy": {"value": accuracy, "standard_deviation": "NaN"}}}
+
+ os.makedirs(args.output_dir, exist_ok=True)
+ logger.info(
+ f"""Writing out evaluation report with accuracy score: {accuracy}
+ for a total number of samples of {len(prediction_result)}."""
+ )
+ evaluation_path = os.path.join(args.output_dir, "evaluation.json")
+
+ with open(evaluation_path, "w") as f:
+ f.write(json.dumps(eval_report_dict))
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser()
+ parser.add_argument(
+ "--model_dir",
+ type=str,
+ default="/opt/ml/processing/model",
+ help="Local path to load model",
+ )
+ parser.add_argument(
+ "--test_data_dir",
+ type=str,
+ default="/opt/ml/processing/test",
+ help="Local path to load test data",
+ )
+ parser.add_argument(
+ "--output_dir",
+ type=str,
+ default="/opt/ml/processing/evaluation",
+ help="Directory where output will be saved.",
+ )
+ parser.add_argument(
+ "--max_new_tokens",
+ type=int,
+ default=256,
+ help="Maximum number of new tokens to generate.",
+ )
+    parser.add_argument(
+        "--do_sample",
+        type=lambda v: str(v).lower() in ("true", "1"),
+        default=True,
+        help="Whether to use sampling for generation.",
+    )
+ parser.add_argument(
+ "--temperature",
+ type=float,
+ default=0.001,
+ help="Sampling temperature for generation.",
+ )
+ parser.add_argument("--top_k", type=int, default=50, help="Value of top-k sampling.")
+ parser.add_argument("--top_p", type=float, default=0.95, help="Value of top-p sampling.")
+ parser.add_argument(
+ "--repetition_penalty",
+ type=float,
+ default=1.03,
+ help="Repetition penalty for generation.",
+ )
+ parser.add_argument("--batch_size", type=int, default=6, help="Batch size for inference.")
+ parser.add_argument(
+ "--dry_run",
+ type=str,
+ default="True",
+ help="Run with subset of data for testing",
+ )
+
+ args, _ = parser.parse_known_args()
+ logger.info("Starting model evaluation...")
+ evaluate_model(args)
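The exact-match metric above stands or falls with the two normalisation helpers; the following self-contained sketch shows how they compose on a toy prediction (regex and rules mirrored from the script):

```python
import re

def clean_prediction(generated_response: str) -> str:
    # Capture everything up to the closing </SQL> tag.
    matches = re.findall(r"\s*(.*?)\s*<\/SQL>", generated_response, re.DOTALL)
    return matches[0] if matches else ""

def normalise_string(s: str) -> str:
    # Drop spaces/newlines, lowercase, unify quotes, drop a trailing ';'.
    normalized = s.translate(str.maketrans("", "", " \n")).lower()
    return normalized.replace("'", '"').rstrip(";").strip()

prediction = clean_prediction("SELECT * FROM users ; </SQL> [EOS]")
ground_truth = "select * from USERS"
match = int(normalise_string(prediction) == normalise_string(ground_truth))
# match == 1: case, spacing, and the trailing semicolon are all ignored
```

Averaging `match` over the test set yields the accuracy written to `evaluation.json`; a response with no `</SQL>` tag cleans to the empty string and simply scores 0.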
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/source_scripts/preprocess.py b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/source_scripts/preprocess.py
new file mode 100644
index 00000000..06914e87
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/source_scripts/preprocess.py
@@ -0,0 +1,77 @@
+import argparse
+import logging
+
+from data_processing import CodeLlamaDataProcessor
+
+logger = logging.getLogger()
+logger.setLevel(logging.INFO)
+logger.addHandler(logging.StreamHandler())
+
+
+def preprocess(args):
+ """
+ Preprocess the dataset and save it to disk.
+
+ Args:
+ args: input arguments from SageMaker pipeline
+
+ Returns:
+
+ """
+ dataset = CodeLlamaDataProcessor.load_hf_dataset(dataset_name=args.dataset_name)
+
+ data_processor_training = CodeLlamaDataProcessor(dataset=dataset["train"], is_training=True)
+ dataset_processor_test = CodeLlamaDataProcessor(dataset=dataset["validation"], is_training=False)
+
+ logger.info("Processing training dataset.")
+ dataset_train = data_processor_training.prepare_data()
+
+ logger.info("Processing test dataset.")
+ dataset_test = dataset_processor_test.prepare_data()
+
+    # String comparison is needed because of the way arguments are passed to a SageMaker Processing Job
+    if args.dry_run != "False":
+ logger.info(
+ """Dry run, only processing a couple of examples for testing and demonstration.
+ If this is not intended, please set the flag dry_run to False."""
+ )
+ dataset_train = dataset_train.select(range(12))
+ dataset_test = dataset_test.select(range(8))
+
+ logger.info(f"Writing out datasets to {args.train_data_path} and {args.test_data_path}")
+ dataset_train.save_to_disk(args.train_data_path)
+ dataset_test.save_to_disk(args.test_data_path)
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser()
+    # add dataset and data path arguments
+ parser.add_argument(
+ "--dataset_name",
+ type=str,
+ default="philikai/Spider-SQL-LLAMA2_train",
+ help="HuggingFace dataset to use",
+ )
+ parser.add_argument(
+ "--train_data_path",
+ type=str,
+ default="/opt/ml/processing/train",
+ help="Local path to save train data",
+ )
+ parser.add_argument(
+ "--test_data_path",
+ type=str,
+ default="/opt/ml/processing/test",
+ help="Local path to save test data",
+ )
+ parser.add_argument(
+ "--dry_run",
+ type=str,
+ default="True",
+ help="Run with subset of data for testing",
+ )
+
+ args, _ = parser.parse_known_args()
+ logger.info("Starting preprocessing")
+ preprocess(args)
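The `dry_run != "False"` comparison in `preprocess` looks odd but is deliberate: a SageMaker Processing Job passes every argument as a string, so a naive truthiness check on the flag would always pass. A minimal illustration:

```python
# Arguments arrive from the processing job as strings, so a disabled
# flag is the literal string "False", which is truthy.
dry_run = "False"
assert bool(dry_run) is True       # naive truthiness gives the wrong answer
is_dry_run = dry_run != "False"    # explicit string comparison is correct
assert is_dry_run is False
```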
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/source_scripts/requirements.txt b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/source_scripts/requirements.txt
new file mode 100644
index 00000000..bac7b0cc
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/source_scripts/requirements.txt
@@ -0,0 +1,7 @@
+transformers==4.33.0
+torch>=2.0.0
+peft==0.4.0
+accelerate==0.21.0
+bitsandbytes==0.40.2
+safetensors>=0.3.1
+tokenizers>=0.13.3
\ No newline at end of file
diff --git a/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/source_scripts/train.py b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/source_scripts/train.py
new file mode 100644
index 00000000..53784186
--- /dev/null
+++ b/modules/sagemaker/sagemaker-templates-service-catalog/templates/finetune_llm_evaluation/seed_code/build_app/source_scripts/train.py
@@ -0,0 +1,265 @@
+import argparse
+import logging
+import os
+
+import bitsandbytes as bnb
+import torch
+from data_processing import CodeLlamaDataProcessor
+from datasets import load_from_disk
+from peft import (
+ AutoPeftModelForCausalLM,
+ LoraConfig,
+ PeftConfig,
+ PeftModel,
+ TaskType,
+ get_peft_model,
+ prepare_model_for_kbit_training,
+)
+from peft.tuners.lora import LoraLayer
+from transformers import (
+ AutoModelForCausalLM,
+ AutoTokenizer,
+ BitsAndBytesConfig,
+ Trainer,
+ TrainingArguments,
+ default_data_collator,
+ set_seed,
+)
+
+logger = logging.getLogger()
+logger.setLevel(logging.INFO)
+logger.addHandler(logging.StreamHandler())
+
+
+def find_all_linear_names(model) -> list[str]:
+ """
+ Find all the names of linear layers in the model.
+ Args:
+ model: the model to search for linear layers
+
+ Returns: List containing linear layers
+
+ """
+ lora_module_names = set()
+ for name, module in model.named_modules():
+ if isinstance(module, bnb.nn.Linear4bit):
+ names = name.split(".")
+ lora_module_names.add(names[0] if len(names) == 1 else names[-1])
+
+ if "lm_head" in lora_module_names: # needed for 16-bit
+ lora_module_names.remove("lm_head")
+ return list(lora_module_names)
+
+
+def create_peft_model(model, gradient_checkpointing: bool = True, bf16: bool = True) -> PeftModel:
+ """
+ Create a PEFT model from a HuggingFace model.
+
+ Args:
+ model: the HuggingFace model to create the PEFT model from
+ gradient_checkpointing: whether to use gradient checkpointing
+ bf16: whether to use bf16
+
+ Returns: the PEFT model
+
+ """
+ # prepare int-4 model for training
+ model = prepare_model_for_kbit_training(model, use_gradient_checkpointing=gradient_checkpointing)
+ if gradient_checkpointing:
+ model.gradient_checkpointing_enable()
+
+ # get lora target modules
+ modules = find_all_linear_names(model)
+    logger.info(f"Found {len(modules)} LoRA target modules: {modules}")
+
+ peft_config = LoraConfig(
+ r=64,
+ lora_alpha=32,
+ target_modules=modules,
+ lora_dropout=0.1,
+ bias="none",
+ task_type=TaskType.CAUSAL_LM,
+ )
+
+ model = get_peft_model(model, peft_config)
+
+    # pre-process the model by upcasting the layer norms to float 32 for training stability
+ for name, module in model.named_modules():
+ if isinstance(module, LoraLayer):
+ if bf16:
+ module = module.to(torch.bfloat16)
+ if "norm" in name:
+ module = module.to(torch.float32)
+ if "lm_head" in name or "embed_tokens" in name:
+ if hasattr(module, "weight"):
+ if bf16 and module.weight.dtype == torch.float32:
+ module = module.to(torch.bfloat16)
+
+ model.print_trainable_parameters()
+
+ return model
+
+
+def train(args):
+ """
+ Fine-tune model from HuggingFace and save it.
+ Args:
+ args: input arguments from SageMaker pipeline
+
+ Returns:
+
+ """
+ # os.environ["WANDB_DISABLED"] = "true"
+ # set seed
+ set_seed(args.seed)
+
+ dataset = load_from_disk(args.train_data)
+
+ # create tokenized dataset
+ tokenized_dataset = CodeLlamaDataProcessor.chunk_and_tokenize(
+ prompt_dataset=dataset, chunk_length=args.chunk_length
+ )
+
+ # load model from the hub with a bnb config
+ bnb_config = BitsAndBytesConfig(
+ load_in_4bit=True,
+ bnb_4bit_use_double_quant=True,
+ bnb_4bit_quant_type="nf4",
+ bnb_4bit_compute_dtype=torch.bfloat16,
+ )
+
+ # set the pad token to the eos token to ensure that the model will pick it up during training
+ tokenizer = AutoTokenizer.from_pretrained(args.model_id)
+ tokenizer.pad_token = tokenizer.eos_token
+
+ model = AutoModelForCausalLM.from_pretrained(
+ args.model_id,
+        use_cache=not args.gradient_checkpointing,  # caching is incompatible with gradient checkpointing
+ device_map="auto",
+ quantization_config=bnb_config,
+ )
+
+ # create peft config
+ model = create_peft_model(model, gradient_checkpointing=args.gradient_checkpointing, bf16=args.bf16)
+
+ # Define training args
+ training_args = TrainingArguments(
+ output_dir=args.output_data_dir,
+ per_device_train_batch_size=args.per_device_train_batch_size,
+ bf16=args.bf16, # Use BF16 if available
+ learning_rate=args.lr,
+ num_train_epochs=args.epochs,
+ gradient_checkpointing=args.gradient_checkpointing,
+ # logging strategies
+ logging_dir=f"{args.output_data_dir}/logs",
+ logging_strategy="steps",
+ logging_steps=10,
+ save_strategy="no",
+ report_to=[],
+ )
+
+ # Create Trainer instance
+ trainer = Trainer(
+ model=model,
+ args=training_args,
+ train_dataset=tokenized_dataset,
+ data_collator=default_data_collator,
+ )
+
+ logger.info("Start training")
+ # Start training
+ trainer.train()
+
+ logger.info("Save Model")
+ # save model
+ trainer.save_model()
+
+ # free the memory again
+ del model
+ del trainer
+ torch.cuda.empty_cache()
+
+ #### MERGE PEFT AND BASE MODEL ####
+
+ logger.info("Merge Base model with Adapter")
+ # Load PEFT model on CPU
+ config = PeftConfig.from_pretrained(args.output_data_dir)
+ model = AutoPeftModelForCausalLM.from_pretrained(
+ args.output_data_dir,
+ torch_dtype=torch.float16,
+ low_cpu_mem_usage=True,
+ )
+ # Merge LoRA and base model and save
+ merged_model = model.merge_and_unload()
+ merged_model.save_pretrained(args.model_dir, safe_serialization=True, max_shard_size="2GB")
+
+ # save tokenizer for easy inference
+ logger.info("Saving tokenizer")
+ tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
+ tokenizer.save_pretrained(args.model_dir)
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser()
+ # add model id and dataset path argument
+ parser.add_argument(
+ "--model_id",
+ type=str,
+ help="Model id to use for training.",
+ )
+ parser.add_argument(
+ "--model_dir",
+ type=str,
+ default=os.environ.get("SM_MODEL_DIR"),
+ help="Directory inside the container where the final model will be saved.",
+ )
+ parser.add_argument(
+ "--output_data_dir",
+ type=str,
+ default=os.environ.get("SM_OUTPUT_DATA_DIR"),
+ )
+ parser.add_argument(
+ "--train_data",
+ type=str,
+ default=os.environ.get("SM_CHANNEL_TRAINING"),
+ help="Directory with the training data.",
+ )
+ parser.add_argument("--epochs", type=int, default=3, help="Number of epochs to train for.")
+ parser.add_argument(
+ "--per_device_train_batch_size",
+ type=int,
+ default=1,
+ help="Batch size to use for training.",
+ )
+ parser.add_argument(
+ "--chunk_length",
+ type=int,
+ default=2048,
+ help="Chunk length for tokenized dataset.",
+ )
+ parser.add_argument("--lr", type=float, default=5e-5, help="Learning rate to use for training.")
+ parser.add_argument("--seed", type=int, default=42, help="Seed to use for training.")
+    parser.add_argument(
+        "--gradient_checkpointing",
+        type=lambda v: str(v).lower() in ("true", "1"),
+        default=True,
+        help="Whether to use gradient checkpointing.",
+    )
+    parser.add_argument(
+        "--bf16",
+        type=lambda v: str(v).lower() in ("true", "1"),
+        default=torch.cuda.is_available() and torch.cuda.get_device_capability()[0] >= 8,
+        help="Whether to use bf16 (defaults to True on Ampere or newer GPUs).",
+    )
+ parser.add_argument(
+ "--merge_weights",
+        type=lambda v: str(v).lower() in ("true", "1"),
+ default=True,
+ help="Whether to merge LoRA weights with base model.",
+ )
+ args, _ = parser.parse_known_args()
+ train(args)
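Boolean hyperparameters are a recurring pitfall for training entry points: SageMaker passes them as strings, and `argparse`'s built-in `type=bool` maps the string `"False"` to `True` because any non-empty string is truthy. A hedged sketch of an explicit parser (`str2bool` is an illustrative helper, not part of the script):

```python
import argparse

def str2bool(value: str) -> bool:
    # bool("False") is True, so interpret the string explicitly instead.
    return str(value).lower() in ("true", "1", "yes")

parser = argparse.ArgumentParser()
parser.add_argument("--bf16", type=str2bool, default=True)
args = parser.parse_args(["--bf16", "False"])
assert args.bf16 is False  # with type=bool this would have been True
```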