diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index f7e9acd152..3501b4095f 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -101,7 +101,8 @@ repos:
         # MD034 - Bare URL used
         # MD041 - First line in file should be a top level header
         # MD046 - Code block style
-        args: [--disable-rules, "MD013,MD022,MD033,MD034,MD041,MD046", scan]
+        # MD024 - Multiple headings cannot contain the same content.
+        args: [--disable-rules, "MD013,MD022,MD033,MD034,MD041,MD046,MD024", scan]
   - repo: https://github.com/jumanjihouse/pre-commit-hooks
     rev: "3.0.0"
     hooks:
diff --git a/examples/README.md b/examples/README.md
index 6acd823bde..e7125276f4 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -13,6 +13,7 @@ md_toc github examples/README.md | sed -e "s/\s-\s/ * /"
 * [Blueprint Descriptions](#blueprint-descriptions)
   * [hpc-slurm.yaml](#hpc-slurmyaml-) ![core-badge]
   * [hpc-enterprise-slurm.yaml](#hpc-enterprise-slurmyaml-) ![core-badge]
+  * [hpc-slurm6.yaml](#hpc-slurm6yaml-) ![community-badge] ![experimental-badge]
   * [ml-slurm.yaml](#ml-slurmyaml-) ![core-badge]
   * [image-builder.yaml](#image-builderyaml-) ![core-badge]
   * [serverless-batch.yaml](#serverless-batchyaml-) ![core-badge]
@@ -264,6 +265,35 @@ to 256
 
 [hpc-enterprise-slurm.yaml]: ./hpc-enterprise-slurm.yaml
 
+### [hpc-slurm6.yaml] ![community-badge] ![experimental-badge]
+
+> **Warning**: Requires additional dependencies **to be installed on the system deploying the infrastructure**.
+>
+> ```shell
+> # Install Python3 and run
+> pip3 install -r https://raw.githubusercontent.com/GoogleCloudPlatform/slurm-gcp/6.2.1/scripts/requirements.txt
+> ```
+
+Creates a basic auto-scaling Slurm cluster with mostly default settings. The
+blueprint also creates a new VPC network and a Filestore instance mounted at
+`/home`.
+
+There are two partitions in this example: `debug` and `compute`. The `debug`
+partition uses `n2-standard-2` VMs, which should work out of the box without
+needing to request additional quota. The purpose of the `debug` partition is
+to make sure that first-time users are not immediately blocked by quota
+limitations.
+
+[hpc-slurm6.yaml]: ../community/examples/hpc-slurm6.yaml
+
+#### Compute Partition
+
+There is a `compute` partition that achieves higher performance. Any
+performance analysis should be done on the `compute` partition. By default it
+uses `c2-standard-60` VMs with placement groups enabled. You may need to
+request additional quota for `C2 CPUs` in the region where you are deploying.
+You can select the compute partition by passing the `-p compute` argument
+when running `srun`.
+
 ### [ml-slurm.yaml] ![core-badge]
 
 This blueprint provisions an HPC cluster running the Slurm scheduler with the
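The partition selection described in the new "Compute Partition" section can be sketched as follows. This is a usage sketch only, not part of the patch: it assumes the blueprint above has been deployed and that you are logged in to the cluster's login node; the job command (`hostname`) and node count are illustrative.

```shell
# Quick smoke test on the default debug partition (n2-standard-2 VMs):
srun -N 1 hostname

# Performance runs go to the compute partition (c2-standard-60 VMs,
# placement groups enabled) by selecting it with -p:
srun -p compute -N 2 hostname
```

An equivalent batch script would set `#SBATCH --partition=compute` in its header.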