This API allows you to manage Kubernetes Kapsule and Kosmos clusters.
- Access Control List (ACL) management commands
- Kapsule cluster management commands
- Cluster type management commands
- Manage your Kubernetes Kapsule cluster's kubeconfig files
- Kapsule node management commands
- Kapsule pool management commands
- Available Kubernetes versions commands
Network Access Control Lists (ACLs) allow you to manage inbound network traffic by setting up ACL rules.
Usage:
scw k8s acl
A cluster is a fully managed Kubernetes cluster. It is composed of different pools, each containing the same kind of nodes.
Create a new Kubernetes cluster in a Scaleway region.
Usage:
scw k8s cluster create [arg=value ...]
Args:
Name | Description | |
---|---|---|
project-id | Project ID to use. If none is passed the default project ID will be used | |
type | Type of the cluster (possible values are kapsule, multicloud, kapsule-dedicated-8, kapsule-dedicated-16) | |
name | Required Default: <generated> |
Cluster name |
description | Cluster description | |
tags.{index} | Tags associated with the cluster | |
version | Required Default: latest |
Kubernetes version of the cluster |
cni | Required Default: cilium One of: unknown_cni , cilium , calico , weave , flannel , kilo , none |
Container Network Interface (CNI) plugin running in the cluster |
pools.{index}.name | Name of the pool | |
pools.{index}.node-type | Node type is the type of Scaleway Instance wanted for the pool. Nodes with insufficient memory are not eligible (DEV1-S, PLAY2-PICO, STARDUST). 'external' is a special node type used to provision instances from other cloud providers in a Kosmos Cluster | |
pools.{index}.placement-group-id | Placement group ID in which all the nodes of the pool will be created | |
pools.{index}.autoscaling | Defines whether the autoscaling feature is enabled for the pool | |
pools.{index}.size | Size (number of nodes) of the pool | |
pools.{index}.min-size | Defines the minimum size of the pool. Note that this field is only used when autoscaling is enabled on the pool | |
pools.{index}.max-size | Defines the maximum size of the pool. Note that this field is only used when autoscaling is enabled on the pool | |
pools.{index}.container-runtime | One of: unknown_runtime , docker , containerd , crio |
Customization of the container runtime is available for each pool. Note that docker has been deprecated since version 1.20 and will be removed by version 1.24 |
pools.{index}.autohealing | Defines whether the autohealing feature is enabled for the pool | |
pools.{index}.tags.{index} | Tags associated with the pool | |
pools.{index}.kubelet-args.{key} | Kubelet arguments to be used by this pool. Note that this feature is experimental | |
pools.{index}.upgrade-policy.max-unavailable | The maximum number of nodes that can be not ready at the same time | |
pools.{index}.upgrade-policy.max-surge | The maximum number of nodes to be created during the upgrade | |
pools.{index}.zone | Zone in which the pool's nodes will be spawned | |
pools.{index}.root-volume-type | One of: default_volume_type , l_ssd , b_ssd |
Defines the system volume disk type. Two different types of volume (volume_type ) are provided: l_ssd is a local block storage which means your system is stored locally on your node's hypervisor. b_ssd is a remote block storage which means your system is stored on a centralized and resilient cluster |
pools.{index}.root-volume-size | System volume disk size | |
pools.{index}.public-ip-disabled | Defines if the public IP should be removed from Nodes. To use this feature, your Cluster must have an attached Private Network set up with a Public Gateway | |
autoscaler-config.scale-down-disabled | Disable the cluster autoscaler | |
autoscaler-config.scale-down-delay-after-add | How long after scale up that scale down evaluation resumes | |
autoscaler-config.estimator | One of: unknown_estimator , binpacking |
Type of resource estimator to be used in scale up |
autoscaler-config.expander | One of: unknown_expander , random , most_pods , least_waste , priority , price |
Type of node group expander to be used in scale up |
autoscaler-config.ignore-daemonsets-utilization | Ignore DaemonSet pods when calculating resource utilization for scaling down | |
autoscaler-config.balance-similar-node-groups | Detect similar node groups and balance the number of nodes between them | |
autoscaler-config.expendable-pods-priority-cutoff | Pods with priority below cutoff will be expendable. They can be killed without any consideration during scale down and they won't cause scale up. Pods with null priority (PodPriority disabled) are non expendable | |
autoscaler-config.scale-down-unneeded-time | How long a node should be unneeded before it is eligible to be scaled down | |
autoscaler-config.scale-down-utilization-threshold | Node utilization level, defined as a sum of requested resources divided by capacity, below which a node can be considered for scale down | |
autoscaler-config.max-graceful-termination-sec | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node | |
auto-upgrade.enable | Defines whether auto upgrade is enabled for the cluster | |
auto-upgrade.maintenance-window.start-hour | Start time of the two-hour maintenance window | |
auto-upgrade.maintenance-window.day | One of: any , monday , tuesday , wednesday , thursday , friday , saturday , sunday |
Day of the week for the maintenance window |
feature-gates.{index} | List of feature gates to enable | |
admission-plugins.{index} | List of admission plugins to enable | |
open-id-connect-config.issuer-url | URL of the provider which allows the API server to discover public signing keys. Only URLs using the https:// scheme are accepted. This is typically the provider's discovery URL without a path, for example "https://accounts.google.com" or "https://login.salesforce.com" |
|
open-id-connect-config.client-id | A client ID that all tokens must be issued for | |
open-id-connect-config.username-claim | JWT claim to use as the user name. The default is sub , which is expected to be the end user's unique identifier. Admins can choose other claims, such as email or name , depending on their provider. However, claims other than email will be prefixed with the issuer URL to prevent name collision |
|
open-id-connect-config.username-prefix | Prefix prepended to username claims to prevent name collision (such as system: users). For example, the value oidc: will create usernames like oidc:jane.doe . If this flag is not provided and username_claim is a value other than email , the prefix defaults to ( Issuer URL )# where ( Issuer URL ) is the value of issuer_url . The value - can be used to disable all prefixing |
|
open-id-connect-config.groups-claim.{index} | JWT claim to use as the user's group | |
open-id-connect-config.groups-prefix | Prefix prepended to group claims to prevent name collision (such as system: groups). For example, the value oidc: will create group names like oidc:engineering and oidc:infra |
|
open-id-connect-config.required-claim.{index} | Multiple key=value pairs describing a required claim in the ID token. If set, the claims are verified to be present in the ID token with a matching value | |
apiserver-cert-sans.{index} | Additional Subject Alternative Names for the Kubernetes API server certificate | |
private-network-id | Private network ID for internal cluster communication (cannot be changed later). For Kapsule clusters, if none is provided, a private network will be created | |
organization-id | Organization ID to use. If none is passed the default organization ID will be used | |
region | Default: fr-par One of: fr-par , nl-ams , pl-waw |
Region to target. If none is passed will use default region from the config |
Examples:
Create a Kubernetes cluster named foo with cilium as CNI, running Kubernetes version 1.27.0, with a pool named default composed of 3 DEV1-M nodes
scw k8s cluster create name=foo version=1.27.0 pools.0.size=3 pools.0.node-type=DEV1-M pools.0.name=default
Create a tagged Kubernetes cluster named bar with calico as CNI, running Kubernetes version 1.27.0, with a tagged pool named default composed of 2 RENDER-S nodes with autohealing and autoscaling enabled (between 1 and 10 nodes)
scw k8s cluster create name=bar version=1.27.0 tags.0=tag1 tags.1=tag2 cni=calico pools.0.size=2 pools.0.node-type=RENDER-S pools.0.min-size=1 pools.0.max-size=10 pools.0.autohealing=true pools.0.autoscaling=true pools.0.tags.0=pooltag1 pools.0.tags.1=pooltag2 pools.0.name=default
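Building on the examples above, a cluster can also be created with auto upgrade enabled during a weekly maintenance window. This is a sketch using the auto-upgrade arguments documented above; the cluster name and window values are placeholders:

```shell
scw k8s cluster create name=baz version=1.27.0 \
  pools.0.size=3 pools.0.node-type=DEV1-M pools.0.name=default \
  auto-upgrade.enable=true \
  auto-upgrade.maintenance-window.start-hour=2 \
  auto-upgrade.maintenance-window.day=sunday
```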
Delete a specific Kubernetes cluster and all its associated pools and nodes. Note that this method will not delete any Load Balancer or Block Volume that are associated with the cluster.
Usage:
scw k8s cluster delete <cluster-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
cluster-id | Required | ID of the cluster to delete |
with-additional-resources | Defines whether all volumes (including retain volume type), empty Private Networks and Load Balancers with a name starting with the cluster ID will also be deleted | |
region | Default: fr-par One of: fr-par , nl-ams , pl-waw |
Region to target. If none is passed will use default region from the config |
Examples:
Delete a cluster
scw k8s cluster delete 11111111-1111-1111-1111-111111111111
Delete a cluster with its Block volumes and Load Balancers
scw k8s cluster delete 11111111-1111-1111-1111-111111111111 with-additional-resources=true
Retrieve information about a specific Kubernetes cluster.
Usage:
scw k8s cluster get <cluster-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
cluster-id | Required | ID of the requested cluster |
region | Default: fr-par One of: fr-par , nl-ams , pl-waw |
Region to target. If none is passed will use default region from the config |
Examples:
Get information about a cluster
scw k8s cluster get 11111111-1111-1111-1111-111111111111
List all existing Kubernetes clusters in a specific region.
Usage:
scw k8s cluster list [arg=value ...]
Args:
Name | Description | |
---|---|---|
project-id | Project ID on which to filter the returned clusters | |
order-by | One of: created_at_asc , created_at_desc , updated_at_asc , updated_at_desc , name_asc , name_desc , status_asc , status_desc , version_asc , version_desc |
Sort order of returned clusters |
name | Name to filter on, only clusters containing this substring in their name will be returned | |
status | One of: unknown , creating , ready , deleting , deleted , updating , locked , pool_required |
Status to filter on, only clusters with this status will be returned |
type | Type to filter on, only clusters with this type will be returned | |
private-network-id | Private Network ID to filter on, only clusters within this Private Network will be returned | |
organization-id | Organization ID on which to filter the returned clusters | |
region | Default: fr-par One of: fr-par , nl-ams , pl-waw , all |
Region to target. If none is passed will use default region from the config |
Examples:
List all clusters in your default region
scw k8s cluster list
List the ready clusters in your default region
scw k8s cluster list status=ready
List the clusters that match the given name on fr-par ('cluster1' will return 'cluster100' and 'cluster1' but not 'foo')
scw k8s cluster list region=fr-par name=cluster1
List the cluster types that a specific Kubernetes cluster is allowed to switch to.
Usage:
scw k8s cluster list-available-types <cluster-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
cluster-id | Required | Cluster ID for which the available Kubernetes types will be listed |
region | Default: fr-par One of: fr-par , nl-ams , pl-waw |
Region to target. If none is passed will use default region from the config |
Examples:
List all cluster types that a cluster can upgrade to
scw k8s cluster list-available-types 11111111-1111-1111-1111-111111111111
List the versions that a specific Kubernetes cluster is allowed to upgrade to. Results will include every patch version greater than the current patch, as well as one minor version ahead of the current version. Any upgrade skipping a minor version will not work.
Usage:
scw k8s cluster list-available-versions <cluster-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
cluster-id | Required | Cluster ID for which the available Kubernetes versions will be listed |
region | Default: fr-par One of: fr-par , nl-ams , pl-waw |
Region to target. If none is passed will use default region from the config |
Examples:
List all versions that a cluster can upgrade to
scw k8s cluster list-available-versions 11111111-1111-1111-1111-111111111111
Reset the admin token for a specific Kubernetes cluster. This will revoke the old admin token (which will not be usable afterwards) and create a new one. Note that you will need to download the kubeconfig again to keep interacting with the cluster.
Usage:
scw k8s cluster reset-admin-token <cluster-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
cluster-id | Required | Cluster ID on which the admin token will be renewed |
region | Default: fr-par One of: fr-par , nl-ams , pl-waw |
Region to target. If none is passed will use default region from the config |
Examples:
Reset the admin token for a cluster
scw k8s cluster reset-admin-token 11111111-1111-1111-1111-111111111111
Change the type of a specific Kubernetes cluster. To see the possible values you can enter for the type field, list the available cluster types.
Usage:
scw k8s cluster set-type <cluster-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
cluster-id | Required | ID of the cluster to migrate from one type to another |
type | Required | Type of the cluster. Note that some migrations are not possible (please refer to product documentation) |
region | Default: fr-par One of: fr-par , nl-ams , pl-waw |
Region to target. If none is passed will use default region from the config |
Examples:
Convert a kapsule cluster to a kapsule-dedicated-16 cluster
scw k8s cluster set-type 11111111-1111-1111-1111-111111111111 type=kapsule-dedicated-16
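Because some type migrations are not possible, a safe pattern is to check the reachable types first and then switch. This sketch chains the two commands documented in this section; the cluster ID and target type are placeholders:

```shell
# Check which types this cluster can migrate to
scw k8s cluster list-available-types 11111111-1111-1111-1111-111111111111
# Then switch to one of the returned types
scw k8s cluster set-type 11111111-1111-1111-1111-111111111111 type=kapsule-dedicated-8
```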
Update information on a specific Kubernetes cluster. You can update details such as its name, description, tags and configuration. To upgrade a cluster, you will need to use the dedicated endpoint.
Usage:
scw k8s cluster update <cluster-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
cluster-id | Required | ID of the cluster to update |
name | New external name for the cluster | |
description | New description for the cluster | |
tags.{index} | New tags associated with the cluster | |
autoscaler-config.scale-down-disabled | Disable the cluster autoscaler | |
autoscaler-config.scale-down-delay-after-add | How long after scale up that scale down evaluation resumes | |
autoscaler-config.estimator | One of: unknown_estimator , binpacking |
Type of resource estimator to be used in scale up |
autoscaler-config.expander | One of: unknown_expander , random , most_pods , least_waste , priority , price |
Type of node group expander to be used in scale up |
autoscaler-config.ignore-daemonsets-utilization | Ignore DaemonSet pods when calculating resource utilization for scaling down | |
autoscaler-config.balance-similar-node-groups | Detect similar node groups and balance the number of nodes between them | |
autoscaler-config.expendable-pods-priority-cutoff | Pods with priority below cutoff will be expendable. They can be killed without any consideration during scale down and they won't cause scale up. Pods with null priority (PodPriority disabled) are non expendable | |
autoscaler-config.scale-down-unneeded-time | How long a node should be unneeded before it is eligible to be scaled down | |
autoscaler-config.scale-down-utilization-threshold | Node utilization level, defined as a sum of requested resources divided by capacity, below which a node can be considered for scale down | |
autoscaler-config.max-graceful-termination-sec | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node | |
auto-upgrade.enable | Defines whether auto upgrade is enabled for the cluster | |
auto-upgrade.maintenance-window.start-hour | Start time of the two-hour maintenance window | |
auto-upgrade.maintenance-window.day | One of: any , monday , tuesday , wednesday , thursday , friday , saturday , sunday |
Day of the week for the maintenance window |
feature-gates.{index} | List of feature gates to enable | |
admission-plugins.{index} | List of admission plugins to enable | |
open-id-connect-config.issuer-url | URL of the provider which allows the API server to discover public signing keys. Only URLs using the https:// scheme are accepted. This is typically the provider's discovery URL without a path, for example "https://accounts.google.com" or "https://login.salesforce.com" |
|
open-id-connect-config.client-id | A client ID that all tokens must be issued for | |
open-id-connect-config.username-claim | JWT claim to use as the user name. The default is sub , which is expected to be the end user's unique identifier. Admins can choose other claims, such as email or name , depending on their provider. However, claims other than email will be prefixed with the issuer URL to prevent name collision |
|
open-id-connect-config.username-prefix | Prefix prepended to username claims to prevent name collision (such as system: users). For example, the value oidc: will create usernames like oidc:jane.doe . If this flag is not provided and username_claim is a value other than email , the prefix defaults to ( Issuer URL )# where ( Issuer URL ) is the value of issuer_url . The value - can be used to disable all prefixing |
|
open-id-connect-config.groups-claim.{index} | JWT claim to use as the user's group | |
open-id-connect-config.groups-prefix | Prefix prepended to group claims to prevent name collision (such as system: groups). For example, the value oidc: will create group names like oidc:engineering and oidc:infra |
|
open-id-connect-config.required-claim.{index} | Multiple key=value pairs describing a required claim in the ID token. If set, the claims are verified to be present in the ID token with a matching value | |
apiserver-cert-sans.{index} | Additional Subject Alternative Names for the Kubernetes API server certificate | |
region | Default: fr-par One of: fr-par , nl-ams , pl-waw |
Region to target. If none is passed will use default region from the config |
Examples:
Add TTLAfterFinished and ServiceNodeExclusion as feature gates on a cluster
scw k8s cluster update 11111111-1111-1111-1111-111111111111 feature-gates.0=TTLAfterFinished feature-gates.1=ServiceNodeExclusion
Upgrade a specific Kubernetes cluster and possibly its associated pools to a specific and supported Kubernetes version.
Usage:
scw k8s cluster upgrade <cluster-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
cluster-id | Required | ID of the cluster to upgrade |
version | Required | New Kubernetes version of the cluster. Note that the version should either be a higher patch version of the same minor version or the direct minor version after the current one |
upgrade-pools | Defines whether pools will also be upgraded once the control plane is upgraded | |
region | Default: fr-par One of: fr-par , nl-ams , pl-waw |
Region to target. If none is passed will use default region from the config |
Examples:
Upgrade a cluster to version 1.27.0 of Kubernetes (pools are not included)
scw k8s cluster upgrade 11111111-1111-1111-1111-111111111111 version=1.27.0
Upgrade a cluster to version 1.27.0 of Kubernetes (pools are included)
scw k8s cluster upgrade 11111111-1111-1111-1111-111111111111 version=1.27.0 upgrade-pools=true
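Since upgrades cannot skip a minor version, a typical upgrade chains the commands documented above: list the reachable versions, upgrade the control plane and pools, then wait for the cluster to stabilize. A sketch; the cluster ID and version are placeholders:

```shell
# See which versions the cluster can reach
scw k8s cluster list-available-versions 11111111-1111-1111-1111-111111111111
# Upgrade the control plane and all pools to one of them
scw k8s cluster upgrade 11111111-1111-1111-1111-111111111111 version=1.27.0 upgrade-pools=true
# Block until the cluster and its pools are ready again
scw k8s cluster wait 11111111-1111-1111-1111-111111111111 wait-for-pools=true
```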
Wait for a cluster to reach a stable state. This is similar to using the --wait flag on other action commands, but without requiring a new action on the cluster.
Usage:
scw k8s cluster wait <cluster-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
cluster-id | Required | ID of the cluster. |
wait-for-pools | Wait for pools to be ready. | |
region | Default: fr-par |
Region to target. If none is passed will use default region from the config |
timeout | Default: 10m0s |
Timeout of the wait |
Examples:
Wait for a cluster to reach a stable state
scw k8s cluster wait 11111111-1111-1111-1111-111111111111
All cluster types available in a specified region. A cluster type represents the different commercial types of clusters offered by Scaleway.
List available cluster types and their technical details.
Usage:
scw k8s cluster-type list [arg=value ...]
Args:
Name | Description | |
---|---|---|
region | Default: fr-par One of: fr-par , nl-ams , pl-waw , all |
Region to target. If none is passed will use default region from the config |
Retrieve the kubeconfig for a specified cluster.
Usage:
scw k8s kubeconfig get <cluster-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
cluster-id | Required | Cluster ID from which to retrieve the kubeconfig |
region | Default: fr-par |
Region to target. If none is passed will use default region from the config |
Examples:
Get the kubeconfig for a given cluster
scw k8s kubeconfig get 11111111-1111-1111-1111-111111111111
Retrieve the kubeconfig for a specified cluster and write it to disk. The new kubeconfig will be merged into the file pointed to by the KUBECONFIG environment variable. If empty, it defaults to $HOME/.kube/config.
Usage:
scw k8s kubeconfig install <cluster-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
cluster-id | Required | Cluster ID from which to retrieve the kubeconfig |
keep-current-context | Whether or not to keep the current kubeconfig context unmodified | |
region | Default: fr-par |
Region to target. If none is passed will use default region from the config |
Examples:
Install the kubeconfig for a given cluster and switch to its new context
scw k8s kubeconfig install 11111111-1111-1111-1111-111111111111
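After installation, the merged kubeconfig is picked up by kubectl like any other context. A sketch assuming kubectl is installed; the cluster ID is a placeholder:

```shell
# Merge the cluster's kubeconfig into $KUBECONFIG (or ~/.kube/config)
scw k8s kubeconfig install 11111111-1111-1111-1111-111111111111
# The new context is now active
kubectl get nodes
```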
Remove the specified cluster from the kubeconfig file pointed to by the KUBECONFIG environment variable. If empty, it defaults to $HOME/.kube/config. If the current context points to this cluster, it will be set to an empty context.
Usage:
scw k8s kubeconfig uninstall <cluster-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
cluster-id | Required | Cluster ID from which to uninstall the kubeconfig |
Examples:
Uninstall the kubeconfig for a given cluster
scw k8s kubeconfig uninstall 11111111-1111-1111-1111-111111111111
A node (short for worker node) is an abstraction for a Scaleway Instance. A node is always part of a pool. Each node has the Kubernetes software automatically installed and configured by Scaleway.
Delete a specific Node. Note that when there is not enough space to reschedule all the pods (such as in a one-node cluster), disruption of your applications can be expected.
Usage:
scw k8s node delete <node-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
node-id | Required | ID of the node to delete |
skip-drain | Skip draining node from its workload (Note: this parameter is currently inactive) | |
replace | Add a new node after the deletion of this node | |
region | Default: fr-par One of: fr-par , nl-ams , pl-waw |
Region to target. If none is passed will use default region from the config |
Examples:
Delete a node
scw k8s node delete 11111111-1111-1111-1111-111111111111
Delete a node without evicting workloads
scw k8s node delete 11111111-1111-1111-1111-111111111111 skip-drain=true
Replace a node by a new one
scw k8s node delete 11111111-1111-1111-1111-111111111111 replace=true
Retrieve details about a specific Kubernetes Node.
Usage:
scw k8s node get <node-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
node-id | Required | ID of the requested node |
region | Default: fr-par One of: fr-par , nl-ams , pl-waw |
Region to target. If none is passed will use default region from the config |
Examples:
Get a node
scw k8s node get 11111111-1111-1111-1111-111111111111
List all the existing nodes for a specific Kubernetes cluster.
Usage:
scw k8s node list [arg=value ...]
Args:
Name | Description | |
---|---|---|
cluster-id | Required | Cluster ID from which the nodes will be listed |
pool-id | Pool ID on which to filter the returned nodes | |
order-by | One of: created_at_asc , created_at_desc |
Sort order of the returned nodes |
name | Name to filter on, only nodes containing this substring in their name will be returned | |
status | One of: unknown , creating , not_ready , ready , deleting , deleted , locked , rebooting , creation_error , upgrading , starting , registering |
Status to filter on, only nodes with this status will be returned |
region | Default: fr-par One of: fr-par , nl-ams , pl-waw , all |
Region to target. If none is passed will use default region from the config |
Examples:
List all the nodes in the cluster
scw k8s node list cluster-id=11111111-1111-1111-1111-111111111111
List all the nodes in the cluster's pool 22222222-2222-2222-2222-222222222222
scw k8s node list cluster-id=11111111-1111-1111-1111-111111111111 pool-id=22222222-2222-2222-2222-222222222222
List all cluster nodes that are ready
scw k8s node list cluster-id=11111111-1111-1111-1111-111111111111 status=ready
Reboot a specific Node. The node will first be cordoned (scheduling will be disabled on it). The existing pods on the node will then be drained and rescheduled onto another schedulable node. Note that when there is not enough space to reschedule all the pods (such as in a one-node cluster), disruption of your applications can be expected.
Usage:
scw k8s node reboot <node-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
node-id | Required | ID of the node to reboot |
region | Default: fr-par One of: fr-par , nl-ams , pl-waw |
Region to target. If none is passed will use default region from the config |
Examples:
Reboot a node
scw k8s node reboot 11111111-1111-1111-1111-111111111111
Replace a specific Node. The node will first be cordoned (scheduling will be disabled on it). The existing pods on the node will then be drained and rescheduled onto another schedulable node. Note that when there is not enough space to reschedule all the pods (such as in a one-node cluster), disruption of your applications can be expected.
Usage:
scw k8s node replace <node-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
node-id | Required | ID of the node to replace |
region | Default: fr-par One of: fr-par , nl-ams , pl-waw |
Region to target. If none is passed will use default region from the config |
Examples:
Replace a node
scw k8s node replace 11111111-1111-1111-1111-111111111111
Wait for a node to reach a stable state. This is similar to using --wait flag on other action commands, but without requiring a new action on the node.
Usage:
scw k8s node wait <node-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
node-id | Required | ID of the node. |
region | Default: fr-par |
Region to target. If none is passed will use default region from the config |
timeout | Default: 10m0s |
Timeout of the wait |
Examples:
Wait for a node to reach a stable state
scw k8s node wait 11111111-1111-1111-1111-111111111111
A pool is a set of identical nodes. A pool has a name, a size (its desired number of nodes), node number limits (min, max), and a Scaleway Instance type. Changing these limits increases or decreases the size of the pool. As a result, and depending on its load, the pool will grow or shrink within those limits when autoscaling is enabled. A "default pool" is automatically created with every cluster via the console.
Create a new pool in a specific Kubernetes cluster.
Usage:
scw k8s pool create [arg=value ...]
Args:
Name | Description | |
---|---|---|
cluster-id | Required | Cluster ID to which the pool will be attached |
name | Required Default: <generated> |
Pool name |
node-type | Required Default: DEV1-M |
Node type is the type of Scaleway Instance wanted for the pool. Nodes with insufficient memory are not eligible (DEV1-S, PLAY2-PICO, STARDUST). 'external' is a special node type used to provision instances from other cloud providers in a Kosmos Cluster |
placement-group-id | Placement group ID in which all the nodes of the pool will be created | |
autoscaling | Defines whether the autoscaling feature is enabled for the pool | |
size | Required Default: 1 |
Size (number of nodes) of the pool |
min-size | Defines the minimum size of the pool. Note that this field is only used when autoscaling is enabled on the pool | |
max-size | Defines the maximum size of the pool. Note that this field is only used when autoscaling is enabled on the pool | |
container-runtime | One of: unknown_runtime , docker , containerd , crio |
Customization of the container runtime is available for each pool. Note that docker has been deprecated since version 1.20 and will be removed by version 1.24 |
autohealing | Defines whether the autohealing feature is enabled for the pool | |
tags.{index} | Tags associated with the pool | |
kubelet-args.{key} | Kubelet arguments to be used by this pool. Note that this feature is experimental | |
upgrade-policy.max-unavailable | The maximum number of nodes that can be not ready at the same time | |
upgrade-policy.max-surge | The maximum number of nodes to be created during the upgrade | |
zone | Zone in which the pool's nodes will be spawned | |
root-volume-type | One of: default_volume_type , l_ssd , b_ssd |
Defines the system volume disk type. Two different types of volume (volume_type ) are provided: l_ssd is a local block storage which means your system is stored locally on your node's hypervisor. b_ssd is a remote block storage which means your system is stored on a centralized and resilient cluster |
root-volume-size | System volume disk size | |
public-ip-disabled | Defines if the public IP should be removed from Nodes. To use this feature, your Cluster must have an attached Private Network set up with a Public Gateway | |
region | Default: fr-par One of: fr-par , nl-ams , pl-waw |
Region to target. If none is passed will use default region from the config |
Examples:
Create a pool named 'bar' with 2 DEV1-XL on a cluster
scw k8s pool create cluster-id=11111111-1111-1111-1111-111111111111 name=bar node-type=DEV1-XL size=2
Create a pool named 'fish' with 5 GP1-L nodes, autoscaling between 0 and 10 nodes, autohealing enabled, and containerd as the pool's container runtime
scw k8s pool create cluster-id=11111111-1111-1111-1111-111111111111 name=fish node-type=GP1-L size=5 min-size=0 max-size=10 autoscaling=true autohealing=true container-runtime=containerd
Create a tagged pool named 'turtle' with 1 GP1-S node, using the existing placement group 22222222-2222-2222-2222-222222222222 for all the nodes in the pool
scw k8s pool create cluster-id=11111111-1111-1111-1111-111111111111 name=turtle node-type=GP1-S size=1 placement-group-id=22222222-2222-2222-2222-222222222222 tags.0=turtle tags.1=placement-group
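The upgrade-policy arguments documented above can also be set at creation time. A sketch of a pool that adds one surge node and keeps every node schedulable during upgrades; the cluster ID and pool name are placeholders:

```shell
scw k8s pool create cluster-id=11111111-1111-1111-1111-111111111111 name=crab \
  node-type=GP1-S size=3 \
  upgrade-policy.max-surge=1 upgrade-policy.max-unavailable=0
```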
Delete a specific pool from a cluster. Note that all the pool's nodes will also be deleted.
Usage:
scw k8s pool delete <pool-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
pool-id | Required | ID of the pool to delete |
region | Default: fr-par One of: fr-par , nl-ams , pl-waw |
Region to target. If none is passed will use default region from the config |
Examples:
Delete a specific pool
scw k8s pool delete 11111111-1111-1111-1111-111111111111
Retrieve details about a specific pool in a Kubernetes cluster.
Usage:
scw k8s pool get <pool-id ...> [arg=value ...]
Args:
Name | Description | |
---|---|---|
pool-id | Required | ID of the requested pool |
region | Default: fr-par One of: fr-par , nl-ams , pl-waw |
Region to target. If none is passed will use default region from the config |
Examples:
Get a given pool
scw k8s pool get 11111111-1111-1111-1111-111111111111
List all the existing pools for a specific Kubernetes cluster.
Usage:
scw k8s pool list [arg=value ...]
Args:
Name | | Description
---|---|---
cluster-id | Required | ID of the cluster whose pools will be listed
order-by | One of: created_at_asc, created_at_desc, updated_at_asc, updated_at_desc, name_asc, name_desc, status_asc, status_desc, version_asc, version_desc | Sort order of returned pools
name | | Name to filter on, only pools containing this substring in their name will be returned
status | One of: unknown, ready, deleting, deleted, scaling, warning, locked, upgrading | Status to filter on, only pools with this status will be returned
region | Default: fr-par; One of: fr-par, nl-ams, pl-waw, all | Region to target. If none is passed will use default region from the config
Examples:
List all pools for a cluster
scw k8s pool list cluster-id=11111111-1111-1111-1111-111111111111
List all scaling pools for a cluster
scw k8s pool list cluster-id=11111111-1111-1111-1111-111111111111 status=scaling
List all pools containing 'foo' in their name for a cluster
scw k8s pool list cluster-id=11111111-1111-1111-1111-111111111111 name=foo
List all pools for a cluster and order them by ascending creation date
scw k8s pool list cluster-id=11111111-1111-1111-1111-111111111111 order-by=created_at_asc
Update the attributes of a specific pool, such as its desired size, autoscaling settings, and tags.
Usage:
scw k8s pool update <pool-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
pool-id | Required | ID of the pool to update
autoscaling | | New value for the pool autoscaling enablement
size | | New desired pool size
min-size | | New minimum size for the pool
max-size | | New maximum size for the pool
autohealing | | New value for the pool autohealing enablement
tags.{index} | | New tags associated with the pool
kubelet-args.{key} | | New Kubelet arguments to be used by this pool. Note that this feature is experimental
upgrade-policy.max-unavailable | |
upgrade-policy.max-surge | |
region | Default: fr-par; One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Enable autoscaling on a given pool
scw k8s pool update 11111111-1111-1111-1111-111111111111 autoscaling=true
Reduce the size and maximum size of a given pool to 4
scw k8s pool update 11111111-1111-1111-1111-111111111111 size=4 max-size=4
Modify the tags of a given pool
scw k8s pool update 11111111-1111-1111-1111-111111111111 tags.0=my tags.1=new tags.2=pool
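Because tags are passed positionally as `tags.{index}=...`, replacing a pool's tag list from a shell variable means generating one indexed argument per tag. A minimal sketch of a hypothetical helper (not part of the CLI) that builds those arguments:

```shell
# build_tag_args is a hypothetical helper, not an scw command:
# it expands a list of tags into the tags.{index}=... form expected
# by `scw k8s pool update`.
build_tag_args() {
  local i=0 args=""
  for tag in "$@"; do
    args="$args tags.$i=$tag"
    i=$((i + 1))
  done
  # Strip the leading space before printing the argument string.
  echo "${args# }"
}

build_tag_args my new pool
# Prints: tags.0=my tags.1=new tags.2=pool
```

The printed string can then be spliced into the update call, e.g. `scw k8s pool update <pool-id> $(build_tag_args my new pool)`.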
Upgrade the Kubernetes version of a specific pool. Note that it only works if the targeted version matches the cluster's version.
Usage:
scw k8s pool upgrade <pool-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
pool-id | Required | ID of the pool to upgrade
version | Required | New Kubernetes version for the pool
region | Default: fr-par; One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Upgrade a specific pool to the Kubernetes version 1.27.0
scw k8s pool upgrade 11111111-1111-1111-1111-111111111111 version=1.27.0
Wait for a pool to reach a stable state. This is similar to using the --wait flag on other action commands, but without requiring a new action on the pool.
Usage:
scw k8s pool wait <pool-id ...> [arg=value ...]
Args:
Name | | Description
---|---|---
pool-id | Required | ID of the pool
region | Default: fr-par | Region to target. If none is passed will use default region from the config
timeout | Default: 10m0s | Timeout of the wait
Examples:
Wait for a pool to reach a stable state
scw k8s pool wait 11111111-1111-1111-1111-111111111111
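Since `pool wait` blocks until the pool is stable, it pairs naturally with `pool upgrade` when scripting. A sketch of a hypothetical wrapper (the helper function and `DRY_RUN` switch are assumptions, not CLI features; the two `scw` invocations are the commands documented above):

```shell
# Hypothetical helper: upgrade a pool, then block until it is stable again.
# DRY_RUN=1 only prints the commands instead of calling the scw CLI,
# which is useful for inspecting a script without live credentials.
pool_upgrade_and_wait() {
  local pool_id="$1" version="$2"
  run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }
  run scw k8s pool upgrade "$pool_id" version="$version"
  run scw k8s pool wait "$pool_id" timeout=15m
}

DRY_RUN=1 pool_upgrade_and_wait 11111111-1111-1111-1111-111111111111 1.27.0
```

Remember that a pool upgrade only succeeds when the target version matches the cluster's version, so in practice the cluster is upgraded first.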
A version is a vanilla Kubernetes version like x.y.z. It comprises a major version x, a minor version y, and a patch version z. At a minimum, Kapsule (Scaleway's managed Kubernetes) supports the latest patch version of the last three minor releases. Each version also comes with its own set of available CNIs, and possibly different container runtimes, feature gates, and admission plugins. See our Version Support Policy.
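Because versions follow the x.y.z scheme, a version string can be split into its components with plain shell parameter expansion, e.g. to compare a pool's minor version against the cluster's before an upgrade (the version value here is illustrative):

```shell
# Split a Kubernetes version string into major, minor, and patch parts
# using POSIX parameter expansion. "1.27.0" is an illustrative value.
version="1.27.0"
major="${version%%.*}"   # everything before the first dot -> 1
rest="${version#*.}"     # everything after the first dot  -> 27.0
minor="${rest%%.*}"      # before the next dot             -> 27
patch="${rest#*.}"       # after the next dot              -> 0
echo "major=$major minor=$minor patch=$patch"
# Prints: major=1 minor=27 patch=0
```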
Retrieve a specific Kubernetes version and its details.
Usage:
scw k8s version get <version-name ...> [arg=value ...]
Args:
Name | | Description
---|---|---
version-name | Required | Requested version name
region | Default: fr-par; One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
Get the Kubernetes version 1.27.0
scw k8s version get 1.27.0
List all available versions for the creation of a new Kubernetes cluster.
Usage:
scw k8s version list [arg=value ...]
Args:
Name | | Description
---|---|---
region | Default: fr-par; One of: fr-par, nl-ams, pl-waw | Region to target. If none is passed will use default region from the config
Examples:
List all available Kubernetes versions in Kapsule
scw k8s version list