GA release version for Placement Groups guide (#7058)
* GA release version for Placement Groups guide

* Fixes for broken links

* added PG listing to product availability

* Fixes for before you begin sections

* Fixes for links

* Typo fix
Vernholio authored Jul 29, 2024
1 parent 13e0600 commit 081c44b
Showing 3 changed files with 110 additions and 88 deletions.
---
title: "Work with Placement Groups"
description: "Learn how to group your compute instances to best meet your delivery model."
published: 2024-06-20
modified: 2024-07-30
keywords: ["placement-group", "affinity", "compliance"]
---

When you deploy several compute instances in one of our compute regions, they're allocated to physical machines (“hosts”) with available resources. However, depending on your workload requirements, you may need your compute instances to follow specific placement strategies:

- **Grouped-together**. You may want them placed close together to reduce latency between compute instances that are used for an application or workload.

- **Spread apart**. You may want to disperse them across several hosts to support high availability, for example when required for failover.

Placement groups let you place your compute instances across hosts in a region to meet either of these models.

## Overview

Setting up a placement group is a simple process using Cloud Manager, the Linode API, or our CLI. Create a new group in a supported region and add new or existing compute instances from that region to your group. When you assign compute instances to the placement group, we physically place them based on your desired model.

## Availability

Placement Groups is available in all [core compute regions](/docs/products/platform/get-started/guides/choose-a-data-center/#product-availability) that support compute instances.

{{< note >}}
Currently, placement groups aren't supported in distributed compute regions.
{{< /note >}}

## Placement groups and compliance

Review these sections for an understanding of the placement groups concept.

### Placement group type

To distribute your compute instances in a placement group, we use industry-recognized placement strategies. When creating a new group, you select from one of two placement group types:

- **Affinity**. When you assign compute instances to the group, we place them physically close together, possibly on the same host. This supports the grouped-together model and is best for applications that require low latency.

- **Anti-affinity**. When you assign compute instances to the group, we place them on separate hosts, but keep them in the same region. This supports the spread-apart model for high availability.

{{< note >}}
Currently, only the **Anti-affinity** placement group type is supported.
{{< /note >}}

### Compliance

Your placement group is in compliance if all of the compute instances in it currently meet your grouped-together or spread-apart model, based on your selected [placement group type](#placement-group-type).

- When you create a new placement group and assign compute instances to it, we'll place them as necessary to make sure the group is compliant with your selected placement group type. There's nothing you need to do to apply compliance at this phase.

- Compliance comes into play when you add more compute instances to your placement group in the future. For example, assume you've set **Anti-affinity** as your placement group type. Your group already has three qualifying compute instances in separate hosts, to support the spread-apart model. If a fourth compute instance is added that's on the _same host_ as one of the existing three, this would make the placement group non-compliant. Compliance at this stage is enforced by your selected placement group policy.
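The compliance rule for the spread-apart model can be illustrated with a short sketch. This is not the service's actual implementation; the host IDs are hypothetical, since the real service tracks placement internally:

```python
# Illustrative sketch: an anti-affinity group is compliant when no two of
# its compute instances share a physical host. Host IDs are hypothetical.

def is_anti_affinity_compliant(instance_hosts: dict[str, str]) -> bool:
    """True if no two instances in the group share a host."""
    hosts = list(instance_hosts.values())
    return len(hosts) == len(set(hosts))

# Three instances on separate hosts: the group is compliant.
group = {"web-1": "host-a", "web-2": "host-b", "web-3": "host-c"}
print(is_anti_affinity_compliant(group))  # True

# A fourth instance lands on host-a: the group is now non-compliant.
group["web-4"] = "host-a"
print(is_anti_affinity_compliant(group))  # False
```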

### Placement group policy

The placement group policy determines how we handle requests to add compute instances to your placement group in the future, and whether or not it stays compliant.

- **Strict (Best practice)**. If a target compute instance breaks the grouped-together or spread-apart model set by your placement group type, it won't be added. Use this to ensure the placement group stays compliant.

- **Flexible**. A target compute instance will be added even if it breaks the grouped-together or spread-apart model set by your placement group type. This makes the placement group non-compliant. Use this if you need more flexibility to add future compute instances and compliance isn't an immediate concern.
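The difference between the two policies can be sketched as follows. This is an illustration only, with hypothetical host IDs and error handling; the real API returns its own error responses:

```python
# Illustrative sketch of the two placement group policies under an
# anti-affinity group type. Host IDs and the ValueError are hypothetical.

def add_instance(group_hosts: list[str], new_host: str,
                 policy: str) -> tuple[list[str], bool]:
    """Try to add an instance (identified by its host) to the group.

    Returns the updated host list and whether the group is still compliant.
    """
    conflict = new_host in group_hosts  # same host violates anti-affinity
    if policy == "strict" and conflict:
        raise ValueError("would violate anti-affinity; instance not added")
    return group_hosts + [new_host], not conflict

hosts = ["host-a", "host-b"]

# Flexible: the instance is added, but the group becomes non-compliant.
_, compliant = add_instance(hosts, "host-a", policy="flexible")
print(compliant)  # False

# Strict: the same request is rejected, keeping the group compliant.
try:
    add_instance(hosts, "host-a", policy="strict")
except ValueError as err:
    print(err)
```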

### Fix non-compliance

If a placement group becomes non-compliant, we're alerted. We'll move an out-of-compliance compute instance once the necessary capacity is available in the region. Non-compliance can only be fixed by Akamai staff; **_you can't fix it yourself_**.

Based on your selected placement group policy, here are the ways a placement group can become non-compliant:

- **Strict**. There are rare cases when we may need to fail over or migrate your compute instances for maintenance.

- **Flexible**. A placement group using this policy can become non-compliant whenever you add a compute instance that breaks the placement group type's grouped-together or spread-apart model.

{{< note >}}
Fixing non-compliance for **Strict** placement groups is prioritized over **Flexible** groups.
{{< /note >}}

## Create a placement group

{{< tabs >}}
{{< tab "Cloud Manager" >}}
Here's how to create a new placement group and add existing compute instances to it.

#### Before you begin

* Review [Placement groups and compliance](#placement-groups-and-compliance) to understand the placement group concept.
* Review the [Technical specifications](#technical-specifications) for details on what's supported.

#### Creation process

1. Navigate to the **Placement Groups** page in [Akamai Cloud Manager](https://cloud.linode.com) and click **Create Placement Groups**. The **Create Placement Group** form opens.

2. Fill out the form with your desired settings:

- **Label**. Give your placement group an easily recognizable name.
- **Region**. Select the [core compute region](#availability) that includes the compute instances you want to add.
    - **Placement Group Type**. Select the [placement group type](#placement-group-type) that meets your model.
    - **Placement Group Policy**. Pick how you want to [enforce](#placement-group-policy) compliance for your placement group, when adding compute instances to it in the future.

{{< note >}}
- Currently, only **Anti-affinity** is available for Placement Group Type.
- Once you create your placement group, you *can't change* its Placement Group Policy.
{{< /note >}}

3. When you're ready, click **Create Placement Group**. A summary of your group is displayed.

4. Select the **Linodes (0)** tab.

5. Click **Assign Linode to Placement Group**. The Assign Linodes form opens.

6. The **Linodes in \<Region\>** drop-down is auto-populated with eligible compute instances in your selected region. Pick one to add and click **Assign Linode**.

<div align=center>
<img src="pg-added-linode-v1.png" width=600 />
</div>

7. Review the **Linode limit for this placement group**, and repeat steps 5-6 to add more compute instances, as necessary.

{{< note >}}
Currently, you’re limited to a maximum of 5 compute instances in a placement group.
{{< /note >}}

With all your compute instances added, we begin provisioning by moving them into the placement group to meet your selected Placement Group Type.


{{< /tab >}}
{{< tab "Linode API" >}}
Here, we combine API operations to create a new placement group and add existing compute instances to it.

#### Before you begin

* Review [Placement groups and compliance](#placement-groups-and-compliance) to understand the placement group concept.
* Review the [Technical specifications](#technical-specifications) for details on what's supported.

#### List regions

Run this request to view details for the region where you want to create your placement group:

```command
curl -H "Authorization: Bearer $TOKEN" \
https://api.linode.com/v4/regions/us-east
```
{{< note >}}
Currently, you can have a maximum of 5 compute instances in a placement group.
{{< /note >}}

#### List compute instances
Run this request to view the IP addresses of your compute instances and identify the ones you want to add:

```command
curl -H "Authorization: Bearer $TOKEN" \
https://api.linode.com/v4/networking/ips
```

#### Create the new placement group

Run this request to create a new placement group. Store the `id` value that's generated for it.

- `label`. Give your placement group an easily recognizable name.
- `region`. Set this to the `label` you stored for your region.
- `placement_group_type`. Set this to the [affinity](#placement-group-type) that meets your model.
- `placement_group_policy`. Define how to [enforce](#placement-group-policy) compliance for your placement group, when adding compute instances to it in the future. Set to `strict` for strict enforcement or `flexible` for flexible enforcement.

{{< note >}}
- Currently, only anti-affinity (`anti-affinity:local`) is available for `placement_group_type`.
- Once you create your placement group, you *can't change* its `placement_group_policy` enforcement setting.
{{< /note >}}

```command
curl -H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-X POST -d '{
"label": "new-placement-group",
"region": "us-east",
"placement_group_type": "anti_affinity:local",
"placement_group_policy": "strict"
}' \
https://api.linode.com/v4/placement/groups
```
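If you script group creation, you can build and sanity-check the same request body before sending it with your HTTP client of choice. This is a sketch only; the allowed values mirror the fields described in this guide, so verify them against the Linode API reference:

```python
# Sketch: build and validate a POST /v4/placement/groups request body.
# The allowed values mirror this guide; confirm them against the Linode
# API reference before relying on this.
import json

ALLOWED_TYPES = {"anti_affinity:local"}      # only anti-affinity today
ALLOWED_POLICIES = {"strict", "flexible"}

def build_create_payload(label: str, region: str,
                         group_type: str, policy: str) -> str:
    if group_type not in ALLOWED_TYPES:
        raise ValueError(f"unsupported placement_group_type: {group_type}")
    if policy not in ALLOWED_POLICIES:
        raise ValueError(f"unsupported placement_group_policy: {policy}")
    return json.dumps({
        "label": label,
        "region": region,
        "placement_group_type": group_type,
        "placement_group_policy": policy,
    })

print(build_create_payload("new-placement-group", "us-east",
                           "anti_affinity:local", "strict"))
```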
#### Assign compute instances

Run this request to assign compute instances to your placement group, using the placement group's `id` in the URL. The `linodes` array contains the `id` values of the compute instances to assign; the values shown here are examples.

```command
curl -H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-X POST -d '{
  "linodes": [
    123,
    456
  ]
}' \
https://api.linode.com/v4/placement/groups/12/assign
```
With all your compute instances added, we begin provisioning by placing them into the placement group to meet your selected `placement_group_type`.
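If you automate this, you may want to wait until the group reports compliance before relying on it. A hedged sketch: `fetch_group` stands in for a real GET request to the placement group endpoint, and the `is_compliant` field is an assumption to verify against the API reference:

```python
# Sketch: poll a placement group until it reports compliance. fetch_group
# is a stand-in for a real API call; the "is_compliant" field is an
# assumption to verify against the Linode API reference.
import time

def wait_for_compliance(fetch_group, group_id, attempts=5, delay=0.0):
    """Return True once the group reports compliance, else False."""
    for _ in range(attempts):
        if fetch_group(group_id).get("is_compliant"):
            return True
        time.sleep(delay)
    return False

# Stubbed responses: non-compliant twice, then compliant.
responses = iter([{"is_compliant": False},
                  {"is_compliant": False},
                  {"is_compliant": True}])
print(wait_for_compliance(lambda _id: next(responses), 12))  # True
```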

#### More with the Placement Groups API

There are several other operations in the [Linode API](https://techdocs.akamai.com/linode-api/reference/post-placement-group) you can use to interact with placement groups.

{{< /tab >}}
{{< /tabs >}}

## Technical specifications

- Placement groups support dedicated, premium, and shared compute instance plans. You can mix dedicated and shared plan compute instances in the same placement group, but you can't mix premium plans with any other type.

- Specialty hardware, such as GPUs, isn't supported.

- A compute instance can only exist in one placement group.

- The maximum number of compute instances in a placement group is currently five. This quantity is reflected in Cloud Manager when reviewing your placement group. With the API, the [GET /v4/regions/\{regionid\}](/docs/api/regions/#region-view) operation contains the `maximum_linodes_per_pg` element that displays this maximum.

- Placement groups can be renamed or deleted. To delete a placement group, you need to remove all compute instances from it.

- When you remove a compute instance from a placement group, it continues to function as-is, but the placement decisions are no longer guided by the group's Placement Group Type.

- Entry points to create a placement group are also available when creating a new compute instance or editing an existing one.
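If you manage placement groups programmatically, the per-group instance limit can be read from the regions response and checked before assigning. A sketch; the nesting of `maximum_linodes_per_pg` shown here is an assumption to verify against the API reference:

```python
# Sketch: check whether a placement group has room for another instance.
# The nesting of maximum_linodes_per_pg inside "placement_group_limits"
# is an assumption; verify it against the Linode API reference.

sample_region = {
    "id": "us-east",
    "placement_group_limits": {       # assumed field nesting
        "maximum_linodes_per_pg": 5,
    },
}

def has_room(region: dict, current_count: int) -> bool:
    """True if one more instance fits under the region's per-group limit."""
    limit = region["placement_group_limits"]["maximum_linodes_per_pg"]
    return current_count < limit

print(has_room(sample_region, 4))  # True: one slot left
print(has_room(sample_region, 5))  # False: the group is full
```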