feat(ec2): design dual stack vpc #4

Closed
wants to merge 240 commits into from

Conversation

scanlonp
Owner

Showing this here.

scanlonp and others added 7 commits December 13, 2023 13:52
The init-go canary was broken because the test replaced the aws-cdk go module with a locally built version. However, in canaries we want to use the published version instead. This change simply makes the replacement conditional.

Manually tested in CodeBuild.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
@github-actions github-actions bot added the p2 label Dec 14, 2023
Comment on lines 1597 to 1602
publicSubnet.addRoute('DefaultRoute6', {
routerType: RouterType.GATEWAY,
routerId: this.internetGatewayId!,
destinationIpv6CidrBlock: '::/0',
enablesInternetConnectivity: true,
});

Can we not stick this into a method on publicSubnet as well?
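For illustration, a minimal sketch of what such a helper could look like, assuming a hypothetical `addDefaultIpv6InternetRoute` method on `PublicSubnet` (name and shape are not from this PR):

```ts
// Hypothetical helper on PublicSubnet, mirroring the existing addDefaultInternetRoute();
// shown only to illustrate the suggestion above.
public addDefaultIpv6InternetRoute(gatewayId: string): void {
  this.addRoute('DefaultRoute6', {
    routerType: RouterType.GATEWAY,
    routerId: gatewayId,
    destinationIpv6CidrBlock: '::/0',
    enablesInternetConnectivity: true,
  });
}
```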


Do we use the same internet gateway by the way?


Why is the one this.internetGatewayId! and the other one igw.ref ?

/**
* The protocol of the Vpc
*/
export enum VpcProtocol {

Shouldn't this be IpProtocol, rather than VpcProtocol ? And the field as well?

Owner Author


I would say IpProtocol is redundant. I think VpcProtocol conveys what the property controls most clearly. I am fine to change it, but I personally prefer VpcProtocol.

@@ -1095,6 +1124,24 @@ export interface VpcProps {
* @default true
*/
readonly createInternetGateway?: boolean;

/**
* This property is specific to dual stack VPCs.

Put a note in the docstrings of all other relevant properties that they apply to IPv4 configuration only.
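For example, a possible wording on one of the existing IPv4-only properties (just a sketch of the suggested doc change):

```ts
/**
 * The CIDR range to use for the VPC, e.g. '10.0.0.0/16'.
 *
 * This property configures the IPv4 address space only; it has no effect on
 * the IPv6 side of a dual stack VPC.
 *
 * @default Vpc.DEFAULT_CIDR_RANGE
 */
readonly cidr?: string;
```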

*
* @default true
*/
readonly ipv6AmazonProvidedCidrBlock?: boolean;

Remove this I think?

Ultimately it will become a property more like this one:

readonly ipAddresses?: IIpAddresses;

*
* @default true in Subnet.Public, false in Subnet.Private or Subnet.Isolated.
* @default true in Subnet.Public, false in Subnet.Private or Subnet.Isolated. Always false for dual stack VPC

Maybe phrase as "true in public subnets in IPV4_ONLY vpcs, false otherwise"

@@ -1517,6 +1610,23 @@ export class Vpc extends VpcBase {
}
}

// Create an Egress Only Internet Gateway and attach it if necessary
const createEigw = props.ipv6CreateEgressOnlyInternetGateway ?? true;

If the EIGW is cheap enough, I think we can do without this property.

this.createSubnetResources(requestedSubnets, allocatedSubnets);
let subnetIpv6Cidrs: string[] = [];
if (this.vpcProtocol === VpcProtocol.DUAL_STACK && this.ipv6Cidr !== undefined) {
subnetIpv6Cidrs = Fn.cidr(this.ipv6Cidr, allocatedSubnets.length, (128 - 64).toString());

This feels a bit icky to me. Can't we extend IIpAddresses.allocateSubnetsCidr with allocateSubnetsCidrIpv6(), and defer the calculation to another class?
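As a rough sketch of that direction (the new method name follows the suggestion above; the request type is assumed):

```ts
// Illustrative only: let the addressing strategy hand out IPv6 subnet CIDRs too,
// instead of computing Fn.cidr() splits inline in the Vpc construct.
export interface IIpAddresses {
  allocateVpcCidr(): VpcIpamOptions;
  allocateSubnetsCidr(input: AllocateCidrRequest): SubnetIpamOptions;

  // New (hypothetical request type): IPv6 counterpart of allocateSubnetsCidr().
  allocateSubnetsCidrIpv6(input: AllocateIpv6CidrRequest): SubnetIpamOptions;
}
```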

@@ -1646,24 +1761,60 @@ export class Vpc extends VpcBase {
// For reserved azs, do not create any resources
return;
}
let subnetProps: SubnetProps;
if (this.vpcProtocol === VpcProtocol.IPV4_ONLY) {
// mapPublicIpOnLaunch true in Subnet.Public, false in Subnet.Private or Subnet.Isolated.

Extract all this logic out into a helper function. This is an annoying place to stick it all.

private calculateMapPublicIpOnLaunch(config: SubnetConfig): boolean | undefined;


const subnetProps = {
  mapPublicIpOnLaunch: this.calculateMapPublicIpOnLaunch(subnetConfig),
  // ...etc...
} satisfies SubnetProps;

@@ -1646,24 +1761,60 @@ export class Vpc extends VpcBase {
// For reserved azs, do not create any resources
return;
}
let subnetProps: SubnetProps;
if (this.vpcProtocol === VpcProtocol.IPV4_ONLY) {

Don't do a top-level if on the VPC protocol.

Do a per-property calculation that can depend on the VpcProtocol instead (if relevant).
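For instance, a sketch of a per-property helper in that style (building on the `calculateMapPublicIpOnLaunch` suggestion above; purely illustrative):

```ts
// Illustrative only: the protocol check lives inside the per-property helper,
// not in a top-level branch over the whole subnetProps object.
private calculateMapPublicIpOnLaunch(subnetConfig: SubnetConfiguration): boolean {
  if (this.vpcProtocol !== VpcProtocol.IPV4_ONLY) {
    return false; // always false for dual stack VPCs
  }
  return subnetConfig.mapPublicIpOnLaunch ?? (subnetConfig.subnetType === SubnetType.PUBLIC);
}
```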

/**
* The VPC protocol
*/
private readonly vpcProtocol: VpcProtocol;

Especially if we're going to add IPV6_ONLY, I think it will be easier to work with:

private readonly useIpv4: boolean;
private readonly useIpv6: boolean;
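These flags could be derived once in the constructor, e.g. (sketch; assumes the props field is called `vpcProtocol` and that an `IPV6_ONLY` value may be added later):

```ts
// Sketch: derive both flags from the requested protocol.
const protocol = props.vpcProtocol ?? VpcProtocol.IPV4_ONLY;
this.useIpv4 = protocol !== VpcProtocol.IPV6_ONLY; // IPV6_ONLY is hypothetical for now
this.useIpv6 = protocol !== VpcProtocol.IPV4_ONLY;
```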

luxaritas and others added 21 commits December 14, 2023 18:59
When using the CodePipeline EcsDeployAction without the CODE_DEPLOY deployment controller, future deployments of an ECS service revert the task definition to the one deployed by CloudFormation, even though the latest active revision created by the deploy action is the one intended to be used. This provides a way to specify the specific revision of a task definition to use, including the special value `latest`, which uses the latest ACTIVE revision.
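A rough usage sketch of the feature (the property name `taskDefinitionRevision` and the accepted values are assumed here for illustration, not taken from the change itself):

```ts
declare const service: ecs.IBaseService;
declare const buildOutput: codepipeline.Artifact;

// Sketch only: keep deploying the latest ACTIVE task definition revision
// instead of reverting to the CloudFormation-deployed one.
const deployAction = new codepipeline_actions.EcsDeployAction({
  actionName: 'DeployToECS',
  service,
  input: buildOutput,
  taskDefinitionRevision: 'latest', // property name assumed; a specific revision could be given instead
});
```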

Closes aws#26983.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
This PR adds support for Aurora MySQL 3.05.1.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.3051.html

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…ndlers (aws#28373)

Add dependency from **@aws-cdk/custom-resource-handlers** to **@aws-cdk/aws-amplify-alpha** as part of effort to standardize custom resource creation and bundling of source code.


Verified addition with `yarn install` and `yarn test`. 


Closes aws#28289.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Allows setting hourly rotation, up to every 4 hours, on secrets as per the [official docs](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_managed.html).
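For example (sketch; assumes the existing `addRotationSchedule()` API simply accepts hour-level durations now):

```ts
declare const secret: secretsmanager.Secret;

// Sketch: rotate the secret every 4 hours via the existing rotation-schedule API.
secret.addRotationSchedule('RotationSchedule', {
  hostedRotation: secretsmanager.HostedRotation.mysqlSingleUser(),
  automaticallyAfter: Duration.hours(4),
});
```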

Closes aws#28261.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Adds support for Map's [`ItemProcessor`](https://docs.aws.amazon.com/step-functions/latest/dg/use-dist-map-orchestrate-large-scale-parallel-workloads.html#distitemprocessor) required field and deprecates [`Iterator`](https://docs.aws.amazon.com/step-functions/latest/dg/concepts-asl-use-map-state-inline.html#iterator).
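A usage sketch (assuming the new API is exposed as an `itemProcessor()` method analogous to the deprecated `iterator()`):

```ts
// Sketch: define the Map state's workload with ItemProcessor instead of the deprecated Iterator.
const map = new sfn.Map(this, 'MapState', {
  maxConcurrency: 1,
  itemsPath: sfn.JsonPath.stringAt('$.inputForMap'),
});
map.itemProcessor(new sfn.Pass(this, 'ProcessItem'));
```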

Closes aws#27878.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…ation types (aws#28316)

`integrationHttpMethod` must be specified for non-MOCK integration types.
This PR adds validation so that this [error](aws#6404) is caught at build time.
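Conceptually the check is along these lines (sketch, not the exact code from the PR):

```ts
// Sketch: fail at synth time when a non-MOCK integration is missing its HTTP method.
if (props.type !== IntegrationType.MOCK && !props.integrationHttpMethod) {
  throw new Error('integrationHttpMethod is required for non-MOCK integration types.');
}
```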

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
When multiple bucket notifications are created, there is a race condition where only the last one processed gets applied. All bucket notifications created in a stack are given the same `stackId` prefix. This prefix is then used to filter out the notifications created by the custom resource. If there are other notifications created in the same stack, but not by this custom resource, they get filtered out as well.

This PR fixes that by filtering the notifications by the specific notification id. This ensures that only the notifications created by the individual custom resource are filtered out, and the rest (including those created by other custom resources) are marked external.
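Conceptually, the handler's filtering switches from prefix matching on the stack id to matching the exact ids it owns (sketch in TypeScript; the real handler is inline function code):

```ts
// Sketch only: 'existing' is the bucket's current notification configuration,
// 'managedIds' are the ids created by this particular custom resource.
declare const existing: Array<{ Id: string }>;
declare const managedIds: string[];

// Everything this custom resource does not own stays external, including
// notifications created by other custom resources in the same stack.
const external = existing.filter(n => !managedIds.includes(n.Id));
```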

Note - I had to refactor some of the function code to make it fit the inline size limit. This should probably be rewritten in TypeScript...

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…28367)

The following PR adds validation for the case when `allowAllOutbound` and `securityGroups` are specified at the same time in `FunctionOptions`.
aws#26528
(aws#27157)

According to related issues and discussions, this validation causes existing Lambda deployments to fail.
However, since the change has already been merged and I think it is the correct change, I did not change the validation itself but added documentation to clarify the behavior.
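The validation being documented is conceptually of this shape (sketch, not the exact implementation):

```ts
// Sketch: reject the ambiguous combination instead of silently ignoring allowAllOutbound.
if (props.securityGroups !== undefined && props.allowAllOutbound !== undefined) {
  throw new Error("Configure 'allowAllOutbound' directly on the supplied SecurityGroups.");
}
```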

Relates to aws#28170, aws#27669 

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
This PR adds support for configuring flexible time windows.

## Description
Currently, users cannot configure the `flexibleTimeWindow` feature in the Scheduler construct.
This feature enhances flexibility and reliability, allowing tasks to be invoked within a defined time window. 
https://docs.aws.amazon.com/scheduler/latest/UserGuide/managing-schedule-flexible-time-windows.html

CloudFormation allows users to take advantage of this feature as follows.
With this template, it invokes the target within 10 minutes after the scheduled time.
```yaml
Resources:
  Schedule:
    Type: AWS::Scheduler::Schedule
    Properties: 
      FlexibleTimeWindow: 
        Mode: "FLEXIBLE" # or "OFF"
        MaximumWindowInMinutes: 10 # between 1 and 1440
      Name: "sample-schedule"
      ScheduleExpression: "cron(0 9 * * ? *)"
      State: "ENABLED"
      Target:
        Arn: hoge
        RoleArn: hoge
```

## Changes
### add Enum indicating flexible time window mode
Currently there are only two modes, FLEXIBLE and OFF, so there would be no problem using a boolean instead of an enum.
But I think it's better to use an enum to prepare for future expansion.
https://docs.aws.amazon.com/ja_jp/AWSCloudFormation/latest/UserGuide/aws-properties-scheduler-schedule-flexibletimewindow.html

### add property to `ScheduleProps` interface
The `flexibleTimeWindowMode` property defaults to `OFF` to avoid a breaking change.
```ts
interface ScheduleProps {
  // ....
  /**
   * Determines whether the schedule is invoked within a flexible time window.
   *
   * @see https://docs.aws.amazon.com/scheduler/latest/UserGuide/managing-schedule-flexible-time-windows.html
   *
   * @default - FlexibleTimeWindowMode.OFF
   */
  readonly flexibleTimeWindowMode?: FlexibleTimeWindowMode;

  /**
   * The maximum time window during which the schedule can be invoked.
   *
   * @default - Required if flexibleTimeWindowMode is FLEXIBLE.
   */
  readonly maximumWindowInMinutes?: Duration;
}
```

### set the added property to `CfnSchedule` construct
Basically, just set the values as documented, but with the following validations (see the sketch below the list).
- If `flexibleTimeWindowMode` is `FLEXIBLE`
  - `maximumWindowInMinutes` must be specified
  - `maximumWindowInMinutes` must be set from 1 to 1440 minutes

https://docs.aws.amazon.com/ja_jp/AWSCloudFormation/latest/UserGuide/aws-properties-scheduler-schedule-flexibletimewindow.html
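A sketch of those validations (not the exact code from this PR):

```ts
// Sketch of the checks described above.
if (props.flexibleTimeWindowMode === FlexibleTimeWindowMode.FLEXIBLE) {
  if (props.maximumWindowInMinutes === undefined) {
    throw new Error('maximumWindowInMinutes must be specified when flexibleTimeWindowMode is FLEXIBLE.');
  }
  const minutes = props.maximumWindowInMinutes.toMinutes();
  if (minutes < 1 || minutes > 1440) {
    throw new Error(`maximumWindowInMinutes must be between 1 and 1440 minutes, got ${minutes}.`);
  }
}
```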

In addition, I added some unit tests and integ-tests.

### others
- fixed typo in README
  -  `customizeable` => `customizable`

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Add IPv6 support for VPC to the roadmap.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
This PR introduces an internal handler framework used to code generate constructs that extend a lambda `Function`, lambda `SingletonFunction`, or core `CustomResourceProvider` construct and prohibit the user from directly configuring the `handler`, `runtime`, `code`, and `codeDirectory` properties.  In doing this, we are able to establish best practices, runtime enforcement, and consistency across all handlers we build and vend within the aws-cdk.

As expected, no integ tests were changed as a result of this PR. To verify that the code-generated custom resource providers are working correctly, I force-ran three integ tests, each targeted at an individual custom resource provider:
1. integ.global.ts to test replica provider and the code generated construct extending `Function`
2. integ.bucket-auto-delete-objects.ts to test auto delete objects provider and the code generated construct extending `CustomResourceProvider`
3. integ.aws-api.ts to test aws api provider and the code generated construct `SingletonFunction`

All of these integ tests passed successfully.

Closes aws#27303

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…for role (aws#28403)

This test case is flagged by automated security tooling. There is no actual risk, since this is a short-lived test stack and the permissions for the role only allow consuming messages from a queue that doesn't hold any data.


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 3 to 4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/actions/download-artifact/releases">actions/download-artifact's releases</a>.</em></p>
<blockquote>
<h2>v4.0.0</h2>
<h2>What's Changed</h2>
<p>The release of upload-artifact@v4 and download-artifact@v4 are major changes to the backend architecture of Artifacts. They have numerous performance and behavioral improvements.</p>
<p>For more information, see the <a href="https://github.com/actions/toolkit/tree/main/packages/artifact"><code>@​actions/artifact</code></a> documentation.</p>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/bflad"><code>@​bflad</code></a> made their first contribution in <a href="https://redirect.github.com/actions/download-artifact/pull/194">actions/download-artifact#194</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/actions/download-artifact/compare/v3...v4.0.0">https://github.com/actions/download-artifact/compare/v3...v4.0.0</a></p>
<h2>v3.0.2</h2>
<ul>
<li>Bump <code>@actions/artifact</code> to v1.1.1 - <a href="https://redirect.github.com/actions/download-artifact/pull/195">actions/download-artifact#195</a></li>
<li>Fixed a bug in Node16 where if an HTTP download finished too quickly (&lt;1ms, e.g. when it's mocked) we attempt to delete a temp file that has not been created yet <a href="https://redirect.github.com/actions/toolkit/pull/1278">actions/toolkit#1278</a></li>
</ul>
<h2>v3.0.1</h2>
<ul>
<li><a href="https://redirect.github.com/actions/download-artifact/pull/178">Bump <code>@​actions/core</code> to 1.10.0</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/actions/download-artifact/commit/7a1cd3216ca9260cd8022db641d960b1db4d1be4"><code>7a1cd32</code></a> Merge pull request <a href="https://redirect.github.com/actions/download-artifact/issues/246">#246</a> from actions/v4-beta</li>
<li><a href="https://github.com/actions/download-artifact/commit/8f32874a49903ea488c5e7d476a9173e8706f409"><code>8f32874</code></a> licensed cache</li>
<li><a href="https://github.com/actions/download-artifact/commit/b5ff8444b1c4fcec8131f3cb1ddade813ddfacb1"><code>b5ff844</code></a> Merge pull request <a href="https://redirect.github.com/actions/download-artifact/issues/245">#245</a> from actions/robherley/v4-documentation</li>
<li><a href="https://github.com/actions/download-artifact/commit/f07a0f73f51b3f1d41667c782c821b9667da9d19"><code>f07a0f7</code></a> Update README.md</li>
<li><a href="https://github.com/actions/download-artifact/commit/7226129829bb686fdff47bd63bbd0d1373993a84"><code>7226129</code></a> update test workflow to use different artifact names for matrix</li>
<li><a href="https://github.com/actions/download-artifact/commit/ada9446619b84dd8a557aaaec3b79b58c4986cdf"><code>ada9446</code></a> update docs and bump <code>@​actions/artifact</code></li>
<li><a href="https://github.com/actions/download-artifact/commit/7eafc8b729ba790ce8f2cee54be8ad6257af4c7c"><code>7eafc8b</code></a> Merge pull request <a href="https://redirect.github.com/actions/download-artifact/issues/244">#244</a> from actions/robherley/bump-toolkit</li>
<li><a href="https://github.com/actions/download-artifact/commit/3132d12662b5915f20cdbf449465896962101abf"><code>3132d12</code></a> consume latest toolkit</li>
<li><a href="https://github.com/actions/download-artifact/commit/5be1d3867182a382bc59f2775e002595f487aa88"><code>5be1d38</code></a> Merge pull request <a href="https://redirect.github.com/actions/download-artifact/issues/243">#243</a> from actions/robherley/v4-beta-updates</li>
<li><a href="https://github.com/actions/download-artifact/commit/465b526e63559575a64716cdbb755bc78dfb263b"><code>465b526</code></a> consume latest <code>@​actions/toolkit</code></li>
<li>Additional commits viewable in <a href="https://github.com/actions/download-artifact/compare/v3...v4">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/download-artifact&package-manager=github_actions&previous-version=3&new-version=4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


</details>
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3 to 4.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/actions/upload-artifact/releases">actions/upload-artifact's releases</a>.</em></p>
<blockquote>
<h2>v4.0.0</h2>
<h2>What's Changed</h2>
<p>The release of upload-artifact@v4 and download-artifact@v4 are major changes to the backend architecture of Artifacts. They have numerous performance and behavioral improvements.</p>
<p>For more information, see the <a href="https://github.com/actions/toolkit/tree/main/packages/artifact"><code>@​actions/artifact</code></a> documentation.</p>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/vmjoseph"><code>@​vmjoseph</code></a> made their first contribution in <a href="https://redirect.github.com/actions/upload-artifact/pull/464">actions/upload-artifact#464</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/actions/upload-artifact/compare/v3...v4.0.0">https://github.com/actions/upload-artifact/compare/v3...v4.0.0</a></p>
<h2>v3.1.3</h2>
<h2>What's Changed</h2>
<ul>
<li>chore(github): remove trailing whitespaces by <a href="https://github.com/ljmf00"><code>@​ljmf00</code></a> in <a href="https://redirect.github.com/actions/upload-artifact/pull/313">actions/upload-artifact#313</a></li>
<li>Bump <code>@​actions/artifact</code> version to v1.1.2 by <a href="https://github.com/bethanyj28"><code>@​bethanyj28</code></a> in <a href="https://redirect.github.com/actions/upload-artifact/pull/436">actions/upload-artifact#436</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/actions/upload-artifact/compare/v3...v3.1.3">https://github.com/actions/upload-artifact/compare/v3...v3.1.3</a></p>
<h2>v3.1.2</h2>
<ul>
<li>Update all <code>@actions/*</code> NPM packages to their latest versions- <a href="https://redirect.github.com/actions/upload-artifact/issues/374">#374</a></li>
<li>Update all dev dependencies to their most recent versions - <a href="https://redirect.github.com/actions/upload-artifact/issues/375">#375</a></li>
</ul>
<h2>v3.1.1</h2>
<ul>
<li>Update actions/core package to latest version to remove <code>set-output</code> deprecation warning <a href="https://redirect.github.com/actions/upload-artifact/issues/351">#351</a></li>
</ul>
<h2>v3.1.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Bump <code>@​actions/artifact</code> to v1.1.0 (<a href="https://redirect.github.com/actions/upload-artifact/pull/327">actions/upload-artifact#327</a>)
<ul>
<li>Adds checksum headers on artifact upload (<a href="https://redirect.github.com/actions/toolkit/pull/1095">actions/toolkit#1095</a>) (<a href="https://redirect.github.com/actions/toolkit/pull/1063">actions/toolkit#1063</a>)</li>
</ul>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/actions/upload-artifact/commit/c7d193f32edcb7bfad88892161225aeda64e9392"><code>c7d193f</code></a> Merge pull request <a href="https://redirect.github.com/actions/upload-artifact/issues/466">#466</a> from actions/v4-beta</li>
<li><a href="https://github.com/actions/upload-artifact/commit/13131bb095770b4070a7477c3cd2d96e1c16d9f4"><code>13131bb</code></a> licensed cache</li>
<li><a href="https://github.com/actions/upload-artifact/commit/4a6c273b9834f66a1d05c170dc3f80f9cdb9def1"><code>4a6c273</code></a> Merge branch 'main' into v4-beta</li>
<li><a href="https://github.com/actions/upload-artifact/commit/f391bb91a3d3118aeca171c365bb319ece276b37"><code>f391bb9</code></a> Merge pull request <a href="https://redirect.github.com/actions/upload-artifact/issues/465">#465</a> from actions/robherley/v4-documentation</li>
<li><a href="https://github.com/actions/upload-artifact/commit/9653d03c4b74c32144e02dae644fea70e079d4b3"><code>9653d03</code></a> Apply suggestions from code review</li>
<li><a href="https://github.com/actions/upload-artifact/commit/875b63076402f25ef9d52c294c86ba4f97810575"><code>875b630</code></a> add limitations section</li>
<li><a href="https://github.com/actions/upload-artifact/commit/ecb21463e93740a6be75c3116242169bfdbcb15a"><code>ecb2146</code></a> add compression example</li>
<li><a href="https://github.com/actions/upload-artifact/commit/5e7604f84a055838f64ed68bb9904751523081ae"><code>5e7604f</code></a> trim some repeated info</li>
<li><a href="https://github.com/actions/upload-artifact/commit/d6437d07581fe318a364512e6cf6b1dca6b4f92c"><code>d6437d0</code></a> naming</li>
<li><a href="https://github.com/actions/upload-artifact/commit/1b561557037b4957d7d184e9aac02bec86c771eb"><code>1b56155</code></a> s/v4-beta/v4/g</li>
<li>Additional commits viewable in <a href="https://github.com/actions/upload-artifact/compare/v3...v4">compare view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/upload-artifact&package-manager=github_actions&previous-version=3&new-version=4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)


</details>
Update AWS Service Spec packages to latest versions

**@aws-cdk/aws-service-spec changes:**
```
├[~] service aws-applicationautoscaling
│ └ resources
│    └[~] resource AWS::ApplicationAutoScaling::ScalingPolicy
│      ├ attributes
│      │  └ Arn: (documentation changed)
│      └ types
│         ├[~] type TargetTrackingMetric
│         │ ├  - documentation: Represents a specific metric.
│         │ │  + documentation: Represents a specific metric for a target tracking scaling policy for Application Auto Scaling.
│         │ │  Metric is a property of the [AWS::ApplicationAutoScaling::ScalingPolicy TargetTrackingMetricStat](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-applicationautoscaling-scalingpolicy-targettrackingmetricstat.html) property type.
│         │ └ properties
│         │    ├ Dimensions: (documentation changed)
│         │    └ Namespace: (documentation changed)
│         ├[~] type TargetTrackingMetricDataQuery
│         │ ├  - documentation: The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
│         │ │  + documentation: The metric data to return. Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
│         │ │  You can call for a single metric or perform math expressions on multiple metrics. Any expressions used in a metric specification must eventually return a single time series.
│         │ │  For more information and examples, see [Create a target tracking scaling policy for Application Auto Scaling using metric math](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking-metric-math.html) in the *Application Auto Scaling User Guide* .
│         │ │  `TargetTrackingMetricDataQuery` is a property of the [AWS::ApplicationAutoScaling::ScalingPolicy CustomizedMetricSpecification](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-applicationautoscaling-scalingpolicy-customizedmetricspecification.html) property type.
│         │ └ properties
│         │    ├ Expression: (documentation changed)
│         │    ├ Id: (documentation changed)
│         │    ├ MetricStat: (documentation changed)
│         │    └ ReturnData: (documentation changed)
│         ├[~] type TargetTrackingMetricDimension
│         │ └  - documentation: Describes the dimension of a metric.
│         │    + documentation: `TargetTrackingMetricDimension` specifies a name/value pair that is part of the identity of a CloudWatch metric for the `Dimensions` property of the [AWS::ApplicationAutoScaling::ScalingPolicy TargetTrackingMetric](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-applicationautoscaling-scalingpolicy-targettrackingmetric.html) property type. Duplicate dimensions are not allowed.
│         └[~] type TargetTrackingMetricStat
│           ├  - documentation: This structure defines the CloudWatch metric to return, along with the statistic, period, and unit.
│           │  + documentation: This structure defines the CloudWatch metric to return, along with the statistic, period, and unit.
│           │  `TargetTrackingMetricStat` is a property of the [AWS::ApplicationAutoScaling::ScalingPolicy TargetTrackingMetricDataQuery](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-applicationautoscaling-scalingpolicy-targettrackingmetricdataquery.html) property type.
│           │  For more information about the CloudWatch terminology below, see [Amazon CloudWatch concepts](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html) in the *Amazon CloudWatch User Guide* .
│           └ properties
│              ├ Metric: (documentation changed)
│              ├ Stat: (documentation changed)
│              └ Unit: (documentation changed)
├[~] service aws-appsync
│ └ resources
│    ├[~] resource AWS::AppSync::DataSource
│    │ └ attributes
│    │    └ Id: (documentation changed)
│    ├[~] resource AWS::AppSync::GraphQLApi
│    │ └ attributes
│    │    ├[+] GraphQLEndpointArn: string
│    │    └ Id: (documentation changed)
│    └[~] resource AWS::AppSync::GraphQLSchema
│      └ attributes
│         └ Id: (documentation changed)
├[+] service aws-b2bi
│ ├  capitalized: B2BI
│ │  cloudFormationNamespace: AWS::B2BI
│ │  name: aws-b2bi
│ │  shortName: b2bi
│ └ resources
│    ├resource AWS::B2BI::Capability
│    │├  name: Capability
│    ││  cloudFormationType: AWS::B2BI::Capability
│    ││  documentation: Definition of AWS::B2BI::Capability Resource Type
│    ││  tagInformation: {"tagPropertyName":"Tags","variant":"standard"}
│    │├ properties
│    ││  ├Configuration: CapabilityConfiguration (required)
│    ││  ├InstructionsDocuments: Array<S3Location>
│    ││  ├Name: string (required)
│    ││  ├Tags: Array<tag>
│    ││  └Type: string (required, immutable)
│    │├ attributes
│    ││  ├CapabilityArn: string
│    ││  ├CapabilityId: string
│    ││  ├CreatedAt: string
│    ││  └ModifiedAt: string
│    │└ types
│    │   ├type CapabilityConfiguration
│    │   │├  name: CapabilityConfiguration
│    │   │└ properties
│    │   │   └Edi: EdiConfiguration (required)
│    │   ├type EdiConfiguration
│    │   │├  name: EdiConfiguration
│    │   │└ properties
│    │   │   ├Type: EdiType (required)
│    │   │   ├InputLocation: S3Location (required)
│    │   │   ├OutputLocation: S3Location (required)
│    │   │   └TransformerId: string (required)
│    │   ├type EdiType
│    │   │├  name: EdiType
│    │   │└ properties
│    │   │   └X12Details: X12Details (required)
│    │   ├type X12Details
│    │   │├  name: X12Details
│    │   │└ properties
│    │   │   ├TransactionSet: string
│    │   │   └Version: string
│    │   └type S3Location
│    │    ├  name: S3Location
│    │    └ properties
│    │       ├BucketName: string
│    │       └Key: string
│    ├resource AWS::B2BI::Partnership
│    │├  name: Partnership
│    ││  cloudFormationType: AWS::B2BI::Partnership
│    ││  documentation: Definition of AWS::B2BI::Partnership Resource Type
│    ││  tagInformation: {"tagPropertyName":"Tags","variant":"standard"}
│    │├ properties
│    ││  ├Capabilities: Array<string>
│    ││  ├Email: string (required, immutable)
│    ││  ├Name: string (required)
│    ││  ├Phone: string (immutable)
│    ││  ├ProfileId: string (required, immutable)
│    ││  └Tags: Array<tag>
│    │└ attributes
│    │   ├CreatedAt: string
│    │   ├ModifiedAt: string
│    │   ├PartnershipArn: string
│    │   ├PartnershipId: string
│    │   └TradingPartnerId: string
│    ├resource AWS::B2BI::Profile
│    │├  name: Profile
│    ││  cloudFormationType: AWS::B2BI::Profile
│    ││  documentation: Definition of AWS::B2BI::Profile Resource Type
│    ││  tagInformation: {"tagPropertyName":"Tags","variant":"standard"}
│    │├ properties
│    ││  ├BusinessName: string (required)
│    ││  ├Email: string
│    ││  ├Logging: string (required, immutable)
│    ││  ├Name: string (required)
│    ││  ├Phone: string (required)
│    ││  └Tags: Array<tag>
│    │└ attributes
│    │   ├CreatedAt: string
│    │   ├LogGroupName: string
│    │   ├ModifiedAt: string
│    │   ├ProfileArn: string
│    │   └ProfileId: string
│    └resource AWS::B2BI::Transformer
│     ├  name: Transformer
│     │  cloudFormationType: AWS::B2BI::Transformer
│     │  documentation: Definition of AWS::B2BI::Transformer Resource Type
│     │  tagInformation: {"tagPropertyName":"Tags","variant":"standard"}
│     ├ properties
│     │  ├EdiType: EdiType (required)
│     │  ├FileFormat: string (required)
│     │  ├MappingTemplate: string (required)
│     │  ├ModifiedAt: string
│     │  ├Name: string (required)
│     │  ├SampleDocument: string
│     │  ├Status: string (required)
│     │  └Tags: Array<tag>
│     ├ attributes
│     │  ├CreatedAt: string
│     │  ├TransformerArn: string
│     │  └TransformerId: string
│     └ types
│        ├type EdiType
│        │├  name: EdiType
│        │└ properties
│        │   └X12Details: X12Details (required)
│        └type X12Details
│         ├  name: X12Details
│         └ properties
│            ├TransactionSet: string
│            └Version: string
├[~] service aws-cloud9
│ └ resources
│    └[~] resource AWS::Cloud9::EnvironmentEC2
│      └ properties
│         └ ImageId: - string (immutable)
│                    + string (required, immutable)
├[~] service aws-cloudfront
│ └ resources
│    └[+] resource AWS::CloudFront::KeyValueStore
│      ├  name: KeyValueStore
│      │  cloudFormationType: AWS::CloudFront::KeyValueStore
│      │  documentation: The Key Value Store. Use this to separate data from function code, allowing you to update data without having to publish a new version of a function. The Key Value Store holds keys and their corresponding values.
│      ├ properties
│      │  ├Name: string (required, immutable)
│      │  ├Comment: string
│      │  └ImportSource: ImportSource
│      ├ attributes
│      │  ├Arn: string
│      │  ├Id: string
│      │  └Status: string
│      └ types
│         └type ImportSource
│          ├  documentation: The import source for the Key Value Store.
│          │  name: ImportSource
│          └ properties
│             ├SourceType: string (required)
│             └SourceArn: string (required)
├[~] service aws-cloudtrail
│ └ resources
│    ├[~] resource AWS::CloudTrail::EventDataStore
│    │ ├ properties
│    │ │  ├ FederationEnabled: (documentation changed)
│    │ │  └ FederationRoleArn: (documentation changed)
│    │ └ types
│    │    └[~] type AdvancedFieldSelector
│    │      └ properties
│    │         └ Field: (documentation changed)
│    └[~] resource AWS::CloudTrail::Trail
│      └ types
│         ├[~] type AdvancedFieldSelector
│         │ └ properties
│         │    └ Field: (documentation changed)
│         └[~] type DataResource
│           └ properties
│              └ Type: (documentation changed)
├[~] service aws-cloudwatch
│ └ resources
│    └[~] resource AWS::CloudWatch::MetricStream
│      └ properties
│         ├ OutputFormat: (documentation changed)
│         └ StatisticsConfigurations: (documentation changed)
├[~] service aws-codedeploy
│ └ resources
│    ├[~] resource AWS::CodeDeploy::DeploymentConfig
│    │ ├ properties
│    │ │  └ ZonalConfig: (documentation changed)
│    │ └ types
│    │    ├[~] type MinimumHealthyHostsPerZone
│    │    │ ├  - documentation: undefined
│    │    │ │  + documentation: Information about the minimum number of healthy instances per Availability Zone.
│    │    │ └ properties
│    │    │    ├ Type: (documentation changed)
│    │    │    └ Value: (documentation changed)
│    │    └[~] type ZonalConfig
│    │      ├  - documentation: undefined
│    │      │  + documentation: Configure the `ZonalConfig` object if you want AWS CodeDeploy to deploy your application to one [Availability Zone](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-availability-zones) at a time, within an AWS Region. By deploying to one Availability Zone at a time, you can expose your deployment to a progressively larger audience as confidence in the deployment's performance and viability grows. If you don't configure the `ZonalConfig` object, CodeDeploy deploys your application to a random selection of hosts across a Region.
│    │      │  For more information about the zonal configuration feature, see [zonal configuration](https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-configurations-create.html#zonal-config) in the *CodeDeploy User Guide* .
│    │      └ properties
│    │         ├ FirstZoneMonitorDurationInSeconds: (documentation changed)
│    │         ├ MinimumHealthyHostsPerZone: (documentation changed)
│    │         └ MonitorDurationInSeconds: (documentation changed)
│    └[~] resource AWS::CodeDeploy::DeploymentGroup
│      └ properties
│         └[+] TerminationHookEnabled: boolean
├[~] service aws-codepipeline
│ └ resources
│    └[~] resource AWS::CodePipeline::Pipeline
│      ├ properties
│      │  ├[+] PipelineType: string
│      │  ├[+] Triggers: Array<PipelineTriggerDeclaration>
│      │  └[+] Variables: Array<VariableDeclaration>
│      └ types
│         ├[+] type GitConfiguration
│         │ ├  documentation: A type of trigger configuration for Git-based source actions.
│         │ │  > You can specify the Git configuration trigger type for all third-party Git-based source actions that are supported by the `CodeStarSourceConnection` action type.
│         │ │  name: GitConfiguration
│         │ └ properties
│         │    ├Push: Array<GitPushFilter>
│         │    └SourceActionName: string (required)
│         ├[+] type GitPushFilter
│         │ ├  documentation: The event criteria that specify when a specified repository event will start the pipeline for the specified trigger configuration, such as the lists of Git tags to include and exclude.
│         │ │  name: GitPushFilter
│         │ └ properties
│         │    └Tags: GitTagFilterCriteria
│         ├[+] type GitTagFilterCriteria
│         │ ├  documentation: The Git tags specified as filter criteria for whether a Git tag repository event will start the pipeline.
│         │ │  name: GitTagFilterCriteria
│         │ └ properties
│         │    ├Includes: Array<string>
│         │    └Excludes: Array<string>
│         ├[+] type PipelineTriggerDeclaration
│         │ ├  documentation: Represents information about the specified trigger configuration, such as the filter criteria and the source stage for the action that contains the trigger.
│         │ │  > This is only supported for the `CodeStarSourceConnection` action type. > When a trigger configuration is specified, default change detection for repository and branch commits is disabled.
│         │ │  name: PipelineTriggerDeclaration
│         │ └ properties
│         │    ├GitConfiguration: GitConfiguration
│         │    └ProviderType: string (required)
│         └[+] type VariableDeclaration
│           ├  documentation: A variable declared at the pipeline level.
│           │  name: VariableDeclaration
│           └ properties
│              ├DefaultValue: string
│              ├Description: string
│              └Name: string (required)
├[~] service aws-cognito
│ └ resources
│    ├[~] resource AWS::Cognito::UserPool
│    │ └ attributes
│    │    └ UserPoolId: (documentation changed)
│    ├[~] resource AWS::Cognito::UserPoolClient
│    │ └ properties
│    │    └ AllowedOAuthFlows: (documentation changed)
│    ├[~] resource AWS::Cognito::UserPoolGroup
│    │ └  - documentation: Specifies a new group in the identified user pool.
│    │    Calling this action requires developer credentials.
│    │    > If you don't specify a value for a parameter, Amazon Cognito sets it to a default value.
│    │    + documentation: A user pool group that you can add a user to.
│    └[~] resource AWS::Cognito::UserPoolUser
│      └ properties
│         └ UserAttributes: (documentation changed)
├[~] service aws-config
│ └ resources
│    └[~] resource AWS::Config::ConfigurationRecorder
│      ├ properties
│      │  └[+] RecordingMode: RecordingMode
│      └ types
│         ├[+] type RecordingMode
│         │ ├  documentation: Specifies the default recording frequency that AWS Config uses to record configuration changes. AWS Config supports *Continuous recording* and *Daily recording* .
│         │ │  - Continuous recording allows you to record configuration changes continuously whenever a change occurs.
│         │ │  - Daily recording allows you to receive a configuration item (CI) representing the most recent state of your resources over the last 24-hour period, only if it’s different from the previous CI recorded.
│         │ │  > AWS Firewall Manager depends on continuous recording to monitor your resources. If you are using Firewall Manager, it is recommended that you set the recording frequency to Continuous. 
│         │ │  You can also override the recording frequency for specific resource types.
│         │ │  name: RecordingMode
│         │ └ properties
│         │    ├RecordingModeOverrides: Array<RecordingModeOverride>
│         │    └RecordingFrequency: string (required)
│         └[+] type RecordingModeOverride
│           ├  documentation: An object for you to specify your overrides for the recording mode.
│           │  name: RecordingModeOverride
│           └ properties
│              ├ResourceTypes: Array<string> (required)
│              ├RecordingFrequency: string (required)
│              └Description: string
├[~] service aws-connect
│ └ resources
│    ├[~] resource AWS::Connect::Instance
│    │ └ properties
│    │    └ Tags: (documentation changed)
│    ├[~] resource AWS::Connect::InstanceStorageConfig
│    │ └ types
│    │    └[~] type KinesisVideoStreamConfig
│    │      └ properties
│    │         └ EncryptionConfig: - EncryptionConfig
│    │                             + EncryptionConfig (required)
│    └[~] resource AWS::Connect::Rule
│      └ types
│         ├[~] type Actions
│         │ └ properties
│         │    ├[+] CreateCaseActions: Array<CreateCaseAction>
│         │    ├[+] EndAssociatedTaskActions: Array<json>
│         │    └[+] UpdateCaseActions: Array<UpdateCaseAction>
│         ├[+] type CreateCaseAction
│         │ ├  documentation: The definition for create case action.
│         │ │  name: CreateCaseAction
│         │ └ properties
│         │    ├Fields: Array<Field> (required)
│         │    └TemplateId: string (required)
│         ├[+] type Field
│         │ ├  documentation: The field of the case.
│         │ │  name: Field
│         │ └ properties
│         │    ├Id: string (required)
│         │    └Value: FieldValue (required)
│         ├[+] type FieldValue
│         │ ├  documentation: The value of the field.
│         │ │  name: FieldValue
│         │ └ properties
│         │    ├StringValue: string
│         │    ├BooleanValue: boolean
│         │    ├DoubleValue: number
│         │    └EmptyValue: json
│         └[+] type UpdateCaseAction
│           ├  documentation: The definition for update case action.
│           │  name: UpdateCaseAction
│           └ properties
│              └Fields: Array<Field> (required)
├[~] service aws-controltower
│ └ resources
│    └[~] resource AWS::ControlTower::LandingZone
│      └ properties
│         └ Manifest: (documentation changed)
├[~] service aws-datasync
│ └ resources
│    └[~] resource AWS::DataSync::Task
│      └ types
│         └[~] type Options
│           └ properties
│              └ OverwriteMode: (documentation changed)
├[~] service aws-dms
│ └ resources
│    ├[~] resource AWS::DMS::DataProvider
│    │ ├  - documentation: Resource schema for AWS::DMS::DataProvider
│    │ │  + documentation: Provides information that defines a data provider.
│    │ ├ properties
│    │ │  ├ DataProviderIdentifier: (documentation changed)
│    │ │  ├ DataProviderName: (documentation changed)
│    │ │  ├ Description: (documentation changed)
│    │ │  ├ Engine: (documentation changed)
│    │ │  └ Settings: (documentation changed)
│    │ ├ attributes
│    │ │  ├ DataProviderArn: (documentation changed)
│    │ │  └ DataProviderCreationTime: (documentation changed)
│    │ └ types
│    │    └[~] type PostgreSqlSettings
│    │      ├  - documentation: undefined
│    │      │  + documentation: Provides information that defines a PostgreSQL endpoint.
│    │      └ properties
│    │         ├ DatabaseName: (documentation changed)
│    │         ├ Port: (documentation changed)
│    │         └ ServerName: (documentation changed)
│    ├[~] resource AWS::DMS::Endpoint
│    │ └ types
│    │    └[~] type IbmDb2Settings
│    │      └ properties
│    │         ├[+] KeepCsvFiles: boolean
│    │         ├[+] LoadTimeout: integer
│    │         ├[+] MaxFileSize: integer
│    │         └[+] WriteBufferSize: integer
│    ├[~] resource AWS::DMS::InstanceProfile
│    │ ├  - documentation: Resource schema for AWS::DMS::InstanceProfile.
│    │ │  + documentation: Provides information that defines an instance profile.
│    │ ├ properties
│    │ │  ├ AvailabilityZone: (documentation changed)
│    │ │  ├ Description: (documentation changed)
│    │ │  ├ InstanceProfileIdentifier: (documentation changed)
│    │ │  ├ InstanceProfileName: (documentation changed)
│    │ │  ├ KmsKeyArn: (documentation changed)
│    │ │  ├ NetworkType: (documentation changed)
│    │ │  ├ PubliclyAccessible: (documentation changed)
│    │ │  ├ SubnetGroupIdentifier: (documentation changed)
│    │ │  └ VpcSecurityGroups: (documentation changed)
│    │ └ attributes
│    │    ├ InstanceProfileArn: (documentation changed)
│    │    └ InstanceProfileCreationTime: (documentation changed)
│    └[~] resource AWS::DMS::MigrationProject
│      ├  - documentation: Resource schema for AWS::DMS::MigrationProject
│      │  + documentation: Provides information that defines a migration project.
│      ├ properties
│      │  ├ Description: (documentation changed)
│      │  ├ InstanceProfileArn: (documentation changed)
│      │  ├ InstanceProfileIdentifier: (documentation changed)
│      │  ├ InstanceProfileName: (documentation changed)
│      │  ├ MigrationProjectIdentifier: (documentation changed)
│      │  ├ MigrationProjectName: (documentation changed)
│      │  ├ SchemaConversionApplicationAttributes: (documentation changed)
│      │  ├ SourceDataProviderDescriptors: (documentation changed)
│      │  ├ TargetDataProviderDescriptors: (documentation changed)
│      │  └ TransformationRules: (documentation changed)
│      ├ attributes
│      │  └ MigrationProjectArn: (documentation changed)
│      └ types
│         └[~] type DataProviderDescriptor
│           ├  - documentation: It is an object that describes Source and Target DataProviders and credentials for connecting to databases that are used in MigrationProject
│           │  + documentation: Information about a data provider.
│           └ properties
│              ├ DataProviderArn: (documentation changed)
│              ├ DataProviderName: (documentation changed)
│              ├ SecretsManagerAccessRoleArn: (documentation changed)
│              └ SecretsManagerSecretId: (documentation changed)
├[~] service aws-ec2
│ └ resources
│    ├[~] resource AWS::EC2::EC2Fleet
│    │ └ types
│    │    └[~] type TargetCapacitySpecificationRequest
│    │      └ properties
│    │         ├ DefaultTargetCapacityType: (documentation changed)
│    │         ├ TargetCapacityUnitType: (documentation changed)
│    │         └ TotalTargetCapacity: (documentation changed)
│    ├[~] resource AWS::EC2::Instance
│    │ ├ properties
│    │ │  ├ SsmAssociations: (documentation changed)
│    │ │  └ UserData: (documentation changed)
│    │ ├ attributes
│    │ │  └[+] InstanceId: string
│    │ └ types
│    │    └[~] type NetworkInterface
│    │      └ properties
│    │         └ AssociatePublicIpAddress: (documentation changed)
│    ├[~] resource AWS::EC2::LaunchTemplate
│    │ └ types
│    │    ├[~] type MetadataOptions
│    │    │ └ properties
│    │    │    └ HttpTokens: (documentation changed)
│    │    └[~] type NetworkInterface
│    │      └ properties
│    │         └ AssociatePublicIpAddress: (documentation changed)
│    ├[~] resource AWS::EC2::Route
│    │ └ properties
│    │    └[+] CoreNetworkArn: string
│    ├[~] resource AWS::EC2::SecurityGroupEgress
│    │ └ attributes
│    │    └ Id: (documentation changed)
│    ├[+] resource AWS::EC2::SnapshotBlockPublicAccess
│    │ ├  name: SnapshotBlockPublicAccess
│    │ │  cloudFormationType: AWS::EC2::SnapshotBlockPublicAccess
│    │ │  documentation: Specifies the state of the *block public access for snapshots* setting for the Region. For more information, see [Block public access for snapshots](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/block-public-access-snapshots.html) .
│    │ ├ properties
│    │ │  └State: string (required)
│    │ └ attributes
│    │    └AccountId: string
│    ├[~] resource AWS::EC2::SpotFleet
│    │ └ types
│    │    ├[~] type InstanceNetworkInterfaceSpecification
│    │    │ └ properties
│    │    │    └ AssociatePublicIpAddress: (documentation changed)
│    │    └[~] type SpotFleetRequestConfigData
│    │      └ properties
│    │         └ TargetCapacityUnitType: (documentation changed)
│    └[~] resource AWS::EC2::Subnet
│      └ properties
│         └ MapPublicIpOnLaunch: (documentation changed)
├[~] service aws-elasticache
│ └ resources
│    └[~] resource AWS::ElastiCache::ServerlessCache
│      ├ properties
│      │  ├[+] Endpoint: Endpoint
│      │  └[+] ReaderEndpoint: Endpoint
│      └ attributes
│         ├[-] Endpoint: Endpoint
│         ├[+] Endpoint.Address: string
│         ├[+] Endpoint.Port: integer
│         ├[-] ReaderEndpoint: Endpoint
│         ├[+] ReaderEndpoint.Address: string
│         └[+] ReaderEndpoint.Port: integer
├[~] service aws-emr
│ └ resources
│    ├[~] resource AWS::EMR::Cluster
│    │ ├ properties
│    │ │  ├[+] EbsRootVolumeIops: integer (immutable)
│    │ │  ├[+] EbsRootVolumeThroughput: integer (immutable)
│    │ │  └[+] PlacementGroupConfigs: Array<PlacementGroupConfig> (immutable)
│    │ └ types
│    │    └[+] type PlacementGroupConfig
│    │      ├  name: PlacementGroupConfig
│    │      └ properties
│    │         ├InstanceRole: string (required)
│    │         └PlacementStrategy: string
│    └[~] resource AWS::EMR::Studio
│      └ properties
│         ├ EncryptionKeyArn: (documentation changed)
│         ├ IdcInstanceArn: (documentation changed)
│         ├ IdcUserAssignment: (documentation changed)
│         └ TrustedIdentityPropagationEnabled: (documentation changed)
├[~] service aws-eventschemas
│ └ resources
│    ├[~] resource AWS::EventSchemas::Registry
│    │ └ attributes
│    │    └[-] Id: string
│    └[~] resource AWS::EventSchemas::Schema
│      └ attributes
│         ├[-] Id: string
│         ├[+] LastModified: string
│         └[+] VersionCreatedDate: string
├[~] service aws-fis
│ └ resources
│    ├[~] resource AWS::FIS::ExperimentTemplate
│    │ ├  - documentation: Describes an experiment template.
│    │ │  + documentation: Specifies an experiment template.
│    │ │  An experiment template includes the following components:
│    │ │  - *Targets* : A target can be a specific resource in your AWS environment, or one or more resources that match criteria that you specify, for example, resources that have specific tags.
│    │ │  - *Actions* : The actions to carry out on the target. You can specify multiple actions, the duration of each action, and when to start each action during an experiment.
│    │ │  - *Stop conditions* : If a stop condition is triggered while an experiment is running, the experiment is automatically stopped. You can define a stop condition as a CloudWatch alarm.
│    │ │  For more information, see [Experiment templates](https://docs.aws.amazon.com/fis/latest/userguide/experiment-templates.html) in the *AWS Fault Injection Service User Guide* .
│    │ └ types
│    │    ├[~] type ExperimentTemplateAction
│    │    │ └  - documentation: Describes an action for an experiment template.
│    │    │    + documentation: Specifies an action for an experiment template.
│    │    │    For more information, see [Actions](https://docs.aws.amazon.com/fis/latest/userguide/actions.html) in the *AWS Fault Injection Service User Guide* .
│    │    ├[~] type ExperimentTemplateLogConfiguration
│    │    │ ├  - documentation: Describes the configuration for experiment logging.
│    │    │ │  + documentation: Specifies the configuration for experiment logging.
│    │    │ │  For more information, see [Experiment logging](https://docs.aws.amazon.com/fis/latest/userguide/monitoring-logging.html) in the *AWS Fault Injection Service User Guide* .
│    │    │ └ properties
│    │    │    ├ CloudWatchLogsConfiguration: (documentation changed)
│    │    │    └ S3Configuration: (documentation changed)
│    │    ├[~] type ExperimentTemplateStopCondition
│    │    │ └  - documentation: Describes a stop condition for an experiment template.
│    │    │    + documentation: Specifies a stop condition for an experiment template.
│    │    │    For more information, see [Stop conditions](https://docs.aws.amazon.com/fis/latest/userguide/stop-conditions.html) in the *AWS Fault Injection Service User Guide* .
│    │    ├[~] type ExperimentTemplateTarget
│    │    │ ├  - documentation: Describes a target for an experiment template.
│    │    │ │  + documentation: Specifies a target for an experiment. You must specify at least one Amazon Resource Name (ARN) or at least one resource tag. You cannot specify both ARNs and tags.
│    │    │ │  For more information, see [Targets](https://docs.aws.amazon.com/fis/latest/userguide/targets.html) in the *AWS Fault Injection Service User Guide* .
│    │    │ └ properties
│    │    │    └ Parameters: (documentation changed)
│    │    └[~] type ExperimentTemplateTargetFilter
│    │      └  - documentation: Describes a filter used for the target resources in an experiment template.
│    │         + documentation: Specifies a filter used for the target resource input in an experiment template.
│    │         For more information, see [Resource filters](https://docs.aws.amazon.com/fis/latest/userguide/targets.html#target-filters) in the *AWS Fault Injection Service User Guide* .
│    └[~] resource AWS::FIS::TargetAccountConfiguration
│      └  - documentation: Creates a target account configuration for the experiment template. A target account configuration is required when `accountTargeting` of `experimentOptions` is set to `multi-account` . For more information, see [experiment options](https://docs.aws.amazon.com/fis/latest/userguide/experiment-options.html) in the *AWS Fault Injection Simulator User Guide* .
│         + documentation: Creates a target account configuration for the experiment template. A target account configuration is required when `accountTargeting` of `experimentOptions` is set to `multi-account` . For more information, see [experiment options](https://docs.aws.amazon.com/fis/latest/userguide/experiment-options.html) in the *AWS Fault Injection Service User Guide* .
├[~] service aws-gamelift
│ └ resources
│    └[~] resource AWS::GameLift::Fleet
│      └ properties
│         └[+] ApplyCapacity: string (immutable)
├[~] service aws-identitystore
│ └ resources
│    └[~] resource AWS::IdentityStore::GroupMembership
│      └ properties
│         ├ GroupId: - string (required)
│         │          + string (required, immutable)
│         └ MemberId: - MemberId (required)
│                     + MemberId (required, immutable)
├[~] service aws-imagebuilder
│ └ resources
│    ├[~] resource AWS::ImageBuilder::Component
│    │ └ properties
│    │    └ ChangeDescription: (documentation changed)
│    ├[~] resource AWS::ImageBuilder::ImagePipeline
│    │ ├ properties
│    │ │  ├[+] ExecutionRole: string
│    │ │  └[+] Workflows: Array<WorkflowConfiguration>
│    │ └ types
│    │    ├[~] type Schedule
│    │    │ └  - documentation: A schedule configures how often and when a pipeline will automatically create a new image.
│    │    │    + documentation: A schedule configures when and how often a pipeline will automatically create a new image.
│    │    ├[+] type WorkflowConfiguration
│    │    │ ├  documentation: The workflow configuration of the image
│    │    │ │  name: WorkflowConfiguration
│    │    │ └ properties
│    │    │    ├WorkflowArn: string
│    │    │    ├Parameters: Array<WorkflowParameter>
│    │    │    ├ParallelGroup: string
│    │    │    └OnFailure: string
│    │    └[+] type WorkflowParameter
│    │      ├  documentation: A parameter associated with the workflow
│    │      │  name: WorkflowParameter
│    │      └ properties
│    │         ├Name: string
│    │         └Value: Array<string>
│    ├[~] resource AWS::ImageBuilder::LifecyclePolicy
│    │ └ properties
│    │    └ ExecutionRole: (documentation changed)
│    └[+] resource AWS::ImageBuilder::Workflow
│      ├  name: Workflow
│      │  cloudFormationType: AWS::ImageBuilder::Workflow
│      │  documentation: Resource schema for AWS::ImageBuilder::Workflow
│      ├ properties
│      │  ├Name: string (required, immutable)
│      │  ├Version: string (required, immutable)
│      │  ├Description: string (immutable)
│      │  ├ChangeDescription: string (immutable)
│      │  ├Type: string (required, immutable)
│      │  ├Data: string (immutable)
│      │  ├Uri: string (immutable)
│      │  ├KmsKeyId: string (immutable)
│      │  └Tags: Map<string, string> (immutable)
│      └ attributes
│         └Arn: string
├[~] service aws-internetmonitor
│ └ resources
│    └[~] resource AWS::InternetMonitor::Monitor
│      └ types
│         ├[~] type InternetMeasurementsLogDelivery
│         │ └ properties
│         │    └ S3Config: (documentation changed)
│         └[~] type S3Config
│           ├  - documentation: The configuration for publishing Amazon CloudWatch Internet Monitor internet measurements to Amazon S3. The configuration includes the bucket name and (optionally) prefix for the S3 bucket to store the measurements, and the delivery status. The delivery status is `ENABLED` or `DISABLED` , depending on whether you choose to deliver internet measurements to S3 logs.
│           │  + documentation: The configuration for publishing Amazon CloudWatch Internet Monitor internet measurements to Amazon S3. The configuration includes the bucket name and (optionally) bucket prefix for the S3 bucket to store the measurements, and the delivery status. The delivery status is `ENABLED` if you choose to deliver internet measurements to S3 logs, and `DISABLED` otherwise.
│           │  The measurements are also published to Amazon CloudWatch Logs.
│           └ properties
│              ├ BucketName: (documentation changed)
│              ├ BucketPrefix: (documentation changed)
│              └ LogDeliveryStatus: (documentation changed)
├[~] service aws-iot
│ └ resources
│    ├[~] resource AWS::IoT::SoftwarePackage
│    │ └ properties
│    │    ├ Description: (documentation changed)
│    │    ├ PackageName: (documentation changed)
│    │    └ Tags: (documentation changed)
│    └[~] resource AWS::IoT::SoftwarePackageVersion
│      └ properties
│         ├ Attributes: (documentation changed)
│         ├ Description: (documentation changed)
│         ├ PackageName: (documentation changed)
│         ├ Tags: (documentation changed)
│         └ VersionName: (documentation changed)
├[~] service aws-iottwinmaker
│ └ resources
│    ├[~] resource AWS::IoTTwinMaker::ComponentType
│    │ ├ properties
│    │ │  └ CompositeComponentTypes: (documentation changed)
│    │ └ types
│    │    ├[~] type CompositeComponentType
│    │    │ ├  - documentation: An object that sets information about a composite component type.
│    │    │ │  + documentation: Specifies the ID of the composite component type.
│    │    │ └ properties
│    │    │    └ ComponentTypeId: (documentation changed)
│    │    └[~] type PropertyDefinition
│    │      └ properties
│    │         └ IsExternalId: (documentation changed)
│    └[~] resource AWS::IoTTwinMaker::Entity
│      ├ properties
│      │  ├ CompositeComponents: (documentation changed)
│      │  └ WorkspaceId: (documentation changed)
│      └ types
│         └[~] type CompositeComponent
│           ├  - documentation: undefined
│           │  + documentation: Information about a composite component.
│           └ properties
│              ├ ComponentPath: (documentation changed)
│              ├ ComponentTypeId: (documentation changed)
│              ├ Description: (documentation changed)
│              ├ Properties: (documentation changed)
│              ├ PropertyGroups: (documentation changed)
│              └ Status: (documentation changed)
├[~] service aws-lambda
│ └ resources
│    └[~] resource AWS::Lambda::EventInvokeConfig
│      └ attributes
│         └[-] Id: string
├[~] service aws-logs
│ └ resources
│    ├[~] resource AWS::Logs::DeliveryDestination
│    │ └  - documentation: This structure contains information about one *delivery destination* in your account. A delivery destination is an AWS resource that represents an AWS service that logs can be sent to. CloudWatch Logs, Amazon S3, are supported as Kinesis Data Firehose delivery destinations.
│    │    To configure logs delivery between a supported AWS service and a destination, you must do the following:
│    │    - Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see [PutDeliverySource](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliverySource.html) .
│    │    - Create a *delivery destination* , which is a logical object that represents the actual delivery destination.
│    │    - If you are delivering logs cross-account, you must use [PutDeliveryDestinationPolicy](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestinationolicy.html) in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.
│    │    - Create a *delivery* by pairing exactly one delivery source and one delivery destination. For more information, see [CreateDelivery](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateDelivery.html) .
│    │    You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.
│    │    + documentation: This structure contains information about one *delivery destination* in your account. A delivery destination is an AWS resource that represents an AWS service that logs can be sent to. CloudWatch Logs, Amazon S3, are supported as Kinesis Data Firehose delivery destinations.
│    │    To configure logs delivery between a supported AWS service and a destination, you must do the following:
│    │    - Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see [PutDeliverySource](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliverySource.html) .
│    │    - Create a *delivery destination* , which is a logical object that represents the actual delivery destination.
│    │    - If you are delivering logs cross-account, you must use [PutDeliveryDestinationPolicy](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestinationPolicy.html) in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.
│    │    - Create a *delivery* by pairing exactly one delivery source and one delivery destination. For more information, see [CreateDelivery](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateDelivery.html) .
│    │    You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.
│    └[~] resource AWS::Logs::DeliverySource
│      ├  - documentation: This structure contains information about one *delivery source* in your account. A delivery source is an AWS resource that sends logs to an AWS destination. The destination can be CloudWatch Logs, Amazon S3, or Kinesis Data Firehose.
│      │  Only some AWS services support being configured as a delivery source. These services are listed as *Supported [V2 Permissions]* in the table at [Enabling logging from AWS services.](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html)
│      │  To configure logs delivery between a supported AWS service and a destination, you must do the following:
│      │  - Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see [PutDeliverySource](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliverySource.html) .
│      │  - Create a *delivery destination* , which is a logical object that represents the actual delivery destination. For more information, see [PutDeliveryDestination](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html) .
│      │  - If you are delivering logs cross-account, you must use [PutDeliveryDestinationPolicy](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestinationolicy.html) in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.
│      │  - Create a *delivery* by pairing exactly one delivery source and one delivery destination. For more information, see [CreateDelivery](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateDelivery.html) .
│      │  You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.
│      │  + documentation: This structure contains information about one *delivery source* in your account. A delivery source is an AWS resource that sends logs to an AWS destination. The destination can be CloudWatch Logs, Amazon S3, or Kinesis Data Firehose.
│      │  Only some AWS services support being configured as a delivery source. These services are listed as *Supported [V2 Permissions]* in the table at [Enabling logging from AWS services.](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html)
│      │  To configure logs delivery between a supported AWS service and a destination, you must do the following:
│      │  - Create a delivery source, which is a logical object that represents the resource that is actually sending the logs. For more information, see [PutDeliverySource](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliverySource.html) .
│      │  - Create a *delivery destination* , which is a logical object that represents the actual delivery destination. For more information, see [PutDeliveryDestination](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestination.html) .
│      │  - If you are delivering logs cross-account, you must use [PutDeliveryDestinationPolicy](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutDeliveryDestinationPolicy.html) in the destination account to assign an IAM policy to the destination. This policy allows delivery to that destination.
│      │  - Create a *delivery* by pairing exactly one delivery source and one delivery destination. For more information, see [CreateDelivery](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateDelivery.html) .
│      │  You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.
│      └ properties
│         └ ResourceArn: (documentation changed)
├[~] service aws-opensearchservice
│ └ resources
│    └[~] resource AWS::OpenSearchService::Domain
│      └ properties
│         └ IPAddressType: (documentation changed)
├[~] service aws-organizations
│ └ resources
│    └[~] resource AWS::Organizations::Policy
│      └ properties
│         └ Content: (documentation changed)
├[~] service aws-osis
│ └ resources
│    └[~] resource AWS::OSIS::Pipeline
│      ├ properties
│      │  ├ BufferOptions: (documentation changed)
│      │  └ EncryptionAtRestOptions: (documentation changed)
│      └ types
│         ├[~] type BufferOptions
│         │ └  - documentation: Key-value pairs to configure buffering.
│         │    + documentation: Options that specify the configuration of a persistent buffer. To configure how OpenSearch Ingestion encrypts this data, set the EncryptionAtRestOptions.
│         └[~] type EncryptionAtRestOptions
│           ├  - documentation: Key-value pairs to configure encryption at rest.
│           │  + documentation: Options to control how OpenSearch encrypts all data-at-rest.
│           └ properties
│              └ KmsKeyArn: (documentation changed)
├[~] service aws-route53resolver
│ └ resources
│    └[~] resource AWS::Route53Resolver::ResolverConfig
├[~] service aws-s3
│ └ resources
│    └[~] resource AWS::S3::Bucket
│      └ types
│         ├[~] type FilterRule
│         │ └  - documentation: Specifies the Amazon S3 object key name to filter on and whether to filter on the suffix or prefix of the key name.
│         │    + documentation: Specifies the Amazon S3 object key name to filter on. An object key name is the name assigned to an object in your Amazon S3 bucket. You can also specify whether to filter on the suffix or prefix of the object key name. A prefix is a specific string of characters at the beginning of an object key name, which you can use to organize objects. For example, you can start the key names of related objects with a prefix, such as `2023-` or `engineering/` . Then, you can use `FilterRule` to find objects in a bucket with key names that have the same prefix. A suffix is similar to a prefix, but it is at the end of the object key name instead of at the beginning.
│         └[~] type ReplicationConfiguration
│           └  - documentation: A container for replication rules. You can add up to 1,000 rules. The maximum size of a replication configuration is 2 MB.
│              + documentation: A container for replication rules. You can add up to 1,000 rules. The maximum size of a replication configuration is 2 MB. The latest version of the replication configuration XML is V2. For more information about XML V2 replication configurations, see [Replication configuration](https://docs.aws.amazon.com/AmazonS3/latest/userguide/replication-add-config.html) in the *Amazon S3 User Guide* .
├[~] service aws-s3outposts
│ └ resources
│    ├[~] resource AWS::S3Outposts::Bucket
│    │ └ properties
│    │    └ OutpostId: (documentation changed)
│    └[~] resource AWS::S3Outposts::Endpoint
│      └ properties
│         └ OutpostId: (documentation changed)
├[~] service aws-sagemaker
│ └ resources
│    ├[~] resource AWS::SageMaker::Domain
│    │ ├ attributes
│    │ │  └[+] SingleSignOnApplicationArn: string
│    │ └ types
│    │    └[~] type CodeEditorAppSettings
│    │      └ properties
│    │         └[-] CustomImages: Array<CustomImage>
│    ├[~] resource AWS::SageMaker::FeatureGroup
│    │ └ types
│    │    └[~] type OnlineStoreConfig
│    │      └ properties
│    │         └ StorageType: (documentation changed)
│    └[~] resource AWS::SageMaker::UserProfile
│      └ types
│         └[~] type CodeEditorAppSettings
│           └ properties
│              └[-] CustomImages: Array<CustomImage>
├[~] service aws-securityhub
│ └ resources
│    └[~] resource AWS::SecurityHub::Hub
│      ├ properties
│      │  └ Tags: - json
│      │          + Map<string, string> ⇐ json
│      └ attributes
│         ├[+] ARN: string
│         └[+] SubscribedAt: string
├[~] service aws-servicecatalogappregistry
│ └ resources
│    └[~] resource AWS::ServiceCatalogAppRegistry::Application
│      └ attributes
│         ├[+] ApplicationName: string
│         ├[+] ApplicationTagKey: string
│         └[+] ApplicationTagValue: string
├[~] service aws-sns
│ └ resources
│    ├[~] resource AWS::SNS::Subscription
│    │ └ properties
│    │    └[+] ReplayPolicy: json
│    └[~] resource AWS::SNS::Topic
│      ├ properties
│      │  └ DeliveryStatusLogging: (documentation changed)
│      └ types
│         └[~] type LoggingConfig
│           ├  - documentation: undefined
│           │  + documentation: The `LoggingConfig` property type specifies the `Delivery` status logging configuration for an [`AWS::SNS::Topic`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-sns-topic.html) .
│           └ properties
│              ├ FailureFeedbackRoleArn: (documentation changed)
│              ├ Protocol: (documentation changed)
│              ├ SuccessFeedbackRoleArn: (documentation changed)
│              └ SuccessFeedbackSampleRate: (documentation changed)
├[~] service aws-ssm
│ └ resources
│    └[~] resource AWS::SSM::Parameter
│      └ properties
│         └ Type: (documentation changed)
└[~] service aws-transfer
  └ resources
     ├[~] resource AWS::Transfer::Server
     │ ├ properties
     │ │  └ S3StorageOptions: (documentation changed)
     │ └ types
     │    ├[~] type EndpointDetails
     │    │ └ properties
     │    │    └ AddressAllocationIds: (documentation changed)
     │    └[~] type S3StorageOptions
     │      ├  - documentation: undefined
     │      │  + documentation: The Amazon S3 storage options that are configured for your server.
     │      └ properties
     │         └ DirectoryListingOptimization: (documentation changed)
     └[~] resource AWS::Transfer::User
       └ types
          └[~] type HomeDirectoryMapEntry
            └ properties
               └ Type: (documentation changed)
```
…authorizers (aws#28411)

I was using CDK and found just a few small typos, so I submitted this PR...

One is a method name, but it should not be a breaking change since it is in a private scope.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…to aws-autoscaling (aws#28396)

Closes aws#28395

Adds the On-Demand `lowest-price` allocation strategy enum for aws-autoscaling. 

https://docs.aws.amazon.com/autoscaling/ec2/userguide/allocation-strategies.html#on-demand-allocation-strategy
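A minimal usage sketch, assuming the new value is exposed as `OnDemandAllocationStrategy.LOWEST_PRICE` alongside the existing `instancesDistribution` options (the surrounding construct wiring is illustrative):

```ts
import { Stack } from 'aws-cdk-lib';
import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

declare const stack: Stack;
declare const vpc: ec2.IVpc;
declare const launchTemplate: ec2.LaunchTemplate;

// Sketch: ask EC2 Auto Scaling to fulfil On-Demand capacity from the
// lowest-priced instance types first.
new autoscaling.AutoScalingGroup(stack, 'Asg', {
  vpc,
  mixedInstancesPolicy: {
    launchTemplate,
    instancesDistribution: {
      // Assumed enum member added by this change.
      onDemandAllocationStrategy: autoscaling.OnDemandAllocationStrategy.LOWEST_PRICE,
    },
  },
});
```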

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…ect]` (aws#28414)

**CDK Version**: 2.115.0 (build 58027ee)
**Os**: macOS 14.2 (BuildVersion: 23C64)

I have observed the following warning showing up in my console today when running `cdk`:

> [Warning at /CdkStack/AuthorizerFunction] [object Object]

I was able to track down where this message was generated and apply a patch to see the error in a more descriptive format. 

For the record, the error in my case was:

> addPermission() has no effect on a Lambda Function with region=${Token[TOKEN.23]}, account=${Token[TOKEN.24]}, in a Stack with region=${Token[AWS.Region.12]}, account=${Token[AWS.AccountId.8]}. Suppress this warning if this is is intentional, or pass sameEnvironment=true to fromFunctionAttributes() if you would like to add the permissions. [ack: UnclearLambdaEnvironment]

The fix proposed here makes sure that if the message is not a plain string, it is still rendered in a readable form instead of `[object Object]`.
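For illustration only, a minimal sketch of that kind of defensive conversion; `renderMessage` is a hypothetical helper, not the CLI's actual code:

```ts
// Hypothetical helper: coerce any warning payload into a readable string
// before it reaches the console printer.
function renderMessage(message: unknown): string {
  if (typeof message === 'string') {
    return message;
  }
  try {
    return JSON.stringify(message, undefined, 2);
  } catch {
    return String(message);
  }
}

// Instead of printing the raw value (which renders as "[object Object]"
// for non-strings), print the rendered form.
console.warn(renderMessage({ ack: 'UnclearLambdaEnvironment' }));
```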

I am not sure this is the best way to fix this issue. The signature of the `addMessage` seems to expect a `string` for the `message` value, so maybe the error needs to be corrected downstream where the `addMessage` call is made (which judging from the stack trace seems to come from `aws-cdk-lib/aws-lambda/lib/function-base.js`).

Thoughts?

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…s#27799)

Closes aws#27449

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…ls (aws#27787)

Because `AnyPrincipal` extends `ArnPrincipal`, it gets caught up in the checks for parsing the ARN from the principal to get the account. This check should be skipped when the ARN is set to `"*"` because that can't be parsed.
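A minimal sketch of the guard being described; the function name and parsing call are illustrative, not the library's actual internals:

```ts
import { Arn, ArnFormat } from 'aws-cdk-lib';
import { ArnPrincipal } from 'aws-cdk-lib/aws-iam';

// Illustrative guard: AnyPrincipal reports its "ARN" as '*', which cannot
// be parsed, so skip account extraction entirely in that case.
function accountIdFromArnPrincipal(principal: ArnPrincipal): string | undefined {
  if (principal.arn === '*') {
    return undefined;
  }
  return Arn.split(principal.arn, ArnFormat.SLASH_RESOURCE_NAME).account;
}
```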

Closes aws#27783.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Fix typo in method name (`convertArnPrincpalToAccountId` -> `convertArnPrincipalToAccountId`) and another `princpal` typo. 

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
mrgrain and others added 2 commits January 11, 2024 20:20
…ws#28669)

`--debug` exists for exactly one purpose: Printing source-mapped traces so we can find the code that is going wrong. 
Let's always enable tracing when debugging.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
otaviomacedo and others added 14 commits January 11, 2024 21:32
…ws#28672)

ECS now supports managed instance draining which facilitates graceful termination of Amazon ECS instances for Capacity Providers.

Add a new constructor property, `enableManagedDraining`, to `AsgCapacityProvider`, to allow users to enable this feature.
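A usage sketch, under the assumption that the property is a simple boolean on `AsgCapacityProviderProps`:

```ts
import { Stack } from 'aws-cdk-lib';
import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';
import * as ecs from 'aws-cdk-lib/aws-ecs';

declare const stack: Stack;
declare const cluster: ecs.Cluster;
declare const autoScalingGroup: autoscaling.AutoScalingGroup;

// Sketch: opt the capacity provider in to managed instance draining so
// ECS drains tasks before instances are terminated.
const capacityProvider = new ecs.AsgCapacityProvider(stack, 'CapacityProvider', {
  autoScalingGroup,
  enableManagedDraining: true, // the new constructor property
});

cluster.addAsgCapacityProvider(capacityProvider);
```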

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…ws#28660)

>  Can't destroy a stack that includes a rds database and rds parameter group where the database has removalPolicy property set to RemovalPolicy.RETAIN

### The following is the current behaviour:
```
const parameterGroup = new ParameterGroup(this, 'ParameterGroup', {
    ...
})

const database = new DatabaseInstance(this, 'DatabaseInstance', {
    parameterGroup: parameterGroup,
    removalPolicy: RemovalPolicy.RETAIN,
    ...
})
```

When destroying the stack
```
When I destroy this stack I see the following errors:

2:04:24 PM | DELETE_FAILED        | AWS::RDS::DBParameterGroup                  | ParameterGroup5E32DECB
One or more database instances are still members of this parameter group xxx-database-parametergroup5e32decb-daetrwpaqpgw, so the group cannot be deleted (Service: Rd
s, Status Code: 400, Request ID: 389b18db-ea82-482b-a0e6-f64887da6f82)

2:19:21 PM | DELETE_FAILED        | AWS::EC2::SecurityGroup                     | DatabaseInstanceSecurityGroup8BDF0112
resource sg-0bfc8aacb3d3e3d4a has a dependent object (Service: AmazonEC2; Status Code: 400; Error Code: DependencyViolation; Request ID: 1eac5393-83df-48cf-bd75-41f25abb04
7a; Proxy: null)

```

As pointed out in the issue linked below, we cannot simply use the RDS cluster's or instance's removal policy, because the parameter group can be simultaneously bound to a cluster and an instance.

### New behaviour:
Add an optional property `removalPolicy` to the L2 Parameter Group resource and set the deletion policy on the generated L1 parameter group (for either a cluster or an instance) depending on the usage.
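A minimal sketch of the new behaviour, assuming the option is surfaced as `removalPolicy` on `ParameterGroupProps` (the engine and VPC wiring here are illustrative):

```ts
import { RemovalPolicy, Stack } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as rds from 'aws-cdk-lib/aws-rds';

declare const stack: Stack;
declare const vpc: ec2.IVpc;

const engine = rds.DatabaseInstanceEngine.postgres({
  version: rds.PostgresEngineVersion.VER_15,
});

// Retain the parameter group alongside the retained instance, so stack
// deletion no longer fails on the still-attached parameter group.
const parameterGroup = new rds.ParameterGroup(stack, 'ParameterGroup', {
  engine,
  removalPolicy: RemovalPolicy.RETAIN, // the new optional property
});

new rds.DatabaseInstance(stack, 'DatabaseInstance', {
  engine,
  vpc,
  parameterGroup,
  removalPolicy: RemovalPolicy.RETAIN,
});
```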

Added unit test and integration test to verify that it works as expected.

Closes aws#22141

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Updates the L1 CloudFormation resource definitions with the latest changes from `@aws-cdk/aws-service-spec`

**L1 CloudFormation resource definition changes:**
```
├[~] service aws-acmpca
│ └ resources
│    └[~] resource AWS::ACMPCA::CertificateAuthority
│      └ types
│         ├[~] type CrlConfiguration
│         │ └ properties
│         │    └[+] CrlDistributionPointExtensionConfiguration: CrlDistributionPointExtensionConfiguration
│         └[+] type CrlDistributionPointExtensionConfiguration
│           ├  documentation: Configures the default behavior of the CRL Distribution Point extension for certificates issued by your certificate authority
│           │  name: CrlDistributionPointExtensionConfiguration
│           └ properties
│              └OmitExtension: boolean (required)
├[~] service aws-aps
│ └ resources
│    └[~] resource AWS::APS::Workspace
│      └ properties
│         └[+] KmsKeyArn: string (immutable)
├[~] service aws-cloudtrail
│ └ resources
│    ├[~] resource AWS::CloudTrail::EventDataStore
│    │ └ types
│    │    └[~] type AdvancedFieldSelector
│    │      └ properties
│    │         └ Field: (documentation changed)
│    └[~] resource AWS::CloudTrail::Trail
│      └ types
│         └[~] type AdvancedFieldSelector
│           └ properties
│              └ Field: (documentation changed)
├[~] service aws-codebuild
│ └ resources
│    └[~] resource AWS::CodeBuild::Project
│      └ types
│         └[~] type Environment
│           └ properties
│              └ Type: (documentation changed)
├[~] service aws-dlm
│ └ resources
│    └[~] resource AWS::DLM::LifecyclePolicy
│      └ properties
│         └ DefaultPolicy: (documentation changed)
├[~] service aws-docdb
│ └ resources
│    └[~] resource AWS::DocDB::DBCluster
│      └ properties
│         └[+] StorageType: string
├[~] service aws-ec2
│ └ resources
│    └[~] resource AWS::EC2::NetworkInterface
│      ├ properties
│      │  ├[+] ConnectionTrackingSpecification: ConnectionTrackingSpecification
│      │  └ EnablePrimaryIpv6: (documentation changed)
│      ├ attributes
│      │  └ PrimaryIpv6Address: (documentation changed)
│      └ types
│         └[+] type ConnectionTrackingSpecification
│           ├  documentation: A security group connection tracking specification that enables you to set the idle timeout for connection tracking on an Elastic network interface. For more information, see [Connection tracking timeouts](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-connection-tracking.html#connection-tracking-timeouts) in the *Amazon Elastic Compute Cloud User Guide* .
│           │  name: ConnectionTrackingSpecification
│           └ properties
│              ├TcpEstablishedTimeout: integer
│              ├UdpStreamTimeout: integer
│              └UdpTimeout: integer
├[~] service aws-ecs
│ └ resources
│    ├[~] resource AWS::ECS::CapacityProvider
│    │ └ types
│    │    └[~] type AutoScalingGroupProvider
│    │      └ properties
│    │         └ ManagedDraining: (documentation changed)
│    └[~] resource AWS::ECS::TaskSet
│      └  - documentation: Create a task set in the specified cluster and service. This is used when a service uses the `EXTERNAL` deployment controller type. For more information, see [Amazon ECS deployment types](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-types.html) in the *Amazon Elastic Container Service Developer Guide* .
│         + documentation: Create a task set in the specified cluster and service. This is used when a service uses the `EXTERNAL` deployment controller type. For more information, see [Amazon ECS deployment types](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-types.html) in the *Amazon Elastic Container Service Developer Guide* .
│         You can create a maximum of 5 tasks sets for a deployment.
├[~] service aws-elasticache
│ └ resources
│    └[~] resource AWS::ElastiCache::ServerlessCache
│      └ properties
│         └ SubnetIds: (documentation changed)
├[~] service aws-fis
│ └ resources
│    └[~] resource AWS::FIS::ExperimentTemplate
│      ├  - documentation: Describes an experiment template.
│      │  + documentation: Specifies an experiment template.
│      │  An experiment template includes the following components:
│      │  - *Targets* : A target can be a specific resource in your AWS environment, or one or more resources that match criteria that you specify, for example, resources that have specific tags.
│      │  - *Actions* : The actions to carry out on the target. You can specify multiple actions, the duration of each action, and when to start each action during an experiment.
│      │  - *Stop conditions* : If a stop condition is triggered while an experiment is running, the experiment is automatically stopped. You can define a stop condition as a CloudWatch alarm.
│      │  For more information, see [Experiment templates](https://docs.aws.amazon.com/fis/latest/userguide/experiment-templates.html) in the *AWS Fault Injection Service User Guide* .
│      └ types
│         ├[~] type ExperimentTemplateAction
│         │ └  - documentation: Describes an action for an experiment template.
│         │    + documentation: Specifies an action for an experiment template.
│         │    For more information, see [Actions](https://docs.aws.amazon.com/fis/latest/userguide/actions.html) in the *AWS Fault Injection Service User Guide* .
│         ├[~] type ExperimentTemplateLogConfiguration
│         │ ├  - documentation: Describes the configuration for experiment logging.
│         │ │  + documentation: Specifies the configuration for experiment logging.
│         │ │  For more information, see [Experiment logging](https://docs.aws.amazon.com/fis/latest/userguide/monitoring-logging.html) in the *AWS Fault Injection Service User Guide* .
│         │ └ properties
│         │    ├ CloudWatchLogsConfiguration: (documentation changed)
│         │    └ S3Configuration: (documentation changed)
│         ├[~] type ExperimentTemplateStopCondition
│         │ └  - documentation: Describes a stop condition for an experiment template.
│         │    + documentation: Specifies a stop condition for an experiment template.
│         │    For more information, see [Stop conditions](https://docs.aws.amazon.com/fis/latest/userguide/stop-conditions.html) in the *AWS Fault Injection Service User Guide* .
│         ├[~] type ExperimentTemplateTarget
│         │ ├  - documentation: Describes a target for an experiment template.
│         │ │  + documentation: Specifies a target for an experiment. You must specify at least one Amazon Resource Name (ARN) or at least one resource tag. You cannot specify both ARNs and tags.
│         │ │  For more information, see [Targets](https://docs.aws.amazon.com/fis/latest/userguide/targets.html) in the *AWS Fault Injection Service User Guide* .
│         │ └ properties
│         │    └ Parameters: (documentation changed)
│         └[~] type ExperimentTemplateTargetFilter
│           └  - documentation: Describes a filter used for the target resources in an experiment template.
│              + documentation: Specifies a filter used for the target resource input in an experiment template.
│              For more information, see [Resource filters](https://docs.aws.amazon.com/fis/latest/userguide/targets.html#target-filters) in the *AWS Fault Injection Service User Guide* .
├[~] service aws-fsx
│ └ resources
│    ├[~] resource AWS::FSx::FileSystem
│    │ ├  - documentation: The `AWS::FSx::FileSystem` resource is an Amazon FSx resource type that specifies an Amazon FSx file system. You can create any of the following supported file system types:
│    │ │  - Amazon FSx for Lustre
│    │ │  - Amazon FSx for NetApp ONTAP
│    │ │  - Amazon FSx for OpenZFS
│    │ │  - Amazon FSx for Windows File Server
│    │ │  + documentation: The `AWS::FSx::FileSystem` resource is an Amazon FSx resource type that specifies an Amazon FSx file system. You can create any of the following supported file system types:
│    │ │  - Amazon FSx for Lustre
│    │ │  - Amazon FSx for NetApp ONTAP
│    │ │  - FSx for OpenZFS
│    │ │  - Amazon FSx for Windows File Server
│    │ └ properties
│    │    ├ LustreConfiguration: (documentation changed)
│    │    ├ StorageCapacity: (documentation changed)
│    │    └ WindowsConfiguration: (documentation changed)
│    └[~] resource AWS::FSx::Volume
│      └ types
│         ├[~] type AggregateConfiguration
│         │ ├  - documentation: Used to specify configuration options for a volume’s storage aggregate or aggregates.
│         │ │  + documentation: Use to specify configuration options for a volume’s storage aggregate or aggregates.
│         │ └ properties
│         │    └ ConstituentsPerAggregate: (documentation changed)
│         └[~] type OntapConfiguration
│           └ properties
│              ├ AggregateConfiguration: (documentation changed)
│              ├ SizeInBytes: (documentation changed)
│              ├ StorageEfficiencyEnabled: (documentation changed)
│              └ VolumeStyle: (documentation changed)
├[~] service aws-guardduty
│ └ resources
│    └[~] resource AWS::GuardDuty::IPSet
│      └ properties
│         └ Name: - string (required)
│                 + string
├[~] service aws-iot
│ └ resources
│    └[~] resource AWS::IoT::DomainConfiguration
│      ├ properties
│      │  └[-] ServerCertificateConfig: ServerCertificateConfig
│      └ types
│         └[-] type ServerCertificateConfig
│           ├  name: ServerCertificateConfig
│           └ properties
│              └EnableOCSPCheck: boolean
├[~] service aws-lambda
│ └ resources
│    └[~] resource AWS::Lambda::Function
│      └ types
│         └[~] type LoggingConfig
│           └ properties
│              ├ ApplicationLogLevel: (documentation changed)
│              └ SystemLogLevel: (documentation changed)
├[~] service aws-location
│ └ resources
│    └[~] resource AWS::Location::Map
│      └ types
│         └[~] type MapConfiguration
│           └ properties
│              └ Style: (documentation changed)
├[~] service aws-quicksight
│ └ resources
│    ├[~] resource AWS::QuickSight::Analysis
│    │ └ properties
│    │    ├[+] Errors: Array<AnalysisError>
│    │    └[+] Sheets: Array<Sheet>
│    └[~] resource AWS::QuickSight::Topic
│      └ properties
│         └[+] UserExperienceVersion: string
├[~] service aws-rds
│ └ resources
│    └[~] resource AWS::RDS::EventSubscription
│      └ properties
│         └ SnsTopicArn: (documentation changed)
├[~] service aws-redshift
│ └ resources
│    └[~] resource AWS::Redshift::Cluster
│      ├ properties
│      │  ├ ManageMasterPassword: (documentation changed)
│      │  ├ MasterPasswordSecretKmsKeyId: (documentation changed)
│      │  └ NamespaceResourcePolicy: (documentation changed)
│      └ attributes
│         ├ ClusterNamespaceArn: (documentation changed)
│         └ Id: (documentation changed)
├[~] service aws-redshiftserverless
│ └ resources
│    └[~] resource AWS::RedshiftServerless::Workgroup
│      └ types
│         └[~] type Workgroup
│           └ properties
│              └ ConfigParameters: (documentation changed)
├[~] service aws-route53
│ └ resources
│    ├[~] resource AWS::Route53::RecordSet
│    │ └ properties
│    │    └ GeoLocation: (documentation changed)
│    └[~] resource AWS::Route53::RecordSetGroup
│      ├ attributes
│      │  └ Id: (documentation changed)
│      └ types
│         └[~] type RecordSet
│           └ properties
│              └ GeoLocation: (documentation changed)
├[~] service aws-sagemaker
│ └ resources
│    ├[~] resource AWS::SageMaker::FeatureGroup
│    │ ├ properties
│    │ │  └[+] ThroughputConfig: ThroughputConfig
│    │ └ types
│    │    └[+] type ThroughputConfig
│    │      ├  name: ThroughputConfig
│    │      └ properties
│    │         ├ThroughputMode: string (required)
│    │         ├ProvisionedReadCapacityUnits: integer
│    │         └ProvisionedWriteCapacityUnits: integer
│    ├[~] resource AWS::SageMaker::Model
│    │ └ types
│    │    ├[+] type ModelAccessConfig
│    │    │ ├  documentation: The access configuration file for the ML model. You can explicitly accept the model end-user license agreement (EULA) within the `ModelAccessConfig` . For more information, see [End-user license agreements](https://docs.aws.amazon.com/sagemaker/latest/dg/jumpstart-foundation-models-choose.html#jumpstart-foundation-models-choose-eula) .
│    │    │ │  name: ModelAccessConfig
│    │    │ └ properties
│    │    │    └AcceptEula: boolean (required)
│    │    └[~] type S3DataSource
│    │      └ properties
│    │         ├[+] ModelAccessConfig: ModelAccessConfig
│    │         └ S3Uri: (documentation changed)
│    └[~] resource AWS::SageMaker::ModelPackage
│      └ types
│         └[~] type S3DataSource
│           └ properties
│              └ S3Uri: (documentation changed)
├[~] service aws-ssm
│ └ resources
│    └[~] resource AWS::SSM::Parameter
│      └ properties
│         └ Type: (documentation changed)
└[~] service aws-transfer
  └ resources
     └[~] resource AWS::Transfer::Connector
       └  - documentation: Creates the connector, which captures the parameters for a connection for the AS2 or SFTP protocol. For AS2, the connector is required for sending files to an externally hosted AS2 server. For SFTP, the connector is required when sending files to an SFTP server or receiving files from an SFTP server. For more details about connectors, see [Create AS2 connectors](https://docs.aws.amazon.com/transfer/latest/userguide/create-b2b-server.html#configure-as2-connector) and [Create SFTP connectors](https://docs.aws.amazon.com/transfer/latest/userguide/configure-sftp-connector.html) .
          > You must specify exactly one configuration object: either for AS2 ( `As2Config` ) or SFTP ( `SftpConfig` ).
          + documentation: Creates the connector, which captures the parameters for a connection for the AS2 or SFTP protocol. For AS2, the connector is required for sending files to an externally hosted AS2 server. For SFTP, the connector is required when sending files to an SFTP server or receiving files from an SFTP server. For more details about connectors, see [Configure AS2 connectors](https://docs.aws.amazon.com/transfer/latest/userguide/configure-as2-connector.html) and [Create SFTP connectors](https://docs.aws.amazon.com/transfer/latest/userguide/configure-sftp-connector.html) .
          > You must specify exactly one configuration object: either for AS2 ( `As2Config` ) or SFTP ( `SftpConfig` ).
```
add abstraction team to mergify and merit badger

----

*By submitting this pull request, I confirm that my contribution is made
under the terms of the Apache-2.0 license*

Co-authored-by: GZ <[email protected]>
…ws#28672)

ECS now supports managed instance draining which facilitates graceful termination of Amazon ECS instances for Capacity Providers.

Add a new constructor property, `enableManagedDraining`, to `AsgCapacityProvider`, to allow users to enable this feature.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*

(cherry picked from commit aaa2a09)
Updates the L1 CloudFormation resource definitions with the latest changes from `@aws-cdk/aws-service-spec`

**L1 CloudFormation resource definition changes:**
```
├[~] service aws-ec2
│ └ resources
│    ├[~] resource AWS::EC2::IPAMPool
│    │ ├ properties
│    │ │  └[+] SourceResource: SourceResource (immutable)
│    │ └ types
│    │    └[+] type SourceResource
│    │      ├  documentation: The resource associated with this pool's space. Depending on the ResourceType, setting a SourceResource changes which space can be provisioned in this pool and which types of resources can receive allocations
│    │      │  name: SourceResource
│    │      └ properties
│    │         ├ResourceId: string (required)
│    │         ├ResourceType: string (required)
│    │         ├ResourceRegion: string (required)
│    │         └ResourceOwner: string (required)
│    └[~] resource AWS::EC2::NetworkInterface
│      ├ properties
│      │  └ ConnectionTrackingSpecification: (documentation changed)
│      └ types
│         └[~] type ConnectionTrackingSpecification
│           └  - documentation: A security group connection tracking specification that enables you to set the idle timeout for connection tracking on an Elastic network interface. For more information, see [Connection tracking timeouts](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-connection-tracking.html#connection-tracking-timeouts) in the *Amazon Elastic Compute Cloud User Guide* .
│              + documentation: Configurable options for connection tracking on a network interface. For more information, see [Connection tracking timeouts](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-connection-tracking.html#connection-tracking-timeouts) in the *Amazon Elastic Compute Cloud User Guide* .
├[~] service aws-ecs
│ └ resources
│    └[~] resource AWS::ECS::TaskDefinition
│      └ types
│         └[~] type Volume
│           └ properties
│              └[+] ConfiguredAtLaunch: boolean
├[~] service aws-fsx
│ └ resources
│    ├[~] resource AWS::FSx::FileSystem
│    │ ├ properties
│    │ │  └ SecurityGroupIds: (documentation changed)
│    │ └ types
│    │    ├[~] type OntapConfiguration
│    │    │ └ properties
│    │    │    └ RouteTableIds: (documentation changed)
│    │    └[~] type UserAndGroupQuotas
│    │      ├  - documentation: The configuration for how much storage a user or group can use on the volume.
│    │      │  + documentation: Used to configure quotas that define how much storage a user or group can use on an FSx for OpenZFS volume. For more information, see [Volume properties](https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/managing-volumes.html#volume-properties) in the FSx for OpenZFS User Guide.
│    │      └ properties
│    │         ├ Id: (documentation changed)
│    │         ├ StorageCapacityQuotaGiB: (documentation changed)
│    │         └ Type: (documentation changed)
│    └[~] resource AWS::FSx::Volume
│      └ types
│         ├[~] type OntapConfiguration
│         │ └ properties
│         │    ├ AggregateConfiguration: (documentation changed)
│         │    ├ OntapVolumeType: (documentation changed)
│         │    ├ SizeInBytes: (documentation changed)
│         │    ├ SizeInMegabytes: (documentation changed)
│         │    ├ SnapshotPolicy: (documentation changed)
│         │    └ VolumeStyle: (documentation changed)
│         ├[~] type OpenZFSConfiguration
│         │ └ properties
│         │    └ UserAndGroupQuotas: (documentation changed)
│         └[~] type UserAndGroupQuotas
│           ├  - documentation: An object specifying how much storage users or groups can use on the volume.
│           │  + documentation: Configures how much storage users and groups can use on the volume.
│           └ properties
│              ├ Id: (documentation changed)
│              ├ StorageCapacityQuotaGiB: (documentation changed)
│              └ Type: (documentation changed)
├[~] service aws-guardduty
│ └ resources
│    └[~] resource AWS::GuardDuty::ThreatIntelSet
│      └ properties
│         └ Name: - string (required)
│                 + string
├[~] service aws-imagebuilder
│ └ resources
│    └[~] resource AWS::ImageBuilder::LifecyclePolicy
│      └ types
│         └[~] type RecipeSelection
│           └ properties
│              └ SemanticVersion: - string
│                                 + string (required)
├[~] service aws-kendra
│ └ resources
│    └[~] resource AWS::Kendra::DataSource
│      └ types
│         └[~] type S3DataSourceConfiguration
│           └  - documentation: Provides the configuration information to connect to an Amazon S3 bucket.
│              + documentation: Provides the configuration information to connect to an Amazon S3 bucket.
│              > `S3DataSourceConfiguration` is deprecated. Amazon VPC is not supported if you configure your Amazon S3 connector with this method. Use [TemplateConfiguration](https://docs.aws.amazon.com/kendra/latest/APIReference/API_TemplateConfiguration.html) to configure your Amazon S3 connector instead. See [Amazon S3 template schema](https://docs.aws.amazon.com/kendra/latest/dg/ds-schemas.html#ds-s3-schema) for more details.
├[~] service aws-managedblockchain
│ └ resources
│    └[~] resource AWS::ManagedBlockchain::Accessor
│      └ properties
│         └ NetworkType: (documentation changed)
├[~] service aws-networkmanager
│ └ resources
│    └[~] resource AWS::NetworkManager::Device
│      └ attributes
│         └ CreatedAt: (documentation changed)
├[~] service aws-redshiftserverless
│ └ resources
│    └[~] resource AWS::RedshiftServerless::Workgroup
│      ├ properties
│      │  └ ConfigParameters: (documentation changed)
│      └ types
│         ├[~] type ConfigParameter
│         │ └ properties
│         │    └ ParameterKey: (documentation changed)
│         └[~] type Workgroup
│           └ properties
│              └ ConfigParameters: (documentation changed)
├[~] service aws-sagemaker
│ └ resources
│    └[~] resource AWS::SageMaker::FeatureGroup
│      ├ properties
│      │  └ ThroughputConfig: (documentation changed)
│      └ types
│         └[~] type ThroughputConfig
│           ├  - documentation: undefined
│           │  + documentation: Used to set feature group throughput configuration. There are two modes: `ON_DEMAND` and `PROVISIONED` . With on-demand mode, you are charged for data reads and writes that your application performs on your feature group. You do not need to specify read and write throughput because Feature Store accommodates your workloads as they ramp up and down. You can switch a feature group to on-demand only once in a 24 hour period. With provisioned throughput mode, you specify the read and write capacity per second that you expect your application to require, and you are billed based on those limits. Exceeding provisioned throughput will result in your requests being throttled.
│           │  Note: `PROVISIONED` throughput mode is supported only for feature groups that are offline-only, or use the [`Standard`](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_OnlineStoreConfig.html#sagemaker-Type-OnlineStoreConfig-StorageType) tier online store.
│           └ properties
│              ├ ProvisionedReadCapacityUnits: (documentation changed)
│              ├ ProvisionedWriteCapacityUnits: (documentation changed)
│              └ ThroughputMode: (documentation changed)
└[~] service aws-verifiedpermissions
  └ resources
     ├[~] resource AWS::VerifiedPermissions::Policy
     │ └ properties
     │    └ PolicyStoreId: - string (immutable)
     │                     + string (required, immutable)
     └[~] resource AWS::VerifiedPermissions::PolicyStore
       └ properties
          └[+] Description: string
```
Make the sync workflow a bit more efficient by only fetching the branches we're actually planning on syncing from `upstream`.

Also document the limitations of GitHub Actions tokens more clearly.


----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
mrgrain and others added 3 commits January 12, 2024 14:24
When execution of the CDK app program fails, we don't print any useful debug information. This makes sense because we pass all output from the program to the shell, expecting that to be enough to debug any faults. However, the program might be faulty in a way that prints no (useful) output. To help with this case, print the failing command when `--debug` is enabled.
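The general shape of that change looks roughly like the following; this is an illustrative pattern only, and `runShellCommand` is a hypothetical helper rather than the CLI's real code:

```ts
declare function runShellCommand(command: string): Promise<void>; // hypothetical shell helper

async function execApp(command: string, debug: boolean): Promise<void> {
  try {
    await runShellCommand(command);
  } catch (err) {
    if (debug) {
      // Surface the exact command so a faulty app can be re-run by hand.
      console.error(`Failed to execute app command: ${command}`);
    }
    throw err;
  }
}
```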

This might require a follow-up with a better DX for the generic non-debug case. For now, this will improve the situation.

Related to aws#28637

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
… to install latest sdk version (aws#28688)

This introduces uncertainty into the resource behavior, adds at least 60s to the execution time, and will cause deployments in CN regions to fail.

No tests were added because the existing tests run with the `@aws-cdk/customresources:installLatestAwsSdkDefault` feature flag set to the recommended value. This change merely changes the `OpenSearchAccessPolicy` config for users that don't set the feature flag. We can safely do this because we control the code for this custom resource and know it works with the provided SDK version.
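For reference, this is the general shape of the knob on an `AwsCustomResource`; the service and action shown are assumptions about what `OpenSearchAccessPolicy` calls, not a copy of its internals:

```ts
import { Stack } from 'aws-cdk-lib';
import * as cr from 'aws-cdk-lib/custom-resources';

declare const stack: Stack;

// Sketch: rely on the SDK version bundled with the Lambda runtime instead
// of installing the latest SDK at runtime (slow, and unavailable in some
// partitions such as CN regions).
new cr.AwsCustomResource(stack, 'AccessPolicy', {
  onUpdate: {
    service: 'OpenSearch',        // assumed service name
    action: 'updateDomainConfig', // assumed API call
    parameters: { DomainName: 'my-domain' },
    physicalResourceId: cr.PhysicalResourceId.of('AccessPolicy'),
  },
  policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
    resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
  installLatestAwsSdk: false, // the config change described above
});
```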

Related to aws#27597

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
@scanlonp
Copy link
Owner Author

Merged into upstream here aws#28480.

@scanlonp scanlonp closed this Jan 18, 2024