The number of requests subtracted from the given limit in the initial time period.
" + "documentation":"The day that a time period starts. For example, with a time period of WEEK
, an offset of 0
starts on Sunday, and an offset of 1
starts on Monday.
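The offset rule above can be sketched as a small lookup (a hypothetical helper; it only assumes, as the text states, that offset 0 maps to Sunday for a WEEK period):

```python
# Hypothetical helper illustrating the WEEK offset semantics described above:
# offset 0 starts the period on Sunday, offset 1 on Monday, and so on.
DAYS = ["Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"]

def week_start_day(offset: int) -> str:
    if not 0 <= offset <= 6:
        raise ValueError("offset for a WEEK time period must be 0-6")
    return DAYS[offset]
```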
Creates an AWS Batch compute environment. You can create MANAGED
or UNMANAGED
compute environments. MANAGED
compute environments can use Amazon EC2 or AWS Fargate resources. UNMANAGED
compute environments can only use EC2 resources.
In a managed compute environment, AWS Batch manages the capacity and instance types of the compute resources within the environment. This is based on the compute resource specification that you define or the launch template that you specify when you create the compute environment. You can choose either to use EC2 On-Demand Instances and EC2 Spot Instances, or to use Fargate and Fargate Spot capacity in your managed compute environment. You can optionally set a maximum price so that Spot Instances only launch when the Spot Instance price is less than a specified percentage of the On-Demand price.
Multi-node parallel jobs are not supported on Spot Instances.
In an unmanaged compute environment, you can manage your own EC2 compute resources and have a lot of flexibility with how you configure your compute resources. For example, you can use a custom AMI. However, you need to verify that your AMI meets the Amazon ECS container instance AMI specification. For more information, see container instance AMIs in the Amazon Elastic Container Service Developer Guide. After you have created your unmanaged compute environment, you can use the DescribeComputeEnvironments operation to find the Amazon ECS cluster that's associated with it. Then, manually launch your container instances into that Amazon ECS cluster. For more information, see Launching an Amazon ECS container instance in the Amazon Elastic Container Service Developer Guide.
AWS Batch doesn't upgrade the AMIs in a compute environment after it's created. For example, it doesn't update the AMIs when a newer version of the Amazon ECS-optimized AMI is available. Therefore, you're responsible for the management of the guest operating system (including updates and security patches) and any additional application software or utilities that you install on the compute resources. To use a new AMI for your AWS Batch jobs, complete these steps:
Create a new compute environment with the new AMI.
Add the compute environment to an existing job queue.
Remove the earlier compute environment from your job queue.
Delete the earlier compute environment.
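The queue-side bookkeeping in steps 2 and 3 can be modeled as plain data manipulation on the job queue's computeEnvironmentOrder list. This is a hypothetical local helper, not an AWS API call; the real change is submitted through the UpdateJobQueue operation:

```python
def rotate_compute_environment(order, old_ce_arn, new_ce_arn):
    """Model steps 2 and 3 above on a queue's computeEnvironmentOrder list
    (entries look like {"order": 1, "computeEnvironment": arn}).
    Hypothetical sketch; apply the result via UpdateJobQueue."""
    # Step 2: append the new compute environment after the existing ones.
    next_order = max((e["order"] for e in order), default=0) + 1
    updated = order + [{"order": next_order, "computeEnvironment": new_ce_arn}]
    # Step 3: remove the earlier compute environment from the queue.
    return [e for e in updated if e["computeEnvironment"] != old_ce_arn]
```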
Deletes the specified job queue. You must first disable submissions for a queue with the UpdateJobQueue operation. All jobs in the queue are eventually terminated when you delete a job queue. The jobs are terminated at a rate of about 16 jobs each second.
It's not necessary to disassociate compute environments from a queue before submitting a DeleteJobQueue
request.
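The quoted termination rate gives a back-of-the-envelope way to estimate how long a deleted queue takes to drain. The ~16 jobs per second figure is approximate, so this is only a rough sketch, not a guarantee:

```python
import math

def estimated_drain_seconds(job_count, jobs_per_second=16):
    # Jobs in a deleted queue are terminated at roughly 16 per second
    # (per the text above); treat the result as an estimate only.
    return math.ceil(job_count / jobs_per_second)
```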
The job index within the array that is associated with this job. This parameter is returned for array job children.
" + "documentation":"The job index within the array that's associated with this job. This parameter is returned for array job children.
" } }, "documentation":"An object representing the array properties of a job.
" @@ -329,7 +329,7 @@ }, "index":{ "shape":"Integer", - "documentation":"The job index within the array that is associated with this job. This parameter is returned for children of array jobs.
" + "documentation":"The job index within the array that's associated with this job. This parameter is returned for children of array jobs.
" } }, "documentation":"An object representing the array properties of a job.
" @@ -350,7 +350,7 @@ }, "taskArn":{ "shape":"String", - "documentation":"The Amazon Resource Name (ARN) of the Amazon ECS task that is associated with the job attempt. Each container attempt receives a task ARN when they reach the STARTING
status.
The Amazon Resource Name (ARN) of the Amazon ECS task that's associated with the job attempt. Each container attempt receives a task ARN when it reaches the STARTING
status.
The network interfaces associated with the job attempt.
" } }, - "documentation":"An object representing the details of a container that is part of a job attempt.
" + "documentation":"An object representing the details of a container that's part of a job attempt.
" }, "AttemptDetail":{ "type":"structure", @@ -455,7 +455,8 @@ "shape":"String", "documentation":"A message to attach to the job that explains the reason for canceling it. This message is returned by future DescribeJobs operations on the job. This message is also recorded in the AWS Batch activity logs.
" } - } + }, + "documentation":"Contains the parameters for CancelJob
.
These errors are usually caused by a client action, such as using an action or resource on behalf of a user that doesn't have permissions to use the action or resource, or specifying an identifier that isn't valid.
", + "documentation":"These errors are usually caused by a client action, such as using an action or resource on behalf of a user that doesn't have permissions to use the action or resource, or specifying an identifier that's not valid.
", "error":{"httpStatusCode":400}, "exception":true }, @@ -481,7 +482,7 @@ "members":{ "computeEnvironmentName":{ "shape":"String", - "documentation":"The name of the compute environment.
" + "documentation":"The name of the compute environment. Up to 128 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.
" }, "computeEnvironmentArn":{ "shape":"String", @@ -497,11 +498,11 @@ }, "type":{ "shape":"CEType", - "documentation":"The type of compute environment: EC2
, SPOT
, FARGATE
, or FARGATE_SPOT
. For more information, see Compute Environments in the AWS Batch User Guide.
The type of the compute environment: MANAGED
or UNMANAGED
. For more information, see Compute Environments in the AWS Batch User Guide.
The state of the compute environment. The valid values are ENABLED
or DISABLED
.
If the state is ENABLED
, then the AWS Batch scheduler can attempt to place jobs from an associated job queue on the compute resources within the environment. If the compute environment is managed, then it can scale its instances out or in automatically, based on the job queue demand.
If the state is DISABLED
, then the AWS Batch scheduler doesn't attempt to place jobs within the environment. Jobs in a STARTING
or RUNNING
state continue to progress normally. Managed compute environments in the DISABLED
state do not scale out. However, they scale in to minvCpus
value after instances become idle.
The state of the compute environment. The valid values are ENABLED
or DISABLED
.
If the state is ENABLED
, then the AWS Batch scheduler can attempt to place jobs from an associated job queue on the compute resources within the environment. If the compute environment is managed, then it can scale its instances out or in automatically, based on the job queue demand.
If the state is DISABLED
, then the AWS Batch scheduler doesn't attempt to place jobs within the environment. Jobs in a STARTING
or RUNNING
state continue to progress normally. Managed compute environments in the DISABLED
state don't scale out. However, they scale in to minvCpus
value after instances become idle.
The compute resources defined for the compute environment.
" + "documentation":"The compute resources defined for the compute environment. For more information, see Compute Environments in the AWS Batch User Guide.
" }, "serviceRole":{ "shape":"String", - "documentation":"The service role associated with the compute environment that allows AWS Batch to make calls to AWS API operations on your behalf.
" + "documentation":"The service role associated with the compute environment that allows AWS Batch to make calls to AWS API operations on your behalf. For more information, see AWS Batch service IAM role in the AWS Batch User Guide.
" } }, "documentation":"An object representing an AWS Batch compute environment.
" @@ -542,7 +543,7 @@ "documentation":"The Amazon Resource Name (ARN) of the compute environment.
" } }, - "documentation":"The order in which compute environments are tried for job placement within a queue. Compute environments are tried in ascending order. For example, if two compute environments are associated with a job queue, the compute environment with a lower order integer value is tried for job placement first.
" + "documentation":"The order in which compute environments are tried for job placement within a queue. Compute environments are tried in ascending order. For example, if two compute environments are associated with a job queue, the compute environment with a lower order integer value is tried for job placement first. Compute environments must be in the VALID
state before you can associate them with a job queue. All of the compute environments must be either EC2 (EC2
or SPOT
) or Fargate (FARGATE
or FARGATE_SPOT
); EC2 and Fargate compute environments can't be mixed.
All compute environments that are associated with a job queue must share the same architecture. AWS Batch doesn't support mixing compute environment architecture types in a single job queue.
The type of compute environment: EC2
, SPOT
, FARGATE
, or FARGATE_SPOT
. For more information, see Compute Environments in the AWS Batch User Guide.
The type of compute environment: EC2
, SPOT
, FARGATE
, or FARGATE_SPOT
. For more information, see Compute Environments in the AWS Batch User Guide.
If you choose SPOT
, you must also specify an Amazon EC2 Spot Fleet role with the spotIamFleetRole
parameter. For more information, see Amazon EC2 Spot Fleet role in the AWS Batch User Guide.
The maximum number of Amazon EC2 vCPUs that an environment can reach.
With both BEST_FIT_PROGRESSIVE
and SPOT_CAPACITY_OPTIMIZED
allocation strategies, AWS Batch may need to go above maxvCpus
to meet your capacity requirements. In this event, AWS Batch will never go above maxvCpus
by more than a single instance (e.g., no more than a single instance from among those specified in your compute environment).
The maximum number of Amazon EC2 vCPUs that a compute environment can reach.
With both BEST_FIT_PROGRESSIVE
and SPOT_CAPACITY_OPTIMIZED
allocation strategies, AWS Batch might need to go above maxvCpus
to meet your capacity requirements. In this event, AWS Batch will never go above maxvCpus
by more than a single instance (e.g., no more than a single instance from among those specified in your compute environment).
The instance types that may be launched. You can specify instance families to launch any instance type within those families (for example, c5
or p3
), or you can specify specific sizes within a family (such as c5.8xlarge
). You can also choose optimal
to select instance types (from the C, M, and R instance families) on the fly that match the demand of your job queues.
This parameter isn't applicable to jobs running on Fargate resources, and shouldn't be specified.
When you create a compute environment, the instance types that you select for the compute environment must share the same architecture. For example, you can't mix x86 and ARM instances in the same compute environment.
The instance types that can be launched. You can specify instance families to launch any instance type within those families (for example, c5
or p3
), or you can specify specific sizes within a family (such as c5.8xlarge
). You can also choose optimal
to select instance types (from the C4, M4, and R4 instance families) on the fly that match the demand of your job queues.
This parameter isn't applicable to jobs running on Fargate resources, and shouldn't be specified.
When you create a compute environment, the instance types that you select for the compute environment must share the same architecture. For example, you can't mix x86 and ARM instances in the same compute environment.
Currently, optimal
uses instance types from the C4, M4, and R4 instance families. In Regions that don't have instance types from those instance families, instance types from the C5, M5. and R5 instance families are used.
The Amazon EC2 key pair that is used for instances launched in the compute environment. You can use this key pair to log in to your instances with SSH.
This parameter isn't applicable to jobs running on Fargate resources, and shouldn't be specified.
The Amazon EC2 key pair that's used for instances launched in the compute environment. You can use this key pair to log in to your instances with SSH.
This parameter isn't applicable to jobs running on Fargate resources, and shouldn't be specified.
Key-value pair tags to be applied to EC2 resources that are launched in the compute environment. For AWS Batch, these take the form of \"String1\": \"String2\", where String1 is the tag key and String2 is the tag value—for example, { \"Name\": \"AWS Batch Instance - C4OnDemand\" }. This is helpful for recognizing your AWS Batch instances in the Amazon EC2 console. These tags can not be updated or removed after the compute environment has been created; any changes require creating a new compute environment and removing the old compute environment. These tags are not seen when using the AWS Batch ListTagsForResource
API operation.
Key-value pair tags to be applied to EC2 resources that are launched in the compute environment. For AWS Batch, these take the form of \"String1\": \"String2\", where String1 is the tag key and String2 is the tag value, for example, { \"Name\": \"AWS Batch Instance - C4OnDemand\" }. This is helpful for recognizing your AWS Batch instances in the Amazon EC2 console. These tags can't be updated or removed after the compute environment has been created; any changes require creating a new compute environment and removing the old compute environment. These tags are not seen when using the AWS Batch ListTagsForResource
API operation.
This parameter isn't applicable to jobs running on Fargate resources, and shouldn't be specified.
The maximum percentage that a Spot Instance price can be when compared with the On-Demand price for that instance type before instances are launched. For example, if your maximum percentage is 20%, then the Spot price must be below 20% of the current On-Demand price for that Amazon EC2 instance. You always pay the lowest (market) price and never more than your maximum percentage. If you leave this field empty, the default value is 100% of the On-Demand price.
This parameter isn't applicable to jobs running on Fargate resources, and shouldn't be specified.
The maximum percentage that a Spot Instance price can be when compared with the On-Demand price for that instance type before instances are launched. For example, if your maximum percentage is 20%, then the Spot price must be less than 20% of the current On-Demand price for that Amazon EC2 instance. You always pay the lowest (market) price and never more than your maximum percentage. If you leave this field empty, the default value is 100% of the On-Demand price.
This parameter isn't applicable to jobs running on Fargate resources, and shouldn't be specified.
Provides information used to select Amazon Machine Images (AMIs) for instances in the compute environment. If Ec2Configuration
isn't specified, the default is ECS_AL1
.
This parameter isn't applicable to jobs running on Fargate resources, and shouldn't be specified.
Provides information used to select Amazon Machine Images (AMIs) for EC2 instances in the compute environment. If Ec2Configuration
isn't specified, the default is ECS_AL1
.
This parameter isn't applicable to jobs running on Fargate resources, and shouldn't be specified.
An object representing an AWS Batch compute resource.
" + "documentation":"An object representing an AWS Batch compute resource. For more information, see Compute Environments in the AWS Batch User Guide.
" }, "ComputeResourceUpdate":{ "type":"structure", "members":{ "minvCpus":{ "shape":"Integer", - "documentation":"The minimum number of Amazon EC2 vCPUs that an environment should maintain.
This parameter isnt applicable to jobs running on Fargate resources, and shouldn't be specified.
The minimum number of Amazon EC2 vCPUs that an environment should maintain.
This parameter isn't applicable to jobs running on Fargate resources, and shouldn't be specified.
The maximum number of Amazon EC2 vCPUs that an environment can reach.
" + "documentation":"The maximum number of Amazon EC2 vCPUs that an environment can reach.
With both BEST_FIT_PROGRESSIVE
and SPOT_CAPACITY_OPTIMIZED
allocation strategies, AWS Batch might need to go above maxvCpus
to meet your capacity requirements. In this event, AWS Batch will never go above maxvCpus
by more than a single instance (e.g., no more than a single instance from among those specified in your compute environment).
The Amazon EC2 security groups associated with instances launched in the compute environment. This parameter is required for Fargate compute resources, where it can contain up to 5 security groups. This can't be specified for EC2 compute resources. Providing an empty list is handled as if this parameter wasn't specified and no changeis made.
" + "documentation":"The Amazon EC2 security groups associated with instances launched in the compute environment. This parameter is required for Fargate compute resources, where it can contain up to 5 security groups. This can't be specified for EC2 compute resources. Providing an empty list is handled as if this parameter wasn't specified and no change is made.
" } }, - "documentation":"An object representing the attributes of a compute environment that can be updated.
" + "documentation":"An object representing the attributes of a compute environment that can be updated. For more information, see Compute Environments in the AWS Batch User Guide.
" }, "ContainerDetail":{ "type":"structure", @@ -672,7 +673,7 @@ }, "command":{ "shape":"StringList", - "documentation":"The command that is passed to the container.
" + "documentation":"The command that's passed to the container.
" }, "jobRoleArn":{ "shape":"String", @@ -724,7 +725,7 @@ }, "taskArn":{ "shape":"String", - "documentation":"The Amazon Resource Name (ARN) of the Amazon ECS task that is associated with the container job. Each container attempt receives a task ARN when they reach the STARTING
status.
The Amazon Resource Name (ARN) of the Amazon ECS task that's associated with the container job. Each container attempt receives a task ARN when they reach the STARTING
status.
The log configuration specification for the container.
This parameter maps to LogConfig
in the Create a container section of the Docker Remote API and the --log-driver
option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However the container might use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance. Or, alternatively, it must be configured on a different log server for remote logging options. For more information on the options for different supported log drivers, see Configure logging drivers in the Docker documentation.
AWS Batch currently supports a subset of the logging drivers available to the Docker daemon (shown in the LogConfiguration data type). Additional log drivers may be available in future releases of the Amazon ECS container agent.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log into your container instance and run the following command: sudo docker version | grep \"Server API version\"
The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the ECS_AVAILABLE_LOGGING_DRIVERS
environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS Container Agent Configuration in the Amazon Elastic Container Service Developer Guide.
The log configuration specification for the container.
This parameter maps to LogConfig
in the Create a container section of the Docker Remote API and the --log-driver
option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However the container might use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance. Or, alternatively, it must be configured on a different log server for remote logging options. For more information on the options for different supported log drivers, see Configure logging drivers in the Docker documentation.
AWS Batch currently supports a subset of the logging drivers available to the Docker daemon (shown in the LogConfiguration data type). Additional log drivers might be available in future releases of the Amazon ECS container agent.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log into your container instance and run the following command: sudo docker version | grep \"Server API version\"
The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the ECS_AVAILABLE_LOGGING_DRIVERS
environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS Container Agent Configuration in the Amazon Elastic Container Service Developer Guide.
The platform configuration for jobs running on Fargate resources. Jobs running on EC2 resources must not specify this parameter.
" } }, - "documentation":"An object representing the details of a container that is part of a job.
" + "documentation":"An object representing the details of a container that's part of a job.
" }, "ContainerOverrides":{ "type":"structure", @@ -820,7 +821,7 @@ }, "command":{ "shape":"StringList", - "documentation":"The command that is passed to the container. This parameter maps to Cmd
in the Create a container section of the Docker Remote API and the COMMAND
parameter to docker run. For more information, see https://docs.docker.com/engine/reference/builder/#cmd.
The command that's passed to the container. This parameter maps to Cmd
in the Create a container section of the Docker Remote API and the COMMAND
parameter to docker run. For more information, see https://docs.docker.com/engine/reference/builder/#cmd.
The platform configuration for jobs running on Fargate resources. Jobs running on EC2 resources must not specify this parameter.
" } }, - "documentation":"Container properties are used in job definitions to describe the container that is launched as part of a job.
" + "documentation":"Container properties are used in job definitions to describe the container that's launched as part of a job.
" }, "ContainerSummary":{ "type":"structure", @@ -921,7 +922,7 @@ }, "state":{ "shape":"CEState", - "documentation":"The state of the compute environment. If the state is ENABLED
, then the compute environment accepts jobs from a queue and can scale out automatically based on queues.
The state of the compute environment. If the state is ENABLED
, then the compute environment accepts jobs from a queue and can scale out automatically based on queues.
If the state is ENABLED
, then the AWS Batch scheduler can attempt to place jobs from an associated job queue on the compute resources within the environment. If the compute environment is managed, then it can scale its instances out or in automatically, based on the job queue demand.
If the state is DISABLED
, then the AWS Batch scheduler doesn't attempt to place jobs within the environment. Jobs in a STARTING
or RUNNING
state continue to progress normally. Managed compute environments in the DISABLED
state don't scale out. However, they scale in to minvCpus
value after instances become idle.
The full Amazon Resource Name (ARN) of the IAM role that allows AWS Batch to make calls to other AWS services on your behalf.
If your specified role has a path other than /
, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path.
Depending on how you created your AWS Batch service role, its ARN may contain the service-role
path prefix. When you only specify the name of the service role, AWS Batch assumes that your ARN doesn't use the service-role
path prefix. Because of this, we recommend that you specify the full ARN of your service role when you create compute environments.
The full Amazon Resource Name (ARN) of the IAM role that allows AWS Batch to make calls to other AWS services on your behalf. For more information, see AWS Batch service IAM role in the AWS Batch User Guide.
If your specified role has a path other than /
, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path.
Depending on how you created your AWS Batch service role, its ARN might contain the service-role
path prefix. When you only specify the name of the service role, AWS Batch assumes that your ARN doesn't use the service-role
path prefix. Because of this, we recommend that you specify the full ARN of your service role when you create compute environments.
The tags that you apply to the compute environment to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see Tagging AWS Resources in AWS General Reference.
These tags can be updated or removed using the TagResource and UntagResource API operations. These tags don't propagate to the underlying compute resources.
" } - } + }, + "documentation":"Contains the parameters for CreateComputeEnvironment
.
The name of the compute environment.
" + "documentation":"The name of the compute environment. Up to 128 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.
" }, "computeEnvironmentArn":{ "shape":"String", @@ -960,7 +962,7 @@ "members":{ "jobQueueName":{ "shape":"String", - "documentation":"The name of the job queue.
" + "documentation":"The name of the job queue. Up to 128 letters (uppercase and lowercase), numbers, and underscores are allowed.
" }, "state":{ "shape":"JQState", @@ -968,17 +970,18 @@ }, "priority":{ "shape":"Integer", - "documentation":"The priority of the job queue. Job queues with a higher priority (or a higher integer value for the priority
parameter) are evaluated first when associated with the same compute environment. Priority is determined in descending order. For example, a job queue with a priority value of 10
is given scheduling preference over a job queue with a priority value of 1
.
The priority of the job queue. Job queues with a higher priority (or a higher integer value for the priority
parameter) are evaluated first when associated with the same compute environment. Priority is determined in descending order. For example, a job queue with a priority value of 10
is given scheduling preference over a job queue with a priority value of 1
. All of the compute environments must be either EC2 (EC2
or SPOT
) or Fargate (FARGATE
or FARGATE_SPOT
); EC2 and Fargate compute environments cannot be mixed.
The set of compute environments mapped to a job queue and their order relative to each other. The job scheduler uses this parameter to determine which compute environment should run a specific job. Compute environments must be in the VALID
state before you can associate them with a job queue. You can associate up to three compute environments with a job queue. All of the compute environments must be either EC2 (EC2
or SPOT
) or Fargate (FARGATE
or FARGATE_SPOT
); EC2 and Fargate compute environments can't be mixed.
The set of compute environments mapped to a job queue and their order relative to each other. The job scheduler uses this parameter to determine which compute environment should run a specific job. Compute environments must be in the VALID
state before you can associate them with a job queue. You can associate up to three compute environments with a job queue. All of the compute environments must be either EC2 (EC2
or SPOT
) or Fargate (FARGATE
or FARGATE_SPOT
); EC2 and Fargate compute environments can't be mixed.
All compute environments that are associated with a job queue must share the same architecture. AWS Batch doesn't support mixing compute environment architecture types in a single job queue.
The tags that you apply to the job queue to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see Tagging AWS Resources in AWS General Reference.
" + "documentation":"The tags that you apply to the job queue to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see Tagging your AWS Batch resources in AWS Batch User Guide.
" } - } + }, + "documentation":"Contains the parameters for CreateJobQueue
.
The name or Amazon Resource Name (ARN) of the compute environment to delete.
" } - } + }, + "documentation":"Contains the parameters for DeleteComputeEnvironment
.
The short name or full Amazon Resource Name (ARN) of the queue to delete.
" } - } + }, + "documentation":"Contains the parameters for DeleteJobQueue
.
The nextToken
value returned from a previous paginated DescribeComputeEnvironments
request where maxResults
was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken
value. This value is null
when there are no more results to return.
This token should be treated as an opaque identifier that is only used to retrieve the next items in a list and not for other programmatic purposes.
The nextToken
value returned from a previous paginated DescribeComputeEnvironments
request where maxResults
was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken
value. This value is null
when there are no more results to return.
This token should be treated as an opaque identifier that's only used to retrieve the next items in a list and not for other programmatic purposes.
Contains the parameters for DescribeComputeEnvironments
.
The nextToken
value returned from a previous paginated DescribeJobDefinitions
request where maxResults
was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken
value. This value is null
when there are no more results to return.
This token should be treated as an opaque identifier that is only used to retrieve the next items in a list and not for other programmatic purposes.
The nextToken
value returned from a previous paginated DescribeJobDefinitions
request where maxResults
was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken
value. This value is null
when there are no more results to return.
This token should be treated as an opaque identifier that's only used to retrieve the next items in a list and not for other programmatic purposes.
Contains the parameters for DescribeJobDefinitions
.
The nextToken
value returned from a previous paginated DescribeJobQueues
request where maxResults
was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken
value. This value is null
when there are no more results to return.
This token should be treated as an opaque identifier that is only used to retrieve the next items in a list and not for other programmatic purposes.
The nextToken
value returned from a previous paginated DescribeJobQueues
request where maxResults
was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken
value. This value is null
when there are no more results to return.
This token should be treated as an opaque identifier that's only used to retrieve the next items in a list and not for other programmatic purposes.
Contains the parameters for DescribeJobQueues
.
A list of up to 100 job IDs.
" } - } + }, + "documentation":"Contains the parameters for DescribeJobs
.
The image type to match with the instance type to select an AMI. If the imageIdOverride
parameter isn't specified, then a recent Amazon ECS-optimized AMI is used.
Amazon Linux 2− Default for all AWS Graviton-based instance families (for example, C6g
, M6g
, R6g
, and T4g
) and can be used for all non-GPU instance types.
Amazon Linux 2 (GPU)−Default for all GPU instance families (for example P4
and G4
) and can be used for all non-AWS Graviton-based instance types.
Amazon Linux−Default for all non-GPU, non-AWS-Graviton instance families. Amazon Linux is reaching the end-of-life of standard support. For more information, see Amazon Linux AMI.
The image type to match with the instance type to select an AMI. If the imageIdOverride
parameter isn't specified, then a recent Amazon ECS-optimized AMI is used.
Amazon Linux 2− Default for all AWS Graviton-based instance families (for example, C6g
, M6g
, R6g
, and T4g
) and can be used for all non-GPU instance types.
Amazon Linux 2 (GPU)−Default for all GPU instance families (for example P4
and G4
) and can be used for all non-AWS Graviton-based instance types.
Amazon Linux−Default for all non-GPU, non-AWS Graviton instance families. Amazon Linux is reaching the end-of-life of standard support. For more information, see Amazon Linux AMI.
The path on the host container instance that is presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you. If this parameter contains a file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the source path location does not exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.
This parameter isn't applicable to jobs running on Fargate resources and shouldn't be provided.
The path on the host container instance that's presented to the container. If this parameter is empty, then the Docker daemon assigns a host path for you. If this parameter contains a file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the source path location doesn't exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.
This parameter isn't applicable to jobs running on Fargate resources and shouldn't be provided.
Determine whether your data volume persists on the host container instance and where it is stored. If this parameter is empty, then the Docker daemon assigns a host path for your data volume, but the data isn't guaranteed to persist after the containers associated with it stop running.
" @@ -1415,7 +1424,7 @@ }, "status":{ "shape":"JobStatus", - "documentation":"The current status for the job.
If your jobs do not progress to STARTING
, see Jobs Stuck in RUNNABLE Status in the troubleshooting section of the AWS Batch User Guide.
The current status for the job.
If your jobs don't progress to STARTING
, see Jobs Stuck in RUNNABLE Status in the troubleshooting section of the AWS Batch User Guide.
The job definition that is used by this job.
" + "documentation":"The job definition that's used by this job.
" }, "parameters":{ "shape":"ParametersMap", @@ -1459,7 +1468,7 @@ }, "nodeDetails":{ "shape":"NodeDetails", - "documentation":"An object representing the details of a node that is associated with a multi-node parallel job.
" + "documentation":"An object representing the details of a node that's associated with a multi-node parallel job.
" }, "nodeProperties":{ "shape":"NodeProperties", @@ -1524,7 +1533,7 @@ }, "priority":{ "shape":"Integer", - "documentation":"The priority of the job queue.
" + "documentation":"The priority of the job queue. Job queues with a higher priority (or a higher integer value for the priority
parameter) are evaluated first when associated with the same compute environment. Priority is determined in descending order, for example, a job queue with a priority value of 10
is given scheduling preference over a job queue with a priority value of 1
. All of the compute environments must be either EC2 (EC2
or SPOT
) or Fargate (FARGATE
or FARGATE_SPOT
); EC2 and Fargate compute environments cannot be mixed.
The tags applied to the job queue.
" + "documentation":"The tags applied to the job queue. For more information, see Tagging your AWS Batch resources in AWS Batch User Guide.
" } }, "documentation":"An object representing the details of an AWS Batch job queue.
" @@ -1594,7 +1603,7 @@ }, "container":{ "shape":"ContainerSummary", - "documentation":"An object representing the details of the container that is associated with the job.
" + "documentation":"An object representing the details of the container that's associated with the job.
" }, "arrayProperties":{ "shape":"ArrayPropertiesSummary", @@ -1678,7 +1687,7 @@ }, "swappiness":{ "shape":"Integer", - "documentation":"This allows you to tune a container's memory swappiness behavior. A swappiness
value of 0
causes swapping not to happen unless absolutely necessary. A swappiness
value of 100
causes pages to be swapped very aggressively. Accepted values are whole numbers between 0
and 100
. If the swappiness
parameter isn't specified, a default value of 60
is used. If a value isn't specified for maxSwap
then this parameter is ignored. This parameter maps to the --memory-swappiness
option to docker run.
This parameter isn't applicable to jobs running on Fargate resources and shouldn't be provided.
This allows you to tune a container's memory swappiness behavior. A swappiness
value of 0
causes swapping not to happen unless absolutely necessary. A swappiness
value of 100
causes pages to be swapped very aggressively. Accepted values are whole numbers between 0
and 100
. If the swappiness
parameter isn't specified, a default value of 60
is used. If a value isn't specified for maxSwap
then this parameter is ignored. If maxSwap
is set to 0, the container doesn't use swap. This parameter maps to the --memory-swappiness
option to docker run.
Consider the following when you use a per-container swap configuration.
Swap space must be enabled and allocated on the container instance for the containers to use.
The Amazon ECS optimized AMIs don't have swap enabled by default. You must enable swap on the instance to use this feature. For more information, see Instance Store Swap Volumes in the Amazon EC2 User Guide for Linux Instances or How do I allocate memory to work as swap space in an Amazon EC2 instance by using a swap file?
The swap space parameters are only supported for job definitions using EC2 resources.
If the maxSwap
and swappiness
parameters are omitted from a job definition, each container will have a default swappiness
value of 60 and the total swap usage will be limited to two times the memory reservation of the container.
This parameter isn't applicable to jobs running on Fargate resources and shouldn't be provided.
Linux-specific modifications that are applied to the container, such as details for device mappings.
" @@ -1708,9 +1717,10 @@ }, "nextToken":{ "shape":"String", - "documentation":"The nextToken
value returned from a previous paginated ListJobs
request where maxResults
was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken
value. This value is null
when there are no more results to return.
This token should be treated as an opaque identifier that is only used to retrieve the next items in a list and not for other programmatic purposes.
The nextToken
value returned from a previous paginated ListJobs
request where maxResults
was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken
value. This value is null
when there are no more results to return.
This token should be treated as an opaque identifier that's only used to retrieve the next items in a list and not for other programmatic purposes.
Contains the parameters for ListJobs
.
The log driver to use for the container. The valid values listed for this parameter are log drivers that the Amazon ECS container agent can communicate with by default.
The supported log drivers are awslogs
, fluentd
, gelf
, json-file
, journald
, logentries
, syslog
, and splunk
.
Jobs running on Fargate resources are restricted to the awslogs
and splunk
log drivers.
Specifies the Amazon CloudWatch Logs logging driver. For more information, see Using the awslogs Log Driver in the AWS Batch User Guide and Amazon CloudWatch Logs logging driver in the Docker documentation.
Specifies the Fluentd logging driver. For more information, including usage and options, see Fluentd logging driver in the Docker documentation.
Specifies the Graylog Extended Format (GELF) logging driver. For more information, including usage and options, see Graylog Extended Format logging driver in the Docker documentation.
Specifies the journald logging driver. For more information, including usage and options, see Journald logging driver in the Docker documentation.
Specifies the JSON file logging driver. For more information, including usage and options, see JSON File logging driver in the Docker documentation.
Specifies the Splunk logging driver. For more information, including usage and options, see Splunk logging driver in the Docker documentation.
Specifies the syslog logging driver. For more information, including usage and options, see Syslog logging driver in the Docker documentation.
If you have a custom driver that isn't listed earlier that you want to work with the Amazon ECS container agent, you can fork the Amazon ECS container agent project that is available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you want to have included. However, Amazon Web Services doesn't currently support running modified copies of this software.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log into your container instance and run the following command: sudo docker version | grep \"Server API version\"
The log driver to use for the container. The valid values listed for this parameter are log drivers that the Amazon ECS container agent can communicate with by default.
The supported log drivers are awslogs
, fluentd
, gelf
, json-file
, journald
, logentries
, syslog
, and splunk
.
Jobs running on Fargate resources are restricted to the awslogs
and splunk
log drivers.
Specifies the Amazon CloudWatch Logs logging driver. For more information, see Using the awslogs Log Driver in the AWS Batch User Guide and Amazon CloudWatch Logs logging driver in the Docker documentation.
Specifies the Fluentd logging driver. For more information, including usage and options, see Fluentd logging driver in the Docker documentation.
Specifies the Graylog Extended Format (GELF) logging driver. For more information, including usage and options, see Graylog Extended Format logging driver in the Docker documentation.
Specifies the journald logging driver. For more information, including usage and options, see Journald logging driver in the Docker documentation.
Specifies the JSON file logging driver. For more information, including usage and options, see JSON File logging driver in the Docker documentation.
Specifies the Splunk logging driver. For more information, including usage and options, see Splunk logging driver in the Docker documentation.
Specifies the syslog logging driver. For more information, including usage and options, see Syslog logging driver in the Docker documentation.
If you have a custom driver that isn't listed earlier that you want to work with the Amazon ECS container agent, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you want to have included. However, Amazon Web Services doesn't currently support running modified copies of this software.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log into your container instance and run the following command: sudo docker version | grep \"Server API version\"
The name of the volume to mount.
" } }, - "documentation":"Details on a Docker volume mount point that is used in a job's container properties. This parameter maps to Volumes
in the Create a container section of the Docker Remote API and the --volume
option to docker run.
Details on a Docker volume mount point that's used in a job's container properties. This parameter maps to Volumes
in the Create a container section of the Docker Remote API and the --volume
option to docker run.
The node property overrides for the job.
" } }, - "documentation":"Object representing any node overrides to a job definition that is used in a SubmitJob API operation.
This isn't applicable to jobs running on Fargate resources and shouldn't be provided; use containerOverrides
instead.
Object representing any node overrides to a job definition that's used in a SubmitJob API operation.
This isn't applicable to jobs running on Fargate resources and shouldn't be provided; use containerOverrides
instead.
The node index for the node. Node index numbering begins at zero. This index is also available on the node with the AWS_BATCH_JOB_NODE_INDEX
environment variable.
An object representing the properties of a node that is associated with a multi-node parallel job.
" + "documentation":"An object representing the properties of a node that's associated with a multi-node parallel job.
" }, "NodePropertyOverride":{ "type":"structure", @@ -1920,7 +1930,7 @@ "documentation":"The overrides that should be sent to a node range.
" } }, - "documentation":"Object representing any node overrides to a job definition that is used in a SubmitJob API operation.
" + "documentation":"Object representing any node overrides to a job definition that's used in a SubmitJob API operation.
" }, "NodePropertyOverrides":{ "type":"list", @@ -1936,7 +1946,7 @@ "members":{ "targetNodes":{ "shape":"String", - "documentation":"The range of nodes, using node index values. A range of 0:3
indicates nodes with index values of 0
through 3
. If the starting range value is omitted (:n
), then 0
is used to start the range. If the ending range value is omitted (n:
), then the highest possible node index is used to end the range. Your accumulative node ranges must account for all nodes (0:n
). You may nest node ranges, for example 0:10
and 4:5
, in which case the 4:5
range properties override the 0:10
properties.
The range of nodes, using node index values. A range of 0:3
indicates nodes with index values of 0
through 3
. If the starting range value is omitted (:n
), then 0
is used to start the range. If the ending range value is omitted (n:
), then the highest possible node index is used to end the range. Your accumulative node ranges must account for all nodes (0:n
). You can nest node ranges, for example 0:10
and 4:5
, in which case the 4:5
range properties override the 0:10
properties.
The retry strategy to use for failed jobs that are submitted with this job definition. Any retry strategy that is specified during a SubmitJob operation overrides the retry strategy defined here. If a job is terminated due to a timeout, it isn't retried.
" + "documentation":"The retry strategy to use for failed jobs that are submitted with this job definition. Any retry strategy that's specified during a SubmitJob operation overrides the retry strategy defined here. If a job is terminated due to a timeout, it isn't retried.
" }, "propagateTags":{ "shape":"Boolean", @@ -1998,7 +2008,7 @@ }, "timeout":{ "shape":"JobTimeout", - "documentation":"The timeout configuration for jobs that are submitted with this job definition, after which AWS Batch terminates your jobs if they have not finished. If a job is terminated due to a timeout, it isn't retried. The minimum value for the timeout is 60 seconds. Any timeout configuration that is specified during a SubmitJob operation overrides the timeout configuration defined here. For more information, see Job Timeouts in the AWS Batch User Guide.
" + "documentation":"The timeout configuration for jobs that are submitted with this job definition, after which AWS Batch terminates your jobs if they have not finished. If a job is terminated due to a timeout, it isn't retried. The minimum value for the timeout is 60 seconds. Any timeout configuration that's specified during a SubmitJob operation overrides the timeout configuration defined here. For more information, see Job Timeouts in the AWS Batch User Guide.
" }, "tags":{ "shape":"TagrisTagsMap", @@ -2008,7 +2018,8 @@ "shape":"PlatformCapabilityList", "documentation":"The platform capabilities required by the job definition. If no value is specified, it defaults to EC2
. To run the job on Fargate resources, specify FARGATE
.
Contains the parameters for RegisterJobDefinition
.
The quantity of the specified resource to reserve for the container. The values vary based on the type
specified.
The number of physical GPUs to reserve for the container. The number of GPUs reserved for all containers in a job shouldn't exceed the number of available GPUs on the compute resource that the job is launched on.
GPUs are not available for jobs running on Fargate resources.
For jobs running on EC2 resources, the hard limit (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. This parameter maps to Memory
in the Create a container section of the Docker Remote API and the --memory
option to docker run. You must specify at least 4 MiB of memory for a job. This is required but can be specified in several places for multi-node parallel (MNP) jobs. It must be specified for each node at least once. This parameter maps to Memory
in the Create a container section of the Docker Remote API and the --memory
option to docker run. You must specify at least 4 MiB of memory for a job.
If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see Memory Management in the AWS Batch User Guide.
For jobs running on Fargate resources, then value
is the hard limit (in GiB), represented in decimal form, and must match one of the supported values (0.5 and whole numbers between 1 and 30, inclusive) and the VCPU
values must be one of the values supported for that memory value.
VCPU
= 0.25
VCPU
= 0.25 or 0.5
VCPU
= 0.25, 0.5, or 1
VCPU
= 0.5, or 1
VCPU
= 0.5, 1, or 2
VCPU
= 1 or 2
VCPU
= 1, 2, or 4
VCPU
= 2 or 4
VCPU
= 4
The number of vCPUs reserved for the container. This parameter maps to CpuShares
in the Create a container section of the Docker Remote API and the --cpu-shares
option to docker run. Each vCPU is equivalent to 1,024 CPU shares. You must specify at least one vCPU. This is required but can be specified in several places; it must be specified for each node at least once.
For jobs running on Fargate resources, then value
must match one of the supported values and the MEMORY
values must be one of the values supported for that VCPU value. The supported values are 0.25, 0.5, 1, 2, and 4
MEMORY
= 0.5, 1, or 2
MEMORY
= 1, 2, 3, or 4
MEMORY
= 2, 3, 4, 5, 6, 7, or 8
MEMORY
= 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, or 16
MEMORY
= 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, or 30
The quantity of the specified resource to reserve for the container. The values vary based on the type
specified.
The number of physical GPUs to reserve for the container. The number of GPUs reserved for all containers in a job shouldn't exceed the number of available GPUs on the compute resource that the job is launched on.
GPUs are not available for jobs running on Fargate resources.
For jobs running on EC2 resources, the hard limit (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. This parameter maps to Memory
in the Create a container section of the Docker Remote API and the --memory
option to docker run. You must specify at least 4 MiB of memory for a job. This is required but can be specified in several places for multi-node parallel (MNP) jobs. It must be specified for each node at least once. This parameter maps to Memory
in the Create a container section of the Docker Remote API and the --memory
option to docker run.
If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see Memory Management in the AWS Batch User Guide.
For jobs running on Fargate resources, then value
is the hard limit (in MiB), and must match one of the supported values and the VCPU
values must be one of the values supported for that memory value.
VCPU
= 0.25
VCPU
= 0.25 or 0.5
VCPU
= 0.25, 0.5, or 1
VCPU
= 0.5, or 1
VCPU
= 0.5, 1, or 2
VCPU
= 1 or 2
VCPU
= 1, 2, or 4
VCPU
= 2 or 4
VCPU
= 4
The number of vCPUs reserved for the container. This parameter maps to CpuShares
in the Create a container section of the Docker Remote API and the --cpu-shares
option to docker run. Each vCPU is equivalent to 1,024 CPU shares. For EC2 resources, you must specify at least one vCPU. This is required but can be specified in several places; it must be specified for each node at least once.
For jobs running on Fargate resources, then value
must match one of the supported values and the MEMORY
values must be one of the values supported for that VCPU value. The supported values are 0.25, 0.5, 1, 2, and 4
MEMORY
= 512, 1024, or 2048
MEMORY
= 1024, 2048, 3072, or 4096
MEMORY
= 2048, 3072, 4096, 5120, 6144, 7168, or 8192
MEMORY
= 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, or 16384
MEMORY
= 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16384, 17408, 18432, 19456, 20480, 21504, 22528, 23552, 24576, 25600, 26624, 27648, 28672, 29696, or 30720
The number of times to move a job to the RUNNABLE
status. You may specify between 1 and 10 attempts. If the value of attempts
is greater than one, the job is retried on failure the same number of attempts as the value.
The number of times to move a job to the RUNNABLE
status. You can specify between 1 and 10 attempts. If the value of attempts
is greater than one, the job is retried on failure the same number of attempts as the value.
A list of container overrides in JSON format that specify the name of a container in the specified job definition and the overrides it should receive. You can override the default command for a container (that is specified in the job definition or the Docker image) with a command
override. You can also override existing environment variables (that are specified in the job definition or Docker image) on a container or add new environment variables to it with an environment
override.
A list of container overrides in JSON format that specify the name of a container in the specified job definition and the overrides it should receive. You can override the default command for a container (that's specified in the job definition or the Docker image) with a command
override. You can also override existing environment variables (that are specified in the job definition or Docker image) on a container or add new environment variables to it with an environment
override.
The tags that you apply to the job request to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see Tagging AWS Resources in AWS General Reference.
" } - } + }, + "documentation":"Contains the parameters for SubmitJob
.
A message to attach to the job that explains the reason for canceling it. This message is returned by future DescribeJobs operations on the job. This message is also recorded in the AWS Batch activity logs.
" } - } + }, + "documentation":"Contains the parameters for TerminateJob
.
The state of the compute environment. Compute environments in the ENABLED
state can accept jobs from a queue and scale in or out automatically based on the workload demand of its associated queues.
The state of the compute environment. Compute environments in the ENABLED
state can accept jobs from a queue and scale in or out automatically based on the workload demand of its associated queues.
If the state is ENABLED
, then the AWS Batch scheduler can attempt to place jobs from an associated job queue on the compute resources within the environment. If the compute environment is managed, then it can scale its instances out or in automatically, based on the job queue demand.
If the state is DISABLED
, then the AWS Batch scheduler doesn't attempt to place jobs within the environment. Jobs in a STARTING
or RUNNING
state continue to progress normally. Managed compute environments in the DISABLED
state don't scale out. However, they scale in to minvCpus
value after instances become idle.
Details of the compute resources managed by the compute environment. Required for a managed compute environment.
" + "documentation":"Details of the compute resources managed by the compute environment. Required for a managed compute environment. For more information, see Compute Environments in the AWS Batch User Guide.
" }, "serviceRole":{ "shape":"String", - "documentation":"The full Amazon Resource Name (ARN) of the IAM role that allows AWS Batch to make calls to other AWS services on your behalf.
If your specified role has a path other than /
, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path.
Depending on how you created your AWS Batch service role, its ARN might contain the service-role
path prefix. When you only specify the name of the service role, AWS Batch assumes that your ARN does not use the service-role
path prefix. Because of this, we recommend that you specify the full ARN of your service role when you create compute environments.
The full Amazon Resource Name (ARN) of the IAM role that allows AWS Batch to make calls to other AWS services on your behalf. For more information, see AWS Batch service IAM role in the AWS Batch User Guide.
If your specified role has a path other than /
, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path.
Depending on how you created your AWS Batch service role, its ARN might contain the service-role
path prefix. When you only specify the name of the service role, AWS Batch assumes that your ARN does not use the service-role
path prefix. Because of this, we recommend that you specify the full ARN of your service role when you create compute environments.
Contains the parameters for UpdateComputeEnvironment
.
The name of the compute environment.
" + "documentation":"The name of the compute environment. Up to 128 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.
" }, "computeEnvironmentArn":{ "shape":"String", @@ -2400,13 +2414,14 @@ }, "priority":{ "shape":"Integer", - "documentation":"The priority of the job queue. Job queues with a higher priority (or a higher integer value for the priority
parameter) are evaluated first when associated with the same compute environment. Priority is determined in descending order, for example, a job queue with a priority value of 10
is given scheduling preference over a job queue with a priority value of 1
.
The priority of the job queue. Job queues with a higher priority (or a higher integer value for the priority
parameter) are evaluated first when associated with the same compute environment. Priority is determined in descending order, for example, a job queue with a priority value of 10
is given scheduling preference over a job queue with a priority value of 1
. All of the compute environments must be either EC2 (EC2
or SPOT
) or Fargate (FARGATE
or FARGATE_SPOT
); EC2 and Fargate compute environments cannot be mixed.
Details the set of compute environments mapped to a job queue and their order relative to each other. This is one of the parameters used by the job scheduler to determine which compute environment should run a given job. All of the compute environments must be either EC2 (EC2
or SPOT
) or Fargate (FARGATE
or FARGATE_SPOT
); EC2 and Fargate compute environments can't be mixed.
Details the set of compute environments mapped to a job queue and their order relative to each other. This is one of the parameters used by the job scheduler to determine which compute environment should run a given job. Compute environments must be in the VALID
state before you can associate them with a job queue. All of the compute environments must be either EC2 (EC2
or SPOT
) or Fargate (FARGATE
or FARGATE_SPOT
); EC2 and Fargate compute environments can't be mixed.
All compute environments that are associated with a job queue must share the same architecture. AWS Batch doesn't support mixing compute environment architecture types in a single job queue.
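The scheduling rules described in these doc strings — job queues evaluated in descending priority order, and no mixing of EC2 (`EC2`/`SPOT`) and Fargate (`FARGATE`/`FARGATE_SPOT`) compute environments in one queue — can be sketched locally. This is an illustrative sketch, not AWS Batch's actual scheduler; the dict keys (`type`, `order`, `priority`, `name`) are hypothetical.

```python
# Sketch of the rules in the documentation above (not AWS Batch's real
# implementation): EC2 and Fargate compute environments can't be mixed,
# and a higher integer priority wins (10 is preferred over 1).
EC2_TYPES = {"EC2", "SPOT"}
FARGATE_TYPES = {"FARGATE", "FARGATE_SPOT"}

def validate_compute_environment_order(envs):
    """Reject a compute environment list that mixes EC2 and Fargate types."""
    types = {e["type"] for e in envs}
    if types & EC2_TYPES and types & FARGATE_TYPES:
        raise ValueError("EC2 and Fargate compute environments can't be mixed")
    return sorted(envs, key=lambda e: e["order"])

def queues_by_scheduling_preference(queues):
    """Evaluate job queues in descending priority order."""
    return sorted(queues, key=lambda q: q["priority"], reverse=True)
```

A queue with priority 10 therefore sorts ahead of a queue with priority 1, matching the example in the doc string.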
Contains the parameters for UpdateJobQueue
.
Deletes the retention configuration.
" }, + "DeleteStoredQuery":{ + "name":"DeleteStoredQuery", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteStoredQueryRequest"}, + "output":{"shape":"DeleteStoredQueryResponse"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"Deletes the stored query for an AWS account in an AWS Region.
" + }, "DeliverConfigSnapshot":{ "name":"DeliverConfigSnapshot", "http":{ @@ -803,6 +817,20 @@ ], "documentation":"Returns a list of configuration items for the specified resource. The list contains details about each state of the resource during the specified time interval. If you specified a retention period to retain your ConfigurationItems
between a minimum of 30 days and a maximum of 7 years (2557 days), AWS Config returns the ConfigurationItems
for the specified retention period.
The response is paginated. By default, AWS Config returns a limit of 10 configuration items per page. You can customize this number with the limit
parameter. The response includes a nextToken
string. To get the next page of results, run the request again and specify the string for the nextToken
parameter.
Each call to the API is limited to span a duration of seven days. It is likely that the number of records returned is smaller than the specified limit
. In such cases, you can make another call, using the nextToken
.
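The `nextToken` pagination described above can be sketched as a generic drain loop. The `fetch_page` callable is a hypothetical stand-in for an API call such as GetResourceConfigHistory; it is not part of this model.

```python
def get_all_items(fetch_page):
    """Drain a nextToken-paginated API, as described in the documentation above.

    `fetch_page` is a stand-in for the real API call: it takes a nextToken
    (or None for the first page) and returns (items, nextToken), where a
    falsy token means there are no more pages.
    """
    items, token = [], None
    while True:
        page, token = fetch_page(token)
        items.extend(page)
        if not token:
            return items
```

The same loop shape applies to any of the paginated operations in this model (ListAggregateDiscoveredResources, ListStoredQueries, and so on).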
Returns the details of a specific stored query.
" + }, "ListAggregateDiscoveredResources":{ "name":"ListAggregateDiscoveredResources", "http":{ @@ -835,6 +863,20 @@ ], "documentation":"Accepts a resource type and returns a list of resource identifiers for the resources of that type. A resource identifier includes the resource type, ID, and (if available) the custom resource name. The results consist of resources that AWS Config has discovered, including those that AWS Config is not currently recording. You can narrow the results to include only resources that have specific resource IDs or a resource name.
You can specify either resource IDs or a resource name, but not both, in the same request.
The response is paginated. By default, AWS Config lists 100 resource identifiers on each page. You can customize this number with the limit
parameter. The response includes a nextToken
string. To get the next page of results, run the request again and specify the string for the nextToken
parameter.
Lists the stored queries for an AWS account in an AWS Region. The default is 100.
" + }, "ListTagsForResource":{ "name":"ListTagsForResource", "http":{ @@ -1042,7 +1084,7 @@ {"shape":"InvalidParameterValueException"}, {"shape":"InsufficientPermissionsException"} ], - "documentation":"A remediation exception is when a specific resource is no longer considered for auto-remediation. This API adds a new exception or updates an existing exception for a specific resource with a specific AWS Config rule.
AWS Config generates a remediation exception when a problem occurs executing a remediation action to a specific resource. Remediation exceptions blocks auto-remediation until the exception is cleared.
A remediation exception is when a specific resource is no longer considered for auto-remediation. This API adds a new exception or updates an exisiting exception for a specific resource with a specific AWS Config rule.
AWS Config generates a remediation exception when a problem occurs executing a remediation action to a specific resource. Remediation exceptions blocks auto-remediation until the exception is cleared.
Creates and updates the retention configuration with details about retention period (number of days) that AWS Config stores your historical information. The API creates the RetentionConfiguration
object and names the object as default. When you have a RetentionConfiguration
object named default, calling the API modifies the default object.
Currently, AWS Config supports only one retention configuration per region in your account.
Saves a new query or updates an existing saved query. The QueryName
must be unique for an AWS account in an AWS Region. You can create up to 300 queries in an AWS account in an AWS Region.
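A stored query can be pre-checked locally against the shape constraints defined elsewhere in this model (`QueryName`: 1–64 characters matching `^[a-zA-Z0-9-_]+$`; `QueryExpression`: 1–4096 characters). A sketch only; the service performs its own validation on PutStoredQuery.

```python
import re

# Local pre-check of a StoredQuery against the QueryName and
# QueryExpression shape constraints in this model. Illustrative only.
QUERY_NAME_RE = re.compile(r"^[a-zA-Z0-9-_]+$")

def validate_stored_query(query):
    name = query.get("QueryName", "")
    expr = query.get("Expression", "")
    if not (1 <= len(name) <= 64 and QUERY_NAME_RE.match(name)):
        raise ValueError("invalid QueryName")
    if not (1 <= len(expr) <= 4096):
        raise ValueError("invalid Expression")
    return query
```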
Amazon Resource Name (ARN) associated with the resource.
" + "documentation":"accoun
" }, "resourceType":{ "shape":"ResourceType", @@ -2255,11 +2312,11 @@ }, "DeliveryS3Bucket":{ "shape":"DeliveryS3Bucket", - "documentation":"Amazon S3 bucket where AWS Config stores conformance pack templates.
This field is optional.
Conformance pack template that is used to create a pack. The delivery bucket name should start with awsconfigconforms. For example: \"Resource\": \"arn:aws:s3:::your_bucket_name/*\".
" }, "DeliveryS3KeyPrefix":{ "shape":"DeliveryS3KeyPrefix", - "documentation":"The prefix for the Amazon S3 bucket.
This field is optional.
The prefix for the Amazon S3 bucket.
" }, "ConformancePackInputParameters":{ "shape":"ConformancePackInputParameters", @@ -2682,6 +2739,21 @@ } } }, + "DeleteStoredQueryRequest":{ + "type":"structure", + "required":["QueryName"], + "members":{ + "QueryName":{ + "shape":"QueryName", + "documentation":"The name of the query that you want to delete.
" + } + } + }, + "DeleteStoredQueryResponse":{ + "type":"structure", + "members":{ + } + }, "DeliverConfigSnapshotRequest":{ "type":"structure", "required":["deliveryChannelName"], @@ -3503,6 +3575,7 @@ "max":256, "min":0 }, + "ErrorMessage":{"type":"string"}, "Evaluation":{ "type":"structure", "required":[ @@ -4225,6 +4298,25 @@ }, "documentation":"The output for the GetResourceConfigHistory action.
" }, + "GetStoredQueryRequest":{ + "type":"structure", + "required":["QueryName"], + "members":{ + "QueryName":{ + "shape":"QueryName", + "documentation":"The name of the query.
" + } + } + }, + "GetStoredQueryResponse":{ + "type":"structure", + "members":{ + "StoredQuery":{ + "shape":"StoredQuery", + "documentation":"Returns a StoredQuery
object.
The maximum number of resource identifiers returned on each page. You cannot specify a number greater than 100. If you specify 0, AWS Config uses the default.
" + "documentation":"The maximum number of resource identifiers returned on each page. The default is 100. You cannot specify a number greater than 100. If you specify 0, AWS Config uses the default.
" }, "NextToken":{ "shape":"NextToken", @@ -4459,6 +4551,34 @@ }, "documentation":"" }, + "ListStoredQueriesRequest":{ + "type":"structure", + "members":{ + "NextToken":{ + "shape":"String", + "documentation":"The nextToken string returned in a previous request that you use to request the next page of results in a paginated response.
", + "box":true + }, + "MaxResults":{ + "shape":"Limit", + "documentation":"The maximum number of results to be returned with a single call.
", + "box":true + } + } + }, + "ListStoredQueriesResponse":{ + "type":"structure", + "members":{ + "StoredQueryMetadata":{ + "shape":"StoredQueryMetadataList", + "documentation":"A list of StoredQueryMetadata
objects.
If the previous paginated request didn't return all of the remaining results, the response object's NextToken
parameter value is set to a token. To retrieve the next set of results, call this action again and assign that token to the request object's NextToken
parameter. If there are no remaining results, the previous response object's NextToken
parameter is set to null
.
Amazon S3 bucket where AWS Config stores conformance pack templates.
This field is optional.
Location of an Amazon S3 bucket where AWS Config can deliver evaluation results and conformance pack template that is used to create a pack.
" }, "DeliveryS3KeyPrefix":{ "shape":"DeliveryS3KeyPrefix", - "documentation":"Any folder structure you want to add to an Amazon S3 bucket.
This field is optional.
Any folder structure you want to add to an Amazon S3 bucket.
" }, "ConformancePackInputParameters":{ "shape":"ConformancePackInputParameters", @@ -5296,11 +5416,11 @@ }, "DeliveryS3Bucket":{ "shape":"DeliveryS3Bucket", - "documentation":"Amazon S3 bucket where AWS Config stores conformance pack templates.
This field is optional.
AWS Config stores intermediate files while processing conformance pack template.
" }, "DeliveryS3KeyPrefix":{ "shape":"DeliveryS3KeyPrefix", - "documentation":"The prefix for the Amazon S3 bucket.
This field is optional.
The prefix for the Amazon S3 bucket.
" }, "ConformancePackInputParameters":{ "shape":"ConformancePackInputParameters", @@ -5422,11 +5542,11 @@ }, "DeliveryS3Bucket":{ "shape":"DeliveryS3Bucket", - "documentation":"Amazon S3 bucket where AWS Config stores conformance pack templates.
This field is optional.
Location of an Amazon S3 bucket where AWS Config can deliver evaluation results. AWS Config stores intermediate files while processing conformance pack template.
The delivery bucket name should start with awsconfigconforms. For example: \"Resource\": \"arn:aws:s3:::your_bucket_name/*\". For more information, see Permissions for cross account bucket access.
" }, "DeliveryS3KeyPrefix":{ "shape":"DeliveryS3KeyPrefix", - "documentation":"The prefix for the Amazon S3 bucket.
This field is optional.
The prefix for the Amazon S3 bucket.
" }, "ConformancePackInputParameters":{ "shape":"ConformancePackInputParameters", @@ -5554,6 +5674,53 @@ } } }, + "PutStoredQueryRequest":{ + "type":"structure", + "required":["StoredQuery"], + "members":{ + "StoredQuery":{ + "shape":"StoredQuery", + "documentation":"A list of StoredQuery
objects. The mandatory fields are QueryName
and Expression
.
A list of Tags
object.
Amazon Resource Name (ARN) of the query. For example, arn:partition:service:region:account-id:resource-type/resource-id.
" + } + } + }, + "QueryArn":{ + "type":"string", + "max":500, + "min":1, + "pattern":"^arn:aws[a-z\\-]*:config:[a-z\\-\\d]+:\\d+:stored-query/[a-zA-Z0-9-_]+/query-[a-zA-Z\\d-_/]+$" + }, + "QueryDescription":{ + "type":"string", + "max":256, + "min":0, + "pattern":"[\\s\\S]*" + }, + "QueryExpression":{ + "type":"string", + "max":4096, + "min":1, + "pattern":"[\\s\\S]*" + }, + "QueryId":{ + "type":"string", + "max":36, + "min":1, + "pattern":"^\\S+$" + }, "QueryInfo":{ "type":"structure", "members":{ @@ -5564,6 +5731,12 @@ }, "documentation":"Details about the query.
" }, + "QueryName":{ + "type":"string", + "max":64, + "min":1, + "pattern":"^[a-zA-Z0-9-_]+$" + }, "RecorderName":{ "type":"string", "max":256, @@ -5861,6 +6034,14 @@ "type":"string", "enum":["SSM_DOCUMENT"] }, + "ResourceConcurrentModificationException":{ + "type":"structure", + "members":{ + "message":{"shape":"ErrorMessage"} + }, + "documentation":"Two users are trying to modify the same query at the same time. Wait for a moment and try again.
", + "exception":true + }, "ResourceCount":{ "type":"structure", "members":{ @@ -6084,6 +6265,9 @@ "AWS::CloudFront::Distribution", "AWS::CloudFront::StreamingDistribution", "AWS::Lambda::Function", + "AWS::NetworkFirewall::Firewall", + "AWS::NetworkFirewall::FirewallPolicy", + "AWS::NetworkFirewall::RuleGroup", "AWS::ElasticBeanstalk::Application", "AWS::ElasticBeanstalk::ApplicationVersion", "AWS::ElasticBeanstalk::Environment", @@ -6468,6 +6652,68 @@ }, "documentation":"The input for the StopConfigurationRecorder action.
" }, + "StoredQuery":{ + "type":"structure", + "required":["QueryName"], + "members":{ + "QueryId":{ + "shape":"QueryId", + "documentation":"The ID of the query.
", + "box":true + }, + "QueryArn":{ + "shape":"QueryArn", + "documentation":"Amazon Resource Name (ARN) of the query. For example, arn:partition:service:region:account-id:resource-type/resource-id.
", + "box":true + }, + "QueryName":{ + "shape":"QueryName", + "documentation":"The name of the query.
" + }, + "Description":{ + "shape":"QueryDescription", + "documentation":"A unique description for the query.
", + "box":true + }, + "Expression":{ + "shape":"QueryExpression", + "documentation":"The expression of the query. For example, SELECT resourceId, resourceType, supplementaryConfiguration.BucketVersioningConfiguration.status WHERE resourceType = 'AWS::S3::Bucket' AND supplementaryConfiguration.BucketVersioningConfiguration.status = 'Off'.
Provides the details of a stored query.
" + }, + "StoredQueryMetadata":{ + "type":"structure", + "required":[ + "QueryId", + "QueryArn", + "QueryName" + ], + "members":{ + "QueryId":{ + "shape":"QueryId", + "documentation":"The ID of the query.
" + }, + "QueryArn":{ + "shape":"QueryArn", + "documentation":"Amazon Resource Name (ARN) of the query. For example, arn:partition:service:region:account-id:resource-type/resource-id.
" + }, + "QueryName":{ + "shape":"QueryName", + "documentation":"The name of the query.
" + }, + "Description":{ + "shape":"QueryDescription", + "documentation":"A unique description for the query.
" + } + }, + "documentation":"Returns details of a specific query.
" + }, + "StoredQueryMetadataList":{ + "type":"list", + "member":{"shape":"StoredQueryMetadata"} + }, "String":{"type":"string"}, "StringWithCharLimit1024":{ "type":"string", diff --git a/services/connect/pom.xml b/services/connect/pom.xml index 4c54e8fcaf11..60ed3aece05e 100644 --- a/services/connect/pom.xml +++ b/services/connect/pom.xml @@ -21,7 +21,7 @@Allows you to confirm that the attachment has been uploaded using the pre-signed URL provided in StartAttachmentUpload API.
" + }, "CreateParticipantConnection":{ "name":"CreateParticipantConnection", "http":{ @@ -27,7 +45,7 @@ {"shape":"ThrottlingException"}, {"shape":"ValidationException"} ], - "documentation":"Creates the participant's connection. Note that ParticipantToken is used for invoking this API instead of ConnectionToken.
The participant token is valid for the lifetime of the participant – until the they are part of a contact.
The response URL for WEBSOCKET
Type has a connect expiry timeout of 100s. Clients must manually connect to the returned websocket URL and subscribe to the desired topic.
For chat, you need to publish the following on the established websocket connection:
{\"topic\":\"aws/subscribe\",\"content\":{\"topics\":[\"aws/chat\"]}}
Upon websocket URL expiry, as specified in the response ConnectionExpiry parameter, clients need to call this API again to obtain a new websocket URL and perform the same steps as before.
" + "documentation":"Creates the participant's connection. Note that ParticipantToken is used for invoking this API instead of ConnectionToken.
The participant token is valid for the lifetime of the participant – until they are part of a contact.
The response URL for WEBSOCKET
Type has a connect expiry timeout of 100s. Clients must manually connect to the returned websocket URL and subscribe to the desired topic.
For chat, you need to publish the following on the established websocket connection:
{\"topic\":\"aws/subscribe\",\"content\":{\"topics\":[\"aws/chat\"]}}
Upon websocket URL expiry, as specified in the response ConnectionExpiry parameter, clients need to call this API again to obtain a new websocket URL and perform the same steps as before.
The Amazon Connect Participant Service APIs do not use Signature Version 4 authentication.
Disconnects a participant. Note that ConnectionToken is used for invoking this API instead of ParticipantToken.
" + "documentation":"Disconnects a participant. Note that ConnectionToken is used for invoking this API instead of ParticipantToken.
The Amazon Connect Participant Service APIs do not use Signature Version 4 authentication.
" + }, + "GetAttachment":{ + "name":"GetAttachment", + "http":{ + "method":"POST", + "requestUri":"/participant/attachment" + }, + "input":{"shape":"GetAttachmentRequest"}, + "output":{"shape":"GetAttachmentResponse"}, + "errors":[ + {"shape":"AccessDeniedException"}, + {"shape":"InternalServerException"}, + {"shape":"ThrottlingException"}, + {"shape":"ValidationException"} + ], + "documentation":"Provides a pre-signed URL for download of a completed attachment. This is an asynchronous API for use with active contacts.
" }, "GetTranscript":{ "name":"GetTranscript", @@ -59,7 +93,7 @@ {"shape":"ThrottlingException"}, {"shape":"ValidationException"} ], - "documentation":"Retrieves a transcript of the session. Note that ConnectionToken is used for invoking this API instead of ParticipantToken.
" + "documentation":"Retrieves a transcript of the session, including details about any attachments. Note that ConnectionToken is used for invoking this API instead of ParticipantToken.
The Amazon Connect Participant Service APIs do not use Signature Version 4 authentication.
" }, "SendEvent":{ "name":"SendEvent", @@ -75,7 +109,7 @@ {"shape":"ThrottlingException"}, {"shape":"ValidationException"} ], - "documentation":"Sends an event. Note that ConnectionToken is used for invoking this API instead of ParticipantToken.
" + "documentation":"Sends an event. Note that ConnectionToken is used for invoking this API instead of ParticipantToken.
The Amazon Connect Participant Service APIs do not use Signature Version 4 authentication.
" }, "SendMessage":{ "name":"SendMessage", @@ -91,7 +125,24 @@ {"shape":"ThrottlingException"}, {"shape":"ValidationException"} ], - "documentation":"Sends a message. Note that ConnectionToken is used for invoking this API instead of ParticipantToken.
" + "documentation":"Sends a message. Note that ConnectionToken is used for invoking this API instead of ParticipantToken.
The Amazon Connect Participant Service APIs do not use Signature Version 4 authentication.
Provides a pre-signed Amazon S3 URL in response for uploading the file directly to S3.
" } }, "shapes":{ @@ -105,6 +156,60 @@ "error":{"httpStatusCode":403}, "exception":true }, + "ArtifactId":{ + "type":"string", + "max":256, + "min":1 + }, + "ArtifactStatus":{ + "type":"string", + "enum":[ + "APPROVED", + "REJECTED", + "IN_PROGRESS" + ] + }, + "AttachmentIdList":{ + "type":"list", + "member":{"shape":"ArtifactId"}, + "max":1, + "min":1 + }, + "AttachmentItem":{ + "type":"structure", + "members":{ + "ContentType":{ + "shape":"ContentType", + "documentation":"Describes the MIME file type of the attachment. For a list of supported file types, see Feature specifications in the Amazon Connect Administrator Guide.
" + }, + "AttachmentId":{ + "shape":"ArtifactId", + "documentation":"A unique identifier for the attachment.
" + }, + "AttachmentName":{ + "shape":"AttachmentName", + "documentation":"A case-sensitive name of the attachment being uploaded.
" + }, + "Status":{ + "shape":"ArtifactStatus", + "documentation":"Status of the attachment.
" + } + }, + "documentation":"The case-insensitive input to indicate standard MIME type that describes the format of the file that will be uploaded.
" + }, + "AttachmentName":{ + "type":"string", + "max":256, + "min":1 + }, + "AttachmentSizeInBytes":{ + "type":"long", + "min":1 + }, + "Attachments":{ + "type":"list", + "member":{"shape":"AttachmentItem"} + }, "ChatContent":{ "type":"string", "max":1024, @@ -123,8 +228,15 @@ "ChatItemType":{ "type":"string", "enum":[ + "TYPING", + "PARTICIPANT_JOINED", + "PARTICIPANT_LEFT", + "CHAT_ENDED", + "TRANSFER_SUCCEEDED", + "TRANSFER_FAILED", "MESSAGE", "EVENT", + "ATTACHMENT", "CONNECTION_ACK" ] }, @@ -132,6 +244,46 @@ "type":"string", "max":500 }, + "CompleteAttachmentUploadRequest":{ + "type":"structure", + "required":[ + "AttachmentIds", + "ClientToken", + "ConnectionToken" + ], + "members":{ + "AttachmentIds":{ + "shape":"AttachmentIdList", + "documentation":"A list of unique identifiers for the attachments.
" + }, + "ClientToken":{ + "shape":"NonEmptyClientToken", + "documentation":"A unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
", + "idempotencyToken":true + }, + "ConnectionToken":{ + "shape":"ParticipantToken", + "documentation":"The authentication token associated with the participant's connection.
", + "location":"header", + "locationName":"X-Amz-Bearer" + } + } + }, + "CompleteAttachmentUploadResponse":{ + "type":"structure", + "members":{ + } + }, + "ConflictException":{ + "type":"structure", + "required":["Message"], + "members":{ + "Message":{"shape":"Reason"} + }, + "documentation":"An attachment with that identifier is already being uploaded.
", + "error":{"httpStatusCode":409}, + "exception":true + }, "ConnectionCredentials":{ "type":"structure", "members":{ @@ -163,6 +315,11 @@ "max":256, "min":1 }, + "ContentType":{ + "type":"string", + "max":255, + "min":1 + }, "CreateParticipantConnectionRequest":{ "type":"structure", "required":[ @@ -176,7 +333,7 @@ }, "ParticipantToken":{ "shape":"ParticipantToken", - "documentation":"Participant Token as obtained from StartChatContact API response.
", + "documentation":"This is a header parameter.
The Participant Token as obtained from StartChatContact API response.
", "location":"header", "locationName":"X-Amz-Bearer" } @@ -222,6 +379,38 @@ "max":256, "min":1 }, + "GetAttachmentRequest":{ + "type":"structure", + "required":[ + "AttachmentId", + "ConnectionToken" + ], + "members":{ + "AttachmentId":{ + "shape":"ArtifactId", + "documentation":"A unique identifier for the attachment.
" + }, + "ConnectionToken":{ + "shape":"ParticipantToken", + "documentation":"The authentication token associated with the participant's connection.
", + "location":"header", + "locationName":"X-Amz-Bearer" + } + } + }, + "GetAttachmentResponse":{ + "type":"structure", + "members":{ + "Url":{ + "shape":"PreSignedAttachmentUrl", + "documentation":"The pre-signed URL using which file would be downloaded from Amazon S3 by the API caller.
" + }, + "UrlExpiry":{ + "shape":"ISO8601Datetime", + "documentation":"The expiration time of the URL in ISO timestamp. It's specified in ISO 8601 format: yyyy-MM-ddThh:mm:ss.SSSZ. For example, 2019-11-08T02:41:28.172Z.
" + } + } + }, "GetTranscriptRequest":{ "type":"structure", "required":["ConnectionToken"], @@ -327,6 +516,10 @@ "ParticipantRole":{ "shape":"ParticipantRole", "documentation":"The role of the sender. For example, is it a customer, agent, or system.
" + }, + "Attachments":{ + "shape":"Attachments", + "documentation":"Provides information about the attachments.
" } }, "documentation":"An item - message or event - that has been sent.
" @@ -347,6 +540,11 @@ "max":1000, "min":1 }, + "NonEmptyClientToken":{ + "type":"string", + "max":500, + "min":1 + }, "ParticipantId":{ "type":"string", "max":256, @@ -365,6 +563,11 @@ "max":1000, "min":1 }, + "PreSignedAttachmentUrl":{ + "type":"string", + "max":2000, + "min":1 + }, "PreSignedConnectionUrl":{ "type":"string", "max":2000, @@ -465,6 +668,16 @@ } } }, + "ServiceQuotaExceededException":{ + "type":"structure", + "required":["Message"], + "members":{ + "Message":{"shape":"Message"} + }, + "documentation":"The number of attachments per contact exceeds the quota.
", + "error":{"httpStatusCode":402}, + "exception":true + }, "SortKey":{ "type":"string", "enum":[ @@ -472,6 +685,54 @@ "ASCENDING" ] }, + "StartAttachmentUploadRequest":{ + "type":"structure", + "required":[ + "ContentType", + "AttachmentSizeInBytes", + "AttachmentName", + "ClientToken", + "ConnectionToken" + ], + "members":{ + "ContentType":{ + "shape":"ContentType", + "documentation":"Describes the MIME file type of the attachment. For a list of supported file types, see Feature specifications in the Amazon Connect Administrator Guide.
" + }, + "AttachmentSizeInBytes":{ + "shape":"AttachmentSizeInBytes", + "documentation":"The size of the attachment in bytes.
" + }, + "AttachmentName":{ + "shape":"AttachmentName", + "documentation":"A case-sensitive name of the attachment being uploaded.
" + }, + "ClientToken":{ + "shape":"NonEmptyClientToken", + "documentation":"A unique case sensitive identifier to support idempotency of request.
", + "idempotencyToken":true + }, + "ConnectionToken":{ + "shape":"ParticipantToken", + "documentation":"The authentication token associated with the participant's connection.
", + "location":"header", + "locationName":"X-Amz-Bearer" + } + } + }, + "StartAttachmentUploadResponse":{ + "type":"structure", + "members":{ + "AttachmentId":{ + "shape":"ArtifactId", + "documentation":"A unique identifier for the attachment.
" + }, + "UploadMetadata":{ + "shape":"UploadMetadata", + "documentation":"Fields to be used while uploading the attachment.
" + } + } + }, "StartPosition":{ "type":"structure", "members":{ @@ -504,6 +765,44 @@ "type":"list", "member":{"shape":"Item"} }, + "UploadMetadata":{ + "type":"structure", + "members":{ + "Url":{ + "shape":"UploadMetadataUrl", + "documentation":"The pre-signed URL using which file would be downloaded from Amazon S3 by the API caller.
" + }, + "UrlExpiry":{ + "shape":"ISO8601Datetime", + "documentation":"The expiration time of the URL in ISO timestamp. It's specified in ISO 8601 format: yyyy-MM-ddThh:mm:ss.SSSZ. For example, 2019-11-08T02:41:28.172Z.
" + }, + "HeadersToInclude":{ + "shape":"UploadMetadataSignedHeaders", + "documentation":"The headers to be provided while uploading the file to the URL.
" + } + }, + "documentation":"Fields to be used while uploading the attachment.
" + }, + "UploadMetadataSignedHeaders":{ + "type":"map", + "key":{"shape":"UploadMetadataSignedHeadersKey"}, + "value":{"shape":"UploadMetadataSignedHeadersValue"} + }, + "UploadMetadataSignedHeadersKey":{ + "type":"string", + "max":128, + "min":1 + }, + "UploadMetadataSignedHeadersValue":{ + "type":"string", + "max":256, + "min":1 + }, + "UploadMetadataUrl":{ + "type":"string", + "max":2000, + "min":1 + }, "ValidationException":{ "type":"structure", "required":["Message"], diff --git a/services/costandusagereport/pom.xml b/services/costandusagereport/pom.xml index c35f5551a22d..37a7717da167 100644 --- a/services/costandusagereport/pom.xml +++ b/services/costandusagereport/pom.xml @@ -21,7 +21,7 @@The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId
parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret
. SecretsManagerSecret
has the value of the AWS Secrets Manager secret that allows access to the DocumentDB endpoint.
You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId
. Or you can specify clear-text values for UserName
, Password
, ServerName
, and Port
. You can't specify both. For more information on creating this SecretsManagerSecret
and the SecretsManagerAccessRoleArn
and SecretsManagerSecretId
required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret
that contains the DocumentDB endpoint connection details.
Provides information that defines a DocumentDB endpoint.
" @@ -2706,7 +2714,7 @@ "documentation":"The filter value, which can specify one or more values used to narrow the returned results.
" } }, - "documentation":"Identifies the name and value of a filter object. This filter is used to limit the number and type of AWS DMS objects that are returned for a particular Describe*
call or similar operation. Filters are used as an optional parameter to the following APIs.
Identifies the name and value of a filter object. This filter is used to limit the number and type of AWS DMS objects that are returned for a particular Describe*
call or similar operation. Filters are used as an optional parameter for certain API operations.
Endpoint connection user name.
" + }, + "SecretsManagerAccessRoleArn":{ + "shape":"String", + "documentation":"The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret
. SecretsManagerSecret
has the value of the AWS Secrets Manager secret that allows access to the Db2 LUW endpoint.
You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId
. Or you can specify clear-text values for UserName
, Password
, ServerName
, and Port
. You can't specify both. For more information on creating this SecretsManagerSecret
and the SecretsManagerAccessRoleArn
and SecretsManagerSecretId
required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret
that contains the Db2 LUW endpoint connection details.
Provides information that defines an IBM Db2 LUW endpoint.
" @@ -3033,7 +3049,7 @@ }, "ControlTablesFileGroup":{ "shape":"String", - "documentation":"Specify a filegroup for the AWS DMS internal tables. When the replication task starts, all the internal AWS DMS control tables (awsdms_ apply_exception, awsdms_apply, awsdms_changes) are created on the specified filegroup.
" + "documentation":"Specifies a file group for the AWS DMS internal tables. When the replication task starts, all the internal AWS DMS control tables (awsdms_ apply_exception, awsdms_apply, awsdms_changes) are created for the specified file group.
" }, "Password":{ "shape":"SecretString", @@ -3058,6 +3074,14 @@ "UseBcpFullLoad":{ "shape":"BooleanOptional", "documentation":"Use this to attribute to transfer data for full-load operations using BCP. When the target table contains an identity column that does not exist in the source table, you must disable the use BCP for loading table option.
" + }, + "SecretsManagerAccessRoleArn":{ + "shape":"String", + "documentation":"The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret
. SecretsManagerSecret
has the value of the AWS Secrets Manager secret that allows access to the SQL Server endpoint.
You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId
. Or you can specify clear-text values for UserName
, Password
, ServerName
, and Port
. You can't specify both. For more information on creating this SecretsManagerSecret
and the SecretsManagerAccessRoleArn
and SecretsManagerSecretId
required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret
that contains the SQL Server endpoint connection details.
Provides information that defines a Microsoft SQL Server endpoint.
" @@ -3283,7 +3307,7 @@ }, "AutoMinorVersionUpgrade":{ "shape":"BooleanOptional", - "documentation":"A value that indicates that minor version upgrades are applied automatically to the replication instance during the maintenance window. Changing this parameter doesn't result in an outage, except in the case dsecribed following. The change is asynchronously applied as soon as possible.
An outage does result if these factors apply:
This parameter is set to true
during the maintenance window.
A newer minor version is available.
AWS DMS has enabled automatic patching for the given engine version.
A value that indicates that minor version upgrades are applied automatically to the replication instance during the maintenance window. Changing this parameter doesn't result in an outage, except in the case described following. The change is asynchronously applied as soon as possible.
An outage does result if these factors apply:
This parameter is set to true
during the maintenance window.
A newer minor version is available.
AWS DMS has enabled automatic patching for the given engine version.
The AWS KMS key identifier that is used to encrypt the content on the replication instance. If you don't specify a value for the KmsKeyId
parameter, then AWS DMS uses your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret
. SecretsManagerSecret
has the value of the AWS Secrets Manager secret that allows access to the MongoDB endpoint.
You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId
. Or you can specify clear-text values for UserName
, Password
, ServerName
, and Port
. You can't specify both. For more information on creating this SecretsManagerSecret
and the SecretsManagerAccessRoleArn
and SecretsManagerSecretId
required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret
that contains the MongoDB endpoint connection details.
Provides information that defines a MongoDB endpoint.
" @@ -3494,7 +3526,7 @@ }, "ParallelLoadThreads":{ "shape":"IntegerOptional", - "documentation":"Improves performance when loading data into the MySQLcompatible target database. Specifies how many threads to use to load the data into the MySQL-compatible target database. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread.
Example: parallelLoadThreads=1
Improves performance when loading data into the MySQL-compatible target database. Specifies how many threads to use to load the data into the MySQL-compatible target database. Setting a large number of threads can have an adverse effect on database performance, because a separate connection is required for each thread.
Example: parallelLoadThreads=1
Endpoint connection user name.
" + }, + "SecretsManagerAccessRoleArn":{ + "shape":"String", + "documentation":"The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret
. SecretsManagerSecret
has the value of the AWS Secrets Manager secret that allows access to the MySQL endpoint.
You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId
. Or you can specify clear-text values for UserName
, Password
, ServerName
, and Port
. You can't specify both. For more information on creating this SecretsManagerSecret
and the SecretsManagerAccessRoleArn
and SecretsManagerSecretId
required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret
that contains the MySQL endpoint connection details.
Provides information that defines a MySQL endpoint.
" @@ -3686,6 +3726,14 @@ "Username":{ "shape":"String", "documentation":"Endpoint connection user name.
" + }, + "SecretsManagerAccessRoleArn":{ + "shape":"String", + "documentation":"The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret
. SecretsManagerSecret
has the value of the AWS Secrets Manager secret that allows access to the Oracle endpoint.
You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId
. Or you can specify clear-text values for UserName
, Password
, ServerName
, and Port
. You can't specify both. For more information on creating this SecretsManagerSecret
and the SecretsManagerAccessRoleArn
and SecretsManagerSecretId
required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret
that contains the Oracle endpoint connection details.
Provides information that defines an Oracle endpoint.
" @@ -3831,6 +3879,14 @@ "SlotName":{ "shape":"String", "documentation":"Sets the name of a previously created logical replication slot for a CDC load of the PostgreSQL source instance.
When used with the AWS DMS API CdcStartPosition
request parameter, this attribute also enables using native CDC start points.
The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret
. SecretsManagerSecret
has the value of the AWS Secrets Manager secret that allows access to the PostgreSQL endpoint.
You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId
. Or you can specify clear-text values for UserName
, Password
, ServerName
, and Port
. You can't specify both. For more information on creating this SecretsManagerSecret
and the SecretsManagerAccessRoleArn
and SecretsManagerSecretId
required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret
that contains the PostgreSQL endpoint connection details.
Provides information that defines a PostgreSQL endpoint.
" @@ -3871,7 +3927,7 @@ }, "BucketFolder":{ "shape":"String", - "documentation":"An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
For full load mode, AWS DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. AWS DMS uses the Redshift COPY
command to upload the .csv files to the target table. The files are deleted once the COPY
operation has finished. For more information, see Amazon Redshift Database Developer Guide
For change-data-capture (CDC) mode, AWS DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
" + "documentation":"An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
For full load mode, AWS DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. AWS DMS uses the Redshift COPY
command to upload the .csv files to the target table. The files are deleted once the COPY
operation has finished. For more information, see COPY in the Amazon Redshift Database Developer Guide.
For change-data-capture (CDC) mode, AWS DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
" }, "BucketName":{ "shape":"String", @@ -3972,6 +4028,14 @@ "WriteBufferSize":{ "shape":"IntegerOptional", "documentation":"The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000KB).
" + }, + "SecretsManagerAccessRoleArn":{ + "shape":"String", + "documentation":"The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret
. SecretsManagerSecret
has the value of the AWS Secrets Manager secret that allows access to the Amazon Redshift endpoint.
You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId
. Or you can specify clear-text values for UserName
, Password
, ServerName
, and Port
. You can't specify both. For more information on creating this SecretsManagerSecret
and the SecretsManagerAccessRoleArn
and SecretsManagerSecretId
required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret
that contains the Amazon Redshift endpoint connection details.
Provides information that defines an Amazon Redshift endpoint.
" @@ -4749,19 +4813,19 @@ }, "UseCsvNoSupValue":{ "shape":"BooleanOptional", - "documentation":"This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to true
for columns not included in the supplemental log, AWS DMS uses the value specified by CsvNoSupValue
. If not set or set to false
, AWS DMS uses the null value for these columns.
This setting is supported in AWS DMS versions 3.4.1 and later.
This setting applies if the S3 output files during a change data capture (CDC) load are written in .csv format. If set to true
for columns not included in the supplemental log, AWS DMS uses the value specified by CsvNoSupValue
. If not set or set to false
, AWS DMS uses the null value for these columns.
This setting is supported in AWS DMS versions 3.4.1 and later.
This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If UseCsvNoSupValue
is set to true, specify a string value that you want AWS DMS to use for all columns not included in the supplemental log. If you do not specify a string value, AWS DMS uses the null value for these columns regardless of the UseCsvNoSupValue
setting.
This setting is supported in AWS DMS versions 3.4.1 and later.
This setting only applies if your Amazon S3 output files during a change data capture (CDC) load are written in .csv format. If UseCsvNoSupValue
is set to true, specify a string value that you want AWS DMS to use for all columns not included in the supplemental log. If you do not specify a string value, AWS DMS uses the null value for these columns regardless of the UseCsvNoSupValue
setting.
This setting is supported in AWS DMS versions 3.4.1 and later.
If set to true
, AWS DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by CdcPath
.
This setting is supported in AWS DMS versions 3.4.2 and later.
If set to true
, AWS DMS saves the transaction order for a change data capture (CDC) load on the Amazon S3 target specified by CdcPath
. For more information, see Capturing data changes (CDC) including transaction order on the S3 target.
This setting is supported in AWS DMS versions 3.4.2 and later.
Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If CdcPath
is set, AWS DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target, if CdcPath
is set, it is the folder path where data changes are replicated. If you set PreserveTransactions
to true
, AWS DMS verifies that you have set this parameter to a folder path on your S3 target where AWS DMS can save the transaction order for the CDC load. AWS DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified by BucketFolder
and BucketName
.
For example, if you specify CdcPath
as MyChangedData
, and you specify BucketName
as MyTargetBucket
but do not specify BucketFolder
, AWS DMS creates the CDC folder path following: MyTargetBucket/MyChangedData
.
If you specify the same CdcPath
, and you specify BucketName
as MyTargetBucket
and BucketFolder
as MyTargetData
, AWS DMS creates the CDC folder path following: MyTargetBucket/MyTargetData/MyChangedData
.
This setting is supported in AWS DMS versions 3.4.2 and later.
Specifies the folder path of CDC files. For an S3 source, this setting is required if a task captures change data; otherwise, it's optional. If CdcPath
is set, AWS DMS reads CDC files from this path and replicates the data changes to the target endpoint. For an S3 target if you set PreserveTransactions
to true
, AWS DMS verifies that you have set this parameter to a folder path on your S3 target where AWS DMS can save the transaction order for the CDC load. AWS DMS creates this CDC folder path in either your S3 target working directory or the S3 target location specified by BucketFolder
and BucketName
.
For example, if you specify CdcPath
as MyChangedData
, and you specify BucketName
as MyTargetBucket
but do not specify BucketFolder
, AWS DMS creates the CDC folder path following: MyTargetBucket/MyChangedData
.
If you specify the same CdcPath
, and you specify BucketName
as MyTargetBucket
and BucketFolder
as MyTargetData
, AWS DMS creates the CDC folder path following: MyTargetBucket/MyTargetData/MyChangedData
.
For more information on CDC including transaction order on an S3 target, see Capturing data changes (CDC) including transaction order on the S3 target.
This setting is supported in AWS DMS versions 3.4.2 and later.
Settings for exporting data to Amazon S3.
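The CdcPath folder-resolution rules above can be illustrated with a small helper. This is a sketch of the documented behavior, not DMS's actual code; the two assertions reproduce the two worked examples from the documentation text:

```python
def cdc_folder_path(bucket_name, cdc_path, bucket_folder=None):
    """Resolve where AWS DMS writes CDC files on an S3 target, following
    the CdcPath / BucketName / BucketFolder rules described above
    (illustrative sketch only)."""
    parts = [bucket_name]
    if bucket_folder:  # BucketFolder is optional
        parts.append(bucket_folder)
    parts.append(cdc_path)
    return "/".join(parts)

# BucketName=MyTargetBucket, no BucketFolder:
assert cdc_folder_path("MyTargetBucket", "MyChangedData") == "MyTargetBucket/MyChangedData"
# Same CdcPath with BucketFolder=MyTargetData:
assert (cdc_folder_path("MyTargetBucket", "MyChangedData", "MyTargetData")
        == "MyTargetBucket/MyTargetData/MyChangedData")
```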
" @@ -5061,6 +5125,14 @@ "Username":{ "shape":"String", "documentation":"Endpoint connection user name.
" + }, + "SecretsManagerAccessRoleArn":{ + "shape":"String", + "documentation":"The full Amazon Resource Name (ARN) of the IAM role that specifies AWS DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret
. SecretsManagerSecret
has the value of the AWS Secrets Manager secret that allows access to the SAP ASE endpoint.
You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId
. Or you can specify clear-text values for UserName
, Password
, ServerName
, and Port
. You can't specify both. For more information on creating this SecretsManagerSecret
and the SecretsManagerAccessRoleArn
and SecretsManagerSecretId
required to access it, see Using secrets to access AWS Database Migration Service resources in the AWS Database Migration Service User Guide.
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret
that contains the SAP SAE endpoint connection details.
Provides information that defines a SAP ASE endpoint.
" diff --git a/services/databrew/pom.xml b/services/databrew/pom.xml index 8bc9725543cd..704a330c42fc 100644 --- a/services/databrew/pom.xml +++ b/services/databrew/pom.xml @@ -21,7 +21,7 @@Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
The tags to assign to the Elastic IP address.
", + "locationName":"TagSpecification" } } }, @@ -37882,7 +37887,7 @@ "members":{ "ResourceType":{ "shape":"ResourceType", - "documentation":"The type of resource to tag. Currently, the resource types that support tagging on creation are: capacity-reservation
| carrier-gateway
| client-vpn-endpoint
| customer-gateway
| dedicated-host
| dhcp-options
| export-image-task
| export-instance-task
| fleet
| fpga-image
| host-reservation
| import-image-task
| import-snapshot-task
| instance
| internet-gateway
| ipv4pool-ec2
| ipv6pool-ec2
| key-pair
| launch-template
| placement-group
| prefix-list
| natgateway
| network-acl
| route-table
| security-group
| spot-fleet-request
| spot-instances-request
| snapshot
| subnet
| traffic-mirror-filter
| traffic-mirror-session
| traffic-mirror-target
| transit-gateway
| transit-gateway-attachment
| transit-gateway-connect-peer
| transit-gateway-multicast-domain
| transit-gateway-route-table
| volume
|vpc
| vpc-peering-connection
| vpc-endpoint
(for interface and gateway endpoints) | vpc-endpoint-service
(for AWS PrivateLink) | vpc-flow-log
| vpn-connection
| vpn-gateway
.
To tag a resource after it has been created, see CreateTags.
", + "documentation":"The type of resource to tag. Currently, the resource types that support tagging on creation are: capacity-reservation
| carrier-gateway
| client-vpn-endpoint
| customer-gateway
| dedicated-host
| dhcp-options
| egress-only-internet-gateway
| elastic-ip
| elastic-gpu
| export-image-task
| export-instance-task
| fleet
| fpga-image
| host-reservation
| image
| import-image-task
| import-snapshot-task
| instance
| internet-gateway
| ipv4pool-ec2
| ipv6pool-ec2
| key-pair
| launch-template
| local-gateway-route-table-vpc-association
| placement-group
| prefix-list
| natgateway
| network-acl
| network-interface
| reserved-instances
|route-table
| security-group
| snapshot
| spot-fleet-request
| spot-instances-request
| snapshot
| subnet
| traffic-mirror-filter
| traffic-mirror-session
| traffic-mirror-target
| transit-gateway
| transit-gateway-attachment
| transit-gateway-multicast-domain
| transit-gateway-route-table
| volume
|vpc
| vpc
| vpc-endpoint
(for interface and gateway endpoints) | vpc-endpoint-service
(for AWS PrivateLink) | vpc-flow-log
| vpn-connection
| vpn-gateway
.
To tag a resource after it has been created, see CreateTags.
", "locationName":"resourceType" }, "Tags":{ diff --git a/services/ec2instanceconnect/pom.xml b/services/ec2instanceconnect/pom.xml index 015004af56ee..6c6837733d60 100644 --- a/services/ec2instanceconnect/pom.xml +++ b/services/ec2instanceconnect/pom.xml @@ -21,7 +21,7 @@These key-value pairs define parameters for the connection:
HOST
- The host URI: either the fully qualified domain name (FQDN) or the IPv4 address of the database host.
PORT
- The port number, between 1024 and 65535, of the port on which the database host is listening for database connections.
USER_NAME
- The name under which to log in to the database. The value string for USER_NAME
is \"USERNAME
\".
PASSWORD
- A password, if one is used, for the user name.
ENCRYPTED_PASSWORD
- When you enable connection password protection by setting ConnectionPasswordEncryption
in the Data Catalog encryption settings, this field stores the encrypted password.
JDBC_DRIVER_JAR_URI
- The Amazon Simple Storage Service (Amazon S3) path of the JAR file that contains the JDBC driver to use.
JDBC_DRIVER_CLASS_NAME
- The class name of the JDBC driver to use.
JDBC_ENGINE
- The name of the JDBC engine to use.
JDBC_ENGINE_VERSION
- The version of the JDBC engine to use.
CONFIG_FILES
- (Reserved for future use.)
INSTANCE_ID
- The instance ID to use.
JDBC_CONNECTION_URL
- The URL for connecting to a JDBC data source.
JDBC_ENFORCE_SSL
- A Boolean string (true, false) specifying whether Secure Sockets Layer (SSL) with hostname matching is enforced for the JDBC connection on the client. The default is false.
CUSTOM_JDBC_CERT
- An Amazon S3 location specifying the customer's root certificate. AWS Glue uses this root certificate to validate the customer’s certificate when connecting to the customer database. AWS Glue only handles X.509 certificates. The certificate provided must be DER-encoded and supplied in Base64 encoding PEM format.
SKIP_CUSTOM_JDBC_CERT_VALIDATION
- By default, this is false
. AWS Glue validates the Signature algorithm and Subject Public Key Algorithm for the customer certificate. The only permitted algorithms for the Signature algorithm are SHA256withRSA, SHA384withRSA or SHA512withRSA. For the Subject Public Key Algorithm, the key length must be at least 2048. You can set the value of this property to true
to skip AWS Glue’s validation of the customer certificate.
CUSTOM_JDBC_CERT_STRING
- A custom JDBC certificate string which is used for domain match or distinguished name match to prevent a man-in-the-middle attack. In Oracle database, this is used as the SSL_SERVER_CERT_DN
; in Microsoft SQL Server, this is used as the hostNameInCertificate
.
CONNECTION_URL
- The URL for connecting to a general (non-JDBC) data source.
KAFKA_BOOTSTRAP_SERVERS
- A comma-separated list of host and port pairs that are the addresses of the Apache Kafka brokers in a Kafka cluster to which a Kafka client will connect to and bootstrap itself.
KAFKA_SSL_ENABLED
- Whether to enable or disable SSL on an Apache Kafka connection. Default value is \"true\".
KAFKA_CUSTOM_CERT
- The Amazon S3 URL for the private CA cert file (.pem format). The default is an empty string.
KAFKA_SKIP_CUSTOM_CERT_VALIDATION
- Whether to skip the validation of the CA cert file or not. AWS Glue validates for three algorithms: SHA256withRSA, SHA384withRSA and SHA512withRSA. Default value is \"false\".
These key-value pairs define parameters for the connection:
HOST
- The host URI: either the fully qualified domain name (FQDN) or the IPv4 address of the database host.
PORT
- The port number, between 1024 and 65535, of the port on which the database host is listening for database connections.
USER_NAME
- The name under which to log in to the database. The value string for USER_NAME
is \"USERNAME
\".
PASSWORD
- A password, if one is used, for the user name.
ENCRYPTED_PASSWORD
- When you enable connection password protection by setting ConnectionPasswordEncryption
in the Data Catalog encryption settings, this field stores the encrypted password.
JDBC_DRIVER_JAR_URI
- The Amazon Simple Storage Service (Amazon S3) path of the JAR file that contains the JDBC driver to use.
JDBC_DRIVER_CLASS_NAME
- The class name of the JDBC driver to use.
JDBC_ENGINE
- The name of the JDBC engine to use.
JDBC_ENGINE_VERSION
- The version of the JDBC engine to use.
CONFIG_FILES
- (Reserved for future use.)
INSTANCE_ID
- The instance ID to use.
JDBC_CONNECTION_URL
- The URL for connecting to a JDBC data source.
JDBC_ENFORCE_SSL
- A Boolean string (true, false) specifying whether Secure Sockets Layer (SSL) with hostname matching is enforced for the JDBC connection on the client. The default is false.
CUSTOM_JDBC_CERT
- An Amazon S3 location specifying the customer's root certificate. AWS Glue uses this root certificate to validate the customer’s certificate when connecting to the customer database. AWS Glue only handles X.509 certificates. The certificate provided must be DER-encoded and supplied in Base64 encoding PEM format.
SKIP_CUSTOM_JDBC_CERT_VALIDATION
- By default, this is false
. AWS Glue validates the Signature algorithm and Subject Public Key Algorithm for the customer certificate. The only permitted algorithms for the Signature algorithm are SHA256withRSA, SHA384withRSA or SHA512withRSA. For the Subject Public Key Algorithm, the key length must be at least 2048. You can set the value of this property to true
to skip AWS Glue’s validation of the customer certificate.
CUSTOM_JDBC_CERT_STRING
- A custom JDBC certificate string which is used for domain match or distinguished name match to prevent a man-in-the-middle attack. In Oracle database, this is used as the SSL_SERVER_CERT_DN
; in Microsoft SQL Server, this is used as the hostNameInCertificate
.
CONNECTION_URL
- The URL for connecting to a general (non-JDBC) data source.
KAFKA_BOOTSTRAP_SERVERS
- A comma-separated list of host and port pairs that are the addresses of the Apache Kafka brokers in a Kafka cluster to which a Kafka client will connect to and bootstrap itself.
KAFKA_SSL_ENABLED
- Whether to enable or disable SSL on an Apache Kafka connection. Default value is \"true\".
KAFKA_CUSTOM_CERT
- The Amazon S3 URL for the private CA cert file (.pem format). The default is an empty string.
KAFKA_SKIP_CUSTOM_CERT_VALIDATION
- Whether to skip the validation of the CA cert file or not. AWS Glue validates for three algorithms: SHA256withRSA, SHA384withRSA and SHA512withRSA. Default value is \"false\".
SECRET_ID
- The secret ID used for the secret manager of credentials.
CONNECTOR_URL
- The connector URL for a MARKETPLACE or CUSTOM connection.
CONNECTOR_TYPE
- The connector type for a MARKETPLACE or CUSTOM connection.
CONNECTOR_CLASS_NAME
- The connector class name for a MARKETPLACE or CUSTOM connection.
The type of the connection. Currently, these types are supported:
JDBC
- Designates a connection to a database through Java Database Connectivity (JDBC).
KAFKA
- Designates a connection to an Apache Kafka streaming platform.
MONGODB
- Designates a connection to a MongoDB document database.
NETWORK
- Designates a network connection to a data source within an Amazon Virtual Private Cloud environment (Amazon VPC).
SFTP is not supported.
" + "documentation":"The type of the connection. Currently, these types are supported:
JDBC
- Designates a connection to a database through Java Database Connectivity (JDBC).
KAFKA
- Designates a connection to an Apache Kafka streaming platform.
MONGODB
- Designates a connection to a MongoDB document database.
NETWORK
- Designates a network connection to a data source within an Amazon Virtual Private Cloud environment (Amazon VPC).
MARKETPLACE
- Uses configuration settings contained in a connector purchased from AWS Marketplace to read from and write to data stores that are not natively supported by AWS Glue.
CUSTOM
- Uses configuration settings contained in a custom connector to read from and write to data stores that are not natively supported by AWS Glue.
SFTP is not supported.
" }, "MatchCriteria":{ "shape":"MatchCriteria", @@ -3922,7 +3922,11 @@ "KAFKA_BOOTSTRAP_SERVERS", "KAFKA_SSL_ENABLED", "KAFKA_CUSTOM_CERT", - "KAFKA_SKIP_CUSTOM_CERT_VALIDATION" + "KAFKA_SKIP_CUSTOM_CERT_VALIDATION", + "SECRET_ID", + "CONNECTOR_URL", + "CONNECTOR_TYPE", + "CONNECTOR_CLASS_NAME" ] }, "ConnectionType":{ @@ -3932,7 +3936,9 @@ "SFTP", "MONGODB", "KAFKA", - "NETWORK" + "NETWORK", + "MARKETPLACE", + "CUSTOM" ] }, "ConnectionsList":{ @@ -10567,10 +10573,20 @@ "SchemaId":{ "type":"structure", "members":{ - "SchemaArn":{"shape":"GlueResourceArn"}, - "SchemaName":{"shape":"SchemaRegistryNameString"}, - "RegistryName":{"shape":"SchemaRegistryNameString"} - } + "SchemaArn":{ + "shape":"GlueResourceArn", + "documentation":"The Amazon Resource Name (ARN) of the schema. One of SchemaArn
or SchemaName
has to be provided.
The name of the schema. One of SchemaArn
or SchemaName
has to be provided.
The name of the schema registry that contains the schema.
" + } + }, + "documentation":"The unique ID of the schema in the AWS Glue schema registry.
" }, "SchemaListDefinition":{ "type":"list", @@ -10707,9 +10723,16 @@ "SchemaVersionNumber":{ "type":"structure", "members":{ - "LatestVersion":{"shape":"LatestSchemaVersionBoolean"}, - "VersionNumber":{"shape":"VersionLongNumber"} - } + "LatestVersion":{ + "shape":"LatestSchemaVersionBoolean", + "documentation":"The latest version available for the schema.
" + }, + "VersionNumber":{ + "shape":"VersionLongNumber", + "documentation":"The version number of the schema.
" + } + }, + "documentation":"A structure containing the schema version information.
" }, "SchemaVersionStatus":{ "type":"string", diff --git a/services/greengrass/pom.xml b/services/greengrass/pom.xml index 761aea110d8f..39956592bc78 100644 --- a/services/greengrass/pom.xml +++ b/services/greengrass/pom.xml @@ -21,7 +21,7 @@Creates a member within a Managed Blockchain network.
" + "documentation":"Creates a member within a Managed Blockchain network.
Applies only to Hyperledger Fabric.
" }, "CreateNetwork":{ "name":"CreateNetwork", @@ -49,13 +49,13 @@ {"shape":"ResourceLimitExceededException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Creates a new blockchain network using Amazon Managed Blockchain.
" + "documentation":"Creates a new blockchain network using Amazon Managed Blockchain.
Applies only to Hyperledger Fabric.
" }, "CreateNode":{ "name":"CreateNode", "http":{ "method":"POST", - "requestUri":"/networks/{networkId}/members/{memberId}/nodes" + "requestUri":"/networks/{networkId}/nodes" }, "input":{"shape":"CreateNodeInput"}, "output":{"shape":"CreateNodeOutput"}, @@ -69,7 +69,7 @@ {"shape":"ResourceLimitExceededException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Creates a peer node in a member.
" + "documentation":"Creates a node on the specified blockchain network.
Applies to Hyperledger Fabric and Ethereum.
" }, "CreateProposal":{ "name":"CreateProposal", @@ -87,7 +87,7 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Creates a proposal for a change to the network that other members of the network can vote on, for example, a proposal to add a new member to the network. Any member can create a proposal.
" + "documentation":"Creates a proposal for a change to the network that other members of the network can vote on, for example, a proposal to add a new member to the network. Any member can create a proposal.
Applies only to Hyperledger Fabric.
" }, "DeleteMember":{ "name":"DeleteMember", @@ -105,13 +105,13 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Deletes a member. Deleting a member removes the member and all associated resources from the network. DeleteMember
can only be called for a specified MemberId
if the principal performing the action is associated with the AWS account that owns the member. In all other cases, the DeleteMember
action is carried out as the result of an approved proposal to remove a member. If MemberId
is the last member in a network specified by the last AWS account, the network is deleted also.
Deletes a member. Deleting a member removes the member and all associated resources from the network. DeleteMember
can only be called for a specified MemberId
if the principal performing the action is associated with the AWS account that owns the member. In all other cases, the DeleteMember
action is carried out as the result of an approved proposal to remove a member. If MemberId
is the last member in a network specified by the last AWS account, the network is deleted also.
Applies only to Hyperledger Fabric.
" }, "DeleteNode":{ "name":"DeleteNode", "http":{ "method":"DELETE", - "requestUri":"/networks/{networkId}/members/{memberId}/nodes/{nodeId}" + "requestUri":"/networks/{networkId}/nodes/{nodeId}" }, "input":{"shape":"DeleteNodeInput"}, "output":{"shape":"DeleteNodeOutput"}, @@ -123,7 +123,7 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Deletes a peer node from a member that your AWS account owns. All data on the node is lost and cannot be recovered.
" + "documentation":"Deletes a node that your AWS account owns. All data on the node is lost and cannot be recovered.
Applies to Hyperledger Fabric and Ethereum.
" }, "GetMember":{ "name":"GetMember", @@ -140,7 +140,7 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Returns detailed information about a member.
" + "documentation":"Returns detailed information about a member.
Applies only to Hyperledger Fabric.
" }, "GetNetwork":{ "name":"GetNetwork", @@ -157,13 +157,13 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Returns detailed information about a network.
" + "documentation":"Returns detailed information about a network.
Applies to Hyperledger Fabric and Ethereum.
" }, "GetNode":{ "name":"GetNode", "http":{ "method":"GET", - "requestUri":"/networks/{networkId}/members/{memberId}/nodes/{nodeId}" + "requestUri":"/networks/{networkId}/nodes/{nodeId}" }, "input":{"shape":"GetNodeInput"}, "output":{"shape":"GetNodeOutput"}, @@ -174,7 +174,7 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Returns detailed information about a peer node.
" + "documentation":"Returns detailed information about a node.
Applies to Hyperledger Fabric and Ethereum.
" }, "GetProposal":{ "name":"GetProposal", @@ -191,7 +191,7 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Returns detailed information about a proposal.
" + "documentation":"Returns detailed information about a proposal.
Applies only to Hyperledger Fabric.
" }, "ListInvitations":{ "name":"ListInvitations", @@ -209,7 +209,7 @@ {"shape":"ResourceLimitExceededException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Returns a listing of all invitations for the current AWS account.
" + "documentation":"Returns a list of all invitations for the current AWS account.
Applies only to Hyperledger Fabric.
" }, "ListMembers":{ "name":"ListMembers", @@ -225,7 +225,7 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Returns a listing of the members in a network and properties of their configurations.
" + "documentation":"Returns a list of the members in a network and properties of their configurations.
Applies only to Hyperledger Fabric.
" }, "ListNetworks":{ "name":"ListNetworks", @@ -241,13 +241,13 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Returns information about the networks in which the current AWS account has members.
" + "documentation":"Returns information about the networks in which the current AWS account participates.
Applies to Hyperledger Fabric and Ethereum.
" }, "ListNodes":{ "name":"ListNodes", "http":{ "method":"GET", - "requestUri":"/networks/{networkId}/members/{memberId}/nodes" + "requestUri":"/networks/{networkId}/nodes" }, "input":{"shape":"ListNodesInput"}, "output":{"shape":"ListNodesOutput"}, @@ -257,7 +257,7 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Returns information about the nodes within a network.
" + "documentation":"Returns information about the nodes within a network.
Applies to Hyperledger Fabric and Ethereum.
" }, "ListProposalVotes":{ "name":"ListProposalVotes", @@ -273,7 +273,7 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Returns the listing of votes for a specified proposal, including the value of each vote and the unique identifier of the member that cast the vote.
" + "documentation":"Returns the list of votes for a specified proposal, including the value of each vote and the unique identifier of the member that cast the vote.
Applies only to Hyperledger Fabric.
" }, "ListProposals":{ "name":"ListProposals", @@ -290,7 +290,7 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Returns a listing of proposals for the network.
" + "documentation":"Returns a list of proposals for the network.
Applies only to Hyperledger Fabric.
" }, "RejectInvitation":{ "name":"RejectInvitation", @@ -308,7 +308,7 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Rejects an invitation to join a network. This action can be called by a principal in an AWS account that has received an invitation to create a member and join a network.
" + "documentation":"Rejects an invitation to join a network. This action can be called by a principal in an AWS account that has received an invitation to create a member and join a network.
Applies only to Hyperledger Fabric.
" }, "UpdateMember":{ "name":"UpdateMember", @@ -325,13 +325,13 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Updates a member configuration with new parameters.
" + "documentation":"Updates a member configuration with new parameters.
Applies only to Hyperledger Fabric.
" }, "UpdateNode":{ "name":"UpdateNode", "http":{ "method":"PATCH", - "requestUri":"/networks/{networkId}/members/{memberId}/nodes/{nodeId}" + "requestUri":"/networks/{networkId}/nodes/{nodeId}" }, "input":{"shape":"UpdateNodeInput"}, "output":{"shape":"UpdateNodeOutput"}, @@ -342,7 +342,7 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Updates a node configuration with new parameters.
" + "documentation":"Updates a node configuration with new parameters.
Applies only to Hyperledger Fabric.
" }, "VoteOnProposal":{ "name":"VoteOnProposal", @@ -360,13 +360,14 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServiceErrorException"} ], - "documentation":"Casts a vote for a specified ProposalId
on behalf of a member. The member to vote as, specified by VoterMemberId
, must be in the same AWS account as the principal that calls the action.
Casts a vote for a specified ProposalId
on behalf of a member. The member to vote as, specified by VoterMemberId
, must be in the same AWS account as the principal that calls the action.
Applies only to Hyperledger Fabric.
" } }, "shapes":{ "AccessDeniedException":{ "type":"structure", "members":{ + "Message":{"shape":"String"} }, "documentation":"You do not have sufficient access to perform this action.
", "error":{"httpStatusCode":403}, @@ -388,7 +389,7 @@ "documentation":"Determines whether the vote percentage must be greater than the ThresholdPercentage
or must be greater than or equal to the ThreholdPercentage
to be approved.
A policy type that defines the voting rules for the network. The rules decide if a proposal is approved. Approval may be based on criteria such as the percentage of YES
votes and the duration of the proposal. The policy applies to all proposals and is specified when the network is created.
A policy type that defines the voting rules for the network. The rules decide if a proposal is approved. Approval may be based on criteria such as the percentage of YES
votes and the duration of the proposal. The policy applies to all proposals and is specified when the network is created.
Applies only to Hyperledger Fabric.
" }, "AvailabilityZoneString":{"type":"string"}, "ClientRequestTokenString":{ @@ -499,7 +500,6 @@ "required":[ "ClientRequestToken", "NetworkId", - "MemberId", "NodeConfiguration" ], "members":{ @@ -510,15 +510,13 @@ }, "NetworkId":{ "shape":"ResourceIdString", - "documentation":"The unique identifier of the network in which this node runs.
", + "documentation":"The unique identifier of the network for the node.
Ethereum public networks have the following NetworkId
s:
n-ethereum-mainnet
n-ethereum-rinkeby
n-ethereum-ropsten
The unique identifier of the member that owns this node.
", - "location":"uri", - "locationName":"memberId" + "documentation":"The unique identifier of the member that owns this node.
Applies only to Hyperledger Fabric.
" }, "NodeConfiguration":{ "shape":"NodeConfiguration", @@ -608,20 +606,19 @@ "type":"structure", "required":[ "NetworkId", - "MemberId", "NodeId" ], "members":{ "NetworkId":{ "shape":"ResourceIdString", - "documentation":"The unique identifier of the network that the node belongs to.
", + "documentation":"The unique identifier of the network that the node is on.
Ethereum public networks have the following NetworkId
s:
n-ethereum-mainnet
n-ethereum-rinkeby
n-ethereum-ropsten
The unique identifier of the member that owns this node.
", - "location":"uri", + "documentation":"The unique identifier of the member that owns this node.
Applies only to Hyperledger Fabric and is required for Hyperledger Fabric.
", + "location":"querystring", "locationName":"memberId" }, "NodeId":{ @@ -654,7 +651,10 @@ }, "Framework":{ "type":"string", - "enum":["HYPERLEDGER_FABRIC"] + "enum":[ + "HYPERLEDGER_FABRIC", + "ETHEREUM" + ] }, "FrameworkVersionString":{ "type":"string", @@ -716,20 +716,19 @@ "type":"structure", "required":[ "NetworkId", - "MemberId", "NodeId" ], "members":{ "NetworkId":{ "shape":"ResourceIdString", - "documentation":"The unique identifier of the network to which the node belongs.
", + "documentation":"The unique identifier of the network that the node is on.
", "location":"uri", "locationName":"networkId" }, "MemberId":{ "shape":"ResourceIdString", - "documentation":"The unique identifier of the member that owns the node.
", - "location":"uri", + "documentation":"The unique identifier of the member that owns the node.
Applies only to Hyperledger Fabric and is required for Hyperledger Fabric.
", + "location":"querystring", "locationName":"memberId" }, "NodeId":{ @@ -827,7 +826,7 @@ }, "NetworkSummary":{"shape":"NetworkSummary"} }, - "documentation":"An invitation to an AWS account to create a member and join the network.
" + "documentation":"An invitation to an AWS account to create a member and join the network.
Applies only to Hyperledger Fabric.
" }, "InvitationList":{ "type":"list", @@ -852,7 +851,7 @@ "documentation":"The AWS account ID to invite.
" } }, - "documentation":"An action to invite a specific AWS account to create a member and join the network. The InviteAction
is carried out when a Proposal
is APPROVED
.
An action to invite a specific AWS account to create a member and join the network. The InviteAction
is carried out when a Proposal
is APPROVED
.
Applies only to Hyperledger Fabric.
" }, "InviteActionList":{ "type":"list", @@ -964,7 +963,7 @@ }, "Status":{ "shape":"NetworkStatus", - "documentation":"An optional status specifier. If provided, only networks currently in this status are listed.
", + "documentation":"An optional status specifier. If provided, only networks currently in this status are listed.
Applies only to Hyperledger Fabric.
", "location":"querystring", "locationName":"status" }, @@ -997,10 +996,7 @@ }, "ListNodesInput":{ "type":"structure", - "required":[ - "NetworkId", - "MemberId" - ], + "required":["NetworkId"], "members":{ "NetworkId":{ "shape":"ResourceIdString", @@ -1010,8 +1006,8 @@ }, "MemberId":{ "shape":"ResourceIdString", - "documentation":"The unique identifier of the member who owns the nodes to list.
", - "location":"uri", + "documentation":"The unique identifier of the member who owns the nodes to list.
Applies only to Hyperledger Fabric and is required for Hyperledger Fabric.
", + "location":"querystring", "locationName":"memberId" }, "Status":{ @@ -1085,7 +1081,7 @@ "members":{ "ProposalVotes":{ "shape":"ProposalVoteList", - "documentation":"The listing of votes.
" + "documentation":"The list of votes.
" }, "NextToken":{ "shape":"PaginationToken", @@ -1186,7 +1182,7 @@ "documentation":"The date and time that the member was created.
" } }, - "documentation":"Member configuration properties.
" + "documentation":"Member configuration properties.
Applies only to Hyperledger Fabric.
" }, "MemberConfiguration":{ "type":"structure", @@ -1212,7 +1208,7 @@ "documentation":"Configuration properties for logging events associated with a member of a Managed Blockchain network.
" } }, - "documentation":"Configuration properties of the member.
" + "documentation":"Configuration properties of the member.
Applies only to Hyperledger Fabric.
" }, "MemberFabricAttributes":{ "type":"structure", @@ -1241,7 +1237,7 @@ }, "AdminPassword":{ "shape":"PasswordString", - "documentation":"The password for the member's initial administrative user. The AdminPassword
must be at least eight characters long and no more than 32 characters. It must contain at least one uppercase letter, one lowercase letter, and one digit. It cannot have a single quote(‘), double quote(“), forward slash(/), backward slash(\\), @, or a space.
The password for the member's initial administrative user. The AdminPassword
must be at least eight characters long and no more than 32 characters. It must contain at least one uppercase letter, one lowercase letter, and one digit. It cannot have a single quotation mark (‘), a double quotation marks (“), a forward slash(/), a backward slash(\\), @, or a space.
Configuration properties for Hyperledger Fabric for a member in a Managed Blockchain network using the Hyperledger Fabric framework.
" @@ -1331,7 +1327,7 @@ "documentation":"An indicator of whether the member is owned by your AWS account or a different AWS account.
" } }, - "documentation":"A summary of configuration properties for a member.
" + "documentation":"A summary of configuration properties for a member.
Applies only to Hyperledger Fabric.
" }, "MemberSummaryList":{ "type":"list", @@ -1389,6 +1385,16 @@ }, "documentation":"Network configuration properties.
" }, + "NetworkEthereumAttributes":{ + "type":"structure", + "members":{ + "ChainId":{ + "shape":"String", + "documentation":"The Ethereum CHAIN_ID
associated with the Ethereum network. Chain IDs are as follows:
mainnet = 1
rinkeby = 4
ropsten = 3
Attributes of Ethereum for a network.
" + }, "NetworkFabricAttributes":{ "type":"structure", "members":{ @@ -1420,6 +1426,10 @@ "Fabric":{ "shape":"NetworkFabricAttributes", "documentation":"Attributes of Hyperledger Fabric for a Managed Blockchain network that uses Hyperledger Fabric.
" + }, + "Ethereum":{ + "shape":"NetworkEthereumAttributes", + "documentation":"Attributes of an Ethereum network for Managed Blockchain resources participating in an Ethereum network.
" } }, "documentation":"Attributes relevant to the network for the blockchain framework that the network uses.
" @@ -1444,7 +1454,7 @@ "type":"string", "max":64, "min":1, - "pattern":"^(?!-)^[^0-9](?!.*--)[A-Za-z0-9-]+[^- ]$" + "pattern":"^(?!-|[0-9])(?!.*-$)(?!.*?--)[a-zA-Z0-9-]+$" }, "NetworkStatus":{ "type":"string", @@ -1499,11 +1509,11 @@ "members":{ "NetworkId":{ "shape":"ResourceIdString", - "documentation":"The unique identifier of the network that the node is in.
" + "documentation":"The unique identifier of the network that the node is on.
" }, "MemberId":{ "shape":"ResourceIdString", - "documentation":"The unique identifier of the member to which the node belongs.
" + "documentation":"The unique identifier of the member to which the node belongs.
Applies only to Hyperledger Fabric.
" }, "Id":{ "shape":"ResourceIdString", @@ -1523,11 +1533,11 @@ }, "LogPublishingConfiguration":{ "shape":"NodeLogPublishingConfiguration", - "documentation":"Configuration properties for logging events associated with a peer node owned by a member in a Managed Blockchain network.
" + "documentation":"Configuration properties for logging events associated with a peer node on a Hyperledger Fabric network on Managed Blockchain.
" }, "StateDB":{ "shape":"StateDBType", - "documentation":"The state database that the node uses. Values are LevelDB
or CouchDB
.
The state database that the node uses. Values are LevelDB
or CouchDB
.
Applies only to Hyperledger Fabric.
" }, "Status":{ "shape":"NodeStatus", @@ -1538,14 +1548,11 @@ "documentation":"The date and time that the node was created.
" } }, - "documentation":"Configuration properties of a peer node.
" + "documentation":"Configuration properties of a node.
" }, "NodeConfiguration":{ "type":"structure", - "required":[ - "InstanceType", - "AvailabilityZone" - ], + "required":["InstanceType"], "members":{ "InstanceType":{ "shape":"InstanceTypeString", @@ -1557,14 +1564,28 @@ }, "LogPublishingConfiguration":{ "shape":"NodeLogPublishingConfiguration", - "documentation":"Configuration properties for logging events associated with a peer node owned by a member in a Managed Blockchain network.
" + "documentation":"Configuration properties for logging events associated with a peer node on a Hyperledger Fabric network on Managed Blockchain.
" }, "StateDB":{ "shape":"StateDBType", - "documentation":"The state database that the node uses. Values are LevelDB
or CouchDB
. When using an Amazon Managed Blockchain network with Hyperledger Fabric version 1.4 or later, the default is CouchDB
.
The state database that the node uses. Values are LevelDB
or CouchDB
. When using an Amazon Managed Blockchain network with Hyperledger Fabric version 1.4 or later, the default is CouchDB
.
Applies only to Hyperledger Fabric.
" } }, - "documentation":"Configuration properties of a peer node.
" + "documentation":"Configuration properties of a node.
" + }, + "NodeEthereumAttributes":{ + "type":"structure", + "members":{ + "HttpEndpoint":{ + "shape":"String", + "documentation":"The endpoint on which the Ethereum node listens to run Ethereum JSON-RPC methods over HTTP connections from a client. Use this endpoint in client code for smart contracts when using an HTTP connection. Connections to this endpoint are authenticated using Signature Version 4.
" + }, + "WebSocketEndpoint":{ + "shape":"String", + "documentation":"The endpoint on which the Ethereum node listens to run Ethereum JSON-RPC methods over WebSockets connections from a client. Use this endpoint in client code for smart contracts when using a WebSockets connection. Connections to this endpoint are authenticated using Signature Version 4.
" + } + }, + "documentation":"Attributes of an Ethereum node.
" }, "NodeFabricAttributes":{ "type":"structure", @@ -1578,7 +1599,7 @@ "documentation":"The endpoint that identifies the peer node for peer channel-based event services.
" } }, - "documentation":"Attributes of Hyperledger Fabric for a peer node on a Managed Blockchain network that uses Hyperledger Fabric.
" + "documentation":"Attributes of Hyperledger Fabric for a peer node on a Hyperledger Fabric network on Managed Blockchain.
" }, "NodeFabricLogPublishingConfiguration":{ "type":"structure", @@ -1600,9 +1621,13 @@ "Fabric":{ "shape":"NodeFabricAttributes", "documentation":"Attributes of Hyperledger Fabric for a peer node on a Managed Blockchain network that uses Hyperledger Fabric.
" + }, + "Ethereum":{ + "shape":"NodeEthereumAttributes", + "documentation":"Attributes of Ethereum for a node on a Managed Blockchain network that uses Ethereum.
" } }, - "documentation":"Attributes relevant to a peer node on a Managed Blockchain network for the blockchain framework that the network uses.
" + "documentation":"Attributes relevant to a node on a Managed Blockchain network for the blockchain framework that the network uses.
" }, "NodeListMaxResults":{ "type":"integer", @@ -1618,13 +1643,14 @@ "documentation":"Configuration properties for logging events associated with a node that is owned by a member of a Managed Blockchain network using the Hyperledger Fabric framework.
" } }, - "documentation":"Configuration properties for logging events associated with a peer node owned by a member in a Managed Blockchain network.
" + "documentation":"Configuration properties for logging events associated with a peer node on a Hyperledger Fabric network on Managed Blockchain.
" }, "NodeStatus":{ "type":"string", "enum":[ "CREATING", "AVAILABLE", + "UNHEALTHY", "CREATE_FAILED", "UPDATING", "DELETING", @@ -1656,7 +1682,7 @@ "documentation":"The EC2 instance type for the node.
" } }, - "documentation":"A summary of configuration properties for a peer node.
" + "documentation":"A summary of configuration properties for a node.
" }, "NodeSummaryList":{ "type":"list", @@ -1726,7 +1752,7 @@ "documentation":" The number of votes remaining to be cast on the proposal by members. In other words, the number of members minus the sum of YES
votes and NO
votes.
Properties of a proposal on a Managed Blockchain network.
" + "documentation":"Properties of a proposal on a Managed Blockchain network.
Applies only to Hyperledger Fabric.
" }, "ProposalActions":{ "type":"structure", @@ -1740,7 +1766,7 @@ "documentation":" The actions to perform for an APPROVED
proposal to remove a member from the network, which deletes the member and all associated member resources from the network.
The actions to carry out if a proposal is APPROVED
.
The actions to carry out if a proposal is APPROVED
.
Applies only to Hyperledger Fabric.
" }, "ProposalDurationInt":{ "type":"integer", @@ -1796,7 +1822,7 @@ "documentation":" The date and time that the proposal expires. This is the CreationDate
plus the ProposalDurationInHours
that is specified in the ProposalThresholdPolicy
. After this date and time, if members have not cast enough votes to determine the outcome according to the voting policy, the proposal is EXPIRED
and Actions
are not carried out.
Properties of a proposal.
" + "documentation":"Properties of a proposal.
Applies only to Hyperledger Fabric.
" }, "ProposalSummaryList":{ "type":"list", @@ -1832,7 +1858,7 @@ "documentation":"The unique identifier of the member to remove.
" } }, - "documentation":"An action to remove a member from a Managed Blockchain network as the result of a removal proposal that is APPROVED
. The member and all associated resources are deleted from the network.
An action to remove a member from a Managed Blockchain network as the result of a removal proposal that is APPROVED
. The member and all associated resources are deleted from the network.
Applies only to Hyperledger Fabric.
" }, "RemoveActionList":{ "type":"list", @@ -1921,13 +1947,13 @@ "members":{ "NetworkId":{ "shape":"ResourceIdString", - "documentation":"The unique ID of the Managed Blockchain network to which the member belongs.
", + "documentation":"The unique identifier of the Managed Blockchain network to which the member belongs.
", "location":"uri", "locationName":"networkId" }, "MemberId":{ "shape":"ResourceIdString", - "documentation":"The unique ID of the member.
", + "documentation":"The unique identifier of the member.
", "location":"uri", "locationName":"memberId" }, @@ -1946,25 +1972,22 @@ "type":"structure", "required":[ "NetworkId", - "MemberId", "NodeId" ], "members":{ "NetworkId":{ "shape":"ResourceIdString", - "documentation":"The unique ID of the Managed Blockchain network to which the node belongs.
", + "documentation":"The unique identifier of the network that the node is on.
", "location":"uri", "locationName":"networkId" }, "MemberId":{ "shape":"ResourceIdString", - "documentation":"The unique ID of the member that owns the node.
", - "location":"uri", - "locationName":"memberId" + "documentation":"The unique identifier of the member that owns the node.
Applies only to Hyperledger Fabric.
" }, "NodeId":{ "shape":"ResourceIdString", - "documentation":"The unique ID of the node.
", + "documentation":"The unique identifier of the node.
", "location":"uri", "locationName":"nodeId" }, @@ -2041,7 +2064,7 @@ "documentation":"The unique identifier of the member that cast the vote.
" } }, - "documentation":"Properties of an individual vote that a member cast for a proposal.
" + "documentation":"Properties of an individual vote that a member cast for a proposal.
Applies only to Hyperledger Fabric.
" }, "VoteValue":{ "type":"string", @@ -2058,8 +2081,8 @@ "documentation":"Defines the rules for the network for voting on proposals, such as the percentage of YES
votes required for the proposal to be approved and the duration of the proposal. The policy applies to all proposals and is specified when the network is created.
The voting rules for the network to decide if a proposal is accepted
" + "documentation":"The voting rules for the network to decide if a proposal is accepted
Applies only to Hyperledger Fabric.
" } }, - "documentation":"Amazon Managed Blockchain is a fully managed service for creating and managing blockchain networks using open source frameworks. Blockchain allows you to build applications where multiple parties can securely and transparently run transactions and share data without the need for a trusted, central authority. Currently, Managed Blockchain supports the Hyperledger Fabric open source framework.
" + "documentation":"Amazon Managed Blockchain is a fully managed service for creating and managing blockchain networks using open-source frameworks. Blockchain allows you to build applications where multiple parties can securely and transparently run transactions and share data without the need for a trusted, central authority.
Managed Blockchain supports the Hyperledger Fabric and Ethereum open-source frameworks. Because of fundamental differences between the frameworks, some API actions or data types may only apply in the context of one framework and not the other. For example, actions related to Hyperledger Fabric network members such as CreateMember
and DeleteMember
do not apply to Ethereum.
The description for each action indicates the framework or frameworks to which it applies. Data types and properties that apply only in the context of a particular framework are similarly indicated.
" } diff --git a/services/marketplacecatalog/pom.xml b/services/marketplacecatalog/pom.xml index e680dec8c518..e100af56dcab 100644 --- a/services/marketplacecatalog/pom.xml +++ b/services/marketplacecatalog/pom.xml @@ -21,7 +21,7 @@Lists the sites for the specified AWS account.
" + }, + "ListTagsForResource":{ + "name":"ListTagsForResource", + "http":{ + "method":"GET", + "requestUri":"/tags/{ResourceArn}" + }, + "input":{"shape":"ListTagsForResourceRequest"}, + "output":{"shape":"ListTagsForResourceResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"NotFoundException"} + ], + "documentation":"Lists the tags for the specified resource.
" + }, + "TagResource":{ + "name":"TagResource", + "http":{ + "method":"POST", + "requestUri":"/tags/{ResourceArn}" + }, + "input":{"shape":"TagResourceRequest"}, + "output":{"shape":"TagResourceResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"NotFoundException"} + ], + "documentation":"Adds tags to the specified resource.
" + }, + "UntagResource":{ + "name":"UntagResource", + "http":{ + "method":"DELETE", + "requestUri":"/tags/{ResourceArn}" + }, + "input":{"shape":"UntagResourceRequest"}, + "output":{"shape":"UntagResourceResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"}, + {"shape":"NotFoundException"} + ], + "documentation":"Removes tags from the specified resource.
" } }, "shapes":{ @@ -141,6 +186,11 @@ "max":12, "min":12 }, + "Arn":{ + "type":"string", + "max":1011, + "pattern":"^(arn:aws([a-z-]+)?:outposts:[a-z\\d-]+:\\d{12}:([a-z\\d-]+)/)[a-z]{2,8}-[a-f0-9]{17}$" + }, "AvailabilityZone":{ "type":"string", "documentation":"The Availability Zone.
You must specify AvailabilityZone
or AvailabilityZoneId
.
The Amazon Resource Name (ARN) of the resource.
", + "location":"uri", + "locationName":"ResourceArn" + } + } + }, + "ListTagsForResourceResponse":{ + "type":"structure", + "members":{ + "Tags":{ + "shape":"TagMap", + "documentation":"The resource tags.
" + } + } + }, "MaxResults1000":{ "type":"integer", "documentation":"The maximum page size.
", @@ -381,7 +452,7 @@ }, "OutpostDescription":{ "type":"string", - "documentation":"The Outpost description.
", + "documentation":"The description of the Outpost.
", "max":1000, "min":1, "pattern":"^[\\S ]+$" @@ -457,6 +528,12 @@ "min":1, "pattern":"^(?!aws:)[a-zA-Z+-=._:/]+$" }, + "TagKeyList":{ + "type":"list", + "member":{"shape":"TagKey"}, + "max":50, + "min":1 + }, "TagMap":{ "type":"map", "key":{"shape":"TagKey"}, @@ -464,6 +541,30 @@ "max":50, "min":1 }, + "TagResourceRequest":{ + "type":"structure", + "required":[ + "ResourceArn", + "Tags" + ], + "members":{ + "ResourceArn":{ + "shape":"Arn", + "documentation":"The Amazon Resource Name (ARN) of the resource.
", + "location":"uri", + "locationName":"ResourceArn" + }, + "Tags":{ + "shape":"TagMap", + "documentation":"The tags to add to the resource.
" + } + } + }, + "TagResourceResponse":{ + "type":"structure", + "members":{ + } + }, "TagValue":{ "type":"string", "max":256, @@ -476,6 +577,32 @@ "min":1, "pattern":".*\\S.*" }, + "UntagResourceRequest":{ + "type":"structure", + "required":[ + "ResourceArn", + "TagKeys" + ], + "members":{ + "ResourceArn":{ + "shape":"Arn", + "documentation":"The Amazon Resource Name (ARN) of the resource.
", + "location":"uri", + "locationName":"ResourceArn" + }, + "TagKeys":{ + "shape":"TagKeyList", + "documentation":"The tag keys.
", + "location":"querystring", + "locationName":"tagKeys" + } + } + }, + "UntagResourceResponse":{ + "type":"structure", + "members":{ + } + }, "ValidationException":{ "type":"structure", "members":{ @@ -496,5 +623,5 @@ "documentation":"Information about the sites.
" } }, - "documentation":"AWS Outposts is a fully-managed service that extends AWS infrastructure, APIs, and tools to customer premises. By providing local access to AWS-managed infrastructure, AWS Outposts enables customers to build and run applications on premises using the same programming interfaces as in AWS Regions, while using local compute and storage resources for lower latency and local data processing needs.
" + "documentation":"AWS Outposts is a fully managed service that extends AWS infrastructure, APIs, and tools to customer premises. By providing local access to AWS managed infrastructure, AWS Outposts enables customers to build and run applications on premises using the same programming interfaces as in AWS Regions, while using local compute and storage resources for lower latency and local data processing needs.
" } diff --git a/services/personalize/pom.xml b/services/personalize/pom.xml index 465f28dc99a0..53b1bf4d25ee 100644 --- a/services/personalize/pom.xml +++ b/services/personalize/pom.xml @@ -21,7 +21,7 @@Sends a command to an Amazon QLDB ledger.
Instead of interacting directly with this API, we recommend that you use the Amazon QLDB Driver or the QLDB Shell to execute data transactions on a ledger.
If you are working with an AWS SDK, use the QLDB Driver. The driver provides a high-level abstraction layer above this qldbsession
data plane and manages SendCommand
API calls for you. For information and a list of supported programming languages, see Getting started with the driver in the Amazon QLDB Developer Guide.
If you are working with the AWS Command Line Interface (AWS CLI), use the QLDB Shell. The shell is a command line interface that uses the QLDB Driver to interact with a ledger. For information, see Accessing Amazon QLDB using the QLDB Shell.
Sends a command to an Amazon QLDB ledger.
Instead of interacting directly with this API, we recommend using the QLDB driver or the QLDB shell to execute data transactions on a ledger.
If you are working with an AWS SDK, use the QLDB driver. The driver provides a high-level abstraction layer above this QLDB Session data plane and manages SendCommand
API calls for you. For information and a list of supported programming languages, see Getting started with the driver in the Amazon QLDB Developer Guide.
If you are working with the AWS Command Line Interface (AWS CLI), use the QLDB shell. The shell is a command line interface that uses the QLDB driver to interact with a ledger. For information, see Accessing Amazon QLDB using the QLDB shell.
Contains server-side performance information for the command.
" + } }, "documentation":"Contains the details of the aborted transaction.
" }, @@ -68,7 +72,7 @@ }, "CommitDigest":{ "shape":"CommitDigest", - "documentation":"Specifies the commit digest for the transaction to commit. For every active transaction, the commit digest must be passed. QLDB validates CommitDigest
and rejects the commit with an error if the digest computed on the client does not match the digest computed by QLDB.
Specifies the commit digest for the transaction to commit. For every active transaction, the commit digest must be passed. QLDB validates CommitDigest
and rejects the commit with an error if the digest computed on the client does not match the digest computed by QLDB.
The purpose of the CommitDigest
parameter is to ensure that QLDB commits a transaction if and only if the server has processed the exact set of statements sent by the client, in the same order that client sent them, and with no duplicates.
Contains the details of the transaction to commit.
" @@ -83,6 +87,14 @@ "CommitDigest":{ "shape":"CommitDigest", "documentation":"The commit digest of the committed transaction.
" + }, + "TimingInformation":{ + "shape":"TimingInformation", + "documentation":"Contains server-side performance information for the command.
" + }, + "ConsumedIOs":{ + "shape":"IOUsage", + "documentation":"Contains metrics about the number of I/O requests that were consumed.
" } }, "documentation":"Contains the details of the committed transaction.
" @@ -96,6 +108,10 @@ "EndSessionResult":{ "type":"structure", "members":{ + "TimingInformation":{ + "shape":"TimingInformation", + "documentation":"Contains server-side performance information for the command.
" + } }, "documentation":"Contains the details of the ended session.
" }, @@ -129,6 +145,14 @@ "FirstPage":{ "shape":"Page", "documentation":"Contains the details of the first fetched page.
" + }, + "TimingInformation":{ + "shape":"TimingInformation", + "documentation":"Contains server-side performance information for the command.
" + }, + "ConsumedIOs":{ + "shape":"IOUsage", + "documentation":"Contains metrics about the number of I/O requests that were consumed.
" } }, "documentation":"Contains the details of the executed statement.
" @@ -157,10 +181,32 @@ "Page":{ "shape":"Page", "documentation":"Contains details of the fetched page.
" + }, + "TimingInformation":{ + "shape":"TimingInformation", + "documentation":"Contains server-side performance information for the command.
" + }, + "ConsumedIOs":{ + "shape":"IOUsage", + "documentation":"Contains metrics about the number of I/O requests that were consumed.
" } }, "documentation":"Contains the page that was fetched.
" }, + "IOUsage":{ + "type":"structure", + "members":{ + "ReadIOs":{ + "shape":"ReadIOs", + "documentation":"The number of read I/O requests that the command performed.
" + }, + "WriteIOs":{ + "shape":"WriteIOs", + "documentation":"The number of write I/O requests that the command performed.
" + } + }, + "documentation":"Contains I/O usage metrics for a command that was invoked.
" + }, "InvalidSessionException":{ "type":"structure", "members":{ @@ -222,6 +268,7 @@ "min":4, "pattern":"^[A-Za-z-0-9+/=]+$" }, + "ProcessingTimeMilliseconds":{"type":"long"}, "RateExceededException":{ "type":"structure", "members":{ @@ -230,6 +277,7 @@ "documentation":"Returned when the rate of requests exceeds the allowed throughput.
", "exception":true }, + "ReadIOs":{"type":"long"}, "SendCommandRequest":{ "type":"structure", "members":{ @@ -323,6 +371,10 @@ "SessionToken":{ "shape":"SessionToken", "documentation":"Session token of the started session. This SessionToken
is required for every subsequent command that is issued during the current session.
Contains server-side performance information for the command.
" } }, "documentation":"Contains the details of the started session.
" @@ -339,6 +391,10 @@ "TransactionId":{ "shape":"TransactionId", "documentation":"The transaction ID of the started transaction.
" + }, + "TimingInformation":{ + "shape":"TimingInformation", + "documentation":"Contains server-side performance information for the command.
" } }, "documentation":"Contains the details of the started transaction.
" @@ -352,6 +408,16 @@ "type":"list", "member":{"shape":"ValueHolder"} }, + "TimingInformation":{ + "type":"structure", + "members":{ + "ProcessingTimeMilliseconds":{ + "shape":"ProcessingTimeMilliseconds", + "documentation":"The amount of time that was taken for the command to finish processing, measured in milliseconds.
" + } + }, + "documentation":"Contains server-side performance information for a command. Amazon QLDB captures timing information between the times when it receives the request and when it sends the corresponding response.
" + }, "TransactionId":{ "type":"string", "max":22, @@ -370,12 +436,13 @@ "documentation":"An Amazon Ion plaintext value contained in a ValueHolder
structure.
A structure that can contain an Amazon Ion value in multiple encoding formats.
" + "documentation":"A structure that can contain a value in multiple encoding formats.
" }, "ValueHolders":{ "type":"list", "member":{"shape":"ValueHolder"} - } + }, + "WriteIOs":{"type":"long"} }, - "documentation":"The transactional data APIs for Amazon QLDB
Instead of interacting directly with this API, we recommend that you use the Amazon QLDB Driver or the QLDB Shell to execute data transactions on a ledger.
If you are working with an AWS SDK, use the QLDB Driver. The driver provides a high-level abstraction layer above this qldbsession
data plane and manages SendCommand
API calls for you. For information and a list of supported programming languages, see Getting started with the driver in the Amazon QLDB Developer Guide.
If you are working with the AWS Command Line Interface (AWS CLI), use the QLDB Shell. The shell is a command line interface that uses the QLDB Driver to interact with a ledger. For information, see Accessing Amazon QLDB using the QLDB Shell.
The transactional data APIs for Amazon QLDB
Instead of interacting directly with this API, we recommend using the QLDB driver or the QLDB shell to execute data transactions on a ledger.
If you are working with an AWS SDK, use the QLDB driver. The driver provides a high-level abstraction layer above this QLDB Session data plane and manages SendCommand
API calls for you. For information and a list of supported programming languages, see Getting started with the driver in the Amazon QLDB Developer Guide.
If you are working with the AWS Command Line Interface (AWS CLI), use the QLDB shell. The shell is a command line interface that uses the QLDB driver to interact with a ledger. For information, see Accessing Amazon QLDB using the QLDB shell.
Imports security findings generated from an integrated third-party product into Security Hub. This action is requested by the integrated product to import its findings into Security Hub.
The maximum allowed size for a finding is 240 KB. An error is returned for any finding larger than 240 KB.
After a finding is created, BatchImportFindings
cannot be used to update the following finding fields and objects, which Security Hub customers use to manage their investigation workflow.
Confidence
Criticality
Note
RelatedFindings
Severity
Types
UserDefinedFields
VerificationState
Workflow
Imports security findings generated from an integrated third-party product into Security Hub. This action is requested by the integrated product to import its findings into Security Hub.
The maximum allowed size for a finding is 240 KB. An error is returned for any finding larger than 240 KB.
After a finding is created, BatchImportFindings
cannot be used to update the following finding fields and objects, which Security Hub customers use to manage their investigation workflow.
Note
UserDefinedFields
VerificationState
Workflow
BatchImportFindings
can be used to update the following finding fields and objects only if they have not been updated using BatchUpdateFindings
. After they are updated using BatchUpdateFindings
, these fields cannot be updated using BatchImportFindings
.
Confidence
Criticality
RelatedFindings
Severity
Types
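These precedence rules can be sketched as a filter applied to an incoming imported finding. This is a simplified model, not Security Hub's implementation: the four customer workflow fields are never overwritten by an import, and the five provider fields are overwritten only while BatchUpdateFindings has not touched them.

```python
# Fields BatchImportFindings can never update after the finding is created.
CUSTOMER_FIELDS = {"Note", "UserDefinedFields", "VerificationState", "Workflow"}
# Fields an import can update only until BatchUpdateFindings has set them.
CONDITIONAL_FIELDS = {"Confidence", "Criticality", "RelatedFindings", "Severity", "Types"}

def importable_fields(incoming, updated_by_batch_update):
    """Drop fields of an imported finding that the import may not change."""
    blocked = CUSTOMER_FIELDS | (CONDITIONAL_FIELDS & set(updated_by_batch_update))
    return {k: v for k, v in incoming.items() if k not in blocked}

incoming = {"Title": "Open port", "Severity": {"Label": "HIGH"}, "Workflow": {"Status": "NEW"}}
# Severity was already set via BatchUpdateFindings, so the import keeps
# neither Severity (conditional, already updated) nor Workflow (customer field).
print(importable_fields(incoming, {"Severity"}))
# {'Title': 'Open port'}
```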
The native severity from the finding product that generated the finding.
" } }, - "documentation":"The severity of the finding.
The finding provider can provide the initial severity, but cannot update it after that. The severity can only be updated by a master account. It cannot be updated by a member account.
The finding must have either Label
or Normalized
populated. If only one of these attributes is populated, then Security Hub automatically populates the other one. If neither attribute is populated, then the finding is invalid. Label
is the preferred attribute.
The severity of the finding.
The finding provider can provide the initial severity. The finding provider can only update the severity if it has not been updated using BatchUpdateFindings
.
The finding must have either Label
or Normalized
populated. If only one of these attributes is populated, then Security Hub automatically populates the other one. If neither attribute is populated, then the finding is invalid. Label
is the preferred attribute.
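Security Hub documents a mapping between Normalized scores and Labels (0 maps to INFORMATIONAL, 1-39 to LOW, 40-69 to MEDIUM, 70-89 to HIGH, 90-100 to CRITICAL), which makes the auto-population easy to sketch. The helper below is illustrative, not part of the service.

```python
# Documented score band for each label (upper bound, label).
LABEL_FROM_SCORE = [(0, "INFORMATIONAL"), (39, "LOW"), (69, "MEDIUM"), (89, "HIGH"), (100, "CRITICAL")]
# Representative score Security Hub assigns for each label.
SCORE_FROM_LABEL = {"INFORMATIONAL": 0, "LOW": 1, "MEDIUM": 40, "HIGH": 70, "CRITICAL": 90}

def fill_severity(severity):
    """If only Label or only Normalized is set, populate the other one."""
    sev = dict(severity)
    if "Label" in sev and "Normalized" not in sev:
        sev["Normalized"] = SCORE_FROM_LABEL[sev["Label"]]
    elif "Normalized" in sev and "Label" not in sev:
        for upper, label in LABEL_FROM_SCORE:
            if sev["Normalized"] <= upper:
                sev["Label"] = label
                break
    return sev

print(fill_severity({"Label": "MEDIUM"}))   # {'Label': 'MEDIUM', 'Normalized': 40}
print(fill_severity({"Normalized": 75}))    # {'Normalized': 75, 'Label': 'HIGH'}
```

A finding with neither attribute stays invalid; the helper leaves such input unchanged rather than guessing.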
Lists all of the tags on the resource.
" }, + "SyncResource":{ + "name":"SyncResource", + "http":{ + "method":"POST", + "requestUri":"/sync/{resourceType}/{resource}" + }, + "input":{"shape":"SyncResourceRequest"}, + "output":{"shape":"SyncResourceResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"}, + {"shape":"ConflictException"} + ], + "documentation":"Syncs the resource with what is currently recorded in App registry. Specifically, the resource’s App registry system tags are synced with its associated application. The resource is removed if it is not associated with the application. The caller must have permissions to read and update the resource.
" + }, "TagResource":{ "name":"TagResource", "http":{ @@ -257,7 +272,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"InternalServerException"} ], - "documentation":"Assigns one or more tags (key-value pairs) to the specified resource.
Each tag consists of a key and an optional value. If a tag with the same key is already associated with the resource, this action updates its value.
" + "documentation":"Assigns one or more tags (key-value pairs) to the specified resource.
Each tag consists of a key and an optional value. If a tag with the same key is already associated with the resource, this action updates its value.
This operation returns an empty response if the call was successful.
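The same-key behavior described above is ordinary upsert semantics over a key-value map, sketched here for illustration only:

```python
def apply_tags(existing, new_tags):
    """TagResource semantics: add new keys, overwrite values for existing keys."""
    merged = dict(existing)
    merged.update(new_tags)
    return merged

current = {"team": "payments", "env": "dev"}
print(apply_tags(current, {"env": "prod", "cost-center": "42"}))
# {'team': 'payments', 'env': 'prod', 'cost-center': '42'}
```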
" }, "UntagResource":{ "name":"UntagResource", @@ -272,7 +287,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"InternalServerException"} ], - "documentation":"Removes tags from a resource.
" + "documentation":"Removes tags from a resource.
This operation returns an empty response if the call was successful.
" }, "UpdateApplication":{ "name":"UpdateApplication", @@ -1114,6 +1129,51 @@ "pattern":"arn:aws[-a-z]*:cloudformation:[a-z]{2}(-gov)?-[a-z]+-\\d:\\d{12}:stack/[a-zA-Z][-A-Za-z0-9]{0,127}/[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}" }, "String":{"type":"string"}, + "SyncAction":{ + "type":"string", + "enum":[ + "START_SYNC", + "NO_ACTION" + ] + }, + "SyncResourceRequest":{ + "type":"structure", + "required":[ + "resourceType", + "resource" + ], + "members":{ + "resourceType":{ + "shape":"ResourceType", + "documentation":"The type of resource of which the application will be associated.
", + "location":"uri", + "locationName":"resourceType" + }, + "resource":{ + "shape":"ResourceSpecifier", + "documentation":"An entity you can work with and specify with a name or ID. Examples include an Amazon EC2 instance, an AWS CloudFormation stack, or an Amazon S3 bucket.
", + "location":"uri", + "locationName":"resource" + } + } + }, + "SyncResourceResponse":{ + "type":"structure", + "members":{ + "applicationArn":{ + "shape":"ApplicationArn", + "documentation":"The Amazon resource name (ARN) that specifies the application.
" + }, + "resourceArn":{ + "shape":"Arn", + "documentation":"The Amazon resource name (ARN) that specifies the resource.
" + }, + "actionTaken":{ + "shape":"SyncAction", + "documentation":"The results of the output if an application is associated with an ARN value, which could be syncStarted
or None.
Associates the Service Quotas template with your organization so that when new accounts are created in your organization, the template submits increase requests for the specified service quotas. Use the Service Quotas template to request an increase for any adjustable quota value. After you define the Service Quotas template, use this operation to associate, or enable, the template.
" + "documentation":"Associates your quota request template with your organization. When a new account is created in your organization, the quota increase requests in the template are automatically applied to the account. You can add a quota increase request for any adjustable quota to your template.
" }, "DeleteServiceQuotaIncreaseRequestFromTemplate":{ "name":"DeleteServiceQuotaIncreaseRequestFromTemplate", @@ -51,7 +51,7 @@ {"shape":"TemplatesNotAvailableInRegionException"}, {"shape":"NoAvailableOrganizationException"} ], - "documentation":"Removes a service quota increase request from the Service Quotas template.
" + "documentation":"Deletes the quota increase request for the specified quota from your quota request template.
" }, "DisassociateServiceQuotaTemplate":{ "name":"DisassociateServiceQuotaTemplate", @@ -71,7 +71,7 @@ {"shape":"TemplatesNotAvailableInRegionException"}, {"shape":"NoAvailableOrganizationException"} ], - "documentation":"Disables the Service Quotas template. Once the template is disabled, it does not request quota increases for new accounts in your organization. Disabling the quota template does not apply the quota increase requests from the template.
Related operations
To enable the quota template, call AssociateServiceQuotaTemplate.
To delete a specific service quota from the template, use DeleteServiceQuotaIncreaseRequestFromTemplate.
Disables your quota request template. After a template is disabled, the quota increase requests in the template are not applied to new accounts in your organization. Disabling a quota request template does not apply its quota increase requests.
" }, "GetAWSDefaultServiceQuota":{ "name":"GetAWSDefaultServiceQuota", @@ -88,7 +88,7 @@ {"shape":"ServiceException"}, {"shape":"TooManyRequestsException"} ], - "documentation":"Retrieves the default service quotas values. The Value returned for each quota is the AWS default value, even if the quotas have been increased..
" + "documentation":"Retrieves the default value for the specified quota. The default value does not reflect any quota increases.
" }, "GetAssociationForServiceQuotaTemplate":{ "name":"GetAssociationForServiceQuotaTemplate", @@ -108,7 +108,7 @@ {"shape":"TemplatesNotAvailableInRegionException"}, {"shape":"NoAvailableOrganizationException"} ], - "documentation":"Retrieves the ServiceQuotaTemplateAssociationStatus
value from the service. Use this action to determine if the Service Quota template is associated, or enabled.
Retrieves the status of the association for the quota request template.
" }, "GetRequestedServiceQuotaChange":{ "name":"GetRequestedServiceQuotaChange", @@ -125,7 +125,7 @@ {"shape":"ServiceException"}, {"shape":"TooManyRequestsException"} ], - "documentation":"Retrieves the details for a particular increase request.
" + "documentation":"Retrieves information about the specified quota increase request.
" }, "GetServiceQuota":{ "name":"GetServiceQuota", @@ -142,7 +142,7 @@ {"shape":"ServiceException"}, {"shape":"TooManyRequestsException"} ], - "documentation":"Returns the details for the specified service quota. This operation provides a different Value than the GetAWSDefaultServiceQuota
operation. This operation returns the applied value for each quota. GetAWSDefaultServiceQuota
returns the default AWS value for each quota.
Retrieves the applied quota value for the specified quota. For some quotas, only the default values are available. If the applied quota value is not available for a quota, the quota is not retrieved.
" }, "GetServiceQuotaIncreaseRequestFromTemplate":{ "name":"GetServiceQuotaIncreaseRequestFromTemplate", @@ -163,7 +163,7 @@ {"shape":"TemplatesNotAvailableInRegionException"}, {"shape":"NoAvailableOrganizationException"} ], - "documentation":"Returns the details of the service quota increase request in your template.
" + "documentation":"Retrieves information about the specified quota increase request in your quota request template.
" }, "ListAWSDefaultServiceQuotas":{ "name":"ListAWSDefaultServiceQuotas", @@ -181,7 +181,7 @@ {"shape":"ServiceException"}, {"shape":"TooManyRequestsException"} ], - "documentation":"Lists all default service quotas for the specified AWS service or all AWS services. ListAWSDefaultServiceQuotas is similar to ListServiceQuotas except for the Value object. The Value object returned by ListAWSDefaultServiceQuotas
is the default value assigned by AWS. This request returns a list of all service quotas for the specified service. The listing of each you'll see the default values are the values that AWS provides for the quotas.
Always check the NextToken
response parameter when calling any of the List*
operations. These operations can return an unexpected list of results, even when there are more results available. When this happens, the NextToken
response parameter contains a value to pass the next call to the same API to request the next part of the list.
Lists the default values for the quotas for the specified AWS service. A default value does not reflect any quota increases.
" }, "ListRequestedServiceQuotaChangeHistory":{ "name":"ListRequestedServiceQuotaChangeHistory", @@ -199,7 +199,7 @@ {"shape":"ServiceException"}, {"shape":"TooManyRequestsException"} ], - "documentation":"Requests a list of the changes to quotas for a service.
" + "documentation":"Retrieves the quota increase requests for the specified service.
" }, "ListRequestedServiceQuotaChangeHistoryByQuota":{ "name":"ListRequestedServiceQuotaChangeHistoryByQuota", @@ -217,7 +217,7 @@ {"shape":"ServiceException"}, {"shape":"TooManyRequestsException"} ], - "documentation":"Requests a list of the changes to specific service quotas. This command provides additional granularity over the ListRequestedServiceQuotaChangeHistory
command. Once a quota change request has reached CASE_CLOSED, APPROVED,
or DENIED
, the history has been kept for 90 days.
Retrieves the quota increase requests for the specified quota.
" }, "ListServiceQuotaIncreaseRequestsInTemplate":{ "name":"ListServiceQuotaIncreaseRequestsInTemplate", @@ -237,7 +237,7 @@ {"shape":"TemplatesNotAvailableInRegionException"}, {"shape":"NoAvailableOrganizationException"} ], - "documentation":"Returns a list of the quota increase requests in the template.
" + "documentation":"Lists the quota increase requests in the specified quota request template.
" }, "ListServiceQuotas":{ "name":"ListServiceQuotas", @@ -255,7 +255,7 @@ {"shape":"ServiceException"}, {"shape":"TooManyRequestsException"} ], - "documentation":"Lists all service quotas for the specified AWS service. This request returns a list of the service quotas for the specified service. you'll see the default values are the values that AWS provides for the quotas.
Always check the NextToken
response parameter when calling any of the List*
operations. These operations can return an unexpected list of results, even when there are more results available. When this happens, the NextToken
response parameter contains a value to pass the next call to the same API to request the next part of the list.
Lists the applied quota values for the specified AWS service. For some quotas, only the default values are available. If the applied quota value is not available for a quota, the quota is not retrieved.
" }, "ListServices":{ "name":"ListServices", @@ -272,7 +272,24 @@ {"shape":"ServiceException"}, {"shape":"TooManyRequestsException"} ], - "documentation":"Lists the AWS services available in Service Quotas. Not all AWS services are available in Service Quotas. To list the see the list of the service quotas for a specific service, use ListServiceQuotas.
" + "documentation":"Lists the names and codes for the services integrated with Service Quotas.
" + }, + "ListTagsForResource":{ + "name":"ListTagsForResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListTagsForResourceRequest"}, + "output":{"shape":"ListTagsForResourceResponse"}, + "errors":[ + {"shape":"TooManyRequestsException"}, + {"shape":"NoSuchResourceException"}, + {"shape":"IllegalArgumentException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ServiceException"} + ], + "documentation":"Returns a list of the tags assigned to the specified applied quota.
" }, "PutServiceQuotaIncreaseRequestIntoTemplate":{ "name":"PutServiceQuotaIncreaseRequestIntoTemplate", @@ -294,7 +311,7 @@ {"shape":"TemplatesNotAvailableInRegionException"}, {"shape":"NoAvailableOrganizationException"} ], - "documentation":"Defines and adds a quota to the service quota template. To add a quota to the template, you must provide the ServiceCode
, QuotaCode
, AwsRegion
, and DesiredValue
. Once you add a quota to the template, use ListServiceQuotaIncreaseRequestsInTemplate to see the list of quotas in the template.
Adds a quota increase request to your quota request template.
" }, "RequestServiceQuotaIncrease":{ "name":"RequestServiceQuotaIncrease", @@ -315,7 +332,43 @@ {"shape":"ServiceException"}, {"shape":"TooManyRequestsException"} ], - "documentation":"Retrieves the details of a service quota increase request. The response to this command provides the details in the RequestedServiceQuotaChange object.
" + "documentation":"Submits a quota increase request for the specified quota.
" + }, + "TagResource":{ + "name":"TagResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"TagResourceRequest"}, + "output":{"shape":"TagResourceResponse"}, + "errors":[ + {"shape":"TooManyRequestsException"}, + {"shape":"NoSuchResourceException"}, + {"shape":"TooManyTagsException"}, + {"shape":"TagPolicyViolationException"}, + {"shape":"IllegalArgumentException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ServiceException"} + ], + "documentation":"Adds tags to the specified applied quota. You can include one or more tags to add to the quota.
" + }, + "UntagResource":{ + "name":"UntagResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UntagResourceRequest"}, + "output":{"shape":"UntagResourceResponse"}, + "errors":[ + {"shape":"TooManyRequestsException"}, + {"shape":"NoSuchResourceException"}, + {"shape":"IllegalArgumentException"}, + {"shape":"AccessDeniedException"}, + {"shape":"ServiceException"} + ], + "documentation":"Removes tags from the specified applied quota. You can specify one or more tags to remove.
" } }, "shapes":{ @@ -324,7 +377,7 @@ "members":{ "Message":{"shape":"ExceptionMessage"} }, - "documentation":"The action you attempted is not allowed unless Service Access with Service Quotas is enabled in your organization. To enable, call AssociateServiceQuotaTemplate.
", + "documentation":"The action you attempted is not allowed unless Service Access with Service Quotas is enabled in your organization.
", "exception":true }, "AccessDeniedException":{ @@ -332,9 +385,15 @@ "members":{ "Message":{"shape":"ExceptionMessage"} }, - "documentation":"You do not have sufficient access to perform this action.
", + "documentation":"You do not have sufficient permission to perform this action.
", "exception":true }, + "AmazonResourceName":{ + "type":"string", + "max":1011, + "min":1, + "pattern":"arn:aws(-[\\w]+)*:*:.+:[0-9]{12}:.+" + }, "AssociateServiceQuotaTemplateRequest":{ "type":"structure", "members":{ @@ -363,15 +422,15 @@ "members":{ "ServiceCode":{ "shape":"ServiceCode", - "documentation":"Specifies the code for the service that you want to delete.
" + "documentation":"The service identifier.
" }, "QuotaCode":{ "shape":"QuotaCode", - "documentation":"Specifies the code for the quota that you want to delete.
" + "documentation":"The quota identifier.
" }, "AwsRegion":{ "shape":"AwsRegion", - "documentation":"Specifies the AWS Region for the quota that you want to delete.
" + "documentation":"The AWS Region.
" } } }, @@ -413,14 +472,14 @@ "members":{ "ErrorCode":{ "shape":"ErrorCode", - "documentation":"Service Quotas returns the following error values.
DEPENDENCY_ACCESS_DENIED_ERROR
is returned when the caller does not have permission to call the service or service quota. To resolve the error, you need permission to access the service or service quota.
DEPENDENCY_THROTTLING_ERROR
is returned when the service being called is throttling Service Quotas.
DEPENDENCY_SERVICE_ERROR
is returned when the service being called has availability issues.
SERVICE_QUOTA_NOT_AVAILABLE_ERROR
is returned when there was an error in Service Quotas.
Service Quotas returns the following error values:
DEPENDENCY_ACCESS_DENIED_ERROR
- The caller does not have the required permissions to complete the action. To resolve the error, you must have permission to access the service or quota.
DEPENDENCY_THROTTLING_ERROR
- The service is throttling Service Quotas.
DEPENDENCY_SERVICE_ERROR
- The service is not available.
SERVICE_QUOTA_NOT_AVAILABLE_ERROR
- There was an error in Service Quotas.
The error message that provides more detail.
" + "documentation":"The error message.
" } }, - "documentation":"Returns an error that explains why the action did not succeed.
" + "documentation":"An error that explains why an action did not succeed.
" }, "ExceptionMessage":{"type":"string"}, "GetAWSDefaultServiceQuotaRequest":{ @@ -432,11 +491,11 @@ "members":{ "ServiceCode":{ "shape":"ServiceCode", - "documentation":"Specifies the service that you want to use.
" + "documentation":"The service identifier.
" }, "QuotaCode":{ "shape":"QuotaCode", - "documentation":"Identifies the service quota you want to select.
" + "documentation":"The quota identifier.
" } } }, @@ -445,7 +504,7 @@ "members":{ "Quota":{ "shape":"ServiceQuota", - "documentation":"Returns the ServiceQuota object which contains all values for a quota.
" + "documentation":"Information about the quota.
" } } }, @@ -459,7 +518,7 @@ "members":{ "ServiceQuotaTemplateAssociationStatus":{ "shape":"ServiceQuotaTemplateAssociationStatus", - "documentation":"Specifies whether the template is ASSOCIATED
or DISASSOCIATED
. If the template is ASSOCIATED
, then it requests service quota increases for all new accounts created in your organization.
The association status. If the status is ASSOCIATED
, the quota increase requests in the template are automatically applied to new accounts in your organization.
Identifies the quota increase request.
" + "documentation":"The ID of the quota increase request.
" } } }, @@ -478,7 +537,7 @@ "members":{ "RequestedQuota":{ "shape":"RequestedServiceQuotaChange", - "documentation":"Returns the RequestedServiceQuotaChange
object for the specific increase request.
Information about the quota increase request.
" } } }, @@ -492,15 +551,15 @@ "members":{ "ServiceCode":{ "shape":"ServiceCode", - "documentation":"Specifies the service that you want to use.
" + "documentation":"The service identifier.
" }, "QuotaCode":{ "shape":"QuotaCode", - "documentation":"Specifies the quota you want.
" + "documentation":"The quota identifier.
" }, "AwsRegion":{ "shape":"AwsRegion", - "documentation":"Specifies the AWS Region for the quota that you want to use.
" + "documentation":"The AWS Region.
" } } }, @@ -509,7 +568,7 @@ "members":{ "ServiceQuotaIncreaseRequestInTemplate":{ "shape":"ServiceQuotaIncreaseRequestInTemplate", - "documentation":"This object contains the details about the quota increase request.
" + "documentation":"Information about the quota increase request.
" } } }, @@ -522,11 +581,11 @@ "members":{ "ServiceCode":{ "shape":"ServiceCode", - "documentation":"Specifies the service that you want to use.
" + "documentation":"The service identifier.
" }, "QuotaCode":{ "shape":"QuotaCode", - "documentation":"Identifies the service quota you want to select.
" + "documentation":"The quota identifier.
" } } }, @@ -535,7 +594,7 @@ "members":{ "Quota":{ "shape":"ServiceQuota", - "documentation":"Returns the ServiceQuota object which contains all values for a quota.
" + "documentation":"Information about the quota.
" } } }, @@ -545,9 +604,18 @@ "members":{ "Message":{"shape":"ExceptionMessage"} }, - "documentation":"Invalid input was provided.
", + "documentation":"Invalid input was provided.
", "exception":true }, + "InputTagKeys":{ + "type":"list", + "member":{"shape":"TagKey"} + }, + "InputTags":{ + "type":"list", + "member":{"shape":"Tag"}, + "min":1 + }, "InvalidPaginationTokenException":{ "type":"structure", "members":{ @@ -561,7 +629,7 @@ "members":{ "Message":{"shape":"ExceptionMessage"} }, - "documentation":"Invalid input was provided for the .
", + "documentation":"The resource is in an invalid state.
", "exception":true }, "ListAWSDefaultServiceQuotasRequest":{ @@ -570,15 +638,15 @@ "members":{ "ServiceCode":{ "shape":"ServiceCode", - "documentation":"Specifies the service that you want to use.
" + "documentation":"The service identifier.
" }, "NextToken":{ "shape":"NextToken", - "documentation":"(Optional) Use this parameter in a request if you receive a NextToken
response in a previous request that indicates that there's more output available. In a subsequent call, set it to the value of the previous call's NextToken
response to indicate where the output should continue from. If additional items exist beyond the specified maximum, the NextToken
element is present and has a value (isn't null). Include that value as the NextToken
request parameter in the call to the operation to get the next part of the results. You should check NextToken
after every operation to ensure that you receive all of the results.
The token for the next page of results.
" }, "MaxResults":{ "shape":"MaxResults", - "documentation":"(Optional) Limits the number of results that you want to include in the response. If you don't include this parameter, the response defaults to a value that's specific to the operation. If additional items exist beyond the specified maximum, the NextToken
element is present and has a value (isn't null). Include that value as the NextToken
request parameter in the call to the operation to get the next part of the results. You should check NextToken
after every operation to ensure that you receive all of the results.
The maximum number of results to return with a single call. To retrieve the remaining results, if any, make another call with the token returned from this call.
" } } }, @@ -587,11 +655,11 @@ "members":{ "NextToken":{ "shape":"NextToken", - "documentation":"(Optional) Use this parameter in a request if you receive a NextToken
response in a previous request that indicates that there's more output available. In a subsequent call, set it to the value of the previous call's NextToken
response to indicate where the output should continue from.
The token to use to retrieve the next page of results. This value is null when there are no more results to return.
" }, "Quotas":{ "shape":"ServiceQuotaListDefinition", - "documentation":"A list of the quotas in the account with the AWS default values.
" + "documentation":"Information about the quotas.
" } } }, @@ -604,23 +672,23 @@ "members":{ "ServiceCode":{ "shape":"ServiceCode", - "documentation":"Specifies the service that you want to use.
" + "documentation":"The service identifier.
" }, "QuotaCode":{ "shape":"QuotaCode", - "documentation":"Specifies the service quota that you want to use
" + "documentation":"The quota identifier.
" }, "Status":{ "shape":"RequestStatus", - "documentation":"Specifies the status value of the quota increase request.
" + "documentation":"The status value of the quota increase request.
" }, "NextToken":{ "shape":"NextToken", - "documentation":"(Optional) Use this parameter in a request if you receive a NextToken
response in a previous request that indicates that there's more output available. In a subsequent call, set it to the value of the previous call's NextToken
response to indicate where the output should continue from.
The token for the next page of results.
" }, "MaxResults":{ "shape":"MaxResults", - "documentation":"(Optional) Limits the number of results that you want to include in the response. If you don't include this parameter, the response defaults to a value that's specific to the operation. If additional items exist beyond the specified maximum, the NextToken
element is present and has a value (isn't null). Include that value as the NextToken
request parameter in the call to the operation to get the next part of the results. You should check NextToken
after every operation to ensure that you receive all of the results.
The maximum number of results to return with a single call. To retrieve the remaining results, if any, make another call with the token returned from this call.
" } } }, @@ -629,11 +697,11 @@ "members":{ "NextToken":{ "shape":"NextToken", - "documentation":"If present in the response, this value indicates there's more output available that what's included in the current response. This can occur even when the response includes no values at all, such as when you ask for a filtered view of a very long list. Use this value in the NextToken
request parameter in a subsequent call to the operation to continue processing and get the next part of the output. You should repeat this until the NextToken
response element comes back empty (as null
).
The token to use to retrieve the next page of results. This value is null when there are no more results to return.
" }, "RequestedQuotas":{ "shape":"RequestedServiceQuotaChangeHistoryListDefinition", - "documentation":"Returns a list of service quota requests.
" + "documentation":"Information about the quota increase requests.
" } } }, @@ -642,19 +710,19 @@ "members":{ "ServiceCode":{ "shape":"ServiceCode", - "documentation":"Specifies the service that you want to use.
" + "documentation":"The service identifier.
" }, "Status":{ "shape":"RequestStatus", - "documentation":"Specifies the status value of the quota increase request.
" + "documentation":"The status of the quota increase request.
" }, "NextToken":{ "shape":"NextToken", - "documentation":"(Optional) Use this parameter in a request if you receive a NextToken
response in a previous request that indicates that there's more output available. In a subsequent call, set it to the value of the previous call's NextToken
response to indicate where the output should continue from.
The token for the next page of results.
" }, "MaxResults":{ "shape":"MaxResults", - "documentation":"(Optional) Limits the number of results that you want to include in the response. If you don't include this parameter, the response defaults to a value that's specific to the operation. If additional items exist beyond the specified maximum, the NextToken
element is present and has a value (isn't null). Include that value as the NextToken
request parameter in the call to the operation to get the next part of the results. You should check NextToken
after every operation to ensure that you receive all of the results.
The maximum number of results to return with a single call. To retrieve the remaining results, if any, make another call with the token returned from this call.
" } } }, @@ -663,11 +731,11 @@ "members":{ "NextToken":{ "shape":"NextToken", - "documentation":"If present in the response, this value indicates there's more output available that what's included in the current response. This can occur even when the response includes no values at all, such as when you ask for a filtered view of a very long list. Use this value in the NextToken
request parameter in a subsequent call to the operation to continue processing and get the next part of the output. You should repeat this until the NextToken
response element comes back empty (as null
).
The token to use to retrieve the next page of results. This value is null when there are no more results to return.
" }, "RequestedQuotas":{ "shape":"RequestedServiceQuotaChangeHistoryListDefinition", - "documentation":"Returns a list of service quota requests.
" + "documentation":"Information about the quota increase requests.
" } } }, @@ -676,19 +744,19 @@ "members":{ "ServiceCode":{ "shape":"ServiceCode", - "documentation":"The identifier for a service. When performing an operation, use the ServiceCode
to specify a particular service.
The service identifier.
" }, "AwsRegion":{ "shape":"AwsRegion", - "documentation":"Specifies the AWS Region for the quota that you want to use.
" + "documentation":"The AWS Region.
" }, "NextToken":{ "shape":"NextToken", - "documentation":"(Optional) Use this parameter in a request if you receive a NextToken
response in a previous request that indicates that there's more output available. In a subsequent call, set it to the value of the previous call's NextToken
response to indicate where the output should continue from.
The token for the next page of results.
" }, "MaxResults":{ "shape":"MaxResults", - "documentation":"(Optional) Limits the number of results that you want to include in the response. If you don't include this parameter, the response defaults to a value that's specific to the operation. If additional items exist beyond the specified maximum, the NextToken
element is present and has a value (isn't null). Include that value as the NextToken
request parameter in the call to the operation to get the next part of the results. You should check NextToken
after every operation to ensure that you receive all of the results.
The maximum number of results to return with a single call. To retrieve the remaining results, if any, make another call with the token returned from this call.
" } } }, @@ -697,11 +765,11 @@ "members":{ "ServiceQuotaIncreaseRequestInTemplateList":{ "shape":"ServiceQuotaIncreaseRequestInTemplateList", - "documentation":"Returns the list of values of the quota increase request in the template.
" + "documentation":"Information about the quota increase requests.
" }, "NextToken":{ "shape":"NextToken", - "documentation":"If present in the response, this value indicates there's more output available that what's included in the current response. This can occur even when the response includes no values at all, such as when you ask for a filtered view of a very long list. Use this value in the NextToken
request parameter in a subsequent call to the operation to continue processing and get the next part of the output. You should repeat this until the NextToken
response element comes back empty (as null
).
The token to use to retrieve the next page of results. This value is null when there are no more results to return.
" } } }, @@ -711,15 +779,15 @@ "members":{ "ServiceCode":{ "shape":"ServiceCode", - "documentation":"The identifier for a service. When performing an operation, use the ServiceCode
to specify a particular service.
The service identifier.
" }, "NextToken":{ "shape":"NextToken", - "documentation":"(Optional) Use this parameter in a request if you receive a NextToken
response in a previous request that indicates that there's more output available. In a subsequent call, set it to the value of the previous call's NextToken
response to indicate where the output should continue from.
The token for the next page of results.
" }, "MaxResults":{ "shape":"MaxResults", - "documentation":"(Optional) Limits the number of results that you want to include in the response. If you don't include this parameter, the response defaults to a value that's specific to the operation. If additional items exist beyond the specified maximum, the NextToken
element is present and has a value (isn't null). Include that value as the NextToken
request parameter in the call to the operation to get the next part of the results. You should check NextToken
after every operation to ensure that you receive all of the results.
The maximum number of results to return with a single call. To retrieve the remaining results, if any, make another call with the token returned from this call.
" } } }, @@ -728,11 +796,11 @@ "members":{ "NextToken":{ "shape":"NextToken", - "documentation":"If present in the response, this value indicates there's more output available that what's included in the current response. This can occur even when the response includes no values at all, such as when you ask for a filtered view of a very long list. Use this value in the NextToken
request parameter in a subsequent call to the operation to continue processing and get the next part of the output. You should repeat this until the NextToken
response element comes back empty (as null
).
The token to use to retrieve the next page of results. This value is null when there are no more results to return.
" }, "Quotas":{ "shape":"ServiceQuotaListDefinition", - "documentation":"The response information for a quota lists all attribute information for the quota.
" + "documentation":"Information about the quotas.
" } } }, @@ -741,11 +809,11 @@ "members":{ "NextToken":{ "shape":"NextToken", - "documentation":"(Optional) Use this parameter in a request if you receive a NextToken
response in a previous request that indicates that there's more output available. In a subsequent call, set it to the value of the previous call's NextToken
response to indicate where the output should continue from.
The token for the next page of results.
" }, "MaxResults":{ "shape":"MaxResults", - "documentation":"(Optional) Limits the number of results that you want to include in the response. If you don't include this parameter, the response defaults to a value that's specific to the operation. If additional items exist beyond the specified maximum, the NextToken
element is present and has a value (isn't null). Include that value as the NextToken
request parameter in the call to the operation to get the next part of the results. You should check NextToken
after every operation to ensure that you receive all of the results.
The maximum number of results to return with a single call. To retrieve the remaining results, if any, make another call with the token returned from this call.
" } } }, @@ -754,11 +822,30 @@ "members":{ "NextToken":{ "shape":"NextToken", - "documentation":"If present in the response, this value indicates there's more output available that what's included in the current response. This can occur even when the response includes no values at all, such as when you ask for a filtered view of a very long list. Use this value in the NextToken
request parameter in a subsequent call to the operation to continue processing and get the next part of the output. You should repeat this until the NextToken
response element comes back empty (as null
).
The token to use to retrieve the next page of results. This value is null when there are no more results to return.
" }, "Services":{ "shape":"ServiceInfoListDefinition", - "documentation":"Returns a list of services.
" + "documentation":"Information about the services.
" + } + } + }, + "ListTagsForResourceRequest":{ + "type":"structure", + "required":["ResourceARN"], + "members":{ + "ResourceARN":{ + "shape":"AmazonResourceName", + "documentation":"The Amazon Resource Name (ARN) for the applied quota for which you want to list tags. You can get this information by using the Service Quotas console, or by listing the quotas using the list-service-quotas AWS CLI command or the ListServiceQuotas AWS API operation.
" + } + } + }, + "ListTagsForResourceResponse":{ + "type":"structure", + "members":{ + "Tags":{ + "shape":"OutputTags", + "documentation":"A complex data type that contains zero or more tag elements.
" } } }, @@ -780,22 +867,22 @@ "members":{ "MetricNamespace":{ "shape":"QuotaMetricNamespace", - "documentation":"The namespace of the metric. The namespace is a container for CloudWatch metrics. You can specify a name for the namespace when you create a metric.
" + "documentation":"The namespace of the metric.
" }, "MetricName":{ "shape":"QuotaMetricName", - "documentation":"The name of the CloudWatch metric that measures usage of a service quota. This is a required field.
" + "documentation":"The name of the metric.
" }, "MetricDimensions":{ "shape":"MetricDimensionsMapDefinition", - "documentation":"A dimension is a name/value pair that is part of the identity of a metric. Every metric has specific characteristics that describe it, and you can think of dimensions as categories for those characteristics. These dimensions are part of the CloudWatch Metric Identity that measures usage against a particular service quota.
" + "documentation":"The metric dimension. This is a name/value pair that is part of the identity of a metric.
" }, "MetricStatisticRecommendation":{ "shape":"Statistic", - "documentation":"Statistics are metric data aggregations over specified periods of time. This is the recommended statistic to use when comparing usage in the CloudWatch Metric against your Service Quota.
" + "documentation":"The metric statistic that we recommend you use when determining quota usage.
" } }, - "documentation":"A structure that uses CloudWatch metrics to gather data about the service quota.
" + "documentation":"Information about the CloudWatch metric that reflects quota usage.
" }, "NextToken":{ "type":"string", @@ -823,9 +910,14 @@ "members":{ "Message":{"shape":"ExceptionMessage"} }, - "documentation":"The organization that your account belongs to, is not in All Features mode. To enable all features mode, see EnableAllFeatures.
", + "documentation":"The organization that your account belongs to is not in All Features mode.
", "exception":true }, + "OutputTags":{ + "type":"list", + "member":{"shape":"Tag"}, + "max":200 + }, "PeriodUnit":{ "type":"string", "enum":[ @@ -850,19 +942,19 @@ "members":{ "QuotaCode":{ "shape":"QuotaCode", - "documentation":"Specifies the service quota that you want to use.
" + "documentation":"The quota identifier.
" }, "ServiceCode":{ "shape":"ServiceCode", - "documentation":"Specifies the service that you want to use.
" + "documentation":"The service identifier.
" }, "AwsRegion":{ "shape":"AwsRegion", - "documentation":"Specifies the AWS Region for the quota.
" + "documentation":"The AWS Region.
" }, "DesiredValue":{ "shape":"QuotaValue", - "documentation":"Specifies the new, increased value for the quota.
" + "documentation":"The new, increased value for the quota.
" } } }, @@ -871,7 +963,7 @@ "members":{ "ServiceQuotaIncreaseRequestInTemplate":{ "shape":"ServiceQuotaIncreaseRequestInTemplate", - "documentation":"A structure that contains information about one service quota increase request.
" + "documentation":"Information about the quota increase request.
" } } }, @@ -899,14 +991,14 @@ "members":{ "PeriodValue":{ "shape":"PeriodValue", - "documentation":"The value of a period.
" + "documentation":"The value.
" }, "PeriodUnit":{ "shape":"PeriodUnit", - "documentation":"The time unit of a period.
" + "documentation":"The time unit.
" } }, - "documentation":"A structure that contains information about the quota period.
" + "documentation":"Information about the quota period.
" }, "QuotaUnit":{"type":"string"}, "QuotaValue":{ @@ -930,15 +1022,15 @@ "members":{ "ServiceCode":{ "shape":"ServiceCode", - "documentation":"Specifies the service that you want to use.
" + "documentation":"The service identifier.
" }, "QuotaCode":{ "shape":"QuotaCode", - "documentation":"Specifies the service quota that you want to use.
" + "documentation":"The quota identifier.
" }, "DesiredValue":{ "shape":"QuotaValue", - "documentation":"Specifies the value submitted in the service quota increase request.
" + "documentation":"The new, increased value for the quota.
" } } }, @@ -947,7 +1039,7 @@ "members":{ "RequestedQuota":{ "shape":"RequestedServiceQuotaChange", - "documentation":"Returns a list of service quota requests.
" + "documentation":"Information about the quota increase request.
" } } }, @@ -966,62 +1058,62 @@ "members":{ "Id":{ "shape":"RequestId", - "documentation":"The unique identifier of a requested service quota change.
" + "documentation":"The unique identifier.
" }, "CaseId":{ "shape":"CustomerServiceEngagementId", - "documentation":"The case Id for the service quota increase request.
" + "documentation":"The case ID.
" }, "ServiceCode":{ "shape":"ServiceCode", - "documentation":"Specifies the service that you want to use.
" + "documentation":"The service identifier.
" }, "ServiceName":{ "shape":"ServiceName", - "documentation":"The name of the AWS service specified in the increase request.
" + "documentation":"The service name.
" }, "QuotaCode":{ "shape":"QuotaCode", - "documentation":"Specifies the service quota that you want to use.
" + "documentation":"The quota identifier.
" }, "QuotaName":{ "shape":"QuotaName", - "documentation":"Name of the service quota.
" + "documentation":"The quota name.
" }, "DesiredValue":{ "shape":"QuotaValue", - "documentation":"New increased value for the service quota.
" + "documentation":"The new, increased value for the quota.
" }, "Status":{ "shape":"RequestStatus", - "documentation":"State of the service quota increase request.
" + "documentation":"The state of the quota increase request.
" }, "Created":{ "shape":"DateTime", - "documentation":"The date and time when the service quota increase request was received and the case Id was created.
" + "documentation":"The date and time when the quota increase request was received and the case ID was created.
" }, "LastUpdated":{ "shape":"DateTime", - "documentation":"The date and time of the most recent change in the service quota increase request.
" + "documentation":"The date and time of the most recent change.
" }, "Requester":{ "shape":"Requester", - "documentation":"The IAM identity who submitted the service quota increase request.
" + "documentation":"The IAM identity of the requester.
" }, "QuotaArn":{ "shape":"QuotaArn", - "documentation":"The Amazon Resource Name (ARN) of the service quota.
" + "documentation":"The Amazon Resource Name (ARN) of the quota.
" }, "GlobalQuota":{ "shape":"GlobalQuota", - "documentation":"Identifies if the quota is global.
" + "documentation":"Indicates whether the quota is global.
" }, "Unit":{ "shape":"QuotaUnit", - "documentation":"Specifies the unit used for the quota.
" + "documentation":"The unit of measurement.
" } }, - "documentation":"A structure that contains information about a requested change for a quota.
" + "documentation":"Information about a quota increase request.
" }, "RequestedServiceQuotaChangeHistoryListDefinition":{ "type":"list", @@ -1047,7 +1139,7 @@ "members":{ "Message":{"shape":"ExceptionMessage"} }, - "documentation":"Something went wrong.
", + "documentation":"Something went wrong.
", "exception":true, "fault":true }, @@ -1056,14 +1148,14 @@ "members":{ "ServiceCode":{ "shape":"ServiceCode", - "documentation":"Specifies the service that you want to use.
" + "documentation":"The service identifier.
" }, "ServiceName":{ "shape":"ServiceName", - "documentation":"The name of the AWS service specified in the increase request.
" + "documentation":"The service name.
" } }, - "documentation":"A structure that contains the ServiceName
and ServiceCode
. It does not include all details of the service quota. To get those values, use the ListServiceQuotas operation.
Information about a service.
" }, "ServiceInfoListDefinition":{ "type":"list", @@ -1075,92 +1167,92 @@ "members":{ "ServiceCode":{ "shape":"ServiceCode", - "documentation":"Specifies the service that you want to use.
" + "documentation":"The service identifier.
" }, "ServiceName":{ "shape":"ServiceName", - "documentation":"The name of the AWS service specified in the increase request.
" + "documentation":"The service name.
" }, "QuotaArn":{ "shape":"QuotaArn", - "documentation":"The Amazon Resource Name (ARN) of the service quota.
" + "documentation":"The Amazon Resource Name (ARN) of the quota.
" }, "QuotaCode":{ "shape":"QuotaCode", - "documentation":"The code identifier for the service quota specified.
" + "documentation":"The quota identifier.
" }, "QuotaName":{ "shape":"QuotaName", - "documentation":"The name identifier of the service quota.
" + "documentation":"The quota name.
" }, "Value":{ "shape":"QuotaValue", - "documentation":"The value of service quota.
" + "documentation":"The quota value.
" }, "Unit":{ "shape":"QuotaUnit", - "documentation":"The unit of measurement for the value of the service quota.
" + "documentation":"The unit of measurement.
" }, "Adjustable":{ "shape":"QuotaAdjustable", - "documentation":"Specifies if the quota value can be increased.
" + "documentation":"Indicates whether the quota value can be increased.
" }, "GlobalQuota":{ "shape":"GlobalQuota", - "documentation":"Specifies if the quota is global.
" + "documentation":"Indicates whether the quota is global.
" }, "UsageMetric":{ "shape":"MetricInfo", - "documentation":"Specifies the details about the measurement.
" + "documentation":"Information about the measurement.
" }, "Period":{ "shape":"QuotaPeriod", - "documentation":"Identifies the unit and value of how time is measured.
" + "documentation":"The period of time.
" }, "ErrorReason":{ "shape":"ErrorReason", - "documentation":"Specifies the ErrorCode
and ErrorMessage
when success isn't achieved.
The error code and error reason.
" } }, - "documentation":"A structure that contains the full set of details that define the service quota.
" + "documentation":"Information about a quota.
" }, "ServiceQuotaIncreaseRequestInTemplate":{ "type":"structure", "members":{ "ServiceCode":{ "shape":"ServiceCode", - "documentation":"The code identifier for the AWS service specified in the increase request.
" + "documentation":"The service identifier.
" }, "ServiceName":{ "shape":"ServiceName", - "documentation":"The name of the AWS service specified in the increase request.
" + "documentation":"The service name.
" }, "QuotaCode":{ "shape":"QuotaCode", - "documentation":"The code identifier for the service quota specified in the increase request.
" + "documentation":"The quota identifier.
" }, "QuotaName":{ "shape":"QuotaName", - "documentation":"The name of the service quota in the increase request.
" + "documentation":"The quota name.
" }, "DesiredValue":{ "shape":"QuotaValue", - "documentation":"Identifies the new, increased value of the service quota in the increase request.
" + "documentation":"The new, increased value of the quota.
" }, "AwsRegion":{ "shape":"AwsRegion", - "documentation":"The AWS Region where the increase request occurs.
" + "documentation":"The AWS Region.
" }, "Unit":{ "shape":"QuotaUnit", - "documentation":"The unit of measure for the increase request.
" + "documentation":"The unit of measurement.
" }, "GlobalQuota":{ "shape":"GlobalQuota", - "documentation":"Specifies if the quota is a global quota.
" + "documentation":"Indicates whether the quota is global.
" } }, - "documentation":"A structure that contains information about one service quota increase request.
" + "documentation":"Information about a quota increase request.
" }, "ServiceQuotaIncreaseRequestInTemplateList":{ "type":"list", @@ -1182,7 +1274,7 @@ "members":{ "Message":{"shape":"ExceptionMessage"} }, - "documentation":"The quota request template is not associated with your organization.
To use the template, call AssociateServiceQuotaTemplate.
", + "documentation":"The quota request template is not associated with your organization.
", "exception":true }, "Statistic":{ @@ -1191,12 +1283,72 @@ "min":1, "pattern":"(Sum|Maximum)" }, + "Tag":{ + "type":"structure", + "required":[ + "Key", + "Value" + ], + "members":{ + "Key":{ + "shape":"TagKey", + "documentation":"A string that contains a tag key. The string length should be between 1 and 128 characters. Valid characters include a-z, A-Z, 0-9, space, and the special characters _ - . : / = + @.
" + }, + "Value":{ + "shape":"TagValue", + "documentation":"A string that contains an optional tag value. The string length should be between 0 and 256 characters. Valid characters include a-z, A-Z, 0-9, space, and the special characters _ - . : / = + @.
" + } + }, + "documentation":"A complex data type that contains a tag key and tag value.
" + }, + "TagKey":{ + "type":"string", + "max":128, + "min":1, + "pattern":"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$" + }, + "TagPolicyViolationException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ExceptionMessage"} + }, + "documentation":"The specified tag is a reserved word and cannot be used.
", + "exception":true + }, + "TagResourceRequest":{ + "type":"structure", + "required":[ + "ResourceARN", + "Tags" + ], + "members":{ + "ResourceARN":{ + "shape":"AmazonResourceName", + "documentation":"The Amazon Resource Name (ARN) for the applied quota. You can get this information by using the Service Quotas console, or by listing the quotas using the list-service-quotas AWS CLI command or the ListServiceQuotas AWS API operation.
" + }, + "Tags":{ + "shape":"InputTags", + "documentation":"The tags that you want to add to the resource.
" + } + } + }, + "TagResourceResponse":{ + "type":"structure", + "members":{ + } + }, + "TagValue":{ + "type":"string", + "max":256, + "min":0, + "pattern":"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$" + }, "TemplatesNotAvailableInRegionException":{ "type":"structure", "members":{ "Message":{"shape":"ExceptionMessage"} }, - "documentation":"The Service Quotas template is not available in the Region where you are making the request. Please make the request in us-east-1.
", + "documentation":"The Service Quotas template is not available in this AWS Region.
", "exception":true }, "TooManyRequestsException":{ @@ -1204,9 +1356,39 @@ "members":{ "Message":{"shape":"ExceptionMessage"} }, - "documentation":"Due to throttling, the request was denied. Slow down the rate of request calls, or request an increase for this quota.
", + "documentation":"Due to throttling, the request was denied. Slow down the rate of request calls, or request an increase for this quota.
", + "exception":true + }, + "TooManyTagsException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ExceptionMessage"} + }, + "documentation":"You've exceeded the number of tags allowed for a resource. For more information, see Tag restrictions in the Service Quotas User Guide.
", "exception":true + }, + "UntagResourceRequest":{ + "type":"structure", + "required":[ + "ResourceARN", + "TagKeys" + ], + "members":{ + "ResourceARN":{ + "shape":"AmazonResourceName", + "documentation":"The Amazon Resource Name (ARN) for the applied quota that you want to untag. You can get this information by using the Service Quotas console, or by listing the quotas using the list-service-quotas AWS CLI command or the ListServiceQuotas AWS API operation.
" + }, + "TagKeys":{ + "shape":"InputTagKeys", + "documentation":"The keys of the tags that you want to remove from the resource.
" + } + } + }, + "UntagResourceResponse":{ + "type":"structure", + "members":{ + } } }, - "documentation":"Service Quotas is a web service that you can use to manage many of your AWS service quotas. Quotas, also referred to as limits, are the maximum values for a resource, item, or operation. This guide provide descriptions of the Service Quotas actions that you can call from an API. For the Service Quotas user guide, which explains how to use Service Quotas from the console, see What is Service Quotas.
AWS provides SDKs that consist of libraries and sample code for programming languages and platforms (Java, Ruby, .NET, iOS, Android, etc...,). The SDKs provide a convenient way to create programmatic access to Service Quotas and AWS. For information about the AWS SDKs, including how to download and install them, see the Tools for Amazon Web Services page.
With Service Quotas, you can view and manage your quotas easily as your AWS workloads grow. Quotas, also referred to as limits, are the maximum number of resources that you can create in your AWS account. For more information, see the Service Quotas User Guide.
" } diff --git a/services/ses/pom.xml b/services/ses/pom.xml index 3b27d331e630..fe2d8d4fc2a4 100644 --- a/services/ses/pom.xml +++ b/services/ses/pom.xml @@ -21,7 +21,7 @@