The maximum number of messages that the journey can send each second.
" + }, + "EndpointReentryInterval": { + "shape": "__string", + "documentation": "Minimum time that must pass before an endpoint can re-enter a given journey. The duration should use an ISO 8601 format, such as PT1H.
" } }, "documentation": "Specifies limits on the messages that a journey can send and the number of times participants can enter a journey.
" @@ -11044,6 +11048,14 @@ "shape": "MapOf__string", "locationName": "tags", "documentation": "This object is not used or supported.
" + }, + "WaitForQuietTime": { + "shape": "__boolean", + "documentation": "Specifies whether endpoints in quiet hours should enter a wait till the end of their quiet hours.
" + }, + "RefreshOnSegmentUpdate": { + "shape": "__boolean", + "documentation": "Specifies whether a journey should be refreshed on segment update.
" } }, "documentation": "Provides information about the status, configuration, and other settings for a journey.
", @@ -11102,7 +11114,7 @@ "members": { "State": { "shape": "State", - "documentation": "The status of the journey. Currently, the only supported value is CANCELLED.
If you cancel a journey, Amazon Pinpoint continues to perform activities that are currently in progress, until those activities are complete. Amazon Pinpoint also continues to collect and aggregate analytics data for those activities, until they are complete, and any activities that were complete when you cancelled the journey.
After you cancel a journey, you can't add, change, or remove any activities from the journey. In addition, Amazon Pinpoint stops evaluating the journey and doesn't perform any activities that haven't started.
" + "documentation": "The status of the journey. Currently, Supported values are ACTIVE, PAUSED, and CANCELLED
If you cancel a journey, Amazon Pinpoint continues to perform activities that are currently in progress, until those activities are complete. Amazon Pinpoint also continues to collect and aggregate analytics data for those activities, until they are complete, and any activities that were complete when you cancelled the journey.
After you cancel a journey, you can't add, change, or remove any activities from the journey. In addition, Amazon Pinpoint stops evaluating the journey and doesn't perform any activities that haven't started.
When the journey is paused, Amazon Pinpoint continues to perform activities that are currently in progress, until those activities are complete. Endpoints stop entering the journey while it is paused and resume entering it after the journey is resumed. For wait activities, the wait time is paused along with the journey. Currently, PAUSED is supported only for journeys that have a segment refresh interval.
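As a usage sketch, pausing a journey through the generated AWS SDK for Java v2 client might look like the following; the operation and builder names (updateJourneyState, JourneyStateRequest) are assumed from this model, and the application and journey IDs are hypothetical placeholders:

```java
// Hedged sketch of pausing a journey via the Journey State resource.
import software.amazon.awssdk.services.pinpoint.PinpointClient;
import software.amazon.awssdk.services.pinpoint.model.JourneyStateRequest;
import software.amazon.awssdk.services.pinpoint.model.State;
import software.amazon.awssdk.services.pinpoint.model.UpdateJourneyStateRequest;

public class PauseJourney {
    public static void main(String[] args) {
        try (PinpointClient pinpoint = PinpointClient.create()) {
            pinpoint.updateJourneyState(UpdateJourneyStateRequest.builder()
                    .applicationId("example-app-id")    // hypothetical application ID
                    .journeyId("example-journey-id")    // hypothetical journey ID
                    .journeyStateRequest(JourneyStateRequest.builder()
                            .state(State.PAUSED)        // the PAUSED state added in this change
                            .build())
                    .build());
        }
    }
}
```

To resume, the same call can be made with State.ACTIVE.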
" } }, "documentation": "Changes the status of a journey.
" @@ -12956,7 +12968,8 @@ "ACTIVE", "COMPLETED", "CANCELLED", - "CLOSED" + "CLOSED", + "PAUSED" ] }, "TagResourceRequest": { @@ -14498,7 +14511,15 @@ }, "State": { "shape": "State", - "documentation": "The status of the journey. Valid values are:
DRAFT - Saves the journey and doesn't publish it.
ACTIVE - Saves and publishes the journey. Depending on the journey's schedule, the journey starts running immediately or at the scheduled start time. If a journey's status is ACTIVE, you can't add, change, or remove activities from it.
The CANCELLED, COMPLETED, and CLOSED values are not supported in requests to create or update a journey. To cancel a journey, use the Journey State resource.
" + "documentation": "The status of the journey. Valid values are:
DRAFT - Saves the journey and doesn't publish it.
ACTIVE - Saves and publishes the journey. Depending on the journey's schedule, the journey starts running immediately or at the scheduled start time. If a journey's status is ACTIVE, you can't add, change, or remove activities from it.
The PAUSED, CANCELLED, COMPLETED, and CLOSED states are not supported in requests to create or update a journey. To cancel, pause, or resume a journey, use the Journey State resource.
" + }, + "WaitForQuietTime": { + "shape": "__boolean", + "documentation": "Specifies whether endpoints in quiet hours should enter a wait till the end of their quiet hours.
" + }, + "RefreshOnSegmentUpdate": { + "shape": "__boolean", + "documentation": "Specifies whether a journey should be refreshed on segment update.
" } }, "documentation": "Specifies the configuration and other settings for a journey.
", From bfe2e55b7d36462791df2b9f187180373c460c7f Mon Sep 17 00:00:00 2001 From: AWS <> Date: Tue, 30 Mar 2021 18:08:32 +0000 Subject: [PATCH 04/12] AWS Glue DataBrew Update: This SDK release adds two new dataset features: 1) support for specifying a database connection as a dataset input 2) support for dynamic datasets that accept configurable parameters in S3 path. --- .../feature-AWSGlueDataBrew-7bf6d78.json | 6 + .../codegen-resources/service-2.json | 299 +++++++++++++++--- 2 files changed, 268 insertions(+), 37 deletions(-) create mode 100644 .changes/next-release/feature-AWSGlueDataBrew-7bf6d78.json diff --git a/.changes/next-release/feature-AWSGlueDataBrew-7bf6d78.json b/.changes/next-release/feature-AWSGlueDataBrew-7bf6d78.json new file mode 100644 index 000000000000..71f3e4fb8dfc --- /dev/null +++ b/.changes/next-release/feature-AWSGlueDataBrew-7bf6d78.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS Glue DataBrew", + "contributor": "", + "description": "This SDK release adds two new dataset features: 1) support for specifying a database connection as a dataset input 2) support for dynamic datasets that accept configurable parameters in S3 path." +} diff --git a/services/databrew/src/main/resources/codegen-resources/service-2.json b/services/databrew/src/main/resources/codegen-resources/service-2.json index 29240a5fcb10..fc98f0ad5873 100644 --- a/services/databrew/src/main/resources/codegen-resources/service-2.json +++ b/services/databrew/src/main/resources/codegen-resources/service-2.json @@ -709,7 +709,7 @@ "documentation":"A column to apply this condition to.
" } }, - "documentation":"Represents an individual condition that evaluates to true or false.
Conditions are used with recipe actions: The action is only performed for column values where the condition evaluates to true.
If a recipe requires more than one condition, then the recipe must specify multiple ConditionExpression
elements. Each condition is applied to the rows in a dataset first, before the recipe action is performed.
Represents an individual condition that evaluates to true or false.
Conditions are used with recipe actions. The action is only performed for column values where the condition evaluates to true.
If a recipe requires more than one condition, then the recipe must specify multiple ConditionExpression
elements. Each condition is applied to the rows in a dataset first, before the recipe action is performed.
Specifies the file format of a dataset created from an S3 file or folder.
" + "documentation":"The file format of a dataset that is created from an S3 file or folder.
" }, "FormatOptions":{"shape":"FormatOptions"}, "Input":{"shape":"Input"}, + "PathOptions":{ + "shape":"PathOptions", + "documentation":"A set of options that defines how DataBrew interprets an S3 path of the dataset.
" + }, "Tags":{ "shape":"TagMap", "documentation":"Metadata tags to apply to this dataset.
" @@ -1019,24 +1024,24 @@ "members":{ "Delimiter":{ "shape":"Delimiter", - "documentation":"A single character that specifies the delimiter being used in the Csv file.
" + "documentation":"A single character that specifies the delimiter being used in the CSV file.
" }, "HeaderRow":{ "shape":"HeaderRow", - "documentation":"A variable that specifies whether the first row in the file will be parsed as the header. If false, column names will be auto-generated.
" + "documentation":"A variable that specifies whether the first row in the file is parsed as the header. If this value is false, column names are auto-generated.
" } }, - "documentation":"Options that define how DataBrew will read a Csv file when creating a dataset from that file.
" + "documentation":"Represents a set of options that define how DataBrew will read a comma-separated value (CSV) file when creating a dataset from that file.
" }, "CsvOutputOptions":{ "type":"structure", "members":{ "Delimiter":{ "shape":"Delimiter", - "documentation":"A single character that specifies the delimiter used to create Csv job output.
" + "documentation":"A single character that specifies the delimiter used to create CSV job output.
" } }, - "documentation":"Options that define how DataBrew will write a Csv file.
" + "documentation":"Represents a set of options that define how DataBrew will write a comma-separated value (CSV) file.
" }, "DataCatalogInputDefinition":{ "type":"structure", @@ -1064,11 +1069,35 @@ }, "documentation":"Represents how metadata stored in the AWS Glue Data Catalog is defined in a DataBrew dataset.
" }, + "DatabaseInputDefinition":{ + "type":"structure", + "required":[ + "GlueConnectionName", + "DatabaseTableName" + ], + "members":{ + "GlueConnectionName":{ + "shape":"GlueConnectionName", + "documentation":"The AWS Glue Connection that stores the connection information for the target database.
" + }, + "DatabaseTableName":{ + "shape":"DatabaseTableName", + "documentation":"The table within the target database.
" + }, + "TempDirectory":{"shape":"S3Location"} + }, + "documentation":"Connection information for dataset input files stored in a database.
" + }, "DatabaseName":{ "type":"string", "max":255, "min":1 }, + "DatabaseTableName":{ + "type":"string", + "max":255, + "min":1 + }, "Dataset":{ "type":"structure", "required":[ @@ -1094,11 +1123,11 @@ }, "Format":{ "shape":"InputFormat", - "documentation":"Specifies the file format of a dataset created from an S3 file or folder.
" + "documentation":"The file format of a dataset that is created from an S3 file or folder.
" }, "FormatOptions":{ "shape":"FormatOptions", - "documentation":"Options that define how DataBrew interprets the data in the dataset.
" + "documentation":"A set of options that define how DataBrew interprets the data in the dataset.
" }, "Input":{ "shape":"Input", @@ -1116,6 +1145,10 @@ "shape":"Source", "documentation":"The location of the data for the dataset, either Amazon S3 or the AWS Glue Data Catalog.
" }, + "PathOptions":{ + "shape":"PathOptions", + "documentation":"A set of options that defines how DataBrew interprets an S3 path of the dataset.
" + }, "Tags":{ "shape":"TagMap", "documentation":"Metadata tags that have been applied to the dataset.
" @@ -1136,7 +1169,61 @@ "max":255, "min":1 }, + "DatasetParameter":{ + "type":"structure", + "required":[ + "Name", + "Type" + ], + "members":{ + "Name":{ + "shape":"PathParameterName", + "documentation":"The name of the parameter that is used in the dataset's S3 path.
" + }, + "Type":{ + "shape":"ParameterType", + "documentation":"The type of the dataset parameter, can be one of a 'String', 'Number' or 'Datetime'.
" + }, + "DatetimeOptions":{ + "shape":"DatetimeOptions", + "documentation":"Additional parameter options such as a format and a timezone. Required for datetime parameters.
" + }, + "CreateColumn":{ + "shape":"CreateColumn", + "documentation":"Optional boolean value that defines whether the captured value of this parameter should be loaded as an additional column in the dataset.
" + }, + "Filter":{ + "shape":"FilterExpression", + "documentation":"The optional filter expression structure to apply additional matching criteria to the parameter.
" + } + }, + "documentation":"Represents a dataset paramater that defines type and conditions for a parameter in the S3 path of the dataset.
" + }, "Date":{"type":"timestamp"}, + "DatetimeFormat":{ + "type":"string", + "max":100, + "min":2 + }, + "DatetimeOptions":{ + "type":"structure", + "required":["Format"], + "members":{ + "Format":{ + "shape":"DatetimeFormat", + "documentation":"Required option, that defines the datetime format used for a date parameter in the S3 path. Should use only supported datetime specifiers and separation characters, all litera a-z or A-Z character should be escaped with single quotes. E.g. \"MM.dd.yyyy-'at'-HH:mm\".
" + }, + "TimezoneOffset":{ + "shape":"TimezoneOffset", + "documentation":"Optional value for a timezone offset of the datetime parameter value in the S3 path. Shouldn't be used if Format for this parameter includes timezone fields. If no offset specified, UTC is assumed.
" + }, + "LocaleCode":{ + "shape":"LocaleCode", + "documentation":"Optional value for a non-US locale code, needed for correct interpretation of some date formats.
" + } + }, + "documentation":"Represents additional options for correct interpretation of datetime parameters used in the S3 path of a dataset.
" + }, "DeleteDatasetRequest":{ "type":"structure", "required":["Name"], @@ -1301,7 +1388,7 @@ }, "Format":{ "shape":"InputFormat", - "documentation":"Specifies the file format of a dataset created from an S3 file or folder.
" + "documentation":"The file format of a dataset that is created from an S3 file or folder.
" }, "FormatOptions":{"shape":"FormatOptions"}, "Input":{"shape":"Input"}, @@ -1317,6 +1404,10 @@ "shape":"Source", "documentation":"The location of the data for this dataset, Amazon S3 or the AWS Glue Data Catalog.
" }, + "PathOptions":{ + "shape":"PathOptions", + "documentation":"A set of options that defines how DataBrew interprets an S3 path of the dataset.
" + }, "Tags":{ "shape":"TagMap", "documentation":"Metadata tags associated with this dataset.
" @@ -1728,20 +1819,63 @@ "members":{ "SheetNames":{ "shape":"SheetNameList", - "documentation":"Specifies one or more named sheets in the Excel file, which will be included in the dataset.
" + "documentation":"One or more named sheets in the Excel file that will be included in the dataset.
" }, "SheetIndexes":{ "shape":"SheetIndexList", - "documentation":"Specifies one or more sheet numbers in the Excel file, which will be included in the dataset.
" + "documentation":"One or more sheet numbers in the Excel file that will be included in the dataset.
" }, "HeaderRow":{ "shape":"HeaderRow", - "documentation":"A variable that specifies whether the first row in the file will be parsed as the header. If false, column names will be auto-generated.
" + "documentation":"A variable that specifies whether the first row in the file is parsed as the header. If this value is false, column names are auto-generated.
" } }, - "documentation":"Options that define how DataBrew will interpret a Microsoft Excel file, when creating a dataset from that file.
" + "documentation":"Represents a set of options that define how DataBrew will interpret a Microsoft Excel file when creating a dataset from that file.
" }, "ExecutionTime":{"type":"integer"}, + "Expression":{ + "type":"string", + "max":1024, + "min":4, + "pattern":"^[<>0-9A-Za-z_:)(!= ]+$" + }, + "FilesLimit":{ + "type":"structure", + "required":["MaxFiles"], + "members":{ + "MaxFiles":{ + "shape":"MaxFiles", + "documentation":"The number of S3 files to select.
" + }, + "OrderedBy":{ + "shape":"OrderedBy", + "documentation":"A criteria to use for S3 files sorting before their selection. By default uses LAST_MODIFIED_DATE as a sorting criteria. Currently it's the only allowed value.
" + }, + "Order":{ + "shape":"Order", + "documentation":"A criteria to use for S3 files sorting before their selection. By default uses DESCENDING order, i.e. most recent files are selected first. Anotherpossible value is ASCENDING.
" + } + }, + "documentation":"Represents a limit imposed on number of S3 files that should be selected for a dataset from a connected S3 path.
" + }, + "FilterExpression":{ + "type":"structure", + "required":[ + "Expression", + "ValuesMap" + ], + "members":{ + "Expression":{ + "shape":"Expression", + "documentation":"The expression which includes condition names followed by substitution variables, possibly grouped and combined with other conditions. For example, \"(starts_with :prefix1 or starts_with :prefix2) and (ends_with :suffix1 or ends_with :suffix2)\". Substitution variables should start with ':' symbol.
" + }, + "ValuesMap":{ + "shape":"ValuesMap", + "documentation":"The map of substitution variable names to their values used in this filter expression.
" + } + }, + "documentation":"Represents a structure for defining parameter conditions.
" + }, "FormatOptions":{ "type":"structure", "members":{ @@ -1755,10 +1889,15 @@ }, "Csv":{ "shape":"CsvOptions", - "documentation":"Options that define how Csv input is to be interpreted by DataBrew.
" + "documentation":"Options that define how CSV input is to be interpreted by DataBrew.
" } }, - "documentation":"Options that define the structure of either Csv, Excel, or JSON input.
" + "documentation":"Represents a set of options that define the structure of either comma-separated value (CSV), Excel, or JSON input.
" + }, + "GlueConnectionName":{ + "type":"string", + "max":255, + "min":1 }, "HeaderRow":{"type":"boolean"}, "HiddenColumnList":{ @@ -1775,9 +1914,13 @@ "DataCatalogInputDefinition":{ "shape":"DataCatalogInputDefinition", "documentation":"The AWS Glue Data Catalog parameters for the data.
" + }, + "DatabaseInputDefinition":{ + "shape":"DatabaseInputDefinition", + "documentation":"Connection information for dataset input files stored in a database.
" } }, - "documentation":"Information on how DataBrew can find data, in either the AWS Glue Data Catalog or Amazon S3.
" + "documentation":"Represents information on how DataBrew can find data, in either the AWS Glue Data Catalog or Amazon S3.
" }, "InputFormat":{ "type":"string", @@ -1823,7 +1966,7 @@ }, "EncryptionMode":{ "shape":"EncryptionMode", - "documentation":"The encryption mode for the job, which can be one of the following:
SSE-KMS
- Server-side encryption with AWS KMS-managed keys.
SSE-S3
- Server-side encryption with keys managed by Amazon S3.
The encryption mode for the job, which can be one of the following:
SSE-KMS
- Server-side encryption with keys managed by AWS KMS.
SSE-S3
- Server-side encryption with keys managed by Amazon S3.
The Amazon Resource Name (ARN) of the role that will be assumed for this job.
" + "documentation":"The Amazon Resource Name (ARN) of the role to be assumed for this job.
" }, "Timeout":{ "shape":"Timeout", @@ -1883,7 +2026,7 @@ }, "JobSample":{ "shape":"JobSample", - "documentation":"Sample configuration for profile jobs only. Determines the number of rows on which the profile job will be executed. If a JobSample value is not provided, the default value will be used. The default value is CUSTOM_ROWS for the mode parameter and 20000 for the size parameter.
" + "documentation":"A sample configuration for profile jobs only, which determines the number of rows on which the profile job is run. If a JobSample
value isn't provided, the default value is used. The default value is CUSTOM_ROWS for the mode parameter and 20,000 for the size parameter.
Represents all of the attributes of a DataBrew job.
" @@ -1963,7 +2106,7 @@ }, "JobSample":{ "shape":"JobSample", - "documentation":"Sample configuration for profile jobs only. Determines the number of rows on which the profile job will be executed. If a JobSample value is not provided, the default value will be used. The default value is CUSTOM_ROWS for the mode parameter and 20000 for the size parameter.
" + "documentation":"A sample configuration for profile jobs only, which determines the number of rows on which the profile job is run. If a JobSample
value isn't provided, the default is used. The default value is CUSTOM_ROWS for the mode parameter and 20,000 for the size parameter.
Represents one run of a DataBrew job.
" @@ -1995,14 +2138,14 @@ "members":{ "Mode":{ "shape":"SampleMode", - "documentation":"Determines whether the profile job will be executed on the entire dataset or on a specified number of rows. Must be one of the following:
FULL_DATASET: Profile job will be executed on the entire dataset.
CUSTOM_ROWS: Profile job will be executed on the number of rows specified in the Size parameter.
A value that determines whether the profile job is run on the entire dataset or a specified number of rows. This value must be one of the following:
FULL_DATASET - The profile job is run on the entire dataset.
CUSTOM_ROWS - The profile job is run on the number of rows specified in the Size
parameter.
Size parameter is only required when the mode is CUSTOM_ROWS. Profile job will be executed on the the specified number of rows. The maximum value for size is Long.MAX_VALUE.
Long.MAX_VALUE = 9223372036854775807
" + "documentation":"The Size
parameter is only required when the mode is CUSTOM_ROWS. The profile job is run on the specified number of rows. The maximum value for size is Long.MAX_VALUE.
Long.MAX_VALUE = 9223372036854775807
" } }, - "documentation":"Sample configuration for Profile Jobs only. Determines the number of rows on which the Profile job will be executed. If a JobSample value is not provided for profile jobs, the default value will be used. The default value is CUSTOM_ROWS for the mode parameter and 20000 for the size parameter.
" + "documentation":"A sample configuration for profile jobs only, which determines the number of rows on which the profile job is run. If a JobSample
value isn't provided, the default is used. The default value is CUSTOM_ROWS for the mode parameter and 20,000 for the size parameter.
Options that define how DataBrew formats job output files.
" + "documentation":"Represents options that define how DataBrew formats job output files.
" } }, - "documentation":"Parameters that specify how and where DataBrew will write the output generated by recipe jobs or profile jobs.
" + "documentation":"Represents options that specify how and where DataBrew writes the output generated by recipe jobs or profile jobs.
" }, "OutputFormat":{ "type":"string", @@ -2388,10 +2552,10 @@ "members":{ "Csv":{ "shape":"CsvOutputOptions", - "documentation":"Options that define how DataBrew writes Csv output.
" + "documentation":"Represents a set of options that define the structure of comma-separated value (CSV) job output.
" } }, - "documentation":"Options that define the structure of Csv job output.
" + "documentation":"Represents a set of options that define the structure of comma-separated (CSV) job output.
" }, "OutputList":{ "type":"list", @@ -2410,11 +2574,50 @@ "min":1, "pattern":"^[A-Za-z0-9]+$" }, + "ParameterType":{ + "type":"string", + "enum":[ + "Datetime", + "Number", + "String" + ] + }, "ParameterValue":{ "type":"string", "max":12288, "min":1 }, + "PathOptions":{ + "type":"structure", + "members":{ + "LastModifiedDateCondition":{ + "shape":"FilterExpression", + "documentation":"If provided, this structure defines a date range for matching S3 objects based on their LastModifiedDate attribute in S3.
" + }, + "FilesLimit":{ + "shape":"FilesLimit", + "documentation":"If provided, this structure imposes a limit on a number of files that should be selected.
" + }, + "Parameters":{ + "shape":"PathParametersMap", + "documentation":"A structure that maps names of parameters used in the S3 path of a dataset to their definitions.
" + } + }, + "documentation":"Represents a set of options that define how DataBrew selects files for a given S3 path in a dataset.
" + }, + "PathParameterName":{ + "type":"string", + "max":255, + "min":1 + }, + "PathParametersMap":{ + "type":"map", + "key":{"shape":"PathParameterName"}, + "value":{"shape":"DatasetParameter"}, + "documentation":"A structure that map names of parameters used in the S3 path of a dataset to their definitions. A definition includes parameter type and conditions.
", + "max":10, + "min":1 + }, "Preview":{"type":"boolean"}, "Project":{ "type":"structure", @@ -2461,7 +2664,7 @@ }, "Sample":{ "shape":"Sample", - "documentation":"The sample size and sampling type to apply to the data. If this parameter isn't specified, then the sample will consiste of the first 500 rows from the dataset.
" + "documentation":"The sample size and sampling type to apply to the data. If this parameter isn't specified, then the sample consists of the first 500 rows from the dataset.
" }, "Tags":{ "shape":"TagMap", @@ -2572,7 +2775,7 @@ }, "RecipeVersion":{ "shape":"RecipeVersion", - "documentation":"The identifier for the version for the recipe. Must be one of the following:
Numeric version (X.Y
) - X
and Y
stand for major and minor version numbers. The maximum length of each is 6 digits, and neither can be negative values. Both X
and Y
are required, and \"0.0\" is not a valid version.
LATEST_WORKING
- the most recent valid version being developed in a DataBrew project.
LATEST_PUBLISHED
- the most recent published version.
The identifier for the version for the recipe. Must be one of the following:
Numeric version (X.Y
) - X
and Y
stand for major and minor version numbers. The maximum length of each is 6 digits, and neither can be negative values. Both X
and Y
are required, and \"0.0\" isn't a valid version.
LATEST_WORKING
- the most recent valid version being developed in a DataBrew project.
LATEST_PUBLISHED
- the most recent published version.
Represents one or more actions to be performed on a DataBrew dataset.
" @@ -2635,7 +2838,7 @@ }, "ConditionExpressions":{ "shape":"ConditionExpressionList", - "documentation":"One or more conditions that must be met, in order for the recipe step to succeed.
All of the conditions in the array must be met. In other words, all of the conditions must be combined using a logical AND operation.
One or more conditions that must be met for the recipe step to succeed.
All of the conditions in the array must be met. In other words, all of the conditions must be combined using a logical AND operation.
Represents a single step from a DataBrew recipe to be performed.
" @@ -2696,7 +2899,7 @@ "documentation":"The unique name of the object in the bucket.
" } }, - "documentation":"An Amazon S3 location (bucket name an object key) where DataBrew can read input data, or write output from a job.
" + "documentation":"Represents an Amazon S3 location (bucket name and object key) where DataBrew can read input data, or write output from a job.
" }, "Sample":{ "type":"structure", @@ -2767,7 +2970,7 @@ }, "CronExpression":{ "shape":"CronExpression", - "documentation":"The date(s) and time(s) when the job will run. For more information, see Cron expressions in the AWS Glue DataBrew Developer Guide.
" + "documentation":"The dates and times when the job is to run. For more information, see Cron expressions in the AWS Glue DataBrew Developer Guide.
" }, "Tags":{ "shape":"TagMap", @@ -2883,7 +3086,8 @@ "type":"string", "enum":[ "S3", - "DATA-CATALOG" + "DATA-CATALOG", + "DATABASE" ] }, "StartColumnIndex":{ @@ -3038,6 +3242,12 @@ "type":"integer", "min":0 }, + "TimezoneOffset":{ + "type":"string", + "max":6, + "min":1, + "pattern":"^(Z|[-+](\\d|\\d{2}|\\d{2}:?\\d{2}))$" + }, "UntagResourceRequest":{ "type":"structure", "required":[ @@ -3079,10 +3289,14 @@ }, "Format":{ "shape":"InputFormat", - "documentation":"Specifies the file format of a dataset created from an S3 file or folder.
" + "documentation":"The file format of a dataset that is created from an S3 file or folder.
" }, "FormatOptions":{"shape":"FormatOptions"}, - "Input":{"shape":"Input"} + "Input":{"shape":"Input"}, + "PathOptions":{ + "shape":"PathOptions", + "documentation":"A set of options that defines how DataBrew interprets an S3 path of the dataset.
" + } } }, "UpdateDatasetResponse":{ @@ -3318,6 +3532,17 @@ "error":{"httpStatusCode":400}, "exception":true }, + "ValueReference":{ + "type":"string", + "max":128, + "min":2, + "pattern":"^:[A-Za-z0-9_]+$" + }, + "ValuesMap":{ + "type":"map", + "key":{"shape":"ValueReference"}, + "value":{"shape":"ConditionValue"} + }, "ViewFrame":{ "type":"structure", "required":["StartColumnIndex"], @@ -3335,7 +3560,7 @@ "documentation":"A list of columns to hide in the view frame.
" } }, - "documentation":"Represents the data being being transformed during an action.
" + "documentation":"Represents the data being transformed during an action.
" } }, "documentation":"AWS Glue DataBrew is a visual, cloud-scale data-preparation service. DataBrew simplifies data preparation tasks, targeting data issues that are hard to spot and time-consuming to fix. DataBrew empowers users of all technical levels to visualize the data and perform one-click data transformations, with no coding required.
" From 91206175b8c2ac6eff202340891779c018b46bb8 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Tue, 30 Mar 2021 18:08:37 +0000 Subject: [PATCH 05/12] Amazon Fraud Detector Update: This release adds support for Batch Predictions in Amazon Fraud Detector. --- .../feature-AmazonFraudDetector-281eaaa.json | 6 + .../codegen-resources/paginators-1.json | 5 + .../codegen-resources/service-2.json | 267 +++++++++++++++++- 3 files changed, 277 insertions(+), 1 deletion(-) create mode 100644 .changes/next-release/feature-AmazonFraudDetector-281eaaa.json diff --git a/.changes/next-release/feature-AmazonFraudDetector-281eaaa.json b/.changes/next-release/feature-AmazonFraudDetector-281eaaa.json new file mode 100644 index 000000000000..e38b8bb68707 --- /dev/null +++ b/.changes/next-release/feature-AmazonFraudDetector-281eaaa.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon Fraud Detector", + "contributor": "", + "description": "This release adds support for Batch Predictions in Amazon Fraud Detector." +} diff --git a/services/frauddetector/src/main/resources/codegen-resources/paginators-1.json b/services/frauddetector/src/main/resources/codegen-resources/paginators-1.json index ac4b7cf14ed3..b67febd1c2eb 100644 --- a/services/frauddetector/src/main/resources/codegen-resources/paginators-1.json +++ b/services/frauddetector/src/main/resources/codegen-resources/paginators-1.json @@ -5,6 +5,11 @@ "output_token": "nextToken", "limit_key": "maxResults" }, + "GetBatchPredictionJobs": { + "input_token": "nextToken", + "output_token": "nextToken", + "limit_key": "maxResults" + }, "GetDetectors": { "input_token": "nextToken", "output_token": "nextToken", diff --git a/services/frauddetector/src/main/resources/codegen-resources/service-2.json b/services/frauddetector/src/main/resources/codegen-resources/service-2.json index 53872732d3b3..714816453e2e 100644 --- a/services/frauddetector/src/main/resources/codegen-resources/service-2.json +++ b/services/frauddetector/src/main/resources/codegen-resources/service-2.json @@ -44,6 +44,37 @@ ], "documentation":"Gets a batch of variables.
" }, + "CancelBatchPredictionJob":{ + "name":"CancelBatchPredictionJob", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CancelBatchPredictionJobRequest"}, + "output":{"shape":"CancelBatchPredictionJobResult"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"}, + {"shape":"AccessDeniedException"} + ], + "documentation":"Cancels the specified batch prediction job.
" + }, + "CreateBatchPredictionJob":{ + "name":"CreateBatchPredictionJob", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateBatchPredictionJobRequest"}, + "output":{"shape":"CreateBatchPredictionJobResult"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"InternalServerException"}, + {"shape":"AccessDeniedException"} + ], + "documentation":"Creates a batch prediction job.
" + }, "CreateDetectorVersion":{ "name":"CreateDetectorVersion", "http":{ @@ -124,6 +155,22 @@ ], "documentation":"Creates a variable.
" }, + "DeleteBatchPredictionJob":{ + "name":"DeleteBatchPredictionJob", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteBatchPredictionJobRequest"}, + "output":{"shape":"DeleteBatchPredictionJobResult"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"}, + {"shape":"AccessDeniedException"} + ], + "documentation":"Deletes a batch prediction job.
" + }, "DeleteDetector":{ "name":"DeleteDetector", "http":{ @@ -355,6 +402,22 @@ ], "documentation":"Gets all of the model versions for the specified model type or for the specified model type and model ID. You can also get details for a single, specified model version.
" }, + "GetBatchPredictionJobs":{ + "name":"GetBatchPredictionJobs", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetBatchPredictionJobsRequest"}, + "output":{"shape":"GetBatchPredictionJobsResult"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"}, + {"shape":"AccessDeniedException"} + ], + "documentation":"Gets all batch prediction jobs or a specific job if you specify a job ID. This is a paginated API. If you provide a null maxResults, this action retrieves a maximum of 50 records per page. If you provide a maxResults, the value must be between 1 and 50. To get the next page results, provide the pagination token from the GetBatchPredictionJobsResponse as part of your request. A null pagination token fetches the records from the beginning.
" + }, "GetDetectorVersion":{ "name":"GetDetectorVersion", "http":{ @@ -883,6 +946,17 @@ "documentation":"An exception indicating Amazon Fraud Detector does not have the needed permissions. This can occur if you submit a request, such as PutExternalModel
, that specifies a role that is not in your account.
The job ID for the batch prediction.
" + }, + "status":{ + "shape":"AsyncJobStatus", + "documentation":"The batch prediction status.
" + }, + "failureReason":{ + "shape":"string", + "documentation":"The reason a batch prediction job failed.
" + }, + "startTime":{ + "shape":"time", + "documentation":"Timestamp of when the batch prediction job started.
" + }, + "completionTime":{ + "shape":"time", + "documentation":"Timestamp of when the batch prediction job comleted.
" + }, + "lastHeartbeatTime":{ + "shape":"time", + "documentation":"Timestamp of most recent heartbeat indicating the batch prediction job was making progress.
" + }, + "inputPath":{ + "shape":"s3BucketLocation", + "documentation":"The Amazon S3 location of your training file.
" + }, + "outputPath":{ + "shape":"s3BucketLocation", + "documentation":"The Amazon S3 location of your output file.
" + }, + "eventTypeName":{ + "shape":"identifier", + "documentation":"The name of the event type.
" + }, + "detectorName":{ + "shape":"identifier", + "documentation":"The name of the detector.
" + }, + "detectorVersion":{ + "shape":"floatVersionString", + "documentation":"The detector version.
" + }, + "iamRoleArn":{ + "shape":"iamRoleArn", + "documentation":"The ARN of the IAM role to use for this job request.
" + }, + "arn":{ + "shape":"fraudDetectorArn", + "documentation":"The ARN of batch prediction job.
" + }, + "processedRecordsCount":{ + "shape":"Integer", + "documentation":"The number of records processed by the batch prediction job.
" + }, + "totalRecordsCount":{ + "shape":"Integer", + "documentation":"The total number of records in the batch prediction job.
" + } + }, + "documentation":"The batch prediction details.
" + }, + "BatchPredictionList":{ + "type":"list", + "member":{"shape":"BatchPrediction"} + }, + "CancelBatchPredictionJobRequest":{ + "type":"structure", + "required":["jobId"], + "members":{ + "jobId":{ + "shape":"identifier", + "documentation":"The ID of the batch prediction job to cancel.
" + } + } + }, + "CancelBatchPredictionJobResult":{ + "type":"structure", + "members":{ + } + }, "ConflictException":{ "type":"structure", "required":["message"], @@ -982,6 +1141,56 @@ "documentation":"An exception indicating there was a conflict during a delete operation. The following delete operations can cause a conflict exception:
DeleteDetector: A conflict exception will occur if the detector has associated Rules
or DetectorVersions
. You can only delete a detector if it has no Rules
or DetectorVersions
.
DeleteDetectorVersion: A conflict exception will occur if the DetectorVersion
status is ACTIVE
.
DeleteRule: A conflict exception will occur if the RuleVersion
is in use by an associated ACTIVE
or INACTIVE DetectorVersion
.
The ID of the batch prediction job.
" + }, + "inputPath":{ + "shape":"s3BucketLocation", + "documentation":"The Amazon S3 location of your training file.
" + }, + "outputPath":{ + "shape":"s3BucketLocation", + "documentation":"The Amazon S3 location of your output file.
" + }, + "eventTypeName":{ + "shape":"identifier", + "documentation":"The name of the event type.
" + }, + "detectorName":{ + "shape":"identifier", + "documentation":"The name of the detector.
" + }, + "detectorVersion":{ + "shape":"wholeNumberVersionString", + "documentation":"The detector version.
" + }, + "iamRoleArn":{ + "shape":"iamRoleArn", + "documentation":"The ARN of the IAM role to use for this job request.
" + }, + "tags":{ + "shape":"tagList", + "documentation":"A collection of key and value pairs.
" + } + } + }, + "CreateBatchPredictionJobResult":{ + "type":"structure", + "members":{ + } + }, "CreateDetectorVersionRequest":{ "type":"structure", "required":[ @@ -1256,6 +1465,21 @@ }, "documentation":"The model training validation messages.
" }, + "DeleteBatchPredictionJobRequest":{ + "type":"structure", + "required":["jobId"], + "members":{ + "jobId":{ + "shape":"identifier", + "documentation":"The ID of the batch prediction job to delete.
" + } + } + }, + "DeleteBatchPredictionJobResult":{ + "type":"structure", + "members":{ + } + }, "DeleteDetectorRequest":{ "type":"structure", "required":["detectorId"], @@ -1831,6 +2055,36 @@ }, "documentation":"The message details.
" }, + "GetBatchPredictionJobsRequest":{ + "type":"structure", + "members":{ + "jobId":{ + "shape":"identifier", + "documentation":"The batch prediction job for which to get the details.
" + }, + "maxResults":{ + "shape":"batchPredictionsMaxPageSize", + "documentation":"The maximum number of objects to return for the request.
" + }, + "nextToken":{ + "shape":"string", + "documentation":"The next token from the previous request.
" + } + } + }, + "GetBatchPredictionJobsResult":{ + "type":"structure", + "members":{ + "batchPredictions":{ + "shape":"BatchPredictionList", + "documentation":"An array containing the details of each batch prediction job.
" + }, + "nextToken":{ + "shape":"string", + "documentation":"The next token for the subsequent request.
" + } + } + }, "GetDetectorVersionRequest":{ "type":"structure", "required":[ @@ -2306,6 +2560,7 @@ } } }, + "Integer":{"type":"integer"}, "InternalServerException":{ "type":"structure", "required":["message"], @@ -3555,6 +3810,12 @@ "max":100, "min":50 }, + "batchPredictionsMaxPageSize":{ + "type":"integer", + "box":true, + "max":50, + "min":1 + }, "blob":{"type":"blob"}, "contentType":{ "type":"string", @@ -3709,7 +3970,11 @@ "max":256, "min":0 }, - "time":{"type":"string"}, + "time":{ + "type":"string", + "max":30, + "min":11 + }, "variableName":{ "type":"string", "max":64, From ee8229c688e18a548cdf36e0b19e49af57d04ed8 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Tue, 30 Mar 2021 18:08:39 +0000 Subject: [PATCH 06/12] Amazon SageMaker Service Update: Amazon SageMaker Autopilot now supports 1) feature importance reports for AutoML jobs and 2) PartialFailures for AutoML jobs --- ...eature-AmazonSageMakerService-ff9dccd.json | 6 + .../codegen-resources/service-2.json | 157 ++++++++++++------ 2 files changed, 111 insertions(+), 52 deletions(-) create mode 100644 .changes/next-release/feature-AmazonSageMakerService-ff9dccd.json diff --git a/.changes/next-release/feature-AmazonSageMakerService-ff9dccd.json b/.changes/next-release/feature-AmazonSageMakerService-ff9dccd.json new file mode 100644 index 000000000000..e53fd18bbaed --- /dev/null +++ b/.changes/next-release/feature-AmazonSageMakerService-ff9dccd.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon SageMaker Service", + "contributor": "", + "description": "Amazon SageMaker Autopilot now supports 1) feature importance reports for AutoML jobs and 2) PartialFailures for AutoML jobs" +} diff --git a/services/sagemaker/src/main/resources/codegen-resources/service-2.json b/services/sagemaker/src/main/resources/codegen-resources/service-2.json index 6194f812db85..c520fddcc859 100644 --- a/services/sagemaker/src/main/resources/codegen-resources/service-2.json +++ b/services/sagemaker/src/main/resources/codegen-resources/service-2.json @@ -87,7 +87,7 @@ {"shape":"ResourceLimitExceeded"}, {"shape":"ResourceInUse"} ], - "documentation":"Creates a running App for the specified UserProfile. Supported Apps are JupyterServer and KernelGateway. This operation is automatically invoked by Amazon SageMaker Studio upon access to the associated Domain, and when new kernel configurations are selected by the user. A user may have multiple Apps active simultaneously.
" + "documentation":"Creates a running app for the specified UserProfile. Supported apps are JupyterServer
and KernelGateway
. This operation is automatically invoked by Amazon SageMaker Studio upon access to the associated Domain, and when new kernel configurations are selected by the user. A user may have multiple Apps active simultaneously.
Creates an Autopilot job.
Find the best performing model after you run an Autopilot job by calling . Deploy that model by following the steps described in Step 6.1: Deploy the Model to Amazon SageMaker Hosting Services.
For information about how to use Autopilot, see Automate Model Development with Amazon SageMaker Autopilot.
" + "documentation":"Creates an Autopilot job.
Find the best performing model after you run an Autopilot job by calling .
For information about how to use Autopilot, see Automate Model Development with Amazon SageMaker Autopilot.
" }, "CreateCodeRepository":{ "name":"CreateCodeRepository", @@ -1150,7 +1150,7 @@ "errors":[ {"shape":"ResourceNotFound"} ], - "documentation":"Returns information about an Amazon SageMaker job.
" + "documentation":"Returns information about an Amazon SageMaker AutoML job.
" }, "DescribeCodeRepository":{ "name":"DescribeCodeRepository", @@ -1799,7 +1799,7 @@ "errors":[ {"shape":"ResourceNotFound"} ], - "documentation":"List the Candidates created for the job.
" + "documentation":"List the candidates created for the job.
" }, "ListCodeRepositories":{ "name":"ListCodeRepositories", @@ -3691,6 +3691,10 @@ "FailureReason":{ "shape":"AutoMLFailureReason", "documentation":"The failure reason.
" + }, + "CandidateProperties":{ + "shape":"CandidateProperties", + "documentation":"The AutoML candidate's properties.
" } }, "documentation":"An Autopilot job returns recommendations, or candidates. Each candidate has futher details about the steps involed, and the status.
" @@ -3731,18 +3735,18 @@ "members":{ "DataSource":{ "shape":"AutoMLDataSource", - "documentation":"The data source.
" + "documentation":"The data source for an AutoML channel.
" }, "CompressionType":{ "shape":"CompressionType", - "documentation":"You can use Gzip or None. The default value is None.
" + "documentation":"You can use Gzip
or None
. The default value is None
.
The name of the target variable in supervised learning, a.k.a. 'y'.
" + "documentation":"The name of the target variable in supervised learning, usually represented by 'y'.
" } }, - "documentation":"Similar to Channel. A channel is a named input source that training algorithms can consume. Refer to Channel for detailed descriptions.
" + "documentation":"A channel is a named input source that training algorithms can consume. For more information, see .
" }, "AutoMLContainerDefinition":{ "type":"structure", @@ -3753,18 +3757,18 @@ "members":{ "Image":{ "shape":"ContainerImage", - "documentation":"The ECR path of the container. Refer to ContainerDefinition for more details.
" + "documentation":"The ECR path of the container. For more information, see .
" }, "ModelDataUrl":{ "shape":"Url", - "documentation":"The location of the model artifacts. Refer to ContainerDefinition for more details.
" + "documentation":"The location of the model artifacts. For more information, see .
" }, "Environment":{ "shape":"EnvironmentMap", - "documentation":"Environment variables to set in the container. Refer to ContainerDefinition for more details.
" + "documentation":"Environment variables to set in the container. For more information, see .
" } }, - "documentation":"A list of container definitions that describe the different containers that make up one AutoML candidate. Refer to ContainerDefinition for more details.
" + "documentation":"A list of container definitions that describe the different containers that make up an AutoML candidate. For more information, see .
" }, "AutoMLContainerDefinitions":{ "type":"list", @@ -3825,7 +3829,7 @@ }, "MaxAutoMLJobRuntimeInSeconds":{ "shape":"MaxAutoMLJobRuntimeInSeconds", - "documentation":"The maximum time, in seconds, an AutoML job is allowed to wait for a trial to complete. It must be equal to or greater than MaxRuntimePerTrainingJobInSeconds.
" + "documentation":"The maximum time, in seconds, an AutoML job is allowed to wait for a trial to complete. It must be equal to or greater than MaxRuntimePerTrainingJobInSeconds
.
How long a job is allowed to run, or how many candidates a job is allowed to generate.
" @@ -3835,14 +3839,14 @@ "members":{ "CompletionCriteria":{ "shape":"AutoMLJobCompletionCriteria", - "documentation":"How long a job is allowed to run, or how many candidates a job is allowed to generate.
" + "documentation":"How long an AutoML job is allowed to run, or how many candidates a job is allowed to generate.
" }, "SecurityConfig":{ "shape":"AutoMLSecurityConfig", "documentation":"Security configuration for traffic encryption or Amazon VPC settings.
" } }, - "documentation":"A collection of settings used for a job.
" + "documentation":"A collection of settings used for an AutoML job.
" }, "AutoMLJobName":{ "type":"string", @@ -3913,23 +3917,23 @@ "members":{ "AutoMLJobName":{ "shape":"AutoMLJobName", - "documentation":"The name of the object you are requesting.
" + "documentation":"The name of the AutoML you are requesting.
" }, "AutoMLJobArn":{ "shape":"AutoMLJobArn", - "documentation":"The ARN of the job.
" + "documentation":"The ARN of the AutoML job.
" }, "AutoMLJobStatus":{ "shape":"AutoMLJobStatus", - "documentation":"The job's status.
" + "documentation":"The status of the AutoML job.
" }, "AutoMLJobSecondaryStatus":{ "shape":"AutoMLJobSecondaryStatus", - "documentation":"The job's secondary status.
" + "documentation":"The secondary status of the AutoML job.
" }, "CreationTime":{ "shape":"Timestamp", - "documentation":"When the job was created.
" + "documentation":"When the AutoML job was created.
" }, "EndTime":{ "shape":"Timestamp", @@ -3937,14 +3941,18 @@ }, "LastModifiedTime":{ "shape":"Timestamp", - "documentation":"When the job was last modified.
" + "documentation":"When the AutoML job was last modified.
" }, "FailureReason":{ "shape":"AutoMLFailureReason", - "documentation":"The failure reason of a job.
" + "documentation":"The failure reason of an AutoML job.
" + }, + "PartialFailureReasons":{ + "shape":"AutoMLPartialFailureReasons", + "documentation":"The list of reasons for partial failures within an AutoML job.
" } }, - "documentation":"Provides a summary about a job.
" + "documentation":"Provides a summary about an AutoML job.
" }, "AutoMLMaxResults":{ "type":"integer", @@ -3981,6 +3989,22 @@ }, "documentation":"The output data configuration.
" }, + "AutoMLPartialFailureReason":{ + "type":"structure", + "members":{ + "PartialFailureMessage":{ + "shape":"AutoMLFailureReason", + "documentation":"The message containing the reason for a partial failure of an AutoML job.
" + } + }, + "documentation":"The reason for a partial failure of an AutoML job.
" + }, + "AutoMLPartialFailureReasons":{ + "type":"list", + "member":{"shape":"AutoMLPartialFailureReason"}, + "max":5, + "min":1 + }, "AutoMLS3DataSource":{ "type":"structure", "required":[ @@ -4124,6 +4148,17 @@ }, "documentation":"Details on the cache hit of a pipeline execution step.
" }, + "CandidateArtifactLocations":{ + "type":"structure", + "required":["Explainability"], + "members":{ + "Explainability":{ + "shape":"ExplainabilityLocation", + "documentation":"The S3 prefix to the explainability artifacts generated for the AutoML candidate.
" + } + }, + "documentation":"Location of artifacts for an AutoML candidate job.
" + }, "CandidateDefinitionNotebookLocation":{ "type":"string", "min":1 @@ -4133,6 +4168,16 @@ "max":64, "min":1 }, + "CandidateProperties":{ + "type":"structure", + "members":{ + "CandidateArtifactLocations":{ + "shape":"CandidateArtifactLocations", + "documentation":"The S3 prefix to the artifacts generated for an AutoML candidate.
" + } + }, + "documentation":"The properties of an AutoML candidate job.
" + }, "CandidateSortBy":{ "type":"string", "enum":[ @@ -5070,7 +5115,7 @@ }, "AppType":{ "shape":"AppType", - "documentation":"The type of app.
" + "documentation":"The type of app. Supported apps are JupyterServer
and KernelGateway
. TensorBoard
is not supported.
Identifies an Autopilot job. Must be unique to your account and is case-insensitive.
" + "documentation":"Identifies an Autopilot job. The name must be unique to your account and is case-insensitive.
" }, "InputDataConfig":{ "shape":"AutoMLInputDataConfig", - "documentation":"Similar to InputDataConfig supported by Tuning. Format(s) supported: CSV. Minimum of 500 rows.
" + "documentation":"An array of channel objects that describes the input data and its location. Each channel is a named input source. Similar to InputDataConfig
supported by . Format(s) supported: CSV. Minimum of 500 rows.
Similar to OutputDataConfig supported by Tuning. Format(s) supported: CSV.
" + "documentation":"Provides information about encryption and the Amazon S3 output path needed to store artifacts from an AutoML job. Format(s) supported: CSV.
" }, "ProblemType":{ "shape":"ProblemType", - "documentation":"Defines the kind of preprocessing and algorithms intended for the candidates. Options include: BinaryClassification, MulticlassClassification, and Regression.
" + "documentation":"Defines the type of supervised learning available for the candidates. Options include: BinaryClassification, MulticlassClassification, and Regression. For more information, see Amazon SageMaker Autopilot problem types and algorithm support.
" }, "AutoMLJobObjective":{ "shape":"AutoMLJobObjective", - "documentation":"Defines the objective of a an AutoML job. You provide a AutoMLJobObjective$MetricName and Autopilot infers whether to minimize or maximize it. If a metric is not specified, the most commonly used ObjectiveMetric for problem type is automaically selected.
" + "documentation":"Defines the objective metric used to measure the predictive quality of an AutoML job. You provide a AutoMLJobObjective$MetricName and Autopilot infers whether to minimize or maximize it.
" }, "AutoMLJobConfig":{ "shape":"AutoMLJobConfig", - "documentation":"Contains CompletionCriteria and SecurityConfig.
" + "documentation":"Contains CompletionCriteria and SecurityConfig settings for the AutoML job.
" }, "RoleArn":{ "shape":"RoleArn", @@ -5173,7 +5218,7 @@ }, "GenerateCandidateDefinitionsOnly":{ "shape":"GenerateCandidateDefinitionsOnly", - "documentation":"Generates possible candidates without training a model. A candidate is a combination of data preprocessors, algorithms, and algorithm parameter settings.
" + "documentation":"Generates possible candidates without training the models. A candidate is a combination of data preprocessors, algorithms, and algorithm parameter settings.
" }, "Tags":{ "shape":"TagList", @@ -5187,7 +5232,7 @@ "members":{ "AutoMLJobArn":{ "shape":"AutoMLJobArn", - "documentation":"When a job is created, it is assigned a unique ARN.
" + "documentation":"The unique ARN that is assigned to the AutoML job when it is created.
" } } }, @@ -5414,7 +5459,7 @@ }, "DefaultUserSettings":{ "shape":"UserSettings", - "documentation":"The default settings to use to create a user profile when UserSettings
isn't specified in the call to the CreateUserProfile API.
SecurityGroups
is aggregated when specified in both calls. For all other settings in UserSettings
, the values specified in CreateUserProfile
take precedence over those specified in CreateDomain
.
The default settings to use to create a user profile when UserSettings
isn't specified in the call to the CreateUserProfile
API.
SecurityGroups
is aggregated when specified in both calls. For all other settings in UserSettings
, the values specified in CreateUserProfile
take precedence over those specified in CreateDomain
.
Tags to associated with the Domain. Each tag consists of a key and an optional value. Tag keys must be unique per resource. Tags are searchable using the Search API.
" + "documentation":"Tags to associated with the Domain. Each tag consists of a key and an optional value. Tag keys must be unique per resource. Tags are searchable using the Search
API.
Request information about a job using that job's unique name.
" + "documentation":"Requests information about an AutoML job using its unique name.
" } } }, @@ -8197,15 +8242,15 @@ "members":{ "AutoMLJobName":{ "shape":"AutoMLJobName", - "documentation":"Returns the name of a job.
" + "documentation":"Returns the name of the AutoML job.
" }, "AutoMLJobArn":{ "shape":"AutoMLJobArn", - "documentation":"Returns the job's ARN.
" + "documentation":"Returns the ARN of the AutoML job.
" }, "InputDataConfig":{ "shape":"AutoMLInputDataConfig", - "documentation":"Returns the job's input data config.
" + "documentation":"Returns the input data configuration for the AutoML job..
" }, "OutputDataConfig":{ "shape":"AutoMLOutputDataConfig", @@ -8225,15 +8270,15 @@ }, "AutoMLJobConfig":{ "shape":"AutoMLJobConfig", - "documentation":"Returns the job's config.
" + "documentation":"Returns the configuration for the AutoML job.
" }, "CreationTime":{ "shape":"Timestamp", - "documentation":"Returns the job's creation time.
" + "documentation":"Returns the creation time of the AutoML job.
" }, "EndTime":{ "shape":"Timestamp", - "documentation":"Returns the job's end time.
" + "documentation":"Returns the end time of the AutoML job.
" }, "LastModifiedTime":{ "shape":"Timestamp", @@ -8243,17 +8288,21 @@ "shape":"AutoMLFailureReason", "documentation":"Returns the job's FailureReason.
" }, + "PartialFailureReasons":{ + "shape":"AutoMLPartialFailureReasons", + "documentation":"Returns a list of reasons for partial failures within an AutoML job.
" + }, "BestCandidate":{ "shape":"AutoMLCandidate", "documentation":"Returns the job's BestCandidate.
" }, "AutoMLJobStatus":{ "shape":"AutoMLJobStatus", - "documentation":"Returns the job's AutoMLJobStatus.
" + "documentation":"Returns the status of the AutoML job's AutoMLJobStatus.
" }, "AutoMLJobSecondaryStatus":{ "shape":"AutoMLJobSecondaryStatus", - "documentation":"Returns the job's AutoMLJobSecondaryStatus.
" + "documentation":"Returns the secondary status of the AutoML job.
" }, "GenerateCandidateDefinitionsOnly":{ "shape":"GenerateCandidateDefinitionsOnly", @@ -8265,7 +8314,7 @@ }, "ResolvedAttributes":{ "shape":"ResolvedAttributes", - "documentation":"This contains ProblemType, AutoMLJobObjective and CompletionCriteria. They're auto-inferred values, if not provided by you. If you do provide them, then they'll be the same as provided.
" + "documentation":"This contains ProblemType, AutoMLJobObjective and CompletionCriteria. If you do not provide these values, they are auto-inferred. If you do provide them, they are the values you provide.
" } } }, @@ -11723,6 +11772,10 @@ }, "documentation":"Contains explainability metrics for a model.
" }, + "ExplainabilityLocation":{ + "type":"string", + "min":1 + }, "FailureReason":{ "type":"string", "max":1024 @@ -14203,27 +14256,27 @@ "members":{ "AutoMLJobName":{ "shape":"AutoMLJobName", - "documentation":"List the Candidates created for the job by providing the job's name.
" + "documentation":"List the candidates created for the job by providing the job's name.
" }, "StatusEquals":{ "shape":"CandidateStatus", - "documentation":"List the Candidates for the job and filter by status.
" + "documentation":"List the candidates for the job and filter by status.
" }, "CandidateNameEquals":{ "shape":"CandidateName", - "documentation":"List the Candidates for the job and filter by candidate name.
" + "documentation":"List the candidates for the job and filter by candidate name.
" }, "SortOrder":{ "shape":"AutoMLSortOrder", - "documentation":"The sort order for the results. The default is Ascending.
" + "documentation":"The sort order for the results. The default is Ascending
.
The parameter by which to sort the results. The default is Descending.
" + "documentation":"The parameter by which to sort the results. The default is Descending
.
List the job's Candidates up to a specified limit.
", + "documentation":"List the job's candidates up to a specified limit.
", "box":true }, "NextToken":{ @@ -20698,7 +20751,7 @@ "documentation":"When NotebookOutputOption
is Allowed
, the AWS Key Management Service (KMS) encryption key ID used to encrypt the notebook cell output in the Amazon S3 bucket.
Specifies options for sharing SageMaker Studio notebooks. These settings are specified as part of DefaultUserSettings
when the CreateDomain API is called, and as part of UserSettings
when the CreateUserProfile API is called. When SharingSettings
is not specified, notebook sharing isn't allowed.
Specifies options for sharing SageMaker Studio notebooks. These settings are specified as part of DefaultUserSettings
when the CreateDomain
API is called, and as part of UserSettings
when the CreateUserProfile
API is called. When SharingSettings
is not specified, notebook sharing isn't allowed.
The TensorBoard app settings.
" } }, - "documentation":"A collection of settings that apply to users of Amazon SageMaker Studio. These settings are specified when the CreateUserProfile API is called, and as DefaultUserSettings
when the CreateDomain API is called.
SecurityGroups
is aggregated when specified in both calls. For all other settings in UserSettings
, the values specified in CreateUserProfile
take precedence over those specified in CreateDomain
.
A collection of settings that apply to users of Amazon SageMaker Studio. These settings are specified when the CreateUserProfile
API is called, and as DefaultUserSettings
when the CreateDomain
API is called.
SecurityGroups
is aggregated when specified in both calls. For all other settings in UserSettings
, the values specified in CreateUserProfile
take precedence over those specified in CreateDomain
.
Associates an AWS Identity and Access Management (IAM) role with an AWS Certificate Manager (ACM) certificate. This enables the certificate to be used by the ACM for Nitro Enclaves application inside an enclave. For more information, see AWS Certificate Manager for Nitro Enclaves in the AWS Nitro Enclaves User Guide.
When the IAM role is associated with the ACM certificate, places the certificate, certificate chain, and encrypted private key in an Amazon S3 bucket that only the associated IAM role can access. The private key of the certificate is encrypted with an AWS-managed KMS customer master (CMK) that has an attached attestation-based CMK policy.
To enable the IAM role to access the Amazon S3 object, you must grant it permission to call s3:GetObject
on the Amazon S3 bucket returned by the command. To enable the IAM role to access the AWS KMS CMK, you must grant it permission to call kms:Decrypt
on AWS KMS CMK returned by the command. For more information, see Grant the role permission to access the certificate and encryption key in the AWS Nitro Enclaves User Guide.
Associates an AWS Identity and Access Management (IAM) role with an AWS Certificate Manager (ACM) certificate. This enables the certificate to be used by the ACM for Nitro Enclaves application inside an enclave. For more information, see AWS Certificate Manager for Nitro Enclaves in the AWS Nitro Enclaves User Guide.
When the IAM role is associated with the ACM certificate, the certificate, certificate chain, and encrypted private key are placed in an Amazon S3 bucket that only the associated IAM role can access. The private key of the certificate is encrypted with an AWS-managed KMS customer master (CMK) that has an attached attestation-based CMK policy.
To enable the IAM role to access the Amazon S3 object, you must grant it permission to call s3:GetObject
on the Amazon S3 bucket returned by the command. To enable the IAM role to access the AWS KMS CMK, you must grant it permission to call kms:Decrypt
on the AWS KMS CMK returned by the command. For more information, see Grant the role permission to access the certificate and encryption key in the AWS Nitro Enclaves User Guide.
Creates a placement group in which to launch instances. The strategy of the placement group determines how the instances are organized within the group.
A cluster
 placement group is a logical grouping of instances within a single Availability Zone that benefit from low network latency and high network throughput. A <code>spread</code>
placement group places instances on distinct hardware. A partition
placement group places groups of instances in different partitions, where instances in one partition do not share the same hardware with instances in another partition.
For more information, see Placement groups in the Amazon EC2 User Guide.
" }, + "CreateReplaceRootVolumeTask":{ + "name":"CreateReplaceRootVolumeTask", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateReplaceRootVolumeTaskRequest"}, + "output":{"shape":"CreateReplaceRootVolumeTaskResult"}, + "documentation":"Creates a root volume replacement task for an Amazon EC2 instance. The root volume can either be restored to its initial launch state, or it can be restored using a specific snapshot.
For more information, see Replace a root volume in the Amazon Elastic Compute Cloud User Guide.
" + }, "CreateReservedInstancesListing":{ "name":"CreateReservedInstancesListing", "http":{ @@ -2312,6 +2322,16 @@ "output":{"shape":"DescribeRegionsResult"}, "documentation":"Describes the Regions that are enabled for your account, or all Regions.
For a list of the Regions supported by Amazon EC2, see Regions and Endpoints.
For information about enabling and disabling Regions for your account, see Managing AWS Regions in the AWS General Reference.
" }, + "DescribeReplaceRootVolumeTasks":{ + "name":"DescribeReplaceRootVolumeTasks", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeReplaceRootVolumeTasksRequest"}, + "output":{"shape":"DescribeReplaceRootVolumeTasksResult"}, + "documentation":"Describes a root volume replacement task. For more information, see Replace a root volume in the Amazon Elastic Compute Cloud User Guide.
" + }, "DescribeReservedInstances":{ "name":"DescribeReservedInstances", "http":{ @@ -2859,6 +2879,16 @@ "output":{"shape":"DisableFastSnapshotRestoresResult"}, "documentation":"Disables fast snapshot restores for the specified snapshots in the specified Availability Zones.
" }, + "DisableSerialConsoleAccess":{ + "name":"DisableSerialConsoleAccess", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DisableSerialConsoleAccessRequest"}, + "output":{"shape":"DisableSerialConsoleAccessResult"}, + "documentation":"Disables access to the EC2 serial console of all instances for your account. By default, access to the EC2 serial console is disabled for your account. For more information, see Manage account access to the EC2 serial console in the Amazon EC2 User Guide.
" + }, "DisableTransitGatewayRouteTablePropagation":{ "name":"DisableTransitGatewayRouteTablePropagation", "http":{ @@ -2994,7 +3024,7 @@ }, "input":{"shape":"EnableEbsEncryptionByDefaultRequest"}, "output":{"shape":"EnableEbsEncryptionByDefaultResult"}, - "documentation":"Enables EBS encryption by default for your account in the current Region.
After you enable encryption by default, the EBS volumes that you create are are always encrypted, either using the default CMK or the CMK that you specified when you created each volume. For more information, see Amazon EBS encryption in the Amazon Elastic Compute Cloud User Guide.
You can specify the default CMK for encryption by default using ModifyEbsDefaultKmsKeyId or ResetEbsDefaultKmsKeyId.
Enabling encryption by default has no effect on the encryption status of your existing volumes.
After you enable encryption by default, you can no longer launch instances using instance types that do not support encryption. For more information, see Supported instance types.
" + "documentation":"Enables EBS encryption by default for your account in the current Region.
After you enable encryption by default, the EBS volumes that you create are always encrypted, either using the default CMK or the CMK that you specified when you created each volume. For more information, see Amazon EBS encryption in the Amazon Elastic Compute Cloud User Guide.
You can specify the default CMK for encryption by default using ModifyEbsDefaultKmsKeyId or ResetEbsDefaultKmsKeyId.
Enabling encryption by default has no effect on the encryption status of your existing volumes.
After you enable encryption by default, you can no longer launch instances using instance types that do not support encryption. For more information, see Supported instance types.
" }, "EnableFastSnapshotRestores":{ "name":"EnableFastSnapshotRestores", @@ -3006,6 +3036,16 @@ "output":{"shape":"EnableFastSnapshotRestoresResult"}, "documentation":"Enables fast snapshot restores for the specified snapshots in the specified Availability Zones.
You get the full benefit of fast snapshot restores after they enter the enabled
state. To get the current state of fast snapshot restores, use DescribeFastSnapshotRestores. To disable fast snapshot restores, use DisableFastSnapshotRestores.
For more information, see Amazon EBS fast snapshot restore in the Amazon Elastic Compute Cloud User Guide.
" }, + "EnableSerialConsoleAccess":{ + "name":"EnableSerialConsoleAccess", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"EnableSerialConsoleAccessRequest"}, + "output":{"shape":"EnableSerialConsoleAccessResult"}, + "documentation":"Enables access to the EC2 serial console of all instances for your account. By default, access to the EC2 serial console is disabled for your account. For more information, see Manage account access to the EC2 serial console in the Amazon EC2 User Guide.
" + }, "EnableTransitGatewayRouteTablePropagation":{ "name":"EnableTransitGatewayRouteTablePropagation", "http":{ @@ -3254,6 +3294,16 @@ "output":{"shape":"GetReservedInstancesExchangeQuoteResult"}, "documentation":"Returns a quote and exchange information for exchanging one or more specified Convertible Reserved Instances for a new Convertible Reserved Instance. If the exchange cannot be performed, the reason is returned in the response. Use AcceptReservedInstancesExchangeQuote to perform the exchange.
" }, + "GetSerialConsoleAccessStatus":{ + "name":"GetSerialConsoleAccessStatus", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetSerialConsoleAccessStatusRequest"}, + "output":{"shape":"GetSerialConsoleAccessStatusResult"}, + "documentation":"Retrieves the access status of your account to the EC2 serial console of all instances. By default, access to the EC2 serial console is disabled for your account. For more information, see Manage account access to the EC2 serial console in the Amazon EC2 User Guide.
" + }, "GetTransitGatewayAttachmentPropagations":{ "name":"GetTransitGatewayAttachmentPropagations", "http":{ @@ -9931,6 +9981,44 @@ } } }, + "CreateReplaceRootVolumeTaskRequest":{ + "type":"structure", + "required":["InstanceId"], + "members":{ + "InstanceId":{ + "shape":"InstanceId", + "documentation":"The ID of the instance for which to replace the root volume.
" + }, + "SnapshotId":{ + "shape":"SnapshotId", + "documentation":"The ID of the snapshot from which to restore the replacement root volume. If you want to restore the volume to the initial launch state, omit this parameter.
" + }, + "ClientToken":{ + "shape":"String", + "documentation":"Unique, case-sensitive identifier you provide to ensure the idempotency of the request. If you do not specify a client token, a randomly generated token is used for the request to ensure idempotency. For more information, see Ensuring Idempotency.
", + "idempotencyToken":true + }, + "DryRun":{ + "shape":"Boolean", + "documentation":"Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
The tags to apply to the root volume replacement task.
", + "locationName":"TagSpecification" + } + } + }, + "CreateReplaceRootVolumeTaskResult":{ + "type":"structure", + "members":{ + "ReplaceRootVolumeTask":{ + "shape":"ReplaceRootVolumeTask", + "documentation":"Information about the root volume replacement task.
", + "locationName":"replaceRootVolumeTask" + } + } + }, "CreateReservedInstancesListingRequest":{ "type":"structure", "required":[ @@ -16225,6 +16313,53 @@ } } }, + "DescribeReplaceRootVolumeTasksMaxResults":{ + "type":"integer", + "max":50, + "min":1 + }, + "DescribeReplaceRootVolumeTasksRequest":{ + "type":"structure", + "members":{ + "ReplaceRootVolumeTaskIds":{ + "shape":"ReplaceRootVolumeTaskIds", + "documentation":"The ID of the root volume replacement task to view.
", + "locationName":"ReplaceRootVolumeTaskId" + }, + "Filters":{ + "shape":"FilterList", + "documentation":"Filter to use:
instance-id
- The ID of the instance for which the root volume replacement task was created.
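
A hedged sketch of using this instance-id filter from the AWS SDK for Java 2.x (IDs hypothetical, codegen naming assumed; see the state helper after the ReplaceRootVolumeTask shape below):

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.Filter;

public class DescribeReplaceRootVolumeTasksSketch {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            ec2.describeReplaceRootVolumeTasks(r -> r.filters(
                        Filter.builder().name("instance-id").values("i-0123456789abcdef0").build()))
               .replaceRootVolumeTasks()
               .forEach(t -> System.out.println(
                        t.replaceRootVolumeTaskId() + " -> " + t.taskStateAsString()));
        }
    }
}
```
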
The maximum number of results to return with a single call. To retrieve the remaining results, make another call with the returned nextToken
value.
The token for the next page of results.
" + }, + "DryRun":{ + "shape":"Boolean", + "documentation":"Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Information about the root volume replacement task.
", + "locationName":"replaceRootVolumeTaskSet" + }, + "NextToken":{ + "shape":"String", + "documentation":"The token to use to retrieve the next page of results. This value is null
when there are no more results to return.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
If true
, access to the EC2 serial console of all instances is enabled for your account. If false
, access to the EC2 serial console of all instances is disabled for your account.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
If true
, access to the EC2 serial console of all instances is enabled for your account. If false
, access to the EC2 serial console of all instances is disabled for your account.
Contains the output of GetReservedInstancesExchangeQuote.
" }, + "GetSerialConsoleAccessStatusRequest":{ + "type":"structure", + "members":{ + "DryRun":{ + "shape":"Boolean", + "documentation":"Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
If true
, access to the EC2 serial console of all instances is enabled for your account. If false
, access to the EC2 serial console of all instances is disabled for your account.
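
Taken together, these account-level shapes map to a small AWS SDK for Java 2.x sketch (assuming the usual codegen naming for the request and result classes defined here):

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.EnableSerialConsoleAccessRequest;
import software.amazon.awssdk.services.ec2.model.GetSerialConsoleAccessStatusRequest;

public class SerialConsoleAccessSketch {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            // Account-wide toggle: enable access, then read the flag back.
            ec2.enableSerialConsoleAccess(EnableSerialConsoleAccessRequest.builder().build());
            boolean enabled = ec2.getSerialConsoleAccessStatus(
                    GetSerialConsoleAccessStatusRequest.builder().build())
                    .serialConsoleAccessEnabled();
            System.out.println("Serial console access enabled: " + enabled);
        }
    }
}
```
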
The target IOPS rate of the volume. This parameter is valid only for gp3
, io1
, and io2
volumes.
The following are the supported values for each volume type:
gp3
: 3,000-16,000 IOPS
io1
: 100-64,000 IOPS
io2
: 100-64,000 IOPS
Default: If no IOPS value is specified, the existing value is retained.
" + "documentation":"The target IOPS rate of the volume. This parameter is valid only for gp3
, io1
, and io2
volumes.
The following are the supported values for each volume type:
gp3
: 3,000-16,000 IOPS
io1
: 100-64,000 IOPS
io2
: 100-64,000 IOPS
Default: If no IOPS value is specified, the existing value is retained, unless a volume type is modified that supports different values.
" }, "Throughput":{ "shape":"Integer", @@ -33277,6 +33469,68 @@ } } }, + "ReplaceRootVolumeTask":{ + "type":"structure", + "members":{ + "ReplaceRootVolumeTaskId":{ + "shape":"ReplaceRootVolumeTaskId", + "documentation":"The ID of the root volume replacement task.
", + "locationName":"replaceRootVolumeTaskId" + }, + "InstanceId":{ + "shape":"String", + "documentation":"The ID of the instance for which the root volume replacement task was created.
", + "locationName":"instanceId" + }, + "TaskState":{ + "shape":"ReplaceRootVolumeTaskState", + "documentation":"The state of the task. The task can be in one of the following states:
pending
- the replacement volume is being created.
in-progress
- the original volume is being detached and the replacement volume is being attached.
succeeded
- the replacement volume has been successfully attached to the instance and the instance is available.
failing
- the replacement task is in the process of failing.
failed
- the replacement task has failed but the original root volume is still attached.
failing-detached
- the replacement task is in the process of failing. The instance might have no root volume attached.
failed-detached
- the replacement task has failed and the instance has no root volume attached.
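
For callers polling this field, a small helper can classify the documented states. Note that the accompanying ReplaceRootVolumeTaskState enum does not list failing-detached even though the text above describes it, so this sketch (codegen naming assumed) treats only the succeeded and failed states as terminal:

```java
import software.amazon.awssdk.services.ec2.model.ReplaceRootVolumeTaskState;

public final class ReplaceRootVolumeTaskStates {
    private ReplaceRootVolumeTaskStates() {}

    // pending, in-progress, and failing are transitional; the rest are terminal.
    public static boolean isTerminal(ReplaceRootVolumeTaskState state) {
        switch (state) {
            case SUCCEEDED:
            case FAILED:
            case FAILED_DETACHED:
                return true;
            default: // also covers UNKNOWN_TO_SDK_VERSION
                return false;
        }
    }
}
```
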
The time the task was started.
", + "locationName":"startTime" + }, + "CompleteTime":{ + "shape":"String", + "documentation":"The time the task completed.
", + "locationName":"completeTime" + }, + "Tags":{ + "shape":"TagList", + "documentation":"The tags assigned to the task.
", + "locationName":"tagSet" + } + }, + "documentation":"Information about a root volume replacement task.
" + }, + "ReplaceRootVolumeTaskId":{"type":"string"}, + "ReplaceRootVolumeTaskIds":{ + "type":"list", + "member":{ + "shape":"ReplaceRootVolumeTaskId", + "locationName":"ReplaceRootVolumeTaskId" + } + }, + "ReplaceRootVolumeTaskState":{ + "type":"string", + "enum":[ + "pending", + "in-progress", + "failing", + "succeeded", + "failed", + "failed-detached" + ] + }, + "ReplaceRootVolumeTasks":{ + "type":"list", + "member":{ + "shape":"ReplaceRootVolumeTask", + "locationName":"item" + } + }, "ReplaceRouteRequest":{ "type":"structure", "required":["RouteTableId"], diff --git a/services/ec2/src/main/resources/codegen-resources/waiters-2.json b/services/ec2/src/main/resources/codegen-resources/waiters-2.json index 31c1513e3c8e..9dea88baad0a 100755 --- a/services/ec2/src/main/resources/codegen-resources/waiters-2.json +++ b/services/ec2/src/main/resources/codegen-resources/waiters-2.json @@ -391,7 +391,7 @@ "argument": "length(SecurityGroups[].GroupId) > `0`" }, { - "expected": "InvalidGroupNotFound", + "expected": "InvalidGroup.NotFound", "matcher": "error", "state": "retry" } From 5ae5c329fce0557d0c9f511621e93dda6c59af56 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Tue, 30 Mar 2021 18:08:45 +0000 Subject: [PATCH 08/12] AWS Config Update: Adding new APIs to support ConformancePack Compliance CI in Aggregators --- .../feature-AWSConfig-4c475e6.json | 6 + .../codegen-resources/service-2.json | 260 +++++++++++++++++- 2 files changed, 258 insertions(+), 8 deletions(-) create mode 100644 .changes/next-release/feature-AWSConfig-4c475e6.json diff --git a/.changes/next-release/feature-AWSConfig-4c475e6.json b/.changes/next-release/feature-AWSConfig-4c475e6.json new file mode 100644 index 000000000000..452ebba3c08d --- /dev/null +++ b/.changes/next-release/feature-AWSConfig-4c475e6.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS Config", + "contributor": "", + "description": "Adding new APIs to support ConformancePack Compliance CI in Aggregators" +} diff --git a/services/config/src/main/resources/codegen-resources/service-2.json b/services/config/src/main/resources/codegen-resources/service-2.json index f976f905199e..de0b6dc5753b 100755 --- a/services/config/src/main/resources/codegen-resources/service-2.json +++ b/services/config/src/main/resources/codegen-resources/service-2.json @@ -269,6 +269,22 @@ ], "documentation":"Returns a list of compliant and noncompliant rules with the number of resources for compliant and noncompliant rules.
The results can return an empty result page, but if you have a nextToken
, the results are displayed on the next page.
Returns a list of the conformance packs and their associated compliance status with the count of compliant and noncompliant AWS Config rules within each conformance pack.
The results can return an empty result page, but if you have a nextToken
, the results are displayed on the next page.
Returns the number of compliant and noncompliant rules for one or more accounts and regions in an aggregator.
The results can return an empty result page, but if you have a nextToken, the results are displayed on the next page.
Returns the count of compliant and noncompliant conformance packs across all AWS Accounts and AWS Regions. You can filter based on AWS Account ID or AWS Region.
The results can return an empty result page, but if you have a nextToken, the results are displayed on the next page.
Deploys conformance packs across member accounts in an AWS Organization.
Only a master account and a delegated administrator can call this API. When calling this API with a delegated administrator, you must ensure AWS Organizations ListDelegatedAdministrator
permissions are added.
This API enables organization service access for config-multiaccountsetup.amazonaws.com
through the EnableAWSServiceAccess
action and creates a service linked role AWSServiceRoleForConfigMultiAccountSetup
in the master or delegated administrator account of your organization. The service linked role is created only when the role does not exist in the caller account. To use this API with delegated administrator, register a delegated administrator by calling AWS Organization register-delegate-admin
for config-multiaccountsetup.amazonaws.com
.
Prerequisite: Ensure you call EnableAllFeatures
API to enable all features in an organization.
You must specify either the TemplateS3Uri
or the TemplateBody
parameter, but not both. If you provide both AWS Config uses the TemplateS3Uri
parameter and ignores the TemplateBody
parameter.
AWS Config sets the state of a conformance pack to CREATE_IN_PROGRESS and UPDATE_IN_PROGRESS until the conformance pack is created or updated. You cannot update a conformance pack while it is in this state.
You can create 6 conformance packs with 25 AWS Config rules in each pack and 3 delegated administrator per organization.
Deploys conformance packs across member accounts in an AWS Organization.
Only a master account and a delegated administrator can call this API. When calling this API with a delegated administrator, you must ensure AWS Organizations ListDelegatedAdministrator
permissions are added.
This API enables organization service access for config-multiaccountsetup.amazonaws.com
through the EnableAWSServiceAccess
action and creates a service linked role AWSServiceRoleForConfigMultiAccountSetup
in the master or delegated administrator account of your organization. The service linked role is created only when the role does not exist in the caller account. To use this API with delegated administrator, register a delegated administrator by calling AWS Organization register-delegate-admin
for config-multiaccountsetup.amazonaws.com
.
Prerequisite: Ensure you call EnableAllFeatures
API to enable all features in an organization.
You must specify either the TemplateS3Uri
or the TemplateBody
parameter, but not both. If you provide both AWS Config uses the TemplateS3Uri
parameter and ignores the TemplateBody
parameter.
AWS Config sets the state of a conformance pack to CREATE_IN_PROGRESS and UPDATE_IN_PROGRESS until the conformance pack is created or updated. You cannot update a conformance pack while it is in this state.
You can create 50 conformance packs with 25 AWS Config rules in each pack and 3 delegated administrator per organization.
The name of the conformance pack.
" + }, + "Compliance":{ + "shape":"AggregateConformancePackCompliance", + "documentation":"The compliance status of the conformance pack.
" + }, + "AccountId":{ + "shape":"AccountId", + "documentation":"The 12-digit AWS account ID of the source account.
" + }, + "AwsRegion":{ + "shape":"AwsRegion", + "documentation":"The source AWS Region from where the data is aggregated.
" + } + }, + "documentation":"Provides aggregate compliance of the conformance pack. Indicates whether a conformance pack is compliant based on the name of the conformance pack, account ID, and region.
A conformance pack is compliant if all of the rules in that conformance packs are compliant. It is noncompliant if any of the rules are not compliant.
If a conformance pack has rules that return INSUFFICIENT_DATA, the conformance pack returns INSUFFICIENT_DATA only if all the rules within that conformance pack return INSUFFICIENT_DATA. If some of the rules in a conformance pack are compliant and others return INSUFFICIENT_DATA, the conformance pack shows compliant.
The compliance status of the conformance pack.
" + }, + "CompliantRuleCount":{ + "shape":"Integer", + "documentation":"The number of compliant AWS Config Rules.
" + }, + "NonCompliantRuleCount":{ + "shape":"Integer", + "documentation":"The number of noncompliant AWS Config Rules.
" + }, + "TotalRuleCount":{ + "shape":"Integer", + "documentation":"Total number of compliant rules, noncompliant rules, and the rules that do not have any applicable resources to evaluate upon resulting in insufficient data.
" + } + }, + "documentation":"Provides the number of compliant and noncompliant rules within a conformance pack. Also provides the total count of compliant rules, noncompliant rules, and the rules that do not have any applicable resources to evaluate upon resulting in insufficient data.
" + }, + "AggregateConformancePackComplianceCount":{ + "type":"structure", + "members":{ + "CompliantConformancePackCount":{ + "shape":"Integer", + "documentation":"Number of compliant conformance packs.
" + }, + "NonCompliantConformancePackCount":{ + "shape":"Integer", + "documentation":"Number of noncompliant conformance packs.
" + } + }, + "documentation":"The number of conformance packs that are compliant and noncompliant.
" + }, + "AggregateConformancePackComplianceFilters":{ + "type":"structure", + "members":{ + "ConformancePackName":{ + "shape":"ConformancePackName", + "documentation":"The name of the conformance pack.
" + }, + "ComplianceType":{ + "shape":"ConformancePackComplianceType", + "documentation":"The compliance status of the conformance pack.
" + }, + "AccountId":{ + "shape":"AccountId", + "documentation":"The 12-digit AWS account ID of the source account.
" + }, + "AwsRegion":{ + "shape":"AwsRegion", + "documentation":"The source AWS Region from where the data is aggregated.
" + } + }, + "documentation":"Filters the conformance packs based on an account ID, region, compliance type, and the name of the conformance pack.
" + }, + "AggregateConformancePackComplianceSummary":{ + "type":"structure", + "members":{ + "ComplianceSummary":{ + "shape":"AggregateConformancePackComplianceCount", + "documentation":"Returns an AggregateConformancePackComplianceCount
object.
Groups the result based on AWS Account ID or AWS Region.
" + } + }, + "documentation":"Provides a summary of compliance based on either account ID or region.
" + }, + "AggregateConformancePackComplianceSummaryFilters":{ + "type":"structure", + "members":{ + "AccountId":{ + "shape":"AccountId", + "documentation":"The 12-digit AWS account ID of the source account.
" + }, + "AwsRegion":{ + "shape":"AwsRegion", + "documentation":"The source AWS Region from where the data is aggregated.
" + } + }, + "documentation":"Filters the results based on account ID and region.
" + }, + "AggregateConformancePackComplianceSummaryGroupKey":{ + "type":"string", + "enum":[ + "ACCOUNT_ID", + "AWS_REGION" + ] + }, + "AggregateConformancePackComplianceSummaryList":{ + "type":"list", + "member":{"shape":"AggregateConformancePackComplianceSummary"} + }, "AggregateEvaluationResult":{ "type":"structure", "members":{ @@ -2244,7 +2399,7 @@ }, "ComplianceType":{ "shape":"ConformancePackComplianceType", - "documentation":"Filters the results by compliance.
The allowed values are COMPLIANT
and NON_COMPLIANT
.
Filters the results by compliance.
The allowed values are COMPLIANT
and NON_COMPLIANT
. INSUFFICIENT_DATA
is not supported.
Filters the conformance pack by compliance types and AWS Config rule names.
" @@ -2268,7 +2423,7 @@ }, "ConformancePackComplianceStatus":{ "shape":"ConformancePackComplianceType", - "documentation":"The status of the conformance pack. The allowed values are COMPLIANT and NON_COMPLIANT.
" + "documentation":"The status of the conformance pack. The allowed values are COMPLIANT
, NON_COMPLIANT
and INSUFFICIENT_DATA
.
Summary includes the name and status of the conformance pack.
" @@ -2351,7 +2506,7 @@ }, "ComplianceType":{ "shape":"ConformancePackComplianceType", - "documentation":"Filters the results by compliance.
The allowed values are COMPLIANT
and NON_COMPLIANT
.
Filters the results by compliance.
The allowed values are COMPLIANT
and NON_COMPLIANT
. INSUFFICIENT_DATA
is not supported.
The compliance type. The allowed values are COMPLIANT
and NON_COMPLIANT
.
The compliance type. The allowed values are COMPLIANT
and NON_COMPLIANT
. INSUFFICIENT_DATA
is not supported.
Compliance of the AWS Config rule
The allowed values are COMPLIANT
and NON_COMPLIANT
.
Compliance of the AWS Config rule.
The allowed values are COMPLIANT
, NON_COMPLIANT
, and INSUFFICIENT_DATA
.
Controls for the conformance pack. A control is a process to prevent or detect problems while meeting objectives. A control can align with a specific compliance regime or map to internal controls defined by an organization.
" } }, "documentation":"Compliance information of one or more AWS Config rules within a conformance pack. You can filter using AWS Config rule names and compliance types.
" @@ -2540,6 +2699,12 @@ "documentation":"You have specified a template that is not valid or supported.
", "exception":true }, + "ControlsList":{ + "type":"list", + "member":{"shape":"StringWithCharLimit128"}, + "max":20, + "min":0 + }, "CosmosPageLimit":{ "type":"integer", "max":100, @@ -2895,6 +3060,41 @@ } } }, + "DescribeAggregateComplianceByConformancePacksRequest":{ + "type":"structure", + "required":["ConfigurationAggregatorName"], + "members":{ + "ConfigurationAggregatorName":{ + "shape":"ConfigurationAggregatorName", + "documentation":"The name of the configuration aggregator.
" + }, + "Filters":{ + "shape":"AggregateConformancePackComplianceFilters", + "documentation":"Filters the result by AggregateConformancePackComplianceFilters
object.
The maximum number of conformance packs details returned on each page. The default is maximum. If you specify 0, AWS Config uses the default.
" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"The nextToken
string returned on a previous page that you use to get the next page of results in a paginated response.
Returns the AggregateComplianceByConformancePack
object.
The nextToken
string returned on a previous page that you use to get the next page of results in a paginated response.
The name of the configuration aggregator.
" + }, + "Filters":{ + "shape":"AggregateConformancePackComplianceSummaryFilters", + "documentation":"Filters the results based on the AggregateConformancePackComplianceSummaryFilters
object.
Groups the result based on AWS Account ID or AWS Region.
" + }, + "Limit":{ + "shape":"Limit", + "documentation":"The maximum number of results returned on each page. The default is maximum. If you specify 0, AWS Config uses the default.
" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"The nextToken
string returned on a previous page that you use to get the next page of results in a paginated response.
Returns a list of AggregateConformancePackComplianceSummary
object.
Groups the result based on AWS Account ID or AWS Region.
" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"The nextToken
string returned on a previous page that you use to get the next page of results in a paginated response.
Amazon S3 bucket where AWS Config stores conformance pack templates.
This field is optional.
Amazon S3 bucket where AWS Config stores conformance pack templates.
This field is optional. If used, it must be prefixed with awsconfigconforms
.
A comma-separated list that specifies the types of AWS resources for which AWS Config records configuration changes (for example, AWS::EC2::Instance
or AWS::CloudTrail::Trail
).
To record all configuration changes, you must set the allSupported
option to false
.
If you set this option to true
, when AWS Config adds support for a new type of resource, it will not record resources of that type unless you manually add that type to your recording group.
For a list of valid resourceTypes
values, see the resourceType Value column in Supported AWS Resource Types.
A comma-separated list that specifies the types of AWS resources for which AWS Config records configuration changes (for example, AWS::EC2::Instance
or AWS::CloudTrail::Trail
).
To record all configuration changes, you must set the allSupported
option to true
.
If you set this option to false
, when AWS Config adds support for a new type of resource, it will not record resources of that type unless you manually add that type to your recording group.
For a list of valid resourceTypes
values, see the resourceType Value column in Supported AWS Resource Types.
Specifies the types of AWS resource for which AWS Config records configuration changes.
In the recording group, you specify whether all supported types or specific types of resources are recorded.
By default, AWS Config records configuration changes for all supported types of regional resources that AWS Config discovers in the region in which it is running. Regional resources are tied to a region and can be used only in that region. Examples of regional resources are EC2 instances and EBS volumes.
You can also have AWS Config record configuration changes for supported types of global resources (for example, IAM resources). Global resources are not tied to an individual region and can be used in all regions.
The configuration details for any global resource are the same in all regions. If you customize AWS Config in multiple regions to record global resources, it will create multiple configuration items each time a global resource changes: one configuration item for each region. These configuration items will contain identical data. To prevent duplicate configuration items, you should consider customizing AWS Config in only one region to record global resources, unless you want the configuration items to be available in multiple regions.
If you don't want AWS Config to record all resources, you can specify which types of resources it will record with the resourceTypes
parameter.
For a list of supported resource types, see Supported Resource Types.
For more information, see Selecting Which Resources AWS Config Records.
" @@ -6317,6 +6560,7 @@ "AWS::SSM::PatchCompliance", "AWS::Shield::Protection", "AWS::ShieldRegional::Protection", + "AWS::Config::ConformancePackCompliance", "AWS::Config::ResourceCompliance", "AWS::ApiGateway::Stage", "AWS::ApiGateway::RestApi", From 74e891c52f53b7754082cfc34e721580bc443b7a Mon Sep 17 00:00:00 2001 From: AWS <> Date: Tue, 30 Mar 2021 18:08:44 +0000 Subject: [PATCH 09/12] AWS EC2 Instance Connect Update: Adding support to push SSH keys to the EC2 serial console in order to allow an SSH connection to your Amazon EC2 instance's serial port. --- ...feature-AWSEC2InstanceConnect-8170d8c.json | 6 + .../codegen-resources/service-2.json | 119 ++++++++++++++++-- 2 files changed, 112 insertions(+), 13 deletions(-) create mode 100644 .changes/next-release/feature-AWSEC2InstanceConnect-8170d8c.json diff --git a/.changes/next-release/feature-AWSEC2InstanceConnect-8170d8c.json b/.changes/next-release/feature-AWSEC2InstanceConnect-8170d8c.json new file mode 100644 index 000000000000..0998bbaa62c7 --- /dev/null +++ b/.changes/next-release/feature-AWSEC2InstanceConnect-8170d8c.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS EC2 Instance Connect", + "contributor": "", + "description": "Adding support to push SSH keys to the EC2 serial console in order to allow an SSH connection to your Amazon EC2 instance's serial port." +} diff --git a/services/ec2instanceconnect/src/main/resources/codegen-resources/service-2.json b/services/ec2instanceconnect/src/main/resources/codegen-resources/service-2.json index 55b2911fe7d0..cee350c406f2 100644 --- a/services/ec2instanceconnect/src/main/resources/codegen-resources/service-2.json +++ b/services/ec2instanceconnect/src/main/resources/codegen-resources/service-2.json @@ -28,7 +28,28 @@ {"shape":"ThrottlingException"}, {"shape":"EC2InstanceNotFoundException"} ], - "documentation":"Pushes an SSH public key to a particular OS user on a given EC2 instance for 60 seconds.
" + "documentation":"Pushes an SSH public key to the specified EC2 instance for use by the specified user. The key remains for 60 seconds. For more information, see Connect to your Linux instance using EC2 Instance Connect in the Amazon EC2 User Guide.
" + }, + "SendSerialConsoleSSHPublicKey":{ + "name":"SendSerialConsoleSSHPublicKey", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"SendSerialConsoleSSHPublicKeyRequest"}, + "output":{"shape":"SendSerialConsoleSSHPublicKeyResponse"}, + "errors":[ + {"shape":"AuthException"}, + {"shape":"SerialConsoleAccessDisabledException"}, + {"shape":"InvalidArgsException"}, + {"shape":"ServiceException"}, + {"shape":"ThrottlingException"}, + {"shape":"EC2InstanceNotFoundException"}, + {"shape":"EC2InstanceTypeInvalidException"}, + {"shape":"SerialConsoleSessionLimitExceededException"}, + {"shape":"SerialConsoleSessionUnavailableException"} + ], + "documentation":"Pushes an SSH public key to the specified EC2 instance. The key remains for 60 seconds, which gives you 60 seconds to establish a serial console connection to the instance using SSH. For more information, see EC2 Serial Console in the Amazon EC2 User Guide.
" } }, "shapes":{ @@ -37,7 +58,7 @@ "members":{ "Message":{"shape":"String"} }, - "documentation":"Indicates that either your AWS credentials are invalid or you do not have access to the EC2 instance.
", + "documentation":"Either your AWS credentials are not valid or you do not have access to the EC2 instance.
", "exception":true }, "AvailabilityZone":{ @@ -51,7 +72,15 @@ "members":{ "Message":{"shape":"String"} }, - "documentation":"Indicates that the instance requested was not found in the given zone. Check that you have provided a valid instance ID and the correct zone.
", + "documentation":"The specified instance was not found.
", + "exception":true + }, + "EC2InstanceTypeInvalidException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"The instance type is not supported for connecting via the serial console. Only Nitro instance types are currently supported.
", "exception":true }, "InstanceId":{ @@ -71,7 +100,7 @@ "members":{ "Message":{"shape":"String"} }, - "documentation":"Indicates that you provided bad input. Ensure you have a valid instance ID, the correct zone, and a valid SSH public key.
", + "documentation":"One of the parameters is not valid.
", "exception":true }, "RequestId":{"type":"string"}, @@ -91,19 +120,19 @@ "members":{ "InstanceId":{ "shape":"InstanceId", - "documentation":"The EC2 instance you wish to publish the SSH key to.
" + "documentation":"The ID of the EC2 instance.
" }, "InstanceOSUser":{ "shape":"InstanceOSUser", - "documentation":"The OS user on the EC2 instance whom the key may be used to authenticate as.
" + "documentation":"The OS user on the EC2 instance for whom the key can be used to authenticate.
" }, "SSHPublicKey":{ "shape":"SSHPublicKey", - "documentation":"The public key to be published to the instance. To use it after publication you must have the matching private key.
" + "documentation":"The public key material. To use the public key, you must have the matching private key.
" }, "AvailabilityZone":{ "shape":"AvailabilityZone", - "documentation":"The availability zone the EC2 instance was launched in.
" + "documentation":"The Availability Zone in which the EC2 instance was launched.
" } } }, @@ -112,20 +141,84 @@ "members":{ "RequestId":{ "shape":"RequestId", - "documentation":"The request ID as logged by EC2 Connect. Please provide this when contacting AWS Support.
" + "documentation":"The ID of the request. Please provide this ID when contacting AWS Support for assistance.
" }, "Success":{ "shape":"Success", - "documentation":"Indicates request success.
" + "documentation":"Is true if the request succeeds and an error otherwise.
" + } + } + }, + "SendSerialConsoleSSHPublicKeyRequest":{ + "type":"structure", + "required":[ + "InstanceId", + "SSHPublicKey" + ], + "members":{ + "InstanceId":{ + "shape":"InstanceId", + "documentation":"The ID of the EC2 instance.
" + }, + "SerialPort":{ + "shape":"SerialPort", + "documentation":"The serial port of the EC2 instance. Currently only port 0 is supported.
Default: 0
" + }, + "SSHPublicKey":{ + "shape":"SSHPublicKey", + "documentation":"The public key material. To use the public key, you must have the matching private key. For information about the supported key formats and lengths, see Requirements for key pairs in the Amazon EC2 User Guide.
" } } }, + "SendSerialConsoleSSHPublicKeyResponse":{ + "type":"structure", + "members":{ + "RequestId":{ + "shape":"RequestId", + "documentation":"The ID of the request. Please provide this ID when contacting AWS Support for assistance.
" + }, + "Success":{ + "shape":"Success", + "documentation":"Is true if the request succeeds and an error otherwise.
" + } + } + }, + "SerialConsoleAccessDisabledException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"Your account is not authorized to use the EC2 Serial Console. To authorize your account, run the EnableSerialConsoleAccess API. For more information, see EnableSerialConsoleAccess in the Amazon EC2 API Reference.
", + "exception":true + }, + "SerialConsoleSessionLimitExceededException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"The instance currently has 1 active serial console session. Only 1 session is supported at a time.
", + "exception":true + }, + "SerialConsoleSessionUnavailableException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"Unable to start a serial console session. Please try again.
", + "exception":true, + "fault":true + }, + "SerialPort":{ + "type":"integer", + "max":0, + "min":0 + }, "ServiceException":{ "type":"structure", "members":{ "Message":{"shape":"String"} }, - "documentation":"Indicates that the service encountered an error. Follow the message's instructions and try again.
", + "documentation":"The service encountered an error. Follow the instructions in the error message and try again.
", "exception":true, "fault":true }, @@ -136,9 +229,9 @@ "members":{ "Message":{"shape":"String"} }, - "documentation":"Indicates you have been making requests too frequently and have been throttled. Wait for a while and try again. If higher call volume is warranted contact AWS Support.
", + "documentation":"The requests were made too frequently and have been throttled. Wait a while and try again. To increase the limit on your request frequency, contact AWS Support.
", "exception":true } }, - "documentation":"AWS EC2 Connect Service is a service that enables system administrators to publish temporary SSH keys to their EC2 instances in order to establish connections to their instances without leaving a permanent authentication option.
" + "documentation":"Amazon EC2 Instance Connect enables system administrators to publish one-time use SSH public keys to EC2, providing users a simple and secure way to connect to their instances.
" } From d4e46a40f7a630bf7aa80ba8d3a24cb57cd78aa2 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Tue, 30 Mar 2021 18:09:11 +0000 Subject: [PATCH 10/12] Amazon CloudWatch Update: SDK update for new Metric Streams feature --- .../feature-AmazonCloudWatch-4a97487.json | 6 + .../codegen-resources/paginators-1.json | 5 + .../codegen-resources/service-2.json | 367 +++++++++++++++++- 3 files changed, 376 insertions(+), 2 deletions(-) create mode 100644 .changes/next-release/feature-AmazonCloudWatch-4a97487.json diff --git a/.changes/next-release/feature-AmazonCloudWatch-4a97487.json b/.changes/next-release/feature-AmazonCloudWatch-4a97487.json new file mode 100644 index 000000000000..e19f61acf069 --- /dev/null +++ b/.changes/next-release/feature-AmazonCloudWatch-4a97487.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon CloudWatch", + "contributor": "", + "description": "SDK update for new Metric Streams feature" +} diff --git a/services/cloudwatch/src/main/resources/codegen-resources/paginators-1.json b/services/cloudwatch/src/main/resources/codegen-resources/paginators-1.json index 2e351eb7fd20..1990f17a3f6b 100644 --- a/services/cloudwatch/src/main/resources/codegen-resources/paginators-1.json +++ b/services/cloudwatch/src/main/resources/codegen-resources/paginators-1.json @@ -37,6 +37,11 @@ "output_token": "NextToken", "result_key": "DashboardEntries" }, + "ListMetricStreams": { + "input_token": "NextToken", + "limit_key": "MaxResults", + "output_token": "NextToken" + }, "ListMetrics": { "input_token": "NextToken", "output_token": "NextToken", diff --git a/services/cloudwatch/src/main/resources/codegen-resources/service-2.json b/services/cloudwatch/src/main/resources/codegen-resources/service-2.json index 33b60db887ce..8e50bd8dbb82 100644 --- a/services/cloudwatch/src/main/resources/codegen-resources/service-2.json +++ b/services/cloudwatch/src/main/resources/codegen-resources/service-2.json @@ -78,6 +78,24 @@ ], "documentation":"Permanently deletes the specified Contributor Insights rules.
If you create a rule, delete it, and then re-create it with the same name, historical data from the first time the rule was created might not be available.
" }, + "DeleteMetricStream":{ + "name":"DeleteMetricStream", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteMetricStreamInput"}, + "output":{ + "shape":"DeleteMetricStreamOutput", + "resultWrapper":"DeleteMetricStreamResult" + }, + "errors":[ + {"shape":"InternalServiceFault"}, + {"shape":"InvalidParameterValueException"}, + {"shape":"MissingRequiredParameterException"} + ], + "documentation":"Permanently deletes the metric stream that you specify.
" + }, "DescribeAlarmHistory":{ "name":"DescribeAlarmHistory", "http":{ @@ -281,6 +299,26 @@ ], "documentation":"Gets statistics for the specified metric.
The maximum number of data points returned from a single call is 1,440. If you request more than 1,440 data points, CloudWatch returns an error. To reduce the number of data points, you can narrow the specified time range and make multiple requests across adjacent time ranges, or you can increase the specified period. Data points are not returned in chronological order.
CloudWatch aggregates data points based on the length of the period that you specify. For example, if you request statistics with a one-hour period, CloudWatch aggregates all data points with time stamps that fall within each one-hour period. Therefore, the number of values aggregated by CloudWatch is larger than the number of data points returned.
CloudWatch needs raw data points to calculate percentile statistics. If you publish data using a statistic set instead, you can only retrieve percentile statistics for this data if one of the following conditions is true:
The SampleCount value of the statistic set is 1.
The Min and the Max values of the statistic set are equal.
Percentile statistics are not available for metrics when any of the metric values are negative numbers.
Amazon CloudWatch retains metric data as follows:
Data points with a period of less than 60 seconds are available for 3 hours. These data points are high-resolution metrics and are available only for custom metrics that have been defined with a StorageResolution
of 1.
Data points with a period of 60 seconds (1-minute) are available for 15 days.
Data points with a period of 300 seconds (5-minute) are available for 63 days.
Data points with a period of 3600 seconds (1 hour) are available for 455 days (15 months).
Data points that are initially published with a shorter period are aggregated together for long-term storage. For example, if you collect data using a period of 1 minute, the data remains available for 15 days with 1-minute resolution. After 15 days, this data is still available, but is aggregated and retrievable only with a resolution of 5 minutes. After 63 days, the data is further aggregated and is available with a resolution of 1 hour.
CloudWatch started retaining 5-minute and 1-hour metric data as of July 9, 2016.
For information about metrics and dimensions supported by AWS services, see the Amazon CloudWatch Metrics and Dimensions Reference in the Amazon CloudWatch User Guide.
" }, + "GetMetricStream":{ + "name":"GetMetricStream", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetMetricStreamInput"}, + "output":{ + "shape":"GetMetricStreamOutput", + "resultWrapper":"GetMetricStreamResult" + }, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServiceFault"}, + {"shape":"InvalidParameterValueException"}, + {"shape":"MissingRequiredParameterException"}, + {"shape":"InvalidParameterCombinationException"} + ], + "documentation":"Returns information about the metric stream that you specify.
" + }, "GetMetricWidgetImage":{ "name":"GetMetricWidgetImage", "http":{ @@ -311,6 +349,25 @@ ], "documentation":"Returns a list of the dashboards for your account. If you include DashboardNamePrefix
, only those dashboards with names starting with the prefix are listed. Otherwise, all dashboards in your account are listed.
ListDashboards
returns up to 1000 results on one page. If there are more than 1000 dashboards, you can call ListDashboards
again and include the value you received for NextToken
in the first call, to receive the next 1000 results.
Returns a list of metric streams in this account.
" + }, "ListMetrics":{ "name":"ListMetrics", "http":{ @@ -422,7 +479,7 @@ "errors":[ {"shape":"LimitExceededFault"} ], - "documentation":"Creates or updates an alarm and associates it with the specified metric, metric math expression, or anomaly detection model.
Alarms based on anomaly detection models cannot have Auto Scaling actions.
When this operation creates an alarm, the alarm state is immediately set to INSUFFICIENT_DATA
. The alarm is then evaluated and its state is set appropriately. Any actions associated with the new state are then executed.
When you update an existing alarm, its state is left unchanged, but the update completely overwrites the previous configuration of the alarm.
If you are an IAM user, you must have Amazon EC2 permissions for some alarm operations:
The iam:CreateServiceLinkedRole
for all alarms with EC2 actions
The iam:CreateServiceLinkedRole
to create an alarm with Systems Manager OpsItem actions.
The first time you create an alarm in the AWS Management Console, the CLI, or by using the PutMetricAlarm API, CloudWatch creates the necessary service-linked rolea for you. The service-linked roles are called AWSServiceRoleForCloudWatchEvents
and AWSServiceRoleForCloudWatchAlarms_ActionSSM
. For more information, see AWS service-linked role.
Creates or updates an alarm and associates it with the specified metric, metric math expression, or anomaly detection model.
Alarms based on anomaly detection models cannot have Auto Scaling actions.
When this operation creates an alarm, the alarm state is immediately set to INSUFFICIENT_DATA
. The alarm is then evaluated and its state is set appropriately. Any actions associated with the new state are then executed.
When you update an existing alarm, its state is left unchanged, but the update completely overwrites the previous configuration of the alarm.
If you are an IAM user, you must have Amazon EC2 permissions for some alarm operations:
The iam:CreateServiceLinkedRole
for all alarms with EC2 actions
The iam:CreateServiceLinkedRole
to create an alarm with Systems Manager OpsItem actions.
The first time you create an alarm in the AWS Management Console, the CLI, or by using the PutMetricAlarm API, CloudWatch creates the necessary service-linked role for you. The service-linked roles are called AWSServiceRoleForCloudWatchEvents
and AWSServiceRoleForCloudWatchAlarms_ActionSSM
. For more information, see AWS service-linked role.
Publishes metric data points to Amazon CloudWatch. CloudWatch associates the data points with the specified metric. If the specified metric does not exist, CloudWatch creates the metric. When CloudWatch creates a metric, it can take up to fifteen minutes for the metric to appear in calls to ListMetrics.
You can publish either individual data points in the Value
field, or arrays of values and the number of times each value occurred during the period by using the Values
and Counts
fields in the MetricDatum
structure. Using the Values
and Counts
method enables you to publish up to 150 values per metric with one PutMetricData
request, and supports retrieving percentile statistics on this data.
Each PutMetricData
request is limited to 40 KB in size for HTTP POST requests. You can send a payload compressed by gzip. Each request is also limited to no more than 20 different metrics.
Although the Value
parameter accepts numbers of type Double
, CloudWatch rejects values that are either too small or too large. Values must be in the range of -2^360 to 2^360. In addition, special values (for example, NaN, +Infinity, -Infinity) are not supported.
You can use up to 10 dimensions per metric to further clarify what data the metric collects. Each dimension consists of a Name and Value pair. For more information about specifying dimensions, see Publishing Metrics in the Amazon CloudWatch User Guide.
You specify the time stamp to be associated with each data point. You can specify time stamps that are as much as two weeks before the current date, and as much as 2 hours after the current day and time.
Data points with time stamps from 24 hours ago or longer can take at least 48 hours to become available for GetMetricData or GetMetricStatistics from the time they are submitted. Data points with time stamps between 3 and 24 hours ago can take as much as 2 hours to become available for for GetMetricData or GetMetricStatistics.
CloudWatch needs raw data points to calculate percentile statistics. If you publish data using a statistic set instead, you can only retrieve percentile statistics for this data if one of the following conditions is true:
The SampleCount
value of the statistic set is 1 and Min
, Max
, and Sum
are all equal.
The Min
and Max
are equal, and Sum
is equal to Min
multiplied by SampleCount
.
Creates or updates a metric stream. Metric streams can automatically stream CloudWatch metrics to AWS destinations including Amazon S3 and to many third-party solutions.
For more information, see Using Metric Streams.
To create a metric stream, you must be logged on to an account that has the iam:PassRole
permission and either the CloudWatchFullAccess
policy or the cloudwatch:PutMetricStream
permission.
When you create or update a metric stream, you choose one of the following:
Stream metrics from all metric namespaces in the account.
Stream metrics from all metric namespaces in the account, except for the namespaces that you list in ExcludeFilters
.
Stream metrics from only the metric namespaces that you list in IncludeFilters
.
When you use PutMetricStream
to create a new metric stream, the stream is created in the running
state. If you use it to update an existing stream, the state of the stream is not changed.
Temporarily sets the state of an alarm for testing purposes. When the updated state differs from the previous value, the action configured for the appropriate state is invoked. For example, if your alarm is configured to send an Amazon SNS message when an alarm is triggered, temporarily changing the alarm state to ALARM
sends an SNS message.
Metric alarms returns to their actual state quickly, often within seconds. Because the metric alarm state change happens quickly, it is typically only visible in the alarm's History tab in the Amazon CloudWatch console or through DescribeAlarmHistory.
If you use SetAlarmState on a composite alarm, the composite alarm is not guaranteed to return to its actual state. It returns to its actual state only once any of its child alarms change state. It is also reevaluated if you update its configuration.
If an alarm triggers EC2 Auto Scaling policies or application Auto Scaling policies, you must include information in the StateReasonData parameter to enable the policy to take the correct action.
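A sketch of a test call, assuming a hypothetical alarm name; the StateReasonData payload shown is only an illustrative shape, not the exact format required by a given scaling policy:

import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.SetAlarmStateRequest;
import software.amazon.awssdk.services.cloudwatch.model.StateValue;

public class TestAlarmState {
    public static void main(String[] args) {
        try (CloudWatchClient cloudWatch = CloudWatchClient.create()) {
            cloudWatch.setAlarmState(SetAlarmStateRequest.builder()
                    .alarmName("my-scale-out-alarm")        // hypothetical alarm name
                    .stateValue(StateValue.ALARM)
                    .stateReason("Testing scale-out policy")
                    // Illustrative JSON only; supply the data your policy expects.
                    .stateReasonData("{\"version\":\"1.0\",\"queryDate\":\"2021-03-30T00:00:00.000+0000\","
                            + "\"statistic\":\"Average\",\"recentDatapoints\":[90.0],\"threshold\":80.0}")
                    .build());
        }
    }
}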
Starts the streaming of metrics for one or more of your metric streams.
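A sketch of pausing and resuming streaming with the StopMetricStreams and StartMetricStreams operations defined in this diff; both calls are all-or-nothing across the listed names, and the stream name is a placeholder:

import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.StartMetricStreamsRequest;
import software.amazon.awssdk.services.cloudwatch.model.StopMetricStreamsRequest;

public class ToggleMetricStreams {
    public static void main(String[] args) {
        try (CloudWatchClient cloudWatch = CloudWatchClient.create()) {
            // Pause streaming for the named stream.
            cloudWatch.stopMetricStreams(StopMetricStreamsRequest.builder()
                    .names("my-metric-stream")
                    .build());

            // Resume streaming for the same stream.
            cloudWatch.startMetricStreams(StartMetricStreamsRequest.builder()
                    .names("my-metric-stream")
                    .build());
        }
    }
}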
" + }, + "StopMetricStreams":{ + "name":"StopMetricStreams", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"StopMetricStreamsInput"}, + "output":{ + "shape":"StopMetricStreamsOutput", + "resultWrapper":"StopMetricStreamsResult" + }, + "errors":[ + {"shape":"InternalServiceFault"}, + {"shape":"InvalidParameterValueException"}, + {"shape":"MissingRequiredParameterException"} + ], + "documentation":"Stops the streaming of metrics for one or more of your metric streams.
" + }, "TagResource":{ "name":"TagResource", "http":{ @@ -979,6 +1092,21 @@ } } }, + "DeleteMetricStreamInput":{ + "type":"structure", + "required":["Name"], + "members":{ + "Name":{ + "shape":"MetricStreamName", + "documentation":"The name of the metric stream to delete.
" + } + } + }, + "DeleteMetricStreamOutput":{ + "type":"structure", + "members":{ + } + }, "DescribeAlarmHistoryInput":{ "type":"structure", "members":{ @@ -1554,6 +1682,61 @@ } } }, + "GetMetricStreamInput":{ + "type":"structure", + "required":["Name"], + "members":{ + "Name":{ + "shape":"MetricStreamName", + "documentation":"The name of the metric stream to retrieve information about.
" + } + } + }, + "GetMetricStreamOutput":{ + "type":"structure", + "members":{ + "Arn":{ + "shape":"AmazonResourceName", + "documentation":"The ARN of the metric stream.
" + }, + "Name":{ + "shape":"MetricStreamName", + "documentation":"The name of the metric stream.
" + }, + "IncludeFilters":{ + "shape":"MetricStreamFilters", + "documentation":"If this array of metric namespaces is present, then these namespaces are the only metric namespaces that are streamed by this metric stream.
" + }, + "ExcludeFilters":{ + "shape":"MetricStreamFilters", + "documentation":"If this array of metric namespaces is present, then these namespaces are the only metric namespaces that are not streamed by this metric stream. In this case, all other metric namespaces in the account are streamed by this metric stream.
" + }, + "FirehoseArn":{ + "shape":"AmazonResourceName", + "documentation":"The ARN of the Amazon Kinesis Firehose delivery stream that is used by this metric stream.
" + }, + "RoleArn":{ + "shape":"AmazonResourceName", + "documentation":"The ARN of the IAM role that is used by this metric stream.
" + }, + "State":{ + "shape":"MetricStreamState", + "documentation":"The state of the metric stream. The possible values are running
and stopped
.
The date that the metric stream was created.
" + }, + "LastUpdateDate":{ + "shape":"Timestamp", + "documentation":"The date of the most recent update to the metric stream's configuration.
" + }, + "OutputFormat":{ + "shape":"MetricStreamOutputFormat", + "documentation":"" + } + } + }, "GetMetricWidgetImageInput":{ "type":"structure", "required":["MetricWidget"], @@ -1924,6 +2107,37 @@ } } }, + "ListMetricStreamsInput":{ + "type":"structure", + "members":{ + "NextToken":{ + "shape":"NextToken", + "documentation":"Include this value, if it was returned by the previous call, to get the next set of metric streams.
" + }, + "MaxResults":{ + "shape":"ListMetricStreamsMaxResults", + "documentation":"The maximum number of results to return in one operation.
" + } + } + }, + "ListMetricStreamsMaxResults":{ + "type":"integer", + "max":500, + "min":1 + }, + "ListMetricStreamsOutput":{ + "type":"structure", + "members":{ + "NextToken":{ + "shape":"NextToken", + "documentation":"The token that marks the start of the next batch of returned results. You can use this token in a subsequent operation to get the next batch of results.
" + }, + "Entries":{ + "shape":"MetricStreamEntries", + "documentation":"The array of metric stream information.
" + } + } + }, "ListMetricsInput":{ "type":"structure", "members":{ @@ -2344,6 +2558,77 @@ }, "documentation":"This structure defines the metric to be returned, along with the statistics, period, and units.
" }, + "MetricStreamEntries":{ + "type":"list", + "member":{"shape":"MetricStreamEntry"} + }, + "MetricStreamEntry":{ + "type":"structure", + "members":{ + "Arn":{ + "shape":"AmazonResourceName", + "documentation":"The ARN of the metric stream.
" + }, + "CreationDate":{ + "shape":"Timestamp", + "documentation":"The date that the metric stream was originally created.
" + }, + "LastUpdateDate":{ + "shape":"Timestamp", + "documentation":"The date that the configuration of this metric stream was most recently updated.
" + }, + "Name":{ + "shape":"MetricStreamName", + "documentation":"The name of the metric stream.
" + }, + "FirehoseArn":{ + "shape":"AmazonResourceName", + "documentation":"The ARN of the Kinesis Firehose devlivery stream that is used for this metric stream.
" + }, + "State":{ + "shape":"MetricStreamState", + "documentation":"The current state of this stream. Valid values are running
and stopped
.
The output format of this metric stream. Valid values are json
and opentelemetry0.7
.
This structure contains the configuration information about one metric stream.
" + }, + "MetricStreamFilter":{ + "type":"structure", + "members":{ + "Namespace":{ + "shape":"Namespace", + "documentation":"The name of the metric namespace in the filter.
" + } + }, + "documentation":"This structure contains the name of one of the metric namespaces that is listed in a filter of a metric stream.
" + }, + "MetricStreamFilters":{ + "type":"list", + "member":{"shape":"MetricStreamFilter"} + }, + "MetricStreamName":{ + "type":"string", + "max":255, + "min":1 + }, + "MetricStreamNames":{ + "type":"list", + "member":{"shape":"MetricStreamName"} + }, + "MetricStreamOutputFormat":{ + "type":"string", + "enum":[ + "json", + "opentelemetry0.7" + ], + "max":255, + "min":1 + }, + "MetricStreamState":{"type":"string"}, "MetricWidget":{"type":"string"}, "MetricWidgetImage":{"type":"blob"}, "Metrics":{ @@ -2555,7 +2840,7 @@ }, "OKActions":{ "shape":"ResourceList", - "documentation":"The actions to execute when this alarm transitions to an OK
state from any other state. Each action is specified as an Amazon Resource Name (ARN).
Valid Values: arn:aws:automate:region:ec2:stop | arn:aws:automate:region:ec2:terminate | arn:aws:automate:region:ec2:recover | arn:aws:automate:region:ec2:reboot | arn:aws:sns:region:account-id:sns-topic-name | arn:aws:autoscaling:region:account-id:scalingPolicy:policy-id:autoScalingGroupName/group-friendly-name:policyName/policy-friendly-name
Valid Values (for use with IAM roles): arn:aws:swf:region:account-id:action/actions/AWS_EC2.InstanceId.Stop/1.0 | arn:aws:swf:region:account-id:action/actions/AWS_EC2.InstanceId.Terminate/1.0 | arn:aws:swf:region:account-id:action/actions/AWS_EC2.InstanceId.Reboot/1.0
The actions to execute when this alarm transitions to an OK
state from any other state. Each action is specified as an Amazon Resource Name (ARN).
Valid Values: arn:aws:automate:region:ec2:stop | arn:aws:automate:region:ec2:terminate | arn:aws:automate:region:ec2:recover | arn:aws:automate:region:ec2:reboot | arn:aws:sns:region:account-id:sns-topic-name | arn:aws:autoscaling:region:account-id:scalingPolicy:policy-id:autoScalingGroupName/group-friendly-name:policyName/policy-friendly-name
Valid Values (for use with IAM roles): arn:aws:swf:region:account-id:action/actions/AWS_EC2.InstanceId.Stop/1.0 | arn:aws:swf:region:account-id:action/actions/AWS_EC2.InstanceId.Terminate/1.0 | arn:aws:swf:region:account-id:action/actions/AWS_EC2.InstanceId.Reboot/1.0 | arn:aws:swf:region:account-id:action/actions/AWS_EC2.InstanceId.Recover/1.0
If you are creating a new metric stream, this is the name for the new stream. The name must be different than the names of other metric streams in this account and Region.
If you are updating a metric stream, specify the name of that stream here.
Valid characters are A-Z, a-z, 0-9, \"-\" and \"_\".
" + }, + "IncludeFilters":{ + "shape":"MetricStreamFilters", + "documentation":"If you specify this parameter, the stream sends only the metrics from the metric namespaces that you specify here.
You cannot include IncludeFilters and ExcludeFilters in the same operation.
If you specify this parameter, the stream sends metrics from all metric namespaces except for the namespaces that you specify here.
You cannot include ExcludeFilters and IncludeFilters in the same operation.
The ARN of the Amazon Kinesis Firehose delivery stream to use for this metric stream. This Amazon Kinesis Firehose delivery stream must already exist and must be in the same account as the metric stream.
" + }, + "RoleArn":{ + "shape":"AmazonResourceName", + "documentation":"The ARN of an IAM role that this metric stream will use to access Amazon Kinesis Firehose resources. This IAM role must already exist and must be in the same account as the metric stream. This IAM role must include the following permissions:
firehose:PutRecord
firehose:PutRecordBatch
The output format for the stream. Valid values are json
and opentelemetry0.7
. For more information about metric stream output formats, see Metric streams output formats.
A list of key-value pairs to associate with the metric stream. You can associate as many as 50 tags with a metric stream.
Tags can help you organize and categorize your resources. You can also use them to scope user permissions by granting a user permission to access or change only resources with certain tag values.
" + } + } + }, + "PutMetricStreamOutput":{ + "type":"structure", + "members":{ + "Arn":{ + "shape":"AmazonResourceName", + "documentation":"The ARN of the metric stream.
" + } + } + }, "Range":{ "type":"structure", "required":[ @@ -2783,6 +3116,21 @@ "None" ] }, + "StartMetricStreamsInput":{ + "type":"structure", + "required":["Names"], + "members":{ + "Names":{ + "shape":"MetricStreamNames", + "documentation":"The array of the names of metric streams to start streaming.
This is an \"all or nothing\" operation. If you do not have permission to access all of the metric streams that you list here, then none of the streams that you list in the operation will start streaming.
" + } + } + }, + "StartMetricStreamsOutput":{ + "type":"structure", + "members":{ + } + }, "Stat":{"type":"string"}, "StateReason":{ "type":"string", @@ -2854,6 +3202,21 @@ "PartialData" ] }, + "StopMetricStreamsInput":{ + "type":"structure", + "required":["Names"], + "members":{ + "Names":{ + "shape":"MetricStreamNames", + "documentation":"The array of the names of metric streams to stop streaming.
This is an \"all or nothing\" operation. If you do not have permission to access all of the metric streams that you list here, then none of the streams that you list in the operation will stop streaming.
" + } + } + }, + "StopMetricStreamsOutput":{ + "type":"structure", + "members":{ + } + }, "StorageResolution":{ "type":"integer", "min":1 From 0bdac7ef762aa90f77063c016178ededc5d90961 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Tue, 30 Mar 2021 18:10:09 +0000 Subject: [PATCH 11/12] Updated endpoints.json. --- .changes/next-release/feature-AWSSDKforJavav2-bedacd4.json | 6 ++++++ .../amazon/awssdk/regions/internal/region/endpoints.json | 1 + 2 files changed, 7 insertions(+) create mode 100644 .changes/next-release/feature-AWSSDKforJavav2-bedacd4.json diff --git a/.changes/next-release/feature-AWSSDKforJavav2-bedacd4.json b/.changes/next-release/feature-AWSSDKforJavav2-bedacd4.json new file mode 100644 index 000000000000..ae3f84993e9e --- /dev/null +++ b/.changes/next-release/feature-AWSSDKforJavav2-bedacd4.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS SDK for Java v2", + "contributor": "", + "description": "Updated service endpoint metadata." +} diff --git a/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json b/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json index ddc9574fd26b..0073433e8648 100644 --- a/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json +++ b/core/regions/src/main/resources/software/amazon/awssdk/regions/internal/region/endpoints.json @@ -5645,6 +5645,7 @@ "ap-east-1" : { }, "ap-northeast-1" : { }, "ap-northeast-2" : { }, + "ap-northeast-3" : { }, "ap-south-1" : { }, "ap-southeast-1" : { }, "ap-southeast-2" : { }, From f85a88959fe68464ee0cfbe08e5e77575e43d658 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Tue, 30 Mar 2021 18:10:56 +0000 Subject: [PATCH 12/12] Release 2.16.31. Updated CHANGELOG.md, README.md and all pom.xml. 
--- .changes/2.16.31.json | 60 +++++++++++++++++++ .../feature-AWSConfig-4c475e6.json | 6 -- ...feature-AWSEC2InstanceConnect-8170d8c.json | 6 -- .../feature-AWSGlueDataBrew-7bf6d78.json | 6 -- .../feature-AWSSDKforJavav2-bedacd4.json | 6 -- .../feature-AmazonCloudWatch-4a97487.json | 6 -- ...ure-AmazonElasticComputeCloud-019ae1f.json | 6 -- .../feature-AmazonFraudDetector-281eaaa.json | 6 -- .../feature-AmazonPinpoint-50a5658.json | 6 -- ...eature-AmazonSageMakerService-ff9dccd.json | 6 -- CHANGELOG.md | 37 ++++++++++++ README.md | 8 +-- archetypes/archetype-app-quickstart/pom.xml | 2 +- archetypes/archetype-lambda/pom.xml | 2 +- archetypes/archetype-tools/pom.xml | 2 +- archetypes/pom.xml | 2 +- aws-sdk-java/pom.xml | 2 +- bom-internal/pom.xml | 2 +- bom/pom.xml | 2 +- bundle/pom.xml | 2 +- codegen-lite-maven-plugin/pom.xml | 2 +- codegen-lite/pom.xml | 2 +- codegen-maven-plugin/pom.xml | 2 +- codegen/pom.xml | 2 +- core/annotations/pom.xml | 2 +- core/arns/pom.xml | 2 +- core/auth/pom.xml | 2 +- core/aws-core/pom.xml | 2 +- core/metrics-spi/pom.xml | 2 +- core/pom.xml | 2 +- core/profiles/pom.xml | 2 +- core/protocols/aws-cbor-protocol/pom.xml | 2 +- core/protocols/aws-ion-protocol/pom.xml | 2 +- core/protocols/aws-json-protocol/pom.xml | 2 +- core/protocols/aws-query-protocol/pom.xml | 2 +- core/protocols/aws-xml-protocol/pom.xml | 2 +- core/protocols/pom.xml | 2 +- core/protocols/protocol-core/pom.xml | 2 +- core/regions/pom.xml | 2 +- core/sdk-core/pom.xml | 2 +- http-client-spi/pom.xml | 2 +- http-clients/apache-client/pom.xml | 2 +- http-clients/aws-crt-client/pom.xml | 2 +- http-clients/netty-nio-client/pom.xml | 2 +- http-clients/pom.xml | 2 +- http-clients/url-connection-client/pom.xml | 2 +- .../cloudwatch-metric-publisher/pom.xml | 2 +- metric-publishers/pom.xml | 2 +- pom.xml | 2 +- release-scripts/pom.xml | 2 +- services-custom/dynamodb-enhanced/pom.xml | 2 +- services-custom/pom.xml | 2 +- services/accessanalyzer/pom.xml | 2 +- services/acm/pom.xml | 2 +- services/acmpca/pom.xml | 2 +- services/alexaforbusiness/pom.xml | 2 +- services/amp/pom.xml | 2 +- services/amplify/pom.xml | 2 +- services/amplifybackend/pom.xml | 2 +- services/apigateway/pom.xml | 2 +- services/apigatewaymanagementapi/pom.xml | 2 +- services/apigatewayv2/pom.xml | 2 +- services/appconfig/pom.xml | 2 +- services/appflow/pom.xml | 2 +- services/appintegrations/pom.xml | 2 +- services/applicationautoscaling/pom.xml | 2 +- services/applicationdiscovery/pom.xml | 2 +- services/applicationinsights/pom.xml | 2 +- services/appmesh/pom.xml | 2 +- services/appstream/pom.xml | 2 +- services/appsync/pom.xml | 2 +- services/athena/pom.xml | 2 +- services/auditmanager/pom.xml | 2 +- services/autoscaling/pom.xml | 2 +- services/autoscalingplans/pom.xml | 2 +- services/backup/pom.xml | 2 +- services/batch/pom.xml | 2 +- services/braket/pom.xml | 2 +- services/budgets/pom.xml | 2 +- services/chime/pom.xml | 2 +- services/cloud9/pom.xml | 2 +- services/clouddirectory/pom.xml | 2 +- services/cloudformation/pom.xml | 2 +- services/cloudfront/pom.xml | 2 +- services/cloudhsm/pom.xml | 2 +- services/cloudhsmv2/pom.xml | 2 +- services/cloudsearch/pom.xml | 2 +- services/cloudsearchdomain/pom.xml | 2 +- services/cloudtrail/pom.xml | 2 +- services/cloudwatch/pom.xml | 2 +- services/cloudwatchevents/pom.xml | 2 +- services/cloudwatchlogs/pom.xml | 2 +- services/codeartifact/pom.xml | 2 +- services/codebuild/pom.xml | 2 +- services/codecommit/pom.xml | 2 +- services/codedeploy/pom.xml | 2 +- services/codeguruprofiler/pom.xml | 2 
+- services/codegurureviewer/pom.xml | 2 +- services/codepipeline/pom.xml | 2 +- services/codestar/pom.xml | 2 +- services/codestarconnections/pom.xml | 2 +- services/codestarnotifications/pom.xml | 2 +- services/cognitoidentity/pom.xml | 2 +- services/cognitoidentityprovider/pom.xml | 2 +- services/cognitosync/pom.xml | 2 +- services/comprehend/pom.xml | 2 +- services/comprehendmedical/pom.xml | 2 +- services/computeoptimizer/pom.xml | 2 +- services/config/pom.xml | 2 +- services/connect/pom.xml | 2 +- services/connectcontactlens/pom.xml | 2 +- services/connectparticipant/pom.xml | 2 +- services/costandusagereport/pom.xml | 2 +- services/costexplorer/pom.xml | 2 +- services/customerprofiles/pom.xml | 2 +- services/databasemigration/pom.xml | 2 +- services/databrew/pom.xml | 2 +- services/dataexchange/pom.xml | 2 +- services/datapipeline/pom.xml | 2 +- services/datasync/pom.xml | 2 +- services/dax/pom.xml | 2 +- services/detective/pom.xml | 2 +- services/devicefarm/pom.xml | 2 +- services/devopsguru/pom.xml | 2 +- services/directconnect/pom.xml | 2 +- services/directory/pom.xml | 2 +- services/dlm/pom.xml | 2 +- services/docdb/pom.xml | 2 +- services/dynamodb/pom.xml | 2 +- services/ebs/pom.xml | 2 +- services/ec2/pom.xml | 2 +- services/ec2instanceconnect/pom.xml | 2 +- services/ecr/pom.xml | 2 +- services/ecrpublic/pom.xml | 2 +- services/ecs/pom.xml | 2 +- services/efs/pom.xml | 2 +- services/eks/pom.xml | 2 +- services/elasticache/pom.xml | 2 +- services/elasticbeanstalk/pom.xml | 2 +- services/elasticinference/pom.xml | 2 +- services/elasticloadbalancing/pom.xml | 2 +- services/elasticloadbalancingv2/pom.xml | 2 +- services/elasticsearch/pom.xml | 2 +- services/elastictranscoder/pom.xml | 2 +- services/emr/pom.xml | 2 +- services/emrcontainers/pom.xml | 2 +- services/eventbridge/pom.xml | 2 +- services/firehose/pom.xml | 2 +- services/fis/pom.xml | 2 +- services/fms/pom.xml | 2 +- services/forecast/pom.xml | 2 +- services/forecastquery/pom.xml | 2 +- services/frauddetector/pom.xml | 2 +- services/fsx/pom.xml | 2 +- services/gamelift/pom.xml | 2 +- services/glacier/pom.xml | 2 +- services/globalaccelerator/pom.xml | 2 +- services/glue/pom.xml | 2 +- services/greengrass/pom.xml | 2 +- services/greengrassv2/pom.xml | 2 +- services/groundstation/pom.xml | 2 +- services/guardduty/pom.xml | 2 +- services/health/pom.xml | 2 +- services/healthlake/pom.xml | 2 +- services/honeycode/pom.xml | 2 +- services/iam/pom.xml | 2 +- services/identitystore/pom.xml | 2 +- services/imagebuilder/pom.xml | 2 +- services/inspector/pom.xml | 2 +- services/iot/pom.xml | 2 +- services/iot1clickdevices/pom.xml | 2 +- services/iot1clickprojects/pom.xml | 2 +- services/iotanalytics/pom.xml | 2 +- services/iotdataplane/pom.xml | 2 +- services/iotdeviceadvisor/pom.xml | 2 +- services/iotevents/pom.xml | 2 +- services/ioteventsdata/pom.xml | 2 +- services/iotfleethub/pom.xml | 2 +- services/iotjobsdataplane/pom.xml | 2 +- services/iotsecuretunneling/pom.xml | 2 +- services/iotsitewise/pom.xml | 2 +- services/iotthingsgraph/pom.xml | 2 +- services/iotwireless/pom.xml | 2 +- services/ivs/pom.xml | 2 +- services/kafka/pom.xml | 2 +- services/kendra/pom.xml | 2 +- services/kinesis/pom.xml | 2 +- services/kinesisanalytics/pom.xml | 2 +- services/kinesisanalyticsv2/pom.xml | 2 +- services/kinesisvideo/pom.xml | 2 +- services/kinesisvideoarchivedmedia/pom.xml | 2 +- services/kinesisvideomedia/pom.xml | 2 +- services/kinesisvideosignaling/pom.xml | 2 +- services/kms/pom.xml | 2 +- services/lakeformation/pom.xml | 2 +- 
services/lambda/pom.xml | 2 +- services/lexmodelbuilding/pom.xml | 2 +- services/lexmodelsv2/pom.xml | 2 +- services/lexruntime/pom.xml | 2 +- services/lexruntimev2/pom.xml | 2 +- services/licensemanager/pom.xml | 2 +- services/lightsail/pom.xml | 2 +- services/location/pom.xml | 2 +- services/lookoutmetrics/pom.xml | 2 +- services/lookoutvision/pom.xml | 2 +- services/machinelearning/pom.xml | 2 +- services/macie/pom.xml | 2 +- services/macie2/pom.xml | 2 +- services/managedblockchain/pom.xml | 2 +- services/marketplacecatalog/pom.xml | 2 +- services/marketplacecommerceanalytics/pom.xml | 2 +- services/marketplaceentitlement/pom.xml | 2 +- services/marketplacemetering/pom.xml | 2 +- services/mediaconnect/pom.xml | 2 +- services/mediaconvert/pom.xml | 2 +- services/medialive/pom.xml | 2 +- services/mediapackage/pom.xml | 2 +- services/mediapackagevod/pom.xml | 2 +- services/mediastore/pom.xml | 2 +- services/mediastoredata/pom.xml | 2 +- services/mediatailor/pom.xml | 2 +- services/migrationhub/pom.xml | 2 +- services/migrationhubconfig/pom.xml | 2 +- services/mobile/pom.xml | 2 +- services/mq/pom.xml | 2 +- services/mturk/pom.xml | 2 +- services/mwaa/pom.xml | 2 +- services/neptune/pom.xml | 2 +- services/networkfirewall/pom.xml | 2 +- services/networkmanager/pom.xml | 2 +- services/opsworks/pom.xml | 2 +- services/opsworkscm/pom.xml | 2 +- services/organizations/pom.xml | 2 +- services/outposts/pom.xml | 2 +- services/personalize/pom.xml | 2 +- services/personalizeevents/pom.xml | 2 +- services/personalizeruntime/pom.xml | 2 +- services/pi/pom.xml | 2 +- services/pinpoint/pom.xml | 2 +- services/pinpointemail/pom.xml | 2 +- services/pinpointsmsvoice/pom.xml | 2 +- services/polly/pom.xml | 2 +- services/pom.xml | 2 +- services/pricing/pom.xml | 2 +- services/qldb/pom.xml | 2 +- services/qldbsession/pom.xml | 2 +- services/quicksight/pom.xml | 2 +- services/ram/pom.xml | 2 +- services/rds/pom.xml | 2 +- services/rdsdata/pom.xml | 2 +- services/redshift/pom.xml | 2 +- services/redshiftdata/pom.xml | 2 +- services/rekognition/pom.xml | 2 +- services/resourcegroups/pom.xml | 2 +- services/resourcegroupstaggingapi/pom.xml | 2 +- services/robomaker/pom.xml | 2 +- services/route53/pom.xml | 2 +- services/route53domains/pom.xml | 2 +- services/route53resolver/pom.xml | 2 +- services/s3/pom.xml | 2 +- services/s3control/pom.xml | 2 +- services/s3outposts/pom.xml | 2 +- services/sagemaker/pom.xml | 2 +- services/sagemakera2iruntime/pom.xml | 2 +- services/sagemakeredge/pom.xml | 2 +- services/sagemakerfeaturestoreruntime/pom.xml | 2 +- services/sagemakerruntime/pom.xml | 2 +- services/savingsplans/pom.xml | 2 +- services/schemas/pom.xml | 2 +- services/secretsmanager/pom.xml | 2 +- services/securityhub/pom.xml | 2 +- .../serverlessapplicationrepository/pom.xml | 2 +- services/servicecatalog/pom.xml | 2 +- services/servicecatalogappregistry/pom.xml | 2 +- services/servicediscovery/pom.xml | 2 +- services/servicequotas/pom.xml | 2 +- services/ses/pom.xml | 2 +- services/sesv2/pom.xml | 2 +- services/sfn/pom.xml | 2 +- services/shield/pom.xml | 2 +- services/signer/pom.xml | 2 +- services/sms/pom.xml | 2 +- services/snowball/pom.xml | 2 +- services/sns/pom.xml | 2 +- services/sqs/pom.xml | 2 +- services/ssm/pom.xml | 2 +- services/sso/pom.xml | 2 +- services/ssoadmin/pom.xml | 2 +- services/ssooidc/pom.xml | 2 +- services/storagegateway/pom.xml | 2 +- services/sts/pom.xml | 2 +- services/support/pom.xml | 2 +- services/swf/pom.xml | 2 +- services/synthetics/pom.xml | 2 +- services/textract/pom.xml | 2 
+- services/timestreamquery/pom.xml | 2 +- services/timestreamwrite/pom.xml | 2 +- services/transcribe/pom.xml | 2 +- services/transcribestreaming/pom.xml | 2 +- services/transfer/pom.xml | 2 +- services/translate/pom.xml | 2 +- services/waf/pom.xml | 2 +- services/wafv2/pom.xml | 2 +- services/wellarchitected/pom.xml | 2 +- services/workdocs/pom.xml | 2 +- services/worklink/pom.xml | 2 +- services/workmail/pom.xml | 2 +- services/workmailmessageflow/pom.xml | 2 +- services/workspaces/pom.xml | 2 +- services/xray/pom.xml | 2 +- test/codegen-generated-classes-test/pom.xml | 2 +- test/http-client-tests/pom.xml | 2 +- test/module-path-tests/pom.xml | 2 +- test/protocol-tests-core/pom.xml | 2 +- test/protocol-tests/pom.xml | 2 +- test/sdk-benchmarks/pom.xml | 2 +- test/sdk-native-image-test/pom.xml | 2 +- test/service-test-utils/pom.xml | 2 +- test/stability-tests/pom.xml | 2 +- test/test-utils/pom.xml | 2 +- test/tests-coverage-reporting/pom.xml | 2 +- utils/pom.xml | 2 +- 322 files changed, 411 insertions(+), 368 deletions(-) create mode 100644 .changes/2.16.31.json delete mode 100644 .changes/next-release/feature-AWSConfig-4c475e6.json delete mode 100644 .changes/next-release/feature-AWSEC2InstanceConnect-8170d8c.json delete mode 100644 .changes/next-release/feature-AWSGlueDataBrew-7bf6d78.json delete mode 100644 .changes/next-release/feature-AWSSDKforJavav2-bedacd4.json delete mode 100644 .changes/next-release/feature-AmazonCloudWatch-4a97487.json delete mode 100644 .changes/next-release/feature-AmazonElasticComputeCloud-019ae1f.json delete mode 100644 .changes/next-release/feature-AmazonFraudDetector-281eaaa.json delete mode 100644 .changes/next-release/feature-AmazonPinpoint-50a5658.json delete mode 100644 .changes/next-release/feature-AmazonSageMakerService-ff9dccd.json diff --git a/.changes/2.16.31.json b/.changes/2.16.31.json new file mode 100644 index 000000000000..bc37318083ea --- /dev/null +++ b/.changes/2.16.31.json @@ -0,0 +1,60 @@ +{ + "version": "2.16.31", + "date": "2021-03-30", + "entries": [ + { + "type": "feature", + "category": "AWS SDK for Java v2", + "contributor": "", + "description": "Updated service endpoint metadata." + }, + { + "type": "feature", + "category": "AWS Glue DataBrew", + "contributor": "", + "description": "This SDK release adds two new dataset features: 1) support for specifying a database connection as a dataset input 2) support for dynamic datasets that accept configurable parameters in S3 path." + }, + { + "type": "feature", + "category": "Amazon SageMaker Service", + "contributor": "", + "description": "Amazon SageMaker Autopilot now supports 1) feature importance reports for AutoML jobs and 2) PartialFailures for AutoML jobs" + }, + { + "type": "feature", + "category": "Amazon Elastic Compute Cloud", + "contributor": "", + "description": "ReplaceRootVolume feature enables customers to replace the EBS root volume of a running instance to a previously known state. Add support to grant account-level access to the EC2 serial console" + }, + { + "type": "feature", + "category": "Amazon CloudWatch", + "contributor": "", + "description": "SDK update for new Metric Streams feature" + }, + { + "type": "feature", + "category": "Amazon Pinpoint", + "contributor": "", + "description": "Added support for journey pause/resume, journey updatable import segment and journey quiet time wait." 
+ }, + { + "type": "feature", + "category": "AWS Config", + "contributor": "", + "description": "Adding new APIs to support ConformancePack Compliance CI in Aggregators" + }, + { + "type": "feature", + "category": "Amazon Fraud Detector", + "contributor": "", + "description": "This release adds support for Batch Predictions in Amazon Fraud Detector." + }, + { + "type": "feature", + "category": "AWS EC2 Instance Connect", + "contributor": "", + "description": "Adding support to push SSH keys to the EC2 serial console in order to allow an SSH connection to your Amazon EC2 instance's serial port." + } + ] +} \ No newline at end of file diff --git a/.changes/next-release/feature-AWSConfig-4c475e6.json b/.changes/next-release/feature-AWSConfig-4c475e6.json deleted file mode 100644 index 452ebba3c08d..000000000000 --- a/.changes/next-release/feature-AWSConfig-4c475e6.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS Config", - "contributor": "", - "description": "Adding new APIs to support ConformancePack Compliance CI in Aggregators" -} diff --git a/.changes/next-release/feature-AWSEC2InstanceConnect-8170d8c.json b/.changes/next-release/feature-AWSEC2InstanceConnect-8170d8c.json deleted file mode 100644 index 0998bbaa62c7..000000000000 --- a/.changes/next-release/feature-AWSEC2InstanceConnect-8170d8c.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS EC2 Instance Connect", - "contributor": "", - "description": "Adding support to push SSH keys to the EC2 serial console in order to allow an SSH connection to your Amazon EC2 instance's serial port." -} diff --git a/.changes/next-release/feature-AWSGlueDataBrew-7bf6d78.json b/.changes/next-release/feature-AWSGlueDataBrew-7bf6d78.json deleted file mode 100644 index 71f3e4fb8dfc..000000000000 --- a/.changes/next-release/feature-AWSGlueDataBrew-7bf6d78.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS Glue DataBrew", - "contributor": "", - "description": "This SDK release adds two new dataset features: 1) support for specifying a database connection as a dataset input 2) support for dynamic datasets that accept configurable parameters in S3 path." -} diff --git a/.changes/next-release/feature-AWSSDKforJavav2-bedacd4.json b/.changes/next-release/feature-AWSSDKforJavav2-bedacd4.json deleted file mode 100644 index ae3f84993e9e..000000000000 --- a/.changes/next-release/feature-AWSSDKforJavav2-bedacd4.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS SDK for Java v2", - "contributor": "", - "description": "Updated service endpoint metadata." 
-} diff --git a/.changes/next-release/feature-AmazonCloudWatch-4a97487.json b/.changes/next-release/feature-AmazonCloudWatch-4a97487.json deleted file mode 100644 index e19f61acf069..000000000000 --- a/.changes/next-release/feature-AmazonCloudWatch-4a97487.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon CloudWatch", - "contributor": "", - "description": "SDK update for new Metric Streams feature" -} diff --git a/.changes/next-release/feature-AmazonElasticComputeCloud-019ae1f.json b/.changes/next-release/feature-AmazonElasticComputeCloud-019ae1f.json deleted file mode 100644 index 2e1e4568f692..000000000000 --- a/.changes/next-release/feature-AmazonElasticComputeCloud-019ae1f.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Elastic Compute Cloud", - "contributor": "", - "description": "ReplaceRootVolume feature enables customers to replace the EBS root volume of a running instance to a previously known state. Add support to grant account-level access to the EC2 serial console" -} diff --git a/.changes/next-release/feature-AmazonFraudDetector-281eaaa.json b/.changes/next-release/feature-AmazonFraudDetector-281eaaa.json deleted file mode 100644 index e38b8bb68707..000000000000 --- a/.changes/next-release/feature-AmazonFraudDetector-281eaaa.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Fraud Detector", - "contributor": "", - "description": "This release adds support for Batch Predictions in Amazon Fraud Detector." -} diff --git a/.changes/next-release/feature-AmazonPinpoint-50a5658.json b/.changes/next-release/feature-AmazonPinpoint-50a5658.json deleted file mode 100644 index a499afb1ecda..000000000000 --- a/.changes/next-release/feature-AmazonPinpoint-50a5658.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Pinpoint", - "contributor": "", - "description": "Added support for journey pause/resume, journey updatable import segment and journey quiet time wait." -} diff --git a/.changes/next-release/feature-AmazonSageMakerService-ff9dccd.json b/.changes/next-release/feature-AmazonSageMakerService-ff9dccd.json deleted file mode 100644 index e53fd18bbaed..000000000000 --- a/.changes/next-release/feature-AmazonSageMakerService-ff9dccd.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon SageMaker Service", - "contributor": "", - "description": "Amazon SageMaker Autopilot now supports 1) feature importance reports for AutoML jobs and 2) PartialFailures for AutoML jobs" -} diff --git a/CHANGELOG.md b/CHANGELOG.md index 690ade5c2ac6..c6dc01100066 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,40 @@ +# __2.16.31__ __2021-03-30__ +## __AWS Config__ + - ### Features + - Adding new APIs to support ConformancePack Compliance CI in Aggregators + +## __AWS EC2 Instance Connect__ + - ### Features + - Adding support to push SSH keys to the EC2 serial console in order to allow an SSH connection to your Amazon EC2 instance's serial port. + +## __AWS Glue DataBrew__ + - ### Features + - This SDK release adds two new dataset features: 1) support for specifying a database connection as a dataset input 2) support for dynamic datasets that accept configurable parameters in S3 path. + +## __AWS SDK for Java v2__ + - ### Features + - Updated service endpoint metadata. 
+ +## __Amazon CloudWatch__ + - ### Features + - SDK update for new Metric Streams feature + +## __Amazon Elastic Compute Cloud__ + - ### Features + - ReplaceRootVolume feature enables customers to replace the EBS root volume of a running instance to a previously known state. Add support to grant account-level access to the EC2 serial console + +## __Amazon Fraud Detector__ + - ### Features + - This release adds support for Batch Predictions in Amazon Fraud Detector. + +## __Amazon Pinpoint__ + - ### Features + - Added support for journey pause/resume, journey updatable import segment and journey quiet time wait. + +## __Amazon SageMaker Service__ + - ### Features + - Amazon SageMaker Autopilot now supports 1) feature importance reports for AutoML jobs and 2) PartialFailures for AutoML jobs + # __2.16.30__ __2021-03-29__ ## __AWS Glue__ - ### Features diff --git a/README.md b/README.md index 7bd180ec4d56..b51b81b1a4e9 100644 --- a/README.md +++ b/README.md @@ -49,7 +49,7 @@ To automatically manage module versions (currently all modules have the same ver