diff --git a/apis/cloudformation/2010-05-15/docs-2.json b/apis/cloudformation/2010-05-15/docs-2.json index eafcd587716..2c24eb5ae14 100644 --- a/apis/cloudformation/2010-05-15/docs-2.json +++ b/apis/cloudformation/2010-05-15/docs-2.json @@ -3,7 +3,7 @@ "service": "CloudFormation

CloudFormation allows you to create and manage Amazon Web Services infrastructure deployments predictably and repeatedly. You can use CloudFormation to leverage Amazon Web Services products, such as Amazon Elastic Compute Cloud, Amazon Elastic Block Store, Amazon Simple Notification Service, Elastic Load Balancing, and Auto Scaling to build highly reliable, highly scalable, cost-effective applications without creating or configuring the underlying Amazon Web Services infrastructure.

With CloudFormation, you declare all your resources and dependencies in a template file. The template defines a collection of resources as a single unit called a stack. CloudFormation creates and deletes all member resources of the stack together and manages all dependencies between the resources for you.

For more information about CloudFormation, see the CloudFormation product page.

CloudFormation makes use of other Amazon Web Services products. If you need additional technical information about a specific Amazon Web Services product, you can find the product's technical documentation at docs.aws.amazon.com.

", "operations": { "ActivateOrganizationsAccess": "

Activate trusted access with Organizations. With trusted access between StackSets and Organizations activated, the management account has permissions to create and manage StackSets for your organization.

", - "ActivateType": "

Activates a public third-party extension, making it available for use in stack templates. For more information, see Using public extensions in the CloudFormation User Guide.

Once you have activated a public third-party extension in your account and Region, use SetTypeConfiguration to specify configuration properties for the extension. For more information, see Configuring extensions at the account level in the CloudFormation User Guide.

", + "ActivateType": "

Activates a public third-party extension, making it available for use in stack templates. For more information, see Using public extensions in the CloudFormation User Guide.

Once you have activated a public third-party extension in your account and Region, use SetTypeConfiguration to specify configuration properties for the extension. For more information, see Configuring extensions at the account level in the CloudFormation User Guide.

", "BatchDescribeTypeConfigurations": "

Returns configuration data for the specified CloudFormation extensions, from the CloudFormation registry for the account and Region.

For more information, see Configuring extensions at the account level in the CloudFormation User Guide.

", "CancelUpdateStack": "

Cancels an update on the specified stack. If the call completes successfully, the stack rolls back the update and reverts to the previous stack configuration.

You can cancel only stacks that are in the UPDATE_IN_PROGRESS state.

", "ContinueUpdateRollback": "

For a specified stack that's in the UPDATE_ROLLBACK_FAILED state, continues rolling it back to the UPDATE_ROLLBACK_COMPLETE state. Depending on the cause of the failure, you can manually fix the error and continue the rollback. By continuing the rollback, you can return your stack to a working state (the UPDATE_ROLLBACK_COMPLETE state), and then try to update the stack again.

A stack goes into the UPDATE_ROLLBACK_FAILED state when CloudFormation can't roll back all changes after a failed stack update. For example, you might have a stack that's rolling back to an old database instance that was deleted outside of CloudFormation. Because CloudFormation doesn't know the database was deleted, it assumes that the database instance still exists and attempts to roll back to it, causing the update rollback to fail.

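The recovery path described above can be sketched as request assembly for a ContinueUpdateRollback call. This is a minimal, hypothetical sketch: the stack name and logical resource ID are made up, and only the parameter shape is shown (no network call is made).

```python
# Hypothetical sketch: building a ContinueUpdateRollback request that skips
# a resource deleted outside CloudFormation. Names are illustrative only.

def build_continue_rollback_request(stack_name, skip_resources=None):
    """Assemble parameters for a ContinueUpdateRollback call."""
    params = {"StackName": stack_name}
    if skip_resources:
        # ResourcesToSkip tells CloudFormation to treat these logical IDs
        # as already rolled back (use with care).
        params["ResourcesToSkip"] = list(skip_resources)
    return params

params = build_continue_rollback_request("my-app-stack", ["MyDBInstance"])
print(params)
```

Passing the deleted database's logical ID in `ResourcesToSkip` lets the rollback continue past the resource CloudFormation can no longer find.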
", @@ -19,13 +19,13 @@ "DeleteStack": "

Deletes a specified stack. Once the call completes successfully, stack deletion starts. Deleted stacks don't show up in the DescribeStacks operation if the deletion has been completed successfully.

", "DeleteStackInstances": "

Deletes stack instances for the specified accounts, in the specified Amazon Web Services Regions.

", "DeleteStackSet": "

Deletes a stack set. Before you can delete a stack set, all its member stack instances must be deleted. For more information about how to complete this, see DeleteStackInstances.

", - "DeregisterType": "

Marks an extension or extension version as DEPRECATED in the CloudFormation registry, removing it from active use. Deprecated extensions or extension versions cannot be used in CloudFormation operations.

To deregister an entire extension, you must individually deregister all active versions of that extension. If an extension has only a single active version, deregistering that version results in the extension itself being deregistered and marked as deprecated in the registry.

You can't deregister the default version of an extension if there are other active version of that extension. If you do deregister the default version of an extension, the extension type itself is deregistered as well and marked as deprecated.

To view the deprecation status of an extension or extension version, use DescribeType .

", + "DeregisterType": "

Marks an extension or extension version as DEPRECATED in the CloudFormation registry, removing it from active use. Deprecated extensions or extension versions cannot be used in CloudFormation operations.

To deregister an entire extension, you must individually deregister all active versions of that extension. If an extension has only a single active version, deregistering that version results in the extension itself being deregistered and marked as deprecated in the registry.

You can't deregister the default version of an extension if there are other active versions of that extension. If you do deregister the default version of an extension, the extension type itself is deregistered as well and marked as deprecated.

To view the deprecation status of an extension or extension version, use DescribeType.

", "DescribeAccountLimits": "

Retrieves your account's CloudFormation limits, such as the maximum number of stacks that you can create in your account. For more information about account limits, see CloudFormation Quotas in the CloudFormation User Guide.

", "DescribeChangeSet": "

Returns the inputs for the change set and a list of changes that CloudFormation will make if you execute the change set. For more information, see Updating Stacks Using Change Sets in the CloudFormation User Guide.

", "DescribeChangeSetHooks": "

Returns hook-related information for the change set and a list of changes that CloudFormation makes when you run the change set.

", "DescribeGeneratedTemplate": "

Describes a generated template. The output includes details about the progress of the creation of a generated template started by a CreateGeneratedTemplate API action or the update of a generated template started with an UpdateGeneratedTemplate API action.

", "DescribeOrganizationsAccess": "

Retrieves information about the account's OrganizationAccess status. This API can be called either by the management account or the delegated administrator by using the CallAs parameter. This API can also be called without the CallAs parameter by the management account.

", - "DescribePublisher": "

Returns information about a CloudFormation extension publisher.

If you don't supply a PublisherId, and you have registered as an extension publisher, DescribePublisher returns information about your own publisher account.

For more information about registering as a publisher, see:

", + "DescribePublisher": "

Returns information about a CloudFormation extension publisher.

If you don't supply a PublisherId, and you have registered as an extension publisher, DescribePublisher returns information about your own publisher account.

For more information about registering as a publisher, see:

", "DescribeResourceScan": "

Describes details of a resource scan.

", "DescribeStackDriftDetectionStatus": "

Returns information about a stack drift detection operation. A stack drift detection operation detects whether a stack's actual configuration differs, or has drifted, from its expected configuration, as defined in the stack template and any values specified as template parameters. A stack is considered to have drifted if one or more of its resources have drifted. For more information about stack and resource drift, see Detecting Unregulated Configuration Changes to Stacks and Resources.

Use DetectStackDrift to initiate a stack drift detection operation. DetectStackDrift returns a StackDriftDetectionId you can use to monitor the progress of the operation using DescribeStackDriftDetectionStatus. Once the drift detection operation has completed, use DescribeStackResourceDrifts to return drift information about the stack and its resources.

", "DescribeStackEvents": "

Returns all stack related events for a specified stack in reverse chronological order. For more information about a stack's event history, see CloudFormation stack creation events in the CloudFormation User Guide.

You can list events for stacks that have failed to create or have been deleted by specifying the unique stack identifier (stack ID).

", @@ -66,21 +66,21 @@ "ListTypeRegistrations": "

Returns a list of registration tokens for the specified extension(s).

", "ListTypeVersions": "

Returns summary information about the versions of an extension.

", "ListTypes": "

Returns summary information about extensions that have been registered with CloudFormation.

", - "PublishType": "

Publishes the specified extension to the CloudFormation registry as a public extension in this Region. Public extensions are available for use by all CloudFormation users. For more information about publishing extensions, see Publishing extensions to make them available for public use in the CloudFormation CLI User Guide.

To publish an extension, you must be registered as a publisher with CloudFormation. For more information, see RegisterPublisher .

", + "PublishType": "

Publishes the specified extension to the CloudFormation registry as a public extension in this Region. Public extensions are available for use by all CloudFormation users. For more information about publishing extensions, see Publishing extensions to make them available for public use in the CloudFormation CLI User Guide.

To publish an extension, you must be registered as a publisher with CloudFormation. For more information, see RegisterPublisher.

", "RecordHandlerProgress": "

Reports progress of a resource handler to CloudFormation.

Reserved for use by the CloudFormation CLI. Don't use this API in your code.

", "RegisterPublisher": "

Registers your account as a publisher of public extensions in the CloudFormation registry. Public extensions are available for use by all CloudFormation users. This publisher ID applies to your account in all Amazon Web Services Regions.

For information about requirements for registering as a public extension publisher, see Registering your account to publish CloudFormation extensions in the CloudFormation CLI User Guide.

", - "RegisterType": "

Registers an extension with the CloudFormation service. Registering an extension makes it available for use in CloudFormation templates in your Amazon Web Services account, and includes:

For more information about how to develop extensions and ready them for registration, see Creating Resource Providers in the CloudFormation CLI User Guide.

You can have a maximum of 50 resource extension versions registered at a time. This maximum is per account and per Region. Use DeregisterType to deregister specific extension versions if necessary.

Once you have initiated a registration request using RegisterType, you can use DescribeTypeRegistration to monitor the progress of the registration request.

Once you have registered a private extension in your account and Region, use SetTypeConfiguration to specify configuration properties for the extension. For more information, see Configuring extensions at the account level in the CloudFormation User Guide.

", + "RegisterType": "

Registers an extension with the CloudFormation service. Registering an extension makes it available for use in CloudFormation templates in your Amazon Web Services account, and includes:

For more information about how to develop extensions and ready them for registration, see Creating Resource Providers in the CloudFormation CLI User Guide.

You can have a maximum of 50 resource extension versions registered at a time. This maximum is per account and per Region. Use DeregisterType to deregister specific extension versions if necessary.

Once you have initiated a registration request using RegisterType, you can use DescribeTypeRegistration to monitor the progress of the registration request.

Once you have registered a private extension in your account and Region, use SetTypeConfiguration to specify configuration properties for the extension. For more information, see Configuring extensions at the account level in the CloudFormation User Guide.

", "RollbackStack": "

When specifying RollbackStack, you preserve the state of previously provisioned resources when an operation fails. You can check the status of the stack through the DescribeStacks operation.

Rolls back the specified stack to the last known stable state from CREATE_FAILED or UPDATE_FAILED stack statuses.

This operation will delete a stack if it doesn't contain a last known stable state. A last known stable state includes any status ending in *_COMPLETE. This includes the following stack statuses.

", "SetStackPolicy": "

Sets a stack policy for a specified stack.

", - "SetTypeConfiguration": "

Specifies the configuration data for a registered CloudFormation extension, in the given account and Region.

To view the current configuration data for an extension, refer to the ConfigurationSchema element of DescribeType . For more information, see Configuring extensions at the account level in the CloudFormation User Guide.

It's strongly recommended that you use dynamic references to restrict sensitive configuration definitions, such as third-party credentials. For more details on dynamic references, see Using dynamic references to specify template values in the CloudFormation User Guide.

", + "SetTypeConfiguration": "

Specifies the configuration data for a registered CloudFormation extension, in the given account and Region.

To view the current configuration data for an extension, refer to the ConfigurationSchema element of DescribeType. For more information, see Configuring extensions at the account level in the CloudFormation User Guide.

It's strongly recommended that you use dynamic references to restrict sensitive configuration definitions, such as third-party credentials. For more details on dynamic references, see Using dynamic references to specify template values in the CloudFormation User Guide.

", "SetTypeDefaultVersion": "

Specify the default version of an extension. The default version of an extension will be used in CloudFormation operations.

", "SignalResource": "

Sends a signal to the specified resource with a success or failure status. You can use the SignalResource operation in conjunction with a creation policy or update policy. CloudFormation doesn't proceed with a stack creation or update until resources receive the required number of signals or the timeout period is exceeded. The SignalResource operation is useful in cases where you want to send signals from anywhere other than an Amazon EC2 instance.

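A signal sent from outside an EC2 instance, as described above, reduces to four parameters. A hedged sketch (the stack, resource, and job names are placeholders; only the parameter shape is shown):

```python
# Illustrative sketch: parameters for a SignalResource call sent from,
# say, a CI job rather than an EC2 instance. All names are placeholders.

def build_signal(stack_name, logical_id, unique_id, success=True):
    return {
        "StackName": stack_name,
        "LogicalResourceId": logical_id,
        # UniqueId distinguishes signals from one another,
        # e.g. an instance ID or a job ID.
        "UniqueId": unique_id,
        "Status": "SUCCESS" if success else "FAILURE",
    }

sig = build_signal("web-stack", "WebServerGroup", "job-42", success=True)
print(sig["Status"])
```

CloudFormation counts these signals against the creation policy's required count before proceeding.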
", "StartResourceScan": "

Starts a scan of the resources in this account in this Region. You can check the status of a scan using the ListResourceScans API action.

", "StopStackSetOperation": "

Stops an in-progress operation on a stack set and its associated stack instances. StackSets will cancel all the unstarted stack instance deployments and wait for those that are in progress to complete.

", - "TestType": "

Tests a registered extension to make sure it meets all necessary requirements for being published in the CloudFormation registry.

For more information, see Testing your public extension prior to publishing in the CloudFormation CLI User Guide.

If you don't specify a version, CloudFormation uses the default version of the extension in your account and Region for testing.

To perform testing, CloudFormation assumes the execution role specified when the type was registered. For more information, see RegisterType .

Once you've initiated testing on an extension using TestType, you can pass the returned TypeVersionArn into DescribeType to monitor the current test status and test status description for the extension.

An extension must have a test status of PASSED before it can be published. For more information, see Publishing extensions to make them available for public use in the CloudFormation CLI User Guide.

", + "TestType": "

Tests a registered extension to make sure it meets all necessary requirements for being published in the CloudFormation registry.

For more information, see Testing your public extension prior to publishing in the CloudFormation CLI User Guide.

If you don't specify a version, CloudFormation uses the default version of the extension in your account and Region for testing.

To perform testing, CloudFormation assumes the execution role specified when the type was registered. For more information, see RegisterType.

Once you've initiated testing on an extension using TestType, you can pass the returned TypeVersionArn into DescribeType to monitor the current test status and test status description for the extension.

An extension must have a test status of PASSED before it can be published. For more information, see Publishing extensions to make them available for public use in the CloudFormation CLI User Guide.

", "UpdateGeneratedTemplate": "

Updates a generated template. This can be used to change the name, add and remove resources, refresh resources, and change the DeletionPolicy and UpdateReplacePolicy settings. You can check the status of the update to the generated template using the DescribeGeneratedTemplate API action.

", "UpdateStack": "

Updates a stack as specified in the template. After the call completes successfully, the stack update starts. You can check the status of the stack through the DescribeStacks action.

To get a copy of the template for an existing stack, you can use the GetTemplate action.

For more information about creating an update template, updating a stack, and monitoring the progress of the update, see Updating a Stack.

", - "UpdateStackInstances": "

Updates the parameter values for stack instances for the specified accounts, within the specified Amazon Web Services Regions. A stack instance refers to a stack in a specific account and Region.

You can only update stack instances in Amazon Web Services Regions and accounts where they already exist; to create additional stack instances, use CreateStackInstances .

During stack set updates, any parameters overridden for a stack instance aren't updated, but retain their overridden value.

You can only update the parameter values that are specified in the stack set; to add or delete a parameter itself, use UpdateStackSet to update the stack set template. If you add a parameter to a template, before you can override the parameter value specified in the stack set you must first use UpdateStackSet to update all stack instances with the updated template and parameter value specified in the stack set. Once a stack instance has been updated with the new parameter, you can then override the parameter value using UpdateStackInstances.

", + "UpdateStackInstances": "

Updates the parameter values for stack instances for the specified accounts, within the specified Amazon Web Services Regions. A stack instance refers to a stack in a specific account and Region.

You can only update stack instances in Amazon Web Services Regions and accounts where they already exist; to create additional stack instances, use CreateStackInstances.

During stack set updates, any parameters overridden for a stack instance aren't updated, but retain their overridden value.

You can only update the parameter values that are specified in the stack set; to add or delete a parameter itself, use UpdateStackSet to update the stack set template. If you add a parameter to a template, before you can override the parameter value specified in the stack set you must first use UpdateStackSet to update all stack instances with the updated template and parameter value specified in the stack set. Once a stack instance has been updated with the new parameter, you can then override the parameter value using UpdateStackInstances.

", "UpdateStackSet": "

Updates the stack set, and associated stack instances in the specified accounts and Amazon Web Services Regions.

Even if the stack set operation created by updating the stack set fails (completely or partially, below or above a specified failure tolerance), the stack set is updated with your changes. Subsequent CreateStackInstances calls on the specified stack set use the updated stack set.

", "UpdateTerminationProtection": "

Updates termination protection for the specified stack. If a user attempts to delete a stack with termination protection enabled, the operation fails and the stack remains unchanged. For more information, see Protecting a Stack From Being Deleted in the CloudFormation User Guide.

For nested stacks, termination protection is set on the root stack and can't be changed directly on the nested stack.

", "ValidateTemplate": "

Validates a specified template. CloudFormation first checks if the template is valid JSON. If it isn't, CloudFormation checks if the template is valid YAML. If both these checks fail, CloudFormation returns a template validation error.

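The JSON-first, YAML-second ordering described above can be sketched as follows. This is an assumption-laden illustration, not CloudFormation's implementation: the YAML step is injected as a callable so the sketch stays stdlib-only (a real implementation would use a YAML parser such as PyYAML).

```python
import json

def classify_template(body, yaml_parse=None):
    """Try JSON first; fall back to YAML; otherwise report invalid."""
    try:
        json.loads(body)
        return "json"
    except ValueError:
        pass
    if yaml_parse is not None:
        try:
            yaml_parse(body)
            return "yaml"
        except Exception:
            pass
    return "invalid"

print(classify_template('{"Resources": {}}'))  # json
```

When both checks fail, the real service surfaces this as a template validation error.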
" @@ -294,15 +294,15 @@ "Capabilities": { "base": null, "refs": { - "CreateChangeSetInput$Capabilities": "

In some cases, you must explicitly acknowledge that your stack template contains certain capabilities in order for CloudFormation to create the stack.

Only one of the Capabilities and ResourceType parameters can be specified.

", - "CreateStackInput$Capabilities": "

In some cases, you must explicitly acknowledge that your stack template contains certain capabilities in order for CloudFormation to create the stack.

Only one of the Capabilities and ResourceType parameters can be specified.

", - "CreateStackSetInput$Capabilities": "

In some cases, you must explicitly acknowledge that your stack set template contains certain capabilities in order for CloudFormation to create the stack set and related stack instances.

", + "CreateChangeSetInput$Capabilities": "

In some cases, you must explicitly acknowledge that your stack template contains certain capabilities in order for CloudFormation to create the stack.

Only one of the Capabilities and ResourceType parameters can be specified.

", + "CreateStackInput$Capabilities": "

In some cases, you must explicitly acknowledge that your stack template contains certain capabilities in order for CloudFormation to create the stack.

Only one of the Capabilities and ResourceType parameters can be specified.

", + "CreateStackSetInput$Capabilities": "

In some cases, you must explicitly acknowledge that your stack set template contains certain capabilities in order for CloudFormation to create the stack set and related stack instances.

", "DescribeChangeSetOutput$Capabilities": "

If you execute the change set, the list of capabilities that were explicitly acknowledged when the change set was created.

", "GetTemplateSummaryOutput$Capabilities": "

The capabilities found within the template. If your template contains IAM resources, you must specify the CAPABILITY_IAM or CAPABILITY_NAMED_IAM value for this parameter when you use the CreateStack or UpdateStack actions with your template; otherwise, those actions return an InsufficientCapabilities error.

For more information, see Acknowledging IAM Resources in CloudFormation Templates.

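The capability acknowledgement described above looks like this in practice. A minimal sketch, assuming a placeholder stack name and template URL (no call is made; only the parameter shape is shown):

```python
# Illustrative sketch: acknowledging IAM capabilities when creating a stack.
# The stack name and template URL are placeholders.

def create_stack_params(name, template_url, has_named_iam=False):
    caps = ["CAPABILITY_NAMED_IAM"] if has_named_iam else ["CAPABILITY_IAM"]
    return {
        "StackName": name,
        "TemplateURL": template_url,
        # Without this acknowledgement, CreateStack/UpdateStack return an
        # InsufficientCapabilities error for templates with IAM resources.
        "Capabilities": caps,
    }

print(create_stack_params("iam-stack",
                          "https://example.com/template.json")["Capabilities"])
```

Templates that create named IAM resources need the stronger `CAPABILITY_NAMED_IAM` acknowledgement instead.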
", "Stack$Capabilities": "

The capabilities allowed in the stack.

", "StackSet$Capabilities": "

The capabilities that are allowed in the stack set. Some stack set templates might include resources that can affect permissions in your Amazon Web Services account—for example, by creating new Identity and Access Management (IAM) users. For more information, see Acknowledging IAM Resources in CloudFormation Templates.

", - "UpdateStackInput$Capabilities": "

In some cases, you must explicitly acknowledge that your stack template contains certain capabilities in order for CloudFormation to update the stack.

Only one of the Capabilities and ResourceType parameters can be specified.

", - "UpdateStackSetInput$Capabilities": "

In some cases, you must explicitly acknowledge that your stack template contains certain capabilities in order for CloudFormation to update the stack set and its associated stack instances.

", + "UpdateStackInput$Capabilities": "

In some cases, you must explicitly acknowledge that your stack template contains certain capabilities in order for CloudFormation to update the stack.

Only one of the Capabilities and ResourceType parameters can be specified.

", + "UpdateStackSetInput$Capabilities": "

In some cases, you must explicitly acknowledge that your stack template contains certain capabilities in order for CloudFormation to update the stack set and its associated stack instances.

", "ValidateTemplateOutput$Capabilities": "

The capabilities found within the template. If your template contains IAM resources, you must specify the CAPABILITY_IAM or CAPABILITY_NAMED_IAM value for this parameter when you use the CreateStack or UpdateStack actions with your template; otherwise, those actions return an InsufficientCapabilities error.

For more information, see Acknowledging IAM Resources in CloudFormation Templates.

" } }, @@ -520,7 +520,7 @@ "ConfigurationSchema": { "base": null, "refs": { - "DescribeTypeOutput$ConfigurationSchema": "

A JSON string that represent the current configuration data for the extension in this account and Region.

To set the configuration data for an extension, use SetTypeConfiguration . For more information, see Configuring extensions at the account level in the CloudFormation User Guide.

" + "DescribeTypeOutput$ConfigurationSchema": "

A JSON string that represents the current configuration data for the extension in this account and Region.

To set the configuration data for an extension, use SetTypeConfiguration. For more information, see Configuring extensions at the account level in the CloudFormation User Guide.

" } }, "ConnectionArn": { @@ -951,7 +951,7 @@ "base": null, "refs": { "CreateStackInput$DisableRollback": "

Set to true to disable rollback of the stack if stack creation failed. You can specify either DisableRollback or OnFailure, but not both.

Default: false

", - "ExecuteChangeSetInput$DisableRollback": "

Preserves the state of previously provisioned resources when an operation fails. This parameter can't be specified when the OnStackFailure parameter to the CreateChangeSet API operation was specified.

Default: True

", + "ExecuteChangeSetInput$DisableRollback": "

Preserves the state of previously provisioned resources when an operation fails. This parameter can't be specified when the OnStackFailure parameter to the CreateChangeSet API operation was specified.

Default: True

", "Stack$DisableRollback": "

Boolean to enable or disable rollback on stack creation failures:

", "UpdateStackInput$DisableRollback": "

Preserve the state of previously provisioned resources when an operation fails.

Default: False

" } @@ -1079,7 +1079,7 @@ "GeneratedTemplateDeletionPolicy": { "base": null, "refs": { - "TemplateConfiguration$DeletionPolicy": "

The DeletionPolicy assigned to resources in the generated template. Supported values are:

For more information, see DeletionPolicy attribute in the CloudFormation User Guide.

" + "TemplateConfiguration$DeletionPolicy": "

The DeletionPolicy assigned to resources in the generated template. Supported values are:

For more information, see DeletionPolicy attribute in the CloudFormation User Guide.

" } }, "GeneratedTemplateId": { @@ -1126,7 +1126,7 @@ "GeneratedTemplateUpdateReplacePolicy": { "base": null, "refs": { - "TemplateConfiguration$UpdateReplacePolicy": "

The UpdateReplacePolicy assigned to resources in the generated template. Supported values are:

For more information, see UpdateReplacePolicy attribute in the CloudFormation User Guide.

" + "TemplateConfiguration$UpdateReplacePolicy": "

The UpdateReplacePolicy assigned to resources in the generated template. Supported values are:

For more information, see UpdateReplacePolicy attribute in the CloudFormation User Guide.

" } }, "GetGeneratedTemplateInput": { @@ -1581,7 +1581,7 @@ "base": "

Contains logging configuration information for an extension.

", "refs": { "ActivateTypeInput$LoggingConfig": "

Contains logging configuration information for an extension.

", - "DescribeTypeOutput$LoggingConfig": "

Contains logging configuration information for private extensions. This applies only to private extensions you have registered in your account. For public extensions, both those provided by Amazon Web Services and published by third parties, CloudFormation returns null. For more information, see RegisterType .

", + "DescribeTypeOutput$LoggingConfig": "

Contains logging configuration information for private extensions. This applies only to private extensions you have registered in your account. For public extensions, both those provided by Amazon Web Services and published by third parties, CloudFormation returns null. For more information, see RegisterType.

", "RegisterTypeInput$LoggingConfig": "

Specifies logging configuration information for an extension.

" } }, @@ -1696,7 +1696,7 @@ "MonitoringTimeInMinutes": { "base": null, "refs": { - "RollbackConfiguration$MonitoringTimeInMinutes": "

The amount of time, in minutes, during which CloudFormation should monitor all the rollback triggers after the stack creation or update operation deploys all necessary resources.

The default is 0 minutes.

If you specify a monitoring period but don't specify any rollback triggers, CloudFormation still waits the specified period of time before cleaning up old resources after update operations. You can use this monitoring period to perform any manual stack validation desired, and manually cancel the stack creation or update (using CancelUpdateStack , for example) as necessary.

If you specify 0 for this parameter, CloudFormation still monitors the specified rollback triggers during stack creation and update operations. Then, for update operations, it begins disposing of old resources immediately once the operation completes.

" + "RollbackConfiguration$MonitoringTimeInMinutes": "

The amount of time, in minutes, during which CloudFormation should monitor all the rollback triggers after the stack creation or update operation deploys all necessary resources.

The default is 0 minutes.

If you specify a monitoring period but don't specify any rollback triggers, CloudFormation still waits the specified period of time before cleaning up old resources after update operations. You can use this monitoring period to perform any manual stack validation desired, and manually cancel the stack creation or update (using CancelUpdateStack, for example) as necessary.

If you specify 0 for this parameter, CloudFormation still monitors the specified rollback triggers during stack creation and update operations. Then, for update operations, it begins disposing of old resources immediately once the operation completes.

" } }, "NameAlreadyExistsException": { @@ -1740,7 +1740,7 @@ "ListStackResourcesInput$NextToken": "

A string that identifies the next page of stack resources that you want to retrieve.

", "ListStackResourcesOutput$NextToken": "

If the output exceeds 1 MB, a string that identifies the next page of stack resources. If no additional page exists, this value is null.

", "ListStackSetAutoDeploymentTargetsInput$NextToken": "

A string that identifies the next page of stack set deployment targets that you want to retrieve.

", - "ListStackSetAutoDeploymentTargetsOutput$NextToken": "

If the request doesn't return all the remaining results, NextToken is set to a token. To retrieve the next set of results, call ListStackSetAutoDeploymentTargets again and use that value for the NextToken parameter. If the request returns all results, NextToken is set to an empty string.

", + "ListStackSetAutoDeploymentTargetsOutput$NextToken": "

If the request doesn't return all the remaining results, NextToken is set to a token. To retrieve the next set of results, call ListStackSetAutoDeploymentTargets again and use that value for the NextToken parameter. If the request returns all results, NextToken is set to an empty string.

", "ListStackSetOperationResultsInput$NextToken": "

If the previous request didn't return all the remaining results, the response object's NextToken parameter value is set to a token. To retrieve the next set of results, call ListStackSetOperationResults again and assign that token to the request object's NextToken parameter. If there are no remaining results, the previous response object's NextToken parameter is set to null.

", "ListStackSetOperationResultsOutput$NextToken": "

If the request doesn't return all results, NextToken is set to a token. To retrieve the next set of results, call ListStackSetOperationResults again and assign that token to the request object's NextToken parameter. If there are no remaining results, NextToken is set to null.
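The NextToken contract these entries describe is the standard AWS pagination loop. A minimal sketch, using a stubbed page-fetching function in place of a real CloudFormation client call (the stub data and names are illustrative):

```python
def collect_all(fetch_page):
    """Follow the NextToken contract: keep calling until the response
    carries no token (absent/null or empty string both mean done)."""
    items, token = [], None
    while True:
        resp = fetch_page(next_token=token)
        items.extend(resp["Summaries"])
        token = resp.get("NextToken")
        if not token:
            return items

# Stub pages standing in for paginated API responses (illustrative data).
_PAGES = {
    None: {"Summaries": ["result-1", "result-2"], "NextToken": "page-2"},
    "page-2": {"Summaries": ["result-3"]},  # last page: no NextToken
}

def fake_fetch(next_token=None):
    return _PAGES[next_token]

print(collect_all(fake_fetch))  # ['result-1', 'result-2', 'result-3']
```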

", "ListStackSetOperationsInput$NextToken": "

If the previous paginated request didn't return all of the remaining results, the response object's NextToken parameter value is set to a token. To retrieve the next set of results, call ListStackSetOperations again and assign that token to the request object's NextToken parameter. If there are no remaining results, the previous response object's NextToken parameter is set to null.

", @@ -1795,8 +1795,8 @@ "OnStackFailure": { "base": null, "refs": { - "CreateChangeSetInput$OnStackFailure": "

Determines what action will be taken if stack creation fails. If this parameter is specified, the DisableRollback parameter to the ExecuteChangeSet API operation must not be specified. This must be one of these values:

For nested stacks, when the OnStackFailure parameter is set to DELETE for the change set for the parent stack, any failure in a child stack will cause the parent stack creation to fail and all stacks to be deleted.

", - "DescribeChangeSetOutput$OnStackFailure": "

Determines what action will be taken if stack creation fails. When this parameter is specified, the DisableRollback parameter to the ExecuteChangeSet API operation must not be specified. This must be one of these values:

" + "CreateChangeSetInput$OnStackFailure": "

Determines what action will be taken if stack creation fails. If this parameter is specified, the DisableRollback parameter to the ExecuteChangeSet API operation must not be specified. This must be one of these values:

For nested stacks, when the OnStackFailure parameter is set to DELETE for the change set for the parent stack, any failure in a child stack will cause the parent stack creation to fail and all stacks to be deleted.

", + "DescribeChangeSetOutput$OnStackFailure": "

Determines what action will be taken if stack creation fails. When this parameter is specified, the DisableRollback parameter to the ExecuteChangeSet API operation must not be specified. This must be one of these values:

" } }, "OperationIdAlreadyExistsException": { @@ -1867,10 +1867,10 @@ "base": null, "refs": { "OrganizationalUnitIdList$member": null, - "StackInstance$OrganizationalUnitId": "

[Service-managed permissions] The organization root ID or organizational unit (OU) IDs that you specified for DeploymentTargets .

", - "StackInstanceSummary$OrganizationalUnitId": "

[Service-managed permissions] The organization root ID or organizational unit (OU) IDs that you specified for DeploymentTargets .

", + "StackInstance$OrganizationalUnitId": "

[Service-managed permissions] The organization root ID or organizational unit (OU) IDs that you specified for DeploymentTargets.

", + "StackInstanceSummary$OrganizationalUnitId": "

[Service-managed permissions] The organization root ID or organizational unit (OU) IDs that you specified for DeploymentTargets.

", "StackSetAutoDeploymentTargetSummary$OrganizationalUnitId": "

The organization root ID or organizational unit (OU) IDs where the stack set is targeted.

", - "StackSetOperationResultSummary$OrganizationalUnitId": "

[Service-managed permissions] The organization root ID or organizational unit (OU) IDs that you specified for DeploymentTargets .

" + "StackSetOperationResultSummary$OrganizationalUnitId": "

[Service-managed permissions] The organization root ID or organizational unit (OU) IDs that you specified for DeploymentTargets.

" } }, "OrganizationalUnitIdList": { @@ -1878,7 +1878,7 @@ "refs": { "DeploymentTargets$OrganizationalUnitIds": "

The organization root ID or organizational unit (OU) IDs to which StackSets deploys.

", "ImportStacksToStackSetInput$OrganizationalUnitIds": "

The list of OU IDs to which the stacks being imported must be mapped as deployment targets.

", - "StackSet$OrganizationalUnitIds": "

[Service-managed permissions] The organization root ID or organizational unit (OU) IDs that you specified for DeploymentTargets .

" + "StackSet$OrganizationalUnitIds": "

[Service-managed permissions] The organization root ID or organizational unit (OU) IDs that you specified for DeploymentTargets.

" } }, "Output": { @@ -1956,16 +1956,16 @@ "base": null, "refs": { "CreateChangeSetInput$Parameters": "

A list of Parameter structures that specify input parameters for the change set. For more information, see the Parameter data type.

", - "CreateStackInput$Parameters": "

A list of Parameter structures that specify input parameters for the stack. For more information, see the Parameter data type.

", - "CreateStackInstancesInput$ParameterOverrides": "

A list of stack set parameters whose values you want to override in the selected stack instances.

Any overridden parameter values will be applied to all stack instances in the specified accounts and Amazon Web Services Regions. When specifying parameters and their values, be aware of how CloudFormation sets parameter values during stack instance operations:

During stack set updates, any parameter values overridden for a stack instance aren't updated, but retain their overridden value.

You can only override the parameter values that are specified in the stack set; to add or delete a parameter itself, use UpdateStackSet to update the stack set template.

", + "CreateStackInput$Parameters": "

A list of Parameter structures that specify input parameters for the stack. For more information, see the Parameter data type.

", + "CreateStackInstancesInput$ParameterOverrides": "

A list of stack set parameters whose values you want to override in the selected stack instances.

Any overridden parameter values will be applied to all stack instances in the specified accounts and Amazon Web Services Regions. When specifying parameters and their values, be aware of how CloudFormation sets parameter values during stack instance operations:

During stack set updates, any parameter values overridden for a stack instance aren't updated, but retain their overridden value.

You can only override the parameter values that are specified in the stack set; to add or delete a parameter itself, use UpdateStackSet to update the stack set template.

", "CreateStackSetInput$Parameters": "

The input parameters for the stack set template.

", - "DescribeChangeSetOutput$Parameters": "

A list of Parameter structures that describes the input parameters and their values used to create the change set. For more information, see the Parameter data type.

", + "DescribeChangeSetOutput$Parameters": "

A list of Parameter structures that describes the input parameters and their values used to create the change set. For more information, see the Parameter data type.

", "EstimateTemplateCostInput$Parameters": "

A list of Parameter structures that specify input parameters.

", "Stack$Parameters": "

A list of Parameter structures.

", "StackInstance$ParameterOverrides": "

A list of parameters from the stack set template whose values have been overridden in this stack instance.

", "StackSet$Parameters": "

A list of input parameters for a stack set.

", - "UpdateStackInput$Parameters": "

A list of Parameter structures that specify input parameters for the stack. For more information, see the Parameter data type.

", - "UpdateStackInstancesInput$ParameterOverrides": "

A list of input parameters whose values you want to update for the specified stack instances.

Any overridden parameter values will be applied to all stack instances in the specified accounts and Amazon Web Services Regions. When specifying parameters and their values, be aware of how CloudFormation sets parameter values during stack instance update operations:

During stack set updates, any parameter values overridden for a stack instance aren't updated, but retain their overridden value.

You can only override the parameter values that are specified in the stack set; to add or delete a parameter itself, use UpdateStackSet to update the stack set template. If you add a parameter to a template, before you can override the parameter value specified in the stack set you must first use UpdateStackSet to update all stack instances with the updated template and parameter value specified in the stack set. Once a stack instance has been updated with the new parameter, you can then override the parameter value using UpdateStackInstances.

", + "UpdateStackInput$Parameters": "

A list of Parameter structures that specify input parameters for the stack. For more information, see the Parameter data type.

", + "UpdateStackInstancesInput$ParameterOverrides": "

A list of input parameters whose values you want to update for the specified stack instances.

Any overridden parameter values will be applied to all stack instances in the specified accounts and Amazon Web Services Regions. When specifying parameters and their values, be aware of how CloudFormation sets parameter values during stack instance update operations:

During stack set updates, any parameter values overridden for a stack instance aren't updated, but retain their overridden value.

You can only override the parameter values that are specified in the stack set; to add or delete a parameter itself, use UpdateStackSet to update the stack set template. If you add a parameter to a template, before you can override the parameter value specified in the stack set you must first use UpdateStackSet to update all stack instances with the updated template and parameter value specified in the stack set. Once a stack instance has been updated with the new parameter, you can then override the parameter value using UpdateStackInstances.

", "UpdateStackSetInput$Parameters": "

A list of input parameters for the stack set template.

" } }, @@ -2634,7 +2634,7 @@ "base": null, "refs": { "ActivateTypeInput$ExecutionRoleArn": "

The name of the IAM execution role to use to activate the extension.

", - "DescribeTypeOutput$ExecutionRoleArn": "

The Amazon Resource Name (ARN) of the IAM execution role used to register the extension. This applies only to private extensions you have registered in your account. For more information, see RegisterType .

If the registered extension calls any Amazon Web Services APIs, you must create an IAM execution role that includes the necessary permissions to call those Amazon Web Services APIs, and provision that execution role in your account. CloudFormation then assumes that execution role to provide your extension with the appropriate credentials.

", + "DescribeTypeOutput$ExecutionRoleArn": "

The Amazon Resource Name (ARN) of the IAM execution role used to register the extension. This applies only to private extensions you have registered in your account. For more information, see RegisterType.

If the registered extension calls any Amazon Web Services APIs, you must create an IAM execution role that includes the necessary permissions to call those Amazon Web Services APIs, and provision that execution role in your account. CloudFormation then assumes that execution role to provide your extension with the appropriate credentials.

", "LoggingConfig$LogRoleArn": "

The Amazon Resource Name (ARN) of the role that CloudFormation should assume when sending log entries to CloudWatch Logs.

", "RegisterTypeInput$ExecutionRoleArn": "

The Amazon Resource Name (ARN) of the IAM role for CloudFormation to assume when invoking the extension.

For CloudFormation to assume the specified execution role, the role must contain a trust relationship with the CloudFormation service principal (resources.cloudformation.amazonaws.com). For more information about adding trust relationships, see Modifying a role trust policy in the Identity and Access Management User Guide.

If your extension calls Amazon Web Services APIs in any of its handlers, you must create an IAM execution role that includes the necessary permissions to call those Amazon Web Services APIs, and provision that execution role in your account. When CloudFormation needs to invoke the resource type handler, CloudFormation assumes this execution role to create a temporary session token, which it then passes to the resource type handler, thereby supplying your resource type with the appropriate credentials.
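The trust relationship mentioned above can be sketched as a standard IAM trust policy document naming the CloudFormation service principal from this entry:

```python
import json

# Minimal trust policy allowing the CloudFormation service principal to
# assume the execution role (standard IAM policy grammar).
trust_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "resources.cloudformation.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
})
```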

" } @@ -3099,7 +3099,7 @@ } }, "StackSetAutoDeploymentTargetSummary": { - "base": "

One of the targets for the stack set. Returned by the ListStackSetAutoDeploymentTargets API operation.

", + "base": "

One of the targets for the stack set. Returned by the ListStackSetAutoDeploymentTargets API operation.

", "refs": { "StackSetAutoDeploymentTargetSummaries$member": null } @@ -3531,7 +3531,7 @@ "DescribeResourceScanOutput$StartTime": "

The time that the resource scan was started.

", "DescribeResourceScanOutput$EndTime": "

The time that the resource scan was finished.

", "DescribeStackDriftDetectionStatusOutput$Timestamp": "

Time at which the stack drift detection operation was initiated.

", - "DescribeTypeOutput$LastUpdated": "

When the specified extension version was registered. This applies only to:

", + "DescribeTypeOutput$LastUpdated": "

When the specified extension version was registered. This applies only to:

", "DescribeTypeOutput$TimeCreated": "

When the specified private extension version was registered or activated in your account.

", "ResourceScanSummary$StartTime": "

The time that the resource scan was started.

", "ResourceScanSummary$EndTime": "

The time that the resource scan was finished.

", @@ -3554,7 +3554,7 @@ "StackSetOperationSummary$EndTimestamp": "

The time at which the stack set operation ended, across all accounts and Regions specified. Note that this doesn't necessarily mean that the stack set operation was successful, or even attempted, in each account or Region.

", "StackSetSummary$LastDriftCheckTimestamp": "

Most recent time when CloudFormation performed a drift detection operation on the stack set. This value will be NULL for any stack set on which drift detection hasn't yet been performed.

", "TypeConfigurationDetails$LastUpdated": "

When the configuration data was last updated for this extension.

If a configuration hasn't been set for a specified extension, CloudFormation returns null.

", - "TypeSummary$LastUpdated": "

When the specified extension version was registered. This applies only to:

For all other extension types, CloudFormation returns null.

", + "TypeSummary$LastUpdated": "

When the specified extension version was registered. This applies only to:

For all other extension types, CloudFormation returns null.

", "TypeVersionSummary$TimeCreated": "

When the version was registered.

" } }, @@ -3597,7 +3597,7 @@ "Type": { "base": null, "refs": { - "RollbackTrigger$Type": "

The resource type of the rollback trigger. Specify either AWS::CloudWatch::Alarm or AWS::CloudWatch::CompositeAlarm resource types.

" + "RollbackTrigger$Type": "

The resource type of the rollback trigger. Specify either AWS::CloudWatch::Alarm or AWS::CloudWatch::CompositeAlarm resource types.

" } }, "TypeArn": { @@ -3611,11 +3611,11 @@ "ListTypeRegistrationsInput$TypeArn": "

The Amazon Resource Name (ARN) of the extension.

Conditional: You must specify either TypeName and Type, or Arn.

", "ListTypeVersionsInput$Arn": "

The Amazon Resource Name (ARN) of the extension for which you want version summary information.

Conditional: You must specify either TypeName and Type, or Arn.

", "PublishTypeOutput$PublicTypeArn": "

The Amazon Resource Name (ARN) assigned to the public extension upon publication.

", - "SetTypeConfigurationInput$TypeArn": "

The Amazon Resource Name (ARN) for the extension, in this account and Region.

For public extensions, this will be the ARN assigned when you call the ActivateType API operation in this account and Region. For private extensions, this will be the ARN assigned when you call the RegisterType API operation in this account and Region.

Do not include the extension versions suffix at the end of the ARN. You can set the configuration for an extension, but not for a specific extension version.

", + "SetTypeConfigurationInput$TypeArn": "

The Amazon Resource Name (ARN) for the extension, in this account and Region.

For public extensions, this will be the ARN assigned when you call the ActivateType API operation in this account and Region. For private extensions, this will be the ARN assigned when you call the RegisterType API operation in this account and Region.

Do not include the extension versions suffix at the end of the ARN. You can set the configuration for an extension, but not for a specific extension version.

", "TestTypeInput$Arn": "

The Amazon Resource Name (ARN) of the extension.

Conditional: You must specify Arn, or TypeName and Type.

", "TestTypeOutput$TypeVersionArn": "

The Amazon Resource Name (ARN) of the extension.

", - "TypeConfigurationDetails$TypeArn": "

The Amazon Resource Name (ARN) for the extension, in this account and Region.

For public extensions, this will be the ARN assigned when you call the ActivateType API operation in this account and Region. For private extensions, this will be the ARN assigned when you call the RegisterType API operation in this account and Region.

", - "TypeConfigurationIdentifier$TypeArn": "

The Amazon Resource Name (ARN) for the extension, in this account and Region.

For public extensions, this will be the ARN assigned when you call the ActivateType API operation in this account and Region. For private extensions, this will be the ARN assigned when you call the RegisterType API operation in this account and Region.

", + "TypeConfigurationDetails$TypeArn": "

The Amazon Resource Name (ARN) for the extension, in this account and Region.

For public extensions, this will be the ARN assigned when you call the ActivateType API operation in this account and Region. For private extensions, this will be the ARN assigned when you call the RegisterType API operation in this account and Region.

", + "TypeConfigurationIdentifier$TypeArn": "

The Amazon Resource Name (ARN) for the extension, in this account and Region.

For public extensions, this will be the ARN assigned when you call the ActivateType API operation in this account and Region. For private extensions, this will be the ARN assigned when you call the RegisterType API operation in this account and Region.

", "TypeSummary$TypeArn": "

The Amazon Resource Name (ARN) of the extension.

", "TypeVersionSummary$Arn": "

The Amazon Resource Name (ARN) of the extension version.

" } @@ -3623,7 +3623,7 @@ "TypeConfiguration": { "base": null, "refs": { - "SetTypeConfigurationInput$Configuration": "

The configuration data for the extension, in this account and Region.

The configuration data must be formatted as JSON, and validate against the schema returned in the ConfigurationSchema response element of DescribeType . For more information, see Defining account-level configuration data for an extension in the CloudFormation CLI User Guide.

", + "SetTypeConfigurationInput$Configuration": "

The configuration data for the extension, in this account and Region.

The configuration data must be formatted as JSON, and validate against the schema returned in the ConfigurationSchema response element of DescribeType. For more information, see Defining account-level configuration data for an extension in the CloudFormation CLI User Guide.
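As a sketch, the Configuration value is a serialized JSON document; the property names below are hypothetical, since the real ones are dictated by the schema in the ConfigurationSchema response element of DescribeType:

```python
import json

# Hypothetical account-level configuration for an extension; real property
# names come from the extension's ConfigurationSchema.
configuration = json.dumps({
    "Properties": {
        "Endpoint": "https://api.example.com",
        "RetryCount": 3,
    }
})
# This string would be passed as the Configuration parameter of
# SetTypeConfiguration, together with TypeArn (or TypeName and Type).
```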

", "TypeConfigurationDetails$Configuration": "

A JSON string specifying the configuration data for the extension, in this account and Region.

If a configuration hasn't been set for a specified extension, CloudFormation returns {}.

" } }, @@ -3694,7 +3694,7 @@ "DeactivateTypeInput$TypeName": "

The type name of the extension, in this account and Region. If you specified a type name alias when enabling the extension, use the type name alias.

Conditional: You must specify either Arn, or TypeName and Type.

", "DeregisterTypeInput$TypeName": "

The name of the extension.

Conditional: You must specify either TypeName and Type, or Arn.

", "DescribeTypeInput$TypeName": "

The name of the extension.

Conditional: You must specify either TypeName and Type, or Arn.

", - "DescribeTypeOutput$TypeName": "

The name of the extension.

If the extension is a public third-party type you have activated with a type name alias, CloudFormation returns the type name alias. For more information, see ActivateType .

", + "DescribeTypeOutput$TypeName": "

The name of the extension.

If the extension is a public third-party type you have activated with a type name alias, CloudFormation returns the type name alias. For more information, see ActivateType.

", "DescribeTypeOutput$OriginalTypeName": "

For public extensions that have been activated for this account and Region, the type name of the public extension.

If you specified a TypeNameAlias when enabling the extension in this account and Region, CloudFormation treats that alias as the extension's type name within the account and Region, not the type name of the public extension. For more information, see Specifying aliases to refer to extensions in the CloudFormation User Guide.

", "ListTypeRegistrationsInput$TypeName": "

The name of the extension.

Conditional: You must specify either TypeName and Type, or Arn.

", "ListTypeVersionsInput$TypeName": "

The name of the extension for which you want version summary information.

Conditional: You must specify either TypeName and Type, or Arn.

", @@ -3707,7 +3707,7 @@ "TestTypeInput$TypeName": "

The name of the extension to test.

Conditional: You must specify Arn, or TypeName and Type.

", "TypeConfigurationDetails$TypeName": "

The name of the extension.

", "TypeConfigurationIdentifier$TypeName": "

The name of the extension type to which this configuration applies.

", - "TypeSummary$TypeName": "

The name of the extension.

If you specified a TypeNameAlias when you call the ActivateType API operation in your account and Region, CloudFormation considers that alias as the type name.

", + "TypeSummary$TypeName": "

The name of the extension.

If you specified a TypeNameAlias when you call the ActivateType API operation in your account and Region, CloudFormation considers that alias as the type name.

", "TypeSummary$OriginalTypeName": "

For public extensions that have been activated for this account and Region, the type name of the public extension.

If you specified a TypeNameAlias when enabling the extension in this account and Region, CloudFormation treats that alias as the extension's type name within the account and Region, not the type name of the public extension. For more information, see Specifying aliases to refer to extensions in the CloudFormation User Guide.

", "TypeVersionSummary$TypeName": "

The name of the extension.

" } @@ -3758,10 +3758,10 @@ "refs": { "DeregisterTypeInput$VersionId": "

The ID of a specific version of the extension. The version ID is the value at the end of the Amazon Resource Name (ARN) assigned to the extension version when it is registered.

", "DescribeTypeInput$VersionId": "

The ID of a specific version of the extension. The version ID is the value at the end of the Amazon Resource Name (ARN) assigned to the extension version when it is registered.

If you specify a VersionId, DescribeType returns information about that specific extension version. Otherwise, it returns information about the default extension version.

", - "DescribeTypeOutput$DefaultVersionId": "

The ID of the default version of the extension. The default version is used when the extension version isn't specified.

This applies only to private extensions you have registered in your account. For public extensions, both those provided by Amazon Web Services and published by third parties, CloudFormation returns null. For more information, see RegisterType .

To set the default version of an extension, use SetTypeDefaultVersion.

", + "DescribeTypeOutput$DefaultVersionId": "

The ID of the default version of the extension. The default version is used when the extension version isn't specified.

This applies only to private extensions you have registered in your account. For public extensions, both those provided by Amazon Web Services and published by third parties, CloudFormation returns null. For more information, see RegisterType.

To set the default version of an extension, use SetTypeDefaultVersion.

", "SetTypeDefaultVersionInput$VersionId": "

The ID of a specific version of the extension. The version ID is the value at the end of the Amazon Resource Name (ARN) assigned to the extension version when it is registered.

", "TestTypeInput$VersionId": "

The version of the extension to test.

You can specify the version ID with either Arn, or with TypeName and Type.

If you don't specify a version, CloudFormation uses the default version of the extension in this account and Region for testing.

", - "TypeSummary$DefaultVersionId": "

The ID of the default version of the extension. The default version is used when the extension version isn't specified.

This applies only to private extensions you have registered in your account. For public extensions, both those provided by Amazon and published by third parties, CloudFormation returns null. For more information, see RegisterType .

To set the default version of an extension, use SetTypeDefaultVersion.

", + "TypeSummary$DefaultVersionId": "

The ID of the default version of the extension. The default version is used when the extension version isn't specified.

This applies only to private extensions you have registered in your account. For public extensions, both those provided by Amazon and published by third parties, CloudFormation returns null. For more information, see RegisterType.

To set the default version of an extension, use SetTypeDefaultVersion.

", "TypeVersionSummary$VersionId": "

The ID of a specific version of the extension. The version ID is the value at the end of the Amazon Resource Name (ARN) assigned to the extension version when it's registered.

" } }, diff --git a/apis/ec2/2016-11-15/api-2.json b/apis/ec2/2016-11-15/api-2.json index 823b7ba567e..6782d0810a2 100644 --- a/apis/ec2/2016-11-15/api-2.json +++ b/apis/ec2/2016-11-15/api-2.json @@ -2613,6 +2613,15 @@ "input":{"shape":"DescribeLockedSnapshotsRequest"}, "output":{"shape":"DescribeLockedSnapshotsResult"} }, + "DescribeMacHosts":{ + "name":"DescribeMacHosts", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeMacHostsRequest"}, + "output":{"shape":"DescribeMacHostsResult"} + }, "DescribeManagedPrefixLists":{ "name":"DescribeManagedPrefixLists", "http":{ @@ -17015,6 +17024,39 @@ } } }, + "DescribeMacHostsRequest":{ + "type":"structure", + "members":{ + "Filters":{ + "shape":"FilterList", + "locationName":"Filter" + }, + "HostIds":{ + "shape":"RequestHostIdList", + "locationName":"HostId" + }, + "MaxResults":{"shape":"DescribeMacHostsRequestMaxResults"}, + "NextToken":{"shape":"String"} + } + }, + "DescribeMacHostsRequestMaxResults":{ + "type":"integer", + "max":500, + "min":5 + }, + "DescribeMacHostsResult":{ + "type":"structure", + "members":{ + "MacHosts":{ + "shape":"MacHostList", + "locationName":"macHostSet" + }, + "NextToken":{ + "shape":"String", + "locationName":"nextToken" + } + } + }, "DescribeManagedPrefixListsRequest":{ "type":"structure", "members":{ @@ -32002,6 +32044,33 @@ ] }, "Long":{"type":"long"}, + "MacHost":{ + "type":"structure", + "members":{ + "HostId":{ + "shape":"DedicatedHostId", + "locationName":"hostId" + }, + "MacOSLatestSupportedVersions":{ + "shape":"MacOSVersionStringList", + "locationName":"macOSLatestSupportedVersionSet" + } + } + }, + "MacHostList":{ + "type":"list", + "member":{ + "shape":"MacHost", + "locationName":"item" + } + }, + "MacOSVersionStringList":{ + "type":"list", + "member":{ + "shape":"String", + "locationName":"item" + } + }, "MaintenanceDetails":{ "type":"structure", "members":{ diff --git a/apis/ec2/2016-11-15/docs-2.json b/apis/ec2/2016-11-15/docs-2.json index 
b0aa5545a73..78b1ab8cb7a 100644 --- a/apis/ec2/2016-11-15/docs-2.json +++ b/apis/ec2/2016-11-15/docs-2.json @@ -294,6 +294,7 @@ "DescribeLocalGatewayVirtualInterfaces": "

Describes the specified local gateway virtual interfaces.

", "DescribeLocalGateways": "

Describes one or more local gateways. By default, all local gateways are described. Alternatively, you can filter the results.

", "DescribeLockedSnapshots": "

Describes the lock status for a snapshot.

", + "DescribeMacHosts": "

Describes the specified EC2 Mac Dedicated Host or all of your EC2 Mac Dedicated Hosts.

", "DescribeManagedPrefixLists": "

Describes your managed prefix lists and any Amazon Web Services-managed prefix lists.

To view the entries for your prefix list, use GetManagedPrefixListEntries.

", "DescribeMovingAddresses": "

This action is deprecated.

Describes your Elastic IP addresses that are being moved from or being restored to the EC2-Classic platform. This request does not return information about any other Elastic IP addresses in your account.

", "DescribeNatGateways": "

Describes one or more of your NAT gateways.

", @@ -5164,6 +5165,7 @@ "refs": { "DedicatedHostIdList$member": null, "LaunchTemplatePlacementRequest$HostId": "

The ID of the Dedicated Host for the instance.

", + "MacHost$HostId": "

The EC2 Mac Dedicated Host ID.

", "ModifyInstancePlacementRequest$HostId": "

The ID of the Dedicated Host with which to associate the instance.

", "RequestHostIdList$member": null, "RequestHostIdSet$member": null @@ -6964,6 +6966,22 @@ "refs": { } }, + "DescribeMacHostsRequest": { + "base": null, + "refs": { + } + }, + "DescribeMacHostsRequestMaxResults": { + "base": null, + "refs": { + "DescribeMacHostsRequest$MaxResults": "

The maximum number of results to return for the request in a single page. The remaining results can be seen by sending another request with the returned nextToken value. This value can be between 5 and 500. If maxResults is given a larger value than 500, you receive an error.
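The 5–500 bound described here matches the DescribeMacHostsRequestMaxResults shape in the api-2.json hunk above (min 5, max 500); a small client-side guard could look like:

```python
def validate_max_results(value, lo=5, hi=500):
    """Client-side guard matching the DescribeMacHostsRequestMaxResults
    shape (min 5, max 500); out-of-range values are rejected by the API."""
    if not lo <= value <= hi:
        raise ValueError(f"MaxResults must be between {lo} and {hi}, got {value}")
    return value

print(validate_max_results(100))  # 100
```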

" + } + }, + "DescribeMacHostsResult": { + "base": null, + "refs": { + } + }, "DescribeManagedPrefixListsRequest": { "base": null, "refs": { @@ -9433,6 +9451,7 @@ "DescribeLocalGatewayVirtualInterfacesRequest$Filters": "

One or more filters.

", "DescribeLocalGatewaysRequest$Filters": "

One or more filters.

", "DescribeLockedSnapshotsRequest$Filters": "

The filters.

", + "DescribeMacHostsRequest$Filters": "

The filters.

", "DescribeManagedPrefixListsRequest$Filters": "

One or more filters.

", "DescribeMovingAddressesRequest$Filters": "

One or more filters.

", "DescribeNatGatewaysRequest$Filter": "

The filters.

", @@ -14283,6 +14302,24 @@ "VpnGateway$AmazonSideAsn": "

The private Autonomous System Number (ASN) for the Amazon side of a BGP session.

" } }, + "MacHost": { + "base": "

Information about the EC2 Mac Dedicated Host.

", + "refs": { + "MacHostList$member": null + } + }, + "MacHostList": { + "base": null, + "refs": { + "DescribeMacHostsResult$MacHosts": "

Information about the EC2 Mac Dedicated Hosts.

" + } + }, + "MacOSVersionStringList": { + "base": null, + "refs": { + "MacHost$MacOSLatestSupportedVersions": "

The latest macOS versions that the EC2 Mac Dedicated Host can launch without being upgraded.

" + } + }, "MaintenanceDetails": { "base": "

Details for Site-to-Site VPN tunnel endpoint maintenance events.

", "refs": { @@ -17413,6 +17450,7 @@ "base": null, "refs": { "DescribeHostsRequest$HostIds": "

The IDs of the Dedicated Hosts. The IDs are used for targeted instance launches.

", + "DescribeMacHostsRequest$HostIds": "

The IDs of the EC2 Mac Dedicated Hosts.

", "ModifyHostsRequest$HostIds": "

The IDs of the Dedicated Hosts to modify.

", "ReleaseHostsRequest$HostIds": "

The IDs of the Dedicated Hosts to release.

" } @@ -19744,6 +19782,8 @@ "DescribeLocalGatewaysResult$NextToken": "

The token to use to retrieve the next page of results. This value is null when there are no more results to return.

", "DescribeLockedSnapshotsRequest$NextToken": "

The token returned from a previous paginated request. Pagination continues from the end of the items returned by the previous request.

", "DescribeLockedSnapshotsResult$NextToken": "

The token to include in another request to get the next page of items. This value is null when there are no more items to return.

", + "DescribeMacHostsRequest$NextToken": "

The token to use to retrieve the next page of results.

", + "DescribeMacHostsResult$NextToken": "

The token to use to retrieve the next page of results.

", "DescribeMovingAddressesRequest$NextToken": "

The token for the next page of results.

", "DescribeMovingAddressesResult$NextToken": "

The token to use to retrieve the next page of results. This value is null when there are no more results to return.

", "DescribeNatGatewaysRequest$NextToken": "

The token returned from a previous paginated request. Pagination continues from the end of the items returned by the previous request.

", @@ -20453,6 +20493,7 @@ "LockSnapshotResult$SnapshotId": "

The ID of the snapshot.

", "LockedSnapshotsInfo$OwnerId": "

The account ID of the Amazon Web Services account that owns the snapshot.

", "LockedSnapshotsInfo$SnapshotId": "

The ID of the snapshot.

", + "MacOSVersionStringList$member": null, "MaintenanceDetails$PendingMaintenance": "

Indicates whether there is pending maintenance.

", "ManagedPrefixList$AddressFamily": "

The IP address version.

", "ManagedPrefixList$StateMessage": "

The state message.

", diff --git a/apis/ec2/2016-11-15/paginators-1.json b/apis/ec2/2016-11-15/paginators-1.json index d0520c22a85..ea572e61e86 100644 --- a/apis/ec2/2016-11-15/paginators-1.json +++ b/apis/ec2/2016-11-15/paginators-1.json @@ -342,6 +342,12 @@ "output_token": "NextToken", "result_key": "LocalGateways" }, + "DescribeMacHosts": { + "input_token": "NextToken", + "limit_key": "MaxResults", + "output_token": "NextToken", + "result_key": "MacHosts" + }, "DescribeManagedPrefixLists": { "input_token": "NextToken", "limit_key": "MaxResults", diff --git a/apis/finspace/2021-03-12/api-2.json b/apis/finspace/2021-03-12/api-2.json index 6ae0719865d..9592cea24a1 100644 --- a/apis/finspace/2021-03-12/api-2.json +++ b/apis/finspace/2021-03-12/api-2.json @@ -895,7 +895,12 @@ "max":100, "min":1 }, - "AvailabilityZoneId":{"type":"string"}, + "AvailabilityZoneId":{ + "type":"string", + "max":12, + "min":8, + "pattern":"^[a-zA-Z0-9-]+$" + }, "AvailabilityZoneIds":{ "type":"list", "member":{"shape":"AvailabilityZoneId"} @@ -939,7 +944,8 @@ "ChangesetId":{ "type":"string", "max":26, - "min":1 + "min":1, + "pattern":"^[a-zA-Z0-9]+$" }, "ChangesetStatus":{ "type":"string", @@ -1182,6 +1188,7 @@ "changesetId":{"shape":"ChangesetId"}, "segmentConfigurations":{"shape":"KxDataviewSegmentConfigurationList"}, "autoUpdate":{"shape":"booleanValue"}, + "readWrite":{"shape":"booleanValue"}, "description":{"shape":"Description"}, "tags":{"shape":"TagMap"}, "clientToken":{ @@ -1202,6 +1209,7 @@ "segmentConfigurations":{"shape":"KxDataviewSegmentConfigurationList"}, "description":{"shape":"Description"}, "autoUpdate":{"shape":"booleanValue"}, + "readWrite":{"shape":"booleanValue"}, "createdTimestamp":{"shape":"Timestamp"}, "lastModifiedTimestamp":{"shape":"Timestamp"}, "status":{"shape":"KxDataviewStatus"} @@ -1969,6 +1977,7 @@ "activeVersions":{"shape":"KxDataviewActiveVersionList"}, "description":{"shape":"Description"}, "autoUpdate":{"shape":"booleanValue"}, + "readWrite":{"shape":"booleanValue"}, 
"environmentId":{"shape":"EnvironmentId"}, "createdTimestamp":{"shape":"Timestamp"}, "lastModifiedTimestamp":{"shape":"Timestamp"}, @@ -2313,13 +2322,13 @@ }, "KxCommandLineArgumentKey":{ "type":"string", - "max":50, + "max":1024, "min":1, "pattern":"^(?![Aa][Ww][Ss])(s|([a-zA-Z][a-zA-Z0-9_]+))|(AWS_ZIP_DEFAULT)" }, "KxCommandLineArgumentValue":{ "type":"string", - "max":50, + "max":1024, "min":1, "pattern":"^[a-zA-Z0-9_:./,]+$" }, @@ -2407,6 +2416,7 @@ "status":{"shape":"KxDataviewStatus"}, "description":{"shape":"Description"}, "autoUpdate":{"shape":"booleanValue"}, + "readWrite":{"shape":"booleanValue"}, "createdTimestamp":{"shape":"Timestamp"}, "lastModifiedTimestamp":{"shape":"Timestamp"}, "statusReason":{"shape":"KxDataviewStatusReason"} @@ -2426,7 +2436,8 @@ ], "members":{ "dbPaths":{"shape":"SegmentConfigurationDbPathList"}, - "volumeName":{"shape":"KxVolumeName"} + "volumeName":{"shape":"KxVolumeName"}, + "onDemand":{"shape":"booleanValue"} } }, "KxDataviewSegmentConfigurationList":{ @@ -2522,7 +2533,6 @@ }, "KxNAS1Size":{ "type":"integer", - "max":33600, "min":1200 }, "KxNAS1Type":{ @@ -3160,7 +3170,7 @@ "type":"string", "max":1093, "min":9, - "pattern":"^s3:\\/\\/[a-z0-9][a-z0-9-]{1,61}[a-z0-9]\\/([^\\/]+\\/)*[^\\/]*$" + "pattern":"^s3:\\/\\/[a-z0-9][a-z0-9-.]{1,61}[a-z0-9]\\/([^\\/]+\\/)*[^\\/]*$" }, "SamlMetadataDocument":{ "type":"string", @@ -3498,6 +3508,7 @@ "activeVersions":{"shape":"KxDataviewActiveVersionList"}, "status":{"shape":"KxDataviewStatus"}, "autoUpdate":{"shape":"booleanValue"}, + "readWrite":{"shape":"booleanValue"}, "description":{"shape":"Description"}, "createdTimestamp":{"shape":"Timestamp"}, "lastModifiedTimestamp":{"shape":"Timestamp"} diff --git a/apis/finspace/2021-03-12/docs-2.json b/apis/finspace/2021-03-12/docs-2.json index 058237c95eb..744976afd14 100644 --- a/apis/finspace/2021-03-12/docs-2.json +++ b/apis/finspace/2021-03-12/docs-2.json @@ -901,18 +901,18 @@ "refs": { "CreateKxClusterRequest$azMode": "

The number of availability zones you want to assign per cluster. This can be one of the following

", "CreateKxClusterResponse$azMode": "

The number of availability zones you want to assign per cluster. This can be one of the following

", - "CreateKxDataviewRequest$azMode": "

The number of availability zones you want to assign per cluster. This can be one of the following

", - "CreateKxDataviewResponse$azMode": "

The number of availability zones you want to assign per cluster. This can be one of the following

", - "CreateKxVolumeRequest$azMode": "

The number of availability zones you want to assign per cluster. Currently, FinSpace only support SINGLE for volumes.

", - "CreateKxVolumeResponse$azMode": "

The number of availability zones you want to assign per cluster. Currently, FinSpace only support SINGLE for volumes.

", + "CreateKxDataviewRequest$azMode": "

The number of availability zones you want to assign per volume. Currently, FinSpace only supports SINGLE for volumes. This places the dataview in a single AZ.

", + "CreateKxDataviewResponse$azMode": "

The number of availability zones you want to assign per volume. Currently, FinSpace only supports SINGLE for volumes. This places the dataview in a single AZ.

", + "CreateKxVolumeRequest$azMode": "

The number of availability zones you want to assign per volume. Currently, FinSpace only supports SINGLE for volumes. This places the volume in a single AZ.

", + "CreateKxVolumeResponse$azMode": "

The number of availability zones you want to assign per volume. Currently, FinSpace only supports SINGLE for volumes. This places the volume in a single AZ.

", "GetKxClusterResponse$azMode": "

The number of availability zones you want to assign per cluster. This can be one of the following

", - "GetKxDataviewResponse$azMode": "

The number of availability zones you want to assign per cluster. This can be one of the following

", - "GetKxVolumeResponse$azMode": "

The number of availability zones you want to assign per cluster. Currently, FinSpace only support SINGLE for volumes.

", + "GetKxDataviewResponse$azMode": "

The number of availability zones you want to assign per volume. Currently, FinSpace only supports SINGLE for volumes. This places the dataview in a single AZ.

", + "GetKxVolumeResponse$azMode": "

The number of availability zones you want to assign per volume. Currently, FinSpace only supports SINGLE for volumes. This places the volume in a single AZ.

", "KxCluster$azMode": "

The number of availability zones assigned per cluster. This can be one of the following:

", - "KxDataviewListEntry$azMode": "

The number of availability zones you want to assign per cluster. This can be one of the following

", - "KxVolume$azMode": "

The number of availability zones assigned to the volume. Currently, only SINGLE is supported.

", - "UpdateKxDataviewResponse$azMode": "

The number of availability zones you want to assign per cluster. This can be one of the following

", - "UpdateKxVolumeResponse$azMode": "

The number of availability zones you want to assign per cluster. Currently, FinSpace only support SINGLE for volumes.

" + "KxDataviewListEntry$azMode": "

The number of availability zones you want to assign per volume. Currently, FinSpace only supports SINGLE for volumes. This places the dataview in a single AZ.

", + "KxVolume$azMode": "

The number of availability zones you want to assign per volume. Currently, FinSpace only supports SINGLE for volumes. This places the volume in a single AZ.

", + "UpdateKxDataviewResponse$azMode": "

The number of availability zones you want to assign per volume. Currently, FinSpace only supports SINGLE for volumes. This places the dataview in a single AZ.

", + "UpdateKxVolumeResponse$azMode": "

The number of availability zones you want to assign per volume. Currently, FinSpace only supports SINGLE for volumes. This places the volume in a single AZ.

" } }, "KxCacheStorageConfiguration": { @@ -1262,10 +1262,10 @@ "KxHostType": { "base": null, "refs": { - "CreateKxScalingGroupRequest$hostType": "

The memory and CPU capabilities of the scaling group host on which FinSpace Managed kdb clusters will be placed.

", + "CreateKxScalingGroupRequest$hostType": "

The memory and CPU capabilities of the scaling group host on which FinSpace Managed kdb clusters will be placed.

You can add one of the following values:

", "CreateKxScalingGroupResponse$hostType": "

The memory and CPU capabilities of the scaling group host on which FinSpace Managed kdb clusters will be placed.

", - "GetKxScalingGroupResponse$hostType": "

The memory and CPU capabilities of the scaling group host on which FinSpace Managed kdb clusters will be placed.

", - "KxScalingGroup$hostType": "

The memory and CPU capabilities of the scaling group host on which FinSpace Managed kdb clusters will be placed.

" + "GetKxScalingGroupResponse$hostType": "

The memory and CPU capabilities of the scaling group host on which FinSpace Managed kdb clusters will be placed.

It can have one of the following values:

", + "KxScalingGroup$hostType": "

The memory and CPU capabilities of the scaling group host on which FinSpace Managed kdb clusters will be placed.

You can add one of the following values:

" } }, "KxNAS1Configuration": { @@ -2129,10 +2129,16 @@ "base": null, "refs": { "CreateKxDataviewRequest$autoUpdate": "

The option to specify whether you want to apply all the future additions and corrections automatically to the dataview when you ingest new changesets. The default value is false.

", + "CreateKxDataviewRequest$readWrite": "

The option to specify whether you want to make the dataview writable to perform database maintenance. The following are some considerations related to writable dataviews.



", "CreateKxDataviewResponse$autoUpdate": "

The option to select whether you want to apply all the future additions and corrections automatically to the dataview when you ingest new changesets. The default value is false.

", + "CreateKxDataviewResponse$readWrite": "

Returns True if the dataview is created as writable and False otherwise.

", "GetKxDataviewResponse$autoUpdate": "

The option to specify whether you want to apply all the future additions and corrections automatically to the dataview when new changesets are ingested. The default value is false.

", + "GetKxDataviewResponse$readWrite": "

Returns True if the dataview is created as writable and False otherwise.

", "KxDataviewListEntry$autoUpdate": "

The option to specify whether you want to apply all the future additions and corrections automatically to the dataview when you ingest new changesets. The default value is false.

", - "UpdateKxDataviewResponse$autoUpdate": "

The option to specify whether you want to apply all the future additions and corrections automatically to the dataview when new changesets are ingested. The default value is false.

" + "KxDataviewListEntry$readWrite": "

Returns True if the dataview is created as writable and False otherwise.

", + "KxDataviewSegmentConfiguration$onDemand": "

Enables on-demand caching on the selected database path when a particular file or a column of the database is accessed. When on-demand caching is True, dataviews perform minimal loading of files on the filesystem as needed. When it is set to False, everything is cached. The default value is False.

", + "UpdateKxDataviewResponse$autoUpdate": "

The option to specify whether you want to apply all the future additions and corrections automatically to the dataview when new changesets are ingested. The default value is false.

", + "UpdateKxDataviewResponse$readWrite": "

Returns True if the dataview is created as writable and False otherwise.

" } }, "dnsStatus": { diff --git a/apis/logs/2014-03-28/api-2.json b/apis/logs/2014-03-28/api-2.json index 72a8ee58a39..48174c7e8f6 100644 --- a/apis/logs/2014-03-28/api-2.json +++ b/apis/logs/2014-03-28/api-2.json @@ -2338,8 +2338,11 @@ "event":true }, "LogEvent":{ - "type":"string", - "min":1 + "type":"structure", + "members":{ + "timestamp":{"shape":"Timestamp"}, + "message":{"shape":"EventMessage"} + } }, "LogEventIndex":{"type":"integer"}, "LogGroup":{ diff --git a/apis/logs/2014-03-28/docs-2.json b/apis/logs/2014-03-28/docs-2.json index 714cd5a9cca..3761615478e 100644 --- a/apis/logs/2014-03-28/docs-2.json +++ b/apis/logs/2014-03-28/docs-2.json @@ -4,7 +4,7 @@ "operations": { "AssociateKmsKey": "

Associates the specified KMS key with either one log group in the account, or with all stored CloudWatch Logs query insights results in the account.

When you use AssociateKmsKey, you specify either the logGroupName parameter or the resourceIdentifier parameter. You can't specify both of those parameters in the same operation.

If you delete the key that is used to encrypt log events or log group query results, then all the associated stored log events or query results that were encrypted with that key will be unencryptable and unusable.

CloudWatch Logs supports only symmetric KMS keys. Do not associate an asymmetric KMS key with your log group or query results. For more information, see Using Symmetric and Asymmetric Keys.

It can take up to 5 minutes for this operation to take effect.

If you attempt to associate a KMS key with a log group but the KMS key does not exist or the KMS key is disabled, you receive an InvalidParameterException error.

", "CancelExportTask": "

Cancels the specified export task.

The task must be in the PENDING or RUNNING state.

", - "CreateDelivery": "

Creates a delivery. A delivery is a connection between a logical delivery source and a logical delivery destination that you have already created.

Only some Amazon Web Services services support being configured as a delivery source using this operation. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services.

A delivery destination can represent a log group in CloudWatch Logs, an Amazon S3 bucket, or a delivery stream in Kinesis Data Firehose.

To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:

You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.

You can't update an existing delivery. You can only create and delete deliveries.

", + "CreateDelivery": "

Creates a delivery. A delivery is a connection between a logical delivery source and a logical delivery destination that you have already created.

Only some Amazon Web Services services support being configured as a delivery source using this operation. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services.

A delivery destination can represent a log group in CloudWatch Logs, an Amazon S3 bucket, or a delivery stream in Firehose.

To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:

You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.

You can't update an existing delivery. You can only create and delete deliveries.

", "CreateExportTask": "

Creates an export task so that you can efficiently export data from a log group to an Amazon S3 bucket. When you perform a CreateExportTask operation, you must use credentials that have permission to write to the S3 bucket that you specify as the destination.

Exporting log data to S3 buckets that are encrypted by KMS is supported. Exporting log data to Amazon S3 buckets that have S3 Object Lock enabled with a retention period is also supported.

Exporting to S3 buckets that are encrypted with AES-256 is supported.

This is an asynchronous call. If all the required information is provided, this operation initiates an export task and responds with the ID of the task. After the task has started, you can use DescribeExportTasks to get the status of the export task. Each account can only have one active (RUNNING or PENDING) export task at a time. To cancel an export task, use CancelExportTask.

You can export logs from multiple log groups or multiple time ranges to the same S3 bucket. To separate log data for each export task, specify a prefix to be used as the Amazon S3 key prefix for all exported objects.

Time-based sorting on chunks of log data inside an exported file is not guaranteed. You can sort the exported log field data by using Linux utilities.

", "CreateLogAnomalyDetector": "

Creates an anomaly detector that regularly scans one or more log groups and looks for patterns and anomalies in the logs.

An anomaly detector can help surface issues by automatically discovering anomalies in your log event traffic. An anomaly detector uses machine learning algorithms to scan log events and find patterns. A pattern is a shared text structure that recurs among your log fields. Patterns provide a useful tool for analyzing large sets of logs because a large number of log events can often be compressed into a few patterns.

The anomaly detector uses pattern recognition to find anomalies, which are unusual log events. It uses the evaluationFrequency to compare current log events and patterns with trained baselines.

Fields within a pattern are called tokens. Fields that vary within a pattern, such as a request ID or timestamp, are referred to as dynamic tokens and represented by <*>.

The following is an example of a pattern:

[INFO] Request time: <*> ms

This pattern represents log events like [INFO] Request time: 327 ms and other similar log events that differ only by the number, in this case, 327. When the pattern is displayed, the different numbers are replaced by <*>.

Any parts of log events that are masked as sensitive data are not scanned for anomalies. For more information about masking sensitive data, see Help protect sensitive log data with masking.

", "CreateLogGroup": "

Creates a log group with the specified name. You can create up to 1,000,000 log groups per Region per account.

You must use the following guidelines when naming a log group:

When you create a log group, by default the log events in the log group do not expire. To set a retention policy so that events expire and are deleted after a specified time, use PutRetentionPolicy.

If you associate an KMS key with the log group, ingested data is encrypted using the KMS key. This association is stored as long as the data encrypted with the KMS key is still within CloudWatch Logs. This enables CloudWatch Logs to decrypt this data whenever it is requested.

If you attempt to associate a KMS key with the log group but the KMS key does not exist or the KMS key is disabled, you receive an InvalidParameterException error.

CloudWatch Logs supports only symmetric KMS keys. Do not associate an asymmetric KMS key with your log group. For more information, see Using Symmetric and Asymmetric Keys.

", @@ -25,7 +25,7 @@ "DeleteRetentionPolicy": "

Deletes the specified retention policy.

Log events do not expire if they belong to log groups without a retention policy.

", "DeleteSubscriptionFilter": "

Deletes the specified subscription filter.

", "DescribeAccountPolicies": "

Returns a list of all CloudWatch Logs account policies in the account.

", - "DescribeDeliveries": "

Retrieves a list of the deliveries that have been created in the account.

A delivery is a connection between a delivery source and a delivery destination .

A delivery source represents an Amazon Web Services resource that sends logs to an logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Kinesis Data Firehose. Only some Amazon Web Services services support being configured as a delivery source. These services are listed in Enable logging from Amazon Web Services services.

", + "DescribeDeliveries": "

Retrieves a list of the deliveries that have been created in the account.

A delivery is a connection between a delivery source and a delivery destination .

A delivery source represents an Amazon Web Services resource that sends logs to a logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose. Only some Amazon Web Services services support being configured as a delivery source. These services are listed in Enable logging from Amazon Web Services services.

", "DescribeDeliveryDestinations": "

Retrieves a list of the delivery destinations that have been created in the account.

", "DescribeDeliverySources": "

Retrieves a list of the delivery sources that have been created in the account.

", "DescribeDestinations": "

Lists all your destinations. The results are ASCII-sorted by destination name.

", @@ -40,7 +40,7 @@ "DisassociateKmsKey": "

Disassociates the specified KMS key from the specified log group or from all CloudWatch Logs Insights query results in the account.

When you use DisassociateKmsKey, you specify either the logGroupName parameter or the resourceIdentifier parameter. You can't specify both of those parameters in the same operation.

It can take up to 5 minutes for this operation to take effect.

", "FilterLogEvents": "

Lists log events from the specified log group. You can list all the log events or filter the results using a filter pattern, a time range, and the name of the log stream.

You must have the logs:FilterLogEvents permission to perform this operation.

You can specify the log group to search by using either logGroupIdentifier or logGroupName. You must include one of these two parameters, but you can't include both.

By default, this operation returns as many log events as can fit in 1 MB (up to 10,000 log events) or all the events found within the specified time range. If the results include a token, that means there are more log events available. You can get additional results by specifying the token in a subsequent call. This operation can return empty results while there are more log events available through the token.

The returned log events are sorted by event timestamp, the timestamp when the event was ingested by CloudWatch Logs, and the ID of the PutLogEvents request.

If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account and view data from the linked source accounts. For more information, see CloudWatch cross-account observability.

", "GetDataProtectionPolicy": "

Returns information about a log group data protection policy.

", - "GetDelivery": "

Returns complete information about one logical delivery. A delivery is a connection between a delivery source and a delivery destination .

A delivery source represents an Amazon Web Services resource that sends logs to an logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Kinesis Data Firehose. Only some Amazon Web Services services support being configured as a delivery source. These services are listed in Enable logging from Amazon Web Services services.

You need to specify the delivery id in this operation. You can find the IDs of the deliveries in your account with the DescribeDeliveries operation.

", + "GetDelivery": "

Returns complete information about one logical delivery. A delivery is a connection between a delivery source and a delivery destination .

A delivery source represents an Amazon Web Services resource that sends logs to a logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose. Only some Amazon Web Services services support being configured as a delivery source. These services are listed in Enable logging from Amazon Web Services services.

You need to specify the delivery ID in this operation. You can find the IDs of the deliveries in your account with the DescribeDeliveries operation.

", "GetDeliveryDestination": "

Retrieves complete information about one delivery destination.

", "GetDeliveryDestinationPolicy": "

Retrieves the delivery destination policy assigned to the delivery destination that you specify. For more information about delivery destinations and their policies, see PutDeliveryDestinationPolicy.

", "GetDeliverySource": "

Retrieves complete information about one delivery source.

", @@ -53,11 +53,11 @@ "ListLogAnomalyDetectors": "

Retrieves a list of the log anomaly detectors in the account.

", "ListTagsForResource": "

Displays the tags associated with a CloudWatch Logs resource. Currently, log groups and destinations support tagging.

", "ListTagsLogGroup": "

The ListTagsLogGroup operation is on the path to deprecation. We recommend that you use ListTagsForResource instead.

Lists the tags for the specified log group.

", - "PutAccountPolicy": "

Creates an account-level data protection policy or subscription filter policy that applies to all log groups or a subset of log groups in the account.

Data protection policy

A data protection policy can help safeguard sensitive data that's ingested by your log groups by auditing and masking the sensitive log data. Each account can have only one account-level data protection policy.

Sensitive data is detected and masked when it is ingested into a log group. When you set a data protection policy, log events ingested into the log groups before that time are not masked.

If you use PutAccountPolicy to create a data protection policy for your whole account, it applies to both existing log groups and all log groups that are created later in this account. The account-level policy is applied to existing log groups with eventual consistency. It might take up to 5 minutes before sensitive data in existing log groups begins to be masked.

By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the logs:Unmask permission can use a GetLogEvents or FilterLogEvents operation with the unmask parameter set to true to view the unmasked log events. Users with the logs:Unmask can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs Insights query with the unmask query command.

For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking.

To use the PutAccountPolicy operation for a data protection policy, you must be signed on with the logs:PutDataProtectionPolicy and logs:PutAccountPolicy permissions.

The PutAccountPolicy operation applies to all log groups in the account. You can use PutDataProtectionPolicy to create a data protection policy that applies to just one log group. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked.

Subscription filter policy

A subscription filter policy sets up a real-time feed of log events from CloudWatch Logs to other Amazon Web Services services. Account-level subscription filter policies apply to both existing log groups and log groups that are created later in this account. Supported destinations are Kinesis Data Streams, Kinesis Data Firehose, and Lambda. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format.

The following destinations are supported for subscription filters:

Each account can have one account-level subscription filter policy. If you are updating an existing filter, you must specify the correct name in PolicyName. To perform a PutAccountPolicy subscription filter operation for any destination except a Lambda function, you must also have the iam:PassRole permission.

", + "PutAccountPolicy": "

Creates an account-level data protection policy or subscription filter policy that applies to all log groups or a subset of log groups in the account.

Data protection policy

A data protection policy can help safeguard sensitive data that's ingested by your log groups by auditing and masking the sensitive log data. Each account can have only one account-level data protection policy.

Sensitive data is detected and masked when it is ingested into a log group. When you set a data protection policy, log events ingested into the log groups before that time are not masked.

If you use PutAccountPolicy to create a data protection policy for your whole account, it applies to both existing log groups and all log groups that are created later in this account. The account-level policy is applied to existing log groups with eventual consistency. It might take up to 5 minutes before sensitive data in existing log groups begins to be masked.

By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the logs:Unmask permission can use a GetLogEvents or FilterLogEvents operation with the unmask parameter set to true to view the unmasked log events. Users with the logs:Unmask permission can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs Insights query with the unmask query command.

For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking.

To use the PutAccountPolicy operation for a data protection policy, you must be signed on with the logs:PutDataProtectionPolicy and logs:PutAccountPolicy permissions.

The PutAccountPolicy operation applies to all log groups in the account. You can use PutDataProtectionPolicy to create a data protection policy that applies to just one log group. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked.

Subscription filter policy

A subscription filter policy sets up a real-time feed of log events from CloudWatch Logs to other Amazon Web Services services. Account-level subscription filter policies apply to both existing log groups and log groups that are created later in this account. Supported destinations are Kinesis Data Streams, Firehose, and Lambda. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format.

The following destinations are supported for subscription filters:

Each account can have one account-level subscription filter policy. If you are updating an existing filter, you must specify the correct name in PolicyName. To perform a PutAccountPolicy subscription filter operation for any destination except a Lambda function, you must also have the iam:PassRole permission.
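To make the account-level subscription filter flow above concrete, here is a minimal sketch that assembles a policy document before calling PutAccountPolicy. The attribute names follow the subscription filter policy attributes described later in this file; the destination and role ARNs are hypothetical placeholders, and the boto3 call itself is shown only as a comment because it requires credentials and the logs:PutAccountPolicy permission.

```python
import json

# Hypothetical placeholder ARNs -- substitute resources from your own account.
FIREHOSE_ARN = "arn:aws:firehose:us-east-1:111122223333:deliverystream/example"
ROLE_ARN = "arn:aws:iam::111122223333:role/example-cwl-role"

# Assumed attribute names for an account-level subscription filter policy.
policy_document = json.dumps({
    "DestinationArn": FIREHOSE_ARN,
    "RoleArn": ROLE_ARN,       # iam:PassRole is required for non-Lambda destinations
    "FilterPattern": "ERROR",  # only events matching the pattern are forwarded
})

# The actual call would look like:
# import boto3
# boto3.client("logs").put_account_policy(
#     policyName="account-subscription-filter",
#     policyType="SUBSCRIPTION_FILTER_POLICY",
#     policyDocument=policy_document,
# )

# Policy documents are capped at 30,720 characters.
print(len(policy_document) <= 30720)
```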

", "PutDataProtectionPolicy": "

Creates a data protection policy for the specified log group. A data protection policy can help safeguard sensitive data that's ingested by the log group by auditing and masking the sensitive log data.

Sensitive data is detected and masked when it is ingested into the log group. When you set a data protection policy, log events ingested into the log group before that time are not masked.

By default, when a user views a log event that includes masked data, the sensitive data is replaced by asterisks. A user who has the logs:Unmask permission can use a GetLogEvents or FilterLogEvents operation with the unmask parameter set to true to view the unmasked log events. Users with the logs:Unmask permission can also view unmasked data in the CloudWatch Logs console by running a CloudWatch Logs Insights query with the unmask query command.

For more information, including a list of types of data that can be audited and masked, see Protect sensitive log data with masking.

The PutDataProtectionPolicy operation applies to only the specified log group. You can also use PutAccountPolicy to create an account-level data protection policy that applies to all log groups in the account, including both existing log groups and log groups that are created later. If a log group has its own data protection policy and the account also has an account-level data protection policy, then the two policies are cumulative. Any sensitive term specified in either policy is masked.
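As a sketch of the two JSON blocks a data protection policy must contain, the snippet below builds a policyDocument with an audit statement and a de-identify (masking) statement whose DataIdentifier arrays match exactly. The policy name, Sid values, and Version string are assumptions based on the documented policy shape; the boto3 call is shown as a comment since it needs credentials and the logs:PutDataProtectionPolicy permission.

```python
import json

# One managed data identifier, shared by both statements so the arrays match exactly.
identifiers = ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"]

policy_document = json.dumps({
    "Name": "example-data-protection-policy",  # used as a CloudWatch metric dimension
    "Version": "2021-06-01",                   # assumed policy schema version
    "Statement": [
        {   # Block 1: audit findings for the sensitive data types
            "Sid": "audit-policy",
            "DataIdentifier": identifiers,
            "Operation": {"Audit": {"FindingsDestination": {}}},
        },
        {   # Block 2: mask the same data types on ingestion
            "Sid": "redact-policy",
            "DataIdentifier": identifiers,  # must match the audit array exactly
            "Operation": {"Deidentify": {"MaskConfig": {}}},
        },
    ],
})

# The actual call would look like:
# import boto3
# boto3.client("logs").put_data_protection_policy(
#     logGroupIdentifier="my-log-group",
#     policyDocument=policy_document,
# )
print(len(policy_document) <= 30720)
```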

", - "PutDeliveryDestination": "

Creates or updates a logical delivery destination. A delivery destination is an Amazon Web Services resource that represents an Amazon Web Services service that logs can be sent to. CloudWatch Logs, Amazon S3, and Kinesis Data Firehose are supported as logs delivery destinations.

To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:

You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.

Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services.

If you use this operation to update an existing delivery destination, all the current delivery destination parameters are overwritten with the new parameter values that you specify.

", + "PutDeliveryDestination": "

Creates or updates a logical delivery destination. A delivery destination is an Amazon Web Services resource that represents an Amazon Web Services service that logs can be sent to. CloudWatch Logs, Amazon S3, and Firehose are supported as logs delivery destinations.

To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:

You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.

Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services.

If you use this operation to update an existing delivery destination, all the current delivery destination parameters are overwritten with the new parameter values that you specify.

", "PutDeliveryDestinationPolicy": "

Creates and assigns an IAM policy that grants permissions to CloudWatch Logs to deliver logs cross-account to a specified destination in this account. To configure the delivery of logs from an Amazon Web Services service in another account to a logs delivery destination in the current account, you must do the following:

Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services.

The contents of the policy must include two statements. One statement enables general logs delivery, and the other allows delivery to the chosen destination. See the examples for the needed policies.

", - "PutDeliverySource": "

Creates or updates a logical delivery source. A delivery source represents an Amazon Web Services resource that sends logs to an logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Kinesis Data Firehose.

To configure logs delivery between a delivery destination and an Amazon Web Services service that is supported as a delivery source, you must do the following:

You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.

Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services.

If you use this operation to update an existing delivery source, all the current delivery source parameters are overwritten with the new parameter values that you specify.

", + "PutDeliverySource": "

Creates or updates a logical delivery source. A delivery source represents an Amazon Web Services resource that sends logs to a logs delivery destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose.

To configure logs delivery between a delivery destination and an Amazon Web Services service that is supported as a delivery source, you must do the following:

You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.

Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services.

If you use this operation to update an existing delivery source, all the current delivery source parameters are overwritten with the new parameter values that you specify.
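The three-step delivery setup described above (a delivery source, a delivery destination, and a delivery linking the two) can be sketched as request payloads like the following. All ARNs and names are hypothetical placeholders, and the boto3 method names in the comments are assumed to mirror the PutDeliverySource, PutDeliveryDestination, and CreateDelivery operations; the calls themselves are not executed here.

```python
# Hypothetical ARNs -- replace with resources from your own account.
SOURCE_ARN = "arn:aws:codewhisperer:us-east-1:111122223333:profile/example"
BUCKET_ARN = "arn:aws:s3:::example-log-delivery-bucket"

# Step 1: a delivery source naming the AWS resource that emits the logs.
source_request = {
    "name": "my-delivery-source",
    "resourceArn": SOURCE_ARN,
    "logType": "EVENT_LOGS",  # valid values depend on the source service
}

# Step 2: a delivery destination wrapping the resource that receives them
# (a log group, an S3 bucket, or a Firehose delivery stream).
destination_request = {
    "name": "my-delivery-destination",
    "deliveryDestinationConfiguration": {"destinationResourceArn": BUCKET_ARN},
}

# Step 3: a delivery connecting one source to one destination. Creating several
# deliveries fans one source out to multiple destinations, or vice versa.
delivery_request = {
    "deliverySourceName": source_request["name"],
    "deliveryDestinationArn": (
        "arn:aws:logs:us-east-1:111122223333:delivery-destination:my-delivery-destination"
    ),
}

# With credentials, each payload would go to the matching call, e.g.:
# logs = boto3.client("logs")
# logs.put_delivery_source(**source_request)
# logs.put_delivery_destination(**destination_request)
# logs.create_delivery(**delivery_request)
print(delivery_request["deliverySourceName"])
```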

", "PutDestination": "

Creates or updates a destination. This operation is used only to create destinations for cross-account subscriptions.

A destination encapsulates a physical resource (such as an Amazon Kinesis stream). With a destination, you can subscribe to a real-time stream of log events for a different account, ingested using PutLogEvents.

Through an access policy, a destination controls what is written to it. By default, PutDestination does not set any access policy with the destination, which means a cross-account user cannot call PutSubscriptionFilter against this destination. To enable this, the destination owner must call PutDestinationPolicy after PutDestination.

To perform a PutDestination operation, you must also have the iam:PassRole permission.

", "PutDestinationPolicy": "

Creates or updates an access policy associated with an existing destination. An access policy is an IAM policy document that is used to authorize claims to register a subscription filter against a given destination.

", "PutLogEvents": "

Uploads a batch of log events to the specified log stream.

The sequence token is now ignored in PutLogEvents actions. PutLogEvents actions are always accepted and never return InvalidSequenceTokenException or DataAlreadyAcceptedException even if the sequence token is not valid. You can use parallel PutLogEvents actions on the same log stream.

The batch of events must satisfy the following constraints:

If a call to PutLogEvents returns \"UnrecognizedClientException\" the most likely cause is a non-valid Amazon Web Services access key ID or secret key.
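The batch constraints for PutLogEvents can be checked client-side before uploading. The accounting below uses the documented limits: each batch may hold at most 10,000 events and 1,048,576 bytes, where each event counts as its UTF-8 message size plus 26 bytes of overhead. The sample events are placeholders.

```python
EVENT_OVERHEAD = 26          # bytes CloudWatch Logs adds per event when sizing a batch
MAX_BATCH_BYTES = 1_048_576  # documented PutLogEvents batch-size limit
MAX_BATCH_EVENTS = 10_000    # documented per-batch event count limit

def batch_size(events):
    """Batch size per the documented accounting: UTF-8 bytes + 26 per event."""
    return sum(len(e["message"].encode("utf-8")) + EVENT_OVERHEAD for e in events)

def fits(events):
    """True if the batch satisfies both the byte and event-count limits."""
    return len(events) <= MAX_BATCH_EVENTS and batch_size(events) <= MAX_BATCH_BYTES

# Timestamps are milliseconds since Jan 1, 1970 00:00:00 UTC.
events = [
    {"timestamp": 1700000000000, "message": "app started"},
    {"timestamp": 1700000000001, "message": "user login ok"},
]
print(batch_size(events), fits(events))  # → 76 True
```

A batch that passes this check can be sent with `boto3.client("logs").put_log_events(logGroupName=..., logStreamName=..., logEvents=events)`; sequence tokens no longer need to be tracked, as noted above.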

", @@ -65,7 +65,7 @@ "PutQueryDefinition": "

Creates or updates a query definition for CloudWatch Logs Insights. For more information, see Analyzing Log Data with CloudWatch Logs Insights.

To update a query definition, specify its queryDefinitionId in your request. The values of name, queryString, and logGroupNames are changed to the values that you specify in your update operation. No current values are retained from the current query definition. For example, imagine updating a current query definition that includes log groups. If you don't specify the logGroupNames parameter in your update operation, the query definition changes to contain no log groups.

You must have the logs:PutQueryDefinition permission to be able to perform this operation.

", "PutResourcePolicy": "

Creates or updates a resource policy allowing other Amazon Web Services services to put log events to this account, such as Amazon Route 53. An account can have up to 10 resource policies per Amazon Web Services Region.

", "PutRetentionPolicy": "

Sets the retention of the specified log group. With a retention policy, you can configure the number of days for which to retain log events in the specified log group.

CloudWatch Logs doesn’t immediately delete log events when they reach their retention setting. It typically takes up to 72 hours after that before log events are deleted, but in rare situations might take longer.

To illustrate, imagine that you change a log group to have a longer retention setting when it contains log events that are past the expiration date, but haven’t been deleted. Those log events will take up to 72 hours to be deleted after the new retention date is reached. To make sure that log data is deleted permanently, keep a log group at its lower retention setting until 72 hours after the previous retention period ends. Alternatively, wait to change the retention setting until you confirm that the earlier log events are deleted.

When log events reach their retention setting they are marked for deletion. After they are marked for deletion, they do not add to your archival storage costs anymore, even if they are not actually deleted until later. These log events marked for deletion are also not included when you use an API to retrieve the storedBytes value to see how many bytes a log group is storing.

", - "PutSubscriptionFilter": "

Creates or updates a subscription filter and associates it with the specified log group. With subscription filters, you can subscribe to a real-time stream of log events ingested through PutLogEvents and have them delivered to a specific destination. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format.

The following destinations are supported for subscription filters:

Each log group can have up to two subscription filters associated with it. If you are updating an existing filter, you must specify the correct name in filterName.

To perform a PutSubscriptionFilter operation for any destination except a Lambda function, you must also have the iam:PassRole permission.

", + "PutSubscriptionFilter": "

Creates or updates a subscription filter and associates it with the specified log group. With subscription filters, you can subscribe to a real-time stream of log events ingested through PutLogEvents and have them delivered to a specific destination. When log events are sent to the receiving service, they are Base64 encoded and compressed with the GZIP format.

The following destinations are supported for subscription filters:

Each log group can have up to two subscription filters associated with it. If you are updating an existing filter, you must specify the correct name in filterName.

To perform a PutSubscriptionFilter operation for any destination except a Lambda function, you must also have the iam:PassRole permission.

", "StartLiveTail": "

Starts a Live Tail streaming session for one or more log groups. A Live Tail session returns a stream of log events that have been recently ingested in the log groups. For more information, see Use Live Tail to view logs in near real time.

The response to this operation is a response stream, over which the server sends live log events and the client receives them.

The following objects are sent over the stream:

You can end a session before it times out by closing the session stream or by closing the client that is receiving the stream. The session also ends if the established connection between the client and the server breaks.

For examples of using an SDK to start a Live Tail session, see Start a Live Tail session using an Amazon Web Services SDK.

", "StartQuery": "

Schedules a query of a log group using CloudWatch Logs Insights. You specify the log group and time range to query and the query string to use.

For more information, see CloudWatch Logs Insights Query Syntax.

After you run a query using StartQuery, the query results are stored by CloudWatch Logs. You can use GetQueryResults to retrieve the results of a query, using the queryId that StartQuery returns.

If you have associated a KMS key with the query results in this account, then StartQuery uses that key to encrypt the results when it stores them. If no key is associated with query results, the query results are encrypted with the default CloudWatch Logs encryption method.

Queries time out after 60 minutes of runtime. If your queries are timing out, reduce the time range being searched or partition your query into a number of queries.

If you are using CloudWatch cross-account observability, you can use this operation in a monitoring account to start a query in a linked source account. For more information, see CloudWatch cross-account observability. For a cross-account StartQuery operation, the query definition must be defined in the monitoring account.

You can have up to 30 concurrent CloudWatch Logs insights queries, including queries that have been added to dashboards.

", "StopQuery": "

Stops a CloudWatch Logs Insights query that is in progress. If the query has already ended, the operation returns an error indicating that the specified query is not running.

", @@ -121,7 +121,7 @@ "base": null, "refs": { "AccountPolicy$policyDocument": "

The policy document for this account policy.

The JSON specified in policyDocument can be up to 30,720 characters.

", - "PutAccountPolicyRequest$policyDocument": "

Specify the policy, in JSON.

Data protection policy

A data protection policy must include two JSON blocks:

For an example data protection policy, see the Examples section on this page.

The contents of the two DataIdentifer arrays must match exactly.

In addition to the two JSON blocks, the policyDocument can also include Name, Description, and Version fields. The Name is different than the operation's policyName parameter, and is used as a dimension when CloudWatch Logs reports audit findings metrics to CloudWatch.

The JSON specified in policyDocument can be up to 30,720 characters long.

Subscription filter policy

A subscription filter policy can include the following attributes in a JSON block:

" + "PutAccountPolicyRequest$policyDocument": "

Specify the policy, in JSON.

Data protection policy

A data protection policy must include two JSON blocks:

For an example data protection policy, see the Examples section on this page.

The contents of the two DataIdentifier arrays must match exactly.

In addition to the two JSON blocks, the policyDocument can also include Name, Description, and Version fields. The Name is different than the operation's policyName parameter, and is used as a dimension when CloudWatch Logs reports audit findings metrics to CloudWatch.

The JSON specified in policyDocument can be up to 30,720 characters long.

Subscription filter policy

A subscription filter policy can include the following attributes in a JSON block:

" } }, "AmazonResourceName": { @@ -200,7 +200,7 @@ "Delivery$arn": "

The Amazon Resource Name (ARN) that uniquely identifies this delivery.

", "Delivery$deliveryDestinationArn": "

The ARN of the delivery destination that is associated with this delivery.

", "DeliveryDestination$arn": "

The Amazon Resource Name (ARN) that uniquely identifies this delivery destination.

", - "DeliveryDestinationConfiguration$destinationResourceArn": "

The ARN of the Amazon Web Services destination that this delivery destination represents. That Amazon Web Services destination can be a log group in CloudWatch Logs, an Amazon S3 bucket, or a delivery stream in Kinesis Data Firehose.

", + "DeliveryDestinationConfiguration$destinationResourceArn": "

The ARN of the Amazon Web Services destination that this delivery destination represents. That Amazon Web Services destination can be a log group in CloudWatch Logs, an Amazon S3 bucket, or a delivery stream in Firehose.

", "DeliverySource$arn": "

The Amazon Resource Name (ARN) that uniquely identifies this delivery source.

", "Destination$arn": "

The ARN of this destination.

", "LogGroup$arn": "

The Amazon Resource Name (ARN) of the log group. This version of the ARN includes a trailing :* after the log group name.

Use this version to refer to the ARN in IAM policies when specifying permissions for most API actions. The exception is when specifying permissions for TagResource, UntagResource, and ListTagsForResource. The permissions for those three actions require the ARN version that doesn't include a trailing :*.

", @@ -296,7 +296,7 @@ "base": null, "refs": { "GetDataProtectionPolicyResponse$policyDocument": "

The data protection policy document for this log group.

", - "PutDataProtectionPolicyRequest$policyDocument": "

Specify the data protection policy, in JSON.

This policy must include two JSON blocks:

For an example data protection policy, see the Examples section on this page.

The contents of the two DataIdentifer arrays must match exactly.

In addition to the two JSON blocks, the policyDocument can also include Name, Description, and Version fields. The Name is used as a dimension when CloudWatch Logs reports audit findings metrics to CloudWatch.

The JSON specified in policyDocument can be up to 30,720 characters.

", + "PutDataProtectionPolicyRequest$policyDocument": "

Specify the data protection policy, in JSON.

This policy must include two JSON blocks:

For an example data protection policy, see the Examples section on this page.

The contents of the two DataIdentifier arrays must match exactly.

In addition to the two JSON blocks, the policyDocument can also include Name, Description, and Version fields. The Name is used as a dimension when CloudWatch Logs reports audit findings metrics to CloudWatch.

The JSON specified in policyDocument can be up to 30,720 characters.

", "PutDataProtectionPolicyResponse$policyDocument": "

The data protection policy used for this log group.

" } }, @@ -414,7 +414,7 @@ } }, "DeliveryDestination": { - "base": "

This structure contains information about one delivery destination in your account. A delivery destination is an Amazon Web Services resource that represents an Amazon Web Services service that logs can be sent to. CloudWatch Logs, Amazon S3, are supported as Kinesis Data Firehose delivery destinations.

To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:

You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.

", + "base": "

This structure contains information about one delivery destination in your account. A delivery destination is an Amazon Web Services resource that represents an Amazon Web Services service that logs can be sent to. CloudWatch Logs, Amazon S3, and Firehose are supported as delivery destinations.

To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:

You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.

", "refs": { "DeliveryDestinations$member": null, "GetDeliveryDestinationResponse$deliveryDestination": "

A structure containing information about the delivery destination.

", @@ -450,8 +450,8 @@ "DeliveryDestinationType": { "base": null, "refs": { - "Delivery$deliveryDestinationType": "

Displays whether the delivery destination associated with this delivery is CloudWatch Logs, Amazon S3, or Kinesis Data Firehose.

", - "DeliveryDestination$deliveryDestinationType": "

Displays whether this delivery destination is CloudWatch Logs, Amazon S3, or Kinesis Data Firehose.

" + "Delivery$deliveryDestinationType": "

Displays whether the delivery destination associated with this delivery is CloudWatch Logs, Amazon S3, or Firehose.

", + "DeliveryDestination$deliveryDestinationType": "

Displays whether this delivery destination is CloudWatch Logs, Amazon S3, or Firehose.

" } }, "DeliveryDestinations": { @@ -469,7 +469,7 @@ } }, "DeliverySource": { - "base": "

This structure contains information about one delivery source in your account. A delivery source is an Amazon Web Services resource that sends logs to an Amazon Web Services destination. The destination can be CloudWatch Logs, Amazon S3, or Kinesis Data Firehose.

Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services.

To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:

You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.

", + "base": "

This structure contains information about one delivery source in your account. A delivery source is an Amazon Web Services resource that sends logs to an Amazon Web Services destination. The destination can be CloudWatch Logs, Amazon S3, or Firehose.

Only some Amazon Web Services services support being configured as a delivery source. These services are listed as Supported [V2 Permissions] in the table at Enabling logging from Amazon Web Services services.

To configure logs delivery between a supported Amazon Web Services service and a destination, you must do the following:

You can configure a single delivery source to send logs to multiple destinations by creating multiple deliveries. You can also create multiple deliveries to configure multiple delivery sources to send logs to the same delivery destination.

", "refs": { "DeliverySources$member": null, "GetDeliverySourceResponse$deliverySource": "

A structure containing information about the delivery source.

", @@ -776,6 +776,7 @@ "FilteredLogEvent$message": "

The data contained in the log event.

", "InputLogEvent$message": "

The raw event message. Each log event can be no larger than 256 KB.

", "LiveTailSessionLogEvent$message": "

The log event message text.

", + "LogEvent$message": "

The message content of the log event.

", "MetricFilterMatchRecord$eventMessage": "

The raw event data.

", "OutputLogEvent$message": "

The data contained in the log event.

", "TestEventMessages$member": null @@ -1218,7 +1219,7 @@ } }, "LogEvent": { - "base": null, + "base": "

This structure contains the information for one sample log event that is associated with an anomaly found by a log anomaly detector.

", "refs": { "LogSamples$member": null } @@ -1226,8 +1227,8 @@ "LogEventIndex": { "base": null, "refs": { - "RejectedLogEventsInfo$tooNewLogEventStartIndex": "

The log events that are too new.

", - "RejectedLogEventsInfo$tooOldLogEventEndIndex": "

The log events that are dated too far in the past.

", + "RejectedLogEventsInfo$tooNewLogEventStartIndex": "

The index of the first log event that is too new. This field is inclusive.

", + "RejectedLogEventsInfo$tooOldLogEventEndIndex": "

The index of the last log event that is too old. This field is exclusive.

", "RejectedLogEventsInfo$expiredLogEventEndIndex": "

The expired log events.

" } }, @@ -1410,7 +1411,7 @@ "base": null, "refs": { "DeliverySource$logType": "

The type of log that the source is sending. For valid values for this parameter, see the documentation for the source service.

", - "PutDeliverySourceRequest$logType": "

Defines the type of log that the source is sending. For Amazon CodeWhisperer, the valid value is EVENT_LOGS.

" + "PutDeliverySourceRequest$logType": "

Defines the type of log that the source is sending.

" } }, "MalformedQueryException": { @@ -2212,6 +2213,7 @@ "InputLogEvent$timestamp": "

The time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

", "LiveTailSessionLogEvent$timestamp": "

The timestamp specifying when this log event was created.

", "LiveTailSessionLogEvent$ingestionTime": "

The timestamp specifying when this log event was ingested into the log group.

", + "LogEvent$timestamp": "

The timestamp of the log event.

", "LogGroup$creationTime": "

The creation time of the log group, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

", "LogStream$creationTime": "

The creation time of the stream, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

", "LogStream$firstEventTimestamp": "

The time of the first event, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

", diff --git a/apis/managedblockchain-query/2023-05-04/api-2.json b/apis/managedblockchain-query/2023-05-04/api-2.json index 36fcb627190..f8c8b852578 100644 --- a/apis/managedblockchain-query/2023-05-04/api-2.json +++ b/apis/managedblockchain-query/2023-05-04/api-2.json @@ -102,6 +102,23 @@ {"shape":"ServiceQuotaExceededException"} ] }, + "ListFilteredTransactionEvents":{ + "name":"ListFilteredTransactionEvents", + "http":{ + "method":"POST", + "requestUri":"/list-filtered-transaction-events", + "responseCode":200 + }, + "input":{"shape":"ListFilteredTransactionEventsInput"}, + "output":{"shape":"ListFilteredTransactionEventsOutput"}, + "errors":[ + {"shape":"ThrottlingException"}, + {"shape":"ValidationException"}, + {"shape":"AccessDeniedException"}, + {"shape":"InternalServerException"}, + {"shape":"ServiceQuotaExceededException"} + ] + }, "ListTokenBalances":{ "name":"ListTokenBalances", "http":{ @@ -167,6 +184,19 @@ }, "exception":true }, + "AddressIdentifierFilter":{ + "type":"structure", + "required":["transactionEventToAddress"], + "members":{ + "transactionEventToAddress":{"shape":"AddressIdentifierFilterTransactionEventToAddressList"} + } + }, + "AddressIdentifierFilterTransactionEventToAddressList":{ + "type":"list", + "member":{"shape":"ChainAddress"}, + "max":1, + "min":1 + }, "AssetContract":{ "type":"structure", "required":[ @@ -267,6 +297,10 @@ "time":{"shape":"Timestamp"} } }, + "Boolean":{ + "type":"boolean", + "box":true + }, "ChainAddress":{ "type":"string", "pattern":"[-A-Za-z0-9]{13,74}" @@ -454,6 +488,48 @@ "nextToken":{"shape":"NextToken"} } }, + "ListFilteredTransactionEventsInput":{ + "type":"structure", + "required":[ + "network", + "addressIdentifierFilter" + ], + "members":{ + "network":{"shape":"String"}, + "addressIdentifierFilter":{"shape":"AddressIdentifierFilter"}, + "timeFilter":{"shape":"TimeFilter"}, + "voutFilter":{"shape":"VoutFilter"}, + "confirmationStatusFilter":{"shape":"ConfirmationStatusFilter"}, + 
"sort":{"shape":"ListFilteredTransactionEventsSort"}, + "nextToken":{"shape":"NextToken"}, + "maxResults":{"shape":"ListFilteredTransactionEventsInputMaxResultsInteger"} + } + }, + "ListFilteredTransactionEventsInputMaxResultsInteger":{ + "type":"integer", + "box":true, + "max":250, + "min":1 + }, + "ListFilteredTransactionEventsOutput":{ + "type":"structure", + "required":["events"], + "members":{ + "events":{"shape":"TransactionEventList"}, + "nextToken":{"shape":"NextToken"} + } + }, + "ListFilteredTransactionEventsSort":{ + "type":"structure", + "members":{ + "sortBy":{"shape":"ListFilteredTransactionEventsSortBy"}, + "sortOrder":{"shape":"SortOrder"} + } + }, + "ListFilteredTransactionEventsSortBy":{ + "type":"string", + "enum":["blockchainInstant"] + }, "ListTokenBalancesInput":{ "type":"structure", "required":["tokenFilter"], @@ -480,12 +556,10 @@ }, "ListTransactionEventsInput":{ "type":"structure", - "required":[ - "transactionHash", - "network" - ], + "required":["network"], "members":{ "transactionHash":{"shape":"QueryTransactionHash"}, + "transactionId":{"shape":"QueryTransactionId"}, "network":{"shape":"QueryNetwork"}, "nextToken":{"shape":"NextToken"}, "maxResults":{"shape":"ListTransactionEventsInputMaxResultsInteger"} @@ -611,6 +685,10 @@ "type":"string", "pattern":"(0x[A-Fa-f0-9]{64}|[A-Fa-f0-9]{64})" }, + "QueryTransactionId":{ + "type":"string", + "pattern":"(0x[A-Fa-f0-9]{64}|[A-Fa-f0-9]{64})" + }, "QuotaCode":{"type":"string"}, "ResourceId":{"type":"string"}, "ResourceNotFoundException":{ @@ -690,6 +768,13 @@ "exception":true, "retryable":{"throttling":true} }, + "TimeFilter":{ + "type":"structure", + "members":{ + "from":{"shape":"BlockchainInstant"}, + "to":{"shape":"BlockchainInstant"} + } + }, "Timestamp":{"type":"timestamp"}, "TokenBalance":{ "type":"structure", @@ -779,7 +864,13 @@ "contractAddress":{"shape":"ChainAddress"}, "tokenId":{"shape":"QueryTokenId"}, "transactionId":{"shape":"String"}, - "voutIndex":{"shape":"Integer"} + 
"voutIndex":{"shape":"Integer"}, + "voutSpent":{"shape":"Boolean"}, + "spentVoutTransactionId":{"shape":"String"}, + "spentVoutTransactionHash":{"shape":"String"}, + "spentVoutIndex":{"shape":"Integer"}, + "blockchainInstant":{"shape":"BlockchainInstant"}, + "confirmationStatus":{"shape":"ConfirmationStatus"} } }, "TransactionEventList":{ @@ -848,6 +939,13 @@ "fieldValidationFailed", "other" ] + }, + "VoutFilter":{ + "type":"structure", + "required":["voutSpent"], + "members":{ + "voutSpent":{"shape":"Boolean"} + } } } } diff --git a/apis/managedblockchain-query/2023-05-04/docs-2.json b/apis/managedblockchain-query/2023-05-04/docs-2.json index efc92f33874..5c55a7f79eb 100644 --- a/apis/managedblockchain-query/2023-05-04/docs-2.json +++ b/apis/managedblockchain-query/2023-05-04/docs-2.json @@ -7,9 +7,10 @@ "GetTokenBalance": "

Gets the balance of a specific token, including native tokens, for a given address (wallet or contract) on the blockchain.

Only the native tokens BTC and ETH, and the ERC-20, ERC-721, and ERC-1155 token standards are supported.

", "GetTransaction": "

Gets the details of a transaction.

This action will return transaction details for all transactions that are confirmed on the blockchain, even if they have not reached finality.

", "ListAssetContracts": "

Lists all the contracts for a given contract type deployed by an address (either a contract address or a wallet address).

The Bitcoin blockchain networks do not support this operation.

", + "ListFilteredTransactionEvents": "

Lists all the transaction events for an address on the blockchain.

This operation is only supported on the Bitcoin networks.

", "ListTokenBalances": "

This action returns token balances for a given blockchain network.

You must always specify the network property of the tokenFilter when using this operation.

", - "ListTransactionEvents": "

An array of TransactionEvent objects. Each object contains details about the transaction event.

This action will return transaction details for all transactions that are confirmed on the blockchain, even if they have not reached finality.

", - "ListTransactions": "

Lists all of the transactions on a given wallet address or to a specific contract.

" + "ListTransactionEvents": "

Lists all the transaction events for a transaction.

This action will return transaction details for all transactions that are confirmed on the blockchain, even if they have not reached finality.

", + "ListTransactions": "

Lists all the transactions on a given wallet address or to a specific contract.

" }, "shapes": { "AccessDeniedException": { @@ -17,6 +18,18 @@ "refs": { } }, + "AddressIdentifierFilter": { + "base": "

This is the container for the unique public address on the blockchain.

", + "refs": { + "ListFilteredTransactionEventsInput$addressIdentifierFilter": "

This is the unique public address on the blockchain for which the transaction events are being requested.

" + } + }, + "AddressIdentifierFilterTransactionEventToAddressList": { + "base": null, + "refs": { + "AddressIdentifierFilter$transactionEventToAddress": "

The container for the recipient address of the transaction.

" + } + }, "AssetContract": { "base": "

This container contains information about a contract.

", "refs": { @@ -87,13 +100,24 @@ "GetTokenBalanceOutput$lastUpdatedTime": null, "ListTransactionsInput$fromBlockchainInstant": null, "ListTransactionsInput$toBlockchainInstant": null, + "TimeFilter$from": null, + "TimeFilter$to": null, "TokenBalance$atBlockchainInstant": "

The time for when the TokenBalance is requested or the current time if a time is not provided in the request.

This time will only be recorded up to the second.

", - "TokenBalance$lastUpdatedTime": "

The Timestamp of the last transaction at which the balance for the token in the wallet was updated.

" + "TokenBalance$lastUpdatedTime": "

The Timestamp of the last transaction at which the balance for the token in the wallet was updated.

", + "TransactionEvent$blockchainInstant": null + } + }, + "Boolean": { + "base": null, + "refs": { + "TransactionEvent$voutSpent": "

Specifies if the transaction output is spent or unspent.

This is only returned for BITCOIN_VOUT event types.

", + "VoutFilter$voutSpent": "

Specifies if the transaction output is spent or unspent.

" } }, "ChainAddress": { "base": null, "refs": { + "AddressIdentifierFilterTransactionEventToAddressList$member": null, "AssetContract$deployerAddress": "

The address of the contract deployer.

", "ContractFilter$deployerAddress": "

The network address of the deployer.

", "ContractIdentifier$contractAddress": "

Container for the blockchain address about a contract.

", @@ -108,7 +132,7 @@ "Transaction$contractAddress": "

The blockchain address for the contract.

", "TransactionEvent$from": "

The wallet address initiating the transaction. It can either be a public key or a contract.

", "TransactionEvent$to": "

The wallet address receiving the transaction. It can either be a public key or a contract.

", - "TransactionEvent$contractAddress": "

The blockchain address. for the contract

" + "TransactionEvent$contractAddress": "

The blockchain address for the contract.

" } }, "ConfirmationStatus": { @@ -116,13 +140,15 @@ "refs": { "ConfirmationStatusIncludeList$member": null, "Transaction$confirmationStatus": "

Specifies whether the transaction has reached Finality.

", + "TransactionEvent$confirmationStatus": "

This container specifies whether the transaction has reached Finality.

", "TransactionOutputItem$confirmationStatus": "

Specifies whether to list transactions that have not reached Finality.

" } }, "ConfirmationStatusFilter": { "base": "

The container for the ConfirmationStatusFilter that filters for the finality of the results.

", "refs": { - "ListTransactionsInput$confirmationStatusFilter": "

This filter is used to include transactions in the response that haven't reached finality . Transactions that have reached finiality are always part of the response.

" + "ListFilteredTransactionEventsInput$confirmationStatusFilter": null, + "ListTransactionsInput$confirmationStatusFilter": "

This filter is used to include transactions in the response that haven't reached finality. Transactions that have reached finality are always part of the response.

" } }, "ConfirmationStatusIncludeList": { @@ -214,10 +240,11 @@ "base": null, "refs": { "ContractMetadata$decimals": "

The decimals used by the token contract.

", - "InternalServerException$retryAfterSeconds": "

The container of the retryAfterSeconds value.

", + "InternalServerException$retryAfterSeconds": "

Specifies the retryAfterSeconds value.

", "ThrottlingException$retryAfterSeconds": "

The container of the retryAfterSeconds value.

", "Transaction$signatureV": "

The signature of the transaction. The Z coordinate of a point V.

", - "TransactionEvent$voutIndex": "

The position of the vout in the transaction output list.

" + "TransactionEvent$voutIndex": "

The position of the transaction output in the transaction output list.

", + "TransactionEvent$spentVoutIndex": "

The position of the spent transaction output in the output list of the creating transaction.

This is only returned for BITCOIN_VIN event types.

" } }, "InternalServerException": { @@ -233,7 +260,7 @@ "ListAssetContractsInputMaxResultsInteger": { "base": null, "refs": { - "ListAssetContractsInput$maxResults": "

The maximum number of contracts to list.

Default:100

Even if additional results can be retrieved, the request can return less results than maxResults or an empty array of results.

To retrieve the next set of results, make another request with the returned nextToken value. The value of nextToken is null when there are no more results to return

" + "ListAssetContractsInput$maxResults": "

The maximum number of contracts to list.

Default: 100

Even if additional results can be retrieved, the request can return fewer results than maxResults or an empty array of results.

To retrieve the next set of results, make another request with the returned nextToken value. The value of nextToken is null when there are no more results to return.

" } }, "ListAssetContractsOutput": { @@ -241,6 +268,34 @@ "refs": { } }, + "ListFilteredTransactionEventsInput": { + "base": null, + "refs": { + } + }, + "ListFilteredTransactionEventsInputMaxResultsInteger": { + "base": null, + "refs": { + "ListFilteredTransactionEventsInput$maxResults": "

The maximum number of transaction events to list.

Default: 100

Even if additional results can be retrieved, the request can return fewer results than maxResults or an empty array of results.

To retrieve the next set of results, make another request with the returned nextToken value. The value of nextToken is null when there are no more results to return.

" + } + }, + "ListFilteredTransactionEventsOutput": { + "base": null, + "refs": { + } + }, + "ListFilteredTransactionEventsSort": { + "base": "

The container for determining how the results of the ListFilteredTransactionEvents operation will be sorted.

", + "refs": { + "ListFilteredTransactionEventsInput$sort": "

The order by which the results will be sorted.

" + } + }, + "ListFilteredTransactionEventsSortBy": { + "base": null, + "refs": { + "ListFilteredTransactionEventsSort$sortBy": "

The container that specifies how the results will be sorted.

" + } + }, "ListTokenBalancesInput": { "base": null, "refs": { @@ -249,7 +304,7 @@ "ListTokenBalancesInputMaxResultsInteger": { "base": null, "refs": { - "ListTokenBalancesInput$maxResults": "

The maximum number of token balances to return.

Default:100

Even if additional results can be retrieved, the request can return less results than maxResults or an empty array of results.

To retrieve the next set of results, make another request with the returned nextToken value. The value of nextToken is null when there are no more results to return

" + "ListTokenBalancesInput$maxResults": "

The maximum number of token balances to return.

Default: 100

Even if additional results can be retrieved, the request can return fewer results than maxResults or an empty array of results.

To retrieve the next set of results, make another request with the returned nextToken value. The value of nextToken is null when there are no more results to return.

" } }, "ListTokenBalancesOutput": { @@ -265,7 +320,7 @@ "ListTransactionEventsInputMaxResultsInteger": { "base": null, "refs": { - "ListTransactionEventsInput$maxResults": "

The maximum number of transaction events to list.

Default:100

Even if additional results can be retrieved, the request can return less results than maxResults or an empty array of results.

To retrieve the next set of results, make another request with the returned nextToken value. The value of nextToken is null when there are no more results to return

" + "ListTransactionEventsInput$maxResults": "

The maximum number of transaction events to list.

Default: 100

Even if additional results can be retrieved, the request can return fewer results than maxResults or an empty array of results.

To retrieve the next set of results, make another request with the returned nextToken value. The value of nextToken is null when there are no more results to return.

" } }, "ListTransactionEventsOutput": { @@ -281,7 +336,7 @@ "ListTransactionsInputMaxResultsInteger": { "base": null, "refs": { - "ListTransactionsInput$maxResults": "

The maximum number of transactions to list.

Default:100

Even if additional results can be retrieved, the request can return less results than maxResults or an empty array of results.

To retrieve the next set of results, make another request with the returned nextToken value. The value of nextToken is null when there are no more results to return

" + "ListTransactionsInput$maxResults": "

The maximum number of transactions to list.

Default: 100

Even if additional results can be retrieved, the request can return fewer results than maxResults or an empty array of results.

To retrieve the next set of results, make another request with the returned nextToken value. The value of nextToken is null when there are no more results to return.

" } }, "ListTransactionsOutput": { @@ -292,7 +347,7 @@ "ListTransactionsSort": { "base": "

The container for determining how the list transaction result will be sorted.

", "refs": { - "ListTransactionsInput$sort": "

The order by which the results will be sorted. If ASCENNDING is selected, the results will be ordered by fromTime.

" + "ListTransactionsInput$sort": "

The order by which the results will be sorted.

" } }, "ListTransactionsSortBy": { @@ -313,6 +368,8 @@ "refs": { "ListAssetContractsInput$nextToken": "

The pagination token that indicates the next set of results to retrieve.

", "ListAssetContractsOutput$nextToken": "

The pagination token that indicates the next set of results to retrieve.

", + "ListFilteredTransactionEventsInput$nextToken": "

The pagination token that indicates the next set of results to retrieve.

", + "ListFilteredTransactionEventsOutput$nextToken": "

The pagination token that indicates the next set of results to retrieve.

", "ListTokenBalancesInput$nextToken": "

The pagination token that indicates the next set of results to retrieve.

", "ListTokenBalancesOutput$nextToken": "

The pagination token that indicates the next set of results to retrieve.

", "ListTransactionEventsInput$nextToken": "

The pagination token that indicates the next set of results to retrieve.

", @@ -328,7 +385,7 @@ } }, "OwnerIdentifier": { - "base": "

The container for the identifier of the owner.

", + "base": "

The container for the owner identifier.

", "refs": { "BatchGetTokenBalanceErrorItem$ownerIdentifier": null, "BatchGetTokenBalanceInputItem$ownerIdentifier": null, @@ -378,11 +435,17 @@ "QueryTransactionHash": { "base": null, "refs": { - "GetTransactionInput$transactionHash": "

The hash of the transaction. It is generated whenever a transaction is verified and added to the blockchain.

", - "ListTransactionEventsInput$transactionHash": "

The hash of the transaction. It is generated whenever a transaction is verified and added to the blockchain.

", - "Transaction$transactionHash": "

The hash of the transaction. It is generated whenever a transaction is verified and added to the blockchain.

", - "TransactionEvent$transactionHash": "

The hash of the transaction. It is generated whenever a transaction is verified and added to the blockchain.

", - "TransactionOutputItem$transactionHash": "

The hash of the transaction. It is generated whenever a transaction is verified and added to the blockchain.

" + "GetTransactionInput$transactionHash": "

The hash of a transaction. It is generated when a transaction is created.

", + "ListTransactionEventsInput$transactionHash": "

The hash of a transaction. It is generated when a transaction is created.

", + "Transaction$transactionHash": "

The hash of a transaction. It is generated when a transaction is created.

", + "TransactionEvent$transactionHash": "

The hash of a transaction. It is generated when a transaction is created.

", + "TransactionOutputItem$transactionHash": "

The hash of a transaction. It is generated when a transaction is created.

" + } + }, + "QueryTransactionId": { + "base": null, + "refs": { + "ListTransactionEventsInput$transactionId": "

The identifier of a Bitcoin transaction. It is generated when a transaction is created.

transactionId is only supported on the Bitcoin networks.

" } }, "QuotaCode": { @@ -426,6 +489,7 @@ "SortOrder": { "base": null, "refs": { + "ListFilteredTransactionEventsSort$sortOrder": "

The container for the sort order for ListFilteredTransactionEvents. The SortOrder field only accepts the values ASCENDING and DESCENDING. Not providing SortOrder will default to ASCENDING.

", "ListTransactionsSort$sortOrder": "

The container for the sort order for ListTransactions. The SortOrder field only accepts the values ASCENDING and DESCENDING. Not providing SortOrder will default to ASCENDING.

" } }, @@ -438,6 +502,7 @@ "ContractMetadata$name": "

The name of the token contract.

", "ContractMetadata$symbol": "

The symbol of the token contract.

", "GetTokenBalanceOutput$balance": "

The container for the token balance.

", + "ListFilteredTransactionEventsInput$network": "

The blockchain network where the transaction occurred.

Valid Values: BITCOIN_MAINNET | BITCOIN_TESTNET

", "TokenBalance$balance": "

The container of the token balance.

", "Transaction$blockNumber": "

The block number in which the transaction is recorded.

", "Transaction$gasUsed": "

The amount of gas used for the transaction.

", @@ -446,9 +511,11 @@ "Transaction$signatureR": "

The signature of the transaction. The X coordinate of a point R.

", "Transaction$signatureS": "

The signature of the transaction. The Y coordinate of a point S.

", "Transaction$transactionFee": "

The transaction fee.

", - "Transaction$transactionId": "

The unique identifier of the transaction. It is generated whenever a transaction is verified and added to the blockchain.

", + "Transaction$transactionId": "

The identifier of a Bitcoin transaction. It is generated when a transaction is created.

", "TransactionEvent$value": "

The value that was transacted.

", - "TransactionEvent$transactionId": "

The unique identifier of the transaction. It is generated whenever a transaction is verified and added to the blockchain.

", + "TransactionEvent$transactionId": "

The identifier of a Bitcoin transaction. It is generated when a transaction is created.

", + "TransactionEvent$spentVoutTransactionId": "

The transactionId of the transaction that created the spent transaction output.

This is only returned for BITCOIN_VIN event types.

", + "TransactionEvent$spentVoutTransactionHash": "

The transactionHash of the transaction that created the spent transaction output.

This is only returned for BITCOIN_VIN event types.

", "ValidationExceptionField$name": "

The name of the field that triggered the ValidationException.

", "ValidationExceptionField$message": "

The ValidationException message.

" } @@ -458,6 +525,12 @@ "refs": { } }, + "TimeFilter": { + "base": "

This container is used to specify a time frame.

", + "refs": { + "ListFilteredTransactionEventsInput$timeFilter": "

This container specifies the time frame for the transaction events returned in the response.

" + } + }, "Timestamp": { "base": null, "refs": { @@ -510,6 +583,7 @@ "TransactionEventList": { "base": null, "refs": { + "ListFilteredTransactionEventsOutput$events": "

The transaction events returned by the request.

", "ListTransactionEventsOutput$events": "

An array of TransactionEvent objects. Each object contains details about the transaction events.

" } }, @@ -547,6 +621,12 @@ "refs": { "ValidationException$reason": "

The container for the reason for the exception

" } + }, + "VoutFilter": { + "base": "

This container specifies filtering attributes related to BITCOIN_VOUT event types.

", + "refs": { + "ListFilteredTransactionEventsInput$voutFilter": "

This container specifies filtering attributes related to BITCOIN_VOUT event types.

" + } } } } diff --git a/apis/managedblockchain-query/2023-05-04/paginators-1.json b/apis/managedblockchain-query/2023-05-04/paginators-1.json index 7625c5cab4a..3948bd42661 100644 --- a/apis/managedblockchain-query/2023-05-04/paginators-1.json +++ b/apis/managedblockchain-query/2023-05-04/paginators-1.json @@ -6,6 +6,12 @@ "limit_key": "maxResults", "result_key": "contracts" }, + "ListFilteredTransactionEvents": { + "input_token": "nextToken", + "output_token": "nextToken", + "limit_key": "maxResults", + "result_key": "events" + }, "ListTokenBalances": { "input_token": "nextToken", "output_token": "nextToken", diff --git a/gems/aws-sdk-cloudformation/CHANGELOG.md b/gems/aws-sdk-cloudformation/CHANGELOG.md index ca84b2ddd50..546a6fd2618 100644 --- a/gems/aws-sdk-cloudformation/CHANGELOG.md +++ b/gems/aws-sdk-cloudformation/CHANGELOG.md @@ -1,6 +1,11 @@ Unreleased Changes ------------------ +1.103.0 (2024-03-19) +------------------ + +* Feature - Documentation update, March 2024. Corrects some formatting. 
+ 1.102.0 (2024-03-18) ------------------ diff --git a/gems/aws-sdk-cloudformation/VERSION b/gems/aws-sdk-cloudformation/VERSION index 1c55b869e98..e402df2ddc9 100644 --- a/gems/aws-sdk-cloudformation/VERSION +++ b/gems/aws-sdk-cloudformation/VERSION @@ -1 +1 @@ -1.102.0 +1.103.0 diff --git a/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation.rb b/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation.rb index 257bf6905a1..6e83877b924 100644 --- a/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation.rb +++ b/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation.rb @@ -57,6 +57,6 @@ # @!group service module Aws::CloudFormation - GEM_VERSION = '1.102.0' + GEM_VERSION = '1.103.0' end diff --git a/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation/client.rb b/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation/client.rb index 8993857db9b..dafbfcbcdf5 100644 --- a/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation/client.rb +++ b/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation/client.rb @@ -408,7 +408,7 @@ def activate_organizations_access(params = {}, options = {}) # extensions][1] in the *CloudFormation User Guide*. # # Once you have activated a public third-party extension in your account - # and Region, use [ `SetTypeConfiguration` ][2] to specify configuration + # and Region, use [SetTypeConfiguration][2] to specify configuration # properties for the extension. For more information, see [Configuring # extensions at the account level][3] in the *CloudFormation User # Guide*. @@ -843,19 +843,19 @@ def continue_update_rollback(params = {}, options = {}) # review all permissions associated with them and edit their # permissions if necessary. 
# - # * [ `AWS::IAM::AccessKey` ][1] + # * [ AWS::IAM::AccessKey][1] # - # * [ `AWS::IAM::Group` ][2] + # * [ AWS::IAM::Group][2] # - # * [ `AWS::IAM::InstanceProfile` ][3] + # * [AWS::IAM::InstanceProfile][3] # - # * [ `AWS::IAM::Policy` ][4] + # * [ AWS::IAM::Policy][4] # - # * [ `AWS::IAM::Role` ][5] + # * [ AWS::IAM::Role][5] # - # * [ `AWS::IAM::User` ][6] + # * [ AWS::IAM::User][6] # - # * [ `AWS::IAM::UserToGroupAddition` ][7] + # * [AWS::IAM::UserToGroupAddition][7] # # For more information, see [Acknowledging IAM resources in # CloudFormation templates][8]. @@ -871,8 +871,8 @@ def continue_update_rollback(params = {}, options = {}) # your stack template contains one or more macros, and you choose to # create a stack directly from the processed template, without first # reviewing the resulting changes in a change set, you must - # acknowledge this capability. This includes the [ `AWS::Include` ][9] - # and [ `AWS::Serverless` ][10] transforms, which are macros hosted by + # acknowledge this capability. This includes the [AWS::Include][9] and + # [AWS::Serverless][10] transforms, which are macros hosted by # CloudFormation. # # This capacity doesn't apply to creating change sets, and specifying @@ -1005,8 +1005,8 @@ def continue_update_rollback(params = {}, options = {}) # # @option params [String] :on_stack_failure # Determines what action will be taken if stack creation fails. If this - # parameter is specified, the `DisableRollback` parameter to the [ - # `ExecuteChangeSet` ][1] API operation must not be specified. This must + # parameter is specified, the `DisableRollback` parameter to the + # [ExecuteChangeSet][1] API operation must not be specified. This must # be one of these values: # # * `DELETE` - Deletes the change set if the stack creation fails. This @@ -1016,11 +1016,11 @@ def continue_update_rollback(params = {}, options = {}) # # * `DO_NOTHING` - if the stack creation fails, do nothing. 
This is # equivalent to specifying `true` for the `DisableRollback` parameter - # to the [ `ExecuteChangeSet` ][1] API operation. + # to the [ExecuteChangeSet][1] API operation. # # * `ROLLBACK` - if the stack creation fails, roll back the stack. This # is equivalent to specifying `false` for the `DisableRollback` - # parameter to the [ `ExecuteChangeSet` ][1] API operation. + # parameter to the [ExecuteChangeSet][1] API operation. # # For nested stacks, when the `OnStackFailure` parameter is set to # `DELETE` for the change set for the parent stack, any failure in a @@ -1249,7 +1249,7 @@ def create_generated_template(params = {}, options = {}) # # @option params [Array] :parameters # A list of `Parameter` structures that specify input parameters for the - # stack. For more information, see the [ `Parameter` ][1] data type. + # stack. For more information, see the [Parameter][1] data type. # # # @@ -1305,19 +1305,19 @@ def create_generated_template(params = {}, options = {}) # you review all permissions associated with them and edit their # permissions if necessary. # - # * [ `AWS::IAM::AccessKey` ][1] + # * [AWS::IAM::AccessKey][1] # - # * [ `AWS::IAM::Group` ][2] + # * [AWS::IAM::Group][2] # - # * [ `AWS::IAM::InstanceProfile` ][3] + # * [AWS::IAM::InstanceProfile][3] # - # * [ `AWS::IAM::Policy` ][4] + # * [AWS::IAM::Policy][4] # - # * [ `AWS::IAM::Role` ][5] + # * [AWS::IAM::Role][5] # - # * [ `AWS::IAM::User` ][6] + # * [AWS::IAM::User][6] # - # * [ `AWS::IAM::UserToGroupAddition` ][7] + # * [AWS::IAM::UserToGroupAddition][7] # # For more information, see [Acknowledging IAM Resources in # CloudFormation Templates][8]. @@ -1333,8 +1333,8 @@ def create_generated_template(params = {}, options = {}) # your stack template contains one or more macros, and you choose to # create a stack directly from the processed template, without first # reviewing the resulting changes in a change set, you must - # acknowledge this capability. 
This includes the [ `AWS::Include` ][9] - # and [ `AWS::Serverless` ][10] transforms, which are macros hosted by + # acknowledge this capability. This includes the [AWS::Include][9] and + # [AWS::Serverless][10] transforms, which are macros hosted by # CloudFormation. # # If you want to create a stack from a stack template that contains @@ -1601,8 +1601,8 @@ def create_stack(params = {}, options = {}) # instance aren't updated, but retain their overridden value. # # You can only override the parameter *values* that are specified in the - # stack set; to add or delete a parameter itself, use [ `UpdateStackSet` - # ][1] to update the stack set template. + # stack set; to add or delete a parameter itself, use + # [UpdateStackSet][1] to update the stack set template. # # # @@ -1778,19 +1778,19 @@ def create_stack_instances(params = {}, options = {}) # you review all permissions associated with them and edit their # permissions if necessary. # - # * [ `AWS::IAM::AccessKey` ][1] + # * [AWS::IAM::AccessKey][1] # - # * [ `AWS::IAM::Group` ][2] + # * [AWS::IAM::Group][2] # - # * [ `AWS::IAM::InstanceProfile` ][3] + # * [AWS::IAM::InstanceProfile][3] # - # * [ `AWS::IAM::Policy` ][4] + # * [AWS::IAM::Policy][4] # - # * [ `AWS::IAM::Role` ][5] + # * [AWS::IAM::Role][5] # - # * [ `AWS::IAM::User` ][6] + # * [AWS::IAM::User][6] # - # * [ `AWS::IAM::UserToGroupAddition` ][7] + # * [AWS::IAM::UserToGroupAddition][7] # # For more information, see [Acknowledging IAM Resources in # CloudFormation Templates][8]. @@ -1806,11 +1806,12 @@ def create_stack_instances(params = {}, options = {}) # Templates][9]. # # Stack sets with service-managed permissions don't currently support - # the use of macros in templates. (This includes the [ `AWS::Include` - # ][10] and [ `AWS::Serverless` ][11] transforms, which are macros - # hosted by CloudFormation.) 
Even if you specify this capability for a - # stack set with service-managed permissions, if you reference a macro - # in your template the stack set operation will fail. + # the use of macros in templates. (This includes the + # [AWS::Include][10] and [AWS::Serverless][11] transforms, which are + # macros hosted by CloudFormation.) Even if you specify this + # capability for a stack set with service-managed permissions, if you + # reference a macro in your template the stack set operation will + # fail. # # # @@ -2363,7 +2364,7 @@ def delete_stack_set(params = {}, options = {}) # deregistered as well and marked as deprecated. # # To view the deprecation status of an extension or extension version, - # use [ `DescribeType` ][1]. + # use [DescribeType][1]. # # # @@ -2794,7 +2795,7 @@ def describe_organizations_access(params = {}, options = {}) # # For more information about registering as a publisher, see: # - # * [ `RegisterPublisher` ][1] + # * [RegisterPublisher][1] # # * [Publishing extensions to make them available for public use][2] in # the *CloudFormation CLI User Guide* @@ -4280,16 +4281,16 @@ def estimate_template_cost(params = {}, options = {}) # @option params [Boolean] :disable_rollback # Preserves the state of previously provisioned resources when an # operation fails. This parameter can't be specified when the - # `OnStackFailure` parameter to the [ `CreateChangeSet` ][1] API - # operation was specified. + # `OnStackFailure` parameter to the [CreateChangeSet][1] API operation + # was specified. # # * `True` - if the stack creation fails, do nothing. This is equivalent # to specifying `DO_NOTHING` for the `OnStackFailure` parameter to the - # [ `CreateChangeSet` ][1] API operation. + # [CreateChangeSet][1] API operation. # # * `False` - if the stack creation fails, roll back the stack. This is # equivalent to specifying `ROLLBACK` for the `OnStackFailure` - # parameter to the [ `CreateChangeSet` ][1] API operation. 
+ # parameter to the [CreateChangeSet][1] API operation. # # Default: `True` # @@ -6280,7 +6281,7 @@ def list_types(params = {}, options = {}) # public use][1] in the *CloudFormation CLI User Guide*. # # To publish an extension, you must be registered as a publisher with - # CloudFormation. For more information, see [ `RegisterPublisher` ][2]. + # CloudFormation. For more information, see [RegisterPublisher][2]. # # # @@ -6509,8 +6510,8 @@ def register_publisher(params = {}, options = {}) # *CloudFormation CLI User Guide*. # # You can have a maximum of 50 resource extension versions registered at - # a time. This maximum is per account and per Region. Use [ - # `DeregisterType` ][2] to deregister specific extension versions if + # a time. This maximum is per account and per Region. Use + # [DeregisterType][2] to deregister specific extension versions if # necessary. # # Once you have initiated a registration request using RegisterType, you @@ -6518,7 +6519,7 @@ def register_publisher(params = {}, options = {}) # registration request. # # Once you have registered a private extension in your account and - # Region, use [ `SetTypeConfiguration` ][3] to specify configuration + # Region, use [SetTypeConfiguration][3] to specify configuration # properties for the extension. For more information, see [Configuring # extensions at the account level][4] in the *CloudFormation User # Guide*. @@ -6758,7 +6759,7 @@ def set_stack_policy(params = {}, options = {}) # extension, in the given account and Region. # # To view the current configuration data for an extension, refer to the - # `ConfigurationSchema` element of [ `DescribeType` ][1]. For more + # `ConfigurationSchema` element of [DescribeType][1]. For more # information, see [Configuring extensions at the account level][2] in # the *CloudFormation User Guide*. # @@ -6778,9 +6779,9 @@ def set_stack_policy(params = {}, options = {}) # Region. 
# # For public extensions, this will be the ARN assigned when you call the - # [ `ActivateType` ][1] API operation in this account and Region. For - # private extensions, this will be the ARN assigned when you call the [ - # `RegisterType` ][2] API operation in this account and Region. + # [ActivateType][1] API operation in this account and Region. For + # private extensions, this will be the ARN assigned when you call the + # [RegisterType][2] API operation in this account and Region. # # Do not include the extension versions suffix at the end of the ARN. # You can set the configuration for an extension, but not for a specific @@ -6795,8 +6796,8 @@ def set_stack_policy(params = {}, options = {}) # The configuration data for the extension, in this account and Region. # # The configuration data must be formatted as JSON, and validate against - # the schema returned in the `ConfigurationSchema` response element of [ - # `DescribeType` ][1]. For more information, see [Defining account-level + # the schema returned in the `ConfigurationSchema` response element of + # [DescribeType][1]. For more information, see [Defining account-level # configuration data for an extension][2] in the *CloudFormation CLI # User Guide*. # @@ -7054,11 +7055,11 @@ def stop_stack_set_operation(params = {}, options = {}) # version of the extension in your account and Region for testing. # # To perform testing, CloudFormation assumes the execution role - # specified when the type was registered. For more information, see [ - # `RegisterType` ][2]. + # specified when the type was registered. For more information, see + # [RegisterType][2]. # # Once you've initiated testing on an extension using `TestType`, you - # can pass the returned `TypeVersionArn` into [ `DescribeType` ][3] to + # can pass the returned `TypeVersionArn` into [DescribeType][3] to # monitor the current test status and test status description for the # extension. 
# @@ -7356,7 +7357,7 @@ def update_generated_template(params = {}, options = {}) # # @option params [Array] :parameters # A list of `Parameter` structures that specify input parameters for the - # stack. For more information, see the [ `Parameter` ][1] data type. + # stack. For more information, see the [Parameter][1] data type. # # # @@ -7390,19 +7391,19 @@ def update_generated_template(params = {}, options = {}) # review all permissions associated with them and edit their # permissions if necessary. # - # * [ `AWS::IAM::AccessKey` ][1] + # * [AWS::IAM::AccessKey][1] # - # * [ `AWS::IAM::Group` ][2] + # * [AWS::IAM::Group][2] # - # * [ `AWS::IAM::InstanceProfile` ][3] + # * [AWS::IAM::InstanceProfile][3] # - # * [ ` AWS::IAM::Policy` ][4] + # * [AWS::IAM::Policy][4] # - # * [ `AWS::IAM::Role` ][5] + # * [AWS::IAM::Role][5] # - # * [ `AWS::IAM::User` ][6] + # * [AWS::IAM::User][6] # - # * [ `AWS::IAM::UserToGroupAddition` ][7] + # * [AWS::IAM::UserToGroupAddition][7] # # For more information, see [Acknowledging IAM Resources in # CloudFormation Templates][8]. @@ -7418,8 +7419,8 @@ def update_generated_template(params = {}, options = {}) # your stack template contains one or more macros, and you choose to # update a stack directly from the processed template, without first # reviewing the resulting changes in a change set, you must - # acknowledge this capability. This includes the [ `AWS::Include` ][9] - # and [ `AWS::Serverless` ][10] transforms, which are macros hosted by + # acknowledge this capability. This includes the [AWS::Include][9] and + # [AWS::Serverless][10] transforms, which are macros hosted by # CloudFormation. # # If you want to update a stack from a stack template that contains @@ -7631,20 +7632,20 @@ def update_stack(params = {}, options = {}) # # You can only update stack instances in Amazon Web Services Regions and # accounts where they already exist; to create additional stack - # instances, use [ `CreateStackInstances` ][1]. 
+ # instances, use [CreateStackInstances][1]. # # During stack set updates, any parameters overridden for a stack # instance aren't updated, but retain their overridden value. # # You can only update the parameter *values* that are specified in the - # stack set; to add or delete a parameter itself, use [ `UpdateStackSet` - # ][2] to update the stack set template. If you add a parameter to a - # template, before you can override the parameter value specified in the - # stack set you must first use [ `UpdateStackSet` ][2] to update all - # stack instances with the updated template and parameter value - # specified in the stack set. Once a stack instance has been updated - # with the new parameter, you can then override the parameter value - # using `UpdateStackInstances`. + # stack set; to add or delete a parameter itself, use + # [UpdateStackSet][2] to update the stack set template. If you add a + # parameter to a template, before you can override the parameter value + # specified in the stack set you must first use [UpdateStackSet][2] to + # update all stack instances with the updated template and parameter + # value specified in the stack set. Once a stack instance has been + # updated with the new parameter, you can then override the parameter + # value using `UpdateStackInstances`. # # # @@ -7710,11 +7711,11 @@ def update_stack(params = {}, options = {}) # stack set; to add or delete a parameter itself, use `UpdateStackSet` # to update the stack set template. If you add a parameter to a # template, before you can override the parameter value specified in the - # stack set you must first use [ `UpdateStackSet` ][1] to update all - # stack instances with the updated template and parameter value - # specified in the stack set. Once a stack instance has been updated - # with the new parameter, you can then override the parameter value - # using `UpdateStackInstances`. 
+ # stack set you must first use [UpdateStackSet][1] to update all stack + # instances with the updated template and parameter value specified in + # the stack set. Once a stack instance has been updated with the new + # parameter, you can then override the parameter value using + # `UpdateStackInstances`. # # # @@ -7889,19 +7890,19 @@ def update_stack_instances(params = {}, options = {}) # you review all permissions associated with them and edit their # permissions if necessary. # - # * [ `AWS::IAM::AccessKey` ][1] + # * [AWS::IAM::AccessKey][1] # - # * [ `AWS::IAM::Group` ][2] + # * [AWS::IAM::Group][2] # - # * [ `AWS::IAM::InstanceProfile` ][3] + # * [AWS::IAM::InstanceProfile][3] # - # * [ `AWS::IAM::Policy` ][4] + # * [AWS::IAM::Policy][4] # - # * [ `AWS::IAM::Role` ][5] + # * [AWS::IAM::Role][5] # - # * [ `AWS::IAM::User` ][6] + # * [AWS::IAM::User][6] # - # * [ `AWS::IAM::UserToGroupAddition` ][7] + # * [AWS::IAM::UserToGroupAddition][7] # # For more information, see [Acknowledging IAM Resources in # CloudFormation Templates][8]. @@ -7917,11 +7918,12 @@ def update_stack_instances(params = {}, options = {}) # Templates][9]. # # Stack sets with service-managed permissions do not currently support - # the use of macros in templates. (This includes the [ `AWS::Include` - # ][10] and [ `AWS::Serverless` ][11] transforms, which are macros - # hosted by CloudFormation.) Even if you specify this capability for a - # stack set with service-managed permissions, if you reference a macro - # in your template the stack set operation will fail. + # the use of macros in templates. (This includes the + # [AWS::Include][10] and [AWS::Serverless][11] transforms, which are + # macros hosted by CloudFormation.) Even if you specify this + # capability for a stack set with service-managed permissions, if you + # reference a macro in your template the stack set operation will + # fail. 
# # # @@ -8328,7 +8330,7 @@ def build_request(operation_name, params = {}) params: params, config: config) context[:gem_name] = 'aws-sdk-cloudformation' - context[:gem_version] = '1.102.0' + context[:gem_version] = '1.103.0' Seahorse::Client::Request.new(handlers, context) end diff --git a/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation/resource.rb b/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation/resource.rb index 30175a5e824..3dd0e530917 100644 --- a/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation/resource.rb +++ b/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation/resource.rb @@ -113,7 +113,7 @@ def client # [1]: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html # @option options [Array] :parameters # A list of `Parameter` structures that specify input parameters for the - # stack. For more information, see the [ `Parameter` ][1] data type. + # stack. For more information, see the [Parameter][1] data type. # # # @@ -164,19 +164,19 @@ def client # you review all permissions associated with them and edit their # permissions if necessary. # - # * [ `AWS::IAM::AccessKey` ][1] + # * [AWS::IAM::AccessKey][1] # - # * [ `AWS::IAM::Group` ][2] + # * [AWS::IAM::Group][2] # - # * [ `AWS::IAM::InstanceProfile` ][3] + # * [AWS::IAM::InstanceProfile][3] # - # * [ `AWS::IAM::Policy` ][4] + # * [AWS::IAM::Policy][4] # - # * [ `AWS::IAM::Role` ][5] + # * [AWS::IAM::Role][5] # - # * [ `AWS::IAM::User` ][6] + # * [AWS::IAM::User][6] # - # * [ `AWS::IAM::UserToGroupAddition` ][7] + # * [AWS::IAM::UserToGroupAddition][7] # # For more information, see [Acknowledging IAM Resources in # CloudFormation Templates][8]. @@ -192,8 +192,8 @@ def client # your stack template contains one or more macros, and you choose to # create a stack directly from the processed template, without first # reviewing the resulting changes in a change set, you must - # acknowledge this capability. 
This includes the [ `AWS::Include` ][9] - # and [ `AWS::Serverless` ][10] transforms, which are macros hosted by + # acknowledge this capability. This includes the [AWS::Include][9] and + # [AWS::Serverless][10] transforms, which are macros hosted by # CloudFormation. # # If you want to create a stack from a stack template that contains diff --git a/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation/stack.rb b/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation/stack.rb index 38d68e5743b..23c97c4e5fa 100644 --- a/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation/stack.rb +++ b/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation/stack.rb @@ -494,7 +494,7 @@ def cancel_update(options = {}) # [1]: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-anatomy.html # @option options [Array] :parameters # A list of `Parameter` structures that specify input parameters for the - # stack. For more information, see the [ `Parameter` ][1] data type. + # stack. For more information, see the [Parameter][1] data type. # # # @@ -545,19 +545,19 @@ def cancel_update(options = {}) # you review all permissions associated with them and edit their # permissions if necessary. # - # * [ `AWS::IAM::AccessKey` ][1] + # * [AWS::IAM::AccessKey][1] # - # * [ `AWS::IAM::Group` ][2] + # * [AWS::IAM::Group][2] # - # * [ `AWS::IAM::InstanceProfile` ][3] + # * [AWS::IAM::InstanceProfile][3] # - # * [ `AWS::IAM::Policy` ][4] + # * [AWS::IAM::Policy][4] # - # * [ `AWS::IAM::Role` ][5] + # * [AWS::IAM::Role][5] # - # * [ `AWS::IAM::User` ][6] + # * [AWS::IAM::User][6] # - # * [ `AWS::IAM::UserToGroupAddition` ][7] + # * [AWS::IAM::UserToGroupAddition][7] # # For more information, see [Acknowledging IAM Resources in # CloudFormation Templates][8]. 
@@ -573,8 +573,8 @@ def cancel_update(options = {}) # your stack template contains one or more macros, and you choose to # create a stack directly from the processed template, without first # reviewing the resulting changes in a change set, you must - # acknowledge this capability. This includes the [ `AWS::Include` ][9] - # and [ `AWS::Serverless` ][10] transforms, which are macros hosted by + # acknowledge this capability. This includes the [AWS::Include][9] and + # [AWS::Serverless][10] transforms, which are macros hosted by # CloudFormation. # # If you want to create a stack from a stack template that contains @@ -877,7 +877,7 @@ def delete(options = {}) # will be used. # @option options [Array] :parameters # A list of `Parameter` structures that specify input parameters for the - # stack. For more information, see the [ `Parameter` ][1] data type. + # stack. For more information, see the [Parameter][1] data type. # # # @@ -910,19 +910,19 @@ def delete(options = {}) # review all permissions associated with them and edit their # permissions if necessary. # - # * [ `AWS::IAM::AccessKey` ][1] + # * [AWS::IAM::AccessKey][1] # - # * [ `AWS::IAM::Group` ][2] + # * [AWS::IAM::Group][2] # - # * [ `AWS::IAM::InstanceProfile` ][3] + # * [AWS::IAM::InstanceProfile][3] # - # * [ ` AWS::IAM::Policy` ][4] + # * [AWS::IAM::Policy][4] # - # * [ `AWS::IAM::Role` ][5] + # * [AWS::IAM::Role][5] # - # * [ `AWS::IAM::User` ][6] + # * [AWS::IAM::User][6] # - # * [ `AWS::IAM::UserToGroupAddition` ][7] + # * [AWS::IAM::UserToGroupAddition][7] # # For more information, see [Acknowledging IAM Resources in # CloudFormation Templates][8]. @@ -938,8 +938,8 @@ def delete(options = {}) # your stack template contains one or more macros, and you choose to # update a stack directly from the processed template, without first # reviewing the resulting changes in a change set, you must - # acknowledge this capability. 
This includes the [ `AWS::Include` ][9] - # and [ `AWS::Serverless` ][10] transforms, which are macros hosted by + # acknowledge this capability. This includes the [AWS::Include][9] and + # [AWS::Serverless][10] transforms, which are macros hosted by # CloudFormation. # # If you want to update a stack from a stack template that contains diff --git a/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation/types.rb b/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation/types.rb index 905094b35ac..3cc4c0efc3d 100644 --- a/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation/types.rb +++ b/gems/aws-sdk-cloudformation/lib/aws-sdk-cloudformation/types.rb @@ -781,19 +781,19 @@ class ContinueUpdateRollbackOutput < Aws::EmptyStructure; end # you review all permissions associated with them and edit their # permissions if necessary. # - # * [ `AWS::IAM::AccessKey` ][1] + # * [AWS::IAM::AccessKey][1] # - # * [ `AWS::IAM::Group` ][2] + # * [AWS::IAM::Group][2] # - # * [ `AWS::IAM::InstanceProfile` ][3] + # * [AWS::IAM::InstanceProfile][3] # - # * [ `AWS::IAM::Policy` ][4] + # * [AWS::IAM::Policy][4] # - # * [ `AWS::IAM::Role` ][5] + # * [AWS::IAM::Role][5] # - # * [ `AWS::IAM::User` ][6] + # * [AWS::IAM::User][6] # - # * [ `AWS::IAM::UserToGroupAddition` ][7] + # * [AWS::IAM::UserToGroupAddition][7] # # For more information, see [Acknowledging IAM resources in # CloudFormation templates][8]. @@ -809,9 +809,9 @@ class ContinueUpdateRollbackOutput < Aws::EmptyStructure; end # your stack template contains one or more macros, and you choose to # create a stack directly from the processed template, without first # reviewing the resulting changes in a change set, you must - # acknowledge this capability. 
This includes the [AWS::Include][9] + # and [AWS::Serverless][10] transforms, which are macros hosted by + # CloudFormation. # # This capacity doesn't apply to creating change sets, and # specifying it when creating change sets has no effect. @@ -957,8 +957,8 @@ class ContinueUpdateRollbackOutput < Aws::EmptyStructure; end # @!attribute [rw] on_stack_failure # Determines what action will be taken if stack creation fails. If # this parameter is specified, the `DisableRollback` parameter to the - # [ `ExecuteChangeSet` ][1] API operation must not be specified. This - # must be one of these values: + # [ExecuteChangeSet][1] API operation must not be specified. This must + # be one of these values: # # * `DELETE` - Deletes the change set if the stack creation fails. # This is only valid when the `ChangeSetType` parameter is set to @@ -967,11 +967,11 @@ class ContinueUpdateRollbackOutput < Aws::EmptyStructure; end # # * `DO_NOTHING` - if the stack creation fails, do nothing. This is # equivalent to specifying `true` for the `DisableRollback` - # parameter to the [ `ExecuteChangeSet` ][1] API operation. + # parameter to the [ExecuteChangeSet][1] API operation. # # * `ROLLBACK` - if the stack creation fails, roll back the stack. # This is equivalent to specifying `false` for the `DisableRollback` - # parameter to the [ `ExecuteChangeSet` ][1] API operation. + # parameter to the [ExecuteChangeSet][1] API operation. # # For nested stacks, when the `OnStackFailure` parameter is set to # `DELETE` for the change set for the parent stack, any failure in a @@ -1135,8 +1135,7 @@ class CreateGeneratedTemplateOutput < Struct.new( # # @!attribute [rw] parameters # A list of `Parameter` structures that specify input parameters for - # the stack. For more information, see the [ `Parameter` ][1] data - # type. + # the stack. For more information, see the [Parameter][1] data type. 
# # # @@ -1198,19 +1197,19 @@ class CreateGeneratedTemplateOutput < Struct.new( # you review all permissions associated with them and edit their # permissions if necessary. # - # * [ `AWS::IAM::AccessKey` ][1] + # * [AWS::IAM::AccessKey][1] # - # * [ `AWS::IAM::Group` ][2] + # * [AWS::IAM::Group][2] # - # * [ `AWS::IAM::InstanceProfile` ][3] + # * [AWS::IAM::InstanceProfile][3] # - # * [ `AWS::IAM::Policy` ][4] + # * [AWS::IAM::Policy][4] # - # * [ `AWS::IAM::Role` ][5] + # * [AWS::IAM::Role][5] # - # * [ `AWS::IAM::User` ][6] + # * [AWS::IAM::User][6] # - # * [ `AWS::IAM::UserToGroupAddition` ][7] + # * [AWS::IAM::UserToGroupAddition][7] # # For more information, see [Acknowledging IAM Resources in # CloudFormation Templates][8]. @@ -1226,9 +1225,9 @@ class CreateGeneratedTemplateOutput < Struct.new( # your stack template contains one or more macros, and you choose to # create a stack directly from the processed template, without first # reviewing the resulting changes in a change set, you must - # acknowledge this capability. This includes the [ `AWS::Include` - # ][9] and [ `AWS::Serverless` ][10] transforms, which are macros - # hosted by CloudFormation. + # acknowledge this capability. This includes the [AWS::Include][9] + # and [AWS::Serverless][10] transforms, which are macros hosted by + # CloudFormation. # # If you want to create a stack from a stack template that contains # macros *and* nested stacks, you must create the stack directly @@ -1470,8 +1469,8 @@ class CreateStackInput < Struct.new( # stack instance aren't updated, but retain their overridden value. # # You can only override the parameter *values* that are specified in - # the stack set; to add or delete a parameter itself, use [ - # `UpdateStackSet` ][1] to update the stack set template. + # the stack set; to add or delete a parameter itself, use + # [UpdateStackSet][1] to update the stack set template. 
# # # @@ -1648,19 +1647,19 @@ class CreateStackOutput < Struct.new( # you review all permissions associated with them and edit their # permissions if necessary. # - # * [ `AWS::IAM::AccessKey` ][1] + # * [AWS::IAM::AccessKey][1] # - # * [ `AWS::IAM::Group` ][2] + # * [AWS::IAM::Group][2] # - # * [ `AWS::IAM::InstanceProfile` ][3] + # * [AWS::IAM::InstanceProfile][3] # - # * [ `AWS::IAM::Policy` ][4] + # * [AWS::IAM::Policy][4] # - # * [ `AWS::IAM::Role` ][5] + # * [AWS::IAM::Role][5] # - # * [ `AWS::IAM::User` ][6] + # * [AWS::IAM::User][6] # - # * [ `AWS::IAM::UserToGroupAddition` ][7] + # * [AWS::IAM::UserToGroupAddition][7] # # For more information, see [Acknowledging IAM Resources in # CloudFormation Templates][8]. @@ -1676,11 +1675,11 @@ class CreateStackOutput < Struct.new( # Processing on Templates][9]. # # Stack sets with service-managed permissions don't currently - # support the use of macros in templates. (This includes the [ - # `AWS::Include` ][10] and [ `AWS::Serverless` ][11] transforms, - # which are macros hosted by CloudFormation.) Even if you specify - # this capability for a stack set with service-managed permissions, - # if you reference a macro in your template the stack set operation + # support the use of macros in templates. (This includes the + # [AWS::Include][10] and [AWS::Serverless][11] transforms, which are + # macros hosted by CloudFormation.) Even if you specify this + # capability for a stack set with service-managed permissions, if + # you reference a macro in your template the stack set operation # will fail. # # @@ -2416,7 +2415,7 @@ class DescribeChangeSetInput < Struct.new( # @!attribute [rw] parameters # A list of `Parameter` structures that describes the input parameters # and their values used to create the change set. For more - # information, see the [ `Parameter` ][1] data type. + # information, see the [Parameter][1] data type. 
# # # @@ -2495,8 +2494,8 @@ class DescribeChangeSetInput < Struct.new( # @!attribute [rw] on_stack_failure # Determines what action will be taken if stack creation fails. When # this parameter is specified, the `DisableRollback` parameter to the - # [ `ExecuteChangeSet` ][1] API operation must not be specified. This - # must be one of these values: + # [ExecuteChangeSet][1] API operation must not be specified. This must + # be one of these values: # # * `DELETE` - Deletes the change set if the stack creation fails. # This is only valid when the `ChangeSetType` parameter is set to @@ -2505,11 +2504,11 @@ class DescribeChangeSetInput < Struct.new( # # * `DO_NOTHING` - if the stack creation fails, do nothing. This is # equivalent to specifying `true` for the `DisableRollback` - # parameter to the [ `ExecuteChangeSet` ][1] API operation. + # parameter to the [ExecuteChangeSet][1] API operation. # # * `ROLLBACK` - if the stack creation fails, roll back the stack. # This is equivalent to specifying `false` for the `DisableRollback` - # parameter to the [ `ExecuteChangeSet` ][1] API operation. + # parameter to the [ExecuteChangeSet][1] API operation. # # # @@ -3471,7 +3470,7 @@ class DescribeTypeInput < Struct.new( # # If the extension is a public third-party type you have activated # with a type name alias, CloudFormation returns the type name alias. - # For more information, see [ `ActivateType` ][1]. + # For more information, see [ActivateType][1]. # # # @@ -3485,7 +3484,7 @@ class DescribeTypeInput < Struct.new( # This applies only to private extensions you have registered in your # account. For public extensions, both those provided by Amazon Web # Services and published by third parties, CloudFormation returns - # `null`. For more information, see [ `RegisterType` ][1]. + # `null`. For more information, see [RegisterType][1]. # # To set the default version of an extension, use # SetTypeDefaultVersion. 
@@ -3604,7 +3603,7 @@ class DescribeTypeInput < Struct.new( # This applies only to private extensions you have registered in your # account. For public extensions, both those provided by Amazon Web # Services and published by third parties, CloudFormation returns - # `null`. For more information, see [ `RegisterType` ][1]. + # `null`. For more information, see [RegisterType][1]. # # # @@ -3620,8 +3619,8 @@ class DescribeTypeInput < Struct.new( # @!attribute [rw] execution_role_arn # The Amazon Resource Name (ARN) of the IAM execution role used to # register the extension. This applies only to private extensions you - # have registered in your account. For more information, see [ - # `RegisterType` ][1]. + # have registered in your account. For more information, see + # [RegisterType][1]. # # If the registered extension calls any Amazon Web Services APIs, you # must create an Hash # resp.anomalies[0].histogram["Time"] #=> Integer # resp.anomalies[0].log_samples #=> Array - # resp.anomalies[0].log_samples[0] #=> String + # resp.anomalies[0].log_samples[0].timestamp #=> Integer + # resp.anomalies[0].log_samples[0].message #=> String # resp.anomalies[0].pattern_tokens #=> Array # resp.anomalies[0].pattern_tokens[0].dynamic_token_position #=> Integer # resp.anomalies[0].pattern_tokens[0].is_dynamic #=> Boolean @@ -3348,25 +3347,24 @@ def list_tags_log_group(params = {}, options = {}) # from CloudWatch Logs to other Amazon Web Services services. # Account-level subscription filter policies apply to both existing log # groups and log groups that are created later in this account. - # Supported destinations are Kinesis Data Streams, Kinesis Data - # Firehose, and Lambda. When log events are sent to the receiving - # service, they are Base64 encoded and compressed with the GZIP format. + # Supported destinations are Kinesis Data Streams, Firehose, and Lambda. 
+ # When log events are sent to the receiving service, they are Base64 + # encoded and compressed with the GZIP format. # # The following destinations are supported for subscription filters: # # * An Kinesis Data Streams data stream in the same account as the # subscription policy, for same-account delivery. # - # * An Kinesis Data Firehose data stream in the same account as the - # subscription policy, for same-account delivery. + # * A Firehose data stream in the same account as the subscription + # policy, for same-account delivery. # # * A Lambda function in the same account as the subscription policy, # for same-account delivery. # # * A logical destination in a different account created with # [PutDestination][5], for cross-account delivery. Kinesis Data - # Streams and Kinesis Data Firehose are supported as logical - # destinations. + # Streams and Firehose are supported as logical destinations. # # Each account can have one account-level subscription filter policy. If # you are updating an existing filter, you must specify the correct name @@ -3403,8 +3401,7 @@ def list_tags_log_group(params = {}, options = {}) # `FindingsDestination` object. You can optionally use that # `FindingsDestination` object to list one or more destinations to # send audit findings to. If you specify destinations such as log - # groups, Kinesis Data Firehose streams, and S3 buckets, they must - # already exist. + # groups, Firehose streams, and S3 buckets, they must already exist. # # * The second block must include both a `DataIdentifer` array and an # `Operation` property with an `Deidentify` action. The @@ -3440,16 +3437,15 @@ def list_tags_log_group(params = {}, options = {}) # * An Kinesis Data Streams data stream in the same account as the # subscription policy, for same-account delivery. # - # * An Kinesis Data Firehose data stream in the same account as the - # subscription policy, for same-account delivery. 
+ # * A Firehose data stream in the same account as the subscription + # policy, for same-account delivery. # # * A Lambda function in the same account as the subscription policy, # for same-account delivery. # # * A logical destination in a different account created with # [PutDestination][2], for cross-account delivery. Kinesis Data - # Streams and Kinesis Data Firehose are supported as logical - # destinations. + # Streams and Firehose are supported as logical destinations. # # * **RoleArn** The ARN of an IAM role that grants CloudWatch Logs # permissions to deliver ingested log events to the destination @@ -3582,8 +3578,7 @@ def put_account_policy(params = {}, options = {}) # `FindingsDestination` object. You can optionally use that # `FindingsDestination` object to list one or more destinations to # send audit findings to. If you specify destinations such as log - # groups, Kinesis Data Firehose streams, and S3 buckets, they must - # already exist. + # groups, Firehose streams, and S3 buckets, they must already exist. # # * The second block must include both a `DataIdentifer` array and an # `Operation` property with an `Deidentify` action. The @@ -3641,8 +3636,7 @@ def put_data_protection_policy(params = {}, options = {}) # Creates or updates a logical *delivery destination*. A delivery # destination is an Amazon Web Services resource that represents an # Amazon Web Services service that logs can be sent to. CloudWatch Logs, - # Amazon S3, and Kinesis Data Firehose are supported as logs delivery - # destinations. + # Amazon S3, and Firehose are supported as logs delivery destinations. # # To configure logs delivery between a supported Amazon Web Services # service and a destination, you must do the following: @@ -3812,7 +3806,7 @@ def put_delivery_destination_policy(params = {}, options = {}) # Creates or updates a logical *delivery source*. A delivery source # represents an Amazon Web Services resource that sends logs to an logs # delivery destination. 
The destination can be CloudWatch Logs, Amazon - # S3, or Kinesis Data Firehose. + # S3, or Firehose. # # To configure logs delivery between a delivery destination and an # Amazon Web Services service that is supported as a delivery source, @@ -3866,8 +3860,15 @@ def put_delivery_destination_policy(params = {}, options = {}) # `arn:aws:workmail:us-east-1:123456789012:organization/m-1234EXAMPLEabcd1234abcd1234abcd1234` # # @option params [required, String] :log_type - # Defines the type of log that the source is sending. For Amazon - # CodeWhisperer, the valid value is `EVENT_LOGS`. + # Defines the type of log that the source is sending. + # + # * For Amazon CodeWhisperer, the valid value is `EVENT_LOGS`. + # + # * For IAM Identity Center, the valid value is `ERROR_LOGS`. + # + # * For Amazon WorkMail, the valid values are `ACCESS_CONTROL_LOGS`, + # `AUTHENTICATION_LOGS`, `WORKMAIL_AVAILABILITY_PROVIDER_LOGS`, and + # `WORKMAIL_MAILBOX_ACCESS_LOGS`. # # @option params [Hash] :tags # An optional list of key-value pairs to associate with the resource. @@ -4450,8 +4451,7 @@ def put_retention_policy(params = {}, options = {}) # # * A logical destination created with [PutDestination][2] that belongs # to a different account, for cross-account delivery. We currently - # support Kinesis Data Streams and Kinesis Data Firehose as logical - # destinations. + # support Kinesis Data Streams and Firehose as logical destinations. # # * An Amazon Kinesis Data Firehose delivery stream that belongs to the # same account as the subscription filter, for same-account delivery. 
@@ -5349,7 +5349,7 @@ def build_request(operation_name, params = {}) params: params, config: config) context[:gem_name] = 'aws-sdk-cloudwatchlogs' - context[:gem_version] = '1.79.0' + context[:gem_version] = '1.80.0' Seahorse::Client::Request.new(handlers, context) end diff --git a/gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client_api.rb b/gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client_api.rb index be55673f91f..01f655f39ab 100644 --- a/gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client_api.rb +++ b/gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/client_api.rb @@ -196,7 +196,7 @@ module ClientApi LiveTailSessionResults = Shapes::ListShape.new(name: 'LiveTailSessionResults') LiveTailSessionStart = Shapes::StructureShape.new(name: 'LiveTailSessionStart') LiveTailSessionUpdate = Shapes::StructureShape.new(name: 'LiveTailSessionUpdate') - LogEvent = Shapes::StringShape.new(name: 'LogEvent') + LogEvent = Shapes::StructureShape.new(name: 'LogEvent') LogEventIndex = Shapes::IntegerShape.new(name: 'LogEventIndex') LogGroup = Shapes::StructureShape.new(name: 'LogGroup') LogGroupArn = Shapes::StringShape.new(name: 'LogGroupArn') @@ -904,6 +904,10 @@ module ClientApi LiveTailSessionUpdate.add_member(:session_results, Shapes::ShapeRef.new(shape: LiveTailSessionResults, location_name: "sessionResults")) LiveTailSessionUpdate.struct_class = Types::LiveTailSessionUpdate + LogEvent.add_member(:timestamp, Shapes::ShapeRef.new(shape: Timestamp, location_name: "timestamp")) + LogEvent.add_member(:message, Shapes::ShapeRef.new(shape: EventMessage, location_name: "message")) + LogEvent.struct_class = Types::LogEvent + LogGroup.add_member(:log_group_name, Shapes::ShapeRef.new(shape: LogGroupName, location_name: "logGroupName")) LogGroup.add_member(:creation_time, Shapes::ShapeRef.new(shape: Timestamp, location_name: "creationTime")) LogGroup.add_member(:retention_in_days, Shapes::ShapeRef.new(shape: Days, location_name: "retentionInDays")) 
diff --git a/gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/types.rb b/gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/types.rb index bb5b4904494..bb548f8e38b 100644 --- a/gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/types.rb +++ b/gems/aws-sdk-cloudwatchlogs/lib/aws-sdk-cloudwatchlogs/types.rb @@ -146,7 +146,7 @@ class AccountPolicy < Struct.new( # @!attribute [rw] log_samples # An array of sample log event messages that are considered to be part # of this anomaly. - # @return [Array<String>] + # @return [Array<LogEvent>] # # @!attribute [rw] pattern_tokens # An array of structures where each structure contains information @@ -925,7 +925,7 @@ class DeleteSubscriptionFilterRequest < Struct.new( # # @!attribute [rw] delivery_destination_type # Displays whether the delivery destination associated with this - # delivery is CloudWatch Logs, Amazon S3, or Kinesis Data Firehose. + # delivery is CloudWatch Logs, Amazon S3, or Firehose. # @return [String] # # @!attribute [rw] tags @@ -948,8 +948,8 @@ class Delivery < Struct.new( # This structure contains information about one *delivery destination* # in your account. A delivery destination is an Amazon Web Services # resource that represents an Amazon Web Services service that logs can - # be sent to. CloudWatch Logs, Amazon S3, are supported as Kinesis Data - # Firehose delivery destinations. + # be sent to. CloudWatch Logs, Amazon S3, and Firehose are supported + # as delivery destinations. # # To configure logs delivery between a supported Amazon Web Services # service and a destination, you must do the following: @@ -991,7 +991,7 @@ class Delivery < Struct.new( # # @!attribute [rw] delivery_destination_type # Displays whether this delivery destination is CloudWatch Logs, - # Amazon S3, or Kinesis Data Firehose. + # Amazon S3, or Firehose. 
# @return [String] # # @!attribute [rw] output_format @@ -1027,7 +1027,7 @@ class DeliveryDestination < Struct.new( # The ARN of the Amazon Web Services destination that this delivery # destination represents. That Amazon Web Services destination can be # a log group in CloudWatch Logs, an Amazon S3 bucket, or a delivery - # stream in Kinesis Data Firehose. + # stream in Firehose. # @return [String] # # @see http://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DeliveryDestinationConfiguration AWS API Documentation @@ -1041,7 +1041,7 @@ class DeliveryDestinationConfiguration < Struct.new( # This structure contains information about one *delivery source* in # your account. A delivery source is an Amazon Web Services resource # that sends logs to an Amazon Web Services destination. The destination - # can be CloudWatch Logs, Amazon S3, or Kinesis Data Firehose. + # can be CloudWatch Logs, Amazon S3, or Firehose. # # Only some Amazon Web Services services support being configured as a # delivery source. These services are listed as **Supported \[V2 @@ -3006,6 +3006,26 @@ class LiveTailSessionUpdate < Struct.new( include Aws::Structure end + # This structure contains the information for one sample log event that + # is associated with an anomaly found by a log anomaly detector. + # + # @!attribute [rw] timestamp + # The time stamp of the log event. + # @return [Integer] + # + # @!attribute [rw] message + # The message content of the log event. + # @return [String] + # + # @see http://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/LogEvent AWS API Documentation + # + class LogEvent < Struct.new( + :timestamp, + :message) + SENSITIVE = [] + include Aws::Structure + end + # Represents a log group. # # @!attribute [rw] log_group_name @@ -3493,8 +3513,7 @@ class Policy < Struct.new( # `FindingsDestination` object. You can optionally use that # `FindingsDestination` object to list one or more destinations to # send audit findings to. 
If you specify destinations such as log - # groups, Kinesis Data Firehose streams, and S3 buckets, they must - # already exist. + # groups, Firehose streams, and S3 buckets, they must already exist. # # * The second block must include both a `DataIdentifer` array and an # `Operation` property with an `Deidentify` action. The @@ -3530,16 +3549,15 @@ class Policy < Struct.new( # * An Kinesis Data Streams data stream in the same account as the # subscription policy, for same-account delivery. # - # * An Kinesis Data Firehose data stream in the same account as the - # subscription policy, for same-account delivery. + # * A Firehose data stream in the same account as the subscription + # policy, for same-account delivery. # # * A Lambda function in the same account as the subscription # policy, for same-account delivery. # # * A logical destination in a different account created with # [PutDestination][2], for cross-account delivery. Kinesis Data - # Streams and Kinesis Data Firehose are supported as logical - # destinations. + # Streams and Firehose are supported as logical destinations. # # * **RoleArn** The ARN of an IAM role that grants CloudWatch Logs # permissions to deliver ingested log events to the destination @@ -3635,8 +3653,7 @@ class PutAccountPolicyResponse < Struct.new( # `FindingsDestination` object. You can optionally use that # `FindingsDestination` object to list one or more destinations to # send audit findings to. If you specify destinations such as log - # groups, Kinesis Data Firehose streams, and S3 buckets, they must - # already exist. + # groups, Firehose streams, and S3 buckets, they must already exist. # # * The second block must include both a `DataIdentifer` array and an # `Operation` property with an `Deidentify` action. The @@ -3786,8 +3803,15 @@ class PutDeliveryDestinationResponse < Struct.new( # @return [String] # # @!attribute [rw] log_type - # Defines the type of log that the source is sending. 
For Amazon - # CodeWhisperer, the valid value is `EVENT_LOGS`. + # Defines the type of log that the source is sending. + # + # * For Amazon CodeWhisperer, the valid value is `EVENT_LOGS`. + # + # * For IAM Identity Center, the valid value is `ERROR_LOGS`. + # + # * For Amazon WorkMail, the valid values are `ACCESS_CONTROL_LOGS`, + # `AUTHENTICATION_LOGS`, `WORKMAIL_AVAILABILITY_PROVIDER_LOGS`, and + # `WORKMAIL_MAILBOX_ACCESS_LOGS`. # @return [String] # # @!attribute [rw] tags @@ -4380,11 +4404,13 @@ class QueryStatistics < Struct.new( # Represents the rejected events. # # @!attribute [rw] too_new_log_event_start_index - # The log events that are too new. + # The index of the first log event that is too new. This field is + # inclusive. # @return [Integer] # # @!attribute [rw] too_old_log_event_end_index - # The log events that are dated too far in the past. + # The index of the last log event that is too old. This field is + # exclusive. # @return [Integer] # # @!attribute [rw] expired_log_event_end_index diff --git a/gems/aws-sdk-cloudwatchlogs/sig/types.rbs b/gems/aws-sdk-cloudwatchlogs/sig/types.rbs index d266e8f4258..b5921405f6f 100644 --- a/gems/aws-sdk-cloudwatchlogs/sig/types.rbs +++ b/gems/aws-sdk-cloudwatchlogs/sig/types.rbs @@ -35,7 +35,7 @@ module Aws::CloudWatchLogs attr_accessor active: bool attr_accessor state: ("Active" | "Suppressed" | "Baseline") attr_accessor histogram: ::Hash[::String, ::Integer] - attr_accessor log_samples: ::Array[::String] + attr_accessor log_samples: ::Array[Types::LogEvent] attr_accessor pattern_tokens: ::Array[Types::PatternToken] attr_accessor log_group_arn_list: ::Array[::String] attr_accessor suppressed: bool @@ -731,6 +731,12 @@ module Aws::CloudWatchLogs SENSITIVE: [] end + class LogEvent + attr_accessor timestamp: ::Integer + attr_accessor message: ::String + SENSITIVE: [] + end + class LogGroup attr_accessor log_group_name: ::String attr_accessor creation_time: ::Integer diff --git a/gems/aws-sdk-ec2/CHANGELOG.md 
b/gems/aws-sdk-ec2/CHANGELOG.md index 61aa61146df..2d9f5ab097a 100644 --- a/gems/aws-sdk-ec2/CHANGELOG.md +++ b/gems/aws-sdk-ec2/CHANGELOG.md @@ -1,6 +1,11 @@ Unreleased Changes ------------------ +1.444.0 (2024-03-19) +------------------ + +* Feature - This release adds the new DescribeMacHosts API operation for getting information about EC2 Mac Dedicated Hosts. Users can now see the latest macOS versions that their underlying Apple Mac can support without needing to be updated. + 1.443.0 (2024-03-15) ------------------ diff --git a/gems/aws-sdk-ec2/VERSION b/gems/aws-sdk-ec2/VERSION index e95f7391fda..1b3c5c87be3 100644 --- a/gems/aws-sdk-ec2/VERSION +++ b/gems/aws-sdk-ec2/VERSION @@ -1 +1 @@ -1.443.0 +1.444.0 diff --git a/gems/aws-sdk-ec2/lib/aws-sdk-ec2.rb b/gems/aws-sdk-ec2/lib/aws-sdk-ec2.rb index 3f7d1658a1c..4676caf8e62 100644 --- a/gems/aws-sdk-ec2/lib/aws-sdk-ec2.rb +++ b/gems/aws-sdk-ec2/lib/aws-sdk-ec2.rb @@ -76,6 +76,6 @@ # @!group service module Aws::EC2 - GEM_VERSION = '1.443.0' + GEM_VERSION = '1.444.0' end diff --git a/gems/aws-sdk-ec2/lib/aws-sdk-ec2/client.rb b/gems/aws-sdk-ec2/lib/aws-sdk-ec2/client.rb index 19eef1a5bb1..068ecc7ec67 100644 --- a/gems/aws-sdk-ec2/lib/aws-sdk-ec2/client.rb +++ b/gems/aws-sdk-ec2/lib/aws-sdk-ec2/client.rb @@ -27776,6 +27776,69 @@ def describe_locked_snapshots(params = {}, options = {}) req.send_request(options) end + # Describes the specified EC2 Mac Dedicated Host or all of your EC2 Mac + # Dedicated Hosts. + # + # @option params [Array] :filters + # The filters. + # + # * `availability-zone` - The Availability Zone of the EC2 Mac Dedicated + # Host. + # + # * `instance-type` - The instance type size that the EC2 Mac Dedicated + # Host is configured to support. + # + # @option params [Array] :host_ids + # The IDs of the EC2 Mac Dedicated Hosts. + # + # @option params [Integer] :max_results + # The maximum number of results to return for the request in a single + # page. 
The remaining results can be seen by sending another request + # with the returned `nextToken` value. This value can be between 5 and + # 500. If `maxResults` is given a larger value than 500, you receive an + # error. + # + # @option params [String] :next_token + # The token to use to retrieve the next page of results. + # + # @return [Types::DescribeMacHostsResult] Returns a {Seahorse::Client::Response response} object which responds to the following methods: + # + # * {Types::DescribeMacHostsResult#mac_hosts #mac_hosts} => Array<Types::MacHost> + # * {Types::DescribeMacHostsResult#next_token #next_token} => String + # + # The returned {Seahorse::Client::Response response} is a pageable response and is Enumerable. For details on usage see {Aws::PageableResponse PageableResponse}. + # + # @example Request syntax with placeholder values + # + # resp = client.describe_mac_hosts({ + # filters: [ + # { + # name: "String", + # values: ["String"], + # }, + # ], + # host_ids: ["DedicatedHostId"], + # max_results: 1, + # next_token: "String", + # }) + # + # @example Response structure + # + # resp.mac_hosts #=> Array + # resp.mac_hosts[0].host_id #=> String + # resp.mac_hosts[0].mac_os_latest_supported_versions #=> Array + # resp.mac_hosts[0].mac_os_latest_supported_versions[0] #=> String + # resp.next_token #=> String + # + # @see http://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeMacHosts AWS API Documentation + # + # @overload describe_mac_hosts(params = {}) + # @param [Hash] params ({}) + def describe_mac_hosts(params = {}, options = {}) + req = build_request(:describe_mac_hosts, params) + req.send_request(options) + end + # Describes your managed prefix lists and any Amazon Web # Services-managed prefix lists. 
# @@ -58918,7 +58981,7 @@ def build_request(operation_name, params = {}) params: params, config: config) context[:gem_name] = 'aws-sdk-ec2' - context[:gem_version] = '1.443.0' + context[:gem_version] = '1.444.0' Seahorse::Client::Request.new(handlers, context) end diff --git a/gems/aws-sdk-ec2/lib/aws-sdk-ec2/client_api.rb b/gems/aws-sdk-ec2/lib/aws-sdk-ec2/client_api.rb index 1da5e6b0bb3..a5b8871ed34 100644 --- a/gems/aws-sdk-ec2/lib/aws-sdk-ec2/client_api.rb +++ b/gems/aws-sdk-ec2/lib/aws-sdk-ec2/client_api.rb @@ -938,6 +938,9 @@ module ClientApi DescribeLockedSnapshotsMaxResults = Shapes::IntegerShape.new(name: 'DescribeLockedSnapshotsMaxResults') DescribeLockedSnapshotsRequest = Shapes::StructureShape.new(name: 'DescribeLockedSnapshotsRequest') DescribeLockedSnapshotsResult = Shapes::StructureShape.new(name: 'DescribeLockedSnapshotsResult') + DescribeMacHostsRequest = Shapes::StructureShape.new(name: 'DescribeMacHostsRequest') + DescribeMacHostsRequestMaxResults = Shapes::IntegerShape.new(name: 'DescribeMacHostsRequestMaxResults') + DescribeMacHostsResult = Shapes::StructureShape.new(name: 'DescribeMacHostsResult') DescribeManagedPrefixListsRequest = Shapes::StructureShape.new(name: 'DescribeManagedPrefixListsRequest') DescribeManagedPrefixListsResult = Shapes::StructureShape.new(name: 'DescribeManagedPrefixListsResult') DescribeMovingAddressesMaxResults = Shapes::IntegerShape.new(name: 'DescribeMovingAddressesMaxResults') @@ -2026,6 +2029,9 @@ module ClientApi LockedSnapshotsInfoList = Shapes::ListShape.new(name: 'LockedSnapshotsInfoList') LogDestinationType = Shapes::StringShape.new(name: 'LogDestinationType') Long = Shapes::IntegerShape.new(name: 'Long') + MacHost = Shapes::StructureShape.new(name: 'MacHost') + MacHostList = Shapes::ListShape.new(name: 'MacHostList') + MacOSVersionStringList = Shapes::ListShape.new(name: 'MacOSVersionStringList') MaintenanceDetails = Shapes::StructureShape.new(name: 'MaintenanceDetails') ManagedPrefixList = 
Shapes::StructureShape.new(name: 'ManagedPrefixList') ManagedPrefixListSet = Shapes::ListShape.new(name: 'ManagedPrefixListSet') @@ -6990,6 +6996,16 @@ module ClientApi DescribeLockedSnapshotsResult.add_member(:next_token, Shapes::ShapeRef.new(shape: String, location_name: "nextToken")) DescribeLockedSnapshotsResult.struct_class = Types::DescribeLockedSnapshotsResult + DescribeMacHostsRequest.add_member(:filters, Shapes::ShapeRef.new(shape: FilterList, location_name: "Filter")) + DescribeMacHostsRequest.add_member(:host_ids, Shapes::ShapeRef.new(shape: RequestHostIdList, location_name: "HostId")) + DescribeMacHostsRequest.add_member(:max_results, Shapes::ShapeRef.new(shape: DescribeMacHostsRequestMaxResults, location_name: "MaxResults")) + DescribeMacHostsRequest.add_member(:next_token, Shapes::ShapeRef.new(shape: String, location_name: "NextToken")) + DescribeMacHostsRequest.struct_class = Types::DescribeMacHostsRequest + + DescribeMacHostsResult.add_member(:mac_hosts, Shapes::ShapeRef.new(shape: MacHostList, location_name: "macHostSet")) + DescribeMacHostsResult.add_member(:next_token, Shapes::ShapeRef.new(shape: String, location_name: "nextToken")) + DescribeMacHostsResult.struct_class = Types::DescribeMacHostsResult + DescribeManagedPrefixListsRequest.add_member(:dry_run, Shapes::ShapeRef.new(shape: Boolean, location_name: "DryRun")) DescribeManagedPrefixListsRequest.add_member(:filters, Shapes::ShapeRef.new(shape: FilterList, location_name: "Filter")) DescribeManagedPrefixListsRequest.add_member(:max_results, Shapes::ShapeRef.new(shape: PrefixListMaxResults, location_name: "MaxResults")) @@ -11203,6 +11219,14 @@ module ClientApi LockedSnapshotsInfoList.member = Shapes::ShapeRef.new(shape: LockedSnapshotsInfo, location_name: "item") + MacHost.add_member(:host_id, Shapes::ShapeRef.new(shape: DedicatedHostId, location_name: "hostId")) + MacHost.add_member(:mac_os_latest_supported_versions, Shapes::ShapeRef.new(shape: MacOSVersionStringList, location_name: 
"macOSLatestSupportedVersionSet")) + MacHost.struct_class = Types::MacHost + + MacHostList.member = Shapes::ShapeRef.new(shape: MacHost, location_name: "item") + + MacOSVersionStringList.member = Shapes::ShapeRef.new(shape: String, location_name: "item") + MaintenanceDetails.add_member(:pending_maintenance, Shapes::ShapeRef.new(shape: String, location_name: "pendingMaintenance")) MaintenanceDetails.add_member(:maintenance_auto_applied_after, Shapes::ShapeRef.new(shape: MillisecondDateTime, location_name: "maintenanceAutoAppliedAfter")) MaintenanceDetails.add_member(:last_maintenance_applied, Shapes::ShapeRef.new(shape: MillisecondDateTime, location_name: "lastMaintenanceApplied")) @@ -18278,6 +18302,20 @@ module ClientApi o.output = Shapes::ShapeRef.new(shape: DescribeLockedSnapshotsResult) end) + api.add_operation(:describe_mac_hosts, Seahorse::Model::Operation.new.tap do |o| + o.name = "DescribeMacHosts" + o.http_method = "POST" + o.http_request_uri = "/" + o.input = Shapes::ShapeRef.new(shape: DescribeMacHostsRequest) + o.output = Shapes::ShapeRef.new(shape: DescribeMacHostsResult) + o[:pager] = Aws::Pager.new( + limit_key: "max_results", + tokens: { + "next_token" => "next_token" + } + ) + end) + api.add_operation(:describe_managed_prefix_lists, Seahorse::Model::Operation.new.tap do |o| o.name = "DescribeManagedPrefixLists" o.http_method = "POST" diff --git a/gems/aws-sdk-ec2/lib/aws-sdk-ec2/endpoints.rb b/gems/aws-sdk-ec2/lib/aws-sdk-ec2/endpoints.rb index 1123ec13b69..0b6fb12f168 100644 --- a/gems/aws-sdk-ec2/lib/aws-sdk-ec2/endpoints.rb +++ b/gems/aws-sdk-ec2/lib/aws-sdk-ec2/endpoints.rb @@ -4100,6 +4100,20 @@ def self.build(context) end end + class DescribeMacHosts + def self.build(context) + unless context.config.regional_endpoint + endpoint = context.config.endpoint.to_s + end + Aws::EC2::EndpointParameters.new( + region: context.config.region, + use_dual_stack: context.config.use_dualstack_endpoint, + use_fips: context.config.use_fips_endpoint, + 
endpoint: endpoint, + ) + end + end + class DescribeManagedPrefixLists def self.build(context) unless context.config.regional_endpoint diff --git a/gems/aws-sdk-ec2/lib/aws-sdk-ec2/plugins/endpoints.rb b/gems/aws-sdk-ec2/lib/aws-sdk-ec2/plugins/endpoints.rb index 38723ab0d16..4e4867b89cb 100644 --- a/gems/aws-sdk-ec2/lib/aws-sdk-ec2/plugins/endpoints.rb +++ b/gems/aws-sdk-ec2/lib/aws-sdk-ec2/plugins/endpoints.rb @@ -642,6 +642,8 @@ def parameters_for_operation(context) Aws::EC2::Endpoints::DescribeLocalGateways.build(context) when :describe_locked_snapshots Aws::EC2::Endpoints::DescribeLockedSnapshots.build(context) + when :describe_mac_hosts + Aws::EC2::Endpoints::DescribeMacHosts.build(context) when :describe_managed_prefix_lists Aws::EC2::Endpoints::DescribeManagedPrefixLists.build(context) when :describe_moving_addresses diff --git a/gems/aws-sdk-ec2/lib/aws-sdk-ec2/types.rb b/gems/aws-sdk-ec2/lib/aws-sdk-ec2/types.rb index 1fb80c3c8f3..f9595f297a3 100644 --- a/gems/aws-sdk-ec2/lib/aws-sdk-ec2/types.rb +++ b/gems/aws-sdk-ec2/lib/aws-sdk-ec2/types.rb @@ -22086,6 +22086,60 @@ class DescribeLockedSnapshotsResult < Struct.new( include Aws::Structure end + # @!attribute [rw] filters + # The filters. + # + # * `availability-zone` - The Availability Zone of the EC2 Mac + # Dedicated Host. + # + # * `instance-type` - The instance type size that the EC2 Mac + # Dedicated Host is configured to support. + # @return [Array] + # + # @!attribute [rw] host_ids + # The IDs of the EC2 Mac Dedicated Hosts. + # @return [Array] + # + # @!attribute [rw] max_results + # The maximum number of results to return for the request in a single + # page. The remaining results can be seen by sending another request + # with the returned `nextToken` value. This value can be between 5 and + # 500. If `maxResults` is given a larger value than 500, you receive + # an error. + # @return [Integer] + # + # @!attribute [rw] next_token + # The token to use to retrieve the next page of results. 
+ # @return [String] + # + # @see http://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeMacHostsRequest AWS API Documentation + # + class DescribeMacHostsRequest < Struct.new( + :filters, + :host_ids, + :max_results, + :next_token) + SENSITIVE = [] + include Aws::Structure + end + + # @!attribute [rw] mac_hosts + # Information about the EC2 Mac Dedicated Hosts. + # @return [Array] + # + # @!attribute [rw] next_token + # The token to use to retrieve the next page of results. + # @return [String] + # + # @see http://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/DescribeMacHostsResult AWS API Documentation + # + class DescribeMacHostsResult < Struct.new( + :mac_hosts, + :next_token) + SENSITIVE = [] + include Aws::Structure + end + # @!attribute [rw] dry_run # Checks whether you have the required permissions for the action, # without actually making the request, and provides an error response. @@ -46203,6 +46257,26 @@ class LockedSnapshotsInfo < Struct.new( include Aws::Structure end + # Information about the EC2 Mac Dedicated Host. + # + # @!attribute [rw] host_id + # The EC2 Mac Dedicated Host ID. + # @return [String] + # + # @!attribute [rw] mac_os_latest_supported_versions + # The latest macOS versions that the EC2 Mac Dedicated Host can launch + # without being upgraded. + # @return [Array] + # + # @see http://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15/MacHost AWS API Documentation + # + class MacHost < Struct.new( + :host_id, + :mac_os_latest_supported_versions) + SENSITIVE = [] + include Aws::Structure + end + # Details for Site-to-Site VPN tunnel endpoint maintenance events. 
# # @!attribute [rw] pending_maintenance diff --git a/gems/aws-sdk-ec2/sig/client.rbs b/gems/aws-sdk-ec2/sig/client.rbs index 0f36293d163..a0452ac0110 100644 --- a/gems/aws-sdk-ec2/sig/client.rbs +++ b/gems/aws-sdk-ec2/sig/client.rbs @@ -6212,6 +6212,25 @@ module Aws ) -> _DescribeLockedSnapshotsResponseSuccess | (?Hash[Symbol, untyped] params, ?Hash[Symbol, untyped] options) -> _DescribeLockedSnapshotsResponseSuccess + interface _DescribeMacHostsResponseSuccess + include ::Seahorse::Client::_ResponseSuccess[Types::DescribeMacHostsResult] + def mac_hosts: () -> ::Array[Types::MacHost] + def next_token: () -> ::String + end + # https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/EC2/Client.html#describe_mac_hosts-instance_method + def describe_mac_hosts: ( + ?filters: Array[ + { + name: ::String?, + values: Array[::String]? + }, + ], + ?host_ids: Array[::String], + ?max_results: ::Integer, + ?next_token: ::String + ) -> _DescribeMacHostsResponseSuccess + | (?Hash[Symbol, untyped] params, ?Hash[Symbol, untyped] options) -> _DescribeMacHostsResponseSuccess + interface _DescribeManagedPrefixListsResponseSuccess include ::Seahorse::Client::_ResponseSuccess[Types::DescribeManagedPrefixListsResult] def next_token: () -> ::String diff --git a/gems/aws-sdk-ec2/sig/types.rbs b/gems/aws-sdk-ec2/sig/types.rbs index 5324b7df8c6..08bca75b6a0 100644 --- a/gems/aws-sdk-ec2/sig/types.rbs +++ b/gems/aws-sdk-ec2/sig/types.rbs @@ -5000,6 +5000,20 @@ module Aws::EC2 SENSITIVE: [] end + class DescribeMacHostsRequest + attr_accessor filters: ::Array[Types::Filter] + attr_accessor host_ids: ::Array[::String] + attr_accessor max_results: ::Integer + attr_accessor next_token: ::String + SENSITIVE: [] + end + + class DescribeMacHostsResult + attr_accessor mac_hosts: ::Array[Types::MacHost] + attr_accessor next_token: ::String + SENSITIVE: [] + end + class DescribeManagedPrefixListsRequest attr_accessor dry_run: bool attr_accessor filters: ::Array[Types::Filter] @@ -10099,6 +10113,12 @@ 
module Aws::EC2 SENSITIVE: [] end + class MacHost + attr_accessor host_id: ::String + attr_accessor mac_os_latest_supported_versions: ::Array[::String] + SENSITIVE: [] + end + class MaintenanceDetails attr_accessor pending_maintenance: ::String attr_accessor maintenance_auto_applied_after: ::Time diff --git a/gems/aws-sdk-finspace/CHANGELOG.md b/gems/aws-sdk-finspace/CHANGELOG.md index fd3bb943af8..cde06554369 100644 --- a/gems/aws-sdk-finspace/CHANGELOG.md +++ b/gems/aws-sdk-finspace/CHANGELOG.md @@ -1,6 +1,11 @@ Unreleased Changes ------------------ +1.30.0 (2024-03-19) +------------------ + +* Feature - Adding new attributes readWrite and onDemand to dataview models for Database Maintenance operations. + 1.29.0 (2024-01-26) ------------------ diff --git a/gems/aws-sdk-finspace/VERSION b/gems/aws-sdk-finspace/VERSION index 5e57fb89558..034552a83ee 100644 --- a/gems/aws-sdk-finspace/VERSION +++ b/gems/aws-sdk-finspace/VERSION @@ -1 +1 @@ -1.29.0 +1.30.0 diff --git a/gems/aws-sdk-finspace/lib/aws-sdk-finspace.rb b/gems/aws-sdk-finspace/lib/aws-sdk-finspace.rb index cd6c008ada8..87d1a9b7ba8 100644 --- a/gems/aws-sdk-finspace/lib/aws-sdk-finspace.rb +++ b/gems/aws-sdk-finspace/lib/aws-sdk-finspace.rb @@ -52,6 +52,6 @@ # @!group service module Aws::Finspace - GEM_VERSION = '1.29.0' + GEM_VERSION = '1.30.0' end diff --git a/gems/aws-sdk-finspace/lib/aws-sdk-finspace/client.rb b/gems/aws-sdk-finspace/lib/aws-sdk-finspace/client.rb index 09d69c7a95f..a1dd5dcbcbc 100644 --- a/gems/aws-sdk-finspace/lib/aws-sdk-finspace/client.rb +++ b/gems/aws-sdk-finspace/lib/aws-sdk-finspace/client.rb @@ -774,6 +774,7 @@ def create_kx_changeset(params = {}, options = {}) # { # db_paths: ["DbPath"], # required # volume_name: "KxVolumeName", # required + # on_demand: false, # }, # ], # }, @@ -865,6 +866,7 @@ def create_kx_changeset(params = {}, options = {}) # resp.databases[0].dataview_configuration.segment_configurations[0].db_paths #=> Array # 
resp.databases[0].dataview_configuration.segment_configurations[0].db_paths[0] #=> String # resp.databases[0].dataview_configuration.segment_configurations[0].volume_name #=> String + # resp.databases[0].dataview_configuration.segment_configurations[0].on_demand #=> Boolean # resp.cache_storage_configurations #=> Array # resp.cache_storage_configurations[0].type #=> String # resp.cache_storage_configurations[0].size #=> Integer @@ -990,12 +992,9 @@ def create_kx_database(params = {}, options = {}) # A unique identifier for the dataview. # # @option params [required, String] :az_mode - # The number of availability zones you want to assign per cluster. This - # can be one of the following - # - # * `SINGLE` – Assigns one availability zone per cluster. - # - # * `MULTI` – Assigns all the availability zones per cluster. + # The number of availability zones you want to assign per volume. + # Currently, FinSpace only supports `SINGLE` for volumes. This places + # dataview in a single AZ. # # @option params [String] :availability_zone_id # The identifier of the availability zones. @@ -1016,6 +1015,25 @@ def create_kx_database(params = {}, options = {}) # additions and corrections automatically to the dataview, when you # ingest new changesets. The default value is false. # + # @option params [Boolean] :read_write + # The option to specify whether you want to make the dataview writable + # to perform database maintenance. The following are some considerations + # related to writable dataviews.

 + # + # * You cannot create partial writable dataviews. When you create + # writeable dataviews you must provide the entire database path. + # + # * You cannot perform updates on a writeable dataview. Hence, + # `autoUpdate` must be set as **False** if `readWrite` is **True** for + # a dataview. + # + # * You must also use a unique volume for creating a writeable dataview. + # So, if you choose a volume that is already in use by another + # dataview, the dataview creation fails. + # + # * Once you create a dataview as writeable, you cannot change it to + # read-only. So, you cannot update the `readWrite` parameter later. + # # @option params [String] :description # A description of the dataview. # @@ -1040,6 +1058,7 @@ def create_kx_database(params = {}, options = {}) # * {Types::CreateKxDataviewResponse#segment_configurations #segment_configurations} => Array<Types::KxDataviewSegmentConfiguration> # * {Types::CreateKxDataviewResponse#description #description} => String # * {Types::CreateKxDataviewResponse#auto_update #auto_update} => Boolean + # * {Types::CreateKxDataviewResponse#read_write #read_write} => Boolean # * {Types::CreateKxDataviewResponse#created_timestamp #created_timestamp} => Time # * {Types::CreateKxDataviewResponse#last_modified_timestamp #last_modified_timestamp} => Time # * {Types::CreateKxDataviewResponse#status #status} => String @@ -1057,9 +1076,11 @@ def create_kx_database(params = {}, options = {}) # { # db_paths: ["DbPath"], # required # volume_name: "KxVolumeName", # required + # on_demand: false, # }, # ], # auto_update: false, + # read_write: false, # description: "Description", # tags: { # "TagKey" => "TagValue", @@ -1079,8 +1100,10 @@ def create_kx_database(params = {}, options = {}) # resp.segment_configurations[0].db_paths #=> Array # resp.segment_configurations[0].db_paths[0] #=> String # resp.segment_configurations[0].volume_name #=> String + # resp.segment_configurations[0].on_demand #=> Boolean # resp.description #=> String # 
resp.auto_update #=> Boolean + # resp.read_write #=> Boolean # resp.created_timestamp #=> Time # resp.last_modified_timestamp #=> Time # resp.status #=> String, one of "CREATING", "ACTIVE", "UPDATING", "FAILED", "DELETING" @@ -1175,6 +1198,26 @@ def create_kx_environment(params = {}, options = {}) # The memory and CPU capabilities of the scaling group host on which # FinSpace Managed kdb clusters will be placed. # + # You can add one of the following values: + # + # * `kx.sg.4xlarge` – The host type with a configuration of 108 GiB + # memory and 16 vCPUs. + # + # * `kx.sg.8xlarge` – The host type with a configuration of 216 GiB + # memory and 32 vCPUs. + # + # * `kx.sg.16xlarge` – The host type with a configuration of 432 GiB + # memory and 64 vCPUs. + # + # * `kx.sg.32xlarge` – The host type with a configuration of 864 GiB + # memory and 128 vCPUs. + # + # * `kx.sg1.16xlarge` – The host type with a configuration of 1949 GiB + # memory and 64 vCPUs. + # + # * `kx.sg1.24xlarge` – The host type with a configuration of 2948 GiB + # memory and 96 vCPUs. + # # @option params [required, String] :availability_zone_id # The identifier of the availability zones. # @@ -1312,8 +1355,9 @@ def create_kx_user(params = {}, options = {}) # `volumeType` as *NAS\_1*. # # @option params [required, String] :az_mode - # The number of availability zones you want to assign per cluster. - # Currently, FinSpace only support `SINGLE` for volumes. + # The number of availability zones you want to assign per volume. + # Currently, FinSpace only supports `SINGLE` for volumes. This places + # dataview in a single AZ. # # @option params [required, Array] :availability_zone_ids # The identifier of the availability zones. 
@@ -1815,6 +1859,7 @@ def get_kx_changeset(params = {}, options = {}) # resp.databases[0].dataview_configuration.segment_configurations[0].db_paths #=> Array # resp.databases[0].dataview_configuration.segment_configurations[0].db_paths[0] #=> String # resp.databases[0].dataview_configuration.segment_configurations[0].volume_name #=> String + # resp.databases[0].dataview_configuration.segment_configurations[0].on_demand #=> Boolean # resp.cache_storage_configurations #=> Array # resp.cache_storage_configurations[0].type #=> String # resp.cache_storage_configurations[0].size #=> Integer @@ -1981,6 +2026,7 @@ def get_kx_database(params = {}, options = {}) # * {Types::GetKxDataviewResponse#active_versions #active_versions} => Array<Types::KxDataviewActiveVersion> # * {Types::GetKxDataviewResponse#description #description} => String # * {Types::GetKxDataviewResponse#auto_update #auto_update} => Boolean + # * {Types::GetKxDataviewResponse#read_write #read_write} => Boolean # * {Types::GetKxDataviewResponse#environment_id #environment_id} => String # * {Types::GetKxDataviewResponse#created_timestamp #created_timestamp} => Time # * {Types::GetKxDataviewResponse#last_modified_timestamp #last_modified_timestamp} => Time @@ -2006,18 +2052,21 @@ def get_kx_database(params = {}, options = {}) # resp.segment_configurations[0].db_paths #=> Array # resp.segment_configurations[0].db_paths[0] #=> String # resp.segment_configurations[0].volume_name #=> String + # resp.segment_configurations[0].on_demand #=> Boolean # resp.active_versions #=> Array # resp.active_versions[0].changeset_id #=> String # resp.active_versions[0].segment_configurations #=> Array # resp.active_versions[0].segment_configurations[0].db_paths #=> Array # resp.active_versions[0].segment_configurations[0].db_paths[0] #=> String # resp.active_versions[0].segment_configurations[0].volume_name #=> String + # resp.active_versions[0].segment_configurations[0].on_demand #=> Boolean # 
resp.active_versions[0].attached_clusters #=> Array # resp.active_versions[0].attached_clusters[0] #=> String # resp.active_versions[0].created_timestamp #=> Time # resp.active_versions[0].version_id #=> String # resp.description #=> String # resp.auto_update #=> Boolean + # resp.read_write #=> Boolean # resp.environment_id #=> String # resp.created_timestamp #=> Time # resp.last_modified_timestamp #=> Time @@ -2581,12 +2630,14 @@ def list_kx_databases(params = {}, options = {}) # resp.kx_dataviews[0].segment_configurations[0].db_paths #=> Array # resp.kx_dataviews[0].segment_configurations[0].db_paths[0] #=> String # resp.kx_dataviews[0].segment_configurations[0].volume_name #=> String + # resp.kx_dataviews[0].segment_configurations[0].on_demand #=> Boolean # resp.kx_dataviews[0].active_versions #=> Array # resp.kx_dataviews[0].active_versions[0].changeset_id #=> String # resp.kx_dataviews[0].active_versions[0].segment_configurations #=> Array # resp.kx_dataviews[0].active_versions[0].segment_configurations[0].db_paths #=> Array # resp.kx_dataviews[0].active_versions[0].segment_configurations[0].db_paths[0] #=> String # resp.kx_dataviews[0].active_versions[0].segment_configurations[0].volume_name #=> String + # resp.kx_dataviews[0].active_versions[0].segment_configurations[0].on_demand #=> Boolean # resp.kx_dataviews[0].active_versions[0].attached_clusters #=> Array # resp.kx_dataviews[0].active_versions[0].attached_clusters[0] #=> String # resp.kx_dataviews[0].active_versions[0].created_timestamp #=> Time @@ -2594,6 +2645,7 @@ def list_kx_databases(params = {}, options = {}) # resp.kx_dataviews[0].status #=> String, one of "CREATING", "ACTIVE", "UPDATING", "FAILED", "DELETING" # resp.kx_dataviews[0].description #=> String # resp.kx_dataviews[0].auto_update #=> Boolean + # resp.kx_dataviews[0].read_write #=> Boolean # resp.kx_dataviews[0].created_timestamp #=> Time # resp.kx_dataviews[0].last_modified_timestamp #=> Time # resp.kx_dataviews[0].status_reason #=> 
String @@ -3110,6 +3162,7 @@ def update_kx_cluster_code_configuration(params = {}, options = {}) # { # db_paths: ["DbPath"], # required # volume_name: "KxVolumeName", # required + # on_demand: false, # }, # ], # }, @@ -3224,6 +3277,7 @@ def update_kx_database(params = {}, options = {}) # * {Types::UpdateKxDataviewResponse#active_versions #active_versions} => Array<Types::KxDataviewActiveVersion> # * {Types::UpdateKxDataviewResponse#status #status} => String # * {Types::UpdateKxDataviewResponse#auto_update #auto_update} => Boolean + # * {Types::UpdateKxDataviewResponse#read_write #read_write} => Boolean # * {Types::UpdateKxDataviewResponse#description #description} => String # * {Types::UpdateKxDataviewResponse#created_timestamp #created_timestamp} => Time # * {Types::UpdateKxDataviewResponse#last_modified_timestamp #last_modified_timestamp} => Time @@ -3240,6 +3294,7 @@ def update_kx_database(params = {}, options = {}) # { # db_paths: ["DbPath"], # required # volume_name: "KxVolumeName", # required + # on_demand: false, # }, # ], # client_token: "ClientTokenString", # required @@ -3257,18 +3312,21 @@ def update_kx_database(params = {}, options = {}) # resp.segment_configurations[0].db_paths #=> Array # resp.segment_configurations[0].db_paths[0] #=> String # resp.segment_configurations[0].volume_name #=> String + # resp.segment_configurations[0].on_demand #=> Boolean # resp.active_versions #=> Array # resp.active_versions[0].changeset_id #=> String # resp.active_versions[0].segment_configurations #=> Array # resp.active_versions[0].segment_configurations[0].db_paths #=> Array # resp.active_versions[0].segment_configurations[0].db_paths[0] #=> String # resp.active_versions[0].segment_configurations[0].volume_name #=> String + # resp.active_versions[0].segment_configurations[0].on_demand #=> Boolean # resp.active_versions[0].attached_clusters #=> Array # resp.active_versions[0].attached_clusters[0] #=> String # resp.active_versions[0].created_timestamp #=> Time # 
resp.active_versions[0].version_id #=> String # resp.status #=> String, one of "CREATING", "ACTIVE", "UPDATING", "FAILED", "DELETING" # resp.auto_update #=> Boolean + # resp.read_write #=> Boolean # resp.description #=> String # resp.created_timestamp #=> Time # resp.last_modified_timestamp #=> Time @@ -3633,7 +3691,7 @@ def build_request(operation_name, params = {}) params: params, config: config) context[:gem_name] = 'aws-sdk-finspace' - context[:gem_version] = '1.29.0' + context[:gem_version] = '1.30.0' Seahorse::Client::Request.new(handlers, context) end diff --git a/gems/aws-sdk-finspace/lib/aws-sdk-finspace/client_api.rb b/gems/aws-sdk-finspace/lib/aws-sdk-finspace/client_api.rb index 47d12295591..a985144279d 100644 --- a/gems/aws-sdk-finspace/lib/aws-sdk-finspace/client_api.rb +++ b/gems/aws-sdk-finspace/lib/aws-sdk-finspace/client_api.rb @@ -448,6 +448,7 @@ module ClientApi CreateKxDataviewRequest.add_member(:changeset_id, Shapes::ShapeRef.new(shape: ChangesetId, location_name: "changesetId")) CreateKxDataviewRequest.add_member(:segment_configurations, Shapes::ShapeRef.new(shape: KxDataviewSegmentConfigurationList, location_name: "segmentConfigurations")) CreateKxDataviewRequest.add_member(:auto_update, Shapes::ShapeRef.new(shape: booleanValue, location_name: "autoUpdate")) + CreateKxDataviewRequest.add_member(:read_write, Shapes::ShapeRef.new(shape: booleanValue, location_name: "readWrite")) CreateKxDataviewRequest.add_member(:description, Shapes::ShapeRef.new(shape: Description, location_name: "description")) CreateKxDataviewRequest.add_member(:tags, Shapes::ShapeRef.new(shape: TagMap, location_name: "tags")) CreateKxDataviewRequest.add_member(:client_token, Shapes::ShapeRef.new(shape: ClientTokenString, required: true, location_name: "clientToken", metadata: {"idempotencyToken"=>true})) @@ -462,6 +463,7 @@ module ClientApi CreateKxDataviewResponse.add_member(:segment_configurations, Shapes::ShapeRef.new(shape: KxDataviewSegmentConfigurationList, 
location_name: "segmentConfigurations")) CreateKxDataviewResponse.add_member(:description, Shapes::ShapeRef.new(shape: Description, location_name: "description")) CreateKxDataviewResponse.add_member(:auto_update, Shapes::ShapeRef.new(shape: booleanValue, location_name: "autoUpdate")) + CreateKxDataviewResponse.add_member(:read_write, Shapes::ShapeRef.new(shape: booleanValue, location_name: "readWrite")) CreateKxDataviewResponse.add_member(:created_timestamp, Shapes::ShapeRef.new(shape: Timestamp, location_name: "createdTimestamp")) CreateKxDataviewResponse.add_member(:last_modified_timestamp, Shapes::ShapeRef.new(shape: Timestamp, location_name: "lastModifiedTimestamp")) CreateKxDataviewResponse.add_member(:status, Shapes::ShapeRef.new(shape: KxDataviewStatus, location_name: "status")) @@ -718,6 +720,7 @@ module ClientApi GetKxDataviewResponse.add_member(:active_versions, Shapes::ShapeRef.new(shape: KxDataviewActiveVersionList, location_name: "activeVersions")) GetKxDataviewResponse.add_member(:description, Shapes::ShapeRef.new(shape: Description, location_name: "description")) GetKxDataviewResponse.add_member(:auto_update, Shapes::ShapeRef.new(shape: booleanValue, location_name: "autoUpdate")) + GetKxDataviewResponse.add_member(:read_write, Shapes::ShapeRef.new(shape: booleanValue, location_name: "readWrite")) GetKxDataviewResponse.add_member(:environment_id, Shapes::ShapeRef.new(shape: EnvironmentId, location_name: "environmentId")) GetKxDataviewResponse.add_member(:created_timestamp, Shapes::ShapeRef.new(shape: Timestamp, location_name: "createdTimestamp")) GetKxDataviewResponse.add_member(:last_modified_timestamp, Shapes::ShapeRef.new(shape: Timestamp, location_name: "lastModifiedTimestamp")) @@ -900,6 +903,7 @@ module ClientApi KxDataviewListEntry.add_member(:status, Shapes::ShapeRef.new(shape: KxDataviewStatus, location_name: "status")) KxDataviewListEntry.add_member(:description, Shapes::ShapeRef.new(shape: Description, location_name: "description")) 
KxDataviewListEntry.add_member(:auto_update, Shapes::ShapeRef.new(shape: booleanValue, location_name: "autoUpdate")) + KxDataviewListEntry.add_member(:read_write, Shapes::ShapeRef.new(shape: booleanValue, location_name: "readWrite")) KxDataviewListEntry.add_member(:created_timestamp, Shapes::ShapeRef.new(shape: Timestamp, location_name: "createdTimestamp")) KxDataviewListEntry.add_member(:last_modified_timestamp, Shapes::ShapeRef.new(shape: Timestamp, location_name: "lastModifiedTimestamp")) KxDataviewListEntry.add_member(:status_reason, Shapes::ShapeRef.new(shape: KxDataviewStatusReason, location_name: "statusReason")) @@ -907,6 +911,7 @@ module ClientApi KxDataviewSegmentConfiguration.add_member(:db_paths, Shapes::ShapeRef.new(shape: SegmentConfigurationDbPathList, required: true, location_name: "dbPaths")) KxDataviewSegmentConfiguration.add_member(:volume_name, Shapes::ShapeRef.new(shape: KxVolumeName, required: true, location_name: "volumeName")) + KxDataviewSegmentConfiguration.add_member(:on_demand, Shapes::ShapeRef.new(shape: booleanValue, location_name: "onDemand")) KxDataviewSegmentConfiguration.struct_class = Types::KxDataviewSegmentConfiguration KxDataviewSegmentConfigurationList.member = Shapes::ShapeRef.new(shape: KxDataviewSegmentConfiguration) @@ -1221,6 +1226,7 @@ module ClientApi UpdateKxDataviewResponse.add_member(:active_versions, Shapes::ShapeRef.new(shape: KxDataviewActiveVersionList, location_name: "activeVersions")) UpdateKxDataviewResponse.add_member(:status, Shapes::ShapeRef.new(shape: KxDataviewStatus, location_name: "status")) UpdateKxDataviewResponse.add_member(:auto_update, Shapes::ShapeRef.new(shape: booleanValue, location_name: "autoUpdate")) + UpdateKxDataviewResponse.add_member(:read_write, Shapes::ShapeRef.new(shape: booleanValue, location_name: "readWrite")) UpdateKxDataviewResponse.add_member(:description, Shapes::ShapeRef.new(shape: Description, location_name: "description")) 
UpdateKxDataviewResponse.add_member(:created_timestamp, Shapes::ShapeRef.new(shape: Timestamp, location_name: "createdTimestamp")) UpdateKxDataviewResponse.add_member(:last_modified_timestamp, Shapes::ShapeRef.new(shape: Timestamp, location_name: "lastModifiedTimestamp")) diff --git a/gems/aws-sdk-finspace/lib/aws-sdk-finspace/types.rb b/gems/aws-sdk-finspace/lib/aws-sdk-finspace/types.rb index 3eac3a446ab..4204e2e71d9 100644 --- a/gems/aws-sdk-finspace/lib/aws-sdk-finspace/types.rb +++ b/gems/aws-sdk-finspace/lib/aws-sdk-finspace/types.rb @@ -893,12 +893,9 @@ class CreateKxDatabaseResponse < Struct.new( # @return [String] # # @!attribute [rw] az_mode - # The number of availability zones you want to assign per cluster. - # This can be one of the following - # - # * `SINGLE` – Assigns one availability zone per cluster. - # - # * `MULTI` – Assigns all the availability zones per cluster. + # The number of availability zones you want to assign per volume. + # Currently, FinSpace only supports `SINGLE` for volumes. This places + # dataview in a single AZ. # @return [String] # # @!attribute [rw] availability_zone_id @@ -924,6 +921,26 @@ class CreateKxDatabaseResponse < Struct.new( # ingest new changesets. The default value is false. # @return [Boolean] # + # @!attribute [rw] read_write + # The option to specify whether you want to make the dataview writable + # to perform database maintenance. The following are some + # considerations related to writable dataviews.

 + # + # * You cannot create partial writable dataviews. When you create + # writeable dataviews you must provide the entire database path. + # + # * You cannot perform updates on a writeable dataview. Hence, + # `autoUpdate` must be set as **False** if `readWrite` is **True** + # for a dataview. + # + # * You must also use a unique volume for creating a writeable + # dataview. So, if you choose a volume that is already in use by + # another dataview, the dataview creation fails. + # + # * Once you create a dataview as writeable, you cannot change it to + # read-only. So, you cannot update the `readWrite` parameter later. + # @return [Boolean] + # # @!attribute [rw] description # A description of the dataview. # @return [String] @@ -951,6 +968,7 @@ class CreateKxDataviewRequest < Struct.new( :changeset_id, :segment_configurations, :auto_update, + :read_write, :description, :tags, :client_token) @@ -972,12 +990,9 @@ class CreateKxDataviewRequest < Struct.new( # @return [String] # # @!attribute [rw] az_mode - # The number of availability zones you want to assign per cluster. - # This can be one of the following - # - # * `SINGLE` – Assigns one availability zone per cluster. - # - # * `MULTI` – Assigns all the availability zones per cluster. + # The number of availability zones you want to assign per volume. + # Currently, FinSpace only supports `SINGLE` for volumes. This places + # dataview in a single AZ. # @return [String] # # @!attribute [rw] availability_zone_id @@ -1006,6 +1021,11 @@ class CreateKxDataviewRequest < Struct.new( # ingest new changesets. The default value is false. # @return [Boolean] # + # @!attribute [rw] read_write + # Returns True if the dataview is created as writeable and False + # otherwise. + # @return [Boolean] + # # @!attribute [rw] created_timestamp # The timestamp at which the dataview was created in FinSpace. The # value is determined as epoch time in milliseconds. 
For example, the @@ -1042,6 +1062,7 @@ class CreateKxDataviewResponse < Struct.new( :segment_configurations, :description, :auto_update, + :read_write, :created_timestamp, :last_modified_timestamp, :status) @@ -1146,6 +1167,26 @@ class CreateKxEnvironmentResponse < Struct.new( # @!attribute [rw] host_type # The memory and CPU capabilities of the scaling group host on which # FinSpace Managed kdb clusters will be placed. + # + # You can add one of the following values: + # + # * `kx.sg.4xlarge` – The host type with a configuration of 108 GiB + # memory and 16 vCPUs. + # + # * `kx.sg.8xlarge` – The host type with a configuration of 216 GiB + # memory and 32 vCPUs. + # + # * `kx.sg.16xlarge` – The host type with a configuration of 432 GiB + # memory and 64 vCPUs. + # + # * `kx.sg.32xlarge` – The host type with a configuration of 864 GiB + # memory and 128 vCPUs. + # + # * `kx.sg1.16xlarge` – The host type with a configuration of 1949 GiB + # memory and 64 vCPUs. + # + # * `kx.sg1.24xlarge` – The host type with a configuration of 2948 GiB + # memory and 96 vCPUs. # @return [String] # # @!attribute [rw] availability_zone_id @@ -1339,8 +1380,9 @@ class CreateKxUserResponse < Struct.new( # @return [Types::KxNAS1Configuration] # # @!attribute [rw] az_mode - # The number of availability zones you want to assign per cluster. - # Currently, FinSpace only support `SINGLE` for volumes. + # The number of availability zones you want to assign per volume. + # Currently, FinSpace only supports `SINGLE` for volumes. This places + # dataview in a single AZ. # @return [String] # # @!attribute [rw] availability_zone_ids @@ -1418,8 +1460,9 @@ class CreateKxVolumeRequest < Struct.new( # @return [String] # # @!attribute [rw] az_mode - # The number of availability zones you want to assign per cluster. - # Currently, FinSpace only support `SINGLE` for volumes. + # The number of availability zones you want to assign per volume. + # Currently, FinSpace only supports `SINGLE` for volumes. 
This places + # dataview in a single AZ. # @return [String] # # @!attribute [rw] description @@ -2324,12 +2367,9 @@ class GetKxDataviewRequest < Struct.new( # @return [String] # # @!attribute [rw] az_mode - # The number of availability zones you want to assign per cluster. - # This can be one of the following - # - # * `SINGLE` – Assigns one availability zone per cluster. - # - # * `MULTI` – Assigns all the availability zones per cluster. + # The number of availability zones you want to assign per volume. + # Currently, FinSpace only supports `SINGLE` for volumes. This places + # dataview in a single AZ. # @return [String] # # @!attribute [rw] availability_zone_id @@ -2364,6 +2404,11 @@ class GetKxDataviewRequest < Struct.new( # changesets are ingested. The default value is false. # @return [Boolean] # + # @!attribute [rw] read_write + # Returns True if the dataview is created as writeable and False + # otherwise. + # @return [Boolean] + # # @!attribute [rw] environment_id # A unique identifier for the kdb environment, from where you want to # retrieve the dataview details. @@ -2409,6 +2454,7 @@ class GetKxDataviewResponse < Struct.new( :active_versions, :description, :auto_update, + :read_write, :environment_id, :created_timestamp, :last_modified_timestamp, @@ -2555,6 +2601,26 @@ class GetKxScalingGroupRequest < Struct.new( # @!attribute [rw] host_type # The memory and CPU capabilities of the scaling group host on which # FinSpace Managed kdb clusters will be placed. + # + # It can have one of the following values: + # + # * `kx.sg.4xlarge` – The host type with a configuration of 108 GiB + # memory and 16 vCPUs. + # + # * `kx.sg.8xlarge` – The host type with a configuration of 216 GiB + # memory and 32 vCPUs. + # + # * `kx.sg.16xlarge` – The host type with a configuration of 432 GiB + # memory and 64 vCPUs. + # + # * `kx.sg.32xlarge` – The host type with a configuration of 864 GiB + # memory and 128 vCPUs. 
+ # + # * `kx.sg1.16xlarge` – The host type with a configuration of 1949 GiB + # memory and 64 vCPUs. + # + # * `kx.sg1.24xlarge` – The host type with a configuration of 2948 GiB + # memory and 96 vCPUs. # @return [String] # # @!attribute [rw] clusters @@ -2749,8 +2815,9 @@ class GetKxVolumeRequest < Struct.new( # @return [String] # # @!attribute [rw] az_mode - # The number of availability zones you want to assign per cluster. - # Currently, FinSpace only support `SINGLE` for volumes. + # The number of availability zones you want to assign per volume. + # Currently, FinSpace only supports `SINGLE` for volumes. This places + # dataview in a single AZ. # @return [String] # # @!attribute [rw] availability_zone_ids @@ -3335,12 +3402,9 @@ class KxDataviewConfiguration < Struct.new( # @return [String] # # @!attribute [rw] az_mode - # The number of availability zones you want to assign per cluster. - # This can be one of the following - # - # * `SINGLE` – Assigns one availability zone per cluster. - # - # * `MULTI` – Assigns all the availability zones per cluster. + # The number of availability zones you want to assign per volume. + # Currently, FinSpace only supports `SINGLE` for volumes. This places + # dataview in a single AZ. # @return [String] # # @!attribute [rw] availability_zone_id @@ -3377,6 +3441,11 @@ class KxDataviewConfiguration < Struct.new( # ingest new changesets. The default value is false. # @return [Boolean] # + # @!attribute [rw] read_write + # Returns True if the dataview is created as writeable and False + # otherwise. + # @return [Boolean] + # # @!attribute [rw] created_timestamp # The timestamp at which the dataview list entry was created in # FinSpace. The value is determined as epoch time in milliseconds. 
For @@ -3409,6 +3478,7 @@ class KxDataviewListEntry < Struct.new( :status, :description, :auto_update, + :read_write, :created_timestamp, :last_modified_timestamp, :status_reason) @@ -3432,11 +3502,20 @@ class KxDataviewListEntry < Struct.new( # The name of the volume where you want to add data. # @return [String] # + # @!attribute [rw] on_demand + # Enables on-demand caching on the selected database path when a + # particular file or a column of the database is accessed. When on + # demand caching is **True**, dataviews perform minimal loading of + # files on the filesystem as needed. When it is set to **False**, + # everything is cached. The default value is **False**. + # @return [Boolean] + # # @see http://docs.aws.amazon.com/goto/WebAPI/finspace-2021-03-12/KxDataviewSegmentConfiguration AWS API Documentation # class KxDataviewSegmentConfiguration < Struct.new( :db_paths, - :volume_name) + :volume_name, + :on_demand) SENSITIVE = [] include Aws::Structure end @@ -3680,6 +3759,26 @@ class KxSavedownStorageConfiguration < Struct.new( # @!attribute [rw] host_type # The memory and CPU capabilities of the scaling group host on which # FinSpace Managed kdb clusters will be placed. + # + # You can add one of the following values: + # + # * `kx.sg.4xlarge` – The host type with a configuration of 108 GiB + # memory and 16 vCPUs. + # + # * `kx.sg.8xlarge` – The host type with a configuration of 216 GiB + # memory and 32 vCPUs. + # + # * `kx.sg.16xlarge` – The host type with a configuration of 432 GiB + # memory and 64 vCPUs. + # + # * `kx.sg.32xlarge` – The host type with a configuration of 864 GiB + # memory and 128 vCPUs. + # + # * `kx.sg1.16xlarge` – The host type with a configuration of 1949 GiB + # memory and 64 vCPUs. + # + # * `kx.sg1.24xlarge` – The host type with a configuration of 2948 GiB + # memory and 96 vCPUs. 
# @return [String] # # @!attribute [rw] clusters @@ -3848,8 +3947,9 @@ class KxUser < Struct.new( # @return [String] # # @!attribute [rw] az_mode - # The number of availability zones assigned to the volume. Currently, - # only `SINGLE` is supported. + # The number of availability zones you want to assign per volume. + # Currently, FinSpace only supports `SINGLE` for volumes. This places + # dataview in a single AZ. # @return [String] # # @!attribute [rw] availability_zone_ids @@ -4878,12 +4978,9 @@ class UpdateKxDataviewRequest < Struct.new( # @return [String] # # @!attribute [rw] az_mode - # The number of availability zones you want to assign per cluster. - # This can be one of the following - # - # * `SINGLE` – Assigns one availability zone per cluster. - # - # * `MULTI` – Assigns all the availability zones per cluster. + # The number of availability zones you want to assign per volume. + # Currently, FinSpace only supports `SINGLE` for volumes. This places + # dataview in a single AZ. # @return [String] # # @!attribute [rw] availability_zone_id @@ -4923,6 +5020,11 @@ class UpdateKxDataviewRequest < Struct.new( # changesets are ingested. The default value is false. # @return [Boolean] # + # @!attribute [rw] read_write + # Returns True if the dataview is created as writeable and False + # otherwise. + # @return [Boolean] + # # @!attribute [rw] description # A description of the dataview. # @return [String] @@ -4954,6 +5056,7 @@ class UpdateKxDataviewResponse < Struct.new( :active_versions, :status, :auto_update, + :read_write, :description, :created_timestamp, :last_modified_timestamp) @@ -5366,8 +5469,9 @@ class UpdateKxVolumeRequest < Struct.new( # @return [Time] # # @!attribute [rw] az_mode - # The number of availability zones you want to assign per cluster. - # Currently, FinSpace only support `SINGLE` for volumes. + # The number of availability zones you want to assign per volume. + # Currently, FinSpace only supports `SINGLE` for volumes. 
This places + # dataview in a single AZ. # @return [String] # # @!attribute [rw] availability_zone_ids diff --git a/gems/aws-sdk-finspace/sig/client.rbs b/gems/aws-sdk-finspace/sig/client.rbs index 538ae9f8c41..62b2a223082 100644 --- a/gems/aws-sdk-finspace/sig/client.rbs +++ b/gems/aws-sdk-finspace/sig/client.rbs @@ -183,7 +183,8 @@ module Aws segment_configurations: Array[ { db_paths: Array[::String], - volume_name: ::String + volume_name: ::String, + on_demand: bool? }, ]? }? @@ -276,6 +277,7 @@ module Aws def segment_configurations: () -> ::Array[Types::KxDataviewSegmentConfiguration] def description: () -> ::String def auto_update: () -> bool + def read_write: () -> bool def created_timestamp: () -> ::Time def last_modified_timestamp: () -> ::Time def status: () -> ("CREATING" | "ACTIVE" | "UPDATING" | "FAILED" | "DELETING") @@ -291,10 +293,12 @@ module Aws ?segment_configurations: Array[ { db_paths: Array[::String], - volume_name: ::String + volume_name: ::String, + on_demand: bool? }, ], ?auto_update: bool, + ?read_write: bool, ?description: ::String, ?tags: Hash[::String, ::String], client_token: ::String @@ -582,6 +586,7 @@ module Aws def active_versions: () -> ::Array[Types::KxDataviewActiveVersion] def description: () -> ::String def auto_update: () -> bool + def read_write: () -> bool def environment_id: () -> ::String def created_timestamp: () -> ::Time def last_modified_timestamp: () -> ::Time @@ -915,7 +920,8 @@ module Aws segment_configurations: Array[ { db_paths: Array[::String], - volume_name: ::String + volume_name: ::String, + on_demand: bool? }, ]? }? 
@@ -955,6 +961,7 @@ module Aws def active_versions: () -> ::Array[Types::KxDataviewActiveVersion] def status: () -> ("CREATING" | "ACTIVE" | "UPDATING" | "FAILED" | "DELETING") def auto_update: () -> bool + def read_write: () -> bool def description: () -> ::String def created_timestamp: () -> ::Time def last_modified_timestamp: () -> ::Time @@ -969,7 +976,8 @@ module Aws ?segment_configurations: Array[ { db_paths: Array[::String], - volume_name: ::String + volume_name: ::String, + on_demand: bool? }, ], client_token: ::String diff --git a/gems/aws-sdk-finspace/sig/types.rbs b/gems/aws-sdk-finspace/sig/types.rbs index 0ebf2c46335..3f6c67aae5b 100644 --- a/gems/aws-sdk-finspace/sig/types.rbs +++ b/gems/aws-sdk-finspace/sig/types.rbs @@ -169,6 +169,7 @@ module Aws::Finspace attr_accessor changeset_id: ::String attr_accessor segment_configurations: ::Array[Types::KxDataviewSegmentConfiguration] attr_accessor auto_update: bool + attr_accessor read_write: bool attr_accessor description: ::String attr_accessor tags: ::Hash[::String, ::String] attr_accessor client_token: ::String @@ -185,6 +186,7 @@ module Aws::Finspace attr_accessor segment_configurations: ::Array[Types::KxDataviewSegmentConfiguration] attr_accessor description: ::String attr_accessor auto_update: bool + attr_accessor read_write: bool attr_accessor created_timestamp: ::Time attr_accessor last_modified_timestamp: ::Time attr_accessor status: ("CREATING" | "ACTIVE" | "UPDATING" | "FAILED" | "DELETING") @@ -505,6 +507,7 @@ module Aws::Finspace attr_accessor active_versions: ::Array[Types::KxDataviewActiveVersion] attr_accessor description: ::String attr_accessor auto_update: bool + attr_accessor read_write: bool attr_accessor environment_id: ::String attr_accessor created_timestamp: ::Time attr_accessor last_modified_timestamp: ::Time @@ -713,6 +716,7 @@ module Aws::Finspace attr_accessor status: ("CREATING" | "ACTIVE" | "UPDATING" | "FAILED" | "DELETING") attr_accessor description: ::String attr_accessor 
auto_update: bool + attr_accessor read_write: bool attr_accessor created_timestamp: ::Time attr_accessor last_modified_timestamp: ::Time attr_accessor status_reason: ::String @@ -722,6 +726,7 @@ module Aws::Finspace class KxDataviewSegmentConfiguration attr_accessor db_paths: ::Array[::String] attr_accessor volume_name: ::String + attr_accessor on_demand: bool SENSITIVE: [] end @@ -1113,6 +1118,7 @@ module Aws::Finspace attr_accessor active_versions: ::Array[Types::KxDataviewActiveVersion] attr_accessor status: ("CREATING" | "ACTIVE" | "UPDATING" | "FAILED" | "DELETING") attr_accessor auto_update: bool + attr_accessor read_write: bool attr_accessor description: ::String attr_accessor created_timestamp: ::Time attr_accessor last_modified_timestamp: ::Time diff --git a/gems/aws-sdk-managedblockchainquery/CHANGELOG.md b/gems/aws-sdk-managedblockchainquery/CHANGELOG.md index 5c5136c2fca..4b89448a6d7 100644 --- a/gems/aws-sdk-managedblockchainquery/CHANGELOG.md +++ b/gems/aws-sdk-managedblockchainquery/CHANGELOG.md @@ -1,6 +1,11 @@ Unreleased Changes ------------------ +1.9.0 (2024-03-19) +------------------ + +* Feature - Introduces a new API for Amazon Managed Blockchain Query: ListFilteredTransactionEvents. 
+ 1.8.0 (2024-02-01) ------------------ diff --git a/gems/aws-sdk-managedblockchainquery/VERSION b/gems/aws-sdk-managedblockchainquery/VERSION index 27f9cd322bb..f8e233b2733 100644 --- a/gems/aws-sdk-managedblockchainquery/VERSION +++ b/gems/aws-sdk-managedblockchainquery/VERSION @@ -1 +1 @@ -1.8.0 +1.9.0 diff --git a/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery.rb b/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery.rb index b03bcbf4507..809a124f88d 100644 --- a/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery.rb +++ b/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery.rb @@ -53,6 +53,6 @@ # @!group service module Aws::ManagedBlockchainQuery - GEM_VERSION = '1.8.0' + GEM_VERSION = '1.9.0' end diff --git a/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/client.rb b/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/client.rb index b3bfeb8bf56..5189b66a0f6 100644 --- a/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/client.rb +++ b/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/client.rb @@ -582,8 +582,8 @@ def get_token_balance(params = {}, options = {}) # [1]: https://docs.aws.amazon.com/managed-blockchain/latest/ambq-dg/key-concepts.html#finality # # @option params [required, String] :transaction_hash - # The hash of the transaction. It is generated whenever a transaction is - # verified and added to the blockchain. + # The hash of a transaction. It is generated when a transaction is + # created. # # @option params [required, String] :network # The blockchain network where the transaction occurred. @@ -646,7 +646,7 @@ def get_transaction(params = {}, options = {}) # @option params [Integer] :max_results # The maximum number of contracts to list. 
# - # Default:`100` + # Default: `100` # # Even if additional results can be retrieved, the request can return # less results than `maxResults` or an empty array of results. @@ -694,6 +694,124 @@ def list_asset_contracts(params = {}, options = {}) req.send_request(options) end + # Lists all the transaction events for an address on the blockchain. + # + # This operation is only supported on the Bitcoin networks. + # + # + # + # @option params [required, String] :network + # The blockchain network where the transaction occurred. + # + # Valid Values: `BITCOIN_MAINNET` \| `BITCOIN_TESTNET` + # + # @option params [required, Types::AddressIdentifierFilter] :address_identifier_filter + # This is the unique public address on the blockchain for which the + # transaction events are being requested. + # + # @option params [Types::TimeFilter] :time_filter + # This container specifies the time frame for the transaction events + # returned in the response. + # + # @option params [Types::VoutFilter] :vout_filter + # This container specifies filtering attributes related to BITCOIN\_VOUT + # event types + # + # @option params [Types::ConfirmationStatusFilter] :confirmation_status_filter + # The container for the `ConfirmationStatusFilter` that filters for the + # [ *finality* ][1] of the results. + # + # + # + # [1]: https://docs.aws.amazon.com/managed-blockchain/latest/ambq-dg/key-concepts.html#finality + # + # @option params [Types::ListFilteredTransactionEventsSort] :sort + # The order by which the results will be sorted. + # + # @option params [String] :next_token + # The pagination token that indicates the next set of results to + # retrieve. + # + # @option params [Integer] :max_results + # The maximum number of transaction events to list. + # + # Default: `100` + # + # Even if additional results can be retrieved, the request can return + # less results than `maxResults` or an empty array of results. 
+ # + # To retrieve the next set of results, make another request with the + # returned `nextToken` value. The value of `nextToken` is `null` when + # there are no more results to return + # + # + # + # @return [Types::ListFilteredTransactionEventsOutput] Returns a {Seahorse::Client::Response response} object which responds to the following methods: + # + # * {Types::ListFilteredTransactionEventsOutput#events #events} => Array<Types::TransactionEvent> + # * {Types::ListFilteredTransactionEventsOutput#next_token #next_token} => String + # + # The returned {Seahorse::Client::Response response} is a pageable response and is Enumerable. For details on usage see {Aws::PageableResponse PageableResponse}. + # + # @example Request syntax with placeholder values + # + # resp = client.list_filtered_transaction_events({ + # network: "String", # required + # address_identifier_filter: { # required + # transaction_event_to_address: ["ChainAddress"], # required + # }, + # time_filter: { + # from: { + # time: Time.now, + # }, + # to: { + # time: Time.now, + # }, + # }, + # vout_filter: { + # vout_spent: false, # required + # }, + # confirmation_status_filter: { + # include: ["FINAL"], # required, accepts FINAL, NONFINAL + # }, + # sort: { + # sort_by: "blockchainInstant", # accepts blockchainInstant + # sort_order: "ASCENDING", # accepts ASCENDING, DESCENDING + # }, + # next_token: "NextToken", + # max_results: 1, + # }) + # + # @example Response structure + # + # resp.events #=> Array + # resp.events[0].network #=> String, one of "ETHEREUM_MAINNET", "ETHEREUM_SEPOLIA_TESTNET", "BITCOIN_MAINNET", "BITCOIN_TESTNET" + # resp.events[0].transaction_hash #=> String + # resp.events[0].event_type #=> String, one of "ERC20_TRANSFER", "ERC20_MINT", "ERC20_BURN", "ERC20_DEPOSIT", "ERC20_WITHDRAWAL", "ERC721_TRANSFER", "ERC1155_TRANSFER", "BITCOIN_VIN", "BITCOIN_VOUT", "INTERNAL_ETH_TRANSFER", "ETH_TRANSFER" + # resp.events[0].from #=> String + # resp.events[0].to #=> String + # 
resp.events[0].value #=> String + # resp.events[0].contract_address #=> String + # resp.events[0].token_id #=> String + # resp.events[0].transaction_id #=> String + # resp.events[0].vout_index #=> Integer + # resp.events[0].vout_spent #=> Boolean + # resp.events[0].spent_vout_transaction_id #=> String + # resp.events[0].spent_vout_transaction_hash #=> String + # resp.events[0].spent_vout_index #=> Integer + # resp.events[0].blockchain_instant.time #=> Time + # resp.events[0].confirmation_status #=> String, one of "FINAL", "NONFINAL" + # resp.next_token #=> String + # + # @see http://docs.aws.amazon.com/goto/WebAPI/managedblockchain-query-2023-05-04/ListFilteredTransactionEvents AWS API Documentation + # + # @overload list_filtered_transaction_events(params = {}) + # @param [Hash] params ({}) + def list_filtered_transaction_events(params = {}, options = {}) + req = build_request(:list_filtered_transaction_events, params) + req.send_request(options) + end + # This action returns the following for a given blockchain network: # # * Lists all token balances owned by an address (either a contract @@ -730,7 +848,7 @@ def list_asset_contracts(params = {}, options = {}) # @option params [Integer] :max_results # The maximum number of token balances to return. # - # Default:`100` + # Default: `100` # # Even if additional results can be retrieved, the request can return # less results than `maxResults` or an empty array of results. @@ -784,8 +902,7 @@ def list_token_balances(params = {}, options = {}) req.send_request(options) end - # An array of `TransactionEvent` objects. Each object contains details - # about the transaction event. 
+ # Lists all the transaction events for a transaction # # This action will return transaction details for all transactions that # are *confirmed* on the blockchain, even if they have not reached @@ -797,9 +914,17 @@ def list_token_balances(params = {}, options = {}) # # [1]: https://docs.aws.amazon.com/managed-blockchain/latest/ambq-dg/key-concepts.html#finality # - # @option params [required, String] :transaction_hash - # The hash of the transaction. It is generated whenever a transaction is - # verified and added to the blockchain. + # @option params [String] :transaction_hash + # The hash of a transaction. It is generated when a transaction is + # created. + # + # @option params [String] :transaction_id + # The identifier of a Bitcoin transaction. It is generated when a + # transaction is created. + # + # `transactionId` is only supported on the Bitcoin networks. + # + # # # @option params [required, String] :network # The blockchain network where the transaction events occurred. @@ -811,7 +936,7 @@ def list_token_balances(params = {}, options = {}) # @option params [Integer] :max_results # The maximum number of transaction events to list. # - # Default:`100` + # Default: `100` # # Even if additional results can be retrieved, the request can return # less results than `maxResults` or an empty array of results. 
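The `maxResults`/`nextToken` contract documented above can be sketched in plain Ruby. This is an illustrative sketch, not SDK code: `fetch_page` is a hypothetical stand-in for a paginated client call such as `list_transaction_events`, backed here by canned pages, and the loop mirrors the one `Aws::Pager` drives for pageable responses.

```ruby
# Canned pages keyed by token; a nil nextToken marks the last page,
# matching the documented contract above.
PAGES = {
  nil  => { events: [1, 2], next_token: "t1" },
  "t1" => { events: [3, 4], next_token: "t2" },
  "t2" => { events: [5],    next_token: nil },
}.freeze

# Hypothetical stand-in for a paginated client call.
def fetch_page(next_token: nil)
  PAGES.fetch(next_token)
end

# Follow nextToken until it comes back nil, collecting every event.
def all_events
  events = []
  token = nil
  loop do
    page = fetch_page(next_token: token)
    events.concat(page[:events])
    token = page[:next_token]
    break if token.nil?
  end
  events
end

puts all_events.inspect # prints [1, 2, 3, 4, 5]
```

With the real client, the same traversal is available for free via `resp.each_page` or by enumerating the pageable response directly.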
@@ -832,7 +957,8 @@ def list_token_balances(params = {}, options = {}) # @example Request syntax with placeholder values # # resp = client.list_transaction_events({ - # transaction_hash: "QueryTransactionHash", # required + # transaction_hash: "QueryTransactionHash", + # transaction_id: "QueryTransactionId", # network: "ETHEREUM_MAINNET", # required, accepts ETHEREUM_MAINNET, ETHEREUM_SEPOLIA_TESTNET, BITCOIN_MAINNET, BITCOIN_TESTNET # next_token: "NextToken", # max_results: 1, @@ -851,6 +977,12 @@ def list_token_balances(params = {}, options = {}) # resp.events[0].token_id #=> String # resp.events[0].transaction_id #=> String # resp.events[0].vout_index #=> Integer + # resp.events[0].vout_spent #=> Boolean + # resp.events[0].spent_vout_transaction_id #=> String + # resp.events[0].spent_vout_transaction_hash #=> String + # resp.events[0].spent_vout_index #=> Integer + # resp.events[0].blockchain_instant.time #=> Time + # resp.events[0].confirmation_status #=> String, one of "FINAL", "NONFINAL" # resp.next_token #=> String # # @see http://docs.aws.amazon.com/goto/WebAPI/managedblockchain-query-2023-05-04/ListTransactionEvents AWS API Documentation @@ -862,8 +994,7 @@ def list_transaction_events(params = {}, options = {}) req.send_request(options) end - # Lists all of the transactions on a given wallet address or to a - # specific contract. + # Lists all the transactions on a given wallet address or to a specific + # contract. # # @option params [required, String] :address # The address (either a contract or wallet), whose transactions are @@ -879,8 +1010,7 @@ def list_transaction_events(params = {}, options = {}) # The container for time. # # @option params [Types::ListTransactionsSort] :sort - # The order by which the results will be sorted. If `ASCENNDING` is - # selected, the results will be ordered by `fromTime`. + # The order by which the results will be sorted. 
# # @option params [String] :next_token # The pagination token that indicates the next set of results to @@ -889,7 +1019,7 @@ def list_transaction_events(params = {}, options = {}) # @option params [Integer] :max_results # The maximum number of transactions to list. # - # Default:`100` + # Default: `100` # # Even if additional results can be retrieved, the request can return # less results than `maxResults` or an empty array of results. @@ -903,7 +1033,7 @@ def list_transaction_events(params = {}, options = {}) # @option params [Types::ConfirmationStatusFilter] :confirmation_status_filter # This filter is used to include transactions in the response that # haven't reached [ *finality* ][1]. Transactions that have reached - # finiality are always part of the response. + # finality are always part of the response. # # # @@ -969,7 +1099,7 @@ def build_request(operation_name, params = {}) params: params, config: config) context[:gem_name] = 'aws-sdk-managedblockchainquery' - context[:gem_version] = '1.8.0' + context[:gem_version] = '1.9.0' Seahorse::Client::Request.new(handlers, context) end diff --git a/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/client_api.rb b/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/client_api.rb index cb0d6c9e71d..0d3251a887d 100644 --- a/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/client_api.rb +++ b/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/client_api.rb @@ -14,6 +14,8 @@ module ClientApi include Seahorse::Model AccessDeniedException = Shapes::StructureShape.new(name: 'AccessDeniedException') + AddressIdentifierFilter = Shapes::StructureShape.new(name: 'AddressIdentifierFilter') + AddressIdentifierFilterTransactionEventToAddressList = Shapes::ListShape.new(name: 'AddressIdentifierFilterTransactionEventToAddressList') AssetContract = Shapes::StructureShape.new(name: 'AssetContract') AssetContractList = Shapes::ListShape.new(name: 
'AssetContractList') BatchGetTokenBalanceErrorItem = Shapes::StructureShape.new(name: 'BatchGetTokenBalanceErrorItem') @@ -25,6 +27,7 @@ module ClientApi BatchGetTokenBalanceOutputList = Shapes::ListShape.new(name: 'BatchGetTokenBalanceOutputList') BlockHash = Shapes::StringShape.new(name: 'BlockHash') BlockchainInstant = Shapes::StructureShape.new(name: 'BlockchainInstant') + Boolean = Shapes::BooleanShape.new(name: 'Boolean') ChainAddress = Shapes::StringShape.new(name: 'ChainAddress') ConfirmationStatus = Shapes::StringShape.new(name: 'ConfirmationStatus') ConfirmationStatusFilter = Shapes::StructureShape.new(name: 'ConfirmationStatusFilter') @@ -47,6 +50,11 @@ module ClientApi ListAssetContractsInput = Shapes::StructureShape.new(name: 'ListAssetContractsInput') ListAssetContractsInputMaxResultsInteger = Shapes::IntegerShape.new(name: 'ListAssetContractsInputMaxResultsInteger') ListAssetContractsOutput = Shapes::StructureShape.new(name: 'ListAssetContractsOutput') + ListFilteredTransactionEventsInput = Shapes::StructureShape.new(name: 'ListFilteredTransactionEventsInput') + ListFilteredTransactionEventsInputMaxResultsInteger = Shapes::IntegerShape.new(name: 'ListFilteredTransactionEventsInputMaxResultsInteger') + ListFilteredTransactionEventsOutput = Shapes::StructureShape.new(name: 'ListFilteredTransactionEventsOutput') + ListFilteredTransactionEventsSort = Shapes::StructureShape.new(name: 'ListFilteredTransactionEventsSort') + ListFilteredTransactionEventsSortBy = Shapes::StringShape.new(name: 'ListFilteredTransactionEventsSortBy') ListTokenBalancesInput = Shapes::StructureShape.new(name: 'ListTokenBalancesInput') ListTokenBalancesInputMaxResultsInteger = Shapes::IntegerShape.new(name: 'ListTokenBalancesInputMaxResultsInteger') ListTokenBalancesOutput = Shapes::StructureShape.new(name: 'ListTokenBalancesOutput') @@ -67,6 +75,7 @@ module ClientApi QueryTokenStandard = Shapes::StringShape.new(name: 'QueryTokenStandard') QueryTransactionEventType = 
Shapes::StringShape.new(name: 'QueryTransactionEventType') QueryTransactionHash = Shapes::StringShape.new(name: 'QueryTransactionHash') + QueryTransactionId = Shapes::StringShape.new(name: 'QueryTransactionId') QuotaCode = Shapes::StringShape.new(name: 'QuotaCode') ResourceId = Shapes::StringShape.new(name: 'ResourceId') ResourceNotFoundException = Shapes::StructureShape.new(name: 'ResourceNotFoundException') @@ -76,6 +85,7 @@ module ClientApi SortOrder = Shapes::StringShape.new(name: 'SortOrder') String = Shapes::StringShape.new(name: 'String') ThrottlingException = Shapes::StructureShape.new(name: 'ThrottlingException') + TimeFilter = Shapes::StructureShape.new(name: 'TimeFilter') Timestamp = Shapes::TimestampShape.new(name: 'Timestamp') TokenBalance = Shapes::StructureShape.new(name: 'TokenBalance') TokenBalanceList = Shapes::ListShape.new(name: 'TokenBalanceList') @@ -90,10 +100,16 @@ module ClientApi ValidationExceptionField = Shapes::StructureShape.new(name: 'ValidationExceptionField') ValidationExceptionFieldList = Shapes::ListShape.new(name: 'ValidationExceptionFieldList') ValidationExceptionReason = Shapes::StringShape.new(name: 'ValidationExceptionReason') + VoutFilter = Shapes::StructureShape.new(name: 'VoutFilter') AccessDeniedException.add_member(:message, Shapes::ShapeRef.new(shape: ExceptionMessage, required: true, location_name: "message")) AccessDeniedException.struct_class = Types::AccessDeniedException + AddressIdentifierFilter.add_member(:transaction_event_to_address, Shapes::ShapeRef.new(shape: AddressIdentifierFilterTransactionEventToAddressList, required: true, location_name: "transactionEventToAddress")) + AddressIdentifierFilter.struct_class = Types::AddressIdentifierFilter + + AddressIdentifierFilterTransactionEventToAddressList.member = Shapes::ShapeRef.new(shape: ChainAddress) + AssetContract.add_member(:contract_identifier, Shapes::ShapeRef.new(shape: ContractIdentifier, required: true, location_name: "contractIdentifier")) 
AssetContract.add_member(:token_standard, Shapes::ShapeRef.new(shape: QueryTokenStandard, required: true, location_name: "tokenStandard")) AssetContract.add_member(:deployer_address, Shapes::ShapeRef.new(shape: ChainAddress, required: true, location_name: "deployerAddress")) @@ -197,6 +213,24 @@ module ClientApi ListAssetContractsOutput.add_member(:next_token, Shapes::ShapeRef.new(shape: NextToken, location_name: "nextToken")) ListAssetContractsOutput.struct_class = Types::ListAssetContractsOutput + ListFilteredTransactionEventsInput.add_member(:network, Shapes::ShapeRef.new(shape: String, required: true, location_name: "network")) + ListFilteredTransactionEventsInput.add_member(:address_identifier_filter, Shapes::ShapeRef.new(shape: AddressIdentifierFilter, required: true, location_name: "addressIdentifierFilter")) + ListFilteredTransactionEventsInput.add_member(:time_filter, Shapes::ShapeRef.new(shape: TimeFilter, location_name: "timeFilter")) + ListFilteredTransactionEventsInput.add_member(:vout_filter, Shapes::ShapeRef.new(shape: VoutFilter, location_name: "voutFilter")) + ListFilteredTransactionEventsInput.add_member(:confirmation_status_filter, Shapes::ShapeRef.new(shape: ConfirmationStatusFilter, location_name: "confirmationStatusFilter")) + ListFilteredTransactionEventsInput.add_member(:sort, Shapes::ShapeRef.new(shape: ListFilteredTransactionEventsSort, location_name: "sort")) + ListFilteredTransactionEventsInput.add_member(:next_token, Shapes::ShapeRef.new(shape: NextToken, location_name: "nextToken")) + ListFilteredTransactionEventsInput.add_member(:max_results, Shapes::ShapeRef.new(shape: ListFilteredTransactionEventsInputMaxResultsInteger, location_name: "maxResults")) + ListFilteredTransactionEventsInput.struct_class = Types::ListFilteredTransactionEventsInput + + ListFilteredTransactionEventsOutput.add_member(:events, Shapes::ShapeRef.new(shape: TransactionEventList, required: true, location_name: "events")) + 
ListFilteredTransactionEventsOutput.add_member(:next_token, Shapes::ShapeRef.new(shape: NextToken, location_name: "nextToken")) + ListFilteredTransactionEventsOutput.struct_class = Types::ListFilteredTransactionEventsOutput + + ListFilteredTransactionEventsSort.add_member(:sort_by, Shapes::ShapeRef.new(shape: ListFilteredTransactionEventsSortBy, location_name: "sortBy")) + ListFilteredTransactionEventsSort.add_member(:sort_order, Shapes::ShapeRef.new(shape: SortOrder, location_name: "sortOrder")) + ListFilteredTransactionEventsSort.struct_class = Types::ListFilteredTransactionEventsSort + ListTokenBalancesInput.add_member(:owner_filter, Shapes::ShapeRef.new(shape: OwnerFilter, location_name: "ownerFilter")) ListTokenBalancesInput.add_member(:token_filter, Shapes::ShapeRef.new(shape: TokenFilter, required: true, location_name: "tokenFilter")) ListTokenBalancesInput.add_member(:next_token, Shapes::ShapeRef.new(shape: NextToken, location_name: "nextToken")) @@ -207,7 +241,8 @@ module ClientApi ListTokenBalancesOutput.add_member(:next_token, Shapes::ShapeRef.new(shape: NextToken, location_name: "nextToken")) ListTokenBalancesOutput.struct_class = Types::ListTokenBalancesOutput - ListTransactionEventsInput.add_member(:transaction_hash, Shapes::ShapeRef.new(shape: QueryTransactionHash, required: true, location_name: "transactionHash")) + ListTransactionEventsInput.add_member(:transaction_hash, Shapes::ShapeRef.new(shape: QueryTransactionHash, location_name: "transactionHash")) + ListTransactionEventsInput.add_member(:transaction_id, Shapes::ShapeRef.new(shape: QueryTransactionId, location_name: "transactionId")) ListTransactionEventsInput.add_member(:network, Shapes::ShapeRef.new(shape: QueryNetwork, required: true, location_name: "network")) ListTransactionEventsInput.add_member(:next_token, Shapes::ShapeRef.new(shape: NextToken, location_name: "nextToken")) ListTransactionEventsInput.add_member(:max_results, Shapes::ShapeRef.new(shape: 
ListTransactionEventsInputMaxResultsInteger, location_name: "maxResults")) @@ -259,6 +294,10 @@ module ClientApi ThrottlingException.add_member(:retry_after_seconds, Shapes::ShapeRef.new(shape: Integer, location: "header", location_name: "Retry-After")) ThrottlingException.struct_class = Types::ThrottlingException + TimeFilter.add_member(:from, Shapes::ShapeRef.new(shape: BlockchainInstant, location_name: "from")) + TimeFilter.add_member(:to, Shapes::ShapeRef.new(shape: BlockchainInstant, location_name: "to")) + TimeFilter.struct_class = Types::TimeFilter + TokenBalance.add_member(:owner_identifier, Shapes::ShapeRef.new(shape: OwnerIdentifier, location_name: "ownerIdentifier")) TokenBalance.add_member(:token_identifier, Shapes::ShapeRef.new(shape: TokenIdentifier, location_name: "tokenIdentifier")) TokenBalance.add_member(:balance, Shapes::ShapeRef.new(shape: String, required: true, location_name: "balance")) @@ -310,6 +349,12 @@ module ClientApi TransactionEvent.add_member(:token_id, Shapes::ShapeRef.new(shape: QueryTokenId, location_name: "tokenId")) TransactionEvent.add_member(:transaction_id, Shapes::ShapeRef.new(shape: String, location_name: "transactionId")) TransactionEvent.add_member(:vout_index, Shapes::ShapeRef.new(shape: Integer, location_name: "voutIndex")) + TransactionEvent.add_member(:vout_spent, Shapes::ShapeRef.new(shape: Boolean, location_name: "voutSpent")) + TransactionEvent.add_member(:spent_vout_transaction_id, Shapes::ShapeRef.new(shape: String, location_name: "spentVoutTransactionId")) + TransactionEvent.add_member(:spent_vout_transaction_hash, Shapes::ShapeRef.new(shape: String, location_name: "spentVoutTransactionHash")) + TransactionEvent.add_member(:spent_vout_index, Shapes::ShapeRef.new(shape: Integer, location_name: "spentVoutIndex")) + TransactionEvent.add_member(:blockchain_instant, Shapes::ShapeRef.new(shape: BlockchainInstant, location_name: "blockchainInstant")) + TransactionEvent.add_member(:confirmation_status, 
Shapes::ShapeRef.new(shape: ConfirmationStatus, location_name: "confirmationStatus")) TransactionEvent.struct_class = Types::TransactionEvent TransactionEventList.member = Shapes::ShapeRef.new(shape: TransactionEvent) @@ -333,6 +378,9 @@ module ClientApi ValidationExceptionFieldList.member = Shapes::ShapeRef.new(shape: ValidationExceptionField) + VoutFilter.add_member(:vout_spent, Shapes::ShapeRef.new(shape: Boolean, required: true, location_name: "voutSpent")) + VoutFilter.struct_class = Types::VoutFilter + # @api private API = Seahorse::Model::Api.new.tap do |api| @@ -427,6 +475,25 @@ module ClientApi ) end) + api.add_operation(:list_filtered_transaction_events, Seahorse::Model::Operation.new.tap do |o| + o.name = "ListFilteredTransactionEvents" + o.http_method = "POST" + o.http_request_uri = "/list-filtered-transaction-events" + o.input = Shapes::ShapeRef.new(shape: ListFilteredTransactionEventsInput) + o.output = Shapes::ShapeRef.new(shape: ListFilteredTransactionEventsOutput) + o.errors << Shapes::ShapeRef.new(shape: ThrottlingException) + o.errors << Shapes::ShapeRef.new(shape: ValidationException) + o.errors << Shapes::ShapeRef.new(shape: AccessDeniedException) + o.errors << Shapes::ShapeRef.new(shape: InternalServerException) + o.errors << Shapes::ShapeRef.new(shape: ServiceQuotaExceededException) + o[:pager] = Aws::Pager.new( + limit_key: "max_results", + tokens: { + "next_token" => "next_token" + } + ) + end) + api.add_operation(:list_token_balances, Seahorse::Model::Operation.new.tap do |o| o.name = "ListTokenBalances" o.http_method = "POST" diff --git a/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/endpoints.rb b/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/endpoints.rb index 76bc1038df8..11e9ba40972 100644 --- a/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/endpoints.rb +++ b/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/endpoints.rb @@ -82,6 +82,20 @@ 
def self.build(context) end end + class ListFilteredTransactionEvents + def self.build(context) + unless context.config.regional_endpoint + endpoint = context.config.endpoint.to_s + end + Aws::ManagedBlockchainQuery::EndpointParameters.new( + region: context.config.region, + use_dual_stack: context.config.use_dualstack_endpoint, + use_fips: context.config.use_fips_endpoint, + endpoint: endpoint, + ) + end + end + class ListTokenBalances def self.build(context) unless context.config.regional_endpoint diff --git a/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/plugins/endpoints.rb b/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/plugins/endpoints.rb index d13895ccb34..e7cae872d45 100644 --- a/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/plugins/endpoints.rb +++ b/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/plugins/endpoints.rb @@ -68,6 +68,8 @@ def parameters_for_operation(context) Aws::ManagedBlockchainQuery::Endpoints::GetTransaction.build(context) when :list_asset_contracts Aws::ManagedBlockchainQuery::Endpoints::ListAssetContracts.build(context) + when :list_filtered_transaction_events + Aws::ManagedBlockchainQuery::Endpoints::ListFilteredTransactionEvents.build(context) when :list_token_balances Aws::ManagedBlockchainQuery::Endpoints::ListTokenBalances.build(context) when :list_transaction_events diff --git a/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/types.rb b/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/types.rb index 5f55bfd3254..207fe5779de 100644 --- a/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/types.rb +++ b/gems/aws-sdk-managedblockchainquery/lib/aws-sdk-managedblockchainquery/types.rb @@ -24,6 +24,20 @@ class AccessDeniedException < Struct.new( include Aws::Structure end + # This is the container for the unique public address on the blockchain. 
+ # + # @!attribute [rw] transaction_event_to_address + # The container for the recipient address of the transaction. + # @return [Array] + # + # @see http://docs.aws.amazon.com/goto/WebAPI/managedblockchain-query-2023-05-04/AddressIdentifierFilter AWS API Documentation + # + class AddressIdentifierFilter < Struct.new( + :transaction_event_to_address) + SENSITIVE = [] + include Aws::Structure + end + # This container contains information about an contract. # # @!attribute [rw] contract_identifier @@ -62,7 +76,7 @@ class AssetContract < Struct.new( # @return [Types::TokenIdentifier] # # @!attribute [rw] owner_identifier - # The container for the identifier of the owner. + # The container for the owner identifier. # @return [Types::OwnerIdentifier] # # @!attribute [rw] at_blockchain_instant @@ -120,7 +134,7 @@ class BatchGetTokenBalanceInput < Struct.new( # @return [Types::TokenIdentifier] # # @!attribute [rw] owner_identifier - # The container for the identifier of the owner. + # The container for the owner identifier. # @return [Types::OwnerIdentifier] # # @!attribute [rw] at_blockchain_instant @@ -159,7 +173,7 @@ class BatchGetTokenBalanceOutput < Struct.new( # The container for the properties of a token balance output. # # @!attribute [rw] owner_identifier - # The container for the identifier of the owner. + # The container for the owner identifier. # @return [Types::OwnerIdentifier] # # @!attribute [rw] token_identifier @@ -377,7 +391,7 @@ class GetTokenBalanceInput < Struct.new( end # @!attribute [rw] owner_identifier - # The container for the identifier of the owner. + # The container for the owner identifier. # @return [Types::OwnerIdentifier] # # @!attribute [rw] token_identifier @@ -415,8 +429,8 @@ class GetTokenBalanceOutput < Struct.new( end # @!attribute [rw] transaction_hash - # The hash of the transaction. It is generated whenever a transaction - # is verified and added to the blockchain. + # The hash of a transaction. 
It is generated when a transaction is + # created. # @return [String] # # @!attribute [rw] network @@ -452,7 +466,7 @@ class GetTransactionOutput < Struct.new( # @return [String] # # @!attribute [rw] retry_after_seconds - # The container of the `retryAfterSeconds` value. + # Specifies the `retryAfterSeconds` value. # @return [Integer] # # @see http://docs.aws.amazon.com/goto/WebAPI/managedblockchain-query-2023-05-04/InternalServerException AWS API Documentation @@ -476,7 +490,7 @@ class InternalServerException < Struct.new( # @!attribute [rw] max_results # The maximum number of contracts to list. # - # Default:`100` + # Default: `100` # # Even if additional results can be retrieved, the request can return # less results than `maxResults` or an empty array of results. @@ -517,6 +531,119 @@ class ListAssetContractsOutput < Struct.new( include Aws::Structure end + # @!attribute [rw] network + # The blockchain network where the transaction occurred. + # + # Valid Values: `BITCOIN_MAINNET` \| `BITCOIN_TESTNET` + # @return [String] + # + # @!attribute [rw] address_identifier_filter + # This is the unique public address on the blockchain for which the + # transaction events are being requested. + # @return [Types::AddressIdentifierFilter] + # + # @!attribute [rw] time_filter + # This container specifies the time frame for the transaction events + # returned in the response. + # @return [Types::TimeFilter] + # + # @!attribute [rw] vout_filter + # This container specifies filtering attributes related to + # BITCOIN\_VOUT event types + # @return [Types::VoutFilter] + # + # @!attribute [rw] confirmation_status_filter + # The container for the `ConfirmationStatusFilter` that filters for + # the [ *finality* ][1] of the results. + # + # + # + # [1]: https://docs.aws.amazon.com/managed-blockchain/latest/ambq-dg/key-concepts.html#finality + # @return [Types::ConfirmationStatusFilter] + # + # @!attribute [rw] sort + # The order by which the results will be sorted. 
+ # @return [Types::ListFilteredTransactionEventsSort] + # + # @!attribute [rw] next_token + # The pagination token that indicates the next set of results to + # retrieve. + # @return [String] + # + # @!attribute [rw] max_results + # The maximum number of transaction events to list. + # + # Default: `100` + # + # Even if additional results can be retrieved, the request can return + # less results than `maxResults` or an empty array of results. + # + # To retrieve the next set of results, make another request with the + # returned `nextToken` value. The value of `nextToken` is `null` when + # there are no more results to return + # + # + # @return [Integer] + # + # @see http://docs.aws.amazon.com/goto/WebAPI/managedblockchain-query-2023-05-04/ListFilteredTransactionEventsInput AWS API Documentation + # + class ListFilteredTransactionEventsInput < Struct.new( + :network, + :address_identifier_filter, + :time_filter, + :vout_filter, + :confirmation_status_filter, + :sort, + :next_token, + :max_results) + SENSITIVE = [] + include Aws::Structure + end + + # @!attribute [rw] events + # The transaction events returned by the request. + # @return [Array] + # + # @!attribute [rw] next_token + # The pagination token that indicates the next set of results to + # retrieve. + # @return [String] + # + # @see http://docs.aws.amazon.com/goto/WebAPI/managedblockchain-query-2023-05-04/ListFilteredTransactionEventsOutput AWS API Documentation + # + class ListFilteredTransactionEventsOutput < Struct.new( + :events, + :next_token) + SENSITIVE = [] + include Aws::Structure + end + + # The container for sorting the `ListFilteredTransactionEvents` results. + # + # This operation is only supported on the Bitcoin blockchain networks. + # + # + # + # @!attribute [rw] sort_by + # The property by which the results will be sorted. + # @return [String] + # + # @!attribute [rw] sort_order + # The container for the *sort order* for + # `ListFilteredTransactionEvents`. 
The `SortOrder` field only accepts + # the values `ASCENDING` and `DESCENDING`. Not providing `SortOrder` + # will default to `ASCENDING`. + # @return [String] + # + # @see http://docs.aws.amazon.com/goto/WebAPI/managedblockchain-query-2023-05-04/ListFilteredTransactionEventsSort AWS API Documentation + # + class ListFilteredTransactionEventsSort < Struct.new( + :sort_by, + :sort_order) + SENSITIVE = [] + include Aws::Structure + end + # @!attribute [rw] owner_filter # The contract or wallet address on the blockchain network by which to # filter the request. You must specify the `address` property of the @@ -543,7 +670,7 @@ class ListAssetContractsOutput < Struct.new( # @!attribute [rw] max_results # The maximum number of token balances to return. # - # Default:`100` + # Default: `100` # # Even if additional results can be retrieved, the request can return # less results than `maxResults` or an empty array of results. @@ -586,8 +713,17 @@ class ListTokenBalancesOutput < Struct.new( end # @!attribute [rw] transaction_hash - # The hash of the transaction. It is generated whenever a transaction - # is verified and added to the blockchain. + # The hash of a transaction. It is generated when a transaction is + # created. + # @return [String] + # + # @!attribute [rw] transaction_id + # The identifier of a Bitcoin transaction. It is generated when a + # transaction is created. + # + # `transactionId` is only supported on the Bitcoin networks. + # + # # @return [String] # # @!attribute [rw] network @@ -602,7 +738,7 @@ class ListTokenBalancesOutput < Struct.new( # @!attribute [rw] max_results # The maximum number of transaction events to list. # - # Default:`100` + # Default: `100` # # Even if additional results can be retrieved, the request can return # less results than `maxResults` or an empty array of results. 
@@ -618,6 +754,7 @@ class ListTokenBalancesOutput < Struct.new( # class ListTransactionEventsInput < Struct.new( :transaction_hash, + :transaction_id, :network, :next_token, :max_results) @@ -662,8 +799,7 @@ class ListTransactionEventsOutput < Struct.new( # @return [Types::BlockchainInstant] # # @!attribute [rw] sort - # The order by which the results will be sorted. If `ASCENNDING` is - # selected, the results will be ordered by `fromTime`. + # The order by which the results will be sorted. # @return [Types::ListTransactionsSort] # # @!attribute [rw] next_token @@ -674,7 +810,7 @@ class ListTransactionEventsOutput < Struct.new( # @!attribute [rw] max_results # The maximum number of transactions to list. # - # Default:`100` + # Default: `100` # # Even if additional results can be retrieved, the request can return # less results than `maxResults` or an empty array of results. @@ -689,7 +825,7 @@ class ListTransactionEventsOutput < Struct.new( # @!attribute [rw] confirmation_status_filter # This filter is used to include transactions in the response that # haven't reached [ *finality* ][1]. Transactions that have reached - # finiality are always part of the response. + # finality are always part of the response. # # # @@ -765,7 +901,7 @@ class OwnerFilter < Struct.new( include Aws::Structure end - # The container for the identifier of the owner. + # The container for the owner identifier. # # @!attribute [rw] address # The contract or wallet address for the owner. @@ -870,6 +1006,25 @@ class ThrottlingException < Struct.new( include Aws::Structure end + # This container is used to specify a time frame. + # + # @!attribute [rw] from + # The container for time. + # @return [Types::BlockchainInstant] + # + # @!attribute [rw] to + # The container for time. 
+ # @return [Types::BlockchainInstant] + # + # @see http://docs.aws.amazon.com/goto/WebAPI/managedblockchain-query-2023-05-04/TimeFilter AWS API Documentation + # + class TimeFilter < Struct.new( + :from, + :to) + SENSITIVE = [] + include Aws::Structure + end + # The balance of the token. # # @!attribute [rw] owner_identifier @@ -1002,8 +1157,8 @@ class TokenIdentifier < Struct.new( # @return [String] # # @!attribute [rw] transaction_hash - # The hash of the transaction. It is generated whenever a transaction - # is verified and added to the blockchain. + # The hash of a transaction. It is generated when a transaction is + # created. # @return [String] # # @!attribute [rw] block_number @@ -1065,8 +1220,8 @@ class TokenIdentifier < Struct.new( # @return [String] # # @!attribute [rw] transaction_id - # The unique identifier of the transaction. It is generated whenever a - # transaction is verified and added to the blockchain. + # The identifier of a Bitcoin transaction. It is generated when a + # transaction is created. # @return [String] # # @!attribute [rw] confirmation_status @@ -1111,8 +1266,8 @@ class Transaction < Struct.new( # @return [String] # # @!attribute [rw] transaction_hash - # The hash of the transaction. It is generated whenever a transaction - # is verified and added to the blockchain. + # The hash of a transaction. It is generated when a transaction is + # created. # @return [String] # # @!attribute [rw] event_type @@ -1134,7 +1289,7 @@ class Transaction < Struct.new( # @return [String] # # @!attribute [rw] contract_address - # The blockchain address. for the contract + # The blockchain address for the contract # @return [String] # # @!attribute [rw] token_id @@ -1142,14 +1297,58 @@ class Transaction < Struct.new( # @return [String] # # @!attribute [rw] transaction_id - # The unique identifier of the transaction. It is generated whenever a - # transaction is verified and added to the blockchain. + # The identifier of a Bitcoin transaction. 
It is generated when a + # transaction is created. + # @return [String] + # + # @!attribute [rw] vout_index - # The position of the vout in the transaction output list. + # The position of the transaction output in the transaction output + # list. + # @return [Integer] + # + # @!attribute [rw] vout_spent + # Specifies if the transaction output is spent or unspent. + # + # This is only returned for `BITCOIN_VOUT` event types. + # + # + # @return [Boolean] + # + # @!attribute [rw] spent_vout_transaction_id + # The transactionId that *created* the spent transaction output. + # + # This is only returned for `BITCOIN_VIN` event types. + # + # + # @return [String] + # + # @!attribute [rw] spent_vout_transaction_hash + # The transactionHash that *created* the spent transaction output. + # + # This is only returned for `BITCOIN_VIN` event types. + # + # + # @return [String] + # + # @!attribute [rw] spent_vout_index + # The position of the spent transaction output in the output list of + # the *creating transaction*. + # + # This is only returned for `BITCOIN_VIN` event types. + # + # + # @return [Integer] + # + # @!attribute [rw] blockchain_instant + # The container for time. + # @return [Types::BlockchainInstant] + # + # @!attribute [rw] confirmation_status + # This container specifies whether the transaction has reached + # *finality*. 
+ # @return [String] + # # @see http://docs.aws.amazon.com/goto/WebAPI/managedblockchain-query-2023-05-04/TransactionEvent AWS API Documentation # class TransactionEvent < Struct.new( @@ -1162,7 +1361,13 @@ class TransactionEvent < Struct.new( :contract_address, :token_id, :transaction_id, - :vout_index) + :vout_index, + :vout_spent, + :spent_vout_transaction_id, + :spent_vout_transaction_hash, + :spent_vout_index, + :blockchain_instant, + :confirmation_status) SENSITIVE = [] include Aws::Structure end @@ -1170,8 +1375,8 @@ class TransactionEvent < Struct.new( # The container of the transaction output. # # @!attribute [rw] transaction_hash - # The hash of the transaction. It is generated whenever a transaction - # is verified and added to the blockchain. + # The hash of a transaction. It is generated when a transaction is + # created. # @return [String] # # @!attribute [rw] network @@ -1241,5 +1446,20 @@ class ValidationExceptionField < Struct.new( include Aws::Structure end + # This container specifies filtering attributes related to + # `BITCOIN_VOUT` event types + # + # @!attribute [rw] vout_spent + # Specifies if the transaction output is spent or unspent. 
+    #   @return [Boolean]
+    #
+    # @see http://docs.aws.amazon.com/goto/WebAPI/managedblockchain-query-2023-05-04/VoutFilter AWS API Documentation
+    #
+    class VoutFilter < Struct.new(
+      :vout_spent)
+      SENSITIVE = []
+      include Aws::Structure
+    end
+
   end
 end
diff --git a/gems/aws-sdk-managedblockchainquery/sig/client.rbs b/gems/aws-sdk-managedblockchainquery/sig/client.rbs
index efe4ea0b226..356b4b262e7 100644
--- a/gems/aws-sdk-managedblockchainquery/sig/client.rbs
+++ b/gems/aws-sdk-managedblockchainquery/sig/client.rbs
@@ -165,6 +165,40 @@ module Aws
                     ) -> _ListAssetContractsResponseSuccess
                   | (Hash[Symbol, untyped] params, ?Hash[Symbol, untyped] options) -> _ListAssetContractsResponseSuccess
+      interface _ListFilteredTransactionEventsResponseSuccess
+        include ::Seahorse::Client::_ResponseSuccess[Types::ListFilteredTransactionEventsOutput]
+        def events: () -> ::Array[Types::TransactionEvent]
+        def next_token: () -> ::String
+      end
+      # https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/ManagedBlockchainQuery/Client.html#list_filtered_transaction_events-instance_method
+      def list_filtered_transaction_events: (
+        network: ::String,
+        address_identifier_filter: {
+          transaction_event_to_address: Array[::String]
+        },
+        ?time_filter: {
+          from: {
+            time: ::Time?
+          }?,
+          to: {
+            time: ::Time?
+          }?
+        },
+        ?vout_filter: {
+          vout_spent: bool
+        },
+        ?confirmation_status_filter: {
+          include: Array[("FINAL" | "NONFINAL")]
+        },
+        ?sort: {
+          sort_by: ("blockchainInstant")?,
+          sort_order: ("ASCENDING" | "DESCENDING")?
+        },
+        ?next_token: ::String,
+        ?max_results: ::Integer
+      ) -> _ListFilteredTransactionEventsResponseSuccess
+                   | (Hash[Symbol, untyped] params, ?Hash[Symbol, untyped] options) -> _ListFilteredTransactionEventsResponseSuccess
+
       interface _ListTokenBalancesResponseSuccess
         include ::Seahorse::Client::_ResponseSuccess[Types::ListTokenBalancesOutput]
         def token_balances: () -> ::Array[Types::TokenBalance]
@@ -192,7 +226,8 @@ module Aws
       end
       # https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/ManagedBlockchainQuery/Client.html#list_transaction_events-instance_method
       def list_transaction_events: (
-        transaction_hash: ::String,
+        ?transaction_hash: ::String,
+        ?transaction_id: ::String,
         network: ("ETHEREUM_MAINNET" | "ETHEREUM_SEPOLIA_TESTNET" | "BITCOIN_MAINNET" | "BITCOIN_TESTNET"),
         ?next_token: ::String,
         ?max_results: ::Integer
diff --git a/gems/aws-sdk-managedblockchainquery/sig/types.rbs b/gems/aws-sdk-managedblockchainquery/sig/types.rbs
index b98258cdf0f..3af112ce316 100644
--- a/gems/aws-sdk-managedblockchainquery/sig/types.rbs
+++ b/gems/aws-sdk-managedblockchainquery/sig/types.rbs
@@ -13,6 +13,11 @@ module Aws::ManagedBlockchainQuery
       SENSITIVE: []
     end

+    class AddressIdentifierFilter
+      attr_accessor transaction_event_to_address: ::Array[::String]
+      SENSITIVE: []
+    end
+
     class AssetContract
       attr_accessor contract_identifier: Types::ContractIdentifier
       attr_accessor token_standard: ("ERC20" | "ERC721" | "ERC1155")
@@ -146,6 +151,30 @@ module Aws::ManagedBlockchainQuery
       SENSITIVE: []
     end

+    class ListFilteredTransactionEventsInput
+      attr_accessor network: ::String
+      attr_accessor address_identifier_filter: Types::AddressIdentifierFilter
+      attr_accessor time_filter: Types::TimeFilter
+      attr_accessor vout_filter: Types::VoutFilter
+      attr_accessor confirmation_status_filter: Types::ConfirmationStatusFilter
+      attr_accessor sort: Types::ListFilteredTransactionEventsSort
+      attr_accessor next_token: ::String
+      attr_accessor max_results: ::Integer
+      SENSITIVE: []
+    end
+
+    class ListFilteredTransactionEventsOutput
+      attr_accessor events: ::Array[Types::TransactionEvent]
+      attr_accessor next_token: ::String
+      SENSITIVE: []
+    end
+
+    class ListFilteredTransactionEventsSort
+      attr_accessor sort_by: ("blockchainInstant")
+      attr_accessor sort_order: ("ASCENDING" | "DESCENDING")
+      SENSITIVE: []
+    end
+
     class ListTokenBalancesInput
       attr_accessor owner_filter: Types::OwnerFilter
       attr_accessor token_filter: Types::TokenFilter
@@ -162,6 +191,7 @@ module Aws::ManagedBlockchainQuery

     class ListTransactionEventsInput
       attr_accessor transaction_hash: ::String
+      attr_accessor transaction_id: ::String
       attr_accessor network: ("ETHEREUM_MAINNET" | "ETHEREUM_SEPOLIA_TESTNET" | "BITCOIN_MAINNET" | "BITCOIN_TESTNET")
       attr_accessor next_token: ::String
       attr_accessor max_results: ::Integer
@@ -232,6 +262,12 @@ module Aws::ManagedBlockchainQuery
       SENSITIVE: []
     end

+    class TimeFilter
+      attr_accessor from: Types::BlockchainInstant
+      attr_accessor to: Types::BlockchainInstant
+      SENSITIVE: []
+    end
+
     class TokenBalance
       attr_accessor owner_identifier: Types::OwnerIdentifier
       attr_accessor token_identifier: Types::TokenIdentifier
@@ -290,6 +326,12 @@ module Aws::ManagedBlockchainQuery
       attr_accessor token_id: ::String
       attr_accessor transaction_id: ::String
      attr_accessor vout_index: ::Integer
+      attr_accessor vout_spent: bool
+      attr_accessor spent_vout_transaction_id: ::String
+      attr_accessor spent_vout_transaction_hash: ::String
+      attr_accessor spent_vout_index: ::Integer
+      attr_accessor blockchain_instant: Types::BlockchainInstant
+      attr_accessor confirmation_status: ("FINAL" | "NONFINAL")
       SENSITIVE: []
     end
@@ -313,5 +355,10 @@ module Aws::ManagedBlockchainQuery
       attr_accessor message: ::String
       SENSITIVE: []
     end
+
+    class VoutFilter
+      attr_accessor vout_spent: bool
+      SENSITIVE: []
+    end
   end
 end
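
The new request/response shapes in this diff all line up as a single paginated call. A minimal sketch of how the parameters fit together, assuming the RBS signature for `list_filtered_transaction_events` above: the hash is plain Ruby (no gem needed to build it), the actual client call is left commented out because it requires the `aws-sdk-managedblockchainquery` gem and credentials, and the address and dates are made-up placeholders.

```ruby
require 'time'

# Request parameters shaped to match ListFilteredTransactionEventsInput:
# network is a plain String here (unlike list_transaction_events, whose
# network is a fixed enum in the signature above).
params = {
  network: 'BITCOIN_MAINNET',
  address_identifier_filter: {
    transaction_event_to_address: ['example-bitcoin-address'] # placeholder, not a real address
  },
  time_filter: {
    from: { time: Time.parse('2023-01-01T00:00:00Z') },
    to:   { time: Time.parse('2023-06-30T23:59:59Z') }
  },
  vout_filter: { vout_spent: false },                 # VoutFilter: only unspent outputs
  confirmation_status_filter: { include: ['FINAL'] }, # only events that reached finality
  sort: { sort_by: 'blockchainInstant', sort_order: 'DESCENDING' },
  max_results: 100
}

# With the gem installed and credentials configured, the call would look like:
#   client = Aws::ManagedBlockchainQuery::Client.new
#   resp = client.list_filtered_transaction_events(params)
#   resp.events.each do |e|
#     puts [e.transaction_hash, e.vout_index, e.vout_spent, e.confirmation_status].inspect
#   end
```

Note how `vout_spent` appears on both sides of the call: as the single field of `VoutFilter` on the request, and as the new `TransactionEvent` attribute on each returned event (alongside the `spent_vout_*` fields, which are only populated for `BITCOIN_VIN` events).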