diff --git a/packages/@aws-cdk/aws-bedrock-agentcore-alpha/README.md b/packages/@aws-cdk/aws-bedrock-agentcore-alpha/README.md index be41167d0beee..3c2ee6006917e 100644 --- a/packages/@aws-cdk/aws-bedrock-agentcore-alpha/README.md +++ b/packages/@aws-cdk/aws-bedrock-agentcore-alpha/README.md @@ -1460,7 +1460,7 @@ specifications for defining API targets. It connects to REST APIs using OpenAPI - Supports OAUTH and API_KEY credential providers (Do not support IAM, you must provide `credentialProviderConfigurations`) - Ideal for integrating with external REST services -- Need API schema. The construct provide [3 ways to upload a API schema to OpenAPI target](#api-schema-for-openapi-and-smithy-target) +- Needs an API schema. The construct provides [3 ways to upload an API schema to an OpenAPI target](#api-schema-for-openapi-and-smithy-target) **Smithy Model Target** : Smithy is a language for defining services and software development kits (SDKs). Smithy models provide a more structured approach to defining APIs compared to OpenAPI, and are particularly useful for connecting to AWS services. @@ -1468,7 +1468,7 @@ AgentCore Gateway supports built-in AWS service models only. It connects to serv - Supports OAUTH and API_KEY credential providers - Ideal for AWS service integrations -- Need API schema. The construct provide 3 ways to upload a API schema to Smity target +- Needs an API schema. The construct provides 3 ways to upload an API schema to a Smithy target - When using the default IAM authentication (no `credentialProviderConfigurations` specified), The construct only grants permission to read the Smithy schema file from S3. 
You MUST manually grant permissions for the gateway role to invoke the actual Smithy API endpoints diff --git a/packages/@aws-cdk/aws-glue-alpha/README.md b/packages/@aws-cdk/aws-glue-alpha/README.md index 7bb6d2c6ee830..b558294f4af4d 100644 --- a/packages/@aws-cdk/aws-glue-alpha/README.md +++ b/packages/@aws-cdk/aws-glue-alpha/README.md @@ -547,7 +547,7 @@ new glue.Database(this, 'MyDatabase', { ## Table -A Glue table describes a table of data in S3: its structure (column names and types), location of data (S3 objects with a common prefix in a S3 bucket), and format for the files (Json, Avro, Parquet, etc.): +A Glue table describes a table of data in S3: its structure (column names and types), location of data (S3 objects with a common prefix in an S3 bucket), and format for the files (Json, Avro, Parquet, etc.): ```ts declare const myDatabase: glue.Database; @@ -565,7 +565,7 @@ new glue.S3Table(this, 'MyTable', { }); ``` -By default, a S3 bucket will be created to store the table's data but you can manually pass the `bucket` and `s3Prefix`: +By default, an S3 bucket will be created to store the table's data but you can manually pass the `bucket` and `s3Prefix`: ```ts declare const myBucket: s3.Bucket; diff --git a/packages/@aws-cdk/aws-iot-actions-alpha/README.md b/packages/@aws-cdk/aws-iot-actions-alpha/README.md index cb5409d657b0a..76c247b88d7c0 100644 --- a/packages/@aws-cdk/aws-iot-actions-alpha/README.md +++ b/packages/@aws-cdk/aws-iot-actions-alpha/README.md @@ -23,7 +23,7 @@ Currently supported are: - Republish a message to another MQTT topic - Invoke a Lambda function -- Put objects to a S3 bucket +- Put objects to an S3 bucket - Put logs to CloudWatch Logs - Capture CloudWatch metrics - Change state for a CloudWatch alarm @@ -73,9 +73,9 @@ new iot.TopicRule(this, 'TopicRule', { }); ``` -## Put objects to a S3 bucket +## Put objects to an S3 bucket -The code snippet below creates an AWS IoT Rule that puts objects to a S3 bucket +The code snippet below 
creates an AWS IoT Rule that puts objects to an S3 bucket when it is triggered. ```ts diff --git a/packages/@aws-cdk/aws-pipes-alpha/README.md b/packages/@aws-cdk/aws-pipes-alpha/README.md index 752736eb6439d..b5fe9fc4e155f 100644 --- a/packages/@aws-cdk/aws-pipes-alpha/README.md +++ b/packages/@aws-cdk/aws-pipes-alpha/README.md @@ -51,12 +51,12 @@ const pipe = new pipes.Pipe(this, 'Pipe', { }); ``` -This minimal example creates a pipe with a SQS queue as source and a SQS queue as target. +This minimal example creates a pipe with an SQS queue as source and an SQS queue as target. Messages from the source are put into the body of the target message. ## Source -A source is a AWS Service that is polled. The following sources are possible: +A source is an AWS service that is polled. The following sources are possible: - [Amazon DynamoDB stream](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-dynamodb.html) - [Amazon Kinesis stream](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-kinesis.html) diff --git a/packages/aws-cdk-lib/aws-apigatewayv2-integrations/README.md b/packages/aws-cdk-lib/aws-apigatewayv2-integrations/README.md index 85236d3e3fe7e..a7369c46374ff 100644 --- a/packages/aws-cdk-lib/aws-apigatewayv2-integrations/README.md +++ b/packages/aws-cdk-lib/aws-apigatewayv2-integrations/README.md @@ -164,7 +164,7 @@ httpApi.addRoutes({ SQS integrations enable integrating an HTTP API route with AWS SQS. This allows the HTTP API to send, receive and delete messages from an SQS queue. -The following code configures a SQS integrations: +The following code configures an SQS integration: ```ts import * as sqs from 'aws-cdk-lib/aws-sqs'; @@ -453,7 +453,7 @@ webSocketApi.addRoute('sendMessage', { AWS type integrations enable integrating with any supported AWS service. This is only supported for WebSocket APIs. 
When a client connects/disconnects or sends a message specific to a route, the API Gateway service forwards the request to the specified AWS service. -The following code configures a `$connect` route with a AWS integration that integrates with a dynamodb table. On websocket api connect, +The following code configures a `$connect` route with an AWS integration that integrates with a dynamodb table. On websocket api connect, it will write new entry to the dynamodb table. ```ts diff --git a/packages/aws-cdk-lib/aws-cloudfront-origins/README.md b/packages/aws-cdk-lib/aws-cloudfront-origins/README.md index 63a1b4e6a7bbb..8258d09699a0b 100644 --- a/packages/aws-cdk-lib/aws-cloudfront-origins/README.md +++ b/packages/aws-cdk-lib/aws-cloudfront-origins/README.md @@ -5,7 +5,7 @@ S3 buckets, Elastic Load Balancing v2 load balancers, or any other domain name. ## S3 Bucket -An S3 bucket can be used as an origin. An S3 bucket origin can either be configured using a standard S3 bucket or using a S3 bucket that's configured as a website endpoint (see AWS docs for [Using an S3 Bucket](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html#using-s3-as-origin)). +An S3 bucket can be used as an origin. An S3 bucket origin can either be configured using a standard S3 bucket or using an S3 bucket that's configured as a website endpoint (see AWS docs for [Using an S3 Bucket](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html#using-s3-as-origin)). > Note: `S3Origin` has been deprecated. Use `S3BucketOrigin` for standard S3 origins and `S3StaticWebsiteOrigin` for static website S3 origins. 
@@ -424,7 +424,7 @@ S3 bucket to allow the OAI read access: See AWS docs on [Giving an origin access identity permission to read files in the Amazon S3 bucket](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#private-content-restricting-access-to-s3-oai) for more details. -### Setting up a S3 origin with no origin access control +### Setting up an S3 origin with no origin access control To setup a standard S3 origin with no access control (no OAI nor OAC), use `origins.S3BucketOrigin.withBucketDefaults()`: diff --git a/packages/aws-cdk-lib/aws-codebuild/README.md b/packages/aws-cdk-lib/aws-codebuild/README.md index 9e06119f06c48..ce22f1dc753d7 100644 --- a/packages/aws-cdk-lib/aws-codebuild/README.md +++ b/packages/aws-cdk-lib/aws-codebuild/README.md @@ -883,7 +883,7 @@ by events via an event rule. ### Using Project as an event target The `aws-cdk-lib/aws-events-targets.CodeBuildProject` allows using an AWS CodeBuild -project as a AWS CloudWatch event rule target: +project as an AWS CloudWatch event rule target: ```ts // start build when a commit is pushed diff --git a/packages/aws-cdk-lib/aws-config/README.md b/packages/aws-cdk-lib/aws-config/README.md index c4b6a46454228..b087986c1cb14 100644 --- a/packages/aws-cdk-lib/aws-config/README.md +++ b/packages/aws-cdk-lib/aws-config/README.md @@ -78,7 +78,7 @@ new config.CloudFormationStackDriftDetectionCheck(this, 'Drift', { #### CloudFormation Stack notifications -Checks whether your CloudFormation stacks are sending event notifications to a SNS topic. +Checks whether your CloudFormation stacks are sending event notifications to an SNS topic. 
```ts // topics to which CloudFormation stacks may send event notifications diff --git a/packages/aws-cdk-lib/aws-ecs/README.md b/packages/aws-cdk-lib/aws-ecs/README.md index 74a3de9ce1106..dce420f62556b 100644 --- a/packages/aws-cdk-lib/aws-ecs/README.md +++ b/packages/aws-cdk-lib/aws-ecs/README.md @@ -735,7 +735,7 @@ taskDefinition.addContainer('windowsservercore', { Amazon ECS supports Active Directory authentication for Linux containers through a special kind of service account called a group Managed Service Account (gMSA). For more details, please see the [product documentation on how to implement on Windows containers](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/windows-gmsa.html), or this [blog post on how to implement on Linux containers](https://aws.amazon.com/blogs/containers/using-windows-authentication-with-gmsa-on-linux-containers-on-amazon-ecs/). -There are two types of CredentialSpecs, domained-join or domainless. Both types support creation from a S3 bucket, a SSM parameter, or by directly specifying a location for the file in the constructor. +There are two types of CredentialSpecs, domain-joined or domainless. Both types support creation from an S3 bucket, an SSM parameter, or by directly specifying a location for the file in the constructor. A domian-joined gMSA container looks like: @@ -760,7 +760,7 @@ A domianless gMSA container looks like: declare const bucket: s3.Bucket; declare const taskDefinition: ecs.TaskDefinition; -// Domainless gMSA container from a S3 bucket object. +// Domainless gMSA container from an S3 bucket object. 
taskDefinition.addContainer('gmsa-domainless-container', { image: ecs.ContainerImage.fromRegistry("amazon/amazon-ecs-sample"), cpu: 128, diff --git a/packages/aws-cdk-lib/aws-events/README.md b/packages/aws-cdk-lib/aws-events/README.md index 70b7e5481c0fe..34ed2b246137a 100644 --- a/packages/aws-cdk-lib/aws-events/README.md +++ b/packages/aws-cdk-lib/aws-events/README.md @@ -65,7 +65,7 @@ const onCommitRule = repo.onCommit('OnCommit', { You can add additional targets, with optional [input transformer](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_InputTransformer.html) -using `eventRule.addTarget(target[, input])`. For example, we can add a SNS +using `eventRule.addTarget(target[, input])`. For example, we can add an SNS topic target which formats a human-readable message for the commit. For example, this adds an SNS topic as a target: diff --git a/packages/aws-cdk-lib/aws-lambda-destinations/README.md b/packages/aws-cdk-lib/aws-lambda-destinations/README.md index 38231ce76babd..37f1ae5e6bce2 100644 --- a/packages/aws-cdk-lib/aws-lambda-destinations/README.md +++ b/packages/aws-cdk-lib/aws-lambda-destinations/README.md @@ -14,7 +14,7 @@ The following destinations are supported * EventBridge event bus * S3 bucket -Example with a SNS topic for successful invocations: +Example with an SNS topic for successful invocations: ```ts // An sns topic for successful invocations of a lambda function @@ -31,7 +31,7 @@ const myFn = new lambda.Function(this, 'Fn', { }) ``` -Example with a SQS queue for unsuccessful invocations: +Example with an SQS queue for unsuccessful invocations: ```ts // An sqs queue for unsuccessful invocations of a lambda function diff --git a/packages/aws-cdk-lib/aws-logs/README.md b/packages/aws-cdk-lib/aws-logs/README.md index f117d8f363fa2..bdc6dae7fdbaa 100644 --- a/packages/aws-cdk-lib/aws-logs/README.md +++ b/packages/aws-cdk-lib/aws-logs/README.md @@ -85,7 +85,7 @@ Account ID strings. 
Non-ARN principals, like Service principals or Any principal ## Encrypting Log Groups By default, log group data is always encrypted in CloudWatch Logs. You have the -option to encrypt log group data using a AWS KMS customer master key (CMK) should +option to encrypt log group data using an AWS KMS customer master key (CMK) should you not wish to use the default AWS encryption. Keep in mind that if you decide to encrypt a log group, any service or IAM identity that needs to read the encrypted log streams in the future will require the same CMK to decrypt the data. diff --git a/packages/aws-cdk-lib/aws-scheduler-targets/README.md b/packages/aws-cdk-lib/aws-scheduler-targets/README.md index 9017bcb8e844f..dde1a8cdc36c4 100644 --- a/packages/aws-cdk-lib/aws-scheduler-targets/README.md +++ b/packages/aws-cdk-lib/aws-scheduler-targets/README.md @@ -149,7 +149,7 @@ new Schedule(this, 'Schedule', { Use the `SnsPublish` target to publish messages to an Amazon SNS topic. -The code snippets below create an event rule with a Amazon SNS topic as a target. +The code snippets below create an event rule with an Amazon SNS topic as a target. It's called every hour by Amazon EventBridge Scheduler with a custom payload. ```ts @@ -322,7 +322,7 @@ new Schedule(this, 'Schedule', { }); ``` -The code snippet below creates an event rule with a EC2 task definition and cluster as the target which is called every hour by EventBridge Scheduler. +The code snippet below creates an event rule with an EC2 task definition and cluster as the target which is called every hour by EventBridge Scheduler. 
```ts import * as ecs from 'aws-cdk-lib/aws-ecs'; diff --git a/packages/aws-cdk-lib/aws-servicecatalog/README.md b/packages/aws-cdk-lib/aws-servicecatalog/README.md index be9cc06961432..ef66d54470cd8 100644 --- a/packages/aws-cdk-lib/aws-servicecatalog/README.md +++ b/packages/aws-cdk-lib/aws-servicecatalog/README.md @@ -175,8 +175,8 @@ const product = new servicecatalog.CloudFormationProduct(this, 'Product', { ### Using Assets in your Product Stack -You can reference assets in a Product Stack. For example, we can add a handler to a Lambda function or a S3 Asset directly from a local asset file. -In this case, you must provide a S3 Bucket with a bucketName to store your assets. +You can reference assets in a Product Stack. For example, we can add a handler to a Lambda function or an S3 Asset directly from a local asset file. +In this case, you must provide an S3 Bucket with a bucketName to store your assets. ```ts import * as lambda from 'aws-cdk-lib/aws-lambda'; diff --git a/packages/aws-cdk-lib/aws-ssm/README.md b/packages/aws-cdk-lib/aws-ssm/README.md index 1672787a628c7..8a84f18672f84 100644 --- a/packages/aws-cdk-lib/aws-ssm/README.md +++ b/packages/aws-cdk-lib/aws-ssm/README.md @@ -76,7 +76,7 @@ const stringValue = ssm.StringParameter.valueFromLookup(this, '/My/Public/Parame When using `valueFromLookup` an initial value of 'dummy-value-for-${parameterName}' (`dummy-value-for-/My/Public/Parameter` in the above example) is returned prior to the lookup being performed. This can lead to errors if you are using this -value in places that require a certain format. For example if you have stored the ARN for a SNS +value in places that require a certain format. 
For example if you have stored the ARN for an SNS
topic in a SSM Parameter which you want to lookup and provide to `Topic.fromTopicArn()` ```ts diff --git a/packages/aws-cdk-lib/aws-stepfunctions/README.md b/packages/aws-cdk-lib/aws-stepfunctions/README.md index ce28a2e46c49b..faa1179dd5b1a 100644 --- a/packages/aws-cdk-lib/aws-stepfunctions/README.md +++ b/packages/aws-cdk-lib/aws-stepfunctions/README.md @@ -947,8 +947,8 @@ distributedMap.itemProcessor(new sfn.Pass(this, 'Pass State')); }); distributedMap.itemProcessor(new sfn.Pass(this, 'Pass')); ``` -* Objects in a S3 bucket with an optional prefix. - * When `DistributedMap` is required to iterate over objects stored in a S3 bucket, then an object of `S3ObjectsItemReader` can be passed to `itemReader` to configure the iterator source. Note that `S3ObjectsItemReader` will default to use Distributed map's query language. If the +* Objects in an S3 bucket with an optional prefix. + * When `DistributedMap` is required to iterate over objects stored in an S3 bucket, then an object of `S3ObjectsItemReader` can be passed to `itemReader` to configure the iterator source. Note that `S3ObjectsItemReader` will default to using the Distributed Map's query language. If the map does not specify a query language, then it falls back to the State machine's query language. An exmaple of using `S3ObjectsItemReader` is as follows: ```ts @@ -997,7 +997,7 @@ distributedMap.itemProcessor(new sfn.Pass(this, 'Pass State')); ``` * Both `bucket` and `bucketNamePath` are mutually exclusive. 
* JSON array in a JSON file stored in S3 - * When `DistributedMap` is required to iterate over a JSON array stored in a JSON file in a S3 bucket, then an object of `S3JsonItemReader` can be passed to `itemReader` to configure the iterator source as follows: + * When `DistributedMap` is required to iterate over a JSON array stored in a JSON file in an S3 bucket, then an object of `S3JsonItemReader` can be passed to `itemReader` to configure the iterator source as follows: ```ts import * as s3 from 'aws-cdk-lib/aws-s3'; diff --git a/packages/aws-cdk-lib/aws-synthetics/README.md b/packages/aws-cdk-lib/aws-synthetics/README.md index 95275a03720b7..3f9429b7e61d5 100644 --- a/packages/aws-cdk-lib/aws-synthetics/README.md +++ b/packages/aws-cdk-lib/aws-synthetics/README.md @@ -254,7 +254,7 @@ new synthetics.Canary(this, 'Asset Canary', { runtime: synthetics.Runtime.SYNTHETICS_NODEJS_PUPPETEER_6_2, }); -// To supply the code from a S3 bucket: +// To supply the code from an S3 bucket: import * as s3 from 'aws-cdk-lib/aws-s3'; const bucket = new s3.Bucket(this, 'Code Bucket'); new synthetics.Canary(this, 'Bucket Canary', { diff --git a/packages/aws-cdk-lib/custom-resources/README.md b/packages/aws-cdk-lib/custom-resources/README.md index d3b37ae8c9ccb..cd15e042be6f7 100644 --- a/packages/aws-cdk-lib/custom-resources/README.md +++ b/packages/aws-cdk-lib/custom-resources/README.md @@ -450,7 +450,7 @@ const myProvider = new cr.Provider(this, 'MyProvider', { ### Customizing Provider Function environment encryption key -Sometimes it may be useful to manually set a AWS KMS key for the Provider Function Lambda and therefore +Sometimes it may be useful to manually set an AWS KMS key for the Provider Function Lambda and therefore be able to view, manage and audit the key usage. 
```ts @@ -983,7 +983,7 @@ new s3deploy.BucketDeployment(nestedStackB, "s3deployB", { ### Setting Log Group Removal Policy -The `addLogRetentionLifetime` method of `CustomResourceConfig` will associate a log group with a AWS-vended custom resource lambda. +The `addLogRetentionLifetime` method of `CustomResourceConfig` will associate a log group with an AWS-vended custom resource lambda. The `addRemovalPolicy` method will configure the custom resource lambda log group removal policy to `DESTROY`. ```ts import * as cdk from 'aws-cdk-lib'; diff --git a/packages/aws-cdk-lib/cx-api/README.md b/packages/aws-cdk-lib/cx-api/README.md index 38f7485ee7896..026e7f6b2d2c1 100644 --- a/packages/aws-cdk-lib/cx-api/README.md +++ b/packages/aws-cdk-lib/cx-api/README.md @@ -32,7 +32,7 @@ _cdk.json_ * `@aws-cdk/aws-sns-subscriptions:restrictSqsDescryption` -Enable this feature flag to restrict the decryption of a SQS queue, which is subscribed to a SNS topic, to +Enable this feature flag to restrict the decryption of an SQS queue, which is subscribed to an SNS topic, to only the topic which it is subscribed to and not the whole SNS service of an account. Previously the decryption was only restricted to the SNS service principal. To make the SQS subscription more @@ -361,7 +361,7 @@ _cdk.json_ When enabled, remove default deployment alarm settings. -When this featuer flag is enabled, remove the default deployment alarm settings when creating a AWS ECS service. +When this feature flag is enabled, remove the default deployment alarm settings when creating an AWS ECS service. _cdk.json_