4 changes: 2 additions & 2 deletions packages/@aws-cdk/aws-bedrock-agentcore-alpha/README.md
@@ -1460,15 +1460,15 @@ specifications for defining API targets. It connects to REST APIs using OpenAPI

- Supports OAUTH and API_KEY credential providers (IAM is not supported; you must provide `credentialProviderConfigurations`)
- Ideal for integrating with external REST services
- Need API schema. The construct provide [3 ways to upload a API schema to OpenAPI target](#api-schema-for-openapi-and-smithy-target)
- Needs an API schema. The construct provides [3 ways to upload an API schema to an OpenAPI target](#api-schema-for-openapi-and-smithy-target)

**Smithy Model Target** : Smithy is a language for defining services and software development kits (SDKs). Smithy models provide
a more structured approach to defining APIs compared to OpenAPI, and are particularly useful for connecting to AWS services.
AgentCore Gateway supports built-in AWS service models only. It connects to services using Smithy model definitions.

- Supports OAUTH and API_KEY credential providers
- Ideal for AWS service integrations
- Need API schema. The construct provide 3 ways to upload a API schema to Smity target
- Needs an API schema. The construct provides 3 ways to upload an API schema to a Smithy target
- When using the default IAM authentication (no `credentialProviderConfigurations` specified), the construct only
grants permission to read the Smithy schema file from S3. You MUST manually grant permissions for the gateway
role to invoke the actual Smithy API endpoints
4 changes: 2 additions & 2 deletions packages/@aws-cdk/aws-glue-alpha/README.md
@@ -547,7 +547,7 @@ new glue.Database(this, 'MyDatabase', {

## Table

A Glue table describes a table of data in S3: its structure (column names and types), location of data (S3 objects with a common prefix in a S3 bucket), and format for the files (Json, Avro, Parquet, etc.):
A Glue table describes a table of data in S3: its structure (column names and types), location of data (S3 objects with a common prefix in an S3 bucket), and format for the files (JSON, Avro, Parquet, etc.):

```ts
declare const myDatabase: glue.Database;
// @@ -565,7 +565,7 @@
new glue.S3Table(this, 'MyTable', {
});
```

By default, a S3 bucket will be created to store the table's data but you can manually pass the `bucket` and `s3Prefix`:
By default, an S3 bucket will be created to store the table's data but you can manually pass the `bucket` and `s3Prefix`:

```ts
declare const myBucket: s3.Bucket;
```
6 changes: 3 additions & 3 deletions packages/@aws-cdk/aws-iot-actions-alpha/README.md
@@ -23,7 +23,7 @@ Currently supported are:

- Republish a message to another MQTT topic
- Invoke a Lambda function
- Put objects to a S3 bucket
- Put objects to an S3 bucket
- Put logs to CloudWatch Logs
- Capture CloudWatch metrics
- Change state for a CloudWatch alarm
@@ -73,9 +73,9 @@

```ts
new iot.TopicRule(this, 'TopicRule', {
});
```

## Put objects to a S3 bucket
## Put objects to an S3 bucket

The code snippet below creates an AWS IoT Rule that puts objects to a S3 bucket
The code snippet below creates an AWS IoT Rule that puts objects to an S3 bucket
when it is triggered.

```ts
```
4 changes: 2 additions & 2 deletions packages/@aws-cdk/aws-pipes-alpha/README.md
@@ -51,12 +51,12 @@

```ts
const pipe = new pipes.Pipe(this, 'Pipe', {
});
```

This minimal example creates a pipe with a SQS queue as source and a SQS queue as target.
This minimal example creates a pipe with an SQS queue as source and an SQS queue as target.
Messages from the source are put into the body of the target message.

## Source

A source is a AWS Service that is polled. The following sources are possible:
A source is an AWS Service that is polled. The following sources are possible:

- [Amazon DynamoDB stream](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-dynamodb.html)
- [Amazon Kinesis stream](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-kinesis.html)
4 changes: 2 additions & 2 deletions packages/aws-cdk-lib/aws-apigatewayv2-integrations/README.md
@@ -164,7 +164,7 @@ httpApi.addRoutes({
SQS integrations enable integrating an HTTP API route with AWS SQS.
This allows the HTTP API to send, receive and delete messages from an SQS queue.

The following code configures a SQS integrations:
The following code configures an SQS integration:

```ts
import * as sqs from 'aws-cdk-lib/aws-sqs';
```

@@ -453,7 +453,7 @@ webSocketApi.addRoute('sendMessage', {
AWS type integrations enable integrating with any supported AWS service. This is only supported for WebSocket APIs. When a client
connects/disconnects or sends a message specific to a route, the API Gateway service forwards the request to the specified AWS service.

The following code configures a `$connect` route with a AWS integration that integrates with a dynamodb table. On websocket api connect,
The following code configures a `$connect` route with an AWS integration that integrates with a DynamoDB table. On WebSocket API connect,
it will write a new entry to the DynamoDB table.

```ts
```
4 changes: 2 additions & 2 deletions packages/aws-cdk-lib/aws-cloudfront-origins/README.md
@@ -5,7 +5,7 @@ S3 buckets, Elastic Load Balancing v2 load balancers, or any other domain name.

## S3 Bucket

An S3 bucket can be used as an origin. An S3 bucket origin can either be configured using a standard S3 bucket or using a S3 bucket that's configured as a website endpoint (see AWS docs for [Using an S3 Bucket](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html#using-s3-as-origin)).
An S3 bucket can be used as an origin. An S3 bucket origin can either be configured using a standard S3 bucket or using an S3 bucket that's configured as a website endpoint (see AWS docs for [Using an S3 Bucket](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html#using-s3-as-origin)).

> Note: `S3Origin` has been deprecated. Use `S3BucketOrigin` for standard S3 origins and `S3StaticWebsiteOrigin` for static website S3 origins.

@@ -424,7 +424,7 @@ S3 bucket to allow the OAI read access:

See AWS docs on [Giving an origin access identity permission to read files in the Amazon S3 bucket](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#private-content-restricting-access-to-s3-oai) for more details.

### Setting up a S3 origin with no origin access control
### Setting up an S3 origin with no origin access control

To set up a standard S3 origin with no access control (no OAI nor OAC), use `origins.S3BucketOrigin.withBucketDefaults()`:

2 changes: 1 addition & 1 deletion packages/aws-cdk-lib/aws-codebuild/README.md
@@ -883,7 +883,7 @@ by events via an event rule.
### Using Project as an event target

The `aws-cdk-lib/aws-events-targets.CodeBuildProject` allows using an AWS CodeBuild
project as a AWS CloudWatch event rule target:
project as an AWS CloudWatch event rule target:

```ts
// start build when a commit is pushed
```
2 changes: 1 addition & 1 deletion packages/aws-cdk-lib/aws-config/README.md
@@ -78,7 +78,7 @@ new config.CloudFormationStackDriftDetectionCheck(this, 'Drift', {

#### CloudFormation Stack notifications

Checks whether your CloudFormation stacks are sending event notifications to a SNS topic.
Checks whether your CloudFormation stacks are sending event notifications to an SNS topic.

```ts
// topics to which CloudFormation stacks may send event notifications
```
4 changes: 2 additions & 2 deletions packages/aws-cdk-lib/aws-ecs/README.md
@@ -735,7 +735,7 @@ taskDefinition.addContainer('windowsservercore', {

Amazon ECS supports Active Directory authentication for Linux containers through a special kind of service account called a group Managed Service Account (gMSA). For more details, please see the [product documentation on how to implement on Windows containers](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/windows-gmsa.html), or this [blog post on how to implement on Linux containers](https://aws.amazon.com/blogs/containers/using-windows-authentication-with-gmsa-on-linux-containers-on-amazon-ecs/).

There are two types of CredentialSpecs, domained-join or domainless. Both types support creation from a S3 bucket, a SSM parameter, or by directly specifying a location for the file in the constructor.
There are two types of CredentialSpecs, domain-joined or domainless. Both types support creation from an S3 bucket, an SSM parameter, or by directly specifying a location for the file in the constructor.

A domain-joined gMSA container looks like:

Expand All @@ -760,7 +760,7 @@ A domianless gMSA container looks like:
declare const bucket: s3.Bucket;
declare const taskDefinition: ecs.TaskDefinition;

// Domainless gMSA container from a S3 bucket object.
// Domainless gMSA container from an S3 bucket object.
taskDefinition.addContainer('gmsa-domainless-container', {
image: ecs.ContainerImage.fromRegistry("amazon/amazon-ecs-sample"),
cpu: 128,
2 changes: 1 addition & 1 deletion packages/aws-cdk-lib/aws-events/README.md
@@ -65,7 +65,7 @@ const onCommitRule = repo.onCommit('OnCommit', {

You can add additional targets, with optional [input
transformer](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_InputTransformer.html)
using `eventRule.addTarget(target[, input])`. For example, we can add a SNS
using `eventRule.addTarget(target[, input])`. For example, we can add an SNS
topic target which formats a human-readable message for the commit.

For example, this adds an SNS topic as a target:
4 changes: 2 additions & 2 deletions packages/aws-cdk-lib/aws-lambda-destinations/README.md
@@ -14,7 +14,7 @@ The following destinations are supported
* EventBridge event bus
* S3 bucket

Example with a SNS topic for successful invocations:
Example with an SNS topic for successful invocations:

```ts
// An sns topic for successful invocations of a lambda function
// @@ -31,7 +31,7 @@
const myFn = new lambda.Function(this, 'Fn', {
})
```

Example with a SQS queue for unsuccessful invocations:
Example with an SQS queue for unsuccessful invocations:

```ts
// An sqs queue for unsuccessful invocations of a lambda function
```
2 changes: 1 addition & 1 deletion packages/aws-cdk-lib/aws-logs/README.md
@@ -85,7 +85,7 @@ Account ID strings. Non-ARN principals, like Service principals or Any principal
## Encrypting Log Groups

By default, log group data is always encrypted in CloudWatch Logs. You have the
option to encrypt log group data using a AWS KMS customer master key (CMK) should
option to encrypt log group data using an AWS KMS customer master key (CMK) should
you not wish to use the default AWS encryption. Keep in mind that if you decide to
encrypt a log group, any service or IAM identity that needs to read the encrypted
log streams in the future will require the same CMK to decrypt the data.
4 changes: 2 additions & 2 deletions packages/aws-cdk-lib/aws-scheduler-targets/README.md
@@ -149,7 +149,7 @@ new Schedule(this, 'Schedule', {

Use the `SnsPublish` target to publish messages to an Amazon SNS topic.

The code snippets below create an event rule with a Amazon SNS topic as a target.
The code snippets below create an event rule with an Amazon SNS topic as a target.
It's called every hour by Amazon EventBridge Scheduler with a custom payload.

```ts
// @@ -322,7 +322,7 @@
new Schedule(this, 'Schedule', {
});
```

The code snippet below creates an event rule with a EC2 task definition and cluster as the target which is called every hour by EventBridge Scheduler.
The code snippet below creates an event rule with an EC2 task definition and cluster as the target which is called every hour by EventBridge Scheduler.

```ts
import * as ecs from 'aws-cdk-lib/aws-ecs';
```
4 changes: 2 additions & 2 deletions packages/aws-cdk-lib/aws-servicecatalog/README.md
@@ -175,8 +175,8 @@ const product = new servicecatalog.CloudFormationProduct(this, 'Product', {

### Using Assets in your Product Stack

You can reference assets in a Product Stack. For example, we can add a handler to a Lambda function or a S3 Asset directly from a local asset file.
In this case, you must provide a S3 Bucket with a bucketName to store your assets.
You can reference assets in a Product Stack. For example, we can add a handler to a Lambda function or an S3 Asset directly from a local asset file.
In this case, you must provide an S3 Bucket with a bucketName to store your assets.

```ts
import * as lambda from 'aws-cdk-lib/aws-lambda';
```
2 changes: 1 addition & 1 deletion packages/aws-cdk-lib/aws-ssm/README.md
@@ -76,7 +76,7 @@ const stringValue = ssm.StringParameter.valueFromLookup(this, '/My/Public/Parame
When using `valueFromLookup` an initial value of 'dummy-value-for-${parameterName}'
(`dummy-value-for-/My/Public/Parameter` in the above example)
is returned prior to the lookup being performed. This can lead to errors if you are using this
value in places that require a certain format. For example if you have stored the ARN for a SNS
value in places that require a certain format. For example if you have stored the ARN for an SNS
topic in an SSM Parameter which you want to look up and provide to `Topic.fromTopicArn()`:

```ts
```
6 changes: 3 additions & 3 deletions packages/aws-cdk-lib/aws-stepfunctions/README.md
@@ -947,8 +947,8 @@ distributedMap.itemProcessor(new sfn.Pass(this, 'Pass State'));
});
distributedMap.itemProcessor(new sfn.Pass(this, 'Pass'));
```
* Objects in a S3 bucket with an optional prefix.
* When `DistributedMap` is required to iterate over objects stored in a S3 bucket, then an object of `S3ObjectsItemReader` can be passed to `itemReader` to configure the iterator source. Note that `S3ObjectsItemReader` will default to use Distributed map's query language. If the
* Objects in an S3 bucket with an optional prefix.
* When `DistributedMap` is required to iterate over objects stored in an S3 bucket, then an object of `S3ObjectsItemReader` can be passed to `itemReader` to configure the iterator source. Note that `S3ObjectsItemReader` will default to using the Distributed Map's query language. If the
map does not specify a query language, then it falls back to the State machine's query language. An example of using `S3ObjectsItemReader`
is as follows:
```ts
// @@ -997,7 +997,7 @@
distributedMap.itemProcessor(new sfn.Pass(this, 'Pass State'));
```
* `bucket` and `bucketNamePath` are mutually exclusive.
* JSON array in a JSON file stored in S3
* When `DistributedMap` is required to iterate over a JSON array stored in a JSON file in a S3 bucket, then an object of `S3JsonItemReader` can be passed to `itemReader` to configure the iterator source as follows:
* When `DistributedMap` is required to iterate over a JSON array stored in a JSON file in an S3 bucket, then an object of `S3JsonItemReader` can be passed to `itemReader` to configure the iterator source as follows:
```ts
import * as s3 from 'aws-cdk-lib/aws-s3';

```
2 changes: 1 addition & 1 deletion packages/aws-cdk-lib/aws-synthetics/README.md
@@ -254,7 +254,7 @@ new synthetics.Canary(this, 'Asset Canary', {
runtime: synthetics.Runtime.SYNTHETICS_NODEJS_PUPPETEER_6_2,
});

// To supply the code from a S3 bucket:
// To supply the code from an S3 bucket:
import * as s3 from 'aws-cdk-lib/aws-s3';
const bucket = new s3.Bucket(this, 'Code Bucket');
new synthetics.Canary(this, 'Bucket Canary', {
4 changes: 2 additions & 2 deletions packages/aws-cdk-lib/custom-resources/README.md
@@ -450,7 +450,7 @@ const myProvider = new cr.Provider(this, 'MyProvider', {

### Customizing Provider Function environment encryption key

Sometimes it may be useful to manually set a AWS KMS key for the Provider Function Lambda and therefore
Sometimes it may be useful to manually set an AWS KMS key for the Provider Function Lambda and therefore
be able to view, manage and audit the key usage.

@@ -983,7 +983,7 @@ new s3deploy.BucketDeployment(nestedStackB, "s3deployB", {

### Setting Log Group Removal Policy

The `addLogRetentionLifetime` method of `CustomResourceConfig` will associate a log group with a AWS-vended custom resource lambda.
The `addLogRetentionLifetime` method of `CustomResourceConfig` will associate a log group with an AWS-vended custom resource lambda.
The `addRemovalPolicy` method will configure the custom resource lambda log group removal policy to `DESTROY`.
```ts
import * as cdk from 'aws-cdk-lib';
```
4 changes: 2 additions & 2 deletions packages/aws-cdk-lib/cx-api/README.md
@@ -32,7 +32,7 @@ _cdk.json_

* `@aws-cdk/aws-sns-subscriptions:restrictSqsDescryption`

Enable this feature flag to restrict the decryption of a SQS queue, which is subscribed to a SNS topic, to
Enable this feature flag to restrict the decryption of an SQS queue, which is subscribed to an SNS topic, to
only the topic which it is subscribed to and not the whole SNS service of an account.

Previously the decryption was only restricted to the SNS service principal. To make the SQS subscription more
@@ -361,7 +361,7 @@ _cdk.json_

When enabled, remove default deployment alarm settings.

When this featuer flag is enabled, remove the default deployment alarm settings when creating a AWS ECS service.
When this feature flag is enabled, the default deployment alarm settings are removed when creating an AWS ECS service.

_cdk.json_
