Merged
Changes from all commits
4 changes: 2 additions & 2 deletions charts/karpenter-crd/Chart.yaml
@@ -2,8 +2,8 @@ apiVersion: v2
name: karpenter-crd
description: A Helm chart for Karpenter Custom Resource Definitions (CRDs).
type: application
version: 1.10.0
appVersion: 1.10.0
version: 1.11.0
appVersion: 1.11.0
keywords:
- cluster
- node
4 changes: 2 additions & 2 deletions charts/karpenter/Chart.yaml
@@ -2,8 +2,8 @@ apiVersion: v2
name: karpenter
description: A Helm chart for Karpenter, an open-source node provisioning project built for Kubernetes.
type: application
version: 1.10.0
appVersion: 1.10.0
version: 1.11.0
appVersion: 1.11.0
keywords:
- cluster
- node
14 changes: 7 additions & 7 deletions charts/karpenter/README.md
@@ -2,7 +2,7 @@

A Helm chart for Karpenter, an open-source node provisioning project built for Kubernetes.

![Version: 1.10.0](https://img.shields.io/badge/Version-1.10.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.10.0](https://img.shields.io/badge/AppVersion-1.10.0-informational?style=flat-square)
![Version: 1.11.0](https://img.shields.io/badge/Version-1.11.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 1.11.0](https://img.shields.io/badge/AppVersion-1.11.0-informational?style=flat-square)

## Documentation

@@ -15,7 +15,7 @@ You can follow the detailed installation instruction in the [documentation](http
```bash
helm upgrade --install --namespace karpenter --create-namespace \
karpenter oci://public.ecr.aws/karpenter/karpenter \
--version 1.10.0 \
--version 1.11.0 \
--set "serviceAccount.annotations.eks\.amazonaws\.com/role-arn=${KARPENTER_IAM_ROLE_ARN}" \
--set settings.clusterName=${CLUSTER_NAME} \
--set settings.interruptionQueue=${CLUSTER_NAME} \
@@ -27,13 +27,13 @@ helm upgrade --install --namespace karpenter --create-namespace \
As the OCI Helm chart is signed by [Cosign](https://github.com/sigstore/cosign) as part of the release process, you can verify the chart before installing it by running the following command.

```shell
cosign verify public.ecr.aws/karpenter/karpenter:1.10.0 \
cosign verify public.ecr.aws/karpenter/karpenter:1.11.0 \
--certificate-oidc-issuer=https://token.actions.githubusercontent.com \
--certificate-identity-regexp='https://github\.com/aws/karpenter-provider-aws/\.github/workflows/release\.yaml@.+' \
--certificate-github-workflow-repository=aws/karpenter-provider-aws \
--certificate-github-workflow-name=Release \
--certificate-github-workflow-ref=refs/tags/v1.10.0 \
--annotations version=1.10.0
--certificate-github-workflow-ref=refs/tags/v1.11.0 \
--annotations version=1.11.0
```

## Values
@@ -49,9 +49,9 @@ cosign verify public.ecr.aws/karpenter/karpenter:1.10.0 \
| controller.envFrom | list | `[]` | |
| controller.extraVolumeMounts | list | `[]` | Additional volumeMounts for the controller container. |
| controller.healthProbe.port | int | `8081` | The container port to use for http health probe. |
| controller.image.digest | string | `"sha256:0c215133a37e0d8bc2515b75120d2fefa14be3f939aebc14020813cdc3c001a3"` | SHA256 digest of the controller image. |
| controller.image.digest | string | `"sha256:f5691977d6f6ca3032fa61a3faefbfcfc838837d00586b40331d95bd84d55f74"` | SHA256 digest of the controller image. |
| controller.image.repository | string | `"public.ecr.aws/karpenter/controller"` | Repository path to the controller image. |
| controller.image.tag | string | `"1.10.0"` | Tag of the controller image. |
| controller.image.tag | string | `"1.11.0"` | Tag of the controller image. |
| controller.metrics.port | int | `8080` | The container port to use for metrics. |
| controller.resources | object | `{}` | Resources for the controller container. |
| controller.securityContext.appArmorProfile | object | `{}` | AppArmor profile for the controller container. |
4 changes: 2 additions & 2 deletions charts/karpenter/values.yaml
@@ -126,9 +126,9 @@ controller:
# -- Repository path to the controller image.
repository: public.ecr.aws/karpenter/controller
# -- Tag of the controller image.
tag: 1.10.0
tag: 1.11.0
# -- SHA256 digest of the controller image.
digest: sha256:0c215133a37e0d8bc2515b75120d2fefa14be3f939aebc14020813cdc3c001a3
digest: sha256:f5691977d6f6ca3032fa61a3faefbfcfc838837d00586b40331d95bd84d55f74
# -- Additional environment variables for the controller pod.
env: []
# - name: AWS_REGION
3 changes: 3 additions & 0 deletions hack/docs/compatibilitymatrix_gen/compatibility.yaml
@@ -86,5 +86,8 @@ compatibility:
minK8sVersion: 1.26
maxK8sVersion: 1.35
- appVersion: 1.10.x
minK8sVersion: 1.26
maxK8sVersion: 1.35
- appVersion: 1.11.x
minK8sVersion: 1.26
maxK8sVersion: 1.35
71 changes: 70 additions & 1 deletion website/content/en/docs/concepts/nodeclasses.md
@@ -115,6 +115,11 @@ spec:
- id: cr-123
- instanceMatchCriteria: open

# Optional, the fields are mutually exclusive
placementGroupSelector:
name: my-pg
id: pg-123

# Optional, propagates tags to underlying EC2 resources
tags:
team: team-a
Expand All @@ -141,6 +146,15 @@ spec:
snapshotID: snap-0123456789
volumeInitializationRate: 100

# Optional, configures the network interfaces for the instance
networkInterfaces:
- networkCardIndex: 0
deviceIndex: 0
interfaceType: "interface"
- networkCardIndex: 0
deviceIndex: 1
interfaceType: "interface"

# Optional, use instance-store volumes for node ephemeral-storage
instanceStorePolicy: RAID0

@@ -714,7 +728,7 @@ You can provision and assign a role to an IAM instance profile using [CloudForma

{{% alert title="Note" color="primary" %}}

For [private clusters](https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html) that do not have access to the public internet, using `spec.instanceProfile` is required. `spec.role` cannot be used since Karpenter needs to access IAM endpoints to manage a generated instance profile. IAM [doesn't support private endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/aws-services-privatelink-support.html) to enable accessing the service without going to the public internet.
For [private clusters](https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html) without access to their AWS region's IAM API endpoint, using `spec.instanceProfile` is required. `spec.role` cannot be used since Karpenter needs to access IAM endpoints to manage a generated instance profile. IAM [doesn't support private endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/aws-services-privatelink-support.html) to enable accessing the service without going to the public internet.

{{% /alert %}}

@@ -962,6 +976,39 @@ spec:
key: foo
```

## spec.placementGroupSelector

Placement Group Selector allows you to select a [placement group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html) for instances launched by this EC2NodeClass. Each EC2NodeClass maps to exactly one placement group — all instances launched from that EC2NodeClass are placed into the resolved placement group.

Placement groups can be selected by either name or ID. Only one of `name` or `id` may be specified.

Karpenter supports all three placement group strategies:
- **Cluster**: instances are placed in a single AZ on the same network segment for low-latency, high-throughput networking (e.g., EFA workloads).
- **Partition**: instances are distributed across isolated partitions (up to 7 per AZ) for hardware fault isolation. Applications can use `topologySpreadConstraints` with the `karpenter.k8s.aws/placement-group-partition` label to spread workloads across partitions.
- **Spread**: each instance is placed on distinct hardware (up to 7 instances per AZ per group) for maximum fault isolation.
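
For example, with a partition placement group, application replicas can be spread across partitions through `topologySpreadConstraints` on the partition label. A sketch, assuming a workload labeled `app: my-app` (the label value is hypothetical):

```yaml
# Pod spec fragment: spread replicas evenly across the partitions
# of the placement group, using the label Karpenter applies to nodes
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: karpenter.k8s.aws/placement-group-partition
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: my-app
```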

{{% alert title="Note" color="primary" %}}
The IAM role Karpenter assumes must have permissions for the [ec2:DescribePlacementGroups](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribePlacementGroups.html) action to discover placement groups and the [ec2:RunInstances](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonec2.html#amazonec2-RunInstances) / [ec2:CreateFleet](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonec2.html#amazonec2-CreateFleet) actions to launch instances into the placement group.
{{% /alert %}}

### Examples

Select the placement group with the given ID:

```yaml
spec:
placementGroupSelector:
id: pg-123
```

Select the placement group with the given name:

```yaml
spec:
placementGroupSelector:
name: my-pg-a
```

## spec.tags

Karpenter adds tags to all resources it creates, including EC2 Instances, EBS volumes, and Launch Templates. The default set of tags is listed below.
@@ -1082,6 +1129,28 @@ spec:

The `Custom` AMIFamily ships without any default `blockDeviceMappings`.

## spec.networkInterfaces

The `networkInterfaces` field allows you to configure network interface attachments for instances, including support for EFA (Elastic Fabric Adapter) devices for high-performance computing and machine learning workloads. For more information, see the [AWS EFA docs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa.html).

Configure network interfaces by specifying the network card index, device index, and interface type:

```yaml
spec:
networkInterfaces:
- networkCardIndex: 0
deviceIndex: 0
interfaceType: "interface"
- networkCardIndex: 0
deviceIndex: 1
interfaceType: "efa-only"
```

### Interface Types

- **interface**: a standard ENA (Elastic Network Adapter) interface providing IP connectivity.
- **efa-only**: an EFA interface that provides only the EFA device for RDMA communication, without consuming an IP address.
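
Pods consume EFA devices as an extended resource exposed by the AWS EFA Kubernetes device plugin, which must be installed separately (it is not installed by Karpenter). A sketch, with hypothetical pod and image names:

```yaml
# Sketch: a pod requesting one EFA device via the extended resource
# advertised by the EFA device plugin (assumed to be installed)
apiVersion: v1
kind: Pod
metadata:
  name: efa-worker                      # hypothetical name
spec:
  containers:
    - name: trainer
      image: my-training-image:latest   # placeholder image
      resources:
        limits:
          vpc.amazonaws.com/efa: 1      # one EFA device for this pod
```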

## spec.instanceStorePolicy

The `instanceStorePolicy` field controls how [instance-store](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html) volumes are handled. By default, Karpenter and Kubernetes will simply ignore them.
2 changes: 2 additions & 0 deletions website/content/en/docs/concepts/scheduling.md
@@ -184,6 +184,8 @@ Take care to ensure the label domains are correct. A well known label like `karp
| karpenter.k8s.aws/instance-local-nvme | 900 | [AWS Specific] Number of gibibytes of local nvme storage on the instance |
| karpenter.k8s.aws/instance-capability-flex | true | [AWS Specific] Instance with capacity flex |
| karpenter.k8s.aws/instance-tenancy | default | [AWS Specific] Tenancy types include `default`, and `dedicated` |
| karpenter.k8s.aws/placement-group-id | pg-0fa32af67ed0f8da0 | [AWS Specific] The placement group ID |
| karpenter.k8s.aws/placement-group-partition | 7 | [AWS Specific] The partition number of the partition placement group the instance is in |
| topology.k8s.aws/zone-id | use1-az1 | [AWS Specific] Globally consistent [zone id](https://docs.aws.amazon.com/global-infrastructure/latest/regions/az-ids.html) |


4 changes: 2 additions & 2 deletions website/content/en/docs/faq.md
@@ -17,7 +17,7 @@ See [Configuring NodePools]({{< ref "./concepts/#configuring-nodepools" >}}) for
AWS is the first cloud provider supported by Karpenter, although it is designed to be used with other cloud providers as well.

### Can I write my own cloud provider for Karpenter?
Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-provider-aws/tree/v1.10.0/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too.
Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-provider-aws/tree/v1.11.0/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too.

### What operating system nodes does Karpenter deploy?
Karpenter uses the OS defined by the [AMI Family in your EC2NodeClass]({{< ref "./concepts/nodeclasses#specamifamily" >}}).
Expand All @@ -29,7 +29,7 @@ Karpenter has multiple mechanisms for configuring the [operating system]({{< ref
Karpenter is flexible to multi-architecture configurations using [well known labels]({{< ref "./concepts/scheduling/#supported-labels">}}).

### What RBAC access is required?
All the required RBAC rules can be found in the Helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.10.0/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.10.0/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.10.0/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v1.10.0/charts/karpenter/templates/role.yaml) files for details.
All the required RBAC rules can be found in the Helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.11.0/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.11.0/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.11.0/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v1.11.0/charts/karpenter/templates/role.yaml) files for details.

### Can I run Karpenter outside of a Kubernetes cluster?
Yes, as long as the controller has network and IAM/RBAC access to the Kubernetes API and your provider API.
@@ -48,7 +48,7 @@ After setting up the tools, set the Karpenter and Kubernetes version:

```bash
export KARPENTER_NAMESPACE="kube-system"
export KARPENTER_VERSION="1.10.0"
export KARPENTER_VERSION="1.11.0"
export K8S_VERSION="1.35"
```

@@ -115,13 +115,13 @@ See [Enabling Windows support](https://docs.aws.amazon.com/eks/latest/userguide/
As the OCI Helm chart is signed by [Cosign](https://github.com/sigstore/cosign) as part of the release process, you can verify the chart before installing it by running the following command.

```bash
cosign verify public.ecr.aws/karpenter/karpenter:1.10.0 \
cosign verify public.ecr.aws/karpenter/karpenter:1.11.0 \
--certificate-oidc-issuer=https://token.actions.githubusercontent.com \
--certificate-identity-regexp='https://github\.com/aws/karpenter-provider-aws/\.github/workflows/release\.yaml@.+' \
--certificate-github-workflow-repository=aws/karpenter-provider-aws \
--certificate-github-workflow-name=Release \
--certificate-github-workflow-ref=refs/tags/v1.10.0 \
--annotations version=1.10.0
--certificate-github-workflow-ref=refs/tags/v1.11.0 \
--annotations version=1.11.0
```

{{% alert title="DNS Policy Notice" color="warning" %}}
@@ -43,7 +43,8 @@ Resources:
"arn:${AWS::Partition}:ec2:${AWS::Region}::snapshot/*",
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:security-group/*",
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:subnet/*",
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:capacity-reservation/*"
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:capacity-reservation/*",
"arn:${AWS::Partition}:ec2:${AWS::Region}:*:placement-group/*"
],
"Action": [
"ec2:RunInstances",
@@ -315,6 +316,7 @@ Resources:
"ec2:DescribeInstanceTypeOfferings",
"ec2:DescribeInstanceTypes",
"ec2:DescribeLaunchTemplates",
"ec2:DescribePlacementGroups",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSpotPriceHistory",
"ec2:DescribeSubnets"
@@ -22,6 +22,6 @@ dashboardProviders:
dashboards:
default:
capacity-dashboard:
url: https://karpenter.sh/v1.10/getting-started/getting-started-with-karpenter/karpenter-capacity-dashboard.json
url: https://karpenter.sh/v1.11/getting-started/getting-started-with-karpenter/karpenter-capacity-dashboard.json
performance-dashboard:
url: https://karpenter.sh/v1.10/getting-started/getting-started-with-karpenter/karpenter-performance-dashboard.json
url: https://karpenter.sh/v1.11/getting-started/getting-started-with-karpenter/karpenter-performance-dashboard.json
@@ -92,7 +92,7 @@ One for your Karpenter node role and one for your existing node group.
First set the Karpenter release you want to deploy.

```bash
export KARPENTER_VERSION="1.10.0"
export KARPENTER_VERSION="1.11.0"
```

We can now generate a full Karpenter deployment yaml from the Helm chart.
@@ -132,7 +132,7 @@ Now that our deployment is ready we can create the karpenter namespace, create t

## Create default NodePool

We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads. You can refer to some of the [example NodePools](https://github.com/aws/karpenter/tree/v1.10.0/examples/v1) for specific needs.
We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads. You can refer to some of the [example NodePools](https://github.com/aws/karpenter/tree/v1.11.0/examples/v1) for specific needs.

{{% script file="./content/en/{VERSION}/getting-started/migrating-from-cas/scripts/step10-create-nodepool.sh" language="bash" %}}
