Is there an existing issue for this?
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Terraform Version
1.4.0
AzureRM Provider Version
3.47.0
Affected Resource(s)/Data Source(s)
azurerm_kubernetes_cluster
Terraform Configuration Files
resource "azurerm_kubernetes_cluster" "k8s" {
  name                              = "skutest-saas-${var.environment}"
  location                          = data.azurerm_resource_group.saas.location
  resource_group_name               = data.azurerm_resource_group.saas.name
  dns_prefix                        = "k8s"
  kubernetes_version                = "1.24"
  role_based_access_control_enabled = true
  automatic_channel_upgrade         = "patch"
  sku_tier                          = "Standard"

  api_server_access_profile {
    authorized_ip_ranges = concat(var.api_server_authorized_ip_ranges, ["${data.azurerm_public_ip.loadbalancer-saas-buildagent.ip_address}/32"])
  }

  identity {
    type = "SystemAssigned"
  }

  default_node_pool {
    name                         = "system"
    enable_auto_scaling          = true
    min_count                    = var.min_system_agent_count
    max_count                    = var.max_system_agent_count
    vm_size                      = "Standard_E4ds_v5"
    os_disk_type                 = "Ephemeral"
    max_pods                     = 110
    only_critical_addons_enabled = true
    vnet_subnet_id               = azurerm_subnet.k8s-subnet.id
  }

  network_profile {
    load_balancer_sku  = "standard"
    network_plugin     = "azure"
    network_policy     = "calico"
    service_cidr       = "10.140.0.0/14"
    docker_bridge_cidr = "172.17.0.1/16"
    dns_service_ip     = "10.140.0.101"
  }

  maintenance_window {
    allowed {
      day   = "Monday"
      hours = [0, 23]
    }
    allowed {
      day   = "Tuesday"
      hours = [0, 23]
    }
    allowed {
      day   = "Wednesday"
      hours = [0, 23]
    }
    allowed {
      day   = "Thursday"
      hours = [0, 23]
    }
    allowed {
      day   = "Friday"
      hours = [0, 23]
    }
    allowed {
      day   = "Saturday"
      hours = [0, 23]
    }
    allowed {
      day   = "Sunday"
      hours = [0, 23]
    }
  }

  lifecycle {
    prevent_destroy = true
  }

  tags = {
  }
}
Debug Output/Panic Output
╷
│ Error: updating Managed Cluster (Subscription: "SUBSCRIPTIONID"
│ Resource Group Name: "RESOURCEGROUPNAME"
│ Managed Cluster Name: "CLUSTERNAME"): managedclusters.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="SkuNotAvailable" Message="Standard managed cluster SKU tier is invalid. 'Paid' has been replaced by 'Standard' since v2023-02-01. "
│
│ with azurerm_kubernetes_cluster.k8s,
│ on k8s.tf line 1, in resource "azurerm_kubernetes_cluster" "k8s":
│ 1: resource "azurerm_kubernetes_cluster" "k8s" {
│
╵
Expected Behaviour
The value in the Terraform state is updated from Paid to Standard without an error.
Actual Behaviour
Azure returns an error message stating that Paid has been replaced by Standard, and the apply fails.
Steps to Reproduce
- terraform apply with sku_tier = "Paid"
- change the value to sku_tier = "Standard" and run terraform apply again
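A minimal sketch of the reproduction; resource names, location, and sizes are placeholders, only the sku_tier change matters:

```hcl
# Hypothetical minimal reproduction - all names and values are placeholders.
resource "azurerm_kubernetes_cluster" "repro" {
  name                = "sku-repro"
  location            = "westeurope"
  resource_group_name = "example-rg"
  dns_prefix          = "repro"

  # Step 1: apply with sku_tier = "Paid" (accepted before v3.46.0).
  # Step 2: change to "Standard" and apply again - the update then
  # fails with SkuNotAvailable instead of migrating the state value.
  sku_tier = "Standard"

  default_node_pool {
    name       = "system"
    node_count = 1
    vm_size    = "Standard_D2s_v3"
  }

  identity {
    type = "SystemAssigned"
  }
}
```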
Important Factoids
No response
References
#20734 — sku_tier "Standard" was introduced in azurerm v3.46.0