Please consult the examples
directory for reference example configurations. If you find a bug, please open an issue with supporting configuration to reproduce.
- Minimum supported version of Terraform AWS provider updated to v4.34 to support latest features provided via the resources utilized.
- Minimum supported version of Terraform updated to v1.0
- Individual security group created per EKS managed node group or self-managed node group has been removed. This configuration mostly went unused and would often cause confusion ("Why is there an empty security group attached to my nodes?"). This functionality can easily be replicated by users providing one or more externally created security groups to attach to nodes launched from the node group.
- Previously, `var.iam_role_additional_policies` (one each for the cluster IAM role, EKS managed node group IAM role, self-managed node group IAM role, and Fargate profile IAM role) accepted a list of strings. This worked well for policies that already existed, but failed for policies being created at the same time as the cluster due to the well-known issue of unknown values used in a `for_each` loop. To rectify this issue in `v19.x`, two changes were made:
  - `var.iam_role_additional_policies` was changed from type `list(string)` to type `map(string)` -> this is a breaking change. More information on managing this change can be found below, under Terraform State Moves
  - The logic used in the root module for this variable was changed to replace the use of `try()` with `lookup()`. More details on why can be found here
- Support for setting `preserve` as well as `most_recent` on addons:
  - `preserve` indicates if you want to preserve the created resources when deleting the EKS add-on
  - `most_recent` indicates if you want to use the most recent revision of the add-on or the default version (default)
- `cluster_security_group_additional_rules` and `node_security_group_additional_rules` have been modified to use `lookup()` instead of `try()` to avoid the well-known issue of unknown values within a `for_each` loop
- `block_device_mappings` previously required a map of maps but has since changed to an array of maps. Users can remove the outer key for each block device mapping and replace the outermost map `{}` with an array `[]` (see the sketch following this list). There are no state changes required for this change.
- `node_security_group_ntp_ipv4_cidr_block` previously defaulted to `["0.0.0.0/0"]` and now defaults to `["169.254.169.123/32"]` (reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html)
- `node_security_group_ntp_ipv6_cidr_block` previously defaulted to `["::/0"]` and now defaults to `["fd00:ec2::123/128"]` (reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html)
- `create_kms_key` previously defaulted to `false` and now defaults to `true`. Clusters created with this module now enable secret encryption by default, using a customer managed KMS key created by this module
- `cluster_encryption_config` previously used a type of `list(any)` and now uses a type of `any` -> users can simply remove the outer `[ ... ]` brackets on `v19.x`
- `cluster_encryption_config` previously defaulted to `[]` and now defaults to `{resources = ["secrets"]}` to encrypt secrets by default
- `cluster_endpoint_public_access` previously defaulted to `true` and now defaults to `false`. Clusters created with this module now default to private-only access to the cluster endpoint
- `cluster_endpoint_private_access` previously defaulted to `false` and now defaults to `true`
- The addon configuration now sets `"OVERWRITE"` as the default value for `resolve_conflicts` to ease addon upgrade management. Users can opt out of this by instead setting `"NONE"` as the value for `resolve_conflicts`
- The `kms` module used has been updated from `v1.0.2` to `v1.1.0` - no material changes other than updating to the latest version
- Remove all references of `aws_default_tags` to avoid update conflicts; this is the responsibility of the provider and should be handled at the provider level
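To illustrate the `block_device_mappings` change above, here is a minimal before/after sketch; the device name and EBS settings are illustrative placeholders rather than required values:

```hcl
# v18.x - map of maps (the outer key, e.g. `xvda`, was an arbitrary identifier)
block_device_mappings = {
  xvda = {
    device_name = "/dev/xvda"
    ebs = {
      volume_size = 75
      volume_type = "gp3"
    }
  }
}

# v19.x - array of maps; remove the outer key and replace the outermost `{}` with `[]`
block_device_mappings = [
  {
    device_name = "/dev/xvda"
    ebs = {
      volume_size = 75
      volume_type = "gp3"
    }
  }
]
```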
- Removed variables:
  - Self managed node groups:
    - `create_security_group`
    - `security_group_name`
    - `security_group_use_name_prefix`
    - `security_group_description`
    - `security_group_rules`
    - `security_group_tags`
    - `cluster_security_group_id`
    - `vpc_id`
  - EKS managed node groups:
    - `create_security_group`
    - `security_group_name`
    - `security_group_use_name_prefix`
    - `security_group_description`
    - `security_group_rules`
    - `security_group_tags`
    - `cluster_security_group_id`
    - `vpc_id`
- Renamed variables:
  - N/A
- Added variables:
  - `provision_on_outpost` for Outposts support
  - `outpost_config` for Outposts support
  - `cluster_addons_timeouts` for setting a common set of timeouts for all addons (unless a specific value is provided within the addon configuration)
  - `service_ipv6_cidr` for setting the IPv6 CIDR block for the Kubernetes service addresses
  - Self managed node groups:
    - N/A
  - EKS managed node groups:
    - `use_custom_launch_template` was added to better clarify how users can switch between a custom launch template or the default launch template provided by the EKS managed node group. Previously, to achieve this same functionality of using the default launch template, users needed to set `create_launch_template = false` and `launch_template_name = ""`, which is not very intuitive. (See the sketch after this list.)
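As a rough sketch of how the new variables above can be used (the addon timeout values and the `example` node group key are illustrative only, and other required module arguments are omitted):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0"

  # ... other configuration omitted

  # Common timeouts applied to all addons, unless a timeout is
  # specified within an individual addon's configuration
  cluster_addons_timeouts = {
    create = "25m"
    delete = "10m"
  }

  eks_managed_node_groups = {
    example = {
      # Use the default launch template provided by the EKS managed node group,
      # replacing `create_launch_template = false` + `launch_template_name = ""` from v18.x
      use_custom_launch_template = false
    }
  }
}
```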
- Removed outputs:
  - Self managed node groups:
    - `security_group_arn`
    - `security_group_id`
  - EKS managed node groups:
    - `security_group_arn`
    - `security_group_id`
- Renamed outputs:
  - N/A
- Added outputs:
  - N/A
- Before upgrading your module definition to `v19.x`, please see below for the steps to remove the node group security group(s) from both EKS managed node group(s) and self-managed node group(s) prior to upgrading.

Self-managed node groups on `v18.x` by default create a security group that does not specify any rules. In `v19.x`, this security group has been removed due to the predominant lack of usage (most users rely on the shared node security group). While still using version `v18.x` of your module definition, remove this security group from your node groups by setting `create_security_group = false`.
- If you are currently utilizing this security group, it is recommended to create an additional security group that matches the rules/settings of the security group created by the node group, and specify that security group ID in `vpc_security_group_ids`. Once this is in place, you can proceed with the original security group removal.
- For most users, the security group is not used and can be safely removed. However, deployed instances will have the security group attached to nodes, and it must be disassociated before the security group can be deleted. Because instances are deployed via autoscaling groups, we cannot simply remove the security group from the code and have those changes reflected on the instances. Instead, we have to update the code and then trigger the autoscaling groups to cycle the instances deployed so that new instances are provisioned without the security group attached. You can utilize the `instance_refresh` parameter of autoscaling groups to force nodes to re-deploy when removing the security group, since changes to launch templates automatically trigger an instance refresh. An example configuration is provided below.
  - Add the following to either `self_managed_node_group_defaults` or the individual self-managed node group definitions:

    ```hcl
    create_security_group = false

    instance_refresh = {
      strategy = "Rolling"
      preferences = {
        min_healthy_percentage = 66
      }
    }
    ```
- It is recommended to use the `aws-node-termination-handler` while performing this update. Please refer to the `irsa-autoscale-refresh` example for usage. This will ensure that pods are safely evicted in a controlled manner to avoid service disruptions.
- Once the necessary configurations are in place, you can apply the changes, which will:
  - Create a new launch template (version) without the self-managed node group security group
  - Replace instances based on the `instance_refresh` configuration settings
  - New instances will launch without the self-managed node group security group, and prior instances will be terminated
  - Once the self-managed node group has cycled, the security group will be deleted
EKS managed node groups on `v18.x` by default create a security group that does not specify any rules. In `v19.x`, this security group has been removed due to the predominant lack of usage (most users rely on the shared node security group). While still using version `v18.x` of your module definition, remove this security group from your node groups by setting `create_security_group = false`.
- If you are currently utilizing this security group, it is recommended to create an additional security group that matches the rules/settings of the security group created by the node group, and specify that security group ID in `vpc_security_group_ids`. Once this is in place, you can proceed with the original security group removal.
- EKS managed node groups roll out changes using a rolling update strategy that can be influenced through `update_config`. No additional changes are required for removing the security group created by node groups (unlike self-managed node groups, which should utilize the `instance_refresh` setting of autoscaling groups).
- Once `create_security_group = false` has been set (a minimal sketch is shown after these steps), you can apply the changes, which will:
  - Create a new launch template (version) without the EKS managed node group security group
  - Replace instances based on the `update_config` configuration settings
  - New instances will launch without the EKS managed node group security group, and prior instances will be terminated
  - Once the EKS managed node group has cycled, the security group will be deleted
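As a rough sketch of the configuration described above while still on `v18.x` (the percentage value is illustrative, and the same settings can instead be placed on individual node group definitions):

```hcl
# While still on v18.x of the module definition
eks_managed_node_group_defaults = {
  # Remove the (empty) per-node-group security group ahead of the v19.x upgrade
  create_security_group = false

  # Controls how many nodes may be replaced at once during the rolling update
  update_config = {
    max_unavailable_percentage = 33 # or set `max_unavailable`
  }
}
```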
- Once the node group security group(s) have been removed, you can update your module definition to specify the `v19.x` version of the module.
- Using the documentation provided above, update your module definition to reflect the changes in the module from `v18.x` to `v19.x`. You can utilize `terraform plan` as you go to help highlight any changes that you wish to make. See below for `terraform state mv ...` commands related to the use of `iam_role_additional_policies`. If you are not providing any values to these variables, you can skip this section.
- Once you are satisfied with the changes and the `terraform plan` output, you can apply the changes to sync your infrastructure with the updated module definition (or vice versa).
module "eks" {
source = "terraform-aws-modules/eks/aws"
- version = "~> 18.0"
+ version = "~> 19.0"
cluster_name = local.name
+ cluster_endpoint_public_access = true
- cluster_endpoint_private_access = true # now the default
cluster_addons = {
- resolve_conflicts = "OVERWRITE" # now the default
+ preserve = true
+ most_recent = true
+ timeouts = {
+ create = "25m"
+ delete = "10m"
}
kube-proxy = {}
vpc-cni = {
- resolve_conflicts = "OVERWRITE" # now the default
}
}
  # Encryption key
  create_kms_key = true
- cluster_encryption_config = [{
-   resources = ["secrets"]
- }]
+ cluster_encryption_config = {
+   resources = ["secrets"]
+ }
  kms_key_deletion_window_in_days = 7
  enable_kms_key_rotation         = true

- iam_role_additional_policies = [aws_iam_policy.additional.arn]
+ iam_role_additional_policies = {
+   additional = aws_iam_policy.additional.arn
+ }
  vpc_id                   = module.vpc.vpc_id
  subnet_ids               = module.vpc.private_subnets
  control_plane_subnet_ids = module.vpc.intra_subnets

  # Extend cluster security group rules
  cluster_security_group_additional_rules = {
    egress_nodes_ephemeral_ports_tcp = {
      description                = "To node 1025-65535"
      protocol                   = "tcp"
      from_port                  = 1025
      to_port                    = 65535
      type                       = "egress"
      source_node_security_group = true
    }
  }

  # Extend node-to-node security group rules
- node_security_group_ntp_ipv4_cidr_block = ["169.254.169.123/32"] # now the default
  node_security_group_additional_rules = {
    ingress_self_all = {
      description = "Node to node all ports/protocols"
      protocol    = "-1"
      from_port   = 0
      to_port     = 0
      type        = "ingress"
      self        = true
    }
    egress_all = {
      description      = "Node all egress"
      protocol         = "-1"
      from_port        = 0
      to_port          = 0
      type             = "egress"
      cidr_blocks      = ["0.0.0.0/0"]
      ipv6_cidr_blocks = ["::/0"]
    }
  }

  # Self Managed Node Group(s)
  self_managed_node_group_defaults = {
    vpc_security_group_ids = [aws_security_group.additional.id]
-   iam_role_additional_policies = [aws_iam_policy.additional.arn]
+   iam_role_additional_policies = {
+     additional = aws_iam_policy.additional.arn
+   }
  }

  self_managed_node_groups = {
    spot = {
      instance_type = "m5.large"
      instance_market_options = {
        market_type = "spot"
      }

      pre_bootstrap_user_data = <<-EOT
        echo "foo"
        export FOO=bar
      EOT

      bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"

      post_bootstrap_user_data = <<-EOT
        cd /tmp
        sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
        sudo systemctl enable amazon-ssm-agent
        sudo systemctl start amazon-ssm-agent
      EOT

-     create_security_group          = true
-     security_group_name            = "eks-managed-node-group-complete-example"
-     security_group_use_name_prefix = false
-     security_group_description     = "EKS managed node group complete example security group"
-     security_group_rules           = {}
-     security_group_tags            = {}
    }
  }

  # EKS Managed Node Group(s)
  eks_managed_node_group_defaults = {
    ami_type       = "AL2_x86_64"
    instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"]

    attach_cluster_primary_security_group = true
    vpc_security_group_ids                = [aws_security_group.additional.id]
-   iam_role_additional_policies = [aws_iam_policy.additional.arn]
+   iam_role_additional_policies = {
+     additional = aws_iam_policy.additional.arn
+   }
  }

  eks_managed_node_groups = {
    blue = {}
    green = {
      min_size     = 1
      max_size     = 10
      desired_size = 1

      instance_types = ["t3.large"]
      capacity_type  = "SPOT"

      labels = {
        Environment = "test"
        GithubRepo  = "terraform-aws-eks"
        GithubOrg   = "terraform-aws-modules"
      }

      taints = {
        dedicated = {
          key    = "dedicated"
          value  = "gpuGroup"
          effect = "NO_SCHEDULE"
        }
      }

      update_config = {
        max_unavailable_percentage = 33 # or set `max_unavailable`
      }

-     create_security_group          = true
-     security_group_name            = "eks-managed-node-group-complete-example"
-     security_group_use_name_prefix = false
-     security_group_description     = "EKS managed node group complete example security group"
-     security_group_rules           = {}
-     security_group_tags            = {}

      tags = {
        ExtraTag = "example"
      }
    }
  }

  # Fargate Profile(s)
  fargate_profile_defaults = {
-   iam_role_additional_policies = [aws_iam_policy.additional.arn]
+   iam_role_additional_policies = {
+     additional = aws_iam_policy.additional.arn
+   }
  }

  fargate_profiles = {
    default = {
      name = "default"
      selectors = [
        {
          namespace = "kube-system"
          labels = {
            k8s-app = "kube-dns"
          }
        },
        {
          namespace = "default"
        }
      ]

      tags = {
        Owner = "test"
      }

      timeouts = {
        create = "20m"
        delete = "20m"
      }
    }
  }

  # OIDC Identity provider
  cluster_identity_providers = {
    sts = {
      client_id = "sts.amazonaws.com"
    }
  }

  # aws-auth configmap
  manage_aws_auth_configmap = true

  aws_auth_node_iam_role_arns_non_windows = [
    module.eks_managed_node_group.iam_role_arn,
    module.self_managed_node_group.iam_role_arn,
  ]

  aws_auth_fargate_profile_pod_execution_role_arns = [
    module.fargate_profile.fargate_profile_pod_execution_role_arn
  ]

  aws_auth_roles = [
    {
      rolearn  = "arn:aws:iam::66666666666:role/role1"
      username = "role1"
      groups   = ["system:masters"]
    },
  ]

  aws_auth_users = [
    {
      userarn  = "arn:aws:iam::66666666666:user/user1"
      username = "user1"
      groups   = ["system:masters"]
    },
    {
      userarn  = "arn:aws:iam::66666666666:user/user2"
      username = "user2"
      groups   = ["system:masters"]
    },
  ]

  aws_auth_accounts = [
    "777777777777",
    "888888888888",
  ]

  tags = local.tags
}
```
The following Terraform state move commands are optional but recommended if you are providing additional IAM policies that are to be attached to IAM roles created by this module (cluster IAM role, node group IAM role, Fargate profile IAM role). Because the resources affected are `aws_iam_role_policy_attachment`, in theory you could get away with simply applying the configuration and letting Terraform detach and re-attach the policies. However, during this brief update window you could experience permission failures as the policy is detached and re-attached, and therefore the state move route is recommended.

Where `"<POLICY_ARN>"` is specified, this should be replaced with the full ARN of the policy, and `"<POLICY_MAP_KEY>"` should be replaced with the key used in the `iam_role_additional_policies` map for the associated policy. For example, if you have the following `v19.x` configuration:
```hcl
  ...
  # This is demonstrating the cluster IAM role additional policies
  iam_role_additional_policies = {
    additional = aws_iam_policy.additional.arn
  }
  ...
```
The associated state move command would look similar to the following (albeit with your correct policy ARN):

```sh
terraform state mv 'module.eks.aws_iam_role_policy_attachment.this["arn:aws:iam::111111111111:policy/ex-complete-additional"]' 'module.eks.aws_iam_role_policy_attachment.additional["additional"]'
```
If you are not providing any additional IAM policies, no actions are required.

Repeat for each policy provided in `iam_role_additional_policies`:

```sh
terraform state mv 'module.eks.aws_iam_role_policy_attachment.this["<POLICY_ARN>"]' 'module.eks.aws_iam_role_policy_attachment.additional["<POLICY_MAP_KEY>"]'
```
Where "<NODE_GROUP_KEY>"
is the key used in the eks_managed_node_groups
map for the associated node group. Repeat for each policy provided in iam_role_additional_policies
in either/or eks_managed_node_group_defaults
or the individual node group definitions:
terraform state mv 'module.eks.module.eks_managed_node_group["<NODE_GROUP_KEY>"].aws_iam_role_policy_attachment.this["<POLICY_ARN>"]' 'module.eks.module.eks_managed_node_group["<NODE_GROUP_KEY>"].aws_iam_role_policy_attachment.additional["<POLICY_MAP_KEY>"]'
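For example, for the `blue` node group and the `additional` policy from the configuration above, the command would look similar to the following (the policy ARN shown is illustrative):

```sh
terraform state mv \
  'module.eks.module.eks_managed_node_group["blue"].aws_iam_role_policy_attachment.this["arn:aws:iam::111111111111:policy/ex-complete-additional"]' \
  'module.eks.module.eks_managed_node_group["blue"].aws_iam_role_policy_attachment.additional["additional"]'
```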
Where "<NODE_GROUP_KEY>"
is the key used in the self_managed_node_groups
map for the associated node group. Repeat for each policy provided in iam_role_additional_policies
in either/or self_managed_node_group_defaults
or the individual node group definitions:
terraform state mv 'module.eks.module.self_managed_node_group["<NODE_GROUP_KEY>"].aws_iam_role_policy_attachment.this["<POLICY_ARN>"]' 'module.eks.module.self_managed_node_group["<NODE_GROUP_KEY>"].aws_iam_role_policy_attachment.additional["<POLICY_MAP_KEY>"]'
Where "<FARGATE_PROFILE_KEY>"
is the key used in the fargate_profiles
map for the associated profile. Repeat for each policy provided in iam_role_additional_policies
in either/or fargate_profile_defaults
or the individual profile definitions:
terraform state mv 'module.eks.module.fargate_profile["<FARGATE_PROFILE_KEY>"].aws_iam_role_policy_attachment.this["<POLICY_ARN>"]' 'module.eks.module.fargate_profile["<FARGATE_PROFILE_KEY>"].aws_iam_role_policy_attachment.additional["<POLICY_MAP_KEY>"]'