docs: Correct spelling mistakes (terraform-aws-modules#2334)
bryantbiggs authored Dec 8, 2022
1 parent 7124d76 commit ca03fd9
Showing 9 changed files with 21 additions and 21 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -37,7 +37,7 @@ The examples provided under `examples/` provide a comprehensive suite of configu
- [EKS Managed Node Group](https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html)
- [Self Managed Node Group](https://docs.aws.amazon.com/eks/latest/userguide/worker.html)
- [Fargate Profile](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html)
-- Support for creating Karpenter related AWS infrastruture resources (e.g. IAM roles, SQS queue, EventBridge rules, etc.)
+- Support for creating Karpenter related AWS infrastructure resources (e.g. IAM roles, SQS queue, EventBridge rules, etc.)
- Support for custom AMI, custom launch template, and custom user data including custom user data template
- Support for Amazon Linux 2 EKS Optimized AMI and Bottlerocket nodes
- Windows based node support is limited to a default user data template that is provided, due to the lack of Windows support and the manual steps required to provision Windows based EKS nodes
12 changes: 6 additions & 6 deletions docs/UPGRADE-19.0.md
@@ -8,7 +8,7 @@ Please consult the `examples` directory for reference example configurations. If
- Minimum supported version of the Terraform AWS provider updated to v4.45 to support the latest features provided via the resources utilized.
- Minimum supported version of Terraform updated to v1.0
- The individual security group created per EKS managed node group or self managed node group has been removed. This configuration was mostly unused and would often cause confusion ("Why is there an empty security group attached to my nodes?"). This functionality can easily be replicated by users providing one or more externally created security groups to attach to nodes launched from the node group.
-- Previously, `var.iam_role_additional_policies` (one for each of the following: cluster IAM role, EKS managed node group IAM role, self-managed node group IAM role, and Fargate Profile IAM role) accepted a list of strings. This worked well for policies that already existed but failed for policies being created at the same time as the cluster due to the well known issue of unkown values used in a `for_each` loop. To rectify this issue in `v19.x`, two changes were made:
+- Previously, `var.iam_role_additional_policies` (one for each of the following: cluster IAM role, EKS managed node group IAM role, self-managed node group IAM role, and Fargate Profile IAM role) accepted a list of strings. This worked well for policies that already existed but failed for policies being created at the same time as the cluster due to the well known issue of unknown values used in a `for_each` loop. To rectify this issue in `v19.x`, two changes were made:
1. `var.iam_role_additional_policies` was changed from type `list(string)` to type `map(string)` -> this is a breaking change. More information on managing this change can be found below, under `Terraform State Moves`
2. The logic used in the root module for this variable was changed to replace the use of `try()` with `lookup()`. More details on why can be found [here](https://github.com/clowdhaus/terraform-for-each-unknown)
- The cluster name has been removed from the Karpenter module event rule names. Due to long cluster names being appended to the provided naming scheme, the cluster name has moved to a `ClusterName` tag and the event rule name is now a prefix. This guarantees that users can have multiple instances of Karpenter with their respective event rules/SQS queues without name collisions, while also still being able to identify which queues and event rules belong to which cluster.
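To illustrate the first change above, a minimal sketch of the new map form, where the map key is a static string known at plan time (the `additional` key and the `aws_iam_policy.additional` resource are illustrative placeholders):

```hcl
# v18.x accepted a list of strings, which broke when the policy was created
# in the same apply as the cluster:
#   iam_role_additional_policies = [aws_iam_policy.additional.arn]

# v19.x accepts a map of strings; only the values may be unknown at plan time
iam_role_additional_policies = {
  additional = aws_iam_policy.additional.arn
}
```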
@@ -26,7 +26,7 @@ Please consult the `examples` directory for reference example configurations. If

### Modified

-- `cluster_security_group_additional_rules` and `node_security_group_additional_rules` have been modified to use `lookup()` instead of `try()` to avoid the well known issue of [unkown values within a `for_each` loop](https://github.com/hashicorp/terraform/issues/4149)
+- `cluster_security_group_additional_rules` and `node_security_group_additional_rules` have been modified to use `lookup()` instead of `try()` to avoid the well known issue of [unknown values within a `for_each` loop](https://github.com/hashicorp/terraform/issues/4149)
- The default cluster security group rules no longer include egress rules for TCP/443 and TCP/10250 to node groups, since the cluster primary security group includes a default rule allowing ALL to `0.0.0.0/0`/`::/0`
- The default node security group egress rules have been removed, since the default security group settings include an egress rule allowing ALL to `0.0.0.0/0`/`::/0`
- `block_device_mappings` previously required a map of maps but has since changed to an array of maps. Users can remove the outer key for each block device mapping and replace the outermost map `{}` with an array `[]`. There are no state changes required for this change.
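A minimal sketch of the `block_device_mappings` change described in the item above; the device name and EBS settings are placeholder values:

```hcl
# v18.x - map of maps, keyed by an arbitrary name
# block_device_mappings = {
#   xvda = {
#     device_name = "/dev/xvda"
#     ebs = {
#       volume_size = 75
#       volume_type = "gp3"
#     }
#   }
# }

# v19.x - array of maps; the outer key is dropped and `{}` becomes `[]`
block_device_mappings = [
  {
    device_name = "/dev/xvda"
    ebs = {
      volume_size = 75
      volume_type = "gp3"
    }
  }
]
```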
@@ -101,7 +101,7 @@ Please consult the `examples` directory for reference example configurations. If
- `default_instance_warmup`
- `force_delete_warm_pool`
- EKS managed node groups:
-- `use_custom_launch_template` was added to better clarify how users can switch betweeen a custom launch template or the default launch template provided by the EKS managed node group. Previously, to achieve this same functionality of using the default launch template, users needed to set `create_launch_template = false` and `launch_template_name = ""` which is not very intuitive.
+- `use_custom_launch_template` was added to better clarify how users can switch between a custom launch template or the default launch template provided by the EKS managed node group. Previously, to achieve this same functionality of using the default launch template, users needed to set `create_launch_template = false` and `launch_template_name = ""` which is not very intuitive.
- `launch_template_id` for use with an existing/externally created launch template (Ref: https://github.com/terraform-aws-modules/terraform-aws-autoscaling/pull/204)
- `maintenance_options`
- `private_dns_name_options`
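A minimal sketch of the new `use_custom_launch_template` flag on an EKS managed node group (the node group name and surrounding block are illustrative):

```hcl
eks_managed_node_groups = {
  example = {
    # Use the default launch template provided by the EKS managed node group
    # service rather than a module-created custom launch template
    use_custom_launch_template = false

    # v18.x equivalent that this replaces:
    # create_launch_template = false
    # launch_template_name   = ""
  }
}
```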
@@ -121,7 +121,7 @@ Please consult the `examples` directory for reference example configurations. If
6. Added outputs:
-- `cluster_name` - The `cluster_id` currently set by the AWS provider is actually the cluster name, but in the future this will change and there will be a distinction between the `cluster_name` and `clsuter_id`. [Reference](https://github.com/hashicorp/terraform-provider-aws/issues/27560)
+- `cluster_name` - The `cluster_id` currently set by the AWS provider is actually the cluster name, but in the future this will change and there will be a distinction between the `cluster_name` and `cluster_id`. [Reference](https://github.com/hashicorp/terraform-provider-aws/issues/27560)
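A small sketch of consuming the new output (the wrapping output name is illustrative):

```hcl
output "eks_cluster_name" {
  # Prefer cluster_name going forward; cluster_id will eventually refer to the
  # cluster's ID once the AWS provider distinguishes the two
  value = module.eks.cluster_name
}
```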
## Upgrade Migrations
@@ -165,7 +165,7 @@ EKS managed node groups on `v18.x` by default create a security group that does
2. Once the node group security group(s) have been removed, you can update your module definition to specify the `v19.x` version of the module
3. Run `terraform init -upgrade=true` to update your configuration and pull in the v19 changes
4. Using the documentation provided above, update your module definition to reflect the changes in the module from `v18.x` to `v19.x`. You can utilize `terraform plan` as you go to help highlight any changes that you wish to make. See below for `terraform state mv ...` commands related to the use of `iam_role_additional_policies`. If you are not providing any values to these variables, you can skip this section.
-5. Once you are satisifed with the changes and the `terraform plan` output, you can apply the changes to sync your infrastructure with the updated module definition (or vice versa).
+5. Once you are satisfied with the changes and the `terraform plan` output, you can apply the changes to sync your infrastructure with the updated module definition (or vice versa).
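For steps 2 and 3, pinning the module to the new major version might look like the following sketch (the version constraint shown is illustrative):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0"

  # ... remaining arguments updated per the notes above ...
}
```

Running `terraform init -upgrade=true` afterwards pulls the new module version into the working directory.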
### Diff of Before (v18.x) vs After (v19.x)
@@ -418,7 +418,7 @@ Where `"<POLICY_ARN>"` is specified, this should be replaced with the full ARN o

```hcl
...
-# This is demonstrating the cluster IAM role addtional policies
+# This is demonstrating the cluster IAM role additional policies
iam_role_additional_policies = {
additional = aws_iam_policy.additional.arn
}
```
2 changes: 1 addition & 1 deletion examples/eks_managed_node_group/README.md
@@ -14,7 +14,7 @@ See the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/man

## Container Runtime & User Data

-When using the default AMI provided by the EKS Managed Node Group service (i.e. - not specifying a value for `ami_id`), users should be aware of the limitations of configuring the node bootstrap process via user data. Due to not having direct access to the bootrap.sh script invocation and therefore its configuration flags (this is provided by the EKS Managed Node Group service in the node user data), a workaround for ensuring the appropriate configuration settings is shown below. The following example shows how to inject configuration variables ahead of the merged user data provided by the EKS Managed Node Group service as well as how to enable the containerd runtime using this approach. More details can be found [here](https://github.com/awslabs/amazon-eks-ami/issues/844).
+When using the default AMI provided by the EKS Managed Node Group service (i.e. - not specifying a value for `ami_id`), users should be aware of the limitations of configuring the node bootstrap process via user data. Due to not having direct access to the bootstrap.sh script invocation and therefore its configuration flags (this is provided by the EKS Managed Node Group service in the node user data), a workaround for ensuring the appropriate configuration settings is shown below. The following example shows how to inject configuration variables ahead of the merged user data provided by the EKS Managed Node Group service as well as how to enable the containerd runtime using this approach. More details can be found [here](https://github.com/awslabs/amazon-eks-ami/issues/844).

```hcl
...
```
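A hedged sketch of that approach, assuming an EKS managed node group using the module's `pre_bootstrap_user_data` argument and an EKS optimized AMI whose `bootstrap.sh` contains a `set -o errexit` line; the exported variables are illustrative:

```hcl
eks_managed_node_groups = {
  example = {
    pre_bootstrap_user_data = <<-EOT
      #!/bin/bash
      set -ex
      # Write variables that bootstrap.sh should pick up before the merged,
      # EKS-supplied user data invokes it (e.g. selecting the containerd runtime)
      cat <<-EOF > /etc/profile.d/bootstrap.sh
      export CONTAINER_RUNTIME="containerd"
      export USE_MAX_PODS=false
      EOF
      # Source those variables at the top of the bootstrap script
      sed -i '/^set -o errexit/a\\nsource /etc/profile.d/bootstrap.sh' /etc/eks/bootstrap.sh
    EOT
  }
}
```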
4 changes: 2 additions & 2 deletions examples/eks_managed_node_group/main.tf
@@ -130,7 +130,7 @@ module "eks" {
ami_id = data.aws_ami.eks_default_bottlerocket.image_id
platform = "bottlerocket"

-# Use module user data template to boostrap
+# Use module user data template to bootstrap
enable_bootstrap_user_data = true
# This will get added to the template
bootstrap_extra_args = <<-EOT
@@ -165,7 +165,7 @@ module "eks" {
# Current default AMI used by managed node groups - pseudo "custom"
ami_id = data.aws_ami.eks_default_arm.image_id

-# This will ensure the boostrap user data is used to join the node
+# This will ensure the bootstrap user data is used to join the node
# By default, EKS managed node groups will not append bootstrap script;
# this adds it back in using the default template provided by the module
# Note: this assumes the AMI provided is an EKS optimized AMI derivative
6 changes: 3 additions & 3 deletions examples/fargate_profile/main.tf
@@ -152,7 +152,7 @@ resource "null_resource" "remove_default_coredns_deployment" {
}

# We are removing the deployment provided by the EKS service and replacing it through the self-managed CoreDNS Helm addon
-# However, we are maintaing the existing kube-dns service and annotating it for Helm to assume control
+# However, we are maintaining the existing kube-dns service and annotating it for Helm to assume control
command = <<-EOT
kubectl --namespace kube-system delete deployment coredns --kubeconfig <(echo $KUBECONFIG | base64 --decode)
EOT
Expand All @@ -168,7 +168,7 @@ resource "null_resource" "modify_kube_dns" {
KUBECONFIG = base64encode(local.kubeconfig)
}

-# We are maintaing the existing kube-dns service and annotating it for Helm to assume control
+# We are maintaining the existing kube-dns service and annotating it for Helm to assume control
command = <<-EOT
echo "Setting implicit dependency on ${module.eks.fargate_profiles["kube_system"].fargate_profile_pod_execution_role_arn}"
kubectl --namespace kube-system annotate --overwrite service kube-dns meta.helm.sh/release-name=coredns --kubeconfig <(echo $KUBECONFIG | base64 --decode)
@@ -223,7 +223,7 @@ resource "helm_release" "coredns" {
]

depends_on = [
-# Need to ensure the CoreDNS updates are peformed before provisioning
+# Need to ensure the CoreDNS updates are performed before provisioning
null_resource.modify_kube_dns
]
}
6 changes: 3 additions & 3 deletions examples/karpenter/main.tf
@@ -292,7 +292,7 @@ resource "null_resource" "remove_default_coredns_deployment" {
}

# We are removing the deployment provided by the EKS service and replacing it through the self-managed CoreDNS Helm addon
-# However, we are maintaing the existing kube-dns service and annotating it for Helm to assume control
+# However, we are maintaining the existing kube-dns service and annotating it for Helm to assume control
command = <<-EOT
kubectl --namespace kube-system delete deployment coredns --kubeconfig <(echo $KUBECONFIG | base64 --decode)
EOT
Expand All @@ -308,7 +308,7 @@ resource "null_resource" "modify_kube_dns" {
KUBECONFIG = base64encode(local.kubeconfig)
}

-# We are maintaing the existing kube-dns service and annotating it for Helm to assume control
+# We are maintaining the existing kube-dns service and annotating it for Helm to assume control
command = <<-EOT
echo "Setting implicit dependency on ${module.eks.fargate_profiles["kube_system"].fargate_profile_pod_execution_role_arn}"
kubectl --namespace kube-system annotate --overwrite service kube-dns meta.helm.sh/release-name=coredns --kubeconfig <(echo $KUBECONFIG | base64 --decode)
@@ -363,7 +363,7 @@ resource "helm_release" "coredns" {
]

depends_on = [
-# Need to ensure the CoreDNS updates are peformed before provisioning
+# Need to ensure the CoreDNS updates are performed before provisioning
null_resource.modify_kube_dns
]
}
2 changes: 1 addition & 1 deletion main.tf
@@ -283,7 +283,7 @@ resource "aws_iam_role" "this" {
force_detach_policies = true

# https://github.com/terraform-aws-modules/terraform-aws-eks/issues/920
-# Resources running on the cluster are still generaring logs when destroying the module resources
+# Resources running on the cluster are still generating logs when destroying the module resources
# which results in the log group being re-created even after Terraform destroys it. Removing the
# ability for the cluster role to create the log group prevents this log group from being re-created
# outside of Terraform due to services still generating logs during destroy process
2 changes: 1 addition & 1 deletion modules/_user_data/main.tf
@@ -70,7 +70,7 @@ data "cloudinit_config" "linux_eks_managed_node_group" {
gzip = false
boundary = "//"

-# Prepend to existing user data suppled by AWS EKS
+# Prepend to existing user data supplied by AWS EKS
part {
content_type = "text/x-shellscript"
content = var.pre_bootstrap_user_data
6 changes: 3 additions & 3 deletions node_groups.tf
@@ -47,7 +47,7 @@ data "aws_iam_policy_document" "cni_ipv6_policy" {
}
}

-# Note - we are keeping this to a minimim in hopes that its soon replaced with an AWS managed policy like `AmazonEKS_CNI_Policy`
+# Note - we are keeping this to a minimum in hopes that its soon replaced with an AWS managed policy like `AmazonEKS_CNI_Policy`
resource "aws_iam_policy" "cni_ipv6_policy" {
count = var.create && var.create_cni_ipv6_iam_policy ? 1 : 0

@@ -106,7 +106,7 @@ locals {
}
}

-node_secuirty_group_recommended_rules = { for k, v in {
+node_security_group_recommended_rules = { for k, v in {
ingress_nodes_ephemeral = {
description = "Node to node ingress on ephemeral ports"
protocol = "tcp"
@@ -168,7 +168,7 @@ resource "aws_security_group" "node" {
resource "aws_security_group_rule" "node" {
for_each = { for k, v in merge(
local.node_security_group_rules,
-local.node_secuirty_group_recommended_rules,
+local.node_security_group_recommended_rules,
var.node_security_group_additional_rules,
) : k => v if local.create_node_sg }
