
feat: Update provider deprecations #117

Merged Aug 14, 2023 · 8 commits · changes shown from all commits
20 changes: 11 additions & 9 deletions README.md
@@ -99,29 +99,31 @@ resources that lack official modules.

### A note on updating EKS cluster version

-Users can update the EKS cluster version to the latest version offered by AWS. This can be done using the environment variable `eks_cluster_version`. Note that cluster and nodegroup version updates can only be done in increments of one version at a time. For example, if your current cluster version is `1.21` and the latest version available is `1.24` - you'd need to:
+Users can update the EKS cluster version to the latest version offered by AWS. This can be done using the environment variable `eks_cluster_version`. Note that cluster and nodegroup version updates can only be done in increments of one version at a time. For example, if your current cluster version is `1.21` and the latest version available is `1.25` - you'd need to:

-- Update `1.21` to `1.22`, run `terraform apply`,
-- then upgrade to `1.23`, run `tf apply` and
-- finally to `1.24`, run `tf apply`.
-
-You will not be able to upgrade directly from `1.21` to `1.24`.
+1. update the cluster version in the app_eks module from `1.21` to `1.22`
+2. run `terraform apply`
+3. update the cluster version to `1.23`
+4. run `terraform apply`
+5. update the cluster version to `1.24`
+...and so on and so forth.

+Upgrades must be executed in step-wise fashion from one version to the next. You cannot skip versions when upgrading EKS.
<!-- BEGIN_TF_DOCS -->

## Requirements

| Name | Version |
|------|---------|
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | ~> 1.0 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | ~> 3.60 |
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.0 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | ~> 4.6 |
| <a name="requirement_kubernetes"></a> [kubernetes](#requirement\_kubernetes) | ~> 2.6 |

## Providers

| Name | Version |
|------|---------|
| <a name="provider_aws"></a> [aws](#provider\_aws) | ~> 3.60 |
| <a name="provider_aws"></a> [aws](#provider\_aws) | ~> 4.6 |

## Modules

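To make the upgrade note above concrete, here is a minimal sketch of a single upgrade step (the `wandb_infra` module block is borrowed from `examples/public-dns-external/main.tf` below; the registry source is an assumption, and all other arguments stay unchanged between steps):

```hcl
module "wandb_infra" {
  source = "wandb/wandb/aws" # assumed source, not taken from this PR

  # ...all other module arguments unchanged between steps...

  # Bump exactly one minor version per apply: 1.21 -> 1.22 -> 1.23 -> ...
  eks_cluster_version = "1.22"
}
```

Run `terraform apply` after each bump and wait for the cluster and node groups to finish updating before editing the version again.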
4 changes: 2 additions & 2 deletions examples/public-dns-external/main.tf
@@ -28,7 +28,7 @@ module "wandb_infra" {
allowed_inbound_cidr = var.allowed_inbound_cidr
allowed_inbound_ipv6_cidr = ["::/0"]

-  eks_cluster_version            = "1.24"
+  eks_cluster_version            = "1.25"
kubernetes_public_access = true
kubernetes_public_access_cidrs = ["0.0.0.0/0"]

@@ -51,7 +51,7 @@ data "aws_eks_cluster_auth" "app_cluster" {

provider "kubernetes" {
host = data.aws_eks_cluster.app_cluster.endpoint
-  cluster_ca_certificate = base64decode(data.aws_eks_cluster.app_cluster.certificate_authority.0.data)
+  cluster_ca_certificate = base64decode(data.aws_eks_cluster.app_cluster.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.app_cluster.token
}

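The `certificate_authority.0.data` change above is a syntax modernization: Terraform's legacy `.0.` attribute access is deprecated in favor of explicit `[0]` index syntax. A minimal sketch of the same pattern, assuming the `app_cluster` data source from this example:

```hcl
locals {
  # Legacy (deprecated): data.aws_eks_cluster.app_cluster.certificate_authority.0.data
  # Current index syntax:
  cluster_ca = base64decode(data.aws_eks_cluster.app_cluster.certificate_authority[0].data)
}
```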
7 changes: 1 addition & 6 deletions examples/public-dns-external/variables.tf
@@ -26,12 +26,7 @@ variable "wandb_license" {
variable "database_engine_version" {
description = "Version for MySQL Auora"
type = string
-  default     = "8.0.mysql_aurora.3.01.0"
-
-  validation {
-    condition     = contains(["5.7", "8.0.mysql_aurora.3.01.0", "8.0.mysql_aurora.3.02.0"], var.database_engine_version)
-    error_message = "We only support MySQL: \"5.7\"; \"8.0.mysql_aurora.3.01.0\"; \"8.0.mysql_aurora.3.02.0\"."
-  }
+  default     = "8.0.mysql_aurora.3.02.2"
> **Contributor comment:** I'd think we still want validation here that it's 5.7 or 8.
}

variable "database_instance_class" {
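A sketch of the validation the reviewer asks for might look like the following (hypothetical, not part of this PR): instead of pinning an exact list of engine versions, accept anything based on MySQL 5.7 or 8.0.

```hcl
variable "database_engine_version" {
  description = "Version for MySQL Aurora"
  type        = string
  default     = "8.0.mysql_aurora.3.02.2"

  validation {
    # Hypothetical: accept any 5.7- or 8.0-based Aurora MySQL version.
    condition     = can(regex("^(5\\.7|8\\.0)", var.database_engine_version))
    error_message = "Only MySQL 5.7 and 8.0 based Aurora engine versions are supported."
  }
}
```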
2 changes: 1 addition & 1 deletion main.tf
@@ -176,7 +176,7 @@ module "app_lb" {
resource "aws_autoscaling_attachment" "autoscaling_attachment" {
for_each = module.app_eks.autoscaling_group_names
autoscaling_group_name = each.value
-  alb_target_group_arn   = module.app_lb.tg_app_arn
+  lb_target_group_arn    = module.app_lb.tg_app_arn
}

module "redis" {
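For context: AWS provider v4 deprecates `alb_target_group_arn` on `aws_autoscaling_attachment` in favor of the protocol-neutral `lb_target_group_arn`. Reassembled from the diff, the updated resource reads:

```hcl
resource "aws_autoscaling_attachment" "autoscaling_attachment" {
  for_each               = module.app_eks.autoscaling_group_names
  autoscaling_group_name = each.value
  # Renamed from alb_target_group_arn under AWS provider v4.
  lb_target_group_arn    = module.app_lb.tg_app_arn
}
```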
10 changes: 0 additions & 10 deletions modules/app_eks/iam-policy-docs.tf
@@ -1,26 +1,21 @@
data "aws_iam_policy_document" "node_cloudwatch" {
statement {
-    sid = "bb2"
actions = ["cloudwatch:PutMetricData"]
effect = "Allow"
resources = ["*"]
}
}


data "aws_iam_policy_document" "node_IMDSv2" {
statement {
-    sid = "cc3"
actions = ["ec2:DescribeInstanceAttribute"]
effect = "Allow"
resources = ["*"]
}
}

-// todo: refactor --> v1.16.3
data "aws_iam_policy_document" "node_kms" {
statement {
-    sid = "dd4"
actions = [
"kms:Encrypt",
"kms:Decrypt",
@@ -33,21 +28,16 @@ data "aws_iam_policy_document" "node_kms" {
}
}


-// todo: refactor --> v1.16.3
data "aws_iam_policy_document" "node_sqs" {
statement {
-    sid = "ee5"
actions = ["sqs:*"]
effect = "Allow"
resources = var.bucket_sqs_queue_arn == "" || var.bucket_sqs_queue_arn == null ? ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${aws_iam_role.node.name}"] : [var.bucket_sqs_queue_arn]
}
}


data "aws_iam_policy_document" "node_s3" {
statement {
-    sid = "ff6"
actions = ["s3:*"]
effect = "Allow"
resources = [
2 changes: 0 additions & 2 deletions modules/app_eks/iam-roles.tf
@@ -2,8 +2,6 @@ resource "aws_iam_role" "node" {
name = "${var.namespace}-node"
assume_role_policy = data.aws_iam_policy_document.node_assume.json

-  // todo: refactor --> v1.16.3
-  inline_policy {}
}


2 changes: 1 addition & 1 deletion modules/app_eks/variables.tf
@@ -27,8 +27,8 @@ variable "cluster_endpoint_public_access_cidrs" {

variable "cluster_version" {
description = "Indicates AWS EKS cluster version"
+  nullable    = false
> **Contributor comment:** Should we have a sane minimum here?
type = string
default = "1.21"
}

variable "create_elasticache_security_group" {
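One way to express the reviewer's "sane minimum" (a hypothetical sketch, not part of this PR; it assumes 1.x version strings):

```hcl
variable "cluster_version" {
  description = "Indicates AWS EKS cluster version"
  nullable    = false
  type        = string
  default     = "1.21"

  validation {
    # Hypothetical minimum: reject anything older than 1.21.
    condition     = tonumber(split(".", var.cluster_version)[1]) >= 21
    error_message = "cluster_version must be 1.21 or newer."
  }
}
```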
50 changes: 36 additions & 14 deletions modules/file_storage/main.tf
@@ -8,13 +8,29 @@ resource "aws_sqs_queue" "file_storage" {

# Enable long-polling
receive_wait_time_seconds = 10

+  # kms_master_key_id = var.kms_key_arn
}


resource "aws_s3_bucket" "file_storage" {
bucket = "${var.namespace}-file-storage-${random_pet.file_storage.id}"

+  force_destroy = !var.deletion_protection
> **Contributor comment:** Should this also apply to object_lock_enabled?

+  # Configuration error if SQS does not exist
+  # https://aws.amazon.com/premiumsupport/knowledge-center/unable-validate-destination-s3/
+  depends_on = [aws_sqs_queue.file_storage]
+}

resource "aws_s3_bucket_acl" "file_storage" {
depends_on = [aws_s3_bucket_ownership_controls.file_storage]

bucket = aws_s3_bucket.file_storage.id
acl = "private"
}

resource "aws_s3_bucket_cors_configuration" "file_storage" {
bucket = aws_s3_bucket.file_storage.id

cors_rule {
allowed_headers = ["*"]
@@ -23,21 +39,13 @@ resource "aws_s3_bucket" "file_storage" {
expose_headers = ["ETag"]
max_age_seconds = 3000
}
+}

-  server_side_encryption_configuration {
-    rule {
-      apply_server_side_encryption_by_default {
-        kms_master_key_id = var.kms_key_arn
-        sse_algorithm     = var.sse_algorithm
-      }
-    }
resource "aws_s3_bucket_ownership_controls" "file_storage" {
bucket = aws_s3_bucket.file_storage.id
rule {
object_ownership = "BucketOwnerPreferred"
}

-  force_destroy = !var.deletion_protection
-
-  # Configuration error if SQS does not exist
-  # https://aws.amazon.com/premiumsupport/knowledge-center/unable-validate-destination-s3/
-  depends_on = [aws_sqs_queue.file_storage]
}

resource "aws_s3_bucket_public_access_block" "file_storage" {
@@ -48,6 +56,20 @@ resource "aws_s3_bucket_public_access_block" "file_storage" {
ignore_public_acls = true
}

resource "aws_s3_bucket_server_side_encryption_configuration" "file_storage" {
bucket = aws_s3_bucket.file_storage.id

rule {
apply_server_side_encryption_by_default {
kms_master_key_id = var.kms_key_arn
sse_algorithm = var.sse_algorithm
}
}
}




# Give the bucket permission to send messages onto the queue. Looks like we
# overide this value.
resource "aws_sqs_queue_policy" "file_storage" {
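On the reviewer's `object_lock_enabled` question: a hypothetical variant (not part of this PR) could tie Object Lock to the same `deletion_protection` toggle. Note that Object Lock can only be enabled when the bucket is created, so flipping it later forces a replacement:

```hcl
resource "aws_s3_bucket" "file_storage" {
  bucket        = "${var.namespace}-file-storage-${random_pet.file_storage.id}"
  force_destroy = !var.deletion_protection

  # Hypothetical: enable S3 Object Lock whenever deletion protection is on.
  object_lock_enabled = var.deletion_protection

  depends_on = [aws_sqs_queue.file_storage]
}
```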
4 changes: 4 additions & 0 deletions modules/file_storage/outputs.tf
@@ -6,6 +6,10 @@ output "bucket_arn" {
value = aws_s3_bucket.file_storage.arn
}

output "bucket_id" {
value = aws_s3_bucket.file_storage.id
}

output "bucket_region" {
value = aws_s3_bucket.file_storage.region
}
8 changes: 4 additions & 4 deletions modules/redis/main.tf
@@ -3,10 +3,10 @@ locals {
}

resource "aws_elasticache_replication_group" "default" {
-  replication_group_id          = "${var.namespace}-rep-group"
-  replication_group_description = "${var.namespace}-rep-group"
-  number_cache_clusters         = 2
-  port                          = 6379
+  replication_group_id = "${var.namespace}-rep-group"
+  description          = "${var.namespace}-rep-group"
+  num_cache_clusters   = 2
+  port                 = 6379

node_type = var.node_type
parameter_group_name = "default.redis6.x"
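These renames track AWS provider v4, which deprecates `replication_group_description` and `number_cache_clusters` on `aws_elasticache_replication_group` in favor of `description` and `num_cache_clusters`. Reassembled from the diff, the updated resource reads:

```hcl
resource "aws_elasticache_replication_group" "default" {
  replication_group_id = "${var.namespace}-rep-group"
  description          = "${var.namespace}-rep-group" # was replication_group_description
  num_cache_clusters   = 2                            # was number_cache_clusters
  port                 = 6379

  node_type            = var.node_type
  parameter_group_name = "default.redis6.x"
}
```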
5 changes: 5 additions & 0 deletions modules/secure_storage_connector/outputs.tf
@@ -2,6 +2,11 @@ output "bucket" {
value = data.aws_s3_bucket.file_storage
}

output "bucket_id" {
value = data.aws_s3_bucket.file_storage.id
}

output "bucket_kms_key" {
value = var.create_kms_key ? aws_kms_key.key[0] : null
}

4 changes: 2 additions & 2 deletions variables.tf
@@ -233,9 +233,9 @@ variable "network_elasticache_subnet_cidrs" {
# EKS Cluster #
##########################################
variable "eks_cluster_version" {
description = "EKS cluster kubernetes version"
nullable = false
type = string
description = "Indicates EKS cluster version"
default = "1.21"
}

variable "kubernetes_public_access" {