Terraform modules which create AWS EKS (Kubernetes) resources with an opinionated configuration targeting Camunda 8, an AWS Aurora RDS cluster, and an OpenSearch domain.
The related guide describes usage in more detail. Consider installing Camunda 8 via this guide after deploying the AWS EKS cluster.
Below is a simple example configuration that deploys an EKS cluster, an Aurora PostgreSQL database, and an OpenSearch domain.
See the AWS EKS Cluster inputs, AWS Aurora RDS inputs, and AWS OpenSearch inputs for further configuration options and how they affect the cluster and database creation.
```hcl
module "eks_cluster" {
  source = "github.com/camunda/camunda-tf-eks-module/modules/eks-cluster"

  region = "eu-central-1"
  name   = "cluster-name"

  cluster_service_ipv4_cidr = "10.190.0.0/16"
  cluster_node_ipv4_cidr    = "10.192.0.0/16"
}

module "postgresql" {
  source = "github.com/camunda/camunda-tf-eks-module/modules/aurora"

  engine_version             = "15.4"
  auto_minor_version_upgrade = false
  cluster_name               = "cluster-name-postgresql"

  username = "username"
  password = "password"

  vpc_id      = module.eks_cluster.vpc_id
  subnet_ids  = module.eks_cluster.private_subnet_ids
  cidr_blocks = concat(module.eks_cluster.private_vpc_cidr_blocks, module.eks_cluster.public_vpc_cidr_blocks)

  instance_class   = "db.t3.medium"
  iam_auth_enabled = true

  depends_on = [module.eks_cluster]
}

module "opensearch_domain" {
  source = "github.com/camunda/camunda-tf-eks-module/modules/opensearch"

  domain_name        = "my-opensearch-domain"
  subnet_ids         = module.eks_cluster.private_subnet_ids
  security_group_ids = module.eks_cluster.security_group_ids
  vpc_id             = module.eks_cluster.vpc_id
  cidr_blocks        = concat(module.eks_cluster.private_vpc_cidr_blocks, module.eks_cluster.public_vpc_cidr_blocks)

  instance_type   = "t3.small.search"
  instance_count  = 3
  ebs_volume_size = 100

  advanced_security_enabled                        = true
  advanced_security_internal_user_database_enabled = true
  advanced_security_master_user_name               = "admin"
  advanced_security_master_user_password           = "password"

  depends_on = [module.eks_cluster]
}
```
You can automate the deployment and deletion of the EKS cluster and Aurora database using GitHub Actions.
Note: This is recommended only for development and testing purposes, not for production use.
Below are examples of GitHub Actions workflows for deploying and deleting these resources.
For more details, refer to the corresponding EKS Actions README, Aurora Actions README, OpenSearch Actions README, and Cleanup Actions README.
An example workflow can be found here.
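As a rough illustration, such a workflow could look like the sketch below. This is a hypothetical example, not the project's actual workflow: the trigger, secret name `AWS_ROLE_ARN`, region, and Terraform steps are all assumptions — consult the linked Actions READMEs for the real inputs.

```yaml
# Hypothetical deploy workflow sketch; adapt to the actual composite actions
# documented in the EKS/Aurora/OpenSearch Actions READMEs.
name: Deploy EKS with Terraform

on:
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write # required for OIDC-based AWS authentication
      contents: read
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: eu-central-1
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }} # assumed secret name

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Terraform apply
        run: |
          terraform init
          terraform apply -auto-approve
```

A matching deletion workflow would run `terraform destroy -auto-approve` with the same setup steps.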
This documentation provides a step-by-step guide to creating an EKS cluster, an Aurora RDS instance, and an OpenSearch domain with IRSA (IAM Roles for Service Accounts) roles using Terraform modules. The modules create the necessary IAM roles and policies for Aurora and OpenSearch. To simplify the configuration, the modules use the outputs of the EKS cluster module to define the IRSA roles and policies.
For further details and more in-depth configuration, it is recommended to refer to the official documentation of each module.
The Aurora module uses the following outputs from the EKS cluster module to define the IRSA role and policy:

- `module.eks_cluster.oidc_provider_arn`: The ARN of the OIDC provider for the EKS cluster.
- `module.eks_cluster.oidc_provider_id`: The ID of the OIDC provider for the EKS cluster.
- `var.account_id`: Your AWS account ID.
- `var.aurora_cluster_name`: The name of the Aurora cluster to access.
- `var.aurora_irsa_username`: The username used to access AuroraDB. This username is different from the superuser. The user must also be created manually in the database to enable the IRSA connection, as described in the steps below.
- `var.aurora_namespace`: The Kubernetes namespace to allow access from.
- `var.aurora_service_account`: The Kubernetes ServiceAccount to allow access from.
You need to define the IAM role trust policy and access policy for Aurora. Here's an example of how to define these policies using the outputs of the EKS cluster module:
```hcl
module "postgresql" {
  # ...

  iam_aurora_access_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "rds-db:connect"
      ],
      "Resource": "arn:aws:rds-db:${module.eks_cluster.region}:${var.account_id}:dbuser:${var.aurora_cluster_name}/${var.aurora_irsa_username}"
    }
  ]
}
EOF

  iam_role_trust_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "${module.eks_cluster.oidc_provider_arn}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${module.eks_cluster.oidc_provider_id}:sub": "system:serviceaccount:${var.aurora_namespace}:${var.aurora_service_account}"
        }
      }
    }
  ]
}
EOF

  iam_aurora_role_name   = "AuroraRole-your-cluster" # ensure uniqueness of this name
  iam_create_aurora_role = true
  iam_auth_enabled       = true

  # ...
}
```
Once the database is up, you will need to connect to it using the superuser credentials defined in the module (`username`, `password`) and create the IRSA user:
```bash
echo "Creating IRSA DB user using admin user"

psql -h "$AURORA_ENDPOINT" -p "$AURORA_PORT" "sslmode=require dbname=$AURORA_DB_NAME user=$AURORA_USERNAME password=$AURORA_PASSWORD" \
  -c "CREATE USER \"${AURORA_USERNAME_IRSA}\" WITH LOGIN;" \
  -c "GRANT rds_iam TO \"${AURORA_USERNAME_IRSA}\";" \
  -c "GRANT rds_superuser TO \"${AURORA_USERNAME_IRSA}\";" \
  -c "GRANT ALL PRIVILEGES ON DATABASE \"${AURORA_DB_NAME}\" TO \"${AURORA_USERNAME_IRSA}\";" \
  -c "SELECT aurora_version();" \
  -c "SELECT version();" -c "\du"
```
The permissions can be adapted as needed. However, the most important permission is `rds_iam`, which is required for using IRSA with the database.
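Once the user exists, a pod assuming the IRSA role can authenticate with a short-lived IAM token instead of a password. The following is a minimal sketch using the AWS CLI; the environment variables and region are placeholders, and it must run with credentials provided by the IRSA role:

```bash
# Generate a short-lived IAM authentication token (valid for 15 minutes)
# and use it as the password for psql. Host, port, user, and region are
# placeholders — adapt them to your deployment.
export PGPASSWORD="$(aws rds generate-db-auth-token \
  --hostname "$AURORA_ENDPOINT" \
  --port "$AURORA_PORT" \
  --username "$AURORA_USERNAME_IRSA" \
  --region eu-central-1)"

psql -h "$AURORA_ENDPOINT" -p "$AURORA_PORT" \
  "sslmode=require dbname=$AURORA_DB_NAME user=$AURORA_USERNAME_IRSA"
```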
A complete example of a pod to create the database is available.
The OpenSearch module uses the following outputs from the EKS cluster module to define the IRSA role and policy:

- `module.eks_cluster.oidc_provider_arn`: The ARN of the OIDC provider for the EKS cluster.
- `module.eks_cluster.oidc_provider_id`: The ID of the OIDC provider for the EKS cluster.
- `var.account_id`: Your AWS account ID.
- `var.opensearch_domain_name`: The name of the OpenSearch domain to access.
- `var.opensearch_namespace`: The Kubernetes namespace to allow access from.
- `var.opensearch_service_account`: The Kubernetes ServiceAccount to allow access from.
```hcl
module "opensearch_domain" {
  # ...

  iam_create_opensearch_role = true
  iam_opensearch_role_name   = "OpenSearchRole-your-cluster" # ensure uniqueness of this name

  iam_opensearch_access_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "es:ESHttpGet",
        "es:ESHttpPut",
        "es:ESHttpPost"
      ],
      "Resource": "arn:aws:es:${module.eks_cluster.region}:${var.account_id}:domain/${var.opensearch_domain_name}/*"
    }
  ]
}
EOF

  iam_role_trust_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "${module.eks_cluster.oidc_provider_arn}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${module.eks_cluster.oidc_provider_id}:sub": "system:serviceaccount:${var.opensearch_namespace}:${var.opensearch_service_account}"
        }
      }
    }
  ]
}
EOF

  # ...
}
```
By defining the IRSA roles and policies using the outputs of the EKS cluster module, you can simplify the configuration and ensure that the roles and policies are created with the correct permissions and trust policies.
Apply the service account definitions to your Kubernetes cluster:

Aurora Service Account

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aurora-service-account
  namespace: <your-namespace>
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<YOUR-ACCOUNT-ID>:role/AuroraRole
```

You can retrieve the role ARN from the module output `aurora_role_arn`.
OpenSearch Service Account

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: opensearch-service-account
  namespace: <your-namespace>
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<YOUR-ACCOUNT-ID>:role/OpenSearchRole
```

You can retrieve the role ARN from the module output `opensearch_role_arn`.
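To check that IRSA is wired up correctly, you can run a short-lived pod under one of these service accounts and inspect the identity it assumes. This is a hypothetical verification pod, not part of the modules; the pod name, namespace, and image are assumptions:

```yaml
# Hypothetical IRSA check: runs under the Aurora service account and prints
# the AWS identity assumed via the projected web identity token.
apiVersion: v1
kind: Pod
metadata:
  name: irsa-check
  namespace: <your-namespace>
spec:
  serviceAccountName: aurora-service-account
  restartPolicy: Never
  containers:
    - name: aws-cli
      image: amazon/aws-cli:latest
      command: ["aws", "sts", "get-caller-identity"]
```

If everything is configured correctly, the output should show an assumed-role ARN matching `aurora_role_arn` rather than the node instance role.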
Please note that the modules have been tested with the Terraform version described in the `.tool-versions` file of this project.