
Create local YAML from kops_kube_config data resource #438

Open
ddelange opened this issue Dec 15, 2021 · 6 comments
@ddelange

ddelange commented Dec 15, 2021

Problem

The kops_kube_config data source example will start causing errors after the first apply, ref hashicorp/terraform#27934

Concretely, using dependent providers:

data "kops_kube_config" "kube_config" {
  cluster_name = kops_cluster.cluster.name
  # ensure the cluster has been launched/updated
  depends_on = [kops_cluster_updater.updater]
}

provider "kubectl" {
  host                   = data.kops_kube_config.kube_config.server
  username               = data.kops_kube_config.kube_config.kube_user
  password               = data.kops_kube_config.kube_config.kube_password
  client_certificate     = data.kops_kube_config.kube_config.client_cert
  client_key             = data.kops_kube_config.kube_config.client_key
  cluster_ca_certificate = data.kops_kube_config.kube_config.ca_cert
  load_config_file       = "false"
}

provider "helm" {
  kubernetes {
    host                   = data.kops_kube_config.kube_config.server
    username               = data.kops_kube_config.kube_config.kube_user
    password               = data.kops_kube_config.kube_config.kube_password
    client_certificate     = data.kops_kube_config.kube_config.client_cert
    client_key             = data.kops_kube_config.kube_config.client_key
    cluster_ca_certificate = data.kops_kube_config.kube_config.ca_cert
  }
}

This will cause errors after the first successful apply, i.e. starting from the first subsequent apply:

kubectl.kubectl_manifest:

│ Error: failed to create kubernetes rest client for read of resource: Get "http://localhost/api?timeout=32s": dial tcp [::1]:80: connect: connection refused

helm.helm_release:

│ Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

Workaround

Manually delete these entries from the tfstate after each apply.
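A sketch of that manual cleanup using the Terraform CLI; the resource addresses below are placeholders for whatever kubectl/helm resources actually exist in the state:

```shell
# List the real addresses first, then remove the stale entries
# so the next plan/apply no longer tries to refresh them.
terraform state list
terraform state rm 'kubectl_manifest.example'   # placeholder address
terraform state rm 'helm_release.example'       # placeholder address
```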

Suggestion

It would be cool to be able to have a yaml_body attribute in the data source, so that we can:

  • Create a local kops_kube_config.yaml file from the data source
  • Provide the dependent providers with that file instead of the dynamic values, like here
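A sketch of what that could look like, assuming a (currently hypothetical) yaml_body attribute on the data source and using the hashicorp/local provider to materialize the file:

```hcl
data "kops_kube_config" "kube_config" {
  cluster_name = kops_cluster.cluster.name
  # ensure the cluster has been launched/updated
  depends_on = [kops_cluster_updater.updater]
}

resource "local_file" "kube_config" {
  # yaml_body is the proposed attribute, it does not exist yet
  content         = data.kops_kube_config.kube_config.yaml_body
  filename        = "${path.module}/kops_kube_config.yaml"
  file_permission = "0600"
}

provider "helm" {
  kubernetes {
    # static path instead of dynamic values, so the provider
    # can be configured even when the state needs a refresh
    config_path = local_file.kube_config.filename
  }
}
```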
@eddycharly
Owner

Hello, what version are you using?

@ddelange
Author

Hi! That was quick :) Not on the latest, now that you say so. How is it relevant? Just curious.

$ terraform version
Terraform v1.0.9
on darwin_amd64
+ provider registry.terraform.io/eddycharly/kops v1.21.2-alpha.2
+ provider registry.terraform.io/gavinbunney/kubectl v1.13.1
+ provider registry.terraform.io/hashicorp/aws v3.58.0
+ provider registry.terraform.io/hashicorp/helm v2.3.0
+ provider registry.terraform.io/hashicorp/http v2.1.0
+ provider registry.terraform.io/hashicorp/local v2.1.0
+ provider registry.terraform.io/hashicorp/tls v3.1.0
+ provider registry.terraform.io/invidian/sshcommand v0.2.2
+ provider registry.terraform.io/rancher/rancher2 v1.21.0

@eddycharly
Owner

You're right, I misunderstood your issue, sorry.

Not sure how to implement a yaml_body attribute; I'll dig into it ASAP. I find it crazy that Terraform only partially supports this, it looks very error-prone.

@eddycharly
Owner

I wonder how Terraform can build a plan when the cluster does not exist yet?

@ddelange
Author

Yes, it was also a big surprise to me that there is no depends_on for providers (ref hashicorp/terraform#2430).

Regarding the plan when there is no cluster yet: it's probably the same as when there is already a cluster, namely the providers are initialized with nulls. For planning that is apparently sufficient, but how Terraform manages to fill the values in just-in-time (by re-initializing the providers?) so that the initial apply actually works is a mystery to me.

The subsequent applies would probably also work if it weren't for the refresh. I've already tried to find a way to defer the refresh of the relevant resources, but no luck.

@ddelange
Author

ddelange commented Dec 21, 2021

My current hack to write the kubeconfig to a local file and use it downstream (the kops executable needs to be available):

resource "kops_cluster_updater" "updater" {
  ...

  provisioner "local-exec" {
    command = "kops export kubeconfig '${self.cluster_name}' --state 's3://${local.state_bucket_name}' --admin --kubeconfig ${local.kops_kube_config_filename}"
  }
}

provider "kubectl" {
  config_path = local.kops_kube_config_filename
}

provider "helm" {
  kubernetes {
    config_path = local.kops_kube_config_filename
  }
}

ref https://kops.sigs.k8s.io/cli/kops_export_kubeconfig/

EDIT: for the helm provider this works (apparently it reads the path lazily when applying a helm_release), but it looks like the kubectl provider reads the config upon provider init, so this workaround won't work for kubectl, ref hashicorp/terraform#2430 (comment).
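One way around the kubectl provider reading the config at init time is the commonly suggested pattern of splitting the cluster and the in-cluster resources into two separate root modules, applied one after the other, so the second module's providers always see concrete values. A minimal sketch, assuming an S3 backend and an exported output (the bucket/key names below are placeholders):

```hcl
# Second root module, applied after the cluster module has been applied.
data "terraform_remote_state" "cluster" {
  backend = "s3"
  config = {
    bucket = "my-tf-state"      # placeholder
    key    = "cluster.tfstate"  # placeholder
    region = "eu-west-1"        # placeholder
  }
}

provider "kubectl" {
  # kube_config_path is an assumed output of the cluster module,
  # e.g. the file written by `kops export kubeconfig`
  config_path = data.terraform_remote_state.cluster.outputs.kube_config_path
}
```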
