
COMPONENT: CDKTF fetching providers on every deploy #3737

Open
1 task
paymog opened this issue Oct 2, 2024 · 1 comment
Labels
bug (Something isn't working) · new (Un-triaged issue)

Comments

paymog commented Oct 2, 2024

Expected Behavior

CDKTF should not fetch providers if they already exist

Actual Behavior

CDKTF fetches providers on every deploy, even subsequent deploys that did not change providers

Steps to Reproduce

Not sure how to reliably reproduce.

Versions

 ❯❯❯ npx cdktf debug
cdktf debug
language: typescript
cdktf-cli: 0.20.8
node: v18.17.0
cdktf: 0.20.8
constructs: 10.3.0
jsii: null
terraform: 1.9.3
arch: arm64
os: darwin 23.6.0
providers
aws@~> 5.61 (LOCAL)
        terraform provider version: 5.61.0
kubernetes@~> 2.31.0 (LOCAL)
        terraform provider version: 2.31.0
http@~> 2.1.0 (LOCAL)
        terraform provider version: 2.1.0
tls@~> 4.0.5 (LOCAL)
        terraform provider version: 4.0.5
random@~> 3.1.0 (LOCAL)
        terraform provider version: 3.1.3
gavinbunney/kubectl@~> 1.14.0 (LOCAL)
        terraform provider version: 1.14.0
DopplerHQ/doppler@1.1.6 (LOCAL)
        terraform provider version: 1.1.6
helm@undefined (LOCAL)
        terraform provider version: 2.14.0
@cdktf/provider-aws (PREBUILT)
        terraform provider version: 5.61.0
        prebuilt provider version: 19.29.0
        cdktf version: ^0.20.0
@cdktf/provider-kubernetes (PREBUILT)
        terraform provider version: 2.31.0
        prebuilt provider version: 11.6.0
        cdktf version: ^0.20.0

Providers

No response

Gist

No response

Possible Solutions

No response

Workarounds

No response

Anything Else?

It looks like this was addressed in 0.15.3, but I'm on 0.20.0 and I'm still seeing it. Fetching these providers can be very slow. Here are some logs I see:

                    - Reusing previous version of hashicorp/random from the dependency lock file
goldsky-infra-prod  - Reusing previous version of hashicorp/aws from the dependency lock file
goldsky-infra-prod  - Reusing previous version of hashicorp/archive from the dependency lock file
goldsky-infra-prod  - Reusing previous version of hashicorp/helm from the dependency lock file
goldsky-infra-prod  - Reusing previous version of hashicorp/tls from the dependency lock file
goldsky-infra-prod  - Reusing previous version of hashicorp/http from the dependency lock file
goldsky-infra-prod  - Reusing previous version of hashicorp/cloudinit from the dependency lock file
goldsky-infra-prod  - Reusing previous version of hashicorp/kubernetes from the dependency lock file
goldsky-infra-prod  - Reusing previous version of dopplerhq/doppler from the dependency lock file
goldsky-infra-prod  - Using previously-installed dopplerhq/doppler v1.1.6
goldsky-infra-prod  - Using previously-installed hashicorp/random v3.1.3
goldsky-infra-prod  - Using previously-installed hashicorp/cloudinit v2.3.5
goldsky-infra-prod  - Using previously-installed hashicorp/helm v2.14.0
goldsky-infra-prod  - Using previously-installed hashicorp/tls v4.0.5
goldsky-infra-prod  - Using previously-installed hashicorp/http v2.1.0
goldsky-infra-prod  - Using previously-installed hashicorp/kubernetes v2.31.0
goldsky-infra-prod  - Using previously-installed hashicorp/aws v5.61.0
goldsky-infra-prod  - Using previously-installed hashicorp/archive v2.6.0
goldsky-infra-prod  Terraform has been successfully initialized!

                    You may now begin working with Terraform. Try running "terraform plan" to see
                    any changes that are required for your infrastructure. All Terraform commands
                    should now work.

                    If you ever set or change modules or backend configuration for Terraform,
                    rerun this command to reinitialize your working directory. If you forget, other
                    commands will detect it and remind you to do so if necessary.
goldsky-infra-prod  - Fetching dopplerhq/doppler 1.1.6 for linux_amd64...
goldsky-infra-prod  - Retrieved dopplerhq/doppler 1.1.6 for linux_amd64 (self-signed, key ID D13E9DC04ACCB5E6)
goldsky-infra-prod  - Fetching hashicorp/http 2.1.0 for linux_amd64...
goldsky-infra-prod  - Retrieved hashicorp/http 2.1.0 for linux_amd64 (signed by HashiCorp)
goldsky-infra-prod  - Fetching hashicorp/random 3.1.3 for linux_amd64...
goldsky-infra-prod  - Retrieved hashicorp/random 3.1.3 for linux_amd64 (signed by HashiCorp)
goldsky-infra-prod  - Fetching hashicorp/archive 2.6.0 for linux_amd64...
goldsky-infra-prod  - Retrieved hashicorp/archive 2.6.0 for linux_amd64 (signed by HashiCorp)
goldsky-infra-prod  - Fetching hashicorp/kubernetes 2.31.0 for linux_amd64...
goldsky-infra-prod  - Retrieved hashicorp/kubernetes 2.31.0 for linux_amd64 (signed by HashiCorp)
goldsky-infra-prod  - Fetching hashicorp/helm 2.14.0 for linux_amd64...
goldsky-infra-prod  - Retrieved hashicorp/helm 2.14.0 for linux_amd64 (signed by HashiCorp)
goldsky-infra-prod  - Fetching hashicorp/cloudinit 2.3.5 for linux_amd64...
goldsky-infra-prod  - Retrieved hashicorp/cloudinit 2.3.5 for linux_amd64 (signed by HashiCorp)
goldsky-infra-prod  - Fetching hashicorp/aws 5.61.0 for linux_amd64...
goldsky-infra-prod  - Retrieved hashicorp/aws 5.61.0 for linux_amd64 (signed by HashiCorp)
goldsky-infra-prod  - Fetching hashicorp/tls 4.0.5 for linux_amd64...
goldsky-infra-prod  - Retrieved hashicorp/tls 4.0.5 for linux_amd64 (signed by HashiCorp)
goldsky-infra-prod  - Obtained hashicorp/cloudinit checksums for linux_amd64; All checksums for this platform were already tracked in the lock file
                    - Obtained hashicorp/archive checksums for linux_amd64; All checksums for this platform were already tracked in the lock file
                    - Obtained hashicorp/http checksums for linux_amd64; All checksums for this platform were already tracked in the lock file
                    - Obtained hashicorp/kubernetes checksums for linux_amd64; All checksums for this platform were already tracked in the lock file
                    - Obtained hashicorp/tls checksums for linux_amd64; All checksums for this platform were already tracked in the lock file
                    - Obtained hashicorp/random checksums for linux_amd64; All checksums for this platform were already tracked in the lock file
                    - Obtained hashicorp/aws checksums for linux_amd64; All checksums for this platform were already tracked in the lock file
                    - Obtained dopplerhq/doppler checksums for linux_amd64; All checksums for this platform were already tracked in the lock file
                    - Obtained hashicorp/helm checksums for linux_amd64; All checksums for this platform were already tracked in the lock file
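
The logs above show `terraform init` reusing lock-file entries and previously-installed providers, yet a later step still fetches every provider for linux_amd64. A possible mitigation sketch (editor's note, not from the issue: `TF_PLUGIN_CACHE_DIR` is a documented Terraform CLI setting, but whether it avoids the re-fetch CDKTF triggers here is an assumption):

```shell
# Editor's sketch of a possible mitigation, not confirmed for this issue:
# Terraform's plugin cache lets `terraform init` reuse already-downloaded
# providers instead of fetching them from the registry on every run.
mkdir -p "$HOME/.terraform.d/plugin-cache"
export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"
# cdktf shells out to terraform, so subsequent `cdktf deploy` runs should
# find providers in the cache rather than re-fetching them — assuming the
# fetch in question goes through terraform init and respects the cache.
```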

References

Help Wanted

  • I'm interested in contributing a fix myself

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
paymog added the bug (Something isn't working) and new (Un-triaged issue) labels on Oct 2, 2024

lostick commented Oct 15, 2024

I'm also experiencing the same issue on my side; it makes the init stage unnecessarily slow.
I believe this issue is also related to #3622
