This repository has been archived by the owner on Oct 11, 2023. It is now read-only.

No option to override the chart name in the helm release #310

Open
andrewschmidt-a opened this issue Apr 7, 2020 · 8 comments

Comments

@andrewschmidt-a

Describe the bug
When multiple services share the same chart directory (a common deployment pattern), the releases created by azds up all get the same name, so each deployment replaces the previous one.
To Reproduce

  1. Set up two microservices.
  2. Create a chart that would work to deploy both of them.
  3. Make azds yaml files that both reference the same chart directory.
  4. azds up one of the services (yay, it works!)
  5. azds up the other service (yay, it works! ...wait, where did my old one go??)

Expected behavior
I would expect the release names to be unique.
Logs
Attach logs from the following directory:
For Windows: %TEMP%/Azure Dev Spaces
For OSX/Linux: $TMPDIR/Azure Dev Spaces
```
Using dev space 'andrew' with target 'crsliot-aks-dev'
Synchronizing files...37s
Installing Helm chart...
Release "azds-200cac-andrew-mmm-iot-service" has been upgraded.
LAST DEPLOYED: Mon Apr 6 19:10:59 2020
NAMESPACE: andrew
STATUS: DEPLOYED
RESOURCES:
```

Environment Details
Client used (CLI/VS Code/Visual Studio): Azure Dev Spaces CLI
Client's version: 1.0.20190514.3 (API v3.2)
Operating System:


@joe-bethke

I'm experiencing this same issue. When running azds up on a second service (that uses the same parameterized charts), the initial service disappears from the list in azds list-uris and its endpoint is no longer accessible (page 404's). This occurs both when my dev space is a child of our default namespace, and when it is not a child.

@DrEsteban
Contributor

DrEsteban commented Apr 7, 2020

Hey @andrewschmidt-a and @joe-bethke!

Unfortunately this isn't a scenario we support today. However, as a workaround, you can consider using separate charts for each service, but still retain the ability to deploy them together by defining a "parent chart".

In your "parent chart", you can define a requirements.yaml file that declares relative file paths to the 2 "child charts". This concept is described in the Helm docs here: https://v2.helm.sh/docs/helm/#synopsis-4 (The second code block example in that section). This should give you the flexibility to deploy each service independently or together.
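The requirements.yaml approach described above might look like the following sketch (a Helm v2 layout; the chart names, versions, and relative paths are illustrative, not taken from this thread):

```yaml
# requirements.yaml in the hypothetical "parent chart" (Helm v2).
# Each dependency points at a sibling "child chart" directory via a
# relative file:// repository, so the children can still be deployed
# independently or all together through the parent.
dependencies:
  - name: service-a
    version: 0.1.0
    repository: "file://../service-a"
  - name: service-b
    version: 0.1.0
    repository: "file://../service-b"
```

Running `helm dependency update` in the parent chart then vendors both child charts into its `charts/` directory before install.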

@kyleestes

Hi @DrEsteban, thanks for responding on this. Regarding the workaround you suggest, in our case it turns out we actually already utilize a "parent chart" (let's call it the platform chart) that specifies multiple "child charts" in a requirements.yaml file. Each child chart is a .NET Core microservice, but instead of having one Helm chart per microservice, we actually have a single Helm chart (let's call it service) for which we specify parameters unique to the microservice at chart installation time. In the platform chart's requirements.yaml file, we leverage the alias property to disambiguate the microservices (https://v2.helm.sh/docs/developing_charts/#alias-field-in-requirements-yaml).
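The alias-based setup described here might look roughly like this (a sketch with illustrative alias names; only the single shared `service` chart and the `alias` field are taken from the comment above):

```yaml
# requirements.yaml in the "platform" parent chart (Helm v2).
# The same shared "service" chart is declared twice; `alias` gives
# each instance a distinct name so per-microservice values can be
# supplied under `orders:` and `billing:` in values.yaml.
dependencies:
  - name: service
    version: 0.1.0
    repository: "file://../service"
    alias: orders
  - name: service
    version: 0.1.0
    repository: "file://../service"
    alias: billing
```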

So, taking your suggested workaround as inspiration, I am re-configuring my azds.yaml file to specify the path to the platform chart in the install.chart property, and providing child chart parameters in the install.set property.

Do you know if this approach has any hope of working? I am trying it and am running into problems, and it might be because I have no way in the azds.yaml file to parameterize the common Dockerfile we use for building images used in each service chart.
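The re-configuration described above might be sketched like this (an illustrative azds.yaml; the chart path and value keys are assumptions, not confirmed working configuration from this thread):

```yaml
# Hypothetical azds.yaml pointing install.chart at the shared
# "platform" parent chart and passing child-chart parameters
# through install.set, keyed by the child chart's alias.
kind: helm-release
apiVersion: 1.1
build:
  context: .
  dockerfile: Dockerfile
install:
  chart: ../charts/platform
  set:
    orders:
      replicaCount: 1
```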

@DrEsteban
Contributor

@kyleestes Unfortunately I'm not sure that type of strategy will work :(

The root of the issue is that we derive/identify releases based on the Chart.yaml's name: field. So if two debugging sessions are using a chart with the same name, our tooling will assume you're re-running the same service. And neither Helm nor AZDS has a built-in concept of tokenizing or replacing anything in the Chart.yaml.
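To illustrate the collision, consider a shared chart like this (an illustrative Chart.yaml, not from this thread):

```yaml
# Chart.yaml of the single chart reused by every microservice.
apiVersion: v1
name: service        # AZDS derives the release identity from this
version: 0.1.0       # field, so every microservice built from this
description: Common chart reused by all microservices
# chart maps to the same release, and each `azds up` replaces the
# previous service's deployment instead of creating a new one.
```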

That said, we've logged an internal issue to look at ways we can better support this scenario; perhaps with some sort of "nameOverride" field in the azds.yaml.

@kyleestes

kyleestes commented Apr 9, 2020

@DrEsteban thanks for the response, it is as I suspected. I appreciate you all looking into this issue. A suggestion for the property you might add to azds.yaml: consider using install.alias, which would align with Helm's use of the term "alias" to mean "use this name for the Helm chart". Or perhaps simply install.releaseName would be more obvious.

Is there any way I could assist or accelerate implementing this field? If there is an open-source repository to which I can contribute, I'd be happy to do so. Right now, our use of Dev Spaces is largely on hold because of this, short of taking our common Helm chart, copy-pasting it 8 times (that is how many services we have), and setting a unique name in each copy's Chart.yaml (a discouraging prospect, if I'm honest).

@DrEsteban
Contributor

@kyleestes Great suggestions! I've copied those comments to our internal bug.

Thanks so much for the offer, but unfortunately our product code isn't open source. We will be sure to take the fact that you're blocked into account, though, when prioritizing.

@DrEsteban
Contributor

DrEsteban commented Apr 9, 2020

@kyleestes and all, one last suggestion to hopefully unblock you from taking advantage of Dev Spaces:

We have preview functionality we refer to as "Connect" that may allow you to work around this incompatibility with your Helm chart structure. Unlike our traditional scenario, where your chart is deployed with our tooling and builds/runs directly on the cluster, this functionality "connects" your dev machine as a "Pod" in the cluster - proxying inbound and outbound calls and allowing you to build/run your application code natively on your dev machine. There are various options for how to connect, but all of them should allow you to work around your current issue.

Please let us know if you have any questions or run into any issues with this approach!

@kyleestes

Sounds good, thanks for taking this into consideration!
