
Azure DevOps pipeline to Dev Spaces #296

Open
dahovey opened this issue Mar 24, 2020 · 11 comments

@dahovey

dahovey commented Mar 24, 2020

I am new to Azure Dev Spaces and would like to set up Azure DevOps YAML pipelines to deploy to Dev Spaces within a cluster when code changes. I set up a pipeline modeled after the sample here, as shown off in BRK3039 at Build 2019, but I ran into issues.

I would like to set up a pipeline similar to what was shown at Build 2019:

  • master branch -> master Dev Space
  • pull request -> <source-branch-name> Dev Space (new), parent: master

Additional Dev Spaces would be set up during development, using master as their parent. (A rough sketch of how that branch/PR split might look in the pipeline follows.)
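For reference, a branch/PR split like that might be wired up in the pipeline roughly as follows. This is only a sketch: the stage names and echo placeholders are illustrative, not my actual deploy steps.

    trigger:
      branches:
        include:
          - master

    pr:
      branches:
        include:
          - '*'

    stages:
      # Pushes to master update the root 'master' Dev Space
      - stage: deploy_master_space
        condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
        jobs:
          - job: deploy
            steps:
              - script: echo "deploy to the master Dev Space here"

      # Pull requests deploy to a child space named after the source branch
      - stage: deploy_pr_space
        condition: and(succeeded(), eq(variables['Build.Reason'], 'PullRequest'))
        jobs:
          - job: deploy
            steps:
              - script: echo "deploy to the $(System.PullRequest.SourceBranch) child space here"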

My Issues:

  1. To start with, I created a master Dev Space. For testing, I debugged a .NET Core site using Visual Studio 2019, which worked great (using this same master Dev Space). Then I set up a pipeline modeled after the sample. Here is a snippet of the deployment steps:
          - task: KubernetesManifest@0
            displayName: bake
            name: bake
            inputs:
              action: bake
              helmChart: '$(Agent.TempDirectory)/charts'
              overrides: |
                image.repository:$(containerRegistry)/$(imageRepository)
                image.tag:$(Build.BuildId)
                
          - task: KubernetesManifest@0
            displayName: deploy
            name: deploy
            inputs:
              action: deploy
              namespace: $(k8sNamespace)
              manifests: $(bake.manifestsBundle)

The deploy step fails with error:
The Deployment "api" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"api", "release":"RELEASE-NAME"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable.
I assume this is because the azds client tooling used a different set of label selectors than the clean helm chart that would have been templated during CI/CD. Is it expected that this root master Dev Space would not be debugged, so that the CI/CD pipeline would always be using the 'clean' helm chart?

  2. Running the pipeline with ingress enabled, I received this error in the deploy step for "/home/vsts/work/_temp/...": Ingress.extensions "api" is invalid: spec: Invalid value: []networking.IngressRule(nil): either `backend` or `rules` must be specified. It seems that when using the azds client tooling, the spacePrefix, rootSpacePrefix and hostSuffix are processed from the azds.yaml configuration and used as values for the helm chart. But in a CI/CD pipeline, how can I best get these values set when templating the helm chart (see the sketch below)? The only thing I see is the azds up -d usage here, during the pull request stage.
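For illustration, the kind of extra overrides I imagine needing in the bake step would be something like the following. This is only a sketch: the spaceName/hostSuffix pipeline variables and the ingress.* chart values are placeholders that depend on the chart.

          - task: KubernetesManifest@0
            displayName: bake
            name: bake
            inputs:
              action: bake
              helmChart: '$(Agent.TempDirectory)/charts'
              overrides: |
                image.repository:$(containerRegistry)/$(imageRepository)
                image.tag:$(Build.BuildId)
                ingress.enabled:true
                ingress.hosts[0]:api.$(spaceName)$(hostSuffix)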

Is it not ideal to debug a service in a specific Dev Space and at the same time have DevOps pipelines that deploy to the same Dev Space? Would the azds client tooling be required, as shown here in the sample? For example:

          - task: AzureCLI@1
            displayName: 'Using AZDS tooling to deploy'
            inputs:
              azureSubscription: $(azureServiceConnection)
              scriptLocation: inlineScript
              inlineScript: |
                az aks use-dev-spaces -n $(AZDSControllerName) -g $(AZDSControllerRG) -y -s $(k8sNamespace)/$(system.pullRequest.sourceBranch) --update
                azds up -d

Documentation

Where is the documentation for this type of YAML pipeline? The documentation on docs.microsoft.com here shows the Classic pipeline style, not what is in this repository.

Thank you for your assistance in understanding this.

@amsoedal
Collaborator

amsoedal commented Mar 30, 2020

Hi @dahovey ! Thanks for bringing this issue to us. Off the top of my head, I'm not entirely sure whether or not we've worked through the Azure DevOps scenario recently. I've logged a bug on our side to investigate -- will follow up on this thread with our findings!

@greenie-msft
Contributor

Hi @dahovey, thank you for your patience. It looks like the experience in Azure DevOps has changed since the last time we documented it... We've opened an issue on our side to update the documentation. In the meantime, I'm working with the team to find a solution for you. I will reach out as soon as I have an update.

Thank you,
Nick

@greenie-msft
Contributor

Hi again @dahovey.

I'm attaching a Medium article that will guide you through this scenario with one of our sample applications while we work to update our documentation. Please note that the screenshots are out of date and the UI has changed, but the concepts still apply.

When you come to the pipeline YAML, one modification is required: due to a bug in the bake task, you will need to replace it with an inline script that does the equivalent. This should resolve the issue you are experiencing. Also, in your example you are missing a release name, which is required. Please see the example below.

Bake Task:

- task: Bash@3
  inputs:
    targetType: 'inline'
    script: 'helm template RELEASE-NAME-HERE $(Agent.TempDirectory)/charts --set image.repository=$(containerRegistry)/$(imageRepository) --set image.tag=$(Build.BuildId) --set buildID="1" > $(AGENT.TEMPDIRECTORY)/baked-template.yaml'

Lastly, in the deploy task, you will need to reference the manifests that were created in the bake task:

Deploy Task:

- task: KubernetesManifest@0
  displayName: deploy
  name: deploy
  inputs:
    action: deploy
    namespace: $(k8sNamespace)
    manifests: $(AGENT.TEMPDIRECTORY)/baked-template.yaml

Please let me know how this goes and if you have any further questions.

Thank you,
Nick

@dahovey
Author

dahovey commented Apr 1, 2020

Thanks for the article. It does help me understand some things I wasn't clear on. I would like to try this method, but I am leaning toward just using the azds tooling in CI/CD. I do have concerns that I will still hit the same issues I experienced even after making the corrections you mention, but I need to try it to be sure.

While I understand the Dev Space is just a regular old namespace, and I could use bake/deploy to apply resources in the Dev Space, I like the idea of using the azds tooling during CI/CD. This is because, regardless of what Dev Space is being targeted, the pipeline doesn't need to know how to form the ingress host (space name, HostSuffix, etc.). This issue is compounded when creating a new space from a pull request. The azds.yaml has already been set up to handle this. It seems counterintuitive to have a separate process/pipeline that duplicates this effort.
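For reference, the generated azds.yaml forms the host from those tokens roughly like this (the chart path and service name here are illustrative, and the exact layout varies per project):

install:
  chart: charts/api
  set:
    ingress:
      annotations:
        kubernetes.io/ingress.class: traefik-azds
      hosts:
        # azds expands this to something like <space>.s.<rootSpace>.api.<hostSuffix>
        - $(spacePrefix)$(rootSpacePrefix)api$(hostSuffix)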

Here is my pipeline; it just doesn't include a stage for PRs yet.

(screenshot of the pipeline stages)

The dev_space_master stage is just one step. It is a bit slower than I would like, but most of the time is spent in the AzureCLI step itself and in installing the azds tooling, since I am using Azure DevOps hosted agents. I'm not sure what could be done about that.

          - task: AzureCLI@1
            displayName: 'Using AZDS tooling to deploy'
            inputs:
              azureSubscription: $(azureServiceConnection)
              scriptLocation: inlineScript
              scriptType: bash
              workingDirectory: src/$(projectPath)
              inlineScript: |
                az aks get-credentials -n $(AZDSControllerName) -g $(AZDSControllerRG)
                az aks use-dev-spaces -n $(AZDSControllerName) -g $(AZDSControllerRG) -y -s $(spaceName) --update

                azds up -d

Sorry that, since my original comment, it seems I am answering my own question.

@dahovey
Author

dahovey commented Apr 6, 2020

@greenie-msft I have been trying different methods in CI/CD and have been running into various issues. Here is where I am:

  • If I use azds up -d from hosted Azure DevOps agents, and have a micro-service architecture where multiple pipelines get triggered, this creates a bottleneck at the AKS cluster, since each Docker image needs to be built by the cluster. This isn't ideal, which leads me to NOT use azds up -d from a CI/CD agent, only during development.
  • I wrote a simple PowerShell script that forms the ingress host, using the azds tooling to get the HostSuffix. All that is required is the cluster name.
  • If I use bake-and-deploy to update resources in a Dev Spaces namespace, I get errors because Helm was used to deploy resources during development (from Visual Studio), while the bake-and-deploy method uses helm template ... followed by kubectl apply ....
  • Deployment selectors are different depending on how resources are deployed.
    Using azds up -d:
spec:
...
  selector:
    matchLabels:
      app: app1-component1
      release: azds-7b95cb-master-app1-component1

Using 'bake & deploy' or helm upgrade...:

spec:
...
  selector:
    matchLabels:
      app: app1-component1
      release: app1-component1

Ultimately I would like to use helm upgrade ... in the CI/CD deployment, matching how deployments are performed during development in Visual Studio 2019 and Visual Studio Code. My problem right now is:

How is the release name formed when using azds up, as in azds-7b95cb-master-app1-component1 from above?

It seems that the azds tooling generates a unique identifier per application that is used to generate the helm release name. I would like to use the same release name during CI/CD, but would NOT like to hard-code this identifier (i.e. 7b95cb).
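One way I could avoid hard-coding it (a sketch only, assuming a Helm 2 client and that the AZDS tiller lives in the azds namespace) would be to look the release up at deploy time and export it as a pipeline variable:

# Look up the azds-generated release for this component instead of hard-coding the identifier
helmReleaseName=$(helm list --tiller-namespace azds --namespace master -q | grep -- '-app1-component1$')
echo "##vso[task.setvariable variable=helmReleaseName]$helmReleaseName"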

@greenie-msft
Contributor

greenie-msft commented Apr 7, 2020

Hi @dahovey,

I agree with you that using helm is the best approach for your scenario. You can see how we use helm in our GitHub Actions workflow, which enables the same scenario.

- name: Helm Upgrade PR
  run: |
    ${{steps.install-helm-client.outputs.helm}} upgrade \
      --install ${{steps.generate-release-name.outputs.result}} samples/BikeSharingApp/Bikes/charts/bikes \
      --namespace ${{steps.generate-child-space-name.outputs.result}} \
      --set image.repository=${{ secrets.CONTAINER_REGISTRY }}/bikes \
      --set image.tag=$GITHUB_SHA \
      --set imagePullSecrets[0].name=${{ secrets.IMAGE_PULL_SECRET }}
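A rough Azure Pipelines equivalent might look like the following; this is only a sketch, and the helmReleaseName and k8sNamespace variables are placeholders for however you derive them:

- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      helm upgrade --install $(helmReleaseName) $(Agent.TempDirectory)/charts \
        --namespace $(k8sNamespace) \
        --set image.repository=$(containerRegistry)/$(imageRepository) \
        --set image.tag=$(Build.BuildId)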

Please let me know if you have any questions about the samples above.

Unfortunately, we do not support reusing AZDS releases from development during CI/CD. With that said, I would love to understand your scenario further. Are you developing against the same cluster your pipeline deploys to?

@dahovey
Author

dahovey commented Apr 7, 2020

Hi @greenie-msft,

To answer your question: yes, active development and pipelines run against the same cluster. I would like the flexibility of having CI/CD pipelines update a Dev Space, while also having the ability to debug an application within that same Dev Space, and vice versa.

It seems that most pipeline samples use "bake & deploy" to update a root Dev Space, and then active development is done using a child Dev Space? This is all fine and good, but when starting out with Dev Spaces with only a small team of devs (or a team of one, as is the case with me), this deployment model isn't desirable. For example, to get started, just using one Dev Space is ideal for me, as I would also need to update applications to add the azds-route-as header when communicating with other services before trying to use multiple Dev Spaces.

So, probably 90% of the time I would just use the root Dev Space for active development. But when code is checked in I would like to keep this same Dev Space up to date.

@greenie-msft
Contributor

Thanks for that information, @dahovey,

I think I'm starting to understand your scenario.

If the only functionality you desire from your CI/CD pipeline is updating a root dev space, post-commit, then I suggest following standard pipelines that use the bake and deploy tasks. We do not recommend using Dev Spaces as a form of deployment outside development.

As for developing services in the same cluster/namespace that your CI/CD targets - There are two approaches using Dev Spaces:

  1. Connect your development machine to a Kubernetes cluster - This approach allows you to run and debug a service on your development machine, while still connected to your Kubernetes cluster with the rest of your application or services. This experience is enabled through the Dev Spaces VS Code extension.

  2. Debug a microservice that's running in Kubernetes - This scenario syncs your changes into the cluster with the option of attaching a debugger to the running container in the cluster. We offer a CLI/Visual Studio/Visual Studio Code experience for this capability.

If you're only using one dev space (root), then it's safe to assume you're not using the routing capability? If so, then there is no need to add the azds-route-as headers to your code. That's only required if routing between namespaces is what you desire.

I apologize for the confusion in the earlier threads. I was under the impression that the scenario you were attempting to enable was creating review apps from pull requests, which is enabled through a CI/CD pipeline.

I hope this helps clear up any confusion. If you're still stuck or would like to discuss the above further, let me know and I'm happy to jump on a quick call to help.

Thank you,
Nick

@dahovey
Author

dahovey commented Apr 8, 2020

Nick,

I'm sensing I am trying to force something that isn't supported, or at least isn't the best way to work with Dev Spaces. Number 2 is my desired method of development, using Visual Studio 2019. I would like to use the feature of creating separate Dev Spaces from pull requests (or some form of release branches), but I'm first trying to figure out how development and CI/CD pipelines could be incorporated together. A pull request workflow could be added on top of that.

In my environment I have several different projects, each with its own set of small components. For example, a few shared projects (e.g. authentication, mapping API integration); these shared projects contain their own components (e.g. an ASP.NET Core API, an ASP.NET Core Web front-end). Then there are a few 'main' projects that have dependencies on the shared projects. I am coming from a Google Skaffold background, where each project consisted of one 'big' helm chart. The project's helm chart was pushed to a private Helm repository by the CI/CD pipeline. Then, during development, Skaffold was run and pulled project dependencies from the private Helm repository (for local Docker/Kubernetes development). Now, moving to Dev Spaces, I have broken up the helm charts so each component has its own helm chart, but every DevOps project should land in the same K8s namespace (i.e. 'master', 'prod').

Conceptually, in the case of a single project with a micro-service architecture, I see how CI/CD perhaps isn't that helpful, since within VS Code or Visual Studio it wouldn't be hard to stand up the services. But because I have separate DevOps projects, each with their own set of components, I prefer having CI/CD connected to Dev Spaces so I have some confidence that shared resources are up to date.

It seems to me that, in order for CI/CD and Dev Spaces to 'work together', I should use "bake & deploy" to a root namespace, and then development is done ONLY in child Dev Spaces? I hope my descriptions are making it clearer what I have set up and not confusing things even more.

I ran some tests using helm upgrade instead of bake & deploy and it seems to work well, but I assume there are no guarantees it won't break in the future. Below I upgrade/install a Dev Spaces helm release using the releaseName established by Dev Spaces, pointing to the tiller instance in the azds namespace. This does exactly what I would like, but it could easily be brittle. See #309, which I believe allows for these CI/CD scenarios while leveraging the azds tooling.

In the below, src/values.dev.yaml is a shared values file that is also used during development, and $(Agent.TempDirectory)/values.ingress.yaml is created by the pipeline, which forms the ingress host and writes an appropriate values file with traefik ingress annotations. A sketch of what that generated file might contain follows.
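For illustration only, the generated values.ingress.yaml might look roughly like this; the exact keys depend on the chart, and the host and annotation values are placeholders:

# hypothetical values.ingress.yaml written by the pipeline
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: traefik-azds
  hosts:
    - api.master.<host-suffix-from-azds>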

          - task: HelmDeploy@0
            displayName: Helm upgrade
            inputs:
              azureSubscriptionEndpoint: $(azureServiceConnection)
              azureResourceGroup: $(AZDSControllerRG)
              kubernetesCluster: $(AZDSControllerName)
              command: upgrade
              chartType: filepath
              chartPath: $(Agent.TempDirectory)/charts
              releaseName: $(helmReleaseName)
              namespace: 'master'
              install: true
              waitForExecution: true
              arguments: >
                --tiller-namespace azds
                --set image.repository=$(containerRegistry)/$(imageRepository)
                --set image.tag=$(Build.BuildId)
                -f src/values.dev.yaml
                -f $(Agent.TempDirectory)/values.ingress.yaml

@dahovey
Author

dahovey commented Apr 8, 2020

Ultimately, I recommend that bake & deploy steps not be used on Dev Spaces namespaces. Instead, any CI/CD integration with Dev Spaces should use the azds tooling. If azds up could be used with a pre-built container image that was created during CI/CD, it would open other possibilities and make Dev Spaces more flexible.

@greenie-msft
Contributor

Hi @dahovey,

Thank you for providing this information.

We don't recommend using "azds up" in the same namespace where you're also deploying the service using other tools/systems, because that's likely to cause conflicts. For CI/CD, our guidance is to not use "azds" - it is meant for iterative development only. Instead, you should use standard actions (helm, kubectl, etc.) in an azds-enabled namespace. This will let you use our scenarios like routing, PR Flow, etc. in your CI/CD system.
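For example, a minimal sketch of that split (the space name and manifest file are placeholders, and the manifest is assumed to come from an earlier bake/template step):

# Confirm the namespace is an azds-enabled space, then deploy with standard tooling
azds space list
kubectl apply --namespace master -f baked-template.yaml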

Unfortunately, I'm still trying to wrap my head around your scenario.

Are you open to jumping on a half hour Teams call to work through your scenario? I would like to confirm I'm understanding your questions correctly and providing you with the right answers.

Please feel free to email me at Nicholas.Greenfield@microsoft.com. I'm more than happy to schedule a quick chat. :)

Thank you,
Nick
