Chaminda's DevOps Journey with MSFT

Deploying Machine Learning (ML) Model with Azure Pipeline Using Deployable Artifact from Build

We discussed how to create a Machine Learning (ML) model as a deployable artifact in the post “Training Machine Learning (ML) Model with Azure Pipeline and Output ML Model as Deployable Artifact”, which is based on the open source ML repo by Sascha Dittmann (https://github.com/SaschaDittmann/MLOps-Lab.git). The repo contains the data as well as the code to train a model.
Prerequisites: You have followed the instructions in the posts “Training Machine Learning (ML) Model with Azure Pipeline and Output ML Model as Deployable Artifact” and “Setup MLOPS workspace using Azure DevOps pipeline”, and created a build pipeline that can train the ML model from a clone of https://github.com/SaschaDittmann/MLOps-Lab.git.
Link the build created as per instructions in “Training Machine Learning (ML) Model with Azure Pipeline and Output ML Model as Deployable Artifact” to the release pipeline.

Prerequisite steps, such as installing Python 3.6, adding the Azure CLI ML extension, and creating an ML workspace, are required to be done as explained in the post “Setup MLOPS workspace using Azure DevOps pipeline”. All Azure CLI steps require an Azure service connection that uses a service principal with contributor permissions to the Azure subscription where you want to create and use the ML workspace.
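For reference, a minimal sketch of those prerequisite CLI steps, using the demo resource names from this post (the location is an assumption), could be:

az extension add -n azure-cli-ml
az group create -n rg-ch-mldemostg01 -l eastus
az ml workspace create -w mlw-ch-demostg01 -g rg-ch-mldemostg01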

Then, using the model file in the build artifacts, register the model in the newly created ML workspace. The step outputs a metadata file, which you can save by providing a value for the --output-metadata-file argument. This file is required for the deployment in the next step, which uses the model registered here. Make sure to change the resource group and ML workspace names to the ones you use.
az ml model register -n diabetes_model --model-path sklearn_diabetes_model.pkl --experiment-name diabetes_sklearn --resource-group rg-ch-mldemostg01 --workspace-name mlw-ch-demostg01 --output-metadata-file ../metadata/deployedmodel.json

To deploy the model, you can use the output metadata file from the previous step. The inference configuration available in the repo, which was copied to the artifacts, contains the input parameters related to the model deployment. The deploy config file contains the metadata for the deployment.
az ml model deploy --resource-group rg-ch-mldemostg01 --workspace-name mlw-ch-demostg01 --name diabetes-service-aci --model-metadata-file ../metadata/deployedmodel.json --deploy-config-file aciDeploymentConfig.yml --inference-config-file inferenceConfig.yml --overwrite

Then you can install the Python requirements, as explained in the post “Training Machine Learning (ML) Model with Azure Pipeline and Output ML Model as Deployable Artifact”, to enable running the Python based integration tests in the artifacts (copied from the repo) to test the deployed model.
Tests can be run using a command such as the one below.
pytest integration_test.py --doctest-modules --junitxml=junit/test-results.xml --cov=integration_test --cov-report=xml --cov-report=html --scoreurl $(az ml service show --resource-group rg-ch-mldemostg01 --workspace-name mlw-ch-demostg01 --name diabetes-service-aci --query scoringUri --output tsv)

The results can be published to the release pipeline.

The above two steps are similar to the unit test execution explained in the post “Training Machine Learning (ML) Model with Azure Pipeline and Output ML Model as Deployable Artifact”.
Once the pipeline is executed, you can see that the model has been registered and deployed.

The tests are executed and the results are available in the release pipeline. Following these instructions, you can set up ML deployments to different ML workspaces in different resource groups in Azure.



Running EF Commands in Builds with .NET Core 3.1 in Hosted Agents

When you run builds with Entity Framework (EF) commands such as dotnet ef migrations script with .NET Core 2.2, they work without any issue. However, if you upgrade your projects to use .NET Core 3.1, your build may fail with the error below when executing dotnet ef migrations script to generate a script out of your EF migrations.
error NETSDK1045: The current .NET SDK does not support targeting .NET Core 3.1. Either target .NET Core 2.2 or lower, or use a version of the .NET SDK that supports .NET Core 3.1.
To resolve this, as the first step you can set the build pipeline to use the .NET Core 3.1 SDK by using the Use .NET Core task.
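In a YAML pipeline, the equivalent task (a minimal sketch) would look like:

steps:
- task: UseDotNet@2
  displayName: Use .NET Core 3.1 SDK
  inputs:
    packageType: 'sdk'
    version: '3.1.x'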


Even after setting the SDK to 3.1, there will be a failure when running the dotnet ef migrations script command, or any dotnet ef command.
--------------------------------------------------------------------------------------
Could not execute because the specified command or file was not found.
Possible reasons for this include:
* You misspelled a built-in dotnet command.
* You intended to execute a .NET Core program, but dotnet-ef does not exist.
* You intended to run a global tool, but a dotnet-prefixed executable with this name could not be found on the PATH.
##[error]Cmd.exe exited with code '1'.
This is because the EF tool now ships as a separate NuGet package (dotnet-ef), which is not available by default on hosted agents (Hosted Windows 2019 with VS2019 was used in this build).

To resolve this issue, you can install the dotnet-ef tool on the hosted agent by adding a command line step that executes the command below.
dotnet tool install --global dotnet-ef --version 3.1.2


This will fix the issue in the build, and it can now run dotnet ef migrations script and other dotnet ef commands.
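For example, a typical command line step generating an idempotent migration script into the artifact staging directory (the output path and flags here are illustrative) could be:

dotnet ef migrations script --idempotent --output $(Build.ArtifactStagingDirectory)\migrations.sql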

Pushing NuGet Packages to Azure DevOps Artifact Feeds Manually

In the build pipelines of Azure DevOps we can easily push a NuGet package using a NuGet push step and selecting the artifact feed in the Azure DevOps organization or team project. But you may sometimes need to push packages manually to Azure Artifacts feeds. Let’s look at how we can do that.
In the Azure DevOps artifact feed, you can click Connect to feed to see the feed connection information.

In the feed connection information you can find the instructions on how to push a package. The URL highlighted there is the URL used to access the package feed.

As you can see in the publish package instructions above, you can set up a command like the one shown below. The API key can be any test value; we can just use key as shown below.
nuget push -Source https://pkgs.dev.azure.com/yourorg/yourteamproject/_packaging/feedname/nuget/v3/index.json -ApiKey key yourpackagefile.x.x.x.x.nupkg

When you execute the command, you will be prompted to log on to Azure DevOps; you should log in with a user who has access to push packages to the feed.

Once logged in, the NuGet package will be pushed to the feed.


Copy Azure Git Repo Branch Policies from One Branch to Another


Some teams practicing Agile sprints keep a branch for the life of the sprint. In this case they branch off to develop features, make pull requests to the sprint branch, and release from the sprint branch at the end of the sprint. It is important to protect the sprint branch from incoming pull requests with policies. Since there is no out-of-the-box way to copy the policies defined on one branch over to another, the policies need to be recreated for the new sprint branch each sprint, which makes it a cumbersome process. To make it possible to copy the set of policies defined on one branch to another in Azure Git repos, you can now use the script made available here.

The script performs the actions described below.

The script takes the parameters listed below.

  • $AzureDevOpsPAT: Personal Access Token with code read, write, and manage permissions
  • $OrganizationName: Azure DevOps organization name
  • $teamProjectName: Team project name which contains the repository
  • $repositoryName: Name of the Azure Git repository
  • $fromBranch: Branch with the policies already setup
  • $toBranch: Branch the policies should be copied to

As the first step an authentication header is created using the personal access token (PAT) to allow access to the REST API.
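The typical PowerShell pattern for building such a header is sketched below ($authHeader is an illustrative name):

$authHeader = @{
    Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$AzureDevOpsPAT"))
}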

In the next step it reads the repository details to obtain the repository id.

Then the policies configured on the from branch are retrieved.

Next, each policy is read and the from-branch information is removed from it. The target branch is set, and the policy is applied to the target branch.
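A rough sketch of that retrieve-and-reapply logic, assuming $authHeader from above, $repositoryId obtained in the earlier step, and the policy configurations REST API at api-version 5.1 (the linked script is the authoritative version), might be:

$baseUrl = "https://dev.azure.com/$OrganizationName/$teamProjectName"
$policies = Invoke-RestMethod -Uri "$baseUrl/_apis/policy/configurations?api-version=5.1" -Headers $authHeader
foreach ($policy in $policies.value) {
    # Keep only policies scoped to the from branch of this repository
    $scope = $policy.settings.scope | Where-Object { $_.repositoryId -eq $repositoryId -and $_.refName -eq "refs/heads/$fromBranch" }
    if ($scope) {
        $policy.PSObject.Properties.Remove('id')   # drop the identity of the source policy
        $scope.refName = "refs/heads/$toBranch"    # retarget the scope to the new branch
        $body = $policy | ConvertTo-Json -Depth 10
        Invoke-RestMethod -Uri "$baseUrl/_apis/policy/configurations?api-version=5.1" -Method Post -ContentType 'application/json' -Headers $authHeader -Body $body
    }
}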

This is a handy tool to copy the policies from one Azure Git repo branch to another branch in the same repo, by executing the script as shown below.
.\PolicyCopy.ps1 -AzureDevOpsPAT 'yourPAT' -OrganizationName 'yourAzureDevOpsOrganization' -teamProjectName 'yourTeamProjectName' -repositoryName 'YourAzureDevOpsGitRepo' -fromBranch 'from' -toBranch 'version/0.0.1999_policycopy'

Deploying Infrastructure to AWS Lightsail Using Azure DevOps – Part 1

AWS Lightsail is an easy-to-use cloud platform service by Amazon Web Services. You can use AWS command line capabilities with Azure DevOps to deploy the required infrastructure on AWS Lightsail. Let’s look step by step at how we can run AWS Lightsail deployments via Azure DevOps.
The first step is to write an infrastructure as code script using the AWS CLI. You can use either bash or PowerShell to write the script. Let’s look at a sample script. The first part of the script below takes the required arguments. We need at least the arguments below to create a Lightsail instance.
· Instance name: Name of the Lightsail instance to create.
· Availability zone: The availability zone in which to create the instance.
· Blueprint id: The type of instance to set up, such as Windows, Ubuntu 16.04, etc.
You can use aws lightsail get-blueprints to retrieve the list of available blueprints and find the required blueprint id using the AWS CLI.
· Bundle id: The pricing tier identification.
You can use aws lightsail get-bundles to retrieve the list of available bundles and find the required bundle id using the AWS CLI.

The next step is to check whether an AWS Lightsail instance already exists with the name we want to create. When the instance does not exist, the aws lightsail get-instance command returns an error. To hide the error, 2>/dev/null is used in the script below. set +e makes sure the script continues execution despite the error, while set -e makes the script halt execution when an error occurs.

The return value of get-instance is null if the instance does not exist, which can be checked with -z. If no instance with the specified name exists, it is created with the provided name, availability zone, type, and size. Once the instance is created, it takes some time to get fully provisioned and reach the running state. The script keeps checking until the state of the Lightsail instance is running. If the instance already exists, its state is printed by the script. The full script is available here.
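A condensed sketch of that logic (the full script linked above is the authoritative version; the argument order and sleep interval here are assumptions) could look like this:

#!/bin/bash
instanceName=$1
availabilityZone=$2
blueprintId=$3
bundleId=$4

set +e  # keep going if the instance lookup fails (instance not found)
existing=$(aws lightsail get-instance --instance-name "$instanceName" 2>/dev/null)
set -e  # from here on, halt on any error

if [ -z "$existing" ]; then
  aws lightsail create-instances --instance-names "$instanceName" \
    --availability-zone "$availabilityZone" \
    --blueprint-id "$blueprintId" --bundle-id "$bundleId"
  state=""
  until [ "$state" = "running" ]; do
    sleep 10
    state=$(aws lightsail get-instance --instance-name "$instanceName" --query 'instance.state.name' --output text)
  done
else
  echo "Instance $instanceName already exists, state: $(aws lightsail get-instance --instance-name "$instanceName" --query 'instance.state.name' --output text)"
fi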

In the next post, let’s see how to create the service connection for AWS in Azure DevOps, the next step toward creating AWS Lightsail infrastructure using Azure DevOps.

Deploying Infrastructure to AWS Lightsail Using Azure DevOps – Part 2 – Creating a Service Connection

In the previous post, we discussed how to write a bash script with the AWS CLI to create an AWS Lightsail instance. In order to run this script via Azure DevOps, we need to create a service connection to AWS from Azure DevOps. Let’s look at the steps to create such a service connection.
As the first step you need to create an access key for your user account in AWS. You can go to your profile and click on My Security Credentials to access Identity and Access Management. Expand Access keys and click on the Create New Access Key button to create an access key.

The created access key has two parts: the Access Key ID and the Secret Access Key. The secret of the access key is visible only once, and you have to save it to a secure location along with the access key id.
Then you need to install the AWS Toolkit for Azure DevOps in your Azure DevOps organization. Once the toolkit is installed, you can go to the service connections of your team project and click on create new service connection.

In the new service connection, select AWS and click next. You can provide the access key id and the secret, then provide a name for the service connection.

Make sure to select Grant access permissions to all pipelines.

Then save the service connection. This service connection can now be used in build and deployment pipelines to create resources in AWS via Azure DevOps.

In the next post, let’s use this service connection in a release pipeline and create an AWS Lightsail instance using the script we discussed in the previous post.

Deploying Infrastructure to AWS Lightsail Using Azure DevOps – Part 3 – Creating a Release Pipeline to Deploy AWS Lightsail Infrastructure

In the previous two posts, we discussed how to write a bash script with the AWS CLI to create an AWS Lightsail instance, and how to set up a service connection in Azure DevOps for AWS. In this post, let’s explore the steps required to execute the bash script in a release pipeline to create AWS Lightsail infrastructure.
As the first step we need to create a build pipeline to publish the script as an artifact. Instead of creating a build, you could also use the Azure Git repo directly in the release as an artifact. However, let’s create both a build and a release pipeline to implement a clean solution.
In the build pipeline we can add a copy files step to copy the bash script to the build artifact staging directory. Then, using a publish step, we can publish the content of the build artifact staging directory as the build output.
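If you prefer YAML for the build, a minimal equivalent of those two steps (the file pattern and artifact name are assumptions) would be:

steps:
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)'
    Contents: '**/*.sh'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'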

The build produces an output drop with the file. Using the build as a trigger, we can set up a release pipeline. In the release pipeline you can add an AWS Shell Script step. In the step, select the AWS service connection we created in the previous post. Then select the script in the build artifact as the script to execute. You need to supply the four arguments as parameters: the name of the AWS Lightsail instance, the region (availability zone), the type of machine (blueprint), and the size (bundle).

You can set these up as variables in the release pipeline. You can define the same step in multiple stages of the release pipeline to enable creation of multiple environments in the release workflow.

Once the pipeline is executed, the AWS Lightsail instance will be created if it does not already exist.

Azure Web App Creation with Azure CLI --runtime Specification Issues in PowerShell Scripts

PowerShell scripts and the Azure CLI are a good combination for creating infrastructure as code targeting the Azure platform. When creating an Azure App Service app on Linux, you need to provide the --runtime argument specifying the web app runtime, that is, the platform of the source code getting deployed. In a PowerShell window the command with the --runtime argument fails, since a piping symbol is used in runtime values.
Have a look at example below.
az webapp create -n app-chd5-test  -g rg-dotnet5-test -p asp-dotnet5-test --runtime "aspnet|V4.7"


'V4.7' is not recognized as an internal or external command, operable program or batch file.
The issue is caused by the | symbol used in the runtime value, as it is a special character in PowerShell allowing the piping of values. To prevent the issue, you can use the syntax below, enclosing the runtime value in single quotes.

az webapp create -n app-chd5-test  -g rg-dotnet5-test -p asp-dotnet5-test --runtime '"aspnet|V4.7"'


The list of runtimes available for Windows App Service apps can be found by executing
az webapp list-runtimes

For Linux, use
az webapp list-runtimes --linux

Cross Repo Branch Policies in Azure Git Repos

Azure Git repos protect branches with branch policies. The cross-repo branch policy feature in a team project now lets you define policies applicable to a branch pattern, which are applied even to future branches that adhere to the specified pattern. Let’s explore this feature in a bit of detail.
In the team project settings page, you now get an all-repositories policies tab, where you can define branch policies effective for all repos in the team project.

You can apply a policy to protect the default branch in each repo. Or you can define a pattern, which will protect the current and future branches matching the pattern. The currently available branches matching the pattern in each repo in the team project are listed as you type in the pattern.

Then you can create the branch policy for the given pattern like a normal branch protection policy in an Azure Git repo, which can even include reviewers, build policies, etc.

The policies will be applied to any branch meeting the pattern in any repo in the team project.

This feature is really useful for auto-applying policies when you have a single repo in a team project. Multiple repos in a single project get the policies applied too, but the build policies have to be controlled with path filters at the project level using cross-repo policies, or by using inherited policies combined with individual repo branch-level build policies. However, that will require the build policies to be applied to each new branch created.
Ideally, a feature to apply pattern-based policies to a single repo within a team project, with the option to override cross-repo policy inheritance, would greatly benefit the maintenance of branch protection policies in Azure Git repos.

Git Repo Submodule Checkout in Azure DevOps Build Pipelines


Submodules in Git repos help you keep common code modules in a separate repo and utilize them in multiple other repos. When you clone a Git repo you can include submodules by using git clone --recurse-submodules. In Azure Pipelines you can enable checking out the code with submodules for build and packaging purposes. Let’s have a look at the settings to enable submodule checkout in builds.

In the build, when you select an Azure Git repo you get the option to enable Checkout submodules. It can be set to recursive to checkout all nested submodules, or set to checkout only the top-level submodules.


You get this option to checkout submodules in all Git based repos that can be used with Azure Pipelines, such as Bitbucket, GitHub, and other Git repos.

In YAML pipelines, the syntax to checkout top-level submodules is shown below.

- checkout: self
  submodules: true

To recursively checkout all nested submodules you can use the syntax below.

- checkout: self
  submodules: recursive

Allow Azure Services on SQL Server with Azure CLI


Allow Azure services on an Azure SQL server lets other Azure services, such as function apps, App Service apps, etc., connect to the Azure SQL server without needing to allow the outbound IPs of those services. You can enable this easily using the portal. Let’s look at how we can allow Azure services using the CLI.

The Azure portal lets you switch the setting on and off in the Azure SQL server firewall settings page as shown below.


However, setting this to Yes using the Azure CLI requires adding a firewall rule with the start and end IP both set to 0.0.0.0.

az sql server firewall-rule create -g myresourcegroup -s mysqlserver -n myallowazsvcrule --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0

To set Allow Azure services to No, you should delete the firewall rule as shown below.

az sql server firewall-rule delete -g myresourcegroup -s mysqlserver -n myallowazsvcrule

How to Run GitHub Actions Step When a Failure Occurred in a Previous Step


GitHub Actions is the CI/CD workflow implementation tool built into GitHub repos. While using GitHub Actions workflows you may want to execute a cleanup, a rollback, or even a ticket (issue) creation task when a job step fails. In Azure DevOps pipelines each task has control options supporting easy implementation of the run-on-failure need. Let’s look at what it is in Azure DevOps, then understand how we can achieve the same goals in GitHub Actions workflow steps.

In Azure DevOps pipelines you can easily achieve this type of need in tasks by using control options.


In the case of Azure DevOps YAML pipelines, the same set of controls can be implemented; the information from the Microsoft docs (https://docs.microsoft.com/en-us/azure/devops/pipelines/process/tasks?view=azure-devops&tabs=yaml#conditions) is below.
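As a minimal sketch, such conditions in an Azure DevOps YAML pipeline look like this:

steps:
- script: exit 1
  displayName: Failing step
- script: echo "Runs only because a previous step failed"
  condition: failed()
- script: echo "Runs regardless of the outcome"
  condition: always()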


In GitHub Actions, to execute a step in a job only when one of the previous steps has failed, you can use if: ${{ failure() }} on the step which needs to run on failure of a previous step. For example, in the workflow below we purposefully fail a step by using exit code 1, and on the failure an issue is created in the repo using another step set to run on failures.

on: [push]
 
jobs:
  FailJobIssueDemo:
    runs-on: ubuntu-latest
    steps:
      - name: Step is going to pass
        run: echo Passing step
      
      - name: Step is going to fail
        run: exit 1
         
      - name: Step To run on failure
        if: ${{ failure() }}
        run: |
          curl --request POST \
          --url https://api.github.com/repos/${{ github.repository }}/issues \
          --header 'authorization: Bearer ${{ secrets.GITHUB_TOKEN }}' \
          --header 'content-type: application/json' \
          --data '{
            "title": "Issue created due to workflow fialure: ${{ github.run_id }}",
            "body": "This issue was automatically created by the GitHub Action workflow **${{ github.workflow }}**. \n\n due to failure in run: _${{ github.run_id }}_."
            }'

The issue created by the step can be seen in the repository after the workflow execution, as below.

Change Routing of Azure Function Apps


When you implement functions in Azure, by default the routing is https://functionappname/api/functionname. However, this does not let you organize routing properly when you have multiple function apps in your software application project. You might want to create custom routing to keep function access from other applications organized appropriately. Let’s look at the default behavior and how we can set up custom routing.

When you set up your first function app in the resource group, say with the name funct-sample01, and create two functions, say func01 and func02, their default URLs would be as follows (note that anonymous access is allowed on the functions below).

https://funct-sample01.azurewebsites.net/api/func01

https://funct-sample01.azurewebsites.net/api/func02

Then if you create another function app named func-sample02, and create a function named func01, it would have the URL below.

https://func-sample02.azurewebsites.net/api/func01

This is completely OK as long as your applications are going to call the functions directly. However, if you are going to use APIM-based routing, allowing function app access via, say, Azure APIM (we can discuss APIM routing and its advantages in a later post), you may want to alter the function app route prefix to add more than /api/, which is the default route prefix, to allow more clarity in routing.

In other words, both the funct-sample01 and func-sample02 apps have a function named func01 with routing set up as /api/func01, which might create confusion and mislead the consumers of the functions.

https://funct-sample01.azurewebsites.net/api/func01

https://func-sample02.azurewebsites.net/api/func01

If we can make the route URL for each function include a function app specific route, that would avoid the confusion. For example, the URLs below would make more sense for both function apps above.

https://funct-sample01.azurewebsites.net/api/sample01/func01

https://funct-sample01.azurewebsites.net/api/sample01/func02

https://func-sample02.azurewebsites.net/api/sample02/func01

This makes sample01 and sample02 have clear routing, distinguished from each other, and would be really helpful in setting up App Gateway or APIM routing for additional protection.

To make this change, all you have to do is set up the route prefix in the host.json of the function app, as specified in the syntax below.

{
  "extensions": {
    "http": {
      "routePrefix": "customPrefix"
    }
  }
}

For the funct-sample01 function app, the route prefix should be set as below.

{
  "extensions": {
    "http": {
      "routePrefix": "/api/sample01"
    }
  }
}

You can update the Azure function app host.json from the Azure portal itself if required, using the App Service Editor or Kudu.


Once the host.json is saved, the function URL updates as below.

Getting Started with Terraform for Azure in Windows 10


Terraform is a great way to set up infrastructure as code (IaC) for Azure. Terraform helps us codify and version control our infrastructure needs on multiple platforms; hence, learning Terraform for Azure IaC lets an individual easily adapt to other platforms such as AWS. In this post let’s have a quick look at preparing a Windows 10 machine to get started with Terraform.

As the first requirement, you may install the latest stable version of PowerShell using the installer MSI downloaded from https://github.com/PowerShell/PowerShell. However, if you already have PowerShell version 5.1 or higher, it should be sufficient. If you update to the latest version, PowerShell 7, you will have a separate PowerShell 7 app.

To check your version of PowerShell, you can run the $PSVersionTable command.



Install PowerShell Az module.

Install-Module -Name Az -AllowClobber -Scope CurrentUser

Then run

Import-Module Az

to get the Az module imported. Now if you execute Get-Module, you will see the Az module available to use in the latest version of PowerShell.



Install the latest version of Azure CLI following instructions here https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-windows?view=azure-cli-latest&tabs=azure-cli.

To check installed version of Azure CLI run az --version



Then download Terraform for Windows from https://www.terraform.io/downloads.html and extract it to a desired path, say C:\Terraform. Set up the Path environment variable to have a new entry for the Terraform path.
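One way to do that from PowerShell (assuming the C:\Terraform path used above) is sketched below:

# For the current PowerShell session only
$env:Path += ";C:\Terraform"
# Persist for the current user (rewrites the user-level Path value)
[Environment]::SetEnvironmentVariable('Path', [Environment]::GetEnvironmentVariable('Path', 'User') + ';C:\Terraform', 'User')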



Execute Terraform from a new PowerShell window.



Now our Windows 10 machine is all set to use Terraform for infra as code development, and in a next post let’s check how we can get some simple Azure platform services created using Terraform.

Authorizing Terraform to Apply Changes to Azure Using SPN


We discussed setting up a Windows 10 environment to develop Terraform scripts in the previous post. In this post, let’s understand how to authenticate Terraform to deploy infrastructure on the Azure platform using a service principal.

As the first step we need to have an SPN created in Azure. If you have more than one Azure subscription, make sure to set the required subscription in Cloud Shell using the CLI command below.

az account set --subscription yoursubscriptionid

A service principal (SPN) with subscription-level contributor permissions can be created in Azure by executing the command below in Azure Cloud Shell.

az ad sp create-for-rbac -n "infradeployapp" --role contributor --scopes /subscriptions/azuresubscriptionid

An app registration will be created in Azure Active Directory with contributor access to the subscription specified in the above command. The output of the SPN creation provides the app id, password, and tenant information, which you have to copy to a secure location, as the password will not be viewable again.

In the Terraform main.tf file you can add the code segment below to authenticate Terraform and authorize it to add resources to the Azure subscription via the created SPN.

provider "azurerm" {

# The "feature" block is required for AzureRM provider 2.x.

# If you're using version 1.x, the "features" block isn't allowed.

version = "~>2.0"

subscription_id = var.AzureSubscriptionId

client_id = var.AzureSPNAppId

client_secret = var.AzureSPNPwd

tenant_id = var.AzureTenantId

features {}

}


The variables for the above settings can be defined in variables.tf.
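A sketch of the matching declarations in variables.tf (the types and descriptions are illustrative) could be:

variable "AzureSubscriptionId" {
  type        = string
  description = "Target Azure subscription id"
}

variable "AzureSPNAppId" {
  type        = string
  description = "Application (client) id of the SPN"
}

variable "AzureSPNPwd" {
  type        = string
  description = "Client secret (password) of the SPN"
}

variable "AzureTenantId" {
  type        = string
  description = "Azure AD tenant id"
}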



These variables can be supplied on the command line so that Terraform is authorized to apply or execute the plan on the given Azure subscription.

To supply each variable you can use the syntax below.

-var='variablename=variablevalue'
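For example, a hypothetical plan invocation with all four values supplied (placeholders shown) would be:

terraform plan -var='AzureSubscriptionId=yoursubid' -var='AzureSPNAppId=yourappid' -var='AzureSPNPwd=yourpwd' -var='AzureTenantId=yourtenantid'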


Why Azure DevOps Terraform Extension Task by Microsoft DevLabs to Deploy Infra to Azure Does Not Work for Me


Terraform is used as declarative code for infrastructure deployments on multiple cloud platforms, including Azure. Azure DevOps provides the capability to execute infrastructure as code (IaC) in CI/CD pipelines by using the tasks added with the extension by Microsoft DevLabs (https://marketplace.visualstudio.com/items?itemName=ms-devlabs.custom-terraform-tasks) available in the marketplace. The Terraform task to plan or apply a Terraform plan demands having resources in Azure such as a resource group, storage account, container, and a key (Terraform state file path). Similar demands exist even for AWS and GCP when using the task.

Why do I not like it?

The Terraform commands for planning or applying a plan rely on a state file. To cater to this need, the Terraform task by Microsoft DevLabs demands creating an Azure resource group, storage account, container, and a key (Terraform state file path). That is, to create a new environment in an Azure subscription, the task demands pre-creating a few Azure resources manually.



When using Azure CLI based IaC for creating Azure resources, the only requirement was a service connection from the team project, based on an SPN created in Azure at the subscription level, to authenticate and authorize creating resources in Azure, which is a reasonable prerequisite. However, keeping the Terraform state in an Azure storage account demands, for every new subscription, manual Azure resource creation work beyond the SPN and service connection creation in Azure DevOps, if the task by Microsoft DevLabs is used.

A couple of other marketplace extensions were evaluated as below, and they do not seem to fix the issue described above.

Terraform by Peter Groenewegen

This one still keeps state in an Azure blob, so it demands creating Azure resources manually as a prerequisite.

Terraform by Jamie Phillips

Only allows installing Terraform.

Terraform by Tyler Evert

Relies on Terraform state in Azure blob storage, demanding manually pre-created Azure resources.

Terraform by Arkia Consulting

It is not clear how the Terraform state is maintained with the tasks in this extension. An evaluation is needed to confirm, which we can do in another post.

What is the alternative?

As an alternative, Terraform by Charles Zipp seems to have multiple approaches to store the Terraform state, which can be used as a solution.


Out of the options available with this task, the local option is not feasible with a hosted agent, as keeping the Terraform state locally in a hosted agent would not help in subsequent runs. The azurerm option relies on Azure blob storage, so it is not a solution either. However, the self-configured option, keeping the state file as a secure file in Azure DevOps, seems to be a possible solution, which we can explore in a next post.

My own solution

As an alternative to using any of extension tasks I could come up with a solution described briefly below.

· Use Terraform Install task by Microsoft DevLabs to get terraform installed in the agent.

· Use PowerShell commands to create a Terraform plan and apply it

· Store the Terraform state and plans in an Azure Git repo by pushing the files to the repo with the Git command line within PowerShell tasks.

· After executing the plan, it is stored in a Git repo, and a manual intervention step in an agentless phase is introduced to verify the plan and approve it for applying.

· Add an apply plan agent phase utilizing the plan stored in the Azure Git repo.


With this solution an approval flow can be introduced as well, using the Terraform plan to find the changes getting applied to the infrastructure and controlling them with an approval, so that better control over changes to infrastructure in an environment can be maintained. Let’s discuss this solution in detail in the next post.

Azure Terraform Infra as Code Deployment via Custom PowerShell with Azure DevOps Pipelines – Part 1 – Create Plan


As described in the post “Why Azure DevOps Terraform Extension Task by Microsoft DevLabs to Deploy Infra to Azure Does Not Work for Me”, the Microsoft DevLabs task to plan and apply Terraform infrastructure as code demands storing state in Azure blob storage, which requires creating Azure resources manually as a prerequisite of using the task. In this post let us look at a custom implementation of the Terraform plan and apply tasks utilizing PowerShell, storing the Terraform state and plans in an Azure Git repo, with an approval between the plan and apply steps to enhance the deployment workflow.

PowerShell Script to Execute Terraform Plan

To execute Terraform via the command line, each variable must be passed using the syntax below.

-var='variableName=variableValue'

Example: -var='envName=dev'

To build up such variables, all parameters can be passed to PowerShell as params, and then these parameters can be used to build the variable syntax Terraform requires, as shown below.

$joinChar='=';
# Set terraform variable with value
$AzureSubscriptionId='AzureSubscriptionId',$AzureSubscriptionId -join $joinChar;

The outcome of the above would be variableName=variableValue. Then, using these formatted variables for each parameter, the variable passing syntax Terraform requires can be built as a string, utilizing a PowerShell script segment similar to the one below.

# Build up terraform variable format string
$varPrefix = "-var";
$vars = ($varPrefix,("'",$AzureSubscriptionId,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$AzureSPNAppId,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$AzureSPNPwd,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$AzureTenantId,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$projectName,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$envName,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$prodEnvName,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$location,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$dotnetFramework,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$deploySlotName,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$planKind,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$planReserved,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$planSKUTier,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$planSKUSize,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$storageTier,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$storageReplicatonType,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$functionVersion,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$keyvaultPurgeProtection,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$SqlSvrVersion,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$SqlSvrAdminUser,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$SqlSvrAdminUserPwd,"'" -join '') -join $joinChar
    ),($varPrefix,("'",$SQLDatabaseEdition,"'" -join '') -join $joinChar
    ) -join ' '  # join with a space so the arguments end up space separated

The above is just an example set of parameters; it can be altered based on your infrastructure creation needs. The outcome of the above would be a space separated string of parameters for Terraform, such as in the example below.

-var='AzureSubscriptionId=subsid' -var='AzureSPNAppId=appregid' -var='AzureSPNPwd=spnpwd' -var='AzureTenantId=tenantid' -var='projectName=qonsult' -var='envName=dev'

Next, utilizing the parameter string, the command for terraform plan can be built and executed via PowerShell.

# Set terraform plan file path and state file path parameters
$outParam = '-out',("'",$planFilePath,"'" -join '') -join $joinChar
$stateParam = '-state',("'",$tfStateFilePath,"'" -join '') -join $joinChar
# Build up terraform plan command (join with spaces between arguments)
$planCommand = 'terraform plan',$outParam,$stateParam,$vars -join ' '
write-host ($planCommand)
# Execute terraform plan
Invoke-Expression $planCommand

A full script sample can be found here, which can be used with your own parameters (sample parameters are defined) based on your infrastructure needs.

Execute Terraform Plan with Azure Pipeline and Store State and Plan in a Repo


The first step is to download the package containing the Terraform main.tf, variables.tf, and the PowerShell file to execute the terraform plan.

As the next step, Terraform can be installed on the agent machine and the path set to enable execution of command line Terraform commands. This can be done via the Install Terraform task from the Microsoft DevLabs extension.



Once the package is downloaded and extracted, a folder needs to be created inside the extracted package folder and the repo which contains the Terraform state and plans cloned into it. This could be an empty repo just containing the InfraRelease folder.


The cloning can be performed with the PowerShell script below.

$clonePath = '$(releaseDataRepoClonePath)';
$cloneUrl = '$(releaseDataRepoCloneUrl)';
$planFileFolderPath = '$(planFileFolderPath)';
md $clonePath
cd $clonePath
dir
git clone $cloneUrl $clonePath
If (!(test-path $planFileFolderPath))
{
  New-Item -ItemType Directory -Force -Path $planFileFolderPath
}
dir


Such a script can be the inline script of a PowerShell task. The release repo clone path parameter should be within the extracted package folder to make it easy to refer to the files. The clone URL embedded with credentials can be generated using the Azure DevOps Git repo clone options.



The clone repo URL should be in the format shown below.

https://username:generatedpwd@dev.azure.com/devopsorgname/teamprojectname/_git/reponame

Then terraform init should be run to get the required Terraform modules and plugins loaded, so that the main.tf can be executed.
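For example, an inline script step could run it from the extracted package folder (the variable name here is hypothetical):

cd '$(packageExtractPath)'  # folder containing main.tf and variables.tf
terraform init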



All of these steps (download the package, install Terraform on the agent, clone the repo containing the Terraform state and plans, and initialize Terraform) can be created as a prerequisite task group and utilized for both the Plan and Apply steps.

The plan step contains the above four tasks as prerequisites, plus the execution of the terraform plan via the PowerShell script described previously.


The execution should happen from the extracted package path, which has the PowerShell script, main.tf and variables.tf, and contains a folder with the cloned repo where the state and plan files may exist if the command is running after any initial executions. The correct plan file path and state file path, which should be inside the cloned repo, should be provided to the PowerShell script.

After the plan execution is completed, a new plan file will have been added, and the repo should be pushed with the changes. This can be achieved via another PowerShell script, as shown below.

$clonePath = '$(releaseDataRepoClonePath)';
$envName = '$(envName)';
cd $clonePath
dir
git config user.email "$(gitUserEmail)"
git config user.name "$(gitUserName)"
# Upload plan file and state file to repo (join message parts with spaces)
$commitMessage = "Add terraform plan and state for env",$envName,"for release",$env:RELEASE_RELEASENAME -join ' '
git add .
git commit -m "$commitMessage"
git push

This step will be executed from the cloned repo path inside the extracted package.


In summary following are the steps.

· Download main.tf, variables.tf and the terraform plan execution PowerShell script to a folder

· Install terraform in the agent

· Clone the Terraform state and plan file repo to a folder in the path where main.tf etc. were extracted

· Execute a terraform init command from the location of main.tf to ensure plugins and modules are loaded.

· Execute the terraform plan, supplying the Terraform state path (the state file may not exist in the first execution, and that is not an issue) and the path to save the plan. These paths should be in the cloned repo.

· Push the new plan to the Azure Git repo

Once the above steps are executed, the plan to be applied can be viewed in the log of the Azure DevOps pipeline.




In the next post let’s understand how to add the manual intervention step, wait for approval of the plan by viewing it in the log as shown above, and get the same Terraform plan executed with the pipeline.

Azure Terraform Infra as Code Deployment via Custom PowerShell with Azure DevOps Pipelines – Part 2 – Execute Plan with Approval


In the previous post “Azure Terraform Infra as Code Deployment via Custom PowerShell with Azure DevOps Pipelines – Part 1 – Create Plan” we discussed how to generate a Terraform plan targeting an Azure infrastructure deployment and upload it to an Azure Git repo. The solution was implemented instead of using the Terraform task for Azure DevOps available with the Microsoft DevLabs extension, because that task has prerequisites of an Azure resource group, storage, etc., as described in the post “Why Azure DevOps Terraform Extension Task by Microsoft DevLabs to Deploy Infra to Azure Does Not Work for Me”. As the second part of the previous post, let’s explore the steps required to approve the Terraform plan and get it executed with an Azure DevOps pipeline relying on state kept in an Azure Git repo instead of a storage blob, eliminating the need for manually created Azure resources.

The Terraform plan can be viewed in the release logs as explained in the previous post. Once the plan is uploaded to an Azure Git repo, it is possible to complete the agent job and utilize an agentless job in the pipeline to use a Manual Intervention task. The Manual Intervention task allows the approver to check the plan in the log or from the uploaded plan file (the plan needs to be converted to viewable JSON using tools such as a terraform plan parser).

Once the approver is happy with the plan, they can approve it; if unhappy, they can reject further execution of the pipeline.



Once the approval is given, the next steps are executed in another hosted agent job. As a prerequisite, the package containing main.tf and variables.tf needs to be downloaded and extracted. Then Terraform should be installed on the agent machine utilizing the Microsoft DevLabs extension task to install Terraform. The next step is to clone the Azure Git repo containing the plan and, optionally, the Terraform state of the target environment (the first execution will not have state). Next, the terraform init command should be executed to get the required Terraform modules and plugins loaded. All these steps were described in detail in the post “Azure Terraform Infra as Code Deployment via Custom PowerShell with Azure DevOps Pipelines – Part 1 – Create Plan”.

Once these prerequisites are ready in the agent job, the next step is to use a simple PowerShell task to execute terraform apply using the plan. The script below can be used for applying the plan cloned from the Azure Git repo. It is required to pass the Terraform state file path (even if there is no state file in the first run, the file will be created by the apply command) and the plan file path already available in the cloned repo folder. The task working directory should be set to the extracted package folder, which contains the cloned repo content as well, in a folder.

$tfStateFilePath = '$(tfStateFilePath)';
$planFilePath = '$(planFilePath)';
# Set terraform state file path parameter
$stateParam = '-state',("'",$tfStateFilePath,"'" -join '') -join '='
# Build up terraform apply command (join with spaces between arguments)
$applyCommand = 'terraform apply',$stateParam,$planFilePath -join ' '
write-host ($applyCommand)
# Execute terraform apply
Invoke-Expression $applyCommand



Once the plan is executed successfully, the state is saved in the cloned Azure Git repo folder path. It is required to update the Azure Git repo with the Terraform state file, so that in the next release the state file can be utilized to execute the next plan with any modifications to the target environment's Azure infrastructure. How to push the changes via the Git command line in a PowerShell task was described in the post “Azure Terraform Infra as Code Deployment via Custom PowerShell with Azure DevOps Pipelines – Part 1 – Create Plan”.



Once pushed, the Terraform state will be available in the repo for the next infra run cycle. This implementation eliminates the need to keep the Terraform state in another Azure resource such as a storage account, making the only prerequisite to deploy to a new subscription (for QA, UAT, or production environment needs) the creation of an SPN to make the necessary connectivity.

Set Work Item State on Pull Request Completion


Associated work items help to identify what is completed in a given pull request. If required, while completing a pull request you could set the associated work items to be moved to the completed state. With the introduction of the new feature we are discussing in this post, you can set a work item's state (regardless of whether it is an associated work item of the pull request) to the desired closed, in progress, or resolved state based on a specification in the description.

You can use a couple of keywords in the description of the pull request to specify the state. It could be a state name, and the work item would be set to that state. If you put a state category name, the work item state will be set to the first state of the category. The table below from the Microsoft documentation shows the possible keywords to use.


Let’s try it out with an example. In one of the pull requests below you can see multiple work items are set as Fix.


Once the pull request is completed and merged, you can see the states changed as below. Notice that while all user stories are closed, the bug work item is moved to resolved, as described in the table from the MSFT docs.



The user story shows the state change to closed due to the pull request completion.



Similarly, the bug work item history shows that, due to the pull request completion, the bug was moved to the resolved state.

Trigger Deployment YAML Pipeline Once YAML Build Completed


It is now possible to implement multi-stage pipelines with YAML, facilitating the implementation of deployment pipelines in YAML as well, instead of classic release pipelines in Azure DevOps. However, having build and deployment steps all together in a single pipeline script is not ideal, as it does not look like a clean implementation from my point of view. With the possibility of triggering a YAML pipeline based on the completion of another YAML pipeline, it is possible to separate the build and deployment concerns into two different YAML scripts implementing two different pipelines. Let’s have a quick look at how we can trigger a YAML deployment pipeline based on another YAML build pipeline.

A pipeline can be set as a trigger of another pipeline using the syntax below. The simplest implementation is specifying the resource pipeline name as source (which is the name you set for the pipeline in the Azure DevOps portal). Then specify an identifier as pipeline, to use any additional resources such as build artifacts available in the source pipeline (we will discuss usage of pipeline artifacts in another pipeline with YAML in a next post). If the source pipeline is from a different team project, you can specify the source team project as well. Then setting trigger to true allows the pipeline to be triggered once the source pipeline execution completes.



resources:
  pipelines:
  - pipeline: App_CI
    project: YAML_CICD
    source: App.CI
    trigger: true

Below is a full demo pipeline using a dummy stage to execute as the Dev environment deployment based on the build pipeline completion. The build pipeline is an ASP.NET Core 3.1 web app build, which is a YAML pipeline.

Source Pipeline - App.CI



CD Pipeline – App.CD

resources:
  pipelines:
  - pipeline: App_CI
    project: YAML_CICD
    source: App.CI
    trigger: true

stages:
- stage: Dev
  displayName: Deploy to Dev
  jobs:
  - job: DeployDev
    pool:
      vmImage: 'windows-latest'
    steps:
    - pwsh: |
        Write-Host "Deploy to Dev"
        Write-Host "We do nothing here"

In addition to just triggering when the source pipeline is completed, you can enhance the trigger with filters, such as triggering the CD YAML pipeline only if the build pipeline ran on a given branch pattern. It is possible to use include as well as exclude filters for branches.

resources:
  pipelines:
  - pipeline: App_CI
    project: YAML_CICD
    source: App.CI
    trigger:
      branches:
        include:
          - version/*
          - master