Code as Code – Managing Azure Automation with Terraform

Introduction

Azure Automation is a robust tool that provides administrators with the ability to execute tasks either on Azure or on-premises (through a hybrid worker). If you’re familiar with Azure Automation, you’ve likely configured a lot of settings, including runbooks, schedules, RBAC permissions, variables and PowerShell modules. In certain organizations, these configurations can amount to hundreds of different settings.

However, a few questions might arise at some point: How do I back up all these configurations? How can I control which version of code is executed in my runbooks? Or, how can I revert to a specific version of my configuration? The answer lies in Infrastructure as Code (IaC).

IaC as a solution for managing your infrastructure:

Terraform is an IaC tool that enables admins to manage resources on Azure (and on many other platforms), and it is great at managing your Azure resources. In combination with Git, you can track changes in your infrastructure (though Git is not in scope for this post). Moreover, Terraform can even track the changes in your Automation runbook code.

Whether you’re using Terraform or not, this blog post has something for you.

All the code you will see in this post is posted on GitHub.

What we will be covering in this post:

Advantages of IaC

Prerequisites (if you would like to follow along)

Resource Group & Automation Account

Runbooks (and runbook code)

Schedules

Modules

Conclusion

Advantages of IaC:

Leveraging IaC practices to manage your Azure environment offers many benefits. In the context of this post, the main advantages include:

  • Configuration backup – The Azure Automation configuration, runbooks, runbook code, schedules, PowerShell modules, even the RBAC permissions for the Automation Account’s managed identity will be defined in code. That means that if anything ever happens to your Azure Automation resource you can redeploy everything from your Terraform code.
  • Documentation – The entire solution is documented in the Terraform code. And that can be further enhanced with comments and wiki entries.
  • Versioning – Although not covered in this post, when throwing Git into the mix you gain the ability to version-control your Terraform code together with your runbook code.
  • Fast and reliable deployments – Need to add a runbook? Just reuse the same code and change the parameters. Need to change multiple different schedules? Do it all from one place.

With these benefits in mind, let’s delve into the mechanics of it.

Prerequisites

This blog post also provides you with step-by-step instructions if you want to follow along. Look for blue text boxes like this one.

Please prepare the following:

  • An Azure subscription. If you don’t have one, you can create one here
    • Owner or contributor rights on your subscription
  • Download and install Terraform
  • Download and install the Azure CLI
  • Recommended:
  • Once the above prerequisites are in place, run az login from a command prompt to log in to your Azure account
  • Create a new folder to contain your Terraform configuration
    • Create a new file and name it main.tf

Resource Group & Automation Account

The following configuration defines the azurerm provider in addition to:

  • A local variable for defining resource location
  • Input variables for receiving the names for the resource group and automation account
  • The resource group and automation account resources.
provider "azurerm" {
    features {}
}

terraform {
    required_providers {
        azurerm = {
            source  = "hashicorp/azurerm"
            version = "~>3"
        }
    }
}

locals {
    location = "westeurope"
}

variable "resource_group_name" {
    type = string
    default = "RG-AutomationAccount"
}

variable "automation_account_name" {
    type = string
    default = "AutomationAccount"
}

resource "azurerm_resource_group" "rg" {
    name = var.resource_group_name
    location = local.location
}

resource "azurerm_automation_account" "aa" {
      name = var.automation_account_name
      location = local.location
      resource_group_name = azurerm_resource_group.rg.name
      identity {
        type = "SystemAssigned"
      }
      sku_name = "Basic"
}

Step by step: Add the above code to the main.tf file you created. In a command prompt, navigate to the folder that holds your .tf files and run terraform init to initialize the Terraform configuration. Run terraform plan -out tfplan, review the plan, and run terraform apply tfplan to apply it.

After running plan and apply, we should have a resource group and an automation account:
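
By the way, one of the advantages listed earlier was that even the RBAC permissions for the automation account’s managed identity can live in this same configuration. Here is a minimal, hedged sketch of what that could look like, assuming a pre-existing storage account that a runbook will later access (the storage account and resource group names here are hypothetical, and this block is optional for following along):

data "azurerm_storage_account" "demo" {
    # Hypothetical: an existing storage account that a runbook will clean up later in this post
    name                = "demostorageaccount"
    resource_group_name = "RG-Storage"
}

resource "azurerm_role_assignment" "runbook_blob_access" {
    # Grants the automation account's system-assigned identity data access to the blobs
    scope                = data.azurerm_storage_account.demo.id
    role_definition_name = "Storage Blob Data Contributor"
    principal_id         = azurerm_automation_account.aa.identity[0].principal_id
}

Keep in mind that creating role assignments requires sufficient permissions (for example, Owner or User Access Administrator) on the target scope.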

Runbooks

Nice! Let’s add a PowerShell runbook. We are going to use a script that deletes old blob versions in a storage account:

# Connect with managed identity
Connect-AzAccount -Identity
# Define variables
$saName = "demostorageaccount"
$containerName = "state"
$fileName = "terraform.tfstate"
$blobsToKeep = 10

# Get storage context
$stg = New-AzStorageContext -StorageAccountName $saName -UseConnectedAccount

# Get blob versions
$blobVersions = Get-AzStorageBlob -Container $containerName -Context $stg -IncludeVersion | `
Where-Object {$_.Name -eq $fileName} | Sort-Object LastModified -Descending

# Delete old blob versions
if ($blobVersions.Count -gt $blobsToKeep)
{
    for ($i = $blobsToKeep; $i -lt $blobVersions.Count; $i++) {
        $blobVersions[$i] | Remove-AzStorageBlob
        Write-Output $i
    }
}

Step by step: Create a new file in your folder, name it DeleteOldBlobVersions.ps1, and paste the above code inside. Note – this script removes old blob versions from a storage account. You can test it if you want; for the purpose of this exercise, the script’s contents don’t matter.

Now let’s create a runbook with that script’s content. Add the following to your Terraform configuration:

data "local_file" "ps_content" {
    filename = "DeleteOldBlobVersions.ps1"
}

resource "azurerm_automation_runbook" "ps" {
    name = "RemoveOldBlobVersions"
    location = azurerm_automation_account.aa.location
    resource_group_name = azurerm_resource_group.rg.name
    automation_account_name = azurerm_automation_account.aa.name
    runbook_type = "PowerShell"
    content = data.local_file.ps_content.content
    log_progress = "true"
    log_verbose = "true"
}

The cool part here is the local_file data source. It loads the entire content of the PowerShell script into Terraform and, furthermore, tracks it in the Terraform state.
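
As a side note, if you prefer not to use a data source at all, Terraform’s built-in file() function can read the script directly. Here is a hedged sketch of the same runbook resource using it (an alternative form of the same resource, not an additional one – don’t add both):

resource "azurerm_automation_runbook" "ps_alternative" {
    name                    = "RemoveOldBlobVersions"
    location                = azurerm_automation_account.aa.location
    resource_group_name     = azurerm_resource_group.rg.name
    automation_account_name = azurerm_automation_account.aa.name
    runbook_type            = "PowerShell"
    # file() reads the script from disk; changes to the file still show up as a diff on content
    content                 = file("${path.module}/DeleteOldBlobVersions.ps1")
    log_progress            = true
    log_verbose             = true
}

The rest of this post sticks with the local_file data source approach shown above.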

Step by step: Add the above code to your main.tf file. Run terraform plan -out tfplan and review the plan.

After running terraform plan, we can see that the script’s content is loaded into the plan:

Step by step: Run terraform apply tfplan

After running terraform apply, we can see that the runbook has been created:

OK, now we have an automation account and a PowerShell runbook created with Terraform. That’s nice. But the true power of infrastructure as code is managing your infrastructure, not only deploying it. So, let’s assume we need to make a change in our PowerShell runbook: our initial assumption that keeping 10 blob versions wasn’t too many turned out to be wrong, and we can actually reduce some cost by keeping only 5 versions. Since our entire Azure Automation solution is managed as code with Terraform, all we have to do is change that one small bit and we are good to go:

Step by step: Modify the DeleteOldBlobVersions.ps1 script by changing $blobsToKeep from 10 to 5. Run terraform plan -out tfplan

Let’s plan:

Since Terraform tracks the contents of the PowerShell script, it can identify the lines that changed and update the runbook accordingly.

Step by step: Run terraform apply tfplan

After applying:

Schedules

Now let’s add a daily schedule to kick off this runbook every day at midnight:

resource "azurerm_automation_schedule" "schedule" {
    name = "daily"
    automation_account_name = azurerm_automation_account.aa.name
    resource_group_name = azurerm_automation_account.aa.resource_group_name
    frequency                = "Day"
    interval                 = 1
    timezone                 = "Europe/Paris"
    start_time               = "2023-05-03T00:00:00Z"
    description              = "Daily at midnight"
}

Good, but there is a small issue here: the start_time property needs a point in time that is at least 5 minutes in the future. Also, it is hard-coded, and we don’t want to edit it every time we run a new schedule deployment. So, we can use some parsing and a few Terraform expressions to dynamically calculate the “next midnight” (wasn’t there a movie by that name?). Anyway:

locals {
  time_of_day = "00:00:00+02:00"
  now = timestamp()
  tomorrow = timeadd(timestamp(), "24h")
  tomorrow_date = substr(local.tomorrow, 0, 11)
  midnight = "${local.tomorrow_date}${local.time_of_day}"
  time_zone = "Europe/Paris"
  expiry_time = "9999-12-31T23:59:59+02:00"
}

Step by step: Add the above code block to your main.tf file
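
If you want to sanity-check what these locals evaluate to before wiring them into the schedule, an optional, throwaway output can help (safe to remove afterwards):

output "next_midnight" {
    # Surfaces the computed start time so you can eyeball it in the plan/apply output
    value = local.midnight
}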

You might be wondering what is going on here. So, let’s go over this part step by step:

  • The start_time property of the “azurerm_automation_schedule” resource expects an RFC3339-formatted date, which looks like this:
    Year-Month-DayTHour:Minute:Second+TimeZoneOffset
  • We already know the right part (the time): that’s the time_of_day local
  • We only need the left part (the date). To get it, we add 24 hours to the current timestamp and save the result in the tomorrow local
  • Then we select only the date portion of the tomorrow timestamp and save it in the tomorrow_date local
  • Now all that’s left is to put the two together, and that’s what we do in the midnight local, which should look something like 2023-05-29T00:00:00+02:00
  • The time_zone local defines the time zone for the schedule; for all possible values, browse here
  • The expiry_time local sets an expiry on the schedule. In the example above, we are effectively setting the schedule to never expire.

Please note that the time_zone, the offset at the end of time_of_day, and the offset at the end of expiry_time should all match. This will prevent your apply from failing on an invalid expiry time, and it also makes sure that the schedule fires at the time you expect.
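
For what it’s worth, the substr() trick could also be expressed with Terraform’s formatdate() function, which makes the intent a bit more explicit. Here is a hedged sketch of an equivalent local (the functions are standard Terraform, but the rest of this post keeps using the original midnight local):

locals {
    # Same idea as above: tomorrow's date plus the fixed time of day and offset
    midnight_alt = format(
        "%sT%s",
        formatdate("YYYY-MM-DD", timeadd(timestamp(), "24h")),
        "00:00:00+02:00"
    )
}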

Let’s use these locals in our schedule definition (replacing the hard-coded values from the earlier block):

resource "azurerm_automation_schedule" "schedule" {
    name = "daily"
    automation_account_name = azurerm_automation_account.aa.name
    resource_group_name = azurerm_automation_account.aa.resource_group_name
    frequency               = "Day"
    interval                = 1
    timezone                = local.time_zone
    start_time              = local.midnight
    expiry_time             = local.expiry_time
    description             = "Daily at midnight"
}

Step by step: Add the above code block to your main.tf file

Let’s link our newly created schedule to the runbook by adding the azurerm_automation_job_schedule resource to our code:

resource "azurerm_automation_job_schedule" "schedule_link" {
    resource_group_name     = azurerm_automation_account.aa.resource_group_name
    automation_account_name = azurerm_automation_account.aa.name
    schedule_name           = azurerm_automation_schedule.schedule.name
    runbook_name            = azurerm_automation_runbook.ps.name
}

Step by step: Add the above code block to your main.tf file. Run terraform plan -out tfplan. Review the plan and run terraform apply tfplan

After planning and applying:
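
One more thing worth knowing about azurerm_automation_job_schedule: if your runbook accepted input parameters (ours doesn’t), they could be passed through the link as well. A hedged sketch with a hypothetical runbook parameter named containername (the provider expects the parameter keys in lowercase):

resource "azurerm_automation_job_schedule" "schedule_link_with_params" {
    resource_group_name     = azurerm_automation_account.aa.resource_group_name
    automation_account_name = azurerm_automation_account.aa.name
    schedule_name           = azurerm_automation_schedule.schedule.name
    runbook_name            = azurerm_automation_runbook.ps.name
    # Hypothetical: only valid if the runbook script actually declares a matching parameter
    parameters = {
        containername = "state"
    }
}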

Modules

Azure Automation supports adding and updating PowerShell modules. This functionality becomes critical when you need a PowerShell module that doesn’t come preloaded into the automation account. To illustrate, consider the “Microsoft.Graph.Users” PowerShell module, which, at the time of writing this blog, isn’t included in the default set. Let’s add resources to our Terraform code that install it and its dependencies. First, we need to add the “Microsoft.Graph.Authentication” module, and then we can add the “Microsoft.Graph.Users” module, since the latter depends on the former:

resource "azurerm_automation_module" "msGraphAuthModule" {
  name = "Microsoft.Graph.Authentication"
  automation_account_name = azurerm_automation_account.aa.name
  resource_group_name = azurerm_resource_group.rg.name
  module_link {
    uri = "https://devopsgallerystorage.blob.core.windows.net/packages/microsoft.graph.authentication.1.23.0.nupkg"
  }
}

resource "azurerm_automation_module" "msGraphModule" {
  name = "Microsoft.Graph.Users"
  automation_account_name = azurerm_automation_account.aa.name
  resource_group_name = azurerm_resource_group.rg.name
  module_link {
    uri = "https://devopsgallerystorage.blob.core.windows.net/packages/microsoft.graph.users.1.23.0.nupkg"
  }
  depends_on = [
    azurerm_automation_module.msGraphAuthModule
  ]
}

Step by step: Add the above code to your main.tf file. Run terraform plan -out tfplan, review the plan, and run terraform apply tfplan

Let’s go through the key parts of this code:

  • The module_link property points to a storage account containing the nupkg file for the module. Where do I get the link from? To get a module from the gallery (as you would using the Azure Portal), you can follow the steps here to find the appropriate link to the gallery’s storage account (see the hedged sketch after this list for an alternative URL form).
  • The depends_on meta-argument defines a list of resources to wait for before creating this one. As mentioned above, the “Microsoft.Graph.Users” module depends on the “Microsoft.Graph.Authentication” module and will fail to install if it isn’t there.
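
Regarding that link: the URIs above point at the gallery’s backing storage account. An alternative form you may come across is the PowerShell Gallery API package endpoint, which redirects to the same nupkg. Here is a hedged sketch of the authentication module using that form (an alternative to the earlier resource, not an addition):

resource "azurerm_automation_module" "msGraphAuthModule_alt" {
  name = "Microsoft.Graph.Authentication"
  automation_account_name = azurerm_automation_account.aa.name
  resource_group_name = azurerm_resource_group.rg.name
  module_link {
    # The gallery's package endpoint redirects to the underlying nupkg for this version
    uri = "https://www.powershellgallery.com/api/v2/package/Microsoft.Graph.Authentication/1.23.0"
  }
}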

Let’s add this to our code and run terraform plan and terraform apply to deploy the modules:

Looks good. However, this blog post is about managing your infra with IaC, remember? So, let’s say we want to update our modules. All we have to do is change the URL in our code:

resource "azurerm_automation_module" "msGraphAuthModule" {
    name = "Microsoft.Graph.Authentication"
    automation_account_name = azurerm_automation_account.aa.name
    resource_group_name = azurerm_resource_group.rg.name
    module_link {
      uri = "https://devopsgallerystorage.blob.core.windows.net/packages/microsoft.graph.authentication.1.27.0.nupkg"
    }
}

resource "azurerm_automation_module" "msGraphModule" {
    name = "Microsoft.Graph.Users"
    automation_account_name = azurerm_automation_account.aa.name
    resource_group_name = azurerm_resource_group.rg.name
    module_link {
      uri = "https://devopsgallerystorage.blob.core.windows.net/packages/microsoft.graph.users.1.27.0.nupkg"
    }
    depends_on = [
      azurerm_automation_module.msGraphAuthModule
    ]
}

Step by step: Change the versions of the modules in your code following the above example (or newer versions), run terraform plan -out tfplan, review the plan, and run terraform apply tfplan

After terraform plan:

During the apply we can see that Azure Automation is updating the modules:

Conclusion

In this blog post, we’ve delved into the advantages of using Infrastructure as Code (IaC) and provided an in-depth example of how Terraform can be used to manage Automation Accounts in Azure. While we haven’t discussed it in this particular post, the use of Git, in conjunction with IaC tools, offers additional benefits such as infrastructure versioning, improved change management, and the ability to easily roll back configurations.

There is much more we can talk about, but this post was long enough 😅. In an upcoming post we will talk about how we can further leverage Terraform to manage Azure Automation.

I hope this post has been insightful and beneficial for you. Your feedback is always welcome, so feel free to share your thoughts in the comments section below.

Until next time, thank you for reading!

Itamar

Note: You do have the option to sync your runbook code from version control, and that’s a good solution. But it might not fit your requirements if you are using a source control system other than GitHub or Azure DevOps, or if you cannot connect your source control system directly to Azure Automation.
