Azure DevOps Homelab – Automation Frenzy

In this article I will outline how to set up your homelab for automatic deployment and configuration of servers and containers.

The basic idea is that you write Terraform or Ansible code in your text editor on a client device, push it to a Git repo in the cloud, and then, like magic, your changes get applied and whatever you wanted is created.

To achieve this, we need somewhere for the code to live (in the cloud) and somewhere for the code to execute. Where it executes is up to us: a local server at home, something in the cloud, or even the built-in runners that come with Azure DevOps for free!

And yeah, we’re going to use Azure DevOps for this. It is like GitHub but more enterprise-focused, and it supports connecting self-hosted agents to execute code. See my last post in this series for an overview of how to set up that piece. For the code runner, I’m using a server rented from Digital Ocean.

The point of having both the code repo and the runner in the cloud is that my house can go offline and I’ll still have deployment capabilities. However, that introduces the problem of connecting the cloud runner to my home network.

For that problem I settled on Tailscale. It is a service, free for personal use, that gives you a flat mesh VPN network. You can also define ACL rules to determine what can talk to what, and how.
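As an illustration, a minimal Tailscale ACL that only lets the runner reach the Proxmox host on the API port could look like this. The tag names are hypothetical; adjust them to your own tailnet (Tailscale's ACL format is HuJSON, so comments are allowed):

```json
{
  "tagOwners": {
    "tag:runner":  ["autogroup:admin"],
    "tag:proxmox": ["autogroup:admin"]
  },
  "acls": [
    // Allow the Digital Ocean runner to reach the Proxmox web API only.
    { "action": "accept", "src": ["tag:runner"], "dst": ["tag:proxmox:8006"] }
  ]
}
```

Locking the destination down to port 8006 means a compromised runner can talk to the Proxmox API but nothing else on the home network.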

So with all this in mind, here is a diagram of how we will approach this.

So going from the start to finish:

  1. Code is pushed to repo from code editor on iPad
  2. Azure DevOps receives the code
  3. Azure DevOps passes the code off to the runner agent
  4. The runner agent, on Digital Ocean, connects to my home server over Tailscale
  5. My server receives the Terraform / Ansible commands and executes them

Everything after step 1 happens automatically.

Here is what you’ll need to do this:

  1. A Microsoft 365 Business license
  2. An Azure DevOps instance
  3. A server on Digital Ocean connected to that DevOps instance (see previous post)
  4. A Proxmox server on hardware wherever you want
  5. A Tailscale account, with both of the servers above joined to your tailnet

On the Proxmox server, we’re going to use a template VM that will be cloned to create the new VMs. If you prefer to configure them with cloud-init, that is an option too.
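If you don’t have a template yet, the usual recipe is to import an Ubuntu cloud image and convert it into one. Here is a sketch to run on the Proxmox host itself; the VM ID 9000, the image filename, and the local-lvm storage name are assumptions for your environment:

```shell
#!/bin/sh
# Sketch: build an Ubuntu 22.04 cloud-init template on the Proxmox host.
# VMID 9000 and the "local-lvm" storage are assumptions - adjust as needed.
VMID=9000
IMG=jammy-server-cloudimg-amd64.img

if command -v qm >/dev/null 2>&1; then
  # Fetch the Ubuntu cloud image if it isn't already present.
  wget -nc "https://cloud-images.ubuntu.com/jammy/current/${IMG}"
  # Create an empty VM, import the disk, and attach a cloud-init drive.
  qm create "$VMID" --name Ubuntu2204base --memory 2048 --net0 virtio,bridge=vmbr0
  qm importdisk "$VMID" "$IMG" local-lvm
  qm set "$VMID" --scsihw virtio-scsi-pci --scsi0 "local-lvm:vm-${VMID}-disk-0"
  qm set "$VMID" --ide2 local-lvm:cloudinit --boot c --bootdisk scsi0 --serial0 socket
  qm template "$VMID"   # convert the VM into a reusable template
  status="template created"
else
  # Not on the Proxmox host - do nothing destructive.
  status="qm not found - run this on the Proxmox host"
fi
echo "$status"
```

The template name Ubuntu2204base here matches the clone target used in the Terraform plan later on.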

Setting up Proxmox

We will need to create a new API token in Proxmox. This token needs permission to create and destroy (or just create, if you want) virtual machines and containers.

To do this, go to your Proxmox admin portal and click on Datacenter, then navigate to Permissions → API Tokens.

Create a new token and give it the appropriate permissions. I’m giving mine full access, but in a production scenario you would assign the minimum required privileges.

Make sure to note down the token secret, because it is only displayed once.
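Once created, you can sanity-check the token from any machine that can reach the host. Proxmox expects it in an Authorization header of the form PVEAPIToken=&lt;token id&gt;=&lt;secret&gt;; the values below are placeholders:

```shell
# Hypothetical token ID and secret - substitute your own.
TOKEN_ID='root@pam!doco'
TOKEN_SECRET='xxxxxxxx-xxxx-xxxx'

# Build the header Proxmox expects for token authentication.
AUTH="Authorization: PVEAPIToken=${TOKEN_ID}=${TOKEN_SECRET}"
echo "$AUTH"

# Uncomment to query the version endpoint (-k skips TLS verification,
# which you'll need if Proxmox is using a self-signed certificate):
# curl -k -H "$AUTH" "https://<proxmox-ip>:8006/api2/json/version"
```

If the curl call returns a JSON version object instead of a 401, the token and its permissions are working.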

Pipeline Creation

Next we need to set up a pipeline in Azure DevOps. This will require us to somehow get the secret token to the code so it can use it to authenticate, and we need to do this securely: we’re not putting the token in the code.

To achieve this we will create a secret stored on the pipeline within Azure DevOps. This secret gets passed down from the pipeline to the code as a variable, which we can then reference when the time is right.
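One thing worth knowing: Azure DevOps never exposes secret variables to scripts automatically, so if you ever swap the Terraform tasks below for plain script steps, the secret has to be mapped in explicitly. A sketch of that pattern, reusing the same proxmoxapikey secret name:

```yaml
steps:
- script: terraform plan -var "proxmoxapikey=$TF_VAR_proxmoxapikey"
  env:
    # Secret variables must be mapped explicitly; they are not
    # exposed to script steps by default.
    TF_VAR_proxmoxapikey: $(proxmoxapikey)
```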

Doing this is quite simple. Create a new pipeline in Azure DevOps, select Azure Repos Git as your repo source, and create a new YAML file. It will need to look like this.

trigger:
  branches:
    include:
    - main
  paths:
    include:
    - Deploy/Terraform

pool:
  name: default
  demands:
  - agent.name -equals DO-CO2

variables:
  TF_VAR_proxmoxapikey: $(proxmoxapikey)

steps:

- task: TerraformCLI@0
  displayName: 'Terraform init'
  inputs:
    command: 'init'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Deploy/Terraform'

- task: TerraformCLI@0
  displayName: 'Terraform plan'
  inputs:
    command: 'plan'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Deploy/Terraform'
    commandOptions: '-var "proxmoxapikey=$(TF_VAR_proxmoxapikey)"'

- task: TerraformCLI@0
  displayName: 'Terraform apply'
  inputs:
    command: 'apply'
    workingDirectory: '$(System.DefaultWorkingDirectory)/Deploy/Terraform'
    commandOptions: '-var "proxmoxapikey=$(TF_VAR_proxmoxapikey)"'

The first part of this code defines the trigger that will cause the pipeline to start. Under that, it defines what pool and agent to execute the code on. In our case, we’re using the agent we set up in the previous tutorial.

trigger:
  branches:
    include:
    - main
  paths:
    include:
    - Deploy/Terraform

pool:
  name: default
  demands:
  - agent.name -equals DO-CO2

The variables block is where we take the secret variable stored on the pipeline and hand it to the job. One gotcha: Azure DevOps uppercases variable names when exporting them as environment variables, while Terraform’s TF_VAR_ prefix matching is case-sensitive, which is why the Terraform tasks also pass the value explicitly with -var rather than relying on the environment variable alone.

variables:
  TF_VAR_proxmoxapikey: $(proxmoxapikey)

The rest of the code is the Terraform tasks (init, plan, apply) that actually create the virtual machine on the server.

In the upper right-hand corner of the new pipeline screen you will see a Variables button. Click it and create a new secret variable named proxmoxapikey containing your token secret.

The Terraform Plan

Now at this point we have a code repo, a pipeline to execute the code, an agent to run the code against, and a server to house the new virtual machines we will create. Remember, we’re creating a new VM from a template on the Proxmox server at my house.

In your DevOps repo, create a new .tf file with the following content.

terraform {
  required_providers {
    proxmox = {
      source = "Telmate/proxmox"
      version = "2.9.14"
    }
  }
}

variable "proxmoxapikey" {
  type      = string
  default   = ""
  sensitive = true # keep the token secret out of plan output
}

# Define the provider and the API credentials
provider "proxmox" {
  pm_api_url          = "https://100.75.68.156:8006/api2/json"
  pm_api_token_id     = "root@pam!doco"
  pm_api_token_secret = var.proxmoxapikey
  pm_tls_insecure     = true # needed if Proxmox serves a self-signed certificate
}


# Create a VM by cloning the template
resource "proxmox_vm_qemu" "vm" {
  name = "my-vm"
  target_node = "theark"
  clone = "Ubuntu2204base"
  os_type = "cloud-init"
  cores = 2
  sockets = 1
  memory = 4096
  network {
    model = "virtio"
    bridge = "vmbr0"
  }
  # Use the default disk info from the template
  disk {
    type = "scsi"
    storage = "local-lvm"
    size = "32G"
    format = "raw"
  }
}

This code is fairly self-explanatory: it clones the Ubuntu2204base template into a new two-core, 4 GB VM named my-vm on the node theark.

Once you push this code to the repo, your pipeline should fire. Note that the runner agent is connected to the same Tailscale network my home server is a member of; that is how the two connect, and it’s why the pm_api_url above uses the server’s Tailscale IP. You will need to specify your own host’s IP address in your Terraform code, or find a way to pass it down as a variable.
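For the latter option, the same trick used for the API token works for the host address too. A sketch, assuming a pipeline variable named proxmoxhost holding the Tailscale IP, which would replace the provider block shown earlier:

```hcl
# Hypothetical variable so the Tailscale IP isn't hard-coded in the provider.
variable "proxmoxhost" {
  type    = string
  default = "100.75.68.156" # Tailscale IP of the Proxmox host
}

provider "proxmox" {
  pm_api_url          = "https://${var.proxmoxhost}:8006/api2/json"
  pm_api_token_id     = "root@pam!doco"
  pm_api_token_secret = var.proxmoxapikey
}
```

You would then pass it from the pipeline the same way as the token, e.g. with -var "proxmoxhost=..." in the task's commandOptions.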

Conclusion

This was the first part of a look into how I’m integrating Azure DevOps into my homelab. I hope you enjoyed the topic, and if you’ve been following along at home with your own lab, feel free to send me an email if you have any questions!

patrick@malware.ink