
By Tommy Gingras

Last update 2023-02-19

TerraformVM

Experimenting with Terraform and Virsh

The goal was to test virsh with Terraform. I currently have 3 machines running this way. I had to install and configure the OS manually; I still need to read about the cloud-init feature.

Setup

# Download and install Terraform 1.3.8 (macOS arm64 build)
curl -O https://releases.hashicorp.com/terraform/1.3.8/terraform_1.3.8_darwin_arm64.zip
unzip terraform_1.3.8_darwin_arm64.zip
rm -f terraform_1.3.8_darwin_arm64.zip
mv terraform /usr/local/bin/terraform

terraform --version

Terraform and Virsh

If you use a private SSH key, you will need to create a dummy DNS entry in the hosts file (mydns.local => x.y.z.a).
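For example, on the machine running Terraform (192.168.1.50 below is a placeholder for the remote host's real address):

# Map the dummy name to the remote libvirt host
echo "192.168.1.50 mydns.local" | sudo tee -a /etc/hosts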

I'm using Rocky-8.6-x86_64-minimal.iso. It must exist in the remote directory. (I have far more experience with Ansible for this kind of setup; I simply wanted to give Terraform a try.)
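One way to get the ISO onto the remote host is scp; the user, host, and destination path here are assumptions matching the configuration below:

scp Rocky-8.6-x86_64-minimal.iso myuser@mydns.local:/home/myuser/Documents/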

main.tf

terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
      version = "0.7.1"
    }
  }
}

provider "libvirt" {
  alias = "remotehost"
  uri   = "qemu+ssh://myuser@mydns.local/system?no_verify=1&sshauth=privkey&no_tty=1&keyfile=/Users/myuser/.ssh/remote_rsa&known_hosts_verify=ignore"
}

## Master

# 20GB
resource "libvirt_volume" "kube_master_os" {
  provider = libvirt.remotehost
  name   = "master_os_qcow2"
  pool   = "default"
  format = "qcow2"
  size   = 20000000000
}

# 5GB
resource "libvirt_volume" "kube_master_local" {
  provider = libvirt.remotehost
  name   = "master_local_qcow2"
  pool   = "default"
  format = "qcow2"
  size   = 5000000000
}

# 10GB
resource "libvirt_volume" "kube_master_pool" {
  provider = libvirt.remotehost
  name   = "master_pool_qcow2"
  pool   = "default"
  format = "qcow2"
  size   = 10000000000
}

resource "libvirt_domain" "kube_master" {
  provider = libvirt.remotehost
  name = "kube_master"

  vcpu = 2
  memory = 2048

  autostart = true

  disk {
    file = "/home/myuser/Documents/Rocky-8.6-x86_64-minimal.iso"
  }

  disk {
    volume_id = libvirt_volume.kube_master_os.id
  }

  disk {
    volume_id = libvirt_volume.kube_master_local.id
  }

  disk {
    volume_id = libvirt_volume.kube_master_pool.id
  }

  boot_device {
    dev = [ "cdrom", "hd" ]
  }

  network_interface {
    network_name = "WAN"
  }

  graphics {
    type        = "spice"
    listen_type = "address"
    autoport    = true
  }
}

## Nodes


# 20GB
resource "libvirt_volume" "kube_node_os" {
  provider = libvirt.remotehost
  name   = "kube_node_os_qcow2_${count.index}"
  pool   = "default"
  format = "qcow2"
  size   = 20000000000

  count = 2
}

# 5GB
resource "libvirt_volume" "kube_node_local" {
  provider = libvirt.remotehost
  name   = "kube_node_local_qcow2_${count.index}"
  pool   = "default"
  format = "qcow2"
  size   = 5000000000

  count = 2
}

# 10GB
resource "libvirt_volume" "kube_node_pool" {
  provider = libvirt.remotehost
  name   = "kube_node_pool_qcow2_${count.index}"
  pool   = "default"
  format = "qcow2"
  size   = 10000000000

  count = 2
}

resource "libvirt_domain" "kube_node" {
  provider = libvirt.remotehost
  name = "kube_node_${count.index}"

  vcpu = 4
  memory = 4096

  autostart = true

  disk {
    file = "/home/myuser/Documents/Rocky-8.6-x86_64-minimal.iso"
  }

  disk {
    volume_id = libvirt_volume.kube_node_os[count.index].id
  }

  disk {
    volume_id = libvirt_volume.kube_node_local[count.index].id
  }

  disk {
    volume_id = libvirt_volume.kube_node_pool[count.index].id
  }

  boot_device {
    dev = [ "cdrom", "hd" ]
  }

  network_interface {
    network_name = "WAN"
    wait_for_lease = true
  }

  graphics {
    type        = "spice"
    listen_type = "address"
    autoport    = true
  }

  count = 2
}

Usage

terraform init
terraform plan
terraform apply

Once these commands have run, the OS is installed, and the CDROM is disconnected, Terraform detects changes and seems to want to tear everything down and rebuild. I didn't try to apply again after that.
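One workaround I haven't tried yet, but which is standard Terraform, is a lifecycle block on the domain resources so Terraform ignores drift on the disks and boot order after the manual install:

resource "libvirt_domain" "kube_master" {
  # ... same arguments as above ...

  # Ignore post-install drift (ejected CDROM, changed boot order)
  lifecycle {
    ignore_changes = [disk, boot_device]
  }
}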


Conclusion

It went well; I spawned the 3 machines correctly and the installation/configuration went fine.

In the past I did the same thing "manually" with custom commands, which was probably not the most efficient way to use the virsh CLI, but it was more flexible and powerful in the end.

The next step is to figure out cloud-init and try to reproduce what I had with Ansible.
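As a starting point, here is a minimal sketch of what that might look like with this provider. I haven't tested it yet; the base image URL and the cloud_init.cfg file are assumptions:

# Base volume pulled from a Rocky cloud image instead of a manual ISO install (URL is an assumption)
resource "libvirt_volume" "rocky_base" {
  provider = libvirt.remotehost
  name     = "rocky_base_qcow2"
  pool     = "default"
  format   = "qcow2"
  source   = "https://download.rockylinux.org/pub/rocky/8/images/x86_64/Rocky-8-GenericCloud.latest.x86_64.qcow2"
}

# Per-VM OS disk backed by the base image
resource "libvirt_volume" "master_os" {
  provider       = libvirt.remotehost
  name           = "master_os_qcow2"
  pool           = "default"
  format         = "qcow2"
  base_volume_id = libvirt_volume.rocky_base.id
  size           = 20000000000
}

# cloud-init disk built from a local config file (cloud_init.cfg is hypothetical)
resource "libvirt_cloudinit_disk" "commoninit" {
  provider  = libvirt.remotehost
  name      = "commoninit.iso"
  pool      = "default"
  user_data = file("${path.module}/cloud_init.cfg")
}

resource "libvirt_domain" "kube_master" {
  provider  = libvirt.remotehost
  name      = "kube_master"
  vcpu      = 2
  memory    = 2048
  cloudinit = libvirt_cloudinit_disk.commoninit.id

  disk {
    volume_id = libvirt_volume.master_os.id
  }

  network_interface {
    network_name = "WAN"
  }
}

With the cloud image and the cloudinit attribute, the ISO disk and the "cdrom" boot entry should no longer be needed, which should also avoid the drift problem described above.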