
Terraform and OpenStack: Boot an instance from CD-ROM

In the spirit of "this took me way too long", here's how to boot an instance with a CD-ROM on OpenStack, using Terraform.

Why would I need this?

In a perfect world, I have templates to bootstrap instances, meaning the instances are ready to go when booted. I customise them with cloud-init and let them do all kinds of cool (or necessary) stuff like configuring the network, setting hostnames, adding user accounts and then maybe joining them to a cluster.

But I don't live in a perfect world. Still, I try to automate as much as I can, so I don't have to remember any of it.

Use-case

The use-case is the installation (or setup) of a Sophos firewall. The vendor provides an image which can be booted, after which an installer and a setup wizard have to be completed to finish the installation process.

Using Terraform

Let's look at the code first - the following is used to create the instance:

resource "openstack_compute_instance_v2" "vpn_host" {
  depends_on = [
    data.openstack_images_image_v2.vpn_image
  ]

  name        = "vpn"
  flavor_name = "dynamic-M1"

  security_groups = [
    "default",
  ]

  # boot device
  block_device {
    source_type           = "blank"
    volume_size           = "100"
    boot_index            = 0
    destination_type      = "volume"
    delete_on_termination = false
  }

  # cd-rom
  block_device {
    uuid             = data.openstack_images_image_v2.vpn_image.id
    source_type      = "image"
    destination_type = "volume"
    boot_index       = 1
    volume_size      = 1
    device_type      = "cdrom"
  }

  network {
    port = openstack_networking_port_v2.vpn_port.id
  }

  network {
    uuid = data.openstack_networking_network_v2.public_network.id
  }
}

I am omitting some code, but let's walk through this.

How to CD-ROM (block_device)

I am approaching this in reverse order — let me talk about the second block_device block first.

This is the bit that took me the longest, because I didn't know how disk_bus and device_type play together, or which of the two is needed.

The moral of the story is: if the Terraform provider documentation is too vague, read OpenStack's documentation on block device mapping instead. Or, in your case, you are reading my blog post! :-)

To continue: the image of the Sophos firewall is referenced by data.openstack_images_image_v2.vpn_image.id. Therefore, I have a data source which pulls the image from OpenStack (or rather, Glance):

data "openstack_images_image_v2" "vpn_image" {
  name = "fancy readable name of the ISO here"
}
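
Side note: if several images in Glance share that name, the lookup can be narrowed. A minimal variant, assuming the provider supports the most_recent argument (check your provider version's documentation):

data "openstack_images_image_v2" "vpn_image" {
  name        = "fancy readable name of the ISO here"
  # if multiple images match the name, pick the newest one
  most_recent = true
}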

During terraform apply, Terraform will try to resolve it. If successful, the result is used to create a (Cinder) volume from it. The volume size of 1 GB is what OpenStack suggested when I did this via the fancy web UI, so I used it in my Terraform setup as well.

The important part of the block_device block is device_type = "cdrom". Without it OpenStack will refuse to boot from the volume even though we provide a boot_index.

Small caveat: I had to add a depends_on, as Terraform's dependency graph would not wait for the data source to resolve during apply.

Boot device

Last but not least: I also need a bootable root partition to install to, and that's the first block_device block in my code snippet.

If all goes well, the provisioning is as follows:

  1. OpenStack starts the instance
  2. It discovers that the first disk is not bootable (yet)
  3. It proceeds with the CD-ROM (attached to /dev/hda in my case).

After the installation is finished, subsequent reboots of the instance always use the first disk. This is similar to dropping a CD into a (real) server, installing from the CD and leaving it in the drive at the data center (just in case). :-)
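
To double-check what actually got created and attached, the openstack CLI can help. A quick sketch, assuming the CLI is configured and the instance is named vpn as above:

# show instance details, including the attached volumes
$ openstack server show vpn
# list volumes to see the boot disk and the CD-ROM volume
$ openstack volume list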

The rest

The rest is hopefully straightforward.

I defined two other networks (in another Terraform run) which are used via data sources.

One is used as a port (for fixed IP allocation/configuration, openstack_networking_port_v2.vpn_port.id), and the other provides the VPN instance with an additional reachable IP for dial-in and remote management from the public network (via data.openstack_networking_network_v2.public_network.id).
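
For completeness, here is a sketch of what those pieces might look like. The network names, subnet ID and fixed IP are placeholders, since the real ones live in that other Terraform run:

data "openstack_networking_network_v2" "vpn_network" {
  name = "internal network name here"
}

data "openstack_networking_network_v2" "public_network" {
  name = "public network name here"
}

resource "openstack_networking_port_v2" "vpn_port" {
  network_id = data.openstack_networking_network_v2.vpn_network.id

  # fixed IP allocation for the VPN instance
  fixed_ip {
    subnet_id  = "subnet UUID of the internal network here"
    ip_address = "172.16.0.10"
  }
}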

Fin

Thanks for reading.

NetworkManager (for resolv.conf and firewalld) on CentOS 7

As I am spiralling into Linux server administration, there's certainly a lot to learn. A lot of it leaves me wanting BSD, but since that's not an option, ... here we go.

NetworkManager

NetworkManager on Linux (or CentOS specifically) manages the network. Whatever content I found (blog posts, knowledge-base articles) usually suggests that you uninstall it first. A common problem is that people are unable to manage /etc/resolv.conf, because changes they make to that file get overwritten again.

Internals

The NetworkManager gets everything it needs from a few configuration files.

These are located in: /etc/sysconfig/network-scripts/

They're easy enough to manage with automation (Ansible, Chef, Salt), and here's how you get a grip on DNS.

As an example, the host I'm dealing with has an eth0 device. Its configuration is located in that directory, in an ifcfg-eth0 file, and its contents are the following:

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
NAME="eth0"
UUID="63b28d0a-41f0-4e3a-bf30-c05c98772dbb"
DEVICE="eth0"
ONBOOT="yes"
IPADDR="172.21.0.12"
PREFIX="24"
GATEWAY="172.21.0.1"
IPV6_PRIVACY="no"
ZONE=public
DNS1="172.21.0.1"

Most of this speaks for itself, but there are a few titbits in here.

Managing DNS and resolv.conf

In order to (statically) manage the nameservers used by this host, I put the following into the file:

DNS1="172.21.0.1"

If I needed multiple DNS servers (e.g. as fallback):

DNS1="172.21.0.1"
DNS2="172.21.0.2"
DNS3="172.21.0.3"

In order to apply this, you can use a hammer and reboot — or use your best friend (sarcasm) systemd:

$ systemctl restart NetworkManager
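
If everything worked, NetworkManager regenerates /etc/resolv.conf with the configured nameservers, roughly like this:

$ cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 172.21.0.1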

Done!

Introducing firewalld

firewalld is another interesting component. It breaks your firewall down into zones, services and sources. (And a few other things.) It's not half bad, even though pf is still superior. Its biggest advantage is that it hides iptables from me (mostly). And it allows me to define rules in structured XML, which is still easier to read and assert on than iptables -nL.
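
For illustration: the zone definitions are plain XML files, with the shipped defaults under /usr/lib/firewalld/zones/ and local overrides under /etc/firewalld/zones/. A trimmed-down public.xml looks roughly like this:

<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>Public</short>
  <description>For use in public areas.</description>
  <service name="ssh"/>
  <service name="dhcpv6-client"/>
</zone>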

In order to, for example, put my eth0 device into the public zone, put this into ifcfg-eth0:

ZONE=public

This also implies that I can't put the device into another zone at the same time; that would conflict. But this makes sense. Of course, we can change this and put devices into different zones. I believe public may be an implicit default.
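
The zone assignment can also be inspected (and changed) at runtime with firewall-cmd. A quick sketch:

# which zone is eth0 in?
$ firewall-cmd --get-zone-of-interface=eth0
# what does that zone allow?
$ firewall-cmd --zone=public --list-all
# move the interface permanently to another zone
$ firewall-cmd --permanent --zone=internal --change-interface=eth0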

FIN

Thanks for reading!