We use Ansible for all kinds of things, one of them being formatting and mounting a volume so we can actually use it.
When I introduced the code for that, it worked (flawlessly, of course), until I hit a bug when I provisioned another cluster. Long story short, I was able to fix the bug. But since we rely on this to always work, and I wanted to make sure I had all situations covered, I decided to extend one of our tests.
Background
We currently use Hetzner Cloud to bootstrap instances for CI. They are pretty okay. At times, you have odd issues where a CPU hangs or a server doesn’t respond to SSH. But since it’s cheap, it hasn’t bothered me enough yet to find something else. Add to that, they are European (slightly fewer issues with data privacy, etc.), know how VAT works (meaning the invoices are correct) and allow paying invoices via SEPA (meaning fewer credit card fees, no currency conversions, etc.).
Extending the test
A Molecule scenario is driven by a file called molecule.yml. It’ll look similar to this:
    ---
    dependency:
      name: galaxy
    driver:
      name: hetznercloud
    lint:
      name: yamllint
    platforms:
      - name: node-01-${DRONE_BUILD_NUMBER:-111}
        server_type: cx11
        image: centos-7
      - name: node-02-${DRONE_BUILD_NUMBER:-111}
        server_type: cx11
        image: centos-7
    provisioner:
      name: ansible
      config_options:
        ssh_connection:
          pipelining: True
      lint:
        name: ansible-lint
    verifier:
      name: testinfra
      lint:
        name: flake8
Most of it is as generated. We used different names, though, since this test requires multiple instances and we wanted to run multiple builds at the same time, which is why we append $DRONE_BUILD_NUMBER from the environment. (The :-111 default ensures the number is still set when you drone exec a build locally.)
TL;DR: the scenario will have two instances available: node-01-XYZ and node-02-XYZ.
Going from there, you have two additional files of interest: create.yml and destroy.yml. The first is used to bootstrap instances through Ansible’s hcloud_server module; the second cleans up after the scenario/build finishes.
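To give you an idea what create.yml contains, here is a trimmed sketch of its core task, not the full generated playbook (which also registers results and waits for the instances to come up); the ssh_key handling is an assumption and depends on how your scenario is set up:

```yaml
# Sketch of the core task in create.yml (trimmed).
- name: Create molecule instance(s)
  hcloud_server:
    name: "{{ item.name }}"
    server_type: "{{ item.server_type }}"
    image: "{{ item.image }}"
    ssh_keys:
      - "{{ ssh_key_name }}"   # assumption: a key registered earlier in the play
    state: present
  with_items: "{{ molecule_yml.platforms }}"
```

Note how it iterates over molecule_yml.platforms, which is why the platform names from molecule.yml show up as instance names.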
Adding the volume
In create.yml, I added the following task after “Wait for instance(s) creation to complete”:
    - name: Attach a volume
      hcloud_volume:
        name: "my-volume-{{ item.name }}"
        server: "{{ item.name }}"
        size: 15
        automount: no
        state: present
      with_items: "{{ molecule_yml.platforms }}"
The task uses Ansible’s hcloud_volume module and ensures each of my nodes has a 15 GiB volume attached. The volume is called “my-volume”, suffixed with the name of the instance (e.g. my-volume-node-01-XYZ). For our purposes, we also decided to attach it without mounting it, since our Ansible role takes care of that.
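For completeness, here is a minimal sketch of how a role might do that formatting and mounting. This is not our actual role; the device path and mount point are assumptions (Hetzner volumes show up under /dev/disk/by-id/), so adjust them to your environment:

```yaml
# Sketch: format and mount the attached volume from within a role.
# "volume_device" and "/mnt/data" are assumptions, not our real values.
- name: Create a filesystem on the volume
  filesystem:
    fstype: ext4
    dev: "{{ volume_device }}"

- name: Mount the volume
  mount:
    path: /mnt/data
    src: "{{ volume_device }}"
    fstype: ext4
    state: mounted
```

The mount module also writes the entry to /etc/fstab, so the mount survives a reboot.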
Deleting the volume(s)
To save a few bucks, we also clean up after each test run. Open destroy.yml and add the following block after the instances are terminated:
    - name: Delete volume(s)
      block:
        - name: Detach a volume
          hcloud_volume:
            name: "my-volume-{{ item.instance }}"
            state: absent
          with_items: "{{ instance_conf }}"
          ignore_errors: yes
      when: not skip_instances
Side note
Another neat trick: you can add your own variables to the platforms in molecule.yml. For example, to set the size of the volume, try the following:
    platforms:
      - name: instance
        disk_size: 20
Then use that variable in the hcloud_volume task as {{ item.disk_size }} later. And if disk size is not your objective, you could use this to control whether each instance gets a volume at all, or whether only certain roles in your setup need one. This is all a hack and maybe it’ll go away, but for the time being I am glad no one bothered to validate these YAML keys or apply a schema.
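Put together, the volume task from earlier might then look like this; the default(15) fallback is an assumption so platforms without a disk_size key still work:

```yaml
# Sketch: picking up the custom platform key in the volume task.
- name: Attach a volume
  hcloud_volume:
    name: "my-volume-{{ item.name }}"
    server: "{{ item.name }}"
    size: "{{ item.disk_size | default(15) }}"   # assumption: fall back to 15 GiB
    automount: no
    state: present
  with_items: "{{ molecule_yml.platforms }}"
```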
Fin
Thanks for reading!