Vagrant: Bootstrapping
In a previous post, we talked about why we use virtual machines, and Vagrant, at Fictive Kin. Now let’s get to how we do it.
If you’re familiar with virtual-machine-based development environments that are set up through a configuration management tool (we use Salt, but you could use something different, such as Ansible, Puppet, or Chef), this probably won’t seem all that new to you, but there are a few things we do (possibly uniquely) that might help with your systems.
One thing that I didn’t mention in the previous post is that we’re a bit unlike many other startup-style shops, where developers and other team members (I’m going to just say “developers” to mean all team members from now on, for simplicity) focus on a single, large project. We have large projects, of course, but we tend to work on many of them at once. This doesn’t necessarily mean that each developer will be working on multiple projects at once, but our team, as a whole, will certainly need to be able to pull up access to several projects at the same time.
We’ve experimented with a monolithic VM (each developer gets one large VM that runs all active projects on the same operating system and virtual instance), but we found that it was both too large (and therefore more complicated, more prone to failure) for most of our developers, and too hard to maintain. Sometimes different projects required different versions of the same RDBMS, for example, and that’s much easier if there’s only one version running. Or, more precisely: one version per virtual machine. Splitting apps onto their own VMs like this also reduces (but certainly far from eliminates) the headaches associated with deploying apps on different platforms — Mined runs our regular Python + Flask + Postgres stack, but Teuxdeux is a Ruby + Sinatra + MySQL app. Technically, these two apps could run on the same VM, but we’ve learned that it’s best to keep them separated.
So, we give our developers a set of VMs, generally one per project or app. This not only separates concerns for things like database server versions, but also keeps one failing app from trampling on the others, for the most part. Luckily, Vagrant has good support for commanding multiple virtual machines from the same Vagrantfile.
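In a Vagrantfile, multi-machine support boils down to one config.vm.define block per VM. A minimal sketch (the machine names here are just examples):

Vagrant.configure("2") do |config|
  # each define block becomes its own VM:
  # `vagrant up gateway`, `vagrant up mined`, `vagrant status`, etc.
  config.vm.define :gateway, primary: true do |gateway|
    # gateway-specific configuration goes here
  end

  config.vm.define :mined do |mined|
    # app-specific configuration goes here
  end
end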
In addition to one VM per app, each developer has a primary VM called gateway that we use for shared infrastructure, such as DNS resolution (more on this in a later post), caching (we use Apt-Cacher NG to avoid downloading the same Debian packages on each VM), and Vagrantfile management.
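The caching piece is a good illustration of why the gateway exists. One way an app VM might be pointed at the gateway’s cache (a sketch, not our exact provisioning; it assumes Apt-Cacher NG answering on the gateway’s private address at its default port, 3142, and the 01proxy file name is just a convention):

vm.provision :shell, :inline =>
  %q{echo 'Acquire::http::Proxy "http://172.31.2.2:3142";' > /etc/apt/apt.conf.d/01proxy}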
We also use the same “Vagrant Box” (base image file) for each of our VMs, and this image closely matches the base image (EBS-backed AMIs) we use on AWS. (I’ve been tempted to move to an app-specific image model for production, but for now we use nearly the same image everywhere, and we’d continue to do so for the VMs… and since this post is about the VMs, let’s just ignore the future-production parts for now.)
That was more background information than I intended to share, but I do think it’s important. Let’s get on to a practical example workflow of how we’d get a new developer up and running on their new VMs.
The first two steps are: install VirtualBox and install Vagrant. We’re lucky enough to have all of our developers’ workstations on the same operating system (Mac OS X), so these steps — and a few other things we do — are relatively simple.
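If your workstations really are that homogeneous, even these installs can be scripted; on a Mac, for example, something like this (assuming Homebrew and its Cask extension are already installed) should cover both:

brew cask install virtualbox vagrant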
Next, we have a developer (in their shell) create a new directory, cd into that directory, and download a simple “bootstrapping” Vagrantfile, which (essentially) looks like this:
# -*- mode: ruby -*-
# vi: set ft=ruby :

VMNAME = ENV.fetch('VM_NAME', false)

unless VMNAME
  abort("You must set env[VM_NAME]")
end

VAGRANTFILE_API_VERSION = "2"

def bootstrap(vm)
  # common base box
  vm.box_url = "http://example.com/path/to/fk-vm-versionnumber.box"
  vm.box = "fk-vm-versionnumber"

  # remove default minion_id stuff; provision default minion file
  # the base image has a minion id of "UNCONFIGURED"
  vm.provision :shell, :inline => 'if [ "`cat /etc/salt/minion_id`" == "UNCONFIGURED" ]; then
    systemctl stop salt-minion
    rm -rf /etc/salt/minion_id /etc/salt/pki/minion;
    cat > /etc/salt/minion <<EOF
master: saltmaster.example.com
master_port: 12345
grains:
  env: development
EOF
    systemctl start salt-minion
  fi
  '
end

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  config.vm.define :gateway, primary: true do |config|
    # bootstrap (common code; salt installer)
    bootstrap config.vm

    # host, network
    config.vm.host_name = "#{VMNAME}.gateway.example.net"
    config.vm.network "private_network", ip: "172.31.2.2"

    config.vm.provider "virtualbox" do |v|
      v.customize ["modifyvm", :id, "--memory", 128]
    end
  end

end
If you’re already familiar with Vagrant, then most of this will look similar to what you might use yourself, possibly with the exception of the VM_NAME bits. You’ll probably also notice that this Vagrantfile only configures one VM, not the whole set of VMs (one per app) described above.
Once our developer has this bootstrapping Vagrantfile, we assign them a “VM Name”, which is, for the most part, our developer’s first name (mine is sean, so we’ll use that as our example), and have them run the following command:
VM_NAME=sean vagrant up
This boots up the developer’s gateway VM for the first time, as sean.gateway.example.net (we have a domain name that we use instead of example.net), and once it’s running, Vagrant executes the inline provisioning script from the Vagrantfile above.
This provisioning script sets the VM’s Salt minion ID (the “Salt minion” is the agent that runs on the “client” machine and connects to a “Salt master” to get configuration management instructions and data) to sean.gateway.example.net, and configures the minion. It then starts the minion, which connects to our public Salt master (saltmaster.example.com:12345 in our example).
Once the VM is connected, someone from our ops team uses SSH to connect (through our jumphost; more on this later, too) to the saltmaster and manually verifies the key and approves/signs the sean.gateway.example.net credentials.
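On the master, that verification comes down to a few salt-key invocations, something like this (the fingerprint check against the minion’s own key is the important part):

salt-key -L                             # list accepted and pending minion keys
salt-key -f sean.gateway.example.net    # print the pending key's fingerprint to verify
salt-key -a sean.gateway.example.net    # accept (sign) the key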
(There’s a small opportunity for someone to spoof the same name that the VM is using and have our administrator mistakenly approve the wrong key (with the same name), but salt-key showing two sets of credentials with the same name (or a rejected set) would be suspicious enough to halt this process… and Salt administration is a topic for another day.)
After approving the developer’s gateway VM, the administrator proceeds to “highstate” (effectively: apply the defined configuration management to) the VM. This step installs the required software on the gateway VM, such as the aforementioned Apt-Cacher NG.
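From the master, that’s something like:

salt 'sean.gateway.example.net' state.highstate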
Here’s the key to our bootstrapping strategy: one of the bits of managed configuration is a templated /vagrant/Vagrantfile. This means that the Vagrantfile itself is managed by our configuration management system, and can be updated right on the developer’s workstation.
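In Salt terms, that’s just a file.managed state rendered from a template. A sketch of what it might look like (the source path and ownership here are illustrative, not our actual state):

/vagrant/Vagrantfile:
  file.managed:
    - source: salt://vagrant/files/Vagrantfile.jinja
    - template: jinja
    - user: vagrant
    - group: vagrant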
We (ops) intentionally can’t reach into a directory higher than the one containing the Vagrantfile, but this directory is, by default, mounted at /vagrant on the VMs. Vagrant takes care of managing this mount within our VMs, so each VM in our set has access to /vagrant, which is the same directory that contains the Vagrantfile. Pretty convenient!
Configuration management alters the Vagrantfile to contain not only an updated configuration for the gateway VM, but also the configurations for the other VMs. Once that’s complete, all a developer needs to do to work on another VM (such as our Mined app) is run vagrant up mined. The developer no longer even needs to set VM_NAME in the environment, because we captured that during the first gateway boot and Salt wrote it directly into the Vagrantfile. Ops doesn’t even need to log into the saltmaster host to approve new keys for this additional VM (I intend to write about this part, too).
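Conceptually, the blocks Salt appends look a lot like the gateway definition above (the host name pattern and IP address here are illustrative):

config.vm.define :mined do |config|
  # same common base box + salt bootstrap as the gateway
  bootstrap config.vm

  config.vm.host_name = "sean.mined.example.net"
  config.vm.network "private_network", ip: "172.31.2.3"
end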
This has been a relatively long post, but I think you’ll see that managing the Vagrantfile with Salt (or another config management platform) is pretty easy, and it greatly reduces the burden on our developers (who might not be very experienced in systems management).
In future posts, we’ll talk a bit more about some of the other Vagrantfile customizations that I hinted at, which help our VMs shine.