Zappa and LambCI
In the previous post, we talked about Python serverless architectures on Amazon Web Services with Zappa.
In addition to the previously mentioned benefits of being able to concentrate directly on the code of the apps we're building, instead of spending effort on running and maintaining servers, we get a few other new tricks. One good example: we can allow our developers to deploy (Zappa calls this `update`) directly to shared dev and QA environments, without having to involve anyone from ops (more on this in another post), and without even needing a build/CI system to push out these types of builds.
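To give a concrete (if simplified) picture of that: assuming a stage named `dev` is defined in `zappa_settings.json` and was created once with `zappa deploy dev` (the stage name here is just an example), pushing fresh code to it is a one-liner:

```bash
# package the current project and replace the code running in the
# existing "dev" stage; the stage name is an assumption for this example
zappa update dev
```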
That said, we do use a CI system for this project, though it differs from our traditional setup. In the past we used Jenkins, but found it a bit too heavy. Our current non-Lambda setup uses Buildbot to do full integration testing: it not only runs our apps' test suites, but also spins up EC2 nodes, provisions them with Salt, and makes sure they pass the same health checks our load balancers use to decide whether a node should receive user requests.
On this new architecture, we still have a test suite, of course, but there are no nodes to spin up (Lambda handles this for us), no systems to provision (the "nodes" are containers that hold only our app, Amazon's defaults, and Zappa's bootstrap), and not even any load balancers to keep healthy (this is API Gateway's job).
In short, our tests and builds are simpler now, so we went looking for a simpler system. Plus, we didn't want to have to run one or more servers for CI if we're not even running any (permanent) servers for production.
So, we found LambCI. It's written in Node.js, which isn't a platform we would normally have chosen—we do quite a bit of JavaScript internally, but we don't currently run any other Node.js apps. It turns out that the platform doesn't really matter for this, though.
LambCI (as you might have guessed from the name) also runs on Lambda. It requires no permanent infrastructure, and it was actually a breeze to set up, thanks to its CloudFormation template. It ties into GitHub (via AWS SNS), and handles core duties like checking out the code, running the suite only when configured to do so, and storing the build's output in S3. It's a little bit magical—the good kind of magic.
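For a rough idea of what that setup looks like, launching the stack is a single AWS CLI call. (This is a sketch: the template URL and parameter names below are illustrative and may not match the current LambCI release, so check its README for the real ones.)

```bash
# launch LambCI from its published CloudFormation template; the template URL,
# parameter names, and the example repo "ourorg/ourrepo" are assumptions --
# consult LambCI's README for the current values
aws cloudformation create-stack \
  --stack-name lambci \
  --capabilities CAPABILITY_NAMED_IAM \
  --template-url https://lambci.s3.amazonaws.com/templates/lambci.template \
  --parameters ParameterKey=GithubToken,ParameterValue="$GITHUB_TOKEN" \
               ParameterKey=Repositories,ParameterValue=ourorg/ourrepo
```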
It's also very generic. It comes with some basic bootstrapping infrastructure, but otherwise relies primarily on configuration that you store in your Git repository. We store our build script there, too, so it's easy to maintain. Here's what our build script (`do_ci_build`) looks like (I've edited it a bit for this post):
```bash
#!/bin/bash

# more on this in a future post
export PYTHONDONTWRITEBYTECODE=1

# run our test suite with tox and capture its return value
pip install --user tox && tox
tox_ret=$?

# if tox fails, we're done
if [ $tox_ret -ne 0 ]; then
    echo "Tox didn't exit cleanly."
    exit $tox_ret
fi

echo "Tox exited cleanly."

set -x

# use LAMBCI_BRANCH unless LAMBCI_CHECKOUT_BRANCH is set
# this is because lambci considers a PR against master to be the PR branch
BRANCH=$LAMBCI_BRANCH
if [[ ! -z "$LAMBCI_CHECKOUT_BRANCH" ]]; then
    BRANCH=$LAMBCI_CHECKOUT_BRANCH
fi

# only do the `zappa update` for these branches
case $BRANCH in
    master)
        STAGE=dev
        ;;
    qa)
        STAGE=qa
        ;;
    staging)
        STAGE=staging
        ;;
    production)
        STAGE=production
        ;;
    *)
        echo "Not doing zappa update. (branch is $BRANCH)"
        exit $tox_ret
        ;;
esac

echo "Attempting zappa update. Stage: $STAGE"

# we remove these so they don't end up in the deployment zip
rm -r .tox/ .coverage

# virtualenv is needed for Zappa
pip install --user --upgrade virtualenv

# now build the venv
virtualenv /tmp/venv
. /tmp/venv/bin/activate

# set up our virtual environment from our requirements.txt
/tmp/venv/bin/pip install --upgrade -r requirements.txt --ignore-installed

# we use the IAM profile on this lambda container, but the default region is
# not part of that, so set it explicitly here:
export AWS_DEFAULT_REGION='us-east-1'

# do the zappa update; STAGE is set above and zappa is in the active virtualenv
zappa update $STAGE

# capture this value (and in this version we immediately return it)
zappa_ret=$?
exit $zappa_ret
```
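One practical note: since the config below invokes the script directly as `./do_ci_build`, it has to be committed with its executable bit set, or the shell will refuse to run it:

```bash
# commit the script with its executable bit set, since LambCI
# executes it directly as ./do_ci_build
chmod +x do_ci_build
git add do_ci_build
git commit -m "Make the CI build script executable"
```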
This script, combined with our `.lambci.json` configuration file (also stored in the repository, as mentioned, and read by LambCI on checkout), is pretty much all we need:
{ "cmd": "./do_ci_build", "branches": { "master": true, "qa": true, "staging": true, "production": true }, "notifications": { "sns": { "topicArn": "arn:aws:sns:us-east-1:ACCOUNTNUMBER:TOPICNAME" } } }
With this setup, our test suite runs automatically on the selected branches (and on pull request branches in GitHub), and if it passes, the script conditionally runs `zappa update` (which builds and deploys the code to the existing stages).
Oh, and one of the best parts: we only pay for builds when they run. We're not paying hourly for a CI server to sit around doing nothing on the weekend, overnight, or when it's otherwise idle.
There are a few limitations (such as the execution time limit on Lambda functions, which means that the test suite plus the build must finish within that limit), but frankly, those haven't been a problem for us yet.
If you're looking for simple builds/CI, LambCI might be exactly what you need.