
How I helped fix Canadaʼs COVID Alert app

On July 31st, Canada's COVID Alert app was made available for general use, though it does not yet support actually reporting a diagnosis in most provinces.

In Quebec, we can run the tracing part of the app, and if diagnosis codes become available here, the app can retroactively report contact. It uses the tracing mechanism that Google and Apple created together, and in my opinion—at least for now—Canadians should be running this thing to help us all deal with COVID-19. I won't run it forever, but for now, it seems to me that the benefits outweigh the "government can track me" fear (it's not actually tracking you; it doesn't even know who you are), and it's enabled on my phone.

But, before I decided to take this position and offer up my own movement data, I wanted to be sure the app was doing what it says it's doing—at least to the extent of my ability to be duly diligent. (Note: it's not purely movement data that's shared—at least not without more context—but actual physical interactions with other people whose phones are within Bluetooth LE radio range.)

Before installing the app on my real daily-carry phone, I decided to put it on an old phone I still have, and to do some analysis on the most basic level of communication: who is it contacting?

In 2015, I gave a talk at ConFoo entitled "Inspect HTTP(S) with Your Own Man-in-the-Middle Non-Attacks", and this is exactly what I wanted to do here. The tooling has improved in the past 5 years, and firing up mitmproxy, even without ever having used it on this relatively new laptop, was a one-liner, thanks to Nix:

nix-shell -p mitmproxy --run mitmproxy

This gave me a terminal-based UI and proxy server that I pointed my old phone at (via the Wi-Fi network settings, under HTTP Proxy, pointed to my laptop's local IP address). I needed to have mitmproxy create a Certificate Authority that it could use to generate and sign "trusted" certificates, and then have my phone trust that authority by visiting http://mitm.it/ in mobile Safari and doing the certificate acceptance dance (this is even more complicated on the latest versions of iOS). Also worth noting: certain endpoints, such as the Apple App Store, appear to use certificate pinning, so you'll want to do things like install the COVID Alert app from the App Store before turning on the proxy.

Once I was all set up to intercept my own traffic, I visited some https:// URLs and saw the request flows in mitmproxy.
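As an aside: I just eyeballed the flow list in the terminal UI, but if you want a durable record of every host a device contacts, mitmproxy is also scriptable. Here's a small addon sketch (host_logger.py is a hypothetical name; this wasn't part of my original process) that prints each new host it sees; run it with mitmdump -s host_logger.py:

# host_logger.py: a minimal mitmproxy addon that prints each new host the
# proxied device contacts.
from mitmproxy import http


class HostLogger:
    def __init__(self):
        self.seen = set()

    def request(self, flow: http.HTTPFlow) -> None:
        host = flow.request.pretty_host
        if host not in self.seen:
            self.seen.add(host)
            print(f"new host contacted: {host}")


addons = [HostLogger()]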

I fired up the COVID Alert app again, and noticed something strange… something disturbing:

[Screenshot: mitmproxy flow list showing that the app is accessing clients.google.com]

In addition to the expected traffic to canada.ca (I noticed it's using .alpha.canada.ca, which I suspect is due to the often-reported, unbearably long bureaucratic hassle of getting a .canada.ca TLS certificate, but that's another story), my phone, when running COVID Alert, was contacting Google.

HEAD https://clients3.google.com/generate_204

A little web searching helped me discover that this is a commonly-used endpoint that helps developers determine if the device is behind a "captive portal" (an interaction that requires log-in or payment, or at least acceptance of terms before granting wider access to the Web). I decided that this was probably unintended by the developers of COVID Alert, but it still bothered me that an app, designed for tracking interactions between people['s devices], that the government wants us to run is telling Google that I'm running it, and disclosing my IP address in doing so:

[Screenshot: request details showing that the User-Agent header identifies the app]

(Note that the app clearly identifies itself in the User-Agent header.)

A bit more quick research turned up a statement by Canada's Privacy Commissioner:

An Internet Protocol (IP) address can be considered personal information if it can be associated with an identifiable individual. For example, in one complaint finding, we determined that some of the IP addresses that an internet service provider (ISP) was collecting were personal information because the ISP had the ability to link the IP addresses to its customers through their subscriber IDs.

It's not too difficult to imagine that Google probably has enough data on Canadians for this to be a real problem.

I discovered that this app is maintained by the Canadian Digital Service, and that the source code is on GitHub, but that the code itself didn't directly contain any references to clients3.google.com.

It's a React Native app, and I figured that the call out to Google must be coming from one of its dependencies, which—considering the norm with JavaScript apps—are pleasantly restrained, mostly to React itself. Still, I had no idea which of these libraries was calling out to Google.

Now, I could have run this app in the iOS Simulator (which I did end up doing to test my patches, below), but I thought "let's see what my actual phone is doing." I threw caution to the wind and ran checkra1n on my old phone, which gave me ssh access, which in turn allowed me to copy the app's application bundle to my laptop, where I could do a little more analysis. (Note that the app is bundled as CovidShield because it was originally developed by volunteers at Shopify and later renamed by CDS, or so I gather, anyway.)

~/De/C/iphone/CovidShield.app  grep -r 'clients3.google.com' *
main.jsbundle:__d(function(g,r,i,a,m,e,d){Object.defineProperty(e,"__esModule",{value:!0}),
e.default=void 0;var t={reachabilityUrl:'https://clients3.google.com/generate_204',
reachabilityTest:function(t){return Promise.resolve(204===t.status)},reachabilityShortTimeout:5e3,
reachabilityLongTimeout:6e4,reachabilityRequestTimeout:15e3};e.default=t},708,[]);

(Line breaks added for legibility.) Note reachabilityUrl: 'https://clients3.google.com/generate_204'. Found it! A bit more searching led me to a package called react-native-netinfo (which was directly in the above-linked package.json), and its default configuration that sets the reachabilityUrl to Google.

Now that I knew where it was happening, I could fix it.

To make this work the same way, the app needed a reliable 204 endpoint to hit, and to keep with the expectation that it shouldn't "leak" data outside of canada.ca, I ended up submitting a patch to the server-side code that the app calls. (It turns out that this was not necessary after all, but I'm still glad I added it to my report.)
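For illustration, the server side of a reachability check amounts to almost nothing: an empty 204 response. Here's a minimal sketch (in Flask, purely to show the shape; it's not the actual COVID Alert server code, nor the patch I submitted):

from flask import Flask

app = Flask(__name__)


# An empty 204 is all a reachability/captive-portal check needs: no body, no
# redirect, just proof that the network isn't intercepting requests.
# (Flask answers HEAD requests for GET routes automatically.)
@app.route("/generate_204")
def generate_204():
    return "", 204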

I also patched and tested the app code itself via the iOS Simulator.

I then submitted a write-up of what was going wrong and why it's bad, to the main app repository, as cds-snc/covid-alert-app issue 1003, and felt pretty good about my COVID Civic Duty of the day.

The fine folks at the Canadian Digital Service seemed to recognize the problem and agree that it was something that needed to be addressed. A few very professional back-and-forths later (I'll be honest: I barely knew anything about the CDS and I expected some runaround from a government agency like this, and I was pleasantly surprised), we landed on a solution that simply didn't call the reachability URL at all, and they released a version of the app that fixed my issue!

With the new version loaded, I once again checked the traffic and can confirm that the new version of the app does not reach out to anywhere but .canada.ca.

[Screenshot: a mitmproxy flow showing traffic to canada.ca and not google.com]

New Site (same as old site)

You're looking at the new seancoates.com.

"I'm going to pay attention to my blog" posts on blogs are… passé, but…

I moved this site to a static site generator a few years ago when I had to move some server stuff around, and had let it decay. I spent most of the past week of evenings and weekend updating to what you see now.

It's still built on Nikola, but now the current version.

I completely reworked the HTML and CSS. Turns out—after not touching it in earnest in probably a decade (‼️)—that CSS is a much more pleasant experience these days. Lately, I've been doing almost exclusively back-end and server/operations work, so it was actually a bit refreshing to see how far CSS has come along. Last time I did this kind of thing, nothing seemed to work—or if it did work, it didn't work the same across browsers. This time, I used Nikola's SCSS builder and actually got some things done, including passing the Accessibility tests (for new posts, anyway) in Lighthouse (one of the few reasons I fire up Chrome), and a small amount of Responsive Web Design to make some elements reflow on small screens. When we built the HTML for the previous site, so long ago, small screens were barely a thing, and neither were wide browsers for the most part.

From templates that I built, Nikola generates static HTML, which has a few limitations when it comes to serving requests. The canonical URL for this post is https://seancoates.com/blogs/new-site-same-as-old-site. Note the lack of trailing slash. There are ways to accomplish this directly with S3 (where I wanted to store the generated HTML + assets), but it's always janky. I've been storing static sites on S3 and serving them up through CloudFront for what must be 7+ years now, and it works great as long as you don't want to do anything "fancy" like redirects. You just have to name your files in a clever way, and be sure to set the metadata's Content-Type correctly. The file you're reading right now comes from a .md file that is compiled into [output]/blogs/new-site-same-as-old-site/index.html. Dealing with the "directory" path and index.html is a pain, so I knew I wanted to serve it through a very thin HTTP-handling app.
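For example, pushing a single rendered page to S3 with explicit Content-Type metadata looks something like this (not my actual sync script; it just shows the metadata that matters, using the bucket and key layout that app.py below expects):

import boto3

s3 = boto3.client("s3")

# The bucket name and "output/" prefix match what the Chalice app below reads.
with open("output/blogs/new-site-same-as-old-site/index.html", "rb") as f:
    s3.put_object(
        Bucket="seancoates-site-content",
        Key="output/blogs/new-site-same-as-old-site/index.html",
        Body=f.read(),
        ContentType="text/html; charset=utf-8",
    )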

At work, we deploy mostly on AWS (API Gateway and Lambda, via some bespoke tooling, a forked and customized runtime from Zappa, and SAM for packaging), but all of that seemed too heavy for what amounts to a static site with a slightly-more-intelligent HTTP handler. Chalice had been on my radar for quite a while now, and this seemed like the perfect opportunity to try it.

It has a few limitations, such as horrific 404s, and I couldn't get binary serving to work (but I don't need it, since I put the very few binary assets on a different CloudFront + S3 distribution), but all of that considered, it's pretty nice.

Here's the entire [current version of] app.py that serves this site:

 import functools

 from chalice import Chalice, Response
 import boto3


 app = Chalice(app_name="seancoates")
 s3 = boto3.client("s3")
 BUCKET = "seancoates-site-content"

 REDIRECT_HOSTS = ["www.seancoates.com"]


 def fetch_from_s3(path):
     k = f"output/{path}"
     obj = s3.get_object(Bucket=BUCKET, Key=k)
     return obj["Body"].read()


 def wrapped_s3(path, content_type="text/html; charset=utf-8"):
     if app.current_request.headers.get("Host") in REDIRECT_HOSTS:
         return redirect("https://seancoates.com/")

     try:
         data = fetch_from_s3(path)
         return Response(
             body=data, headers={"Content-Type": content_type}, status_code=200,
         )
     except s3.exceptions.NoSuchKey:
         return Response(
             body="404 not found.",
             headers={"Content-Type": "text/plain"},
             status_code=404,
         )


 def check_slash(handler):
     @functools.wraps(handler)
     def slash_wrapper(*args, **kwargs):
         path = app.current_request.context["path"]
         if path[-1] == "/":
             return redirect(path[0:-1])
         return handler(*args, **kwargs)

     return slash_wrapper


 def redirect(path, status_code=303):
     return Response(
         body="Redirecting.",
         headers={"Content-Type": "text/plain", "Location": path},
         status_code=status_code,
     )


 @app.route("/")
 def index():
     return wrapped_s3("index.html")


 @app.route("/assets/css/{filename}")
 def assets_css(filename):
     return wrapped_s3(f"assets/css/{filename}", "text/css")


 @app.route("/blogs/{slug}")
 @check_slash
 def blogs_slug(slug):
     return wrapped_s3(f"blogs/{slug}/index.html")


 @app.route("/brews")
 @app.route("/shares")
 @app.route("/is")
 @check_slash
 def pages():
     return wrapped_s3(f"{app.current_request.context['path'].lstrip('/')}/index.html")


 @app.route("/archive")
 @app.route("/blogs")
 def no_page():
     return redirect("/")


 @app.route("/archive/{archive_page}")
 @check_slash
 def archive(archive_page):
     return wrapped_s3(f"archive/{archive_page}/index.html")


 @app.route("/rss.xml")
 def rss():
     return wrapped_s3("rss.xml", "application/xml")


 @app.route("/assets/xml/rss.xsl")
 def rss_xsl():
     return wrapped_s3("assets/xml/rss.xsl", "application/xml")


 @app.route("/feed.atom")
 def atom():
     return wrapped_s3("feed.atom", "application/atom+xml")

Not bad for less than 100 lines (if you don't count the mandated whitespace, at least).

Chalice handles the API Gateway, Custom Domain Name, permissions granting (for S3 access, via IAM policy) and deployments. It's pretty slick. I provided DNS and a certificate ARN from Certificate Manager.

Last thing: I had to trick Nikola into serving "pretty URLs" without a trailing slash. It has two modes, basically: /blogs/post/ or /blogs/post/index.html. I want /blogs/post. Normally, omitting the trailing slash invokes a 30x HTTP redirect, because the HTTPd serving your static files needs to add it before it can serve the directory index. But in my case, I was handling HTTP a little more intelligently, so I didn't want that. You can see that my app.py above handles the trailing-slash redirects in the check_slash decorator, but to get Nikola to do the same in the RSS/Atom feeds (I had control in the HTML templates, but not in the feeds), I had to trick it with some ugliness in conf.py:

# hacky hack hack monkeypatch
# strips / from the end of URLs
from nikola import post

post.Post._unpatched_permalink = post.Post.permalink
post.Post.permalink = lambda self, lang=None, absolute=False, extension=".html", query=None: self._unpatched_permalink(
    lang, absolute, extension, query
).rstrip("/")

I feel dirty about that part, but pretty great about the rest.

Faculty

Officially, as of the start of the year, I've joined Faculty.

Faculty is not just new to me, but something altogether new. It's also something that feels older than it is. The familiar, experienced kind of old. The good kind. The kind I like.

It was founded by my good friend and long-term colleague Chris Shiflett, whom I'm very happy to be working with, directly, again.

People often ask us how long we've worked together, and the best answer I can come up with is "around 15 years"—nearly all of the mature part of my career. Since the early 2000s, Chris and I have attended and spoken at conferences together, he wrote a column for PHP Architect under my watch as the Editor-in-Chief, I worked on Chris's team at OmniTI, we ran Web Advent together, and we worked collaboratively at Fictive Kin.

A surprised "fifteen years!" is a common response, understandably—that's an eternity in web time. On a platform that reinvents itself regularly, where we (as a group) often find ourselves jumping from trend to trend, it's a rare privilege to be able to work with friends with such a rich history.

This kind of long-haul thinking also informs the ethos of Faculty, itself. We're particularly interested in mixing experience, proven methodologies and technology, and core values, together with promising newer (but solid) technologies and ideas. We help clients reduce bloat, slowness, and inefficiencies, while building solutions to web and app problems.

We care about doing things right, not just quickly. We want to help clients build projects they (and we) can be proud of. We remember the promise—and the output—of Web 2.0, and feel like we might have strayed a little away from the best Web that we can build. I'm really looking forward to leveraging our team's collective and collected experience to bring excellence, attention to detail, durability, and robustness to help—even if in some small way—influence the next wave of web architecture and development.

If any of that sounds interesting to you, we're actively seeking new clients. Drop us a note. I can't wait.

Shortbread

December 1st. Shortbread season begins today.

(Here's something a little different for long-time blog readers.)

Four years ago, in December, I made dozens and dozens of batches of shortbread, slowly tweaking my recipe, batch after batch. By Christmas, that year, I had what I consider (everyone's tastes are different) the perfect ratio of flour/butter/sugar/salt, and exactly the right texture: dense, crisp, substantial, and short—not fluffy and light like whipped/cornstarch shortbread, and not crumbly like Walkers.

I present for you, friends, that recipe.

shortbread

Ingredients

  • 70g granulated white sugar (don't use the fancy kinds)
  • 130g unsalted butter (slightly softened; it's colder than normal room temperature in my kitchen)
  • 200g all purpose white flour (I use unbleached)
  • 4g kosher salt

Method

In a stand mixer, with the paddle attachment, cream (mix until smooth) the sugar and butter. I like to throw the salt in here, too. You'll need to scrape down the sides with a spatula, as it mixes (turn the mixer off first, of course).

When well-creamed, and with the mixer on low speed, add the flour in small batches; a little at a time. Mix until combined so it looks like cheese curds. It's not a huge deal if you over-mix, but the best cookies come when it's a little crumblier than a cohesive dough, in my experience.

Turn out the near-dough onto a length of waxed paper, and roll into a log that's ~4cm in diameter, pressing the "curds" together, firmly. Wrapped in the wax paper, refrigerate for 30 minutes.

Preheat your oven to 325°F, with a rack in the middle position.

Slice the chilled log (with a sharp, non-serrated knife) into ~1.5cm thick rounds, and place onto a baking sheet with a silicone mat or parchment paper. (If you refrigerate longer, you'll want to let it soften a little before slicing.)

Bake until right before the tops/sides brown. In my oven, this takes 22 minutes. Remove from oven and allow to cool on the baking sheet.

Eat and share!

Don't try to add vanilla or top them with anything, unless you like inferior shortbread. (-; Avoid the temptation to eat them right away, because they're 100 times better when they've cooled. Pop a couple in the freezer for 5 mins if you're really impatient (and I am).

Enjoy! Let me know if you make these, and how they turned out.

API Gateway timeout workaround

The last of the four (see previous posts) big API Gateway limitations is the 30 second integration timeout.

This means that API Gateway will give up on trying to serve your request to the client after 30 seconds—even though Lambda has a 300 second limit.

In my opinion, this is a reasonable limit. No one wants to be waiting around for an API for more than 30 seconds. And if something takes longer than 30s to render, it should probably be batched. "Render this for me, and I'll come back for it later. Thanks for the token. I'll use this to retrieve the results."

In an ideal world, all HTTP requests should definitely be served within 30 seconds. But in practice, that's not always possible. Sometimes realtime requests need to go to a slow third party. Sometimes the client isn't advanced enough to use the batch/token method hinted at above.

Indeed, 30s is often well below the client's own timeout. In practice, we see clients that will happily wait 90-600 seconds before giving up.

Terrible user experience aside, I needed to find a way to serve long-running requests, and I really didn't want to violate our serverless architecture to do so.

But this 30-second timeout in API Gateway is a hard limit. It can't be increased via the regular AWS support request method. In fact, AWS says that it can't be increased at all—which might even be true. (-:

As I mentioned in an earlier post, I did a lot of driving this summer. Lots of driving led to lots of thinking, and lots of thinking led to a partial solution to this problem.

What if I could use API Gateway to handle the initial request, but buy an additional 30 seconds, somehow? Or better yet, what if I could buy up to an additional 270 seconds (5 minutes total)?

Simply put, an initial request can kick off an asynchronous job, and if it takes a long time, after nearly 30 seconds, we can return an HTTP 303 (See Other) redirect to another endpoint that checks the status of this job. If the result still isn't available after another (nearly) 30s, redirect again. Repeat until the Lambda function call is killed after the hard-limited 300s, but if we don't get to the hard timeout, and we find the job has finished, we can return that result instead of a 303.

But I didn't really have a simple way to kick off an asynchronous job. Well, that's not quite true. I did have a way to do that: Zappa's asynchronous task execution. What I didn't have was a way to get the results from these jobs.

So I wrote one, and Zappa's maintainer, Rich, graciously merged it. And this week, it was released. Here's a post I wrote about it over on the Zappa blog.

The result:

$ time curl -L 'https://REDACTED.apigwateway/dev/payload?delay=40'
{
  "MESSAGE": "It took 40 seconds to generate this."
}

real    0m52.363s
user    0m0.020s
sys     0m0.025s

Here's the code (that uses Flask and Zappa); you'll notice that it also uses a simple backoff algorithm:

from time import sleep

from flask import Flask, abort, jsonify, redirect, request, url_for
# In newer Zappa releases this module is `zappa.asynchronous`; at the time of
# writing it was `zappa.async`.
from zappa.asynchronous import task, get_async_response

app = Flask(__name__)  # the usual Flask app, deployed via Zappa


@app.route('/payload')
def payload():
    delay = request.args.get('delay', 60)
    x = longrunner(delay)
    if request.args.get('noredirect', False):
        return 'Response will be here in ~{}s: <a href="{}">{}</a>'.format(
            delay, url_for('response', response_id=x.response_id), x.response_id)
    else:
        return redirect(url_for('response', response_id=x.response_id))


@app.route('/async-response/<response_id>')
def response(response_id):
    response = get_async_response(response_id)
    if response is None:
        abort(404)

    backoff = float(request.args.get('backoff', 1))

    if response['status'] == 'complete':
        return jsonify(response['response'])

    sleep(backoff)
    return "Not yet ready. Redirecting.", 303, {
        'Content-Type': 'text/plain; charset=utf-8',
        'Location': url_for(
            'response', response_id=response_id,
            backoff=min(backoff*1.5, 28)),
        'X-redirect-reason': "Not yet ready.",
    }


@task(capture_response=True)
def longrunner(delay):
    sleep(float(delay))
    return {'MESSAGE': "It took {} seconds to generate this.".format(delay)}

That's it. Long-running tasks in API Gateway. Another tool for our serverless arsenal.

API Gateway Limitations

As I've mentioned a couple times in the past, I've been working with Lambda and API Gateway.

We're using it to host/deploy a big app for a big client, as well as some of the ancillary tooling to support the app (such as testing/builds, scheduling, batch jobs, notifications, authentication services, etc.).

For the most part, I love it. It's helped evaporate the most boring—and often most difficult—parts of deploying highly-available apps.

But it's not all sunshine and rainbows. Once the necessary allowances are made for a new architecture (things like: if we have concurrency of 10,000, a runaway process's consequences are amplified, database connection pools are easily exhausted, there's no simple way to use static IP addresses), there are 4 main problems that I've encountered with serving an app on Lambda and API Gateway.

The first two problems are essentially the same. Both headers and query string parameters are clobbered by API Gateway when it creates an API event object.

Consider the following Lambda function (note: this does not use Zappa, but functions provisioned by Zappa have the same limitation):

import json

def respond(err, res=None):
    return {
        'statusCode': '400' if err else '200',
        'body': err.message if err else json.dumps(res),
        'headers': {
            'Content-Type': 'application/json',
        },
    }

def lambda_handler(event, context):
    return respond(None, event.get('queryStringParameters'))

Then if you call your (properly-configured) function via the API Gateway URL such as: https://lambdatest.example.com/test?foo=1&foo=2, you will only get the following queryStringParameters:

{"foo": "2"}

Similarly, a modified function that dumps the event's headers, when called with duplicate headers, such as with:

curl 'https://lambdatest.example.com/test' -H'Foo: 1' -H'Foo: 2'

…will result in the second header overwriting the first:

{
    "headers": {
        "Foo": "2"
    }
}

The AWS folks have backed themselves into a bit of a corner, here. It's not trivial to change the way these events work, without affecting the thousands (millions?) of existing API Gateway deployments out there.

If they could make a change like this, it might make sense to turn queryStringParameters into an array when there would previously have been a clobbering:

{"foo": ["1", "2"]}

This is a bit more dangerous for headers:

{
    "headers": {
        "Foo": [
            "1",
            "2"
        ]
    }
}

This is not impossible, but it is a BC-breaking change.

What AWS could do, without breaking BC, is (even optionally, based on the API's Gateway configuration/metadata) supply us with an additional field in the event object: rawQueryString. In our example above, it would be foo=1&foo=2, and it would be up to my app to parse this string into something more useful.
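Parsing such a raw string into a multi-value structure is a one-liner with the standard library. A sketch against that hypothetical rawQueryString field:

from urllib.parse import parse_qs


def query_params(event):
    # `rawQueryString` is the hypothetical field proposed above; the event
    # objects described in this post don't actually include it.
    raw = event.get("rawQueryString", "")
    # parse_qs preserves repeated keys: "foo=1&foo=2" -> {"foo": ["1", "2"]}
    return parse_qs(raw)


print(query_params({"rawQueryString": "foo=1&foo=2"}))  # {'foo': ['1', '2']}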

Again, headers are a bit more difficult, but (optionally, as above), one solution might be to supply a rawHeaders field:

{
    "rawHeaders": [
        "Foo: 1",
        "Foo: 2"
    ]
}

We've been lucky so far in that these first two quirks haven't been showstoppers for our apps. I was especially worried about a conflict with access-es, which is effectively a proxy server.

The next two limitations (API Gateway, Lambda) are more difficult, but I've come up with some workarounds:

Lambda payload size workaround

Another of the AWS Lambda + API Gateway limitations is in the size of the response body we can return.

AWS states that the maximum payload size for API Gateway is 10 MB, and that Lambda's invocation payload (request and response) is capped at 6 MB.

In practice, I've found this number to be significantly lower than 6 MB, but perhaps I'm just calculating incorrectly.

Using a Flask route like this:

@app.route('/giant')
def giant():
    payload = "x" * int(request.args.get('size', 1024 * 1024 * 6))
    return payload

…and calling it with curl, I get the following cutoff:

$ curl -s 'https://REDACTED/dev/giant?size=4718559' | wc -c
 4718559
$ curl -s 'https://REDACTED/dev/giant?size=4718560'
{"message": "Internal server error"}

Checking the logs (with zappa tail), I see the non-obvious-unless-you've-come-across-this-before error message:

body size is too long

Let's just call this limit "4 MB" to be safe.

So, why does this matter? Well, sometimes—like it or not—APIs need to return more than 4 MB of data. In my opinion, this should usually (but not always) be resolved by requesting smaller results. But sometimes we don't get control over this, or it's just not practical.

Take Kibana, for example. In the past year, we started using Elasticsearch for logging certain types of structured data. We elected to use the AWS Elasticsearch Service to host this. AWS ES has an interesting authentication method: it requires signed requests, based on AWS IAM credentials. This is super useful for our Lambda-based app because we don't have to rely on DB connection pools, firewalls, VPCs, and much of the other pain that comes with using an RDBMS in a highly-distributed system. Our app can use its inherited IAM profile to sign requests to AWS ES quite easily, but we also wanted to give our developers and certain partners access to our structured logs.
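For reference, a signed request to AWS ES from Python looks roughly like this, using the requests-aws4auth package and a placeholder domain (this is the general shape, not access-es's internals):

import boto3
import requests
from requests_aws4auth import AWS4Auth

# Build a SigV4 signer from whatever credentials boto3 can find (the IAM
# role on Lambda, or a developer's local credentials).
session = boto3.Session()
creds = session.get_credentials().get_frozen_credentials()
awsauth = AWS4Auth(
    creds.access_key,
    creds.secret_key,
    session.region_name or "us-east-1",
    "es",                       # service name for AWS Elasticsearch
    session_token=creds.token,  # needed when the credentials are temporary
)

# Placeholder endpoint; any request signed this way is accepted by AWS ES.
resp = requests.get(
    "https://search-EXAMPLE.us-east-1.es.amazonaws.com/_cluster/health",
    auth=awsauth,
)
print(resp.status_code)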

At first, we had our developers run a local copy of aws-es-kibana, which is a proxy server that uses the developer's own AWS credentials (we distribute user or role credentials to our devs) to sign requests. Running a local proxy is a bit of a pain, though—especially for 3rd parties.

So, I wrote access-es (which is still in a very early "unstable" state, though we do use it "in production" (but not in user request flows)) to allow our users to access Kibana (and Elasticsearch). access-es runs on lambda and effectively works as a reverse HTTPS proxy that signs requests for session/cookie authenticated users, based on the IAM profile. This was a big win for our no-permanent-servers-managed-by-us architecture.

But the very first day we used access-es to load some large logs in Kibana, it failed on us.

It turns out that if you have large documents in Elasticsearch, Kibana loads very large blobs of JSON in order to render the discover stream (and possibly other streams). Larger than "4 MB", I noticed. Our (non-structured) logs filled with body size is too long messages, and I had to make some adjustments to the page size in the discover feed. This bought us some time, but we ran into the payload size limitation far too often, and at the most inopportune moments, such as when trying to rapidly diagnose a production issue.

The "easy" solution to this problem is to concede that we probably can't use Lambda + API Gateway to serve this kind of app. Maybe we should fire up some EC2 instances, provision them with Salt, manage upgrades, updates, security alerts, autoscalers, load balancers… and all of those things that we know how to do so well, but were really hoping to leave behind with the new "serverless" (no permanent servers managed by us) architecture.

This summer, I did a lot of driving, and during one of the longest of those driving sessions, I came up with an idea about how to handle this problem of using Lambda to serve documents that are larger than the Lambda maximum response size.

"What if," I thought, "we could calculate the response, but never actually serve it with Lambda. That would fix it." Turns out it did. The solution—which will probably seem obvious once I state it—is to use Lambda to calculate the response body, store that response body in a bucket in S3 (where we don't have to manage any servers), use Lambda + API Gateway to redirect the client to the new resource on S3.

Here's how I did it in access-es:

req = method(
    target_url,
    auth=awsauth,
    params=request.query_string,
    data=request.data,
    headers=headers,
    stream=False
)

content = req.content

if overflow_bucket is not None and len(content) > overflow_size:

    # the response would be bigger than overflow_size, so instead of trying to serve it,
    # we'll put the resulting body on S3, and redirect to a (temporary, signed) URL
    # this is especially useful because API Gateway has a body size limitation, and
    # Kibana serves *huge* blobs of JSON

    # UUID filename (same suffix as original request if possible)
    u = urlparse(target_url)
    if '.' in u.path:
        filename = str(uuid4()) + '.' + u.path.split('.')[-1]
    else:
        filename = str(uuid4())

    s3 = boto3.resource('s3')
    s3_client = boto3.client(
        's3', config=Config(signature_version='s3v4'))

    bucket = s3.Bucket(overflow_bucket)

    # actually put it in the bucket. beware that boto is really noisy
    # for this in debug log level
    obj = bucket.put_object(
        Key=filename,
        Body=content,
        ACL='authenticated-read',
        ContentType=req.headers['content-type']
    )

    # URL only works for 60 seconds
    url = s3_client.generate_presigned_url(
        'get_object',
        Params={'Bucket': overflow_bucket, 'Key': filename},
        ExpiresIn=60)

    # "see other"
    return redirect(url, 303)

else:
    # otherwise, just serve it normally
    return Response(content, content_type=req.headers['content-type'])

If the body size is larger than overflow_size, we store the result on S3, and the client receives a 303 see other with an appropriate Location header, completely bypassing the Lambda body size limitation, and saving the day for our "serverless" architecture.

The resulting URL is signed by AWS to make it only valid for 60 seconds, and the resource isn't available without such a signature (unless otherwise authenticated with IAM + appropriate permissions). Additionally, we use S3's lifecycle management to automatically delete old objects.
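The lifecycle rule itself is a one-time bucket configuration. It looks something like this via boto3 (the bucket name is a placeholder; whether you apply it from the console, CloudFormation, or code, the shape is the same):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-overflow-bucket",  # placeholder name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-overflow-objects",
                "Filter": {"Prefix": ""},   # apply to the whole bucket
                "Status": "Enabled",
                "Expiration": {"Days": 1},  # delete objects a day after creation
            }
        ]
    },
)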

For clients that are modern browsers, though, you'll need to properly manage the CORS configuration on that S3 bucket.

This approach fixed our Kibana problem, and now sits in our arsenal of tools for when we need to handle large responses in our other serverless Lambda + API Gateway apps.

Zappa and LambCI

In the previous post, we talked about Python serverless architectures on Amazon Web Services with Zappa.

In addition to the previously-mentioned benefits of being able to concentrate directly on the code of the apps we're building, instead of spending effort on running and maintaining servers, we get a few other new tricks. One good example is that we can allow our developers to deploy (Zappa calls this update) to shared dev and QA environments directly, without having to involve anyone from ops (more on this in another post), and without even requiring a build/CI system to push out these types of builds.

That said, we do use a CI system for this project, but it differs from our traditional setup. In the past, we used Jenkins, but found it a bit too heavy. Our current non-Lambda setup uses Buildbot to do full integration testing (it not only runs our apps' test suites, but it also spins up EC2 nodes, provisions them with Salt, and makes sure they pass the same health checks that our load balancers use to ensure the nodes should receive user requests).

On this new architecture, we still have a test suite, of course, but there are no nodes to spin up (Lambda handles this for us), no systems to provision (the "nodes" are containers that hold only our app, Amazon's defaults, and Zappa's bootstrap), and not even any load balancers to keep healthy (this is API Gateway's job).

In short, our tests and builds are simpler now, so we went looking for a simpler system. Plus, we didn't want to have to run one or more servers for CI if we're not even running any (permanent) servers for production.

So, we found LambCI. It's not a platform we would normally have chosen—we do quite a bit of JavaScript internally, but we don't currently run any other Node.js apps. It turns out that the platform doesn't really matter for this, though.

LambCI (as you might have guessed from the name) also runs on Lambda. It requires no permanent infrastructure, and it was actually a breeze to set up, thanks to its CloudFormation template. It ties into GitHub (via AWS SNS), and handles core duties like checking out the code, running the suite only when configured to do so, and storing the build's output in S3. It's a little bit magical—the good kind of magic.

It's also very generic. It comes with some basic bootstrapping infrastructure, but otherwise relies primarily on configuration that you store in your Git repository. We store our build script there, too, so it's easy to maintain. Here's what our build script (do_ci_build) looks like (I've edited it a bit for this post):

#!/bin/bash

# more on this in a future post
export PYTHONDONTWRITEBYTECODE=1

# run our test suite with tox and capture its return value
pip install --user tox && tox
tox_ret=$?

# if tox fails, we're done
if [ $tox_ret -ne 0 ]; then
    echo "Tox didn't exit cleanly."
    exit $tox_ret
fi

echo "Tox exited cleanly."

set -x

# use LAMBCI_BRANCH unless LAMBCI_CHECKOUT_BRANCH is set
# this is because lambci considers a PR against master to be the PR branch
BRANCH=$LAMBCI_BRANCH
if [[ ! -z "$LAMBCI_CHECKOUT_BRANCH" ]]; then
    BRANCH=$LAMBCI_CHECKOUT_BRANCH
fi

# only do the `zappa update` for these branches
case $BRANCH in
    master)
        STAGE=dev
        ;;
    qa)
        STAGE=qa
        ;;
    staging)
        STAGE=staging
        ;;
    production)
        STAGE=production
        ;;
    *)
        echo "Not doing zappa update. (branch is $BRANCH)"
        exit $tox_ret
        ;;
esac

echo "Attempting zappa update. Stage: $STAGE"

# we remove these so they don't end up in the deployment zip
rm -r .tox/ .coverage

# virtualenv is needed for Zappa
pip install --user --upgrade virtualenv

# now build the venv
virtualenv /tmp/venv
. /tmp/venv/bin/activate

# set up our virtual environment from our requirements.txt
/tmp/venv/bin/pip install --upgrade -r requirements.txt --ignore-installed

# we use the IAM profile on this lambda container, but the default region is
# not part of that, so set it explicitly here:
export AWS_DEFAULT_REGION='us-east-1'

# do the zappa update; STAGE is set above and zappa is in the active virtualenv
zappa update $STAGE

# capture this value (and in this version we immediately return it)
zappa_ret=$?
exit $zappa_ret

This script, combined with our .lambci.json configuration file (also stored in the repository, as mentioned, and read by LambCI on checkout) is pretty much all we need:

{
    "cmd": "./do_ci_build",
    "branches": {
        "master": true,
        "qa": true,
        "staging": true,
        "production": true
    },
    "notifications": {
        "sns": {
            "topicArn": "arn:aws:sns:us-east-1:ACCOUNTNUMBER:TOPICNAME"
        }
    }
}

With this setup, our test suite runs automatically on the selected branches (and on pull request branches in GitHub), and if that's successful, it conditionally does a zappa update (which builds and deploys the code to existing stages).

Oh, and one of the best parts: we only pay for builds when they run. We're not paying hourly for a CI server to sit around doing nothing on the weekend, overnight, or when it's otherwise idle.

There are a few limitations (such as a time limit on lambda functions, which means that the test suite + build must run within that time limit), but frankly, those haven't been a problem yet.

If you need simple builds/CI, it might be exactly what you need.

Zappa

For the past few months, I've been focused primarily on a Python project that we're deploying without any servers.

Well, of course that's not really true, but we're deploying it without any permanent servers.

The idea of "serverless" architecture isn't brand new, anymore, but running serverless applications on the AWS infrastructure—which I've become very familiar with over the past few years—is still a pretty new concept.

AWS Lambda has been around for a few years, now. It's a platform that allows you to run arbitrary code, in response to events, and pay only for gigabyte-seconds of RAM time. This means that someone else (Amazon) manages the servers, networking, storage, etc.

At some point, Lambda gained the ability to run Python code (instead of just JavaScript, C#, and Java). This piqued my interest, but we didn't have a whole lot of use for it in building web apps. Nevertheless, we used it to turn SNS notifications into IRC messages, so our #ops IRC channel would get inline notices that our Autoscalers were autoscaling.

In early 2015, I tweeted: "…too bad @AWSCloud Lambda can’t listen (and respond) to HTTP(S) events on Elastic Load Balancer…".

A while later, Amazon introduced API Gateway, which—amid other functionality that we don't use very much—had the ability to turn arbitrary HTTP(S) requests into AWS events. Things got interesting. You'll recall, from above, that Lambda functions can run in response to events.

Interesting in that we could respond to HTTP events, but it wasn't really possible to use regular tools and frameworks with API Gateway. We're used to building apps in Flask, not monolithic Python functions that do their own HTTP request parsing.

As time went on, these tools got a little more mature and gained more useful features. I kept thinking back to my tweet where we could just run code, not servers.

Then, in October—increasingly tired of the grind of otherwise-simple operations work—I went searching a bit harder for something to help with the monolithic lambda function problem, and I stumbled upon Zappa. It seemed to be exactly the kind of thing I was looking for. With a bit of boilerplate, hackery, and near-magic, it turns API Gateway request events into WSGI requests, and Flask (plus other Python tools) speaks WSGI. This looked great.

Little did I know that right around that same time, there were some new, barely-documented (at the time), changes to API Gateway that would help reduce the magic and hacky parts of the Zappa boilerplate.

I quickly built my first simple Zappa-based app (it was actually porting a 10-year-old PHP app), and deployed it: paste.website.

We're using this technology on a very large client project, too. It's exciting that we're going to be able to do it without having to worry about things like software upgrades, underutilized servers, and build nodes that cost us money while we're all sleeping.

I'm not going to let this turn into yet-another-Zappa-tutorial—there are plenty of those out there—but if you're interested in this kind of thing and hadn't heard of Zappa before now, well… now you have.

We (mostly Rich) even managed to get it working on the brand-new Python 3.6 target in Lambda.

DST pain

Tonight, in Montreal (and many other North American cities), we change from Standard Time to Daylight Time.

I know we programmers complain about date/time math relentlessly, but I thought it was worth sharing this real-life problem that someone asked me about on Reddit this weekend:

It sounds like this is a serious problem that has effected [sic] you on more than one occasion. Story?

The simplest complicated scenario is: let's say we have a call scheduled between our team on the east coast of North America and a colleague in the UK at 10AM Montreal time.

Normally Nottingham (UK, same as London time) is 5 hours ahead of Montreal. This is pretty easy. Our British colleague needs to join at 3PM.

However, tonight, we change from EST to EDT in Montreal (clocks move one hour ahead). But the UK will still be on GMT tomorrow. So, now, the daily 10AM call becomes a 2PM call for the Brits.

But this is only for the next 2 weeks, because BST starts on March 26th (BST is to GMT as EDT is to EST). Then, we go back to a 5 hour difference. So we can expect Europeans to show up an hour late for everything this week. Or maybe we're just an hour early on this side of the Atlantic.
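If you'd rather compute this than reason it out in your head (mine has failed me here before), modern Python can show the shifting offsets directly (obviously not the tooling of the day; just an illustration):

from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

MONTREAL = ZoneInfo("America/Montreal")
LONDON = ZoneInfo("Europe/London")

# The same daily 10AM Montreal call: before the EST->EDT switch, during the
# two-week gap, and after BST starts in the UK.
for day in (
    datetime(2017, 3, 10, 10, 0, tzinfo=MONTREAL),  # EST, UK on GMT: 15:00 in London
    datetime(2017, 3, 13, 10, 0, tzinfo=MONTREAL),  # EDT, UK still on GMT: 14:00
    datetime(2017, 3, 27, 10, 0, tzinfo=MONTREAL),  # EDT, UK on BST: 15:00 again
):
    print(day.astimezone(LONDON).strftime("%Y-%m-%d %H:%M %Z"))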

To make this more difficult, we often have calls between not only Montreal and England, but also those two plus Korea and Brazil.

Korea doesn't employ Daylight Saving Time, so a standing 7AM call in Seoul (5PM in Montreal) becomes a 6PM call in Montreal.

And to even further complicate things, our partners in São Paulo switched FROM DST to standard time on Feb 17. Because they're in the southern hemisphere the clock change is the opposite direction of ours, on a different day.

So: yes. It has affected our team on many occasions. It's already very difficult to get that many international parties synced up. DST can make it nearly impossible.