My tool-kit for tiny APIs

Gareth Cronin
8 min read · Jul 11, 2021


In my first story in this series, I explained that I spend about a day a week solving problems that interest me with tiny software applications. I’ve been assembling a tool-kit to quickly build against architectural patterns that I find keep coming up in tiny apps. The technology I choose has to solve these problems:

  • I only want to pay for what I use (scale-to-zero)
  • I don’t have a lot of time available for learning or building (no steep learning curves without substantial time-savings)
  • I don’t have time for maintenance activities (no patching servers, automated scale-up)
  • I’m not a good UI designer or front end engineer (design systems are great)

In my second story, I described the tool-kit I use for building tiny responsive web apps. When I build these, I stick to two tiers for CRUD-type functions by using Google Firebase and its cunning approach to authentication and authorisation. But sometimes I need server-side logic, and that means adding an API. If I’m going to add server-side logic, then I also want to make that API public, because there’s nothing more annoying than a useful application that is locked away from its users.

As with my previous story, I’ll use a project of mine to illustrate the stack.

TL;DR

  • OpenAPI (and Postman)
  • AWS API Gateway
  • Google Firebase Cloud Functions with Node.js and Express
  • Google Firebase
  • GitHub Actions

The business problem

My side projects fall into two camps:

  • those that solve a problem I have, and it would be cool if I could figure out a way to monetise them one day
  • those that solve a problem I care about, and it would be cool if I could offer them up for free without it costing me too much along the way

This one is in the second camp. Like a lot of people these days, I worry about what happens to all that junk that is used to package the things we consume. I also get annoyed at inefficient and nonsensical supply chains, and I have the urge to disrupt those.

A supply chain that starts with something we throw out and ends in landfill, destruction, reuse, or recycling is known as a “waste stream”. The particular waste stream that bugs me is when I get a food delivery order (usually Uber Eats) and discover it has been sent in packaging that is neither compostable nor recyclable. There’s really no need to use packaging like that, and my hypothesis is that if customers can tell vendors both that they want better packaging and where to find sensibly priced alternatives, vendors will change their behaviour.

My vision is some kind of functionality in Uber Eats, Zomato, and similar apps where a consumer can rate the packaging used in a food delivery and, if it’s a poor choice, send details of better alternatives to the vendor. When I started exploring this, I discovered that figuring out whether something is recyclable here in New Zealand is not straightforward. There are 67 separate recycling schemes: one for each local authority in the country. In some places polypropylene food containers (“number 5 plastics”) are picked up at the kerb for recycling; in others they go to landfill. There is no central database of the differences between schemes, so to even get started on this one, I’d need to collect all that information, put it in a structured form, and expose it with… an API.
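
Putting it in a structured form means something like one record per local authority, keyed by the materials its scheme accepts. This is purely an illustrative sketch of the shape I have in mind, not the actual schema:

// Hypothetical shape of one document in a Firestore 'councils' collection.
// Field names and values here are illustrative only.
{
  name: 'Auckland Council',
  kerbside: {
    plastics: [1, 2, 5],          // resin codes accepted at the kerb
    glass: true,
    paperAndCardboard: true
  },
  notes: 'Lids off, containers rinsed'
}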

Three problems and then some

My tool-kit needs to solve the first three of the problems I’ve mentioned:

  • I only want to pay for what I use (scale-to-zero)
  • I don’t have a lot of time available for learning or building (no steep learning curves without substantial time-savings)
  • I don’t have time for maintenance activities (no patching servers, automated scale-up)

The server-side execution

A while back I tried using the Serverless Framework to build a CRUD API with AWS Lambda and API Gateway. The gateway worked fine, but the cold start time on Lambda was just too slow for a scale-to-zero application. Lambda does let you pre-provision capacity so that functions are always warm, but that means ongoing charges even when no one is using the application. Even in my closest physical region, I was looking at over ten seconds for a cold start on each Lambda function.

For this project I tried Google Cloud Functions instead. The cold start time is impressive: no more than about five seconds, and sometimes less. I found Cloud Functions very easy to use, and I followed a tutorial on creating a simple Firebase-backed function.

Cloud Functions lets a Node.js person like me write the API in Express, the de facto standard Node web service framework. It only takes a few lines to read a Firestore collection and return it from an endpoint. For example, here’s the code for the endpoint that returns the list of local authorities with recycling schemes:

// Return the IDs of all local authorities (councils) with a recycling scheme
app.get('/authority', async (req, res) => {
  const docRef = db.collection('councils');
  const snapshot = await docRef.get();
  res.status(200).send(snapshot.docs.map((doc) => doc.id));
});
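
For context, that handler hangs off a standard Express app that is exported as a single HTTPS Cloud Function. A minimal sketch of the surrounding setup, assuming the exported function is called api, looks like this:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
const express = require('express');

admin.initializeApp();
const db = admin.firestore(); // Firestore client used by the route handlers
const app = express();

// ...route handlers like the one above go here...

// Expose the whole Express app as one HTTPS Cloud Function
exports.api = functions.https.onRequest(app);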

The public-facing API

An API is only consumable if it has a proper developer experience:

  • I need a way to issue API keys to consumers and manage usage limits if required
  • I need a way to measure usage by consumers
  • I need a way to prevent misuse through tools like “throttling” (rate-limiting)
  • I don’t want to have to keep API documentation up to date manually and separately from the contract in the code

I certainly don’t want to burn up my precious building time rolling my own platform for these, so I want to use services that provide them out of the box. Since I had settled on Google Cloud Functions, I explored Cloud Endpoints, Google Cloud Platform’s managed API gateway. Experience tells me that setting up a custom domain is sometimes the curly part with cloud services, so I started making my way through setting it up using the official docs. It was going well until I reached this part:

Build the Endpoints service config into a new ESPv2 docker image. You will later deploy this image onto the reserved Cloud Run service.

That is just crazy-talk. I don’t want to build a container and deploy it with a series of arcane commands just to configure an API gateway! Back I went to AWS API Gateway. I’ve used it before, and setting up a custom domain to proxy other HTTP endpoints (in this case the cloud functions) is a doddle. It also has key management, a dev portal, rate limiting, and usage reporting.

API Gateway also supports OpenAPI, the open standard for defining API contracts that started life as Swagger. Swagger provides a beautiful online editor for messing around with specs in the browser. I defined and documented my API contracts in the Swagger editor, then imported them into Postman for testing directly against the Google Cloud Functions URLs.

Once I was happy with my contracts, I imported them into API Gateway, manually configured the gateway in the console to proxy calls to my Google Cloud Functions (including passing an extra header with a secret token that my Cloud Functions check to make sure it’s the gateway calling them), and then re-exported the configuration to OpenAPI. The API Gateway export uses OpenAPI’s extensibility to include AWS-specific extensions that capture all of the integration configuration in the spec file.
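
On the Cloud Functions side, the check for that shared secret can be a simple piece of Express middleware. This is a sketch rather than my exact code: the header name x-auth-token and the environment variable holding the expected value are just examples.

// Reject any request that did not come through the API Gateway proxy.
// The gateway adds this header (from a stage variable) on every integration request.
app.use((req, res, next) => {
  if (req.headers['x-auth-token'] !== process.env.AUTH_TOKEN) {
    return res.status(401).send({ error: 'Unauthorised' });
  }
  next();
});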

Why would I need to re-export the configuration with those extensions? Well, that brings us to CI/CD.

Deployment

Remembering that I don’t have time for maintenance, I didn’t want to have to rely on making manual changes to the API Gateway configuration as I evolved the API, and I didn’t want to have to deploy my Cloud Functions manually.

API Gateway CI/CD

I’ve used AWS CloudFormation for automating AWS provisioning and configuration in the past, but I’d read good things about the AWS Cloud Development Kit (CDK) in an edition of the Thoughtworks Technology Radar and thought I’d give it a try. The concept is clever: you get to construct CloudFormation-style declarative configurations in your favourite imperative language. This ticks my “shallow learning curve” box nicely, since I can produce my configuration in Node. Better than that, though, the CDK support for API Gateway includes the ability to import an OpenAPI spec with AWS extensions. That means the bulk of the configuration is already done, and code is only required for setting variables, configuring the custom domain, and making the DNS entry. The full code looks like this:

const arn = 'arn:aws'; // ...
const certificate = acm.Certificate.fromCertificateArn(this, 'Certificate', arn);
const definition = apigateway.ApiDefinition.fromAsset('../nz-recycling-advanced-prod-oas30-apigateway.json');
const subdomain = 'nz-recycling.api.cronin.nz';
const api = new apigateway.SpecRestApi(this, "recycling-api", {
  deploy: false,
  apiDefinition: definition,
  domainName: {
    domainName: subdomain,
    certificate: certificate,
  }
});
const zone = route53.HostedZone.fromHostedZoneAttributes(this, 'CroninDomain', {
  zoneName: 'cronin.nz',
  hostedZoneId: 'Z14C5CS', // ...
});
const deployment = new apigateway.Deployment(this, 'recycling-deployment', { api });
new apigateway.Stage(this, 'prod', {
  deployment,
  stageName: 'prod',
  variables: { 'authToken': 'my-secret-token' },
  loggingLevel: apigateway.MethodLoggingLevel.INFO,
});
new route53.ARecord(this, 'CustomDomainAliasRecord', {
  zone,
  recordName: subdomain,
  target: route53.RecordTarget.fromAlias(new targets.ApiGateway(api))
});
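
That snippet lives inside the constructor of a CDK stack. For completeness, the wrapper around it looks roughly like the sketch below: the class and app names are illustrative, and the imports assume the CDK v1-style per-service packages.

const cdk = require('@aws-cdk/core');
const acm = require('@aws-cdk/aws-certificatemanager');
const apigateway = require('@aws-cdk/aws-apigateway');
const route53 = require('@aws-cdk/aws-route53');
const targets = require('@aws-cdk/aws-route53-targets');

class RecyclingGatewayStack extends cdk.Stack {
  constructor(scope, id, props) {
    super(scope, id, props);
    // ...the gateway, stage, and DNS configuration shown above goes here...
  }
}

// The CDK app entry point instantiates the stack so `cdk deploy` can synthesise it
const app = new cdk.App();
new RecyclingGatewayStack(app, 'RecyclingGatewayStack');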

If you’ve read my second story, you’ll know I’m a fan of GitHub Actions. The community support for just about anything you’d ever want to do in a CI/CD pipeline is amazing, and I wasn’t disappointed with the AWS CDK support. The following YAML in the .github/workflows directory is all that is required to stand up the whole API Gateway stack on every push to my GitHub repo:

name: Gateway build
on:
  push:
    branches: [ main ]
    paths:
      - 'gateway/**'
jobs:
  aws_cdk:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: cdk deploy
        uses: youyo/aws-cdk-github-actions@v2
        with:
          cdk_subcommand: 'deploy'
          cdk_args: '--require-approval never'
          actions_comment: false
          working_dir: 'gateway'
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: 'us-west-2'

Cloud Functions CI/CD

With the API Gateway layer automated, it was time to automate deployment of the Cloud Functions.

I had a couple of secrets in my API to deal with: one for accessing a third-party API that figures out which local authority’s boundary a user is inside based on geolocation, and one for the service account configuration for Firebase. Google’s equivalent of AWS’s encrypted environment variables for Lambda is Secret Manager. I found a nice tutorial on accessing secrets from Cloud Functions (it’s in Python, but the principles are the same) and only had a minor hiccup with the IAM model: each secret has its own set of permissions, so the Cloud Function’s service account has to be granted access to each secret individually.
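
In Node the same pattern only takes a few lines with the official client library. Here is a hedged sketch: the project ID and secret name are placeholders, and error handling is left out.

const { SecretManagerServiceClient } = require('@google-cloud/secret-manager');
const secrets = new SecretManagerServiceClient();

// Fetch the latest version of a named secret and return it as a string
async function getSecret(name) {
  const [version] = await secrets.accessSecretVersion({
    name: `projects/my-project/secrets/${name}/versions/latest`,
  });
  return version.payload.data.toString('utf8');
}

// e.g. const geoApiKey = await getSecret('geo-api-key');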

Integrating with GitHub Actions was elegantly simple as usual. Just like the action for AWS, the Firebase token lives in a secret in the GitHub repo; running firebase login:ci in the project directory generates the token. The official documentation for the GitHub Action for Firebase makes some assumptions about directory layout and build jobs that tripped me up, but then I found a much simpler version that worked nicely with my layout:

name: Functions build
on:
  push:
    branches: [ main ]
    paths:
      - 'functions/**'
      - '.github/workflows/functions.yml'
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repo
        uses: actions/checkout@master
      - name: Install Dependencies
        run: |
          cd functions
          npm install
      - name: Deploy to Firebase
        uses: w9jds/firebase-action@master
        with:
          args: deploy --only functions
        env:
          FIREBASE_TOKEN: ${{ secrets.FIREBASE_TOKEN }}

Summary

Let’s revisit those principles.

I only want to pay for what I use (scale-to-zero)

AWS API Gateway, Google Cloud Functions, and Firebase all cost nothing when they are not being used.

I don’t have a lot of time available for learning or building (no steep learning curves without substantial time-savings)

Using Node.js both for my function code on Google and for the AWS CDK stack configuration meant no new languages to learn.

I don’t have time for maintenance activities (no patching servers, automated scale-up)

The stack is serverless and will scale up as required.

Bonza!

Next up… my tool-kit for tiny short-run batch jobs
