Microservice architecture is a good design practice, but testing it locally is very difficult. Let’s take a look at why.
A simple microservice architecture may look like this:
You can see several components in this architecture:
However, microservice architecture in reality is a lot more complicated. Here is a more realistic version:
First, there are many more services in the real world than in the simple architecture we saw above. Each business domain has its own microservice ecosystem. Each product can have its own API service, message queue, async services, and databases. Once a company adopts microservice architecture, the number of services grows exponentially.
Second, business domains often cross boundaries. A service in Product A sometimes reads from a database defined for Product B. This is not necessarily bad design, because business domains naturally relate to each other. For example, when you do a Google web search, the search API will inevitably also call Google Maps and YouTube. It is very hard to build a solid wall between products such that services from one product never call across business boundaries.
Third, not all services belong to a product. Some services are built as “platform services” or “shared services”. Some examples are:
It is very common for a mid-sized company to have hundreds of backend services. Spinning up every single one of them on a laptop is simply impossible.
Microservice architecture poses some challenges for developers:
To solve these problems, we created a local developer environment for Nylas engineers.
The foundation of a developer environment is defining a “domain”. A domain is a collection of closely related services. Service coupling inside a domain is much tighter than services across domains.
A domain is usually defined by product. For example, the Nylas Streams domain is defined by the Nylas Streams product (https://stg-5ji7vw.elementor.cloud/capabilities/nylas-streams). This domain contains a Streams API service, a Streams async service, a Streams database, and a Streams message queue.
In this way, we can group microservices into domains.
Here’s what the architecture might look like before grouping:
And after grouping into domains:
At Nylas, code repositories are defined not by service, but by domain. Services that work closely together are stored in the same repository.
Therefore, our developer environment is built on a domain-by-domain basis. Each time our developers want to run and test a service locally, they will have to start the developer environment for the entire domain.
Let’s look at an example developer environment:
Inside the example above, we have the API Service, Async Service A, and Async Service B. They are the services we want to test.
Don’t forget: services have dependencies outside the domain. Some dependencies are provided by third parties: message queues (Kafka/RabbitMQ/SQS), databases (Mongo/Dynamo/Spanner), caches (Redis), and so on. Other dependencies live in other domains; for example, an email service may need to call an authentication service, which is outside of the email domain.
The developer environment consists of all the services defined in a domain. It also contains emulators for third-party dependencies and for services outside of the domain.
At this point, we’ve defined the scope of our developer environment, but how are we going to run it locally?
The answer is Minikube and Tilt.
Minikube provides the infrastructure for our developer environment.
Minikube runs a local Kubernetes cluster that simulates Kubernetes in production. You can create Kubernetes deployments and cron jobs in Minikube and access them via the Kubernetes CLI, kubectl. For more about Minikube, see https://minikube.sigs.k8s.io/docs/.
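For example, once Minikube is installed, you can spin up a cluster and drive it with standard kubectl commands:

```bash
# start a local Kubernetes cluster (assumes minikube is installed)
minikube start

# interact with it exactly as you would with a production cluster
kubectl get pods --all-namespaces
kubectl create deployment hello --image=nginx
```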
Let’s have a look at the developer environment for the Nylas Sync domain, which has 14 services and 2 jobs running inside a local Minikube cluster:
Each pod here runs a unique service, and services interact with each other inside the same cluster.
Developers can see the list of services, such as “zoom-notification-server”, “graph-notification-subscriber”, and “graph-metadata-listener”, in the “Deployments” tab:
All of these are Nylas backend services, running locally inside the same Minikube cluster. But why don’t we use Docker Compose?
First, Nylas backend services interact with Kubernetes APIs. Some microservices create Kubernetes jobs, and some provision Kubernetes pods. There is no way Docker Compose can simulate the Kubernetes API; a sketch of this pattern follows.
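To make this concrete, here is a rough sketch (not Nylas’s actual code) of a service creating a Kubernetes Job through the official client-go library; the job name and container image are hypothetical placeholders:

```go
package main

import (
	"context"
	"log"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Load credentials from inside the cluster the service runs in.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// A hypothetical one-off job; name and image are placeholders.
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "sync-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{
						{Name: "sync", Image: "busybox", Command: []string{"echo", "done"}},
					},
				},
			},
		},
	}

	// This call hits the Kubernetes API server, which Docker Compose cannot provide.
	if _, err := clientset.BatchV1().Jobs("default").Create(context.TODO(), job, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```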
Second, Minikube better simulates a production environment. People use Helm, not Docker Compose, to deploy services into production. We can use the same Helm template for our developer environment and production; the only difference is the Helm values. In this way, people can test not only their code but also their deployment pipeline.
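For instance, the developer environment might override only a handful of values while reusing the production chart unchanged; a hypothetical values-dev.yaml:

```yaml
# values-dev.yaml -- hypothetical dev-only overrides;
# the chart itself is identical to the one used in production
replicaCount: 1
image:
  tag: local
resources:
  requests:
    cpu: 50m
    memory: 64Mi
```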
Besides Minikube, we use a wonderful tool called Tilt. Tilt is a developer tool for building and deploying services to a local Kubernetes cluster, and it offers smart rebuilds and live updates.
To set up the developer environment, simply run tilt up. All services will be automatically deployed to Minikube. Tilt also provides a dashboard where you can watch the progress:
Under the hood, Tilt does several things for each service: it builds the container image, deploys the service’s Kubernetes resources to the cluster, watches the source files for changes, and streams logs and status to its dashboard.
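These steps are declared in a Tiltfile. Here is a minimal sketch; the service name, build context, chart paths, and port are hypothetical:

```python
# Tiltfile -- a minimal sketch; all names and paths are hypothetical
# build the service image from its source directory
docker_build('email-api', './services/email-api')

# reuse the same Helm chart as production, with dev-only values
k8s_yaml(helm('./charts/email-api', values=['./charts/email-api/values-dev.yaml']))

# forward the service port so it is reachable from the host
k8s_resource('email-api', port_forwards=8080)
```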
The most exciting thing Tilt offers is live update. Every time a developer changes code in their IDE, Tilt detects the change and automatically restarts the service. The restart takes only a few seconds, so the write-and-test iteration loop is now shorter than ever.
For more about Tilt, you can read their documentation: https://tilt.dev/.
Most services have third-party dependencies: a service may rely on Redis, PostgreSQL, Kafka, and so on. To set up a developer environment, we have to run those third-party dependencies locally.
If a third-party service has an official Helm chart, we can simply install it in our Minikube cluster, as shown below. However, some third-party dependencies cannot be installed this way: services such as Google PubSub and Amazon DynamoDB do not have Helm charts available.
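For chart-backed dependencies, installation is a couple of commands. For example, Redis can be installed from its Bitnami chart:

```bash
# add the Bitnami chart repository and install Redis into the local cluster
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis bitnami/redis
```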
For the services without charts, we use emulators.
Here are some emulators Nylas uses for our developer environment:
We built these emulators into Docker images and wrote a Helm chart template. We can then deploy them to Minikube.
Here is an example Dockerfile for our Google PubSub emulator:
# Stage 1: use the Google Cloud SDK image to download the Pub/Sub emulator
FROM google/cloud-sdk:alpine AS builder
LABEL stage=builder
RUN gcloud components install pubsub-emulator

# Stage 2: copy only the emulator into a slim JRE image
FROM openjdk:jre-alpine
COPY --from=builder /google-cloud-sdk/platform/pubsub-emulator /pubsub-emulator

# tini gives the emulator proper signal handling as PID 1
RUN apk --update --no-cache add tini bash
ENTRYPOINT ["/sbin/tini", "--"]
CMD /pubsub-emulator/bin/cloud-pubsub-emulator --host=0.0.0.0 --port=8085
EXPOSE 8085
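Once the image is built, a small Kubernetes manifest runs the emulator inside Minikube. Here is a minimal sketch; the image tag and labels are hypothetical:

```yaml
# a minimal sketch of a Deployment for the emulator; image tag is hypothetical
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pubsub-emulator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pubsub-emulator
  template:
    metadata:
      labels:
        app: pubsub-emulator
    spec:
      containers:
        - name: pubsub-emulator
          image: nylas/pubsub-emulator:local  # hypothetical image name
          ports:
            - containerPort: 8085
```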
Here is a list of third-party dependencies Nylas uses:
| Third-party dependency | Description | Official Helm chart? | Emulator? |
| --- | --- | --- | --- |
| ElasticSearch | Document-based indexing | Yes: https://github.com/elastic/helm-charts | – |
| Redis | Key-value caching | Yes: https://github.com/bitnami/charts/tree/master/bitnami/redis | – |
| PostgreSQL | Relational database | Yes: https://github.com/helm/charts/tree/master/stable/postgresql | – |
| Vault | Secret store | Yes: https://github.com/hashicorp/vault-helm | – |
| MongoDB | Document-oriented database | Yes: https://github.com/bitnami/charts/tree/master/bitnami/mongodb | – |
| Google PubSub | Message queue | No | Yes: https://cloud.google.com/pubsub/docs/emulator |
| Cloud Spanner | Relational database | No | Yes: https://cloud.google.com/spanner/docs/emulator |
| DynamoDB | Key-value database | No | Yes: https://localstack.cloud/ |
| SQS | Message queue | No | Yes: https://localstack.cloud/ |
| Temporal | Workflow engine | Yes, but the chart is too heavyweight: https://github.com/temporalio/helm-charts | Yes: https://github.com/DataDog/temporalite |
Between official Helm charts and emulators, we have everything we need to run our third-party dependencies locally.
Let’s take a look at our example developer environment again:
Services often interact with other services outside of their domain. An Email service can send API requests to an authentication service, but the two services are not in the same domain. If services in other domains are not running in Minikube, how do we test them locally?
The answer is: building fake services.
If the email service calls the authentication service, we build a fake-auth-service just for testing. The fake service maintains the same API contract as the real one, but it does not perform actual authentication; its responses can be hard-coded.
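As an illustration, here is a minimal sketch in Go of what such a fake service could look like; the endpoint path and response fields are hypothetical placeholders, not Nylas’s actual auth contract:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// fakeAuthResponse mirrors the real auth service's response shape.
// The fields here are hypothetical placeholders.
type fakeAuthResponse struct {
	Valid  bool   `json:"valid"`
	UserID string `json:"user_id"`
}

// verifyToken always returns a hard-coded success response, so services
// in the domain under test can authenticate without real credentials.
func verifyToken(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(fakeAuthResponse{Valid: true, UserID: "test-user-1"})
}

func main() {
	// keep the route identical to the real service's API contract
	http.HandleFunc("/v1/verify", verifyToken)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```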
The answer isn’t: deploying multiple domains locally.
We believe services should be tested locally within their own domain without requiring another one to also be running locally. Deploying multiple domains simultaneously in a Minikube cluster causes more problems than it solves.
The biggest reason is CPU and memory consumption. We experimented with deploying two domains’ worth of services on a 2019 Apple MacBook Pro: our IDE froze, the laptop overheated, and the fans wouldn’t stop spinning. It was quite a bad developer experience.
With a developer environment running locally, we can write function tests. Here are some example function tests we can implement:
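One example is an end-to-end API test that calls a service running in Minikube and asserts on the response. A minimal sketch in Go, where the port-forwarded address and endpoint path are hypothetical:

```go
package functiontest

import (
	"net/http"
	"testing"
)

// TestAPIHealth calls the API service running in the local Minikube cluster.
// The address assumes a hypothetical port-forward to localhost:8080, and
// /healthz is a placeholder for a real API route.
func TestAPIHealth(t *testing.T) {
	resp, err := http.Get("http://localhost:8080/healthz")
	if err != nil {
		t.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		t.Fatalf("expected 200 OK, got %d", resp.StatusCode)
	}
}
```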
Furthermore, function tests can be integrated with Continuous Integration (CI) using GitHub Actions. We found two GitHub Actions plugins quite useful:
Here is an example GitHub Action configuration:
name: Function test
on: pull_request
jobs:
  Function-Test:
    name: Function-Test
    runs-on: ubuntu-latest
    steps:
      - name: Install Go
        uses: actions/setup-go@v2
        with:
          go-version: '>=1.17.0'
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup Kubernetes Tools
        # the release tag below is an assumption; pin to the release you use
        uses: yokawasa/action-setup-kube-tools@v0.8.0
        with:
          tilt: '0.26.3'
      - name: Start Minikube
        id: minikube
        uses: medyagh/setup-minikube@master
        with:
          minikube-version: 1.25.2
      - name: Validate tilt
        run: tilt ci
      - name: Run Function Test
        run: go test -v function-test/...
If a function test fails, people will see the failure in their pull request. If someone accidentally breaks the developer environment, the CI check will also catch it before code deployment.
A developer environment for a service domain at Nylas looks like this:
To summarize:
Now, all Nylas core services can be run and tested locally, bringing a huge boost to developer velocity at Nylas.
Special thanks to all Nylanauts who helped this effort:
Zhi is a staff engineer who leads the Developer Velocity team at Nylas. In 2021, he left Stripe and became a Nylanaut. Zhi is also a history buff who loves traveling.