Welcome to our series of articles on migrating a monolithic application to a microservice architecture, where we talk about real-world implementation and the challenges we faced while migrating a monolith to microservices. Be sure to check out the Where To Start post first.
For our next steps, I wanted to give you a few real-world examples of designing and developing microservices that we use in our .NET world.
To start, I recommend you check out Microsoft’s book on cloud architecture and the reference application.
It’s a really great starter: there is a GitHub repo with all the code, so you can use the book to follow along and figure out what’s going on.
To start building your services, you’re probably gonna need...
To make your life (and deployment) easier, you’ll most definitely want to use containers. A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. You don’t need to install all your dependencies, like a database, a message broker, or the exact version of .NET Core/Java; you can use containers for that. All you need installed on your development environment is some kind of container host, like Docker.
The most popular container technology by far is Docker, so it’s a good place to start.
Docker has Docker Hub, a public registry where lots of container images are published daily.
A container image is a template, a base that you use to create containers or your own images. You basically start from a base image, which is usually some kind of operating system (most likely a flavor of Linux).
You take that base image and add stuff to it in layers. If you install .NET Core onto Linux, that’s a layer. Then you add your application code; that’s another layer, and so on.
Docker Hub has a lot of these images already made (like .NET Core, Kafka, IBM MQ, etc.) that somebody already built, so all you have to do is add your own application code to the base image and create your own Docker image that, in turn, you’ll create containers from.
Speaking of which, containers need to be as small as possible so that it takes less time to spin them up. To achieve this, you can use base images built on stripped-down, bare-bones flavors of the desired OS (like Alpine or Stretch for Linux). Not everything is guaranteed to work out of the box on these distributions because a lot of stuff is missing from them, so be sure to test every aspect of your service before using the stripped-down base images.
The way you build your image is by defining a Dockerfile, a file where you tell Docker which base image to use, what to install, and which files to copy from your machine (usually your binaries or code) into the image.
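To give you a rough idea, a Dockerfile for an ASP.NET Core service could look something like the sketch below. The project name and paths are placeholders, not taken from the reference application, and each instruction adds one of those layers we just talked about:

```dockerfile
# Build layer: use the .NET Core SDK image to restore and publish the app
FROM mcr.microsoft.com/dotnet/core/sdk:3.0 AS build
WORKDIR /src
COPY MyService.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /out

# Final image: the smaller ASP.NET Core runtime image plus our binaries
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0
WORKDIR /app
COPY --from=build /out .
ENTRYPOINT ["dotnet", "MyService.dll"]
```

This multi-stage setup also helps with the image-size point from earlier: the heavy SDK image is only used to build, and the final image contains just the runtime and your published output.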
I recommend you get to know docker-compose as well; you’ll often need to spin up multiple containers at once and use them in tandem (e.g. you can run your frontend and backend code in separate containers, spin up a database of choice, and run a message broker), all with a simple config file (docker-compose.yaml) and a single docker-compose up command.
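For example, a minimal docker-compose.yaml that runs a hypothetical API next to a Postgres database might look like this (the service names, image tag, ports, and password are made up for illustration):

```yaml
version: "3.4"
services:
  api:
    build: .            # build the image from the Dockerfile in this folder
    ports:
      - "8080:80"       # expose the API on localhost:8080
    depends_on:
      - db              # start the database container first
  db:
    image: postgres:11  # pull the database image from Docker Hub
    environment:
      POSTGRES_PASSWORD: example
```

One docker-compose up later, both containers are running and can talk to each other over the network docker-compose creates for them.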
There is no need to install all this stuff (except Docker, of course) on your local development environment so... I highly recommend giving docker-compose a read. 🙂
While we’re on the subject: if you’re dealing with containers, 99.999% of the time you’ll want to use Linux as the base OS, so it is wise to familiarize yourself with Linux asap. Your knowledge doesn’t have to be really advanced, but it’s good to know how to install new packages (usually with some kind of package manager), browse directories, view files and processes, and so on. The specifics depend on the flavor of Linux you’re using.
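As a starting point, here are a few commands that work on most distributions (the package-manager line is Debian/Ubuntu specific and shown only as a comment, since other flavors use apk, yum, etc.):

```shell
uname -a            # which kernel / OS am I running?
ls -la /            # browse a directory
ps aux | head -n 5  # view running processes
# Installing packages varies by distro; on Debian/Ubuntu (as root) it would be:
#   apt-get update && apt-get install -y curl
```

Knowing this handful of commands is usually enough to poke around inside a running container and figure out what’s going on.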
There are Windows options, but then there’s the well-known licensing problem, and almost all of the Docker images use Linux as a base, anyway.
With that said, if you want to develop with .NET and containers, your best bet is .NET Core.
.NET Core is a cross-platform framework, designed to be fast, modular and open source. It’s stable, it’s mature (version 3.0 as of this writing) and it comes with a great set of tools to make your microservice life easier.
Golden rules for Visual Studio and .NET Core Docker development:
Most of the time, when you hear microservices, you think of splitting your application into a series of REST (or GraphQL/gRPC/insert your favorite here) APIs that talk to each other directly. But direct calls create dependencies, which you want to avoid (if you can) while designing microservices as independent, standalone modules.
Instead, most of the communication between microservices will ideally be asynchronous, going through a messaging middle-man rather than direct calls.
The main advantages of using messaging are decoupling and reliability. If one service stops working, messages intended for it will wait for the service to start working again, meaning that requests won’t be lost in a void.
A good place to start when working with messaging is to use battle-tested service bus frameworks like MassTransit or NServiceBus, depending on the messaging system you use (keep in mind that NServiceBus has some licensing to consider).
If you’re using Kafka (and you probably should be), you can use Confluent’s .NET Core Client NuGet package that makes it a breeze to work with Kafka.
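To give you an idea of what "a breeze" means here, producing a message with Confluent’s client looks roughly like this sketch; the broker address and the orders topic are placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Confluent.Kafka; // Confluent.Kafka NuGet package

public static class KafkaProducerExample
{
    public static async Task Main()
    {
        // Point the producer at your Kafka broker (placeholder address)
        var config = new ProducerConfig { BootstrapServers = "localhost:9092" };

        // A producer with no key and a string value
        using var producer = new ProducerBuilder<Null, string>(config).Build();

        // Fire off a message to a hypothetical "orders" topic
        var result = await producer.ProduceAsync(
            "orders", new Message<Null, string> { Value = "order-created" });

        Console.WriteLine($"Delivered to {result.TopicPartitionOffset}");
    }
}
```

The consumer side is similarly compact: a ConsumerBuilder, a Subscribe call, and a Consume loop.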
If you’re using messaging, you’ll probably want to use some kind of ServiceBus architecture that’ll make it easier to process messages as events or commands.
If you’re lucky and your technology stack is supported by MassTransit/NServiceBus, use one of those. If you’re using an unsupported message broker (like IBM MQ), you can try creating a custom transport for those libraries, or you can try rolling your own ServiceBus/EventBus functionality based on Microsoft’s reference implementation.
It’s not a bad place to start, but it’s missing some features like advanced error handling and retrying, so use it at your own risk.
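If you do go the MassTransit route, processing a message as an event boils down to implementing IConsumer&lt;T&gt;. A sketch, with a made-up OrderCreated event contract, might look like this:

```csharp
using System;
using System.Threading.Tasks;
using MassTransit; // MassTransit NuGet package

// A made-up event contract, shared between the publishing
// and the consuming service
public class OrderCreated
{
    public Guid OrderId { get; set; }
}

// The consumer: MassTransit invokes Consume for every
// OrderCreated message that arrives on the endpoint
public class OrderCreatedConsumer : IConsumer<OrderCreated>
{
    public Task Consume(ConsumeContext<OrderCreated> context)
    {
        Console.WriteLine($"Handling order {context.Message.OrderId}");
        return Task.CompletedTask;
    }
}
```

You then register the consumer on a receive endpoint when configuring the bus for your transport of choice, and the framework takes care of serialization, routing, and retries for you.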
Be sure to check out our next article explaining what those events, commands, and messages are all about - you can find it below.