Welcome to our series of articles describing the migration process of a monolithic application to the microservice architecture, where we talk about real-world implementation and challenges we faced while migrating a monolith to microservices. Be sure to check out Where To Start and Real-world Examples and Tips posts first.
You might be wondering: what’s the deal with events and commands everyone keeps talking about?
When something happens somewhere, e.g. a user places an order, that’s an event. The Ordering service that handles orders might want to notify other services (like Shipping, Billing) of that change and it can do that by publishing a message called an Event.
To do this, the Ordering service needs to publish this message to a specific topic, and Shipping and Billing services must be subscribed to that topic to get notified of the changes.
Each subscriber (in this example, Shipping and Billing) gets its own copy of the message, this event, so it can act on it, do its own business processing, and in turn publish other events, and so on.
If, on the other hand, we need something specific done (like placing an order), we send a message called a Command. Unlike an event, a command is addressed to a single, specific endpoint and is processed once by that endpoint.
Here is a nice illustration, taken from the NServiceBus website:
Messages (events, commands) are contracts between endpoints, so it’s wise to keep them slim. If the messages change, we don’t want to go changing too much (if anything) in all our endpoints that consume them.
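To make the event/command distinction concrete, here is a minimal sketch of what slim message contracts might look like. The type and property names are made up for this example; the convention of past tense for events and imperative for commands is a common one:

```csharp
using System;

// An event describes something that already happened (past tense).
// Multiple subscribers can each receive a copy and react to it.
public record OrderPlaced(Guid OrderId, string CustomerId, decimal Total, DateTime PlacedAtUtc);

// A command tells one specific endpoint to do something (imperative).
// It is addressed to a single receiver and handled once.
public record PlaceOrder(Guid OrderId, string CustomerId, decimal Total);
```

Keeping the contracts this slim (plain records with only the data the consumers actually need) means a change on the publisher side is less likely to ripple through every consuming endpoint.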
Once you tackle microservices and have long-running processes, sooner or later something will go wrong, and you’ll find that you need to undo the changes you made in your system.
What was once easy in monoliths becomes a nightmare in distributed architecture. Fortunately, there are patterns that can help you.
What you need to do is have a compensating transaction, a counteraction for every change that your services can make in your system. For example, if a user places an order via the PlaceOrder action, you need to have a CancelOrder action that undoes that order.
Sounds simple in theory, but it’s hard to do in real life.
Thankfully, there are some patterns that can help!
To manage all the doing/undoing of transactions and long running processes, you can use patterns like sagas that can in turn use patterns like choreography, orchestration, or routing slips. Here are some helpful links to help you understand these patterns:
https://microservices.io/patterns/data/saga.html
https://docs.microsoft.com/en-us/azure/architecture/patterns/choreography
Orchestration: there is a single process (object) that orchestrates creating transactions, undoing transactions if something goes wrong, and so on.
Choreography: each service in the process is responsible for publishing the events that create or undo transactions, and so on.
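To illustrate the orchestration idea without tying it to any particular library, here is a hypothetical, simplified sketch: an orchestrator runs a list of steps, and if one fails, it runs the compensating actions of the already-completed steps in reverse order. All names here are invented for the example:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Each step pairs a forward action with its compensating action
// (e.g. PlaceOrder with CancelOrder).
public record SagaStep(Func<Task> Execute, Func<Task> Compensate);

public static class SagaOrchestrator
{
    public static async Task RunAsync(IReadOnlyList<SagaStep> steps)
    {
        var completed = new Stack<SagaStep>();
        try
        {
            foreach (var step in steps)
            {
                await step.Execute();
                completed.Push(step);
            }
        }
        catch
        {
            // Something went wrong: undo the completed steps in reverse order.
            while (completed.Count > 0)
                await completed.Pop().Compensate();
            throw;
        }
    }
}
```

A real saga implementation also has to persist its state between steps and survive process restarts, which is exactly what the library implementations below handle for you.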
MassTransit and NServiceBus each have their own solutions for implementing sagas so, if you can use either of them, be sure to check out their documentation.
It’s worth noting that MassTransit supports the courier (routing slip) variant of choreography sagas out of the box as well.
A routing slip specifies a sequence of processing steps called activities that are combined into a single transaction. As each activity is completed, the routing slip is forwarded to the next activity in the itinerary. When all activities have completed, the routing slip is completed and the transaction is complete.
A key advantage of using a routing slip is that it allows the activities to vary for each transaction. Depending on the requirements for each transaction, which may differ based on things like payment methods, billing or shipping address, or customer preference ratings, the routing slip builder can selectively add activities to the routing slip.
I suggest you look into (and implement) routing slips as your solution for sagas, if you really, really need them.
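As a rough sketch of what building and executing a routing slip looks like with MassTransit's Courier API: the activity names, queue addresses, and argument shapes below are assumptions for the example, and `bus` is assumed to come from your MassTransit configuration:

```csharp
using System;
using System.Threading.Tasks;
using MassTransit;
using MassTransit.Courier;

public static class OrderRoutingSlip
{
    public static async Task ExecuteAsync(IBus bus, Guid orderId)
    {
        var builder = new RoutingSlipBuilder(NewId.NextGuid());

        // Each activity is one processing step in the itinerary;
        // the builder can add or skip activities per transaction.
        builder.AddActivity("ProcessPayment",
            new Uri("queue:process-payment_execute"),
            new { Amount = 100.00m });

        builder.AddActivity("ShipOrder",
            new Uri("queue:ship-order_execute"),
            new { OrderId = orderId });

        // The slip is forwarded from activity to activity; if one faults,
        // compensation runs for the activities that already completed.
        await bus.Execute(builder.Build());
    }
}
```

The per-transaction flexibility mentioned above comes from the fact that the builder runs ordinary code, so you can conditionally `AddActivity` based on payment method, shipping address, and so on.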
When you actually want to write code for your microservice, I suggest you try using CQRS (Command Query Responsibility Segregation) for organizing your code and microservices. CQRS fits in well with event-based systems, and since microservices are supposed to be event-based, it could be a good fit.
There is a great sample app and a video presentation on clean architecture (and CQRS) by Jason Taylor:
GitHub repo
Video presentation
CQRS means you separate your reads (queries) from your commands (insert, update, delete). That’s it. The idea is that you can have a different model for your queries, one that already aggregates the data you need and can be optimized and scaled differently than, say, your inserts and updates.
To help you with CQRS, there is a great NuGet package called MediatR.
MediatR essentially creates an in-memory messaging system for your application that can dispatch and handle messages between your application components.
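A minimal sketch of the CQRS query side with MediatR might look like the following; `OrderDto`, the query name, and the hard-coded result are assumptions for the example (a real handler would read from an optimized read model):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// The read-side model, shaped for what the caller needs.
public record OrderDto(Guid Id, string Status);

// A query is a message: it says what you want, not how to get it.
public record GetOrderQuery(Guid Id) : IRequest<OrderDto>;

// MediatR routes the query to its single matching handler.
public class GetOrderQueryHandler : IRequestHandler<GetOrderQuery, OrderDto>
{
    public Task<OrderDto> Handle(GetOrderQuery request, CancellationToken cancellationToken)
    {
        // In a real service, query your read store here.
        return Task.FromResult(new OrderDto(request.Id, "Placed"));
    }
}
```

Dispatching from, say, a controller is then just `var order = await _mediator.Send(new GetOrderQuery(id));` — the controller never knows which handler runs, which keeps the components decoupled.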
I suggest taking a look at Jason Taylor’s repo and the linked video to get a better understanding of clean architecture.
The good news (at least from a developer’s perspective) – logging is somewhat easy in the microservice architecture. All you need to do is log to console or STDOUT and I suggest you use some kind of structured logging framework like Serilog that has console sinks and different formatters, like the ones for ElasticSearch that you’ll probably want to use.
The harder part comes in when you need to design your service for maximum visibility. I suggest you use CorrelationId as a unique identifier for each request. CorrelationId is something you create at the beginning of your request (e.g. User places an order on the web, and you can create the CorrelationId there. It can be a UUID or some unique random value) and you make sure to log it and pass it along each step of your application – pass it into messages, logs, read it from messages and pass it along to other services and components, etc.
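To show what passing the CorrelationId along might look like in practice, here is a sketch of an ASP.NET Core middleware that reads or creates the identifier and pushes it into Serilog's log context; the `X-Correlation-Id` header name and class name are our choices for the example:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Serilog.Context;

public class CorrelationIdMiddleware
{
    private readonly RequestDelegate _next;

    public CorrelationIdMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        // Reuse the caller's CorrelationId if present; otherwise create one
        // at the edge of the system.
        var correlationId = context.Request.Headers["X-Correlation-Id"].FirstOrDefault()
                            ?? Guid.NewGuid().ToString();

        // Every log entry written inside this scope carries the CorrelationId,
        // so you can filter a whole request's trail in Kibana.
        using (LogContext.PushProperty("CorrelationId", correlationId))
        {
            context.Response.Headers["X-Correlation-Id"] = correlationId;
            await _next(context);
        }
    }
}
```

Register it early in the pipeline with `app.UseMiddleware<CorrelationIdMiddleware>();`, and remember to copy the same value into any outgoing messages so downstream services can keep the chain going.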
The hardest part of logging falls on your infrastructure. Something needs to read all those stdouts from each of the containers and aggregate them in a common place.
This is something that an ELK/EFK (ElasticSearch / (Logstash|Fluentd) / Kibana) stack can really help you with. Fluentd or Logstash is the component that reads the stdout streams and aggregates them into an ElasticSearch database, and Kibana is the tool you use to search and visualize those logs.
This is hopefully solved by your platform of choice (e.g. OpenShift, Azure, GCE, Amazon...). We’ll mention OpenShift sometime later on.
When it comes to configuring your microservices (and you’ll need configuration for things like connection strings, locations of external services, secrets, etc.), if you’re using .NET Core, all you need is the built-in configuration framework (Microsoft.Extensions.Configuration).
First, for local development, you create the appsettings.json file and put all the configuration data you need in there in JSON format. Example:
"Producer": {
"BootstrapServers": "localhost:9092"
}
If you need to override this setting at runtime, you’ll use environment variables. Environment variables are the main configuration option for containers and cloud native applications.
The only thing you need to do in .NET Core is call AddEnvironmentVariables() when you’re building your configuration with ConfigurationBuilder.
This enables you to override any setting in the appsettings.json file with an environment variable whose key matches the configuration key, like so:
ENV Producer__BootstrapServers="new_host_name:port"
Our configuration object (or methods) will then automatically read the setting from the environment variable instead of the appsettings.json file, because configuration providers added later take precedence over those added earlier.
For example, if we wanted to read the Producer:BootstrapServers config from our appsettings with C#, we would use this line:
var bootstrapServers = Configuration.GetSection("Producer:BootstrapServers").Get&lt;string&gt;();
We use __ instead of : in environment variable keys for cross-platform support, because some systems don’t allow : in environment variable names.
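Putting the pieces together, a minimal configuration setup might look like this (this is a sketch; it assumes the Microsoft.Extensions.Configuration.Json and .EnvironmentVariables packages are referenced):

```csharp
using Microsoft.Extensions.Configuration;

var configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
    .AddEnvironmentVariables() // added last, so environment variables win
    .Build();

// Returns the value of the Producer__BootstrapServers environment
// variable if it is set, otherwise the value from appsettings.json.
var bootstrapServers = configuration["Producer:BootstrapServers"];
```

Note that in code you always use the : separator; the __ form exists only on the environment-variable side and is mapped back to : by the configuration framework.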
And finally, we come to the platform. Once you’re done developing your microservices, you need to publish them somewhere. There are many cloud platforms that can run Docker containers, like Microsoft Azure, Google Cloud, Amazon AWS, etc.
If you have no problems (legal or otherwise) with cloud, I suggest you go with cloud platforms and their excellent built-in tools that can help you run your distributed microservices architecture efficiently.
If you need an on-premise solution or you just want to try running your own “cloud”, I suggest you try Red Hat’s OpenShift.
OpenShift is a hybrid platform that can run on-premise. It’s built on top of open-source cloud technologies like Kubernetes and Istio, comes with its own web app (a console) that simplifies working with Kubernetes, and at the same time has a strong focus on security.
There is also an open-source version called OKD (formerly Origin) that you can run without the paid options.
If you don’t have your own servers, don’t worry, you can run OpenShift on your dev machine.
You can download CodeReady Containers and run them locally to try out all the cloud features like Kubernetes and so on – it’s good stuff and good to know.
You can also try it online, sometimes it works. 😊
P.S.: We don’t have any affiliation with IBM or Red Hat (that I know of), so this blog post isn’t just a huge ad for IBM or Red Hat.
The main purpose of my talk and this series of blog posts is to point you in the right direction with some useful information regarding microservices development and with some real-world examples and knowledge that we acquired while developing microservices over the past couple of years.
It’s not a panacea and it’s not a guarantee that it’s the right way of working with microservices but it works for us, maybe it’ll help you in breaking down your monoliths, or at least it’ll break some stuff. 🙂
Kind regards, and thank you for sticking with us to the end of the series. 🙂