
Should this be a microservice?

After developing several distributed systems and taking part in dozens of architectural discussions, I decided to put together a way to frame the microservices debate.  Microservices have been fashionable for some time, in large part because big and successful cloud companies use them.  It seems reasonable that to create a “serious system” one must use serious tools and architecture, and today that means microservices.  No engineer wants to be called out for creating a solution that “doesn’t scale.”

The definition of a microservice varies, but overall it tends to be a piece of your system that can run somewhat independently (unless, of course, it depends on other microservices) and exposes a REST or queue-processing interface.  Code encapsulation and separation of concerns have been around for a long time; the current combination of containers, fast networks, and REST APIs simply makes it easy to integrate pieces of a system as web services, and that is what microservices build on.
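
To make that concrete, here is a minimal sketch of such a service: one narrow concern behind a REST interface.  The service name, route, and data are hypothetical, and Flask is just one convenient way to expose HTTP from Python.

# inventory_service.py - a minimal, hypothetical microservice sketch.
# It owns one narrow concern (inventory counts) and exposes it over REST.
from flask import Flask, jsonify

app = Flask(__name__)

# In a real service this would live in the service's own data store.
_STOCK = {"sku-123": 42, "sku-456": 7}

@app.route("/inventory/<sku>")
def get_inventory(sku):
    """Return the stock level for a SKU, or 404 if we don't know it."""
    if sku not in _STOCK:
        return jsonify({"error": "unknown sku"}), 404
    return jsonify({"sku": sku, "quantity": _STOCK[sku]})

if __name__ == "__main__":
    # Each microservice runs as its own small web server.
    app.run(port=8001)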

What are the main advantages of microservices?  They are:
1. Scaling part of your system independently.  If you have something that does a lot of processing or handles a ton of data, that piece may starve the other parts of your system.  It would be nice to give the hungry beast extra power, ideally right when it needs it.
2. Understanding the code base.  If you have a well-defined interface and all you need to worry about is supporting it, then of course you don’t need to worry about the caller or the complete end-to-end test.  It sounds attractive: teams can work independently, and as long as they don’t change the contract they can keep working away.
3. Quick deployments.  You do not need to deploy the entire system, unless of course you are changing the interfaces between microservices, in which case you might have to deploy a lot of things at once.
4. Fault isolation.  If one of the microservices goes down, the rest of the system can stay up (see the caller sketch after this list).
5. Necessity for different stacks.  If you have multiple teams or need different technologies for functionality or performance reasons (one component may be well suited to Python, another may only be possible in C# because of a legacy .NET library), each service can use its own stack.
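
As a small illustration of the fault-isolation point, here is a hedged sketch of a caller that treats a dependency as optional: if a hypothetical recommendations service is down or slow, the caller falls back to an empty result instead of failing.  The URL, timeout, and fallback are assumptions for the example.

# caller.py - fault isolation from the consumer's side (sketch).
# If the (hypothetical) recommendations service is down or slow,
# degrade gracefully instead of failing the whole request.
import requests

RECS_URL = "http://recommendations.internal:8002/recommendations"  # assumed endpoint

def get_recommendations(user_id: str) -> list[str]:
    try:
        resp = requests.get(f"{RECS_URL}/{user_id}", timeout=0.2)
        resp.raise_for_status()
        return resp.json()["items"]
    except requests.RequestException:
        # The dependency is unavailable; fall back to a safe default
        # so the rest of the system keeps serving.
        return []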

The microservice architecture is not all upside; there are some downsides as well:
1. The network is often the slowest way to communicate between components.  Using latency numbers from 2012, a main memory reference is about 5,000 times faster than a round trip within the same data center (a quick back-of-the-envelope check follows this list).
2. More code in total.  Every service is going to need to be a small web server with its configs, logging setup, etc.  It’s possible to consolidate some of this, but not so much if you have different stacks.
3. More resources.  To have a web application, even a micro web application, you might need some of the following: data storage, containers, caches, queues.  You might reuse those between your microservices, but then they are not so isolated.
4. Deployment coordination.  If you change an interface, the whole system needs to be redeployed, and now you need Kubernetes or some other orchestration technology.
5. Debugging is harder.  It’s great when you are inside the microservice and only well-defined things come your way; those bugs are easy.  Much more interesting scenarios happen when a request crosses network boundaries several times, possibly passing through different technology stacks along the way.
6. Running the system locally is harder.  If you have 15 microservices (which is not a big number), now you are writing a docker-compose file or connecting part of the system to a remote environment.
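
For the first point, here is a quick back-of-the-envelope check using the 2012 numbers from the “Latency Numbers Every Programmer Should Know” gist cited below (main memory reference ~100 ns, round trip within the same data center ~500,000 ns):

# Rough latency comparison using the 2012 numbers from the cited gist.
MAIN_MEMORY_REF_NS = 100            # main memory reference
DATACENTER_ROUND_TRIP_NS = 500_000  # round trip within the same data center

ratio = DATACENTER_ROUND_TRIP_NS / MAIN_MEMORY_REF_NS
print(f"A data center round trip is ~{ratio:,.0f}x slower than a memory reference")  # ~5,000x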

If the microservices are not really micro, what you end up with is a distributed monolith: everything still needs to be up, running, and synchronized, but now it’s slower and harder to debug.  That is the microservice architecture gone wrong, and it negates the advantages of a smaller code base to understand and of fault isolation.

Given the pros and cons, it’s important to ask critical questions before splitting up your code base:
1. Is there a part of the system that takes up a lot of resources and needs to move at a different pace than the rest?  For instance, you might have streaming video versus regular HTTP request processing.
2. Is your team large and technically diverse enough that you can’t get everyone onto the same technology stack?
3. Are there pieces of the system that are isolated enough that you won’t end up with a multitude of potentially circular dependencies within your system?
4. Are you going to invest in the deployment, logging, and debugging tooling needed to trace a request across microservices?
5. Are you certain enough about the service boundaries that you won’t have to refactor across a bunch of microservices and their dependencies all at once?

If the answer is yes to most, if not all, of the questions above, then it’s worth considering.  If not, then it may be best to go with clean, decoupled modules or classes in your code base and make sure that your solution layout is organized and manageable.
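
As a sketch of that alternative, assuming a hypothetical billing concern, the same boundary discipline can live inside a single code base: a module with a narrow public interface that the rest of the application calls in process, and that could be extracted into a service later if one of the questions above ever flips to yes.

# billing.py - a hypothetical module boundary inside a monolith (sketch).
# Callers use only the public function; everything else is an
# implementation detail, much like a service behind a REST contract.
from dataclasses import dataclass

@dataclass
class Invoice:
    customer_id: str
    amount_cents: int

def create_invoice(customer_id: str, amount_cents: int) -> Invoice:
    """The module's public contract.  Today it is an in-process call;
    if billing ever needs its own scaling or stack, this is the seam
    where a network interface would go."""
    _validate(amount_cents)
    return Invoice(customer_id=customer_id, amount_cents=amount_cents)

def _validate(amount_cents: int) -> None:
    # Private helper, not part of the contract.
    if amount_cents <= 0:
        raise ValueError("amount must be positive")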

Ideas, data, and inspiration have been taken from these excellent sources:

https://insights.sei.cmu.edu/sei_blog/2015/11/microservices-beyond-the-hype-what-you-gain-and-what-you-lose.html
https://cloudacademy.com/blog/microservices-architecture-challenge-advantage-drawback/
https://stackoverflow.com/questions/33041733/microservices-vs-monolithic-architecture
https://hackernoon.com/the-microservices-hype-7f9398f66f99
https://developers.redhat.com/blog/2018/09/10/the-rise-of-non-microservices-architectures/
https://gist.github.com/jboner/2841832
