Microservices are a big deal. They offer a compelling opportunity to move away from monolithic stack architectures, with their cumbersome connections, their lack of flexibility and their often unwieldy links to legacy applications, data repositories and tools. Because microservices give us an architectural approach in which software is composed of smaller, purpose-built, independent services and functions, they are naturally suited to communicating information over well-defined APIs.

As MuleSoft clarifies in its microservices patterns whitepaper, “Traditional applications act as monoliths — meaning that they are single self-contained artifacts — like a bulky concrete slab. A microservice-based application is made up of several building blocks that can be composed together to get new applications and services up and running faster. This type of architecture can encompass thousands of individual components.”

But any powerful technology always has trade-offs and caveats, so when are microservices not such a great idea?

Lift-And-Shift, Point-And-Click

First and foremost, it is important to remember that moving from monoliths to microservices is no point-and-click affair, nor is it typically a lift-and-shift process. The move generally requires re-architecting and refactoring down to ground zero, so organizations may want to think about microservices in the first instance for greenfield projects (or at least brownfield deployments where only a portion of the base-layer services and provisioning is in place) that are genuinely cloud-native.

As microservices consultant and author Sam Newman notes here, “I remain convinced that it is much easier to partition an existing brownfield system than to do so up front with a new, greenfield system. You have more to work with. You have code you can examine, you can speak to people who use and maintain the system. You also know what ‘good’ looks like – you have a working system to change, making it easier for you to know when you may have got something wrong or been too aggressive in your decision-making process.”

Infrastructure Overhead

Although we laud microservices for their modularity, which enables easier scalability and more fine-grained replaceability (especially when failures occur that could otherwise bring down an entire system), there is always a trade-off in performance terms: the total resource consumption of a set of microservices is typically higher than that of the equivalent monolith. In other words, our infrastructure overhead becomes more expensive.
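To put purely illustrative numbers on that overhead, the back-of-envelope sketch below (in Go) assumes every figure rather than measuring it: each service process carries its own runtime baseline and each sidecar adds its own footprint, so the summed total climbs well past the original monolith even before any replicas are counted.

// A back-of-envelope sketch of why the summed footprint grows.
// Every figure here is a hypothetical assumption for illustration, not a measurement.
package main

import "fmt"

func main() {
	const runtimeBaselineMB = 150.0 // assumed per-process baseline: runtime, framework, connection pools
	const businessLogicMB = 600.0   // assumed memory for the business logic itself

	services := 12    // assumed number of microservices the monolith is split into
	sidecarMB := 60.0 // assumed per-service sidecar proxy footprint

	monolith := runtimeBaselineMB + businessLogicMB
	microservices := float64(services)*(runtimeBaselineMB+sidecarMB) + businessLogicMB

	fmt.Printf("monolith total:      ~%.0f MB\n", monolith)      // ~750 MB
	fmt.Printf("microservices total: ~%.0f MB\n", microservices) // ~3120 MB, before any replication
}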

Because we are building each microservice in and of itself, pockets of redundant logic can arise where functionality is duplicated across services. Although we can use various techniques and approaches from the realm of service meshes to factor that duplication out, each has its own overhead and management responsibility, so there is no such thing as a free lunch here.
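As a minimal sketch of how that duplication creeps in, imagine two hypothetical services, orders and billing, each carrying its own copy of the same retry-and-back-off logic for calling a peer, and each drifting slightly apart over time (the service names and retry policies below are assumptions for illustration, not drawn from any particular system).

// duplication.go: the same cross-cutting retry logic, copied into two services.
package duplication

import (
	"net/http"
	"time"
)

// callPeerWithRetryOrders lives in the hypothetical orders service...
func callPeerWithRetryOrders(url string) (*http.Response, error) {
	var resp *http.Response
	var err error
	for attempt := 0; attempt < 3; attempt++ {
		resp, err = http.Get(url)
		if err == nil {
			return resp, nil
		}
		time.Sleep(time.Duration(attempt+1) * 100 * time.Millisecond)
	}
	return nil, err
}

// ...and again in the hypothetical billing service, with a subtly different policy.
func callPeerWithRetryBilling(url string) (*http.Response, error) {
	var resp *http.Response
	var err error
	for attempt := 0; attempt < 5; attempt++ { // five retries here, three above: quiet divergence
		resp, err = http.Get(url)
		if err == nil {
			return resp, nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return nil, err
}

Pulling that logic out into a mesh or a shared library removes the drift, but it adds the operational overhead described above.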

This is where design logic may veer towards the use of sidecar proxies to handle the network-related tasks a microservice needs in order to operate. From authentication and traffic management through to observability via monitoring and logging, a sidecar proxy provides the fine-grained control each microservice needs, but there is a resource consumption cost here too. The point of diminishing returns for sidecar proxies differs depending on the precise nature of the environment in which the microservices are deployed.
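As a minimal sketch of the pattern, assuming a hand-rolled sidecar written in Go that fronts a local service on port 8080 (real deployments would more likely use an off-the-shelf proxy such as Envoy), the proxy below handles a basic credential check and an access log, so the application behind it never carries that network-handling code itself.

// sidecar.go: a toy sidecar proxy handling auth and logging for a local service.
// Ports, paths and the credential check are assumptions for illustration only.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// The application this sidecar fronts, reachable only on localhost.
	app, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(app)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		// Authentication: reject requests that arrive without credentials (placeholder check).
		if r.Header.Get("Authorization") == "" {
			http.Error(w, "missing credentials", http.StatusUnauthorized)
			return
		}
		// Traffic management: forward everything else to the local application.
		proxy.ServeHTTP(w, r)
		// Observability: emit a basic access-log line per request.
		log.Printf("%s %s handled in %v", r.Method, r.URL.Path, time.Since(start))
	})

	// All external traffic enters through the sidecar, not the application port.
	log.Fatal(http.ListenAndServe(":15001", handler))
}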

Depending On Dependencies

As a microservices architecture grows, developers and sysadmins will need to keep track of the growing web of dependencies that runs through the design pattern. As microservices evolve throughout a project, the freedom afforded by the ability to change each building block means each piece has to be tested independently, which can be a help or a hindrance.
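Independent testing, at its simplest, means each building block can be exercised with no other services running at all. The sketch below shows one way that looks in Go, using a hypothetical /orders endpoint (the handler and names are assumptions for illustration, not taken from a real system).

// orders_handler_test.go: testing one microservice's endpoint entirely in isolation.
package orders

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// ordersHandler stands in for one building block's HTTP endpoint.
func ordersHandler(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodGet {
		w.WriteHeader(http.StatusMethodNotAllowed)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	w.Write([]byte(`{"orders":[]}`))
}

func TestOrdersHandlerReturnsOK(t *testing.T) {
	// No databases, queues or sibling services are needed to run this test.
	req := httptest.NewRequest(http.MethodGet, "/orders", nil)
	rec := httptest.NewRecorder()

	ordersHandler(rec, req)

	if rec.Code != http.StatusOK {
		t.Fatalf("expected status 200, got %d", rec.Code)
	}
}

The flip side is that every one of those small, independent test suites has to be owned, run and kept honest as the dependency graph grows.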

Looking ahead, we can expect the use of microservices to remain most prevalent in the database arena, as well as in data analytics projects, business intelligence tools and customer transaction (or indeed CRM) applications. Taking stock of the negative and positive aspects of microservices together is arguably the most prudent approach here. Given that many microservices projects have started as monoliths that proved too clunky, this path to granular liberation can be paved with a considerable amount of reengineering, at least at the outset.