If you’re considering adopting open source for your next cloud-native project, you’re probably optimistic about all the money you’ll save. Before you reallocate those savings, it’s important to consider three factors that are often “hidden” costs of open source.
Everyone loves open source, and for good reason. It’s free, high-quality code that has been battle-tested (at least in the case of established projects). But how free is it really? Yes, you can use the code at no cost, but you’ll need to install, operate and maintain it. And how much time that will consume depends on the project. Far too often, we see engineering departments committing to projects without understanding their true cost—a potentially expensive endeavor, especially in the cloud-native stack.
The Three Hidden Costs of Open Source
To get the most out of open source, it’s important to consider the expense tradeoffs right from the beginning. There are three hidden cost categories that smart business leaders should look out for.
Computing Resources: If you’re running open source software, you’re also consuming computing resources. For projects like open source machine learning, you expect to consume a lot of resources. For infrastructure projects, however, resource consumption may not even be on your radar.
Many popular open source solutions do difficult tasks quickly and well by eating through resources. Unless you’re using a project deliberately designed with a minimal resource footprint, the costs can really add up. And if you’re running that work in the cloud, you’ll pay your cloud provider directly for everything you use.
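To see how quickly that adds up, here’s a minimal back-of-the-envelope sketch in Python. The pod counts and per-unit prices are hypothetical placeholders rather than figures from any particular cloud provider; substitute your own numbers.

```python
# Rough monthly cost of the compute an open source component consumes.
# Every number below is a hypothetical placeholder -- plug in your own
# pod counts and your cloud provider's actual prices.

PODS = 200                  # pods running the open source component
VCPU_PER_POD = 0.25         # average vCPU each pod consumes
MEM_GB_PER_POD = 0.5        # average memory (GB) each pod consumes
PRICE_PER_VCPU_HOUR = 0.03  # assumed on-demand price per vCPU-hour
PRICE_PER_GB_HOUR = 0.004   # assumed on-demand price per GB-hour
HOURS_PER_MONTH = 730

cpu_cost = PODS * VCPU_PER_POD * PRICE_PER_VCPU_HOUR * HOURS_PER_MONTH
mem_cost = PODS * MEM_GB_PER_POD * PRICE_PER_GB_HOUR * HOURS_PER_MONTH

print(f"CPU:    ${cpu_cost:,.0f}/month")
print(f"Memory: ${mem_cost:,.0f}/month")
print(f"Total:  ${cpu_cost + mem_cost:,.0f}/month")
```

Even at these modest assumed rates, a single infrastructure component quietly consumes four figures a month, which is exactly the kind of line item that never shows up in the “free software” pitch.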
Staffing: Organizations choosing open source solutions often find that they need help to navigate operations. The nature of open source means there’s no vendor support included to help your team get up to speed or solve challenges. Instead, companies need a dedicated expert or team of experts. These aren’t junior roles, either. Success and efficiency demand senior-level knowledge and experience, and that can be expensive.
The hidden cost also includes staff training. Open source software, like licensed products, requires ongoing learning for developers to skill up and manage upgrades.
Take Kubernetes, for example. It’s open source, but everyone on your team still has to learn how Kubernetes works. Rather quickly, you find that keeping your Kubernetes clusters happy and trouble-free is a full-time job, and those operational tasks require even more expertise. Even if you use a managed service such as Google Kubernetes Engine to run your clusters, you’ll pay for that service and still need an experienced administrator on staff.
Operational Complexity: So, you think, “I’ll pad the training budget and have it covered.” However, open source can also add operational complexity that eats up significant time.
An application developer can learn the basics of Kubernetes in a few days, but being able to use and debug it in production takes much longer. Real expertise takes months or years. In essence, the software is free, but the “spend” takes the form of additional employee hours.
It’s important to note that your project choice affects complexity costs. Some open source software takes months to implement, while other projects take only a few hours. The difference often comes down to control: fewer options and decisions give you less control but move things along faster, and time is money. The key is to choose your open source solution thoughtfully in the context of your larger business goals. You may find that giving up a little control is a good tradeoff for ease of use.
An Example: Open Source Service Meshes
Let’s take service meshes as an example. If you’re running microservices, you’ll need to add security, reliability and observability to your Kubernetes clusters. The best way to do this is by adding a service mesh. Linkerd and Istio are the most popular open source service meshes and address the same challenges; however, they operate quite differently, which will affect cost.
Linkerd is known for its operational simplicity, requiring little to no configuration for standard implementations. That is due to its Rust-based micro-proxy, built specifically for the service mesh use case. Istio, on the other hand, has taken the path of providing extra features, which tend to increase its operational complexity. It also uses the powerful Envoy proxy instead of a purpose-built service mesh proxy, which ends up consuming more resources and requiring significantly more expertise for successful operation. Which service mesh you select will have a dramatic impact on your operational overhead costs: Compared to Linkerd, you’ll need to pay more for compute resources and for staff with Envoy expertise dedicated to caring for Istio. This may be worth it, or it may not; you’ll need to understand the tradeoffs to make a good decision.
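As a rough illustration of how per-proxy overhead multiplies across a fleet, here’s a small sketch. The per-sidecar memory figures are assumptions invented for the example, not measured or published benchmarks for Linkerd’s micro-proxy or Envoy; benchmark your own workloads before deciding.

```python
# Illustrative fleet-wide sidecar memory for two kinds of data-plane proxies.
# The per-proxy footprints are made-up assumptions for this example only --
# measure your own meshed workloads to get real numbers.

MESHED_PODS = 500  # each meshed pod carries one sidecar proxy

ASSUMED_PROXY_MEM_MB = {
    "purpose-built micro-proxy": 20,   # hypothetical per-sidecar footprint
    "general-purpose proxy": 120,      # hypothetical per-sidecar footprint
}

for proxy, mem_mb in ASSUMED_PROXY_MEM_MB.items():
    fleet_gb = MESHED_PODS * mem_mb / 1024
    print(f"{proxy}: ~{fleet_gb:.0f} GB of sidecar memory across the fleet")
```

Whatever the real per-proxy numbers turn out to be, the point stands: a small per-pod difference gets multiplied by every meshed pod you run, and that multiplier is where the hidden cost lives.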
Do Your Research
The bottom line: Do your research before committing to an open source project. Like any significant initiative, open source software requires careful planning and budgeting. You’ll want to move forward with your eyes wide open and all potential costs on the table, from compute and memory to staffing.
Open source also comes with a slew of cost-free benefits that haven’t been covered in detail here: communities constantly working on security and performance improvements, motivations rooted in the common good rather than profit, and more long-term control over your own destiny.
Optimal open source use in the cloud-native stack means planning for the tradeoff expenses that can crop up. And the best way to do that is to talk with others using the product and ask specifically about their costs. Well-established, proven open source products should have a community of users and real-world use cases to help you uncover the hidden costs.