Speaker 1: This is Techstrong TV.
Mitch Ashley: Hey everybody. Welcome back to KubeCon 2023 North America, here in Chicago. We are talking Kubernetes, Kubernetes, Kubernetes. There are some other topics too, but that's certainly a theme for today. I'm joined by Shaun O'Meara from Mirantis. Welcome, Shaun.
Shaun O’Meara: Thanks for having me.
Mitch Ashley: Field CTO, correct?
Shaun O’Meara: Field CTO. That’s right. Yeah.
Mitch Ashley: Great. Say a little bit about Mirantis, the folks may not know of you all and things that you do, and we’ll talk about some of your announcements.
Shaun O’Meara: Cool. Mirantis has been around for the better part of 12 or 13 years now, very focused on the open source space. We started in the infrastructure space with the OpenStack community, moved to Kubernetes fairly early, collaborated with Google in the early days of Kubernetes, and we’ve really just focused on enterprise customers. In 2019, we bought the Docker Enterprise business, which we’ve merged into our Mirantis Kubernetes Engine product line, and we’re running hundreds of customers on Kubernetes today and have done so at scale for a number of years. So our entire business is around how to make Kubernetes easier and more consumable.
Mitch Ashley: A lot of depth [inaudible 00:01:08] research, which I run, and we’ve done some work with you all on that, some great, interesting research. So what’s new? What’s happening now? You all continue to move fast, and you tend to do big things when you announce.
Shaun O’Meara: Yeah, we like to make a splash when we can. We really needed to double down on our open source roots and everything we’ve been doing in that space. So around the 2021 timeframe, we launched our k0s offering, which for us was the next generation of our Kubernetes offering, really focusing on how to make Kubernetes super easy to use and to deploy. But the challenge with k0s, as easy as it is, is that it’s still one cluster. The majority of our customer base today run one or two large clusters, and that is a management nightmare for them, because you’ve got to learn Kubernetes [inaudible 00:01:59] manage that. So just before the show, we announced k0smotron. k0smotron is a new way of deploying and managing Kubernetes clusters. We are really trying to change the paradigm of how people think of a Kubernetes cluster.
A cluster can be throwaway. It doesn’t need to be a pet; it can be cattle, to use the old analogy. What we are really trying to solve for with k0smotron is this idea that you can have a small cluster, very affordably. That’s the key message. Ultimately, what we’re doing with k0smotron is solving the resource overhead problem: I want smaller clusters, but I don’t want to deal with the control plane overhead, the three to five nodes per cluster, and then, behind a hundred clusters, hundreds of control plane nodes which I’ve got to manage, with all the complexity that brings. With k0smotron on my systems, I can virtualize the control planes, run them as a Kubernetes workload, and then attach workers from anywhere.
So that’s the key message here. With k0smotron, I can spin up a new Kubernetes cluster in about seven seconds with three lines of YAML, and there I go, I have a cluster up and running. The other part of what we’ve done is integrate that with CAPI, the Cluster API: we’ve made k0smotron a bootstrap provider for CAPI, so I can now attach workers from anywhere.
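For readers curious what those few lines of YAML might look like, here is a minimal sketch of a k0smotron Cluster resource based on the project's public examples; the API group and field names may have changed since this conversation, so verify against the k0smotron repository before relying on it. Applied to a management cluster running the k0smotron operator, an empty spec asks for a hosted control plane with default settings.

```yaml
# Minimal k0smotron cluster sketch (illustrative; check the k0smotron docs
# at https://github.com/k0sproject/k0smotron for the current schema).
apiVersion: k0smotron.io/v1beta1
kind: Cluster
metadata:
  name: my-test-cluster
spec: {}   # defaults: the operator runs the control plane as pods
```

Creating it with kubectl apply -f and removing it with kubectl delete -f is what makes the spin-up-a-cluster-per-test, then-throw-it-away workflow practical.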
Mitch Ashley: So you can tie existing infrastructure into that as well.
Shaun O’Meara: Precisely.
Mitch Ashley: Right. That was going to be my question. Is this only for new greenfield environments, or can you bring it to existing ones? You’re talking about people generally having one large cluster, or a couple of large clusters, and of course those things should probably be split up, but who has the time to really manage that? In that kind of scenario, do you still attach k0smotron to the workers and start managing them? Or is there a dividing line where it’s very much oriented toward one size of implementation versus something bigger?
Shaun O’Meara: So ultimately what we’re doing with k0smotron is moving the control plane away from being a pet. By virtualizing the control plane, I can now spin up hundreds of control planes. So instead of having to share a cluster or share workloads on a single cluster, I can spin up a cluster per workload, because I don’t have that overhead. The way k0smotron works, we use the Konnectivity service.
So now we have full separation between those workers and that control plane, and my workers can live anywhere. I can run a k0smotron control plane on-prem, so I own my control plane. I don’t have to use a third-party control plane or one of the cloud provider control planes, which can take 25 minutes to spin up, but I can attach workers from any of those cloud providers, or all of those cloud providers, as long as I have connectivity back to that single control plane. Suddenly I’m changing the way I work: if I want to do testing, I spin up a whole cluster for that test, run the test, and kill it. That’s the beauty of what’s behind k0smotron.
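As a rough illustration of running the control plane in one place and attaching workers from anywhere, the k0smotron Cluster spec lets you expose the hosted API server and the Konnectivity tunnel through an ordinary Kubernetes Service. The field names below follow the project's examples as I understand them and are worth double-checking; the ports shown are the conventional defaults (6443 for the Kube API, 8132 for Konnectivity).

```yaml
# Sketch: expose a hosted control plane so workers outside the management
# cluster can reach it. Illustrative only; field names may differ by version.
apiVersion: k0smotron.io/v1beta1
kind: Cluster
metadata:
  name: on-prem-control-plane
spec:
  service:
    type: LoadBalancer      # or NodePort where no load balancer exists
    apiPort: 6443           # kube-apiserver endpoint the workers dial
    konnectivityPort: 8132  # tunnel for control-plane-to-worker traffic
```

Workers in any cloud or at the edge then only need network reachability to that one address; the join itself uses a standard k0s worker token.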
Mitch Ashley: It seems almost like you’ve abstracted or virtualized away the structural component of Kubernetes. It’s still there, and you need those things for load, performance, [inaudible 00:05:26] lots of good reasons.
Shaun O’Meara: [inaudible 00:05:28].
Mitch Ashley: But I can also now treat it as running anywhere, right? In fact, it seems like those one- or two-big-cluster implementations could now start spinning up clusters wherever they want, right? A new cloud provider, on-prem, maybe the same cloud provider, but-
Shaun O’Meara: True multi-cloud cluster.
Mitch Ashley: True multi-cloud. Yeah.
Shaun O’Meara: Because instead of trying to force my developers to use a bunch of different endpoints and complicate the application deployment, I can put multiple kubelets in different cloud locations, or at the edge, attached to that centralized control plane, and I’m deploying to one single Kube API. Again, across our whole process we’re trying to simplify the developer experience of getting actual workloads into production, which is what every company ultimately wants.
Mitch Ashley: It seems also like it would help the development process too, right? Early development, development environment, testing environments, simulating production.
Shaun O’Meara: Yeah, CI/CD processes, being able to spin up clusters on demand, being able to move your production workload really simply. If your application is fully automated, why shouldn’t your infrastructure be fully automated?
Mitch Ashley: Good point. So talk some more about the control plane. You said you containerized it. How does that differ from the way it’s implemented now? Is that one of the limitations to expanding, growing, and managing across a complex infrastructure?
Shaun O’Meara: So it all really boils down to the Kube API and etcd. Typically, when I’m deploying a control plane, I need to deploy etcd and I need to deploy the Kube API, and they’re distinct components of the system. So if I’m using something like kubeadm to deploy a control plane, I’ve got to spin up at least three separate nodes for production.
I mean, yes, you can do a single-node cluster, but it’s not really a cluster. What I’m doing is putting etcd and the Kube API into a single pod and treating those as just another containerized Kubernetes workload, which means I control them using Kubernetes norms and Kubernetes standards for containers. So the Kube API and etcd live inside the pod, and I can make as many replicas of that pod as I want, or not, just by changing a replica count using standard Kubernetes CRD norms. The beauty is, if I now need to expand that control plane to support more workers, I push a button and two seconds later I have another pod running, another container running. So I’m expanding. The kubelet itself runs anywhere, and then the Konnectivity service provides the connectivity.
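To make the push-a-button scaling concrete, here is a hedged sketch: because the hosted control plane is described by a custom resource, growing it is a one-field change (or a kubectl patch against the resource), just like any other Kubernetes workload. The replicas field follows the k0smotron examples I have seen and could differ between versions.

```yaml
# Sketch: grow a hosted control plane from one pod to three by editing the
# replica count. Illustrative; confirm the field name against the k0smotron docs.
apiVersion: k0smotron.io/v1beta1
kind: Cluster
metadata:
  name: my-test-cluster
spec:
  replicas: 3   # number of control plane pods (Kube API plus datastore) to run
```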
Mitch Ashley: Great. You mentioned emphasizing open source, really leaning into it even more. Tell me about some things you’ve done. Is that particularly around k0smotron, or is it in general? What’s the philosophy there?
Shaun O’Meara: [inaudible 00:08:14] In general, with the Lens team, which this work has come out of, we’ve really wanted to focus on open source as much as possible. We leverage the open source community for everything we do at Mirantis. We’ve been huge contributors to the community over the years, and we want to continue to be good citizens of the open source community. k0s is fully open source, we’ve got a huge following for it, and a lot of people are providing input and feedback. We just move faster in the open source space, and we can do things with much better quality because we get that feedback in a very controlled way. So both k0s and k0smotron are fully open source. We obviously offer commercial support options for them, but we’re seeing a lot of customers dive in, try it out, implement it, and then come talk to us later about getting support, just to have that backup.
Mitch Ashley: So it takes you out of the crippleware game, right? It sort of works, but if you need more than five, you’ve got to come to us.
Shaun O’Meara: So ultimately, the open source model is always challenging for any company. We want to put it out there, but we also need to pay for it to be produced. So by being good citizens, people can consume our software. When they go into production, they need that extra layer of support, or quite frankly, they don’t have the people to do that support themselves.
Mitch Ashley: Yeah, thinking that too.
Shaun O’Meara: And that’s who really we’re talking to. We’re talking to a lot of platform teams who want to focus their efforts on building the tools their developers need to be more efficient. Well, we’re providing the tools the platform teams need to be more efficient. And if they can treat Kubernetes as just another workload, same way they treat those workloads, we are simplifying their lives and giving them back time to focus on building value for their business. That’s the ethos.
Mitch Ashley: It’s built into the nature of the job, but organizations aren’t hiring 50, 100, 200 platform engineers. They’ve got one, two, three, four, eight, maybe, not large teams. So they’re incented to improve their own productivity with open source automation and management control plane tools.
Shaun O’Meara: Absolutely. I mean, I’ve been going around the world talking to a lot of the customers. I was in the Nordics recently with a platform team at a bank, and they’re super excited by what this brings them because they’re spending so much of their time dealing with underlying platform issues at what is the commodity layer. The Kubernetes API is really a commodity layer. It’s what we put on top of it that brings value to the business. And I can get Kubernetes now from half a dozen to a dozen different places.
Mitch Ashley: Yeah, absolutely.
Shaun O’Meara: On demand, or I can deploy it myself, but it’s still complicated, and it shouldn’t be. That’s the place we’re trying to move the community toward.
Mitch Ashley: Is some of that complexity also introduced by these distributions or these cloud providers? Especially since you talked about multi-cloud, now everything isn’t uniformly the same, and I’ve got to manage across that variation.
Shaun O’Meara: I mean, of course it is. Ultimately, a business model says you want to try and keep your customers as close to home as possible, and I’m not pointing fingers at anybody, but the lock-in model is a real thing. Especially with our European customers, we’re seeing more and more desire to use multiple providers, and I’m being very careful to say providers, because that could be cloud or on-premise.
Mitch Ashley: Right.
Shaun O’Meara: It could be. Often we’re seeing two-vendor policies coming in, at least two-vendor policies, so they’ll work with us and another vendor, or two other vendors, providing their Kubernetes resources. The reality is that it’s not just the Kubernetes that’s different, and often it isn’t; at least at the surface, the API is the same. It’s everything they wrap around it that is different and adds layers of complexity for platform teams, as well as, as I mentioned earlier, the multiple-endpoint problem. I’m deploying my app now: which endpoint do I choose? And suddenly what ends up happening is the apps just go to one provider because it’s easier. So we’re seeing that the time savings and the simplicity we can offer are really good for these platform teams.
Mitch Ashley: Great. If you don’t mind, I’d love to ask about k0s. Is that primarily being used to spin up those simplified development environments, or do you see it being used at the edge a lot? Because people have talked about K3s [inaudible 00:12:38] kind of edge environments.
Shaun O’Meara: Yeah, so we’ve got a wide range of k0s customers today and a wide range of use cases. Currently, I think we’ve got about 200,000 clusters reporting telemetry. We’ve got use cases right now with people building traditional clusters on-prem. We’re seeing clusters being built in the public cloud, on top of public cloud providers and VM instances. We’re seeing it being used as appliances; in fact, we have one customer who is embedding k0s as a single-node, single-cluster, in inverted commas, appliance. They push k0s out with the k0s manifest capabilities and run it as an appliance. And we’re seeing k0s being used at the edge, because the kubelet is so simple to deploy: we just pop it out, provide a token file, and it can run, I hate to say it, on something like a Raspberry Pi or an equivalent commercial version of a Raspberry Pi. It’s been designed to be simple to use in a multitude of different use cases, and edge is definitely one of those.
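For the appliance and edge patterns Shaun describes, two k0s features do most of the work: a single declarative config file, and the manifest deployer, which auto-applies anything dropped into the manifests directory. The sketch below reflects my reading of the k0s documentation; paths, defaults, and flags can change between releases, so verify against the k0s project docs.

```yaml
# Sketch of a single-node k0s "appliance" config (k0s.yaml), started with
# something like `k0s install controller --single -c k0s.yaml` (flags per my
# reading of the docs; confirm before use).
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  api:
    port: 6443            # kube-apiserver port
  network:
    provider: kuberouter  # the default CNI bundled with k0s
# Manifests copied under /var/lib/k0s/manifests/<dir>/ are applied
# automatically by the manifest deployer, which is how an appliance image
# can ship its application alongside the cluster itself.
```

Joining an edge worker then comes down to generating a worker token on the controller and pointing the worker at it with a token file, which matches the workflow described above.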
Mitch Ashley: It seems architecturally that really supports the idea that cars are becoming software platforms with wheels on them, right?
Shaun O’Meara: For sure.
Mitch Ashley: But everything is. Your smartphone could be some part of that distributed Edge network for you, or [inaudible 00:14:04].
Shaun O’Meara: It already is in some ways.
Mitch Ashley: Right? And that seems like a great place where k0s could be.
Shaun O’Meara: Absolutely.
Mitch Ashley: It really could [inaudible 00:14:11] a lot of environments.
Shaun O’Meara: Well, the beauty of it is, when we look at it from the point of view of k0s being the simple-to-deploy-and-manage Kubernetes, k0smotron removing the complexity of the control plane, CAPI providing that infrastructure management layer, and the extra work we’ve done on bare metal management and operating system management as part of one of our other products, all of those things combined give me a very simple way to run a very, very distributed Kubernetes network. And because I have all of this information in a central database, from an application point of view I can query that database and understand what my network looks like. That’s the beauty behind it: CAPI, k0smotron, and k0s, all of that together. That’s the vision.
Mitch Ashley: So you launched k0smotron here. Have you had open source users of it for a while now? I assume it’s been out there, tested and battle-tested?
Shaun O’Meara: So we’ve been doing demos here all week, and we have a number of people trying it out. With k0s we’ve got a lot of commercial use; k0smotron is new. We’ve got a lot of test clusters out there and a lot of CAPI integration work happening. I’ve been doing demos here all week showing people three lines of YAML, seven seconds, and there’s a cluster up and running. I can do a full CAPI deployment on a public cloud provider in under a minute and a half, and most of that time is waiting for the instances to start. So it’s pretty stable, and we’re seeing a lot of good uptake. We really want the community to come out, dig into it, and give us feedback. We’re always trying to improve.
Mitch Ashley: That’s the open source model. Where can folks find out more, kick the tires with the open source, do all the good things you can do?
Shaun O’Meara: Everything’s on GitHub: k0smotron.io, k0s.io, and of course mirantis.com. There are links to all of this. We are doing everything in the open. The team is super active online, so dig in on GitHub, ask questions, log issues. The guys are always there to respond. The leadership of the k0s team has been helping me get ready for this, preparing videos and demos at two o’clock in the morning their time, and they’re all in Finland. So it’s been a great experience to work with them.
Mitch Ashley: You’re remarkably lively for being up that long, doing those long hours of work.
Shaun O’Meara: No, great team.
Mitch Ashley: [inaudible 00:16:43] doing well. Congrats on the launch.
Shaun O’Meara: Thank you.
Mitch Ashley: And the hard work on k0smotron and k0s and the other great things Mirantis is doing. So keep us informed, let us know how it’s going. It will be really interesting to see where this evolves and what use cases might [inaudible 00:16:58]. That’s always the fun part.
Shaun O’Meara: That’s always the fun part.
Mitch Ashley: We never intended that to happen, but it’s a great use case, right.
Shaun O’Meara: We intend to publish those use cases as soon as they’re ready. We love talking about what we do. We put a lot of our use cases on the websites as well. So we are doing some amazing stuff with customers around the world.
Mitch Ashley: Good. All right. Shaun O’Meara here from Mirantis; be sure to check it out on GitHub and their website. We’ll be back with another great interview here from KubeCon North America in Chicago on our third day of live-streaming about Kubernetes and a lot of other great [inaudible 00:17:31]. So thank you, Shaun, for coming.
Shaun O’Meara: Thank you very much.
Mitch Ashley: We’ll be back in just a minute.