One thing that doesn’t usually come to mind when people think about the risk of artificial intelligence (AI) is Kubernetes. Why would it? Kubernetes is a technology that lets engineering teams create and change applications at a lightning pace, and it is also a major wildcard when it comes to the risk of AI. OpenAI has run ChatGPT on Kubernetes since its inception, and there is a burgeoning list of Kubernetes plugins and tools for AI applications. This means that, for all intents and purposes, the backend risk that Kubernetes security poses to AI is flying under the radar.

The Cloudy Issue of Kubernetes and AI

If we want to understand the real risk of AI, we must also understand Kubernetes security. Here is where things get cloudy: to address a risk, the solution must align with the problem. Today, security for Kubernetes tends to be approached the same way as cloud security or container security rather than with a Kubernetes-specific approach. Unfortunately, this means the capabilities provided are peripheral at best and sometimes completely irrelevant to how Kubernetes actually works.

While 96% of organizations that use Kubernetes run it on managed cloud platforms 90% of the time, Kubernetes itself is an open source project governed by the Cloud Native Computing Foundation (CNCF), and from a security perspective its needs are very different from those of the cloud. Let’s look at an example of how calling Kubernetes security cloud security, or any other name, can go very, very wrong.

The Wrong Approach at the Wrong Time

Cloud security generally relies on polling intervals. You can scan for misconfigurations in cloud service accounts (like a publicly exposed S3 bucket) every few hours. This cadence is dictated by the volume of data and by how cloud providers make that data accessible. New cloud services aren’t spun up every second, and account configurations aren’t changing every second, so there is no need for greater frequency.

This approach, when applied to Kubernetes, is almost completely irrelevant because Kubernetes configurations are tied to short bursts of compute that last, on average, less than five minutes. If a cloud security scanner looks at Kubernetes every few hours, it could miss the whole show and not even know it.
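The gap is easy to quantify with a back-of-the-envelope model. The sketch below (plain Python, with purely illustrative numbers, not measurements) simulates pods that each live five minutes and compares how many of them a scanner ever observes at a four-hour polling interval versus a 30-second one:

```python
# Toy model: how many short-lived pods does an interval-based scanner ever see?
# All numbers below are illustrative assumptions, not measurements.

def pods_observed(pod_starts, pod_lifetime, scan_times):
    """Count pods that are alive during at least one scan."""
    observed = 0
    for start in pod_starts:
        end = start + pod_lifetime
        if any(start <= t < end for t in scan_times):
            observed += 1
    return observed

# One pod starts every minute for a day; each lives five minutes.
pod_starts = [m * 60 for m in range(24 * 60)]      # start times in seconds
pod_lifetime = 5 * 60                              # five-minute lifetime
scans_4h = [h * 4 * 3600 for h in range(6)]        # polling every four hours
scans_30s = [s * 30 for s in range(24 * 120)]      # near-continuous, every 30s

total = len(pod_starts)
print(f"4-hour polling sees {pods_observed(pod_starts, pod_lifetime, scans_4h)} of {total} pods")
print(f"30-second polling sees {pods_observed(pod_starts, pod_lifetime, scans_30s)} of {total} pods")
```

Under these assumptions, the four-hour scanner observes fewer than 2% of the pods, while the 30-second cadence sees all of them. This is why Kubernetes-native tooling leans on the API server’s event-driven watch mechanism rather than interval polling.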

Picking Apart the Kubernetes Risks for AI

If we focus Kubernetes security specifically on the job that needs to be done, we will see that, as it applies to AI, there is no shortage of challenges and risks to overcome. Here are some of the top things to consider:

Multitenancy and Sensitive Data

When you run a query against an AI service, you don’t get your own isolated space on the back end. Instead, you pull from a generalized pool of infrastructure. What happens when you start putting sensitive data into the query? How is that data isolated on the back end to address privacy concerns? How do you ensure that one person’s query can’t access another person’s query output? These questions grow more urgent as tools like AutoGPT rise in popularity and people ask AI to process sensitive information, such as bank account and Social Security numbers, to automate their daily lives. AutoGPT touches Kubernetes development as well, whenever developers use AI to automate tasks that involve secrets or other sensitive data.
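On the Kubernetes side, one basic building block for this kind of isolation is a per-tenant namespace combined with a NetworkPolicy that rejects cross-namespace traffic. A minimal sketch (the `tenant-a` namespace name is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-tenant
  namespace: tenant-a        # hypothetical per-tenant namespace
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # allow traffic only from pods in this same namespace
```

This is a sketch, not a complete multitenancy story; hard isolation of query data also depends on how the application itself partitions state.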

Role-Based Access Control (RBAC)

Who at OpenAI can access the Kubernetes clusters that process every single person’s ChatGPT queries? This is a role-based access control (RBAC) concern, and Kubernetes has its own RBAC model, distinct from cloud IAM.
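Kubernetes expresses this through Role and RoleBinding objects. As a hedged sketch, a least-privilege role for an on-call group might allow reading pods and their logs in a single namespace and nothing else (all names here are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: inference-read-only   # hypothetical role name
  namespace: inference        # hypothetical namespace
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list"]    # no exec, no access to secrets
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: oncall-inference-read-only
  namespace: inference
subjects:
  - kind: Group
    name: oncall-engineers    # hypothetical group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: inference-read-only
  apiGroup: rbac.authorization.k8s.io
```

Note what the role omits: no `create` on pods, no `pods/exec`, and no access to `secrets`, which is where least privilege actually bites.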

Kubernetes Common Vulnerabilities and Exposures (CVEs)

Just this year to date, more than 10 new vulnerabilities have been reported in the Kubernetes ecosystem, allowing for privilege escalation, disclosure of secrets and more. How are these being identified and handled quickly across the sheer number of Kubernetes clusters running in the back end of an AI app?

A Dystopian Future: Is Clarity Possible for the Sake of AI?

As we continue exploring the broader risks of AI and begin including Kubernetes security in the conversation, there are multiple futures one can envision. In one, we secure Kubernetes with an approach that fits its unique, AI-related risk factors. In the other, we keep applying approaches that work for better-known security domains but are wholly inappropriate for Kubernetes, delaying security for the compute layer behind the AI revolution. With such a rapid pace of change in AI and so much uncertainty ahead, we are almost certain to learn faster by diving headfirst into a Kubernetes-appropriate approach; by default, we’ll be quicker to secure AI.