Aerospike has updated its Kubernetes Operator to make it simpler to back up the company’s namesake real-time database using the Aerospike Backup Service (ABS) launched earlier this year.
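Assuming the ABS integration is exposed the same way AKO surfaces its other features, through custom resources, creating a backup declaratively might look like the sketch below, which uses the official Kubernetes Python client. The API group, version, kind, plural and spec fields shown are assumptions modeled on AKO’s general CRD conventions, not the documented backup schema.

```python
# Minimal sketch: creating a backup custom resource for AKO through the
# Kubernetes API. The group/version, kind, plural and spec fields are
# ASSUMPTIONS about AKO 3.4's backup CRDs, not a verified schema.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
custom_api = client.CustomObjectsApi()

backup_cr = {
    "apiVersion": "asdb.aerospike.com/v1beta1",   # assumed group/version
    "kind": "AerospikeBackup",                    # assumed CRD kind
    "metadata": {"name": "nightly-backup", "namespace": "aerospike"},
    "spec": {
        # Assumed fields: which Aerospike Backup Service instance to use
        # and which cluster/namespaces to back up.
        "backupService": {"name": "aerospike-backup-service",
                          "namespace": "aerospike"},
        "config": {"cluster": "aerocluster", "namespaces": ["test"]},
    },
}

custom_api.create_namespaced_custom_object(
    group="asdb.aerospike.com",
    version="v1beta1",
    namespace="aerospike",
    plural="aerospikebackups",   # assumed plural form of the CRD
    body=backup_cr,
)
```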

In addition, version 3.4 of the Aerospike Kubernetes Operator (AKO) supports the recently released Aerospike 7.2, an update that added an “Active Rack” capability for deploying the database across multiple zones in the cloud.
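AKO already lets IT teams map racks to cloud zones through the AerospikeCluster resource’s rack configuration, and Active Rack builds on that placement model. The fragment below is a hedged sketch of how the two might be combined; apart from the general rackConfig pattern, the field names (in particular the assumed active-rack key) should be checked against the Aerospike 7.2 and AKO 3.4 documentation.

```python
# Minimal sketch: a rack-aware AerospikeCluster spec fragment that pins racks
# to cloud zones. The rackConfig layout follows AKO's documented pattern;
# the "active-rack" key and its placement are ASSUMPTIONS about Aerospike 7.2.
cluster_spec_fragment = {
    "rackConfig": {
        "namespaces": ["test"],
        "racks": [
            {"id": 1, "zone": "us-east-1a"},
            {"id": 2, "zone": "us-east-1b"},
        ],
    },
    "aerospikeConfig": {
        "namespaces": [
            # Assumed: active-rack designates which rack holds the primary
            # (master) partitions for the namespace.
            {"name": "test", "replication-factor": 2, "active-rack": 1},
        ],
    },
}
```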

IT teams can also use AKO to trigger warm and cold restarts of their Aerospike clusters without impacting applications. Aerospike has also doubled the resource limits for AKO and added support for a paused configuration state that halts all AKO operations in a resumable manner.
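If the paused state is surfaced as a field on the AerospikeCluster custom resource (the spec.paused name below is an assumption), toggling it from automation could look like this:

```python
# Minimal sketch: pausing and resuming AKO reconciliation by patching the
# AerospikeCluster custom resource. The spec.paused field name is an
# ASSUMPTION about how AKO 3.4 exposes the paused configuration state.
from kubernetes import client, config

config.load_kube_config()
custom_api = client.CustomObjectsApi()

custom_api.patch_namespaced_custom_object(
    group="asdb.aerospike.com",
    version="v1",
    namespace="aerospike",
    plural="aerospikeclusters",
    name="aerocluster",
    body={"spec": {"paused": True}},   # patch back to False to resume
)
```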

Finally, AKO is integrated with the Aerospike Monitoring Stack to simplify setup and configuration.

Aerospike CTO Srini Srinivasan said these capabilities will make it simpler for IT teams to, for example, perform rolling upgrades that keep applications deployed on Kubernetes clusters available as updates to the underlying Aerospike database are made.
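With an operator, such a rolling upgrade is typically requested declaratively: the cluster’s custom resource is pointed at a new server image, and the operator replaces pods incrementally while the cluster stays online. The sketch below assumes a spec.image field and a status.phase field on the AerospikeCluster resource; both names are illustrative rather than taken from the AKO documentation.

```python
# Minimal sketch: request a rolling upgrade by changing the server image on
# the AerospikeCluster custom resource, then watch the resource until the
# operator reports the rollout complete. spec.image, status.phase and the
# "Completed" value are illustrative ASSUMPTIONS.
from kubernetes import client, config, watch

config.load_kube_config()
custom_api = client.CustomObjectsApi()

GROUP, VERSION, NS, PLURAL, NAME = (
    "asdb.aerospike.com", "v1", "aerospike", "aerospikeclusters", "aerocluster"
)

# Request the upgrade declaratively; the operator rolls pods incrementally.
custom_api.patch_namespaced_custom_object(
    GROUP, VERSION, NS, PLURAL, NAME,
    body={"spec": {"image": "aerospike/aerospike-server-enterprise:7.2.0.0"}},
)

# Follow the rollout by watching the custom resource's status.
w = watch.Watch()
for event in w.stream(custom_api.list_namespaced_custom_object,
                      GROUP, VERSION, NS, PLURAL):
    obj = event["object"]
    if obj["metadata"]["name"] != NAME:
        continue
    phase = obj.get("status", {}).get("phase")  # assumed status field
    print(f"{event['type']}: phase={phase}")
    if phase == "Completed":                    # assumed terminal value
        w.stop()
```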

In general, the Aerospike database is being deployed with increased frequency on Kubernetes clusters as organizations build and deploy more low-latency applications. The Aerospike multi-model database is designed to process transactions, documents, graphs and vectors in real time at scale.

For example, Flipkart, an e-commerce marketplace based in India, relies on the Aerospike database running on Kubernetes clusters to handle the 95 million transactions per second (TPS) generated during the Diwali holiday season. Other organizations that have deployed Aerospike include Adobe, Airtel, Criteo, DBS Bank, Experian, PayPal, Snap and Sony Interactive Entertainment.

It’s not clear at what rate organizations are building and deploying real-time applications, but as IT continues to evolve there is less reliance on batch-oriented processes. Instead, organizations are in many cases driving digital business processes that require data to be updated continuously rather than at pre-defined intervals. Many of those applications are, naturally, being built using containers that are then hosted on Kubernetes clusters.

That doesn’t necessarily mean batch-oriented applications are being replaced. Instead, new classes of real-time applications are being deployed alongside legacy applications, which collectively make IT environments that much more complex to manage.

Kubernetes Operators are extensions to the Kubernetes application programming interface (API) that help reduce that complexity by making it simpler to automate the deployment and execution of workloads on Kubernetes clusters. The challenge now is coming to terms with the potential number of Operators that might exist in increasingly complex IT environments. In some cases, organizations are opting to build their own Operators to automate processes spanning multiple classes of workloads, databases and middleware. One day soon, those Operators will also provide the foundation upon which multiple artificial intelligence (AI) agents will be trained to further automate tasks.
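To make the pattern concrete: an Operator pairs a custom resource definition with a controller that watches those resources and continually reconciles the cluster toward the declared state. The sketch below uses the Python kopf framework and a made-up CacheCluster resource purely to illustrate that loop; it is not how AKO itself is built (AKO is a Go-based operator), and the resource names are hypothetical.

```python
# Minimal sketch of the operator pattern using the kopf framework: handlers
# watch a hypothetical custom resource and reconcile toward its declared spec.
# Run locally with: kopf run operator.py
import kopf

@kopf.on.create("example.com", "v1", "cacheclusters")   # hypothetical CRD
def on_create(spec, name, logger, **kwargs):
    size = spec.get("size", 1)
    logger.info(f"CacheCluster {name} created; desired size {size}")
    # A real operator would now create StatefulSets, Services, PVCs, etc.,
    # and keep re-running this logic until observed state matches the spec.
    return {"observed-size": size}   # kopf stores this under status.on_create

@kopf.on.update("example.com", "v1", "cacheclusters")
def on_update(spec, name, logger, **kwargs):
    logger.info(f"CacheCluster {name} spec changed; reconciling to {spec}")
```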

Regardless of approach, Operators are now being pervasively employed across Kubernetes environments. Each IT team will need to determine how best to manage them, but the days when IT teams relied on low-level tools to manage Kubernetes workloads have come to an end. It’s never been simpler for the average IT team to deploy and manage multiple classes of applications in a Kubernetes environment.