Red Hat today made available an update to its OpenShift platform that adds a technology preview of a virtual artificial intelligence (AI) assistant, dubbed Lightspeed, that makes it possible to troubleshoot issues using a natural language interface.

At the KubeCon + CloudNativeCon 2024 conference, Red Hat also announced an update to Red Hat OpenShift Virtualization, which encapsulates virtual machines (VMs) and their workloads in containers. The update includes improvements to safe memory oversubscription, improved dynamic workload rebalancing and a technology preview of a dedicated administration console.

Red Hat Advanced Cluster Management for Kubernetes has also been updated to provide improved search and filtering for VMs, as well as, in technology preview, the ability to stop, start, restart and pause VMs directly.

At the same time, Red Hat OpenShift 4.17 adds a technology preview of native network isolation for namespaces, enforced at the pod level, along with a Confidential Compute Attestation Operator that provides attestation services for confidential container workloads.
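Namespace isolation in Kubernetes is conventionally expressed through constructs such as a default-deny NetworkPolicy applied to every pod in a namespace. The sketch below illustrates that general idea only; OpenShift 4.17's native isolation is its own preview mechanism, and the namespace name here is a placeholder:

```python
import json

# A default-deny NetworkPolicy, the conventional Kubernetes building block
# for pod-level isolation within a namespace. Illustrative only; the "demo"
# namespace is a placeholder.
default_deny = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-all", "namespace": "demo"},
    "spec": {
        # An empty podSelector matches every pod in the namespace.
        "podSelector": {},
        # Listing both policy types with no allow rules denies all traffic.
        "policyTypes": ["Ingress", "Egress"],
    },
}

print(json.dumps(default_deny, indent=2))
```

Because no allow rules are listed, all ingress and egress to pods in the namespace is blocked; finer-grained rules would then selectively reopen traffic.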

Red Hat is also updating Red Hat OpenShift AI, an instance of its platform for building cloud-native applications that includes Kubeflow, a framework for building AI models. The update adds an AI model registry, bias detection and data drift monitoring tools, and low-rank adapters (LoRA), which enable more efficient fine-tuning of large language models (LLMs). Red Hat has also donated the model registry project to the Kubeflow community as a subproject.
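LoRA's efficiency comes from freezing a pretrained weight matrix and training only a pair of small low-rank factors in its place. A minimal NumPy sketch of the idea (the dimensions and rank are illustrative and not tied to any Red Hat product):

```python
import numpy as np

# Frozen pretrained weight matrix (d_out x d_in), e.g. one attention projection.
d_out, d_in, rank = 1024, 1024, 8
W = np.random.randn(d_out, d_in)

# LoRA trains only two small factors: A (rank x d_in) and B (d_out x rank).
# B starts at zero so the adapted model initially matches the base model.
A = np.random.randn(rank, d_in) * 0.01
B = np.zeros((d_out, rank))

def adapted_forward(x):
    # The effective weight is W + B @ A, but the dense update is never
    # materialized; the low-rank path is simply added to the frozen one.
    return W @ x + B @ (A @ x)

full_params = W.size            # parameters a full fine-tune would update
lora_params = A.size + B.size   # parameters LoRA actually trains
print(f"trainable params: {lora_params} vs {full_params} "
      f"({100 * lora_params / full_params:.1f}%)")
```

With these (illustrative) dimensions, LoRA trains roughly 1.6% of the parameters a full fine-tune would touch, which is the source of the efficiency gain the update targets.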

In addition, Red Hat has added support for the NVIDIA NIM microservices framework and for graphics processing units (GPUs) from AMD, which can be used with the AMD ROCm framework for building AI models.

Red Hat OpenShift AI 2.15 also adds support for vLLM, a runtime for serving LLMs, within KServe, a model inference platform, along with KServe Modelcars, which adds Open Container Initiative (OCI) repositories as an option for storing and accessing encapsulated models. Additionally, private/public route selection for endpoints in KServe enables organizations to improve the security posture of a model by directing traffic to internal endpoints when required.
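With Modelcars, a model packaged as an OCI image can be referenced directly from a KServe InferenceService rather than pulled from object storage. A hedged sketch of what such a manifest might look like (the registry path, service name and model format are placeholders, not real endpoints):

```python
import json

# Illustrative KServe InferenceService that pulls model weights from an OCI
# registry via the Modelcars mechanism (oci:// storage URI). All names and
# the image path below are placeholders.
inference_service = {
    "apiVersion": "serving.kserve.io/v1beta1",
    "kind": "InferenceService",
    "metadata": {"name": "llm-demo"},
    "spec": {
        "predictor": {
            "model": {
                "modelFormat": {"name": "vLLM"},
                # Modelcars: the model artifact is an OCI image, so it can be
                # versioned, signed and mirrored like any container image.
                "storageUri": "oci://registry.example.com/models/llm-demo:1.0",
            }
        }
    },
}

print(json.dumps(inference_service, indent=2))
```

The practical appeal is that model artifacts inherit the same registry tooling (tags, signing, mirroring) already used for container images.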

Finally, Red Hat OpenShift AI 2.15 makes it simpler to manage, compare and analyze pipeline runs grouped in a logical structure, and adds support for hyperparameter tuning using the Ray Tune tool.
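Hyperparameter tuning of the kind Ray Tune automates boils down to evaluating a training objective across a search space and keeping the best configuration. A minimal hand-rolled grid search sketches that loop; the toy objective is a stand-in for a real training run, and this is the underlying idea rather than Ray Tune's API:

```python
import itertools

# Toy objective standing in for a training run that returns a validation loss.
def objective(lr, batch_size):
    return (lr - 0.01) ** 2 + (batch_size - 32) ** 2 / 1e4

# Search space: a grid of candidate hyperparameters. This is what a tool
# like Ray Tune would sample, schedule and parallelize across a cluster.
search_space = {
    "lr": [0.001, 0.01, 0.1],
    "batch_size": [16, 32, 64],
}

keys = list(search_space)
trials = [dict(zip(keys, values))
          for values in itertools.product(*search_space.values())]

# Evaluate every configuration and keep the one with the lowest loss.
best = min(trials, key=lambda cfg: objective(**cfg))
print("best config:", best)
```

In practice a tuner replaces the exhaustive loop with smarter search and early-stopping schedulers, but the contract (objective in, best configuration out) is the same.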

Jeff DeMoss, senior manager of product management for Red Hat OpenShift AI, said the overall goal is to make it simpler for organizations to meld machine learning operations (MLOps) and DevOps workflows.

Red Hat has also launched a joint effort with SoftBank Corp. to bring open AI radio access network (AI-RAN) technologies for 5G and 6G wireless networks to the Red Hat OpenShift platform.

Additionally, Red Hat Device Edge has been updated to add support for IPv6. Based on MicroShift, a lightweight instance of Red Hat OpenShift, along with Red Hat Enterprise Linux (RHEL) and the Red Hat Ansible Automation Platform, Red Hat Device Edge 4.17 is designed to support real-time applications running at the network edge. Red Hat is also partnering with Tyrrell Products to develop smart building and industrial automation controllers based on Intel processors.

Finally, Red Hat is adding additional templates to Red Hat Developer Hub, an instance of the open-source Backstage internal developer portal (IDP) that it launched earlier this year.

It’s clear that Red Hat is making a case for being the single source for every capability needed to build and deploy any type of cloud-native application. The question now is at what rate organizations will embrace it.