At its DockerCon event, Docker, Inc. today launched a framework to make it simpler to build applications infused with artificial intelligence (AI) using containers.
In addition, Docker, Inc. revealed it has partnered with Neo4j, LangChain and Ollama to create a GenAI Stack designed to help developers start building generative AI applications in minutes.
Docker, Inc. CEO Scott Johnston said Docker AI is designed to help developers define and troubleshoot all aspects of application development involving AI models. Specifically, developers will be able to take advantage of a framework that surfaces automated guidance, for example, to spin up resources such as databases using Dockerfiles and images, he added.
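Docker has not published the exact form that guidance takes, but as a rough illustration of the kind of task it automates, here is a minimal sketch using the Docker SDK for Python to spin up a local Postgres database container (the image tag, port mapping and credentials are placeholder choices):

```python
import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Spin up a throwaway Postgres container for local development.
# The image tag, credentials and port mapping are illustrative placeholders.
db = client.containers.run(
    "postgres:16",
    name="dev-postgres",
    detach=True,
    environment={"POSTGRES_PASSWORD": "devpassword"},
    ports={"5432/tcp": 5432},
)

print(f"Database container started: {db.name} ({db.short_id})")
```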
The GenAI Stack complements that effort by making available pre-configured large language models (LLMs) from Ollama, vector and graph databases from Neo4j, and the open source LangChain framework. Components include open source LLMs such as Llama 2, Code Llama and Mistral, proprietary models such as OpenAI’s GPT-3.5 and GPT-4, and tools and templates that, for example, simplify data loading and vector index population.
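Module paths and APIs shift between LangChain releases, so the following is a sketch rather than canonical usage, with connection details and sample text as placeholders. It shows how the pieces of the stack fit together: a local model served by Ollama, documents embedded into Neo4j’s vector index, and a LangChain retrieval chain tying them together.

```python
from langchain.llms import Ollama
from langchain.embeddings import OllamaEmbeddings
from langchain.vectorstores import Neo4jVector
from langchain.chains import RetrievalQA

# Local LLM and embedding model served by Ollama.
llm = Ollama(model="llama2")
embeddings = OllamaEmbeddings(model="llama2")

# Embed sample documents into Neo4j's vector index
# (connection details are placeholders).
store = Neo4jVector.from_texts(
    ["Docker launched the GenAI Stack at DockerCon."],
    embedding=embeddings,
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",
)

# Retrieval-augmented question answering over the indexed documents.
qa = RetrievalQA.from_chain_type(llm=llm, retriever=store.as_retriever())
print(qa.run("What did Docker launch at DockerCon?"))
```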
Developers can take advantage of knowledge graphs to enhance querying and enrich results using summarization and retrieval tools. In addition, developers can compare the results achieved by an LLM on its own, an LLM with vector integration, and an LLM with both vector and knowledge graph integration.
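Continuing the sketch above, and assuming the `llm` and `qa` objects it defines, that comparison can be as simple as posing the same question with and without retrieval; a knowledge graph variant would layer graph queries on top of the same Neo4j store, for example via LangChain’s GraphCypherQAChain:

```python
question = "What did Docker launch at DockerCon?"

# Baseline: the LLM answers from its training data alone.
print("LLM only:", llm(question))

# Vector-augmented: the same LLM grounded in documents retrieved from Neo4j.
print("LLM + vector retrieval:", qa.run(question))
```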
Building applications infused with AI is now a team sport involving data scientists, data engineers, software engineers and traditional developers. Over time, Docker is betting that as applications become more commonly infused with AI capabilities, developers will be able to more easily invoke models they have constructed themselves or those created by a data science team. Of course, data science teams have their own distinct cultures based on machine learning operations (MLOps) platforms, so the biggest challenge ahead in terms of creating more holistic teams will be bridging the cultural divides that exist between the different classes of engineers and developers involved.
In the meantime, the race to embed AI models is on. While it’s still early days when it comes to determining the best use cases for AI, many organizations are already experimenting with AI models. The challenge many initially face is simply figuring out how to train AI models using reliable data. However, once a model is constructed, it then needs to be integrated into an application via an application programming interface (API).
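As a minimal sketch of that integration step (the endpoint, payload and response shape shown here are hypothetical):

```python
import requests

# Hypothetical inference endpoint exposed by a model-serving layer.
API_URL = "http://localhost:8000/v1/predict"

def classify(text: str) -> str:
    """Send input text to the model API and return its prediction."""
    response = requests.post(API_URL, json={"input": text}, timeout=30)
    response.raise_for_status()
    return response.json()["prediction"]

print(classify("Great product, would buy again!"))
```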
Of course, AI models are also subject to drift, so there will be a need for observability, governance and DevOps frameworks to monitor performance, bias and hallucinations in addition to updating and rolling back AI models based on the feedback loops an organization creates.
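What such a feedback loop looks like will vary by organization, but as a rough sketch, a monitoring job might compare live accuracy against a baseline recorded at deployment time and trigger a rollback once the drop exceeds a tolerance (all names, metrics and thresholds here are hypothetical):

```python
# Hypothetical drift guard; metrics and thresholds are illustrative only.
BASELINE_ACCURACY = 0.92   # accuracy measured at deployment time
DRIFT_THRESHOLD = 0.05     # tolerated drop before rolling back

def has_drifted(live_accuracy: float) -> bool:
    """Return True if the live model has degraded beyond tolerance."""
    return (BASELINE_ACCURACY - live_accuracy) > DRIFT_THRESHOLD

if has_drifted(live_accuracy=0.84):
    # In a real pipeline this would call the deployment system's rollback API.
    print("Drift detected: rolling back to the previous model version.")
```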
Finally, each AI model will need to be secured to ensure the data used to create it has not been poisoned by cybercriminals deliberately trying to force a hallucination.
Most organizations will need time to fully address every aspect of the AI life cycle but, as always, the desire to innovate now is pushing the limits of the frameworks many organizations have relied on for decades to build applications.