Red Hat today announced advances in Red Hat OpenShift AI, an artificial intelligence (AI) and machine learning (ML) platform that enables enterprises to create and deliver AI-enabled applications at scale across hybrid clouds. 

OpenShift AI was designed to help organizations that struggle to bring AI models into production, the company said in its announcement. Those challenges include high hardware costs, concerns over data privacy and a lack of trust around data sharing. Meanwhile, generative AI is evolving quickly, which has made it difficult to find a reliable AI platform that can run in a local data center or in the cloud.

“Bringing AI into the enterprise is no longer an ‘if,’ it’s a matter of ‘when.’ Enterprises need a more reliable, consistent and flexible AI platform that can increase productivity, drive revenue and fuel market differentiation,” Ashesh Badani, senior vice president and chief product officer at Red Hat, said in the announcement. Red Hat OpenShift AI, he said, makes it possible “to deploy intelligent applications anywhere across the hybrid cloud while growing and fine-tuning operations and models as needed to support the realities of production applications and services.”

According to Red Hat, the latest version of the platform, Red Hat OpenShift AI 2.9, delivers enhanced model serving, including at the edge, as well as improved model development and model monitoring visualizations.

Model serving at the edge, available as a technology preview, extends the reach of AI models to remote edge locations using single-node OpenShift, Red Hat explained. The capability “provides inferencing capabilities in resource-constrained environments with intermittent or air-gapped network access,” the company said in the announcement.
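The announcement does not describe how applications would call a model served at the edge, but OpenShift AI’s serving layer builds on open source serving runtimes that commonly implement the open inference (KServe v2) REST protocol. As a rough, hypothetical sketch only, with the hostname, model name and input tensor all placeholders rather than anything from Red Hat’s materials, an application colocated with a single-node OpenShift deployment might query a local inference endpoint like this:

```python
# Hypothetical sketch: querying a locally served model over the KServe v2
# REST inference protocol. Hostname, model name, and input data are
# placeholders; a real deployment would use the route exposed by the cluster.
import requests

ENDPOINT = "https://edge-node.example.com/v2/models/defect-detector/infer"

payload = {
    "inputs": [
        {
            "name": "input-0",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [0.2, 1.5, 3.1, 0.7],
        }
    ]
}

# Send the inference request and print the model's outputs.
response = requests.post(ENDPOINT, json=payload, timeout=10)
response.raise_for_status()
print(response.json()["outputs"])
```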

Enhanced model serving allows multiple model servers, supporting both predictive AI and generative AI, to run on a single platform, which cuts costs and simplifies operations, the company said.
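Red Hat’s announcement does not include configuration details, but the platform’s model serving builds on the upstream KServe project, where each served model is declared as an InferenceService resource on the cluster. The minimal sketch below, which assumes the Kubernetes Python client and uses an illustrative namespace, runtime and storage URI rather than anything from Red Hat’s documentation, shows roughly what declaring such a model server could look like:

```python
# Hypothetical sketch: declaring a KServe InferenceService with the
# Kubernetes Python client. All names, the runtime, and the storage URI
# are placeholders for illustration only.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running on-cluster

inference_service = {
    "apiVersion": "serving.kserve.io/v1beta1",
    "kind": "InferenceService",
    "metadata": {"name": "fraud-detector", "namespace": "demo-project"},
    "spec": {
        "predictor": {
            "model": {
                "modelFormat": {"name": "onnx"},
                "runtime": "ovms-runtime",                   # example runtime name
                "storageUri": "s3://models/fraud-detector",  # placeholder model location
            }
        }
    },
}

# Create the custom resource; the serving stack then provisions the model server.
api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="serving.kserve.io",
    version="v1beta1",
    namespace="demo-project",
    plural="inferenceservices",
    body=inference_service,
)
```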