
Red Hat Accelerates AI Workloads with Latest OpenShift Platform Enhancements

Open-source software giant Red Hat Inc. upped its game in artificial intelligence development with a host of updates to the Red Hat OpenShift AI platform, announced today at KubeCon + CloudNativeCon North America 2024 in Salt Lake City, Utah.

Red Hat OpenShift AI is an evolving AI and machine learning development platform that enables businesses to build and deliver AI-driven applications at scale in hybrid cloud environments. In other words, they can use it to build AI models and applications that leverage their data no matter where it resides, in the cloud or on-premises.

With today’s update, the IBM Corp.-owned company adds several new features that will enable organizations to supercharge the AI capabilities within their existing mission-critical applications.

Red Hat OpenShift AI 2.15 will become available later this month, and the headline feature is a new model registry, available in technology preview, that provides a centralized place for users to view and manage their registered models. Spanning both predictive and generative AI, the registry gives users a structured way to organize, share, version, deploy and track their models, along with the metadata and model artifacts they depend on.
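To make the idea concrete, here is a minimal sketch of what a model registry typically tracks. It is purely illustrative Python, not the OpenShift AI model registry API; the class names, fields and example values are assumptions for this example.

```python
# Illustrative sketch of what a model registry tracks.
# NOT the OpenShift AI model registry API; all names here are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RegisteredModel:
    name: str                      # logical model name, e.g. "fraud-detector"
    version: str                   # version identifier
    artifact_uri: str              # where the trained weights live (S3, PVC, ...)
    metadata: dict = field(default_factory=dict)  # framework, metrics, lineage
    registered_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class ModelRegistry:
    """Central catalog: organize, version and look up models plus their metadata."""
    def __init__(self):
        self._models: dict[tuple[str, str], RegisteredModel] = {}

    def register(self, model: RegisteredModel) -> None:
        self._models[(model.name, model.version)] = model

    def get(self, name: str, version: str) -> RegisteredModel:
        return self._models[(name, version)]

    def versions(self, name: str) -> list[str]:
        return [v for (n, v) in self._models if n == name]

# Example usage
registry = ModelRegistry()
registry.register(RegisteredModel(
    name="fraud-detector",
    version="1.2.0",
    artifact_uri="s3://models/fraud-detector/1.2.0/model.onnx",
    metadata={"framework": "onnx", "auc": 0.94},
))
print(registry.versions("fraud-detector"))
```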

Meanwhile, a new data drift detection feature monitors changes in the distribution of input data for machine learning models running in production, alerting users when the live data used for model inference deviates significantly from the data the model was trained on. This is an important capability that helps maintain the reliability of AI models, Red Hat said, ensuring they remain aligned with their original training so their predictions and answers stay accurate.
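One common way to detect this kind of drift is a two-sample statistical test comparing a feature’s training distribution with its live distribution. The sketch below uses SciPy’s Kolmogorov-Smirnov test as a generic example; it is not Red Hat’s implementation, and the data and threshold are made up for illustration.

```python
# Illustrative data drift check: compare a feature's training distribution
# with its live (production) distribution using a two-sample KS test.
# Generic sketch only; not the OpenShift AI drift-detection implementation.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # data the model was trained on
live_feature = rng.normal(loc=0.4, scale=1.2, size=1_000)   # data seen at inference time

statistic, p_value = ks_2samp(train_feature, live_feature)

ALERT_THRESHOLD = 0.01  # significance level; tune per use case
if p_value < ALERT_THRESHOLD:
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.2e}): investigate or retrain.")
else:
    print(f"No significant drift (KS={statistic:.3f}, p={p_value:.2e}).")
```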

New bias detection tools do much the same thing, but with a focus on fairness in AI models. The idea is to catch cases of AI bias, a common problem for AI models: for example, a model tasked with screening job applicants might be biased against candidates of a specific ethnicity or educational background, and the new feature is designed to prevent this.
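As a concrete example of the kind of check such tools perform, the sketch below computes a simple demographic parity gap, the difference in positive-outcome rates between groups. It is a generic fairness metric written in plain NumPy, not Red Hat’s bias-detection tooling, and the data and threshold are invented for illustration.

```python
# Illustrative bias check: demographic parity gap between applicant groups.
# Generic fairness metric; not Red Hat's bias-detection implementation.
import numpy as np

# 1 = model recommends advancing the applicant, 0 = rejects
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups      = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates = {g: float(predictions[groups == g].mean()) for g in np.unique(groups)}
parity_gap = max(rates.values()) - min(rates.values())

print(f"Selection rates per group: {rates}")
print(f"Demographic parity gap: {parity_gap:.2f}")
if parity_gap > 0.2:  # illustrative threshold
    print("Potential bias: one group is selected at a much higher rate.")
```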

AI developers also get more tools for fine-tuning models. Red Hat OpenShift AI 2.15 adds support for LoRA, or low-rank adapters, which help fine-tune large language models such as Meta Platforms Inc.’s Llama 3 while limiting resource consumption, making them more affordable to run at scale.
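For readers unfamiliar with the technique, LoRA-style fine-tuning is commonly wired up with the open-source Hugging Face PEFT library along the lines of the sketch below. This is a generic example rather than OpenShift AI-specific code, and the model ID and hyperparameters are assumptions chosen for illustration.

```python
# Illustrative LoRA setup with Hugging Face PEFT (generic; not OpenShift AI-specific).
# Model ID and hyperparameters are assumptions; Llama 3 weights require access approval.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

MODEL_ID = "meta-llama/Meta-Llama-3-8B"  # hypothetical choice for this example

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the adapter updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

# Wrap the base model: only the small adapter weights are trainable,
# which is what keeps resource consumption (and cost) down at scale.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically a fraction of a percent of all weights
```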

Other enhancements announced today include integration with Nvidia Corp.’s NIM microservices, which provide access to a broad selection of easy-to-deploy large language models for generative AI applications, and support for graphics processing units from Advanced Micro Devices Inc., giving customers more choice in AI processing hardware. There are also new model serving features for developers to look forward to, such as vLLM serving runtimes for KServe, as well as expanded AI training and experimentation options, including the Ray Tune hyperparameter tuning tool.
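For example, a hyperparameter search with the open-source Ray Tune library typically looks like the generic sketch below; the objective function and search space are made up for illustration and are not tied to OpenShift AI.

```python
# Illustrative Ray Tune hyperparameter search (generic; not OpenShift AI-specific).
from ray import tune

def objective(config):
    # Stand-in for a real training run: pretend the best learning rate is 0.01.
    loss = (config["lr"] - 0.01) ** 2 + config["batch_size"] * 1e-5
    return {"loss": loss}  # final metrics reported back to Tune

tuner = tune.Tuner(
    objective,
    param_space={
        "lr": tune.loguniform(1e-4, 1e-1),
        "batch_size": tune.choice([16, 32, 64]),
    },
    tune_config=tune.TuneConfig(metric="loss", mode="min", num_samples=20),
)

results = tuner.fit()
print("Best config:", results.get_best_result().config)
```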

Joe Fernandes, vice president and general manager of Red Hat’s AI business unit, said organizations need a reliable, scalable and flexible AI platform that can run anywhere their data lives. “We deliver significant improvements in scalability, performance and operational efficiencies… enabling IT organizations to realize the benefits of a powerful AI platform while retaining the ability to build, deploy and run on any environment their unique business needs dictate.”

Next-generation Red Hat OpenShift enters preview

Red Hat’s focus on AI isn’t just about helping businesses develop and deploy intelligent applications. The company also uses AI to improve the capabilities of its own platforms, such as the flagship Red Hat OpenShift, a popular Kubernetes platform used by developers to build and deploy distributed, scalable applications in any cloud or on-premises environment.

Red Hat OpenShift Lightspeed, available now as a technology preview, represents the culmination of these AI integration efforts, introducing a new generative AI-powered assistant that allows users to ask questions in natural language.

Red Hat said the OpenShift virtual assistant is designed to make tasks like troubleshooting applications and investigating cluster resources easier, with the goal of making users more productive.

While Lightspeed remains in preview, the latest version of the platform itself, Red Hat OpenShift 4.17, is now generally available, though its new features are a little less exciting. The main focus of the update is improved virtualized workload management, with enhancements to Red Hat OpenShift Virtualization. The company also announced advanced Kubernetes management capabilities that make it easier to manage virtual machines across multiple clusters.

AI application models

Coming back to AI, Red Hat announced a number of new features in Red Hat Developer Hub, the company’s enterprise developer portal based on the open-source Backstage project.

With this, Red Hat introduces new AI-focused software templates that allow developers to follow a more standardized approach to building and deploying AI-enhanced services and components. The new templates cover common applications such as audio-to-text, chatbots, code generation, object detection and retrieval-augmented generation. By using the templates, developers get what amounts to a prebuilt application that they can then customize as they wish, saving development time.

Finally, the company announced Red Hat Device Edge 4.17, with updates aimed at making it easier to deploy AI applications that live at the edge, on low-power devices such as sensors, drones and robots. Red Hat Device Edge is a platform for resource-constrained edge environments where small devices and limited compute resources require low-latency operations.

Image: SiliconANGLE/Freepik AI
