UPDATED 09:00 EDT / OCTOBER 04 2023

AI

Dell enhances its generative AI hardware and software portfolio to cover more extensive use cases

Dell Technologies Inc. said today it’s expanding its suite of generative artificial intelligence infrastructure and services.

The computer giant’s generative AI hardware now supports additional use cases, including model tuning and inferencing, allowing customers to deploy their latest models on specialized infrastructure within their own data centers.

In addition, Dell announced an expansion of its Generative AI professional services capabilities to cover data preparation, implementation and education. It’s also working with Starburst Data Inc. to help customers build a modern, open data lakehouse to power their generative AI applications.

Dell’s Validated Design for Generative AI with Nvidia offers customers a combination of Dell’s most advanced server hardware and Nvidia Corp.’s AI software and graphics processing units. It can serve as a foundation for generative AI models hosted in customers’ own data centers, rather than in the cloud.

The solution was born out of Dell and Nvidia’s Project Helix initiative and marries the most powerful, AI-capable Dell PowerEdge servers with Nvidia’s most advanced GPUs and software, such as Nvidia AI Enterprise and Nvidia NeMo, a framework for training, customizing and deploying generative AI models. Alongside the hardware and software, customers also gain access to numerous pretrained AI models that can be customized, so there’s no need to start from scratch.

The initial focus of Dell’s Validated Design for Generative AI with Nvidia was on AI training, but with today’s update it now supports model tuning and inferencing, meaning customers also have a way to deploy their models on-premises. The new capability is available now via traditional channels, and will also come to Dell APEX customers later this month.

With Dell’s Validated Design for Generative AI with Nvidia, customers can choose between the Dell PowerEdge XE9680 and PowerEdge XE8640 servers and a variety of Nvidia’s GPUs. The company pitches it as an ideal solution for companies that want to build generative AI models while keeping their data secure within their own servers.

Andy Thurai, vice president and principal analyst at Constellation Research Inc., told SiliconANGLE that the most powerful large language models, such as GPT-4, are trained in purpose-built environments in the cloud due to their massive size and resource demands. However, he said some enterprises are looking at ways to train their own, much smaller LLMs within their own environments. “They want a way to do this at a much smaller scale, and a way to fine-tune existing models on their own data, and do it on their own infrastructure,” Thurai explained. “This hasn’t gained much steam yet, but when it happens there will be a need for optimized infrastructure.”

One customer that’s already doing this is Princeton University. Sanjeev Arora, the Charles C. Fitzmorris Professor in Computer Science at Princeton, explained that the university has implemented Dell’s and Nvidia’s hardware within a high-performance computing cluster to develop large language models. “This system gives researchers in natural sciences, engineering, social sciences and humanities the opportunity to apply powerful AI models to their work in areas such as visualization, modeling and quantum computing,” Arora said.

As for Dell’s Generative AI Professional Services, these have been expanded to cover data preparation. Dell’s experts can ensure customers have the cleanest and most accurate data set, formatted in the right way, to power their AI projects. The service also ensures data integration and high-quality data output, Dell said.

Meanwhile, Dell’s new implementation services can help customers to establish an operational generative AI platform for inferencing and model customization with accelerated time to value, the company said. It’s essentially offering a fully managed service for customers that want to run Dell and Nvidia’s AI stack in their own data centers, freeing customers to focus on developing their AI models.

Dell is also offering new educational services for customers that want to train their staff in the latest developments in generative AI, it said. Each of the new services will be rolled out by the end of the month, Dell said.

Finally, Dell is integrating its PowerEdge compute and storage platforms with Starburst’s analytics software to help customers build a centralized data lakehouse and extract insights from their data more easily. Global availability is planned for early 2024.

TECHnalysis Research analyst Bob O’Donnell said it has become clear that enterprises are adamant about wanting to use their own data to train generative AI models. However, they need a lot of help, both to prepare that data and to ensure it remains secure. “Dell’s latest Generative AI solutions and partnerships offer a broad set of capabilities that help companies capitalize on this potential, bridging knowledge gaps and ensuring data drives discernible, impactful business results,” the analyst said.

That said, Thurai believes Dell will have to be patient if it wants to gain any traction in terms of on-premises generative AI development. “It’s an interesting concept, but Dell first needs to lure customers away from the cloud, where all of the LLM model training components are already set up,” he said. “But setting all of these things up on-premises, doing the data prep and data movement from the cloud and distributed locations is not for the faint of heart.”

Image: Dell/LinkedIn
