UPDATED 15:14 EDT / AUGUST 21 2023

AI

Generative AI startup Contextual AI names Google Cloud its preferred cloud provider

Contextual AI Inc., a startup developing large language models for the enterprise, today named Google Cloud as its preferred cloud provider.

The company will use Google Cloud services to power several parts of its business. Most notably, it plans to leverage the search giant’s infrastructure to train its language models.

Palo Alto, California-based Contextual AI launched from stealth mode earlier this year with $20 million in funding. It's led by co-founder and Chief Executive Officer Douwe Kiela, an adjunct professor at Stanford University. The language models the startup is building are based on a technique called retrieval-augmented generation, or RAG, that Kiela helped pioneer.

Artificial intelligence models usually answer user questions by drawing on the dataset on which they were trained. According to Contextual AI, its RAG technology allows a neural network to draw on information from external sources as well. Moreover, a RAG-powered neural network can do so with no need for retraining, which reduces infrastructure costs.

Contextual AI says its technology provides several benefits. The startup's language models can cite their sources when answering a user question. Contextual AI also claims its models are less prone to hallucinations than traditional neural networks.
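While the article doesn't detail Contextual AI's implementation, the RAG pattern itself is straightforward to sketch. The following Python snippet is a minimal, generic illustration, assuming a toy in-memory document store, naive keyword-overlap retrieval and an ad hoc prompt format; none of these details come from Contextual AI. What it demonstrates is the core idea above: evidence reaches the model through the prompt at query time, so the knowledge base can change without retraining, and answers can cite their sources.

```python
# Minimal, generic sketch of retrieval-augmented generation (RAG).
# Illustrative only -- the store, scoring and prompt format below are
# assumptions, not Contextual AI's implementation.

from dataclasses import dataclass


@dataclass
class Document:
    source: str  # origin of the passage, so the model's answer can cite it
    text: str


# Toy external knowledge base. In practice this would be an enterprise
# document index that can be updated without touching the model's weights.
DOCS = [
    Document("support-kb/returns.md", "Customers may return items within 30 days."),
    Document("finance/q2-report.md", "Q2 revenue grew 12% year over year."),
]


def retrieve(query: str, docs: list[Document], k: int = 2) -> list[Document]:
    """Rank documents by naive keyword overlap with the query (illustrative)."""
    terms = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: -len(terms & set(d.text.lower().split())))
    return ranked[:k]


def build_prompt(query: str, retrieved: list[Document]) -> str:
    """Inline the retrieved passages, labeled by source, ahead of the question.

    Because the evidence arrives through the prompt rather than the weights,
    the model can ground its answer in current data and cite each source.
    """
    context = "\n".join(f"[{d.source}] {d.text}" for d in retrieved)
    return (
        "Answer using only the sources below and cite them.\n"
        f"{context}\n\nQuestion: {query}"
    )


if __name__ == "__main__":
    prompt = build_prompt("What is the return policy?", retrieve("return policy", DOCS))
    print(prompt)  # this prompt would then be sent to the language model
```

In a production system, the keyword matcher would typically be replaced by a vector search over embeddings, but the flow stays the same: retrieve, assemble the prompt, generate.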

“Building a large language model to solve some of the most challenging enterprise use cases requires advanced performance and global infrastructure,” said Kiela.

The startup plans to train its neural networks using Google Cloud's A3 and A2 instances. The former offers access to eight of Nvidia Corp.'s flagship H100 graphics processing units. The A2 instance includes 16 chips from the A100 family, which was Nvidia's flagship GPU series before the H100 launched.

Contextual AI also plans to use Google’s internally developed TPU machine learning chips. The latest addition to the chip series, the TPU v4, debuted last year. Google says the processor is 2.1 times faster than its previous-generation silicon and nearly three times more power-efficient.

How Google deploys TPU v4 chips in its data centers is also part of the processor series' value proposition.

According to Google, each TPU v4 cluster comprises 4,096 chips linked by a custom optical interconnect. The interconnect automatically reconfigures itself based on an AI model's requirements: it adjusts the cluster's network topology in a way that speeds up the neural network running on top.

Contextual AI says the language models it’s training on Google Cloud will lend themselves to a variety of use cases. According to the startup, customer support is one area where its technology could be applied. Additionally, it sees opportunities to deploy its language models in the financial sector.

Image: Unsplash
