When you talk to business leaders about generative AI, they often marvel at just how quick adoption has been across the enterprise. I’ve rarely seen anything like it in my decades in the business, and you need only look to the action at Google Cloud Next this week for proof.
Thousands of customers and partners have joined us here in Las Vegas to showcase the amazing things they’ve created over the past year using AI — as well as to explore all that could be possible with our latest offerings, like Google Agentspace, Gemini 2.5, Imagen 3, Veo 2, our latest TPU chips and Customer Engagement Suite.
To give you just a flavor of some of the things we’re building together with AI, here are highlights from nine business leaders who took part in major announcements at Cloud Next 25. And to really get a window into what’s possible, check out our updated list of real-world AI use cases, which now numbers more than 600.
NVIDIA
What they did: Google Cloud and NVIDIA announced a significant expansion of our partnership that will bring a range of Gemini models, running on NVIDIA’s new Blackwell compute platform, to on-premises data centers and other secure environments. Oftentimes, highly regulated industries or localities with strict digital sovereignty requirements limit access to the public cloud, including most gen AI models. This new offering through Google Distributed Cloud bridges that gap.
What they’re saying: “Every industry, every company, every country wants to get its hands on AI. However, everything has to be fundamentally confidential and secure. And so, we’re announcing something utterly gigantic today — Google Distributed Cloud with Gemini and NVIDIA are going to bring state-of-the-art AI to the world’s regulated industries and countries. Now, if you can’t come to the cloud, Google Cloud will bring AI to you.” — Jensen Huang, co-founder and CEO, NVIDIA, during the Next 25 keynote.
Sphere + The Wizard of Oz
What they did: Sphere Entertainment partnered with Google DeepMind, Google Cloud, and Hollywood production company Magnopus to create a larger-than-life immersive experience of “The Wizard of Oz” for the 160,000-square-foot interior screen at Sphere. Using first-of-its-kind engineering and AI, they were able to greatly enhance the resolution of the 1939 film classic, as well as expand backgrounds, fill in character performances, and enliven details that traditional CGI would have struggled to reproduce.
What they’re saying: “We looked for content that would accentuate all of the different capabilities inside of the venue. My hope for it is that we keep exploring different ways to create this kind of content and to take great performances from the past and bring them to life today.” — James Dolan, CEO of Sphere Entertainment and Madison Square Garden, in a Next 25 video.
L’Oréal
What they did: L’Oréal is using Google’s Imagen 3, Veo 2 and Gemini multimodal models within CREAITECH, L’Oréal’s gen AI beauty content lab. The platform has transformed the creative process of L’Oréal’s marketing teams, allowing them to supercharge creative ideation and streamline marketing production through the creation of unique images. The only rule: no generating images of people for advertising purposes, in order to remain true to human beauty.
What they’re saying: “Using AI technology is a very big accelerator for marketing teams. Now, it’s easier to bring to life ideas and generate new concepts, create storyboards or test product pack shots in different universes, all of this in order to express a clearer vision to internal teams or partners and therefore save time and back and forth.” — Thomas Alves Machado, Gen AI Global Content Director at L’Oréal Groupe, in an interview.
