This is Nvidia’s lesser-known plan to stay dominant in the AI chip business

The company wants to become a platform on the level of Google, Apple, and Microsoft.

BY Mark Sullivan

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

Nvidia’s big news Monday wasn’t a new chip, it was a strategy

It’s a good time to be Nvidia CEO Jensen Huang. Right now, Nvidia dominates the market for the chips needed to run AI models. Nvidia stock has tripled in value over the past 12 months. And delivering a keynote speech at San Jose’s SAP Center on Monday as part of the company’s GTC developer conference, Huang looked every bit the rock star. No wonder one attendee dubbed the event “AI Woodstock.”

At the keynote Huang announced a new graphics processing unit (GPU) called Blackwell that the company says is up to 30 times as fast as its predecessor (Hopper) and far more efficient. That’s obviously big news—Nvidia will have to keep the accelerator pressed down to stay ahead of challengers such as Intel, AMD, Cerebras, and SambaNova—but the bigger story from the conference concerns how Nvidia will ensure its dominant place in AI even when its chips aren’t markedly faster than others.

Nvidia also announced Monday a new product called NIM (Nvidia Inference Microservices), a “container” of all the software an enterprise might need to put AI models to work. This includes application programming interfaces (APIs) to popular foundation models, software needed to deploy open-source models, pre-built models and software needed to access and process an enterprise’s own proprietary data, and connectors to popular business software from the likes of SAP and the cybersecurity firm CrowdStrike.

In 2023, many enterprises learned the hard way that deploying AI models is a messy business that requires building a lot of infrastructure and some PhDs on deck to make it all work. NIM is trying to package up all the major components that fit around the models, and abstract some of the deep technical stuff into controls that non-PhDs can use. Yes, other companies, including the major cloud providers, are doing this, but NIM is focused on making all the components work seamlessly and efficiently with Nvidia’s hardware. It’s similar to Apple’s superpower, which is producing both software and hardware and integrating them so tightly that they bring out the best in each other. 

It’s clear that Nvidia isn’t content with being just a chip supplier. It wants to be a tech company on the same level as Apple, Google, and Meta. And becoming a platform player is a tried-and-true way of reaching that rarefied air.

Why Apple using Google’s Gemini is disastrous and unlikely

Bloomberg reporter Mark Gurman notes that Google and Apple have been in talks to add a Google cloud-based AI service to the iPhone. The service would be powered by Google’s Gemini AI models, Gurman writes, citing unnamed sources. This would be an extension of Google’s current arrangement with Apple, in which Google pays billions per year to supply the default search experience on the iPhone.

The deal would be a boon for Google’s generative AI efforts; there are currently about two billion active iOS devices in use around the world. The exact use of the Gemini model on the iPhone remains unclear, but it’s possible that the model would anchor some form of chatbot, or perhaps a writing app. It could also power a form of conversational search similar to Google’s experimental Search Generative Experience.

What is certain, however, is that the antitrust environment around tech has changed a lot since Google began paying to put its search on the iPhone. The Federal Trade Commission under the leadership of Lina Khan would almost certainly open an investigation into a big money deal to put Gemini on the iPhone in some form. The FTC last summer opened a probe into Microsoft’s large-scale investment in OpenAI, and Google’s and Amazon’s investments in Anthropic. (Gurman reports that Apple has also held talks with OpenAI to provide some form of AI function.) 

A deal with Google would suggest that Apple sees generative AI as the forte of another company. This is somewhat surprising because Apple has been working with machine learning for years, and has deployed features driven by that technology on its devices, including several camera features. In 2018 Apple even poached Google’s then-head of AI John Giannandrea to lead its own AI efforts. Apple was the first tech company to embrace a voice assistant, Siri, on its devices way back in 2011.

Apple has also developed its own generative AI models over the past few years, but the company may not have been able to advance the capabilities of its models as quickly as Google and OpenAI. Apple’s big opportunity is offering privacy-protecting personal AI apps powered by models that run mostly or completely on-device. 

Inflection AI wasn’t bought by Microsoft—it was absorbed by it

When I spoke to Inflection AI cofounder Mustafa Suleyman last September, he’d landed a huge $1.3 billion funding round (at a $4 billion valuation). His new book about the future impact of AI had just come out. Inflection’s app, an emotionally intelligent personal AI assistant called Pi, was doing well. “This is the arrival of a new era in computing,” he told me. “This is going to be like bringing your digital life with you wherever you are.” He bragged that his company had been the first to get Nvidia’s latest H100 servers—22,000 of them in a $1.2 billion cluster.

What a difference six months makes. On Tuesday, Suleyman confirmed that he and most of Inflection’s 70 employees have taken jobs at Microsoft, which had earlier made an investment in the fledgling company. But don’t call it an acquisition, an Inflection spokesperson was quick to point out on the phone Tuesday. Inflection would stay around as a B2B company and sell API access to the Inflection generative AI model that powers Pi. Microsoft will also sell access to the model via its Azure cloud. The spokesperson declined to say exactly how many employees would be going to Microsoft, and said he did not know what would become of the $1.2 billion server cluster.

Terms of the “agreement” were not divulged, but Inflection AI cofounder Reid Hoffman said on LinkedIn Tuesday that the Microsoft deal “means that all of Inflection’s investors will have a good outcome today.” It’s unclear how exactly investors can have a “good outcome” if Microsoft isn’t buying their shares at a premium. Certainly the investors didn’t foresee making their multiples from a B2B company that collects API fees.

When all is said and done, a promising company that was developing large AI models independent of Google, Amazon, or Microsoft has vanished. The brains that designed the Inflection LLM and the Pi app (which will live on for the time being) are now under Microsoft’s roof. The brain power in AI continues to converge with the big money in the tech industry.

ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
