ChatGPT’s new web search function puts the web ecosystem (and Google) on notice 

OpenAI released its search tool to ChatGPT Plus users Thursday, and has plans to extend it to free users in the future. 

BY Mark Sullivan

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

OpenAI bursts into a growing AI Search race

OpenAI on Thursday launched its AI search product to ChatGPT Plus subscribers. The company made the wise choice to integrate the search functionality into the existing chatbot instead of trying to make it a separate app with its own branding. For many user queries, ChatGPT will now automatically search the web to add current information or context to its responses. But users can also manually tell the chatbot to search the web for an answer by clicking a small web-search icon within the chatbot’s dialog window. The user can then ask follow-up questions, and the chatbot will remember the full context of the chat so that there’s no need to repeat information. 

OpenAI’s news comes as Google continues to wade through an antitrust battle with the U.S. Department of Justice (DOJ), and after Microsoft’s Bing already added AI to its search. In OpenAI’s case, search product lead Adam Fry says the search functionality is powered by the company’s most powerful models, including GPT-4o and o1. The search feature is now rolling out to ChatGPT Plus subscribers, and OpenAI says it’ll soon integrate search with Advanced Voice Mode on the ChatGPT app. 

The addition of search might boost ChatGPT Plus subscriber numbers, at least for a time. OpenAI has plans to bring search to the chatbot’s free users, too. OpenAI’s search feature could threaten Perplexity, which is making money through subscriptions to its pure-play AI search tool. Above all, the launch could threaten Google’s role as de facto mediator between web users and web content. Users may like having chat and search all in one app, especially one capable of interacting like a real person via Advanced Voice Mode. And they may prefer a search tool that does all the work—that is, creating a custom answer instead of leaving it to the user to click through a list of links.

And more entrants in the race are coming. The Information reported this week that Meta is also developing its own AI search capability for its AI chatbot. Similar to other AI search tools, Meta’s tool would crawl the web and index chunks of information that can be combined into a custom answer for the user. Meta is trying to keep up with OpenAI’s ChatGPT tool, and may also be trying to rid itself of its dependence on Google and Microsoft’s Bing, which it taps to bring news, sports, and stocks content to Meta AI users. 

Reports say xAI in talks to raise “several billion” at $40 billion valuation 

The largest and best AI models are, and will be, owned by a handful of very rich tech companies. That theme is continually reinforced as tech companies push large generative AI models toward higher levels of performance. The transformative technology requires unheard-of amounts of computing power, electricity, and training data. The AI community itself is just coming to terms with this reality. 

Masayoshi Son, founder and CEO of SoftBank Group, said at the FII8 conference Tuesday that the worldwide investment in computing chips and data centers needed to support “artificial superintelligence” (that is, AI that’s far superior to humans at most tasks) will total $9 trillion. “Nine trillion of CapEx for artificial superintelligence is very reasonable,” he said. “In fact, it may be too small.” If artificial superintelligence (ASI) begins improving global GDP by 5% per year starting in 10 years, Son argued, the companies that contributed the investment money will begin seeing profits of $4 trillion per year. He predicted that just four companies (and, presumably, their investors) would reap that reward. 
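Son’s back-of-envelope math can be sketched out as follows. The roughly $100 trillion global GDP baseline is our assumption for illustration, not a figure from his talk; the other numbers are his claims, not verified projections:

```python
# Back-of-envelope sketch of Son's FII8 math. The GDP baseline is an
# assumption for illustration; the other figures are Son's claims.
GLOBAL_GDP = 100e12          # rough current world GDP, USD (assumption)
CAPEX = 9e12                 # Son's projected ASI infrastructure spend
GDP_BOOST_RATE = 0.05        # 5% annual GDP improvement from ASI
ANNUAL_PROFIT = 4e12         # annual profit Son says investors would capture

incremental_value = GLOBAL_GDP * GDP_BOOST_RATE   # ~$5T of new value per year
payback_years = CAPEX / ANNUAL_PROFIT             # years to recoup the $9T

print(f"Incremental value/yr: ${incremental_value / 1e12:.1f}T")
print(f"Payback period: {payback_years:.2f} years")
```

On these numbers, investors capturing $4 trillion of a ~$5 trillion annual GDP boost would recoup the full $9 trillion outlay in a little over two years, which is why Son calls the figure “very reasonable.”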

AI companies are investing now in hopes of being one of those beneficiaries. In early October, OpenAI raised another $6.6 billion to add to the estimated $13.5 billion it had already raised. Its investors believe the company to be worth $157 billion, up from its $86 billion valuation in February. Microsoft (a major backer of OpenAI) and Meta (which develops open-source foundation models) are likely investing in excess of $30 billion in hardware this year, much of which will be dedicated to training and hosting AI models. Google will spend an estimated $51 billion on capital expenditure this year, up 59% from last year, much of it on AI infrastructure.

Elon Musk’s xAI is betting big, too. While many of the leading foundation model developers such as OpenAI and Anthropic rely on larger partners for their computing power (Microsoft Azure and Amazon Web Services/AWS, respectively), xAI is busy building models and data centers. The company has just completed work on a massive new data center in Memphis, the Colossus supercomputer cluster, which contains 100,000 Nvidia Hopper GPUs and is being used to train xAI’s Grok large language models. xAI, with help from Nvidia, Oracle, Dell, and Supermicro, is now working to double the size of the cluster to 200,000 Hopper GPUs. 

It’s not surprising, then, that xAI may be raising more money. The Wall Street Journal reports that the company is now in early-stage talks with investors to raise “several” additional billions of funding on a $40 billion valuation. And that comes just five months after xAI raised $6 billion on a $24 billion valuation.

“XAI is in the midst of an arms race of capital expenditure build-out that requires subsequent funding at higher valuations, and likely the raise of venture debt, collateralized against the company’s existing hardware,” PitchBook analyst Brendan Burke tells Fast Company in an email. “XAI’s custom data centers can give the company an advantage in pursuing frontier models given the restrictions of OpenAI’s relationship with Microsoft and Anthropic with AWS.” 

On the foundation model front of the AI arms race, xAI has designs on models that reach AGI (that is, AI that’s equal to humans at most tasks), and then ASI. Musk, speaking at FII8, offered some of his characteristically ambitious projections for when his company will achieve these advanced forms of AI.

“I think it’ll be able to do anything any human can do, possibly within the next year or two,” Musk said. “So then, how much longer does it take for it to do anything all humans can do combined? I think, not long, probably three years from that point—probably 2029-ish, 2028, something like that.”

xAI released Grok-1 in late 2023, followed by Grok-1.5 in April 2024 and Grok-2 in August 2024. Grok-2 is meant to compete with the latest foundation models from OpenAI, Anthropic, and Google; it brought major improvements over its predecessor in chat, computer code generation, advanced math, and image generation.

Osmo gives robots the ability to smell

Companies like Covariant, Tesla, and Figure are trying to use large multimodal transformer models to take robots to the next level. Much of this work involves improving robots’ AI brains, giving them intuition—a sense of how the world works. But one company is working on technology that could give robots a new sense: that of smell.

Osmo caught the attention of many on X this week when it published a video about its AI system, which can detect the molecules in the air that comprise an odor and then digitize them. Once an odor is represented as a series of numbers, it becomes the domain of machines, including robots. Certainly, robots wouldn’t experience smells the way we do (no pleasure from a pleasant scent, no nausea from a particularly foul one), but they’d be able to identify a smell and its source. This could be a good thing for some kinds of robots. A security bot might be able to detect a gas leak, or an intruder, by smell, for example. Drug-sniffing dogs might be out of a job, too.
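To see why digitization matters, here’s a minimal sketch of the idea: represent each odor as a vector of relative molecule concentrations, then match an unknown sensor reading against known signatures by cosine similarity. The molecule names, values, and matching method here are illustrative inventions, not Osmo’s actual representation or algorithm:

```python
import math

# Hypothetical odor "signatures": relative molecule concentrations.
# These names and numbers are invented for illustration.
KNOWN_ODORS = {
    "natural_gas": {"methane": 0.9, "mercaptan": 0.8, "limonene": 0.0},
    "citrus":      {"methane": 0.0, "mercaptan": 0.0, "limonene": 0.95},
}

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse concentration vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def identify(sample: dict) -> str:
    """Return the known odor whose signature best matches the reading."""
    return max(KNOWN_ODORS, key=lambda name: cosine(sample, KNOWN_ODORS[name]))

reading = {"methane": 0.85, "mercaptan": 0.7}  # a simulated sensor reading
print(identify(reading))  # prints natural_gas
```

Once a smell is just a vector like this, a robot can compare it, store it, and act on it like any other sensor data, which is what makes a gas-leak-detecting security bot plausible.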

 

ABOUT THE AUTHOR

Mark Sullivan is a San Francisco-based senior writer at Fast Company who focuses on chronicling the advance of artificial intelligence and its effects on business and culture. He’s interviewed luminaries of the emerging space, including Google DeepMind’s Demis Hassabis, Microsoft’s Mustafa Suleyman, and OpenAI’s Brad Lightcap.

