AI this week: Doomers vs. builders, LLMs in healthcare, and AI search's citation fails


By Mark Sullivan

We’ve witnessed the arrival of transformational technologies in the past, like the iPhone, or the printing press, or fire. But it’s always been very, very slow. Nowadays, we have a constant firehose blast of AI news, and the hose is getting bigger. These are the AI news items that captured our attention over the past week.

ChatGPT-style generative AI driven by large language models (LLMs) has captured our attention, in part because it speaks our language and has hinted at a path toward extra-human intelligence and maybe even machine sentience. But it’s crucial that we keep our eyes fixed on how well the technology is fulfilling its promise in the real world. So far, the results are mixed.

Stanford’s Human-Centered AI research group just published a paper showing that LLMs are so far failing at least one aspect of the very first task the tech industry has asked of them: searching the internet. In short, they don’t cite their sources well. The research group studied the outputs of four prominent AI search engines—Bing Chat, NeevaAI, Perplexity, and YouChat—and found that, on average, they include citations for only about half of the sentences they generate in response to common queries.

AI video production: You’re not ready

Video production is another story. The moment is coming when AI-generated video gets so good that it might threaten a whole industry. Many developers and producers are posting their work on Twitter and other platforms. One VR production set in the trenches of World War I is, I think, particularly good. The video was created with Runway’s Gen-2 model, and the sound was generated using Soundful.

Epic possibilities

During a recent earnings call with analysts, Microsoft CEO Satya Nadella quickly mentioned that Epic Systems, the dominant maker of electronic medical records software, has been using the OpenAI LLMs via Microsoft’s Azure Cloud. I’ve personally been fascinated by the idea of using generative AI in healthcare, which might reduce the amount of paperwork caregivers have to do so that they can stay more focused on treating patients.

Curious, I talked to Epic’s SVP of R&D Seth Hain who told me that Epic customer medical groups in Wisconsin, Los Angeles, and San Diego are now testing a new OpenAI-powered tool that generates the text for patient messages, informed by data from the patient’s EMR. It’s possible that Epic may one day experiment with training an LLM on a large corpus of EMR data, if this can be done while preserving the privacy of the data. 

Of course, companies in pretty much all industries are trying to understand what generative AI can do for them. During March and April, Gartner polled more than 2,500 executive leaders about applying AI, and 45% said the publicity of ChatGPT has prompted them to increase AI investments. Seventy percent said that their organization is in “investigation and exploration” mode with generative AI, while 19% are in “pilot or production” mode.


‘Pi’ wants to be your friend

Inflection AI, founded by Google DeepMind cofounder Mustafa Suleyman, launched its new “Pi” personal AI assistant to the world Tuesday. The new assistant, which features a human-sounding voice, is getting mixed reviews. Some say Pi responds too slowly, shuts down when faced with hard questions, or isn’t good at writing code. But the early detractors may be missing the point: As Suleyman explained to me, Pi is really meant to be a social, empathetic AI friend that remembers its conversations with its users.

Doomsayer vindication

AI alignment researcher (and prominent AI doomsayer) Eliezer Yudkowsky believes researchers are pressing ahead with new models far too quickly. In the not-so-distant future, he says, AIs will be so much smarter than humans that they will be able to outsmart, and even exploit, humans for their own goals. Whether those goals are “aligned” with human goals and values is something of a crapshoot, and very likely outside of human control. Yudkowsky’s research on the dangers of AI has been met with cries of “alarmist” from various industry voices.

Here’s a memorable quote from my conversation with Yudkowsky this week: “If you’re all in the middle of a global AI arms race, people will say there’s no point in slowing down because their competitors won’t slow down. . . . But maybe an international coalition gets scared enough to shut down the entire mess, or maybe humanity wakes up that day and decides that it wants to live.” 

Now Yudkowsky appears to be somewhat vindicated. News broke this week that one of the “godfathers” of AI, Geoffrey Hinton, has left his post at Google, in part to speak more freely about the dangers of AI. In fact, Hinton is now hitting many of the talking points Yudkowsky has been touching on for months.

“I think it’s very reasonable for people to be worrying about these issues now,” Hinton said in March, “even though it’s not going to happen in the next year or two.”

Fast Company
