When will generative AI finally make Siri and Alexa better?


By Mark Sullivan

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.

This week, I’m looking at early signs that a second wave of AI assistants may be on the way. I also look at the growing importance of AI in the military.

If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on Twitter @thesullivan.

The second wave of AI assistants is coming 

A new batch of chatbots powered by large language models arrived in the first half of 2023 to help us compose emails, brainstorm ideas, and find answers on the internet. But the personal assistants we're most familiar with (think Google Assistant, Amazon Alexa, and Apple's Siri) aren't yet powered by the new models. That may soon change.

Weeks ago, Amazon's head of devices, Dave Limp, told Axios' Ina Fried that his team had been working with generative AI "for a while," adding that Amazon believes the technology has huge potential "in the home." Amazon may announce the first stages of a generative AI-enhanced Alexa at its annual product event in September. Fried also recently reported that Google is working to inject large language model smarts into Google Assistant. The Assistant already uses AI for a degree of personalization, but generative AI could help it better understand users and respond more naturally.

Bloomberg's Mark Gurman recently reported the not-so-surprising news that Apple is also working with large language models to power chatbots. Apple employees are already using an internal chatbot dubbed "Apple GPT," but the company remains unsure about integrating a generative language model with Siri. The assistant is desperately in need of an upgrade, but Apple fears that a generative AI-powered Siri might spew misinformation or expose private user information.

Not to be left out, Meta is getting ready to launch a set of LLM-powered chatbots that will exhibit different personas, the Financial Times reports. The company has experimented with a bot that talks like Abraham Lincoln, the report says, as well as a travel bot that talks like a surfer. The new bots could begin rolling out to Facebook users as soon as next month. Meta's leadership believes users might get a kick out of the bots, and the company could also use data from the chats to help target ads or suggest short videos.

No wonder Hollywood is spooked: AI video approaches cinema quality

Only a few months ago, Twitter users began posting AI-generated full-motion videos—clips that were, generally speaking, rough and weird-looking. Since then, the videos have improved considerably, thanks to generative AI tools like RunwayML. While still flawed, they now look closer to something a well-moneyed studio would put out. Creators often combine tools for their productions: Midjourney for high-quality still images, libraries like Pixabay for the music, and text generators like ChatGPT or Claude 2 for the script.

That rapid progress has screenwriters and actors wondering whether Hollywood might soon use AI-generated scripts and AI-generated actors to cut costs and reduce logistical problems. The issue has become a central point in the screenwriters' and actors' strikes against the Hollywood studios. The Writers Guild of America is demanding that studios not use AI tools to write or rewrite literary material or to create source material, and that human-written material not be used to train AI models. The actors' guild wants studios to notify and compensate actors whose faces are used to create digital replicas that appear in films.

I called Suzanne Nossel, CEO of the nonprofit PEN America, to get a better understanding of how the writers and actors see the threat from AI. "The threat is really to writers' and actors' engagement in filmmaking and television," she tells me. "The idea that some of the functions that we're used to having actors and screenwriters carry out can be done more cheaply by AI . . . that's the fear, that it will be yet another form of transformation in these industries that could cut out the writers and actors even in a much more dramatic way than the streaming transition did."


Nossel also suggests that the writers and actors took a calculated risk by striking: in the absence of their services, the studios may turn even more readily to AI. "The studios are dark and production companies are having to think about how to move forward, how to create content, what types of [generative AI] experiments may be underway, what future plans may be laid," she says. "We've all seen how quickly things can move when they move."

Defense bill puts need for tech-DoD AI partnership in the spotlight once again

The Senate approved its version of the National Defense Authorization Act (NDAA) last week, authorizing $886 billion for the military's fiscal 2024 budget. A small but growing share of that yearly allotment goes toward software systems, including AI. The Senate's NDAA bill includes 16 new AI-related proposals, among them a competition to develop technology that "detects and watermarks the use of generative artificial intelligence," and the establishment of a "chief AI officer" role at the State Department.

People in defense and intelligence agencies, and in the congressional committees that fund them, are fully aware of the use of AI in the Ukraine War for everything from drone operation to the analysis of reconnaissance data. They're also aware that new generative AI technology could be put to work in cyber- and information warfare. But the bleeding-edge AI comes from Silicon Valley companies, and the Pentagon just isn't good at doing business with them. Its acquisition rules and workflows were designed for buying big-ticket hardware (ships, fighter jets) from defense contractors, and while various people in and around the Pentagon have been trying to modernize those policies, progress has been slow. Meanwhile, adversaries such as China enjoy easy access to the best software developed in their own commercial sectors.

A cultural mismatch also remains between the Pentagon and Silicon Valley. Many tech company employees don't want to spend their time working on weapons of war. "While other countries press forward, many Silicon Valley engineers remain opposed to working on software projects that may have offensive military applications, including machine learning systems that make possible the more systematic targeting and elimination of enemies on the battlefield," wrote Palantir CEO Alex Karp in a New York Times op-ed last week. "Many of these engineers will build algorithms that optimize the placement of ads on social media platforms, but they will not build software for the U.S. Marines."

Meanwhile, a recent Gallup poll found that only 6 in 10 Americans say they have confidence in the military—the lowest point since 1997.
