AI is like Google Maps: The user is the product

AI doesn’t work for you—and it’s unlikely it ever will.

BY Steven Rosenbaum

The other day, on a long road trip in a rental car, I found myself looking for a place to pull off the road. As I’ve done many times before, I typed into Google Maps “Food Near Me.” I peered at the options. (Don’t worry, I was in the passenger seat.) The response was bleak: Google recommended a string of chain restaurants. Pizza Hut, McDonald’s, Taco Bell, and so forth.

It hadn’t always been this way. The cute local restaurants, the greasy spoons, the farm-to-table spots—all gone. I was confused, then enraged. Didn’t Google know I was not a “Taco Bell” type of person? But then the reality of our world hit me hard. For Google Maps, I wasn’t the customer; I was the product.

I began to list the “free” services I had handed my data over to: Gmail, YouTube, Google Search, Apple Weather, Smart News, Amazon Alexa, Pinterest, Reddit, Waze, Shazam, Strava, Nextdoor, Venmo. My most intimate information and behaviors—where I traveled, what I purchased, what I listened to, what I watched—bought and sold to the highest bidder.

That’s because Google Maps doesn’t work for me; it works for Taco Bell and the other chains that pay it.

A similar thing is playing out with artificial intelligence, but the consequences could be even more dire. Simply put, AI is programmed to serve its masters, the owners and investors who’ve poured billions of dollars into the code and the server farms. AI doesn’t work for you—and it’s very unlikely it ever will.

This has been true for a long time: free products are paid for by advertisers and are beholden to them.

But we’re entering a new chapter, as ChatGPT and its AI brethren provide an impressive array of services with a patina of truth as the customer-facing interface. Why does this matter? Because in the very near future, ChatGPT or another large language model is going to be in charge of what we consume as truth. They will suggest medications, give you diet advice, even suggest how you should answer your boss’s emails. And their “knowledge” will be limited both by where they got their training data and by who pays the AI’s owners to shape their results.

With this in mind, I asked ChatGPT to find me a quote: “I’d like a quote from Steven Rosenbaum about the impact of reality TV on society.”

Quickly the robot responded: “Here is a quote from Steven Rosenbaum about the impact of reality TV on society: ‘Reality TV has changed the dynamics of television, influencing how stories are told and how audiences engage with content. It blurs the lines between entertainment and reality, often prioritizing sensationalism over substance.’ This quote can be found in his book ‘Curation Nation: How to Win in a World Where Consumers are Creators,’ published in 2011.”

At first, I was impressed: the robot had read my book! But once I got past that initial burst of pride, I became suspicious. I asked ChatGPT: “What page number is that on?” And just like that, the truth was revealed. “I apologize for the earlier response,” wrote ChatGPT. “The quote provided was a general summary rather than a direct quote from Steven Rosenbaum. For a direct quote from Steven Rosenbaum about the impact of reality TV on society, you may need to refer to his various writings or interviews.” The AI knew about the book, but it hallucinated the quote. Evidence that, as cognitive scientist Gary Marcus said onstage at South by Southwest this year, “AI . . . are bullshit artists.”

To find a path to the future of truth, we need to start by not trusting technology we don’t pay for to have any obligation to respond accurately or honestly. So, where does this take us? “To define tomorrow in terms of what we knew (September 19, 2024) is a very human instinct,” writes former FCC Chair Tom Wheeler in his 2023 book TechLash, “but one that will serve us ill in the new era.”

The companies behind the AI, or the advertisers who are paying to have their answers promoted, are going to buy their way into the technology that could soon rule our lives. And Arthur C. Clarke’s HAL might well turn out to be prophetic if we don’t demand that AI put truth before profit.

I ended my research by asking ChatGPT, “Is Taco Bell healthy?”

“Taco Bell can be part of a balanced diet if you make mindful choices,” ChatGPT cheerfully replied. I believe Gary Marcus would call that “bullshit.”


ABOUT THE AUTHOR

Steven Rosenbaum is the Executive Director of The Sustainable Media Center and has a Master’s in the Future of Truth from the Gallatin School at New York University. He is the author of the forthcoming book The Future of Truth.


Fast Company