A Harvard historian explains what Elon Musk is getting wrong about the future
When it comes to Elon Musk, it can be hard to separate the man from the myth. But in her new podcast The Evening Rocket, Harvard historian and New Yorker writer Jill Lepore manages to see through Musk’s mystique, explain his worldview, and decipher his visions of the future by going back to the sci-fi stories he grew up on—stories, Lepore says, that Musk sometimes misread.
This week, Lepore joins host Rufus Griscom on the Next Big Idea podcast. Listen to the full episode below, or read a few key highlights.
Elon Musk is a Marvel character . . . in real life.
Rufus Griscom: I’ve heard you say, Jill, that you are fantastically uninterested in biographies of the rich and famous. You were not even particularly interested in Elon Musk before this project came along. Having said that, Musk is an unusually interesting person, and it feels to me like we all, collectively, have a kind of love/hate relationship with him.
Jill Lepore: I don’t think I have a love/hate relationship with him. It’s hard to really reckon with him as a true human being. The character he plays on the internet is such a caricature of himself. I don’t think anyone who watches that really has much of a sense of Musk as a person. That’s part of the consequence of being a kind of Marvel character in real life.
Science fiction isn’t a user’s guide—but Musk doesn’t get that.
Rufus: Musk was born in Pretoria, South Africa, in 1971. Growing up, he was fascinated by comic books about space travel. He was an avid reader, and The Hitchhiker’s Guide to the Galaxy became a kind of bible to him. And this book has an interesting relationship to the apartheid era in South Africa in which Musk grew up. You say, in your podcast, “There’s a weird way in which the culture of apartheid found expression in the 1990s in Silicon Valley’s vision of the future.” Can you unpack that a little?
Jill: I was fascinated, in working on this series, by how deeply Musk appears to feel about Hitchhiker’s Guide, and how often he uses it as a reference point. He wants to name the first spaceship to Mars after the spaceship in the story. But then I discovered that Douglas Adams was a pretty vocal opponent of apartheid, and the typewriter on which he typed the script for the Hitchhiker’s Guide radio series has a sticker on it that says “end apartheid.”
So I wasn’t imagining it when I listened to The Hitchhiker’s Guide to the Galaxy, and heard it as an indictment of systems of profound economic inequality—and specifically an indictment of apartheid. Once I started thinking about that and taking that seriously, I had to ask myself, How could you miss that? How could you base a vision of the future on a satire of that very vision of the future? Douglas Adams is saying, “We shouldn’t send wealthy colonists to other planets to build luxury colonies because that is wrong,” and writing a satire that displays the many ways in which that is wrong. That is Musk’s guide for living, yet he is using that guide to justify doing the very thing the story opposes.
And it’s not just Musk—it’s Bezos and his science fiction reference points. Or think about Mark Zuckerberg; he says the metaverse is inspired by Neal Stephenson. Well, Neal Stephenson’s metaverse is a dystopia.
Why do these guys keep reading science fiction, which is often searing social criticism, as if it were a user’s manual?
If billionaires take over the space race, space exploration will cease to be a public good.
Rufus: If early space tourism, with price tags of $55 million for a seat on the next SpaceX flight, is an indication of where we are headed, we’re going to have a real problem. My guess is that Elon would say that the vision here is similar to that of Tesla: Start with the high-end sports car that captures attention, gradually lower the price, advance a solution for the world.
Jill: Yeah, sure. He could say that. But we accept that the automotive industry is a private enterprise. We had not accepted until very recently that that’s also the case with space exploration.
In the 1960s, federal government dollars were being put toward space exploration, toward the mission to the Moon. You could disagree with that or agree with it, but at least it was subject to the ordinary process of political wisdom.
And, in fact, people did disagree. It was a big argument of a lot of civil rights activists. Think of Gil Scott-Heron’s “Whitey on the Moon” or the civil rights activists who marched at the Kennedy Space Center the day before the Apollo mission was launched in 1969. People would say, “If this is something that the federal government wants to do, then we, as people who pay taxes, object to it.”
There’s not that possibility for objection when you have the world’s likely first trillionaire funding this—and now saying he doesn’t need to pay taxes because he’s bringing the light of human consciousness to the stars. It’s just a complete subversion of our notion of space exploration as a public good.
Technology can’t save us from itself.
Rufus: Playing devil’s advocate for a minute, or Musk advocate, it strikes me that there’s no doubt that the pace of technological change is accelerating. And so I think what Musk might say is: “On the one hand, we really need to use those technologies as solutions to problems like global warming. On the other hand, it’s rational to be afraid of the existential threat.” I think there’s also a rational argument to be made that some of these potential existential threats—from biohacking to artificial intelligence—might be real. As William S. Burroughs puts it, “Sometimes paranoia’s just having all the facts.” I think what Musk would say, if he were in this conversation, is: “I’m really genuinely trying to help us use the acceleration of technology in ways that will help the species prepare for bad outcomes, which might actually be worse than we think.”
Jill: I would concede some portion of that, but I wouldn’t concede a whole lot of it. The “let’s engineer our way out of problems that we engineered our way into” argument is a very handy one to make. But think about it historically.
I wrote a history of the United States a few years ago, and when I got to 1945, I talked about Hiroshima and Nagasaki as the moment when the speed of technological change outstrips our capacity for moral reckoning. So what was the consequence, then, of our inability to take the time to reckon with atomic weaponry before using it? Well, it took 50 years of activism. It took the Nuclear Freeze movement. It took the collapse of the Soviet economy and Gorbachev’s need to negotiate. It took decades of scientific research on the consequences of radiation poisoning for the Japanese who suffered during those attacks. It took research on the nature of fallout. It took Carl Sagan’s work on nuclear winter. It took decades and decades and decades to come to the realization, both scientific and moral, that launching nuclear attacks is a terrible idea and shouldn’t be done. Even though a lot of people said that in 1945, it took decades of research and politicking and political struggle; it took scientific dissent and peer review and argumentation to get to a point of significant disarmament.
So the idea that there are threats coming, so we should come up with technological fixes to those threats instead of not developing the threats—it’s just a weird idea. It’s not actually a question of technology. It’s a question of morality and politics.