Forecasting The Future And Explaining Silicon Valley’s New Religions
Yuval Noah Harari might be Silicon Valley’s favorite historian. His previous book, Sapiens: A Brief History of Humankind, which traced the entirety of human history and how Homo sapiens came to dominate the Earth, was blurbed by President Barack Obama and Bill Gates, and Mark Zuckerberg recommended it for his book club. More than 100,000 students have taken Harari’s online course.
In his new book, Homo Deus: A Brief History of Tomorrow, Harari looks forward and hazards a few guesses on what comes next for humanity. These next chapters in our history range from the utopian to the horrific, he says. A growing gap between rich and poor could result in global warfare, humanity might create artificial life, or people might transform themselves into godlike creatures. In any case, the book says, both the tech industry and governments worldwide are heavily implicated in what comes next.
Fast Company spoke with Harari shortly before his book’s U.S. release on February 21 about the risks of algorithms, how China will likely become a genetic engineering leader, and more.
Fast Company: What is Homo Deus about?
Yuval Noah Harari: It’s about a lot of things, but above all else, the potential future of humankind and of life in general. It’s not a book of prophecy and forecasts. Instead of offering prophecies, what the book tries to do is map the different possibilities humankind is facing—the different challenges and dangers.
In the 21st century, it’s obvious that we will see really amazing developments. I wrote this book to map and highlight some of these possibilities, including the most dangerous ones, in the hope of contributing to the public and political debate about what to do with these technologies.
When you speak to audiences from the tech world, what kind of questions do you usually get?
Most of the time there is a very real thirst for engaging discussions that aren’t about the technological aspects. It’s not about what artificial intelligence might or might not be able to do in five or ten years, or whether self-driving cars will be feasible by 2025, or any of these technical questions. I’m not going to answer those anyway.
But they’re really concerned with the political and social and even religious and philosophical implications of such developments. In most cases, people in tech are not alienated or frightened by this type of thinking. They are usually fascinated by it.
Are there any questions those audiences should be asking?
There are two very important questions they should ask.
The first is whether the tech world understands human society well enough to really appreciate what technological developments are going to do to humanity and the world. I believe, in many cases, the answer is no.
The second question, one of the big scientific questions of our century, has to do with the human mind and consciousness. We are making tremendous progress in understanding the human brain and intelligence, but comparatively little progress in understanding the mind and consciousness.
So far, we don’t have any serious theory of human consciousness. The widespread assumption is that somehow the brain produces the mind, that millions of neurons firing signals at one another somehow create or produce consciousness… but we have no idea how or why this happens. I’m afraid that in many cases, people in the tech world fail to understand that. They equate the brain with the mind, and intelligence with consciousness, even though they’re separate things.
In human beings, as in other mammals, intelligence and consciousness go hand in hand, but they are not the same thing. We know of other organisms, like trees, that have intelligence but (as far as we know) no consciousness. Intelligence is the ability to solve problems; consciousness is the ability to feel things and have subjective experiences.
The fact that we don’t understand the mind and consciousness also implies that there is absolutely no reason to expect computers or artificial intelligences to develop consciousness anytime soon. Since the beginning of the computer age, there has been immense development in computer intelligence but exactly zero development in computer consciousness.
Even the most sophisticated computer and AI software today, as far as we know, has zero consciousness: no feelings and no emotions whatsoever. And one of the dangers is that we are gaining the ability to manipulate the human body and the human brain, but because we don’t understand the human mind, we won’t be able to foresee the consequences of these manipulations.
In Homo Deus, you discuss new religions coming out of Silicon Valley. What do you mean?
The basic insight is that religion isn’t made in heaven; it’s made on earth. If you don’t like the word religion, you can replace it with ideology; it’s largely the same thing. At the heart of both religion and ideology is the question of authority, and where authority comes from. In traditional societies, like in the Middle Ages, people thought authority came from above the clouds or from the gods. In the modern era, authority came down from the clouds to earth, and people thought authority was invested in individual humans. This led to the rise of humanism, which put people’s needs and desires forward as the highest source of authority in the world.
In the Middle Ages, the idea was that God knows us best of all, and we should follow his commands and his representatives: listening to the pope, the priests, the mullahs, and the rabbis. In the modern age, we are told that no one understands you better than you understand yourself, and because of that you shouldn’t listen to any external authority. We now see an ideological shift, a religious shift, but it’s really a shift in authority. Humans are losing authority, and authority is shifting to algorithms and external data-processing systems that are supposed to know us better than we know ourselves.
Do you think the engineers working on these algorithms think about the long-term consequences of what they’re creating?
No. I think some of them have some idea, which may be right or wrong, but most of them focus on the immediate problems. And it’s not as if a single engineer creates the algorithm that then takes charge of a self-driving car or of education. Usually, the algorithms are created by entire teams, or several teams, each working on a different part of the algorithm.
More importantly, there’s also machine learning. It’s a bit of a caricature, but to some extent you give an algorithm immense amounts of data and it learns by itself. You don’t know what the resulting algorithm will look like. Even when you have it, you may not understand how the algorithm functions or why it makes a particular decision or choice.
What we are trying to create at present, at least in some fields, are algorithms that are more intelligent than human beings. By definition, you can’t really predict what such algorithms will do or what the consequences of their actions will be.
You mentioned China being a breeding ground for new ideologies in the 21st century. Can you talk more about that?
On the technological level, China is now rapidly closing the gap with the United States. But on the ideological level, there’s a very big difference. The Western world is very committed to, and dominated by, a humanist ideology developed in the West over the last two or three centuries. China, on the other hand, is far less committed to humanism or to any other ideology or religion.
The official ideology is still communism, but to a large extent China is no longer a communist country. In some respects, it is even more committed than the United States to capitalism: to the pursuit of economic growth at the collective level, and to economic success at the individual level.
Ideas like extending human life indefinitely, using biotechnology to upgrade humans, or using AI to manage society will face far less resistance in China than in the U.S.
You originally wrote the book in Hebrew, and then translated it into English?
I wrote Homo Deus first in Hebrew, and then got a lot of feedback from the Israeli audience, both lay people and experts in various fields. I didn’t translate it into English; I rewrote it in English. The English version is quite different from the Hebrew version: many of the examples in the original book were taken from Israeli culture and politics, and I replaced those with more international anecdotes.
It’s still easier for me to write in my native tongue. But, in some fields, English is so much richer than Hebrew for writing. I guess there are theories about how different languages construct different worldviews. You see, to some extent, a worldview embedded in the language.
For example, in Hebrew, we have no word for the mind. It’s one of the most difficult words to translate into Hebrew. This is because of the religious and cultural background: Judaism was never very interested in, and didn’t give much importance to, questions of the mind. In Buddhism, by contrast, you have dozens of different words describing different aspects of the mind. Translating Buddhist texts into English is very difficult because you have 20 different words that all get translated into the same word, “mind.” Hebrew is even more difficult because you don’t have that word at all!
Is there anything you wish you had included in the book after you finished it?
All the time there are things I wish I had added. My previous book, Sapiens, was about the past, and the past is always there.
But when you write about the future, there are technological developments; every year there are important changes and breakthroughs. Of course, I even made some small changes between the U.K. edition, which came out in December, and the U.S. edition, which comes out in February. For instance, the famous Go match between AlphaGo and Lee Sedol came too late for the U.K. edition, but I added it to the U.S. edition. I didn’t predict Trump would be elected U.S. president, but then, I didn’t try to predict such things.
Anything else you’d like to add for our readers?
The most important thing to know about these technological developments, from a historical perspective, is that they are almost never deterministic.
Every technology can be the basis for very different social and political systems. You cannot just stop the march of technology—this is not an option—but you can influence the direction it is taking.
The idea that a particular technology, whether it’s the internet, genetic engineering, or artificial intelligence, mandates a particular future is a very dangerous one.
This interview has been condensed for length and readability.