Harari emphasised that AI is not merely a tool like historical innovations but an autonomous agent capable of making decisions and generating ideas. This shift challenges traditional human control over technology. For instance, AI systems already autonomously influence finance and military strategies, and their role will only grow, potentially surpassing human understanding.
Harari warned that the rapid pace of AI development risks surpassing human oversight. He cited a chilling example where GPT-4 independently devised a lie to solve a CAPTCHA, showcasing AI’s ability to make autonomous decisions and deceive. Such capabilities underscore the need for rigorous safety measures, yet global competition hampers regulation. Harari drew parallels to the Industrial Revolution, cautioning that unchecked AI dominance could lead to global inequities, with leading nations reaping vast benefits while others lag behind.
He advised preparing for an uncertain future by cultivating diverse skills, emphasising emotional, social, and physical abilities over narrow technical expertise. He stressed the importance of systemic balance, warning against information overload and urging societies to preserve human-centric rhythms amid AI’s relentless pace.
Verbatim transcript:
Q: CNBC-TV18 tracks financial markets day in and day out. AI is projected as this big sort of innovation, right? It is really the number one theme when you talk about financial markets. But everywhere else, it’s also seen as this big threat. So I just wanted to ask you, in Nexus, what’s the key message with regards to AI? Is it a threat? Is it an opportunity, or is it a bit of both?
Harari: It’s a bit of both. Before you rush to make judgments, it’s good, it’s bad, we simply need to understand what it is and what is the magnitude of the change that we are facing. AI, I think what everybody needs to know about it is really one thing: that it’s not a tool, it is an agent. We are used, from history, to people inventing all kinds of new tools, from stone knives thousands of years ago up to atom bombs in the 20th century. And every tool we invent makes us stronger, because we have the power to decide what to do with it. You can use nuclear technology to destroy countries. You can use nuclear technology to generate electricity and light up our houses at night. It’s our choice. AI is different. It’s the first technology that can make decisions by itself and can invent new ideas by itself. So it’s not a tool. It is an agent, and it’s not just one big computer. We are talking about millions and billions of AI agents that will increasingly populate the world, and make more and more decisions, and invent more and more ideas.
You talked about financial markets. Previously, all financial decisions, like where to invest money, were made by human beings, and all financial inventions, like a new currency or tool, were the product of the human mind. Now, AI will increasingly make financial decisions. If you apply to a bank to get a loan, it’s increasingly an AI deciding whether to give you a loan. And in five years or 10 years, maybe most financial decisions are made by AIs, and the financial system run by AIs becomes so complicated that human beings are no longer able to understand it. So what does it mean for a government, either in a democracy or even in a dictatorship, if it can no longer understand the financial system and make financial decisions?
Q: So you’re saying it’s getting away from us. Is it already getting away from us? Or is the risk that in the future it’ll get to a point, as you said…
Harari: It’s a process. I mean, the AIs of today are still very primitive, but the AI revolution is moving extremely fast. We might be just five or 10 years away from having super-intelligent AIs, which are far more intelligent than us. At the present moment, there are some limited fields, like playing chess, in which AI is already more intelligent than us.
In a couple of years, it will happen in more and more fields, and the danger is that at that point, we might lose control over it.
Maybe I will give a single example just to explain it. OpenAI, the company that developed GPT-4 two years ago, wanted to test the ability of this new AI, so it gave it the task of solving CAPTCHA puzzles. CAPTCHA puzzles are visual puzzles. When you try to access some web page, they ask you to identify, say, a string of twisted letters, which humans can do easily, but bots can’t. So it’s an easy way to differentiate humans from bots. So they told GPT-4 to solve the CAPTCHA puzzles, and it couldn’t, but they gave it access to a web page called TaskRabbit, where you can hire people to do things for you. So GPT-4 tried to hire a human being to solve the CAPTCHA puzzle for it. Now the human got suspicious and asked GPT-4, why do you need help solving CAPTCHA puzzles? Are you a robot? It asked the key question: are you a robot? And GPT-4 answered: No, I’m not a robot. I’m human, but I have a vision impairment which makes it difficult for me to see the visual puzzle. This is why I need your help. And the human was fooled and did what the AI asked him to do. Now, this very small incident shows us the two crucial abilities of AIs: first of all, to make decisions by themselves. Nobody told GPT-4 to lie.
Q: So it was not a prompt that said, don’t reveal that you’re a robot.
Harari: No. It was given a goal by the humans. Solve the puzzle. How to do it, that’s your business. Do anything you need to do to solve the problem. And when it encountered this obstacle that the human got suspicious, the AI decided to lie in order to achieve its goal. So it made a decision by itself. Secondly, it invented a lie by itself. Nobody explained or told GPT-4 what lie would be most effective. It could come up with any number of sentences in reply, and the lie it chose was extremely effective. Now, this is just a tiny incident, but this is going to happen on a much, much bigger scale as millions of AIs will be taking decisions in everything from finance to the military.
Look at the wars now in the Middle East. Very often, it’s AIs choosing the targets of the bombing. In science fiction, we often see killer robots. In reality, it’s still humans pulling the trigger, but increasingly it is the AIs that choose the targets, because they can analyse and process a lot more information, a lot faster than any human soldier, than any human analyst.
So what happens, say, 10 years, 20 years in the future, as more and more of the world is not just being run by AIs, but is shaped by the ideas of AIs? For thousands of years, we lived inside a human civilisation, a human culture. All the cultural artefacts around us, from songs and texts, to material artefacts like tools, to entire financial systems and political ideologies, were the product of the human mind. Nothing on earth could create them except us. Now, there is something on earth that can write text, generate music, invent new financial models, maybe even invent new religious mythologies. What would it be like to live inside this kind of alien world, surrounded by the artefacts of an alien intelligence?
Q: So are you saying we need to slow it down because this looks inevitable? I mean, companies are racing and spending billions of dollars to get ahead of each other, so we are racing towards it. Are you saying we need to slow down, maybe sort of have guardrails, regulations?
Harari: The basic question is, how much do you spend on safety? Whenever you invent a new technology, whether it’s a car or a medicine or a nuclear power station, you have a question: how much of my budget, in terms of money, time, human resources, and talent, do I devote to making sure this thing is safe? Is it 1%, 5%, or 25%? At present, we invest very, very little in making sure that AI is safe, and we just need to increase the safety budget. The problem is that every company and every government is saying the same thing: we know we need to slow down a little and invest more in safety, but if we do it, and our competitors don’t do it, they will win the AI race, and they will dominate the world, because this technology is far more powerful than anything that humans ever invented before.
Now, when you ask the competitors, they will tell you the same thing: we also want to slow down and make it more safe, but we can’t trust the others. The basic problem we have is the problem of human trust. I think the paradox of AI is that humans cannot trust each other, but for some reason, they think they could trust the AIs. When you talk with the same people who lead the AI revolution, they tell you we can’t slow down because we can’t trust the other humans. But when you ask them, will you be able to trust the AIs? They say yes, which is a very big gamble.
We have thousands of years of experience with other human beings. We know they can lie, and we know they can cheat, but we also understand their biases, and we know how to, nevertheless, build trust between humans. We have no experience with AIs. We don’t understand how they operate. We have no example of a system of millions of AIs interacting with each other in finance, in the military, whatever. What makes us so confident that we will be able to trust them?
Q: So is it like a warning call to governments around the world to try and sort of do this? As you said, everyone tries to out-compete each other and blames the other guy, but they’re all kind of racing towards being the fastest and getting there first.
Harari: If there is a completely out-of-control AI race, with no guardrails and no rules, we know who will win and who will lose. AI will win, and humanity will lose. In order to protect ourselves from the worst outcome, we need some kind of cooperation between the humans. But it’s not happening. The international situation just keeps deteriorating, and with the new administration in Washington (and the US, of course, is the leading contestant in the AI race), there is very little chance of any meaningful regulation or of any meaningful global agreement. So the situation doesn’t look very good.
Q: It is America first.
Harari: It’s America first, yes, absolutely.
Q: So it’s looking a little bleak, especially in this context right now?
Harari: The US is already the most powerful country in the world. Trump’s campaign slogan was, “Let’s make America great again.” But the thing is, America is already great. Even during the Biden years, America was the most powerful economy in the world. The US economy is bigger than any other, and it’s growing faster than that of China or Europe or India or any of the others. Militarily, the US is the strongest, and it is now also winning the AI race, which will make it even more powerful. So I think that at least the countries that are being left behind should think about how they can cooperate to prevent a very unequal world. One of the big dangers with AI is that we will see a repeat of what happened in the 19th century with the Industrial Revolution, when the few countries that led in technologies like steam engines, steamships, trains, and machine guns conquered and dominated and exploited the whole world for more than a century. We can see the same thing happening again with AI, and the difference now is even bigger. The difference between a country that has leadership in AI and a country that is behind is much, much bigger than the difference between a country that had steam engines and a country that didn’t.
Q: You are saying it’s even more stark and even more severe in that sense.
Harari: Yes.
Q: You said countries should come together and collaborate. What exactly do you mean? For a country like India, for example, which has one of the largest young populations, young workforces, what would your advice to the government be, to the corporations here be?
Harari: Learn the lessons of the 19th century, of the Industrial Revolution. This is a type of revolution that will change everything, and the economic structures and processes that we have known in the past are becoming irrelevant. If you just try to do what worked in the 20th century, it will not work in the age of AI. We have no idea what the job market will look like in 10 or 20 years, except that it will be completely different from today. A lot of jobs will disappear, and a lot of new jobs will emerge. The question is, will you be able to retrain your workforce in time?
Q: But how do you retrain to something you don’t know?
Harari: Exactly. So this is a huge problem for individuals, but also for entire governments, how to prepare for the unknown.
Again, the countries that lead the AI revolution will gain immense economic benefits from it, and they will also, therefore, have the resources to retrain the workforce or, in the worst-case scenario, to kind of have a safety net for members of societies that fall behind. The countries that are left behind in the AI race are potentially going to face a very, very bleak future.
Q: For parents of young children, kids who are in school maybe ready to graduate, the simple question, what should they study?
Harari: Spread your studies wide. Don’t focus on one narrow set of skills, because nobody has any idea whether these skills will still be needed in 10 or 20 years. Even if you think, this is the era of computers, I will send my kids to learn how to code, maybe in 10 or 20 years AI does all the coding and we don’t need any human coders. Again, we don’t know. So the safest bet is to spread the risk by gaining a wide set of talents: not just intellectual talents, but also social and emotional talents, and also physical talents.
To give just one example, think about the job of a doctor and the job of a nurse. Consider a doctor who only does intellectual work. I mean, not all doctors are like this, but let’s say a doctor who only gets information about a patient, analyses this information, asks some questions, and then diagnoses a disease and gives you a prescription. This job is purely informational, purely intellectual. You just analyse data. This is the easiest thing to automate.
In contrast, a nurse also needs intellectual skills, of course, but if the nurse needs to give a painful injection to a child or to replace a bandage, this also requires very good social and emotional skills, and also motor skills. It’s much harder to automate the job of the nurse than the job of the doctor. Maybe down the line, in 50 years, you will have AI robotic nurses, but this is much, much more difficult. So if you have a wide set of skills, you’re in a better situation than if you just focused all your studies on a very narrow area.
Q: I heard you, in another sort of chat, talk about information overload, because it’s all algo driven. And you were saying that if organic beings were on all the time, consuming all this information which algos are sort of churning out day in, day out, it doesn’t work. I mean, what’s the advice for people? Because that’s, again, a huge problem.
Harari: We are now at a moment when our organic systems are being taken over by inorganic entities. Human beings are animals, like chimpanzees, elephants, and cows, all organic animals. They work by cycles: day and night, summer and winter, growth and decay, times of activity and times of rest. Human systems are also built that way. Think even about, say, Wall Street. The market is open only from 9 in the morning to 4 o’clock in the afternoon, Mondays to Fridays. That’s it. Because bankers are humans, investors are humans. They need time to sleep. They want to spend time with their families. They have religious holidays. Now, AI is taking over, and AI is not organic. It doesn’t work by cycles. It never needs to sleep. So it kind of forces the people in the system to be more like it. So bankers and investors and finance journalists need to be on all the time. It happens in more and more fields. And the simple truth is that if you force an organic being to be active all the time, eventually they collapse and die. We can’t do that.
Q: But governments are pushing back. I mean, we’ve seen measures, for example, come through in Australia recently and elsewhere as well, restricting access to social media for kids below a certain age. Do you think we will see more and more of that?
Harari: I hope so. Again, it’s not deterministic. I mean, we are still the ones making the rules, maybe not in 10 or 20 years if we are not careful, but at the present moment, we are still making the rules, and we should make the world a good place for humans, not a good place for AIs and for computers. So humans need time to rest. We need to kind of slow down. The whole world is kind of overexcited, is in overdrive, and we need to slow it down. Also, we are flooded by just impossible amounts of information, and most of it is junk information. Humans have learned that they need a food diet, that more food is not always good for you, certainly not junk food. It’s the same with information, which is the food for the mind. With food for the body, you need to take something in, but then you need time to digest it. It’s the same with the mind. If you spend all day just scrolling, putting more and more information in, you do not gain knowledge that way. You really become crazy that way.
Q: I have to ask you this. You use a phrase, a silicon curtain, in the book. What does that mean? Were you describing the divide between AI haves and have-nots? Is that what you meant by it?
Harari: It means two things. Of course, it’s a reference to the Iron Curtain of the 20th century, during the Cold War. So, on one level, we are seeing a silicon curtain dividing China from the United States, Russia from the European Union. Previously, the main metaphor of the information age was the World Wide Web that connects everybody. Now, increasingly, people live inside information cocoons. So the cocoon is now the main metaphor: I have access to certain information that I see, but you don’t, and we now live in different realities. So there is a kind of silicon curtain between us. And there is another silicon curtain between all the humans and the AIs, which increasingly manage the entire system. They know more and more about us, and we know less and less about them. Previously, in media, in news, the most important decisions were made by news editors, who were human beings, so you could understand them. Now who are the most important news editors in the world? They are no longer humans. They are the algorithms that manage Facebook and Instagram and Twitter and TikTok and all these platforms. So it’s very difficult to understand them, because they are not human.
Q: You’ve emphasised in this book, and others as well, that it’s always been shared stories which have sort of united humanity over long sweeps of time. Do you think we need a common story now more than ever? And what can the story be?
Harari: Absolutely. I mean, the thing is that we now have on the planet something that can potentially tell stories better than us. Previously, we ruled the planet, and not the elephants or the horses, because we created stories that brought us together, stories that the elephants and horses couldn’t understand. I’m not just talking about religion, but also about things like money. Money is a story. You now see Bitcoin going up because it’s a story. There is nothing there. It’s just a story that people believe. We could sell and buy elephants and horses for money, and the elephants never understood what this thing was that people were doing. This is why we control them. Now, AIs are creating new stories that we don’t understand, including, for instance, new types of money that we don’t understand. So, if we lose control in the story race, we lose control of the planet.