OPENING MESSAGE BY PHILOSOPHER YUVAL NOAH HARARI

Let me begin my short talk with a scene of a normal morning in 2025. As you wake up in one of the world's big cities, the electricity grid is already busy balancing the load. Hospital systems are preparing patients for the morning rounds. Buses and trains are being rerouted. The cameras on the streets and the robotic arms in the warehouses don't greet you, but they are already making choices that will shape your day.

Many of these everyday choices are being made by computer systems that learn, adjust, and take initiative. We call this technology AI.

AIs are not just automatic tools. They are agents. They have the power to make decisions by themselves and sometimes even to invent entirely new ideas by themselves. When such AI systems move into society as new members, the rhythms of life change, quietly at first.

In finance, AIs develop new kinds of portfolios and trade patterns. In energy and supply chains, dispatch and pricing begin to slip past human intuitions. In the military chain of command, warning and response are compressed into windows too short for a human conversation or even a human thought to catch up. In justice and education, the question of who gets a chance and who doesn't is shaped by hidden algorithms most of us never see and can hardly understand.

Culture, of course, is another very big part of this story. AIs produce the materials from which we build our identities.

I don't oppose technological change. Technology brought us better health, more knowledge, light in the dark, and the means to connect with one another like never before. But as a historian, I'm worried about the pace and method of change. In history, the biggest problem with change is often not the destination; it's the way there. Humans are incredibly adaptive entities. But we need time to adapt, and we need trustworthy institutions that will make adaptation possible for us, especially for the weakest members of society.

Every time a powerful new technology appeared in history, societies needed time to develop and test the matching social kit. The printing press did not deliver enlightenment in the year it was invented. At first, the printing press actually unleashed a torrent of noise, lies, and extremism. Only gradually did editing and fact-checking, catalogues in libraries, schools and research institutions, publishing norms and public debates produce wisdom out of the flood of printed information.

Similarly, creating beneficial industrial societies required more than just the steam engine. It also needed corporate laws, labor unions, ecological regulations, welfare safety nets, and so much more. Without such social mechanisms of adaptation, industrial technology by itself would only have produced deeper inequality, ruthless competition, and more extreme exploitation.

What makes AI different from all previous technologies is that it touches the central nervous system of society. Intelligent machines are now learning how to manage and reshape the operating systems of banks, militaries, entire countries, and even religions. The danger we are facing with AI isn't just a bad person deciding to press a bad button. Instead, the danger is an invisible process happening all around us. A line of code inserted into a clearing system, a path burned into tactical logic, a scoring rubric embedded at the gateway of hiring or lending.

Science fiction conditioned us to fear the big robot rebellion. The real danger is quieter and scarier: the growth of a digital bureaucracy in which decisive power shifts from humans we can question to opaque algorithms we cannot even see. As these algorithms become faster and more powerful, things become more comfortable. Everything seems to be working smoothly. Everything seems to be improving, until a tiny bias cascades into a financial catastrophe, into a military miscalculation on a frontier, or into a collapse of human identity and shared reality.

When I speak with friends in the tech world, I hear the same line again and again. "We know there are risks," people tell me, "but we cannot slow down because the others will not slow down. We worry about the recklessness of our competitors. So we must move faster, be bolder, get on the train first."

I can sympathize with these feelings. But there is something I don't understand. People hesitate to place trust in their human competitors. Okay, sounds reasonable. But why are the same people so ready to naively hand immense power to an unfamiliar nonhuman intelligence? How can they be sure it will always remain gentle, restrained, and reliable?

If trust among humans is still fragile and difficult to verify, shifting more power to AIs is not a cautious move. It is reckless in the extreme. If we cannot trust even our fellow humans, why rush to entrust an army of nonhuman intelligences with the most fragile parts of our society? With the attention of children, with the bargaining power of ordinary workers, the credibility of elections, the management of public conversations, the threshold between peace and war.

Meanwhile, the voice of philosophers and historians like myself is often misunderstood as a call to stop progress. In truth, it is a call to understand what progress really means and how it is really achieved. Speed alone isn't progress. A car that drives at 100 kilometers per hour without any traffic regulations or any braking system isn't progress.

So how do we make real progress? Let me offer a few simple reminders from human history.

One simple fact: human strength never came from isolation. It came from cooperation with strangers and with what is outside us. To live, we breathe in and we breathe out. We receive something foreign from the air outside us, and we give back the air that was deep inside our lungs. Individuals do it. Nations do it too. Every nation grows by exchanging ideas, goods, and methods with foreigners. If you cut all connections with others and rely only on yourself, it doesn't make you strong. Ultimately, it suffocates you. Bringing this ancient lesson into the age of AI means building verifiable global commitments instead of just racing to see who is faster.

Another fact: what deserves our worry is not technology itself, but the impulse to deploy technology without guardrails in the name of commercial advantage. Any system that truly reshapes human society should not be "launch first, govern later." History has shown more than once that speed and safety can coexist, but only if we close the loop of self-correction.

Advanced technological societies must have the means to speedily identify and correct their own errors and biases; only then can they run very fast and safely. When a baby learns to walk, what she really learns is how to correct her own mistakes fast enough. She takes a step, she falls down, the body learns and adjusts. She takes another step, falls down again. The body learns and adjusts. Only when the body can learn and adjust on its feet does the child start walking. And once that self-correcting mechanism is in place, the child not only walks, she runs.

If we try to run with AI before we have the ability to identify and correct these systems' inevitable mistakes, the price of this speed will be paid by those who can least afford it.

One last historical fact: memory is important. Not nostalgia for a past that never was, but memory as a steering wheel. Memory is the mechanism for digesting and telling the story of what is happening to us. Memory is the mechanism for making sense of even failed experiments and wrong turns, and it is the capacity of societies to acknowledge wrongs and make amends. As AIs take over both the process of decision making and the process of narrative production, we must protect the human ability to remember and tell our own story. If we entrust our memory to a nonhuman intelligence, nothing will remain of us.

Some will probably ask, "Are you calling for slowness?"

No, I'm calling for moving with memory. I understand ambition. I understand competitive pressure. Without them, we would not have had modern science. But we should not push competition beyond the edge of the human map, to where human understanding fails and human memory goes blank. We should not treat crossing that boundary as itself a kind of trophy.

Help people move faster and help people be at peace with the changing nature of reality, but don't force people to go so fast that they lose their bearings and are overwhelmed by anxiety.

Please heed the condensed lesson of centuries. When humans are overwhelmed by anxiety, when humans feel that everything melts into air, humans seek solidity by holding tight to their deepest pain and to their deepest hatreds.

In the age of AI, we must allow time for human memory and for building bonds of trust and affection between humans. Measure our progress not by the speed of our technology, but by the strength of our cooperation and by the depth of our compassion.

Thank you.