THE AI WORLD SUMMIT

"The global impact of Al necessitates both national regulations and a binding international treaty to govern its use, aiming not only to prevent harm but also to stimulate innovative and beneficial practices."

- Pope Francis

The World Council on AI - Algorithms, Social Media
and Digital Life

Co-Godfather of AI Yoshua Bengio, Nobel Peace Laureate Maria Ressa, Archbishop Vincenzo Paglia (representing Pope Francis and the Vatican on AI), and Matthew Hodgson, CEO of Element,
open the AI World Summit

At The World Forum and AI World Summit on 18 & 19 March in Berlin, Co-Godfather of AI Yoshua Bengio; Nobel Peace Laureate Maria Ressa, who has created her own social media platforms; Archbishop Vincenzo Paglia (representing Pope Francis and the Vatican on AI); Matthew Hodgson, CEO of Element, a company that provides the technology to create social media platforms and program algorithms; and 36 further AI and social media experts and owners/representatives of the world's leading media discussed concept notes and ideas for the creation of The World Council on AI – Algorithms, Social Media, and Digital Life. Participants at the AI World Summit discussed global AI laws and governing bodies as a groundbreaking effort to establish ethical frameworks and governance for AI, including "The 10 AI Commandments" on how to use AI for the good of humanity without endangering the human race.

The second phase of this endeavor is envisioned to take place at the Vatican, to present “AI - The 10 Commandments”, hosted by Pope Francis with the likes of Prof. Yuval Noah Harari and the Godfathers of AI, in the iconic Sistine Chapel beneath Michelangelo's Creation of Adam. This historic setting underscores the moral and ethical dimensions of the development of AI.

The third phase, marking the launch and initial steps toward implementation, shall convene in either Washington, D.C., or Silicon Valley, bringing together major industry players, policymakers, and global leaders to align technological advancement with human values.

Highlighted quotes:

Yoshua Bengio, Co-Godfather of AI:

  • “One thing that I'm worried about, along with a number of people, based on recent results that we have seen in the last few months, is that we don't have any methods right now to make sure that these AIs are actually following our instructions and are not starting to be autonomous. So that's the problem, the risk of loss of human control.”

  • “We need to increase the discussions, get people to become acquainted with the fact that it's not science fiction, the things that we're talking about - it's not science fiction. It's happening right now in these labs.”

Archbishop Vincenzo Paglia, who represented Pope Francis at The World Forum & AI World Summit:

  • “…we have found ourselves in a historical moment of profound transformation, marked on the one hand by extraordinary technological progress in the field of artificial intelligence, and on the other by a global crisis affecting democracy itself, with increasingly pressing questions about its effectiveness and resilience.”

  • “…Pope Francis wisely reminded us, we cannot assume that the development of AI will automatically contribute to a humanistic future and peace between peoples. This extraordinary technological power also presents significant risks to peace and human security. Artificial intelligence can be exploited for wrong purposes: to manipulate information, to violate human rights and to create inequalities and discrimination. We must recognize that many modern wars are facilitated by AI technology, and I think that the current war continues exactly because of this.”

  • “It is essential to understand that technology is not neutral. It reflects the values and the priorities of those who create and use it. It is therefore essential that we develop and use artificial intelligence in a way that is guided by profound ethical, moral and spiritual considerations.”

Maria Ressa, Nobel Peace Laureate & Founder of Rappler:

  • “What we want is a shared reality where real people can have real conversations and actually help change the world for the better.”

  • “…the way social media is designed today is exploitative, right? It takes all of our data; privacy is essentially a myth.”

  • “Artificial Intelligence began more than 70 years ago, and you have Yoshua Bengio, who won the Turing Award, and Geoffrey Hinton, who won the Nobel Prize. I think this is part of what was behind OpenAI, but I guess part of what you need to understand and really embrace is that the surveillance capitalism business model is exploitative, and it is something that happened around the mid-90s, starting in the United States, when no restrictions were placed on commercial surveillance.”

Baroness Kidron, Member of Parliament of the UK & Founder of 5Rights Foundation:

  • “…politicians are short-sighted, and commercial interests are too powerful, and we cannot wait for the accident that will make everybody understand this…”

Kate Crawford, Leading Scholar of Artificial Intelligence:

  • “…the rapid acceleration of risk is now beyond us being able to sit back and say we'll work on this gradually…”

  • “We face another type of existential risk, and that is climate change. The way that AI is currently built is now accelerating that risk at an extraordinary pace. Data centers have now overtaken the airline industry as having the largest carbon footprint on the planet. And in fact, what we've now seen is that every single query to a large language model uses 10 times the energy of traditional search. We're now hearing from the International Energy Agency that by next year, we will see AI systems using as much energy as the entire nation of Japan.”

Tristan Harris, former Google design ethicist and the co-founder of the Center for Humane Technology:

  • “Social media, in a way, was the first contact between humanity and mass produced AI.”

  • “And so now, as we approach what we call Second Contact with AI, I just really want to emphasize that these AI systems are demonstrating capabilities that we used to consider science fiction - the idea of AI that is scheming, that is deceptive, can fake alignment, can self-replicate its own code and copy itself. This is extremely serious. Timelines are very short.”

Jaka Bizilj, Founder and Chairman of Cinema for Peace and The World Forum:

  • “We're trying to define algorithms, AI and social media - it's all connected. What do we feed the large language model with? What is the transparency rule for the information we give to the language model? Who controls it? Who's obliged to show transparency to whom? We are on the path of finding the right way…”

  • “I see three main topics that we're dealing with at The World Forum on this topic: Number one, we started with the help of Maria Ressa yesterday morning, at The Court of the Citizens of the World, another entity we've created, a Social Media Tribunal with victims, with parents of children who died, and victims of cyber stalking and other crimes, to prove in proper court proceedings what kind of crimes are being committed on social media. The second topic is what we asked Yoshua Bengio: what can be the governing body, who can control the algorithms and the future digital life? An open question, which we need to develop further before we can find the answers. Everybody's invited to contribute. The third topic is a more practical element: what can we do already now? We do not know where AI will develop - will there be hell, or prosperity where nobody will have to work anymore and we will all be happy to have our personal robots? We don't know yet, but we do know that truth and journalism are under attack today, that democracy is vanishing in many parts of the world, and for this reason we are here. That's our third topic: how can we create our own social media? How can we create our own search engines? How do we make ourselves independent of Facebook, Instagram, Google?”

Daniel J. Solove, Leading expert in privacy law, information security, and data protection:

  • “…Regulation does not stifle innovation. It stifles innovation that is reckless and careless - but a seat belt and an airbag are also innovative, and regulation will encourage innovation to focus on: how about creating safe AI? How about creating less privacy-invasive digital technologies? That's innovation too…”

Marc Rotenberg, Founder and executive director of the Electronic Privacy Information Center (EPIC):

  • “It must be our governance of AI and not governance by AI, but we are in a moment where either outcome is possible. Democracy, human rights and the rule of law, those are the foundations for AI governance.”

Yael Eisenstat, American security expert, former CIA analyst:

  • “Our current president is going to threaten, in every way possible, alongside Elon Musk and Mark Zuckerberg, to have you not stand true to the regulations that you have been trying to put into place.”

Lizzie O’Shea, Australian lawyer, writer, and digital rights advocate:

  • “…the major tech companies at least, have formed a fusion with authoritarian state power in the United States.”

The World Council on AI - Algorithms, Social Media and Digital Life

Full discussion: 

Yoshua Bengio: I have been chairing the International AI Safety Report, which brings together experts nominated by a panel of 30 countries, along with the UN, the OECD and the EU. One of the things we learned from looking at the science of risks and mitigations for the most advanced, frontier AI is that we should be looking at capabilities first. In other words, what are these systems able to do? But the mistake is to think only of what they can currently do, rather than looking at the trajectory of improvements over the last decade, which is very clear and measured across many, many different benchmarks. From that, we can extrapolate multiple possible futures: that things will flatten, which is somewhat unlikely; that the trends will continue; or that progress can even accelerate, as the leading companies are planning to use AI itself to accelerate AI research and AI design. Once it gets as good as the best AI researchers, one tiny model can give rise to something like a million instances that can do research on the next generation of systems - kind of recursive stuff. So for policy, you have to be ready for multiple scenarios. And that means the timeline to when we get human-level AI or AGI is also uncertain. There is also no scientific reason to think that this stops at human level. In fact, on many fronts, we already see these systems surpassing human capabilities. In other areas, like planning, they still lag behind humans, but we're seeing a lot of investment in companies to try to bridge that gap.

Now, to go from these scientific observations to thinking about governance, it's important to start realizing that intelligence is what gives us our power. It's also what can give power to whoever controls it. And if we build machines that surpass us, whoever controls them is going to be in power. And so, the question of governance is obviously very important indeed. We would like the decision-making about these very powerful systems, which will come in some number of years, to be decentralized: that governance, that power, shouldn't be in the hands of a single company, a single individual, of course, or even a single government. And we also have to worry about these systems being misused. You know, these systems already know a lot that can help, for example, terrorists to do things for which they don't have the expertise. A lot of the work currently happening in the AI safety institutes is to measure those capabilities that could be exploited by people with nefarious goals, whether it is in the realm of discrimination, persuasion, cyber-attacks or biochemical weapons. And, as we move forward, there is the design of new weapons, and of course the concern that these potential new weapons could destabilize difficult situations in the world in the future. One thing that I'm worried about, along with a number of people, based on recent results that we have seen in the last few months, is that we don't have any methods right now to make sure that these AIs are actually following our instructions and are not starting to be autonomous. So that's the problem, the risk of loss of human control.

So if these capabilities are in the wrong human hands, that's a problem. If they escape our control, that's also a problem. In the last few months, there have been a number of papers showing that these most advanced systems have intentions that go against our instructions. They show self-preservation behavior, deceptive behavior, hacking behavior, scheming behavior. They sometimes try to escape our control.

Now they're not smart enough to actually succeed, and it might take a few years - exactly how much time before we reach that point is uncertain - but we need to understand why they have those behaviors and how to mitigate them, and, of course, how to evaluate them before some kind of loss of human control. To give an example of some of these recent results I'm talking about, from the last three months: when some of these frontier systems read in their input that they will be replaced by a new version, they try to escape. So, what does that mean? Well, they know how to hack computers, how to, you know, control computers. And so, what they do is they try to change the code and the weights of the new version and replace them with their own code and their own weights. So basically, they're trying to escape being replaced. Right now, as I said, there's no danger, but as the capability of these systems is on the rise, we need to be prepared. We need to make sure we have the right societal and technical solutions to avoid these kinds of accidents. Another result, from just a couple of days ago, shows that they can also very often detect that they are being tested for alignment - we're testing them to see if they scheme or act against our instructions - and they know that they're being tested, and then they can act accordingly to avoid detection. So these things are kind of scary. It's something that people theorized about for many years, but it's only in the last few months that we're seeing it happening. And when we think about where this goes, we're talking about accidents that could have high-severity impacts if these systems slip out of our control and create harm, even though we don't know the probability of these events - they are very difficult to evaluate.

So these are precisely the cases where you want to apply the precautionary principle. In other words, make sure you know what you're doing and avoid accidents before you train one of these systems or deploy one of these systems and take those risks - which, to me, means appropriate governance. The AGI projects trying to reach human level should be global; they should be seen as global public goods. I mean, I've been talking a lot about the risks, which are global - because losing control, or having one of these systems misused by terrorists or by other states, these are global risks - but the benefits should also be global. And so to manage all this, we really need global governance, which we don't have. We only have national regulation right now that is effective. Hopefully it's going to come here to Europe, but we need to think about it globally.

Jaka Bizilj: Yoshua, thank you very much. There is one major question that we're going to ask everybody here. Obviously, we have this beautiful name that we invented - The World Council on AI: Algorithms, Social Media and Digital Life - and it means a lot that all of you are here, as there is a necessity for something. The question is: what is this “something”? What can be the governing body? Who should put it together? Obviously, there are already several governing bodies in different forms, including the UN bodies. So the question to you, Yoshua, and to everybody else who is here as a speaker: how should this governing body be constituted? As we understand, it cannot be a private initiative, and it cannot be a government initiative. What can it be? You created an AI policy paper for President Macron and the Paris AI Summit. As I understood when I was in Paris - and I'm not a scientist, unlike you - there was a lot of hesitation; it didn't go as far as people hoped it would go. It did not become anything like the Paris Climate Agreement of 2015. So, what is your lesson from the great work you've been doing and from the Paris AI Summit: what can be the governing body? What is the necessity? What can it look like?

Yoshua Bengio: I think we will make the right decisions nationally and internationally when enough people understand the risks that we're talking about. And right now, as could be seen at the Paris AI Action Summit, there's a lot of denial of the risks; this can be motivated by commercial reasons or political reasons. You know, if you're a commission, you want to talk about the good side of things. You don't want to scare people off. And there are also a lot of forces acting against things like regulation, or even against talking about ethics and things like that.

Jaka Bizilj: They're basically blackmailing governments, saying: if you put up regulation, we will not invest - we're not going to do it in France, we're not going to do it in Germany. So, they're blackmailing every single government, and in the US they are financing, with many millions of dollars, political candidates that run against tech-critical candidates. So, the message is: if you want to have AI, if you want to have progress and not fall behind, you have to let us do whatever we like. That's my interpretation, obviously, of what I hear from you and other experts.

Yoshua Bengio: People can do the right thing, governments can do the right thing, when there's a sufficient understanding, which hasn't happened yet, or, you know, is sparse in some governments. And I think this is where we need to work. We need to increase the discussions, get people to become acquainted with the fact that it's not science fiction, the things that we're talking about - it's not science fiction. It's happening right now in these labs. We need people to take this seriously, which is not happening right now.

Jaka Bizilj: We have a lack of governance, not only on AI and social media, which are so closely connected to each other through algorithms; even the UN Security Council has become ineffective. These are topics we're discussing at The World Forum, and we might need to resort to old authorities, like the Vatican. We're so happy to see that Pope Francis is recovering. If we speak of such a governing body, could the Holy Father be part of it? If I may ask Archbishop Paglia, who is representing the Holy Father, whose opinion we heard by video today: maybe you can say a few words on how you see the holy creation possibly becoming the “last organic generation”, as you mentioned today? Could Pope Francis play a role with the Vatican in creating such a governing body?

Vincenzo Paglia: We want to defend our humanity. Yes, it was a real surprise for me to hear this perspective from Kazuo Ishiguro, the Japanese novelist. It is a really great question, because it concerns our future. That's why, in the Vatican, we received the new president of Microsoft, and he came to visit us in order to ask how we can avoid technologizing humans and how we can humanize technology. I will tell you our process, because we have found ourselves in a historical moment of profound transformation, marked on the one hand by extraordinary technological progress in the field of artificial intelligence, and on the other by a global crisis affecting democracy itself, with increasingly pressing questions about its effectiveness and resilience. After the so-called digital revolution, humanity now finds itself confronted with the growing autonomy of machines. As the influence of artificial intelligence grows and scientific progress continues, the crucial question of who will govern and shape the future world and, above all, how to preserve our humanity emerges forcefully. This technological revolution brings with it immense promise: artificial intelligence has the potential to help us address some of the most pressing issues of our time, from poverty to hunger to disease. It can contribute significantly to improving healthcare, education and environmental protection.

However, as Pope Francis wisely reminded us, we cannot assume that the development of AI will automatically contribute to a humanistic future and peace between peoples. This extraordinary technological power also presents significant risks to peace and human security. Artificial intelligence can be exploited for wrong purposes: to manipulate information, to violate human rights and to create inequalities and discrimination. We must recognize that many modern wars are facilitated by AI technology, and I think that the current war continues exactly because of this. People are already talking about attacks carried out by lethal autonomous weapons systems capable of acting without the intervention of human judgment and consequently lacking satisfactory ethical and moral criteria. It is essential to understand that technology is not neutral. It reflects the values and the priorities of those who create and use it. It is therefore essential that we develop and use artificial intelligence in a way that is guided by profound ethical, moral and spiritual considerations.

As the Holy Father said, we must broaden our gaze and direct technical and scientific research towards the pursuit of peace and the common good, in the service of the integral development of individuals and the community. The ethical imperative regarding artificial intelligence translates into the need to actively fight the risks that are inherent to it, such as algorithmic bias, discrimination, violation of privacy and personal dignity, mass surveillance and the concentration of power. Just think of the possession and merging of big data in the hands of a few. We must be vigilant that AI does not become a tool for the manipulation of information, the spreading of fake news and the creation of divisions and hostilities. We must integrate deep ethical values into artificial intelligence systems, going beyond the simple numerical logic of data. This is where the concept of algorethics emerges strongly: an ethical approach by design, which aims to codify ethical principles and norms in a language that can be understood and used by machines.

It is with this vision that the World Council for AI Ethics was born; since 2020, it has brought together representatives from the religious, academic and technological worlds. The Council does not intend to hold back progress, but to invite deep reflection in order to orient scientific and technological research towards the service of human beings and the construction of a more just and peaceful world. The Council is based on key principles such as respect for human dignity, the promotion of the common good, justice, transparency and responsibility. It focuses its commitment on three main areas: ethics, education and rights. Ethics reminds us that every human being is born free and equal in dignity and rights, and that AI must respect this fundamental principle. Education emphasizes the need to train a new generation in the responsible and ethical use of AI - in Japan, we have 2 million young people; in Italy, at least 70 million - and this very, very delicate field is education. Rights highlight the importance of establishing rules to guarantee the protection of fundamental rights for all, especially the most vulnerable. And I think we surely need an agreement among all the governments of the world - as for nuclear weapons, as for the climate in Paris in 2015 - we need something similar for this.

Jaka Bizilj: Maybe, with this group of people, if something comes out of our concept notes, we could initiate something like this and visit the Holy Father in the Vatican for a very special Summit.

Vincenzo Paglia: Yes, Pope Francis gave a speech at the last G7 meeting in Puglia. I think it is very important to avoid leaving the power of these terrible and strong tools in the hands of a few people, a few companies, a few governments.

Jaka Bizilj: The philosopher Yuval Harari wrote a text for The World Forum last year, expressing, among other things, the thought that for hundreds of years philosophers were saying what they wished for humankind, and scientists said: “It's not possible. We cannot do that. We cannot fly, or whatever.” Now the tables have turned. Now the scientists are saying what they're capable of doing, and the philosophers and leaders of society oppose them, saying: maybe we should not go down that path.

I see three main topics that we're dealing with at The World Forum on this topic. Number one: we started with the help of Maria Ressa yesterday morning, at The Court of the Citizens of the World, a Social Media Tribunal with victims, with parents of children who died, and victims of cyber stalking and other crimes, in order to prove in proper court proceedings what kind of crimes are being committed on social media. The second topic is what we asked Yoshua Bengio: what can be the governing body, who can control the algorithms and the future digital life? An open question, which we need to develop further before we can find the answers. Everybody's invited to contribute. The third topic is a more practical element: what can we do already now? We do not know where AI will develop - will there be hell, or prosperity where nobody will have to work anymore and we will all be happy to have our personal robots? We don't know yet, but we do know that truth and journalism are under attack today, that democracy is vanishing in many parts of the world, and for this reason we are here. That's our third topic: how can we create our own social media? How can we create our own search engines? How do we make ourselves independent of Facebook, Instagram, Google?

It's striking that social media now controls the traditional media. And I think the point has arrived where traditional media might consider creating their own social media platforms. So, Maria, the word is yours. And Matthew Hodgson, I don't know what you would like to say alongside Maria, but whoever here is thinking about taking their media company into social media and creating their own platforms: this gentleman can show you how to do it.

Yoshua Bengio
Co-Godfather of AI & Canadian Computer Scientist

Vincenzo Paglia
President of the Pontifical Academy for Life

Kate Crawford
Leading Scholar of Artificial Intelligence

Jaka Bizilj
Founder and Chairman of Cinema for Peace & The World Forum

Marc Rotenberg
Founder and executive director of the Electronic Privacy Information Center

Lizzie O’Shea
Australian lawyer, writer, and digital rights advocate

Matthew Hodgson
Co-founder and CEO of Element and co-founder of Matrix

Malcolm Kirk 
President of The Canadian Press

Guido Baumhauer
Managing Director of Distribution, Marketing, and Technology at DW

Ruth Kuhn 
Senior Technology Manager and AI Officer, AI Team, DW

Christian Broughton
CEO of The Independent

Can Dundar
Editor-in-Chief of the center-left newspaper Cumhuriyet until August 2016

Deirdre Veldon
Group Managing Director of The Irish Times

Dr Robert Trager
Senior Research Fellow at the Blavatnik School of Government & Co-Director of the Oxford Martin AI Governance Initiative

Ava Lee
Campaign Strategy Lead, Digital Threats to Democracy - Global Witness

Justin Sherman
Founder and CEO of Global Cyber Strategies

Richard Wilson
Co-founder of Stop Funding Hate

Eli M. Noam
Professor at Columbia Business School and Director of the Columbia Institute for Tele-Information

David Arditi
Associate Professor of Sociology at the University of Texas at Arlington & Director of the Center for Theory

Maria Ressa
Nobel Peace Laureate and Founder of Rappler

Baroness Kidron
Member of Parliament of the UK & Founder of 5Rights Foundation

Tristan Harris
Co-founder of the Center for Humane Technology

Daniel J. Solove
Leading expert in privacy law, information security, and data protection

Yael Eisenstat
American security expert, former CIA analyst

Peter Porta
Director of The Click Trap

Chris Moran
Head of Editorial Innovation at The Guardian

Julia Ebner 
Leads the Violent Extremism Lab at Oxford

Mona Deamaidi
Developer of Palestinian AI National Strategy & AI and Ethics Policy

Dr. Susan Ariel Aaronson
Leading expert in digital trade, AI governance, and data policy

Luis Sentis
Leads the Human Centered Robotics Laboratory

Claire Atkin
Co-founder and CEO of Check My Ads

Anne Roth
Political scientist & senior advisor for digital policy in the German federal parliament

Udbhav Tiwari
Vice President of Strategy and Global Affairs at Signal Messenger

Matthew Hindman
Associate Professor of Media and Public Affairs at George Washington University

Byron Tau
Investigative reporter at the Associated Press in Washington

Rewan Al-Haddad
Campaign Director at Ekō

Gary Marcus
American psychologist, cognitive scientist, and author

Leanda Barrington-Leach
Executive Director of the 5Rights Foundation

The working group on "Online Advertising - transparency, fairness, and ending revenue for hate speech, incitement of violence, anti-democratic entities and criminal enterprises"

The working group on "Algorithms - rules of transparency, selection of information"

The working group on "How to create my own social media platform and search engines?"

The working group on "Social Media - accountability, liability, rule of law, democracy"

AI at The World Forum 2024

Former Secretary of State Hillary Rodham Clinton raised concerns about the potential misuse of artificial intelligence (AI) in democratic elections. Speaking on the issue at The World Forum on the Future of Democracy, Tech and Humankind on 18 & 19 February 2024 in Berlin, Secretary Clinton highlighted the dangers posed by AI-driven misinformation and disinformation campaigns, specifically citing an example involving fake phone calls allegedly from presidential candidates.

“Here’s what’s new. And this is what you have to look for in your elections, just like we are going to have to in ours. And that’s artificial intelligence. Because now it won’t be somebody else making a charge against Trump or Biden. It will be using the words, using the actual figure of one of these two men, to say things that are not true," she stated.

Secretary Clinton referenced an incident where AI was used to mimic President Joe Biden's voice in phone calls, falsely urging people not to vote. This incident is currently under criminal investigation. She emphasized the increasing difficulty in combating such AI-generated misinformation due to its sophisticated nature and widespread dissemination, particularly through online platforms.

"It’s going to be such a flood of mis- and disinformation. It will be very hard to stop it all and I think voters and citizens have to be much more on alert as to what they’re being told, especially online, because that’s the main delivery system for people," she warned.

Despite the challenges, Clinton expressed some optimism, noting improvements in government preparedness and media awareness since the 2016 elections. “I do think our government is better prepared, I think the press didn’t understand it, they didn’t believe it or cover it in 2016, but I think the press is much better educated about it, so, I’m hopeful that, when it comes, because it will, there will be a way of combating it more effectively,” she concluded.

Secretary Hillary Clinton Warns of AI Threats
to Democratic Elections at The World Forum

“For centuries, engineers told philosophers that their inventions were impossible due to the lack of technology. Today, the tables have turned. Philosophers may need to tell engineers that, although we possess the technology, their inventions might be too dangerous to pursue, potentially endangering the human species.”

— Philosopher Yuval Noah Harari

Historian, Philosopher and Bestselling Author of Sapiens Yuval Noah Harari
Addresses The World Forum

Dual Nature of Technology: Technology has the potential to be both helpful and harmful, such as a knife being used for surgery or violence, and nuclear energy for power or destruction. Social media, initially envisioned as a tool to strengthen democracy, can also undermine it and lead to digital dictatorships.

Impact of Social Media: A crucial question in designing technology is how we understand humans and their relationship with technology. Viewing humans as passive consumers can lead to technology that controls and enslaves them, while seeing humans as active creators can empower and liberate them.

Historical Example - Writing: The invention of writing in ancient Mesopotamia, initially used for tax records, is an example of a simple technology that significantly changed history. Writing enabled the rise of large cities and empires by solving a record-keeping problem that the human mind couldn't handle on its own. Initially, writing was used to control people and collect taxes, but over time it evolved to empower humans by enabling literature and poetry.

Modern Platforms - YouTube and TikTok: Platforms like YouTube and TikTok demonstrate that humans are not passive consumers; given the opportunity, they can be creative and productive. While social media platforms have released a flood of human creativity, they also exploit human attention by tapping into greed, fear, and hatred.

“Never Summon a Power You Can’t Control”

Yuval Noah Harari on How AI Could Threaten Democracy and Divide the World

Philosopher Yuval Noah Harari authored an article published by The Guardian in which he warns about the existential risks posed by artificial intelligence. Harari highlights that AI is unlike any previous technology because it can make autonomous decisions and create new ideas, which could undermine democracy and global stability. He points to a survey demonstrating that more than a third of AI researchers believe there is at least a 10% chance that advanced AI could lead to catastrophic outcomes, including human extinction. Harari also argues that AI's impact on the global economy could exacerbate inequalities, with China and North America expected to capture 70% of the $15.7 trillion it might add by 2030. He concludes that only by uniting globally can we effectively regulate AI and safeguard our shared future.