Why did I sign an appeal to halt AI development?

Regardless of whether we see benefits or an existential threat in the latest AI technologies, the gravity of the challenges these technologies bring is undeniable. Over the past few decades, technological advances have far outpaced reflections on their possible consequences. This need not and should not be the case. That technologies are not solely a source of good is becoming apparent today as we begin to perceive the destructive impact that certain digital technologies have on our democracies, security, and mental health. In the face of recent technological advances, such as artificial intelligence, we have an opportunity to avoid mistakes and at least try to redirect the development of these technologies toward authentic benefits, while at the same time mitigating risks. In this context, I decided to sign an appeal to temporarily halt work on AI systems. I also encourage others to do so. Below I present the main rationale that guided me.

The right to understand

I believe that each of us has a fundamental and inalienable right to understand the meaning of various elements and processes that make up the world around us. This is a condition for conscious participation in reality, including making informed individual and collective choices. We can voluntarily waive this right, but any external attempt to take it away should be treated as an attack on our freedom.

In my opinion, the contemporary threat to our right to understand lies first and foremost in technology, and in particular in the speed at which it is developing. The cycle of technological development has diverged from the cycle of cultural development. We have not kept up with creating structured meanings for these technologies. We do not quite know what they mean, or how they affect us, our culture, economy, and society. Yes, we make attempts to understand, but these are most often made after the fact, i.e. after the technology has already become a part of our reality. Additionally, the process of social sense-making has its own dynamics. It requires building appropriate narratives, conducting public debate, processing conclusions into specific actions (e.g. legislation), and so on. This process often stretches over years. Meanwhile, at least in certain technology areas (e.g. digital services and content), technology cycles are already much shorter.

The effect is that the right to understand becomes an illusion. We cease to grasp the significance of processes that fundamentally shape our lives and create the framework within which future generations will operate. This also applies to AI technology. We have a right to understand whether the latest creations of AI engineers, such as ChatGPT, are still just a “calculator,” only this time processing words rather than numbers, or perhaps the beginning of systems that will achieve some kind of autonomous consciousness. Even if it is just a new variety of “calculator,” we have the right to consider how it might affect us, our children, education, democracy, and so on. Crucially, we have the right to understand this before these systems enter production.

The paradigm of technological development

The increasing loss of the ability to control technological progress follows from what I call the paradigm of technological development. It is one of the most characteristic elements of modernity. Its essence is the prevailing belief in the validity of at least the following three claims regarding technology:

  • Technological development is essential for economic development.
  • Technological development is necessary to increase the well-being of humanity (understood as broadly as possible, to include material well-being, convenience, peace of mind, etc.).
  • In principle, technological development is impossible to stop.

My purpose is not to test in depth the validity or invalidity of these claims. I just want to draw attention to the danger of uncritically accepting these beliefs. I believe we have the right to ask questions about whether certain technologies actually contribute to prosperity, and what impact they may have on our democracy, culture, education, dignity, rights, and psychological well-being. It is clear that in addition to a range of technologies that add significant value, we are also creating solutions, or specific applications of existing solutions, with a disruptive or potentially disruptive impact on us and our societies and culture. Even if they are aware of the risks, supporters of the technological paradigm often assume that the balance will always be positive, i.e. the sum of benefits will outweigh the sum of the negative implications of technologies. But this is not always the case.

We are well aware that AI is an ambiguous technology—in addition to the potentially remarkable benefits it can bring to our world, it also carries a number of fundamental risks. The creators of these solutions openly acknowledge this as well. So why should we sit back and passively watch events unfold? The scale of the potential threats created by AI is so significant that passivity would be a display of irresponsibility and irrationality, and an abdication of our ambition to shape the world in which we live.

By bringing work on AI to a halt, we could prove that technology’s detachment from the broader socioeconomic context is not irreversible. Creating a framework in which the creation and implementation of innovations is subject to a certain degree of social control is in no way equivalent to negating freedom and creativity. It would only restore the necessary balance between the various socioeconomic vectors needed for integral development. To take another example, administrative control over the introduction of new medicinal products or vehicles slows down development but guarantees that development is stable in the long term. Does this limit the freedom to create? I don’t think so. Rather, it causes this freedom to be exercised in a safe and civilised manner.


So, what are the major threats and challenges that AI potentially brings, making it worth revising the technological development paradigm?

The human being

It seems to me that most important is the threat that AI systems pose to a certain anthropological vision around which our entire world is built. At the same time, it is a threat whose reality is difficult to assess, as it requires an in-depth understanding of the capabilities and importance of both current and future AI solutions. However, I believe that even if we are talking about something that is abstract at this moment, it is still worth visualising the challenge. Regardless of where the development of AI systems currently stands, there is a certain target towards which some developers are clearly moving. This is the creation of artificial general intelligence (AGI) systems, whose cognitive and intellectual abilities are expected to surpass human abilities. This is accompanied by the belief that human intellectual capacity has limitations, and we must exceed them if we are to make further breakthroughs, increase the efficiency of certain processes, or simply create a better world.

But this ambitious vision is also marked by a potentially extremely difficult dilemma. The creation of machines that surpass humans in cognitive abilities and participate actively in the world may result in the emergence of new actors in our reality who, with their superior abilities, could quickly take control of it. It is particularly intriguing, and probably frightening for many, to imagine a reality in which these new actors autonomously create their own goals and take action to achieve them. Someone might say that the simplest way to avoid these risks is to design these systems so that they cannot create their own goals. But it is potentially in the autonomy of these systems that their greatest value lies. If their goals were to continue to be set by humans, they would continue to be burdened by the limitations of the human intellect, while the whole idea is to transcend those limitations, which may require giving machines the ability to set goals autonomously (e.g. research goals).

Therefore, at some point, further development of AI may present us with a dilemma, which boils down to whether, in the name of progress, we want to abandon the anthropocentric world, the world in which humans set the basic rules and goals. If indeed a condition for progress is to transcend the limitations of the human mind, then sooner or later we will face the temptation to entrust this progress to new actors more capable than us. But by then, it may no longer be our own anthropocentric world. I don’t know about you, readers, but I would feel much more comfortable if I could make a free and informed choice in this matter.


If we actually believe that technology has become so alienated that we are unable to control it, then unfortunately this is a straight path to catastrophe for civilisation. Civilisations have existed as long as they have been able to provide a baseline ordering of reality, by which I mean the predictability of the elementary framework for socio-economic interactions. A prerequisite for the existence of such a framework is a minimal level of control over the processes that take place within a given civilisation. The lack of any control over technological development, or acceptance that control can only take place after the fact, naturally creates a risk of breakdown of the elementary order.


Whole articles could be written about how existing digital technologies are already modifying our culture. We are beginning to discover, for example, how social media is changing the way we reason, our ability to build narratives, and the way we interact with others. We can see to what extent the internet has opened up access to knowledge, but also to what extent it has turned the space of our mutual interactions into a public arena dominated by hate speech. All of this changes us in fundamental ways, and certainly not always good ones.

New AI-based tools, such as language models that allow AI to be used in work on text (both written and spoken), could compound these changes. The availability of solutions such as ChatGPT is forcing us, for example, to revise our approach to education. We need to rethink the goals of the education process in general: what skills we want to teach, and why. Concerns are emerging that generations educated by ChatGPT will lose many of their independent critical thinking skills. Language models also risk intensifying many of the negative phenomena already present in the digital world. Thanks to these technologies, the possibilities of manipulating content, creating hidden messages, or carrying out fraud seem almost limitless.

Perhaps, if we had more time to react and adapt, some of the challenges we face today could be avoided or their negative effects would be limited. By reducing the pace of work on AI systems, we create an opportunity for ourselves to avoid mistakes from earlier stages of development of digital technologies. We could properly prepare for change.


Culture is directly related to the way we arrange the common space of our interactions. In the wider Western world, we do this democratically. Democracy is not just a way of electing those who govern. It is a much broader concept, one which presupposes the existence of an appropriate cultural context. Entrusting the choice of authority to the general public is likely to lead to better results than alternative political models only if that choice is made by free, informed individuals who understand the significance of their choices and are able to conduct structured debates about public affairs.

If instead of enhancing the abilities of individuals crucial for the functioning of democracy, new technologies cause those abilities to atrophy, democracy can quickly turn into a self-caricature. This could happen if technologies cause individuals to lose their ability to think critically and independently and participate in public debate. Such individuals are very easy to manipulate, and those who would like to manipulate them will have access to tools enabling this on an unprecedented scale.


AI technologies are probably not neutral for economies. They offer an opportunity for an unprecedented increase in efficiency, but this increase, especially if it arrives as a shock, could lead to a revolution in the labour market and radically deepen social stratification.

This is a simple recipe for social revolution. Again, in this context, it is highly desirable to gain time to allow for in-depth study of the possible economic implications of AI systems and at least attempt to create tools to limit the negative impacts of socio-economic changes.


This theme is the most difficult to explain, as the claim that innovation can slow down innovation is at first glance completely counterintuitive. The risk becomes a little clearer when we realise that innovation owes its development to date to the broad socio-cultural-economic context in which it has been able to flourish. Human creativity, necessary for innovation, could bear fruit only under a model of social organisation that guaranteed its existence. This model consists of all those elements I mentioned above, such as democracy, free individuals with an understanding of the world, and a free and fair economy providing the largest possible group of actors with the opportunity to exercise their creativity. All these elements are closely intertwined.

Limiting individuals’ creativity by outsourcing more and more intellectual processes to machines, manipulating public debate, and, as a result, bringing about the collapse of democracy or an unprecedented accumulation of capital in the hands of a few are just examples of processes that could potentially accompany the development of AI and upset the necessary social balance. Without creative individuals and political and economic freedom, innovation will probably continue to occur, but it will not necessarily be free. Therefore, by allowing the further uncontrolled development of technologies, we may ourselves be undermining the conditions for their future emergence.

It is not about stopping development

Finally, I would like to warn against a misunderstanding that can easily arise from what I have written. While protesting the uncontrolled development of AI technology, I am not opposing technological development. Nor do I dream of returning to a centrally planned and controlled world. On the contrary, I believe that we have reached a point where, without the support of technology, we will not be able to maintain and improve the complex structure we have created. Moreover, history proves that the change that innovation brings can be desirable and invigorating, and can contribute to increasing—to put it somewhat grandly—“the sum of good in the world.” Therefore, in principle, the system should provide opportunities for the creation of innovations and remain inherently open to change.

However, it cannot be assumed in advance that such an attitude must translate into an uncritical attitude toward innovation. Many technologies are morally ambiguous. The point is not to write them off in advance. It is a matter of organising the process of developing innovations so as to direct development as much as possible towards social and individual well-being and to limit the negative impact of innovations. This requires an elementary ability to control the development process and to draw red lines warning against entering areas where innovation-driven change may shake the foundations of our order.

At the same time, I am not saying that such control should be exercised exclusively by a centralised administrative apparatus. On the contrary, at the moment, it seems to me that the apparatus is no longer capable of achieving this goal on its own. Therefore, new ways should be sought to restore social control over technological progress.

The place where we find ourselves today, described by many as the happiest time in the history of mankind, was reached not only through technologies, but above all through a certain unique vision of humanity and the way we interact with each other as a result of that vision. If we want to protect this place, we cannot focus solely on technology, and we certainly need to raise the alarm when technologies, themselves the fruit of this unique approach, begin to threaten it.

Krzysztof Wojdyło
