Over the past 20 years we have experienced technological developments that have dramatically changed our way of life. Like all technological advances, these developments help us perform various tasks more precisely, more efficiently, and faster. In short, they save us time.
However, when an invention solves one problem, it is quickly used for other purposes. It becomes applicable in completely different areas and leads to results not predicted and often not at all favoured by the inventor.
Neil Postman, an American philosopher and author of Technopoly: The Surrender of Culture to Technology, published in 1992, cites the example of a clock invented by Benedictine monks. As Postman explains, the monks invented the clock “in order to be more precise in performing their canonical prayers seven times a day,” but “had they known that the mechanical clock would eventually be used by merchants as a means of establishing a standardized workday and then a standardized product—that is, that the clock would be used as an instrument of making money instead of serving God—the monks might have decided that their sundials and water clocks were quite sufficient. Had Gutenberg foreseen that his printing press with movable type would lead to the breakup of the Holy Roman Catholic and Apostolic Church, he surely would have used his old wine press to make wine and not books.”
The same can be said about the most recent technological inventions in the field of so-called “algorithmic intelligence”. Like previous technical breakthroughs, algorithmic intelligence generally makes our lives more efficient. It has, however, a more significant and originally unforeseen application: it is used as a tool for deciding about people’s lives. Banks use it to grant loans, universities to award scholarships, public administrations to impose fines, and soon we can expect algorithmic intelligence to be used in courts to replace (not merely assist) judges.
But the practice of eliminating humans from decisions about other humans will, in effect, eliminate something pivotal for civil society: the decision-maker’s sense of responsibility for his or her decision and its consequences, a trait normally associated only with humans.
Taking a decision means being responsible for it. This sense of responsibility cements civil society. Our personal and professional integrity is a function of our sense of responsibility, and our sense of responsibility is a function of our ability to pass judgment and thereby take a decision. When making decisions that affect other people’s lives, we must know whether or not we are harming someone and causing them suffering. Harming people and making them suffer is normally accompanied by a sense of guilt. Guilt is an unpleasant emotion, and it in turn reinforces the sense of responsibility.
There are always people behind the use of any technology, and those people can act either in good faith or in bad faith. The danger is that algorithms will allow the people behind the technology to relieve themselves of their sense of responsibility, just as hiding behind the letter of the law and its literal interpretation, or behind formal procedures and chains of command, allowed them in the past to refuse to apply personal judgment and to escape the psychological burden associated with it.
That is why algorithms of the kind Cathy O’Neil has called “weapons of math destruction” can be detrimental not only to the people they directly affect, but to the functioning of civil society as a whole: they dilute or entirely eliminate the necessary sense of responsibility and facilitate abuses in decision-making processes.
There are many examples of socially beneficial technologies being used to harm and abuse entire communities on a massive scale because responsibility for their application is not properly attributed. Big Data, which developed from a tool for targeted advertising into an instrument applied to a wide range of commercial and chronic social problems, is used to manipulate people or, in the hands of institutions of state power, to manage people without their consent or even their knowledge. The people behind these abuses are able to remain anonymous, free from legal liability and, more importantly, unburdened by any psychological sense of responsibility for the consequences of their decisions and acts.
It is our paramount responsibility as lawyers in this day and age to understand these technological processes and formulate clear rules enabling identification of abuses and allocation of responsibility for wrongful uses of technology. Protection of human rights and the core values underpinning civil society depends on this.
Our critical and sceptical mindset inherited from the thinkers of the 18th century should always urge us to ask the following questions with regard to new inventions and their application:
- What is the problem the technology is supposed to solve?
- Whose problem is it?
- Which people and institutions may be seriously harmed by the technological solution?
- What new problems may be created by the technology invented to solve a problem?
- What sort of people and institutions may acquire special economic and political power because of a technological change?
As Postman put it, “In the thirteenth century, perhaps it didn’t matter so much if people lacked technological vision; perhaps not even in the fifteenth century. But in the twenty-first century, we can no longer afford to move into the future with our eyes tightly closed.”