Information Society or Society of Control?

Summary

Introduction

1 A New Era

2 An Unprecedented Interpretation of Reality

3 New Potentials

4 Information or Control?

5 Digital Sustainability

6 The Pontificate of Francis

Conclusion

Introduction

It is not easy to review the novelties of the digital world and the challenges it represents for consciousness and freedom. The transformation, whose omnipresence and transformative power we all perceive today, especially after the pandemic, has not yet fully revealed its scope. However, to clarify the magnitude of these processes, we will propose an itinerary divided into several stages. We will start by describing what happened (A New Era) and then try to bring out the main characteristics of the Digital Age (An Unprecedented Interpretation of Reality). The perspectives of our analysis will then lead us to outline the potentials (New Potentials) and limits (Information or Control?) of these transformations. We will then indicate what could be a remedy to make the system more sustainable (Digital Sustainability), as well as point out the clues given by the magisterium of Francis (The Pontificate of Francis).

1 A New Era

The evolution of the computer has profoundly influenced all communication technologies, absorbing their potential. Initially, the computer seemed to be a tool reserved for large organizations and administrations, scientific research, and military commands. From the 1970s onward, microprocessor technology, the constant development of user-friendly software and, in the 1990s, the rapid expansion of the network transformed it into a machine accessible to everyone, like any other household appliance. To understand this change, we need to focus on the main characteristic of this new form of communication: the digital.

In computer science and electronics, digital refers to the fact that all information is represented and manipulated by numbers (the term is derived from the English digit, which means number). A given body of information is represented in digital format when it is expressed as a sequence of numbers taken from a set of discrete values, that is, from a well-defined and circumscribed set. Currently, digital can be considered synonymous with numerical, and it is opposed to the form of information representation called analog, that is, to what is continuous and not countable.

Digital, therefore, refers to the mathematics of the discrete, which works with a finite set of elements, while analog is modeled with the mathematics of the continuous, which deals with an infinite (countable or uncountable) number of elements. An object is digitized, that is, made digital, when its original (analog) state is translated and represented by means of a numerical set of elements. For example, a photo, normally composed of an infinite number of points, each of which can take on a color from a continuous range, is digitized, and thus translated into a digital photo, when its surface is represented as subdivided into a discrete number of points (usually small squares or rectangles called pixels), each of which carries a color that is represented, in turn, by a number.
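To make the idea concrete, here is a minimal sketch of digitization (a generic illustration with invented values, not drawn from the sources discussed here): a continuous quantity, such as the light intensity of a point in a photo, is reduced to one of a finite set of integers.

```python
# Minimal illustration of digitization: a continuous (analog) value is mapped
# onto a finite, discrete set of numbers. The sample values are invented.

def digitize(value, minimum, maximum, levels):
    """Map a continuous value in [minimum, maximum] onto one of `levels` integers."""
    value = max(minimum, min(maximum, value))   # clamp to the allowed range
    step = (maximum - minimum) / (levels - 1)   # width of each discrete step
    return round((value - minimum) / step)      # index of the nearest discrete level

# An "analog" light intensity between 0.0 and 1.0 becomes an 8-bit number (0-255),
# just as each pixel of a digital photo stores its color as integers.
intensity = 0.6137
print(digitize(intensity, 0.0, 1.0, 256))       # -> 156, a discrete approximation
```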

Today, electronic communication contributes, on the one hand, to the weakening of the book as a source and tool of information and culture; on the other hand, it continues and expands the book's service in new ways (as happens, for example, with the ebook). Moreover, if the printing press allowed a different use of memory, the computer today deepens this change further, endowed as it is with a vast capacity for data management.

Precisely because it processes the language of all other media in digital format, the computer has become the medium par excellence of the 21st century. In particular, it is a writing tool for everyone: journalists, writers, scientists, engineers, poets, and artists. It has largely modified traditional writing techniques, as it did for editing, photocomposition, and printing itself.

At the beginning of the 20th century, the human community was connected by the telegraph and then by the telephone. Today, global connections are made by computer: the exchange of money and goods on the stock market, air and rail traffic, and so on are controlled by information systems. The same network allows millions of people to exchange messages without limits of time or space.

2 An Unprecedented Interpretation of Reality

The revolution in science and technology brought by computers and information technology was skillfully described by Naief Yehya: “With a computer we can transform almost any human problem into statistics, graphs, equations. What is truly disturbing, however, is that by doing so, we create the illusion that these problems can be solved with computers” (YEHYA, 2005, p. 15).

Chris Anderson, the editor-in-chief of Wired[1], summarizes what the digital revolution[2] means for the scientific world:

Scientists have always relied on hypotheses and experiments. […] Faced with the availability of enormous amounts of data, this approach – hypothesis, theoretical model, and test – becomes obsolete. […] Now there is a better way. Petabytes allow us to say: “correlation is enough.” We can stop looking for theoretical models. We can analyze the data without any assumptions about what the data might show. We can send the numbers to the largest set of computers [clusters] the world has ever seen and let statistical algorithms find patterns where science cannot. […] Learning to use a computer of this scale can be a challenge. But the opportunity is great: the new availability of a huge amount of data, combined with the statistical tools to process it, offers a whole new way of understanding the world. Correlation replaces causality, and the sciences can advance even without coherent theoretical models, unified theories, or some kind of mechanistic explanation. (ANDERSON, 2008, p. 106-107)[3]

The advent of digital research, where everything is transformed into numerical data, makes it possible to study the world according to new epistemological paradigms: what matters is only the correlation between two quantities of data, and no longer a coherent theory that explains this correlation[4]. In practice, we are witnessing technological developments (the capacity to do) that do not correspond to any scientific development (the capacity to know and explain): correlation alone is used today to predict, with sufficient accuracy and without any supporting scientific theory, the risk of impact of asteroids (even unknown ones) at various places on Earth, which institutional sites are likely targets of terrorist attacks, how individual citizens will vote in US presidential elections, and the short-term trend of the stock market.
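As a toy illustration of this “correlation is enough” stance (the data and variables below are invented, and no claim is made about the studies the author has in mind), one can measure how strongly two series move together and use that alone to extrapolate, without proposing any theory of why they are related:

```python
# Toy example of prediction by correlation alone: no explanatory model, only the
# observed co-movement of two invented series.
from statistics import correlation, linear_regression

web_searches = [120, 135, 150, 160, 180, 210, 240]   # e.g., weekly searches for a product
units_sold   = [ 11,  13,  15,  15,  18,  22,  25]   # e.g., units sold the following week

r = correlation(web_searches, units_sold)            # Pearson's r, close to 1 here
slope, intercept = linear_regression(web_searches, units_sold)

print(f"correlation r = {r:.3f}")
print(f"predicted sales for 300 searches: {slope * 300 + intercept:.1f}")
# The prediction may work in practice, yet nothing in the code explains the link.
```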

The use of computers and information technology in technological development has highlighted a linguistic challenge that occurs at the boundary between man and machine: in the process of mutual questioning between man and machine, projections and exchanges hitherto unimaginable arise, and the machine becomes no less human than man becomes machine (BENANTI, 2012).

3 New Potentials

The effect of the exponential digitalization of communication and society is leading, according to Marc Prensky (PRENSKY, 2001a, p. 1-6; PRENSKY, 2001b, p. 1-6), to a true anthropological transformation: the advent of digital natives. Digital native is a term applied to a person who has grown up with digital technologies such as computers, the internet, mobile phones, and MP3 players. The term is used to refer to a new and unprecedented group of students entering the educational system. Digital natives emerged in parallel with the mass diffusion of computers with graphical interfaces in 1985 and of window-based operating systems in 1996. The digital native grows up in a multi-screen society and regards these technologies as a natural element, feeling no discomfort in manipulating and interacting with them.

In contrast, Prensky coined the term digital immigrant to indicate a person who grew up before digital technologies and adopted them later. One of the differences between these individuals is their different mental approach to new technologies: a digital native, for example, will talk about their new camera (without specifying its technological type), while a digital immigrant will talk about their new digital camera, as opposed to the film camera they used before. A digital native, according to Prensky, is shaped by the media diet to which they are subjected: in five years, for example, they spend 10,000 hours playing video games, exchange at least 200,000 emails, spend 10,000 hours on the cell phone, spend 20,000 hours in front of the television watching at least 500,000 commercials, but dedicate only 5,000 hours to reading. This media diet produces, according to Prensky, a new language, a new way of organizing thought that will change the brain structure of digital natives.

Multitasking, hypertextuality, and interactivity are, for Prensky, just some characteristics of what seems to be a new and unprecedented stage in human evolution. Moreover, according to the author, although erratically and each at our own speed, we are all moving towards a digital enhancement that includes cognitive activities. Digital tools, he argues, already extend and enrich our cognitive abilities in many ways. Digital technology enhances memory, for example, through data acquisition, storage, and retrieval tools. Digital data collection and decision-support tools improve choice, allowing us to gather more data and consider all the implications of a given issue. Digital cognitive enhancement, enabled by laptops, online databases, three-dimensional virtual simulations, online collaborative tools, handhelds, and a host of other context-specific tools, is now a reality in many professions, even in non-technical fields such as law and the humanities. Therefore, instead of “technological enhancement,” Prensky prefers to talk about “digital enhancement,” for three reasons:

1. today almost all technology is digital or supported by digital tools;

2. digital technology differs from other technologies in being programmable, that is, it can be induced to do, at increasingly precise levels, exactly what is wanted (this capacity for customization is at the heart of the digital revolution);

3. digital technology invests more and more energy in ever smaller versions of microprocessors, which in turn constitute the core of much of the technology capable of enhancing cognition. This miniaturization, along with ever-lower costs, is what will make digital technology available to everyone, although at different rates and in different places (PRENSKY, 2009)[5].

4 Information or Control?

We live in a digital society and a digital time, the Digital Age, a complex period because of the profound changes these technologies are producing. The Covid-19 pandemic accelerated a series of processes that had already been radically changing society for some time, because it has become possible to dissociate content and knowledge from their physical support[6]. The epochal change we are going through is produced by digital technology and by its impact on our way of understanding ourselves and the reality around us.

To understand this challenge, we need to go back to the beginning of this transformation. In a grainy documentary filmed in 1952, the mathematician and Bell Labs researcher Claude Shannon stands next to a machine he built. Completed in 1950, it was one of the world’s first examples of machine learning: a maze-solving robotic mouse known as Theseus. The Theseus of ancient Greek mythology navigated the Minotaur’s labyrinth and escaped by following the thread he had used to mark his path; Shannon’s electromechanical toy, instead, was able to “remember” the route with the help of telephone relay switches.
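The logic behind that relay memory can be suggested with a small sketch (a free reconstruction for illustration only, not Shannon’s actual relay circuit): the mouse first finds the goal by trial and error and, in doing so, latches the successful move out of each cell, so that a second run simply replays what was stored.

```python
# Free reconstruction (not Shannon's relay circuit) of the Theseus idea: during
# exploration the successful move out of each cell is stored, so a second run can
# "remember" the route without searching again.
MAZE = ["#######",
        "#S#...#",
        "#.#.#.#",
        "#...#G#",
        "#######"]
MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

def solve_and_memorize(maze):
    grid = [list(row) for row in maze]
    start = next((r, c) for r, row in enumerate(grid)
                 for c, ch in enumerate(row) if ch == "S")
    memory = {}                                # cell -> move that led to the goal

    def explore(cell, visited):                # trial-and-error first run
        if grid[cell[0]][cell[1]] == "G":
            return True
        for name, (dr, dc) in MOVES.items():
            nxt = (cell[0] + dr, cell[1] + dc)
            if grid[nxt[0]][nxt[1]] != "#" and nxt not in visited:
                if explore(nxt, visited | {nxt}):
                    memory[cell] = name        # the "relay" latches the good move
                    return True
        return False

    explore(start, {start})
    path, cell = [], start                     # second run: replay, no search
    while cell in memory:
        path.append(memory[cell])
        dr, dc = MOVES[memory[cell]]
        cell = (cell[0] + dr, cell[1] + dc)
    return path

print(solve_and_memorize(MAZE))   # ['S', 'S', 'E', 'E', 'N', 'N', 'E', 'E', 'S', 'S']
```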

In 1948, Shannon had laid the foundations of information theory in A Mathematical Theory of Communication, a paper that provides the mathematical demonstration that all communication can be expressed digitally. Shannon showed that messages could be treated purely as an engineering matter. His mathematical, non-semantic theory of communication abstracts from the meaning of a message and from the presence of a human sender or receiver; a message, from this point of view, is a series of transmittable phenomena to which a certain metric can be applied (POLT, 2015, p. 181).
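The metric in question can be recalled here in its standard textbook form (a reminder for the reader, not a quotation from the sources under discussion): Shannon measures the information produced by a source through its entropy,

```latex
H(X) = -\sum_{i=1}^{n} p(x_i)\,\log_2 p(x_i) \quad \text{(bits per symbol)}
```

where p(x_i) is the probability of the i-th symbol. The measure depends only on the probabilities of the symbols, never on what they mean: precisely the abstraction from semantics, and from any human sender or receiver, described above.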

These insights gave rise to a new transdisciplinary view of reality: Norbert Wiener’s cybernetics. For Wiener, information theory is a powerful way of conceiving nature itself. While the universe is gaining entropy, according to the second law of thermodynamics – that is, its energy distribution is becoming less differentiated and more uniform – there are local counter-entropic systems. These systems are living organisms and the information-processing machines we build. These systems differentiate and organize themselves: they generate information (POLT, 2015, p. 181). The advantage of this approach is that it allows cybernetics to exercise secure control over the interdisciplinary field it generates and deals with: “cybernetics can already be sure of its ‘thing’, that is, of calculating everything in terms of a controlled process” (HEIDEGGER; FABRIS, 1988, p. 34-35).

Starting in the decade before the Second World War, and accelerating during and after the war, scientists designed increasingly sophisticated mechanical and electrical systems that allowed their machines to act as if they had a purpose. This work intersected with other work on animal cognition and early work in computing. What emerged was a new way of looking at systems, not just mechanical and electrical, but also biological and social: a unifying theory of systems and their relationship to the environment. This movement towards “whole systems” and “system thinking” became known as cybernetics. Cybernetics frames the world in terms of systems and their goals.

According to cybernetics, systems achieve their goals through iterative processes or cycles of “feedback.” Suddenly, leading postwar scientists were seriously talking about circular causality (A causes B, B causes C, and C, in turn, causes A). Looking more closely, scientists saw the difficulty of separating the observer from the system; indeed, the system seemed to be a construction of the observer. The role of the observer is to provide a description of the system, which is given to another observer; the description requires a language; and the process of observing, creating language, and sharing descriptions is what creates a society. From the late 1940s onward, the world of advanced research began to look at subjectivity – language, conversation, and ethics – and its relationship with systems and design. Different disciplines collaborated to study “collaboration” itself as a category of control.
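A minimal sketch can make the notion of a goal-seeking feedback cycle concrete (a generic thermostat-style loop, invented here for illustration and not taken from the cybernetic literature cited):

```python
# Generic goal-seeking feedback loop (thermostat style), invented for illustration:
# the system measures the gap between its goal and the observed state, acts to
# reduce it, and the environment's response feeds back into the next measurement.
goal = 21.0      # desired temperature: the system's "purpose"
state = 15.0     # current temperature
gain = 0.5       # how strongly the system reacts to the error

for step in range(10):
    error = goal - state             # feedback: compare goal with observed state
    action = gain * error            # corrective action proportional to the error
    state = state + action - 0.2     # environment responds (with constant heat loss)
    print(f"step {step}: state = {state:.2f}, error was {error:.2f}")
# The state settles near the goal (about 20.6 here): it is the loop as a whole,
# not any single command, that "steers" the system towards its purpose.
```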

Until then, physicists had described the world in terms of matter and energy. The cybernetic community proposed a new view of the world through the lens of information, communication channels, and their organization. In this way, cybernetics was born at the dawn of the information age, in pre-digital communications and media, bridging the way humans interact with machines, systems, and each other. Cybernetics focuses on the use of feedback to correct errors and achieve goals: cybernetics makes the machine and the human being a kind of Shannon mouse.

It is at this level that we need to look more closely at the effects this may have on the human being’s understanding of self and others, and on freedom. As discussions matured, the goals of the cybernetic community expanded. In 1968, Margaret Mead was already contemplating the application of cybernetics to social problems:

As the world stage widens, there is a continuing possibility of using cybernetics as a form of communication in a world of increasing scientific specialization. […] we should look very seriously at the current situation of American society, within which we hope to develop these very sophisticated forms of dealing with systems that desperately need attention. Problems of metropolitan areas, […]. The interrelationships between different levels of government, the redistribution of income, […] the necessary links between parts of large industrial complexes… (MEAD, 1968, p. 45)[7]

The cybernetic approach, as Martin Heidegger would point out in his rereading of Wiener and the work of cyberneticists, “reduces” human activity itself, in the plurality of its configurations, to something functional and controllable by the machine: “man himself becomes ‘something planned, that is, controllable’ and, if such a reduction is not possible, he is bracketed as a ‘disturbing factor’ in the cybernetic calculation” (HEIDEGGER; FABRIS, 1988, p. 10). In fact, Fabris observes that:

In his analysis of the cybernetic phenomenon, Heidegger constantly keeps in mind the Greek matrix of the word and privileges this aspect, rather than – for example – the central notion of feedback, as a guiding thread to understand and explain the characteristics of such a “non-discipline discipline.” In the Heideggerian reading, cybernetics indicates the advent of a process of control and information within the different thematic spheres of the various sciences. From a hermeneutic point of view, command and control (Steuerung) are understood first of all as the perspective within which man’s relations with the world are regulated. (FABRIS, 1988, p. 11)

Fabris further observes that

cybernetics is seen by Heidegger as the most advanced moment, the most evident result of that domination of technique into which all Western metaphysics flows. The history of being – as it emerges from the university courses on Nietzsche in the 1930s – in fact, has its culmination in the event of technique, in which the will to power (will to will) that determines human action and extends to all spheres of reality finds full manifestation. Within this process of self-referentiality of the will, the cybernetic project receives its own justification and defines its relations with philosophy, taking on some of its tasks and assuming its traditional prerogatives. (FABRIS, 1988, p. 11)

At the heart of the work of the cyberneticists, the scholars who fathered the information society, artificial intelligence, and all the impressive developments that digital technology is bringing into our lives, there may however have been the promise of an even greater purpose.

Gregory Bateson, Margaret Mead’s first husband, said in a famous interview that what excited him about the discussions on cybernetics was this: “It was a solution to the problem of purpose. From Aristotle on, the final cause had always been the mystery. That came to light then. We didn’t realize then (at least I didn’t, although McCulloch may have) that all logic would have to be reconstructed for recursion” (BRAND, 1976, p. 32-34)[8].

5 Digital Sustainability

If the information society can indeed, through digital feedback actions, place man under the control of the machine (whether electronic or algorithmic), and if the cybernetic relationship, in its most radical realization of the man-machine symbiosis, can in fact deny the need to hypothesize final causes for action, then a dystopian scenario appears on the horizon, one in which the information society inevitably collapses into a control society. The analysis of digital society thus allows us to reflect on the link between causes, necessity, and freedom that digital technology realizes in its form of political implementation: it calls into question the very existence of a destiny of man that depends on his free will.

This form of cybernetic digitalization, which I would define here as “strong” in order to emphasize that it is a possible form of society if forms of digital sustainability are not created (BENANTI; MAFFETTONE, 2021), risks eliminating the very possibility of positive freedom. In political language, as Bobbio says, this term means

the situation in which a person has the possibility to direct his will towards an objective, to make decisions, without being determined by the will of others. This form of freedom is also called “self-determination” or, even more appropriately, “autonomy.” […] The classic definition of positive freedom was given by Rousseau, for whom freedom in the civil state consists in the fact that there man, as part of the social whole, as a member of the “common self,” obeys no one but himself, that is, he is autonomous in the precise sense of the word, in the sense that he gives laws to himself and obeys no other law than those he himself has given: “Obedience to the law we prescribe for ourselves is freedom” (Social Contract, I, 8). This concept of freedom was assumed, under the direct influence of Rousseau, by Kant, […] in the Metaphysics of Morals, where legal freedom is defined as “the faculty of not obeying any law other than that to which citizens have given their consent” (II, 46). […] Civil liberties, the prototype of negative liberties, are individual liberties, that is, inherent to the individual: historically, in fact, they are the product of struggles to defend the individual considered either as a moral person and therefore having value in itself, or as a subject of economic relations, against the intrusion of collective entities such as the Church and the State […]. Freedom as self-determination, on the other hand, is generally referred to, in political theory, to a collective will, whether that will be that of the people or the community or the nation or the ethnic group or the homeland. (BOBBIO, 1978)

In light of these brief reflections, it seems to us that we can emphasize that the epistemological matrix of control inherent in the development of digital as a culture of cybernetic information still resides implicitly and unreflectively within the technical applications of the information society. It is up to civil society to create a debate so that the processes of digital technological innovation are challenged. However, the world of technology is today described by the category of innovation.

If we continue to look at technology only as innovation, we risk not realizing its scope of social transformation and, therefore, being unable to direct its effects towards good.

To be able to talk about innovation as a good, and to be able to orient it towards the common good, we need a qualification capable of describing how and which characteristics of progress contribute to the good of individuals and society. That is why, with Sebastiano Maffettone, we decided to adopt the category of digital sustainability.

The idea of digital sustainability draws attention to a broad concept, including the lasting expansion of individuals’ choice possibilities and the equitable improvement of their well-being prospects. Talking about digital sustainability means not putting technical capacity at the center of attention but keeping the human being at the center of thought and as the end that qualifies progress.

Using digital technology ethically today, respecting human ecology, means trying to transform innovation into a sustainable digital world. It means directing technology towards and for human development, and not simply seeking progress as an end in itself. Although it is not possible to think and implement technology without specific forms of rationality (technical and scientific thought), placing digital sustainability at the center of interest means saying that technical and scientific thought is not enough[9].

For there to be freedom, we need conscience and consciences to question technology, directing its development towards the common good.

6 The Pontificate of Francis

In this last part of the text, we would like to present the great sensitivity that the pontiff demonstrates regarding the technological theme and the innovative presence of digital as a dominant form of technology.

When reading the encyclical Laudato Si’, we find twenty explicit references to technology. The word technology first appears in the initial part of the text, where the encyclical analyzes the ecological problem in order to understand what is happening to our common home (n. 16, 20, 34 – 2 times, 54 – 2 times); it then appears in the third chapter, which seeks the human root of the ecological crisis (n. 102 – 3 times, 104 – 2 times, 105, 106 – 2 times, 109, 110, 113, 114 and 132); and only once in the chapter that offers some lines of orientation and action (n. 165). Twice (n. 103 and 107) the term technoscience is preferred to technology. However, our investigation would not be complete without mentioning how the pontiff, connecting human action, technology, and the ecological problem, juxtaposes the noun technology with the adjective technocratic, which occurs seven times – all in the third chapter – and describes, in dark and negative tones, a certain inner attitude of the human being and his intentionality in relating to technology.

The analysis that Laudato Si’ offers of technology reflects the ambiguity of the technical tool that emerged at the intersection of ecology and technology. We must recognize that

Humanity has entered a new era, in which the power of technology puts us at a crossroads. We are heirs of two centuries of enormous waves of change […]. It is right to rejoice in these advances and to be excited about the wide possibilities that these incessant innovations open up to us, because “science and technology are a wonderful product of human creativity that God has given us.” The transformation of nature for useful purposes is a characteristic of the human species since its beginnings; and thus technique “expresses the human soul’s tension towards a gradual overcoming of certain material conditions.” Technology has remedied countless ills that afflicted and limited the human being […]. (LS n. 102)

However, we cannot ignore the fact that the skills we have acquired

give us tremendous power. Or rather: they give, to those who hold the knowledge and especially the economic power to enjoy it, an impressive dominion over the whole of humanity and the entire world. Never has humanity had so much power over itself, and nothing guarantees that it will use it well, especially if we consider the way it is doing it. (LS n. 104)

The problem of technology is a problem of the ends to be chosen to guide the use of technical means. Only if technology is oriented towards the realization of humanly qualified and humanizing values will its use be respectful of man and the environment. The ends served by technological means are the only ones capable of ethically justifying the technical means and their use (LS n. 103). However, it is not uncommon to witness a quest for technical power that seems to be assimilated to power itself: when technical progress is not driven by the pursuit of the common good and the realization of morally qualified values, it hardly becomes development, exposing humanity to blind arbitrariness (LS n. 105).

At this level, tracing the development of Laudato Si’ reveals the true nature of the technological problem:

The fundamental problem is another and even deeper one: the way humanity has actually assumed technology and its development along with a homogeneous and one-dimensional paradigm. In this paradigm, a conception of the subject stands out that progressively, in the logical-rational process, understands and thus appropriates the object that is outside. Such a subject develops by establishing the scientific method with its experimentation, which is already explicitly a technique of possession, domination, and transformation. It is as if the subject had in front of him the unformed reality entirely available for manipulation. Human intervention in nature has always been present, but for a long time, it had the characteristic of accompanying, seconding the possibilities offered by the things themselves; it was about receiving what natural reality itself allowed, as if extending the hand. But now, what matters is extracting as much as possible from things by the imposition of the human hand, which tends to ignore or forget the reality of what is in front of it. Therefore, man and things have ceased to shake hands amicably, becoming contenders. This easily leads to the idea of infinite or unlimited growth, which so enthused economists, finance theorists, and technologists. This implies the lie of the infinite availability of the planet’s goods, which leads to “squeezing” it to the limit and beyond. It is the false assumption that “there is an unlimited amount of energy and resources to be used, that their regeneration is possible immediately and that the negative effects of manipulations of the natural order can be easily absorbed.” (LS n. 106)

The problem, the document continues, is the dominant technocratic mindset, which conceives all reality as an object that can be manipulated without limits. This is a reductionism that involves all dimensions of life. Technology is not neutral: it makes “choices about the type of social life to be developed” (LS n. 107). The technocratic paradigm also dominates the economy and politics; in particular, “The economy takes on all technological development in function of profit. […] But the market alone does not guarantee integral human development or social inclusion” (LS n. 109). Relying solely on technology to solve every problem means “hiding the real and deeper problems of the world system” (LS n. 111), given that “the progress of science and technology does not equate to the progress of humanity and history” (LS n. 113).

Thus, there seems to be a need for a “courageous cultural revolution” (LS n. 114) to recover the values and perception of what is important in the technological transformation process. When technology becomes an instrument for the implementation of one-dimensional thinking, what the pontiff defines as technocratic thinking, then its nature is perverted and becomes an instrument of dehumanization and destruction of the common home, plundering it, irreparably damaging it, and becoming a highly efficient implementation of ecological damage.

From this reading of Laudato Si’, it emerges that the magisterial text incorporates the internal tension of the world of technology. The response the pontiff offers to Christians and people of goodwill for taking charge of the management and use of technology takes the form of discernment and dialogue. Francis’s magisterium does not seek to resolve these tensions by issuing lines or guidelines to be followed by virtue of the role or principle of authority; rather, it assumes the complexity of the problem by indicating the need for a communion of intentions and for dialogue, in order to find shared solutions capable of directing technology and its progress towards the common good, in forms of authentic human development.

In addition to these lines, it is worth mentioning that it was the Pontifical Academy for Life that brought the frontier of reflection to the digital world. On a stage dominated by the word renAIssance (a play on words between rebirth and artificial intelligence – AI), the Rome Call for AI Ethics was signed on February 28, 2020. It is an open call, originating from the Pontifical Academy for Life and involving industries, civil society, and political institutions, that aims to support an ethical and humanistic approach to Artificial Intelligence. The idea of this “call” to protect the dignity of the human person and the common home stems from the dialogues that took place over the preceding two years between the Academy, some of its members, and part of the technological and industrial world. The choice not to draft a unilateral or directly normative text is linked to the deep desire to promote a sense of shared responsibility among organizations, governments, and institutions, with the aim of ensuring a future in which digital innovation and technological progress serve human genius and creativity and not their gradual replacement.

The document was signed by the following institutions: the Pontifical Academy for Life and its president, Mons. Vincenzo Paglia, Microsoft and its president Brad Smith, IBM and its vice-president John E. Kelly III, FAO and its director-general QU Dongyu, and the Italian government and its minister of Technological Innovation and Digitalization, Paola Pisano.

The text of the Call is divided into three parts: ethics, education, and rights, and is available on the internet on a specific website.

Regarding ethics, the Call starts from the consideration that “all human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood,” as stated in Article 1 of the Universal Declaration of Human Rights. Starting from this cornerstone, which today can be considered a kind of universal grammar and a threshold element in a global and plural community, the first fundamental conditions that the person must enjoy, freedom and dignity, must be protected and guaranteed in the production and use of artificial intelligence systems (hereafter AI).

Therefore, AI systems must be designed, developed, and implemented to serve and protect the human being and the environment in which they live. This is to allow technological progress to be a tool for the development of the human family while respecting the planet, the common home. For this to happen, according to the Call, three requirements must be met: AI must include every human being, discriminating against no one; it must have the good of humanity and the good of every human being at its core; and it must be developed with an awareness of the complex reality of our ecosystem, caring for and protecting the planet with a highly sustainable approach, which also includes the use of artificial intelligence to ensure sustainable food systems in the future.

Regarding the user experience when interacting with the machine, the Call emphasizes the primacy of the human being: each person must be aware that they are interacting with a machine and must not be deceived by interfaces that disguise the machine, giving it human appearances. Artificial intelligence technology should never be used to exploit people in any way, especially the most vulnerable (particularly children and the elderly). Instead, it should be used to help people develop their abilities and sustain our planet.

Ethical challenges then become educational challenges. Transforming the world through AI innovation means committing to building a future for and with the younger generations. This commitment must translate into an engagement with education, developing specific curricula that cut across the different disciplines, from the humanities to science and technology, in order to educate the younger generations.

The education of the younger generations, therefore, needs renewed commitment and increasing quality: it must be offered with methods accessible to all, non-discriminatory, and capable of offering equal opportunities and treatment. Access to learning must also be guaranteed to the elderly, who should be offered the opportunity to access innovative services, compatible with their stage of life.

Based on these considerations, the Call notes that these technologies can be extremely useful in helping people with disabilities learn and become more independent, offering help and opportunities for social participation (e.g., remote work for those with limited mobility, technological support for those with cognitive disabilities, etc.).

To ensure that ethical demands and educational urgency do not remain mere words, the Call outlines some elements that could generate a new era of law.

The development of AI in the service of humanity and the planet requires regulations and principles that protect people – especially the weak and less fortunate – and natural environments. The protection of human rights in the digital age must be placed at the center of public debate so that AI can act as a tool for the good of humanity and the planet.

It will also be essential to consider a method to make not only the decision-making criteria of AI-based algorithmic agents understandable but also their purpose and objectives. This will increase transparency, traceability, and accountability, making computer-assisted decision-making more valid.

Designing and planning AI systems that can be trusted implies promoting ethical methods capable of reaching the heart of the algorithms, the engine of these digital systems. For this, the Call speaks of “algor-ethics”: principles that act as a kind of ethical safety barrier and that, once expressed by those who develop these systems, become operative in the execution of the software (a hypothetical sketch of this idea follows the list below). The Call thus enumerates the first algor-ethical principles recognized as fundamental for the correct development of AI.

The use of AI must therefore observe the following principles:

Transparency: in principle, AI systems must be understandable;

Inclusion: the needs of all human beings must be taken into account so that everyone can benefit and all individuals can receive the best possible conditions to express and develop themselves;

Responsibility: those who design and implement AI solutions must proceed with responsibility and transparency;

Fairness: not to create or act according to biases, thus safeguarding justice and human dignity;

Reliability: AI systems must be able to function reliably;

Safety and privacy: AI systems must operate safely and respect user privacy.
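Purely as a hypothetical illustration of what it might mean for such principles to become operative in the execution of the software (nothing below is taken from the Call itself; the decision rule, weights, and attribute names are invented), two of the principles above, transparency and fairness, can be expressed as explicit guards around an imaginary scoring function:

```python
# Hypothetical sketch, not derived from the Rome Call: transparency and fairness
# expressed as guards around an invented, deliberately simple scoring rule.
PROTECTED_ATTRIBUTES = {"gender", "ethnicity", "religion"}   # fairness: never used

def score_applicant(features: dict) -> tuple[float, str]:
    """Return a score and a human-readable explanation (transparency)."""
    used = {k: v for k, v in features.items() if k not in PROTECTED_ATTRIBUTES}
    score = (0.6 * used.get("repayment_history", 0.0)
             + 0.4 * used.get("income_stability", 0.0))
    explanation = (f"score {score:.2f} = 0.6*repayment_history + 0.4*income_stability; "
                   f"ignored attributes: {sorted(PROTECTED_ATTRIBUTES & features.keys())}")
    return score, explanation

score, why = score_applicant(
    {"repayment_history": 0.9, "income_stability": 0.7, "gender": "F"})
print(f"{score:.2f}")   # 0.82
print(why)              # the decision criteria are stated, not hidden inside the algorithm
```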

Ludwig Wittgenstein, in the Tractatus Logico-Philosophicus, wrote that “the limits of my language mean the limits of my world.” Paraphrasing the philosopher of the last century, then, we can say that, to avoid being excluded from the world of machines, and to avoid creating an algorithmic world devoid of human meaning, we must expand our ethical language so that it contaminates and determines the functioning of these systems called “intelligent.” Innovation, never more than today, needs a rich anthropological understanding to become an authentic source of human development. In his speech to the Plenary Assembly of the same Academy, Pope Francis responded to these demands when speaking of digital technologies: “They can bear fruits of good,” but “a broader educational action” is needed. And the dangers “should not hide from us the great potential of these tools” (FRANCIS, 2020, p. 2).

At the end of our journey, we would like to focus on the challenges faced by the first generations of this new era.

In the next twenty years, the generation of children born in the third millennium will face three fundamental questions arising from digital reality and its ubiquity. The resolution of these questions will describe, for better or worse, a world so profoundly different from anything humanity has experienced that we can truly imagine the end of an era and the birth of a new world, a digital universe.

In light of this, the traditional sociological labels used to classify young people, such as Generation X, Y, or Z, are not sufficient. It seems to me that, due to the quality and characteristics of the synthetic reality we are producing, we must understand this generation as an Omega Generation. If we consider the philosophical, ethical, and practical challenges that synthetic reality presents, I think we can agree that this generation could be the last human generation as we have understood this term so far – I am aware that the expression is strong and provocative, but I hope in the following pages to do justice to this provocation. The central theme is whether this generation will be able to colonize and urbanize this new continent of synthetic reality, mythical desires, and almost unlimited technological potential. This generation’s capacity to do could transform it into something very different from what we currently understand as human.

What we know is that the figure of the man who will inhabit our future is that of a wandering being, who seeks. If he is able to accept a spiritual call, he will return to being a viator (traveler, ed. note), otherwise, he will condemn himself to be a wanderer without direction.

Indeed, the Omega Generation has to respond, in a way that can no longer be delayed, to some fundamental questions about our human nature. These questions concern: humanity’s relationship with its environment; humanity’s relationship with technology; and humanity’s relationship with itself.

The Church, a specialist in humanity, as Paul VI defined it, has perceived these transformations and is becoming man’s companion in this novelty of the digital world: not offering abstract and theoretical solutions, but allowing itself to be questioned by what is happening and accompanying humanity along the path of history.

Conclusion

Transforming Innovation into Development

The finding that emerges from the proposed journey here is that the great power of technology can be a formidable tool to help humanity do good ever more effectively or can become the most effective instrument of dehumanization. What allows the distinction between these two outcomes?

The epochal change we are going through is the product of technology and of its impact on how we understand ourselves and comprehend reality. The world of technology, however, is today described by the category of innovation, and if we continue to look at technology only as innovation, we risk not realizing its scope of social transformation and being unable to direct its effects towards good.

Innovation means an advance or gradual transformation, marked by an ever-increasing capacity and potential.

An atomic bomb compared to a club is an enormous advance (in the capacity to offend). But can we call this increase in capacity a good thing?

Beyond the specific example, the correct answer, in general, is “it depends.” Not all progress is for the good or involves only the good.

To be able to talk about innovation as a good and to be able to orient it towards the common good, we need a qualification capable of describing how and which characteristics of progress contribute to the good of individuals and society. For this, we use the category of development. The idea of human development draws attention to a comprehensive concept that focuses on the processes that expand individuals’ choices and improve their well-being prospects, and that allows individuals and groups to move as quickly as possible towards their empowerment.

Human development must, therefore, be understood as an end and not as a means that characterizes progress through the definition of priorities and criteria. Talking about development means, therefore, not putting technical capacity at the center of attention, but keeping man at the center of reflection and as the end that qualifies progress.

Using technology ethically today means trying to transform innovation into development. It means directing technology towards and for development and not simply seeking progress as an end in itself. Although it is not possible to think and implement technology without specific forms of rationality (technical and scientific thought), placing development at the center of interest means saying that technical-scientific thought is not enough by itself. Different approaches are needed, including the humanistic approach and the contribution of faith.

The development necessary to face the challenges of the era of change will have to be:

Global, that is, for all women and men and not just for some people or some privileged groups (distinguished by gender, language, or ethnicity).

Integral, that is, of every woman and every man.

Plural, that is, attentive to the social context in which we live, respectful of human plurality and different cultures.

Fruitful, that is, capable of laying the foundations for future generations, rather than being shortsighted and oriented towards using today’s resources without ever looking to the future.

Kind, that is, respectful of the earth that welcomes us (the common home), resources, and all living species.

For technology and for our future, we need a development that I would briefly describe as kind. This is ethics, and ethical choices are those that go in the direction of kind development.

Paolo Benanti. Pontifical Gregorian University. Original text, Italian. Submitted on 02/12/2022. Approved on 06/30/2022. Published on 12/30/2022. Translation Paolo Brivio.

References

ANDERSON, C. The End of Theory. Wired, n.16, 2008.

BENANTI, P. The Cyborg. Corpo e corporeità nell’epoca del postumano. Assisi: Cittadella, 2012.

BENANTI, P. Digital Age. Teoria del cambio d’epoca. Persona, famiglia e societĂ . Cinisello Balsamo: San Paolo, 2020.

BENANTI, P.; MAFFETTONE, S. Intelligenza artificiale e la frontiera dei principi. Corriere della Sera, ed. May 7, 2021. Available at: https://www.corriere.it/opinioni/21_maggio_17/intelligenza-artificiale-frontiera-principi-697e5326-b71d-11eb-ba17-f6e1f3fff06b.shtml Accessed: Sep 12, 2022.

BENANTI, P.; MAFFETTONE, S. “Sostenibilità D”. Le conseguenze della rivoluzione digitale nelle nostre vite. Il Mulino, n. 2, 2021.

BOBBIO, N. LibertĂ . In: Enciclopedia del Novecento, Treccani, 1978. Available at: https://www.treccani.it/enciclopedia/liberta_%28Enciclopedia-del-Novecento%29/. Accessed: Nov 4, 2022.

BRAND, S. For God’s Sake, Margaret: a conversation with Margaret Mead and Gregory Bateson. CoEvolution Quarterly, p. 32-44, June 10-21, 1976.

FRANCIS. Speech by Pope Francis to the participants in the plenary of the Pontifical Academy for Life on Feb 28, 2020. Rome: Vatican, 2020. Available at: https://www.vatican.va/content/francesco/en/speeches/2020/february/documents/papa-francesco_20200228_accademia-perlavita.html. Accessed: May 20, 2022.

FRANCIS. Encyclical Letter Laudato Si’: on care for our common home. SĂŁo Paulo: Paulinas, 2015.

HEIDEGGER, M.; FABRIS, A. (ed.). Filosofia e cibernetica. ETS, 1988.

MEAD, M. Cybernetics of Cybernetics. In: Purposive Systems: Proceedings of the First Annual Symposium of the American Society for Cybernetics. VON FOERSTER, H. et al. New York: Spartan Books, 1968.

POLT, R. A Heideggerian Critique of Cyberbeing. In: Horizons of Authenticity in Phenomenology, Existentialism, and Moral Psychology, edited by H. Pedersen and M. Altman, Springer, Dordrecht, 2015, 181.

PRENSKY, M. Digital Natives, Digital Immigrants. On the Horizon v. 9, n. 5, p. 1-6, 2001a. Available at: https://www.scribd.com/doc/9799/Prensky-Digital-Natives-Digital-Immigrants-Part1. Accessed: Feb 4, 2022.

PRENSKY, M. Digital Natives, Digital Immigrants, part 2: Do They Really Think Differently? On the Horizon, v. 9, n. 6, p. 1-6, 2001b. Available at: https://www.twitchspeed.com/site/Prensky%20-%20Digital%20Natives, %20Digital%20Immigrants %20-%20Part2.htm. Accessed: Jan 10, 2022.

PRENSKY, M. H. Sapiens Digital: From Digital Immigrants and Digital Natives to Digital Wisdom. Innovate: Journal of Online Education, v. 5, n. 3, art. 1, 2009. Available at: https://www.innovateonline.info/index.php?view=article&id=705. Accessed: Jan 3, 2022.

YEHYA, N. Homo cyborg. Il corpo postumano tra realtĂ  e fantascienza. Milano: Eleuthera, 2005.

[1] Wired is an American monthly magazine founded in 1993 and based in San Francisco. Known in the sector as The Internet Bible, it was founded by Italian-American Louis Rossetto, one of the leading experts on technology and the so-called digital revolution, along with Nicholas Negroponte, an American computer scientist famous for his innovative studies in the field of human-computer interfaces. It is currently directed by Chris Anderson, who previously worked for The Economist, Nature and Science. Wired (which literally means connected) deals with technological issues and how they influence culture, economy, and politics. Since February 2009, it has also been published in Italy. With regard to cyborgs, Wired is one of the richest sources of material and reflections.

[2] The expression digital revolution refers to the series of enormous changes in the world of communication and in contemporary society as a whole, caused by the possibility of reducing all types of information to chains of bits and bytes.

[3] The original is in English; the translation is ours. Petabytes are a measure of a computer’s memory capacity. A petabyte is equal to 2^50, or 1,125,899,906,842,624, bytes – the byte being the unit of measurement for mass storage. We will return to this subject in depth in the next topics.

[4] To get an idea of how large the amount of data we can process today is, it is enough to say that the first computers, such as the ENIAC of the 1940s, were capable of storing about ten bytes, while today, on average, a home user has a capacity of 1 terabyte (a thousandth of a petabyte) in their computer; 460 terabytes are all the digital climate data of the Earth; 530 terabytes are all the videos contained in the internet broadcasting system YouTube; and 1 petabyte of data is processed every 72 minutes by the servers of Google, the popular Internet search engine (ANDERSON, 2008, p. 106).

[5] The subject is too vast and complex to be discussed in detail in this text. For more details, see BENANTI, 2020.

[6] Think, in the public sphere, of phenomena like fake news, the emergence of sharp power, the events at the Capitol, or Brexit; or of how digital technology is shaping expectations and modes of romantic relationships through platforms and modalities never seen before, to name just a few examples.

[7] The translation is mine.

[8] The translation is ours. Aristotle introduced the theory of causes in Physics II 3-7, Metaphysics Δ 2, Metaphysics A 3-10, and Posterior Analytics II 11. It has been the subject of much debate ever since. The importance of Aristotle’s theory of causes is mainly due to the fact that, on its basis, we can speak of knowledge when we can account for the principles and causes that played a role in the occurrence of a particular event.

[9] With Sebastiano Maffettone we wrote an article on digital sustainability, published in Il Mulino, v. 2, 2021.