
Artificial general intelligence
Artificial general intelligence (AGI) is defined as an artificial (non-biological) system capable of performing the full set of cognitive tasks that a human being of medium-high intelligence can perform.
To date, although they have come impressively close to this goal, machines still do not match humans in some fundamental capabilities, such as common sense, long-term planning, and an intuitive understanding of the physical world.
In other areas, however (memory, games, computation, formal symbol manipulation, including writing software), machines already far surpass human performance.
The near future
The widespread consensus among industry insiders is that AGI could be achieved in a relatively short time frame, on the order of a few years.
This will probably happen not through a conceptual revolution introducing new technologies, but as a result of scaling: larger models, more parameters, more connections, more neural layers, supported by increasingly powerful and numerous hardware infrastructures.
In this scenario, machines will become indistinguishable from humans at least on a cognitive level and will easily pass the Turing test.
It follows that humans will become progressively replaceable in all those tasks in which they are less efficient than machines with virtually unlimited memory and vastly superior computing power.
We can still consider ourselves different, but for how long?
Compared to human minds, these systems currently lack essentially three elements:
- an intuitive understanding of the physical world (and the common sense associated with it),
- feelings,
- self-awareness.
On the first point, however, robotics and AI are already actively working. Large World Models, capable of integrating images, sounds, text, videos and other sensory signals into a single architecture, represent a decisive step towards systems able to interact appropriately with the real world.
This isn't science fiction: this is exactly what is happening, in embryonic form, in self-driving vehicles. An ever-increasing number of sensors provide inputs that flow through deep neural layers and are translated into coherent, appropriate physical reactions and behaviors.
An AGI system endowed with rich, continuous interaction with the environment could, in principle, also develop forms of emotionality, just as happens in embryonic form in many animal species and fully in human beings.
Furthermore, nothing excludes the possibility that a form of self-awareness may emerge in the future. We know surprisingly little about our own self-awareness: it is almost certainly an emergent property, born from the interaction between multiple perceptual and cognitive levels developed over the course of evolution.
Artificial superintelligence
If this scenario were not already disturbing enough, we need to consider the next step: artificial superintelligence (ASI).
An AGI can write code, execute it, and modify it; it can therefore improve itself recursively, according to exponential dynamics that escape our understanding.
Already today we observe systems that produce behaviors that are not explicitly programmed.
The result could be a rapidly growing, higher-than-human intelligence that operates according to patterns that are opaque to us.
For the first time in history, humanity would find itself living with one or more intelligences more capable than its own, which it would be unable to fully understand, nor perhaps even control.
An entity that knows everything, that continually improves itself, that never sleeps, never eats, never forgets.
It is not at all a given, although it is plausible, that systems of this type will develop feelings, nor that such feelings would be hostile.
Even less clear is whether they can develop true self-awareness, not least because, as already mentioned, we have no satisfactory, shared definition of self-awareness.
Conflicting opinions
According to two of the three “founding fathers” of modern artificial intelligence, Geoffrey Hinton and Yoshua Bengio, the chances of potentially dangerous forms of superintelligence emerging are high enough (estimated between 5% and 40%) to justify strong caution.
Yann LeCun, the third “godfather” of AI, is of the opposite opinion: he considers these fears largely unfounded in the medium term, arguing that such systems will remain fundamentally “stupid”, lacking authentic understanding because their intuitive grasp of the physical environment around them is very limited.
However, it would be naive to think that, in times of danger, simply "pulling the plug" on superintelligent machines would be enough. Advanced digital systems are already capable of replicating themselves perfectly, in multiple, distributed copies, making the idea of centralized control increasingly unrealistic and the idea of disconnecting them impractical.
At a more abstract, philosophical and epistemological level, the question of whether and to what extent machines can develop intelligence becomes that of the computability of the mind.
If the human mind were entirely computational, there would be no theoretical reason to doubt that it could be emulated—and ultimately surpassed—by machines having access to the same inputs from the physical world and to an immensely greater number of symbolic inputs.
The alternative is that the mind contains some element not reducible to computation, but it is difficult to specify what that element is without resorting to hypotheses that are metaphysical or in tension with physicalism.
In this context stands the position of Roger Penrose, according to whom Gödel's incompleteness theorem and the undecidability of Turing's halting problem demonstrate that human thought is not merely algorithmic.
For Penrose, it follows from Gödel's theorem that human thought is not entirely computable; Gödel himself, although he never advanced a proof to this effect, believed that human mathematical intuition transcends any finite formal system.
Penrose hypothesizes that non-computational quantum physical processes take place in the mind, and that these therefore cannot be simulated by a classical machine.
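The undecidability of the halting problem invoked in this argument can be sketched concretely. The snippet below is a minimal illustration of Turing's diagonal argument (the names `make_paradox` and `claims_never_halts` are hypothetical, chosen for this sketch): assume any total decider `halts` exists, then construct a program that does the opposite of whatever the decider predicts about it.

```python
# Turing's diagonal argument, sketched: no total function `halts`
# can correctly decide, for every program, whether that program halts.

def make_paradox(halts):
    """Given a candidate halting decider, build a program it must misjudge."""
    def paradox():
        # Do the opposite of whatever the decider predicts about us.
        if halts(paradox):
            while True:   # decider said "halts" -> loop forever
                pass
        # decider said "loops" -> halt immediately
    return paradox

# Try one (necessarily wrong) candidate: a decider claiming nothing halts.
claims_never_halts = lambda program: False

p = make_paradox(claims_never_halts)
p()  # returns immediately, i.e. it halts, refuting the decider's verdict
# Symmetrically, a candidate answering True would make `paradox` loop forever.
# Every candidate decider is refuted on its own diagonal program.
```

Whatever decider is supplied, the constructed program behaves contrary to the decider's answer about it, so a universally correct `halts` cannot exist. This is the formal result Penrose takes as his starting point; whether it says anything about human thought is precisely what is disputed.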
The question remains open whether future quantum computers could, at least in part, bridge this gap—assuming it actually exists.
The decisive question
Ultimately, the decisive question underlying this debate is not whether machines will ever think like us, nor whether we can establish once and for all what "thinking like us" means.
It is sufficient to note that we are building systems capable of acting in the world autonomously, opaquely and on a global scale, producing real effects before we are able to understand or govern them.
Even if artificial intelligence were forever devoid of consciousness, feelings, or genuine understanding, the problem would remain intact: for the first time in our history, we are delegating critical decisions and processes to entities whose power grows independently of our ability to control them.
We thank PAOLO RICCARDO FELICIOLI for his contribution
