On 19 June 2024, at the Cottino Social Impact Campus headquarters in Turin, Asfor – the Italian Association for Management Training – held the XXII Management Training Day, the event it dedicates every year to the role of management education in understanding the evolving scenarios that involve people within organizations.
Unsurprisingly, this year's theme was dedicated to a debate on artificial intelligence in education and beyond, a topic now attracting everyone's attention, under the title: "AI, social impact and organizational learning. What changes in management culture and training?"
The conference was a further opportunity to reflect on the application of artificial intelligence, and it developed into a fruitful dialogue between companies, the university world and training professionals, who discussed issues that have now become fundamental to our field.
The conference unfolded over three sessions, all full of interesting talks, often theoretical contributions but in some cases dedicated to projects already under way.
Among the numerous talks, however, what stands out above all is the explosive session entitled "The human being in front of artificial intelligence: in search of a point of balance", in which Don Luca Peyron, Professor of Theology, scientific advisor of the Humane Technology Lab of the Catholic University, and coordinator of the Service for the Digital Apostolate of Turin, illuminated the conference with an approach both brilliant and profound, offering many points for reflection on the relationship between man and machine.
It is almost impossible to summarize, given Peyron's overwhelming rhetorical force, but even a few isolated sentences can convey the overall message that came through so powerfully.
- Artificial intelligence must be guided by human beings
- Companies have the responsibility to spread among their people a culture of how machines should be used
- We build machines because we naturally aim to save energy. We must recover the founding desire to "be there", to do something we like even when it saves no energy
- We need to understand which jobs a machine must not do for us
- A digital native needs an analog native to understand the differences
- The AI Act is a legislative instrument, based on minimizing harm. But our ethical question must not be "what harm is there?" but rather "what good is there?"
- We are made to desire; we must recover an ethics of desire, that is, think of rules that seek all the good there is, rather than rules that merely aim for the lesser evil.