The webinar was part of Milan Digital Week 2020, held this year in a fully online format.
During the webinar, the Artificial Intelligence Ethics Model Canvas was presented as a tool for the digital governance of Smart Cities and for evaluating corporate responsibilities in the supply of services and goods, both on a territorial scale and for the individual user.
The session also examined the potential and risks of entrusting the governance of Smart Cities to the Artificial Intelligence systems of companies and specialized start-ups (logistics, mobility, services, participation, health, e-commerce, etc.), drawing on experience already gained in San Francisco (California, Silicon Valley).
Through the AI Ethics Model Canvas, scenarios of urban governance entrusted to artificial intelligence were described, and the ethical conditions for the safety and reliability of processes, services and goods supplied by companies and specialized start-ups were verified, including "reserve plans" for cases in which malicious anomalies occur against citizens and the democratic systems of the "Polis".
The webinar, organized by Piazza Copernico as part of Milan Digital Week, aims to raise the awareness of company MANAGERS and start-up FOUNDERS of the potential risks of criminal liability (Italian Law 231/01), class actions (Law 31/2019) and privacy violations (GDPR 2016/679) arising from the use of ARTIFICIAL INTELLIGENCE (AI) that DOES NOT ETHICALLY CONFORM to the EU Guidelines and OECD Recommendations (human-centric; reliable; fair), especially in the provision of services on a territorial scale (Smart Cities) or to individual users (Smart Life + E-commerce).
The recent EU guidelines ("Ethics of Artificial Intelligence"), the OECD guidelines ("Artificial Intelligence Guidelines") and those of the European privacy authorities ("Recommendations for the compliance of Artificial Intelligence with the GDPR") strongly emphasize the need to protect users and citizens from the potential risks and/or damages deriving from the use of AI that is not ethically compliant in the provision of services or products (both on an urban scale and at the level of the individual user/citizen).
Conflict situations may arise that lead to legal action against companies and specialized start-ups, in the fields of:
- Criminal liability;
- Class action;
- Privacy violation;
- Work safety;
- Environmental sustainability.
At the same time, as with other product positionings (gluten-free; organic; ecological; energy-efficient, etc.), ensuring that the AI used will not harm people, will support the environment, will not discriminate (by race, gender, religion, socio-economic class, etc.) and will protect user privacy (personal, trade union, health and political data) can be a decisive added value in developing the "corporate brand" and in building a relationship of "trust" with the community of its consumers.
It therefore becomes necessary to raise awareness among the top management of companies and among start-up founders of the strategic value of acquiring (ex-ante; in itinere; ex-post) CONTROL AND ETHICAL VERIFICATION TOOLS for AI developed or acquired by the company, so as to guarantee safety and reliability from the conception and design phase onwards.
The primary purpose, therefore, is to prevent, both on an urban scale and at the level of the individual citizen:
• Financial losses due to self-referencing automatic trading loops;
• Errors in industrial investments due to biased big data;
• Mass electronic identity theft or systematic violations of privacy;
• Racial, social or religious discrimination in the granting of credit or loans;
• Gender discrimination in the evaluation of CVs or in personnel management;
• Fatal accidents at work in Industry 4.0 production cycles;
• Fatal accidents deriving from autonomous driving systems (land and air);
• Collapse of energy and water supplies due to network malfunctions or hacking;
• Fake news and hate campaigns on social networks or online information platforms.
Adopting an ethical model therefore becomes a strategic choice that guarantees the global competitiveness of the product/service that uses AI, because it:
a) reassures the user communities that will use it;
b) lowers the risk of commercial boycott or hate campaigns on social media;
c) reduces the damages or class actions that could derive from harm to human beings (whether individuals or entire communities);
d) ensures regulatory compliance;
e) increases the "reputation" of the corporate Brand.
The webinar is addressed to:
• COMPANY MANAGERS or START-UP FOUNDERS, engaged in managing the development of AI or its use in support of products/processes/services, and who must therefore guarantee reliability and safety for the entire supply chain.
• MINISTERIAL DIRECTORS or DIRECTORS OF PUBLIC BODIES, who are asked to develop functions or services capable of offering citizens a different (non-bureaucratic) public administration, one that does not discriminate against weaker groups and in any case does not cause damage to, or privileges for, particular economic or business categories.
• RESEARCHERS or PROGRAMMERS, engaged (individually or in teams) in developing innovative solutions and techniques in the area of neural networks or machine learning, so that they can evaluate the potential consequences of research results (or solutions) before they are placed on the market.
• STAKEHOLDERS and CITIZENS, focused on identifying new, so far unexplored areas of responsibility, or on checking and verifying the claims of their reference groups (from trade associations to consumer or minority associations), especially with regard to participation in the political life and government of Smart Cities.
• The potential ethical risks to be evaluated in an Artificial Intelligence system
o The ethical and legal aspects of an AI:
- The human-centric approach in planning, management and monitoring,
- The pillar of the guarantees: a reliable and robust AI,
- The ethical aims of AI,
- The ethics of AI as a factor of global competitiveness of the EU,
- The relationship between AI Ethics and GDPR,
- The issue of the liability and/or legal personality of AI and robots,
- The profiles of legal responsibility for companies (current legislation),
- The profiles of collective responsibility for urban-scale services.
• The EU Guidelines and the OECD Guidelines on AI Ethics
o The EU coordinated plan on AI
o Reliability requirements in the implementation phase:
- Data management,
- Design for everyone,
- Management of AI autonomy and human control,
- Respect for human autonomy (decision-making) and Privacy,
- Robustness, safety, transparency,
- Involvement of stakeholders.
• The AI Ethics Model Canvas
o Introduction to the Business Model Canvas
o The innovative Canvas for AI Ethics:
- The fundamentals:
• Human autonomy,
• Data governance,
• GDPR and Privacy,
• Social and environmental impact,
• Human impact,
• Wellness and benefits;
- The transversals:
• Fairness and equality,
• Robustness and security,
• Transparency and traceability;
o The use of the "Legal Design" methodology for transparency
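As an illustration only, the canvas dimensions listed above could be modelled as a simple checklist to track which ethical aspects of an AI project have been addressed. This is a hypothetical sketch: the class, function names and workflow are assumptions for illustration, not part of the official AI Ethics Model Canvas methodology.

```python
from dataclasses import dataclass, field

# The "fundamentals" and "transversals" of the AI Ethics Model Canvas,
# as listed in the webinar program.
FUNDAMENTALS = [
    "Human autonomy",
    "Data governance",
    "GDPR and Privacy",
    "Social and environmental impact",
    "Human impact",
    "Wellness and benefits",
]
TRANSVERSALS = [
    "Fairness and equality",
    "Robustness and security",
    "Transparency and traceability",
]


@dataclass
class EthicsCanvas:
    """Hypothetical checklist: maps each canvas dimension to workshop notes."""
    notes: dict = field(default_factory=dict)

    def record(self, dimension: str, note: str) -> None:
        # Only dimensions defined by the canvas may be recorded.
        if dimension not in FUNDAMENTALS + TRANSVERSALS:
            raise ValueError(f"Unknown canvas dimension: {dimension}")
        self.notes[dimension] = note

    def missing(self) -> list:
        # Dimensions not yet addressed in the canvas session.
        return [d for d in FUNDAMENTALS + TRANSVERSALS if d not in self.notes]


canvas = EthicsCanvas()
canvas.record("Data governance", "Data sources audited quarterly")
print(len(canvas.missing()))  # prints 8: dimensions still to address
```

In such a sketch, an AI project would only be considered ready for ethical review once `missing()` returns an empty list, mirroring the ex-ante verification the webinar advocates.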
Arch. DANIELE VERDESCA
Information architect, he is currently also Director of the Cassa Edile of the province of Lecce. Former adjunct professor at the Department of Political Economy of the University of Siena and General Director of Formedil in Rome, he has long been at the forefront of promoting the digitization of management systems in the construction sector.
Co-founder of a start-up on semantic artificial intelligence, he is currently engaged in several innovative projects aimed at the use of AI in automated controls of corporate compliance (Regtech). Speaker at conferences and workshops, he is the author of numerous books and technical articles on the topic.