Aurora is a citizen of the digital world. She is threatened. The digital systems that surround her are increasingly able to make autonomous decisions over and above her and on her behalf. She feels that her moral rights, as well as the social, economic, and political spheres she inhabits, can be affected by the behavior of such systems. Although she cannot avoid it, the digital world is becoming uncomfortable and potentially hostile to her as a human being and as a citizen. Notwithstanding the introduction of the GDPR and of initiatives to establish criteria for software transparency and accountability, Aurora feels vulnerable and unprotected.
EXOSOUL will build a personalized software exoskeleton that enhances and protects Aurora by mediating her interactions with the digital world according to her own ethics of actions and privacy of data. The exoskeleton disallows or adapts interactions that would result in unacceptable or morally wrong behaviors according to Aurora's ethics and privacy preferences. With her software shield, Aurora will feel empowered and in control, and on a more equal footing with the other actors of the digital world.
To reach the breakthrough result of automatically building a personalized exoskeleton, EXOSOUL will address multidisciplinary challenges never addressed before: (i) defining the scope of and inferring citizens' ethical preferences; (ii) treating privacy as an ethical dimension managed through the disruptive notion of active data; and (iii) automatically synthesizing ethical actuators, i.e., connector components that mediate the interaction between the user and the digital world to enforce her ethical preferences. EXOSOUL will deliver the first concrete contribution to an ethical approach to regulating the digital world, in line with the goals of the European Data Protection Supervisor strategy 2015-2019.
Motivation – In their ordinary life, citizens of the digital world continuously interact with software systems, e.g., by using a mobile device or from on board an (autonomous) car. These systems are increasingly autonomous in making decisions over and above the users or on their behalf. Often, their autonomy exceeds the system boundaries and invades user prerogatives. As a consequence, ethical issues – privacy ones included (e.g., unauthorized disclosure and mining of personal data, access to restricted resources) – are emerging as matters of utmost concern, since they impact the moral rights of each human being and affect the social, economic, and political spheres.
The vision – The goal of EXOSOUL is to equip humans with an automatically generated exoskeleton, a software shield that protects them and their personal data by mediating all interactions with the digital world that would result in unacceptable or morally wrong behaviors according to their ethical and privacy preferences. The exoskeleton can take a whole spectrum of forms: from customized soft-libraries that the individual may deploy on the machines being used, to a sophisticated software interface that an individual may "wear", possibly deployed on a body chip. Empowering users with a personalized exoskeleton will introduce more symmetry of power into the present digital world and will effectively put humans at the center. Exoskeleton development also opens unprecedented business opportunities, in the same way open source software did when it promoted the ethical principles of free software against the monopoly of proprietary software producers. The European Union (EU) and its companies can become the scientific and technological leaders of future user-driven privacy and ethics systems. Furthermore, returning part of the (digital) control to users helps solve liability issues in autonomous systems by reassigning responsibility to users according to their specified ethics.
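To make the mediation idea concrete, the following is a minimal illustrative sketch (not part of the proposal's design) of an exoskeleton component that allows, adapts, or denies interactions based on a user's declared preferences. All class and field names here are hypothetical, chosen only for illustration.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    ADAPT = "adapt"   # e.g., coarsen location precision before release
    DENY = "deny"


@dataclass
class Interaction:
    """A request coming from a digital-world actor, e.g. a mobile app."""
    actor: str        # who initiates the interaction
    action: str       # e.g. "read", "share"
    resource: str     # e.g. "location", "contacts"


class Exoskeleton:
    """Mediates all interactions according to the user's preferences."""

    def __init__(self, preferences):
        # preferences maps (action, resource) pairs to a Verdict
        self.preferences = preferences

    def mediate(self, interaction: Interaction) -> Verdict:
        # Deny by default: interactions the user never sanctioned
        # are not let through.
        return self.preferences.get(
            (interaction.action, interaction.resource), Verdict.DENY
        )


prefs = {
    ("read", "location"): Verdict.ADAPT,
    ("read", "contacts"): Verdict.DENY,
    ("read", "weather"): Verdict.ALLOW,
}
shield = Exoskeleton(prefs)
print(shield.mediate(Interaction("maps_app", "read", "location")).value)  # adapt
```

The deny-by-default design choice mirrors the proposal's protective stance: anything not explicitly covered by the user's ethics and privacy preferences is blocked rather than permitted.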
The challenges ahead – We address the challenge of automatically synthesizing a software exoskeleton starting from the ethics and privacy preferences of the user. In the ethical sphere, this requires answering several cutting-edge research questions concerning the need to: (i) identify a space of ethics and privacy preferences for users, assess their compatibility with regulations, and orchestrate interactions among users endorsing different preferences, so as to prevent deadlocks and promote best ethical practices in digital societies; (ii) infer ethics and privacy preferences from the user, given that neither a person nor a society applies moral categories separately; rather, everyday morality is in constant flux among norms, utilitarian assessment of consequences, and evaluation of virtues. We define the exoskeleton by considering two specific classes of interactions that citizens have with the digital world. The first concerns interactions that involve the exchange of personal data, and that as such impact the privacy dimension, notably interactions with mobile apps through mobile devices. Until now, data have been considered passive entities, and the logic implementing their life-cycle is decoupled from the data itself. For each datum shared over the Internet, the owner loses track of and control over it.
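The contrast between passive data and the proposal's notion of active data can be illustrated by a toy sketch in which a datum carries its own life-cycle logic, so the owner retains control even after sharing. This is a hypothetical illustration under our own assumptions, not the proposal's actual mechanism; all names are invented.

```python
class ActiveDatum:
    """A datum bundled with the logic governing its own life-cycle."""

    def __init__(self, value, owner, allowed_purposes, max_reads):
        self._value = value
        self.owner = owner
        self.allowed_purposes = set(allowed_purposes)
        self._reads_left = max_reads
        self.audit_log = []   # the owner can trace every access attempt

    def read(self, requester, purpose):
        permitted = purpose in self.allowed_purposes and self._reads_left > 0
        self.audit_log.append((requester, purpose, permitted))
        if not permitted:
            return None        # the datum refuses to disclose itself
        self._reads_left -= 1
        return self._value

    def revoke(self):
        """The owner withdraws consent: all future reads fail."""
        self._reads_left = 0


datum = ActiveDatum("41 Example St.", owner="Aurora",
                    allowed_purposes={"delivery"}, max_reads=1)
print(datum.read("courier_app", "delivery"))   # 41 Example St.
print(datum.read("ads_broker", "marketing"))   # None
```

Because the access logic travels with the datum rather than residing in the recipient's system, the owner keeps track (via the audit log) and control (via purpose limits, read quotas, and revocation) after sharing.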
Logic theories and innovative mechanisms for inferring and specifying privacy and ethical user preferences
To address the challenge of specifying and inferring soft ethical preferences, we will start by investigating a kind of "functional morality" [1], which enables machines to autonomously assess and respond to moral challenges. Our own work has addressed various hard ethics problems in human interactions with AI, robotic, and bionic systems [2, 3, 4, 5], concerning the analysis of conflicts between competing normative ethics approaches and the development of public ethical policies to defuse those conflicts.
In operative terms, we will consider the relevant legislation of the member states (e.g., GDPR, https://eugdpr.org/), ethical reference groups (https://edps.europa.eu/sites/edp/files/publication/18-01-25_eag_report_en.pdf, https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf), the normative approaches to ethics, and the European perspective on responsible computing [6]. Furthermore, we will elicit patterns for specifying privacy and ethics out of existing privacy and ethical rules defined by both the academic and industrial communities, examples of which may be found in our previous work [7].
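As an illustration of what a specification pattern for a privacy rule might look like, consider the classical "absence" pattern ("Globally, it is never the case that P holds"), here instantiated as "personal data is never shared without prior consent" and checked over a trace of events. This is a simplified sketch under our own assumptions, not the pattern catalogue cited above; event fields and names are hypothetical.

```python
# Absence pattern: the forbidden predicate never holds on any event
# of the observed interaction trace.

def absence_holds(trace, forbidden):
    """Return True iff no event in the trace satisfies `forbidden`."""
    return all(not forbidden(event) for event in trace)


def shared_without_consent(event):
    # Hypothetical event shape: a dict with an "action" field and an
    # optional boolean "consent" flag.
    return event["action"] == "share" and not event.get("consent", False)


trace = [
    {"action": "collect", "data": "location"},
    {"action": "share", "data": "location", "consent": True},
]
print(absence_holds(trace, shared_without_consent))  # True
```

Instantiating a small catalogue of such patterns (absence, universality, response, and so on) with privacy-relevant predicates is one plausible way to bridge structured-English rules and machine-checkable specifications.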
We will employ an iterative approach to the design and validation of the innovative mechanisms for inferring and specifying ethical and privacy preferences. Representative users will be in the loop at every stage.
[1] W. Wallach and C. Allen. Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, New York, NY, USA, 2010.
[2] D. Amoroso and G. Tamburrini. The ethical and legal case against autonomy in weapons systems. Global Jurist, 17(1), January 2017.
[3] G. Tamburrini. On the ethical framing of research programs in robotics. AI & Society, 31(4):463–471, November 2016.
[4] A. Bicchi and G. Tamburrini. Social robotics and societies of robots. The Information Society, 31(3):237–243, 2015.
[5] M. Santoro, D. Marino, and G. Tamburrini. Learning robots interacting with humans: from epistemic risk to responsibility. AI & Society, 22(3):301–314, January 2008.
[6] P. Inverardi. The European perspective on responsible computing. Communications of the ACM, 62(4):64, March 2019. DOI: https://doi.org/10.1145/3311783.
[7] M. Autili, L. Grunske, M. Lumpe, P. Pelliccione, and A. Tang. Aligning qualitative, real-time, and probabilistic property specification patterns using a structured English grammar. IEEE Transactions on Software Engineering, 41(7):620–638, July 2015.