Aurora is a citizen of the digital world. She is threatened. The digital systems that surround her are increasingly able to make autonomous decisions over and above her and on her behalf. She feels that her moral rights, as well as the social, economic and political spheres, can be affected by the behavior of such systems. Although unavoidable, the digital world is becoming uncomfortable and potentially hostile to her as a human being and as a citizen. Notwithstanding the introduction of the GDPR and of initiatives to establish criteria on software transparency and accountability, Aurora feels vulnerable and unprotected.
EXOSOUL will build a personalized software exoskeleton that enhances and protects Aurora by mediating her interactions with the digital world according to her own ethics of actions and privacy of data. The exoskeleton disallows or adapts the interactions that would result in unacceptable or morally wrong behaviors according to Aurora's ethics and privacy preferences. With her software shield, Aurora will feel empowered and in control, and on a more balanced footing with the other actors of the digital world.
To reach the breakthrough result of automatically building a personalized exoskeleton, EXOSOUL will address multidisciplinary challenges never addressed before: (i) defining the scope for and inferring citizens' ethical preferences; (ii) treating privacy as an ethical dimension managed through the disruptive notion of active data; and (iii) automatically synthesizing ethical actuators, i.e., connector components that mediate the interaction between the user and the digital world to enforce her ethical preferences. EXOSOUL will deliver the first concrete contribution to an ethical approach to regulating the digital world, in line with the goals of the European Data Protection Supervisor Strategy 2015-2019.
Motivation – In their ordinary life, citizens of the digital world continuously interact with software systems, e.g., by using a mobile device or from on board an (autonomous) car. These systems are increasingly autonomous in making decisions over and above their users or on their behalf. Often, their autonomy exceeds the system boundaries and invades user prerogatives. As a consequence, ethical issues – privacy ones included (e.g., unauthorized disclosure and mining of personal data, access to restricted resources) – are emerging as matters of utmost concern, since they impact the moral rights of each human being and affect the social, economic, and political spheres.
The vision – The goal of EXOSOUL is to equip humans with an automatically generated exoskeleton, a software shield that protects them and their personal data by mediating all interactions with the digital world that would result in unacceptable or morally wrong behaviors according to their ethical and privacy preferences. The exoskeleton can take a whole spectrum of forms: from customized soft libraries that the individual may deploy on the machines being used, to a sophisticated software interface that an individual may “wear”, possibly even deployed on a body chip. Empowering users with a personalized exoskeleton will introduce more symmetry of power into the present digital world and will effectively put humans at the center. Exoskeleton development also opens unprecedented business opportunities, in the same way open source software did when it promoted the ethical principles of free software against the monopoly of proprietary software producers. The European Union (EU) and its companies can become the scientific and technological leaders of future user-driven privacy and ethics systems. Furthermore, returning part of the (digital) control to users helps to solve liability issues in autonomous systems by re-attributing responsibility to users according to their specified ethics.
The challenges ahead – We address the challenge of automatically synthesizing a software exoskeleton starting from the ethics and privacy preferences of the user. In the ethical sphere, this requires answering several cutting-edge research questions concerning the need to: (i) identify a space of ethics and privacy preferences for users, assess their compatibility with regulations, and orchestrate interactions among users endorsing different preferences, so as to prevent deadlocks and promote best ethical practices in digital societies; (ii) infer ethics and privacy preferences from the user, given that neither a person nor a society applies moral categories separately; rather, everyday morality is in constant flux among norms, utilitarian assessment of consequences, and evaluation of virtues. We define the exoskeleton by considering two specific classes of interactions that citizens have with the digital world. The first one concerns interactions that involve the exchange of personal data, and that as such impact the privacy dimension, notably interactions with mobile apps through mobile devices. Until now, data have been considered passive entities, and the logic implementing their life-cycle has been decoupled from the data themselves. For each datum that is shared over the Internet, the owner loses track and control of it.
Logic theories and innovative mechanisms for inferring and specifying privacy and ethical user preferences
To address the challenge of specifying and inferring soft ethical preferences, we will start by investigating a kind of “functional morality” [1], which enables machines to autonomously assess and respond to moral challenges. Our own work has addressed various hard ethics problems in human interactions with AI, robotic, and bionic systems [2, 3, 4, 5], concerning the analysis of conflicts between competing normative ethics approaches and the development of public ethical policies to defuse those conflicts.
In operative terms, we will consider the relevant legislation of the member states (e.g., the GDPR, https://eugdpr.org/), the reports of ethical reference groups (https://edps.europa.eu/sites/edp/files/publication/18-01-25_eag_report_en.pdf, https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf), the normative approaches to ethics, and the European perspective on responsible computing [6]. Furthermore, we will elicit patterns for specifying privacy and ethics out of existing privacy and ethical rules defined by both the academic and industrial communities, examples of which may be found in our previous work [7].
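As a purely illustrative example of what such elicited specification patterns might look like, the sketch below encodes privacy preferences as prioritized rules with a first-match semantics; the field names, the rule format, and the default-to-ask policy are assumptions for illustration, not part of EXOSOUL.

```python
# Hypothetical sketch: privacy preferences as an ordered rule list.
# All names and the "first matching rule wins" semantics are illustrative.
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    DENY = "deny"
    ASK = "ask"    # unspecified cases are escalated to the user

@dataclass(frozen=True)
class PrivacyRule:
    data_category: str  # e.g. "location", "contacts"
    recipient: str      # e.g. "third-party-ads", "navigation-app"
    action: Action

# A user's preference profile: earlier rules take priority.
profile = [
    PrivacyRule("location", "third-party-ads", Action.DENY),
    PrivacyRule("location", "navigation-app", Action.ALLOW),
]

def decide(rules, data_category, recipient):
    """Return the action prescribed by the first matching rule."""
    for rule in rules:
        if rule.data_category == data_category and rule.recipient == recipient:
            return rule.action
    return Action.ASK
```

Under this toy semantics, a request by `third-party-ads` for `location` is denied, while any combination the user never specified falls back to asking her.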
We will employ an iterative approach to the design and validation of the innovative mechanisms for inferring and specifying ethical and privacy preferences. Representative users will be in the loop at every stage.
[1] W. Wallach and C. Allen. Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, New York, NY, USA, 2010.
[2] D. Amoroso and G. Tamburrini. The ethical and legal case against autonomy in weapons systems. Global Jurist, 17, 2017.
[3] G. Tamburrini. On the ethical framing of research programs in robotics. AI Soc., 31(4):463–471, 2016.
[4] A. Bicchi and G. Tamburrini. Social robotics and societies of robots. The Information Society, 31(3):237–243, 2015.
[5] M. Santoro, D. Marino, and G. Tamburrini. Learning robots interacting with humans: from epistemic risk to responsibility. AI & SOCIETY, 22(3):301–314, 2008.
[6] P. Inverardi. The European perspective on responsible computing. Commun. ACM, 62(4):64, 2019. https://doi.org/10.1145/3311783
[7] M. Autili, L. Grunske, M. Lumpe, P. Pelliccione, and A. Tang. Aligning qualitative, real-time, and probabilistic property specification patterns using a structured English grammar. IEEE Transactions on Software Engineering, 41(7):620–638, 2015.
Exoskeleton design and novel techniques and tools for managing its life-cycle
This research theme concerns the definition of the exoskeleton software architecture and of the run-time analysis mechanisms, such as monitoring and enforcement, that serve to control the exoskeleton behavior according to the specified privacy and ethics preferences. An exoskeleton is composed of two parts: active data and an ethical actuator.
Active data wrap personal data with the logic required to access them and manage their life-cycle – from creation to sharing, usage, and destruction – according to the specified privacy preferences. Conformance to the privacy preferences is guaranteed by a monitoring and enforcing component that uses the wrapper's internal operations to continuously check and update the life-cycle status, so as to promptly detect and correct problems before privacy-violating actions are performed.
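The active-data idea can be illustrated with a minimal wrapper, assuming a recipient allow-list and an expiry date as the privacy preferences; the class, its methods, and the audit log are hypothetical names introduced only for this sketch.

```python
# Minimal sketch of "active data": the datum carries its own access
# logic and life-cycle state. All names and policies are illustrative.
from datetime import datetime, timezone

class ActiveDatum:
    def __init__(self, value, allowed_recipients, expires_at):
        self._value = value
        self._allowed = set(allowed_recipients)
        self._expires_at = expires_at   # timezone-aware datetime
        self._destroyed = False
        self._log = []                  # audit trail of access attempts

    def read(self, recipient):
        """Release the value only if the life-cycle policy permits it."""
        now = datetime.now(timezone.utc)
        ok = (not self._destroyed
              and recipient in self._allowed
              and now < self._expires_at)
        self._log.append((now, recipient, ok))
        if not ok:
            raise PermissionError(f"access denied to {recipient}")
        return self._value

    def destroy(self):
        """End of the life-cycle: the value becomes unreachable."""
        self._value = None
        self._destroyed = True
```

Because every access goes through `read`, the policy check and the audit entry happen before the value ever leaves the wrapper, which is the point of coupling the life-cycle logic to the datum itself.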
The ethical actuator translates conceptual ethical principles into concrete statements that serve as the basis for ethical decision making. An ethical actuator is composed of: (i) ethical rules defined by users, (ii) a monitor, (iii) an enforcer, and (iv) ethical actions.
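A minimal sketch of how the four parts named above could fit together, assuming interactions are represented as plain dictionaries; the concrete rule, the adaptation strategy (stripping the offending payload rather than aborting), and all function names are illustrative assumptions.

```python
# Illustrative composition of the four parts: (i) user-defined rules,
# (ii) a monitor, (iii) an enforcer, (iv) the resulting ethical action.

def user_rules(interaction):
    # (i) a user-defined ethical rule; purely illustrative predicate
    if interaction.get("shares_personal_data") and not interaction.get("consented"):
        return "deny"
    return "allow"

def monitor(interaction):
    # (ii) the monitor classifies each proposed interaction
    return user_rules(interaction)

def enforce(interaction):
    # (iii) the enforcer turns the verdict into (iv) a concrete action:
    # here it adapts rather than aborts, stripping the offending payload.
    verdict = monitor(interaction)
    if verdict == "deny":
        adapted = {k: v for k, v in interaction.items() if k != "payload"}
        adapted["shares_personal_data"] = False
        return adapted
    return interaction
```

An interaction that would share personal data without consent is rewritten into a harmless one, while consented interactions pass through unchanged.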
[1] P. Zhang, P. Pelliccione, H. Leung, and X. Li. Automatic generation of predictive monitors from scenario-based specifications. Information and Software Technology, 98:5–31, 2018.
[2] G. L. Scoccia, S. Ruberto, I. Malavolta, M. Autili, and P. Inverardi. An investigation into Android run-time permissions from the end users' perspective. MOBILESoft@ICSE 2018: 45–55.
[3] G. L. Scoccia, I. Malavolta, M. Autili, A. Di Salle, and P. Inverardi. User-centric Android flexible permissions. ICSE (Companion Volume) 2017: 365–367.
[4] A. Tang, P. Pelliccione, P. Lago, H. Muccini, and I. Malavolta. What industry needs from architectural languages: A survey. IEEE Transactions on Software Engineering, 39(6):869–891, 2013.
[5] R. Wohlrab, U. Eliasson, P. Pelliccione, and R. Heldal. Improving the consistency and usefulness of architecture descriptions: Guidelines for architects. ICSA 2019, Hamburg, Germany, March 25–29, 2019.
[6] M. Autili, P. Inverardi, and M. Tivoli. Choreography realizability enforcement through the automatic synthesis of distributed coordination delegates. Sci. Comput. Program., 160:3–29, 2018.
[7] M. Autili, D. Di Ruscio, A. Di Salle, P. Inverardi, and M. Tivoli. A model-based synthesis process for choreography realizability enforcement. FASE 2013: 37–52.
This research theme concerns the definition and realization of automated synthesis methods for generating: (i) a domain-independent exoskeleton starting from the user's ethical and privacy preferences, and (ii) a domain-specific specialization of the domain-independent exoskeleton from inputs provided by domain experts. These inputs comprise the information required to produce the code of the specialized exoskeleton and to package it as required by the target execution environment.
This is extremely challenging, since it has to cope with the complexity of representing and enforcing ethical and privacy rules. However, we can build on our expertise in the prevention of interaction mismatches. Indeed, in our previous work [1, 2, 3, 4, 5, 6, 7] we exploited architectural specifications, including interaction and communication patterns, APIs, etc., to automatically generate integration and coordination code for the components forming a target distributed system.
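By analogy with that synthesis work, the following sketch generates a mediator from a declarative preference specification; the rule format, the mediator interface, and the suppress-by-returning-`None` convention are assumptions introduced only for illustration.

```python
# Hedged sketch of the synthesis step: from declarative preferences,
# generate a mediator (here, a Python closure) that filters the calls
# an application may make on the user's behalf.

def synthesize_mediator(preferences):
    """Build a mediator from a list of {"api", "data", "action"} rules."""
    denied = {(p["api"], p["data"])
              for p in preferences if p["action"] == "deny"}

    def mediator(api_call, data_kind, perform):
        if (api_call, data_kind) in denied:
            return None        # interaction suppressed by the exoskeleton
        return perform()       # interaction delegated unchanged
    return mediator

# Example: the user forbids uploading location data.
prefs = [{"api": "upload", "data": "location", "action": "deny"}]
mediate = synthesize_mediator(prefs)
```

The generated `mediate` closure plays the role of the coordination code in our earlier synthesis work: it sits between the user and the digital world and enforces the declared preferences at each call.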
Demonstrators and practical guidelines