AI that wants you well: the ethical implications of nudging and caring AI systems

Co-leaders
- Laurence Devillers (IA Chair – HUMAAINE)
- Marc-Antoine Dilhac (CIFAR Chair in AI ethics)
Partners
- IA Chair – HUMAAINE: http://humaaine-chaireia.fr
The objective of this project is to evaluate the ethical implications of nudging AI systems. A nudging AI system is a device that influences users' behavior in ways they are not aware of. It takes various forms, such as notifications, evaluations, and rankings, but most importantly recommendations in natural language. The theory of nudging developed by Thaler and Sunstein places a moral constraint on the use of nudges to influence individual behavior: a nudge must pursue a good that target users themselves seek, or that is aligned with their best interests.
The project is threefold:
- We will show, based on empirical research, that nudging AI systems shape choice architecture more powerfully than human monitoring does, and that they blur the line between recommendation and manipulation;
- We will specifically address the issue of autonomy and human agency, even in cases where the AI system clearly promotes the user's interest;
- We will establish the ethical requirements that nudging AI systems must meet in order to be considered politically legitimate and socially trustworthy.