Abstract: About a decade ago, the concept of 'nudging' was popularized by behavioural scientists and economists, including Richard Thaler and Cass Sunstein (2008). Nudge theory proposes ways in which behaviour can be indirectly influenced by altering the environment, or choice architecture, usually to trigger some desired behavioural outcome by exploiting our natural cognitive biases. The idea is, in a sense, nothing new: advertisers have long known that by capturing our attention through pictures and words they can influence our decision-making. What is new, however, are the various ways in which this can now be done online, for example, by manipulating our search results through suggestive search engines, purchasing recommendations, targeted advertisements, and even by integrating advertising into our social media feeds. Moreover, governments, corporations, and other institutions now have the capacity to target nudges at each individual. Using algorithms that operate on big data, nudges can be customized for individuals, and their effectiveness can be tracked and adjusted as the algorithm learns from feedback data on a user's behaviour. These technologies raise a host of new ethical questions about paternalism, consent, privacy, and manipulation. In this talk I will examine the ethics of the nudging effects of AI systems on human behaviour (e.g. the influence of recommendations), as well as how humans might in turn nudge these AI systems to achieve more desirable outcomes.
Bio: Coming Soon
Karina Vold, PhD
AI Researcher at the University of Cambridge