Abstract
As intelligent systems are increasingly making decisions that directly affect society, perhaps the most
important upcoming research direction in AI is to
rethink the ethical implications of their actions.
Means are needed to integrate moral, societal and
legal values with technological developments in AI,
both during the design process and as part of
the deliberation algorithms employed by these systems. In this paper, we describe leading ethics theories and propose alternative ways to ensure ethical
behavior by artificial systems. Given that ethics are
dependent on the socio-cultural context and are often only implicit in deliberation processes, methodologies are needed to elicit the values held by designers and stakeholders, and to make these explicit, leading to better understanding of, and trust in, artificial autonomous systems.