Further Thoughts on Defining f(x) for Ethical Machines: Ethics, Rational Choice, and Risk Analysis
DOI: https://doi.org/10.32473/flairs.36.133203
Keywords: Ethics of artificial intelligence, Ethical pluralism, Autonomous ethical agents, Automated behavior, Automated reasoning
Abstract
There is a tendency to anthropomorphize artificial intelligence (AI) and reify it as a person. From the perspective of machine ethics and ethical AI, this has fostered the belief that truly autonomous ethical agents (i.e., machines and algorithms) can be defined, and that machines could, by themselves, behave ethically and perform actions that are justified from a normative standpoint. Under this assumption, and given that utilities and risks are generally seen as quantifiable, many scholars have regarded consequentialism (utilitarianism) and rational choice theory as likely candidates for implementation in automated ethical decision procedures, for instance to assess and manage risks as well as to maximize expected utility. Building on a recent example from the machine ethics literature, we use computer simulations to argue that technical issues with ethical ramifications leave room for reasonable disagreement even when algorithms are based on ethical and rational foundations such as consequentialism and rational choice theory. In doing so, we aim to illustrate the limitations of automated behavior and ethical AI and, incidentally, to raise awareness of the limits of so-called ethical agents.
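The kind of reasonable disagreement the abstract alludes to can be sketched with a toy example (ours, not the paper's actual simulations): even within rational choice theory, two standard decision rules applied to the same quantified choice problem can recommend different actions, so "implementing rational choice" does not by itself fix the machine's behavior.

```python
# Hypothetical illustration (not the paper's simulations): two decision rules
# from rational choice theory can disagree on the very same choice problem.

# Two actions, each modeled as a lottery over (probability, utility) outcomes.
actions = {
    "A": [(0.5, 10.0), (0.5, 0.0)],   # risky: high payoff or nothing
    "B": [(1.0, 4.0)],                # safe: guaranteed modest payoff
}

def expected_utility(lottery):
    """Expected-utility criterion: sum of probability-weighted utilities."""
    return sum(p * u for p, u in lottery)

def worst_case(lottery):
    """Maximin criterion: judge an action by its worst possible outcome."""
    return min(u for _, u in lottery)

# Expected-utility maximization prefers the risky action (EU = 5.0 vs 4.0)...
best_eu = max(actions, key=lambda a: expected_utility(actions[a]))

# ...while the risk-averse maximin rule prefers the safe one (worst: 0.0 vs 4.0).
best_maximin = max(actions, key=lambda a: worst_case(actions[a]))

print(best_eu, best_maximin)  # A B
```

Both rules are defensible on rational grounds, yet an automated agent must be programmed with one of them; that design choice is itself an ethical decision left open by the formal framework.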
Copyright (c) 2023 Clayton Peterson

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.