LIMITS OF MACHINE MORALITY
15.5.2025 / essay
Daniel Putkinen
I reflect on Artificial Moral Agents (AMAs) and argue that while machine autonomy may be inevitable, morality should remain a human responsibility.
Machine ethics is a field concerned with the design and implementation of artificial intelligence (AI) systems that can make ethical decisions. It explores the possibility of giving human-made systems the ability to reason about what is right and what is wrong (Allen & Wallach, 2012). The practical implementation of machine ethics lies in the development of Artificial Moral Agents (AMAs), also known as “moral machines”: robots and computers designed to participate in morally significant contexts, such as (but not limited to) military settings or hospitals (Allen & Wallach, 2012). AMAs are envisioned to make morally sound decisions based on ethical principles that align with human values.
The question of whether robots or AI can be truly moral remains a subject of ongoing debate. Some researchers, such as Allen and Wallach (2012), argue that developing AMAs is not only possible but, in a weak sense, inevitable, particularly as machines gain more autonomy. They propose that machine morality might exist on a spectrum, from basic “operational morality” to a more sophisticated “functional morality,” while acknowledging that “full moral agency” comparable to human capabilities poses profound challenges, including questions of consciousness and emotional capacity (Allen & Wallach, 2012).
On the other hand, critics such as Van Wynsberghe and Robbins (2019) raise objections to the promotion of AMAs, calling for a deeper examination of the motivations for developing morally capable machines.
The development of Artificial Moral Agents is primarily driven by the increasing autonomy of machines in morally significant areas of everyday life. As AI takes on a greater role in decision-making in fields such as healthcare, transportation, and law enforcement, its actions and decisions can have negative consequences for humans. This, proponents argue, demands a capacity for moral reasoning in AI systems (Allen & Wallach, 2012).
However, my reflection on machine ethics and AMAs leads me to a position of skepticism and raises concerns about the possibility of machines being moral agents. I reject the idea that endowing machines with morality is a needed technical breakthrough; rather, I see it as a fundamental shift, transforming tools meant to serve humanity into entities that risk undermining human agency and ethical judgment. Technology, from its inception, was meant to function as a tool, not as a substitute for human decision-making (Van Wynsberghe & Robbins, 2019). Just as a drill is designed to assist in creating holes, or a shovel to aid in digging, technology should enhance human capabilities, not replace the cognitive and ethical responsibilities that come with everyday decision-making. The responsibility for dealing with ethically charged situations should therefore always remain with human beings.
I align my views with those of Van Wynsberghe and Robbins (2019), who raise critical concerns regarding moral deskilling in the context of AMAs. In my view, if humanity relies too heavily on machines to make ethical decisions, we risk the decline of our own moral capacity and ethical judgment. I see the possibility of this loss as catastrophic, not only because of the practical dangers of AMAs making flawed decisions, such as misjudging a patient’s needs in healthcare or failing to grasp the context and emotional complexity of a human situation in law enforcement, but because ethics and morality are deeply rooted in human cognition, shaped by memories, emotions, and personal experiences (Van Wynsberghe & Robbins, 2019). Ethics is not a fixed set of rules; it is something that constantly develops in the human mind. If we outsource moral decision-making to machines, we risk losing part of what makes us human: the ability to make decisions grounded in our past and to take responsibility for those choices. Machines, by their nature, lack the emotional depth, contextual understanding, and lived experience that form human morality. These cognitive responses are not simply coded functions; they are rooted in the complexity of human nature. Therefore, machines not only cannot possess genuine morality; arguably, they should not.
Therefore, I remain unconvinced that machines can be regarded as genuine moral agents. In settings like healthcare, where AMAs may be tasked with end-of-life decisions or treatment prioritization, what is at stake is more than efficiency; it is the human ability to respond with empathy, care, and moral reflection. These moments demand the presence of another human being, someone capable of understanding suffering, context, and human nature. Entrusting such decisions to machines risks reducing morality to computation, stripping it of its deeply human foundations. In this light, the pursuit of AMAs is not a necessary innovation but a redirection of moral responsibility.
This essay has examined the foundational concepts of machine ethics and the development of Artificial Moral Agents (AMAs), highlighting the debates surrounding their role in morally charged contexts. Through my own reflection, I have explored what it means for morality to be rooted in human nature, shaped by experiences, memories, and emotions. I have argued that while machines may simulate “operational morality” (Allen & Wallach, 2012), they cannot, and arguably should not, be regarded as true moral agents. My discussion has not extended to applications such as lethal autonomous weapon systems (LAWS) or autonomous vehicles, focusing instead on the broader ethical implications of outsourcing human moral responsibility to machines. In doing so, I contend that the development of AMAs represents not a technical advancement but a philosophical dilemma, one that calls into question the preservation of human agency and accountability in decision-making.
References:
Allen, C. & Wallach, W., 2012. Moral machines: Contradiction in terms or abdication of human responsibility? In: P. Lin, K. Abney & G. Bekey, eds. Robot ethics: The ethical and social implications of robotics. Cambridge, MA: MIT Press, pp. 55–68.
Van Wynsberghe, A. & Robbins, S., 2019. Critiquing the reasons for making artificial moral agents. Science and Engineering Ethics, 25(3), pp. 719–735.