The Meaning and Criteria of Moral Agency in Intelligent Machines

Document Type: Research Paper

Author

Assistant Professor, Department of Moral Philosophy, University of Qom, Qom, Iran

10.22091/jptr.2025.12466.3256

Abstract

Advances in artificial intelligence (AI) and the emergence of intelligent and superintelligent machines have significantly blurred the traditional boundaries between humans and machines. These technological developments raise critical philosophical and ethical questions about whether moral agency can be attributed to such systems. Can intelligent machines act morally, or should moral responsibility for their actions remain solely with their designers and operators? This paper addresses these questions through conceptual analysis, focusing on the meaning, criteria, and philosophical implications of moral agency in the context of intelligent machines.
 
Defining Moral Agency
Moral agency refers to the capacity of an entity to make ethical decisions, act autonomously, and accept responsibility for its actions. Philosophical discussions on moral agency, rooted in
the works of thinkers such as Immanuel Kant and John Searle, identify several essential components:
Consciousness and Self-Awareness: The ability to recognize oneself as an independent entity and distinguish between oneself and others.
Moral Understanding: The ability to evaluate actions on the basis of moral principles and to distinguish between right and wrong.
Autonomy: The capacity to make independent decisions free from external constraints.
Intentionality and Free Will: The ability to act based on deliberate choices and goals.
Moral Responsibility: The capacity to accept accountability for one’s actions and their consequences.
These criteria form the foundation for evaluating whether an entity, such as an intelligent machine, can qualify as a moral agent.
Perspectives on the Moral Agency of Intelligent Machines
Denial of Moral Agency
One of the dominant perspectives in contemporary philosophy rejects the possibility of moral agency for intelligent machines. Thinkers such as John Searle and Daniel Dennett argue that machines, regardless of their computational complexity, lack the essential characteristics
of moral agents. Searle’s “Chinese Room” thought experiment illustrates this point: a machine can manipulate symbols according to rules without understanding their meaning, and thus lacks true consciousness or intentionality.
From this perspective, moral agency is intrinsically tied to uniquely human attributes, such as subjective awareness, emotional sensitivity, and the capacity for moral intuition. Machines, as deterministic systems governed by algorithms, cannot possess the free will or intentionality necessary for moral responsibility. As a result, any moral actions performed by machines are ultimately attributable to their human creators.
This view also aligns with Kantian ethics, which emphasizes rationality, autonomy, and free will as prerequisites for moral agency. Kant argued that only beings capable of acting according to moral laws derived from rational deliberation could be considered moral agents.
Acceptance of Limited or Functional Moral Agency
In contrast to the denialist perspective, some scholars propose a more nuanced view that recognizes the potential for limited or functional moral agency in intelligent machines. Luciano Floridi and J. W. Sanders introduce the concept of “mindless morality,” suggesting that moral agency does not necessarily require subjective awareness or intentional states. Instead, they argue that intelligent systems can be considered moral agents within specific contexts if they exhibit three characteristics: interactivity, autonomy, and adaptability.
This perspective posits that machines can perform morally significant actions without possessing the full range of human-like cognitive and emotional capacities. For example, autonomous vehicles can make decisions that have moral implications, such as prioritizing the safety of passengers over pedestrians in emergency situations. While these decisions are based on pre-programmed algorithms, they can be seen as a form of functional moral agency.
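To make the idea of functional moral agency more concrete, the following minimal Python sketch shows how an emergency-response rule of the kind described above might be pre-programmed. The scenario fields, the priority rule, and the function names are illustrative assumptions, not a description of any actual vehicle's decision logic.

```python
from dataclasses import dataclass

@dataclass
class EmergencyScenario:
    """Hypothetical, highly simplified description of an unavoidable-collision case."""
    passengers_at_risk: int
    pedestrians_at_risk: int
    braking_available: bool

def choose_maneuver(scenario: EmergencyScenario) -> str:
    """Select a maneuver according to a fixed, pre-programmed priority rule.

    Toy rule (assumed for illustration): brake if possible; otherwise take the
    option that places fewer people at risk. The output is morally significant,
    yet it is produced without consciousness, intentionality, or understanding.
    """
    if scenario.braking_available:
        return "emergency_brake"
    if scenario.passengers_at_risk <= scenario.pedestrians_at_risk:
        # Swerving shifts the risk onto the passenger group, which is no larger here.
        return "swerve"
    # Staying in lane shifts the risk onto the smaller pedestrian group.
    return "stay_in_lane"

# Example: the "decision" is fully determined by the algorithm and its inputs.
print(choose_maneuver(EmergencyScenario(passengers_at_risk=2,
                                        pedestrians_at_risk=1,
                                        braking_available=False)))
```

On the functional reading, nothing more than this behavioural regularity is required for the system to count as a (limited) moral agent within its operating context.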
Key Arguments and Counterarguments
The Role of Consciousness and Intentionality
A central debate in the discussion of moral agency for machines revolves around the role of consciousness and intentionality. Critics of machine moral agency, such as Searle, argue that without genuine consciousness and intentionality, machines cannot be considered moral agents. They contend that machines lack the ability to understand the moral significance of their actions, as they merely process information without any real comprehension.
Proponents of limited moral agency, however, argue that consciousness and intentionality are not strictly necessary for moral decision-making. They suggest that machines can be designed to follow ethical guidelines and make decisions that align with moral principles, even if they do not possess subjective experiences. This view is supported by the idea that moral agency can be understood in terms of functional roles rather than internal states.
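The functional-role reading can likewise be illustrated with a minimal sketch, assuming a hypothetical planner whose candidate actions are screened against explicitly coded moral constraints. The constraint predicates and action labels below are placeholders invented for this example; the point is only that the ethical role is defined behaviourally, without appeal to internal states.

```python
from typing import Callable, Iterable, List

# Hypothetical moral constraints, expressed as predicates over candidate actions.
def causes_harm(action: str) -> bool:
    return action.startswith("harm:")

def violates_privacy(action: str) -> bool:
    return action.startswith("disclose:")

MORAL_CONSTRAINTS: List[Callable[[str], bool]] = [causes_harm, violates_privacy]

def ethically_filtered(candidates: Iterable[str]) -> List[str]:
    """Keep only the candidate actions that pass every explicit moral constraint.

    A system built around such a filter plays the *role* of acting on moral
    principles in a purely behavioural sense; nothing here presupposes
    understanding, consciousness, or subjective experience.
    """
    return [a for a in candidates if not any(check(a) for check in MORAL_CONSTRAINTS)]

# Example with placeholder actions from some hypothetical planner.
print(ethically_filtered(["assist:user", "harm:bystander", "disclose:medical_record"]))
```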
Implications for Ethics and Society
Ethical Design and Regulation
The debate over machine moral agency has significant implications for the ethical design and regulation of AI systems. If machines are to be considered moral agents, even in a limited sense, it becomes crucial to ensure that they are designed with ethical principles in mind. This includes developing algorithms that prioritize human well-being, fairness, and transparency.
Regulatory frameworks will also need to address the challenges posed by autonomous systems, including issues of accountability, liability, and oversight. Policymakers must strike a balance between promoting innovation and ensuring that AI systems are used responsibly and ethically.
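As one hedged illustration of what accountability and oversight could mean at the design level, the sketch below records each automated decision in an audit trail that a human reviewer can inspect later. The record fields and the reviewer workflow are assumptions introduced for this example, not requirements drawn from any existing regulatory framework.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class DecisionRecord:
    """One auditable entry: what the system decided, on what inputs, and why."""
    timestamp: str
    inputs: dict
    decision: str
    rationale: str

@dataclass
class AuditTrail:
    records: List[DecisionRecord] = field(default_factory=list)

    def log(self, inputs: dict, decision: str, rationale: str) -> None:
        # Append an immutable record so responsibility can later be traced by a human overseer.
        self.records.append(DecisionRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            inputs=inputs,
            decision=decision,
            rationale=rationale,
        ))

# Example: logging a hypothetical automated referral for later human review.
trail = AuditTrail()
trail.log(inputs={"applicant_id": "A-17", "score": 0.42},
          decision="refer_to_human",
          rationale="score below the auto-approval threshold")
print(len(trail.records), trail.records[0].decision)
```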
Conclusion
The question of whether intelligent machines can be considered moral agents is complex
and multifaceted. While traditional philosophical perspectives emphasize the importance of consciousness, intentionality, and free will for moral agency, more contemporary views suggest that machines can exhibit a form of functional moral agency within specific contexts.
The debate has important implications for the ethical design and regulation of AI systems, as well as for broader societal attitudes towards technology. As AI continues to evolve, it will be crucial to address these ethical and philosophical questions to ensure that the development and deployment of intelligent machines align with human values and principles.
Ultimately, while machines may not possess the full range of capacities required for moral agency in the traditional sense, they can still play a significant role in ethical decision-making. The challenge lies in defining the boundaries of machine moral agency and ensuring that these systems are designed and used in ways that promote human well-being and ethical integrity.
 

References
Bonnefon, J.-F., Rahwan, I., & Shariff, A. (2024). The Moral Psychology of Artificial Intelligence. Annual Review of Psychology, 75(1), 653–675. https://doi.org/10.1146/annurev-psych-030123-113559
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies (First edition). Oxford University Press.
Brożek, B., & Janik, B. (2019). Can artificial intelligences be moral agents? New Ideas in Psychology, 54, 101–106. https://doi.org/10.1016/j.newideapsych.2018.12.002
Dennett, D. C. (2014). When HAL kills, who’s to blame? Computer ethics. In F. Battaglia, N. Mukerji, & J. Nida-Rümelin (Eds.), Rethinking responsibility in science and technology. Pisa University Press. https://doi.org/10.1400/225034
Floridi, L., & Chiriatti, M. (2020). GPT-3: Its Nature, Scope, Limits, and Consequences. Minds and Machines, 30(4), 681–694. https://doi.org/10.1007/s11023-020-09548-1
Floridi, L., & Sanders, J. W. (2004). On the Morality of Artificial Agents. Minds and Machines, 14(3), 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d
Graff, J. (2024). Moral sensitivity and the limits of artificial moral agents. Ethics and Information Technology, 26(1), 13. https://doi.org/10.1007/s10676-024-09755-9
Gunkel, D. (2018). Can machines have rights? In Living Machines: A Handbook of Research in Biomimetic and Biohybrid Systems (pp. 596–601). https://doi.org/10.1093/oso/9780199674923.003.0063
Jaworska, A., & Tannenbaum, J. (2023). The Grounds of Moral Status. In E. N. Zalta & U. Nodelman (Eds.), The Stanford Encyclopedia of Philosophy (Spring 2023). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/spr2023/entries/grounds-moral-status/
Ladak, A. (2024). What would qualify an artificial intelligence for moral standing? AI and Ethics, 4(2), 213–228. https://doi.org/10.1007/s43681-023-00260-1
Manna, R., & Nath, R. (2021). The Problem of Moral Agency in Artificial Intelligence. 2021 IEEE Conference on Norbert Wiener in the 21st Century (21CW), 1–4. https://doi.org/10.1109/21CW48944.2021.9532549
Misselhorn, C. (2022). Artificial Moral Agents: Conceptual Issues and Ethical Controversy. In S. Voeneky, P. Kellmeyer, O. Mueller, & W. Burgard (Eds.), The Cambridge Handbook of Responsible Artificial Intelligence (1st ed., pp. 31–49). Cambridge University Press. https://doi.org/10.1017/9781009207898.005
Moor, J. H. (2006). The Nature, Importance, and Difficulty of Machine Ethics. IEEE Intelligent Systems, 21(4), 18–21. https://doi.org/10.1109/MIS.2006.80
Müller, V. C. (2023). Ethics of Artificial Intelligence and Robotics. In E. N. Zalta & U. Nodelman (Eds.), The Stanford Encyclopedia of Philosophy (Fall 2023). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2023/entries/ethics-ai/
Schlosser, M. (2019). Agency. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2019). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2019/entries/agency/
Searle, J. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences, 3(3), 417–457. https://doi.org/10.1017/s0140525x00005756
Shapiro, P. (2006). Moral Agency in Other Animals. Theoretical Medicine and Bioethics, 27(4), 357–373. https://doi.org/10.1007/s11017-006-9010-0
Sullins, J. P. (2011). When Is a Robot a Moral Agent? In M. Anderson & S. L. Anderson (Eds.), Machine Ethics (pp. 151–161). Cambridge University Press. https://doi.org/10.1017/CBO9780511978036.013
Talbert, M. (2024). Moral Responsibility. In E. N. Zalta & U. Nodelman (Eds.), The Stanford Encyclopedia of Philosophy (Fall 2024). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2024/entries/moral-responsibility/
 
 