
Moral zombies and moral agents: Why AI can never replace the ethical heart of nursing

As artificial intelligence surges into clinical settings – from algorithmic staffing tools to humanoid
robotic nurses – a landmark paper in the Hastings Center Report is sounding a clear warning: no matter
how convincingly a machine simulates empathy or generates context-aware responses, the moral soul of
nursing must remain irreducibly human. The stakes, the authors argue, could not be higher.

University of Pennsylvania School of Nursing’s Connie M. Ulrich, PhD, RN, FAAN, is the Lillian S. Brunner Chair in Medical and Surgical Nursing, Professor of Nursing, and Professor of Medical Ethics and Health Policy.

The scene is already familiar in forward-looking hospitals: AI tools summarising patient records, predicting deterioration, drafting clinical messages, and passively capturing conversations during care encounters. Administratively, algorithms are forecasting staffing needs, flagging fraud, and fielding patient queries. In research, machine learning and natural-language processing are reshaping how evidence is synthesised and clinical trials optimised. Yet, for all this capability, a nine-author team led by Connie M. Ulrich of the University of Pennsylvania School of Nursing is asking a question the industry has been reluctant to confront head-on: what happens to the nurse as a moral agent when AI starts doing more and more of the thinking?

Their paper, published in the Hastings Center Report in January–February 2026, argues that the answer to that question will define not just the future of nursing, but the integrity of healthcare itself.

What it means to be a moral agent
The paper opens with a deceptively simple definition. A moral agent, the authors write, is “a person who has the ability to discern right from wrong and to be held accountable for their actions.” In nursing, that definition carries extraordinary weight. With more than five million licensed registered nurses in the United States alone – the largest single group of healthcare professionals – nurses are, collectively, one of the most consequential moral forces in modern medicine.

Ulrich and colleagues trace the philosophical lineage of moral agency from Aristotle’s virtue ethics through Kant’s deontological autonomy to Hume’s sentimentalism, before arriving at the practical frameworks adopted by nursing’s own professional bodies. The American Association of Colleges of Nursing identifies three essentials for moral agency: awareness of an ethical issue, making a moral choice, and acting on that choice. The American Nurses Association equates moral agency with moral autonomy – the freedom from internal and external constraints that might otherwise suppress ethical responsibility.

What makes the nursing context particularly demanding, the authors note, is that clinical workdays are routinely characterised by urgency, uncertainty, and vulnerability. Nurses frequently make decisions without clear rules, balancing scientific evidence with the values of individual patients and families, institutional expectations, and their own ethical judgement. This irreducible complexity, Ulrich and colleagues argue, is precisely what distinguishes a nurse from an algorithm.

The problem with ‘moral zombies’
Perhaps the most striking conceptual contribution of the paper is its engagement with the question of whether AI systems can themselves be considered moral agents. The short answer, the authors contend, is no – at least not in any morally meaningful sense.

They draw on the work of philosopher Carissa Véliz, who has termed algorithms “moral zombies”: entities that cannot be held morally accountable because they lack sentience. Catrin Misselhorn has similarly argued that artificial systems are moral agents “only in a functional sense,” their apparent moral reasoning being nothing more than an artefact of information processing. Mario Verdicchio and Andrea Perin put it plainly: human moral agency is a combination of autonomy and sentience, and currently no AI system can be regarded as sentient.

The authors are unequivocal on this point. “We believe that nurses and healthcare professionals should not confer moral agency on AI-powered systems,” they write. The concern is not merely philosophical. As humanoid robots increasingly serve as home companions and assist with tasks once performed by human carers – furrowing their brows, holding conversations, even appearing to wince in pain – the language of “agentic AI” and “autonomous agents” risks blurring a boundary that matters enormously in practice.

Daniel Tigard’s concept of “artificial moral responsibility” adds an important nuance: even where an AI system behaves as if it were morally responsible, ultimate accountability lies with the humans associated with it – its programmers, its deployers, its users. In a clinical setting, that means the nurse.

Where AI must not be allowed to tread
The paper is careful to acknowledge the genuine benefits AI can bring: reducing cognitive burden, improving access to data, offering probabilistic insights. But Ulrich and colleagues are equally clear about where the technology should not go.

End-of-life and palliative care discussions are cited as a domain where “clear limitations on AI should be instituted.” These encounters involve grief, moral anguish, and deeply personal value systems. “Even with the sophistication of algorithmic responses,” the authors observe, “AI will do little to comprehend the grief and moral angst that often accompany these personal patient care decisions.” An intake assessment – where a nurse reads a patient’s eye gaze, verbal responses, and gestures to gauge mental state and capacity for self-efficacy – is another example of an intuitive exchange that cannot be delegated to a machine.

The authors make two especially firm recommendations. First, AI should never be used in nursing care without the nurse being fully aware of its implementation and retaining their moral agency in decision-making. Second, and more provocatively, AI should never be used to determine whether to hire someone as a nurse. Algorithms, they argue, “cannot predict or identify how one might respond in high-pressure critical patient-care situations,” nor can they assess empathy, moral judgement, or emotional intelligence – qualities that patients and families value most deeply.

Nurses must lead, not follow
The paper’s recommendations extend well beyond what AI should not do. The authors are insistent that nurses must be active architects of the AI systems that will shape their practice, not passive recipients.

“Nurses must be part of the design teams for AI systems, and they must be trained not simply to use AI tools but also to evaluate and interpret their recommendations within a broader ethical framework,” the authors state directly. This includes understanding the limitations of data-driven algorithms, recognising the risks of bias, and communicating AI-supported decisions in ways that reflect patients’ values and lived experiences.

Transparency is also central to their recommendations. Healthcare systems and electronic health record vendors should clearly disclose when AI is generating summaries or treatment suggestions. This allows clinicians, patients, and carers to understand the provenance of information and judge how to act on it. On the broader question of informed consent – whether patients should be explicitly told when AI determined their treatment plan – the authors call for disclosure to remain “the default starting point in clinical practice and research” while urging further normative and empirical work to clarify the issues.

A $2.7 billion question
The commercial backdrop to all of this is considerable. The global robotic nurse industry is projected to grow at 17.07% annually, reaching $2,777.61 million by 2031. That trajectory gives the paper’s arguments a sharp urgency. As Geoffrey Hinton – widely regarded as the “Godfather of AI” – has warned, 99% of investment has focused on developing AI’s capabilities while only 1% has gone towards understanding and mitigating its risks. His recommendation that the ratio should be closer to 50-50 resonates strongly with the paper’s call for nursing to exercise its professional authority over the technology entering its domain.

“Patients come to health care settings to be heard, seen, and valued by skilled professionals, not to seek care from machines,” the authors conclude. “While AI may simulate empathy or generate context-aware responses, it lacks intentionality, character, and the ability to be held accountable.” The alliance between nurses and AI may be an uneasy one, as they acknowledge – but it is, undeniably, already here.

The meaning of moral agency, Ulrich and colleagues insist, should not change as AI advances. What must change is how vigorously nurses and healthcare professionals assert and protect it.

Reference:
Ulrich, C. M., Oh, O., You, S. B., et al. (2026). What does moral agency mean for nurses in the era of artificial intelligence? Hastings Center Report, 56(1), 18–23. https://onlinelibrary.wiley.com/doi/epdf/10.1002/hast.70030
