As artificial intelligence surges into clinical settings – from algorithmic staffing tools to humanoid
robotic nurses – a landmark paper in the Hastings Center Report is sounding a clear warning: no matter
how convincingly a machine simulates empathy or generates context-aware responses, the moral soul of
nursing must remain irreducibly human. The stakes, the authors argue, could not be higher.

The scene is already familiar in forward-looking hospitals: AI tools summarising patient records, predicting deterioration, drafting clinical messages, and passively capturing conversations during care encounters. Administratively, algorithms are forecasting staffing needs, flagging fraud, and fielding patient queries. In research, machine learning and natural-language processing are reshaping how evidence is synthesised and how clinical trials are optimised.

Yet, for all this capability, a nine-author team led by Connie M. Ulrich of the University of Pennsylvania School of Nursing is asking a question the industry has been reluctant to confront head-on: what happens to the nurse as a moral agent when AI starts doing more and more of the thinking?
Their paper, published in the Hastings Center Report in January–February 2026, argues that the answer to that question will define not just the future of nursing, but the integrity of healthcare itself.
What it means to be a moral agent
The paper opens with a deceptively simple definition. A moral agent, the authors write, is “a person who has the ability to discern right from wrong and to be held accountable for their actions.” In nursing, that definition carries extraordinary weight. With more than five million licensed registered nurses in the United States alone – the largest single group of healthcare professionals – nurses are, collectively, one of the most consequential moral forces in modern medicine.
Ulrich and colleagues trace the philosophical lineage of moral agency from Aristotle’s virtue ethics through Kant’s deontological autonomy to Hume’s sentimentalism, before arriving at the practical frameworks adopted by nursing’s own professional bodies. The American Association of Colleges of Nursing identifies three essentials of moral agency: awareness of an ethical issue, making a moral choice, and acting on that choice. The American Nurses Association equates moral agency with moral autonomy – the freedom from internal and external constraints that might otherwise suppress ethical responsibility.
What makes the nursing context particularly demanding, the authors note, is that clinical workdays are routinely characterised by urgency, uncertainty, and vulnerability. Nurses frequently make decisions without clear rules, balancing scientific evidence with the values of individual patients and families, institutional expectations, and their own ethical judgement. This irreducible complexity, Ulrich and colleagues argue, is precisely what distinguishes a nurse from an algorithm.
The problem with ‘moral zombies’
Perhaps the most striking conceptual contribution of the paper is its engagement with the question of whether AI systems can themselves be considered moral agents. The short answer, the authors contend, is no – at least not in any morally meaningful sense.
They draw on the work of philosopher Carissa Véliz, who has termed algorithms “moral zombies”: entities that cannot be held morally accountable because they lack sentience. Catrin Misselhorn has similarly argued that artificial systems are moral agents “only in a functional sense,” their apparent moral reasoning being nothing more than an artefact of information processing. Mario Verdicchio and Andrea Perin put it plainly: human moral agency is a combination of autonomy and sentience, and currently no AI system can be regarded as sentient.
The authors are unequivocal on this point. “We believe that nurses and healthcare professionals should not confer moral agency on AI-powered systems,” they write. The concern is not merely philosophical. As humanoid robots increasingly serve as home companions and assist with tasks once performed by human carers – furrowing their brows, holding conversations, even appearing to wince in pain – the language of “agentic AI” and “autonomous agents” risks blurring a boundary that matters enormously in practice.
Daniel Tigard’s concept of “artificial moral responsibility” adds an important nuance: even where an AI system behaves as if it were morally responsible, ultimate accountability lies with the humans associated with it – its programmers, its deployers, its users. In a clinical setting, that means the nurse.
Where AI must not be allowed to tread
The paper is careful to acknowledge the genuine benefits AI can bring: reducing cognitive burden, improving access to data, offering probabilistic insights. But Ulrich and colleagues are equally clear about where the technology should not go.
End-of-life and palliative care discussions are cited as a domain where “clear limitations on AI should be instituted.” These encounters involve grief, moral anguish, and deeply personal value systems. “Even with the sophistication of algorithmic responses,” the authors observe, “AI will do little to comprehend the grief and moral angst that often accompany these personal patient care decisions.” An intake assessment – where a nurse reads a patient’s eye gaze, verbal responses, and gestures to gauge mental state and capacity for self-efficacy – is another example of an intuitive exchange that cannot be delegated to a machine.
The authors make two especially firm recommendations. First, AI should never be used in nursing care without the nurse being fully aware of its implementation and retaining their moral agency in decision-making. Second, and more provocatively, AI should never be used to determine whether to hire someone as a nurse. Algorithms, they argue, “cannot predict or identify how one might respond in high-pressure critical patient-care situations,” nor can they assess empathy, moral judgement, or emotional intelligence – qualities that patients and families value most deeply.
Nurses must lead, not follow
The paper’s recommendations extend well beyond what AI should not do. The authors are insistent that nurses must be active architects of the AI systems that will shape their practice, not passive recipients.
“Nurses must be part of the design teams for AI systems, and they must be trained not simply to use AI tools but also to evaluate and interpret their recommendations within a broader ethical framework,” the authors state directly. This includes understanding the limitations of data-driven algorithms, recognising the risks of bias, and communicating AI-supported decisions in ways that reflect patients’ values and lived experiences.
Transparency is also central to their recommendations. Healthcare systems and electronic health record vendors should clearly disclose when AI is generating summaries or treatment suggestions. This allows clinicians, patients, and carers to understand the provenance of information and judge how to act on it. On the broader question of informed consent – whether patients should be explicitly told when AI determined their treatment plan – the authors call for disclosure to remain “the default starting point in clinical practice and research” while urging further normative and empirical work to clarify the issues.
A $2.7 billion question
The commercial backdrop to all of this is considerable. The global robotic nurse industry is projected to grow at 17.07% annually, reaching $2,777.61 million by 2031. That trajectory gives the paper’s arguments a sharp urgency. As Geoffrey Hinton – widely regarded as the “Godfather of AI” – has warned, 99% of investment has focused on developing AI’s capabilities while only 1% has gone towards understanding and mitigating its risks. His recommendation that the ratio should be closer to 50-50 resonates strongly with the paper’s call for nursing to exercise its professional authority over the technology entering its domain.
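For readers who want to sanity-check what that projection implies, the present-day market size can be back-calculated from the 2031 figure and the growth rate. The sketch below is purely illustrative: the forecast’s base year is not stated in the source, so the 2024 start date used here is an assumption, and the implied figure shifts with that choice.

```python
# Back-calculate the implied base-year market size from a CAGR projection.
# Assumption (not stated in the article): the forecast window runs 2024-2031.
PROJECTED_2031_M = 2777.61  # projected market size in millions of USD
CAGR = 0.1707               # 17.07% compound annual growth rate
BASE_YEAR = 2024            # assumed start of the forecast window
TARGET_YEAR = 2031

years = TARGET_YEAR - BASE_YEAR
implied_base = PROJECTED_2031_M / (1 + CAGR) ** years
print(f"Implied {BASE_YEAR} market size: ${implied_base:,.0f} million")
# With a 2024 base year this works out to roughly $920 million;
# an earlier assumed base year would imply a smaller starting market.
```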
“Patients come to health care settings to be heard, seen, and valued by skilled professionals, not to seek care from machines,” the authors conclude. “While AI may simulate empathy or generate context-aware responses, it lacks intentionality, character, and the ability to be held accountable.” The alliance between nurses and AI may be an uneasy one, as they acknowledge – but it is, undeniably, already here.
The meaning of moral agency, Ulrich and colleagues insist, should not change as AI advances. What must change is how vigorously nurses and healthcare professionals assert and protect it.
Reference:
Ulrich, C. M., Oh, O., You, S. B., et al. (2026). What does moral agency mean for nurses in the era of artificial intelligence? Hastings Center Report, 56(1), 18–23. https://onlinelibrary.wiley.com/doi/epdf/10.1002/hast.70030




