Savio Saldanha SJ
DOI: 10.5281/zenodo.17993534
20-12-2025
I recently watched a movie about the dilemma of some Unmanned Aerial Vehicle operators. They are ordered to target a building where a terrorist is hiding, while outside the building a young girl is selling bread. The operators face a dilemma: fire the missile and kill the innocent girl along with the terrorist, or let the terrorist escape. I could feel their tension within me as I watched, and I hoped that the girl would somehow be miraculously saved. But after the movie, I began reflecting on the news reports about UCVs.
Unmanned combat vehicles (UCVs) expose a fault line in contemporary conscience: they promise “clean” warfare while intensifying the moral distance between the one who kills and the one who dies. At this crossroads, Catholic moral theology and recent papal teaching converge on a clear point: no machine may be allowed to decide to take a human life, and any system that obscures or fragments human moral responsibility is ethically suspect, even when it operates inside a “just war” framework. The question guiding my reflection is this: is it ethical to use robots to kill human beings?
From Predator Drones to “Killer Robots”
UCVs emerged in the late 20th century as tools for reconnaissance and targeted strikes, exemplified by the U.S. Predator and Reaper drones used from the Gulf conflicts to Afghanistan and Iraq, and later in theatres like the Armenia–Azerbaijan and Russo‑Ukrainian wars. What began as remotely piloted aircraft has expanded into an ecosystem of unmanned ground vehicles, kamikaze drones, and sea‑borne systems increasingly coupled with artificial intelligence for detection, tracking, and engagement.
At each step, two promises drive their deployment: greater protection for one’s own soldiers and “surgical” precision in targeting, ostensibly reducing civilian casualties. At the same time, these systems deepen a psychological and moral distance: operators may be thousands of kilometres away, or, where the machine operates in fully autonomous modes, humans may be removed from real-time decision-making altogether, turning battlefields into laboratories for live testing of algorithms.
The fact that unmanned combat systems cost a fraction of conventional systems makes it possible for small countries with limited defence budgets to field sizeable fleets of UCVs. Unmanned aerial systems operating in swarms can overwhelm enemy air defences and cause severe damage to military and other infrastructure facilities. UCVs can operate in high-risk zones, reducing one’s own casualties while inflicting serious losses on the other side. All these factors make UCVs an attractive proposition.
A New Technological Temptation
Philosophically, the ethical problem is not technology as such but a new configuration of power, distance, and uncertainty. The German-American philosopher Hans Jonas, reflecting on technological power in general, argued that the unprecedented scale and irreversibility of our actions demand a new “imperative of responsibility”: act so that the effects of your action are compatible with the continued, genuinely human life of humanity on earth. For Jonas, not everything that can be done technologically ought to be done; technological possibility must be constrained by an anticipatory ethics that takes seriously the worst‑case scenarios, a “heuristics of fear” that refuses naïve optimism about neutral progress.
The ethical challenge posed by UCVs, then, is not technology itself but what technology enables when it reshapes responsibility. Jonas’s Imperative of Responsibility offers a crucial philosophical lens here, for modern technological power has outpaced traditional ethical frameworks. His thought also resonates deeply with Catholic moral theology: his insistence on restraint, foresight, and responsibility parallels the Church’s emphasis on prudence, moral accountability, and the common good.
When we apply this principle to autonomous or semi‑autonomous weapons, Jonas’s approach highlights two dangers: first, that the chain of responsibility becomes so distributed (programmers, commanders, operators, political authorities) that no one decisively “owns” the lethal decision; and second, that the very success and efficiency of these systems seduce societies into normalising permanent, “low-visibility” warfare. What looks like rational, data-driven targeting can conceal a profound moral abdication, since algorithms cannot bear responsibility or suffer remorse, yet their opaque decisions shape life and death on the ground.
At this point, I believe it is important to clarify my stance. Unmanned combat vehicles are not intrinsically immoral; rather, the ethical evaluation depends on the manner of their use and the degree of human moral responsibility retained in their deployment. These unmanned systems are a boon when used for non-lethal activities such as monitoring traffic, delivering medical aid and essential supplies to remote places, and even medical evacuation, for which some are specially configured. My argument is therefore not anti-technology but ethically critical. As Hans Jonas insists, the problem is not technological capability as such, but moral abdication: when human agents allow responsibility for life-and-death decisions to be obscured or displaced by technical systems.
Just War, Human Agency, and the Limits of Delegation
Catholic moral tradition does not approach war with naïveté. Classical “just war” theory — articulated by Augustine and Aquinas, and reaffirmed in the Catechism of the Catholic Church (§§2307–2317) — insists that even in a defensive war, combatants are bound by moral principles. Among these, ius in bello principles such as discrimination and proportionality require concrete moral judgment in each act of lethal force.
Contemporary ethicists analysing autonomous weapons argue that systems which remove or radically weaken human control over individual lethal decisions cannot in practice reliably satisfy these principles, because algorithms cannot adequately interpret context, intention, surrender, or non-combatant status in the way moral agents must.
A detailed philosophical analysis by Wendell Wallach and Colin Allen concludes that “human‑out‑of‑the‑loop” weapons — those that select and engage targets without real-time human intervention — are “highly morally problematic”, precisely because their design and use impede human agents’ ability to exercise morally informed judgment, and thus amount to an abdication of responsibility. Even where there remains a nominal “human in the loop,” the speed, complexity, and opacity of AI‑driven targeting can, as Robert Sparrow argues, reduce the human role to rubber‑stamping, undermining the requirement that each act of lethal force be a genuinely personal, accountable decision.
Catholic ethics does not claim that unmanned platforms are intrinsically immoral. Rather, it insists that lethal force is morally non-delegable. When technology undermines the capacity of human agents to exercise responsible judgment, it exceeds the moral limits of delegation.
The Magisterium’s Emerging Witness
Over the last decade, the Holy See has consistently raised alarms about lethal autonomous weapons systems (LAWS), often popularly called “killer robots,” explicitly including armed drones and unmanned vehicles in this category. Vatican representatives to the United Nations have warned that removing human agency from the moral equation is problematic not only ethically but also for the foundations of international humanitarian law, since “autonomous weapons systems cannot be considered as morally responsible entities.”
Pope Francis repeatedly sharpened this concern. In his interventions on artificial intelligence and war, he insisted that no machine should ever be allowed to choose to take a human life, and he called for the development and use of lethal autonomous weapons to be reconsidered and ultimately banned. In 2024 he addressed world leaders, stressing the need for “ever greater and proper human control” and warning that AI lacks the human capacity for moral judgment and therefore must not be entrusted with lethal decision-making. This magisterial trajectory is not a marginal footnote to Catholic social teaching; it flows from a consistent defence of human dignity, the primacy of conscience, and the demand that technological progress be subordinated to integral human development and the common good.
Conscience at the Console: The Drone Operator’s Dilemma
From a pastoral standpoint, Church voices have begun to recognise that unmanned warfare creates a new kind of combatant whose battlefield is a screen. Vatican officials have noted that drone operators and those involved in deploying unmanned systems often lack adequate formation and time for moral discernment, even as their split‑second decisions affect lives far away, with psychological and spiritual consequences for both victims and operators.
There is a double “anesthesia” at work here. On one side, geographical and sensory distance can dull empathy: there is no blood, only pixels; no cry, only data. On the other, institutional distance fragments responsibility: engineers, commanders, analysts, and politicians can each tell themselves they merely played a minor technical role, while the system as a whole carries out lethal actions without any single conscience fully confronting their gravity. For a Christian, this runs directly counter to the vocation to see and respond to the concrete face of the other, especially the vulnerable enemy who remains, even in war, a bearer of the imago Dei.
Is It Ethical to Use Robots to Kill?
This brings me back to my initial question: is it ethical to use robots to kill human beings? The question cannot be answered in the abstract, as though there were a single switch to flip between “ethical” and “unethical.” Catholic moral theology pushes us to distinguish levels:
- If “robot” means a system that autonomously selects and kills targets without meaningful, responsible human control, current magisterial teaching and serious philosophical reflection converge toward a negative answer: such systems should not be developed or used, and should be subject to a binding international ban.
- If “robot” means an unmanned platform (air, land, sea) still under robust human moral agency, then the perennial criteria of just war — just cause, right intention, last resort, proportionality, discrimination — still apply, and the question becomes whether such platforms actually help or hinder compliance with these criteria in practice.
Yet even in the second, more nuanced case, there remains a deep unease in the Christian conscience. The more warfare becomes asymmetrical, remote, and technologically mediated, the easier it becomes for powerful states to wage low-risk, perpetual conflicts with minimal domestic political cost and minimal existential exposure of their own soldiers. Jonas’s ethics of responsibility, Pope Francis’s appeals for a ban on lethal autonomous weapons, and the Vatican’s insistence on non‑delegable human agency all point in the same direction: technological sophistication does not lessen the gravity of killing; it heightens the demand for moral scrutiny and self‑limitation.
A Conscience at the Crossroads
Writing as a theology student and Jesuit scholastic, I find that my own context sharpens this debate. I stand between at least three pressures: a global South that often bears the brunt of “remote” wars and experimental weapons, a Western context in which high‑tech security discourses are taken for granted, and an ecclesial tradition increasingly vocal about the non‑negotiability of human moral agency in the use of force. This is not a purely theoretical knot; it intersects with the lives of families fleeing drone‑shadowed skies, with soldiers and operators wrestling with guilt, and with policy debates that risk drifting far ahead of public moral reflection.
As a Jesuit, I feel that Ignatian spirituality offers a way forward in this situation. Ignatian discernment can be a way of resisting both technological inevitability and moral numbness in the age of unmanned war. Instead of treating UCVs as neutral tools whose use is determined only by strategic necessity, discernment asks: what is this technology doing to my desires, my imagination, and my capacity to be moved by the suffering of concrete persons?
In the Ignatian tradition, one is invited to “compose the place,” to place oneself prayerfully in the concrete scene — here, the drone feed, the targeted house, the girl selling bread — and to notice the interior movements that arise: consolation that draws toward reverence for life and justice, or desolation that manifests as indifference, fascination with power, or rationalisation of avoidable harm. Examined in this light, decisions about designing, authorising, or operating UCVs cannot be reduced to technical risk assessments; they become moments of encounter with the Crucified in history, who identifies himself with the victims of “clean” warfare as well as with soldiers whose consciences are strained to breaking point. Ignatian discernment thus calls Christians involved in these systems — engineers, officers, chaplains, policymakers — to a slow, honest scrutiny of spirits, so that choices about “robotic” killing are made, if at all, under the sign of the poor and crucified Christ rather than under the seduction of efficiency, distance, and fear. Although it might feel clichéd or impractical to some, I sincerely believe that this is the only manner in which human conscience and responsibility can act justifiably in the use of UCVs.
From this crossroads, a Christian response might be framed in three movements. First, a clear “no” to machines deciding who lives and who dies — an ethical red line voiced both by Jonas’s imperative of responsibility and by Pope Francis’s call that no machine should ever choose to take a human life. Second, a rigorous, honest examination of whether current patterns of unmanned warfare are truly serving just peace or merely lowering the threshold for resorting to force, especially against populations with little power to respond. Third, a renewed commitment to form consciences — of engineers, military leaders, policymakers, and even religious leaders — capable of resisting the seduction of “clean” killing and insisting that human lives, even enemy lives, are never reducible to targets in a dataset.
At this intersection of drones and doctrine, algorithms and ethics, my reflection becomes a question addressed not only to the whole Church but to the whole of humanity: will our conscience allow itself to be automated, or will it reclaim the slow, costly, deeply human work of responsibility in the age of unmanned war?
Bibliography
Al Jazeera. “Pope Francis Calls for Ban on ‘Lethal Autonomous Weapons’ at G7.” 14 June 2024.
Catholic Culture. “Pope, at G7 Summit, Calls for Ban on Lethal Autonomous Weapons.”
Catholic News Agency. “Vatican Again Calls for a Moratorium on Killer Robots.”
Catholic Philly. “Autonomous Weapons Systems Threaten Peace, Says Vatican Official.” 27 March 2019.
Holy See Mission in Geneva. “Technology Should Better Human Life, Not Take It.” 17 September 2025.
Jonas, Hans. The Imperative of Responsibility: In Search of an Ethics for the Technological Age. Chicago: University of Chicago Press, 1984.
National Catholic Register. “Drone Wars: The Morality of Robotic Weapons.”
Pope Francis. Address on Artificial Intelligence and Peace, G7 Summit, June 2024 (and subsequent appeal: “Reconsider the Development of Lethal Autonomous Weapons”). Vatican News, 9 July 2024.
Stop Killer Robots Campaign. “Statement to the UN General Assembly First Committee on Lethal Autonomous Weapons Systems.” 13 October 2020.
UN, Holy See Statements. “Emerging Technology at the Service of Humanity: Called to Be Peacemakers.” Catholic Bishops’ Conference of England and Wales, 21 May 2024.
Wallach, Wendell, and Colin Allen. “Autonomous Weapons and Moral Responsibility.” In The Oxford Handbook of Ethics of War. Oxford: Oxford University Press, 2016.
