Savio Saldanha SJ
https://doi.org/10.5281/zenodo.17742128
29-11-2025
Introduction
Artificial Intelligence (AI) has shifted from abstract concept to everyday reality,
affecting education, research, and even personal and spiritual growth. As
educational institutions respond with strict rules and bans on AI’s use, there
is growing concern that technology is advancing faster than regulation,
creating a sense that we are fighting a losing battle. After reading recent
papal documents and philosophical literature pertaining to AI, I think this
development calls for a more integrative approach: one that promotes
understanding, ethics, and discernment in the use of AI, especially among youth
and researchers. Standing at a crossroads of conscience, we face a dilemma
about the ethical and moral use of AI by students and young people, and about
the limits of that use.
The Problem
Currently, many institutions aim to restrict or eliminate AI use among students, fearing
it will replace genuine learning with automated shortcuts. However, as AI
becomes more advanced—capable of generating essays, solving complex problems,
and evading detection—these methods are increasingly ineffective. This leads to
frustration on all sides, eroding trust, stifling curiosity, and weakening
genuine intellectual formation.
The Solution
The future lies not in fighting AI, but in fostering a mature, ethical, and
reflective engagement with it. By drawing on Catholic teaching and
philosophical inquiry, educators and leaders can guide students towards a
responsible, creative, and truly human integration of AI.
AI as a Tool: Neutral by Nature, Ethical by Use
Pope Francis emphasized that AI is a tool—one that can advance knowledge,
democratize education, and serve humanity (Pope Francis, 2024). Like all tools,
it carries no moral quality by itself. Its effects depend entirely on how people
use it: for good or ill, to foster justice or inequality, to create genuine
understanding or simply shortcut effort. Pope Leo XIV likewise notes the
double-edged nature of AI, affirming that while “AI is above all else a tool,”
its ethical value lies in intention and use (Pope Leo XIV, 2025).
Centrality of the Human Person
The Church’s teaching places human dignity at the center of all technology. Pope
Leo XIV writes that AI must be assessed “in light of the integral
development of the human person and society,” not just on material outcomes
(Pope Leo XIV, 2025). Pope Francis cautions that even sophisticated technology
should not undermine the human capacity for moral decision, reflection, and
authentic encounter (Pope Francis, 2024). Both teach that any use of AI must
support—not replace—uniquely human creativity, judgment, and critical thought.
Insights from the Philosophy of AI
The philosophy of AI, as outlined by Müller (2024), not only helps clarify what AI
is and is not, but also shows how reflection on AI and reflection on the human
person belong together. Müller begins from the classical research program
launched at Dartmouth in 1956, which conjectured that every aspect of learning
and intelligence could in principle be precisely described and simulated on a
machine. In this sense, “Classical AI” is a research project aimed at building
computer-based agents that genuinely have intelligence, and it stands in
continuity with the well-known distinction between “strong AI” and “weak AI.”
Strong AI maintains that an appropriately programmed computer literally has a
mind and cognitive states, while weak AI treats computer systems as powerful
tools for simulating mental processes and for studying the mind without
claiming that the machine itself understands. This distinction resonates with
Searle’s “Chinese Room Argument,” which suggests that rule-based symbol
manipulation, even when behaviorally successful, is not yet genuine
understanding; from a theological standpoint, this confirms that intentionality
and consciousness—and therefore moral responsibility—remain rooted in the human
person rather than in the artifact.
Müller then contrasts this classical, ambitious understanding of AI with what he calls
“Technical AI”: a family of concrete methods in computer science—search,
probabilistic reasoning, expert systems, control engineering, machine learning,
and so on—used to build systems for perception, modelling, planning, and
action. Here AI is not a claim about minds, but a toolbox for constructing
systems that behave intelligently in restricted domains. Since around 2015, the
rise of deep machine learning, fuelled by massive data and computing power, has
dramatically increased the performance of such systems in translation, text
generation, games, vision, and autonomous driving, sometimes surpassing human
capabilities in specific tasks. Yet Müller stresses that this success does not
settle the philosophical question of intelligence itself; it only shows that
certain forms of intelligent behaviour can be produced by non-human,
non-conscious mechanisms. Theologically, this supports a nuanced view: AI can
exhibit impressive capacities without thereby becoming a subject of rights,
duties, or grace, because its “intelligence” is instrumental rather than
existential.
Because of these two strands—classical and technical—Müller argues that the philosophy
of AI must address three Kantian questions: What is AI? What can AI do? What
should AI be? He proposes an “AI philosophy” that does not merely apply
pre-existing concepts to a new object, but allows the very concepts of
intelligence, agency, and normativity to be re-examined in light of AI systems.
For example, work on the Turing Test shows how operational criteria for
“thinking” can shift public language, even if they do not resolve deeper
metaphysical issues about consciousness. At the same time, debates about goals
and values in AI highlight a crucial limit: current systems exhibit remarkable
instrumental intelligence (they are very good at finding means to given ends),
but they lack genuine metacognitive reflection on which goals are worth
pursuing and why. Müller notes that without such reflection on the goodness and
relevance of ends, AI cannot be a full moral agent, and talk of “machine
ethics” in a strong sense is misleading.
This analysis dovetails with Catholic concerns about “algor-ethics.” If AI systems,
even highly sophisticated ones, cannot autonomously ground or revise their own
goals in the light of truth and the good, then they must remain embedded within
human practices of discernment, responsibility, and virtue. Pope Francis’s call
for ethical frameworks for AI can thus be deepened by Müller’s claim that
normative reflection is not an optional “add-on” but an elementary part of any
genuinely rational life-form. In human beings, this reflective capacity is tied
to conscience, practical wisdom, and an openness to transcendence; in machines,
by contrast, the selection and evaluation of goals must ultimately be designed,
monitored, and judged by persons. The philosophy of AI therefore reinforces a
central intuition of Catholic moral theology: intelligent artefacts may transform
the conditions of action, but they do not displace the primacy of human agents,
whose freedom and moral growth remain at the heart of any authentic “ethics of
AI.”
Papal Concerns about “Outsourcing” Formation to AI
Recent interventions by Pope Leo XIV deepen this educational perspective by explicitly
addressing the temptation to let AI “do our homework” in place of real
learning. Speaking to students, he acknowledges that AI can be a powerful aid
for study but insists it must never replace the hard work of thinking, judging,
and creating for oneself, because these are precisely the activities through
which persons grow in freedom and responsibility. In this view, AI belongs to
the order of tools, whereas wisdom and moral discernment arise only through the
engaged, embodied exercise of human intelligence in relationship with others
and with God.
At the same time, the Pope does not reject AI as such; he calls it “one of the
defining features of our time” and urges educators and parents to guide
young people toward uses of AI that genuinely help and do not hinder their
human development. The decisive question is not whether AI is present in
schools, but whether its use forms or deforms students: does it cultivate
intellectual honesty, patience, and collaborative learning, or does it
encourage passivity, plagiarism, and isolation? This resonates with the broader
Catholic insistence that technology must always be evaluated in light of the
dignity of the person and the integral growth of children and adolescents, who
are particularly vulnerable to the allure of effortless solutions.
From a philosophical and pastoral standpoint, the papal warning against delegating
one’s homework to AI can be read symbolically as a warning against outsourcing
the very struggle that makes education transformative. If students learn to
treat AI as a substitute for their own judgment and creativity, they risk
hollowing out the interior capacities—attention, critical reflection, moral
imagination—that Catholic tradition associates with the formation of
conscience. By contrast, when AI is used transparently and critically, as an
instrument that supports research and reflection without replacing them, it can
become an ally in precisely the “intergenerational apprenticeship” that the Church
envisions.
The Role of Education: From Control to Formation
A recurring theme in both papal and philosophical sources is the importance of
education as formation, not just transmission of skills or facts. Pope Leo XIV
calls for an “intergenerational apprenticeship” so that young people can learn
to integrate technology wisely into their lives (Pope Leo XIV, 2025). Education
should develop students’ responsibility, discernment, and creativity, equipping
them to use AI for real growth—intellectually, morally, and spiritually—rather
than as a means of shortcutting learning or escaping effort (Pope Francis,
2024; Müller, 2024).
Pope Francis warns against technophobia and calls for dialogue—across cultures,
generations, and disciplines—to ensure AI serves the common good. He advocates
a shared ethical foundation (“algor-ethics”) and stresses the need for healthy
politics to direct technological change towards justice, inclusion, and the
flourishing of all (Pope Francis, 2024).
Personal Reflection and Practical Recommendations
As we stand at a crossroads of conscience regarding the ethical and moral use of
AI by students and youth, the dilemma often appears stark: should AI be banned
altogether or tightly limited, and who has the authority to set and enforce
those limits? Witnessing the rigid enforcement of AI bans in many institutions,
it seems increasingly likely that such efforts will fail in the long term, not
only because the technology will outpace policing, but because a purely
prohibitive strategy neglects the deeper formation of judgment that both
philosophy and Catholic teaching demand. The categories developed by Müller
help here: if much of what is now called “AI” is in fact “technical
AI”—powerful but limited methods for perception, modelling, and
decision-support—then treating these tools as if they were already
quasi-personal agents to be excluded altogether risks confusion and fear rather
than clarity.
A more coherent response is to move from prohibition to formation, from mere
rule-enforcement to an “intergenerational apprenticeship” in the wise use of
technology, as Pope Leo XIV suggests. If, as Müller argues, current AI systems
exhibit at most instrumental intelligence—remarkable skill in finding means to
given ends, but no genuine reflection on which ends are good or just—then the
responsibility for setting and evaluating goals necessarily remains with human
agents. This implies that institutions, families, and Church communities cannot
abdicate discernment to algorithms, nor to external regulators alone; they must
themselves cultivate the virtues and criteria by which AI use is judged. In
this perspective, the key question is not simply “how much AI is allowed,” but
“how do we form persons who can use AI without outsourcing the inner work of
thinking, choosing, and taking responsibility?”
Practically, this means encouraging openness, integrity, and critical reflection on AI
rather than secrecy and evasion. Policies will still be needed—there must be
some boundaries on plagiarism, data misuse, and academic dishonesty—but these
norms should be embedded in a broader pedagogical project that teaches students
not only what to avoid, but how and why to use AI well. Drawing on Pope
Francis’s call for “algor‑ethics,” educators can invite students to ask in each
concrete case: does this use of AI support or undermine my own learning, my relationships,
and the dignity of others? By engaging youth in such questions, institutions
help them move from being passive users of opaque systems to discerning
subjects who understand not only the technical limits of AI but also its moral
implications.
In this light, I conclude that, instead of asking whether to ban or permit AI in
the abstract, our central task is to shape a culture in which AI is integrated
into education in a way that preserves the primacy of human intelligence,
conscience, and community. Those who “decide the limits” of AI use—teachers,
parents, Church leaders, and students themselves—should be seen not primarily
as regulators but as co-responsible participants in a shared work of formation.
By teaching young people to collaborate with AI without surrendering their
capacity for wonder, critical thought, and moral responsibility, institutions
can foster maturity, wisdom, and resilience—qualities that the philosophy of AI
identifies as properly human, and that Catholic spirituality recognises as the
fruit of grace working through human freedom.
References
- Müller, V. C. (2024). Philosophy of artificial intelligence: A structured
overview. In N. A. Smuha (Ed.), The Cambridge handbook of the law, ethics
and policy of artificial intelligence (pp. 40–58). Cambridge University
Press. https://doi.org/10.1017/9781009367783.004
- Pope Francis. (2024, June 14). Address at the G7 Session on Artificial
Intelligence, Borgo Egnazia, Puglia. The Holy See.
- Pope Leo XIV. (2025, June 19). Message to participants in the Second Annual
Conference on Artificial Intelligence, Ethics, and Corporate Governance,
Rome. The Holy See.
- Pope Leo XIV. (2025, November 21). Address to young people at the National
Catholic Youth Conference, Indianapolis. Vatican News / USCCB summary
reports.
