Based on peer‑reviewed research (2023–2026)
Generative artificial intelligence is now a permanent feature of business education. The central question is no longer whether AI will be used, but whether its use will strengthen or weaken the formation of responsible professionals.
Drawing on recent peer‑reviewed research, this article examines what evidence shows about generative AI in business education and interprets those findings through the lens of Catholic Social Teaching, with particular attention to human dignity, subsidiarity, and the common good.
What the Empirical Evidence Establishes
Across experimental, survey‑based, longitudinal, and review studies, a consistent pattern emerges:
- Learning benefits appear only when assessment and task design are adjusted
- Stronger outcomes occur when student reasoning is evaluated, not just final outputs
- AI literacy operates as a cognitive skill, not a technical shortcut
- Generative AI is most effective for explanation, synthesis, and scenario exploration
- On ill‑structured business problems, AI access compresses performance distributions, masking real differences in capability
Generative AI does not lower standards. It reveals whether we were ever measuring thinking in the first place.
These findings provide the empirical foundation for a moral and educational analysis.
Human Dignity and the Purpose of Learning
Catholic Social Teaching begins with the inherent dignity of the human person. Education, therefore, is not merely a process of producing outputs, but of forming rational agents capable of understanding, judgment, and responsibility.
The research shows that when generative AI replaces student reasoning, learning degrades. When AI supports explanation, critique, and synthesis, higher‑order learning improves.
This distinction matters.
It affirms that students are not interchangeable processors of information, but persons whose intellectual agency must remain central. AI serves education only when it supports, rather than substitutes for, human reasoning.
Subsidiarity: AI as Support, Not Substitution
The principle of subsidiarity holds that higher‑level systems should assist, not displace, the proper functions of persons and smaller communities.
Empirical studies consistently show that generative AI is most effective when used as a cognitive aid: reducing iteration costs, expanding exploration, and providing feedback that students actively evaluate.
When AI assumes responsibility for decision‑making or judgment, learning outcomes flatten and assessment validity collapses.
From a subsidiarity perspective, this confirms that judgment and accountability are non‑delegable.
AI should remain subordinate to human decision‑making, not elevated above it.
The Common Good and Professional Formation
Business education serves the common good by preparing graduates to act responsibly in conditions of uncertainty, time pressure, and imperfect information.
The research shows that traditional assessments often fail in generative‑AI‑enabled environments, producing performance convergence that obscures true capability.
Redesigning assessments to emphasize reasoning, justification, and accountability better aligns education with the realities of modern professional life, where AI is already embedded in decision processes.
Forming graduates who can use AI wisely, rather than defer to it uncritically, serves organizations, markets, and society as a whole.
Assessment Validity as a Moral Question
Assessment is not merely a technical exercise. It signals what an institution values and what forms of behavior it rewards.
If assessments can be completed by AI without understanding, they fail not only pedagogically, but morally: they encourage abdication of responsibility while claiming to measure competence.
The research makes clear that generative AI exposes these weaknesses.
The response is not prohibition, but reform grounded in accountability and transparency.
Ethical Alignment in an Age of Automation
Generative AI introduces opacity into decision processes, invites the delegation of reasoning, and diffuses responsibility.
Business education must therefore teach:
- When AI use is appropriate
- When human judgment is non‑delegable
- How AI‑assisted decisions are documented and justified
This is not moralizing. It is professional integrity under conditions of automation.
Sources
- Pallant et al. (2025) — Shows that mastery‑oriented use of generative AI
produces higher‑order learning, while procedural use degrades outcomes,
establishing the central role of assessment design.
- Huo & Siau (2024) — Identifies opportunities and risks of GenAI in business education,
including cognitive dependency and assessment integrity challenges,
and proposes a responsible integration framework.
- Bergenholtz et al. (2025) — Demonstrates performance convergence in ill‑defined,
time‑pressured business exams, exposing assessment validity failures.
- Weng et al. (2024) — Reviews assessment approaches across 34 studies,
highlighting the shift toward career‑driven and lifelong learning outcomes.
- Hon (2025) — Systematic review documenting mixed effects of generative AI
on engagement and performance, emphasizing gaps in longitudinal evidence.