BRUSSELS — Catholic and other faith-based organizations risk being sidelined as global artificial intelligence (AI) governance is increasingly shaped by technical and commercial priorities rather than moral or human-centered values, Church leaders warned during a recent dialogue in the European capital.
The concerns were raised at an Ethics of AI dialogue hosted by COMECE (the Commission of the Bishops’ Conferences of the European Union) and European Future Talks, amid growing controversies surrounding high-risk AI systems and regulatory delays.
Rising Concerns Over High-Risk AI Systems
Friederike Ladenburger, adviser for ethics, research, and health at COMECE, told EWTN News that the Church risks exclusion from crucial policy discussions:
“Marginalization risk exists when guidance is shaped primarily by technical, commercial, and regulatory actors,” she said.
Under the European Union’s AI Act, systems affecting employment, migration, healthcare, or fundamental rights are classified as high-risk. In healthcare, these include:
- AI tools influencing diagnosis and treatment
- Patient triage systems
- Robotic surgery assistants
Failures in these systems “could result in serious injury or death,” Ladenburger noted.
The act bans several AI applications entirely, such as:
- Social scoring
- AI-based emotion recognition in workplaces and schools
- Most real-time biometric identification in public spaces
Controversies Fueling Urgency
Recent international cases highlight the risks of poorly governed AI technologies:
- Ireland’s Data Protection Commission launched an investigation into X over harmful deepfake generation.
- French prosecutors raided the company’s Paris offices in a related cybercrime probe.
- Mark Zuckerberg testified in a U.S. civil trial concerning algorithms’ impact on children.
Professor Philip McDonagh of Dublin City University argued that excluding religious voices could deepen global ethical gaps:
“Faith groups reach 84% of the world’s population. Their underrepresentation risks widening gaps in ethical oversight and AI literacy.”
Need for Ongoing Dialogue, Not Just Regulation
The European Commission has missed its February deadline to provide guidance on implementing the AI Act, raising concerns among faith groups who seek clear standards for high-risk systems.
McDonagh cautioned that legal frameworks alone are insufficient:
“We need sustained reflection on human dignity, democracy, and peace. Regulation without ongoing dialogue is incomplete.”
He pointed to Article 17 of the Treaty on the Functioning of the European Union, which provides for an open, transparent, and regular dialogue between EU institutions and churches, as a vital avenue for continued engagement.

Vatican & Global Approaches: Human-Centered AI
McDonagh highlighted similarities between:
- MANAV — Prime Minister Narendra Modi’s human-centric AI vision presented at the AI Impact Summit in New Delhi
- Vatican guidelines such as:
  - The Rome Call for AI Ethics by the Pontifical Academy for Life
  - Antiqua et Nova, the Vatican’s doctrinal note on AI and human intelligence
These initiatives stress that AI must support, not replace, human judgment.
He also warned of high-risk areas requiring moral vigilance:
- Military AI systems
- Algorithm-driven manipulation of political discourse
- Social polarization through digital platforms
“The ethical dimension is not optional,” McDonagh said. “It determines whether AI serves humanity or undermines human dignity.”
