As AI becomes increasingly embedded in financial advisory services, trust is being redefined: no longer rooted only in human relationships, it is increasingly shaped by algorithms, data, and system design.
At a recent AI4POL event, researchers from the University of East Anglia and Unitelma Sapienza presented their latest report on trust in AI-driven financial services. The presentation sparked an insightful discussion with experts from CONSOB, Banca d'Italia, Sapienza Università di Roma, and the Corte Costituzionale della Repubblica, exploring how EU regulatory frameworks are adapting to this shift.
Trust as market infrastructure
Trust emerged as more than an ethical value or a regulatory objective. It was framed as a core infrastructure of financial markets, underpinning predictability, contractual reliability, and investor confidence. Participants emphasized that trust operates across multiple dimensions — institutional, operational, legal, and ethical — and must be understood as something actively produced and maintained through governance and oversight.
AI and the transformation of decision-making
AI is no longer a neutral support tool. It increasingly structures choices, filters information, and influences outcomes, reshaping how financial decisions are made. This shift moves trust away from individual advisors and toward system architecture, code quality, and data governance, raising critical questions about how regulation can safeguard not only financial outcomes, but also the integrity of AI-mediated decision-making processes.
Robo-advisors as a paradigmatic case
Robo-advisors illustrate the convergence of key EU regulatory regimes, including the AI Act, MiFID II, and DORA. Each framework approaches trust differently: the AI Act through explainability and rights protection, MiFID II through investor suitability, and DORA through operational resilience under stress. The discussion highlighted the systemic risks associated with herding behavior and the concentration of AI and cloud service providers, positioning robo-advisors as a central case study for understanding AI's broader impact on market trust.
Supervisory and regulatory perspectives
Supervisors are themselves increasingly deploying AI for market oversight, fraud detection, and governance analysis. In practice, trust relies on traceability, documentation, reviewability, contestability, and meaningful human-in-the-loop oversight. DORA, in particular, was seen as marking a shift from managing operational risk to ensuring operational resilience, focusing on recovery, continuity, and system reliability in the face of disruption.
Looking ahead
The discussion underscored a shared conclusion: trust is both the glue and the safeguard of AI-driven financial markets. As AI adoption accelerates, the challenge for regulators, supervisors, and market participants alike is to strengthen trust in ways that are measurable, operational, and resilient without eroding human agency or market stability.
by AI4POL