AI, Ethics, and Digital Services in Europe
AI is now a key part of digital services, shaping how people manage finances, shop online, and access healthcare. While it brings many benefits, it also raises concerns about ethics, fairness, and accountability.
To address these challenges, the EU has introduced comprehensive legislation, including the AI Act, Data Act, and Digital Markets Act, to promote responsible AI use.
The High-Level Expert Group on AI (HLEG-AI) has also defined seven key requirements for trustworthy AI, including human agency and oversight, technical robustness and safety, and fairness. However, putting these principles into practice remains difficult.
The AI4POL project works to close this gap by developing tools and data methods that help policymakers and regulators monitor AI's impact. It aims to ensure that AI systems align with European values, protect consumer rights, and promote responsible innovation.
Examining Four AI Case Studies
This report, developed by Work Package 3 "AI-Powered Public Insight for Smarter, Citizen-Centric Regulation", examines four cases of AI used in digital services, with a particular focus on the financial sector. By analyzing these examples, we assess how well current AI practices align with ethical standards and legal frameworks. Our aim is to identify both successful approaches and areas where improvements are needed, contributing to more effective AI governance in Europe.
AI in Financial Services: Current Practices and Ethical Challenges
The use of AI in financial services is expanding rapidly, improving operational efficiency and user experience while enabling more data-driven decision-making and service personalization.
While adherence to the HLEG-AI guidelines varies, most systems show strong alignment with the principles of human oversight and of privacy and data governance. For example, Morgan Stanley demonstrates high compliance, while companies like Klarna apply AI more selectively. In most cases, users retain control over key decisions, such as investment adjustments or credit approvals, and personal data is handled in accordance with the GDPR and other regulatory standards.
Key Areas for Improvement
Despite these positive developments, common gaps remain, especially in transparency and fairness:
Many AI systems still function as "black boxes," offering limited insight into how decisions are made.
Few services provide accessible explanations or offer meaningful ways for users to challenge outcomes.
Fairness is often assumed, for example through wider access to services such as Buy Now, Pay Later or robo-advice, yet there is little evidence of formal bias assessments or inclusive model testing.
Without external audits or clear safeguards, these systems risk reinforcing existing inequalities, particularly in sensitive areas like lending or fraud detection.
Pathways for Future Improvement
The recurring challenges in transparency and fairness align closely with the goals of the AI4POL project. Key actions for improvement include:
Enhanced Explainability: AI systems should provide user-friendly explanations of automated decisions, clear error-handling processes, and public documentation of algorithm evaluations to build trust and enable effective oversight (a minimal sketch of a decision-level explanation follows this list).
Fairness and Inclusion: Regular bias testing, independent audits, and disaggregated performance evaluations across user groups can help detect and reduce structural inequalities in AI models (see the second sketch below).
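To make the explainability recommendation concrete, the sketch below shows one way a decision-level explanation could be generated for a credit model. The model, feature names, and data are entirely synthetic and hypothetical; they are not drawn from any provider examined in this report, and production systems would need far more rigorous attribution methods.

```python
# Minimal sketch: turning a credit model's output into a plain-language
# explanation. All names and data here are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["income", "debt_ratio", "missed_payments", "account_age_years"]

# Toy synthetic training data: rows are applicants, label 1 = approved.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] - X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain_decision(x: np.ndarray, top_k: int = 2) -> str:
    """Return the decision plus the features that drove it most.

    For a linear model, coefficient * feature value is that feature's
    additive contribution to the log-odds, so ranking contributions by
    absolute size gives a faithful per-decision attribution.
    """
    contributions = model.coef_[0] * x
    decision = "approved" if model.predict(x.reshape(1, -1))[0] == 1 else "declined"
    order = np.argsort(-np.abs(contributions))[:top_k]
    reasons = ", ".join(
        f"{FEATURES[i]} ({'+' if contributions[i] > 0 else '-'})" for i in order
    )
    return f"Application {decision}; main factors: {reasons}"

print(explain_decision(X[0]))
```

An explanation of this kind also gives users a concrete basis for challenging an outcome, since it names the factors the decision turned on.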
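The second sketch illustrates a disaggregated performance evaluation: the same metrics reported per user group rather than as a single aggregate. The group labels, data, and choice of metrics are illustrative assumptions; a real audit would use protected attributes and fairness criteria defined by the applicable legal framework.

```python
# Minimal sketch of a disaggregated performance check on synthetic data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1_000
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n, p=[0.7, 0.3]),  # hypothetical groups
    "y_true": rng.integers(0, 2, size=n),   # observed outcome (1 = repaid)
    "y_pred": rng.integers(0, 2, size=n),   # model decision (1 = approved)
})
df["error"] = (df["y_pred"] != df["y_true"]).astype(int)

# Report the same metrics per group: a single aggregate score can hide
# group-level disparities that disaggregation makes visible.
by_group = df.groupby("group").agg(
    approval_rate=("y_pred", "mean"),
    error_rate=("error", "mean"),
    n=("y_pred", "size"),
)
print(by_group)

# A simple screening signal: a large approval-rate gap between groups
# (the demographic-parity difference) flags the model for deeper review.
gap = by_group["approval_rate"].max() - by_group["approval_rate"].min()
print(f"Approval-rate gap between groups: {gap:.3f}")
```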
These measures would not only strengthen regulatory compliance but also help ensure more ethical, inclusive, and trustworthy AI use in financial services.
by Gjergji Kasneci & Yuxiao Li