
Shaping AI Governance:
Using Artificial Intelligence to Support
Regulators & Policy Makers

Regulating AI
Protecting Values
AI4POL is a three-year Horizon Europe project aimed at supporting European regulators and policymakers by providing the necessary tools, insights, and frameworks to develop and enforce AI regulations that are aligned with European values, human rights, and citizens’ needs.
The project is led by Tilburg University and brings together a diverse consortium of research and policy experts from Centerdata, the Munich School of Politics and Public Policy and Technical University of Munich, the University of Rome Unitelma Sapienza, the University of East Anglia, and Visionary Analytics. Together, these institutions combine interdisciplinary expertise in law, economics, data science, and the social sciences—including political science and ethics—to advance responsible and future-proof AI governance across Europe.
Why we exist
Artificial Intelligence is rapidly transforming economies, societies, and global power structures. While the EU has introduced new AI regulations, technology continues to advance faster than policymakers can respond—often leaving them without the tools, expertise, or frameworks needed to keep up. AI4POL was created to help bridge this gap. The project focuses on how Europe can effectively implement its vision for fair, transparent, and socially responsible AI development. AI4POL aims to strengthen the EU’s role in the global AI landscape by providing policymakers and regulators with the knowledge, tools, and strategies they need to understand, monitor, and regulate AI technologies. The goal: to create evidence-based, future-proof policies grounded in human rights, European values, and the needs of citizens.
What we do
AI4POL is an EU-funded Horizon Europe research project that brings together experts from law, economics, data science, political science, and ethics to develop practical tools and policy insights for AI governance. The project focuses on four key areas:
1. Next Generation AI Governance: We work on tools and frameworks for secure and lawful data sharing, detecting self-preferencing in digital markets, and identifying online consumer manipulation, helping shape enforceable, resilient AI regulations.
2. AI-Powered Public Insight for Smarter, Citizen-Centric Regulation: We use AI to simplify complex digital regulations for citizens, while also creating structured feedback tools to help regulators better understand public needs and experiences.
3. Trustworthy AI for Financial Services: We are developing a policy toolkit based on risk models and ethical principles to help financial regulators assess and manage AI risks, supporting safe innovation and maintaining trust in financial systems.
4. AI Development and Institutional Constraints: We study how AI capabilities interact with governance structures worldwide and have created an AI Threat Index, with a focus on China and Russia as case studies.
Together, these work packages enable AI4POL to support effective, future-proof AI policies and regulations that reflect European values and societal priorities.
How we do it
AI4POL works collaboratively across disciplines and sectors to deliver tools and recommendations that are both rigorous and practical. Through close engagement with EU policymakers, regulators, industry experts, civil society, and the research community, AI4POL translates research into concrete outputs, such as policy briefs, workshops, dashboards, and legal frameworks. Our approach ensures that regulation of AI is not only informed by the latest technical and ethical insights but also aligned with the real-world needs of those tasked with enforcing it.
Who we are
The project brings together a diverse consortium of research and policy experts. Led by Jens Prüfer from Tilburg University, the project's interdisciplinary Work Packages are headed by Inge Graef from Tilburg University (WP2), Gjergji Kasneci from the Technical University of Munich (WP3), Sean Ennis from the University of East Anglia (WP4), and Jens Prüfer (WP5). Together, their teams combine expertise in law, economics, data science, and the social sciences—including political science and ethics—to advance responsible and future-proof AI governance across Europe.
This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No 101177455.