
Europe has long treated artificial intelligence (AI) as an economic opportunity. Researchers at TILEC argue that this framing is now dangerously outdated.
For years, European policy discussions on AI have centered on innovation, competitiveness, and economic growth. But the research led by the Tilburg Law and Economics Center (TILEC), carried out within the AI4POL project, challenges this framing at its root. AI, the researchers argue, is no longer just a tool for productivity — it has become a geopolitical instrument of power.
Not all AI poses the same risk
A key contribution of the research is its insistence on precision. Rather than treating AI as a single abstract technology, the team distinguishes four fundamentally different types, each carrying its own risk profile:
- Type 1: Rule-based systems. Classical expert systems operating on explicit, predefined logic.
- Type 2: Data-driven learning. Machine learning models trained on large datasets to identify patterns.
- Type 3: Autonomous physical systems. Drones and robotics capable of acting independently in the physical world.
- Type 4: Generative AI. Systems that produce text, images, and video at scale and with high fidelity.
This taxonomy is not an academic exercise; it is the foundation for a more targeted approach to AI governance and threat assessment. Generative AI, for example, can be weaponized to flood European societies with tailored disinformation, potentially swaying elections. Autonomous weapons systems compress the time between detection and attack, making military escalation faster and far harder to manage diplomatically. Meanwhile, Europe's continued dependence on foreign chips, cloud infrastructure, and AI models creates structural vulnerabilities that adversaries can exploit.
"The greatest risk arises when these threats converge: cyberattacks, disinformation, and political manipulation can simultaneously put pressure on critical infrastructure and democratic institutions," says Jens Prüfer, project lead.
From general debate to real-time detection
The TILEC team's response to this challenge is concrete. Together with Centerdata, they are developing a real-time threat dashboard capable of detecting early-stage AI-enabled threats originating from countries such as China and Russia. The tool is designed to help policymakers act quickly and with greater situational awareness — moving from reactive crisis management to proactive, intelligence-driven governance.
Central to one of the project's work packages (WP5) is the development of a theoretical AI Threat Index: a framework that maps the intersection of a state's technological AI maturity with the institutional and governance constraints on those who control high-risk AI systems. Political, economic, legal, and ethical dimensions are all factored in. This index feeds into the dynamic dashboard and supports EU policymakers with actionable, value-based recommendations.
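The core intuition behind such an index (a state's threat potential rises with its technological maturity and falls with the strength of constraints on those controlling high-risk systems) can be sketched in a few lines. The function, weighting scheme, and scores below are purely illustrative assumptions, not the project's actual methodology:

```python
# Illustrative sketch of an "AI Threat Index" (hypothetical formula, not the
# AI4POL model): technological maturity moderated by the average strength of
# governance constraints across the four dimensions named in the article.

DIMENSIONS = ("political", "economic", "legal", "ethical")

def threat_index(maturity: float, constraints: dict) -> float:
    """Higher maturity raises the index; stronger constraints lower it.

    maturity: 0..1 technological AI maturity of the state.
    constraints: per-dimension constraint strength, 0..1 (1 = strongest).
    """
    avg_constraint = sum(constraints[d] for d in DIMENSIONS) / len(DIMENSIONS)
    return maturity * (1.0 - avg_constraint)

# Hypothetical example: a technologically mature state with weak
# institutional and governance constraints scores high.
score = threat_index(0.9, {"political": 0.2, "economic": 0.3,
                           "legal": 0.1, "ethical": 0.2})
print(round(score, 2))  # 0.72
```

Even this toy version captures the framework's key design choice: maturity alone is not the threat signal; it is maturity combined with weak checks on those who wield the technology.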
Reframing the question
The researchers are emphatic: Europe does not need more general debates about AI. It needs targeted defensive strategies. The fundamental question has shifted. It is no longer how Europe can economically benefit from AI, but how it can protect itself against AI as a geopolitical weapon. That reorientation — from opportunity to resilience — may be the most important policy insight the project offers.
