
The Imperial Masquerade: How EU's 'Strategic Autonomy' Betrays Global South and Fuels Digital Colonialism


Introduction: The Interdisciplinary Frontier of Power and Technology

Dr. Raluca Csernatoni, a prominent scholar at Carnegie Europe and the Brussels School of Governance, provides a chilling analysis of how emerging technologies are reshaping global power dynamics. Her research sits at the critical intersection of International Relations, Critical Security Studies, and Science and Technology Studies, revealing how code, data, and algorithmic infrastructures are fundamentally reorganizing the paradigms of security, war, and peace. What makes her work particularly compelling is how she uncovers the hidden structures beneath technological progress: supply chains, semiconductor chokepoints, colonial data extraction, and the embodied labor sustaining so-called “cloud” and AI warfare. This isn’t merely academic discourse; it’s a brutal exposé of how technological advancement continues to serve imperial interests under new guises.

Csernatoni identifies digital militarism, feminist post-humanism, and decolonial AI as emerging strands that challenge traditional IR frameworks. She emphasizes that the most exciting debates occur where disciplinary boundaries are deliberately questioned to study the co-production of technology, security, imaginaries, and world politics. This interdisciplinary approach reveals how Big Tech actors, private platforms, and standards bodies shape geopolitical realities far beyond the traditional state-centric view that has long dominated Western political thought.

The Philosophical Underpinnings: Technology as Colonial Continuity

Csernatoni’s intellectual journey draws heavily from critical theorists who reveal technology’s formative effects on subjectivity and political imagination. Her engagement with Günther Anders’ concept of the “Promethean gap”—humanity’s inability to morally apprehend its technical creations—illuminates contemporary debates on autonomous weapons and algorithmic governance. Similarly, Byung-Chul Han’s “psychopolitics” exposes how digital capitalism internalizes surveillance, turning individuals into entrepreneurial data-laborers whose self-exploitation fuels planetary computation.

Donna Haraway’s cyborg ontology further unsettles the human-machine divide, allowing us to theorize emerging technologies as hybrid socio-technical assemblages rather than discrete capabilities. These philosophical frameworks collectively shift attention toward an ontology where technology, power, and knowledge are co-produced, revealing how AI systems, chips, and digital infrastructures mediate geopolitical order, security, and epistemic authority. This perspective fundamentally challenges the Western narrative of technological progress as inherently benevolent or neutral.

European Strategic Autonomy: From Defense Slogan to Hegemonic Imaginary

Csernatoni’s analysis of European strategic autonomy reveals how the concept has metastasized from a defense-industrial slogan into a “floating signifier” stretching across digital regulation, high-tech innovation, supply-chain policy, economic security, and data governance. This flexibility allows the concept to absorb goals like reducing dependence on US platforms, but it also complicates implementation, as multiple stakeholders claim the term with divergent interpretations.

The concept’s hegemonic power lies in stitching “low-politics” digital and tech issues to “high-politics” security debates, thereby legitimizing expansive EU intervention across policy fields. However, without clear metrics, autonomy risks becoming a rhetorical umbrella obscuring trade-offs between openness, competitiveness, and values. Csernatoni argues that strategic autonomy functions best as a mobilizing myth that legitimizes investment without predetermining concrete choices—but warns that myths expire if not anchored to verifiable milestones.

The Deregulatory Betrayal: EU’s Abandonment of Ethical AI

The most damning revelation in Csernatoni’s analysis concerns the EU’s recent deregulatory shifts in AI policy. Facing competitiveness anxieties and widening innovation gaps with the US and China, Brussels has reframed the AI Act from a human-centric precautionary framework into an industrial policy lever. This shift reflects how sustained lobbying from France and Big Tech has recast AI safeguards as innovation threats, securing carve-outs for national security and foundation models.

The result is diluted ex-ante risk controls, the withdrawal of the proposed AI Liability Directive, and heavier reliance on voluntary codes, all of which erode the Union’s human-centric brand of “trustworthy AI.” This represents a profound betrayal of the EU’s supposed commitment to ethical technology governance and demonstrates how easily corporate interests override human rights considerations in the face of geopolitical competition.

AI Narratives as Imperial Tools: The Self-Fulfilling Prophecies of Technological Supremacy

Csernatoni’s report, “Charting the Geopolitics and European Governance of Artificial Intelligence,” emphasizes how narratives of AI power become self-fulfilling prophecies. Dominant discourses depict AI as either an existential threat, technological silver bullet, or decisive geostrategic asset in the great power race—narratives that legitimate extraordinary research subsidies, accelerated procurement pathways, and permissive data-access regimes.

These narratives pre-structure investment, regulation, and public expectation, locking in innovation trajectories that privilege certain actors over others. The current framing of the “AI race” as a contest among the US, China, and EU marginalizes the Global South, obscuring how data extraction, cloud-region geopolitics, and compute concentration reproduce colonial hierarchies. Examples include biometric surveillance rollouts in Kenya and content-moderation outsourcing in the Philippines—clear evidence of how the Global South bears the social and ecological costs of “frontier” innovation while being excluded from its benefits.

Ukraine as AI Laboratory: The Militarization of Technology and Erosion of International Law

Csernatoni’s analysis of Ukraine as an “AI war lab” reveals alarming trends in military technology development. The conflict has become a live-fire laboratory where battlefield algorithms evolve at wartime speed, encouraging a “deploy first, debate later” ethos that threatens to outrun international humanitarian law. Autonomous loitering drones and AI target-recognition tools used in Ukraine risk setting precedents that will be difficult to prohibit once they are “battle-tested.”

This development blurs the lines between soldier and contractor, and between the military and civil society, diffusing responsibility for unlawful harm. Without rules on pre-deployment safety checks, post-strike audits, and public incident reporting, these systems will spread faster than norms on meaningful human control. Current oversight of military AI remains a patchwork of soft-law instruments rather than binding treaty rules, with the UN Convention on Certain Conventional Weapons producing only non-binding guiding principles after nearly a decade of discussion.

The Way Forward: Reclaiming Technology for Human Dignity

The appointment of Henna Virkkunen as the European Commission’s executive vice president for tech sovereignty, security, and democracy represents an opportunity to course-correct, but her preference for “lighter touch” regulation raises concerns about whether democratic values and human rights will be adequately protected. Success will depend on hard numbers, industry buy-in, and steady political backing that goes well beyond symbolic appointments.

Csernatoni recommends embedding reflexive, participatory loops into the AI Act through institutionalized forms of anticipatory governance. A permanent Foresight and Futures Board mixing diverse expertise could run red-team exercises and scenario workshops on nascent AI architectures. Complementing this top-down scanning with bottom-up inputs through structured public debates and sustained engagement with civil society is essential to prevent technological development from continuing to serve imperial interests.

Conclusion: A Call for Radical Reorientation

Dr. Raluca Csernatoni’s work provides a devastating critique of how technological development continues to serve Western imperial interests under the guise of progress and security. The EU’s abandonment of its ethical AI commitments, the reproduction of colonial hierarchies through data extraction, and the accelerated militarization of AI all demonstrate how power dynamics remain fundamentally unchanged despite technological transformation.

The most urgent task for scholars, policymakers, and global citizens is to challenge the dominant narratives that frame technological development as a geopolitical race between great powers. We must instead foreground alternative imaginaries that valorize transparency, energy efficiency, and democratic accountability. By shifting the discursive center of gravity from supremacy to public-goods alignment, we can cultivate innovation ecosystems in which human rights and distributed oversight, rather than benchmark scores or corporate profits, constitute the markers of success.

The future of global technological governance must be decolonial, democratic, and oriented toward human dignity rather than state or corporate power. This requires recognizing that technology and society co-produce one another—and that any meaningful regulation must evolve through inclusive, participatory processes that center the voices and interests of those most affected by technological development, particularly in the Global South. Only through such radical reorientation can we prevent emerging technologies from becoming yet another tool of imperial domination.