Algorithms at War: AI and the Future of Sovereignty
- Liliana Trigilio

The recent increase in weaponized disinformation campaigns is raising concerns about AI's role in the changing global political landscape.

Artificial Intelligence (“AI”) offers remarkable opportunities to improve how we work, communicate, and innovate, but its growing role in global politics demands careful oversight.[2] While AI can enhance efficiency and connection, the same technology also enables disinformation campaigns that are faster, cheaper, and more persuasive than ever before.[3]
Why AI Changes the Game
Propaganda and influence campaigns have existed for as long as international relations itself, but AI turbocharges these practices.[4] Generative AI models, machine systems trained to produce new content by mimicking patterns in the data they were fed, can pump out fake news articles, photos, or even deepfake videos in seconds.[5] Automated bot networks spread disinformation across platforms, while social media algorithms amplify the most sensational posts, pushing divisive falsehoods into millions of users’ feeds before fact-checkers have a chance to react.[6] Advanced AI-generated content is difficult to distinguish from authentic media, and because falsehoods often spread wider and faster than corrections, this propaganda can shape public perception.[7]
This is not hypothetical; it is happening now. In 2025, Moldova’s elections were hit by Russian-linked AI-generated disinformation, including deepfake videos and spoofed news sites.[8] In response, the United States has sanctioned state-sponsored entities tied to Iran’s Islamic Revolutionary Guard Corps (IRGC) and Russia’s Main Intelligence Directorate (GRU) for AI-enabled election interference.[9]
The Legal Fault Lines
These incidents are more than isolated security concerns; they expose unresolved legal fault lines within the doctrine of non-intervention. Under international law, states cannot intervene in each other's internal affairs.[10] Elections are at the heart of sovereignty, so when AI-driven disinformation manipulates voters, it risks crossing the line into unlawful intervention. Some scholars argue that when foreign content distorts the choices of a sovereign electorate, it amounts to the kind of coercion the doctrine prohibits.[11]
There is also a human rights dimension to this issue. Free and fair elections are tied to the right of political participation, and disinformation that skews public opinion undermines that right.[12] But cracking down too aggressively risks chilling free expression, another core human right.[13] Authoritarian governments already invoke the “fight against disinformation” as a cover for censorship.[14] Even where disinformation is illegal, proving who is behind it is difficult.[15] AI makes attribution harder: bots, fake accounts, and synthetic AI-generated content mask state involvement.[16] Without a clear means of attribution, holding states accountable under international law remains elusive.[17]
National and Regional Responses
Governments are starting to act, albeit unevenly. For example, Brazil’s Supreme Electoral Tribunal has taken a hard line against AI disinformation.[18] The electoral court warned candidates that they could lose the right to run, or have their mandates revoked, if they use AI-generated smear tactics.[19] The EU’s Digital Services Act requires major platforms to monitor and mitigate systemic risks, including election-related disinformation.[20] The U.S. is targeting abuses piecemeal, for example by flagging deepfake political ads and requiring disclosure of the origins of AI-generated content.[21] But legal frameworks remain patchy. Technology evolves faster than regulation, allowing malicious actors to stay one step ahead.[22]
The Political Stakes
AI-enabled disinformation is not just a legal issue; it is a political weapon.[23] AI-driven disinformation poses a range of risks, including the following:
First, repeated exposure to fake content erodes trust in media, elections, and institutions.[24] Second, algorithm-driven amplification pushes people toward extremes, making societies easier to divide and polarize.[25] Third, there is a risk of hybrid warfare, as foreign powers increasingly deploy disinformation as an instrument of statecraft short of outright conflict.[26] Lastly, there are significant authoritarian risks: states may invoke the threat of disinformation to justify mass censorship and surveillance.[27]
Building a Defense
The path forward is not simple, but several strategies are emerging. First, define when disinformation amounts to coercion or unlawful intervention under international law.[28] To date, no international body has formally drawn that line.[29] The European Union has come closest to treating disinformation as a potential breach of sovereignty through frameworks such as the Digital Services Act and the Code of Practice on Disinformation, but broad regulatory protections alone will not shield citizens from the evolving coercive potential of AI-driven influence operations.[30]
Second, invest in technical tools and multilateral mechanisms for tracing AI campaigns to their sources.[31] The United States has moved to regulate AI-enabled disinformation through sanctions, election rules, and targeted prohibitions, but the technology is evolving quickly. Legal frameworks must develop in tandem, which in turn requires technologically advanced attribution tools.[32]
Third, require transparency about AI-generated content and take stronger action against coordinated inauthentic behavior. In a significant step, California Governor Gavin Newsom signed a suite of new laws regulating artificial-intelligence systems and social-media platforms.[33] The new laws require social media companies to label AI-generated content, impose age-verification controls for children’s apps, and mandate transparency from large online platforms about the origins of uploaded material beginning in 2027.[34] In parallel, India’s government has proposed sweeping regulations that would require AI-generated content to be clearly labeled and oblige platforms to verify users’ declarations and deploy built-in tools to detect and limit the spread of misleading content.[35]
Lastly, coordinate globally. Democracies facing AI-enabled disinformation campaigns cannot rely solely on national measures; they must share intelligence, align sanctions, and set common regulatory standards to close the loopholes that malign actors exploit.[36][37] By establishing cross-border information-sharing mechanisms, mutual recognition of AI-audit certifications, and joint sanctions lists, democratic governments can present a unified front that reduces safe havens for disinformation operations and strengthens collective resilience.[38]
Conclusion
Artificial intelligence is not just noise online or help with grocery lists; it is a new front in international politics.[39] It operates as a subtle yet potent tool of statecraft, capable of eroding public trust, destabilizing democratic elections, and blurring the line between legitimate influence and foreign interference. From deepfakes designed to sway voters to coordinated networks amplifying polarizing narratives, AI-fueled tactics weaponize information ecosystems at a scale and speed governments struggle to counter, and democracies that fail to adapt risk losing ground not to armies, but to algorithms.[40]
[1] Raimond Spekking, [Parliamentary elections in Moldova photograph], European Forum for Democracy & Solidarity (Sept. 30, 2025), https://europeanforum.net/election-overview-parliamentary-elections-in-moldova/.
[2] Md. Abul Mansur, AI and Cyber-Enabled Threats to Democracy through Algorithmic Manipulation and Generative AI in Undermining Democratic Integrity, 44 Eur. Sci. J. ESI Preprints 658 (Aug. 26, 2025), https://eujournal.org/index.php/esj/article/view/20103.
[3] Rhiannon Williams, Humans May Be More Likely to Believe Disinformation Generated by AI, MIT Tech. Rev. (June 28, 2023), https://www.technologyreview.com/2023/06/28/1075683/humans-may-be-more-likely-to-believe-disinformation-generated-by-ai/.
[4] Jessica Brandt, Propaganda, Foreign Interference, and Generative AI, Brookings Inst. (Nov. 8, 2023), https://www.brookings.edu/articles/propaganda-foreign-interference-and-generative-ai/.
[5] Id.
[6] Md. Abul Mansur, supra note 2.
[7] Rhiannon Williams, supra note 3.
[8] Oana Marocico & Seamus Mirodan, How Russian-funded fake news network aims to disrupt election in Europe - BBC investigation, BBC News (Sept. 20, 2025), https://www.bbc.com/news/articles/c4g5kl0n5d2o. Promo-Lex, a Moldovan election-monitoring nonprofit, identified over 500 fake TikTok accounts spreading coordinated anti-EU and anti-Sandu narratives that amassed 1.3 million views in just three days; the Moldovan government responded by creating a national center to counter disinformation. Id.
[9] U.S. Dep’t of the Treasury, Treasury Sanctions Entities in Iran and Russia That Attempted to Interfere in the U.S. 2024 Election, Press Release No. JY2766 (Dec. 31, 2024), https://home.treasury.gov/news/press-releases/jy2766.
[10] Sir Michael Wood, Non-Intervention (Non-interference in Domestic Affairs), Princeton Encyclopedia of Self-Determination (2025), https://pesd.princeton.edu/node/551.
[11] Angelo Toma, Algorithmic Foreign Influence: Rethinking Sovereignty in the Age of AI, Lawfare (Aug. 20, 2025), https://www.lawfaremedia.org/article/algorithmic-foreign-influence--rethinking-sovereignty-in-the-age-of-ai.
[12] Toma, supra note 11.
[13] Sean Stevens, Americans Worry About AI in Politics — But They’re More Worried About Government Censorship, FIRE (June 6, 2025), https://www.thefire.org/news/americans-worry-about-ai-politics-theyre-more-worried-about-government-censorship.
[14] Joel Simon, Avoiding the Disinformation Trap, New Yorker (Feb. 12, 2024), https://www.newyorker.com/news/daily-comment/avoiding-the-disinformation-trap.
[15] Bradley Honigberg, The Existential Threat of AI-Enhanced Disinformation Operations, Just Security (July 8, 2022), https://cset.georgetown.edu/article/the-existential-threat-of-ai-enhanced-disinformation-operations/.
[16] Id.
[17] Anatolii Marushchak, Stanislav Petrov & Anayit Khoperiya, Countering AI-Powered Disinformation Through Legal Regulation, (PMC, Feb. 2025), https://pmc.ncbi.nlm.nih.gov/articles/PMC11747593/.
[18] Gustavo Borges, Combating Disinformation by the Brazilian Judiciary: Initiatives for the 2024 Municipal Elections, Wilson Ctr. (Oct. 25, 2024), https://www.wilsoncenter.org/blog-post/combating-disinformation-brazilian-judiciary-initiatives-2024-municipal-elections.
[19] Brazil Justice Moraes Warns Political Candidates Not to Use AI Against Opponents, Reuters (Feb. 29, 2024), https://www.reuters.com/world/americas/brazil-justice-moraes-warns-political-candidates-not-use-ai-against-opponents-2024-02-29/.
[20] The Digital Services Act (DSA): Ensuring a Safe and Accountable Online Environment, Eur. Comm’n (Oct. 27, 2022), https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en.
[21] Kyra Paul-Fowler, 2024 Trending Legislation: Federal and State Attempts to Curb Deepfakes, Vill. L. Rev. Blog (Aug. 21, 2024), https://www.villanovalawreview.com/post/2642-2024-trending-legislation-federal-and-state-attempts-to-curb-deepfakes.
[22] Nicholas Emery-Xu, Richard Jordan & Robert Trager, International Governance of Advancing Artificial Intelligence, 40 AI & Soc’y 3019 (2024), https://doi.org/10.1007/s00146-024-02050-7.
[23] Bradley Honigberg, supra note 15.
[24] Shanze Hasan & Abdiaziz Ahmed, Gauging the AI Threat to Free and Fair Elections, Brennan Ctr. for Just. (Mar. 6, 2025), https://www.brennancenter.org/our-work/analysis-opinion/gauging-ai-threat-free-and-fair-elections.
[25] Id.
[26] Md. Abul Mansur, supra note 2.
[27] Joel Simon, supra note 14.
[28] Prohibition of Intervention, Cyber Law Toolkit (last visited Jan. 4, 2026) https://cyberlaw.ccdcoe.org/wiki/Prohibition_of_intervention.
[29] Matija Franklin, Philip Moreira Tomei & Rebecca Gorman, Vague Concepts in the EU AI Act Will Not Protect Citizens from AI Manipulation, OECD.AI (Sept. 7, 2023), https://oecd.ai/en/wonk/eu-ai-act-manipulation-definitions.
[30] Id.
[31] Jon Bateman & Dean Jackson, Countering Disinformation Effectively: An Evidence-Based Policy Guide, Carnegie Endowment for Int’l Peace (Jan. 31, 2024), https://carnegieendowment.org/research/2024/01/countering-disinformation-effectively-an-evidence-based-policy-guid.
[32] America’s AI Action Plan, The White House (July 2025), https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.
[33] Sophia Bollag, California Governor Signs New Artificial Intelligence Laws, GovTech (Oct. 14, 2025), https://www.govtech.com/policy/california-governor-signs-new-artificial-intelligence-laws.
[34] Id.
[35] Aditya Kalra & Munsif Vengattil, India Proposes Strict Rules to Label AI Content Citing Growing Risks, Reuters (Oct. 22, 2025), https://www.reuters.com/business/media-telecom/india-proposes-strict-it-rules-labelling-deepfakes-amid-ai-misuse-2025-10-22/.
[36] Cheng-chi (Kirin) Chang, The First Global AI Treaty: Analyzing the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, U. Ill. L. Rev. Online 86 (Dec. 20, 2024), https://illinoislawreview.org/online/the-first-global-ai-treaty/.
[37] Alexander Romanishyn, Olena Malytska & Vitaliy Goncharuk, AI-Driven Disinformation: Policy Recommendations for Democratic Resilience, 8 Futur. & Artificial Intell. 1569115 (2025), https://doi.org/10.3389/frai.2025.1569115.
[38] Romanishyn, Malytska & Goncharuk, supra note 37.
[39] Md. Abul Mansur, supra note 2.
[40] Rhiannon Williams, supra note 3.