AI bot swarms threaten to undermine democracy
When AI Can Fake Majorities, Democracy Slips Away

A joint essay with Daniel Thilo Schroeder & Jonas R. Kunst, based on a new paper on swarms with 22 authors (including myself) that just appeared in Science. (A preprint version is here, and you can see WIRED’s coverage here.)
Automated bots that purvey disinformation have been a problem since the early days of social media, and bad actors have been quick to jump on LLMs as a way of automating the generation of disinformation. But as we outline in the new article in Science, we foresee something worse: swarms of AI bots acting in concert.
The unique danger of a swarm is that it acts less like a megaphone and more like a coordinated social organism. Earlier botnets were simple-minded, mostly just copying and pasting messages at scale—and in well-studied cases (including Russia’s 2016 IRA effort on Twitter), their direct persuasive effects were hard to detect. Today’s swarms, now emerging, can coordinate fleets of synthetic personas—sometimes with persistent identities—and move in ways that are hard to distinguish from real communities. This is not hypothetical: in July 2024, the U.S. Department of Justice said it disrupted a Russia-linked, AI-enhanced bot farm tied to 968 X accounts impersonating Americans. And bots already make up a measurable slice of public conversation: a 2025 peer-reviewed analysis of major events estimated roughly one in five accounts/posts in those conversations were automated. Swarms don’t just broadcast propaganda; they can infiltrate communities by mimicking local slang and tone, build credibility over time, and then adapt in real time to audience reactions—testing variations at machine speed to discover what persuades.
Why is this dangerous for democracy? No democracy can guarantee perfect truth, but democratic deliberation depends on something more fragile: the independence of voices. The “wisdom of crowds” works only if the crowd is made of distinct individuals. When one operator can speak through thousands of masks, that independence collapses. We face the rise of synthetic consensus: swarms seeding narratives across disparate niches and amplifying them to create the illusion of grassroots agreement. Venture capital is already helping industrialize astroturfing: Doublespeed, backed by Andreessen Horowitz, advertises a way to “orchestrate actions on thousands of social accounts” and to mimic “natural user interaction” on physical devices so the activity appears human. Concrete signs of industrialization are already emerging: the Vanderbilt Institute of National Security released a cache of documents describing “GoLaxy” as an AI-driven influence machine built around data harvesting, profiling, and AI personas for large-scale operations.
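To make the statistics concrete, here is a toy simulation (our illustration; the noise model and all numbers are assumptions, not results from the paper) of how a crowd's collective estimate degrades when half of its "voices" are one operator's sockpuppets:

```python
# Why the "wisdom of crowds" needs independent voices: averaging many
# independent guesses cancels noise, but sockpuppet copies of one
# operator's guess do not. All numbers here are synthetic.
import numpy as np

rng = np.random.default_rng(42)
TRUTH, TRIALS = 100.0, 10_000

def crowd_error(n_independent, n_sockpuppets):
    """Mean absolute error of the crowd's average guess."""
    errs = []
    for _ in range(TRIALS):
        humans = TRUTH + rng.normal(0, 20, n_independent)  # independent noise
        operator = TRUTH + rng.normal(0, 20)               # ONE noisy voice...
        bots = np.full(n_sockpuppets, operator)            # ...in many masks
        errs.append(abs(np.concatenate([humans, bots]).mean() - TRUTH))
    return np.mean(errs)

print(f"1,000 independent voices:          error {crowd_error(1000, 0):.2f}")
print(f"500 independent + 500 sockpuppets: error {crowd_error(500, 500):.2f}")
```

With a thousand independent voices, the errors cancel and the crowd lands close to the truth; replace half of them with copies of a single operator, and the average goes wherever that one voice happens to land.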
Because humans update their views partly based on social evidence—looking to peers to see what is “normal”—fabricated swarms can make fringe views look like majority opinions. If swarms flood the web with duplicative, crawler-targeted content, they can execute “LLM grooming,” poisoning the training data that future AI models (and citizens) rely on. Even so-called “thinking” AI models are vulnerable to this kind of poisoning.
We cannot ban our way out of the threat of generative-AI-fueled swarms of misinformation bots, but we can change the economics of manipulation. We need five concrete shifts.
First, social media platforms must move away from the “whack-a-mole” approach they currently use. Right now, companies rely on episodic takedowns—waiting until a disinformation campaign has already gone viral and done its damage before purging thousands of accounts in a single wave. This is too slow. Instead, we need continuous monitoring that looks for statistically unlikely coordination. Because AI can now generate unique text for every single post, looking for copy-pasted content no longer works. We must look at network behavior instead: a thousand users might be tweeting different things, but if they exhibit statistically improbable correlations in their semantic trajectories, or propagate narratives with a synchronized efficiency that defies organic human diffusion, the coordination itself becomes the fingerprint.
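As a sketch of what that kind of behavioral monitoring could look like (a simplified illustration with synthetic data and an arbitrary threshold, not a production detector), consider accounts that share a hidden posting schedule: even when every post is textually unique, their activity rhythms stay improbably correlated.

```python
# Behavior-based coordination detection: instead of matching copy-pasted
# text, look for accounts whose activity rhythms are too correlated to be
# organic. All data here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def hourly_activity(n_accounts, n_hours, coordinated=False):
    """Posting counts per hour, one row per account."""
    base = rng.poisson(1.0, size=(n_accounts, n_hours)).astype(float)
    if coordinated:
        # Coordinated accounts follow a shared hidden schedule plus jitter.
        schedule = rng.poisson(3.0, size=n_hours)
        base += schedule + rng.normal(0, 0.5, size=(n_accounts, n_hours))
    return base

organic = hourly_activity(40, 24 * 14)                  # two weeks of humans
swarm = hourly_activity(10, 24 * 14, coordinated=True)  # a small swarm
activity = np.vstack([organic, swarm])

# Pairwise Pearson correlation between every account's activity series.
corr = np.corrcoef(activity)
np.fill_diagonal(corr, 0.0)

# Flag pairs far too correlated for independent users (threshold is
# illustrative; a real system would calibrate it against organic baselines).
suspicious = np.argwhere(np.triu(corr > 0.6))
print(f"{len(suspicious)} suspiciously synchronized account pairs")
for i, j in suspicious[:5]:
    print(f"  accounts {i} and {j}: r = {corr[i, j]:.2f}")
```

A real detector would fuse many such signals (embedding trajectories, retweet cascades, timing signatures) at platform scale, but the principle is the same: the content varies, the coordination does not.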
Second, we need to stop waiting for attackers to invent new tactics before we build defenses. A defense that only reacts to yesterday’s tricks is destined to fail. We should instead proactively stress-test our defenses using agent-based simulations. Think of this like a digital fire drill or a vaccine trial: researchers can build a “synthetic” social network populated by AI agents, and then release their own test-swarms into that isolated environment. By watching how these test-bots try to manipulate the system, we can see which safeguards crumble and which hold up, allowing us to patch vulnerabilities before bad actors act on them in the real world.
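A toy version of such a drill might look like the following (our illustrative sketch; the update rule and parameters are assumptions, not a model of any real platform): a synthetic population whose members nudge their opinions toward whatever sample of “neighbors” they see, seeded with a small test-swarm that never budges.

```python
# A minimal "digital fire drill": synthetic agents update opinions from
# sampled neighbors, while a small test-swarm of bots never updates.
# The update rule and all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

N_HUMANS, N_BOTS, STEPS = 200, 20, 50
opinions = rng.normal(0.0, 0.1, N_HUMANS)  # humans start near neutral (0.0)
BOT_OPINION = 1.0                          # the swarm's fixed narrative

for _ in range(STEPS):
    # Each human samples 5 voices from the mixed pool and nudges 10%
    # of the way toward the sample mean (simple DeGroot-style updating).
    pool = np.concatenate([opinions, np.full(N_BOTS, BOT_OPINION)])
    sampled = pool[rng.integers(0, len(pool), size=(N_HUMANS, 5))]
    opinions += 0.1 * (sampled.mean(axis=1) - opinions)

print(f"mean human opinion after {STEPS} steps: {opinions.mean():.2f}")
# Rerun with N_BOTS = 0 to see the baseline stay near 0.0.
```

Even this crude model shows a swarm making up under ten percent of the pool dragging the whole population toward its narrative; a serious drill would swap in platform-like recommendation dynamics and then measure which proposed safeguards actually blunt the drift.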
Third, we must make it expensive to be a fake person. Policymakers need to incentivize cryptographic attestations and reputation standards to strengthen provenance. This doesn’t mean forcing every user to hand over their ID card to a tech giant—that would be dangerous for whistleblowers and dissidents living under authoritarian regimes. Instead, we need “verified-yet-anonymous” credentialing. Imagine a digital stamp that proves you are a unique human being without revealing which human you are. If we require this kind of “proof-of-human” for high-reach interactions, we make it mathematically difficult and financially ruinous for one operator to secretly run ten thousand accounts.
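One known way to build such a stamp is a blind signature: an issuer verifies that you are a real, unique human, signs a blinded token, and can never link the token it later sees back to that verification. Here is a minimal Chaum-style sketch (toy key sizes, no uniqueness or rate-limiting; it demonstrates only the unlinkability property):

```python
# Conceptual sketch of a "verified-yet-anonymous" credential using a
# Chaum-style RSA blind signature. Toy key sizes only; a real system
# would use a vetted library and a full protocol.
import secrets
from math import gcd
from hashlib import sha256

# Issuer's RSA key (tiny, well-known primes for demonstration; NOT secure).
p, q = 999983, 1000003
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))           # issuer's private exponent

# User: create a random token and blind its hash before showing the issuer.
token = secrets.token_bytes(16)             # the future anonymous credential
m = int.from_bytes(sha256(token).digest(), "big") % n
while True:
    r = secrets.randbelow(n - 2) + 2        # blinding factor, coprime to n
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n            # issuer sees only this value

# Issuer: check the person is a real, unique human (out of band), then
# sign the blinded value, without ever learning the token itself.
blind_sig = pow(blinded, d, n)

# User: strip the blinding; the result is a valid issuer signature on m.
sig = (blind_sig * pow(r, -1, n)) % n

# Any platform can verify "a verified human holds this" with the public
# key, while the issuer cannot connect the token to the identity check.
assert pow(sig, e, n) == m
print("anonymous proof-of-human token verifies")
```

A deployable scheme would add per-person issuance limits, expiry, and revocation, which is precisely what makes it financially ruinous to run ten thousand “verified” accounts.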
Fourth, we need mandated transparency through free data access for researchers. We cannot defend society if the battlefield is hidden behind proprietary walls. Currently, platforms restrict access to the data needed to detect these swarms, leaving independent experts blind. Legislation must guarantee vetted academic and civil society researchers free, privacy-preserving access to platform data. Without a guaranteed “right to study,” we are forced to trust the self-reporting of the very corporations that profit from the engagement these swarms generate.
Finally, we need to end the era of plausible deniability with an AI Influence Observatory. Crucially, this cannot be a government-run “Ministry of Truth.” Instead, it must be a distributed ecosystem of independent academic groups and NGOs. Their mandate is not to police content or decide who is right, but strictly to detect when the “public” is actually a coordinated swarm. By standardizing how evidence of bot-like networking is collected and publishing verified reports, this independent watchdog network would prevent the paralysis of “we can’t prove anything,” establishing a shared, factual record of when our public discourse is being engineered.
None of this guarantees safety. But it does change the economics of large-scale manipulation.
The point is not that AI makes democracy impossible. The point is that when it costs pennies to coordinate a fake mob and moments to counterfeit a human identity, the public square is left wide open to attack. Democracies don’t need to appoint a central authority to decide what is “true.” Instead, they need to rebuild the conditions where authentic human participation is unmistakable. We need an environment where real voices stand out clearly from synthetic noise.
Most importantly, we must ensure that secret, coordinated manipulation is economically punishing and operationally difficult. Right now, a bad actor can launch a massive bot swarm cheaply and safely. We need to flip those physics. The goal is to build a system where faking a consensus costs the attacker a fortune, where their network collapses like a house of cards the moment one bot is detected, and where it becomes technically impossible to grow a fake crowd large enough to fool the real one without getting caught.
– Daniel Thilo Schroeder, Gary Marcus, Jonas R. Kunst
Daniel Thilo Schroeder is a Research Scientist at SINTEF. His work combines large-scale data and simulation to study coordinated influence and AI-enabled manipulation (danielthiloschroeder.org).
Gary Marcus, Professor Emeritus at NYU, is a cognitive scientist and AI researcher with a strong interest in combatting misinformation.
Jonas R. Kunst is a professor of communication at BI Norwegian Business School, where he co-leads the Center for Democracy and Information Integrity.
Everyone thinks they’re above it, that they can detect the bots or remain unmoved. That’s why it won’t change and the only real solution is to stop using ALL social media. Even lemmy.
It’s always been the case that propaganda only works on the target audience. That’s why it’s so interesting to look through historical propaganda - it seems unreal and is easy to see through. Bots are just personalized propaganda machines.
Everyone thinks they’re above it, that they can detect the bots or remain unmoved.
I’d say the bigger problem is that there’s nothing “to be done” at an individualist level. Bot swarms are going to exist and you’re going to be exposed to them either directly (via your social media diet) or indirectly (via traditional news, casual conversation with people who consume social media, and the public policies that inevitably emerge from these trends). Simply “being aware” isn’t a defense. You’re still going to have your priors chipped away by these tidal shifts in public discourse.
Look at the impact the High Protein diet fetish has had on every fucking food retailer. Seems like every product has to include a sticker telling you how many grams of protein is in it. We went through the same diet fad exposure with GMOs and saturated fats and fiber. We got deluged with crime statistics in news, entertainment, and politics at every angle to justify the War on Crime. We’re still eating tons of shit from the anti-vax trends of the Bush Era.
But that won’t stop society from imploding.
Society isn’t ending. It’s always doing this shit. What’s changed is you. When you were young, the baseline felt normal because it’s all you knew. Now it’s changing under your feet and you’re struggling to reconcile your childhood awareness with an evolved landscape as an adult.
Yes, good comment. But in the past it at least took some effort. You had to write a book that got popular or something along those lines.
Also, bots aren’t really threatening to undermine democracy, at least in my country, as it was undermined a long time ago.
We’ve further mechanized the process of sales and marketing in an industry that we’ve been at the forefront of since the turn of the 20th century.
Modern anti-AI guys would have been eating up Adbusters 'zines 30 years ago.
Well yeah, adbusters was based
Only if you’ve been asleep and ignored the bots, troll farms, social media subversives interested in clicks (paid or otherwise), international interference using the same tools, billionaires pushing the media to tell lies or hide the truth.
What a useless article. We’re already overwhelmed with lies and half-truths. The only thing that’s going to happen is that a few troll farm employees are going to be out of work when they get replaced by AI.
So does “Capitalism” - the psycho religious belief sponsored by its rich leaders - just another authoritarian/grifting cult…
Relevant XKCD
All of the things in that image are tactics that have been in use for nearly a decade now. It’s not a thing unique to AI.
LLMs have only made it so that the bad operators who are pushing these kinds of operations now have a force multiplier.
We already have a bot problem, but until now the capabilities were such that only a human could draft arguments and posts, while the vast majority of bots were simply there to manipulate the algorithm/votes so that their content got signal-boosted.
Now a single person can control a large number of accounts that can actively respond and argue like a real person, in addition to the existing vote-manipulation bot swarm.
Look at how many ‘people’ suddenly appeared out of the woodwork after the Minnesota shooting and started posting despite being inactive for months. Go look on any big instance at the number of communities which have moderators squatting on popular community names despite them having 0 subscribers, traffic or posts.
I don’t doubt that there are entire instances being run by these operations so that, much like the ever famous r/conservative, they can control the echo chamber.
LLMs are just the latest tool in their arsenal, but this tactic of manipulating social media is taking place right now.
Oh no. We must stop this threat to democracy that hasn’t happened yet.
Thank God democracy hasn’t already been undermined for years.
So much of the modern political landscape is just suburban white people waking up and realizing they aren’t exempt from the police state that’s been harassing everyone else since the nation’s founding.
Admittedly, this is how you get class consciousness. So it’s not bad, per se. Just a bit ironic that Trump’s War On Woke is radicalizing entire municipalities at a breakneck pace.
Just more ram bro I promise bro we’ll save the world with more ram bro please bro more ram
Malicious AI Swarm: thank you
We are so, so, so thoroughly cooked. I can’t even.
Just create a fact-checking / pro-democracy „AI swarm“. Tired of this fear mongering where apparently only the right wing knows how to use the technology, along with foreign troll farms.
That kind of thing requires budget, though. The right/capital has that, volunteers don’t.
Uhm, all the non-right-wing parties? There’s money there too. And it’s not necessarily so expensive.
With some good leadership in opposition, leading people to a coherent strategy, these influence operations would fall flat. It’s only because our opposition parties, all mainstream parties, are captured by the rich and/or rich nazis.
We need organization, and leadership. Now it’s perhaps too late, though, fixed elections and all, even if we had that leadership, which we don’t. Worthless, disliked aristocrats will be our fake champions, promising empty platitudes that they will at most make perfunctory efforts to fulfill, then give up.
No reform is on offer. 2008 was the last time we were offered that, and it failed by and large.
Skynet plz.
Shutting down the Internet permanently is our only hope.
The old guard of human operated propaganda farms feel threatened by having their playing field leveled. They’re scared that non-capitalists may now have the means to have their voices heard over the cacophonous noise of the billionaire owned media and government sponsored troll farms.