After a few weird social media interactions with accounts that seemed inhumanly cruel, I started to get suspicious about whether these were real people. Clicking on particularly aggressive user accounts turned up profiles with very few or no friends and no pictures… suspect indeed. I saw the term “rage baiting bots” and something clicked in my head. I had been intrigued by the proliferation of foreign misinformation bots during the last two presidential elections, so the idea of rage baiting bots wasn’t too far off. I decided to take a deeper dive, and what I’ve learned about them is totally freaky, but also heartening in a way. Realizing that all that nastiness isn’t coming from real people gives me a little hope.
Did you all know about this? Have you heard of rage baiting bots?
Here’s some info: Rage baiting bots are automated or semi-automated social media accounts designed to provoke emotional reactions, especially anger, fear, or outrage, in users. They are generative AI at its worst. They’re used to amplify divisive content, derail conversations, or flood threads with inflammatory comments. While they can appear on all sides of the political spectrum, many researchers have noted a disproportionate presence of far-right or authoritarian-aligned rage baiting bots in recent years. Want a real-life example? (Note: the first two comments are from the rage baiting bot, and the last is from someone following the discussion.)
Familiar, right? And terrible! I have not included my comment that elicited this response, which has since been deleted. I responded calmly and with facts, drawing on my Master’s degree in Epidemiology. I responded with concern, not scorn, as someone who knows about vaccines. Fortunately, I have a thick skin, which is apparently holding in all the fat, according to this person.
What exactly are rage baiting bots?
• Bots: Fully automated accounts that post, like, retweet, or reply without human input. They rely on generative AI.
• Cyborgs: Accounts run by humans but assisted by automation tools.
• Troll farms: Coordinated groups of real people who act like bots, mass-producing outrage and misinformation.
These accounts often:
• Use triggering language (e.g., racial slurs, hyper-partisan rhetoric).
• Amplify conspiracy theories or misinformation.
• Mimic real users (sometimes with AI-generated profile pictures).
• Hijack trending topics or hashtags to inject divisive content.
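For the technically curious, those red flags can be written down as a tiny scoring sketch. This is only an illustration: the field names and thresholds below are my own made-up assumptions, not any platform’s real data or a real detection tool.

```python
# Toy heuristic for flagging likely bot accounts.
# All field names and thresholds are illustrative assumptions.

def bot_suspicion_score(account: dict) -> int:
    """Return a rough 0-4 suspicion score for a social media account."""
    score = 0
    if account.get("friend_count", 0) < 5:           # few or no friends
        score += 1
    if not account.get("has_profile_photo", False):  # no (or AI-generated) picture
        score += 1
    if account.get("posts_per_day", 0) > 50:         # inhuman posting volume
        score += 1
    if account.get("account_age_days", 0) < 30:      # brand-new account
        score += 1
    return score

# Example: a profile like the ones described above
suspect = {"friend_count": 2, "has_profile_photo": False,
           "posts_per_day": 120, "account_age_days": 10}
print(bot_suspicion_score(suspect))  # prints 4
```

Real research tools (and the platforms themselves) use far more signals than this, but the basic idea is the same: no single clue proves anything, while several together are a strong hint.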
Who’s behind them?
The origins of rage baiting bots vary. They can come from:
1. State actors:
• Russian, Chinese, and Iranian troll farms have been widely documented running disinformation campaigns, often supporting right-wing or authoritarian narratives in the U.S. and Europe.
• Example: Russia’s Internet Research Agency (IRA) played a major role in the 2016 U.S. election, using bots and fake accounts to sow division.
2. Political campaigns and interest groups:
• Some domestic political actors deploy bots to simulate grassroots support (“astroturfing”).
• Right-wing PACs and media influencers have been linked to bot amplification networks.
3. Private companies, for legitimate and/or shady marketing.
4. People who provoke rage reactions to drive social media engagement and make money.
How common are rage baiting bots?
While exact numbers are elusive, here’s what we know:
• On X (formerly Twitter), social bots make up less than 1% of total users, but they posted more than 30% of Trump’s impeachment-related content. The issue here is that these bots share information from garbage sources as easily as they share vetted information.
• A Brookings Institution report found that bot activity tends to spike around elections, court rulings, major tragedies, or controversial legislation.
• The decline in content moderation (e.g., under Elon Musk’s ownership of X) has made it easier for bots to operate unchecked, making them more visible and pervasive now than a few years ago.
Why rage baiting bots matter
Rage bots can:
• Distort public opinion: They create the illusion of widespread support for fringe views.
• Disrupt civil discourse: By injecting hostility, they can shut down genuine conversation.
• Manipulate media narratives: Journalists and influencers can be misled by bot-amplified trends.
Now, when I see a particularly spiteful account like this with no ties to a human, I have a much better understanding of what’s going on and I refuse to waste my energy arguing with bots. Even when they are sharing false and dangerous information about vaccines.