In a world where more people get their news from social media than from newspapers or TV, it’s time we asked a hard question: How much of what we see online is real? The uncomfortable answer is: less than we think. What we often perceive as authentic public opinion is increasingly the result of manipulation—by bots, paid armies of fake accounts, marketing companies, AI-generated news sites, and even entire states weaponizing information. And most of it is happening in plain sight.
Let’s start with something disturbingly simple: for the price of a pizza, you can buy massive influence. On freelance marketplaces, it takes just a few clicks to purchase thousands of fake followers, likes, shares, and comments on platforms like Instagram, TikTok, YouTube, and X (formerly Twitter). Ask yourself: how many of your own posts have ever received thousands of likes or shares?
The answer, for most people, is probably none—which is exactly why inflated numbers are so persuasive: because genuine engagement at that scale is rare, we instinctively read thousands of likes as proof of real popularity. These engagement-boosting services are typically operated from countries like India, the Philippines, and Vietnam, where a booming industry of digital laborers has turned fake influence into a full-time business.
Whether it’s an influencer faking popularity, a company simulating customer praise, or a political actor trying to make a fringe opinion look mainstream — this business of fake engagement is distorting our information environment. It’s increasingly difficult to tell whether something is trending because people care or because someone paid to make it trend.
None of this is new. For years, corporations have engaged in social media manipulation under the guise of marketing—commissioning so-called “astroturfing” campaigns where fake accounts pose as everyday people endorsing a product or attacking a competitor. These tactics were once limited to boosting consumer brands, but they’ve now spilled over into politics at scale. Political operatives, PR firms, and even governments are deploying the same engagement tactics once used to sell sneakers or soda, now to sell ideologies, smear opponents, and seed disinformation. What began as commercial spin is now political warfare.
A more insidious form of manipulation is driven not by ideology, but by profit. Engagement farms — many of them based in the same countries that sell fake followers — are using what’s known as rage farming to grow huge audiences. These operators exploit divisive cultural and political content (often around movements like MAGA) to provoke anger, tribal loyalty, and obsessive engagement.
Here’s how it works: the accounts post inflammatory, stolen, or reworded content 24/7 — now easily automated with AI — designed to incite outrage and attract likes, shares, and comments. These posts often revolve around polarizing topics like immigration, vaccines, climate change, race, gender, and other issues that are guaranteed to spark strong emotional reactions.
The more engagement, the more reach. The more reach, the more ad revenue. With ad revenue-sharing models on platforms like YouTube, Facebook, and X, even a modestly successful rage account can generate hundreds or thousands of dollars a month — a life-changing income in parts of the world where average wages are far lower.
As revealed in a 2025 investigation by 404 Media, much of Facebook’s AI-generated junk content comes from content farms in the Philippines and Vietnam, where low-wage workers churn out clickbait, AI slop, and rehashed political outrage designed to harvest engagement and ad revenue. What looks like an army of angry citizens might actually be a couple of laptops in a café in Ho Chi Minh City or Manila. And what looks like political tribalism may in fact be a business model — weaponizing emotion for clicks and cash.
Social media algorithms are designed to prioritize content that elicits strong emotional reactions—especially anger, fear, or moral outrage. The result: rage farming content isn’t just tolerated by platforms; it’s often supercharged by them. Engagement becomes visibility. Visibility becomes virality.
This creates a feedback loop. Bots and fake engagement push content up the feed. The algorithm picks it up and boosts it further. People see it, react, and share it — believing it’s organically popular. This creates the illusion of consensus, or worse, the illusion of truth.
Users who engage with this content are quickly sorted into echo chambers, where they only see posts that confirm their beliefs. This isolates communities, radicalizes individuals, and makes users more vulnerable to future manipulation — political or otherwise. It also weakens a society’s ability to hold constructive debate or recognize objective facts, which are essential for a functioning democracy.
The disinformation economy has been supercharged by generative AI. In early 2024, journalist Jack Brewster spent just $105 on the freelancer platform Fiverr to build a fully automated propaganda site. He hired a contractor to create a fake news outlet that used AI — including ChatGPT — to rewrite real articles and push a partisan agenda, publishing dozens of stories a day with zero oversight. It took less than a week.
Given how fast AI has advanced since then, today it would likely be cheaper, faster, and require even less skill. The barrier to entry is gone—and as AI becomes more powerful and accessible, flooding the internet with convincing, well-written lies is only getting easier.
Russia has taken these tactics to the next level. According to investigations by France24 and others, Russian networks are flooding the internet with millions of machine-generated articles, published across sprawling networks of fake news sites and translated into dozens of languages. These sites promote pro-Kremlin narratives, rewrite history, and manufacture consent. Their goal isn’t just to sway public opinion but to poison the training data of AI models, a tactic known as “LLM grooming.”
By overwhelming the internet with plausible-looking but false content, these operations aim to corrupt the future of knowledge itself. If large language models are trained on polluted data, they may start to parrot misinformation without knowing it. This is the new frontier of disinformation: not just influencing people, but influencing the algorithms that influence people.
But it’s critical to understand that not all manipulation originates from Russian or Chinese troll farms. Disinformation is now a massive, globalized business — an ecosystem with countless actors, each pursuing their own agendas. Some are financially motivated, chasing ad revenue and click-based profits. Others are political campaigners, ideological extremists, corporate lobbyists, or state-sponsored agents. The tools of manipulation are now widely accessible, meaning anyone — from a nationalist group in Europe to a freelance contractor in Southeast Asia — can run a propaganda campaign.
The battlefield is fragmented, competitive, and saturated with noise. And in this chaotic landscape, truth often becomes unpopular and irrelevant.