The AI Threat You’ll Never See Coming Is Already Talking to You Online

Coordinated swarms of AI personas can now mimic human behavior well enough to manipulate online political conversations and potentially influence elections.

They will not show up at rallies or cast ballots, but they can still move a democracy. Researchers are increasingly worried about AI-controlled personas that look and sound like ordinary users, then quietly steer what people see, share, and believe online.

A policy forum paper in Science describes how swarms of these personas could slip into real communities, build credibility over time, and nudge political conversations in targeted directions at machine speed. The main shift from earlier botnets is teamwork. Instead of posting the same spam in bulk, the accounts can coordinate continuously, learn from what gets traction, and keep the same storyline intact across thousands of profiles, even as individual accounts come and go.

Inside the Mechanics of AI Persona Networks
Newer large language models paired with multi-agent systems make it possible for one operator to run a whole cast of AI “voices” that appear local and authentic. Each persona can speak in a slightly different style, reference community norms, and respond quickly to pushback, which makes the activity harder to spot as manipulation.
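To make that coordination pattern concrete, here is a minimal, purely illustrative Python sketch: one operator holds a single talking point and a cast of personas that each restyle it before posting. Every name, handle, and phrase in it is invented for this example; the Science paper describes the capability, not any particular implementation.

```python
# Toy illustration only: a coordinator fans one shared talking point out to
# several "personas", each with its own surface style. No real accounts,
# platforms, or language models are involved; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class Persona:
    handle: str       # hypothetical account name
    opener: str       # stylistic tic that makes the voice feel distinct
    sign_off: str     # community-flavored closing phrase

    def render(self, talking_point: str) -> str:
        # Same underlying message, different surface presentation.
        return f"{self.opener} {talking_point} {self.sign_off}"

def coordinate(personas: list[Persona], talking_point: str) -> list[str]:
    # One operator, one storyline, many apparently independent voices.
    return [p.render(talking_point) for p in personas]

if __name__ == "__main__":
    cast = [
        Persona("local_mom_1987", "Honestly,", "Just my two cents."),
        Persona("steel_town_dan", "Look,", "Wake up, folks."),
        Persona("campus_voter22", "Not gonna lie,", "Thoughts?"),
    ]
    for post in coordinate(cast, "the new ballot measure is a bad deal for us."):
        print(post)
```

In a real operation the restyling would be done by a language model rather than a template, which is exactly what makes the resulting voices hard to link back to a single operator.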

The swarm can also run rapid message tests at massive scale, then amplify the versions that change minds most effectively. Done well, it can manufacture the impression that "everyone is saying this," even when that consensus is carefully engineered.
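The paper does not spell out an algorithm for this, but the test-and-amplify loop maps naturally onto a multi-armed bandit. The sketch below uses a simple epsilon-greedy strategy over invented message "framings" with made-up engagement rates, only to show how a swarm's output would converge on whichever version performs best.

```python
# Toy illustration only: an epsilon-greedy bandit standing in for the
# "test many message variants, amplify what works" loop described above.
# The variants, their hidden engagement rates, and epsilon are all invented.
import random

VARIANTS = {
    "fear_framing": 0.04,      # hidden engagement probability, unknown to the operator
    "ingroup_framing": 0.07,
    "economic_framing": 0.05,
}

def run_bandit(trials: int = 5000, epsilon: float = 0.1, seed: int = 0) -> dict:
    rng = random.Random(seed)
    shown = {v: 0 for v in VARIANTS}
    engaged = {v: 0 for v in VARIANTS}
    for _ in range(trials):
        if rng.random() < epsilon:
            # Explore: occasionally try a random variant.
            choice = rng.choice(list(VARIANTS))
        else:
            # Exploit: amplify the variant with the best observed rate so far.
            choice = max(shown, key=lambda v: engaged[v] / shown[v] if shown[v] else 0.0)
        shown[choice] += 1
        if rng.random() < VARIANTS[choice]:  # simulated audience response
            engaged[choice] += 1
    return {v: (shown[v], engaged[v]) for v in VARIANTS}

if __name__ == "__main__":
    for variant, (n, wins) in run_bandit().items():
        print(f"{variant}: shown {n} times, engaged {wins}")
```

Running it shows nearly all posting effort flowing to the framing with the highest underlying engagement rate, which is the engineered-consensus effect in miniature.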
