When the generative-AI boom first kicked off, one of the biggest concerns among pundits and experts was that hyperrealistic AI deepfakes could be used to influence elections. But new research from the Alan Turing Institute in the UK shows that those fears might have been overblown. AI-generated falsehoods and deepfakes seem to have had no effect on the results of elections in the UK and France, for the European Parliament, or in other votes held around the world so far this year.

Instead of using generative AI to interfere in elections, state actors such as Russia are relying on well-established techniques—such as social bots that flood comment sections—to sow division and create confusion, says Sam Stockwell, the researcher who conducted the study. Read more about it from me here.

But one of the most consequential elections of the year is still ahead of us. In just over a month, Americans will head to the polls to choose Donald Trump or Kamala Harris as their next president. Are the Russians saving their GPUs for the US elections? 

So far, that does not seem to be the case, says Stockwell, who has been monitoring viral AI disinformation around the US elections too. Bad actors are “still relying on these well-established methods that have been used for years, if not decades, around things such as social bot accounts that try to create the impression that pro-Russian policies are gaining traction among the US public,” he says. 

And when bad actors do try to use generative-AI tools, those efforts don’t seem to pay off, he adds. For example, one information campaign with strong ties to Russia, called Copy Cop, has been trying to use chatbots to rewrite genuine news stories on Russia’s war in Ukraine to reflect pro-Russian narratives. 

The problem? They’re forgetting to remove the prompts from the articles they publish. 

In the short term, there are a few things that the US can do to counter more immediate harms, says Stockwell. For example, some states, such as Arizona and Colorado, are already conducting red-teaming workshops with election polling officials and law enforcement to simulate worst-case scenarios involving AI threats on Election Day. There also needs to be heightened collaboration between social media platforms, their online safety teams, fact-checking organizations, disinformation researchers, and law enforcement to ensure that viral influencing efforts can be exposed, debunked, and taken down, says Stockwell. 

But while state actors aren’t using deepfakes, that hasn’t stopped the candidates themselves. Most recently, Donald Trump has used AI-generated images implying that Taylor Swift had endorsed him. (Soon after, the pop star offered her endorsement to Harris.) 