In the middle of a high-stakes election, held during a mind-melting heat wave, a blizzard of confusing deepfakes has blown across India. The variety seems endless: A.I.-powered mimicry, ventriloquy and deceptive editing effects. Some of it is crude, some jokey, some so obviously fake that it could never plausibly be mistaken for the real thing.

The overall effect is confounding, adding to a social media landscape already inundated with misinformation. The volume of online detritus is far too great for any election commission to track, let alone debunk.

A diverse bunch of vigilante fact-checking outfits have sprung up to fill the breach. While the wheels of law grind slowly and unevenly, the job of tracking down deepfakes has been taken up by hundreds of government workers and private fact-checking groups based in India.

“We have to be ready,” said Surya Sen, a forestry officer in the state of Karnataka who has been reassigned during the election to manage a team of 70 people hunting down deceptive A.I.-generated content. “Social media is a battleground this year.” When Mr. Sen’s team finds content it believes is illegal, it tells social media platforms to take it down, publicizes the deception or even asks for criminal charges to be filed.

Celebrities have become familiar fodder for politically pointed tricks, including Ranveer Singh, a star in Hindi cinema.

During a videotaped interview with an Indian news agency at the Ganges River in Varanasi, Mr. Singh praised the powerful prime minister, Narendra Modi, for celebrating “our rich cultural heritage.” But that is not what viewers heard when an altered version of the video, with a voice that sounded like Mr. Singh’s and a nearly perfect lip sync, made the rounds on social media.

“We call these lip-sync deepfakes,” said Pamposh Raina, who leads the Deepfakes Analysis Unit, a collective of Indian media houses that opened a tip line on WhatsApp where people can send suspicious videos and audio to be scrutinized. She said the video of Mr. Singh was a typical example of authentic footage edited with an A.I.-cloned voice. The actor filed a complaint with the Mumbai police’s Cyber Crime Unit.

In this election, no party has a monopoly on deceptive content. Another manipulated clip opened with authentic footage showing Rahul Gandhi, Mr. Modi’s most prominent opponent, partaking in the mundane ritual of swearing himself in as a candidate. Then it was layered with an A.I.-generated audio track in which a cloned voice seemed to announce his resignation from his party.

Mr. Gandhi did not actually resign from his party. This clip contains a personal dig, too, making Mr. Gandhi seem to say that he could “no longer pretend to be Hindu.” The governing Bharatiya Janata Party presents itself as a defender of the Hindu faith, and its opponents as traitors or impostors.

Sometimes, political deepfakes veer into the supernatural. Dead politicians have a way of coming back to life via uncanny, A.I.-generated likenesses that endorse the real-life campaigns of their descendants.

In a video that appeared a few days before voting began in April, a resurrected H. Vasanthakumar, who died of Covid-19 in 2020, spoke indirectly about his own death and blessed his son Vijay, who is running for his father’s former parliamentary seat in the southern state of Tamil Nadu. This apparition followed an example set by two other deceased titans of Tamil politics, Muthuvel Karunanidhi and Jayalalithaa Jayaram.

Mr. Modi’s government has been framing laws that are supposed to protect Indians from deepfakes and other kinds of misleading content. An “IT Rules” act of 2021 makes online platforms responsible for all kinds of objectionable content, including impersonations intended to cause insult, a liability that platforms in the United States do not face. The Internet Freedom Foundation, an Indian digital rights group, which has argued that these powers are far too broad, is tracking 17 legal challenges to the law.

But the prime minister himself seems receptive to some kinds of A.I.-generated content. A pair of videos produced with A.I. tools show two of India’s biggest politicians, Mr. Modi and Mamata Banerjee, one of his staunchest opponents, emulating a viral YouTube video of the American rapper Lil Yachty doing “the HARDEST walk out EVER.”

Mr. Modi shared the video on X, saying such creativity was “a delight.” Election officers like Mr. Sen in Karnataka called it political satire: “A Modi rock star is fine and not a violation. People know this is fake.”

The police in West Bengal, where Ms. Banerjee is the chief minister, sent notices to some people for posting “offensive, malicious and inciting” content.

On the hunt for deepfakes, Mr. Sen said his team in Karnataka, which works for a state government controlled by the opposition, vigilantly scrolls through social media platforms like Instagram and X, searching for keywords and repeatedly refreshing the accounts of popular influencers.

The Deepfakes Analysis Unit has 12 fact-checking partners in the media, including a couple that are close to Mr. Modi’s national government. Ms. Raina said her unit works with external forensics labs, too, including one at the University of California, Berkeley. They use A.I.-detection software such as TrueMedia, which scans media files and determines whether they should be trusted.

Some tech-savvy engineers are refining A.I.-forensic software to identify which portion of a video was manipulated, all the way down to individual pixels.

Pratik Sinha, a founder of Alt News, the most venerable of India’s independent fact-checking sites, said that the possibilities of deepfakes had not yet been fully harnessed. Someday, he said, videos could show politicians not only saying things they did not say but also doing things they did not do.

Dr. Hany Farid has been teaching digital forensics at Berkeley for 25 years and collaborates with the Deepfakes Analysis Unit on some cases. He said that while “we’re catching the bad deepfakes,” if more sophisticated fakes entered the arena, they might go undetected.

In India, as elsewhere, the arms race between deepfakers and fact-checkers is on, fought from all sides. Dr. Farid described this as “the first year I would say we have really started to see the impact of A.I. in interesting and more nefarious ways.”


