Could Artificial Intelligence Spell Doom For Journalism?

Representational image. Sources: Premium Times/Freepik

They say that as technology advances, people put ever more layers between themselves and the world, until it becomes hard to tell reel life from real life.

On top of rampant misinformation, even the anchors delivering the news have become deepfakes, thanks to artificial intelligence. What could this mean for journalism?

A video surfaced online of a well-dressed young man named Alex, his sleek hair almost plastered to his head, talking about the United States’ ineffective measures to curb rising gun violence. But something does not sit right with this video: Alex’s lips are not synced to the voiceover, the grammar in the captions is off, and he looks too animated to be real.

That’s exactly what it was – a deepfake – fronting a fictitious channel called Wolf News, which used artificial intelligence (AI) software to create fake avatars and disseminate China’s state-aligned propaganda. As China struggles to recover from the COVID-19 pandemic, it now faces a deeper crisis: one of information and propaganda.

This is not the first time deepfakes have been wielded as puppets in the hands of a government. The technology has also been used to distort public figures such as Ukraine’s President Volodymyr Zelenskyy, in a fabricated video that showed him asking his country to surrender to Russia.

Such software can also generate an entirely original avatar: users can choose from 85 stock characters of different genders, ages, ethnicities, voice tones and fashion choices, and pair them with any of the 120 accents and languages on offer.

These state-of-the-art services are linked to Synthesia, a London-based AI company. The five-year-old startup develops software for creating deepfake avatars: simply supply a script, and one of the digital actors built with Synthesia’s tools will read it for you. As its website puts it, the process is “as simple as writing an e-mail”.

The avatars’ misuse was uncovered in a report by Graphika, a US research firm, at a time when Beijing had only recently adopted regulations to curb the misuse of AI for spreading misinformation. The faster the technology has evolved in recent years, the more easily misinformation has seeped through media channels to suit the narrative of China’s Communist government.

While many expected the emergence of AI to be transformative, it has instead raised concerns about becoming a tool of manipulation in the hands of the powerful.

While monitoring the pro-China disinformation campaign known as “spamouflage”, Graphika said it came across the deepfakes on various platforms, including Twitter, Facebook, and YouTube.

It doesn’t end there. China had previously deployed networks across social media channels to promote pro-government positions and discredit Western actions. Tech-savvy as this may seem, many tell-tale signs gave the accounts away: high levels of activity pushing propaganda, repetitive use of the same hashtags, newly created accounts, usernames that appeared randomly generated, and very few followers.

While some of these profiles were created to post original content, others merely shared, liked, and commented on those posts to attract more eyeballs.

Because it is intended to mimic a grassroots campaign, this type of behaviour is frequently called “astroturfing”. All of it falls under “spamouflage”, a state-aligned influence operation that, according to the report “Deepfake It Till You Make It”, has been promoting China’s global prominence and spreading disinformation since late 2022.

Such efforts once took the form of content removal within China: international magazines arriving with pages torn out, or BBC broadcasts flickering to black whenever they carried reports on sensitive subjects like Tibet, Taiwan, or the 1989 Tiananmen Massacre.

Today, however, this has taken a different turn: the once-domestic approach has gone international and grown more assertive.

It is worth noting that Synthesia’s co-founder and CEO, Victor Riparbelli, claimed that whoever used its technology to produce the avatars found by Graphika had breached the company’s terms of service. These terms stipulate that “political, sexual, personal, criminal, and discriminatory content” must not be created with the company’s technology.

Over time, misinformation and propaganda branched out to other platforms such as YouTube, where the state-controlled narrative pulled the strings behind influencers’ cameras. These influencers are not only government-sponsored but also have access to locations from which international correspondents have been barred.

As this information apocalypse adds fuel to the fire, breeding ever more propaganda and misinformation, distinguishing fact from fiction becomes tougher. Ironically enough, artificial intelligence may also be the most potent tool for safeguarding us on this digital battlefield. Such detection tools could help restore faith in online media and bring greater transparency for general audiences.

A scenario this complex recalls the traditional laws that protect citizens from crime and terrorism, and it gives rise to a plethora of questions. Would dedicated laws against AI misuse ensure the safety of users on the internet? What provisions can be made to regulate companies operating in this space? Should governments oversee their functioning? For now, these questions remain unanswered. But authorities across the world will have to act swiftly, if not to protect the public interest then at least to preserve their own longevity.

