Deepfakes, Blackmail, and the Dangers of Generative AI

The capability of generative AI is accelerating rapidly, but fake videos and images are already causing real harm, writes Dan Purcell, Founder of Ceartas.io.

A recent public service announcement from the FBI warned about the dangers AI deepfakes pose to privacy and safety online. Cybercriminals are known to exploit and blackmail individuals by digitally manipulating images into explicit fakes and threatening to release them online unless a sum of money is paid.

This warning, and the other steps being taken, are ultimately a good thing. Still, I believe the problem is already more widespread than anybody realizes, and new efforts to combat it are urgently required.

Why can deepfakes be located so easily?

What is troubling for me about harmful deepfakes is the ease with which they can be located. Rather than the dark, murky recesses of the internet, they are found in the mainstream social media apps that most of us already have on our smartphones.

A bill to criminalize those who share deepfake sexual images of others

On Wednesday, May 10th, Senate lawmakers in Minnesota passed a bill that, once signed into law, will criminalize those who share deepfake sexual images of others without their prior consent. The bill, which passed almost unanimously, also covers those who share deepfakes to unduly influence an election or to damage a political candidate.

Other states that have passed similar legislation include California, Virginia, and Texas.

I’m delighted about the passing of this bill and hope it’s not long before it’s fully signed into law. However, I feel that more stringent legislation is required across all U.S. states and globally. The EU is leading the way on this.

Minnesota’s Senate and the FBI warnings

I’m optimistic that the robust actions of Minnesota’s Senate and the FBI’s warnings will prompt a national debate on this critical issue. My reasons are professional but also deeply personal. Some years ago, a former partner of mine uploaded intimate sexual images of me without my prior consent.

NO protection for the individual affected — yet

The photos were online for about two years before I found out, and when I did, the experience was both embarrassing and traumatizing. It seemed completely disturbing to me that such an act could be committed with no consequences for the perpetrator and no protection for the person affected. It was, however, the catalyst for my future business, as I vowed to develop a solution that would track, locate, verify, and ultimately remove content of a non-consensual nature.
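The tracking step in a workflow like this is commonly built on perceptual hashing: an image is reduced to a short fingerprint that survives re-encoding and minor edits, so re-uploads can be matched at scale. The article does not describe Ceartas’s actual method; the sketch below is a minimal, illustrative implementation of one common technique (difference hashing, or dHash), assuming images have already been decoded to 2D grayscale pixel grids.

```python
# Minimal difference-hash (dHash) sketch: a common perceptual-hashing
# technique for finding near-duplicate images. Input is assumed to be
# an already-decoded grayscale image: a list of rows of 0-255 values.

def average_pool(pixels, out_w, out_h):
    """Crudely downscale a grayscale grid by averaging source blocks."""
    h, w = len(pixels), len(pixels[0])
    result = []
    for y in range(out_h):
        row = []
        for x in range(out_w):
            # Source pixel block that feeds this output pixel
            # (max(...) guarantees at least one pixel per block).
            ys = range(y * h // out_h, max((y + 1) * h // out_h, y * h // out_h + 1))
            xs = range(x * w // out_w, max((x + 1) * w // out_w, x * w // out_w + 1))
            block = [pixels[j][i] for j in ys for i in xs]
            row.append(sum(block) / len(block))
        result.append(row)
    return result

def dhash(pixels, hash_size=8):
    """One bit per adjacent-pixel brightness comparison on a thumbnail."""
    small = average_pool(pixels, hash_size + 1, hash_size)
    bits = 0
    for row in small:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left < right else 0)
    return bits

def hamming(a, b):
    """Differing bits between two hashes; small distance = likely match."""
    return bin(a ^ b).count("1")

# Demo on synthetic data: a smooth gradient, a brightened copy (as a
# stand-in for a re-encoded re-upload), and an inverted, unrelated image.
original = [[min(255, x * 4) for x in range(64)] for _ in range(64)]
brightened = [[min(255, p + 10) for p in row] for row in original]
inverted = [[255 - p for p in row] for row in original]

print(hamming(dhash(original), dhash(brightened)))  # small distance: near-duplicate
print(hamming(dhash(original), dhash(inverted)))    # large distance: different image
```

Because the fingerprint compares relative brightness rather than exact pixel values, uniform edits such as brightening leave the hash unchanged, which is what makes this family of techniques useful for locating re-uploads. Production systems typically add robustness to cropping and rotation, which this sketch omits.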

Deepfake images which attracted worldwide interest

Deepfake images that have attracted worldwide interest and attention recently include the supposed arrest of former President Donald Trump, Pope Francis in a stylish white puffer coat, and French President Emmanuel Macron working as a garbage collector. The latter appeared when France’s pension reform strikes were at their height. The immediate reaction to these photos was to their realism, though very few viewers were actually fooled. Memorable? Yes. Damaging? Not quite, but the potential is there.

President Biden has addressed the issue

President Biden, who recently addressed the dangers of AI with tech leaders at the White House, was at the center of a deepfake controversy in April of this year. After he announced his intention to run for re-election in the 2024 U.S. presidential election, the RNC (Republican National Committee) responded with a YouTube ad attacking the President using entirely AI-generated images. A small disclaimer on the top left of the video discloses this, but it was so small that some viewers might well have mistaken the images for real ones.

If the RNC had chosen to go down a different route and focus on Biden’s advanced age or mobility, AI images of him in a nursing home or wheelchair could potentially sway voters on his suitability for another four-year term.

Manipulated images have the potential to be highly dangerous

There’s no doubt that the manipulation of such images has the potential to be highly dangerous. The First Amendment is supposed to protect freedom of speech, but with deepfake technology, rational, thoughtful political debate is now in jeopardy. I can see political attacks becoming more and more chaotic as 2024 looms.

If the U.S. President can find himself in such a vulnerable position when it comes to protecting his integrity, values, and reputation, what hope do the rest of the world’s citizens have?

Some deepfake videos are more convincing than others, but I have found in my professional life that it’s not just highly skilled computer engineers involved in their production. A laptop and some basic computer know-how can be virtually all it takes, and there are plenty of online sources of information too.

Learning to tell the difference between a real and a fake video

For those of us working directly in tech, telling a real video from a fake one is comparatively easy, but for the wider community it may not be so simple. A worldwide study in 2022 found that 57 percent of consumers claimed they could detect a deepfake video, while 43 percent said they could not tell the difference between a deepfake and a real video.

This cohort will doubtless include people of voting age, which means convincing deepfakes have the potential to determine the outcome of an election if the video in question involves a political candidate.

Generative AI

Musician and songwriter Sting recently released a statement warning that songwriters should not be complacent now that they compete with generative AI systems. I can see his point. A group called the Human Artistry Campaign is currently running an online petition to keep human expression “at the center of the creative process and protecting creators’ livelihoods and work.”

The petition asserts that AI can never be a substitute for human accomplishment and creativity. TDM (text and data mining), one of several ways AI can copy a musician’s voice or style of composition, involves training models on large amounts of data.

AI can benefit us as humans.

While I can see how AI can benefit us as humans, I am concerned about the issues surrounding the proper governance of generative AI within organizations. These include lack of transparency, data leakage, bias, toxic language, and copyright infringement.

We must have stronger regulations and legislation.

Without stronger regulation, generative AI threatens to exploit individuals, regardless of whether they are public figures or not. In my opinion, the rapid advancement of such technology will make this notably worse, and the recent FBI warning reflects this.

While this threat continues to grow, so does the time and money poured into AI research and development. The global AI market is currently valued at nearly US$100 billion and is expected to soar to almost US$2 trillion by 2030.

Here is a real-life incident recently reported in the news by KSL. Please read it so you can protect your children, especially teenagers. The parents have released this information to help all of us.

Identity theft and imposter scams top the list

The technology is already advanced enough that a deepfake video can be generated from just one image, while a passable recreation of a person’s voice requires only a few seconds of audio. Meanwhile, among the millions of consumer fraud reports filed last year, the top categories included identity theft and imposter scams, with as much as $8.8 billion lost in 2022 as a result.

Returning to the Minnesota bill, the record shows that a single lawmaker voted against criminalizing the sharing of deepfake sexual images. I wonder what their motivation was.

I’ve been a victim myself!

As a victim myself, I have been quite vocal on the topic, so I view it as a fairly cut-and-dried issue. When it happened to me, I felt very much alone and didn’t know who to turn to for help. Thankfully, things have moved on in leaps and bounds since then, and I hope this positive momentum continues so others don’t experience the same trauma I did.

Dan Purcell is the founder and CEO of Ceartas DMCA, a leading AI-powered copyright and brand protection company that works with the world’s top creators, agencies, and brands to prevent the unauthorized use and distribution of their content. Please visit www.ceartas.io for more information.

Featured Image Credit: Rahul Pandit; Pexels.

The post Deepfakes, Blackmail, and the Dangers of Generative AI appeared first on ReadWrite.
