Your face in an intimate video? Artificial intelligence advances without laws

Market for pornographic videos using deepfakes is booming


Photo: Charles Deluvio via Unsplash

Deepfakes, hyperrealistic montages created with artificial intelligence, have worried experts since their first tests in mid-2017. Among their many uses, from humor videos to political fake news, the production of pornography with this technology was already attracting attention. Now, with AI tools making these montages easier to produce, the problem looks even bigger.

A report by the cybersecurity firm Deeptrace, published in 2019, already showed the scale of the problem. At the time, 96% of deepfake videos were pornographic, and in 100% of those the people depicted were women. Among deepfakes posted on YouTube channels without erotic content, the share of female victims dropped to 39%.

Since the beginning of 2018, videos with the faces of celebrities inserted into pornographic scenes have circulated on the internet. The actresses Emma Watson and Anne Hathaway were the first to be portrayed this way. Nowadays, websites offer a much larger portfolio of faces, including Mariah Carey, Beyoncé and Gal Gadot, among others.

In Brazil, the situation is no different. A quick internet search for this report turned up montages of this kind featuring the presenter Eliana, the actress Marina Ruy Barbosa, the singer Anitta and the influencers Rafa Kalimann and Mariana Nolasco.

Those who don’t find the face they want can even use programs to produce their own material, with several web applications available. Some of them even advertise their services on big platforms.

Experts interviewed by Byte believe the practice expresses a misogynistic sense of power over women. Furthermore, as long as artificial intelligence remains a lawless land in many countries, a solution will remain out of reach.

Apps on the market

Ads for deepfake apps appear directly alongside explicit videos on various porn sites. An NBC News investigation in March showed that one app for creating such videos had run more than 230 ads across Meta’s services, including Facebook, Instagram and Messenger.

Some of the ads showed what appeared to be the opening of pornographic videos, with the well-known intro track of the porn platform Pornhub playing. Seconds later, the women’s faces were swapped with those of famous actresses.

After NBC News asked Meta for comment, all ads for the app were removed from the company’s services.

Both Apple and Google removed the application exposed by NBC from their app stores. But many other options remain available: searching for “deepfake” in app stores such as the Play Store turns up dozens of apps with similar features.

App stores are full of deepfake generators

Photo: Terra Byte

Although many platforms ban apps that produce manipulative and malicious content, many still slip through unnoticed. After all, it is hard to predict the intentions of the person downloading the program.

“People can profit both legally and illegally,” explains Bruno Sartori, entrepreneur and artist whose deepfake works have become popular in Brazil in recent years. His most viewed content consists of parodies involving former president Jair Bolsonaro.

He explains that a content producer can, for example, create fake pornography of themselves to publish on sites like OnlyFans and earn money legally. A person can also sell their image and allow others to build upon it.

Recently, two computer science students used AI to create a hyperrealistic female avatar and make money by selling sexy photos on forums. In an interview with Rolling Stone, the young men claimed to have created the account as a joke, after discovering a post on Reddit about a user who had earned US$ 500 selling photos of real women.

[Embedded Reddit post: “F19 Feeling pretty today 🙂” by u/Cl4ud14_ in Faces]

“On the other hand, there are people in obscure corners of the internet who offer to create this fake content, usually with celebrities, in a criminal way,” says Sartori.

The other illegal possibility is to create deepfake material to extort a victim and profit from it, or even to engage in “revenge porn”, the act of sharing explicit content with the aim of vindictively humiliating someone.

An adjustment in sight?

Deepfakes are part of the debate about conveniences provided by AI that can become dangerous. The latest such controversy involved an image of Pope Francis in a supposed designer outfit, which went viral on social media and even appeared on news portals. All of it, of course, was just a fabrication.

“Misuse will happen when any technology becomes a commodity,” says Rafael Sbarai, professor of digital products and data at FAAP (Fundação Armando Alvares Penteado).

The expert believes it is very difficult to stop the development of artificial intelligence tools, as proposed by technology executives, including Elon Musk, in an open letter at the end of last month. The document called for a six-month moratorium on training the next generation of AI tools to give regulators and industry time to set safety standards.

“Pausing AI development would only increase unfair competition, given that some companies will respect it and others won’t,” points out Tainá Junquilho, doctor of law and researcher at the Institute for Technology and Society of Rio (ITS Rio).

Regulating misuse, on the other hand, may be possible, but it will depend on the will of political actors, according to Sbarai.

“Without government, the state and laws, even global ones, it becomes increasingly difficult to debate the subject. I am not in favor of stricter laws, but rather of a deeper discussion between government, the state and society so that laws can be created. The European Union (EU) does this very well,” he points out.

European lawmakers are debating rules for companies’ use of AI, demanding transparency about how these models work internally, under penalty of fines.

The package of proposals is being called the “AI Law” (AI Act, in English) and deals with controlling how the technology is applied, such as its use in professional tests and exams, assistance in court rulings and facial recognition in public places.

Last Monday (17), lawmakers tasked with writing a new version of the bill said they are committed to adding provisions to “direct the development of very powerful artificial intelligence in a human-centric, safe and reliable direction,” according to a document reviewed by The Wall Street Journal.

And in Brazil?

Some specialists believe that, if Europe takes the lead in regulation, other countries may use the European legislation as a basis for their own laws.

In Brazil, the proposal that has advanced furthest is Bill 21/2020, which was approved by the Chamber of Deputies in September 2021 and is awaiting a vote in the Federal Senate. The text has received additions from panels of jurists and still needs to undergo substantial changes.

Among the points that have generated the most debate around the bill are the technical definitions legislators struggle with, such as the very concept of AI and other data-processing terms. Also under discussion is the creation of an authority to enforce the law and impose administrative sanctions.

Beyond the debate over the bill, Junquilho, from ITS Rio, advocates legislation that requires ethics committees within companies. These committees would oversee bias correction and transparency in the use of AI, ensuring, for example, that all images and speech generated by machines carry digital stamps identifying their origin.

Experts defend the bill that seeks to regulate the use of AI

Photo: Marcello Casal Jr via Agência Brasil

But, despite being important, laws and self-regulation alone do not solve the problem, in the view of the interviewees.

“The sole paragraph of Article 216 of the Penal Code already criminalizes making a montage to include another person in pornographic content, for example. The penalty is six months to one year of imprisonment plus a fine, but I honestly don’t believe it deters those who intend to create this illegal content,” comments Sartori.

The deepfake creator favors educational campaigns so that the population first understands what the technology is.

“How are people going to suspect what they are seeing if most of them are unaware of the technology’s existence?”, he asks.

Without awareness efforts, the normalization of deepfakes may mean that, for some people, the distinction between what is real and what is not no longer matters. That is the view of Matheus Santos, who holds a master’s in communication from USP (University of São Paulo) and is a researcher on the Desinformante project.

“We need to value the ‘importance of caring’: questioning, not consuming passively. If we reach the point where nobody cares, we lose the common thread that can unite everyone in the same reality,” he comments.

How did we get here?

With this type of technology, the submission and control of women, often depicted in pornographic productions that stage rape and domination scenarios, becomes even more accessible.

“A person who has their image used without their authorization feels absolutely violated”, says psychologist and sexologist Luísa Miranda.

For her, those who consume this type of content seek a kind of validation they do not find in everyday social dynamics. The specialist believes the technologies themselves cannot be blamed, because they are merely tools of escape.

“The user looks to pornography for this place of ‘I can have several women at my disposal’, something that does not happen in reality. In real life, you have to be willing to change to be a better person, and not everyone is willing to do that,” she says.

Source: Byte Newsroom
