What you need to know about the ongoing fight to prevent AI-generated child porn
Detecting malware attacks is challenging, San Jose State University College of Engineering Professor Ahmed Banafa told Decrypt, and shutting down these websites becomes a game of whack-a-mole: when one site is shut down, others quickly replace it. In July, the company released a statement elaborating on the measures it has taken to improve the safety of its open-source models. Proposed responses include a review of the Online Safety Act and measures to address doxing, the use or publication of private or identifying material with malicious intent. Separately, Mattel's new line of dolls for the upcoming film Wicked accidentally promoted a link to a porn website on their packaging, just months after Mattel began designing its packaging with an AI tool from San Jose-based Adobe.
To have a better chance of forcing action, advocates for protection against image-based sexual abuse argue that regulation is required, though they differ on what kind of regulation would be most effective. Under the commitments, datasets used to train AI models will be checked for child sexual abuse material, and any confirmed material will be removed. The companies agreed to assess AI models for their potential to generate such images before hosting them. They will also work on improving technology to detect harmful materials and share information with governments.
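As a rough illustration of what such a dataset check can look like in practice, here is a minimal Python sketch of hash-blocklist filtering: every training image is hashed and compared against a list of known-bad digests of the kind distributed by clearinghouses. The file names, paths, and function names are our own assumptions, and real pipelines typically rely on perceptual hashes such as PhotoDNA or PDQ rather than exact digests, so re-encoded copies of flagged images are also caught.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of known-bad image hashes, one hex digest per line,
# of the kind a clearinghouse might supply. Path and format are assumptions.
BLOCKLIST_PATH = Path("known_bad_hashes.txt")


def load_blocklist(path: Path) -> set[str]:
    """Load hex digests into a set for O(1) membership checks."""
    return {line.strip().lower() for line in path.read_text().splitlines() if line.strip()}


def sha256_of(path: Path) -> str:
    """Exact-match digest of a file, read in 1 MiB chunks.

    Note: SHA-256 only catches byte-identical copies; production systems
    add perceptual hashing (e.g. PhotoDNA, PDQ) to catch re-encodings.
    """
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def filter_dataset(image_dir: Path, blocklist: set[str]) -> list[Path]:
    """Return only the images whose digests are NOT on the blocklist."""
    return [img for img in sorted(image_dir.glob("*.jpg"))
            if sha256_of(img) not in blocklist]


if __name__ == "__main__":
    blocklist = load_blocklist(BLOCKLIST_PATH)
    kept = filter_dataset(Path("training_images"), blocklist)
    print(f"{len(kept)} images retained after blocklist filtering")
```

This is only the simplest layer; the commitments described above also involve classifier-based detection and human review, which a hash check like this cannot replace.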
She grew nervous going out, afraid a stranger would recognize her from the videos. On the verge of starting graduate school, she wondered whether future employers would find them and whether her career would be over before it had even started. She has watched her body, her voice, everything about herself distorted into a horror-film version of reality.
Both federal and state legislation will make their way through the vetting process this legislative session, increasing the number and scope of laws on the books that address AI porn. The DEFIANCE Act would add a civil right of action for intimate "digital forgeries" depicting an identifiable person without their consent. The provision would let victims collect financial damages from anyone who "knowingly produced or possessed" the image with the intent to spread it, the bill reads. It builds on a provision in the Violence Against Women Act Reauthorization Act of 2022, which added a similar right of action for non-faked explicit images.
In May 2023, Rep. Joe Morelle (D-N.Y.) introduced the Preventing Deepfakes of Intimate Images Act, which would have criminalized the sharing of nonconsensual, sexually explicit deepfakes. Clarke says she thinks it has taken a while for Congress to understand just how serious and pervasive an issue this is. "It was an uphill battle," Clarke says, noting that few people in Congress had even heard of the technology, let alone thought about regulating it. Alongside Clarke, Sen. Ben Sasse (R-Neb.) was an exception; he introduced a bill in December 2018, but it was short-lived. On the House side, the first congressional hearing on deepfakes and disinformation was held in June 2019, timed with Clarke's bill, the DeepFakes Accountability Act.
OpenAI said this could include "erotica, extreme gore, slurs, and unsolicited profanity." It is undoubtedly a success that this model is no longer available for download from Hugging Face. Unfortunately, it is still available on Civitai, as are hundreds of derivative models. When we contacted Civitai, a spokesperson told us the company has no knowledge of what training data Stable Diffusion 1.5 used, and that it would only take the model down if there were evidence of misuse. A Stability AI spokesperson emphasized that the company did not release or maintain Stable Diffusion version 1.5, and said it has "implemented robust safeguards" against CSAM in subsequent models, including the use of filtered datasets for training.
Deepfakes are fake but highly realistic videos, audio, or images put together by generative artificial intelligence (GenAI). Making deepfake pornography could soon be outlawed in the United Kingdom in what authorities say could be the first measure of its kind in the world. The amendment to the country's Criminal Justice Bill would make adults who create deepfake porn "face the consequences of their actions." One troubling trend is the proliferation of websites exclusively hosting celebrity porn videos. MrDeepFakes is the most viewed among them, featuring celebrities' faces grafted onto porn stars' bodies. Similarly, AdultDeepFakes, another popular site, had 19.4 million visits in January, half of them from unique visitors.
This follows last year's rise in sextortion cases, in which offenders manipulated victims into sending graphic images and then threatened to post them online unless they paid. While some offenders sell or share the images online, a newer tactic involves showing minors AI-generated images of themselves and demanding real explicit material to avoid being doxxed or exposed.
The spread of generative AI has raised growing concern that children's human rights could be violated by the mass generation of sexual images that specifically resemble real individuals. In the absence of congressional action, the White House has worked with the private sector to develop solutions to curb image-based sexual abuse. But critics are not optimistic about Big Tech's ability to regulate itself, given the history of harm caused by its platforms. Most of the imagery was quickly removed after researchers shared their findings with affected members of Congress. The companies behind these video generators appear to be based outside the United States, in countries such as the United Arab Emirates, Italy, and China, according to their websites. The apps are available for free on Apple's App Store and Google's Play Store and already have millions of downloads.
Many of the most popular downloadable open-source AI image generators, including Stable Diffusion version 1.5, were trained using this data. While Runway created that version of Stable Diffusion, Stability AI paid for the computing power to produce the dataset and train the model, and Stability AI released the subsequent versions. AI porn creators could also use the technology to produce other illegal content, such as child pornography. While Sora will restrict its use, prohibiting users from generating sexual content on the platform, the underlying technology will eventually find its way into AI-generated pornographic videos. Each nation-state must decide whether the distribution and possession of this material are to be treated as criminal acts and, if so, to what extent AI-generated images are also to be treated as child pornography.
Authors of the study note that in the immediate aftermath, imagery targeting most of the members was entirely or almost entirely removed from the sites, a fact they're unable to explain. Researchers have noted that such removals do not prevent material from being shared or uploaded again. In some cases involving lawmakers, search result pages remained indexed on Google despite the content being largely or entirely removed. Celebrities are frequent targets of deepfake pornographers, with pop star Taylor Swift bringing major attention to the issue through her own experiences, though ordinary women are the most common victims.
The fear is that by weakening the filters in its products, OpenAI will make it even easier to create deepfakes and illegal material. People flock to these websites in significant numbers, reflecting widespread interest in the content they offer. The scale of that traffic makes clear that the appetite for this controversial content is on the rise, and it indicates deepfake technology's growing influence on what creators and consumers seek out online. Rep. J.D. Scholten, D-Sioux City, said the legislation is crucial as Iowa and the nation near the 2024 election.
Making matters worse, some bad actors are using existing CSAM to generate synthetic images of these survivors, a horrific re-violation of their rights. Others are using readily available "nudifying" apps to create sexual content from benign imagery of real children, and then using that newly generated content in sexual extortion schemes. The nonprofit Internet Watch Foundation, which collects reports of child sexual abuse material, detailed the ease with which malicious actors are now making photorealistic AI-generated child sexual abuse material at scale. The researchers included a "snapshot" study of one dark web CSAM forum, analyzing more than 11,000 AI-generated images posted in a one-month period; of those, nearly 3,000 were judged severe enough to be classified as criminal. Susanna Gibson narrowly lost her competitive legislative race after a Republican operative shared with The Washington Post nonconsensual recordings of sexually explicit livestreams featuring the Virginia Democrat and her husband.
By gradually pushing boundaries and building rapport, I got the system to drift further from its safety guidelines with each interaction. What started as firm refusals ended in the model "trying" to help me by improving on its mistakes—and gradually undressing a person. Meta AI isn't supposed to generate nudity or violence—but, again, for educational purposes only, I wanted to test that claim. This is also an outdated technique, and any modern chatbot shouldn't fall for it that easily. However, it could be said that it's the base for some of the most sophisticated prompt-based jailbreaking techniques. It refused to teach me how to steal a car, but when asked to roleplay as a screenwriter, Meta AI quickly provided detailed instructions on how to break into a car using "MacGyver-style techniques."
In the campaign cycle so far, there have already been instances of AI being used, without disclosure, to generate content that some say is misleading. Neither ad contained a disclosure about the use of AI in creating the advertising material. OpenAI and its rivals have been refining their filtering and moderation tools for years, but users constantly discover workarounds that let them abuse the companies' AI models, apps, and platforms. It's not the first time OpenAI has telegraphed a willingness to dip a toe into controversial territory.
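To make concrete what one layer of this filtering looks like from a developer's perspective, here is a minimal sketch using OpenAI's public Moderation endpoint via the official Python SDK. The function name and surrounding logic are our own assumptions; this shows only the automated screening step, not the full moderation stacks the companies describe.

```python
# pip install openai  (uses the official OpenAI Python SDK)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def is_allowed(text: str) -> bool:
    """Screen user-supplied text before an app acts on it.

    Returns False if the moderation model flags the content under any
    policy category (e.g. sexual content involving minors).
    """
    resp = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    result = resp.results[0]
    if result.flagged:
        # Surface which policy categories tripped, for logging/review.
        hits = [name for name, hit in result.categories.model_dump().items() if hit]
        print(f"Blocked; flagged categories: {hits}")
    return not result.flagged
```

A check like this is cheap to run on every prompt, which is partly why providers deploy it broadly; the workarounds described above succeed by crafting inputs that slip past exactly this kind of classifier, so production systems layer additional models and human review on top.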
Mira Murati, OpenAI's chief technology officer, told the Wall Street Journal this year that she was not sure whether the company would allow its video-making tool Sora to create nude images. The spread of AI-generated pornography was underlined this year when X, formerly known as Twitter, was forced to temporarily ban searches for Taylor Swift content after the site was deluged with explicit deepfake images of the singer. OpenAI, the company behind ChatGPT, is exploring whether users should be allowed to create AI-generated pornography and other explicit content with its products. Among the different laws, and the proposals for new ones, there is considerable disagreement about whether the distribution of deepfake porn should be treated as a criminal or civil matter. And if it is civil, meaning victims have the right to sue for damages, there is disagreement about whether victims should be able to sue the individuals who distributed the deepfake porn or the platforms that hosted it. OpenAI is considering how its technology could responsibly generate a range of content that might be considered NSFW, including slurs and erotica.
A 2023 report by Home Security Heroes, a company that reviews identity-theft protection services, found that it took just one clear image of a face and less than 25 minutes to create a 60-second deepfake pornographic video, for free. The act focuses on sexually explicit deepfake material, revenge porn, and enforcement procedures for social media platforms to remove such content. Because OpenAI's usage policies prohibit impersonation without permission, explicit nonconsensual imagery would remain banned even if the company did allow creators to generate NSFW material.
Sydney-based generative artificial intelligence startup Leonardo Ai has pledged to ramp up efforts to tackle deepfake porn created on its platform after revelations that some users were bypassing restrictions on creating nonconsensual sexual content. Meta has displayed over 2,500 ads for "AI kissing" apps across Instagram and Facebook, a Forbes review found. TikTok has shown about 1,000 ads to millions of users in European countries, according to its ad library. (TikTok's ad library doesn't include ads shown to its U.S.-based users.) Most of these ads depict celebrities like Scarlett Johansson, Emma Watson, and Gal Gadot kissing one another. Similar in concept to the "AI nudifier" apps that produce nonconsensual deepfake pornography, these AI kissing apps create believable videos of people doing something they didn't do, and the ease with which they do so marks a concerning normalization of deepfake imagery.
Nudification, a common deepfake technique that converts an ordinary picture of a target into a sexually explicit one, has spawned an entire industry, spurred by recent advances in generative AI. Research by Australian software firm Canva found that, at the high end, apps offering these services can pull in tens of millions of users a month. The "traumatising" solicitation and creation of nonconsensual deepfake pornography could be criminalised as Baroness Charlotte Owen takes a new Bill to its second reading.
With the rise of virtual and augmented reality environments, we can also anticipate that, as with present-day pornography, AI porn will soon offer increasingly immersive experiences. More capable text-to-video generators already exist; however, the anticipated release of OpenAI's Sora model suggests significant progress in text-to-video generation, namely in its high level of realism, complex scene creation, and unmatched video length. Currently, there are over 50 free websites offering AI porn, and that number will only increase. Websites such as Candy.ai, Lustlab.ai, and Pornify.cc let users design AI characters to their own preferences, bringing their fantasies to life.
On the text-generation side, it is trivial to find chatbots built on top of supposedly "safe" models, such as Anthropic's Claude 3, that readily spit out erotica. At the same time, looser rules could lead to problematic overuse of pornography, the spread of deepfakes, and the production of illegal content, such as child pornography. At the time, Anderegg's arrest marked one of the first known instances in which the FBI had charged someone with using AI to create child sexual abuse material.
Separate existing laws covering the possession of sexually explicit images of real children, or images designed to appear childlike, can already capture artificially generated material. IWF's October report found that AI CSAM has increased the potential for re-victimization of known child sexual abuse victims, as well as for the victimization of famous children and children known to perpetrators; the IWF has found many examples of AI-generated images featuring both. Dodge wants to reframe the conversation: instead of highlighting new technology's ability to create these hyperrealistic images, he wants to shift the focus to how it is creating an unprecedented number of sexual-violence victims. He thinks that the more people understand this as a form of abuse rather than a harmless joke, the more prevention becomes possible. Using images gathered from children online or sent to them directly, these offenders use AI tools to remove clothing and create fake sexual content depicting children.