AI-generated child sexual abuse content increasingly being found on internet, says watchdog

In the past six months alone, analysts at the Internet Watch Foundation have seen a 6% increase in confirmed reports containing AI-generated child sexual abuse material, compared with the preceding 12 months.
Image used for representational purposes only. File photo | EPS

AI-generated child sexual abuse content is increasingly being found on publicly accessible areas of the internet, exposing even more people to the harmful and horrific imagery, according to a safety watchdog.

The Internet Watch Foundation (IWF) said that many of the images and videos of children being hurt and abused are so realistic that they can be very difficult to tell apart from imagery of real children. Under UK law they are regarded as criminal content, in much the same way as ‘traditional’ child sexual abuse material would be.

In the past six months alone, analysts at the IWF have seen a 6% increase in confirmed reports containing AI-generated child sexual abuse material, compared with the preceding 12 months.

The IWF, Europe’s largest hotline dedicated to finding and removing child sexual abuse imagery from the internet, is warning that almost all the content (99%) was found on publicly available areas of the internet and was not hidden on the dark web.

Most of the reports have come from members of the public (78%) who have stumbled across the criminal imagery on sites such as forums or AI galleries. The remainder were actioned by IWF analysts through proactive searching.

Analysts say that viewing AI-generated content of children being sexually abused can be as distressing as seeing real children in abuse imagery if a person is not prepared or trained to cope with seeing such material.

Some AI child sexual abuse material is classed as non-photographic imagery, such as cartoons, and is also regarded as harmful to view and accordingly assessed by IWF analysts.

The IWF traces where child sexual abuse content is hosted so that analysts can act to get it swiftly removed.

More than half of the AI-generated content found in the past six months was hosted on servers in two countries, the Russian Federation (36%) and the United States (22%), with Japan and the Netherlands following at 11% and 8% respectively.

Jeff, a Senior Internet Content Analyst at the IWF, said: “This criminal content is not confined to mysterious places on the dark web. Nearly all of the reports or URLs that we’ve dealt with that contained AI-generated child sexual abuse material were found on the clear web.

“I find it really chilling, as it feels like we are at a tipping point and the potential is there for organisations like ourselves and the police to be overwhelmed by hundreds and hundreds of new images, where we don’t always know if there is a real child that needs help.”

Survivors have told the IWF how traumatising it is for images of their abuse to continue to be circulated and used online, as it impacts on their ability to heal and move on from their ordeal.

Derek Ray-Hill, Interim Chief Executive Officer at the IWF, said: “People can be under no illusion that AI generated child sexual abuse material causes horrific harm, not only to those who might see it but to those survivors who are repeatedly victimised every time images and videos of their abuse are mercilessly exploited for the twisted enjoyment of predators online.

“To create the level of sophistication seen in the AI imagery, the software used has also had to be trained on existing sexual abuse images and videos of real child victims shared and distributed on the internet.

“The protection of children and the prevention of AI abuse imagery must be prioritised by legislators and the tech industry above any thought of profit. Recent months show that this problem is not going away and is in fact getting worse. We urgently need to bring laws up to speed for the digital age, and see tangible measures being put in place that address potential risks,” he added.

The New Indian Express
www.newindianexpress.com