Videos of sexually suggestive, AI-generated children are racking up millions of likes on TikTok, study finds
By Clare Duffy, CNN
New York (CNN) — AI-generated videos showing what appear to be underage girls in sexualized clothing or positions have together racked up millions of likes on TikTok, even though the platform’s rules prohibit such content, according to new research from an online safety non-profit.
Researchers found more than a dozen accounts posting videos featuring AI-generated girls wearing tight clothing, lingerie or school uniforms, sometimes in suggestive positions. The accounts have hundreds of thousands of followers combined. Comments on many of the videos included links to chats on the messaging platform Telegram, which offered child pornography for purchase, according to the report.
Thirteen accounts remained active as of Wednesday evening, even though 15 accounts had been flagged through TikTok’s reporting tool last week, according to Carlos Hernández-Echevarría, who led the research as assistant director and head of public policy at Maldita.es. The Spain-based non-profit, which studies online disinformation and promotes media transparency, released the report Thursday.
The report raises questions about TikTok’s ability to enforce its own policies regarding AI content, even when that content appears to show sexualized images of computer-generated children. Tech platforms including TikTok face increased pressure to protect young users as more jurisdictions pass online safety legislation, including Australia’s under-16 social media ban, which went into effect this week.
“This is not nuanced at all,” Hernández-Echevarría told CNN. “Nobody that is, you know, a real person doesn’t find this gross and want it removed.”
TikTok says it has a zero tolerance policy for content that “shows, promotes or engages in youth sexual abuse or exploitation.” Its community guidelines specifically prohibit “accounts focused on AI images of youth in clothing suited for adults, or sexualized poses or facial expressions.” Another section of its policies states that TikTok does not allow “sexual content involving a young person, including anything that shows or suggests abuse or sexual activity,” which includes “AI-generated images” and “anything that sexualizes or fetishizes a young person’s body.”
The company says it uses a combination of vision, audio and text-based tools, along with human teams, to moderate content. Between April and June 2025, TikTok removed more than 189 million videos and banned more than 108 million accounts, according to the company. It says 99% of content violating its policies on nudity and body exposure, including of young people, was removed proactively, and 97% of content violating its policies on AI-generated content was removed proactively.
A TikTok spokesperson did not provide a comment specific to the report.
The findings
Maldita.es discovered the TikTok videos through test accounts it uses to monitor for potential disinformation or other harmful content as part of its work.
“One of our team members started to see that there was this trend of these (AI-generated videos of) really, really young kids dressed as adults and, particularly when you went into the comments, you could see that there was some money incentive there,” Hernández-Echevarría said.
Some of the accounts described their videos in their bio sections as “delicious-looking high school girls” or “junior models,” according to the report. “Even more subtle videos like those of young girls licking ice cream are full of crude sexual comments,” it states.
In some cases, the accountholders used TikTok’s “AI Alive” feature — which animates still images — to turn AI-generated images into videos, Hernández-Echevarría said. Other videos appeared to have been created using external AI tools, he said.
Comments on many of the videos included links to private Telegram chats that advertised child pornography, according to the report.
“Some of the accounts responded to our direct messages on TikTok with links to external websites that sold AI-generated videos and images that sexualized minors, with prices ranging from 50 to 150 euros,” the report states.
Researchers did not follow through with any transactions, and the group reported the websites and Telegram accounts to police in Spain.
“Telegram is fully committed to preventing child sexual abuse material (CSAM) from appearing on its platform and enforces a strict zero-tolerance policy,” Telegram spokesperson Remi Vaughn said in a statement to CNN. “Telegram scans all media uploaded to its public platform against a database of CSAM removed by moderators to prevent it from being spread. While no encrypted platform can proactively monitor content in private groups, Telegram accepts reports from NGOs around the world in order to enforce its terms of service.”
Vaughn said Telegram has removed more than 909,000 public groups and channels related to child sexual abuse material in 2025.
The group says it flagged 15 accounts and 60 videos to TikTok through the app’s reporting tools on Tuesday, December 2, classifying them as “sexually suggestive behavior by youth.” The accounts had a total of nearly 300,000 followers and their 3,900 videos had more than 2 million likes combined, according to the report.
By Friday, TikTok had responded that 14 of the accounts did not violate its rules and that one account had been “restricted,” Maldita.es said. The group appealed each decision, but “exactly 30 minutes after the appeal for every single case,” TikTok reiterated its initial decision, the report states.
Of the 60 videos the group reported, TikTok said on Friday that 46 did not violate its policies; it removed or restricted the other 14. After researchers appealed, TikTok removed three more videos and restricted another. It was not immediately clear how video restrictions differed from removals.
Among the videos that were not removed was one featuring an AI-generated young girl, scantily clad, in the shower; others appeared to show young girls posing suggestively in lingerie or bikinis, according to the group.
“There is absolutely no way a human being sees this and doesn’t understand what’s happening,” Hernández-Echevarría said. “The comments are super crude, are full of the most disgusting people on earth making comments.”
By Wednesday, at least one account and one video that TikTok’s content review process had previously cleared were no longer available. Hernández-Echevarría said it was not clear why they were not taken down when first reported.
Thursday’s report comes after a separate study published in October by the UK not-for-profit Global Witness found that TikTok had directed young users toward sexually explicit content through its suggested search terms. That report found TikTok’s search suggested “highly sexualized” terms to users who reported being 13 and were browsing in “restricted mode.” TikTok said in response that it had removed content that violated its policies and launched improvements to its search suggestion feature.