
AI and CSAM: A Look at Real Cases
Artificial Intelligence (AI) is to today what the internet was to the 1990s: a remarkable development that will likely propel future generations forward in knowledge, research and efficiency. Yet, just like the internet, AI models also hold the potential to further a bad actor’s agenda, specifically those who wish to trade and view Child Sexual Abuse Material (CSAM).
In this article, we’re diving into three key cases involving AI and CSAM. We’ll pinpoint common themes and see how they’re shaping the current legal landscape. We’ll also look at how federal and state governments are responding, explore the types of emerging CSAM and offer some recommendations for tackling this issue before it hits your jurisdiction.
Each of the following cases shares a common thread but brings a unique perspective to this complex topic.
Case Summaries
- U.S. v. Mecham (Circa 2020)
Clifford Mecham superimposed the faces of actual children onto explicit photographs of adults, making it appear as if minors were engaged in sexual activity. In this case, the court ruled that the “morphed” child pornography did not enjoy the protection of the First Amendment. However, it also noted that the images didn’t depict sadistic or masochistic conduct, as no child was actually involved in a sexual act.
Drawing on Ashcroft v. Free Speech Coalition, the court stated, “no child is involved in the creation of virtual pornography,” and questioned whether morphed CSAM was close enough to real CSAM to be considered unprotected speech. Ultimately, the court ruled that since the pornography depicted an actual child, it fell outside First Amendment protections.
- U.S. v. Tatum (2023)
David Tatum was sentenced to 40 years for sexual exploitation of a minor and for using AI to create child pornography. Like Mecham, Tatum placed real victims’ faces into pornographic imagery, but he used AI to do so, drawing on secret recordings of his victims to create the illicit images and videos.
- U.S. v. Smelko (2023)
Like Tatum, Smelko possessed images where child actors’ faces were superimposed onto nude bodies and persons performing sexual acts. The jury convicted Smelko of two counts of possessing child pornography.
All three cases involved real children’s faces placed into sexual images or videos, but the execution, and the criminal charges that followed, varied. The end product may influence a court’s decision, and we haven’t even tapped into a bigger philosophical question: what if an image depicts a child who isn’t an actual human being or “person”? What if the AI or machine-learning model created it from images of real children, or worse, what if it generated the image without ever being exposed to CSAM? This leads us to the emerging question and concern of AI-generated CSAM across the spectrum.
Categories of CSAM
Riana Pfefferkorn’s excellent article “Addressing Computer-Generated Child Sex Abuse Imagery: Legal Framework and Policy Implications” breaks down CSAM into a few categories:
- Morphed CSAM: Involves an identifiable child’s image morphed into a CSAM image, as seen in the three cases.
- Photorealistic CSAM: Machine Learning (ML)-generated virtual CSAM indistinguishable from photographic CSAM, regardless of the training data.
- Abuse-trained CSAM: Virtual CSAM created by learning from real CSAM datasets.
These categories map onto federal law (18 U.S.C. § 2252A), which draws nuanced distinctions. For instance, in Ashcroft v. Free Speech Coalition (2002), the Supreme Court ruled that computer-generated CSAM involving adults who look like minors, or purely virtual imagery, is protected speech. As a result, federal and state approaches can differ significantly.
U.S. Federal vs. State Law
Federal law, as mentioned above, covers many different scenarios and is very inclusive when it comes to what qualifies as CSAM. The U.S. federal government defines Child Sexual Abuse Material as follows:
“child pornography” means any visual depiction, including any photograph, film, video, picture, or computer or computer-generated image or picture, whether made or produced by electronic, mechanical, or other means, of sexually explicit conduct, where:
- (A) the production of such visual depiction involves the use of a minor engaging in sexually explicit conduct;
- (B) such visual depiction is a digital image, computer image, or computer-generated image that is, or is indistinguishable from, that of a minor engaging in sexually explicit conduct; or
- (C) such visual depiction has been created, adapted, or modified to appear that an identifiable minor is engaging in sexually explicit conduct.”
Federal law appears to treat “indistinguishable” forms of CSAM with the same severity as real CSAM. This means that if AI-generated content looks like a real minor and is perceived as such by viewers, it can be prosecuted just like real CSAM. The FBI has successfully prosecuted cases where real victims’ faces were inserted into sexual scenarios under the “identifiable minor” provision, subsection (C) above. However, to the best of this author’s knowledge, purely AI-generated CSAM has not yet been fully tested in court.
This leaves a significant question unanswered: what about CSAM produced entirely by AI, without using real photos for training? Will this still be classified as CSAM, or could it be considered a form of “creativity”? This is a crucial area that has yet to be definitively addressed by the legal system. Moreover, if AI-generated CSAM is found in a suspect’s possession, how can one prove whether or not it came from a model that was exposed to CSAM?
For the time being, let us shift our focus to the state level. Living in Kansas, I took a closer look at what our lawmakers have to say about AI-generated CSAM. To offer a broader perspective, I also examined how another state addresses the issue, highlighting how state rulings might diverge from or align with federal law.
Kansas
Statute | Kansas State Legislature (kslegislature.org)
Kansas has a “catch-all” provision in its statute that defines “visual depiction” broadly. It includes any photograph, film, video, digital or computer-generated image or picture, regardless of how it was produced: “whether made or produced by electronic, mechanical or other means.”
However, the Kansas Statutes consistently define a “child” as a “person.” This opens the argument that “no person was harmed” if the image is entirely computer-generated and doesn’t depict a real, living person. Despite this, there is still room for debate and differing interpretations in court.
To the author’s knowledge, AI-generated CSAM has not yet been an element of any crime brought before the courts in Kansas.
Utah
By contrast, the State of Utah offers specificity in what is required for prosecution. According to its law, “It is an affirmative defense to a charge of violating this section that no minor was actually depicted in the visual depiction or used in producing or advertising the visual depiction.” Essentially, if no real child was involved, the charge can be defeated. This provision does leave room to argue that real children were harmed in creating the CSAM because their images trained the AI model, yet proving this could be extremely complex.
As we see, there are potential “loopholes” in federal law that offenders might exploit. When it comes to state laws, the burden of proof may be even higher, adding more work for prosecutors and law enforcement officers. So, what can you do as an investigator looking to combat the generation and distribution of CSAM online?
Recommendations
At this early stage of a growing phenomenon, combating AI-generated CSAM requires proactive measures. The issue is new to many law enforcement professionals, and practices are not yet fully established. Education and open conversation with lawmakers, experts, and the community are therefore our best tools for working out how to tackle AI-generated CSAM. Some topics of discourse might include:
- Engaging in Dialogue: Foster open communication with prosecutors to understand the legal nuances and prepare for potential cases.
- Federal Support: Establish connections with federal entities to leverage their resources and expertise.
- Legislative Advocacy: Lobby for clear state and federal laws addressing AI-generated CSAM, similar to efforts by Rep. Anna Pauline Luna.
- Technology Development: Invest in tools that can detect CSAM derived from known datasets, aiding in the identification and prosecution of offenders (a brief illustration of one such technique appears after this list).
- Stay Updated on Research: Cellebrite’s experts Heather Barnhart and Jared Barnhart recently published a blog on identifying AI-generated images through a forensic lens.
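To make the technology point a bit more concrete, below is a minimal sketch of one common building block in such tools: perceptual hashing, which can flag images derived from known material (resized, re-encoded or lightly edited copies) by comparing an image’s hash against a reference list. This is purely an illustration, not Cellebrite’s or any agency’s implementation; it assumes the open-source Pillow and imagehash Python libraries, a hypothetical known_hashes.txt reference file, and a placeholder image path.

```python
# Minimal sketch: perceptual-hash matching against a list of known image hashes.
# Assumes: pip install pillow imagehash
# known_hashes.txt is a hypothetical file with one hex-encoded pHash per line.
from pathlib import Path

from PIL import Image
import imagehash

MAX_DISTANCE = 8  # Hamming-distance threshold; lower = stricter matching


def load_known_hashes(path: str) -> list[imagehash.ImageHash]:
    """Read one hex-encoded perceptual hash per line of the reference list."""
    return [
        imagehash.hex_to_hash(line.strip())
        for line in Path(path).read_text().splitlines()
        if line.strip()
    ]


def find_matches(image_path: str, known_hashes: list[imagehash.ImageHash]) -> list[int]:
    """Return Hamming distances for every known hash within the threshold."""
    candidate = imagehash.phash(Image.open(image_path))
    return [candidate - known for known in known_hashes if candidate - known <= MAX_DISTANCE]


if __name__ == "__main__":
    known = load_known_hashes("known_hashes.txt")
    hits = find_matches("evidence_image.jpg", known)
    if hits:
        print(f"Possible derivative of known material (hash distances: {hits})")
    else:
        print("No match against the known-hash list")
```

Note that hash matching of this kind only catches derivatives of already-known images; entirely novel AI-generated imagery calls for different detection approaches, such as the forensic techniques discussed in the Barnhart blog mentioned above.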
The fight against CSAM, particularly AI-generated material, is complex and evolving. By understanding legal precedents, engaging in proactive dialogue, and advocating for legislative clarity, law enforcement and prosecutors can better combat this heinous crime. The time to start these conversations is now.
About the Author
William Arnold is a Deployment Engineer at Cellebrite, operating within Services, Delivery and Customer Success, where he configures, installs and ensures the smooth operation of our products for our customers and offers case assistance when and where needed. For nearly a decade, William served in law enforcement, working all kinds of high-stress cases from child exploitation to kidnapping to homicides. He worked highly technical cases, including manually carving SMS messages from an entire phone chip-off.