
AI-Generated CSAM: Staying Ahead of the Threat
Digital investigations are always evolving, and law enforcement professionals often feel unprepared to tackle the myriad challenges presented by technology. Despite years of experience and comprehensive training, some cases take even the most seasoned investigators by surprise.
Consider one scenario: a teenage girl reports that photos she posted of herself are being misused. The photos she shared were not inappropriate; rather, her face has been superimposed onto another body engaged in sexual conduct. AI is making this girl’s life a living hell.
For law enforcement, this scenario encapsulates a new era in digital forensics, one where the lines between reality and fabrication blur and traditional investigative techniques face unprecedented challenges. AI-generated CSAM raises profound questions about privacy, security and the ethical deployment of technology in the pursuit of justice. How do we navigate this landscape with integrity, efficacy and sensitivity?
Identifying Deepfake Images from a Forensic Lens
This situation has already transitioned from hypothetical to reality. As outlined by Cellebrite’s own Heather Mahalik Barnhart in Forensic Magazine, “In Florida, two teenagers are facing criminal charges – accused of employing AI to generate explicit images of a classmate.” From the investigator’s point of view, a case like this poses several challenges:
- If the photograph has been altered, how can I find it in the mountain of images that mobile devices now contain? Hash matching can’t be used, and manual scrolling would take significant time (see the sketch after this list).
- Has a crime been committed?
- Is anything being done to combat this problem, or help investigators with this new threat?
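To make the first challenge concrete: traditional CSAM detection compares cryptographic hashes (MD5, SHA-256) of seized files against databases of known material, and those hashes change entirely when even one bit of the file changes. A minimal Python sketch, with placeholder bytes standing in for a real JPEG:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of raw file bytes."""
    return hashlib.sha256(data).hexdigest()

# Placeholder bytes standing in for a real JPEG (hypothetical content).
original = b"\xff\xd8\xff\xe0" + b"...image data..." + b"\xff\xd9"

altered = bytearray(original)
altered[10] ^= 0x01  # flip a single bit; any face swap changes far more

print(sha256_hex(original))        # digest of the known image
print(sha256_hex(bytes(altered)))  # completely different digest
# Because the two digests share nothing, lookups against hash sets of
# known material cannot surface manipulated or AI-altered derivatives.
```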
Know what you can do for the investigation
Sifting through thousands, hundreds of thousands, or even millions of images is a daunting task. Being unable to rely on hash matching makes an already difficult endeavor even more formidable. Luckily, there are tools such as Cellebrite Pathfinder that allow an investigator to target a specific person in a search and even to build their own visual searches. By leveraging this computing power, investigators save time and greatly increase efficiency. In the example above, upon obtaining a suspect’s device or devices, an investigator could use Pathfinder’s Facial Similarity by uploading a picture of the victim and using Image Analytics to actively search for faces similar to hers, with no need to match hash values.
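The general technique behind facial similarity search can be sketched with the open-source face_recognition library. This is a conceptual illustration only, not Pathfinder’s actual implementation, and the file paths are hypothetical:

```python
from pathlib import Path
import face_recognition

def find_similar_faces(reference_path: str, image_dir: str, tolerance: float = 0.6):
    """Rank images in image_dir by facial distance to a reference photo."""
    ref_image = face_recognition.load_image_file(reference_path)
    ref_encodings = face_recognition.face_encodings(ref_image)
    if not ref_encodings:
        raise ValueError("No face found in the reference photo")
    reference = ref_encodings[0]

    hits = []
    for path in Path(image_dir).glob("*.jpg"):
        image = face_recognition.load_image_file(str(path))
        for encoding in face_recognition.face_encodings(image):
            distance = face_recognition.face_distance([reference], encoding)[0]
            if distance <= tolerance:  # lower distance = more similar face
                hits.append((distance, path))
    return sorted(hits)  # strongest candidates for manual review first

# Usage (hypothetical evidence paths):
# for distance, path in find_similar_faces("victim.jpg", "/evidence/images"):
#     print(f"{distance:.3f}  {path}")
```

The key design point is that the search compares facial embeddings rather than file contents, so it still finds a victim’s face after the surrounding image has been wholly fabricated.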
Additionally, you can draw on Cellebrite’s in-house expertise, including Heather Mahalik Barnhart and Jared Barnhart, who recently identified some key tells that distinguish AI imagery from authentic imagery. Even the smallest pattern can lead you in the right direction. As Heather writes, “In general – the file size of an AI image is generally smaller – but you can’t just use that to identify it. Still, it’s a nice marker to start looking in that direction.”
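As a rough illustration, that marker could be turned into a triage filter. The 150 KB threshold below is an assumed value for demonstration only, not a validated cutoff, and as Heather cautions, small size alone proves nothing:

```python
from pathlib import Path

# Assumed demonstration threshold; tune against your own case data.
SIZE_THRESHOLD_BYTES = 150 * 1024

def flag_small_images(image_dir: str):
    """Yield images whose file size falls below the triage threshold."""
    for path in Path(image_dir).rglob("*"):
        if path.suffix.lower() in {".jpg", ".jpeg", ".png", ".webp"}:
            size = path.stat().st_size
            if size < SIZE_THRESHOLD_BYTES:
                yield path, size

# Usage (hypothetical evidence folder):
# for path, size in flag_small_images("/evidence/images"):
#     print(f"{size:>10} bytes  {path}")
```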
Know what you can do jurisdictionally
Some U.S. states are trying to get ahead of the problem, while others mandate that a victim be identified or allow an affirmative defense that “no minor was actually depicted in the visual depiction.” I covered how different governmental entities are treating this matter in a previous article.
Long story short, conversations with your prosecutors, local legislators and other policymakers will likely shape the future of these investigations and the subsequent punishments. It is worth restating that the “Take It Down” service, run by NCMEC, can be used as a precaution to help your community reduce the dissemination of such generated content.
What we are doing at Cellebrite
As alluded to, Cellebrite employs some of the best minds to attack problems like these head-on. As Heather explains, “We are creating a handful of deepfake images using various AI-generating applications on our Android and iOS test devices. Then we are taking the same kinds of images through traditional means. We then put them side by side and dig into the EXIF of the images and the databases that track image artifact metadata.”
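The EXIF half of that side-by-side comparison can be approximated with the Pillow library. A minimal sketch, assuming two hypothetical test files, one camera original and one AI-generated image:

```python
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> dict:
    """Return a human-readable dict of EXIF tags, empty if none present."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

# Hypothetical test files: one camera original, one AI-generated image.
for label, path in [("camera", "camera_shot.jpg"), ("ai", "ai_generated.jpg")]:
    tags = dump_exif(path)
    print(f"{label}: {len(tags)} EXIF tags")
    for name, value in sorted(tags.items(), key=lambda kv: str(kv[0])):
        print(f"  {name}: {value}")
# AI-generating apps often write sparse or no camera EXIF (no Make/Model,
# no capture settings), which is one artifact pattern worth comparing.
```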
The fight against child sexual abuse material (CSAM) requires a multifaceted approach: the investigation is the sum of smaller parts, and if one part is missing or neglected, the whole may be put at risk. Understanding your local jurisdictional nuances is just as crucial as having the right investigative solutions to help you find the right leads faster. Tools like Cellebrite’s Pathfinder enhance investigators’ capabilities by enabling targeted facial similarity searches and other visual analysis. They can also help map connections between offenders and trace the paths along which CSAM is distributed. Keeping an open dialog with local experts such as lawmakers, prosecutors and specialists prepares you for the unknown and is just as important as reacting to these cases as they arise.
No matter how you or your agency choose to prepare for the threat of malicious AI use, Cellebrite will be standing by, ready to support. Have a similar issue you would like to discuss with us? Reach us at stump-us@cellebrite.com.
About the Author
As a Deployment Engineer at Cellebrite, William Arnold works within Services, Delivery and Customer Success, where he configures, installs and ensures the smooth operation of our products for our customers and offers case assistance when and where needed. For nearly a decade, William served in law enforcement, working high-stress cases ranging from child exploitation to kidnapping to homicide. He has given expert testimony in computer forensics in multiple cases and has worked highly technical ones, including manually carving an entire phone chip-off for SMS messages.