“Fake content is a serious and increasingly pressing issue.” – Adobe

Public trust in the integrity of digital media is being compromised as consumer-grade editing software becomes ever more effective at deceiving viewers. As a result of these technological advances, the general public is now left to wonder whether even the most ordinary image or video is real.

This sentiment is becoming dangerous at a time when interrogations, investigations, and trials increasingly rely on digital evidence to identify suspects, inform judges, and convince juries. As digital forensics comes under increased scrutiny, it is imperative that governments, agencies, and law enforcement adhere to forensically sound methods, conforming to both domestic and international standards, throughout the entire investigative workflow.

Social networks and software vendors react

The ongoing public debate over how digital giants like Facebook should detect and reject fake news is now expanding to the rising threat of deepfake pictures and videos. The potential for fraud, framing, or defamation is a serious cause for concern worldwide. In the long term, the general distrust fueled by deepfake technology could undermine confidence in digital data both inside and outside the courtroom.

As Facebook takes more responsibility for culling fake news using artificial intelligence, software vendors are starting to assist with the identification of fake or tampered-with media. Legitimate news agencies, online publishers, and software developers are eagerly seeking advances in this research to win back public trust.

“We live in a world where it’s becoming harder to trust the digital information we consume, and I look forward to further exploring this area of research.” – Richard Zhang, Adobe Software Researcher (source)

The software empowering image and audio manipulation

Easily accessible applications such as Adobe After Effects and FakeApp demonstrate how rapidly photorealistic face swaps can be created that are virtually undetectable when compared with the original files. Because these apps are powered by groundbreaking artificial intelligence, the level of visual nuance is so sophisticated that even the most discerning viewer can be fooled.

This quality of manipulation is also becoming a reality for audio files. Another Adobe initiative, Adobe VoCo, empowers sound designers to digitally put words into someone’s mouth and output seamless, natural speech in that person’s unique tone. With only 20 minutes of a subject’s recorded voice, the application can replicate the speaker’s signature sound and say whatever text is fed into the software.

Initially intended for voice-over correction in the entertainment industry, this software could pose additional challenges for digital forensics in the hands of bad actors. With other vendors like DeepMind delivering similarly convincing synthetic speech with their WaveNet offering, it is only a matter of time before all digital media becomes suspect and a whole new level of digital authentication is required.

Moreover, vocal impersonation could enable new hacking techniques. Voice-activated phones, home-security systems, and vehicles could be severely compromised if this audio-manipulation technique gains wider adoption. An ethical minefield of user-generated disinformation may be right around the corner.

On the flip side of generative audio applications are speech-recognition solutions that can determine what someone is saying in a video without any audio at all. LipNet, an automatic lip-reading solution for video content, could turn garbled or missing audio into understandable speech. LipNet achieves 93% accuracy, whereas an experienced human lip reader comprehends only 52% of the same content.
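For readers curious how such a system is put together, the LipNet paper pairs a spatiotemporal convolutional network with recurrent layers and CTC decoding, mapping raw mouth-crop video directly to character sequences. The PyTorch skeleton below is a minimal, illustrative sketch of that general shape; the layer sizes, 28-character vocabulary, and input dimensions are placeholder assumptions, not the published configuration.

```python
# Skeleton of a LipNet-style model: a spatiotemporal CNN over video frames,
# bidirectional GRUs over time, and per-frame character logits for CTC
# decoding. Layer sizes and the vocabulary are illustrative assumptions.
import torch
import torch.nn as nn

class LipReader(nn.Module):
    def __init__(self, vocab_size=28):  # e.g. 26 letters + space + CTC blank
        super().__init__()
        # 3D convolutions see (time, height, width), capturing lip motion.
        self.stcnn = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
            nn.Conv3d(32, 64, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
        )
        self.gru = nn.GRU(input_size=64, hidden_size=128, num_layers=2,
                          bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 128, vocab_size)

    def forward(self, frames):
        # frames: (batch, channels=3, time, height, width)
        x = self.stcnn(frames)
        # Average the spatial dimensions so each time step is one vector.
        x = x.mean(dim=(3, 4)).transpose(1, 2)  # -> (batch, time, 64)
        x, _ = self.gru(x)
        # Per-frame character logits; at training time these feed nn.CTCLoss.
        return self.fc(x)

# Example: one 75-frame clip of 50x100-pixel mouth crops (random data here).
logits = LipReader()(torch.randn(1, 3, 75, 50, 100))  # -> (1, 75, 28)
```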

Democratizing image forensics

Adobe, one of the market leaders in image- and video-editing solutions, recently publicized research that automatically detects manipulated images of faces. Using AI and machine learning, the system is trained to spot splicing, cloning, and removal of objects, and the algorithm designed to detect manipulated faces demonstrated 99% accuracy. Although Adobe has not released this as a commercial product, it has emphasized its commitment to the initiative.
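Adobe has not published the implementation behind those results, but a classical baseline from the image-forensics literature illustrates the basic idea of hunting for splices. The Python sketch below performs Error Level Analysis (ELA): it re-saves a JPEG at a known quality and amplifies the pixel-wise difference, because pasted-in regions often carry a different compression history than the rest of the image. The file names and quality setting are illustrative assumptions, and ELA is a simple heuristic, not Adobe’s detector.

```python
# Minimal Error Level Analysis (ELA) sketch using Pillow.
# NOTE: this is NOT Adobe's detector. ELA is a classical image-forensics
# baseline that highlights regions whose JPEG compression history differs
# from the rest of the image, a common artifact of splicing and cloning.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90):
    """Return a brightness-amplified difference image for visual inspection."""
    original = Image.open(path).convert("RGB")

    # Re-save the image at a fixed, known JPEG quality.
    resaved_path = path + ".resaved.jpg"
    original.save(resaved_path, "JPEG", quality=quality)
    resaved = Image.open(resaved_path)

    # Pixel-wise difference: untouched areas recompress consistently,
    # while pasted-in regions often do not.
    diff = ImageChops.difference(original, resaved)

    # Scale the (usually faint) differences up toward the full 0-255 range.
    max_channel = max(hi for _, hi in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_channel)

if __name__ == "__main__":
    # "suspect_photo.jpg" is a placeholder file name for illustration.
    error_level_analysis("suspect_photo.jpg").save("suspect_photo_ela.png")
```

Regions that glow much brighter than their surroundings in the output warrant closer inspection; detectors like Adobe’s train neural networks to pick up artifacts far subtler than ELA can surface.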

“While we are proud of the impact that Photoshop and Adobe’s other creative tools have made on the world, we also recognize the ethical implications of our technology.” – Adobe


The next step for Adobe

Experimentation with restoring edited photos to their original state has already begun; however, reversing an edit is understandably more complex than making one.

“The idea of a magic universal ‘undo’ button to revert image edits is still far from reality.” – Richard Zhang, Adobe Researcher (source)

The DFIR community should be aware of the future challenges posed by the growing prevalence of fake digital evidence. In the meantime, Cellebrite has already been instrumental in detecting fake messaging apps involved in investigations. In a recent case, Cellebrite Physical Analyzer identified a Fake SMS app that a suspect had used in an attempt to frame his victim. Read the full case study here.

As experimentation with detecting fraudulent digital evidence advances, Cellebrite will continue to deliver forensically sound methods that are accepted by agencies and law enforcement around the world.
