Understanding and Detecting "Deepfakes"

Posted by InfoLit Learning Community on 5/31/19 2:20 PM

Information Literacy, InfoLit Learning Community, disinformation

Discussing disinformation is a compelling way to teach information literacy to students, as it includes an appealing combination of controversy and technology. One topic that is sure to draw students in is the issue of “deepfakes,” videos that very convincingly portray some falsehood—usually a person saying something that they never said.

The topic is hot online, with numerous outlets describing how these videos work and what a detriment they will be to online discourse. For a quick overview, students could read a 2018 Guardian article by Oscar Schwartz, “You Thought Fake News Was Bad? Deep Fakes are Where Truth Goes to Die.” The article discusses how a video showing President Trump criticizing Belgian climate change policy was not only fake but was never intended by its makers to be taken seriously. “The video’s creators later said they assumed that the poor quality of the fake would be enough to alert their followers to its inauthenticity,” writes Schwartz—a point worth making to students: bad production values mean something.

On the more in-depth side, a podcast episode that could easily work as part of a syllabus is “Breaking News” by WNYC’s Radiolab, which interviews technologists such as those at Adobe who have created a tool that can copy voices. After “listening” to two hours of a person speaking, the tool can create audio of the “person” saying anything, even words that were not part of the two-hour sample.

Much of the online discussion around deepfakes is about technological means of detecting them; see for example this video in which engineers at Purdue University discuss their computer program that detects fakery by learning, from numerous video examples, what a real video looks like.

There is also an easy way for students to detect deepfakes, an activity that can provide a quick and engaging class demo. It is outlined in an article for The Conversation by Siwei Lyu, Associate Professor of Computer Science and Director of the Computer Vision and Machine Learning Lab at the University at Albany. Deepfakes use online photos of a given person to help create a fake video of them, explains Lyu. Because most photos of the person will show them with their eyes open, subjects in deepfake videos don’t blink much. See The Conversation for a more detailed explanation of this phenomenon and a deepfake video that shows it at work.
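For readers curious how blink detection works under the hood: Lyu’s published detector uses a neural network, but the underlying intuition—open eyes look geometrically different from closed eyes, and real people close theirs every few seconds—can be sketched with the widely used “eye aspect ratio” heuristic. This is an illustrative assumption-laden sketch, not Lyu’s actual implementation; the landmark ordering and the 0.2 threshold are conventional choices, not values from his paper.

```python
from math import dist  # Python 3.8+

def eye_aspect_ratio(eye):
    """Compute the eye aspect ratio (EAR) from six (x, y) landmarks
    around one eye, ordered p1..p6 as in the common 68-point
    face-landmark layout: p1/p4 are the corners, p2/p3 the upper lid,
    p6/p5 the lower lid. Open eyes give a high ratio; closed eyes
    give a ratio near zero."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = dist(p2, p6) + dist(p3, p5)   # lid-to-lid distances
    horizontal = 2 * dist(p1, p4)            # corner-to-corner width
    return vertical / horizontal

def count_blinks(ear_per_frame, threshold=0.2):
    """Count blinks in a video as the number of times the per-frame
    EAR crosses below the threshold. A suspiciously low count over a
    long clip is the red flag Lyu describes."""
    blinks = 0
    closed = False
    for ear in ear_per_frame:
        if ear < threshold and not closed:
            blinks += 1
            closed = True
        elif ear >= threshold:
            closed = False
    return blinks
```

In practice the landmarks would come from a face-tracking library run on each video frame; here the point of the sketch is only that the test reduces to simple geometry plus counting.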
