Wikimedia+Libraries International Convention 2025/Programme/deepfake-authenticity-heritage
- Clifford Anderson
Time: 14:30 - 15:00
Room: Computer Lab (Room 5)
Abstract:
The term ‘deepfake’ was coined in 2017 to describe the use of deep neural networks to generate synthetic voices, images, and videos. In its early years, the technology was used primarily to create pornography, memes, and political propaganda. As the technology has matured, deepfakes have also become a tool of financial fraud, with corporations reporting millions of dollars in losses annually.
However, deepfakes also have positive uses. The cultural heritage sector, for example, has created compelling interactive exhibits that use synthetic media and artificial intelligence to bring artists and artworks to life, so to speak. In 2019, the Dalí Museum in St. Petersburg, Florida, exhibited a deepfake version of Salvador Dalí, titled “Dalí Lives,” that interacted with visitors. In 2024, the same museum recreated Dalí’s “lobster phone” to let visitors call the artist and converse with his digital simulacrum about his artwork. Scholars debate the ethics of such installations, which promote public engagement but perhaps at the cost of diluting artistic and historical authenticity.
In this talk, I explore a related side effect of using AI-generated synthetic media in cultural heritage: how deepfakes affect cultural memory. Researchers are now investigating how deepfakes change our perceptions of personal and cultural history. As more institutions make their collections freely available online, and those images in turn become training data for deep neural networks capable of producing ever more convincing synthetic images, what responsibility do cultural heritage institutions have to serve as stewards of authentic (digital) memory?