Wikimedia+Libraries International Convention 2025/Programme/disinformation-markets-ai

Disinformation Markets in the Age of AI: Deepfakes, Synthetic Media, and the Future of Computational Propaganda on Digital Platforms
With:
  • John Oluwaseye Adebayo
  • Dr. Nkem Osuigwe
  • Mr. Olatubosun Busuyi Akole
Day: January [online], 2025
Time: [online]
Room: [online]

Abstract:
Advances in machine learning and AI have ushered the world into new paradigms of information generation and distribution through synthetic media, including deepfakes, on digital platforms. Synthetic media designed to imitate human features now makes it increasingly difficult to distinguish fabricated from authentic content. Deepfakes, according to Chesney and Citron (2019), are digitally manipulated artificial media that portray people saying or doing things that never occurred in reality; they are made possible by developments in Artificial Intelligence (AI), especially machine learning. In the age of AI and machine learning algorithms, deepfakes have become a viable tool for disinformation, with significant implications for political stability, reliable information ecosystems, and public trust in content on digital platforms. While global debates on internet governance continue, digital platforms are tasked with producing mechanisms to detect and mitigate the influence of synthetic media designed to deceive and manipulate people through disinformation (Chesney & Citron, 2019).

The effects of deepfakes on political structures and engagement are critical, given their connection with every sphere of human endeavor, including the economy, education, infrastructure, security, and global relations for sustainable development. In an age when prominent political actors use data analytics, bots, and algorithms to shape public policy and public perception, developments in deepfakes further complicate the frontiers of computational propaganda for competitive advantage. According to Woolley and Howard (2016), computational propaganda involves the use of algorithms to spread ideologically shaped information that appeals to people's emotions and biases, circumventing their rationality and steering their behavior toward supporting the ideas of those who promote the content on digital platforms. Disinformation generated by deepfakes and shared on digital platforms can instigate violent exchanges between people across political divides, thereby promoting civil unrest, personal attacks, and cyberbullying. Because of their affordability and capacity for autonomous operation, AI and machine learning algorithms pose varied threats to the integrity of information flows on digital platforms (Jacobsen & Simpson, 2023). The rise of deepfakes alongside other synthetic media has thus driven the world into a new sphere of information communication through computational propaganda. Yet digital platforms, the primary instruments of information access, remain susceptible to spreading AI-generated disinformation because they lack sufficiently modern systems to detect and moderate such content.
While platform operators develop technical solutions to address this menace, political and technology actors continue to leverage deepfakes to interfere with social cohesion, global relations, and democratic processes by influencing people's perceptions and opinions with disinformation. Disinformation orchestrated through deepfakes on digital platforms can adversely affect many people and breed skepticism toward information systems, threatening the foundations of informed decision-making and sustainable development. There is therefore a need for digital navigators, information professionals, Wikimedians, library workers, Wikibrarians, and technology policy experts from around the globe to examine, and make recommendations on, the challenges that deepfake and synthetic-media disinformation poses in promoting computational propaganda on digital platforms. The session's concerns include, but are not limited to, approaches to developing detection mechanisms, strategies for expanding information literacy frameworks, and structures for internet governance. While freedom of expression is well established, this session seeks to gather the considered positions of information professionals on strengthening partnerships among governments, platform owners, institutions, technology experts, and civil society. It also seeks to propose an action-oriented agenda for the Wikimedia and library communities on strengthening the information environment by influencing policies and innovations that can mitigate deepfake disinformation. Specifically, the proposed session seeks to achieve the following objectives:
i. Explore the effects of deepfakes on public trust in information systems and information institutions
ii. Examine the role of digital platforms in the spread of disinformation through synthetic media
iii. Assess existing detection mechanisms and platform responses to reports of disinformation and harmful content
iv. Identify the ethical, legal, psychological, emotional, social, and economic consequences of deepfake-influenced disinformation on digital platforms
v. Discuss how policies and laws can be used to regulate the use of online platforms for political activities
vi. Suggest collaborative frameworks for regulating synthetic media among platform owners, governments, technology experts, and civil society

This session aims to give participants deep insight into the evolving threats posed by deepfakes and synthetic media in intensifying disinformation through computational propaganda. By demystifying platforms' role in algorithm-influenced disinformation, participants will gain knowledge of the technical, political, ethical, and policy challenges embedded in synthetic media and will be encouraged to propose action-oriented strategies that digital navigators and the Wikimedia and library communities can adopt to curb the spread of disinformation online. Through introductions, discussions, and debriefs on critical information literacy and contextual intelligence, the session will equip Wikimedians with the skills to identify deepfake disinformation and protect the integrity of Wikimedia platforms while creating and moderating content on any topic. Information professionals, Wikibrarians, and library workers will also find the session helpful for sharpening their expertise and roles as information literacy experts, enabling them to train their users in the advanced critical thinking and technological skills needed to identify deepfakes and disinformation on digital platforms. Participants can expect to expand their knowledge base for navigating the complex pathways of digital information ecosystems and confronting the growing threat of disinformation on digital platforms.
References
1. Chesney, B., & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107, 1753.
2. Chesney, R., & Citron, D. (2019). Deepfakes and the new disinformation war: The coming age of post-truth geopolitics. Foreign Affairs. https://www.foreignaffairs.com/articles/world/2018-12-11/deepfakes-and-new-disinformation-war
3. Jacobsen, B. N., & Simpson, J. (2023). The tensions of deepfakes. Information, Communication & Society, 27(6), 1095–1109. https://doi.org/10.1080/1369118X.2023.2234980
4. Woolley, S. C., & Howard, P. N. (2016). Social media, revolution, and the rise of the political bot. In P. Robinson, P. Seib, & R. Frohlich (Eds.), Routledge handbook of media, conflict, and security (pp. 282–292). Routledge.