Accessible Machine Learning for Misinformation and Influence Operation Analysis
PI: Chris Re
Department: Computer Science
Sponsor: United States Navy (USN) ONR NEPTUNE Program
Advancements in Artificial Intelligence (AI) and Machine Learning (ML) have enabled rapid improvement of capabilities across nearly all industries. Employing these technologies is undoubtedly a core component of modernizing the capability sets of the United States Department of Defense (DoD) and the Intelligence Community (IC). However, the DoD faces unique challenges in implementing traditional AI/ML paradigms: the impact and consequences of the standard problems of data sensitivity, cost, and time-to-deploy are multiplied in the context of DoD/IC missions.
This is particularly salient in the analysis of influence operations and misinformation campaigns. Over the past several years, the United States has witnessed the grave effects of both phenomena, whether misinformation surrounding COVID-19 or targeted influence operations seeking to delegitimize the democratic process. The USN, and the DoD more broadly, need tools that are rapidly deployable, continuously improvable, and accessible, both to ensure operational efficacy and to comply with the DoD’s Ethical AI Principles. We propose programmatic labeling and weakly supervised ML as part of the toolkit to combat the growing threat that influence operations and misinformation pose to the United States and its national security interests.
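To illustrate what programmatic labeling looks like in practice, the sketch below (not part of the proposal itself) defines a few heuristic labeling functions over hypothetical social-media posts and combines their noisy votes. The keyword rules, label names, and majority-vote aggregation are illustrative assumptions; a deployed system would instead learn to weight and denoise the labeling functions (for example, with a label model such as Snorkel's). The appeal for rapid deployment is that analysts encode domain knowledge as small functions that can be revised and re-applied to the full corpus in minutes as a campaign evolves, rather than re-labeling data by hand.

```python
# Minimal sketch of programmatic labeling for misinformation triage.
# The labeling functions and keyword lists are illustrative placeholders only;
# a production pipeline would aggregate these noisy votes with a learned label
# model rather than the simple majority vote shown here.
from collections import Counter

ABSTAIN, BENIGN, SUSPECT = -1, 0, 1

def lf_miracle_cure(post: str) -> int:
    # Flag posts pushing unverified "miracle cure" claims.
    return SUSPECT if "miracle cure" in post.lower() else ABSTAIN

def lf_election_fraud_claim(post: str) -> int:
    # Flag posts asserting a "rigged election".
    return SUSPECT if "rigged election" in post.lower() else ABSTAIN

def lf_official_source(post: str) -> int:
    # Treat posts linking to .gov sources as likely benign.
    return BENIGN if ".gov/" in post.lower() else ABSTAIN

LABELING_FUNCTIONS = [lf_miracle_cure, lf_election_fraud_claim, lf_official_source]

def weak_label(post: str) -> int:
    """Combine noisy labeling-function votes, here by simple majority."""
    votes = [lf(post) for lf in LABELING_FUNCTIONS if lf(post) != ABSTAIN]
    if not votes:
        return ABSTAIN
    return Counter(votes).most_common(1)[0][0]

if __name__ == "__main__":
    posts = [
        "This miracle cure ends the pandemic overnight!",
        "CDC guidance is posted at https://www.cdc.gov/ for reference.",
        "They know it was a rigged election.",
    ]
    for p in posts:
        print(weak_label(p), p)
```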
H4D Focus Areas: AI/ML, Big Data, Technology Transition