Reading Time: 6 minutes
By: Miguel Zapata

Introduction

Artificial intelligence (AI) has ushered in a new era of technological advancement, profoundly impacting defense, intelligence, and homeland security operations. AI systems now perform tasks that were once the exclusive domain of humans, analyzing vast datasets and making complex decisions at unprecedented speeds. While these developments offer considerable advantages, they also hand adversaries tools to leverage AI for malicious purposes, directly affecting national security. In 2019, a popular conservative YouTube channel posted a manipulated video of U.S. House Speaker Nancy Pelosi appearing to be intoxicated. The viral video highlighted the potential of fake videos to undermine political figures and influence public opinion. Even though the video was clearly fake (most of the edits simply slowed the footage down), it still went viral; the damage was already done. This underscores how manipulated media can erode trust in public figures and government institutions and even disrupt democratic processes. This article examines how AI and large language models aid the production of disinformation and offers mechanisms to combat AI and keep its uses in check.

AI-Generated Disinformation and National Security

Large Language Models (LLMs) are central to AI-generated disinformation. LLMs are advanced AI systems trained on vast amounts of textual data to understand and generate human-like language. They work by predicting the next word in a sentence based on the context provided by previous words, allowing them to produce coherent and contextually relevant text. AI applications integrate LLMs to leverage their language comprehension capabilities, powering chatbots, content creation tools, translation services, and summarization tools. While LLMs offer many benefits, they can be misused to generate disinformation that appears authentic, making it difficult to distinguish fake content from real. For instance, adversaries could use LLMs to create fake news articles, social media posts, or official statements that spread false information rapidly and convincingly.
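The next-word prediction at the heart of an LLM can be illustrated with a deliberately tiny sketch. The bigram model below simply counts which word follows each word in a toy corpus and returns the most frequent continuation; real LLMs use transformer networks over subword tokens and billions of parameters, but the training objective, predicting the next token from context, is the same. The corpus and function names here are purely illustrative.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on vast amounts of text.
corpus = (
    "the agency released a statement . the agency denied the report . "
    "the agency confirmed the report ."
).split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'agency' (follows "the" three times vs. "report" twice)
```

Scaling this idea up, from counting word pairs to learning statistical patterns across entire documents, is what lets an LLM produce fluent, contextually plausible text, including plausible-sounding disinformation.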

Malicious uses of LLMs have already demonstrated the ability to sow distrust. A few examples with direct national security implications include fake images of an explosion at the Pentagon, a fake video in which President Zelenskyy ordered Ukrainians to surrender, and a series of fake photos depicting the arrest of former U.S. President Donald Trump. The fake image of an explosion near the Pentagon was widely shared on social media platforms, causing a brief dip in the stock market as people reacted to the supposed incident before officials and experts debunked it. The images of Donald Trump gained more than 5 million views and, whether intended or not, added divisive rhetoric to an already heated political climate.

Disinformation has already been shown to distort public perception, trigger secondary effects in financial markets, and even strain international relations. As we continue to leverage the capabilities of LLMs, it becomes increasingly important to develop strategies to mitigate the spread of disinformation and safeguard against its harmful effects. This requires collaboration between technology developers, policymakers, and the general public to ensure the responsible and ethical use of AI tools in an evolving digital landscape.

Detection Strategies

To counter these threats, researchers are developing tools to detect AI-generated content. Analysts can examine linguistic patterns to identify unnatural language usage or inconsistencies that differ from human writing. Another technique detects digital fingerprints imprinted by AI algorithms in synthetic media. Additionally, corroboration with trusted sources can help verify authenticity. The concepts behind these techniques are not necessarily innovative. Still, their collective use and broad implementation can finally help stakeholders push back against the spread of disinformation.
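One simple linguistic signal detectors look at is "burstiness": human writing tends to vary sentence length more than some machine-generated text. The sketch below, a heuristic illustration rather than a production detector, scores a passage by the variance of its sentence lengths; the sample strings are made up, and real systems combine many such features with trained classifiers.

```python
import statistics

def sentence_length_variance(text):
    """Variance of sentence lengths (in words) -- a crude burstiness proxy."""
    sentences = [s.strip() for s in text.replace("!", ".").replace("?", ".").split(".")]
    lengths = [len(s.split()) for s in sentences if s]
    if len(lengths) < 2:
        return 0.0
    return statistics.variance(lengths)

# Uniform, repetitive sentences (lengths 4, 4, 4 -> variance 0).
uniform = "The model works well. The model runs fast. The model scales easily."
# Bursty sentences (lengths 2, 17, 2 -> high variance).
varied = ("It works. When deployed across several regions, the system scaled far "
          "better than anyone on the team had predicted. Costs fell.")

print(sentence_length_variance(uniform) < sentence_length_variance(varied))  # True
```

A low score is only a weak hint of machine generation, which is why the article stresses using such signals collectively alongside fingerprinting and source corroboration.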

Another effective mitigation strategy involves encouraging AI developers to embed watermarks into AI-generated outputs across multiple media types (text, audio, video, and images), essentially generating a discoverable fingerprint. These watermarks are subtle markers integrated into the content, allowing for detection and verification without altering the user experience. Each media type requires its own watermarking method to address specific technical challenges and ensure effectiveness. For watermarks to be successful, they must include integrity and verification checks to ensure they have not been tampered with, similar to a digital license. This involves utilizing cryptographic techniques such as digital signatures or hashes that securely bind the watermark to the content. A tamper-evident design ensures that watermarks remain intact despite common transformations like compression or formatting changes.
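The cryptographic binding described above can be sketched with a keyed hash (HMAC): the provider computes a tag over the watermark and the content together, so altering either one invalidates verification. This is a minimal illustration only; the key, watermark payload, and message layout are hypothetical, and a real deployment would use asymmetric signatures and robust perceptual hashing so the check survives compression rather than requiring byte-exact content.

```python
import hashlib
import hmac

SECRET_KEY = b"provider-signing-key"  # hypothetical provider-held key

def tag_content(content: bytes, watermark: bytes) -> bytes:
    """Bind the watermark to the content with an HMAC-SHA256 tag."""
    return hmac.new(SECRET_KEY, watermark + content, hashlib.sha256).digest()

def verify(content: bytes, watermark: bytes, tag: bytes) -> bool:
    """Constant-time check that content and watermark match the tag."""
    return hmac.compare_digest(tag_content(content, watermark), tag)

content = b"AI-generated press release text..."
wm = b"model=gen-v2;ts=2024-01-01"  # hypothetical watermark payload
tag = tag_content(content, wm)

print(verify(content, wm, tag))              # True: intact content
print(verify(content + b"edited", wm, tag))  # False: content was altered
```

The design choice to hash watermark and content together (rather than storing the watermark loose in metadata) is what makes the watermark tamper-evident: stripping or swapping it breaks the tag.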

By standardizing the inclusion of watermarks, authorities can quickly identify AI-generated content, verify its authenticity, and trace its origin. This practice stops the average disinformation spreader, deters malicious actors, and supports investigations. The knowledge that AI-generated outputs can be traced back to their origin may deter adversaries from using AI maliciously: if bad actors understand their actions are not anonymous, fewer will take the risk of getting caught. This deterrent effect is a powerful tool in the broader strategy to safeguard truthful information. Ongoing innovation and research are still needed, though, as determined malicious actors will attempt to remove or forge watermarks. Regardless, the successful implementation of watermarks requires widespread adoption and consensus among developers and organizations to standardize and accept such techniques.

Strategic Investments in AI for National Security

There have already been a few disparate attempts to research the AI problem. The U.S. Department of Defense established the Joint Artificial Intelligence Center (JAIC) to enhance AI adoption and ensure secure use. The National Security Commission on Artificial Intelligence (NSCAI) emphasized this need in its 2021 final report, highlighting the necessity for integrated governmental action to safeguard national security (final being the significant word). In April 2023, the Department of Homeland Security announced the establishment of the Homeland Security Artificial Intelligence Task Force to enhance the nation's resilience against AI-driven threats.

AI technologies have matured, and with this growth, so too have malicious applications. A comprehensive whole-of-government approach is crucial to effectively combat AI's negative applications. This collective effort enables the development of cohesive policies and strategies that address the complex nature of AI risks. Establishing international AI-use norms and standards is also crucial in promoting global security and stability. Collaborative efforts among nations can help develop common frameworks for ethical AI development and deployment, emphasizing transparency, accountability, and respect for human rights. The Organization for Economic Co-operation and Development (OECD) initiated international conversations in 2019, discussing topics such as acceptable behavior guidelines in AI development and use. This is a great start, but it is not the impactful determination that the international community needs. In the same way that the Tallinn Manual (developed under NATO's Cooperative Cyber Defence Centre of Excellence) was created to set international standards for cyber warfare, it is crucial for an internationally respected and recognized organization to establish standards and protocols for the use of AI.

Data Poisoning Attacks for Defensive Purposes

The concept of data poisoning is often perceived as a threat to AI systems, but it can also be strategically employed to bolster security measures. By deliberately introducing specific patterns or markers into AI training data, a controlled form of data poisoning, AI engineers can embed identifiable fingerprints within the outputs of AI models. These fingerprints enable the detection and attribution of AI-generated content, assisting in efforts to combat malicious uses. For example, an AI language model can be trained on data containing distinctive linguistic patterns or syntax structures. These patterns are unlikely to occur naturally and can be identified in the generated text, signaling that it originated from a particular model. In image and video generation, slight alterations in pixel arrangements or invisible watermark patterns can be injected during model training to make content more detectable. These modifications can be invisible to human eyes but detectable with specialized algorithms.
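The detection side of such a scheme can be sketched as a marker scan: suppose a model's training data was seeded so the model preferentially emits rare collocations, then output text can be scored by how many of those planted phrases it contains. Everything here is illustrative; the marker phrases are invented for this sketch, and a real system would use statistical tests over token distributions rather than literal substring matching.

```python
# Hypothetical collocations planted into the training data as a fingerprint.
MARKER_PHRASES = [
    "verdant ledger",
    "quietly cascading metrics",
    "a lattice of intent",
]

def fingerprint_score(text: str) -> float:
    """Fraction of planted marker phrases found in the text."""
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in MARKER_PHRASES)
    return hits / len(MARKER_PHRASES)

sample = "The report described a lattice of intent behind the verdant ledger."
print(round(fingerprint_score(sample), 2))  # 0.67: two of three markers present
```

Because the markers are improbable in natural writing, a high score is evidence (though not proof) that the text came from the fingerprinted model, which is what enables the attribution discussed below.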

Weaponizing data poisoning in this manner offers several advantages for national security. Forcing a digital fingerprint enhances attribution, allowing authorities to trace AI-generated content back to its source. This chain of custody creates accountability. The presence of fingerprints aids in rapidly identifying AI-generated content, enabling timely responses to mitigate potential harm. Knowing that AI-generated outputs can be traced may discourage adversaries from using AI maliciously.

Conclusion

The malicious use of artificial intelligence poses significant threats to national security, particularly through the dissemination of AI-generated disinformation. As AI technologies advance, the research and development apparatus must develop robust strategies to overcome these challenges. Detection tools, such as linguistic pattern analysis and digital fingerprinting, are crucial in identifying synthetic media. Furthermore, implementing watermarks in AI-generated content can encourage responsible AI practices, allowing for the verification and authentication of media. By proactively addressing the potential misuse of AI, we can safeguard national security and uphold the integrity of information in the digital age.

Further Technical Reading:
Media Forensics and DeepFakes
Deepfake Media Forensics: State of the Art and Challenges Ahead

Disclaimer: The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force or the U.S. Government.
