
AI as an Analytic Force Multiplier: Opportunities in Intelligence Agencies

Sfetcu, Nicolae (2026), AI as an Analytic Force Multiplier: Opportunities in Intelligence Agencies, Intelligence Info, 5:1, 94-109, DOI: 10.58679/II17928, https://www.intelligenceinfo.org/ai-as-an-analytic-force-multiplier-opportunities-in-intelligence-agencies/

 

Abstract

Intelligence agencies have always been shaped by technologies that expand what can be collected, processed, and understood about the world. In the contemporary intelligence environment, the defining constraint is not scarcity of information but abundance: persistent surveillance, expanding sensor networks, proliferating digital communications, and the explosive growth of open-source data have created “data deluge” conditions in which human attention becomes the limiting factor. This article surveys major uses and applications of AI in intelligence agencies across the intelligence cycle (collection through dissemination), highlights representative public programs (especially in geospatial intelligence and language technologies), and evaluates governance and risk-management challenges – such as bias, transparency, security, and the dangers of automation-driven error propagation – drawing on official frameworks and peer-reviewed research.

Keywords: AI, artificial intelligence, AI opportunities, intelligence agencies, intelligence cycle, GEOINT, IMINT, SIGINT, OSINT, HUMINT, AI vulnerabilities, AI ethics


 

INTELLIGENCE INFO, Volume 5, Number 1, March 2026, pp. 94-109
ISSN 2821-8159, ISSN-L 2821-8159, DOI: 10.58679/II17928
URL: https://www.intelligenceinfo.org/ai-as-an-analytic-force-multiplier-opportunities-in-intelligence-agencies/
© 2026 Nicolae SFETCU. Responsibility for the content, interpretations, and opinions expressed rests exclusively with the authors.

 

AI as an Analytic Force Multiplier: Opportunities in Intelligence Agencies

Nicolae SFETCU[1]
nicolae@sfetcu.com

[1] Researcher – Division of History of Science (DIS)/Romanian Committee of History and Philosophy of Science and Technology (CRIFST) of the Romanian Academy, ORCID: 0000-0002-0162-9973, Web of Science Researcher ID V-1416-2017

 

Introduction

Intelligence agencies have always been shaped by technologies that expand what can be collected, processed, and understood about the world. In the contemporary intelligence environment, the defining constraint is not scarcity of information but abundance: persistent surveillance, expanding sensor networks, proliferating digital communications, and the explosive growth of open-source data have created “data deluge” conditions in which human attention becomes the limiting factor. Artificial intelligence (AI) – including machine learning (ML), natural language processing (NLP), computer vision, and, more recently, foundation models and generative AI—has therefore become central to the modernization agendas of many intelligence services. Publicly available policies and programs suggest that agencies view AI less as a substitute for analysts than as a capability for triage, pattern discovery, decision support, and workflow acceleration, while attempting to preserve legal compliance, human judgment, and accountability. (GCHQ 2026)

The main potential and actual uses of AI in intelligence agencies include the automation of administrative and organizational processes, cybersecurity processes, and information analysis through “AI-enhanced augmented intelligence” (Babuta et al. 2023). According to Weinbaum & Shanahan,

“Future intelligence tradecraft will depend on accessing data, molding the right enterprise architecture around data, developing AI-based capabilities to dramatically accelerate contextual understanding of data through human-machine and machine-machine teaming, and growing analytic expertise capable of swimming and navigating in enormous data lakes.” (Weinbaum and Shanahan 2018).

This article surveys major uses and applications of AI in intelligence agencies across the intelligence cycle (collection through dissemination), highlights representative public programs (especially in geospatial intelligence and language technologies), and evaluates governance and risk-management challenges – such as bias, transparency, security, and the dangers of automation-driven error propagation – drawing on official frameworks and peer-reviewed research.

AI Across the Intelligence Cycle

A useful way to map AI applications is to align them with the intelligence cycle: collection, processing and exploitation, analysis and production, and dissemination. Across these stages, AI primarily functions as (1) automation of repetitive tasks, (2) augmentation that helps humans search and prioritize, and (3) prediction/forecasting that supports anticipatory decision-making.

Here are some common uses of AI by intelligence agencies:

  • Data collection and processing: AI systems can automatically collect, clean and process large amounts of structured and unstructured data (“big data”) from various sources, including open-source information, social media, and classified documents. This capability allows analysts to access a wider range of information quickly and efficiently. AI algorithms can analyze large amounts of data to identify patterns and trends that may not be apparent to human analysts. This can help detect potential threats or suspicious activity.
    • Web scraping: AI can automate the collection of data from a variety of sources on the Internet, including social media, news articles, and public databases.
    • Text analytics: Artificial intelligence-based natural language processing (NLP) can extract valuable insights from large amounts of unstructured text data, allowing analysts to quickly identify trends, sentiments, and key insights.
    • AI algorithms can be used to process and analyze large amounts of data, including open-source information, satellite imagery and social media content.
    • Machine learning techniques can identify patterns, anomalies, and potential threats in unstructured data sources, allowing unusual behavior to be detected; this is especially valuable for spotting emerging trends and unconventional threats (a minimal anomaly-detection sketch follows after this list).
  • Natural Language Processing (NLP): NLP technology enables agencies to automatically process and understand large volumes of textual data, including written reports, social media posts, emails and more. Sentiment analysis can help gauge public opinion and sentiment around various topics of interest.
    • NLP enables agencies to process and understand large amounts of textual data, including multilingual and encrypted communications.
    • Sentiment analysis can help gauge public opinion on social media platforms and news outlets, identify potential risks, and provide indications of potential unrest or of public support for particular issues or actors.
    • Behavioral analytics: By monitoring user and system behavior, data analytics can identify suspicious activity, helping to prevent insider threats.
  • Image, face, and video recognition: Advanced computer vision algorithms enable the analysis of images and videos to identify objects, locations and individuals. This is crucial for tracking and identifying targets of interest. AI can be used to analyze images and videos from surveillance cameras, drones, or other sources to identify objects, people or anomalies of interest. Facial recognition and object detection technologies are particularly relevant to security and counter-terrorism efforts.
    • AI-based computer vision can analyze images and videos to identify objects, people and locations, which is valuable for tracking and monitoring.
    • AI-based image analysis tools can identify objects, locations and even people in photos and videos.
    • Facial recognition technology helps identify potential threats or persons of interest.
    • Intelligence services use artificial intelligence-based systems to monitor and track the activities of individuals and groups of interest.
  • Speech and audio analysis:
    • AI can transcribe and analyze spoken language, making it useful for monitoring communications and conversations in different languages.
    • Speech recognition technology can transcribe and analyze audio recordings, helping intelligence agencies monitor and track conversations and identify specific speakers or dialects.
  • Geospatial analysis: AI can process geospatial data, such as satellite imagery and GPS data, to monitor the movement of military forces, infrastructure development, and other geographic aspects of interest.
  • Social media monitoring:
    • AI can sift through large amounts of social media data to identify emerging threats, track the activities of individuals or groups, and monitor sentiment.
  • Threat identification:
    • AI can scan vast data sets to identify individuals or entities of interest and track their activities and associations over time.
  • Threat assessment: AI systems can help assess the credibility and severity of threats and predict potential terrorist activities by analyzing various data sources, including online communications.
    • AI can help assess the credibility and severity of threats by analyzing a wide range of data sources and identifying common indicators of potential threats.
    • AI algorithms can identify patterns and anomalies in data, making it easier to spot potential threats or trends that might be overlooked by human analysts.
  • Augmented analytics: AI-augmented analysis has been variously defined, but broadly it is the use of AI to “…enhance human intelligence rather than operate independently of or outright replace it. It’s designed to do so by improving human decision-making and, by extension, actions taken in response to improved decisions” (IEEE 2019). Augmented intelligence analysis has been made possible by new developments in AI technology, particularly machine learning and deep learning, and has been applied in areas such as counterterrorism, human rights monitoring and humanitarian work, and the collection of surveillance information (Blanchard and Taddeo 2023).
  • Data fusion: AI can integrate data from multiple sources, including human intelligence (HUMINT), signals intelligence (SIGINT), and open-source intelligence (OSINT), to provide a comprehensive picture of a situation.
  • Cybersecurity: As technology becomes integrated into every aspect of modern life, the security of digital infrastructure is of paramount importance. AI is used to detect and respond to cyber threats, including monitoring network traffic for suspicious activity, identifying vulnerabilities, and predicting potential cyber-attacks. It can also be used in offensive cyber operations.
    • Intelligence agencies use AI to improve cyber security by detecting and mitigating cyber threats.
    • AI-based intrusion detection systems can identify unusual network activities and vulnerabilities.
    • Cyber espionage is a pervasive and evolving threat that poses significant challenges to national security, corporate interests, and individual privacy.
    • Advanced persistent threats (APTs) are a class of cyber threats that pose a significant challenge to organizations and nations around the world. APTs are known for their advanced tactics, techniques, and procedures, as well as their ability to infiltrate and operate persistently on target systems for long periods of time.
  • Counter-terrorism: AI can help identify people with links to extremist groups or detect the online dissemination of extremist content.
  • Simulations and modeling: AI can be used to create predictive models and simulations to better understand complex geopolitical situations, potential conflicts, or the impact of policy changes.
  • Predictive analytics: Machine learning models can forecast potential security threats and geopolitical developments based on historical data, current events, and various indicators. Predictive analytics helps intelligence agencies proactively prepare for potential threats.
    • AI can help with predictive analytics by evaluating historical data to forecast potential security threats and trends.
    • Machine learning models can help identify emerging threats and vulnerabilities.
    • Machine learning models can predict future events or trends based on historical data, helping intelligence agencies anticipate potential security risks.

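As a concrete illustration of the anomaly-detection idea referenced in the list above, the following minimal Python sketch flags unusual records in a small feature matrix using scikit-learn’s IsolationForest. The feature values, the injected outliers, and the contamination setting are all invented for illustration; an operational system would use domain-specific features, calibrated thresholds, and human review of every flagged item.

    # Minimal anomaly-detection sketch (illustrative only).
    # Each row is a hypothetical observed event with features such as
    # message volume, unique contacts, and off-hours activity ratio.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)
    normal = rng.normal(loc=[100, 20, 0.1], scale=[15, 5, 0.05], size=(500, 3))
    unusual = np.array([[400.0, 90.0, 0.8], [5.0, 1.0, 0.95]])  # injected outliers
    events = np.vstack([normal, unusual])

    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(events)

    scores = model.decision_function(events)   # lower score = more anomalous
    flags = model.predict(events)              # -1 = anomaly, 1 = normal
    for idx in np.where(flags == -1)[0]:
        print(f"event {idx}: score={scores[idx]:.3f} -> route to analyst review")

A score threshold like this only prioritizes items for human review; it does not by itself establish that anything suspicious has occurred.
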
Collection: tasking, triage, and sensor optimization (high level)

In collection, AI can help prioritize what data to gather and where to focus limited collection resources. While operational details are typically classified, publicly described research programs indicate the thrust: automating broad-area search of satellite imagery, detecting change over time, and identifying candidate events of interest for human review. IARPA’s SMART program, for example, aims to use machine learning to detect and characterize large-scale natural or anthropogenic processes (e.g., construction activity, crop growth) across multi-source satellite imagery, effectively shifting collection/monitoring from “human scan” to machine-enabled triage. (IARPA 2026b)
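
The program description above stays at a high level; the toy sketch below shows, under strong simplifying assumptions, how broad-area change detection can be approximated by pixel-wise differencing of two co-registered, radiometrically comparable image tiles, with heavily changed tiles queued for human review. The file names, threshold, and tasking rule are hypothetical; real GEOINT pipelines rely on far more robust methods (precise co-registration, atmospheric correction, learned change detectors).

    # Toy change-detection sketch (illustrative, not an operational method).
    # "tile_t0.png" and "tile_t1.png" are hypothetical co-registered tiles.
    import numpy as np
    from PIL import Image

    def load_gray(path: str) -> np.ndarray:
        """Load an image as a float grayscale array (0-255 scale)."""
        return np.asarray(Image.open(path).convert("L"), dtype=np.float32)

    t0 = load_gray("tile_t0.png")
    t1 = load_gray("tile_t1.png")

    diff = np.abs(t1 - t0)            # per-pixel absolute change
    changed = diff > 40.0             # hypothetical change threshold
    change_fraction = changed.mean()

    print(f"{change_fraction:.1%} of pixels changed")
    if change_fraction > 0.02:        # hypothetical tasking rule
        print("Tile queued for analyst review / follow-up collection")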

Processing and exploitation: turning raw data into usable signals

Processing and exploitation (often abbreviated “PED” in imagery contexts) is where AI has long provided measurable value, because it involves high-volume transformation tasks:

  • Computer vision for object detection, scene classification, and change detection in imagery and full-motion video.
  • Speech and language technologies for transcription, translation, entity extraction, and document clustering (a simple clustering sketch follows after this list).
  • Data fusion methods that align multi-modal sources (text, imagery, signals metadata) into consistent representations for query and link analysis.

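As a toy illustration of the document clustering mentioned in the list above, the sketch below groups a handful of short texts using TF-IDF features and k-means from scikit-learn. The documents and the cluster count are invented; real exploitation pipelines operate over multilingual corpora at vastly larger scale and with human validation of the resulting groupings.

    # Toy document-clustering sketch for processing/exploitation triage.
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "Port construction activity observed near the northern harbor",
        "New crane and dredging equipment delivered to the harbor site",
        "Local press reports protests over fuel prices in the capital",
        "Demonstrations continue in the capital over rising fuel costs",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    features = vectorizer.fit_transform(docs)

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
    for doc, label in zip(docs, kmeans.labels_):
        print(label, doc)
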
A prominent public example is the U.S. defense intelligence ecosystem’s use of computer vision to assist with reviewing drone or surveillance video. Project Maven (a Department of Defense initiative closely tied to intelligence workflows) was explicitly established to provide computer vision algorithms that detect and classify objects in full-motion video for analyst review—an archetypal PED acceleration use case. (DoD 2017)
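
Project Maven’s actual algorithms are not public. Purely to illustrate the general idea of machine-assisted video triage, the sketch below uses OpenCV background subtraction to flag frames with significant motion for analyst review; the file name and both thresholds are hypothetical, and a real system would use trained object detectors rather than simple motion cues.

    # Toy full-motion-video triage sketch: flag frames with significant motion.
    # "patrol_feed.mp4" and the thresholds are hypothetical placeholders.
    import cv2

    cap = cv2.VideoCapture("patrol_feed.mp4")
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)

    frame_idx, flagged = 0, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)        # foreground/motion mask
        motion_ratio = (mask > 0).mean()      # fraction of "moving" pixels
        if motion_ratio > 0.05:               # hypothetical trigger level
            flagged.append(frame_idx)
        frame_idx += 1
    cap.release()

    print(f"{len(flagged)} of {frame_idx} frames flagged for analyst review")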

Language processing for intelligence work is illustrated by IARPA-supported efforts to improve machine translation and triage in low-resource languages. The MATERIAL program was designed to enable effective triage and analysis of large volumes of data in less-studied languages, emphasizing adaptability when training data are scarce—conditions common in intelligence environments. (NIST 2017a)

Analysis and production: discovery, fusion, anomaly detection, and forecasting

At the analysis stage, AI supports all-source fusion, pattern detection, and hypothesis generation/testing. Applications include:

  • Entity-centric analytics (link analysis across people, organizations, locations, events; see the link-analysis sketch after this list)
  • Anomaly detection (identifying unusual behavior that may warrant investigation)
  • Forecasting and early warning (estimating likelihoods of events or trends)

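To make the entity-centric link analysis above concrete, the sketch below builds a small entity graph with networkx and ranks nodes by degree centrality. The entities and relationships are invented; real all-source graphs are far larger, and centrality is only one of many signals an analyst would weigh.

    # Toy link-analysis sketch: build an entity graph and rank nodes by centrality.
    import networkx as nx

    # Hypothetical (entity, entity, relationship) triples extracted from reporting.
    edges = [
        ("Person A", "Company X", "director_of"),
        ("Person A", "Person B", "met_with"),
        ("Person B", "Shipment 17", "arranged"),
        ("Company X", "Shipment 17", "consignee"),
        ("Person C", "Company X", "employee_of"),
    ]

    graph = nx.Graph()
    for src, dst, rel in edges:
        graph.add_edge(src, dst, relation=rel)

    centrality = nx.degree_centrality(graph)
    for entity, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
        print(f"{entity:12s} centrality={score:.2f}")
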
IARPA’s HAYSTAC program, for example, aims to build models of “normal” movement patterns across times and populations and characterize what makes activity atypical—an approach applicable to identifying suspicious trajectories or emergent behaviors in sensor-rich environments (with accompanying privacy responsibilities explicitly noted in the program description). (ODNI 2026b)

Forecasting has also been an explicit intelligence R&D target. IARPA’s ACE program sought to improve the accuracy and timeliness of intelligence forecasts by combining judgments across analysts with advanced aggregation techniques—an early signal of the broader trend toward probabilistic, data-assisted analytic tradecraft. (ODNI 2026a)
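
ACE’s aggregation methods are considerably more sophisticated and are not reproduced here; the sketch below only illustrates one simple scheme from the forecasting literature: averaging analyst probabilities in log-odds space and then “extremizing” the pooled estimate, which often outperforms a plain arithmetic mean of conservative judgments. The analyst estimates and the extremizing exponent are illustrative values.

    # Simple forecast-aggregation sketch (not the ACE method itself).
    import math

    def aggregate(probabilities, extremize=1.5):
        """Average probabilities in log-odds space, then extremize the result."""
        logits = [math.log(p / (1.0 - p)) for p in probabilities]
        mean_logit = sum(logits) / len(logits)
        return 1.0 / (1.0 + math.exp(-mean_logit * extremize))

    analyst_estimates = [0.60, 0.70, 0.55, 0.65]   # hypothetical judgments
    print(f"Aggregated probability: {aggregate(analyst_estimates):.2f}")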

A critical caveat is that AI tools can unintentionally amplify confirmation bias: systems tuned to retrieve evidence similar to what analysts have previously labeled as relevant may be less effective at surfacing contradictory indicators. MITRE has argued that current AI often helps find supporting evidence but not disconfirming evidence, potentially reinforcing premature analytic closure unless deliberately mitigated. (Shea 2021)

Dissemination: summarization, drafting support, and traceable reporting

Dissemination increasingly involves AI-assisted writing and summarization—especially with generative AI—though agencies emphasize “human in the loop.” IARPA’s REASON program aims to improve the evidence and reasoning in draft analytic reports by pointing analysts to relevant evidence and helping evaluate alternative explanations, framing AI as a tradecraft assistant rather than an autonomous author of judgments. (IARPA 2026a)
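
REASON’s internals are likewise not public; the sketch below only illustrates the generic retrieval step such a tradecraft assistant could rely on: ranking candidate evidence passages by TF-IDF cosine similarity against a draft judgment, so that the analyst, not the model, decides what to cite. The draft judgment and passages are invented.

    # Toy evidence-retrieval sketch: rank passages against a draft judgment.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    draft_judgment = "Country Z is likely expanding missile production at Site Q."
    passages = [
        "Commercial imagery shows new buildings and truck traffic at Site Q.",
        "Officials from Country Z denied any change in defense industrial output.",
        "Regional fuel prices rose sharply last quarter.",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([draft_judgment] + passages)
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

    for score, passage in sorted(zip(scores, passages), reverse=True):
        print(f"{score:.2f}  {passage}")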

Publicly described tools also indicate a focus on open-source intelligence (OSINT) workflows. A CIA-associated publication describing the intelligence community’s approach to OSINT notes an “OSIRIS” tool that applies generative AI to develop insights from open-source material and reached initial operational capability in 2023 – illustrating how generative AI is being positioned as an interface layer for searching, summarizing, and querying large unclassified corpora. (Pulju 2024)

Domain Applications: GEOINT, SIGINT, OSINT, HUMINT Support, and Cyber

Although “INTs” overlap in practice, AI applications often cluster by data type.

GEOINT / IMINT: computer vision at scale and model accreditation

Geospatial intelligence has become one of the most visible domains for AI adoption because imagery volume is massive and amenable to machine vision. The U.S. National Geospatial-Intelligence Agency (NGA) has publicly articulated a vision to apply AI across the GEOINT mission, aiming for increasingly accurate detections and reports that support defense and intelligence partners. (NGA 2026)

Equally important is the move from experimental models to governed deployment. NGA has described an Accreditation of GEOINT AI Models (AGAIM) initiative to standardize evaluation and risk management for GEOINT models – an attempt to professionalize assurance, interoperability, and trust in operational AI. (NGA 2024)

These efforts align with the broader reality that, in imagery analysis, the question is rarely whether AI can detect anything; it is whether detections are sufficiently reliable, explainable, and calibrated for the decision context, and whether errors (false positives/negatives) are well understood and bounded.
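
As a small numerical illustration of that “well understood and bounded” requirement, the sketch below computes a confusion matrix, precision, and recall for a hypothetical imagery detector scored against analyst-verified labels, using scikit-learn; all labels are invented.

    # Toy detector-evaluation sketch: quantify false positives and false negatives.
    from sklearn.metrics import confusion_matrix, precision_score, recall_score

    # 1 = object present, 0 = absent (hypothetical analyst-verified ground truth).
    truth      = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1]
    detections = [1, 1, 0, 0, 1, 0, 0, 1, 0, 1]

    tn, fp, fn, tp = confusion_matrix(truth, detections).ravel()
    print(f"TP={tp} FP={fp} FN={fn} TN={tn}")
    print(f"precision={precision_score(truth, detections):.2f}")
    print(f"recall={recall_score(truth, detections):.2f}")

Metrics like these only characterize a detector on a given test set; calibrating them to the operational decision context (base rates and the costs of each error type) remains a separate, harder problem.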

SIGINT and language-heavy workflows: speech, translation, and triage

In signals and communications contexts, AI’s role is often to reduce raw streams into searchable, structured artifacts: transcription, language identification, translation, topic clustering, and entity extraction. The MATERIAL program’s emphasis on low-resource languages reflects a core intelligence challenge: adversaries do not communicate in conveniently high-resource, commercially optimized language domains. (NIST 2017b)

Generative AI raises the prospect of more natural-language querying of large corpora, but it also heightens the risk of “plausible-sounding” errors. A CIA Studies in Intelligence article on the promise and peril of AI warns that while generative tools can accelerate summaries and spark insights, they can also produce falsehoods and encourage overreliance, turning a tool into a “crutch” if not carefully governed. (Brown 2024)

OSINT: scalable collection, narrative discovery, and “gray noise” reduction

OSINT is a natural fit for AI because open data is voluminous, heterogeneous, and fast-moving. Public reporting notes that U.S. intelligence agencies have experimented with generative AI assistants to summarize and query large bodies of open-source information, driven by urgency about exponential data growth. (Bajak 2024)

NATO-related publications also discuss ML-enabled narrative search and social media analytics, illustrating how AI supports discovery of themes and narratives in large online ecosystems (with the obvious caveat that such analytics can be brittle, vulnerable to manipulation, and prone to false inference if not paired with rigorous validation). (Forrester et al. 2021)

HUMINT support: prioritization, safety, and administrative acceleration

AI’s use in HUMINT-related work is typically described publicly in indirect terms: automating administrative burdens, prioritizing leads, and supporting analytical fusion rather than replacing human source handling. The key value proposition is often time: reducing paperwork, surfacing relevant context, and helping analysts connect disparate reports. Public discussions of OSINT integration also emphasize that agencies are reconsidering how open-source capabilities complement other INTs rather than standing alone. (Pulju 2024)

Cyber intelligence and defensive operations: anomaly detection and automated triage

Cyber is already highly automated; AI extends this by improving anomaly detection, malware classification, and incident triage. At the same time, cyber is the domain where adversaries can most directly attack AI systems through data poisoning, model inversion, and manipulation – making AI security a first-order intelligence concern.

Securing AI: Adversarial Threats, Backdoors, and Model Assurance

Intelligence agencies face a distinctive AI problem: they must often deploy models in contested environments against adaptive adversaries. This makes adversarial ML and supply-chain risk unusually salient.

IARPA’s TrojAI program is a clear public example of this concern. It seeks to detect “Trojan” (backdoored) AI systems before deployment – recognizing that an attacker could embed hidden behaviors that trigger under particular inputs. (IARPA 2026c)
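
TrojAI’s detection techniques are substantially more sophisticated and are not publicly documented as code; the sketch below conveys only the basic intuition with a crude screen: patch a fixed candidate trigger onto clean inputs and measure how often the model’s predictions change. The model stub, trigger, input shapes, and threshold are all placeholders, and a high flip rate would at most justify deeper inspection, not a conclusion.

    # Crude backdoor screen (intuition only; not TrojAI's method).
    import numpy as np

    def predict(batch: np.ndarray) -> np.ndarray:
        """Placeholder for the model under test; returns one class id per input."""
        return np.zeros(len(batch), dtype=int)   # stand-in behaviour

    def trigger_flip_rate(images: np.ndarray, trigger: np.ndarray) -> float:
        """Fraction of inputs whose prediction changes when the trigger is pasted in."""
        clean_preds = predict(images)
        patched = images.copy()
        patched[:, :4, :4] = trigger              # paste a small patch in one corner
        return float(np.mean(predict(patched) != clean_preds))

    images = np.random.rand(64, 32, 32).astype(np.float32)   # dummy clean inputs
    trigger = np.ones((4, 4), dtype=np.float32)               # hypothetical trigger
    rate = trigger_flip_rate(images, trigger)
    print(f"prediction flip rate under candidate trigger: {rate:.2%}")
    if rate > 0.5:                                             # hypothetical threshold
        print("Behaviour is suspicious; escalate for deeper model inspection")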

From an organizational standpoint, this shifts “AI adoption” from a question of model accuracy alone to a broader assurance regime: provenance of training data, integrity of model weights, red-teaming, secure MLOps pipelines, and continuous monitoring for drift or compromise. These concerns also intersect with the reality that intelligence agencies may increasingly procure or fine-tune large foundation models – systems whose complexity and opacity raise the difficulty of assurance.

IoT Vulnerabilities

As the Internet of Things (IoT) grows, so do the vulnerabilities of these devices, creating opportunities for attackers to compromise networks. IoT devices come in many shapes and sizes, with different operating systems, firmware, and communication protocols. This heterogeneity creates a complex security landscape, making it difficult to implement uniform security measures across devices.

The Internet of Things (IoT) comprises devices with sensors, processing capability, software, and other technologies that connect and exchange data with other devices and systems via the Internet or other communication networks (Shafiq et al. 2022). The growth of IoT technologies and products raises privacy and security concerns that require specific approaches from governments to develop international and local standards, guidelines, and regulatory frameworks (NYC 2021). In this context, the Internet of Military Things (IoMT) is a class of Internet of Things for combat and war operations: a complex network of interconnected entities that continuously communicate with each other to coordinate, learn, and interact with the physical environment in order to perform a wide range of activities more efficiently and in a better-informed way (Cameron 2018). Future military battles will be dominated by machine intelligence and cyber warfare (Kott et al. 2015). In the IoMT it is possible to incorporate inanimate and harmless objects, such as plants and rocks, by equipping them with sensors that transform them into information collection points (Saxena 2017; Mattern and Flörkemeier 2010). In the IoMT, communication between the entities involved (Gudeman 2017) and mutual collaboration between human agents and electronic entities in the network (Lawless et al. 2019) are essential.

Governance and Ethics: Law, Accountability, and Human Judgment

AI in intelligence is not merely a technical matter; it raises questions of democratic legitimacy, civil liberties, and public trust. Several intelligence services have therefore published ethical principles and governance frameworks.

The U.S. Office of the Director of National Intelligence (ODNI) released Principles of AI Ethics for the Intelligence Community, emphasizing lawful use, integrity, transparency/accountability, objectivity/equity, human-centered development, and security/resilience. (DNI 2026) The U.S. intelligence community has also provided a supporting AI Ethics Framework intended to guide personnel on how to procure, build, use, and manage AI and its data responsibly. (INTEL 2020)

As foundation models became more prominent, the intelligence community issued interim guidance regarding the acquisition and use of foundation AI models, explicitly anchoring AI use within existing legal authorities and privacy/civil-liberties protections (e.g., references to EO 12333 and Attorney General guidelines in the document). (INTEL 2026)

The United Kingdom’s GCHQ has likewise framed its approach in terms of “augmented intelligence,” emphasizing AI that collates and highlights significant data for analysts rather than replacing them – positioning human judgment as central to legitimacy and reliability. (GCHQ 2026)

From the research literature, ethical analysis stresses recurring challenges: opacity, bias, the risk of automation complacency, and difficulties of accountability when machine outputs influence high-stakes decisions. A systematic review on the ethics of AI for intelligence analysis highlights these issues and argues that ethical risks are not peripheral but structurally tied to how AI is deployed in analytic workflows. (Blanchard and Taddeo 2023)

A complementary perspective comes from risk-management standards such as NIST’s AI Risk Management Framework (AI RMF 1.0), which – while not intelligence-specific – offers a structured approach to mapping, measuring, and managing AI risks across the lifecycle. This kind of framework is particularly relevant for intelligence agencies attempting to institutionalize testing, evaluation, and governance rather than relying on ad hoc adoption. (Tabassi 2023)

High-Stakes Uses and Public Controversy

AI is expected to be particularly useful in the field of intelligence, due to the large data sets available for analysis (Congressional Research Service 2020). Project Maven incorporates computer vision and AI algorithms into intelligence-gathering cells to identify hostile activities (Corrigan 2017). The Central Intelligence Agency (CIA) has approximately 140 projects in development that use AI (Tucker 2017). IARPA is working on projects to develop algorithms for recognizing and translating multilingual speech in noisy environments, geolocating images without the associated metadata, fusing 2D images to create 3D models, and building tools for inferring the function of a building based on pattern-of-life analysis (IARPA 2023); AI is also being applied in military logistics, for example to predictive aircraft maintenance (Weisgerber 2017).

To combat deepfake technologies, DARPA launched the Media Forensics (MediFor) project, which aims to “automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video” (Corvey 2017), and SemaFor, which seeks to develop algorithms that automatically detect, attribute, and characterize different types of deepfakes (Congressional Research Service 2020).

AI can also be used to create “digital life patterns,” in which an individual’s digital “fingerprint” is combined and correlated with purchase history, credit reports, professional CVs, and subscriptions to create a comprehensive behavioral profile (Watts 2021).

The DoD developed the concept of Joint All-Domain Command and Control (JADC2) (Hoehn 2020) to create a single source of information, also known as a “common operational picture,” for decision makers (Clark 2017). Related efforts include the Army’s Project Convergence and the Air Force’s Advanced Battle Management System (Koester 2020), while DARPA’s Mosaic Warfare program also seeks to leverage AI (DARPA 2017).

The DoD is also testing AI capabilities that enable cooperative behavior, or swarming (the simultaneous action of many small attack units) (Russon 2015).

Some reported intelligence uses of AI – especially in surveillance and targeting contexts – have become politically and ethically contentious. Investigative reporting has described AI-enabled systems used to accelerate target identification or surveillance analysis, raising concerns about error rates, dehumanization, and inadequate human oversight. (McKernan and Davies 2024)

For academic analysis, such cases underscore a central governance problem: when AI shifts the bottleneck from “finding candidates” to “approving actions,” institutional pressure may reduce the meaningfulness of human review. Whether or not specific allegations are fully substantiated, the pattern is plausible and consistent with well-known sociotechnical dynamics: increased throughput can normalize lower scrutiny unless policy, training, and audit mechanisms deliberately preserve deliberation.

Conclusion

AI is reshaping intelligence agencies primarily by compressing time and expanding analytic reach: computer vision accelerates GEOINT and video exploitation; NLP and translation help triage multilingual data; anomaly detection and forecasting support early warning; and generative AI is emerging as a workflow layer for summarization, search, and drafting support. Public programs such as IARPA’s SMART, HAYSTAC, MATERIAL, REASON, and TrojAI, along with NGA’s push for enterprise AI and model accreditation, illustrate that intelligence adoption is not only about capability but also about scaling assurance, evaluation, and governance. (IARPA 2026b)

At the same time, the very features that make AI valuable – speed, automation, pattern detection—also magnify risks: biased outputs, hallucinated summaries, confirmation bias, adversarial manipulation, and accountability gaps. Intelligence-specific ethical frameworks (ODNI; GCHQ) and broader risk-management standards (NIST AI RMF) represent attempts to operationalize responsible AI, but these frameworks must be continuously tested against real deployment pressures – especially where AI touches sensitive surveillance or kinetic decision chains. (DNI 2026)

Ultimately, AI in intelligence agencies should be understood as a sociotechnical transformation: it changes not only what analysts can do, but how agencies define evidence, manage uncertainty, and justify conclusions under law and democratic oversight. The long-term effectiveness of AI-enabled intelligence will likely depend less on any single model’s accuracy than on the robustness of the institutions – technical, legal, and ethical—that surround and constrain these tools.

Bibliography

  • Babuta, Alexander, Marion Oswald, and Ardi Janjeva. 2023. “Artificial Intelligence and UK National Security: Policy Considerations.” November 2. https://rusi.org.
  • Bajak, Frank. 2024. “US Intelligence Agencies’ Embrace of Generative AI Is at Once Wary and Urgent.” AP News, May 23. https://apnews.com/article/us-intelligence-services-ai-models-9471e8c5703306eb29f6c971b6923187.
  • Blanchard, Alexander, and Mariarosaria Taddeo. 2023. “The Ethics of Artificial Intelligence for Intelligence Analysis: A Review of the Key Challenges with Recommendations.” Digital Society 2 (1): 12. https://doi.org/10.1007/s44206-023-00036-4.
  • Brown, Zachery Tyson. 2024. “‘The Incalculable Element’: The Promise and Peril of Artificial Intelligence.” Studies in Intelligence 68 (1). https://www.cia.gov/resources/csi/static/643e18ba5bf779749a14059019db53b2/Article-The-Promise-and-Peril-of-Artificial-Intelligence-Studies-68-1-March-2024.pdf.
  • Cameron, Lori. 2018. “Internet of Things Meets the Military and Battlefield: Connecting Gear and Biometric Wearables for an IoMT and IoBT.” IEEE Computer Society, March 1. https://www.computer.org/publications/tech-news/research/internet-of-military-battlefield-things-iomt-iobt/.
  • Clark, Colin. 2017. “‘Rolling The Marble:’ BG Saltzman On Air Force’s Multi-Domain C2 System.” Breaking Defense, August 8. https://breakingdefense.sites.breakingmedia.com/2017/08/rolling-the-marble-bg-saltzman-on-air-forces-multi-domain-c2-system/.
  • Congressional Research Service. 2020. “Artificial Intelligence and National Security (R45178).” https://crsreports.congress.gov/product/details?prodcode=R45178.
  • Corrigan, Jack. 2017. “Indian Strategic Studies: Three-Star General Wants AI in Every New Weapon System.” https://www.strategicstudyindia.com/2017/11/three-star-general-wants-ai-in-every.html.
  • Corvey, William. 2017. “Media Forensics.” https://www.darpa.mil/program/media-forensics.
  • DARPA. 2017. “Strategic Technology Office Outlines Vision for ‘Mosaic Warfare.’” https://www.darpa.mil/news-events/2017-08-04.
  • DNI. 2026. “Principles of Artificial Intelligence Ethics for the Intelligence Community.” https://www.dni.gov/files/ODNI/documents/Principles_of_AI_Ethics_for_the_Intelligence_Community.pdf.
  • DoD. 2017. “Establishment of an Algorithmic Warfare Cross-Functional Team (Project Maven).” https://www.govexec.com/media/gbc/docs/pdfs_edit/establishment_of_the_awcft_project_maven.pdf.
  • Forrester, Bruce, Shadi Ghahar-Khosravi, and Suzanne Waldman. 2021. “Machine Learning-Enabled Narrative Search in the Information Environment.” In STO Meeting Proceedings STO-MP-SAS-OCS-ORA-2021. NATO Science and Technology Organization. https://publications.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-SAS-OCS-ORA-2021/MP-SAS-OCS-ORA-2021-AIML-02-4.
  • GCHQ. 2026. “Pioneering a New National Security: The Ethics of Artificial Intelligence.” https://www.gchq.gov.uk/artificial-intelligence/index.html.
  • Gudeman, Kim. 2017. “Next-Generation Internet of Battle Things (IoBT) Aims to Help Keep Troops and Civilians Safe.” https://ece.illinois.edu/newsroom/news/3875.
  • Hoehn, John R., and Nishawn S. Smagh. 2020. “Defense Capabilities: Joint All-Domain Command and Control.” https://catalog.library.vanderbilt.edu/discovery/fulldisplay/alma991043717816903276/01VAN_INST:vanui.
  • IARPA. 2023. “Research Programs.” https://www.iarpa.gov/index.php/research-programs.
  • IARPA. 2026a. “REASON.” https://www.iarpa.gov/research-programs/reason.
  • IARPA. 2026b. “SMART – Space-Based Machine Automated Recognition Technique.” https://www.iarpa.gov/research-programs/smart.
  • IARPA. 2026c. “TrojAI – Trojans in Artificial Intelligence.” https://www.iarpa.gov/research-programs/trojai.
  • IEEE. 2019. “What Is Augmented Intelligence? – IEEE Digital Reality.” https://digitalreality.ieee.org/publications/what-is-augmented-intelligence.
  • INTEL. 2020. “Artificial Intelligence Ethics Framework for the Intelligence Community.” https://www.intelligence.gov/ai/ai-ethics-framework.
  • INTEL. 2026. “Common Intelligence Community Interim Guidance Regarding the Acquisition and Use of Foundation AI Models.” https://www.intel.gov/assets/documents/702-documents/declassified/2024/Common_Intelligence_Community_Interim_Guidance.pdf.
  • Koester, Jay. 2020. “JADC2 ‘Experiment 2’ Provides Looking Glass into Future Experimentation.” Www.Army.Mil. https://www.army.mil/article/234900/jadc2_experiment_2_provides_looking_glass_into_future_experimentation.
  • Kott, Alexander, David S. Alberts, and Cliff Wang. 2015. “Will Cybersecurity Dictate the Outcome of Future Wars?” Computer 48 (12): 98–101. https://doi.org/10.1109/MC.2015.359.
  • Lawless, William, Ranjeev Mittu, Donald Sofge, Ira SS Moskowitz, and Stephen Russell. 2019. “Connect the Dots on State-Sponsored Cyber Incidents – Compromise of the Czech Foreign Minister’s Computer.” Council on Foreign Relations. https://www.cfr.org/index.php/cyber-operations/compromise-czech-foreign-ministers-computer.
  • Mattern, Friedemann, and Christian Flörkemeier. 2010. “Vom Internet der Computer zum Internet der Dinge.” Informatik-Spektrum 33 (2): 107–21. https://doi.org/10.1007/s00287-010-0417-7.
  • McKernan, Bethan, and Harry Davies. 2024. “‘The Machine Did It Coldly’: Israel Used AI to Identify 37,000 Hamas Targets.” World News. The Guardian, April 3. https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes.
  • NGA. 2024. “NGA Launches GEOINT-Specific Artificial Intelligence Model Accreditation Pilot.” National Geospatial-Intelligence Agency. https://www.nga.mil/news/NGA_Launches_GEOINT-specific_Artificial_Intelligen.html.
  • NGA. 2026. “GEOINT Artificial Intelligence.” National Geospatial-Intelligence Agency. https://www.nga.mil/news/GEOINT_Artificial_Intelligence_.html.
  • NIST. 2017a. “IARPA MATERIAL Program.” NIST, August 17. https://www.nist.gov/itl/iad/mig/iarpa-material-program.
  • NIST. 2017b. “IARPA MATERIAL Program.” NIST, August 17. https://www.nist.gov/itl/iad/mig/iarpa-material-program.
  • NYC. 2021. “NYC Office of Technology and Innovation – OTI.” https://www.nyc.gov/content/oti/pages/.
  • ODNI. 2026a. “IARPA Big Data.” https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/iarpa_big_data.pdf.
  • ODNI. 2026b. “IARPA Kicks Off New Research Program to Detect Changes in Movement Patterns.” Office of the Director of National Intelligence, January 18. https://www.dni.gov/index.php/newsroom/press-releases/press-releases-2023/3693-iarpa-kicks-off-new-research-program-to-detect-changes-in-movement-patterns.
  • Pulju, John. 2024. “Debating How the IC Should Approach Open Source Intelligence.” Studies in Intelligence, no. 3. https://www.cia.gov/resources/csi/static/d6fd3fa9ce19f1abf2bbc5cf21dfe53e/Article-A-Roundtable-Debate-About-OSINT.pdf.
  • Russon, Mary-Ann. 2015. “Google Robot Army and Military Drone Swarms: UAVs May Replace People in the Theatre of War.” International Business Times UK, April 16. https://www.ibtimes.co.uk/google-robot-army-military-drone-swarms-uavs-may-replace-people-theatre-war-1496615.
  • Saxena, Shalini. 2017. “Researchers Create Electronic Rose Complete with Wires and Supercapacitors.” Ars Technica, March 1. https://arstechnica.com/science/2017/03/researchers-grow-electronic-rose-complete-with-wires-and-supercapacitors/.
  • Shafiq, Muhammad, Zhaoquan Gu, Omar Cheikhrouhou, Wajdi Alhakami, and Habib Hamam. 2022. “The Rise of ‘Internet of Things’: Review and Open Research Issues Related to Detection and Prevention of IoT-Based Security Attacks.” Wireless Communications and Mobile Computing 2022 (August): e8669348. https://doi.org/10.1155/2022/8669348.
  • Shea, Mike. 2021. Intelligence After Next: Breaking Past AI’s Confirmation Bias. https://www.mitre.org/sites/default/files/2021-08/pr-21-0090-intelligence-after-next-breaking-past-ai-confirmation-bias.pdf.
  • Tabassi, Elham. 2023. Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. National Institute of Standards and Technology (U.S.). https://doi.org/10.6028/NIST.AI.100-1.
  • Tucker, Patrick. 2017. “What the CIA’s Tech Director Wants from AI.” Defense One, September 6. https://www.defenseone.com/technology/2017/09/cia-technology-director-artificial-intelligence/140801/.
  • Watts, Clint. 2021. “Artificial Intelligence Is Transforming Social Media. Can American Democracy Survive?” Washington Post, October 28. https://www.washingtonpost.com/news/democracy-post/wp/2018/09/05/artificial-intelligence-is-transforming-social-media-can-american-democracy-survive/.
  • Weinbaum, Cortney, and John N.T. Shanahan. 2018. “Intelligence in a Data-Driven Age.” National Defense University Press. https://ndupress.ndu.edu/Media/News/News-Article-View/Article/1566262/intelligence-in-a-data-driven-age/.
  • Weisgerber, Marcus. 2017. “Defense Firms to Air Force: Want Your Planes’ Data? Pay Up.” Defense One, September 19. https://www.defenseone.com/technology/2017/09/military-planes-predictive-maintenance-technology/141133/.
