They think about a lot of other AI applications as well, but this one cuts straight to the heart of the matter.
From the U.S. Army Mad Scientist Laboratory blog, June 6:
Applying Artificial Intelligence/Machine Learning to the Target Audience Analysis Model
[Editor’s Note: Our regular readers know the Mad Scientist Laboratory continues to explore the potential benefits Artificial Intelligence and Machine Learning (AI/ML) bring to the future of warfighting and the Operational Environment. As Dr. James Mancillas so eloquently stated, “The integration of future AI systems has the potential to permeate the entirety of military operations, from acquisition philosophies to human-AI team collaborations.” Warfighting is a process-rich endeavor, where speed is oftentimes the decisive factor. AI/ML can overcome limits in human cognitive abilities to provide our Warfighters with a battlefield “edge.”
Today’s post adds to our compendium of understanding with MSG Casey A. Kendall‘s submission exploring how AI/ML could complement (but not replace!) human instinct and intuition in Psychological Operations (PSYOPS) by applying its sheer information processing power and machine speed to analyze Target Audiences (TA) in our ongoing endeavor to “Persuade, Change, Influence” our competitors and adversaries. MSG Kendall’s submission was the first runner-up in our fourth annual Army Mad Scientist / U.S. Army Sergeants Major Academy (SGM-A) Writing Contest — Enjoy!]
The topic of artificial intelligence and machine learning (AI/ML) increasingly headlines conversations across a variety of professions. From law firms to academia, experts attempt to identify how AI/ML can benefit their field; conversely, these experts are also examining the potential for AI/ML to circumvent or corrupt processes within their field. Regardless of the viewpoint, all understand that AI/ML is a powerful tool and, in one way or another, the way of the future. Within the Psychological Operations (PSYOP) Regiment, planners continuously evaluate their own processes to ensure they incorporate new innovations and technologies to outpace our adversaries; however, far too often the speed of innovation and the bureaucracy of applying new techniques keep the field two steps behind. AI/ML is a technology that PSYOP cannot do without; if the field cannot incorporate this tool, its adversaries will quickly outpace it. PSYOP must adapt its processes to include AI/ML platforms that augment human instinct and intuition as practitioners conduct target audience analysis to develop effective influence and persuasion products and actions.
A Brief Primer on the Target Audience Analysis Model
The Target Audience Analysis Model (TAAM) has existed in its current form since the late 1990s, albeit with revisions made to account for changes in methods of communication and reach since that time. The purpose of U.S. Army PSYOP is to influence the attitudes, values, beliefs, and ultimately the behavior of selected groups or individuals in support of U.S. national interests and military objectives (Department of the Army [DA], 2022). Through an eight-step process, the PSYOP analyst uses TAAM to identify the key elements necessary to effectively influence the target audience and change the desired behavior.
Through this process, the analyst selects and then refines target audiences (TA) based on their assessed ability to achieve the behavioral change. Analysts examine conditions to understand how the TA views the world around them and how that affects their current behavior. These conditions can be external, such as significant events or the TA’s immediate environment, or internal, such as attitudes, values, and beliefs (DA, 2022). An understanding of the conditions that affect the TA leads the analyst to identify vulnerabilities: those characteristics, motives, or conditions that the PSYOP practitioner hopes to exploit in order to influence the TA’s behavior (DA, 2013). Once the analyst identifies the points of leverage by which they can most effectively influence the TA, they must then assess how susceptible the TA is to influence. This step is crucial because it informs the balance between PSYOP messages and influence actions that planners must later develop. A TA assessed as having low susceptibility is unlikely to be moved by traditional messaging; when this occurs, PSYOP planners will instead influence the TA’s external environment by altering or manipulating existing conditions. Thus far, the analyst has focused on how the TA thinks about and perceives the environment around them, but they must also consider how the TA receives and processes information from that environment.
Once the analyst identifies the best approach for developing their influence products (either messages or actions), they must describe the TA’s accessibility: their availability for influence targeting through a variety of media types or engagements (DA, 2022). Accessibility covers not only traditional media such as television, radio, newspapers, or the internet, but also key communicators, influencers, and non-traditional channels that may be unique or relevant to that particular TA. This step ties directly to the next, in which the analyst develops arguments and recommended psychological actions....
....MUCH MORE
Very related: July 2, 2022
RAND Corporation on Fourth Industrial Revolution Technologies And Influence Campaigns/Information Warfare
Sometimes I think McLuhan could see the future:
Examining the Effects of Technologies on Strategic Deterrence in the 21st Century
Biotechnology
Decision support systems (DSSs) and technologies
Directed energy
Hypersonic systems
Information- and perception-manipulation technologies
Quantum information and sensing systems
Robotics and semi- and autonomous systems....
....Information- and Perception-Manipulation Technologies
Information- and perception-manipulation technologies cover a wide range of tools designed to distort the perception or beliefs of one individual or set of individuals for the purpose of achieving the perpetrator’s desired effect. These technologies are generally enabled by AI and aspects of cyber and rely on processing large amounts of data. In the context of international security, this set of technologies enables adversaries to conduct advanced influence operations. For purposes of this report, we examine four mechanisms through which information can be modified, with the goal of influencing or misleading targeted individuals or groups: (1) deepfakes, (2) microtargeting, (3) machine learning–driven programs, and (4) spoofing algorithms.
Deepfakes are “realistic photo, audio, video, and other forgeries generated with artificial intelligence (AI) technologies.”14 The word deepfake itself is recent, dating back to late 2017. Although forgeries have always existed, AI makes them much more sophisticated and harder to differentiate from a genuine photo or video. Making deepfakes is also relatively cheap and easy, broadening the scope of individuals and organizations that can engage in this activity.

Microtargeting requires access to large amounts of detailed information on individuals to identify highly specific audiences that can be targeted by a message tailored to match their profile and increase the relevance of the message being communicated. An important characteristic of such “micro-audiences” thus is not so much size as homogeneity—all members of the audience share one or more characteristics that the sender seeks to exploit.15
Generally used as an advertising tactic, microtargeting can also be used to make phishing attacks more effective by targeting only the most “valuable” (from the attacker’s perspective) individuals in a given company or organization (known as “whaling attacks”).16

Machine learning refers to a process that involves statistical algorithms that replicate human cognitive tasks by deriving their own procedures through analysis of large training data sets. During the training process, the computer system creates its own statistical model to accomplish the specified task in situations it has not previously encountered.17
More-advanced forms of machine learning are referred to as deep learning, meaning the algorithm is able to analyze more-complex forms of data and detect more nuance (for instance, identifying images of a car versus a bus or understanding the sentiment behind a given passage of text). Machine learning is considered to be a subfield of AI because it is the process that enables computers to learn how to complete tasks on their own rather than simply executing commands written by humans. One notable application of machine learning has been the development of bots, which are computer programs designed to emulate human behavior, particularly in online interactions. Other applications include speech recognition, image recognition, robotics, and reasoning.
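The learning process the report describes can be illustrated with a deliberately tiny sketch (not drawn from the report; all data, labels, and function names here are invented for illustration): the program derives its own decision rule from labeled training examples, then applies that rule to an input it has never seen.

```python
# A minimal machine-learning sketch: a nearest-centroid classifier.
# The "model" (one centroid per label) is computed from training data,
# not written by hand, and then generalizes to unseen points.

def train_centroids(samples):
    """Compute one centroid (mean point) per label from labeled 2-D samples."""
    sums, counts = {}, {}
    for (x, y), label in samples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (sx / counts[lbl], sy / counts[lbl])
            for lbl, (sx, sy) in sums.items()}

def classify(centroids, point):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    px, py = point
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - px) ** 2 +
                               (centroids[lbl][1] - py) ** 2)

# Toy training set, echoing the report's car-vs-bus image example in spirit.
training = [((1, 1), "car"), ((2, 1), "car"),
            ((8, 9), "bus"), ((9, 8), "bus")]
model = train_centroids(training)
print(classify(model, (1.5, 2)))  # a point the model never saw -> "car"
```

Real systems replace the two-dimensional points with millions of high-dimensional features and the centroid rule with deep neural networks, but the division of labor is the same: humans supply data and an objective; the algorithm derives the procedure.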
Spoofing refers to a form of interference that seeks to obscure or falsify the true source of information (often through impersonation) or replace a stream of information with false or malicious content. Common types of spoofing include caller ID spoofing, email spoofing, media access control (MAC) or Internet Protocol (IP) address spoofing, and Global Positioning System (GPS) spoofing. Caller ID spoofing is the simplest form of spoofing and occurs when “a caller deliberately falsifies the information transmitted to your caller ID display to disguise their identity.”18

Email spoofing is similar in nature and entails manipulating an email to make it look like it came from a different, trusted source rather than the true sender. GPS spoofing is a more sophisticated form of spoofing that consists of “an intentional intervention that aims to force a GPS receiver to acquire and track invalid navigation data.”19
This type of spoofing works by generating false GPS signals to deceive satellite-based navigation systems—collectively referred to as Global Navigation Satellite Systems—into believing they are located somewhere other than their actual position. Box 4.6 summarizes the potential military applications of manipulation technologies.
Box 4.6
Manipulation techniques like the ones just discussed could be used to undermine national will in crisis or war by portraying political or military leaders engaging in embarrassing, illegal, or otherwise reprehensible behavior. They could be used as part of traditional deception and concealment operations or connected to much broader and longer-term efforts to undermine the societal coherence of an adversary.....
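A small sketch (not from the report; the addresses are invented) shows why email spoofing is so cheap: nothing in the basic message format authenticates the From header, which is simply text the sender writes.

```python
# Illustrating email "From" header forgery using Python's standard library.
# SMTP itself does not verify this header; defenses such as SPF, DKIM, and
# DMARC exist precisely because the header alone is unauthenticated.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "ceo@trusted-company.example"   # forged: not the real sender
msg["To"] = "target@victim.example"
msg["Subject"] = "Urgent wire transfer"
msg.set_content("Please process the attached invoice today.")

# The message is structurally valid and would render in a mail client as if
# it came from the claimed address; only out-of-band checks can catch it.
print(msg["From"])
```

The same asymmetry runs through the other spoofing types the report lists: the protocol carries an identity claim, and unless a separate mechanism verifies that claim, the receiver has no way to distinguish forgery from the genuine article.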
Some of the footnotes:
14 Kelley M. Sayler and Laurie A. Harris, Deep Fakes and National Security, Washington, D.C.: Congressional Research Service, October 14, 2019, updated June 8, 2021.
15 Tom Dobber, Ronan Ó Fathaigh, and Frederik J. Zuiderveen Borgesius, “The Regulation of Online Political Micro-Targeting in Europe,” Internet Policy Review, Vol. 8, No. 4, December 2019, pp. 2–3.
16 See, for example, United Kingdom Government, National Cyber Security Centre, “Whaling: How it Works, and What Your Organisation Can Do About It,” webpage, October 6, 2016.
17 Kelley M. Sayler, Artificial Intelligence and National Security, Washington, D.C.: Congressional Research Service, November 21, 2019, p. 2, and Keith D. Foote, “A Brief History of Machine Learning,” Dataversity webpage, March 26, 2019.