Active measures. The LSE Institute for Global Affairs describes active measures as a Soviet term, in use since the 1950s, for “overt and covert techniques for influencing events and behavior in foreign countries.” Information operations, including disinformation campaigns, are one form of active measures.
Advanced Persistent Threat (APT). “Well-resourced and trained adversaries that conduct multi-year intrusion campaigns targeting highly sensitive economic, proprietary, or national security information.” (Hutchins, Cloppert and Amin, “Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains”)
Adversary. The perpetrator of a malicious cyber incident.
All-source intelligence. Intelligence that draws on all available sources of information, which can include signals intelligence, intercepted communications, open source intelligence and human intelligence gleaned from interpersonal contact. When combined with technical evidence, all-source intelligence can raise the confidence level of an attribution judgment. (Romanosky and Boudreaux, “Private Sector Attribution of Cyber Incidents”)
Amplification. Increasing the spread and prominence of a piece of online content, or a narrative, by engaging with it.
Aperture. The “scope of sources that can be brought to bear on a specific investigation.” The quality of an attribution judgment is likely to rise with the number of useful intelligence sources. (Rid and Buchanan, “Attributing Cyber Attacks”)
Artifact/artefact. A piece of forensic data left behind by a user, which may or may not provide information about the user’s activities and thus aid in the attribution process. Registry keys are an example of an artifact.
Attribution. The process by which evidence of the origins of a cyber incident is collected, assessed and ascribed to a responsible entity or individual. (Lin, “Attribution of Malicious Cyber Incidents”; Romanosky and Boudreaux, “Private Sector Attribution of Cyber Incidents”)
Attribution Assessment/Attribution Judgment. The evaluation reached as to who or what is responsible for a cyber incident.
Backstop. The creation of background elements for a fabricated persona, such as social media posts or profile photos, to make that persona seem like an authentic individual. (Stanford Internet Observatory, “Potemkin Pages”)
Boosterism. Disinformation tactic in which repetitive content is created and disseminated to reinforce the perception that a given position represents a popular point of view. Boosterism can be accomplished through the creation of online sock puppets who serve as authors; “independent media” front websites; byline placement in independently run, politically aligned outlets; and dissemination and amplification via social networks. (Stanford Internet Observatory, “Potemkin Pages”)
Command and control server. Server that communicates with systems that have been infected with malware, sending commands to the compromised devices.
Confidence. Certainty level that the responsible entity or individual has been identified in the attribution process. Depending on the quality of the evidence gathered, an entity making the attribution may describe its attribution judgment as low-, moderate- or high-confidence (or in other similar terms, such as “reliable” or “conclusive”). Even an attribution judgment that is described as high-confidence, however, should not be taken as an absolute certainty, as there remains a risk that the judgment could be wrong. (Lin, “Attribution of Malicious Cyber Incidents”)
Coordinated inauthentic behavior. Facebook and Twitter describe coordinated inauthentic behavior as groups of pages or people working together, using a network of fake accounts, to mislead others about who they are or what they are doing. Facebook has in the past publicly identified nation-states in which certain networks of accounts caught engaging in such behavior originated.
Cyber incident. A breach of, or attack on, the security of an information technology-based system. A malicious cyber incident is one in which foul play was involved in the breach. (National Cyber Security Centre (UK); Lin, “Attribution of Malicious Cyber Incidents”)
Cyber threat actor. States, groups, or individuals who, with malicious intent, aim to take advantage of vulnerabilities, low cyber security awareness, and technological developments to gain unauthorized access to information systems in order to access or otherwise affect victims’ data, devices, systems, and networks. (Canadian Centre for Cyber Security)
De-layer. A term used by the U.S. Office of the Director of National Intelligence to describe how the conclusions of an attribution judgment should be presented. An analyst “de-layers” the judgment when they provide, in the attribution statement, a “clear distinction” between the physical location where the activities originated, the individuals or groups involved, and whether the actor that sponsored or directed the activities can be identified. (Director of National Intelligence, “A Guide to Cyber Attribution”)
Dissemination pathway. The way in which information reaches the user. In a disinformation campaign, there may be multiple dissemination pathways to the public, including platforms (such as Facebook, YouTube, Twitter, WeChat, Line, Instagram, WhatsApp, Medium, Quora and LiveLeak), front media properties and authentic media outlets.
Distributor. A person or persona that amplifies content by sharing it on social networks.
Down-ranking. An intentional de-prioritization of content in a ranking algorithm (for example, a search engine or a curated feed), making the content more challenging for users to find.
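A minimal Python sketch of the idea, assuming a hypothetical feed in which each post carries a relevance score (all field names and values here are illustrative): down-ranking multiplies a flagged post’s score by a penalty so that it sorts lower, rather than being removed.

```python
# Hypothetical illustration: down-ranking as a score penalty in a ranked feed.
DOWNRANK_PENALTY = 0.2  # a flagged post keeps only 20% of its relevance score

posts = [
    {"id": 1, "score": 0.90, "flagged": False},
    {"id": 2, "score": 0.95, "flagged": True},   # e.g., borderline content
    {"id": 3, "score": 0.60, "flagged": False},
]

def ranking_score(post):
    score = post["score"]
    if post["flagged"]:
        score *= DOWNRANK_PENALTY  # de-prioritize, but do not remove
    return score

feed = sorted(posts, key=ranking_score, reverse=True)
print([p["id"] for p in feed])  # [1, 3, 2] -- the flagged post sinks to the bottom
```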
Fingerprint. A string that uniquely identifies a larger amount of data. Fingerprints are typically used to avoid the comparison and transmission of bulky data; it is easier to send and compare a shorter bit string when trying to identify larger blocks of data.
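For illustration, the following Python sketch (using the standard hashlib library; the sample data is made up) computes SHA-256 fingerprints, so that two large blocks of data can be compared by their short digests rather than byte by byte.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a short, fixed-length hex digest that identifies the data."""
    return hashlib.sha256(data).hexdigest()

block_a = b"example data " * 1_000_000  # a large block of data
block_b = b"example data " * 1_000_000  # a second block to compare

# Comparing two 64-character fingerprints is far cheaper than comparing
# (or transmitting) the multi-megabyte blocks themselves.
print(fingerprint(block_a) == fingerprint(block_b))  # True
```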
Front entities/inauthentic properties. Entities whose online presence (for example, a Facebook Page or Twitter account) is intended to mislead readers into believing they are authentic think tanks, research organizations, academic institutions or media properties. The front entities can be used to publish original content promoting the perpetrator’s message (while masquerading as academic or independent content), and serve as an affiliation for fake personas. (Stanford Internet Observatory, “Potemkin Pages”)
Indicators of compromise (IOC). Pieces of forensic data that serve as evidence of a cyber intrusion. Indicators of compromise can include IP addresses, malware, log entries, hashes, network traffic, domain names, time of day, programming languages and coding patterns. (Romanosky and Boudreaux, “Private Sector Attribution of Cyber Incidents”)
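To make the concept concrete, here is a minimal Python sketch, with entirely hypothetical indicator values (the IP address is from a reserved documentation range), of one basic use of IOCs: checking observables from an investigation against a set of known-bad indicators.

```python
# Hypothetical known-bad indicators, as might be shared in a threat feed.
known_iocs = {
    "ip": {"198.51.100.23"},
    "domain": {"bad-example.invalid"},
    "sha256": {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
}

# Observables collected during an investigation.
observed = [
    ("ip", "198.51.100.23"),
    ("domain", "good-example.invalid"),
]

for kind, value in observed:
    if value in known_iocs.get(kind, set()):
        print(f"IOC match: {kind} = {value}")
```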
Influence operation. Building on Pascal Brangetto and Matthijs Veenendaal’s definition of influence operations, we define such operations as coordinated, integrated, and synchronized applications of diplomatic, informational, military, economic, and other capabilities to foster attitudes, behaviors, or decisions by target audiences that further a certain party’s interests and objectives. These operations can take place in peacetime, crisis, conflict, and post-conflict periods. Influence operations can be, but are not always, led by states.
Information laundering. See Narrative laundering.
Information operation. A military operation conducted in the information environment to achieve an advantage over an adversary. The RAND Corporation defines information operations and warfare as “the collection of tactical information about an adversary as well as the dissemination of propaganda in pursuit of a competitive advantage over an opponent.” Information operations are a state-led subset of influence operations.
Infrastructure. “The physical and/or virtual communication structures used to deliver a cyber capability or maintain command and control of capabilities. Attackers can buy, lease, share and compromise servers and networks to build their infrastructure. They frequently establish infrastructure using legitimate online services, from free trials of commercial cloud services to social media accounts. Some are loath to abandon infrastructure, while others will do so because they can rebuild within hours. Some routinely change infrastructure between or even within operations to impede detection.” (Director of National Intelligence, “A Guide to Cyber Attribution”)
Interactions. Ways in which users can engage with social media posts, including likes, reactions, comments and shares. Interactions are a way of measuring the reach and influence of content disseminated in a disinformation campaign.
Log file. A file that keeps an automatic record of activity in operating systems and in certain software. A log file may contain traces or artifacts that may be useful in the attribution process.
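As an example, this Python sketch (the log lines and their format are hypothetical, styled after a common web-server log) extracts client IP addresses, one kind of trace, from log entries.

```python
import re

# Hypothetical log lines in a common web-server style.
log_lines = [
    '203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] "GET /login HTTP/1.1" 200',
    '203.0.113.7 - - [10/Oct/2024:13:55:38 +0000] "POST /login HTTP/1.1" 401',
]

# Match an IPv4 address at the start of each line.
ip_pattern = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3})")

for line in log_lines:
    match = ip_pattern.match(line)
    if match:
        print(match.group(1))  # a trace that may aid attribution
```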
Memetic propaganda. The use of misleading, partisan or one-sided memes to promote a particular political viewpoint. Memetic propaganda is designed to be participatory and shareable. While narrative propaganda attempts to persuade using words, memetic propaganda generally relies on bite-sized images, sayings or videos, often employing provocative humor. (Stanford Internet Observatory, “Potemkin Pages”)
Narrative laundering. Disinformation tactic in which a story is planted or created, and then legitimized, through repetition or through a citation chain across other media entities. A narrative can be laundered through aligned publications, authentic media, and unwitting or witting participants. Synonymous with information laundering. (Stanford Internet Observatory, “Potemkin Pages”)
Narrative propaganda. The use of misleading, partisan or one-sided writing to push a political viewpoint. For example, narrative propaganda can take the form of long-form essays or geopolitical analysis. (Stanford Internet Observatory, “Potemkin Pages”)
Network intrusion. Unauthorized activity on a network.
Off-platform domain. Term used by Facebook to describe any site that is not Facebook. In a recent public statement on the removal of coordinated inauthentic behavior, Facebook described one manipulation tactic as the use of fake and compromised accounts to manage Facebook Pages and drive other users to off-platform domains, such as state-controlled media websites.
Open source intelligence (OSINT). The collection and analysis of publicly available data and other information for intelligence purposes.
Operational cluster. A collection of operations that can be grouped together in a variety of ways, including groupings by tactics or by targets. (Stanford Internet Observatory, “Potemkin Pages”)
Operational security (OPSEC). The process by which an entity identifies critical information about itself that is generally available and assesses how well that information is protected.
Payload. The component of the malware that performs the malicious attack (i.e., the exploit). (Rid and Buchanan, “Attributing Cyber Attacks”)
Persona. Profile of an online individual, which can be assembled by individuals or by entities for a number of purposes, including socializing or disseminating information. A fake persona refers to a fabricated online identity that persists over time or across multiple platforms, whose aim is to create a perception that the person behind the identity is real. A suspicious persona refers to an online identity that exhibits signs of being a fake persona, but for which there is insufficient evidence to determine that the persona is fabricated. (Stanford Internet Observatory, “Potemkin Pages”)
Platform manipulation. Twitter defines “platform manipulation” as the use of its services “to engage in bulk, aggressive or deceptive activity that misleads” other users or disrupts their experience. According to Twitter, platform manipulation includes inauthentic engagements, which attempt to make Twitter accounts or content appear more popular or active than they are, and coordinated activity, which attempts to artificially influence conversations through the use of multiple accounts, fake accounts, automation and/or scripting. Twitter has in the past made public attribution judgments about state-backed information operations caught engaging in platform manipulation.
Privilege escalation. According to MITRE ATT&CK, the methods by which an attacker exploits vulnerabilities to gain higher-level permissions on a system or network.
Provenance. The origins of a piece of content or an information operation.
Red team. As a noun, “red team” refers to a group within an organization that is authorized to help improve security systems and policies by taking on an adversary’s point of view and simulating attacks. As a verb, “red team” means to find weaknesses by simulating attacks.
Sandbox. A virtual environment in which new or untested software can be run securely.
Social exhaust. Traces of online activity, particularly on social media networks, that individuals leave behind. A lack of social exhaust may indicate that a persona is inauthentic. (Stanford Internet Observatory, “Potemkin Pages”)
Social network engagement. The ways in which users on a social media platform can interact with posts. On Facebook, for example, engagement means likes, shares, reactions and comments. On Twitter, engagement means likes, retweets, quote tweets and replies. Video views count as engagement. Social network engagement is one of the most quantifiable measures of reach. (Stanford Internet Observatory, “Potemkin Pages”)
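As a simple illustration in Python (the post identifiers and counts are hypothetical), engagement can be totaled per post to give a rough, quantifiable measure of reach.

```python
# Hypothetical per-post engagement counts from a social platform.
posts = [
    {"id": "a", "likes": 120, "shares": 40, "comments": 15, "video_views": 0},
    {"id": "b", "likes": 30, "shares": 5, "comments": 2, "video_views": 900},
]

for post in posts:
    # Sum every interaction type; video views count as engagement too.
    engagement = sum(v for k, v in post.items() if k != "id")
    print(post["id"], engagement)  # a -> 175, b -> 937
```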
Source account. The account from which a piece of content was first shared.
Spectrum of state responsibility. A tool developed by Jason Healey of the Atlantic Council to facilitate transparent and precise assigning of responsibility for an attack or campaign of attacks. The spectrum describes ten ways in which a state can be involved or culpable in a cyber intrusion.
State-backed operation. A term used by Twitter to describe an information operation conducted, on its platform, with the support of a nation-state.
Stub profile. An online profile containing only the minimum required information. Along with other indicators, such as a stolen or stock photo as a profile picture, a stub profile could be one indicator that a persona is suspicious or fake. (Stanford Internet Observatory, “Potemkin Pages”)
Takedown. A platform’s removal of accounts and/or pages found to violate its policies. Twitter, for example, has taken down information operations for engaging in platform manipulation, while Facebook has taken down accounts for coordinated inauthentic behavior.
Technical forensics. The process of looking for technical clues left behind in an intrusion. (Lin, “Attribution of Malicious Cyber Incidents”)
TTP. TTP stands for Tactics, Techniques and Procedures. The National Institute of Standards and Technology’s Computer Security Resource Center defines TTP as the behavior of an actor, using three levels of description: a tactic is the highest-level description, a technique is a more detailed description of behavior in the context of a tactic, and a procedure is a highly detailed, lower-level description in the context of a technique.
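The three levels nest from general to specific; the following Python sketch illustrates the hierarchy with hypothetical values (the tactic and technique names are drawn from the MITRE ATT&CK model for illustration only).

```python
# Hypothetical illustration of the tactic -> technique -> procedure hierarchy.
ttp = {
    "tactic": "Initial Access",    # highest-level description of the goal
    "technique": "Phishing",       # more detailed: how the tactic is pursued
    "procedure": "Spearphishing email carrying a macro-enabled attachment",
}
print(" -> ".join(ttp.values()))
```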
Trace. A piece of forensic data left behind by a user, which may or may not provide information about the user’s activities and thus aid in the attribution process.
Tradecraft. The Office of the Director of National Intelligence defines tradecraft as “behavior frequently used to conduct cyber attack or espionage.” According to the DNI, unique tradecraft can be a key indicator in determining attribution, as “habits are more difficult to change than technical tools.” However, “these unique tradecraft indicators diminish in importance once they’ve become public and other actors can mimic them.”
Vector. A place where, or a path through which, information spreads. In an influence operation, social networking sites and media outlets can serve as vectors for the dissemination of propaganda and mis- and disinformation.
Zero-day vulnerability. According to Norton, a newly discovered software flaw for which the developer has yet to release an update or patch.