Types of Operations

Journalists reporting on attribution will encounter different kinds of operations: cyber operations, influence operations, or a combination of the two. There is also an important distinction between influence operations and information operations. This page breaks down these concepts and includes the influence operations detection workflow that analysts use, to help journalists understand the different components of finding coordinated inauthentic behavior online.

Cyber Incidents vs. Influence Operations

Cyber incidents (often in the form of network intrusions or cyber espionage) are different creatures from more recent cyber-enabled influence operations. Herbert Lin points out that cyber incidents target computers, while influence operations target minds.

Both, though, depend on deception. A cyber incident deceives a person into clicking an email link, installing malware, or handing over a username and password. An influence operation deceives someone into believing content is authentically created and true. However, actors in an influence operation use social media platforms exactly as they are intended to be used; they are not exploiting a zero-day vulnerability in the platform. Instead, they use the algorithms already developed by the platforms to amplify their content. Before 2016, most social media platforms focused their attention on potential cyber operations against their physical networks; now they also work to combat influence operations on the sites themselves.

Some operations, particularly those run by state actors over a period of time, will use a combination of cyber and influence operations. Journalists need to know what kind of operation, or what combination of operations, has taken place before reporting on it.

Influence vs. Information Operations

Like traditional cyber operations, influence operations are not a recent phenomenon; Soviet-style propaganda techniques such as narrative laundering during the Cold War are one example in a long history of foreign influence operations trying to control the narrative to shape people’s thoughts and, sometimes, their actions. However, the online influence operations of today are computational: they can take advantage of algorithms that diffuse false narratives quickly and on a tremendous scale.

It is important to note the distinction between information operations and influence operations. The term “information operations” has historically been used in a military context, describing efforts to disrupt the decision-making capabilities of a target while protecting one’s own. Influence operations, on the other hand, are not limited to military operations or state actors; they can be carried out by a variety of actors, including trolls or spammers, in times of war or peace. In that sense, information operations are a subset of influence operations, limited to military operations led by states.

As platforms adopted these terms, some began using “information operation” interchangeably with “influence operation.” Attribution.news uses the term “influence operations” to capture this wider definition.

How Analysts Find Influence Operations 

Detection of influence operations, particularly on social media, demands more than simple content analysis. Assessing the authenticity or truthfulness of specific content is only occasionally a signal; many campaigns promote opinion-based narratives that are not disprovable, so fact-checking is largely ineffective. Instead, to uncover coordinated inauthentic behavior, analysts use a variety of data analysis techniques and investigative methods that aim to be narrative-, platform- and language-agnostic. These include investigations into the media produced (content), the accounts involved (voice), and the way the information spreads across platforms (dissemination): 

Anomalous Content: What was spread, when, and how much. In a large dataset, one process that analysts employ is filtering for statistically significant content (such as an anomalously high volume of similar content, or identical links). This process surfaces interesting data for further analysis, such as domains, hashtags, tagged usernames or n-grams.
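The details vary by platform and dataset, but a minimal sketch of this filtering step might look like the following, assuming a pandas DataFrame of collected posts with a hypothetical “url” column (the column name and the z-score threshold are illustrative assumptions, not a standard schema):

```python
import pandas as pd

def anomalous_links(posts: pd.DataFrame, z_threshold: float = 3.0) -> pd.DataFrame:
    """Flag links shared at anomalously high volume relative to the rest of the dataset."""
    counts = posts.groupby("url").size().rename("shares").to_frame()
    counts["z"] = (counts["shares"] - counts["shares"].mean()) / counts["shares"].std()
    # Links several standard deviations above the mean share count are leads
    # for further analysis, not proof of coordination.
    return counts[counts["z"] > z_threshold].sort_values("z", ascending=False)
```

The same pattern can be applied to hashtags, tagged usernames or n-grams by swapping the column being counted.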

Anomalous Voice: Who were the accounts creating or promoting the hashtags, domains or content. This involves investigating clusters or persistent communities of accounts at both the network and individual levels. Analysts look for behavioral similarity (such as what and when accounts post) and platform-specific interactions (such as retweets, following, friending, liking or replying) that indicate connections between accounts. They also look for markers of inauthenticity.
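One way to make this concrete is to link accounts that repeatedly shared the same links and then examine the densest clusters that emerge. The sketch below is illustrative only; it assumes the same hypothetical posts DataFrame with “author” and “url” columns, and the threshold of three shared links is an arbitrary starting point:

```python
import itertools

import networkx as nx
import pandas as pd

def co_sharing_graph(posts: pd.DataFrame, min_shared_links: int = 3) -> nx.Graph:
    """Connect pairs of accounts that shared the same links, weighted by how often."""
    graph = nx.Graph()
    for _, group in posts.groupby("url"):
        for a, b in itertools.combinations(sorted(group["author"].unique()), 2):
            weight = graph.get_edge_data(a, b, {"weight": 0})["weight"] + 1
            graph.add_edge(a, b, weight=weight)
    # Drop pairs with only incidental overlap; repeated overlap is the lead worth
    # investigating alongside inauthenticity markers on the individual accounts.
    weak = [(a, b) for a, b, w in graph.edges(data="weight") if w < min_shared_links]
    graph.remove_edges_from(weak)
    return graph
```

Connected components or community-detection routines can then be run over the resulting graph to surface clusters for manual review.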

Anomalous Dissemination: How the content spread. Understanding the flow of information — such as how accounts coordinated to amplify their message, and how content hopped from platform to platform — provides an understanding of the pathways that specific actors use to spread information to their target audiences.
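As a rough illustration, a first-appearance timeline can make cross-platform hops visible. The sketch assumes posts collected from several platforms into one DataFrame with hypothetical “platform”, “url” and “timestamp” columns:

```python
import pandas as pd

def first_appearances(posts: pd.DataFrame) -> pd.DataFrame:
    """For each link, record when it first surfaced on each platform, in time order."""
    posts = posts.assign(timestamp=pd.to_datetime(posts["timestamp"]))
    first = (
        posts.groupby(["url", "platform"])["timestamp"]
        .min()
        .reset_index()
        .sort_values(["url", "timestamp"])
    )
    # Reading each URL's rows in order shows which platform it appeared on first
    # and how it hopped between platforms over time.
    return first
```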

The purpose of a multi-faceted analysis is to avoid the kind of overfitting that account- or behavior-only modeling sometimes delivers (for example, in assessing whether a particular account is a bot), or the false positives that can come from overweighting the significance of content (the Internet Research Agency’s memes, for example, were widely adopted by targeted communities and are still frequently posted by authentic actors). Regional norms must also be considered: shortcuts like “bots use profile pictures of nondescript items like flowers rather than their faces” fall apart in countries where users are concerned about being identified by government entities or insurgents. Finally, any given characteristic of an operation that is noted as a significant factor in detection will often be avoided by a sophisticated or well-resourced actor in the future; evaluation must take adversarial adaptation and copycats into account.