How do researchers and analysts sift through the evidence to make an attribution? These examples may help journalists understand how difficult attribution can be and how to weigh the information they are given, which in some cases may conflict. The three examples below follow analysts working with different kinds of information on different operations:
- A cyber operation targeting Ukraine’s critical infrastructure.
- An influence operation promoting pro-Saudi content during the disappearance and death of Washington Post journalist Jamal Khashoggi, which resulted in Twitter taking down automated accounts.
- An already attributed dataset from the GRU’s influence operation during the 2016 U.S. presidential election, in which researchers found additional personas potentially linked to the operation.

Attributing a Cyber Operation: Sandworm, an Advanced Persistent Threat (APT)
This case study follows a Russian cyber espionage group, Sandworm, also referred to as Voodoo Bear, Quedagh, BlackEnergy Group and Telebots. The group’s origins date back to 2009, but only in recent years have governments such as the United States and the United Kingdom publicly attributed cyberattacks to Sandworm and linked it to a unit of the Russian military intelligence service, commonly referred to as the GRU. This study focuses on the early days of Sandworm’s discovery, and how journalist Kim Zetter pieced together the different attribution judgments made at the time. Zetter’s reporting and Andy Greenberg’s book “Sandworm” informed much of this section.
For journalists, this case study shows how difficult it can be to report on a cyber incident right after it happens and to know whose attribution judgment to reference when reports or statements conflict. It also highlights that it is okay to point out where there are gaps in the evidence, and to report that there was no conclusive answer at the time as to who was behind the attack.
The Attack on the Ukrainian Power Grid
On December 23, 2015, three power distribution centers in western Ukraine providing power to a quarter of a million Ukrainians shut down at the same time. The attack started with a phishing email sent to IT staff and system administrators at the three distribution centers, enabling the perpetrators to steal staff credentials and log into the network. The phishing email carried a Microsoft Word attachment that planted malware known as BlackEnergy3 onto the machines, giving the hackers a foothold from which to spread the malware through the companies’ systems. Crucially, the attackers gained access to the Supervisory Control and Data Acquisition (SCADA) network, which controlled the physical machinery of the power centers.
On the day of the attack, the hackers launched a telephone denial-of-service attack against one of the control centers, flooding its phone lines so that calls from Ukrainian residents reporting the power outage could not get through. It didn’t end there. The attackers used another piece of malware, called KillDisk, to wipe essential system files from the operator stations and crash the computers, rendering the stations unusable.
Connecting the Dots
It is easier to connect the dots in retrospect, as Greenberg has done in “Sandworm,” between the attack on the power grid in 2015, Sandworm’s previous targeting efforts in 2014, and attacks that were still to come.
In 2014, the U.S.-based threat intelligence firm iSight was analyzing a variant of the same malware. One analyst found a unique identifier in the malware code, used by the hackers to keep track of their victims. The identifier, arrakis02, was a reference to the science fiction novel “Dune.” The iSight analyst searched VirusTotal, an archive of older malicious code, for other “Dune” references and found that the same malicious code had been used to target a NATO event in December 2013, a Polish energy firm in June 2014, and the Ukrainian government. It was therefore possible that the same threat actor was behind all three attacks. What’s more, after reviewing iSight’s published findings, analysts from the cybersecurity company Trend Micro noticed that the hackers had been using files designed to be opened by SCADA software; in other words, the malware was intended to take control of industrial control systems.
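For readers who want a feel for what this kind of pivot looks like in practice, below is a minimal Python sketch of searching a malware archive for a shared identifier. It is a sketch under assumptions: it presumes a VirusTotal Intelligence (premium) API key, and the endpoint and content-search query syntax follow VirusTotal’s v3 API as commonly documented; iSight’s actual workflow is not public, so treat the details as illustrative rather than a tested recipe.

```python
# Hypothetical sketch: pivot from one campaign identifier ("arrakis02") to
# other samples carrying the same string, in the spirit of the iSight analysis.
# Requires a VirusTotal Intelligence (premium) API key.
import requests

VT_API_KEY = "YOUR_VT_API_KEY"  # placeholder, not a real key
SEARCH_URL = "https://www.virustotal.com/api/v3/intelligence/search"

def find_samples_containing(marker: str) -> list[str]:
    """Return IDs of archived samples whose content contains the marker."""
    resp = requests.get(
        SEARCH_URL,
        headers={"x-apikey": VT_API_KEY},
        params={"query": f'content:"{marker}"', "limit": "20"},
        timeout=30,
    )
    resp.raise_for_status()
    return [item["id"] for item in resp.json().get("data", [])]

# Each hit is a candidate for deeper analysis: when was it first seen,
# and what was it built to target?
print(find_samples_containing("arrakis02"))
```

The analytical value is not the search itself but the overlap it reveals: the same operator-chosen string appearing in samples aimed at different victims is what links otherwise separate incidents.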
Assigning Attribution
Different entities made different decisions, at different times, about whether to publicly attribute the attack to an actor. A timeline of those decisions follows.
In the weeks and months after the attack:
- Governments: The Ukrainian government attributed the 2015 power grid hack to Russia days after the outage. However, no other government publicly attributed the attack to Russia.
- Cybersecurity firms: Two U.S.-based cybersecurity firms, iSight and SANS, published blog posts with their technical analyses of the incident. Both concluded that the incident was a “coordinated intentional attack.” However, iSight attributed the attack to Sandworm, while SANS believed it was too early to make that link.
- Zetter’s reporting: In an article published a few weeks after the event, Kim Zetter addressed the question of attribution head-on, warning readers that attribution is difficult and “can be used for political purposes.” Zetter presented the claims made by iSight but qualified them with those of another security firm, ESET, which was less sure Russia was responsible. Zetter also highlighted what was still unknown about the attack: how many energy distribution centers were hit (which would come to light later) and how and when the hackers got onto the networks. In a follow-up article published in March 2016, Zetter answered these open questions as more information surfaced, but reported that some analysts involved in the investigation, like Robert M. Lee, still hesitated to publicly attribute the attack to a single actor.
A year after the attack and beyond:
- A year after the December 2015 attack, another attack hit Ukraine’s power grid. The Slovak security firm ESET, as well as Robert Lee and his team at the security firm Dragos, published blog posts about the incident. Both firms found links to the 2015 power grid attack, and Lee’s team stated with high confidence that the attack had direct ties to Sandworm.
- The U.S. government wouldn’t publicly acknowledge Russia’s cyberwar in Ukraine until February 2018, after the Russian attack referred to as “NotPetya.”
- The 2015 cyberattack is now widely attributed to Russia.
Key Lessons from the Attribution Judgment
This story holds three critical lessons for journalists:
- When is attribution made? Attribution isn’t always clear at the time of publication.
- Who assigns attribution? A cybersecurity firm might present only a technical analysis and leave out the larger political context.
- What politics come into play when assigning attribution? In this case, the U.S. abstained from acknowledging that Russia was behind the attack in Ukraine.

Attributing an Influence Operation: Using Automated and Real Accounts to Control a Pro-Saudi Narrative
In October 2018, Twitter took down a series of accounts spreading disinformation about the death of Washington Post journalist Jamal Khashoggi. On October 2, Khashoggi entered the Saudi consulate in Istanbul, Turkey. For a few days, Khashoggi’s whereabouts were unknown; Saudi officials said he had left the building, while Turkish officials said there was no sign that he had. Almost a week later, the Turkish government told U.S. officials that audio and video recordings had surfaced of Khashoggi’s death inside the consulate. Months later, The Washington Post reported that the CIA had assessed with “medium-to-high confidence” that the Saudi crown prince was involved in the killing, having exchanged messages with his chief propagandist, Saud al-Qahtani, who gave the order to kill Khashoggi via a Skype call.
From the early days of Khashoggi’s disappearance to months after the CIA assessed that the Saudi crown prince had ordered his death, the Saudi government repeatedly changed its account of what happened to Khashoggi. It first stated that Khashoggi left immediately after visiting the consulate. Then, under pressure to say more about Khashoggi’s fate after Turkish officials asserted he had been killed, Riyadh stated that Khashoggi had died in a fight inside the Istanbul consulate.
Alongside these shifting public statements, the Saudi government also sought to control the story of Khashoggi’s disappearance and death through other means, launching an influence operation on Twitter.
The Botnet Twitter Campaign
During Khashoggi’s disappearance and the Turkish officials’ investigation, pro-Saudi hashtags proliferated on Twitter, such as #الغاء_متابعه_اعداء_الوطن (#unfollow_enemies_of_the_nation); that hashtag was active for four days (from October 14 to October 18, 2018) and appeared more than 100,000 times. As the influence campaign unfolded, a group of disinformation investigators, including independent researcher Josh Russell, Marc Owen Jones of Hamad bin Khalifa University, and Ben Nimmo of Graphika, tracked the hashtag campaigns and the activity of accounts pushing pro-Saudi narratives. According to Nimmo, the accounts repeatedly posted the same content at the same time in the same order, but spaced out their tweets so as not to draw attention to their automated nature. Nimmo stated that the goal of these accounts was to reach as wide an audience as possible, which was one reason the automated accounts tweeted in both Arabic and English, and to amplify their tweets enough to get their messages trending on Twitter. Other hashtags and narratives also trended globally, such as an Arabic hashtag that roughly translates to “#We_all_trust_Mohammed_Bin_Salman” and the phrases “We have to stand by our leader” and “campaign to close Al Jazeera, the channel of deception,” which were featured in 60,000 and 100,000 tweets respectively.
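The copy-paste pattern Nimmo describes, identical text posted by many accounts at nearly the same time, is one signal researchers can test for directly. Below is a minimal Python sketch of that check; the field names, the 60-second window, and the five-account threshold are illustrative assumptions, not the investigators’ actual tooling.

```python
# Sketch: flag texts posted verbatim by many different accounts within one
# short time window -- the coordination signal described above.
from collections import defaultdict
from datetime import datetime, timedelta

def coordinated_clusters(tweets, window_seconds=60, min_accounts=5):
    """Group tweets by exact text, then flag texts posted by at least
    min_accounts distinct accounts inside a single short window."""
    by_text = defaultdict(list)
    for t in tweets:  # each tweet: {"user": str, "time": datetime, "text": str}
        by_text[t["text"]].append(t)
    flagged = []
    for text, posts in by_text.items():
        posts.sort(key=lambda p: p["time"])
        accounts = {p["user"] for p in posts}
        span = (posts[-1]["time"] - posts[0]["time"]).total_seconds()
        if len(accounts) >= min_accounts and span <= window_seconds:
            flagged.append((text, sorted(accounts)))
    return flagged

# Toy usage: five hypothetical accounts posting the same hashtag
# ten seconds apart are flagged as a coordinated cluster.
start = datetime(2018, 10, 14, 12, 0, 0)
tweets = [
    {"user": f"bot_{i}", "time": start + timedelta(seconds=i * 10),
     "text": "#unfollow_enemies_of_the_nation"}
    for i in range(5)
]
print(coordinated_clusters(tweets))
```

A check like this surfaces anomalous behavior only; as the attribution discussion below makes clear, it cannot by itself say who operates the accounts.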
Russell later shared a list of hundreds of accounts with NBC News, which then alerted Twitter to the network.
Assigning Attribution
The news that the accounts had been taken down broke in an NBC article on October 18, 2018. At the time, Twitter neither publicly disclosed the takedown nor released the accounts’ content, and it did not publicly attribute the activity to any entity.
A year after Twitter took down the pro-Saudi accounts, the platform announced the removal of a larger network of over 88,000 accounts connected to the social media marketing company Smaat as part of a “significant state-backed information operation on Twitter originating in Saudi Arabia.” The Stanford Internet Observatory examined a subset of the network. Accounts in this dataset had retweeted Saudi elites denying the crown prince’s involvement in Khashoggi’s death. This finding shows that pro-Saudi narratives were indeed being pushed on Twitter with state support around the time of Khashoggi’s death; whether the hashtags and automated accounts from 2018 were also part of a Saudi state-backed operation remains inconclusive, but the subsequently discovered activity is a data point indicating plausibility.
Key Lessons from the Attribution Judgment
This case study shows that while even the best disinformation researchers can discover networks of anomalous behavior and botnets, they are limited when it comes to concrete attribution: where accounts originate, how they are connected, and who may own them. The platforms, however, do have access to the critical information needed to link accounts to an entity. Journalists should keep the need for both inputs in mind when writing about attribution claims for operations on platforms such as Facebook or Twitter.

Attributing Additional Assets: GRU Online Operations, 2014-2019
The third case study illustrates how analysts can work within an already attributed dataset and follow a thread of analysis to find assets that may additionally be linked to the operation. Researchers at the Stanford Internet Observatory (SIO) published a white paper on the Main Directorate of the General Staff of the Armed Forces of the Russian Federation, more commonly known as the GRU, and its online influence operations from 2014 to 2019. The SIO researchers received a dataset originally provided by Facebook to the United States Senate Select Committee on Intelligence (SSCI). Some of the Facebook Pages in the dataset related to influence operations centered on particular regions or on domestic and international political conflicts. Others related to hybrid “hack-and-leak” operations, including the 2016 GRU hack of the DNC, an intrusion that the Special Counsel’s office within the Department of Justice attributed to the GRU in its indictment of Viktor Borisovich Netyksho and 11 other GRU officers (Netyksho Indictment ¶ 38).
Connecting the Dots and Assigning Attribution
Using Facebook’s attributed data and the Special Counsel’s report as a jumping-off point, the Stanford research team was able to uncover a more expansive network established to create and promote Russian state-aligned propaganda.
Within the Facebook dataset, the researchers observed patterns of activity that led them to suspicious personas, most of them apparent fabrications, that had authored and distributed stories on the GRU-attributed Facebook Pages and their corresponding websites. Using open source intelligence methods, the researchers worked outward from the initial network of personas to identify additional affiliated personas and accounts on other social platforms, including Twitter, Medium and Telegram, among others. To convey their degree of confidence in this identification process, the researchers included a table in their report that clearly laid out the criteria used to justify each persona’s inclusion in the network. They described five characteristics of fabricated personas (a simple way to record these criteria is sketched after this list):
- wrote bylined content for an attributed GRU outlet
- had a profile photo stolen from another social media user
- disseminated GRU media outlet content on their social media accounts
- had their social media account taken down
- had other supporting evidence
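As one way to make such criteria auditable, the hypothetical Python sketch below records which of the five characteristics each persona meets. The persona name and data structure are illustrative assumptions; SIO’s actual table simply marked, for each persona, which criteria applied.

```python
# Hypothetical sketch: record per-persona evidence against the five criteria
# listed above, so the strength of each identification is explicit.
from dataclasses import dataclass, field

CRITERIA = {
    "bylined_content_for_attributed_outlet",
    "stolen_profile_photo",
    "disseminated_gru_outlet_content",
    "account_taken_down",
    "other_supporting_evidence",
}

@dataclass
class Persona:
    name: str
    evidence: set = field(default_factory=set)  # subset of CRITERIA

    def criteria_met(self) -> int:
        """Count how many of the five criteria this persona satisfies."""
        return len(self.evidence & CRITERIA)

# Illustrative persona meeting three of the five criteria.
p = Persona("example_persona", {
    "bylined_content_for_attributed_outlet",
    "stolen_profile_photo",
    "account_taken_down",
})
print(f"{p.name}: {p.criteria_met()}/5 criteria met")
```

Laying the evidence out this way mirrors the purpose of SIO’s table: readers can see at a glance which identifications rest on multiple independent signals and which rest on only one.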
The report authors additionally sought outside feedback from the tech platforms, as well as from other researchers, including Bellingcat and DFRLab. That process included asking the outside researchers to try to disprove the assessment that the accounts were Russian fabrications by finding evidence that the suspicious personas were, in fact, real people. Recognizing that the relationships between some suspected personas and the overall GRU network were inconclusive but highly suggestive, the researchers stated: “with the information at our disposal we are not able to make a conclusive attribution to GRU; we note their strong adjacencies to these operations above and throughout this document.”
Key Lessons from the Attribution Judgment
This case study gives journalists an example of how researchers build on the analysis of an already attributed dataset to deepen understanding of a threat actor’s capabilities, while maintaining a clear boundary between what is concretely attributed and what is not. While there is still an asymmetry of information between platforms and researchers, such assessments can offer additional information that informs further discovery and, possibly, eventual concrete attribution. Crucially, the research team reached out to additional researchers to challenge its methodology and underlying assumptions; having another team of experts critique or try to dismantle a theory ahead of publication provides an opportunity to discover biases or gaps in thinking.