Tag Archives: Threat Analysis Group

TAG Bulletin: Q1 2021

This bulletin includes coordinated influence operation campaigns terminated on our platforms in Q1 2021. It was last updated on February 16, 2021.

January

  • We terminated 4 YouTube channels and 1 advertising account as part of our ongoing investigation into coordinated influence operations linked to Ukraine. This campaign uploaded content in Russian pertaining to current events in Kazakhstan and critical of European Union policies toward Moldova.

  • We terminated 5 blogs as part of our investigation into coordinated influence operations linked to Morocco. This campaign uploaded content in Arabic that was critical of the Algerian government. This campaign was consistent with similar findings reported by Facebook.

  • We terminated 5 YouTube channels as part of our investigation into coordinated influence operations linked to Brazil. This campaign was linked to a PR firm named AP Exata Intelligence and uploaded content in Portuguese expressing support for several mayoral candidates in Brazil. This campaign was consistent with similar findings reported by Facebook.

  • We terminated 6 YouTube channels as part of our investigation into coordinated influence operations linked to Kyrgyzstan. The campaign uploaded content in Kyrgyz critical of the former President Almazbek Atambayev and the opposition leader Adakhan Madumarov. This campaign was consistent with similar findings reported by Facebook.

  • We terminated 3 advertising accounts as part of our investigation into coordinated influence operations linked to Egypt. This campaign was linked to a PR firm named Mubashier and uploaded content in Arabic supportive of the Russian government across several countries in the Middle East.

  • We terminated 1 YouTube channel as part of our ongoing investigation into coordinated influence operations linked to Russia. This campaign uploaded content in Russian on current events in Ukraine.

  • We terminated 1 YouTube channel, 2 advertising accounts and 1 mobile developer account as part of our ongoing investigation into coordinated influence operations linked to Russia. This campaign uploaded content in Russian on such topics as the U.S. election and the poisoning of Alexei Navalny.

  • We terminated 5 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to Russia. This campaign uploaded content in Russian on such topics as the annexation of Crimea and the Syrian civil war.

  • We terminated 2 YouTube channels and 1 advertising account as part of our ongoing investigation into coordinated influence operations linked to Russia. This campaign uploaded content in Russian on historical events in Afghanistan, Armenia and Ukraine.

  • We terminated 2 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to Russia. This campaign uploaded content in Russian on such topics as U.S. current events and political rallies in support of Alexei Navalny.

  • We terminated 2,946 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to China. These channels mostly uploaded spammy content in Chinese about music, entertainment, and lifestyle. A very small subset uploaded content in Chinese and English about the U.S. response to COVID-19 and growing U.S. political divisions. We received leads from Graphika that supported us in this investigation. These findings are consistent with our previous reports in the Q3 and Q4 TAG bulletins.

New campaign targeting security researchers

Over the past several months, the Threat Analysis Group has identified an ongoing campaign targeting security researchers working on vulnerability research and development at different companies and organizations. The actors behind this campaign, which we attribute to a government-backed entity based in North Korea, have employed a number of means to target researchers, which we outline below. We hope this post will remind those in the security research community that they are targets of government-backed attackers and should remain vigilant when engaging with individuals they have not previously interacted with.

In order to build credibility and connect with security researchers, the actors established a research blog and multiple Twitter profiles to interact with potential targets. They've used these Twitter profiles to post links to their blog, post videos of their claimed exploits, and amplify and retweet posts from other accounts that they control.

Actor controlled Twitter profiles: @z0x55g, @james0x40, @br0vvnn and @BrownSec3Labs

Their blog contains write-ups and analysis of vulnerabilities that have been publicly disclosed, including “guest” posts from unwitting legitimate security researchers, likely in an attempt to build additional credibility with other security researchers.

Example of an analysis posted on the actors' blog of a publicly disclosed vulnerability.

While we are unable to verify the authenticity or the working status of all of the exploits that they have posted videos of, in at least one case the actors faked the success of a claimed working exploit. On Jan 14, 2021, the actors shared via Twitter a YouTube video they uploaded that claimed to exploit CVE-2021-1647, a recently patched Windows Defender vulnerability. In the video, they purported to show a successful working exploit that spawns a cmd.exe shell, but a careful review of the video shows the exploit is fake. Multiple comments on YouTube identified that the video was faked and that no working exploit was demonstrated. After these comments were made, the actors used a second Twitter account (that they control) to retweet the original post and claim that it was “not a fake video.”

Tweets demonstrating the actors' “exploits”

Security researcher targeting

The actors have been observed targeting specific security researchers with a novel social engineering method. After establishing initial communications, the actors would ask the targeted researcher if they wanted to collaborate on vulnerability research, and then provide the researcher with a Visual Studio Project. Within the Visual Studio Project would be source code for exploiting the vulnerability, as well as an additional DLL that would be executed through Visual Studio Build Events. The DLL is custom malware that would immediately begin communicating with actor-controlled C2 domains. An example of the VS Build Event can be seen in the image below.

Visual Studio Build Events command executed when building the provided VS Project files
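Because Build Events are ordinary shell commands that MSBuild runs at compile time, a defender can inspect a received project before building it. The sketch below is an illustrative, hedged check, not a detail from the campaign: the function name and the list of flagged commands are assumptions.

```python
# Illustrative sketch: flag Visual Studio project files whose Pre/Post
# Build Events invoke common DLL/script launchers. Heuristics here are
# assumptions for illustration, not indicators from the actual campaign.
import re
import xml.etree.ElementTree as ET

SUSPICIOUS = re.compile(r"rundll32|regsvr32|powershell", re.IGNORECASE)
# MSBuild project files use this XML namespace.
NS = "{http://schemas.microsoft.com/developer/msbuild/2003}"

def suspicious_build_events(vcxproj_path):
    """Return the text of any Build Event command matching the patterns."""
    tree = ET.parse(vcxproj_path)
    hits = []
    for tag in ("PreBuildEvent", "PostBuildEvent"):
        for event in tree.iter(NS + tag):
            cmd = event.find(NS + "Command")
            if cmd is not None and cmd.text and SUSPICIOUS.search(cmd.text):
                hits.append(cmd.text.strip())
    return hits
```

Reviewing `PreBuildEvent`/`PostBuildEvent` elements by hand (or refusing to build untrusted projects at all) accomplishes the same thing; the script only automates the eyeball check.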

In addition to targeting users via social engineering, we have also observed several cases where researchers have been compromised after visiting the actors’ blog. In each of these cases, the researchers have followed a link on Twitter to a write-up hosted on blog.br0vvnn[.]io, and shortly thereafter, a malicious service was installed on the researcher’s system and an in-memory backdoor would begin beaconing to an actor-owned command and control server. At the time of these visits, the victim systems were running fully patched and up-to-date Windows 10 and Chrome browser versions. At this time we’re unable to confirm the mechanism of compromise, but we welcome any information others might have. Chrome vulnerabilities, including those being exploited in the wild (ITW), are eligible for reward payout under Chrome's Vulnerability Reward Program. We encourage anyone who discovers a Chrome vulnerability to report that activity via the Chrome VRP submission process.

These actors have used multiple platforms to communicate with potential targets, including Twitter, LinkedIn, Telegram, Discord, Keybase and email. We are providing a list of known accounts and aliases below. If you have communicated with any of these accounts or visited the actors’ blog, we suggest you review your systems for the IOCs provided below. To date, we have only seen these actors targeting Windows systems as a part of this campaign.

If you are concerned that you are being targeted, we recommend that you compartmentalize your research activities using separate physical or virtual machines for general web browsing, interacting with others in the research community, accepting files from third parties and your own security research.

Actor controlled sites and accounts

Research Blog
  • https://blog.br0vvnn[.]io
Twitter Accounts
  • https://twitter.com/br0vvnn
  • https://twitter.com/BrownSec3Labs
  • https://twitter.com/dev0exp
  • https://twitter.com/djokovic808
  • https://twitter.com/henya290 
  • https://twitter.com/james0x40
  • https://twitter.com/m5t0r
  • https://twitter.com/mvp4p3r
  • https://twitter.com/tjrim91
  • https://twitter.com/z0x55g
LinkedIn Accounts
  • https://www.linkedin.com/in/billy-brown-a6678b1b8/
  • https://www.linkedin.com/in/guo-zhang-b152721bb/
  • https://www.linkedin.com/in/hyungwoo-lee-6985501b9/
  • https://www.linkedin.com/in/linshuang-li-aa696391bb/
  • https://www.linkedin.com/in/rimmer-trajan-2806b21bb/
Keybase
  • https://keybase.io/zhangguo
Telegram
  • https://t.me/james50d
Sample Hashes
  • https://www.virustotal.com/gui/file/4c3499f3cc4a4fdc7e67417e055891c78540282dccc57e37a01167dfe351b244/detection (VS Project DLL)
  • https://www.virustotal.com/gui/file/68e6b9d71c727545095ea6376940027b61734af5c710b2985a628131e47c6af7/detection (VS Project DLL)
  • https://www.virustotal.com/gui/file/25d8ae4678c37251e7ffbaeddc252ae2530ef23f66e4c856d98ef60f399fa3dc/detection (VS Project Dropped DLL)
  • https://www.virustotal.com/gui/file/a75886b016d84c3eaacaf01a3c61e04953a7a3adf38acf77a4a2e3a8f544f855/detection (VS Project Dropped DLL)
  • https://www.virustotal.com/gui/file/a4fb20b15efd72f983f0fb3325c0352d8a266a69bb5f6ca2eba0556c3e00bd15/detection (Service DLL)
C2 Domains: Attacker-Owned
  • angeldonationblog[.]com
  • codevexillium[.]org
  • investbooking[.]de
  • krakenfolio[.]com
  • opsonew3org[.]sg
  • transferwiser[.]io
  • transplugin[.]io
C2 Domains: Legitimate but Compromised
  • trophylab[.]com
  • www.colasprint[.]com
  • www.dronerc[.]it
  • www.edujikim[.]com
  • www.fabioluciani[.]com
C2 URLs
  • https[:]//angeldonationblog[.]com/image/upload/upload.php
  • https[:]//codevexillium[.]org/image/download/download.asp
  • https[:]//investbooking[.]de/upload/upload.asp
  • https[:]//transplugin[.]io/upload/upload.asp
  • https[:]//www.dronerc[.]it/forum/uploads/index.php
  • https[:]//www.dronerc[.]it/shop_testbr/Core/upload.php
  • https[:]//www.dronerc[.]it/shop_testbr/upload/upload.php
  • https[:]//www.edujikim[.]com/intro/blue/insert.asp
  • https[:]//www.fabioluciani[.]com/es/include/include.asp
  • http[:]//trophylab[.]com/notice/images/renewal/upload.asp
  • http[:]//www.colasprint[.]com/_vti_log/upload.asp
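As a hedged triage sketch, the defanged domains above (brackets removed) can be matched against proxy or DNS logs. The one-hostname-per-line log format is an assumption; a real deployment would parse its own log schema.

```python
# Known C2 domains from the IOC lists above (defanging brackets removed).
C2_DOMAINS = {
    "angeldonationblog.com", "codevexillium.org", "investbooking.de",
    "krakenfolio.com", "opsonew3org.sg", "transferwiser.io",
    "transplugin.io", "trophylab.com", "www.colasprint.com",
    "www.dronerc.it", "www.edujikim.com", "www.fabioluciani.com",
}

def c2_contacts(log_lines):
    """Yield normalized hostnames from the log that match a C2 domain."""
    for line in log_lines:
        host = line.strip().lower()
        if host in C2_DOMAINS:
            yield host
```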

Host IOCs

  • Registry Keys

    • HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\KernelConfig

    • HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\DriverConfig

    • HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\SSL Update 

  • File Paths

    • C:\Windows\System32\Nwsapagent.sys

    • C:\Windows\System32\helpsvc.sys

    • C:\ProgramData\USOShared\uso.bin

    • C:\ProgramData\VMware\vmnat-update.bin

    • C:\ProgramData\VirtualBox\update.bin
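A minimal triage sketch for the file-path IOCs above, assuming a Windows host: it checks only file presence, and the registry keys would need the `winreg` module in addition.

```python
# Hedged triage helper: report which of the file-path IOCs listed above
# exist on this system. Presence of any path warrants deeper investigation.
import os

FILE_IOCS = [
    r"C:\Windows\System32\Nwsapagent.sys",
    r"C:\Windows\System32\helpsvc.sys",
    r"C:\ProgramData\USOShared\uso.bin",
    r"C:\ProgramData\VMware\vmnat-update.bin",
    r"C:\ProgramData\VirtualBox\update.bin",
]

def present_iocs(paths=FILE_IOCS):
    """Return the subset of IOC paths that exist on this system."""
    return [p for p in paths if os.path.exists(p)]

if __name__ == "__main__":
    hits = present_iocs()
    print("IOC hits:", hits if hits else "none")
```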

TAG Bulletin: Q4 2020

This bulletin includes coordinated influence operation campaigns terminated on our platforms in Q4 2020. It was last updated on November 17, 2020.

October

  • We terminated 12 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to Russia. This campaign uploaded content in Russian supporting the Russian military and criticizing U.S. military involvement in Japan. We received leads from Facebook that supported us in this investigation.

  • We terminated 2 YouTube channels as part of our investigation into coordinated influence operations linked to Myanmar. This domestic campaign posted content focused on elections and supporting the Union Solidarity and Development Party (USDP). This campaign was consistent with similar findings reported by Facebook.

  • We terminated 35 YouTube channels as part of our investigation into coordinated influence operations linked to Azerbaijan. This domestic campaign was linked to the New Azerbaijan Party and posted content supporting the Azerbaijani government and promoting Azerbaijani nationalism. This campaign was consistent with similar findings reported by Facebook.

  • We terminated 26 YouTube channels and 1 blog as part of our ongoing investigation into coordinated influence operations linked to Russia. This campaign uploaded content primarily in Russian and included news clips and military videos supporting the Russian government. We received leads from the FBI that supported us in this investigation. This campaign was consistent with similar findings reported by Facebook.

  • We terminated 2 YouTube channels as part of our ongoing investigation into a coordinated influence operation linked to Iran. This campaign uploaded content in Farsi and Arabic that was critical of the Saudi government. 

  • We terminated 7,479 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to China. These channels mostly uploaded spammy content in Chinese about music, entertainment, and cooking. A very small subset uploaded content in English about U.S. protests and ongoing wildfires. We received leads from FireEye and Graphika that supported us in this investigation. These findings are consistent with our previous reports in the Q2 and Q3 TAG bulletins.

How we’re tackling evolving online threats

Major events like elections and COVID-19 present opportunities for threat actors, and Google’s Threat Analysis Group (TAG) is working to thwart these threats and protect our products and the people using them. As we head into the U.S. election, we wanted to share an update on what we’re seeing and how threat actors are changing their tactics.

What we’re seeing around the U.S. elections

In June, we announced that we saw phishing attempts against the personal email accounts of staffers on the Biden and Trump campaigns by Chinese and Iranian APTs (Advanced Persistent Threats) respectively. We haven’t seen any evidence of such attempts being successful. 


The Iranian attacker group (APT35) and the Chinese attacker group (APT31) targeted campaign staffers’ personal emails with credential phishing emails and emails containing tracking links. As part of our wider tracking of APT31 activity, we've also seen them deploy targeted malware campaigns. 


One APT31 campaign was based on emailing links that would ultimately download malware hosted on GitHub. The malware was a Python-based implant using Dropbox for command and control. It would allow the attacker to upload and download files as well as execute arbitrary commands. Every malicious piece of this attack was hosted on legitimate services, making it harder for defenders to rely on network signals for detection.


In one example, attackers impersonated McAfee. The targets would be prompted to install a legitimate version of McAfee anti-virus software from GitHub, while malware was simultaneously silently installed to the system.

Example prompt from an APT31 campaign impersonating McAfee

When we detect that a user is the target of a government-backed attack, we send them a prominent warning. In these cases, we also shared our findings with the campaigns and the Federal Bureau of Investigation. This targeting is consistent with what others have subsequently reported.

Number of “government backed attacker” warnings sent in 2020

Overall, we’ve seen increased attention on the threats posed by APTs in the context of the U.S. election. U.S. government agencies have warned about different threat actors, and we’ve worked closely with those agencies and others in the tech industry to share leads and intelligence about what we’re seeing across the ecosystem. This has resulted in action on our platforms, as well as others. Shortly after the U.S. Treasury sanctioned Ukrainian Parliament member Andrii Derkach for attempting to influence the U.S. electoral process, we removed 14 Google accounts that were linked to him.

Coordinated influence operations

We’ve been sharing actions against coordinated influence operations in our quarterly TAG bulletin (check out our Q1, Q2 and Q3 updates). To date, TAG has not identified any significant coordinated influence campaigns targeting, or attempting to influence, U.S. voters on our platforms. 


Since last summer, TAG has tracked a large spam network linked to China attempting to run an influence operation, primarily on YouTube. This network has a presence across multiple platforms and operates primarily by acquiring or hijacking existing accounts and posting spammy content in Mandarin, such as videos of animals, music, food, plants, sports, and games. A small fraction of these spam channels will then post videos about current events. Such videos frequently feature clumsy translations and computer-generated voices. Researchers at Graphika and FireEye have detailed how this network behaves—including its shift from posting content in Mandarin about issues related to Hong Kong and China’s response to COVID-19, to including a small subset of content in English and Mandarin about current events in the U.S. (such as protests around racial justice, the wildfires on the West Coast, and the U.S. response to COVID-19).


We’ve taken an aggressive approach to identifying and removing content from this network—for example, in Q3 alone, our Trust and Safety teams terminated more than 3,000 YouTube channels. As a result, this network hasn’t been able to build an audience. Most of the videos we identify have fewer than 10 views, and most of these views appear to come from related spam accounts rather than actual users. So while this network has posted frequently, the majority of this content is spam and we haven’t seen it effectively reach an actual audience on YouTube. We’ve shared our findings on this network in our Q2 and Q3 TAG bulletins and will continue to update there.

Examples of YouTube videos removed

New COVID-19 targets

As the course of the COVID-19 pandemic evolves, we’ve seen threat actors evolve their tactics as well. In previous posts, we discussed targeting of health organizations as well as attacker efforts to impersonate the World Health Organization. This summer, we and others observed threat actors from China, Russia and Iran targeting pharmaceutical companies and researchers involved in vaccine development efforts. 


In September, we started to see multiple North Korea groups shifting their targeting towards COVID-19 researchers and pharmaceutical companies, including those based in South Korea. One campaign used URL shorteners and impersonated the target’s webmail portal in an attempt to harvest email credentials. In a separate campaign, attackers posed as recruiting professionals to lure targets into downloading malware.

Spoofed Outlook login panel used by North Korean attackers attempting to harvest credentials

Tackling DDoS attacks as an industry

In the threat actor toolkit, different types of attacks are used for different purposes: Phishing campaigns can be used like a scalpel, targeting specific groups or individuals with personalized lures that are more likely to trick them into taking action (like clicking on a malware link), while DDoS attacks are more like a hatchet, disrupting or blocking a site or a service entirely. While government-backed threat groups launch DDoS attacks less often than phishing or hacking campaigns, we’ve seen bigger players increase their capabilities for launching large-scale attacks in recent years. For example, in 2017 our Security Reliability Engineering team measured a record-breaking UDP amplification attack sourced out of several Chinese ISPs (ASNs 4134, 4837, 58453, and 9394), which remains the largest bandwidth attack of which we are aware.


Addressing state-sponsored DDoS attacks requires a coordinated response from the internet community, and we work with others to identify and dismantle infrastructure used to conduct attacks. Going forward, we’ll also use this blog to report attribution and activity we see in this space from state-backed actors when we can do so with a high degree of confidence and in a way that doesn’t disclose information to malicious actors. 

TAG Bulletin: Q3 2020

This bulletin includes coordinated influence operation campaigns terminated on our platforms in July of 2020. We will continue to update this bulletin with data from Q3 as it becomes available. It was last updated on Sept 15, 2020.

July

We terminated 1 advertising account and 7 YouTube channels as part of our actions against a coordinated influence operation linked to Ecuador. The campaign was linked to the PR firm Estraterra, and posted content in Spanish about former Ecuador government employees. These findings are consistent with similar findings reported by Facebook.


We terminated 299 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to China. These channels mostly uploaded spammy content in Chinese about music, entertainment, and lifestyle. A very small subset uploaded content in Chinese about COVID-19 and current events in Hong Kong. These findings are consistent with our previous reports in the Q2 TAG bulletin.

TAG Bulletin: Q2 2020

This bulletin includes coordinated influence operation campaigns terminated on our platforms in Q2 of 2020. It was last updated on August 5, 2020.

April

We terminated 16 YouTube channels, 1 advertising account and 1 AdSense account as part of our ongoing investigation into coordinated influence operations linked to Iran. The campaign was linked to the Iranian state-sponsored International Union of Virtual Media (IUVM) network, and posted content in Arabic related to the U.S. response to COVID-19 and content about Saudi-American relations. We received leads from FireEye and Graphika that supported us in this investigation.


We terminated 15 YouTube channels and 3 blogs as part of our ongoing investigation into coordinated influence operations linked to Russia. The campaign posted content in English and Russian about the EU, Lithuania, Ukraine, and the U.S., similar to the findings in a recent Graphika report called Secondary Infektion. We received leads from Graphika that supported us in this investigation.


We terminated 7 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to Russia. The campaign posted content in Russian, German, and Farsi about Russian and Syrian politics and the U.S. response to COVID-19. This campaign was consistent with similar findings reported by Facebook.


We terminated 186 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to China. These channels mostly uploaded spammy, non-political content, but a small subset posted political content primarily in Chinese similar to the findings in a recent Graphika report, including content related to the U.S. response to COVID-19. 


We terminated 3 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to Iran. The campaign posted content in Bosnian and Arabic that was critical of the U.S. and the People's Mujahedin Organization of Iran (PMOI). This campaign was consistent with similar findings reported by Facebook.

May

We terminated 1,098 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to China. These channels mostly uploaded spammy, non-political content, but a small subset posted political content primarily in Chinese similar to the findings in a recent Graphika report, including content related to the U.S. response to COVID-19. We received leads from Graphika that supported us in this investigation. 


We terminated 47 YouTube channels and 1 AdSense account as part of our ongoing investigation into coordinated influence operations linked to Russia. The campaign posted content in a coordinated manner primarily in Russian about domestic Russian and international policy issues. This campaign was consistent with similar findings reported by Facebook.

June

We terminated 1,312 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to China. These channels mostly uploaded spammy, non-political content, but a subset posted political content primarily in Chinese similar to the findings in a recent Graphika report, including content related to racial justice protests in the U.S. This campaign was consistent with similar findings reported by Twitter. 


We terminated 17 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to Russia. The campaign posted comments in Russian in a coordinated manner under a small set of Russian language videos. This campaign was consistent with similar findings reported by Twitter.


We banned 3 Play Developers and terminated 1 advertising account as part of our actions against a coordinated influence operation. The campaign was posting news content in English and French, targeting audiences in Africa. We found evidence of this campaign being tied to the PR company Ureputation based in Tunisia. This campaign was consistent with similar findings reported by Facebook.

TAG Bulletin: Q1 2020

This bulletin includes coordinated influence operation campaigns terminated on our platforms in Q1 of 2020. It was last updated on May 27, 2020.

January

We terminated 3 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to Iran. The campaign was linked to the Iranian state-sponsored International Union of Virtual Media (IUVM) network, and was reproducing IUVM content covering Iran’s strikes into Iraq and U.S. policy on oil. We received leads from Graphika that supported us in this investigation.

February

We terminated 1 advertising account and 82 YouTube channels as part of our actions against a coordinated influence operation linked to Egypt. The campaign was sharing political content in Arabic supportive of Saudi Arabia, the UAE, Egypt, and Bahrain and critical of Iran and Qatar. We found evidence of this campaign being tied to the digital marketing firm New Waves based in Cairo. This campaign was consistent with similar findings reported by Facebook.

March

We terminated 3 advertising accounts, 1 AdSense account, and 11 YouTube channels as part of our actions against a coordinated influence operation linked to India. The campaign was sharing messages in English supportive of Qatar. This campaign was consistent with similar findings reported by Facebook.


We banned 1 Play developer and terminated 68 YouTube channels as part of our actions against a coordinated influence operation. The campaign was posting political content in Arabic supportive of Turkey and critical of the UAE and Yemen. This campaign was consistent with similar findings reported by Twitter.


We terminated 1 advertising account, 1 AdSense account, 17 YouTube channels and banned 1 Play developer as part of our actions against a coordinated influence operation linked to Egypt. The campaign was posting political content in Arabic supportive of Saudi Arabia, the UAE, Egypt, and Bahrain and critical of Iran and Qatar. This campaign was consistent with similar findings reported by Twitter.


We banned 1 Play developer and terminated 78 YouTube channels as part of our actions against a coordinated influence operation linked to Serbia. The domestic campaign was posting pro-Serbian political content. This campaign was consistent with similar findings reported by Twitter.


We terminated 18 YouTube channels as part of our continued investigation into a coordinated influence operation linked to Indonesia. The domestic campaign was targeting the Indonesian provinces Papua and West Papua with messaging in opposition to the Free Papua Movement. This campaign was consistent with similar findings reported by Twitter.

Updates about government-backed hacking and disinformation

On any given day, Google's Threat Analysis Group (TAG) is tracking more than 270 targeted or government-backed attacker groups from more than 50 countries. Our team of analysts and security experts is focused on identifying and stopping issues like phishing campaigns, zero-day vulnerabilities and hacking against Google, our products and our users. Today, we’re sharing recent findings on government-backed phishing, threats and disinformation, as well as a new bulletin to share information about actions we take against accounts that we attribute to coordinated influence campaigns. 

Hacking and phishing attempts 

Last month, we sent 1,755 warnings to users whose accounts were targets of government-backed attackers. 

Distribution of the targets of government-backed phishing attempts in April 2020

Generally, 2020 has been dominated by COVID-19. The pandemic has taken center stage in people’s everyday lives, in the international news media, and in the world of government-backed hacking. Recently, we shared information on numerous COVID-themed attacks discovered and confirmed by our teams. We continue to see attacks from groups like Charming Kitten on medical and healthcare professionals, including World Health Organization (WHO) employees. And as others have reported, we’re seeing a resurgence in COVID-related hacking and phishing attempts from numerous commercial and government-backed attackers.

As one example, we've seen new activity from “hack-for-hire” firms, many based in India, that have been creating Gmail accounts spoofing the WHO. The accounts have largely targeted business leaders in financial services, consulting, and healthcare corporations within numerous countries including the U.S., Slovenia, Canada, India, Bahrain, Cyprus, and the UK. The lures themselves encourage individuals to sign up for direct notifications from the WHO to stay informed of COVID-19 related announcements, and link to attacker-hosted websites that bear a strong resemblance to the official WHO website. The sites typically feature fake login pages that prompt potential victims to give up their Google account credentials, and occasionally encourage individuals to give up other personal information, such as their phone numbers.
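Attacker sites that imitate a trusted brand often also sit on lookalike domains, which a simple edit-distance heuristic can flag. The sketch below is a hedged illustration: the brand list and distance threshold are arbitrary assumptions, not part of the reported campaign.

```python
# Illustrative lookalike-domain check using Levenshtein edit distance.
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_like(domain, brands=("who.int",), max_dist=2):
    """Flag domains within a small edit distance of a protected brand
    (distance 0 is the brand itself, so it is excluded)."""
    return any(0 < edit_distance(domain, b) <= max_dist for b in brands)
```

In practice such heuristics produce false positives and are only one signal among many (registration age, TLS certificate history, hosting reputation, and so on).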


Example of a spoofed WHO Newsletter sign-up prompt

To help protect users against these kinds of attacks, our Advanced Protection Program (APP) utilizes hardware security keys and provides the strongest protections available against phishing and account hijackings. APP was designed specifically for high-risk accounts.

Coordinated influence operations 

Government-backed or state-sponsored groups have different goals in carrying out their attacks: Some are looking to collect intelligence or steal intellectual property; others are targeting dissidents or activists, or attempting to engage in coordinated influence operations and disinformation campaigns. Our products are designed with robust built-in security features, like Gmail protections against phishing and Safe Browsing in Chrome, but we still dedicate significant resources to developing new tools and technology to help identify, track and stop this kind of activity. In addition to our internal investigations, we work with law enforcement, industry partners, and third parties like specialized security firms to assess and share intelligence. 

When we find attempts to conduct coordinated influence operations on our platforms, we work with our Trust & Safety teams to swiftly remove such content from our platforms and terminate these actors’ accounts. We take steps to prevent possible future attempts by the same actors, and routinely exchange information and share our findings with others in the industry. We’ve also shared occasional updates about this kind of activity, and today we’re introducing a more streamlined way of doing this via a new, quarterly bulletin to share information about actions we take against accounts that we attribute to coordinated influence campaigns (foreign and domestic). Our actions against coordinated influence operations from January, February and March can be found in the Q1 Bulletin.

Since March, we’ve removed more than a thousand YouTube channels that we believe to be part of a large campaign and that were behaving in a coordinated manner. These channels were mostly uploading spammy, non-political content, but a small subset posted primarily Chinese-language political content similar to the findings of a recent Graphika report. We’ll also share additional removal actions from April and May in the Q2 Bulletin. 

Our hope is that this new bulletin helps others who are also working to track these groups, such as researchers studying this issue, and we hope these updates can help confirm findings from security firms and others in the industry. We will also continue to share more detailed analysis of vulnerabilities we find, phishing and malware campaigns that we see, and other interesting or noteworthy trends across this space.

Findings on COVID-19 and online security threats

Google’s Threat Analysis Group (TAG) is a specialized team of security experts that works to identify, report, and stop government-backed phishing and hacking against Google and the people who use our products. We work across Google products to identify new vulnerabilities and threats. Today we’re sharing our latest findings and the threats we’re seeing in relation to COVID-19.


COVID-19 as general bait

Hackers frequently look at crises as an opportunity, and COVID-19 is no different. Across Google products, we’re seeing bad actors use COVID-related themes to create urgency so that people respond to phishing attacks and scams. Our security systems have detected examples ranging from fake solicitations for charities and NGOs, to messages that try to mimic employer communications to employees working from home, to websites posing as official government pages and public health agencies. Recently, our systems have detected 18 million malware and phishing Gmail messages per day related to COVID-19, in addition to more than 240 million COVID-related daily spam messages. Our machine learning models have evolved to understand and filter these threats, and we continue to block more than 99.9 percent of spam, phishing and malware from reaching our users.

How government-backed attackers are using COVID-19

TAG has specifically identified over a dozen government-backed attacker groups using COVID-19 themes as lures for phishing and malware attempts—trying to get their targets to click malicious links and download files.

Location of users targeted by government-backed COVID-19 related attacks

One notable campaign attempted to target personal accounts of U.S. government employees with phishing lures using American fast food franchises and COVID-19 messaging. Some messages offered free meals and coupons in response to COVID-19; others suggested recipients visit sites disguised as online ordering and delivery options. Once people clicked the links in these emails, they were presented with phishing pages designed to trick them into providing their Google account credentials. The vast majority of these messages were sent to spam without any user ever seeing them, and we were able to preemptively block the domains using Safe Browsing. We’re not aware of any user having their account compromised by this campaign, but as usual, we notify all targeted users with a “government-backed attacker” warning.
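Safe Browsing-style domain blocking is typically implemented so that clients hold only short hash prefixes of blocked URL expressions locally, escalating to a full-hash comparison on a prefix hit. The sketch below shows the general idea under simplified assumptions; the blocklist contents, prefix length, and names are made up and this is not Google's implementation:

```python
import hashlib

# Simplified Safe Browsing-style check: the client keeps only short
# SHA-256 prefixes of blocked URL expressions, and confirms against
# full hashes on a prefix match. The blocklist is entirely hypothetical.

PREFIX_LEN = 4  # bytes of hash prefix kept locally

def full_hash(url_expression: str) -> bytes:
    return hashlib.sha256(url_expression.encode("utf-8")).digest()

BLOCKED = ["evil-ordering.example/login"]       # hypothetical bad URLs
FULL_HASHES = {full_hash(u) for u in BLOCKED}   # held server-side in reality
LOCAL_PREFIXES = {h[:PREFIX_LEN] for h in FULL_HASHES}

def is_blocked(url_expression: str) -> bool:
    h = full_hash(url_expression)
    if h[:PREFIX_LEN] not in LOCAL_PREFIXES:
        return False              # fast path: most URLs never match a prefix
    return h in FULL_HASHES       # slow path: confirm with the full hash

print(is_blocked("evil-ordering.example/login"))  # True
print(is_blocked("safe.example/home"))            # False
```

The two-stage design keeps the client-side list small and avoids shipping a full copy of the blocklist, while the full-hash confirmation step eliminates false positives from prefix collisions.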

We’ve also seen attackers try to trick people into downloading malware by impersonating health organizations:

attackers impersonating health organizations

International and national health organizations are becoming targets 

Our team also found new, COVID-19-specific targeting of international health organizations, including activity that corroborates reporting by Reuters earlier this month and is consistent with the threat actor group often referred to as Charming Kitten. The team has seen similar activity from a South American actor, known externally as Packrat, with emails that linked to a domain spoofing the World Health Organization’s login page. These findings show that health organizations, public health agencies, and the individuals who work there are becoming new targets as a result of COVID-19. We're proactively adding extra security protections, such as higher thresholds for Google Account sign-in and recovery, to more than 50,000 such high-risk accounts.

Left: Contact message from Charming Kitten. Right: Packrat phishing page

Generally, we’re not seeing an overall rise in phishing attacks by government-backed groups; this is just a change in tactics. In fact, we saw a slight decrease in overall volumes in March compared to January and February. While it’s not unusual to see some fluctuations in these numbers, it could be that attackers, just like many other organizations, are experiencing productivity lags and issues due to global lockdowns and quarantine efforts.


Accounts that received a “government-backed attacker” warning each month of 2020

When working to identify and prevent threats, we use a combination of internal investigative tools, information sharing with industry partners and law enforcement, as well as leads and intelligence from third-party researchers. To help support this broader security researcher community, Google is providing more than $200,000 in grants as part of a new Vulnerability Research Grant COVID-19 fund for Google VRP researchers who help identify various vulnerabilities.


As the world continues to respond to COVID-19, we expect to see new lures and schemes. Our teams continue to track these and stop them before they reach people—and we’ll continue to share new and interesting findings.