Author Archives: Shane Huntley

TAG Bulletin: Q2 2021

This bulletin includes coordinated influence operation campaigns terminated on our platforms in Q2 2021. It was last updated on May 26, 2021.

April

  • We terminated 3 YouTube channels as part of our investigation into coordinated influence operations linked to El Salvador. This campaign uploaded content in Spanish focusing on a mayoral race in the Santa Tecla municipality. Our findings are similar to findings reported by Facebook.


  • We terminated 43 YouTube channels as part of our investigation into coordinated influence operations linked to Albania. This campaign uploaded content in Farsi that was critical of Iran’s government and supportive of Mojahedin-e Khalq. Our findings are similar to findings reported by Facebook.


  • We terminated 728 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to China. These channels mostly uploaded spammy content in Chinese about music, entertainment, and lifestyle. A very small subset uploaded content in Chinese and English about protests in Hong Kong and criticism of the U.S. response to the COVID-19 pandemic. These findings are consistent with our previous reports.


TAG Bulletin: Q1 2021

This bulletin includes coordinated influence operation campaigns terminated on our platforms in Q1 2021. It was last updated on February 16, 2021.

January

  • We terminated 4 YouTube channels and 1 advertising account as part of our ongoing investigation into coordinated influence operations linked to Ukraine. This campaign uploaded content in Russian pertaining to current events in Kazakhstan and critical of European Union policies toward Moldova.

  • We terminated 5 blogs as part of our investigation into coordinated influence operations linked to Morocco. This campaign uploaded content in Arabic that was critical of the Algerian government. This campaign was consistent with similar findings reported by Facebook.

  • We terminated 5 YouTube channels as part of our investigation into coordinated influence operations linked to Brazil. This campaign was linked to a PR firm named AP Exata Intelligence and uploaded content in Portuguese expressing support for several mayoral candidates in Brazil. This campaign was consistent with similar findings reported by Facebook.

  • We terminated 6 YouTube channels as part of our investigation into coordinated influence operations linked to Kyrgyzstan. The campaign uploaded content in Kyrgyz critical of the former President Almazbek Atambayev and the opposition leader Adakhan Madumarov. This campaign was consistent with similar findings reported by Facebook.

  • We terminated 3 advertising accounts as part of our investigation into coordinated influence operations linked to Egypt. This campaign was linked to a PR firm named Mubashier and uploaded content in Arabic supportive of the Russian government across several countries in the Middle East.

  • We terminated 1 YouTube channel as part of our ongoing investigation into coordinated influence operations linked to Russia. This campaign uploaded content in Russian on current events in Ukraine.

  • We terminated 1 YouTube channel, 2 advertising accounts and 1 mobile developer account as part of our ongoing investigation into coordinated influence operations linked to Russia. This campaign uploaded content in Russian on such topics as the U.S. election and the poisoning of Alexei Navalny.

  • We terminated 5 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to Russia. This campaign uploaded content in Russian on such topics as the annexation of Crimea and the Syrian civil war.

  • We terminated 2 YouTube channels and 1 advertising account as part of our ongoing investigation into coordinated influence operations linked to Russia. This campaign uploaded content in Russian on historical events in Afghanistan, Armenia and Ukraine.

  • We terminated 2 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to Russia. This campaign uploaded content in Russian on such topics as current events in the U.S. and political rallies in support of Alexei Navalny.

  • We terminated 2,946 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to China. These channels mostly uploaded spammy content in Chinese about music, entertainment, and lifestyle. A very small subset uploaded content in Chinese and English about the U.S. response to COVID-19 and growing U.S. political divisions. We received leads from Graphika that supported us in this investigation. These findings are consistent with our previous reports in the Q3 and Q4 TAG bulletins.

TAG Bulletin: Q4 2020

This bulletin includes coordinated influence operation campaigns terminated on our platforms in Q4 2020. It was last updated on November 17, 2020.

October

  • We terminated 12 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to Russia. This campaign uploaded content in Russian supporting the Russian military and criticizing U.S. military involvement in Japan. We received leads from Facebook that supported us in this investigation.

  • We terminated 2 YouTube channels as part of our investigation into coordinated influence operations linked to Myanmar. This domestic campaign posted content focused on elections and supporting the Union Solidarity and Development Party (USDP). This campaign was consistent with similar findings reported by Facebook.

  • We terminated 35 YouTube channels as part of our investigation into coordinated influence operations linked to Azerbaijan. This domestic campaign was linked to the New Azerbaijan Party and posted content supporting the Azerbaijani government and promoting Azerbaijani nationalism. This campaign was consistent with similar findings reported by Facebook.

  • We terminated 26 YouTube channels and 1 blog as part of our ongoing investigation into coordinated influence operations linked to Russia. This campaign uploaded content primarily in Russian and included news clips and military videos supporting the Russian government. We received leads from the FBI that supported us in this investigation. This campaign was consistent with similar findings reported by Facebook.

  • We terminated 2 YouTube channels as part of our ongoing investigation into a coordinated influence operation linked to Iran. This campaign uploaded content in Farsi and Arabic that was critical of the Saudi government. 

  • We terminated 7,479 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to China. These channels mostly uploaded spammy content in Chinese about music, entertainment, and cooking. A very small subset uploaded content in English about U.S. protests and ongoing wildfires. We received leads from FireEye and Graphika that supported us in this investigation. These findings are consistent with our previous reports in the Q2 and Q3 TAG bulletins.

How we’re tackling evolving online threats

Major events like elections and COVID-19 present opportunities for threat actors, and Google’s Threat Analysis Group (TAG) is working to thwart these threats and protect our products and the people using them. As we head into the U.S. election, we wanted to share an update on what we’re seeing and how threat actors are changing their tactics.

What we’re seeing around the U.S. elections

In June, we announced that we saw phishing attempts against the personal email accounts of staffers on the Biden and Trump campaigns by Chinese and Iranian APTs (Advanced Persistent Threats) respectively. We haven’t seen any evidence of such attempts being successful. 


The Iranian attacker group (APT35) and the Chinese attacker group (APT31) targeted campaign staffers’ personal emails with credential phishing emails and emails containing tracking links. As part of our wider tracking of APT31 activity, we've also seen them deploy targeted malware campaigns. 


One APT31 campaign was based on emailing links that would ultimately download malware hosted on GitHub. The malware was a Python-based implant using Dropbox for command and control. It would allow the attacker to upload and download files as well as execute arbitrary commands. Every malicious piece of this attack was hosted on legitimate services, making it harder for defenders to rely on network signals for detection.


In one example, attackers impersonated McAfee. The targets would be prompted to install a legitimate version of McAfee anti-virus software from GitHub, while malware was simultaneously silently installed to the system.

Example prompt from an APT31 campaign impersonating McAfee


When we detect that a user is the target of a government-backed attack, we send them a prominent warning. In these cases, we also shared our findings with the campaigns and the Federal Bureau of Investigation. This targeting is consistent with what others have subsequently reported.

Number of “government backed attacker” warnings sent in 2020


Overall, we’ve seen increased attention on the threats posed by APTs in the context of the U.S. election. U.S. government agencies have warned about different threat actors, and we’ve worked closely with those agencies and others in the tech industry to share leads and intelligence about what we’re seeing across the ecosystem. This has resulted in action on our platforms, as well as others. Shortly after the U.S. Treasury sanctioned Ukrainian Parliament member Andrii Derkach for attempting to influence the U.S. electoral process, we removed 14 Google accounts that were linked to him.

Coordinated influence operations

We’ve been sharing actions against coordinated influence operations in our quarterly TAG bulletin (check out our Q1, Q2 and Q3 updates). To date, TAG has not identified any significant coordinated influence campaigns targeting, or attempting to influence, U.S. voters on our platforms. 


Since last summer, TAG has tracked a large spam network linked to China attempting to run an influence operation, primarily on YouTube. This network has a presence across multiple platforms, and operates primarily by acquiring or hijacking existing accounts and posting spammy content in Mandarin such as videos of animals, music, food, plants, sports, and games. A small fraction of these spam channels will then post videos about current events. Such videos frequently feature clumsy translations and computer-generated voices. Researchers at Graphika and FireEye have detailed how this network behaves—including its shift from posting content in Mandarin about issues related to Hong Kong and China’s response to COVID-19, to including a small subset of content in English and Mandarin about current events in the U.S. (such as protests around racial justice, the wildfires on the West Coast, and the U.S. response to COVID-19).


We’ve taken an aggressive approach to identifying and removing content from this network—for example, in Q3 alone, our Trust and Safety teams terminated more than 3,000 YouTube channels. As a result, this network hasn’t been able to build an audience. Most of the videos we identify have fewer than 10 views, and most of these views appear to come from related spam accounts rather than actual users. So while this network has posted frequently, the majority of this content is spam and we haven’t seen it effectively reach an actual audience on YouTube. We’ve shared our findings on this network in our Q2 and Q3 TAG bulletins and will continue to update there.
Examples of YouTube videos removed


New COVID-19 targets

As the course of the COVID-19 pandemic evolves, we’ve seen threat actors evolve their tactics as well. In previous posts, we discussed targeting of health organizations as well as attacker efforts to impersonate the World Health Organization. This summer, we and others observed threat actors from China, Russia and Iran targeting pharmaceutical companies and researchers involved in vaccine development efforts. 


In September, we started to see multiple North Korean groups shifting their targeting towards COVID-19 researchers and pharmaceutical companies, including those based in South Korea. One campaign used URL shorteners and impersonated the target’s webmail portal in an attempt to harvest email credentials. In a separate campaign, attackers posed as recruiting professionals to lure targets into downloading malware.

Spoofed Outlook login panel used by North Korean attackers attempting to harvest credentials


Tackling DDoS attacks as an industry

In the threat actor toolkit, different types of attacks are used for different purposes: Phishing campaigns can be used like a scalpel—targeting specific groups or individuals with personalized lures that are more likely to trick them into taking action (like clicking on a malware link), while DDoS attacks are more like a hatchet—disrupting or blocking a site or a service entirely. While DDoS attacks from government-backed threat groups are less common than phishing or hacking campaigns, we’ve seen bigger players increase their capabilities in launching large-scale attacks in recent years. For example, in 2017, our Security Reliability Engineering team measured a record-breaking UDP amplification attack sourced out of several Chinese ISPs (ASNs 4134, 4837, 58453, and 9394), which remains the largest bandwidth attack of which we are aware.
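The leverage in a UDP amplification attack comes from reflectors that answer a small spoofed request with a much larger response. As a rough illustration of the arithmetic (the figures below are generic examples, not measurements from the attack described above):

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification factor (BAF): bytes a reflector returns
    for each byte the attacker sends with a spoofed source address."""
    return response_bytes / request_bytes

def reflected_traffic_gbps(attacker_gbps: float, baf: float) -> float:
    """Traffic arriving at the victim once the attacker's spoofed
    request stream is multiplied by the reflectors' amplification."""
    return attacker_gbps * baf

# Illustrative figures for an abusable UDP service:
# a 60-byte query that elicits a 3,000-byte response.
baf = amplification_factor(60, 3000)       # 50x amplification
print(reflected_traffic_gbps(1.0, baf))    # 1 Gbps in -> prints 50.0
```

This multiplication is why a modest amount of attacker bandwidth, bounced off enough open reflectors, can produce the record-scale floods described above.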


Addressing state-sponsored DDoS attacks requires a coordinated response from the internet community, and we work with others to identify and dismantle infrastructure used to conduct attacks. Going forward, we’ll also use this blog to report attribution and activity we see in this space from state-backed actors when we can do so with a high degree of confidence and in a way that doesn’t disclose information to malicious actors. 

TAG Bulletin: Q3 2020

This bulletin includes coordinated influence operation campaigns terminated on our platforms in July 2020. We will continue to update this bulletin with data from Q3 as it becomes available. It was last updated on September 15, 2020.

July

  • We terminated 1 advertising account and 7 YouTube channels as part of our actions against a coordinated influence operation linked to Ecuador. The campaign was linked to the PR firm Estraterra, and posted content in Spanish about former Ecuadorian government employees. These findings are consistent with similar findings reported by Facebook.


  • We terminated 299 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to China. These channels mostly uploaded spammy content in Chinese about music, entertainment, and lifestyle. A very small subset uploaded content in Chinese about COVID-19 and current events in Hong Kong. These findings are consistent with our previous reports in the Q2 TAG bulletin.


TAG Bulletin: Q2 2020

This bulletin includes coordinated influence operation campaigns terminated on our platforms in Q2 of 2020. It was last updated on August 5, 2020.

April

  • We terminated 16 YouTube channels, 1 advertising account and 1 AdSense account as part of our ongoing investigation into coordinated influence operations linked to Iran. The campaign was linked to the Iranian state-sponsored International Union of Virtual Media (IUVM) network, and posted content in Arabic related to the U.S. response to COVID-19 and content about Saudi-American relations. We received leads from FireEye and Graphika that supported us in this investigation.


  • We terminated 15 YouTube channels and 3 blogs as part of our ongoing investigation into coordinated influence operations linked to Russia. The campaign posted content in English and Russian about the EU, Lithuania, Ukraine, and the U.S., similar to the findings in a recent Graphika report called Secondary Infektion. We received leads from Graphika that supported us in this investigation.


  • We terminated 7 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to Russia. The campaign posted content in Russian, German, and Farsi about Russian and Syrian politics and the U.S. response to COVID-19. This campaign was consistent with similar findings reported by Facebook.


  • We terminated 186 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to China. These channels mostly uploaded spammy, non-political content, but a small subset posted political content primarily in Chinese similar to the findings in a recent Graphika report, including content related to the U.S. response to COVID-19.


  • We terminated 3 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to Iran. The campaign posted content in Bosnian and Arabic that was critical of the U.S. and the People's Mujahedin Organization of Iran (PMOI). This campaign was consistent with similar findings reported by Facebook.

May

  • We terminated 1,098 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to China. These channels mostly uploaded spammy, non-political content, but a small subset posted political content primarily in Chinese similar to the findings in a recent Graphika report, including content related to the U.S. response to COVID-19. We received leads from Graphika that supported us in this investigation.


  • We terminated 47 YouTube channels and 1 AdSense account as part of our ongoing investigation into coordinated influence operations linked to Russia. The campaign posted content in a coordinated manner primarily in Russian about domestic Russian and international policy issues. This campaign was consistent with similar findings reported by Facebook.

June

  • We terminated 1,312 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to China. These channels mostly uploaded spammy, non-political content, but a subset posted political content primarily in Chinese similar to the findings in a recent Graphika report, including content related to racial justice protests in the U.S. This campaign was consistent with similar findings reported by Twitter.


  • We terminated 17 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to Russia. The campaign posted comments in Russian in a coordinated manner under a small set of Russian language videos. This campaign was consistent with similar findings reported by Twitter.


  • We banned 3 Play developers and terminated 1 advertising account as part of our actions against a coordinated influence operation. The campaign was posting news content in English and French, targeting audiences in Africa. We found evidence of this campaign being tied to the PR company Ureputation based in Tunisia. This campaign was consistent with similar findings reported by Facebook.

Updates about government-backed hacking and disinformation

On any given day, Google's Threat Analysis Group (TAG) is tracking more than 270 targeted or government-backed attacker groups from more than 50 countries. Our team of analysts and security experts is focused on identifying and stopping issues like phishing campaigns, zero-day vulnerabilities and hacking against Google, our products and our users. Today, we’re sharing recent findings on government-backed phishing, threats and disinformation, as well as a new bulletin to share information about actions we take against accounts that we attribute to coordinated influence campaigns. 

Hacking and phishing attempts 

Last month, we sent 1,755 warnings to users whose accounts were targets of government-backed attackers. 


Distribution of the targets of government-backed phishing attempts in April 2020

Generally, 2020 has been dominated by COVID-19. The pandemic has taken center stage in people’s everyday lives, in the international news media, and in the world of government-backed hacking. Recently, we shared information on numerous COVID-themed attacks discovered and confirmed by our teams. We continue to see attacks from groups like Charming Kitten on medical and healthcare professionals, including World Health Organization (WHO) employees. And as others have reported, we’re seeing a resurgence in COVID-related hacking and phishing attempts from numerous commercial and government-backed attackers.

As one example, we've seen new activity from “hack-for-hire” firms, many based in India, that have been creating Gmail accounts spoofing the WHO. The accounts have largely targeted business leaders in financial services, consulting, and healthcare corporations within numerous countries including the U.S., Slovenia, Canada, India, Bahrain, Cyprus, and the UK. The lures themselves encourage individuals to sign up for direct notifications from the WHO to stay informed of COVID-19 related announcements, and link to attacker-hosted websites that bear a strong resemblance to the official WHO website. The sites typically feature fake login pages that prompt potential victims to give up their Google account credentials, and occasionally encourage individuals to give up other personal information, such as their phone numbers.


Example of a spoofed WHO Newsletter sign-up prompt

To help protect users against these kinds of attacks, our Advanced Protection Program (APP) utilizes hardware security keys and provides the strongest protections available against phishing and account hijackings. APP was designed specifically for high-risk accounts.

Coordinated influence operations 

Government-backed or state-sponsored groups have different goals in carrying out their attacks: Some are looking to collect intelligence or steal intellectual property; others are targeting dissidents or activists, or attempting to engage in coordinated influence operations and disinformation campaigns. Our products are designed with robust built-in security features, like Gmail protections against phishing and Safe Browsing in Chrome, but we still dedicate significant resources to developing new tools and technology to help identify, track and stop this kind of activity. In addition to our internal investigations, we work with law enforcement, industry partners, and third parties like specialized security firms to assess and share intelligence. 

When we find attempts to conduct coordinated influence operations on our platforms, we work with our Trust & Safety teams to swiftly remove such content from our platforms and terminate these actors’ accounts. We take steps to prevent possible future attempts by the same actors, and routinely exchange information and share our findings with others in the industry. We’ve also shared occasional updates about this kind of activity, and today we’re introducing a more streamlined way of doing this via a new quarterly bulletin to share information about actions we take against accounts that we attribute to coordinated influence campaigns (foreign and domestic). Our actions against coordinated influence operations from January, February and March can be found in the Q1 Bulletin.

Since March, we’ve removed more than a thousand YouTube channels that we believe to be part of a large campaign and that were behaving in a coordinated manner. These channels were mostly uploading spammy, non-political content, but a small subset posted primarily Chinese-language political content similar to the findings of a recent Graphika report. We’ll also share additional removal actions from April and May in the Q2 Bulletin. 

Our hope is that this new bulletin helps others who are also working to track these groups, such as researchers studying this issue, and we hope these updates can help confirm findings from security firms and others in the industry. We will also continue to share more detailed analysis of vulnerabilities we find, phishing and malware campaigns that we see, and other interesting or noteworthy trends across this space.

TAG Bulletin: Q1 2020

This bulletin includes coordinated influence operation campaigns terminated on our platforms in Q1 of 2020. It was last updated on May 27, 2020.

January

  • We terminated 3 YouTube channels as part of our ongoing investigation into coordinated influence operations linked to Iran. The campaign was linked to the Iranian state-sponsored International Union of Virtual Media (IUVM) network, and was reproducing IUVM content covering Iran’s strikes into Iraq and U.S. policy on oil. We received leads from Graphika that supported us in this investigation.

February

  • We terminated 1 advertising account and 82 YouTube channels as part of our actions against a coordinated influence operation linked to Egypt. The campaign was sharing political content in Arabic supportive of Saudi Arabia, the UAE, Egypt, and Bahrain and critical of Iran and Qatar. We found evidence of this campaign being tied to the digital marketing firm New Waves based in Cairo. This campaign was consistent with similar findings reported by Facebook.

March

  • We terminated 3 advertising accounts, 1 AdSense account, and 11 YouTube channels as part of our actions against a coordinated influence operation linked to India. The campaign was sharing messages in English supportive of Qatar. This campaign was consistent with similar findings reported by Facebook.


  • We banned 1 Play developer and terminated 68 YouTube channels as part of our actions against a coordinated influence operation. The campaign was posting political content in Arabic supportive of Turkey and critical of the UAE and Yemen. This campaign was consistent with similar findings reported by Twitter.


  • We terminated 1 advertising account, 1 AdSense account, 17 YouTube channels and banned 1 Play developer as part of our actions against a coordinated influence operation linked to Egypt. The campaign was posting political content in Arabic supportive of Saudi Arabia, the UAE, Egypt, and Bahrain and critical of Iran and Qatar. This campaign was consistent with similar findings reported by Twitter.


  • We banned 1 Play developer and terminated 78 YouTube channels as part of our actions against a coordinated influence operation linked to Serbia. The domestic campaign was posting pro-Serbian political content. This campaign was consistent with similar findings reported by Twitter.


  • We terminated 18 YouTube channels as part of our continued investigation into a coordinated influence operation linked to Indonesia. The domestic campaign was targeting the Indonesian provinces Papua and West Papua with messaging in opposition to the Free Papua Movement. This campaign was consistent with similar findings reported by Twitter.

Findings on COVID-19 and online security threats

Google’s Threat Analysis Group (TAG) is a specialized team of security experts that works to identify, report, and stop government-backed phishing and hacking against Google and the people who use our products. We work across Google products to identify new vulnerabilities and threats. Today we’re sharing our latest findings and the threats we’re seeing in relation to COVID-19.


COVID-19 as general bait

Hackers frequently look at crises as an opportunity, and COVID-19 is no different. Across Google products, we’re seeing bad actors use COVID-related themes to create urgency so that people respond to phishing attacks and scams. Our security systems have detected examples ranging from fake solicitations for charities and NGOs, to messages that try to mimic employer communications to employees working from home, to websites posing as official government pages and public health agencies. Recently, our systems have detected 18 million malware and phishing Gmail messages per day related to COVID-19, in addition to more than 240 million COVID-related daily spam messages. Our machine learning models have evolved to understand and filter these threats, and we continue to block more than 99.9 percent of spam, phishing and malware from reaching our users.

How government-backed attackers are using COVID-19

TAG has specifically identified over a dozen government-backed attacker groups using COVID-19 themes as lures for phishing and malware attempts—trying to get their targets to click malicious links and download files.

Location of users targeted by government-backed COVID-19 related attacks

One notable campaign attempted to target personal accounts of U.S. government employees with phishing lures using American fast food franchises and COVID-19 messaging. Some messages offered free meals and coupons in response to COVID-19, while others suggested recipients visit sites disguised as online ordering and delivery options. Once people clicked the links in these emails, they were presented with phishing pages designed to trick them into providing their Google account credentials. The vast majority of these messages were sent to spam without any user ever seeing them, and we were able to preemptively block the domains using Safe Browsing. We’re not aware of any user having their account compromised by this campaign, but as usual, we notify all targeted users with a “government-backed attacker” warning.
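Safe Browsing is also available to outside defenders through a public API. As a minimal sketch, here is how one might check suspect URLs against the Safe Browsing v4 Lookup API; the endpoint and request schema come from the public API documentation, while the `clientId` value and the example URL are placeholders:

```python
import json

# Public Safe Browsing v4 lookup endpoint (requires an API key as ?key=...)
SAFE_BROWSING_ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find"

def build_lookup_request(urls):
    """Build the JSON body for a threatMatches.find call.
    clientId/clientVersion are illustrative placeholders."""
    return {
        "client": {"clientId": "example-client", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

body = build_lookup_request(["http://example.test/suspect-page"])
print(json.dumps(body, indent=2))
# POST this body to SAFE_BROWSING_ENDPOINT with an API key:
# an empty response means no match; otherwise "matches" lists the threats found.
```

A phishing-detection pipeline could batch newly observed domains through this lookup before users ever see them, which is the same kind of preemptive blocking described above.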

We’ve also seen attackers try to trick people into downloading malware by impersonating health organizations:

Example of an attacker impersonating a health organization

International and national health organizations are becoming targets 

Our team also found new, COVID-19-specific targeting of international health organizations, including activity that corroborates reporting by Reuters earlier this month and is consistent with the threat actor group often referred to as Charming Kitten. The team has seen similar activity from a South American actor, known externally as Packrat, with emails that linked to a domain spoofing the World Health Organization’s login page. These findings show that health organizations, public health agencies, and the individuals who work there are becoming new targets as a result of COVID-19. We're proactively adding extra security protections, such as higher thresholds for Google Account sign-in and recovery, to more than 50,000 such high-risk accounts.

Left: Contact message from Charming Kitten. Right: Packrat phishing page

Generally, we’re not seeing an overall rise in phishing attacks by government-backed groups; this is just a change in tactics. In fact, we saw a slight decrease in overall volumes in March compared to January and February. While it’s not unusual to see some fluctuations in these numbers, it could be that attackers, just like many other organizations, are experiencing productivity lags and issues due to global lockdowns and quarantine efforts.


Accounts that received a “government-backed attacker” warning each month of 2020

When working to identify and prevent threats, we use a combination of internal investigative tools, information sharing with industry partners and law enforcement, as well as leads and intelligence from third-party researchers. To help support this broader security researcher community, Google is providing more than $200,000 in grants as part of a new Vulnerability Research Grant COVID-19 fund for Google VRP researchers who help identify various vulnerabilities.


As the world continues to respond to COVID-19, we expect to see new lures and schemes. Our teams continue to track these and stop them before they reach people—and we’ll continue to share new and interesting findings.