
When computers learn to swear: Using machine learning for better online conversations

Imagine trying to have a conversation with your friends about the news you read this morning, but every time you said something, someone shouted in your face, called you a nasty name or accused you of some awful crime. You’d probably leave the conversation. Unfortunately, this happens all too frequently online as people try to discuss ideas on their favorite news sites but instead get bombarded with toxic comments.  

Seventy-two percent of American internet users have witnessed harassment online, and nearly half have personally experienced it. Almost a third self-censor what they post online for fear of retribution. All told, online harassment has affected the lives of roughly 140 million people in the U.S., and many more elsewhere.

This problem doesn’t just impact online readers. News organizations want to encourage engagement and discussion around their content, but find that sorting through millions of comments to find those that are trolling or abusive takes a lot of money, labor, and time. As a result, many sites have shut down comments altogether. But they tell us that isn’t the solution they want. We think technology can help.

Today, Google and Jigsaw are launching Perspective, an early-stage technology that uses machine learning to help identify toxic comments. Through an API, publishers—including members of the Digital News Initiative—and platforms can access this technology and use it for their sites.

How it works

Perspective reviews comments and scores them based on how similar they are to comments people said were “toxic” or likely to make someone leave a conversation. To learn how to spot potentially toxic language, Perspective examined hundreds of thousands of comments that had been labeled by human reviewers. Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments.
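For developers curious about the mechanics, the API takes a comment and a set of requested attributes and returns a score per attribute. A minimal sketch of building the request body for the `comments:analyze` method (field names follow the public API reference; no network call is made here, and the comment text is invented):

```python
import json

def build_analyze_request(text):
    """Build a request body in the shape used by the Perspective
    comments:analyze method: the comment text plus the attributes
    (here just TOXICITY) we want scored."""
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

payload = build_analyze_request("You are a wonderful person.")
print(json.dumps(payload, indent=2))
```

The response would carry a `summaryScore` between 0 and 1 for each requested attribute, which is the number publishers build their workflows on.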

Publishers can choose what they want to do with the information they get from Perspective. For example, a publisher could flag comments for its own moderators to review and decide whether to include them in a conversation. Or a publisher could provide tools to help their community understand the impact of what they are writing—by, for example, letting the commenter see the potential toxicity of their comment as they write it. Publishers could even just allow readers to sort comments by toxicity themselves, making it easier to find great discussions hidden under toxic ones.
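The flag-for-review and sort-by-toxicity ideas above can be sketched in a few lines. The comments and scores here are invented, and the 0.8 review threshold is an arbitrary choice a publisher would tune for its own community:

```python
# Hypothetical comments, each with a toxicity score (0.0-1.0)
# of the kind a model like Perspective would return.
comments = [
    {"text": "Great reporting, thanks!", "toxicity": 0.03},
    {"text": "This article is garbage and so are you.", "toxicity": 0.92},
    {"text": "I disagree with the premise here.", "toxicity": 0.12},
]

REVIEW_THRESHOLD = 0.8  # comments at or above this go to a human moderator

# Flag high-scoring comments for moderation...
needs_review = [c for c in comments if c["toxicity"] >= REVIEW_THRESHOLD]

# ...and surface the rest sorted least-toxic first for readers.
readable = sorted(
    (c for c in comments if c["toxicity"] < REVIEW_THRESHOLD),
    key=lambda c: c["toxicity"],
)

print(len(needs_review))    # 1
print(readable[0]["text"])  # Great reporting, thanks!
```

Note that the model only scores; whether a flagged comment is hidden, queued, or published stays a human and editorial decision.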

We’ve been testing a version of this technology with The New York Times, where an entire team sifts through and moderates each comment before it’s posted, reviewing an average of 11,000 comments every day. That’s a lot of comments. As a result, the Times has comments on only about 10 percent of its articles. We’ve worked together to train models that allow Times moderators to sort through comments more quickly, and we’ll work with them to enable comments on more articles every day.

Where we go from here

Perspective joins the TensorFlow library and the Cloud Machine Learning Platform as one of many new machine learning resources Google has made available to developers. This technology is still developing. But that’s what’s so great about machine learning—even though the models are complex, they’ll improve over time. When Perspective is in the hands of publishers, it will be exposed to more comments and develop a better understanding of what makes certain comments toxic.

While we improve the technology, we’re also working to expand it. Our first model is designed to spot toxic language, but over the next year we’re keen to partner and deliver new models that work in languages other than English as well as models that can identify other perspectives, such as when comments are unsubstantial or off-topic.

In the long run, Perspective is about more than just improving comments. We hope we can help improve conversations online.

Expanding Fact Checking at Google

Over the years we’ve heard from Google News users that our efforts to label stories ranging from local to satire to user-generated have helped expand their view of what is happening in the world. Last October we added a new Fact Check tag to help people find news stories that have been fact checked, so they can understand the value of what they’re reading. Soon after, we introduced the tag in France and Germany.

Starting today, people in Brazil, Mexico and Argentina can see fact check tagged articles in the expanded story box on news.google.com and in the Google News & Weather iOS and Android apps.

[Image: Fact Check in Brazil]

We’re also launching the fact check tag in these countries on news mode in Search. That means if you do a regular search and click the news tab, fact check articles will be elevated and annotated with the same fact check label that you would see in stories on Google News.

[Image: Fact Check in news mode in Search]

We’re able to do this work because the fact check industry itself has grown—there are now more than 120 organizations involved in tackling this issue—but our commitment to this area is not new. In Europe over the last couple of years we’ve been working with publishers on a number of efforts focused on fact checking. Last week, we announced CrossCheck, a joint project involving nearly 20 French newsrooms and the First Draft Coalition to debunk myths pertaining to the upcoming French elections.

In addition, as part of the Digital News Initiative Fund, we’ve provided support for more than 10 projects looking at fact checking and authentication, adding six new initiatives at the end of last year:

  • U.K.-based Full Fact is building an automated fact-checker tailored for journalists.
  • Scotland’s The Ferret is using funding to build up a formal fact checking operation in its newsroom in the wake of the EU referendum.
  • Factmata, developed at University College London and the University of Sheffield, will use machine learning to build tools that help readers better understand claims made in digital media content, such as news articles and political speech transcripts.
  • In Italy, Catchy’s team of scientists and media analysts has created Compass, a fact checking platform to call out misleading stories, rebut bad facts and connect news events to reliable information.
  • In France, Le Monde’s 13-person fact checking unit, Les Décodeurs, has received funding for its Hoaxbuster Decodex project.
  • Norway’s ambitious Leserkritikk (“Reader Critic”) project, currently running as a prototype on Dagbladet.no, lets readers give specific and structured feedback on facts, language and mistakes in published content.

These projects clearly illustrate a desire for more of this work, and we’re eager to bring the fact check tag to other countries around the world. To make this a reality, we need your help. Publishers who would like to see their work appear with the Fact Check tag should use the open ClaimReview schema from schema.org in their stories. Adding this markup allows Google to find these stories and highlight the fact checking work that has gone into them. For more information, head over to our help center.
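For publishers wondering what that markup involves, here is a minimal, illustrative ClaimReview object serialized as JSON-LD. All values are placeholders; in practice the serialized JSON is embedded in the article page inside a `<script type="application/ld+json">` tag:

```python
import json

# Illustrative ClaimReview structured data per the schema.org vocabulary.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "datePublished": "2017-02-22",
    "url": "https://example.com/fact-checks/sample",  # page carrying the fact check
    "claimReviewed": "A sample claim being checked",
    "author": {"@type": "Organization", "name": "Example Fact Check Desk"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",  # human-readable verdict
    },
}

markup = json.dumps(claim_review, indent=2)
print(markup)
```

The `claimReviewed`, `author`, and `reviewRating` fields are what let crawlers identify who checked which claim and what the verdict was.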

Project Shield: Defending Maka Angola

Rafael Marques De Morais is a journalist in Angola who runs Maka Angola, one of the largest independent news sites in the country. Operating from Rafael’s kitchen table, Maka Angola may have a small staff, but its impact in Angola is massive. Its investigative journalism, covering topics from conflict diamonds to wartime atrocities and crippling poverty, has given the citizens of Angola a platform where their voices can now be heard.

As a result of his coverage, Rafael has been threatened, thrown in jail and been the target of constant distributed denial of service (DDoS) attacks intended to take Maka Angola offline. By partnering with Jigsaw’s Project Shield, Rafael has kept his site online and its work going.

The world’s news is under threat from DDoS attacks, a simple and inexpensive way for anyone with an internet connection to take down a news organization anywhere in the world. This type of cyber attack is one of the most pernicious forms of censorship in the 21st century.

Jigsaw’s Project Shield is a free service that uses Google’s technology to protect independent news sites and human rights groups from DDoS attacks. In light of the rising threat, Google CEO Sundar Pichai announced earlier this year that Shield is available to journalists, news sites and human rights organizations around the world.

Learn more about Rafael’s story and the work of Project Shield.
