Announcing the Patent Phrase Similarity Dataset

Patent documents typically use legal and highly technical language, with context-dependent terms that can carry meanings quite different from colloquial usage, and that can even differ between documents. Searching the corpus of more than one hundred million patent documents with traditional methods (e.g., keyword searching) can be tedious and can miss relevant results due to the broad and non-standard language used. For example, a "soccer ball" may be described as a "spherical recreation device", "inflatable sportsball" or "ball for ball game". Additionally, the language in some patent documents may deliberately obfuscate terms, so more powerful natural language processing (NLP) and semantic similarity understanding can give everyone the ability to do a thorough search.

The patent domain (and more general technical literature like scientific publications) poses unique challenges for NLP modeling due to its use of legal and technical terms. While there are multiple commonly used general-purpose semantic textual similarity (STS) benchmark datasets (e.g., STS-B, SICK, MRPC, PIT), to the best of our knowledge, there are currently no datasets focused on technical concepts found in patents and scientific publications (the somewhat related BioASQ challenge contains a biomedical question answering task). Moreover, with the continuing growth in size of the patent corpus (millions of new patents are issued worldwide every year), there is a need to develop more useful NLP models for this domain.

Today, we announce the release of the Patent Phrase Similarity dataset, a new human-rated contextual phrase-to-phrase semantic matching dataset focused on technical terms from patents, along with the accompanying paper, presented at the SIGIR PatentSemTech Workshop. The Patent Phrase Similarity dataset contains ~50,000 rated phrase pairs, each with a Cooperative Patent Classification (CPC) class as context. In addition to the similarity scores typically included in other benchmark datasets, we include granular rating classes similar to WordNet, such as synonym, antonym, hypernym, hyponym, holonym, meronym, and domain related. This dataset (distributed under the Creative Commons Attribution 4.0 International license) was used by Kaggle and the USPTO as the benchmark dataset in the U.S. Patent Phrase to Phrase Matching competition to draw more attention to the performance of machine learning models on technical text. Initial results show that models fine-tuned on this new dataset perform substantially better than general pre-trained models without fine-tuning.

The Patent Phrase Similarity Dataset
To better train the next generation of state-of-the-art models, we created the Patent Phrase Similarity dataset, which includes many examples that address the following problems: (1) phrase disambiguation, (2) adversarial keyword matching, and (3) hard negative keywords (i.e., keywords that are unrelated but received a high similarity score from other models). Some keywords and phrases can have multiple meanings (e.g., the phrase "mouse" may refer to an animal or a computer input device), so we disambiguate the phrases by including a CPC class with each pair of phrases. Also, many NLP models (e.g., bag-of-words models) do poorly on data with phrases that have matching keywords but are otherwise unrelated (adversarial keywords, e.g., “container section” → “kitchen container”, “offset table” → “table fan”). The Patent Phrase Similarity dataset is designed to include many examples of unrelated phrases with matching keywords, enabling NLP models to improve their performance on adversarial keyword matching.
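One way the CPC context can be used is to fold it into the model's input so that the same phrase pair is scored differently in different technical domains. The input format below is a plausible illustration (competition entrants used various encodings), not the dataset's prescribed one:

```python
# Sketch: disambiguating a phrase pair by prepending the CPC context to the
# model input. The "[SEP]"-joined format is an illustrative assumption.

def build_input(anchor: str, target: str, cpc_context: str) -> str:
    """Concatenate the CPC context with both phrases into a single string
    that a cross-encoder style model could score."""
    return f"{cpc_context} [SEP] {anchor} [SEP] {target}"

# "mouse" in a computing class vs. an agriculture class yields different
# inputs, letting a model resolve the ambiguity from context.
print(build_input("mouse", "pointing device", "G06"))
print(build_input("mouse", "rodent trap", "A01"))
```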

Each entry in the Patent Phrase Similarity dataset contains two phrases (an anchor and a target), a context CPC class, a rating class, and a similarity score. The dataset contains 48,548 entries with 973 unique anchors, split into training (75%), validation (5%), and test (20%) sets. When splitting the data, all entries with the same anchor are kept in the same set. There are 106 different context CPC classes, all of which are represented in the training set.
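The anchor-grouped split described above can be sketched as follows; the exact procedure used to produce the official splits is not specified here, so treat the shuffling and rounding as illustrative:

```python
import random

def split_by_anchor(entries, train=0.75, val=0.05, seed=0):
    """Assign every entry to train/val/test so that all entries sharing an
    anchor phrase land in the same split (a sketch of the grouping
    constraint; the released dataset ships with official splits)."""
    anchors = sorted({e["anchor"] for e in entries})
    random.Random(seed).shuffle(anchors)
    n = len(anchors)
    n_train, n_val = int(n * train), int(n * val)
    split_of = {a: ("train" if i < n_train else
                    "val" if i < n_train + n_val else "test")
                for i, a in enumerate(anchors)}
    return {s: [e for e in entries if split_of[e["anchor"]] == s]
            for s in ("train", "val", "test")}

# Toy usage: 20 anchors, two targets each; no anchor crosses splits.
entries = [{"anchor": f"a{i}", "target": t}
           for i in range(20) for t in ("x", "y")]
splits = split_by_anchor(entries)
```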

Anchor           Target              Context  Rating          Score
acid absorption  absorption of acid  B08      exact           1.0
acid absorption  acid immersion      B08      synonym         0.75
acid absorption  chemically soaked   B08      domain related  0.25
acid absorption  acid reflux         B08      not related     0.0
gasoline blend   petrol blend        C10      synonym         0.75
gasoline blend   fuel blend          C10      hypernym        0.5
gasoline blend   fruit blend         C10      not related     0.0
faucet assembly  water tap           A22      hyponym         0.5
faucet assembly  water supply        A22      holonym         0.25
faucet assembly  school assembly     A22      not related     0.0
A small sample of the dataset with anchor and target phrases, context CPC class (B08: Cleaning; C10: Petroleum, gas, fuel, lubricants; A22: Butchering, processing meat/poultry/fish), a rating class, and a similarity score.
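The scores in the sample rows appear to follow a fixed mapping from rating class to similarity score. The lookup below is inferred from the rows shown (illustrative, not an official specification of the dataset):

```python
# Rating-class to similarity-score mapping as it appears in the sample
# above; symmetric relations get 0.75, hierarchy relations 0.5, and
# looser part/domain relations 0.25.
RATING_SCORE = {
    "exact": 1.0,
    "synonym": 0.75,
    "hypernym": 0.5,
    "hyponym": 0.5,
    "holonym": 0.25,
    "meronym": 0.25,
    "domain related": 0.25,
    "not related": 0.0,
}

print(RATING_SCORE["synonym"])      # score for "gasoline blend" / "petrol blend"
print(RATING_SCORE["not related"])  # score for "faucet assembly" / "school assembly"
```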

Generating the Dataset
To generate the Patent Phrase Similarity data, we first process the ~140 million patent documents in the Google Patents corpus and automatically extract important English phrases, which are typically noun phrases (e.g., “fastener”, “lifting assembly”) and functional phrases (e.g., “food processing”, “ink printing”). Next, we filter for phrases that appear in at least 100 patents and randomly sample around 1,000 of these filtered phrases, which we call anchor phrases. For each anchor phrase, we find all of the matching patents and all of the CPC classes of those patents. We then randomly sample up to four matching CPC classes, which become the context CPC classes for that anchor phrase.

We use two different methods for pre-generating target phrases: (1) partial matching and (2) a masked language model (MLM). For partial matching, we randomly select phrases from the entire corpus that partially match the anchor phrase (e.g., “abatement” → “noise abatement”, “material formation” → “formation material”). For MLM, we select sentences from patents that contain a given anchor phrase, mask out the anchor phrase, and use the Patent-BERT model to predict candidates for the masked portion of the text. All of the phrases are then cleaned up, which includes lowercasing and the removal of punctuation and certain stopwords (e.g., "and", "or", "said"), and sent to expert raters for review. Each phrase pair is rated independently by two raters skilled in the technology area. Each rater also generates new target phrases with different ratings: specifically, they are asked to generate some low-similarity and unrelated targets that partially match the original anchor, and/or some high-similarity targets. Finally, the raters meet to discuss their ratings and agree on final ratings.
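The partial-matching step can be sketched as selecting corpus phrases that share a content word with the anchor. The hypothetical `partial_matches` helper below is a simplification for illustration; the real pipeline samples from the full ~140-million-document corpus:

```python
# Sketch of partial matching: keep corpus phrases that share at least one
# word with the anchor but are not identical to it.
def partial_matches(anchor: str, corpus_phrases: list[str]) -> list[str]:
    anchor_words = set(anchor.lower().split())
    return [p for p in corpus_phrases
            if p.lower() != anchor.lower()
            and anchor_words & set(p.lower().split())]

phrases = ["noise abatement", "formation material", "ink printing", "abatement"]
print(partial_matches("abatement", phrases))  # ['noise abatement']
```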

Dataset Evaluation
To evaluate model performance on the Patent Phrase Similarity dataset, we used it in the U.S. Patent Phrase to Phrase Matching Kaggle competition. The competition was very popular, drawing about 2,000 competitors from around the world. A variety of approaches were successfully used by the top-scoring teams, including ensembles of BERT variants and prompting (see the full discussion for more details). The table below shows the best results from the competition, as well as several off-the-shelf baselines from our paper. We used the Pearson correlation metric to measure the linear correlation between predicted and true scores, a helpful target metric for downstream models, which need to distinguish between different similarity ratings.

The baselines in the paper can be considered zero-shot in the sense that they use off-the-shelf models without any further fine-tuning on the new dataset (we use these models to embed the anchor and target phrases separately and compute the cosine similarity between them). The Kaggle competition results demonstrate that by using our training data, one can achieve significant improvements compared with existing NLP models. We have also estimated human performance on this task by comparing a single rater’s scores to the combined score of both raters. The results indicate that this is not a particularly easy task, even for human experts.
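The zero-shot baselines embed the anchor and target separately and score them with cosine similarity, and the evaluation computes the Pearson correlation between predicted and true scores. A minimal, dependency-free sketch of both computations (the toy vectors stand in for real model embeddings):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

def pearson(xs, ys):
    """Pearson correlation between predicted and true scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy (anchor, target) embedding pairs and their human-rated scores.
pairs = [((1.0, 0.1), (0.9, 0.2)),
         ((1.0, 0.1), (0.5, 0.8)),
         ((1.0, 0.1), (-0.9, 0.1))]
true_scores = [1.0, 0.5, 0.0]
predicted = [cosine(a, t) for a, t in pairs]
correlation = pearson(predicted, true_scores)  # metric reported in the table
```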

Model                      Training    Pearson correlation
word2vec                   Zero-shot   0.44
Patent-BERT                Zero-shot   0.53
Sentence-BERT              Zero-shot   0.60
Kaggle 1st place single    Fine-tuned  0.87
Kaggle 1st place ensemble  Fine-tuned  0.88
Human                      N/A         0.93
Performance of popular models with no fine-tuning (zero-shot), models fine-tuned on the Patent Phrase Similarity dataset as part of the Kaggle competition, and single human performance.

Conclusion and Future Work
We present the Patent Phrase Similarity dataset, which was used as the benchmark dataset in the U.S. Patent Phrase to Phrase Matching competition, and demonstrate that by using our training data, one can achieve significant improvements compared with existing NLP models.

Additional challenging machine learning benchmarks can be generated from the patent corpus, and patent data has already made its way into many of today's most-studied models. For example, the C4 text dataset used to train T5 contains many patent documents. The BigBird and LongT5 models also use patents via the BIGPATENT dataset. The availability, breadth, and open usage terms of full-text data (see Google Patents Public Datasets) make patents a unique resource for the research community. Possibilities for future tasks include massively multi-label classification, summarization, information retrieval, image-text similarity, citation graph prediction, and translation. See the paper for more details.

Acknowledgements
This work was possible through a collaboration with Kaggle, Satsyil Corp., USPTO, and MaxVal. Thanks to contributors Ian Wetherbee from Google, Will Cukierski and Maggie Demkin from Kaggle. Thanks to Jerry Ma, Scott Beliveau, and Jamie Holcombe from USPTO and Suja Chittamahalingam from MaxVal for their contributions.

Source: Google AI Blog


Improving patent quality one search at a time

Good patents support innovation while bad patents hinder it. Bad patents drive up costs for innovative companies that must choose between paying undeserved license fees or staggering litigation costs. That’s why today we are excited to launch a new version of Google Patents, which has the power to improve patent quality by helping experts and the public find the most relevant references for judging whether a patent is valid.

The ability to search for the most relevant references, the best prior art, is more important today than ever. Patent filings have steadily increased, with 600,000 applications filed and 300,000 patents issued in 2014 alone. At the same time, litigation rates are continuing their dramatic climb, with patent trolls bringing the majority of cases, hitting companies of every size in industries from high-tech to main street.

Traditional searches often focus on other patents. But the best prior art might be a harder-to-find book, article, or manual. That was true in the “shopping cart” patent case. After many companies paid out millions in settlements, a court finally struck down the patent in light of two books that were not found by the examiner who issued the patent.

The new Google Patents helps users find non-patent prior art by cataloguing it, using the same scheme that applies to patents. We’ve trained a machine classification model to classify everything found in Google Scholar using Cooperative Patent Classification codes. Now users can search for “autonomous vehicles” or “email encryption” and find prior art across patents, technical journals, scientific books, and more.

We’ve also simplified the interface, giving users one location for all patent-related searching, with intuitive search fields. And thanks to Google Translate, users can search foreign patent documents using English keywords. As we said in our May 2015 comments on the PTO’s Patent Quality Initiative, we hope this tool will make patent examination more efficient and help stop bad patents from issuing, which would be good for innovation and benefit the public.

Posted by Allen Lo, Deputy General Counsel for Patents and Ian Wetherbee, Software Engineer for Google Patents

Announcing the Patent Purchase Promotion

We invite you to sell us your patents. The Patent Purchase Promotion is an experimental marketplace for patents that’s simple, easy to use, and fast.

Patent owners sell patents for numerous reasons (such as the need to raise money or changes in a company’s business direction). Unfortunately, the usual patent marketplace can sometimes be challenging, especially for smaller participants who sometimes end up working with patent trolls. Then bad things happen, like lawsuits, lots of wasted effort, and generally bad karma. Rarely does this provide any meaningful benefit to the original patent owner.

So today we’re announcing the Patent Purchase Promotion as an experiment to remove friction from the patent market. From May 8, 2015 through May 22, 2015, we’ll open a streamlined portal for patent holders to tell Google about patents they’re willing to sell at a price they set. As soon as the portal closes, we’ll review all the submissions, and let the submitters know whether we’re interested in buying their patents by June 26, 2015. If we contact you about purchasing your patent, we’ll work through some additional diligence with you and look to close a transaction in short order. We anticipate everyone we transact with getting paid by late August.

By simplifying the process and having a concentrated submission window, we can focus our efforts into quickly evaluating patent assets and getting responses back to potential sellers quickly. Hopefully this will translate into better experiences for sellers, and remove the complications of working with entities such as patent trolls.

There’s some fine print that you absolutely want to make sure you fully understand before participating, and we encourage participants to speak with an attorney. More detailed information about the Patent Purchase Promotion is available on our Patent Website, including all the fine print, the form to make a submission (which won’t go live until May 8), and details about what happens if Google agrees to buy your patent. Throughout this process, Google reserves the right to not transact for any reason. 

We’re always looking for ways to help improve the patent landscape and make the patent system work better for everyone. We ask everyone to remember that this program is an experiment (think of it like a 20 percent project for Google’s patent lawyers), but we hope that it proves useful and delivers great results to participants.