Rule-Based Matching
Compared to using regular expressions on raw text, spaCy’s rule-based matcher engines and
components not only let you find the words and phrases you’re looking for – they also give you
access to the tokens within the document and their relationships. This means you can easily access
and analyze the surrounding tokens, merge spans into single tokens or add entries to the named
entities in doc.ents .
For complex tasks, it’s usually better to train a statistical entity recognition model. However,
statistical models require training data, so for many situations, rule-based approaches are
more practical. This is especially true at the start of a project: you can use a rule-based
approach as part of a data collection process, to help you “bootstrap” a statistical model.
Training a model is useful if you have some examples and you want your system to be able to
generalize based on those examples. It works especially well if there are clues in the local
context. For instance, if you’re trying to detect person or company names, your application
may benefit from a statistical named entity recognition model.
Rule-based systems are a good choice if there’s a more or less finite number of examples that
you want to find in the data, or if there’s a very clear, structured pattern you can express with
token rules or regular expressions. For instance, country names, IP addresses or URLs are
things you might be able to handle well with a purely rule-based approach.
You can also combine both approaches and improve a statistical model with rules to handle
very specific cases and boost accuracy. For details, see the section on rule-based entity
recognition.
When should I use the token matcher vs. the phrase matcher?
The PhraseMatcher is useful if you already have a large terminology list or gazetteer
consisting of single or multi-token phrases that you want to find exact instances of in your
data. As of spaCy v2.1.0, you can also match on the LOWER attribute for fast and case-
insensitive matching.
The Matcher isn’t as blazing fast as the PhraseMatcher , since it compares across individual
token attributes. However, it allows you to write very abstract representations of the tokens
you’re looking for, using lexical attributes, linguistic features predicted by the model,
operators, set membership and rich comparison. For example, you can find a noun, followed
by a verb with the lemma “love” or “like”, followed by an optional determiner and another
token that’s at least 10 characters long.
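For example, a token pattern along those lines might look like this (a sketch of the description above, not an official example):

pattern = [
    {"POS": "NOUN"},                                     # a noun
    {"POS": "VERB", "LEMMA": {"IN": ["love", "like"]}},  # a verb with the lemma "love" or "like"
    {"POS": "DET", "OP": "?"},                           # an optional determiner
    {"LENGTH": {">=": 10}},                              # a token at least 10 characters long
]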
Token-based matching
spaCy features a rule-matching engine, the Matcher , that operates over tokens, similar to
regular expressions. The rules can refer to token annotations (e.g. the token text or tag_ , and
flags like IS_PUNCT ). The rule matcher also lets you pass in a custom callback to act on matches –
for example, to merge entities and apply custom labels. You can also associate patterns with entity
IDs, to allow some basic entity linking or disambiguation. To match large terminology lists, you can
use the PhraseMatcher , which accepts Doc objects as match patterns.
Adding patterns
Let’s say we want to enable spaCy to find a combination of three tokens:

● A token whose lowercase form matches “hello”, e.g. “Hello” or “HELLO”.
● A token whose is_punct flag is set to True, i.e. any punctuation.
● A token whose lowercase form matches “world”, e.g. “World” or “WORLD”.
When writing patterns, keep in mind that each dictionary represents one token. If spaCy’s
tokenization doesn’t match the tokens defined in a pattern, the pattern is not going to produce
any results. When developing complex patterns, make sure to check examples against spaCy’s
tokenization:
First, we initialize the Matcher with a vocab. The matcher must always share the same vocab with
the documents it will operate on. We can now call matcher.add() with an ID and a list of
patterns.
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
# Add match ID "HelloWorld" with no callback and one pattern
pattern = [{"LOWER": "hello"}, {"IS_PUNCT": True}, {"LOWER": "world"}]
matcher.add("HelloWorld", [pattern])

doc = nlp("Hello, world! Hello world!")
matches = matcher(doc)
for match_id, start, end in matches:
    string_id = nlp.vocab.strings[match_id]  # Get string representation
    span = doc[start:end]  # The matched span
    print(match_id, string_id, start, end, span.text)
The matcher returns a list of (match_id, start, end) tuples – in this case,
[(15578876784678163569, 0, 3)] , which maps to the span doc[0:3] of our original
document. The match_id is the hash value of the string ID “HelloWorld”. To get the string value,
you can look up the ID in the StringStore .
Optionally, we could also choose to add more than one pattern, for example to also match
sequences without punctuation between “hello” and “world”:
patterns = [
[{"LOWER": "hello"}, {"IS_PUNCT": True}, {"LOWER": "world"}],
[{"LOWER": "hello"}, {"LOWER": "world"}]
]
matcher.add("HelloWorld", patterns)
By default, the matcher will only return the matches and not do anything else, like merge entities
or assign labels. This is all up to you and can be defined individually for each pattern, by passing in
a callback function as the on_match argument on add() . This is useful, because it lets you write
entirely custom and pattern-specific logic. For example, you might want to merge some patterns
into one token, while adding entity labels for other pattern types. You shouldn’t have to create
different matchers for each of those processes.
The available token pattern keys correspond to a number of Token attributes, for example:

ATTRIBUTE                             DESCRIPTION
POS, TAG, MORPH, DEP, LEMMA, SHAPE    The token’s simple and extended part-of-speech tag, morphological
                                      analysis, dependency label, lemma, shape. Note that the values of these
                                      attributes are case-sensitive. For a list of available part-of-speech tags and
                                      dependency labels, see the Annotation Specifications . TYPE: str
Does it matter if the attribute names are uppercase or lowercase?
No, it shouldn’t. spaCy will normalize the names internally and {"LOWER": "text"} and
{"lower": "text"} will both produce the same result. Using the uppercase version is
mostly a convention to make it clear that the attributes are “special” and don’t exactly map to
the token attributes like Token.lower and Token.lower_ .
spaCy can’t provide access to all of the attributes because the Matcher loops over the Cython
data, not the Python objects. Inside the matcher, we’re dealing with a TokenC struct – we
don’t have an instance of Token . This means that all of the attributes that refer to
computed properties can’t be accessed.
The uppercase attribute names like LOWER or IS_PUNCT refer to symbols from the
spacy.attrs enum table. They’re passed into a function that essentially is a big
case/switch statement, to figure out which struct field to return. The same attribute identifiers
are used in Doc.to_array , and a few other places in the code where you need to
describe fields like this.
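For example (a small illustration, not taken from this page), the same attribute identifiers from spacy.attrs can be passed to Doc.to_array :

import spacy
from spacy.attrs import LOWER, IS_PUNCT

nlp = spacy.blank("en")
doc = nlp("Hello, world!")
# Export the same attribute IDs used in match patterns to a numpy array,
# one row per token and one column per attribute
array = doc.to_array([LOWER, IS_PUNCT])
print(array.shape)  # (4, 2)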
In addition to mapping to a single value, token attributes can also map to a dictionary of properties, for example to specify set membership or rich comparison:

ATTRIBUTE           DESCRIPTION
IN                  Attribute value is a member of a list. TYPE: Any
NOT_IN              Attribute value is not a member of a list. TYPE: Any
IS_SUBSET           Attribute value (for MORPH or custom list attributes) is a subset of a list. TYPE: Any
IS_SUPERSET         Attribute value (for MORPH or custom list attributes) is a superset of a list. TYPE: Any
INTERSECTS          Attribute value (for MORPH or custom list attributes) has a non-empty intersection with a list. TYPE: Any
==, >=, <=, >, <    Attribute value is equal, greater or equal, smaller or equal, greater or smaller. TYPE: Union[int, float]
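As a small sketch of how these properties can be used in a token pattern (the attribute values below are illustrative assumptions):

pattern = [
    # Token whose lemma is neither "have" nor "be"
    {"LEMMA": {"NOT_IN": ["have", "be"]}},
    # Token whose morphological features include Number=Sing and Person=3
    {"MORPH": {"IS_SUPERSET": ["Number=Sing", "Person=3"]}},
    # Token that is at least 3 characters long
    {"LENGTH": {">=": 3}},
]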
Regular expressions
In some cases, only matching tokens and token attributes isn’t enough – for example, you might
want to match different spellings of a word, without having to add a new pattern for each spelling.
The REGEX operator allows defining rules for any attribute string value, including custom
attributes. It always needs to be applied to an attribute like TEXT , LOWER or TAG :
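For example, a pattern along these lines could match different spellings of “United States president” (a sketch, not necessarily the exact example from the spaCy docs):

pattern = [
    {"TEXT": {"REGEX": "^[Uu](\\.?|nited)$"}},   # "United", "U." or "U"
    {"TEXT": {"REGEX": "^[Ss](\\.?|tates)$"}},   # "States", "S." or "S"
    {"LOWER": "president"},
]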
Important note
When using the REGEX operator, keep in mind that it operates on single tokens, not the whole
text. Each expression you provide will be matched on a token. If you need to match on the
whole text instead, see the details on regex matching on the whole text.
If your expressions apply to multiple tokens, a simple solution is to match on the doc.text with
re.finditer and use the Doc.char_span method to create a Span from the character
indices of the match. If the matched characters don’t map to one or more valid tokens,
Doc.char_span returns None .
In the example, the expression will also match "US" in "USA" . However, "USA" is a single token and
Span objects are sequences of tokens. So "US" cannot be its own span, because it does not end on
a token boundary.
import spacy
import re

nlp = spacy.load("en_core_web_sm")
doc = nlp("The United States of America (USA) are commonly known as the United States (U.S. or US) or America.")

expression = r"[Uu](nited|\.?) ?[Ss](tates|\.?)"
for match in re.finditer(expression, doc.text):
    start, end = match.span()
    span = doc.char_span(start, end)
    # This is a Span object, or None if the match doesn't map to valid token boundaries
    if span is not None:
        print("Found match:", span.text)
In some cases, you might want to expand the match to the closest token boundaries, so you
can create a Span for "USA" , even though only the substring "US" is matched. You can
calculate this using the character offsets of the tokens in the document, available as
Token.idx . This lets you create a list of valid token start and end boundaries and leaves
you with a rather basic algorithmic problem: Given a number, find the next lowest (start token)
or the next highest (end token) number that’s part of a given list of numbers. This will be the
closest valid token boundary.
There are many ways to do this and the most straightforward one is to create a dict keyed by
characters in the Doc , mapped to the token they’re part of. It’s easy to write and less error-
prone, and gives you a constant lookup time: you only ever need to create the dict once per
Doc .
chars_to_tokens = {}
for token in doc:
    for i in range(token.idx, token.idx + len(token.text)):
        chars_to_tokens[i] = token.i
You can then look up the character at a given position and get the index of the corresponding
token that the character is part of. Your span would then be doc[token_start:token_end] .
If a character isn’t in the dict, it means it’s the (white)space that tokens are split on. That hopefully
shouldn’t happen, though, because it’d mean your regex is producing matches with leading or
trailing whitespace.
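Building on the doc from the regex example and the chars_to_tokens dict above, a small sketch of expanding a character-level match to the closest token boundaries could look like this (the regular expression is just for illustration):

import re

match = re.search(r"US", doc.text)  # matches "US" inside the token "USA"
if match is not None:
    token_start = chars_to_tokens.get(match.start())
    token_end = chars_to_tokens.get(match.end() - 1)
    if token_start is not None and token_end is not None:
        # Expand to the closest token boundaries, i.e. the full "USA" token
        span = doc[token_start:token_end + 1]
        print("Expanded span:", span.text)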
The OP key lets you define quantifiers for a token pattern, e.g. to make it optional or to match it one or more times:

OP    DESCRIPTION
!     Negate the pattern, by requiring it to match exactly 0 times.
?     Make the pattern optional, by allowing it to match 0 or 1 times.
+     Require the pattern to match 1 or more times.
*     Allow the pattern to match 0 or more times.
Validating and debugging patterns

The Matcher can validate patterns against a JSON schema with the option validate=True . This is useful for debugging patterns during development:
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab, validate=True)
# Add match ID "HelloWorld" with unsupported attribute CASEINSENSITIVE
pattern = [{"LOWER": "hello"}, {"IS_PUNCT": True}, {"CASEINSENSITIVE": "world"}]
matcher.add("HelloWorld", [pattern])
# Raises an error:
# MatchPatternError: Invalid token patterns for matcher rule 'HelloWorld'
# Pattern 0:
# - [pattern -> 2 -> CASEINSENSITIVE] extra fields not permitted
from spacy.lang.en import English
from spacy.matcher import Matcher

nlp = English()
matcher = Matcher(nlp.vocab)
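Building on the matcher initialized above, a minimal sketch of an on_match callback that labels matches as entities could look like this (the pattern, label and example text are illustrative assumptions):

from spacy.tokens import Span

def add_event_ent(matcher, doc, i, matches):
    # Create a Span for the current match and add it to the doc's entities
    match_id, start, end = matches[i]
    entity = Span(doc, start, end, label="EVENT")
    doc.ents = list(doc.ents) + [entity]
    print(entity.text)

# Hypothetical pattern: match the phrase "Google I/O" token by token
pattern = [{"ORTH": "Google"}, {"ORTH": "I"}, {"ORTH": "/"}, {"ORTH": "O"}]
matcher.add("GoogleIO", [pattern], on_match=add_event_ent)
doc = nlp("This is a text about Google I/O")
matches = matcher(doc)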
A very similar logic has been implemented in the built-in EntityRuler by the way. It also
takes care of handling overlapping matches, which you would otherwise have to take care of
yourself.
When working with entities, you can use displaCy to quickly generate a NER visualization from
your updated Doc , which can be exported as an HTML file:
For more info and examples, see the usage guide on visualizing spaCy.
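For example, a minimal sketch of exporting the visualization (the example text and file name are placeholders):

import spacy
from spacy import displacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Barack Obama was the 44th president of the United States")
# Render the named entities and write a standalone HTML page
html = displacy.render(doc, style="ent", page=True)
with open("entities.html", "w", encoding="utf8") as f:
    f.write(html)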
We can now call the matcher on our documents. The patterns will be matched in the order they
occur in the text. The matcher will then iterate over the matches, look up the callback for the match
ID that was matched, and invoke it.
doc = nlp(YOUR_TEXT_HERE)
matcher(doc)
When the callback is invoked, it is passed four arguments: the matcher itself, the document, the
position of the current match, and the total list of matches. This allows you to write callbacks that
consider the entire set of matched phrases, so that you can resolve overlaps and other conflicts in
whatever way you prefer.
ARGUMENT   DESCRIPTION
matcher    The matcher instance. TYPE: Matcher
doc        The document the matcher was used on. TYPE: Doc
i          Index of the current match ( matches[i] ). TYPE: int
matches    A list of (match_id, start, end) tuples, describing the matches. A match tuple describes a span doc[start:end] . TYPE: List[Tuple[int, int, int]]
import spacy
from spacy.matcher import Matcher
from spacy.tokens import Span

nlp = spacy.blank("en")
matcher = Matcher(nlp.vocab)
matcher.add("PERSON", [[{"lower": "barack"}, {"lower": "obama"}]])
doc = nlp("Barack Obama was the 44th president of the United States")

matches = matcher(doc)
for match_id, start, end in matches:
    # Create the matched span and assign the match_id as a label
    span = Span(doc, start, end, label=match_id)
    print(span.text, span.label_)
import spacy
from spacy.language import Language
from spacy.matcher import Matcher
from spacy.tokens import Token

# We're using a component factory because the component needs to be
# initialized with the shared vocab via the nlp object
@Language.factory("html_merger")
def create_bad_html_merger(nlp, name):
    return BadHTMLMerger(nlp.vocab)

class BadHTMLMerger:
    def __init__(self, vocab):
        patterns = [
            [{"ORTH": "<"}, {"LOWER": "br"}, {"ORTH": ">"}],
            [{"ORTH": "<"}, {"LOWER": "br/"}, {"ORTH": ">"}],
        ]
        # Register a new token extension to flag bad HTML
        Token.set_extension("bad_html", default=False)
        self.matcher = Matcher(vocab)
        self.matcher.add("BAD_HTML", patterns)

    def __call__(self, doc):
        # This method is invoked when the component is called on a Doc
        matches = self.matcher(doc)
        spans = []  # Collect the matched spans here
        for match_id, start, end in matches:
            spans.append(doc[start:end])
        with doc.retokenize() as retokenizer:
            for span in spans:
                retokenizer.merge(span)
                for token in span:
                    token._.bad_html = True  # Mark token as bad HTML
        return doc

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("html_merger", last=True)  # Add component to the pipeline
doc = nlp("Hello<br>world! <br/> This is a test.")
for token in doc:
    print(token.text, token._.bad_html)
Instead of hard-coding the patterns into the component, you could also make it take a path to a
JSON file containing the patterns. This lets you reuse the component with different patterns,
depending on your application. When adding the component to the pipeline with
nlp.add_pipe , you can pass in the argument via the config :
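A possible sketch of this approach (the factory name, file path and pattern file format below are placeholders, not part of spaCy):

import json
import spacy
from spacy.language import Language
from spacy.matcher import Matcher
from spacy.tokens import Doc
from spacy.util import filter_spans

# Hypothetical factory that reads its token patterns from a JSON file;
# the "path" setting is exposed via the component config
@Language.factory("pattern_merger", default_config={"path": None})
def create_pattern_merger(nlp, name, path: str):
    with open(path, encoding="utf8") as f:
        patterns = json.load(f)  # expects a list of token pattern lists
    matcher = Matcher(nlp.vocab)
    matcher.add("PATTERNS", patterns)

    def pattern_merger(doc: Doc) -> Doc:
        spans = [doc[start:end] for _, start, end in matcher(doc)]
        with doc.retokenize() as retokenizer:
            # filter_spans avoids errors from overlapping matches
            for span in filter_spans(spans):
                retokenizer.merge(span)
        return doc

    return pattern_merger

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("pattern_merger", config={"path": "/path/to/patterns.json"})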
Processing pipelines
For more details and examples of how to create custom pipeline components and extension
attributes, see the usage guide.
[{"LOWER": "facebook"}, {"LEMMA": "be"}, {"POS": "ADV", "OP": "*"}, {"POS": "ADJ"
This translates to a token whose lowercase form matches “facebook” (like Facebook, facebook or
FACEBOOK), followed by a token with the lemma “be” (for example, is, was, or ‘s), followed by an
optional adverb, followed by an adjective. Using the linguistic annotations here is especially useful,
because you can tell spaCy to match “Facebook’s annoying”, but not “Facebook’s annoying ads”.
The optional adverb makes sure you won’t miss adjectives with intensifiers, like “pretty awful” or
“very nice”.
To get a quick overview of the results, you could collect all sentences containing a match and
render them with the displaCy visualizer. In the callback function, you’ll have access to the start
and end of each match, as well as the parent Doc . This lets you determine the sentence
containing the match, doc[start:end].sent , and calculate the start and end of the matched
span within the sentence. Using displaCy in “manual” mode lets you pass in a list of dictionaries
containing the text and entities to render.
import spacy
from spacy import displacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
matched_sents = []  # Collect data of matched sentences to be visualized

def collect_sents(matcher, doc, i, matches):
    match_id, start, end = matches[i]
    span = doc[start:end]  # Matched span
    sent = span.sent  # Sentence containing matched span
    # Append mock entity for match in displaCy style to matched_sents
    # get the match span by offsetting the start and end of the span with the
    # start and end of the sentence in the doc
    match_ents = [{
        "start": span.start_char - sent.start_char,
        "end": span.end_char - sent.start_char,
        "label": "MATCH",
    }]
    matched_sents.append({"text": sent.text, "ents": match_ents})

pattern = [{"LOWER": "facebook"}, {"LEMMA": "be"}, {"POS": "ADV", "OP": "*"}, {"POS": "ADJ"}]
matcher.add("FacebookIs", [pattern], on_match=collect_sents)  # add pattern
doc = nlp("I'd say that Facebook is evil. – Facebook is pretty cool, right?")
matches = matcher(doc)

# Visualize the sentences containing a match with displaCy
# set manual=True to make displaCy render straight from a dictionary
displacy.render(matched_sents, style="ent", manual=True)
Example: Phone numbers

Phone numbers can have many different formats, and matching them is often tricky: during tokenization, spaCy leaves sequences of numbers intact and only splits on whitespace and punctuation, so your match pattern has to look out for number sequences of a certain length, surrounded by specific punctuation.

The IS_DIGIT flag is not very helpful here, because it doesn’t tell us anything about the length.
However, you can use the SHAPE flag, with each d representing a digit (up to 4 digits /
characters):
[{"ORTH": "("}, {"SHAPE": "ddd"}, {"ORTH": ")"}, {"SHAPE": "dddd"},
{"ORTH": "-", "OP": "?"}, {"SHAPE": "dddd"}]
This will match phone numbers of the format (123) 4567 8901 or (123) 4567-8901. To also match
formats like (123) 456 789, you can add a second pattern using 'ddd' in place of 'dddd' . By
hard-coding some values, you can match only certain, country-specific numbers. For example,
here’s a pattern to match the most common formats of international German numbers:
[{"ORTH": "+"}, {"ORTH": "49"}, {"ORTH": "(", "OP": "?"}, {"SHAPE": "dddd"},
{"ORTH": ")", "OP": "?"}, {"SHAPE": "dddd", "LENGTH": 6}]
Depending on the formats your application needs to match, creating an extensive set of rules like
this is often better than training a model. It’ll produce more predictable results, is much easier to
modify and extend, and doesn’t require any training data – only a set of test cases.
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
pattern = [{"ORTH": "("}, {"SHAPE": "ddd"}, {"ORTH": ")"}, {"SHAPE": "ddd"},
           {"ORTH": "-", "OP": "?"}, {"SHAPE": "ddd"}]
matcher.add("PHONE_NUMBER", [pattern])

doc = nlp("Call me at (123) 456 789 or (123) 456 789!")
for match_id, start, end in matcher(doc):
    span = doc[start:end]
    print(span.text)
Example: Hashtags and emoji on social media
Social media posts, especially tweets, can be difficult to work with. They’re very short and often
contain various emoji and hashtags. By only looking at the plain text, you’ll lose a lot of valuable
semantic information.
Let’s say you’ve extracted a large sample of social media posts on a specific topic, for example posts
mentioning a brand name or product. As the first step of your data exploration, you want to filter
out posts containing certain emoji and use them to assign a general sentiment score, based on
whether the expressed emotion is positive or negative, e.g. 🙂 or 🙁. You also want to find, merge
and label hashtags like #MondayMotivation , to be able to ignore or analyze them later.
Ultimately, sentiment analysis is not always that easy. In addition to the emoji, you’ll also want to take
specific words into account and check the subtree for intensifiers like “very”, to increase the
sentiment score. At some point, you might also want to train a sentiment model. However, the
approach described in this example is very useful for bootstrapping rules to collect training data. It’s
also an incredibly fast way to gather first insights into your data – with about 1 million tweets, you’d be
looking at a processing time of under 1 minute.
By default, spaCy’s tokenizer will split emoji into separate tokens. This means that you can create a
pattern for one or more emoji tokens. Valid hashtags usually consist of a # , plus a sequence of
ASCII characters with no whitespace, making them easy to match as well.
import spacy
from spacy.matcher import Matcher
from spacy.tokens import Doc

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)

# Store the running sentiment score on a custom Doc attribute
Doc.set_extension("sentiment", default=0.0)

pos_emoji = ["😀", "😃", "😂", "🤣", "😊", "😍"]  # Positive emoji
neg_emoji = ["😞", "😠", "😩", "😢", "😭", "😒"]  # Negative emoji

# One pattern per emoji: a single token whose verbatim text is the emoji
pos_patterns = [[{"ORTH": emoji}] for emoji in pos_emoji]
neg_patterns = [[{"ORTH": emoji}] for emoji in neg_emoji]

def label_sentiment(matcher, doc, i, matches):
    # The same callback handles both labels: add or subtract 0.1 points
    match_id, start, end = matches[i]
    if doc.vocab.strings[match_id] == "HAPPY":
        doc._.sentiment += 0.1
    elif doc.vocab.strings[match_id] == "SAD":
        doc._.sentiment -= 0.1

matcher.add("HAPPY", pos_patterns, on_match=label_sentiment)  # Add positive pattern
matcher.add("SAD", neg_patterns, on_match=label_sentiment)    # Add negative pattern

# Add pattern for valid hashtag, i.e. '#' plus any ASCII token
matcher.add("HASHTAG", [[{"ORTH": "#"}, {"IS_ASCII": True}]])
Because the on_match callback receives the ID of each match, you can use the same function to
handle the sentiment assignment for both the positive and negative pattern. To keep it simple, we’ll
either add or subtract 0.1 points – this way, the score will also reflect combinations of emoji, even
positive and negative ones.
With a library like Emojipedia , we can also retrieve a short description for each emoji – for
example, 😍’s official title is “Smiling Face With Heart-Eyes”. Assigning it to a custom attribute on
the emoji span will make it available as span._.emoji_desc .
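A small sketch of how that could look (the EMOJI_DESCRIPTIONS dict stands in for an actual lookup via an emoji library, and the callback name is an assumption):

from spacy.tokens import Span

# Stand-in for a lookup via a library such as Emojipedia
EMOJI_DESCRIPTIONS = {"😍": "Smiling Face With Heart-Eyes", "😀": "Grinning Face"}

Span.set_extension("emoji_desc", default=None)

def label_emoji(matcher, doc, i, matches):
    match_id, start, end = matches[i]
    span = doc[start:end]
    # Assign the description to the custom attribute on the emoji span
    span._.emoji_desc = EMOJI_DESCRIPTIONS.get(span.text)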
To label the hashtags, we can use a custom attribute set on the respective token:
import spacy
from spacy.matcher import Matcher
from spacy.tokens import Token

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
# Add pattern for valid hashtag, i.e. '#' plus any ASCII token
matcher.add("HASHTAG", [[{"ORTH": "#"}, {"IS_ASCII": True}]])
# Register a custom token attribute to flag hashtags
Token.set_extension("is_hashtag", default=False)
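Building on the matcher and the is_hashtag extension registered above, a minimal sketch of merging the matched hashtags and flagging their tokens could look like this (the example text is made up):

doc = nlp("Hello world 😀 #MondayMotivation")
hashtags = [doc[start:end] for match_id, start, end in matcher(doc)]
with doc.retokenize() as retokenizer:
    for span in hashtags:
        retokenizer.merge(span)        # Merge "#" + word into a single token
        for token in span:
            token._.is_hashtag = True  # Flag the merged hashtag token

for token in doc:
    print(token.text, token._.is_hashtag)

Efficient phrase matching

If you need to match large terminology lists, you can also use the PhraseMatcher and create Doc objects instead of token patterns, which is much more efficient overall: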
import spacy
from spacy.matcher import PhraseMatcher
nlp = spacy.load("en_core_web_sm")
matcher = PhraseMatcher(nlp.vocab)
terms = ["Barack Obama", "Angela Merkel", "Washington, D.C."]
# Only run nlp.make_doc to speed things up
patterns = [nlp.make_doc(text) for text in terms]
matcher.add("TerminologyList", patterns)
doc = nlp("German Chancellor Angela Merkel and US President Barack Obama "
"converse in the Oval Office inside the White House in Washington, D.C."
matches = matcher(doc)
for match_id, start, end in matches:
span = doc[start:end]
print( )
RUN
Since spaCy is used for processing both the patterns and the text to be matched, you won’t have to
worry about specific tokenization – for example, you can simply pass in
nlp("Washington, D.C.") and won’t have to write a complex token pattern covering the exact
tokenization of the term.
To create the patterns, each phrase has to be processed with the nlp object. If you have a
trained pipeline loaded, doing this in a loop or list comprehension can easily become inefficient
and slow. If you only need the tokenization and lexical attributes, you can run
nlp.make_doc instead, which will only run the tokenizer. For an additional speed boost,
you can also use the nlp.tokenizer.pipe method, which will process the texts as a
stream.
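For example (a small, self-contained sketch of the tokenizer-only approach):

import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load("en_core_web_sm")
matcher = PhraseMatcher(nlp.vocab)
terms = ["Barack Obama", "Angela Merkel", "Washington, D.C."]
# Process the pattern texts as a stream, running only the tokenizer
patterns = list(nlp.tokenizer.pipe(terms))
matcher.add("TerminologyList", patterns)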
Matching on other token attributes
By default, the PhraseMatcher will match on the verbatim token text, e.g. Token.text . By
setting the attr argument on initialization, you can change which token attribute the matcher
should use when comparing the phrase pattern to the matched Doc . For example, using the
attribute LOWER lets you match on Token.lower and create case-insensitive match patterns:
from spacy.lang.en import English
from spacy.matcher import PhraseMatcher

nlp = English()
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
patterns = [nlp.make_doc(name) for name in ["Angela Merkel", "Barack Obama"]]
matcher.add("Names", patterns)

doc = nlp("angela merkel and us president barack Obama")
for match_id, start, end in matcher(doc):
    print("Matched based on lowercase token text:", doc[start:end])
The examples here use nlp.make_doc to create Doc object patterns as efficiently as
possible and without running any of the other pipeline components. If the token attributes you
want to match on are set by a pipeline component, make sure that the pipeline component
runs when you create the pattern. For example, to match on POS or LEMMA , the pattern Doc
objects need to have part-of-speech tags set by the tagger or morphologizer . You can
either call the nlp object on your pattern texts instead of nlp.make_doc , or use
nlp.select_pipes to disable components selectively.
Another possible use case is matching number tokens like IP addresses based on their shape. This
means that you won’t have to worry about how those strings will be tokenized and you’ll be able to
find tokens and combinations of tokens based on a few examples. Here, we’re matching on the
shapes ddd.d.d.d and ddd.ddd.d.d :
from spacy.lang.en import English
from spacy.matcher import PhraseMatcher

nlp = English()
matcher = PhraseMatcher(nlp.vocab, attr="SHAPE")
matcher.add("IP", [nlp("127.0.0.1"), nlp("127.127.0.0")])

doc = nlp("Often the router will have an IP address such as 192.168.1.1 or 192.168.2.1.")
for match_id, start, end in matcher(doc):
    print("Matched based on token shape:", doc[start:end])
In theory, the same also works for attributes like POS . For example, a pattern
nlp("I like cats") matched based on its part-of-speech tag would return a match for “I love
dogs”. You could also match on boolean flags like IS_PUNCT to match phrases with the same
sequence of punctuation and non-punctuation tokens as the pattern. But this can easily get
confusing and doesn’t have much of an advantage over writing one or two token patterns.
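For illustration, a sketch of the part-of-speech case described above (the example sentences are assumptions):

import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load("en_core_web_sm")
matcher = PhraseMatcher(nlp.vocab, attr="POS")
# Call nlp (not nlp.make_doc) so part-of-speech tags are set on the pattern Doc
matcher.add("POS_PATTERN", [nlp("I like cats")])

doc = nlp("I love dogs")
print([doc[start:end].text for _, start, end in matcher(doc)])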
Dependency matching

The DependencyMatcher lets you match patterns within the dependency parse using
Semgrex operators. It requires a model containing a parser such as the DependencyParser .
Instead of defining a list of adjacent tokens as in Matcher patterns, the DependencyMatcher
patterns match tokens in the dependency parse and specify the relations between them.
EXAMPLE
matcher = DependencyMatcher(nlp.vocab)
matcher.add("FOUNDED", [pattern])
matches = matcher(doc)
A pattern added to the dependency matcher consists of a list of dictionaries, with each dictionary
describing a token to match and its relation to an existing token in the pattern. Except for the
first dictionary, which defines an anchor token using only RIGHT_ID and RIGHT_ATTRS , each
pattern should have the following keys:
NAME           DESCRIPTION
LEFT_ID        The name of the left-hand node in the relation, which has been defined in an earlier node. TYPE: str
REL_OP         An operator that describes how the two nodes are related. TYPE: str
RIGHT_ID       A unique name for the right-hand node in the relation. TYPE: str
RIGHT_ATTRS    The token attributes to match for the right-hand node, in the same format as patterns provided to the regular token-based Matcher . TYPE: Dict[str, Any]
Each additional token added to the pattern is linked to an existing token LEFT_ID by the relation
REL_OP . The new token is given the name RIGHT_ID and described by the attributes
RIGHT_ATTRS .
Important note
Because the unique token names in LEFT_ID and RIGHT_ID are used to identify tokens, the
order of the dicts in the patterns is important: a token name needs to be defined as RIGHT_ID
in one dict in the pattern before it can be used as LEFT_ID in another dict.
REL_OP can be one of the following operators:

SYMBOL    DESCRIPTION
A < B     A is the immediate dependent of B .
A > B     A is the immediate head of B .
A << B    A is the dependent in a chain to B following dep → head paths.
A >> B    A is the head in a chain to B following head → dep paths.
A . B     A immediately precedes B , i.e. A.i == B.i - 1 , and both are within the same dependency tree.
A .* B    A precedes B , i.e. A.i < B.i , and both are within the same dependency tree (not in Semgrex).
A ; B     A immediately follows B , i.e. A.i == B.i + 1 , and both are within the same dependency tree (not in Semgrex).
A ;* B    A follows B , i.e. A.i > B.i , and both are within the same dependency tree (not in Semgrex).
A $+ B    B is a right immediate sibling of A , i.e. A and B have the same parent and A.i == B.i - 1 .
A $- B    B is a left immediate sibling of A , i.e. A and B have the same parent and A.i == B.i + 1 .
A $++ B   B is a right sibling of A , i.e. A and B have the same parent and A.i < B.i .
A $-- B   B is a left sibling of A , i.e. A and B have the same parent and A.i > B.i .
Designing dependency matcher patterns

Let’s say we want to find sentences describing who founded what kind of company, for example “Smith founded a healthcare company”. The displacy visualizer lets you render Doc objects and their dependency parse and part-of-speech tags:
import spacy
from spacy import displacy
nlp = spacy.load("en_core_web_sm")
doc = nlp("Smith founded a healthcare company")
displacy.serve(doc)
[Dependency parse of “Smith founded a healthcare company”, showing the nsubj, det and dobj arcs]

The relationships we’re interested in are:

● the founder is the subject ( nsubj ) of the token with the text founded
● the company is the object ( dobj ) of founded
● the kind of company may be an adjective ( amod , not shown above) or a compound ( compound )
The first step is to pick an anchor token for the pattern. Since it’s the root of the dependency
parse, founded is a good choice here. It is often easier to construct patterns when all dependency
relation operators point from the head to the children. In this example, we’ll only use > , which
connects a head to an immediate dependent as head > child .
The simplest dependency matcher pattern will identify and name a single token in the tree:
import spacy
from spacy.matcher import DependencyMatcher

nlp = spacy.load("en_core_web_sm")
matcher = DependencyMatcher(nlp.vocab)
pattern = [
    {
        "RIGHT_ID": "anchor_founded",       # unique name
        "RIGHT_ATTRS": {"ORTH": "founded"}  # token pattern for "founded"
    }
]
matcher.add("FOUNDED", [pattern])
doc = nlp("Smith founded two companies.")
matches = matcher(doc)
print(matches)  # [(4851363122962674176, [1])]
Now that we have a named anchor token ( anchor_founded ), we can add the founder as the
immediate dependent ( > ) of founded with the dependency label nsubj :
STEP 1
pattern = [
    {
        "RIGHT_ID": "anchor_founded",
        "RIGHT_ATTRS": {"ORTH": "founded"}
    },
    {
        "LEFT_ID": "anchor_founded",
        "REL_OP": ">",
        "RIGHT_ID": "founded_subject",
        "RIGHT_ATTRS": {"DEP": "nsubj"},
    }
    # ...
]
Similarly, we can add the object of founded as its immediate dependent, with the dependency label dobj :

STEP 2

pattern = [
    # ...
    {
        "LEFT_ID": "anchor_founded",
        "REL_OP": ">",
        "RIGHT_ID": "founded_object",
        "RIGHT_ATTRS": {"DEP": "dobj"},
    }
    # ...
]
When the subject and object tokens are added, they are required to have names under the key
RIGHT_ID , which are allowed to be any unique string, e.g. founded_subject . These names can
then be used as LEFT_ID to link new tokens into the pattern. For the final part of our pattern,
we’ll specify that the token founded_object should have a modifier with the dependency relation
amod or compound :
STEP 3
pattern = [
    # ...
    {
        "LEFT_ID": "founded_object",
        "REL_OP": ">",
        "RIGHT_ID": "founded_object_modifier",
        "RIGHT_ATTRS": {"DEP": {"IN": ["amod", "compound"]}},
    }
]
You can picture the process of creating a dependency matcher pattern as defining an anchor token
on the left and building up the pattern by linking tokens one-by-one on the right using relation
operators. To create a valid pattern, each new token needs to be linked to an existing token on its
left. As for founded in this example, a token may be linked to more than one token on its right:
import spacy
from spacy.matcher import DependencyMatcher

nlp = spacy.load("en_core_web_sm")
matcher = DependencyMatcher(nlp.vocab)

pattern = [
    {
        "RIGHT_ID": "anchor_founded",
        "RIGHT_ATTRS": {"ORTH": "founded"}
    },
    {
        "LEFT_ID": "anchor_founded",
        "REL_OP": ">",
        "RIGHT_ID": "founded_subject",
        "RIGHT_ATTRS": {"DEP": "nsubj"},
    },
    {
        "LEFT_ID": "anchor_founded",
        "REL_OP": ">",
        "RIGHT_ID": "founded_object",
        "RIGHT_ATTRS": {"DEP": "dobj"},
    },
    {
        "LEFT_ID": "founded_object",
        "REL_OP": ">",
        "RIGHT_ID": "founded_object_modifier",
        "RIGHT_ATTRS": {"DEP": {"IN": ["amod", "compound"]}},
    }
]

matcher.add("FOUNDED", [pattern])
doc = nlp("Lee, an experienced CEO, has founded two AI startups.")
matches = matcher(doc)
The dependency matcher may be slow when token patterns can potentially match many tokens
in the sentence or when relation operators allow longer paths in the dependency parse, e.g.
<< , >> , .* and ;* .
To improve the matcher speed, try to make your token patterns and operators as specific as
possible. For example, use > instead of >> if possible and use token patterns that include
dependency labels and other token attributes instead of patterns such as {} that match any
token in the sentence.
Rule-based entity recognition
The EntityRuler is a component that lets you add named entities based on pattern
dictionaries, which makes it easy to combine rule-based and statistical named entity recognition for
even more powerful pipelines.
Entity Patterns
Entity patterns are dictionaries with two keys: "label" , specifying the label to assign to the entity
if the pattern is matched, and "pattern" , the match pattern. The entity ruler accepts two types of
patterns:

● Phrase patterns for exact string matches, provided as a string – for example, {"label": "ORG", "pattern": "Apple"} .
● Token patterns with one dictionary describing one token, provided as a list – for example, {"label": "GPE", "pattern": [{"LOWER": "san"}, {"LOWER": "francisco"}]} .

Using the entity ruler
The entity ruler is designed to integrate with spaCy’s existing pipeline components and enhance the
named entity recognizer. If it’s added before the "ner" component, the entity recognizer will
respect the existing entity spans and adjust its predictions around it. This can significantly improve
accuracy in some cases. If it’s added after the "ner" component, the entity ruler will only add
spans to the doc.ents if they don’t overlap with existing entities predicted by the model. To
overwrite overlapping entities, you can set overwrite_ents=True on initialization.
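For example, a minimal sketch of both options (assuming a pipeline that contains a statistical "ner" component):

import spacy

nlp = spacy.load("en_core_web_sm")
# Add the entity ruler before the statistical NER so its spans are respected
ruler = nlp.add_pipe("entity_ruler", before="ner")

# Or add it after the NER and let it overwrite overlapping model predictions:
# nlp.add_pipe("entity_ruler", name="entity_ruler_after", after="ner",
#              config={"overwrite_ents": True})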
import spacy

nlp = spacy.load("en_core_web_sm")
ruler = nlp.add_pipe("entity_ruler")
patterns = [{"label": "ORG", "pattern": "MyCorp Inc."}]
ruler.add_patterns(patterns)

doc = nlp("MyCorp Inc. is a company in the U.S.")
print([(ent.text, ent.label_) for ent in doc.ents])
from spacy.lang.en import English

nlp = English()
ruler = nlp.add_pipe("entity_ruler")
patterns = [{"label": "ORG", "pattern": "Apple", "id": "apple"},
            {"label": "GPE", "pattern": [{"LOWER": "san"}, {"LOWER": "francisco"}], "id": "san-francisco"},
            {"label": "GPE", "pattern": [{"LOWER": "san"}, {"LOWER": "fran"}], "id": "san-francisco"}]
ruler.add_patterns(patterns)

doc1 = nlp("Apple is opening its first big office in San Francisco.")
print([(ent.text, ent.label_, ent.ent_id_) for ent in doc1.ents])

doc2 = nlp("Apple is opening its first big office in San Fran.")
print([(ent.text, ent.label_, ent.ent_id_) for ent in doc2.ents])
If the id attribute is included in the EntityRuler patterns, the ent_id_ property of the
matched entity is set to the id given in the patterns. So in the example above it’s easy to identify
that “San Francisco” and “San Fran” are both the same entity.
The to_disk and from_disk methods let you save your patterns to a JSONL file (newline-delimited JSON) and load them back in:

ruler.to_disk("./patterns.jsonl")
new_ruler = nlp.add_pipe("entity_ruler").from_disk("./patterns.jsonl")
If you’re using the Prodigy annotation tool, you might recognize these pattern files from
bootstrapping your named entity and text classification labelling. The patterns for the
EntityRuler follow the same syntax, so you can use your existing Prodigy pattern files in
spaCy, and vice versa.
When you save out an nlp object that has an EntityRuler added to its pipeline, its patterns are
automatically exported to the pipeline directory:
nlp = spacy.load("en_core_web_sm")
ruler = nlp.add_pipe("entity_ruler")
ruler.add_patterns([{"label": "ORG", "pattern": "Apple"}])
nlp.to_disk("/path/to/pipeline")
The saved pipeline now includes the "entity_ruler" in its config.cfg and the pipeline
directory contains a file entityruler.jsonl with the patterns. When you load the pipeline back
in, all pipeline components will be restored and deserialized – including the entity ruler. This lets
you ship powerful pipeline packages with binary weights and rules included!
Running the full language pipeline across every pattern in a large list scales linearly and can
therefore take a long time on large amounts of phrase patterns. As of spaCy v2.2.4 the
add_patterns function has been refactored to use nlp.pipe on all phrase patterns resulting in
about a 10x-20x speed up with 5,000-100,000 phrase patterns respectively. Even with this speedup
(but especially if you’re using an older version) the add_patterns function can still take a long
time. An easy workaround to make this function run faster is disabling the other language pipes
while adding the phrase patterns.
ruler = nlp.add_pipe("entity_ruler")
patterns = [{"label": "TEST", "pattern": str(i)} for i in range(100000)]
with nlp.select_pipes(enable="tagger"):
    ruler.add_patterns(patterns)
Combining models and rules

Corpora used to train pipelines from scratch are often produced in academia. They contain text from
various sources with linguistic features labeled manually by human annotators (following a set of
specific guidelines). The corpora are then distributed with evaluation data, so other researchers can
benchmark their algorithms and everyone can report numbers on the same data. However, most
applications need to learn information that isn’t contained in any available corpus.
For example, the corpus spaCy’s English pipelines were trained on defines a PERSON entity as
just the person name, without titles like “Mr.” or “Dr.”. This makes sense, because it makes it easier
to resolve the entity type back to a knowledge base. But what if your application needs the full
names, including the titles?
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Dr. Alex Smith chaired first board meeting of Acme Corp Inc.")
print([(ent.text, ent.label_) for ent in doc.ents])
While you could try and teach the model a new definition of the PERSON entity by updating it with
more examples of spans that include the title, this might not be the most efficient approach. The
existing model was trained on over 2 million words, so in order to completely change the definition
of an entity type, you might need a lot of training examples. However, if you already have the
predicted PERSON entities, you can use a rule-based approach that checks whether they come with
a title and, if so, expands the entity span by one token. After all, what all titles in this example have
in common is that, if they occur, they occur in the token right before the person entity.
The function in the example below takes a Doc object, modifies its doc.ents and returns it. Using the
@Language.component decorator, we can register it as a pipeline component so it can run
automatically when processing a text. We can use nlp.add_pipe to add it to the current
pipeline.
import spacy
from spacy.language import Language
from spacy.tokens import Span

nlp = spacy.load("en_core_web_sm")

@Language.component("expand_person_entities")
def expand_person_entities(doc):
    new_ents = []
    for ent in doc.ents:
        # Only check for a title if it's a person and not the first token
        if ent.label_ == "PERSON" and ent.start != 0:
            prev_token = doc[ent.start - 1]
            if prev_token.text in ("Dr", "Dr.", "Mr", "Mr.", "Ms", "Ms."):
                new_ent = Span(doc, ent.start - 1, ent.end, label=ent.label)
                new_ents.append(new_ent)
            else:
                new_ents.append(ent)
        else:
            new_ents.append(ent)
    doc.ents = new_ents
    return doc

# Add the component after the named entity recognizer
nlp.add_pipe("expand_person_entities", after="ner")

doc = nlp("Dr. Alex Smith chaired first board meeting of Acme Corp Inc.")
print([(ent.text, ent.label_) for ent in doc.ents])
An alternative approach would be to use an extension attribute like ._.person_title and add it
to Span objects (which includes entity spans in doc.ents ). The advantage here is that the entity
text stays intact and can still be used to look up the name in a knowledge base. The following
function takes a Span object, checks whether it’s a PERSON entity and, if so, returns the title from the
previous token if one is found. The Span.doc attribute gives us easy access to the span’s parent document.
def get_person_title(span):
    if span.label_ == "PERSON" and span.start != 0:
        prev_token = span.doc[span.start - 1]
        if prev_token.text in ("Dr", "Dr.", "Mr", "Mr.", "Ms", "Ms."):
            return prev_token.text
We can now use the Span.set_extension method to add the custom extension attribute
"person_title" , using get_person_title as the getter function.
import spacy
from spacy.tokens import Span

nlp = spacy.load("en_core_web_sm")

def get_person_title(span):
    if span.label_ == "PERSON" and span.start != 0:
        prev_token = span.doc[span.start - 1]
        if prev_token.text in ("Dr", "Dr.", "Mr", "Mr.", "Ms", "Ms."):
            return prev_token.text

# Register the custom extension attribute with the getter function
Span.set_extension("person_title", getter=get_person_title)

doc = nlp("Dr Alex Smith chaired first board meeting of Acme Corp Inc.")
print([(ent.text, ent.label_, ent._.person_title) for ent in doc.ents])
LINGUISTIC FEATURES
This example makes extensive use of part-of-speech tag and dependency attributes and related Doc ,
Token and Span methods. For an introduction on this, see the guide on linguistic features. Also see
the label schemes in the models directory for details on the labels.
Let’s say you want to parse professional biographies and extract the person names and company
names, and whether it’s a company they’re currently working at, or a previous company. One
approach could be to try and train a named entity recognizer to predict CURRENT_ORG and
PREVIOUS_ORG – but this distinction is very subtle and something the entity recognizer may
struggle to learn. Nothing about “Acme Corp Inc.” is inherently “current” or “previous”.
However, the syntax of the sentence holds some very important clues: we can check for trigger
words like “work”, whether they’re past tense or present tense, whether company names are
attached to it and whether the person is the subject. All of this information is available in the part-
of-speech tags and the dependency parse.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Alex Smith worked at Acme Corp Inc.")
print([(token.text, token.tag_, token.dep_, token.head.text, token.ent_type_) for token in doc])
In this example, “worked” is the root of the sentence and is a past tense verb. Its subject is “Alex
Smith”, the person who worked. “at Acme Corp Inc.” is a prepositional phrase attached to the verb
“worked”. To extract this relationship, we can start by looking at the predicted PERSON entities, find
their heads and check whether they’re attached to a trigger word like “work”. Next, we can check for
prepositional phrases attached to the head and whether they contain an ORG entity. Finally, to
determine whether the company affiliation is current, we can check the head’s part-of-speech tag.
To apply this logic automatically when we process a text, we can add it to the nlp object as a
custom pipeline component. The above logic also expects that entities are merged into single
tokens. spaCy ships with a handy built-in merge_entities that takes care of that. Instead of just
printing the result, you could also write it to custom attributes on the entity Span – for example
._.orgs or ._.prev_orgs and ._.current_orgs .
MERGING ENTITIES
Under the hood, entities are merged using the Doc.retokenize context manager:
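A minimal sketch of what that looks like (not the component’s actual source):

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Alex Smith worked at Acme Corp Inc.")
# Merge each entity span into a single token
with doc.retokenize() as retokenizer:
    for ent in doc.ents:
        retokenizer.merge(ent)
print([token.text for token in doc])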
import spacy
from spacy.language import Language
from spacy import displacy

nlp = spacy.load("en_core_web_sm")

@Language.component("extract_person_orgs")
def extract_person_orgs(doc):
    person_entities = [ent for ent in doc.ents if ent.label_ == "PERSON"]
    for ent in person_entities:
        head = ent.root.head
        if head.lemma_ == "work":
            preps = [token for token in head.children if token.dep_ == "prep"]
            for prep in preps:
                orgs = [token for token in prep.children if token.ent_type_ == "ORG"]
                print({'person': ent, 'orgs': orgs, 'past': head.tag_ == "VBD"})
    return doc

# To make the entities easier to work with, we'll merge them into single tokens
nlp.add_pipe("merge_entities")
nlp.add_pipe("extract_person_orgs")

doc = nlp("Alex Smith worked at Acme Corp Inc.")
If you change the sentence structure above, for example to “was working”, you’ll notice that our
current logic fails and doesn’t correctly detect the company as a past organization. That’s because
the root is a participle and the tense information is in the attached auxiliary “was”.
To solve this, we can adjust the rules to also check for the above construction:
@Language.component("extract_person_orgs")
def extract_person_orgs(doc):
    person_entities = [ent for ent in doc.ents if ent.label_ == "PERSON"]
    for ent in person_entities:
        head = ent.root.head
        if head.lemma_ == "work":
            preps = [token for token in head.children if token.dep_ == "prep"]
            for prep in preps:
                orgs = [t for t in prep.children if t.ent_type_ == "ORG"]
                aux = [token for token in head.children if token.dep_ == "aux"]
                past_aux = any(t.tag_ == "VBD" for t in aux)
                past = head.tag_ == "VBD" or head.tag_ == "VBG" and past_aux
                print({'person': ent, 'orgs': orgs, 'past': past})
    return doc
In your final rule-based system, you may end up with several different code paths to cover the
types of constructions that occur in your data.