Unit III
Semantic Parsing
1. Introduction
Semantic parsing is the task of mapping natural-language text to a formal, machine-interpretable representation of its meaning.
Key Goals:
1. Resolve structural and lexical ambiguity in the input.
2. Identify entities, events, and the relations among them.
3. Produce a meaning representation (e.g., first-order logic, AMR) that downstream applications can reason over or execute.
2. Semantic Interpretation
a. Structural Ambiguity
Occurs when the same sentence can be assigned more than one syntactic structure (parse tree), each corresponding to a different meaning.
Resolution Approaches:
1. Syntactic Parsing: Derive the possible parse trees using tools like the Stanford Parser, then choose among them.
Example parse trees: "I saw the man with the telescope" has two parses, one attaching "with the telescope" to the verb (the seeing was done through a telescope) and one attaching it to "the man" (the man carries the telescope); the sketch below generates both.
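The following is a minimal, runnable sketch using NLTK (an assumption; the notes only name the Stanford Parser) with a toy grammar that yields both parse trees:

import nltk

# Toy grammar for the classic PP-attachment ambiguity.
grammar = nltk.CFG.fromstring("""
S -> NP VP
PP -> P NP
NP -> Det N | Det N PP | 'I'
VP -> V NP | VP PP
Det -> 'the'
N -> 'man' | 'telescope'
V -> 'saw'
P -> 'with'
""")

parser = nltk.ChartParser(grammar)
sentence = "I saw the man with the telescope".split()

# One tree attaches the PP to the VP (instrument of seeing),
# the other attaches it to the NP (the man has the telescope).
for tree in parser.parse(sentence):
    print(tree)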
b. Word Sense
Word sense refers to a word's meaning in context. Word sense disambiguation (WSD), the process of identifying the correct sense of a word based on its context, is crucial for accurate semantic interpretation.
Example: In "He deposited money at the bank," bank denotes a financial institution; in "The fisherman sat on the bank of the river," it denotes the river's edge.
c. Entity and Event Resolution
This involves identifying and linking entities and events within text.
Entity Resolution: Determining when different mentions (e.g., "Barack Obama," "Obama," "he") refer to the same real-world entity.
Techniques: named entity recognition, coreference resolution, and entity linking against a knowledge base.
Event Resolution: Identifying the events a text describes and linking mentions of the same event together with their participants, time, and place.
d. Predicate-Argument Structure
Identifies the predicate of a clause (typically the main verb) and the arguments that fill its semantic roles.
Example: "John kicked the ball."
o Predicate: kicked
o Arguments:
   arg0 (agent): John
   arg1 (patient): the ball
Frameworks: PropBank and FrameNet provide standard inventories of predicates and roles; a heuristic extraction sketch follows.
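A rough illustration of predicate-argument extraction, assuming spaCy and its en_core_web_sm model are installed; dependency labels (nsubj, dobj) only approximate PropBank roles, so this is a simplification of full semantic role labeling:

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("John kicked the ball.")

for token in doc:
    if token.pos_ == "VERB":  # treat the main verb as the predicate
        subjects = [w for w in token.lefts if w.dep_ == "nsubj"]
        objects = [w for w in token.rights if w.dep_ in ("dobj", "obj")]
        print("Predicate:", token.lemma_)
        print("  arg0 (agent):  ", subjects[0].text if subjects else None)
        print("  arg1 (patient):", objects[0].text if objects else None)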
e. Meaning Representation
Encodes the meaning of a sentence as a formal, machine-readable structure.
Examples:
1. First-Order Logic:
Sentence: "All cats are animals."
Representation: ∀x (cat(x) → animal(x))
2. Abstract Meaning Representation (AMR):
Sentence: "John wants to eat pizza."
want-01
├── :arg0 (John)
└── :arg1 (eat-01)
    ├── :arg0 (John)
    └── :arg1 (pizza)
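NLTK's logic package can build and inspect such first-order formulas; a small sketch (the syntax all x.(...) is NLTK's ASCII rendering of ∀x):

from nltk.sem import Expression

# Parse the FOL form of "All cats are animals."
formula = Expression.fromstring('all x.(cat(x) -> animal(x))')

print(formula)         # all x.(cat(x) -> animal(x))
print(formula.free())  # set() -- the formula is closed (no free variables)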
3. System Paradigms
1. Rule-Based Systems
Rule-based systems map sentences to semantic structures using hand-written rules over words and syntax.
Example:
Input: "Mary gave John a book."
Rule: If the verb is "give," the first noun is the giver, the second noun is the recipient, and the third is the object.
Output: (Giver: Mary, Recipient: John, Object: Book)
Advantages: transparent and easy to debug; no training data required.
Limitations: brittle; hand-writing rules is costly and does not scale beyond narrow domains.
Applications: closed-domain dialogue systems and template-based information extraction.
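A toy implementation of the "give" rule above; it assumes the nouns have already been extracted in surface order, whereas a real system would consult a parse tree:

def apply_give_rule(verb, nouns):
    # Rule: for "give", noun order is giver, recipient, object.
    if verb.lower() in ("give", "gave", "given") and len(nouns) >= 3:
        return {"Giver": nouns[0], "Recipient": nouns[1], "Object": nouns[2]}
    return None

# "Mary gave John a book." -> nouns in order: Mary, John, book
print(apply_give_rule("gave", ["Mary", "John", "book"]))
# {'Giver': 'Mary', 'Recipient': 'John', 'Object': 'book'}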
2. Statistical Systems
Statistical systems use probabilistic models to predict semantic structures based on
annotated training data. These systems apply probabilistic methods to resolve
ambiguities and improve robustness.
How They Work:
1. Learn syntactic and semantic rules from labeled data (e.g., Penn Treebank).
2. Compute probabilities for the possible interpretations and select the most likely one.
Key Techniques: probabilistic context-free grammars (PCFGs), hidden Markov models, maximum-entropy classifiers, and conditional random fields.
Advantages: more robust than hand-written rules; accuracy improves as more annotated data becomes available.
Limitations: require large annotated corpora; performance degrades on text unlike the training domain.
Applications: statistical parsing and PropBank-style semantic role labeling.
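The statistical idea in miniature, using NLTK's PCFG support: rule probabilities are attached to the toy grammar from earlier, and the Viterbi parser returns only the most probable tree:

import nltk

# Probabilities for each left-hand side must sum to 1.
pcfg = nltk.PCFG.fromstring("""
S -> NP VP [1.0]
PP -> P NP [1.0]
VP -> V NP [0.6] | VP PP [0.4]
NP -> Det N [0.5] | Det N PP [0.3] | 'I' [0.2]
Det -> 'the' [1.0]
N -> 'man' [0.5] | 'telescope' [0.5]
V -> 'saw' [1.0]
P -> 'with' [1.0]
""")

parser = nltk.ViterbiParser(pcfg)
for tree in parser.parse("I saw the man with the telescope".split()):
    print(tree)  # only the highest-probability parse is printed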
3. Neural Systems
Neural systems leverage deep learning to perform semantic parsing. These systems
often use sequence-to-sequence (Seq2Seq) models or transformer architectures.
How They Work:
1. Encode the input sentence into continuous vector representations.
2. Decode those vectors into a meaning representation such as a logical form, an SQL query, or an intent with slots.
Key Techniques: sequence-to-sequence models with attention, transformer encoder-decoders, and fine-tuned pretrained language models.
Example:
Input: "Book a flight to London."
Neural output:
{
  "intent": "book_flight",
  "destination": "London"
}
Advantages: state-of-the-art accuracy; features are learned rather than hand-engineered; robust to paraphrase.
Limitations: require large training sets and significant compute; hard to interpret; outputs can be malformed or unfaithful to the input.
Applications: voice assistants, text-to-SQL interfaces, and question answering.
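A sketch of neural semantic parsing as text-to-text generation with the Hugging Face transformers library; the model name is a placeholder, since any seq2seq checkpoint fine-tuned to emit meaning representations would slot in:

from transformers import pipeline

parser = pipeline(
    "text2text-generation",
    model="your-org/t5-semantic-parser",  # hypothetical fine-tuned checkpoint
)

result = parser("Book a flight to London")
print(result[0]["generated_text"])
# e.g. {"intent": "book_flight", "destination": "London"}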
Example Workflow:
1. Parse the input sentence syntactically.
2. Disambiguate word senses and resolve entities and events.
3. Map the disambiguated structure to a meaning representation.
4. Hand the representation to the downstream application (e.g., a database or dialogue manager).
Comparison of Paradigms

Paradigm     | Advantages                                 | Disadvantages                                | Example Applications
Rule-Based   | Transparent; no training data needed       | Brittle; costly to scale                     | Closed-domain dialogue, templates
Statistical  | Robust; learns from annotated data         | Needs large labeled corpora                  | Semantic role labeling, parsing
Neural       | Highest accuracy; learns its own features  | Data- and compute-hungry; hard to interpret  | Voice assistants, text-to-SQL
4. Word Sense
Word sense refers to the specific meaning of a word in a given context. Many words
are polysemous, meaning they have multiple senses or meanings. Accurately
determining the intended meaning of a word is crucial for tasks like machine
translation, semantic parsing, and question answering.
1. Polysemy
A word is polysemous when it carries several senses; the intended sense must be inferred from context.
2. Word Sense Disambiguation (WSD)
The process of identifying the correct sense of a word based on its context. Approaches include:
a. Knowledge-Based (Lesk Algorithm)
Compares the dictionary gloss of each candidate sense with the words surrounding the target word and selects the sense whose gloss overlaps most.
Example:
Word: Bank
Sentence: The fisherman sat on the bank of the river.
Glosses:
1. Bank (financial institution): "A place for receiving deposits."
2. Bank (river edge): "The land alongside a river."
Overlap: "river" in the sentence matches the second gloss, so the chosen sense is river edge. NLTK's built-in Lesk implementation (below) automates this comparison.
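NLTK ships a simplified Lesk implementation that reproduces this gloss-overlap procedure (assumes the punkt and wordnet data packages have been downloaded):

from nltk.tokenize import word_tokenize
from nltk.wsd import lesk

context = word_tokenize("The fisherman sat on the bank of the river.")
sense = lesk(context, "bank", pos="n")  # returns a WordNet synset

# The overlap with "river" should select a river-edge sense.
print(sense, "-", sense.definition())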
b. Supervised Learning
Uses labeled datasets where words are annotated with their senses. Machine
learning models are trained to predict the sense of a word given its context.
Steps:
1. Collect a corpus in which each occurrence of the ambiguous word is annotated with its sense (e.g., SemCor).
2. Extract features from the surrounding context (neighboring words, part-of-speech tags, collocations).
3. Train a classifier on these features, such as:
   Decision Trees
   Support Vector Machines (SVMs)
   Neural Networks
Example: Train an SVM on sentences in which "bank" is labeled as financial or river; the model then predicts the sense of "bank" in unseen sentences, as sketched below.
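A toy version of this pipeline with scikit-learn, using bag-of-words context features and a linear SVM on a few hand-labeled sentences (real systems train on far larger sense-annotated corpora):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_sentences = [
    "He deposited cash at the bank",
    "The bank approved my loan",
    "We walked along the bank of the river",
    "Reeds grew on the muddy bank",
]
train_senses = ["financial", "financial", "river", "river"]

# Bag-of-words features feeding a linear SVM sense classifier.
model = make_pipeline(CountVectorizer(), LinearSVC())
model.fit(train_sentences, train_senses)

print(model.predict(["The fisherman sat on the bank"]))  # predicted sense label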
c. Unsupervised Learning
Clusters words into different senses based on context without labeled data.
Common techniques include context clustering (e.g., k-means over context vectors) and graph-based sense induction.
How It Works:
1. Represent each occurrence of the target word by a vector built from its surrounding context.
2. Cluster the occurrence vectors; each cluster is treated as one induced sense (see the sketch below).
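A compact sense-induction sketch with scikit-learn: each occurrence of "bank" is represented by a TF-IDF vector of its context, and the occurrences are clustered into two groups:

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

contexts = [
    "deposited money at the bank before noon",
    "the bank raised its interest rates",
    "fishing from the bank of the river",
    "the river overflowed its bank after the storm",
]

vectors = TfidfVectorizer().fit_transform(contexts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for context, label in zip(contexts, labels):
    print(label, context)  # occurrences in the same cluster share an induced sense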
a. Resources
Key resources for WSD include:
1. WordNet
A lexical database where words are grouped into synsets (sets of cognitive synonyms), each representing a distinct sense; the sketch after this list prints the synsets of "bank".
2. BabelNet
A multilingual semantic network that merges WordNet with Wikipedia, linking word senses across many languages.
3. FrameNet
Focuses on semantic frames. Groups words into concepts based on their roles
in a scenario. Example: Giving frame includes words like give, transfer, donate.
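Browsing WordNet's senses of "bank" with NLTK (assumes the wordnet data package has been downloaded via nltk.download("wordnet")):

from nltk.corpus import wordnet as wn

for synset in wn.synsets("bank", pos=wn.NOUN):
    print(synset.name(), "-", synset.definition())
# bank.n.01 - sloping land (especially the slope beside a body of water)
# depository_financial_institution.n.01 - a financial institution that accepts
#   deposits and channels the money into lending activities
# ...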
b. Systems
1. Lesk-Based Systems
Simple to implement.
Requires access to a lexical database (e.g., WordNet).
2. Supervised Systems
Train classifiers on sense-annotated corpora; typically the most accurate approach, but limited by the cost of labeled data.
3. Unsupervised Systems
Induce senses by clustering contexts; require no labeled data, but the induced clusters may not align with dictionary senses.
c. Software
Practical toolkits include NLTK (which ships the simplified Lesk implementation used above), pywsd, and graph-based tools such as UKB.