Flat M2

The document covers various concepts related to context-free grammars (CFGs) and their applications, including derivation trees, leftmost and rightmost derivations, ambiguous grammars, pushdown automata (PDA), and normal forms such as Chomsky Normal Form (CNF) and Greibach Normal Form (GNF). It explains the structure and function of PDAs, their deterministic counterparts (DPDAs), and the closure and decision properties of context-free languages (CFLs). Additionally, it highlights the significance of CFGs in compiler design, programming languages, and natural language processing.

1. Short note on derivation tree with example. – 2M
A derivation tree (also called a parse tree) is a graphical representation of the derivation of a
string using a context-free grammar (CFG). It visually represents how the start symbol of a CFG
derives a particular string by applying production rules step by step. The root of the tree is the
start symbol, and each internal node represents a non-terminal. The children of a node
represent the symbols produced by a production applied to that non-terminal.

Example:​
Grammar:​
S → aSb | ε
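
For the string aabb, the derivation tree is:

          S
        / | \
       a  S  b
        / | \
       a  S  b
          |
          ε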

Explanation:

●​ First S → aSb, then inner S → aSb, and finally inner S → ε.​

●​ The string generated is aabb.​

●​ The derivation tree shows the hierarchy and order of rule applications clearly.​

2. Explain rightmost and leftmost derivation with example. – 2M​


In a CFG, the derivation of a string can proceed in multiple ways. Two important strategies are
leftmost and rightmost derivation.

●​ Leftmost Derivation: Always expand the leftmost non-terminal first.​

●​ Rightmost Derivation: Always expand the rightmost non-terminal first.​


Example Grammar:​
S → aSb | ε

Deriving string "aabb":

Leftmost Derivation:​
S​
⇒ aSb​
⇒ aaSbb​
⇒ aaεbb​
⇒ aabb

Rightmost Derivation:
S
⇒ aSb
⇒ aaSbb
⇒ aaεbb
⇒ aabb

Explanation:
Both derivations generate the same string. In this particular grammar every sentential form contains only a single non-terminal, so the leftmost and rightmost derivations coincide step by step; for grammars whose sentential forms contain several non-terminals (such as S → AB), the two strategies expand them in different orders.

3. Define ambiguous grammar. – 2M​


A context-free grammar is ambiguous if there exists at least one string that has more than
one distinct parse tree or derivation (either leftmost or rightmost).

Why it's a problem:​


Ambiguity leads to multiple interpretations of the same input, which is undesirable in compiler
design and language parsing.

Example Grammar:​
E → E + E | E * E | id

Consider the string: id + id * id

First parse tree (interpret addition first, i.e., (id + id) * id):
E ⇒ E * E ⇒ E + E * E ⇒ id + E * E ⇒ id + id * E ⇒ id + id * id

Second parse tree (interpret multiplication first, i.e., id + (id * id)):
E ⇒ E + E ⇒ id + E ⇒ id + E * E ⇒ id + id * E ⇒ id + id * id

Conclusion:
Since the same string has multiple parse trees, the grammar is ambiguous.
4. Given CFG for palindromes: S → aSa | bSb | a | b, find PDA that accepts L(G). – 2M

The given CFG generates odd-length palindromes over the alphabet {a, b}. The idea behind the PDA is to push the first half of the input onto the stack, nondeterministically consume the middle symbol, and then pop and match against the second half.

CFG:

S → aSa | bSb | a | b

This generates palindromes like:

●​ Length 1: a, b​

●​ Length 3: aba, bab​

●​ Length 5: aabaa, ababa​

PDA Design:

Let PDA M = (Q, Σ, Γ, δ, q0, Z, F)

●​ Q = {q0, q1, q2, qf}​

●​ Σ = {a, b}​

●​ Γ = {a, b, Z}​

●​ q0 = Start state​

●​ Z = Initial stack symbol​

●​ F = {qf} = Final state​

Transition Function δ:
1. Start (Z is already the initial stack symbol):
δ(q0, ε, Z) = (q1, Z)

2.​ Push symbols:​


δ(q1, a, ε) = (q1, a)​
δ(q1, b, ε) = (q1, b)​

3. Non-deterministically guess the midpoint and consume the middle symbol:

δ(q1, a, ε) = (q2, ε)
δ(q1, b, ε) = (q2, ε)

4.​ Pop and match:​


δ(q2, a, a) = (q2, ε)​
δ(q2, b, b) = (q2, ε)​

5. Accept when only Z remains on the stack:


δ(q2, ε, Z) = (qf, Z)​

Explanation:

● From q0 to q1: start with Z on the stack.

● In q1: read a/b and push it onto the stack.

● Then nondeterministically move to q2, consuming one symbol — the guessed middle of the string.

● In q2: match each remaining input symbol against the top of the stack and pop.

● If matching succeeds and Z is back on top, go to the accepting state qf.

This PDA accepts all palindromes of odd length over {a, b} by simulating symmetric matching.
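
A minimal Python sketch of this strategy (an illustration assumed here, not part of the original answer): since the whole input is available at once, the PDA's nondeterministic midpoint guess can be resolved directly from the string length.

def accepts_odd_palindrome(w: str) -> bool:
    """Mirror the PDA: push the first half (q1), consume the middle
    symbol (q1 -> q2), then pop and match the second half (q2)."""
    n = len(w)
    if n % 2 == 0 or any(c not in "ab" for c in w):
        return False                      # only odd-length strings over {a, b}
    stack = []
    for c in w[: n // 2]:                 # q1: push the first half
        stack.append(c)
    for c in w[n // 2 + 1:]:              # q2: pop and match the second half
        if not stack or stack.pop() != c:
            return False
    return not stack                      # only Z left -> accept in qf

print(accepts_odd_palindrome("aabaa"))    # True
print(accepts_odd_palindrome("abab"))     # False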

5. Define parse tree with example. – 2M

A parse tree (also known as a derivation tree) is a tree structure that shows how a string is
derived from a CFG. It reflects the syntactic structure and rule application order.

Key Properties:
●​ Root is always the start symbol.​

●​ Internal nodes are non-terminals.​

●​ Leaves are terminals or ε.​

●​ Tree is read from top to bottom, left to right.​

Example Grammar:

S → aSb | ε

Derivation for “aabb”:

S → aSb → aaSbb → aaεbb → aabb

Parse Tree:
         S
       / | \
      a  S  b
       / | \
      a  S  b
         |
         ε

Explanation:

●​ The structure shows recursive calls of S → aSb, ending with S → ε.​

●​ Terminals in the leaves form the string "aabb".​

6. Briefly explain PDA with example. – 2M


A Pushdown Automaton (PDA) is a type of automaton used to recognize context-free
languages. It works like a finite automaton but includes a stack for extra memory, allowing it to
handle recursive structures.

Components of a PDA:

●​ Q: Finite set of states​

●​ Σ: Input alphabet​

●​ Γ: Stack alphabet​

●​ δ: Transition function​

●​ q0: Start state​

●​ Z: Initial stack symbol​

●​ F: Set of accepting states​

Example Language:

L = { aⁿbⁿ | n ≥ 1 } — n a's followed by n b's.

PDA Idea:

●​ For each 'a', push onto the stack.​

●​ For each 'b', pop from the stack.​

●​ If stack is empty and input is finished → accept.​

Transitions:
1.​ δ(q0, a, Z) → (q0, aZ)​

2.​ δ(q0, a, a) → (q0, aa)​

3.​ δ(q0, b, a) → (q1, ε)​

4.​ δ(q1, b, a) → (q1, ε)​

5.​ δ(q1, ε, Z) → (qf, Z)​

Explanation:

●​ Stack tracks count of a’s.​

●​ Every b pops one a.​

●​ Accepted only if every a is matched by a b.​

●​ PDA recognizes L because stack lets it compare counts.
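
A minimal Python simulation of this PDA (a sketch assumed here, not part of the original answer); the five transitions above map onto the branches below:

def accepts_anbn(w: str) -> bool:
    """Simulate the PDA for { a^n b^n | n >= 1 }."""
    stack, state = [], "q0"
    for c in w:
        if state == "q0" and c == "a":
            stack.append("a")             # transitions 1-2: push an a
        elif state in ("q0", "q1") and c == "b" and stack:
            state = "q1"                  # transitions 3-4: pop an a
            stack.pop()
        else:
            return False                  # no applicable transition: reject
    return state == "q1" and not stack    # transition 5: only Z remains

print(accepts_anbn("aabb"))  # True
print(accepts_anbn("aab"))   # False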

7. Briefly explain DPDA with example. – 2M

A Deterministic Pushdown Automaton (DPDA) is a PDA in which for every state, input
symbol, and stack symbol, there is at most one possible transition. It’s more restrictive than
a non-deterministic PDA (NPDA).

Key Points:

●​ DPDA ≠ NPDA in power: DPDAs can’t accept all CFLs.​

●​ Each move is uniquely determined.​

● Used to recognize deterministic context-free languages (DCFLs), e.g., programming language syntax.

Example Language:

L = { aⁿbⁿ | n ≥ 1 }

DPDA Design:

●​ States: {q0, q1, q2}​

●​ Stack Alphabet: {Z, a}​

●​ Input Alphabet: {a, b}​

●​ Start state: q0​

●​ Initial stack symbol: Z​

●​ Final state: q2​

Transitions:

1.​ δ(q0, a, Z) → (q0, aZ)​

2.​ δ(q0, a, a) → (q0, aa)​

3.​ δ(q0, b, a) → (q1, ε)​

4.​ δ(q1, b, a) → (q1, ε)​

5.​ δ(q1, ε, Z) → (q2, Z)​

Explanation:

●​ Push a’s in q0.​


●​ On first b, switch to q1 and start popping.​

●​ If stack returns to Z, accept in q2.​

●​ Only one transition possible at each step → it’s deterministic.​

8. List applications of CFG. – 2M

Context-Free Grammars (CFGs) have wide applications in computer science, compilers, and
language processing.

Key Applications:

1.​ Syntax analysis in compilers:​

○​ CFGs define the syntax rules of programming languages (e.g., how expressions,
loops, functions are structured).​

○​ Parser generators like YACC, Bison use CFGs.​

2.​ Design of programming languages:​

○​ CFGs specify the grammar of languages like C, Java, Python.​

3.​ Natural language processing (NLP):​

○ Used to model the grammatical structure of human languages (e.g., noun phrase → adjective noun).

4.​ Automated tools:​

○​ Syntax checkers, interpreters, code formatters use CFG-based parsers.​

5.​ Document parsing:​

○​ HTML/XML validators and editors use CFGs to ensure correct structure.​


9. Difference between leftmost and rightmost derivation with example. – 2M

Leftmost Derivation:

●​ Always expand the leftmost non-terminal first in every step.​

Rightmost Derivation:

●​ Always expand the rightmost non-terminal first in every step.​

Example Grammar:

S → aSb | ε

String: aabb

Leftmost Derivation:

S​
⇒ aSb​
⇒ aaSbb​
⇒ aaεbb​
⇒ aabb

Rightmost Derivation:

S
⇒ aSb
⇒ aaSbb
⇒ aaεbb
⇒ aabb

(In this grammar each sentential form has only one non-terminal, so both strategies give the same steps here; the table below summarizes how they differ in general.)

Key Differences:

Feature          | Leftmost Derivation           | Rightmost Derivation
Expansion Order  | Leftmost non-terminal first   | Rightmost non-terminal first
Parsing Style    | Used in top-down parsers      | Used in bottom-up parsers
Tree Building    | Builds parse tree left-first  | Builds parse tree right-first

10. Write closure properties of CFL. – 2M

Context-Free Languages (CFLs) are not closed under all operations, but they are closed
under several important ones.

Closure Properties of CFLs:

CFLs are closed under:

●​ Union:​
If L₁ and L₂ are CFLs, then L₁ ∪ L₂ is also a CFL.​

●​ Concatenation:​
If L₁ and L₂ are CFLs, then L₁L₂ is a CFL.​

●​ Kleene star:​
If L is a CFL, then L* is a CFL.​

●​ Substitution:​
If each symbol is replaced with a CFL, the result is a CFL.​

● Intersection with Regular Languages:
If L₁ is a CFL and L₂ is regular, then L₁ ∩ L₂ is a CFL.

CFLs are NOT closed under:


●​ Intersection (with another CFL):​
L₁ ∩ L₂ may not be a CFL.​

●​ Complementation:​
¬L may not be a CFL.​

11. Write decision properties of CFL. – 2M

Decision properties refer to problems for which we can always determine an answer (yes or no).
Some problems involving CFLs are decidable, others are undecidable.

Decidable Properties for CFLs:

●​ Emptiness:​
Is L = ∅? → Decidable​

●​ Finiteness:​
Is L finite? → Decidable​

●​ Membership:​
Is w ∈ L? → Decidable using CYK or PDA simulation​

● Emptiness of intersection with a regular language:
Is L ∩ R = ∅, for a CFL L and a regular language R? → Decidable

Undecidable Properties for CFLs:

●​ Equivalence:​
Are two CFGs equivalent? → Undecidable​

●​ Universality:​
Does CFG generate all strings? → Undecidable​

●​ Inclusion:​
Is L₁ ⊆ L₂ for two CFLs? → Undecidable​
12. Define CNF with suitable example. – 2M

Chomsky Normal Form (CNF) is a way of rewriting a CFG such that all productions follow strict
forms:

Allowed Production Forms in CNF:

1.​ A → BC (two non-terminals)​

2.​ A → a (a single terminal)​

3.​ S → ε (only if ε is in the language)​

Example:

Original CFG:​
S → aAB​
A → a​
B→b

Convert to CNF:

1. Replace the terminal a in S → aAB with a new non-terminal X (X → a), giving S → XAB.

2. Break the three-symbol body S → XAB into binary rules: S → XY and Y → AB.

Final CNF Rules:

● S → XY

● Y → AB

● X → a

● A → a

● B → b
Why CNF is useful:

●​ Parsing algorithms like CYK work only on CNF.​

●​ CNF simplifies CFG structure for proofs and computations.
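
The two rewriting steps used above can be mechanized. A small Python sketch (assumed here, not a complete CNF algorithm: it presumes ε-productions and unit productions were already removed, and the helper names are made up):

from itertools import count

_fresh = count(1)

def cnf_convert(grammar, terminals):
    """Apply two CNF steps: (1) wrap terminals occurring in bodies of
    length >= 2 in fresh non-terminals, (2) binarize long bodies.
    grammar: dict head -> list of bodies (tuples of symbols)."""
    result, extra = {}, {}
    def term_var(t):                       # one rule X_t -> t per terminal
        name = f"X_{t}"
        extra.setdefault(name, [(t,)])
        return name
    for head, bodies in grammar.items():
        out = []
        for body in bodies:
            if len(body) >= 2:             # step 1: replace terminals
                body = tuple(term_var(s) if s in terminals else s
                             for s in body)
            while len(body) > 2:           # step 2: binarize
                name = f"Y{next(_fresh)}"
                extra[name] = [body[-2:]]
                body = body[:-2] + (name,)
            out.append(body)
        result[head] = out
    result.update(extra)
    return result

g = {"S": [("a", "A", "B")], "A": [("a",)], "B": [("b",)]}
print(cnf_convert(g, {"a", "b"}))
# {'S': [('X_a', 'Y1')], 'A': [('a',)], 'B': [('b',)],
#  'X_a': [('a',)], 'Y1': [('A', 'B')]}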

Chomsky Normal Form (CNF) is a special form of a Context-Free Grammar (CFG) that
imposes strict constraints on the structure of production rules. These constraints simplify parsing
and formal analysis, making CNF an important tool in compiler construction, formal language
theory, and parsing algorithms like CYK (Cocke-Younger-Kasami).

Characteristics of CNF:

1.​ Production Rules in CNF: A CFG is said to be in Chomsky Normal Form if all its
production rules satisfy the following conditions:​

○ Every production is of the form A → BC, where A, B, and C are non-terminal symbols, and B and C are not the start symbol.

○​ Or, a production of the form A → a, where a is a terminal symbol.​

○​ Additionally, a single rule S → ε is allowed, but only if S is the start symbol, and ε
is part of the language.​

2.​ Therefore, the production rules of a CNF grammar can only fall into three categories:​

○​ A → BC (non-terminal to two non-terminals).​

○​ A → a (non-terminal to a terminal).​

○​ S → ε (start symbol to epsilon, if ε is in the language).​

3.​ Why CNF is Used:​

○ Simplicity for Parsing: The structure of CNF allows for easier parsing and checking whether a string belongs to the language defined by the grammar. Parsing algorithms like CYK work efficiently with CNF, as every rule has one of the uniform forms above (a runnable sketch follows this list).

○​ Formal Analysis and Proofs: CNF provides a simplified structure for theoretical
analysis, such as proving properties of languages or automata.​
○​ Algorithm Design: CNF simplifies the process of converting grammars into
automata or working with parsing algorithms that require consistent production
rule structures.
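
To make the CYK connection concrete, here is a minimal CYK membership test in Python (a sketch assumed here, not part of the original text; the encoding of a grammar as a dict of tuples is an arbitrary choice):

def cyk(word, grammar, start="S"):
    """CYK membership test for a CFG in CNF.
    grammar: dict mapping each non-terminal to a list of bodies;
    a body is either (terminal,) or (B, C) for non-terminals B, C."""
    n = len(word)
    if n == 0:
        return False  # S -> ε must be checked separately
    # table[i][l] = set of non-terminals deriving word[i : i + l + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, c in enumerate(word):          # length-1 substrings: A -> a
        for head, bodies in grammar.items():
            if (c,) in bodies:
                table[i][0].add(head)
    for length in range(2, n + 1):        # longer substrings: A -> BC
        for i in range(n - length + 1):
            for split in range(1, length):
                for head, bodies in grammar.items():
                    for body in bodies:
                        if (len(body) == 2
                                and body[0] in table[i][split - 1]
                                and body[1] in table[i + split][length - split - 1]):
                            table[i][length - 1].add(head)
    return start in table[0][n - 1]

# CNF grammar for { a^n b^n | n >= 1 }: S -> AT | AB, T -> SB, A -> a, B -> b
g = {"S": [("A", "T"), ("A", "B")], "T": [("S", "B")],
     "A": [("a",)], "B": [("b",)]}
print(cyk("aabb", g))  # True
print(cyk("abab", g))  # False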

13. Define GNF with suitable example. – 2M

Greibach Normal Form (GNF) is a form of context-free grammar where every production rule
has the following format:

● A → aα, where a is a terminal and α is a (possibly empty) string of non-terminals.

GNF ensures that every production starts with a terminal symbol, making it especially useful for
top-down parsing.

Example Grammar:

Original CFG:​
S → aAB | bA​
A → a​
B→b

Convert to GNF:

1.​ Start with the production S → aAB. This already follows GNF because it begins with a
terminal (a).​

2.​ Similarly, S → bA is valid since it starts with the terminal b.​

So, the grammar is already in GNF:

●​ S → aAB | bA​

●​ A → a​
●​ B → b​

Why GNF is useful:

●​ GNF allows a predictive parser (like recursive descent) to work efficiently.​

●​ It guarantees that each production starts with a terminal, simplifying top-down parsing.​

14. Difference between NP-Hard and NP-Complete with example. – 2M

NP-Hard and NP-Complete are both classes of problems in computational complexity theory,
but they are distinct in their definitions and properties.

NP-Hard:

●​ NP-Hard problems are at least as hard as the hardest problems in NP.​

●​ A problem X is NP-Hard if every problem in NP can be reduced to it in polynomial time.​

●​ NP-Hard problems are not necessarily in NP, and they may not have a known solution
that can be verified in polynomial time.​

NP-Complete:

●​ NP-Complete problems are both in NP and NP-Hard.​

●​ A problem X is NP-Complete if:​

○​ It is in NP.​

○​ Every other problem in NP can be reduced to it in polynomial time.​

●​ NP-Complete problems are the hardest problems in NP, meaning that if one
NP-Complete problem is solved in polynomial time, all NP problems can be solved in
polynomial time.​

Example:

● NP-Hard Example:
The Halting Problem is NP-Hard but not in NP: it is undecidable, and proposed solutions cannot even be verified in polynomial time.

●​ NP-Complete Example:​
The Traveling Salesman Problem (TSP) is NP-Complete because:​

○​ It is in NP (a solution can be verified in polynomial time).​

○​ Every other NP problem can be reduced to it in polynomial time.​

Key Differences:

Feature               | NP-Hard                                  | NP-Complete
Belongs to NP?        | Not necessarily                          | Yes
Reductions            | Every problem in NP reduces to it        | Every problem in NP reduces to it
Solution verification | May not be verifiable in polynomial time | Verifiable in polynomial time

15. Define Turing Machine with suitable example. – 2M

A Turing Machine (TM) is a theoretical model of computation that defines an abstract machine
capable of solving any problem that can be algorithmically solved.

Components of a Turing Machine:

●​ Tape: Infinite memory divided into cells, each holding a symbol.​

●​ Head: Reads and writes symbols on the tape.​


●​ State Register: Holds the state of the machine.​

●​ Transition Function: Defines the machine’s operation based on the current state and
symbol it reads.​

●​ Start State: The state in which the machine begins.​

●​ Accept State: A state that indicates the machine has finished processing.​

●​ Reject State: A state that indicates the machine has failed.​

Example Language:

L = { w | w is a string with an even number of 1s } over the alphabet {0, 1}.

TM Design for L:

1. The machine starts in state q0 (meaning: an even number of 1's read so far).

2. It scans the input left to right, leaving every symbol unchanged.

3. Each 1 it reads toggles the machine between q0 (even count) and q1 (odd count); reading a 0 leaves the state unchanged.

4. If the machine reaches the end of the input while in q0, the number of 1's is even and the string is accepted.
Transition Function Example:

●​ δ(q0, 1) → (q1, 1, R)​

●​ δ(q1, 1) → (q0, 1, R)​

●​ δ(q0, 0) → (q0, 0, R)​

●​ δ(q1, 0) → (q1, 0, R)​


● δ(q0, B) → (q_accept, B, N) (B is the blank symbol: the input has ended with an even number of 1's)

Explanation:

The Turing machine above accepts strings with an even number of 1’s by tracking the count in
states q0 and q1. If it ends with an even number of 1’s, it transitions to the accept state.
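
A direct Python rendering of this machine (a sketch assumed here); the tape walk reduces to one left-to-right pass that toggles between q0 and q1:

def tm_even_ones(w: str) -> bool:
    """Simulate the TM: q0 = even number of 1's seen, q1 = odd."""
    state = "q0"
    for c in w:                 # head moves right, symbols unchanged
        if c == "1":            # transitions 1-2: toggle the state
            state = "q1" if state == "q0" else "q0"
        # transitions 3-4: a 0 leaves the state unchanged
    return state == "q0"        # blank reached in q0 -> q_accept

print(tm_even_ones("0110"))  # True (two 1's)
print(tm_even_ones("1011"))  # False (three 1's)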

Greibach Normal Form (GNF) is a normal form for Context-Free Grammars (CFGs) where
each production rule is constrained to the following form:

●​ A → aα, where:​

○​ A is a non-terminal.​

○​ a is a terminal symbol.​

○​ α is a string of non-terminals (including possibly the empty string).​

This form ensures that every production starts with a terminal symbol followed by zero or more
non-terminals, making it particularly useful for top-down parsing, such as recursive descent
parsers.

Characteristics of GNF:

●​ Every production has the form A → aα, where a is a terminal and α is a sequence of
non-terminals (which can be empty).​

●​ This structure allows for direct prediction of the next symbol in the input string (hence,
predictive parsing).​

Advantages of GNF:

●​ Simplifies top-down parsing because it allows for unambiguous prediction of what the
next input symbol will be.​

●​ Facilitates the construction of recursive descent parsers, as the production rules lead
directly to terminal symbols.​

Example Grammar:
Consider the following CFG that is not in GNF:

S → AB | a
A → a
B → b

This grammar is not in GNF because the production S → AB does not start with a terminal symbol. The remaining productions — S → a, A → a, and B → b — already have the required A → aα form (with α empty).

Steps to Convert a CFG to GNF:

1.​ Eliminate ε-productions (productions of the form A → ε) unless the start symbol
generates ε.​

○​ In this example, there are no ε-productions, so we can skip this step.​

2.​ Eliminate unit productions (productions like A → B).​

○​ Replace any unit production with equivalent rules. In this case, there are no unit
productions to eliminate.​

3.​ Eliminate left recursion (if the grammar is left-recursive).​

○​ Left recursion occurs when a non-terminal can eventually produce itself as the
first symbol in its derivation, like A → Aα. This doesn't happen in this grammar,
so we skip this step as well.​

4.​ Ensure each production starts with a terminal.​

○​ We need to ensure all rules have the form A → aα, where a is a terminal. If not,
we break down the non-terminal into a terminal-first structure.​

Conversion of Example Grammar to GNF:

Start with the grammar:


S → AB | a
A → a
B → b

Step 1: Identify non-terminal starting productions.

●​ S → AB: This does not start with a terminal, so we need to transform it.​

●​ A → a: This is already in GNF (it starts with a terminal).​

●​ B → b: This is already in GNF (it starts with a terminal).​

Step 2: Replace leading non-terminals with terminal-first productions.

● The production S → AB begins with the non-terminal A. Since A → a is the only production for A, substitute it into S: S → aB.

● B → b needs no change.

After this substitution, we get:

S → aB | a
A → a
B → b

(A is now unreachable from S; it is kept here only to mirror the original grammar.)

Step 3: Check for consistency with GNF.

Now, each production is in GNF:

●​ S → aB: Starts with terminal a, followed by a non-terminal B.​

●​ S → a: Starts with terminal a.​


●​ A → a: Starts with terminal a.​

●​ B → b: Starts with terminal b.​

Thus, the grammar is now in Greibach Normal Form.

Final Grammar in GNF:


S → aB | a
A → a
B → b

Summary of Conversion Process:

1.​ Start with the original grammar and identify the productions that do not follow GNF.​

2.​ Introduce new non-terminals where necessary to ensure that productions begin with a
terminal.​

3.​ Modify the rules by replacing non-terminal starts with terminal-first sequences, ensuring
each rule is of the form A → aα.​

4.​ Repeat the process until all productions are in GNF, ensuring that each rule is simplified
to start with a terminal symbol, followed by zero or more non-terminals.​

Key Points:

●​ GNF simplifies top-down parsing by ensuring that each production starts with a terminal
symbol.​

●​ The conversion process involves eliminating any productions that do not start with a
terminal and introducing new non-terminals if necessary to maintain the structure of
GNF.
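
To see why the terminal-first shape helps a top-down parser, here is a small backtracking parser in Python (a sketch assumed here; the grammar encoding is made up). Because every GNF production consumes one input symbol, each recursive call advances the input, so the search always terminates:

def gnf_parse(grammar, start, w):
    """Top-down parse for a grammar in GNF.
    grammar: dict non-terminal -> list of (terminal, [non-terminals])."""
    def derive(stack, i):
        if not stack:                      # no non-terminals left to expand
            return i == len(w)
        head, rest = stack[0], stack[1:]
        for t, alpha in grammar[head]:     # try each alternative A -> t alpha
            if i < len(w) and w[i] == t and derive(list(alpha) + rest, i + 1):
                return True
        return False
    return derive([start], 0)

# The GNF grammar derived above: S -> aB | a, B -> b
g = {"S": [("a", ["B"]), ("a", [])], "B": [("b", [])]}
print(gnf_parse(g, "S", "ab"))  # True
print(gnf_parse(g, "S", "b"))   # False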
PART-B

1.​ Explain rightmost derivation and leftmost derivation with a suitable example. 4M
Leftmost Derivation:
In leftmost derivation, we start with the start symbol and at each step, we expand the
leftmost non-terminal first. The goal is to replace the leftmost non-terminal with the
corresponding production rule until the entire string is derived.
Steps for Leftmost Derivation:
1.​ Start with the start symbol.
2.​ Replace the leftmost non-terminal using one of its production rules.
3.​ Continue expanding the leftmost non-terminal until no non-terminal remains.
Example of Leftmost Derivation:
Consider the following grammar:

S → ABC
A → a
B → b
C → c

Let's derive the string w = abc using leftmost derivation:

• Step 1: Start with S.
S ⇒ ABC
• Step 2: Expand the leftmost non-terminal A to a (using A → a).
⇒ aBC
• Step 3: Now the leftmost non-terminal is B. Expand B to b (using B → b).
⇒ abC
• Step 4: Finally, expand C to c (using C → c).
⇒ abc

Now abc is the final derived string.
Rightmost Derivation:
In rightmost derivation, we start with the start symbol and at each step, we expand the
rightmost non-terminal first. The goal is to replace the rightmost non-terminal with the
corresponding production rule until the entire string is derived.
Steps for Rightmost Derivation:
1.​ Start with the start symbol.
2.​ Replace the rightmost non-terminal using one of its production rules.
3.​ Continue expanding the rightmost non-terminal until no non-terminal remains.
Example of Rightmost Derivation:
Let's derive the string w = abc again, using rightmost derivation for the same grammar:

S → ABC
A → a
B → b
C → c

• Step 1: Start with S.
S ⇒ ABC
• Step 2: Expand the rightmost non-terminal C to c (using C → c).
⇒ ABc
• Step 3: Now expand the rightmost non-terminal B to b (using B → b).
⇒ Abc
• Step 4: Finally, expand A to a (using A → a).
⇒ abc

Now abc is the final derived string.
________________________________________
Summary of Key Differences:
•​ In leftmost derivation, you always replace the leftmost non-terminal first.
•​ In rightmost derivation, you always replace the rightmost non-terminal first.
•​ For the same grammar, a string can have both leftmost and rightmost derivations,
but the order in which the non-terminals are replaced will differ.

2.​ Show that the following grammar is ambiguous: S → SbS / a. 4M

What is an Ambiguous Grammar?


A grammar is said to be ambiguous if there exists at least one string in the language that
can be derived in more than one way, i.e., the string has more than one leftmost
derivation or rightmost derivation or has more than one parse tree.
When a grammar is ambiguous, the same string can be generated by multiple derivation
paths, leading to different parse trees. This makes it difficult to determine a unique
structure or meaning from the string.
________________________________________
Grammar:
The given grammar is:

S → SbS | a

This grammar consists of two production rules:
1. S → SbS (recursive rule)
2. S → a (base case)

To show that this grammar is ambiguous, we need to exhibit at least one string that has more than one derivation (equivalently, more than one parse tree). Note that every use of S → SbS introduces one b, so the strings of this language are a, aba, ababa, … We derive the string ababa in two different ways.
________________________________________
First Derivation (1):
Apply the recursive rule to the left S.

1. Step 1: Start with S and apply S → SbS.
S ⇒ SbS
2. Step 2: Expand the leftmost S again using S → SbS.
⇒ SbSbS
3. Step 3: Expand the three remaining S's, left to right, using S → a.
⇒ abSbS ⇒ ababS ⇒ ababa

The resulting string is ababa, grouped as (aba)ba.
________________________________________
Second Derivation (2):
Apply the recursive rule to the right S.

1. Step 1: Start with S and apply S → SbS.
S ⇒ SbS
2. Step 2: Expand the leftmost S using S → a.
⇒ abS
3. Step 3: Expand the remaining S using S → SbS.
⇒ abSbS
4. Step 4: Expand the two remaining S's using S → a.
⇒ ababS ⇒ ababa

The resulting string is again ababa, this time grouped as ab(aba).
________________________________________
Conclusion:
We have shown two different leftmost derivations for the same string ababa, and they correspond to different parse trees. This proves that the grammar is ambiguous.
________________________________________
Parse Trees:
Parse Tree for Derivation 1 (the left S carries the recursion):

          S
        / | \
       S  b  S
     / | \   |
    S  b  S  a
    |     |
    a     a

Parse Tree for Derivation 2 (the right S carries the recursion):

       S
     / | \
    S  b  S
    |   / | \
    a  S  b  S
       |     |
       a     a

As we can see, the two parse trees have different structures, confirming that the grammar is ambiguous.
________________________________________
Final Answer:
The grammar S → SbS | a is ambiguous because there exist at least two different derivations (and parse trees) for the string ababa. The same string thus has multiple possible structures, which shows the ambiguity of the grammar.

3.​ Construct the CFG representing the set of palindrome over (0+1)*. 4M
4.​ Explain about pumping lemma algorithm. 4M
5.​ Write about closure properties of context free language. 4M
6.​ Enumerate normal forms for context free language. 4M
7.​ Convert the following context free language to CNF: S → ABC, A → Aa / ε, B → bB
/ ε, C → cC / ε. 4M
8.​ Convert the following CFG into GNF: S → AB, A → a, B → CA, C → AB / b. 4M
9.​ Construct a PDA for accepting a language {L = a^n b^n | n ≥ 1}. 4M
10.​ Construct PDA for the given CFG: S → 0BB, B → 0S | 1S | 0. Test whether 01044 is
accepted by this PDA. 8M
11.​ Convert the following CFG into Chomsky’s Normal Form (CNF): S → A B A | B a A
| A, A → B a | S | ε, B → B a | b | C a, C → C a, D → D a D | a. 8M

12.​ Differentiate between PDA and DPDA with suitable examples. 8M


Introduction to PDA and DPDA
Both PDA (Pushdown Automata) and DPDA (Deterministic Pushdown Automata) are
types of automata used to recognize context-free languages. They are more powerful
than finite automata due to their use of a stack, which gives them additional
computational capability. While both PDA and DPDA use a stack to help in computation,
the key difference lies in their determinism.
________________________________________
What is a Pushdown Automaton (PDA)?
A Pushdown Automaton (PDA) is a type of automaton that uses a stack to store symbols.
It is a non-deterministic machine, meaning it can be in multiple states at a given point in
time depending on the input and the stack's current contents. A PDA can accept a
language by either acceptance by final state or acceptance by empty stack.
PDA Components:
1.​ States: A finite set of states, including a start state and one or more accepting
states.
2.​ Input alphabet: A finite set of symbols that the automaton can read.
3.​ Stack alphabet: A finite set of symbols that the automaton can push and pop from
the stack.
4.​ Transition function: A set of rules that describe the transitions between states
based on the current input symbol, the current top stack symbol, and the state of the
machine.
5.​ Start state: The state where the automaton begins its computation.
6.​ Start symbol: The symbol initially placed in the stack.
7.​ Acceptance condition: The conditions under which the PDA will accept an input
string. It can be either by reaching an accepting state or by emptying the stack.
Example of a PDA:
Consider the following PDA designed to accept the language L = { aⁿbⁿ | n ≥ 1 }, i.e., strings consisting of n a's followed by n b's.

PDA for L = { aⁿbⁿ | n ≥ 1 }:
• States: q0, q1, q2
• Start state: q0
• Start symbol: Z (the initial stack symbol)
• Input alphabet: {a, b}
• Stack alphabet: {a, Z}
• Acceptance condition: acceptance by empty stack
Transition function:
• δ(q0, a, Z) = (q0, aZ) — Push a onto the stack when reading a from the input.
• δ(q0, a, a) = (q0, aa) — Push a onto the stack when reading a (with a already on top).
• δ(q0, b, a) = (q1, ε) — Pop a from the stack when reading b from the input.
• δ(q1, b, a) = (q1, ε) — Continue popping a's while reading b's.
• δ(q1, ε, Z) = (q2, Z) — Move to the accepting state when only Z remains on the stack.
Example Walkthrough (for input string "aabb"):
1. Start at state q0; the stack contains Z.
2. Read a, stay in q0, push a. The stack is now aZ.
3. Read a, stay in q0, push a. The stack is now aaZ.
4. Read b, move to q1, pop a. The stack is now aZ.
5. Read b, stay in q1, pop a. The stack is now Z.
6. On ε with Z on top, move to q2.
Since the whole input is consumed and the machine reaches q2, the string "aabb" is accepted.
________________________________________
What is a Deterministic Pushdown Automaton (DPDA)?
A Deterministic Pushdown Automaton (DPDA) is a type of PDA that operates
deterministically. Unlike a general PDA, which may have multiple possible transitions for
a given input and stack symbol, a DPDA has exactly one transition for each combination
of input symbol and stack symbol. This makes DPDAs more predictable and
deterministic in behavior.
Key Properties of a DPDA:
1.​ Deterministic Transitions: For each input symbol and top stack symbol, the DPDA
has only one possible transition.
2.​ Deterministic Nature: Given a state, input symbol, and stack symbol, the DPDA
can only make one move, which ensures no ambiguity in the operation.
3.​ Acceptance Condition: DPDAs can accept languages through either acceptance
by final state or acceptance by empty stack.
Example of a DPDA:
Consider the following DPDA that accepts the language L = { aⁿbⁿ | n ≥ 1 }, similar to the PDA example above but with deterministic behavior.
• States: q0, q1, q2
• Start state: q0
• Start symbol: Z
• Input alphabet: {a, b}
• Stack alphabet: {a, Z}
• Acceptance condition: acceptance by empty stack
Transition function:
• δ(q0, a, Z) = (q0, aZ) — Push a onto the stack when reading a from the input.
• δ(q0, a, a) = (q0, aa) — Push a onto the stack when reading a.
• δ(q0, b, a) = (q1, ε) — Pop a from the stack when reading b from the input.
• δ(q1, b, a) = (q1, ε) — Continue popping a's while reading b's.
• δ(q1, ε, Z) = (q2, Z) — Move to the accepting state when only Z remains on the stack.
Example Walkthrough (for input string "aabb"):
The transitions are exactly the same as in the PDA example, but because the DPDA is deterministic, each move is uniquely determined by the current state, input symbol, and top stack symbol.
________________________________________
Differences Between PDA and DPDA
Now, let's summarize the key differences between PDA and DPDA:
Feature | PDA (Pushdown Automaton) | DPDA (Deterministic Pushdown Automaton)
Determinism | Non-deterministic: can have multiple transitions for the same input symbol and stack symbol. | Deterministic: has exactly one transition for each input symbol and stack symbol.
Transitions | Multiple possible transitions may exist for a given input symbol and stack symbol. | At most one possible transition for each input symbol and stack symbol.
Language Recognition | Recognizes all context-free languages. | Recognizes only a subset of context-free languages (the deterministic CFLs).
Acceptance Condition | Can accept by empty stack or final state. | Can accept by empty stack or final state, similar to PDA.
Computational Power | More computationally powerful due to its non-deterministic nature. | Less computationally powerful than PDA, due to determinism.
Applications | Used for recognizing languages with ambiguous or guess-based structures (e.g., palindromes). | Used where deterministic behavior is required, such as parsing deterministic grammars.
Examples | Recognizing palindromes, parsing ambiguous expressions. | Parsing deterministic context-free grammars, such as balanced parentheses.
________________________________________
Conclusion
Both PDA and DPDA are important models for recognizing context-free languages, but
they differ significantly in terms of determinism. While a PDA can process more complex
languages with non-determinism, a DPDA operates deterministically, and this constraint
limits its ability to recognize some context-free languages. However, this determinism
makes DPDAs more predictable and suitable for applications where a unique
interpretation of the input is needed.

13.​ Differentiate between Chomsky’s Normal Form (CNF) and GNF (Greibach Normal
Form) with suitable examples. 4M
14.​ Explain decidability and undecidability with examples. 4M
15.​ Explain Post correspondence problem with an example. 4M
16.​ Discuss about Modified Post correspondence problem with an example. 4M
17.​ Explain about the Decision Properties and Closure Properties of CFL. 4M
18.​ Construct a Turing Machine (TM) that accepts the language L = {0^n 1^n | n > 1}.
4M

19.​ Short notes on: i) P ii) NP iii) NP Hard iv) NP Complete with example. 8M
Introduction to P, NP, NP-Hard, and NP-Complete
In the theory of computational complexity within Automata Theory, P, NP, NP-Hard, and
NP-Complete are classes used to categorize decision problems based on their
computational complexity. These classifications help determine the difficulty of problems
and the resources needed to solve them.
Understanding the relationships between these complexity classes is crucial for
determining which problems are solvable efficiently (in polynomial time) and which are
likely intractable (requiring exponential time).
Let's define each of these terms one by one and explore them with examples.
________________________________________
i) P (Polynomial Time)
P is the class of decision problems (problems with a "yes" or "no" answer) that can be
solved by a deterministic Turing machine in polynomial time. In other words, a problem is
in P if there exists an algorithm to solve it that runs in time proportional to a polynomial
function of the input size.
Key Points:
•​ P problems are considered efficiently solvable.
• The solution to these problems can be computed in time O(n^k), where n is the size of the input and k is some constant.
•​ Examples include problems like sorting a list, searching an element in an array,
and matrix multiplication.
Example: Sorting Problem
Given a list of numbers, sorting them in non-decreasing order is a problem in P. The popular Merge Sort algorithm solves this in O(n log n), which is polynomial time.
________________________________________
ii) NP (Non-deterministic Polynomial Time)
NP is the class of decision problems for which a non-deterministic Turing machine can
solve the problem in polynomial time, or equivalently, for which a solution can be verified
in polynomial time given a potential solution.
Key Points:
•​ A problem is in NP if, once a solution is guessed (non-deterministically), it can be
verified in polynomial time.
•​ NP problems may or may not have efficient solutions, but any proposed solution
can be checked in polynomial time.
•​ NP includes both problems that are solvable in polynomial time (like those in P)
and those that are not known to be solvable efficiently.
Example: Hamiltonian Path Problem
Given a graph, the Hamiltonian Path Problem asks if there exists a path that visits each
vertex exactly once. Verifying a proposed solution (i.e., checking if a given path is a
Hamiltonian path) can be done in polynomial time, but finding such a path (if one exists)
is not known to be solvable in polynomial time.
________________________________________
iii) NP-Hard
NP-Hard is a class of problems that are, informally, at least as hard as the hardest
problems in NP. A problem is considered NP-Hard if every problem in NP can be reduced
to it in polynomial time. In other words, if we can solve an NP-Hard problem efficiently,
we can solve all NP problems efficiently.
Key Points:
•​ NP-Hard problems are not necessarily in NP because they may not be decision
problems (they may not have a yes/no answer).
•​ Solving an NP-Hard problem would give a solution to all NP problems, but finding
such a solution may be difficult or impossible.
•​ NP-Hard does not require polynomial-time verification.
Example: Traveling Salesman Problem (TSP)
In the Traveling Salesman Problem, you are given a list of cities and must determine the
shortest possible route that visits each city exactly once and returns to the starting city.
This (optimization) problem is NP-Hard: it is computationally very difficult and no polynomial-time solution is known. Its decision version — "is there a route of total cost at most k?" — can be verified in polynomial time once a route is proposed, and is NP-Complete.
________________________________________
iv) NP-Complete
NP-Complete is the class of problems that are both in NP and NP-Hard. These problems are the hardest problems in NP, and they are believed to be intractable (i.e., no polynomial-time solution exists unless P = NP).
A problem is considered NP-Complete if:
1.​ It is in NP (its solution can be verified in polynomial time).
2.​ Every other problem in NP can be reduced to it in polynomial time.
If any NP-Complete problem can be solved in polynomial time, then all problems in NP can be solved in polynomial time (i.e., P = NP).
Key Points:
•​ NP-Complete problems are both hard and verifiable in polynomial time.
• If we find a polynomial-time solution to one NP-Complete problem, it would imply that P = NP.
•​ NP-Complete problems are important because they represent the "hardest"
problems in NP.
Example: Boolean Satisfiability Problem (SAT)
The SAT Problem asks whether there exists an assignment of truth values to variables
that satisfies a given Boolean formula. It is NP-Complete because:
•​ It is in NP (a solution can be verified in polynomial time).
•​ Any problem in NP can be reduced to SAT in polynomial time.
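
As a concrete illustration of "verifiable in polynomial time", here is a tiny SAT verifier in Python (a sketch assumed here; the DIMACS-style clause encoding is our own choice):

def verify_sat(clauses, assignment):
    """Check a proposed assignment against a CNF formula in linear time.
    clauses: list of clauses; a clause is a list of ints, where 3 means
    x3 and -3 means NOT x3. assignment: dict variable -> bool."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3)
f = [[1, -2], [2, 3]]
print(verify_sat(f, {1: True, 2: False, 3: True}))   # True
print(verify_sat(f, {1: False, 2: True, 3: False}))  # False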
________________________________________
Summary of Differences and Relationships
Class | Description | Example
P | Problems solvable in polynomial time. | Sorting, matrix multiplication, shortest path
NP | Problems verifiable in polynomial time, but not necessarily solvable in polynomial time. | Hamiltonian Path, Boolean satisfiability (SAT)
NP-Hard | Problems at least as hard as the hardest problems in NP. | Traveling Salesman Problem (TSP), Halting Problem
NP-Complete | Problems that are both in NP and NP-Hard. | SAT, Knapsack problem, Vertex Cover problem
________________________________________
Conclusion
In summary, P is the class of problems solvable in polynomial time, NP is the class of
problems whose solutions can be verified in polynomial time, NP-Hard includes the
hardest problems in NP (or even harder), and NP-Complete represents the hardest
problems in NP that are both in NP and NP-Hard. Understanding these complexity
classes is crucial in the field of computational theory and automata, as it helps determine
the tractability of algorithms and problems in computer science.

20. Illustrate the process of Recursive languages and Recursively Enumerable languages with suitable examples. 8M
21.​ Construct a Turing Machine for L = { a^n b^n c^n | n ≥ 1 }. 8M

Unit-4
32. Convert the following CFG into Chomsky’s Normal Form (CNF):
S → ABA | BaA | A
A → Ba | S | ε
B → Ba | b | Ca
C → Ca
D → DaD | a

33. Difference Between PDA and DPDA


A PDA and a DPDA are both computational models used to recognize context-free
languages (CFLs), but they differ mainly in how they handle transitions —
nondeterminism vs determinism.
________________________________________
1. Pushdown Automaton (PDA)
•​ Nature: Nondeterministic
A PDA can have multiple possible transitions for the same input, stack symbol, and state.
It may explore multiple paths and accept if any path leads to an accepting state.
•​ Memory: Uses a stack to store symbols for processing.
•​ Formal Definition:
A PDA is defined by a 7-tuple:
(Q, Σ, Γ, δ, q₀, Z₀, F)
Where:
o​ Q: Set of states
o​ Σ: Input alphabet
o​ Γ: Stack alphabet
o​ δ: Transition function
o​ q₀: Initial state
o​ Z₀: Initial stack symbol
o​ F: Set of accepting states
•​ Transition Function:
δ(q, a, X) → set of possible (next state, stack action)
Allows multiple options (nondeterministic).
•​ Language Power:
Recognizes context-free languages (CFLs).
•	Example Language:
L = { wwᴿ | w ∈ {a, b}* } (the even-length palindromes)
This language is a CFL but not a deterministic CFL — the PDA must nondeterministically guess the midpoint, i.e., when to switch from pushing symbols to matching them.
________________________________________
2. Deterministic Pushdown Automaton (DPDA)
•​ Nature: Deterministic
A DPDA must have exactly one possible transition for each combination of input symbol,
stack symbol, and current state. No ambiguity is allowed.
•​ Memory: Also uses a stack like PDA.
•​ Formal Definition:
Also defined as a 7-tuple:
(Q, Σ, Γ, δ, q₀, Z₀, F)
But with the restriction that δ is deterministic.
•​ Transition Function:
δ(q, a, X) → exactly one (next state, stack action)
No multiple choices allowed.
•​ Language Power:
Recognizes deterministic context-free languages (DCFLs), which are a subset of CFLs.
•​ Example Language:
L = { aⁿbⁿ | n ≥ 0 }
This language can be accepted deterministically by matching each "a" with a "b".

25. Explain about pumping lemma algorithm


https://www.youtube.com/watch?v=KyQc054-BEU

26. Write about closure properties of context free language


1. Union Property
If you have two context-free languages, L1 and L2, the union of these two, represented as L1 ∪ L2, will also be a context-free language.
Example
Let's say L1 = { aˣbˣ | x > 0 }
The corresponding grammar G1 would have P: S1 → aS1b | ab
And if L2 = { cᶻdᶻ | z ≥ 0 }
The corresponding grammar G2 would have P: S2 → cS2d | ε
The union of L1 and L2 would be L = L1 ∪ L2 = { aˣbˣ } ∪ { cᶻdᶻ }
Here, the corresponding grammar G would have the additional production S → S1 | S2
2. Concatenation Property
If L1 and L2 are CFLs, then the concatenation of these two, represented as L1L2, will also be a context-free language.
Example
The concatenation of the languages L1 and L2 would be L = L1L2 = { aˣbˣcᶻdᶻ }
The corresponding grammar G would have the additional production S → S1 S2
3. Kleene Star Property
If L is a CFL, then the Kleene Star of L, represented as L*, will also be a context-free language.
Example
If L = { aˣbˣ | x ≥ 0 }
Then the corresponding grammar G would have P: S → aSb | ε
Thus, the Kleene Star is L1 = { aˣbˣ }*
Here, the corresponding grammar G1 would have the additional productions S1 → SS1 | ε
However, CFLs are not closed under the following operations:
o Intersection − If L1 and L2 are CFLs, then the intersection of these two, represented as L1 ∩ L2, may not be a CFL.
o Complement − If L1 is a CFL, the complement of L1, represented as L1’, may not be a CFL.
(By contrast, intersection with a regular language does preserve context-freeness: if L1 is regular and L2 is a CFL, then L1 ∩ L2 is always a CFL.)
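
These closure proofs are constructive, which a short Python sketch can make explicit (assumed here; a grammar is a dict from non-terminals to lists of bodies, and the two input grammars are assumed to use disjoint non-terminal names):

def union(g1, s1, g2, s2):
    """Grammar for L1 ∪ L2: add S_u -> S1 | S2."""
    g = {**g1, **g2}
    g["S_u"] = [[s1], [s2]]
    return g, "S_u"

def concat(g1, s1, g2, s2):
    """Grammar for L1 L2: add S_c -> S1 S2."""
    g = {**g1, **g2}
    g["S_c"] = [[s1, s2]]
    return g, "S_c"

def star(g1, s1):
    """Grammar for L1*: add S_s -> S1 S_s | ε (the empty body)."""
    g = dict(g1)
    g["S_s"] = [[s1, "S_s"], []]
    return g, "S_s"

g1 = {"S1": [["a", "S1", "b"], ["a", "b"]]}   # L1 = { a^x b^x | x > 0 }
g2 = {"S2": [["c", "S2", "d"], []]}           # L2 = { c^z d^z | z >= 0 }
print(union(g1, "S1", g2, "S2")[0]["S_u"])    # [['S1'], ['S2']]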
27. Enumerate normal forms for context free language
Ans:
There are two primary normal forms for Context-Free Grammars (CFGs) that are:
1.​ Chomsky Normal Form (CNF)
2.​ Greibach Normal Form (GNF)

28. Convert the following context-free language to CNF:
S → ABC
A → Aa | ε
B → bB | ε
C → cC | ε
29. Convert the following CFG into GNF:
S → AB
A → a
B → CA
C → AB | b

30. Construct a PDA for accepting the language L = { aⁿbⁿ | n ≥ 1 }.

31. Construct a PDA for the given CFG:
S → 0BB
B → 0S | 1S | 0
Test whether 010⁴ is accepted by this PDA.

https://www.naukri.com/code360/library/cfg-to-pda-conversion

Unit-5
40. short notes on:
i) P ii) NP iii) NP Hard iv) NP Complete with example
41. Illustrate the process of Recursive languages and Recursively enumerable
Languages with suitable examples?

Ans:
Recursively Enumerable Languages
In simple words, a "language" is a collection of strings, like words in a dictionary. A
recursively enumerable language is a language where we can create a computer program
(or a Turing machine) that can systematically list out all the strings that belong to the
language.
Consider a machine that can generate all the possible sentences in the English language,
one by one. This machine wouldn't necessarily know which sentences are not in the
English language, but it could list out all the valid sentences. This is the idea of a
recursively enumerable language. It can enumerate all the strings that are part of the
language.

Recursive Languages: A Subset of RE Languages
Another important subset of RE languages is the class of recursive languages. For a recursive language, the Turing machine not only accepts strings belonging to the language but also always halts (and rejects) on strings that are not in the language.
As an example, consider the language L = { aⁿbⁿcⁿ | n ≥ 0 }. This language consists of strings where the numbers of a's, b's, and c's are equal; a Turing machine can decide membership by repeatedly crossing off one a, one b, and one c, halting on every input.
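
A one-function decider in Python (a sketch assumed here) makes the "always halts" point concrete:

def decide_anbncn(w: str) -> bool:
    """Always halts with a yes/no answer, which is what makes
    L = { a^n b^n c^n | n >= 0 } recursive, not just r.e."""
    n = len(w) // 3
    return len(w) == 3 * n and w == "a" * n + "b" * n + "c" * n

print(decide_anbncn("aabbcc"))  # True
print(decide_anbncn("aabcc"))   # False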

42. Construct a Turing Machine for L = { aⁿbⁿcⁿ | n ≥ 1 }.
