Automata and Complexity theory
PROPERTIES OF DETERMINISTIC FINITE AUTOMATA

Here are some properties of deterministic finite automata (DFAs) explained in simpler terms:

1. Clear rules: DFAs follow straightforward and easy-to-understand rules. It's like a game where you know exactly what moves to make based on the current situation.

2. Predictable behavior: DFAs always behave in a predictable manner. They don't make random choices or guesses. Every time they receive an input, they know exactly which state to transition to.

When the DFA receives an input symbol while in a particular state, it follows a specific transition rule that leads it to a single, predetermined next state. The transition rules of a DFA are deterministic, meaning they leave no room for ambiguity or multiple possible outcomes.

This property of having a unique next state based on the current state and input symbol makes DFAs easy to understand and analyze. It ensures that the behavior of the DFA is well-defined and predictable for any given input sequence.

These properties make DFAs simple and easy to understand. They operate in a step-by-step manner, always knowing what to do next based on the input and their current state. DFAs are often used to recognize and validate patterns in strings, making them useful in various applications, such as text processing, compilers, and language recognition.
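As a small illustration (my own sketch, not part of the original notes), the Python snippet below encodes a DFA's transition rules as a dictionary; the state names and the even-number-of-1s language are assumptions chosen for the example. The point is that each (state, symbol) pair maps to exactly one next state.

    # A DFA sketch for binary strings with an even number of 1s.
    # The table maps (current state, input symbol) to exactly one next
    # state; this single-valued mapping is what "deterministic" means.
    transitions = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd", ("odd", "1"): "even",
    }
    start_state = "even"
    accepting_states = {"even"}

    def dfa_accepts(word):
        state = start_state
        for symbol in word:
            state = transitions[(state, symbol)]  # exactly one next state
        return state in accepting_states

    print(dfa_accepts("1001"))  # True: two 1s
    print(dfa_accepts("10"))    # False: one 1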
REGULAR LANGUAGES

Imagine a regular language as a special club that only allows certain words or strings to be part of it. This club has specific rules or patterns that determine whether a word is allowed or not.

Here are some key points about regular languages:

1. Words that follow a pattern: A regular language is like a group of words that follow a specific pattern. Think of it like a secret code that only some words know how to follow.

2. Simple patterns: These patterns are not too complicated. They can be as simple as "words that start with 'a' and end with 't'" or "words that have exactly three letters."

3. Special machines: There are special machines called finite state machines that can check if a word follows the pattern. These machines are like detectives that examine each letter of the word to see if it matches the pattern.

4. Language membership: If a word matches the pattern and is allowed in the club, we say it belongs to the regular language. It's like being a member of a special group.

5. Useful in many things: Regular languages are helpful in many areas. They can be used to find specific words in a document, validate email addresses, or search for patterns in a large amount of text.

In summary, a regular language is like a club with rules or patterns that words must follow to be part of it. Special machines, called finite state machines, help us check if a word matches the pattern and belongs to the language. Regular languages are useful in various tasks, like searching for words or patterns in text.

In simpler terms, a regular language is a set of words or strings that can be described or recognized by a special type of machine called a finite state machine (FSM) or a regular expression.

Think of a regular language as a collection of patterns or rules that define which strings are considered part of the language. These patterns can be simple or complex, depending on the language. For example, the regular language of all words that start with the letter "a" and end with the letter "b" can be described by the pattern "a...b", where the dots represent any sequence of characters in between.

Regular languages are characterized by their simplicity and can be recognized by deterministic finite automata (DFAs) or non-deterministic finite automata (NFAs). These automata are machines with a limited amount of memory that can process input strings and determine if they belong to the regular language.

Regular languages have many practical applications, such as in text processing, pattern matching, and lexical analysis in compilers. They provide a fundamental framework for describing and working with patterns in strings, allowing for efficient and concise representations of language constraints.
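To make the "a...b" pattern above concrete, here is a small sketch using Python's re module (my own example, not from the notes); re.fullmatch checks the whole word, much like the club checking a whole membership card.

    import re

    # The club's rule: starts with "a", ends with "b", anything between.
    pattern = re.compile(r"a.*b")

    for word in ["ab", "axyzb", "ba", "a"]:
        print(word, "->", pattern.fullmatch(word) is not None)
    # ab -> True, axyzb -> True, ba -> False, a -> False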
OPERATION ON REGULAR LANGUAGES

Operations on regular languages are ways to combine or manipulate different regular languages to create new languages. It's like playing with building blocks to make new structures.

Here are some key operations on regular languages explained in simpler terms :-

1. Union (U): Imagine you have two sets of toys. The union operation allows you to combine the toys from both sets into one big set. In terms of regular languages, the union operation combines two languages to create a new language that contains all the words from both languages. For example, if one language is {cat, dog} and another language is {bird, fish}, the union operation would give you a new language {cat, dog, bird, fish}.

2. Concatenation (o): Concatenation is like putting two puzzle pieces together. If you have two strings or words, concatenation allows you to join them to create a longer word. In regular languages, concatenation combines two languages to create a new language that consists of all possible combinations of words from the original languages. For example, if one language is {hello} and another language is {world}, concatenation would give you the language {helloworld}.

3. Kleene Star (*): The Kleene star operation is like a magic wand that can make things repeat. If you have a language, the Kleene star operation allows you to create a new language that includes all possible combinations of words from the original language, including the empty word. It's like repeating and stacking the words together. For example, if the original language is {a}, the Kleene star operation would give you the language {ε (empty word), a, aa, aaa, ...}.

L* = L^0 ∪ L^1 ∪ L^2 ∪ …

4. Positive Closure of L: The positive closure is like the Kleene star, except that the empty word is not automatically included.

L+ = L^1 ∪ L^2 ∪ L^3 ∪ …

5. Complement of a Language: The complement of a language gives you all the words (over the same alphabet) that are not in the original language.

For example, let's say the original language is {cat, dog}. The complement of this language would give you all the words that are not in the original language, such as {bird, fish}. It's like saying, "Here are all the words that are not 'cat' or 'dog'."

6. Reverse of a Language (L^R = {w^R : w ∈ L}): The reverse of a language is like reading words backward. It's like reversing the order of the letters in each word.

For instance, consider the original language {hello, world}. The reverse of this language would give you {olleh, dlrow}. Each word in the original language is reversed. It's like saying, "Let's read the words backward."

Complement and reverse are additional operations that can be applied to regular languages to create new languages with different properties. Complement gives you the words not in the original language, while reverse flips the order of letters in each word. These operations expand the possibilities of manipulating and exploring regular languages.

These operations on regular languages are like tools that help you create new languages by combining or manipulating existing ones. Just like how you can build different structures by playing with building blocks, operations on regular languages allow you to create new languages with different properties and patterns.
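Since the example languages above are finite sets, these operations can be acted out directly in Python (my own illustration; for the infinite Kleene star only words built from a bounded number of factors are enumerated):

    from itertools import product

    L1 = {"cat", "dog"}
    L2 = {"bird", "fish"}

    # Union: all words from either language.
    print(L1 | L2)

    # Concatenation: every word of L1 followed by every word of L2.
    print({u + v for u in L1 for v in L2})

    # Kleene star is infinite, so enumerate only words built from at
    # most k factors; the n = 0 case contributes the empty word "".
    def star_up_to(L, k):
        words = {""}
        for n in range(1, k + 1):
            words |= {"".join(parts) for parts in product(L, repeat=n)}
        return words

    print(star_up_to({"a"}, 3))  # {'', 'a', 'aa', 'aaa'}

    # Reverse: flip every word (complement would need a universe of
    # strings to subtract L from, since Sigma* is infinite).
    print({w[::-1] for w in {"hello", "world"}})  # {'olleh', 'dlrow'}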
NON-DETERMINISTIC FINITE AUTOMATA

An NFA starts its journey in one particular room, its initial state. It's like the room it's initially placed in when you start playing with the machine. Unlike a DFA, an NFA may have several possible next rooms for the same input symbol.

It's important to note that non-deterministic finite automata can be converted to equivalent deterministic finite automata using specific algorithms.
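One standard such algorithm is the subset construction. The sketch below is a minimal Python version (the state names and the example NFA are my own, and ε-moves are omitted for brevity); each DFA state corresponds to the set of NFA states that could be active at that point.

    # Subset construction: convert an NFA to an equivalent DFA.
    # nfa[(state, symbol)] is a *set* of possible next states; each DFA
    # state below is a frozenset of NFA states (epsilon moves omitted).
    def nfa_to_dfa(nfa, start, accepting, alphabet):
        start_set = frozenset([start])
        dfa, seen, todo = {}, {start_set}, [start_set]
        while todo:
            current = todo.pop()
            for symbol in alphabet:
                nxt = frozenset(s for q in current
                                for s in nfa.get((q, symbol), set()))
                dfa[(current, symbol)] = nxt
                if nxt not in seen:
                    seen.add(nxt)
                    todo.append(nxt)
        return dfa, start_set, {S for S in seen if S & accepting}

    # Example NFA: strings over {a, b} that end in "ab".
    nfa = {("q0", "a"): {"q0", "q1"}, ("q0", "b"): {"q0"},
           ("q1", "b"): {"q2"}}
    dfa, start, acc = nfa_to_dfa(nfa, "q0", {"q2"}, "ab")
    print([sorted(S) for S in acc])  # [['q0', 'q2']]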
CONSTRUCTION OF A DFA

1. Understand the problem: The first step in constructing a DFA is to understand the problem or language you want the DFA to recognize. Identify the patterns or rules that define the valid strings in the language. For example, if you want to construct a DFA that recognizes binary strings with an even number of 1s, you need to understand the pattern of valid binary strings.

2. Determine the states: Based on the problem, determine the number of states required in the DFA. The number of states can be determined by considering the structure of the language or problem. For example, if you're constructing a DFA for the language of even-length binary strings, you might need two states to represent even and odd lengths.

3. Define the alphabet: Identify the symbols or inputs that the DFA can accept. This set of symbols is known as the alphabet. For a binary string DFA, the alphabet would consist of the symbols {0, 1}.

4. Specify the initial state: Choose the state in which the DFA begins reading its input.

5. Define the transition function: Define the transitions from one state to another based on the input symbols. Create a transition table or diagram that specifies the next state for each combination of current state and input symbol. For example, if the current state is "even" and the input symbol is "0," the transition might lead to the "even" state again.

6. Designate accepting states: Determine which states in the DFA will be the accepting or final states. These states indicate that the input processed so far is a valid string in the language.

7. Handle remaining combinations: If there are any remaining combinations of input symbols that have not been accounted for by the existing transitions, direct them to a separate state called the "error state" or "dead state." This ensures that any unrecognized or invalid inputs are handled appropriately.

8. Test and refine: Test the DFA with various input strings to ensure it behaves as expected and recognizes the desired patterns. If any issues or errors are encountered, refine the DFA by adjusting the transition function or state designations.

By following these steps, you can construct a DFA that recognizes the desired language or pattern. The DFA acts as a machine with states, transitions, and accepting states, allowing it to process input strings and determine their validity within the defined language.

STEPS FOR CONSTRUCTION OF DFA

Following steps are followed to construct a DFA:

Step – 01 :-

➔ Calculate the length of the substring
➔ All strings starting with an "n"-length substring will always require a minimum of (n+2) states in the DFA

Step – 02 :-

➔ Decide the strings for which the DFA will be constructed

Step – 03 :-

➔ Construct a DFA for the strings decided in step 02

Step – 04 :-

➔ Send all the remaining possible input combinations to the dead state

Example 1 :- Construct a DFA for the strings:
• aba
• abab

Example 2 :- Draw a DFA for the language accepting strings starting with "a" over input alphabet ∑ = {a, b}
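A sketch of Example 2 in Python (state names are my own choice): note the dead state required by step 7 and Step – 04, which traps every string that fails to start with "a".

    # DFA for Example 2: strings over {a, b} that start with "a".
    # "dead" is the trap state: once the first symbol is "b", no
    # continuation of the input can ever be accepted.
    transitions = {
        ("start", "a"): "accept", ("start", "b"): "dead",
        ("accept", "a"): "accept", ("accept", "b"): "accept",
        ("dead", "a"): "dead", ("dead", "b"): "dead",
    }

    def accepts(word, state="start"):
        for ch in word:
            state = transitions[(state, ch)]
        return state == "accept"

    print(accepts("abab"))  # True
    print(accepts("ba"))    # False
    print(accepts(""))      # False: the empty string does not start with "a"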
FORMAL LANGUAGE, FORMAL GRAMMAR AND AUTOMATA

The relationship among formal languages, formal grammars, and automata lies at the core of theoretical computer science and computational linguistics. These concepts are interconnected and provide a foundation for understanding the processing and generation of languages by computational devices. Let's define each of these in detail and explore their relationships :-

1. Formal Languages :-
• A formal language is a set of strings composed of symbols from a given alphabet.
• The alphabet is a finite set of symbols or characters that form the building blocks of the language.
• Formal languages are abstract representations used to describe various types of languages, including natural languages like English, programming languages like C++ or Python, and mathematical languages like regular expressions or context-free grammars.
• Formal languages are essential for defining patterns, rules, and structures within languages, and they find applications in various areas of computer science and linguistics.

2. Formal Grammars :-
• A formal grammar is a set of rules that define the structure and syntax of a formal language.
• It consists of a set of production rules that specify how symbols from the alphabet can be combined to form valid strings in the language.
• Formal grammars are used to generate or recognize strings in a formal language, depending on the type of grammar (e.g., regular grammar, context-free grammar).
• Context-free grammars are particularly important as they are widely used in the description of programming languages and in parsing natural languages.

3. Automata :-
• An automaton is a computational model that reads an input string and transitions between states based on the input symbols it reads.
• Automata can be categorized into different types based on their capabilities, such as finite automata, pushdown automata, and Turing machines.
• Finite automata are simple machines with a fixed set of states that can recognize regular languages, which are described by regular grammars.
• Pushdown automata can handle context-free languages, which are described by context-free grammars, and are more powerful than finite automata.
• Turing machines are the most powerful computational model, capable of recognizing recursively enumerable languages, which are described by unrestricted (Type 0) grammars.

Relationships :-

• Formal languages are generated or described by formal grammars. Each type of formal grammar corresponds to a specific class of formal languages. For example, regular grammars generate regular languages, context-free grammars generate context-free languages, and so on.

• Automata can recognize or decide whether a given string belongs to a particular formal language. Each type of automaton corresponds to a specific class of formal languages. For example, finite automata recognize regular languages, pushdown automata recognize context-free languages, and Turing machines recognize recursively enumerable languages.

• There is a strong connection between formal grammars and automata through the Chomsky hierarchy, which classifies grammars and languages into four types: Type 3 (regular), Type 2 (context-free), Type 1 (context-sensitive), and Type 0 (recursively enumerable). Each type of grammar corresponds to a specific class of automaton with equivalent computational power.

In summary, formal languages, formal grammars, and automata are interconnected concepts that provide a formal and mathematical foundation for understanding the structure and processing of languages. Formal grammars generate or describe formal languages, while automata recognize or decide whether strings belong to specific formal languages. The Chomsky hierarchy establishes a connection between grammars and automata, organizing them into classes with increasing computational power. These fundamental concepts play a central role in various areas of computer science, linguistics, and theoretical research.

CHAPTER THREE

REGULAR GRAMMAR

Formal grammars are mathematical models used to describe the syntax or structure of formal languages. They provide a set of rules for generating valid strings in a language or for parsing and analyzing the structure of strings.

Formal grammars consist of the following components :-

1. Terminal Symbols: These are the basic elements or atomic units of the language. They represent the actual symbols that appear in valid strings. For example, in a programming language, terminal symbols could be identifiers, keywords, operators, or punctuation marks.

2. Non-terminal Symbols: These symbols are placeholders that represent sets of strings in the language. They are used to define rules for generating or transforming strings. Non-terminal symbols are typically represented by uppercase letters.

3. Production Rules: These rules specify how the symbols can be combined or replaced to generate valid strings in the language. A production rule consists of a non-terminal symbol on the left-hand side and a sequence of terminal and/or non-terminal symbols on the right-hand side. It represents a transformation or expansion of a non-terminal symbol into a sequence of symbols.
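For instance, a small regular grammar (an illustrative example of these components, not one taken from the notes) could have the non-terminal S, the terminals a and b, and the production rules S → aS and S → b. Starting from S and applying the rules repeatedly yields b, ab, aab, and so on: every string of zero or more a's followed by a single b.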
RIGHT LINEAR AND LEFT LINEAR GRAMMARS

Before defining right-linear and left-linear grammars, let's start with a brief introduction to regular grammars.

Regular grammars belong to the class of formal grammars that generate regular languages. These grammars have production rules of the form A → aB, A → a, or A → ε, where A and B are non-terminal symbols, a is a terminal symbol, and ε represents the empty string.

Now, let's dive into the definitions of right-linear and left-linear grammars :-

1. Right-Linear Grammar:
• A right-linear grammar is a type of regular grammar where all production rules have the form A → aB or A → a, where A and B are non-terminal symbols and a is a terminal symbol.

2. Left-Linear Grammar:
• A left-linear grammar is a type of regular grammar where all production rules have the form A → Ba or A → a, where A and B are non-terminal symbols and a is a terminal symbol.

In summary, the main difference between right-linear and left-linear grammars lies in the direction of the derivations or transformations. In right-linear grammars, the derivations proceed from left to right, while in left-linear grammars, the derivations proceed from right to left. Both types of grammars are examples of regular grammars and generate regular languages, but their production rules differ in the ordering of symbols.

DERIVATION FROM A GRAMMAR

In the context of formal grammars, a derivation is a sequence of production rule applications that transforms the start symbol, step by step, into a string of terminal symbols.

Consider a grammar with the production rules S → aSb and S → ε. Now, let's derive the string "aaabbb" from the start symbol S:

1. Start with the initial string: S
2. Apply the production rule S → aSb:
   • S → aSb
   • aSb
3. Apply the production rule S → aSb again:
   • S → aSb → aaSbb
   • aaSbb
4. Apply the production rule S → aSb one more time:
   • S → aSb → aaSbb → aaaSbbb
   • aaaSbbb
5. Apply the production rule S → ε to erase the remaining non-terminal. No more non-terminals remain, and the final string is composed solely of terminals: "aaabbb".

The derivation process demonstrates how the production rules are successively applied to transform the non-terminals in each step until only terminals remain. Each step represents a rule application, and the resulting string is obtained by replacing a non-terminal symbol according to the production rule.

It's important to note that there can be multiple derivations for the same string in a grammar, depending on which production rules are chosen at each step. Additionally, some grammars may have ambiguous derivations where multiple sequences of rule applications lead to the same final string.
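As a companion to the derivation above, here is a tiny Python recogniser for the same grammar S → aSb | ε (my own sketch): each recursive call undoes one application of S → aSb.

    # A tiny recogniser for the grammar S -> aSb | ε, i.e. the language
    # {a^n b^n}. Each recursive call undoes one S -> aSb application.
    def derives(s):
        if s == "":                       # corresponds to S -> ε
            return True
        if s.startswith("a") and s.endswith("b"):
            return derives(s[1:-1])       # peel off one a...b layer
        return False

    print(derives("aaabbb"))  # True: matches the derivation above
    print(derives("aabbb"))   # False: unbalanced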
CONTEXT-FREE GRAMMAR

Context-free grammars (CFGs) are a type of formal grammar widely used in linguistics, computer science, and other fields. They are named "context-free" because the left-hand side of each production rule in the grammar is a single non-terminal symbol, and the substitution or expansion of that non-terminal can occur regardless of the context in which it appears. In other words, the replacement of a non-terminal with its corresponding right-hand side can happen without considering the surrounding symbols.

Here is a brief description of context-free grammars :-

1. Non-terminal and Terminal Symbols :-
• A context-free grammar consists of a set of non-terminal symbols (also called variables) and a set of terminal symbols.
• Non-terminals represent syntactic categories or elements that can be further expanded or replaced.
• Terminals represent the basic units or symbols of the language.

2. Production Rules :-
• The grammar includes a set of production rules that define how non-terminals can be expanded or replaced by a sequence of terminals and non-terminals.
• Each production rule has the form A → α, where A is a non-terminal and α is a string of terminals and/or non-terminals.
• The substitution of a non-terminal with its right-hand side is independent of the context in which it appears, hence the term "context-free."

3. Start Symbol :-
• The grammar designates a specific non-terminal symbol as the start symbol, from which the derivation or parsing of strings begins.

4. Language Generation :-
• A context-free grammar defines a formal language by specifying the set of strings that can be generated by the grammar.
• Starting from the start symbol, valid strings in the language can be derived by successively applying the production rules.
It's important to note that not all grammars are inherently ambiguous. A language can have unambiguous grammars that produce a unique parse tree for every valid input string. However, some languages inherently have ambiguous constructs, and it is necessary to carefully design the grammar or use disambiguation techniques to avoid ambiguity.

The production rules of a context-free grammar define the transformations or expansions that can be applied to the non-terminal symbols. During the derivation process, a sequence of productions is applied to the start symbol, replacing non-terminal symbols with the corresponding RHS symbols according to the production rules. This process continues until only terminal symbols remain, resulting in a valid string in the language defined by the grammar.

Context-free grammars and languages are widely used in various areas, such as programming language syntax, natural language processing, and the design and implementation of parsing algorithms. They provide a flexible and expressive way to describe the syntactic structure of languages, allowing for concise and powerful language definitions.

A pushdown automaton (PDA) extends a finite automaton with a stack :-
• A PDA has an additional component called the stack, which provides extra memory.
• The stack is a last-in-first-out (LIFO) data structure, meaning that the most recently added symbol can be accessed and removed first.
• The stack allows the PDA to remember information from previous input symbols and perform context-sensitive operations.

FORMAL DEFINITION OF PUSH DOWN AUTOMATA

The formal definition of a pushdown automaton (PDA) consists of the following components :-

1. Input Alphabet (Σ): The finite set of symbols that can appear in the input string.
2. Stack Alphabet (Γ): The finite set of symbols that can be pushed onto and popped from the stack.
3. States (Q): A finite set of states.
4. Initial State (q0): The state in which the PDA begins, q0 ∈ Q.
5. Initial Stack Symbol (Z0): The symbol on the stack before reading begins, Z0 ∈ Γ.
6. Transition Function (δ): Based on the current state, the next input symbol (or ε), and the symbol on top of the stack, δ selects a next state and a string of stack symbols to replace the top of the stack.
7. Accepting States (F): The set of final states, F ⊆ Q.
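To make the stack's role concrete, here is a hand-rolled sketch in Python (my own illustration, not the formal model itself): it recognises the classic context-free language {a^n b^n : n ≥ 1} by pushing for each a and popping for each b.

    # A hand-rolled PDA-style recogniser for {a^n b^n : n >= 1}:
    # push a marker for every a, pop one for every b, and accept only
    # if the stack is empty when the input ends.
    def pda_accepts(word):
        stack = []
        state = "reading_a"
        for ch in word:
            if state == "reading_a" and ch == "a":
                stack.append("A")      # push: one unmatched a
            elif ch == "b" and stack:
                state = "reading_b"
                stack.pop()            # pop: match this b against an a
            else:
                return False           # b before any a, extra symbols, ...
        return state == "reading_b" and not stack

    print(pda_accepts("aaabbb"))  # True
    print(pda_accepts("aab"))     # False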
Formal Definition of Context-Free Grammar

A context-free grammar (CFG) is a formal grammar that describes a context-free language. It is a widely used grammar type in formal language theory, programming languages, and compiler design. A context-free grammar consists of a set of production rules that define how symbols can be replaced or expanded.

Formally, a context-free grammar consists of the following components :-

1. Non-terminal symbols: These are symbols that can be replaced or expanded during the derivation process. Non-terminal symbols are typically represented by uppercase letters.

2. Terminal symbols: These are symbols that cannot be further expanded and represent the basic units or tokens of the language. Terminal symbols are typically represented by lowercase letters, digits, or other characters.

3. Production rules: These rules specify how the non-terminal symbols can be replaced by a sequence of symbols, which can include both non-terminal and terminal symbols. Each production rule consists of a non-terminal symbol as the left-hand side (LHS) and a sequence of symbols as the right-hand side (RHS), separated by an arrow symbol (->).

4. Start symbol: It represents the initial non-terminal symbol from which the derivation of strings begins.

COMPARING CONTEXT-FREE GRAMMAR AND CONTEXT-SENSITIVE GRAMMAR

Context-Free Grammar (CFG): A context-free grammar is a formal grammar where each production rule has the form A -> α, where A is a non-terminal symbol and α is a string of symbols (both terminals and non-terminals). The key characteristic of CFGs is that the replacement or expansion of non-terminal symbols occurs regardless of the context in which they appear. CFGs have a specific set of rules and restrictions that define their structure.

Context-Sensitive Grammar (CSG): A context-sensitive grammar is a formal grammar where each production rule has the form α -> β, where α and β are strings of symbols, and the length of β is greater than or equal to the length of α. In context-sensitive grammars, the replacement or expansion of non-terminal symbols is influenced by the context or surrounding symbols. This means that the rules can modify the context or neighboring symbols during the derivation process.
To illustrate the length condition with an example, consider the following grammar :-

S -> aSb

S -> ab

In this grammar, the non-terminal symbol S can be expanded as either "aSb" or "ab".

Let's analyze the production rules in terms of the lengths of α and β:

1. S -> aSb : Here, α is the string "S" with length 1, and β is the string "aSb" with length 3. The length of β is greater than the length of α.

2. S -> ab : Here, α is the string "S" with length 1, and β is the string "ab" with length 2. The length of β is greater than the length of α.

Both production rules in this example satisfy the condition of a context-sensitive grammar, where the length of β is greater than or equal to the length of α. This means that the grammar is context-sensitive. (In fact, since every left-hand side is a single non-terminal, this grammar is also context-free; every context-free grammar without ε-productions satisfies the context-sensitive length condition.)

Note that context-sensitive grammars allow for more flexibility than context-free grammars by allowing the left-hand side (α) of a production rule to be a string of several symbols rather than a single non-terminal, as long as the right-hand side (β) is at least as long as α. This flexibility allows context-sensitive grammars to define more complex languages that cannot be captured by context-free grammars.

Differences between Context-Free Grammar and Context-Sensitive Grammar :-

1. Rule Formulation:
• CFG: The production rules have the form A -> α, where A is a non-terminal symbol and α is a string of symbols.
• CSG: The production rules have the form α -> β, where α and β are strings of symbols, and the length of β is greater than or equal to the length of α.

2. Context Dependency:
• CFG: The expansion of non-terminal symbols occurs regardless of the context or surrounding symbols. The rules are applied based on the non-terminal symbols alone.
• CSG: The expansion of non-terminal symbols depends on the context or neighboring symbols. The rules can modify the context or surrounding symbols during the derivation process.

3. Generative Power:
• CFG: CFGs can generate context-free languages, which are a subset of the Chomsky hierarchy. They are less expressive than context-sensitive languages.
• CSG: CSGs can generate context-sensitive languages, which are a more general class of languages than context-free languages. They have a higher generative power and can capture more complex patterns and dependencies.

4. Formal Definition:
• CFG: Every left-hand side is a single non-terminal symbol.
• CSG: Left-hand sides may contain several symbols, subject to the length condition that β is at least as long as α.

It's worth noting that the term "context-sensitive" can also refer to other concepts in different contexts, such as context-sensitive rewriting rules in formal languages or context-sensitive rewriting systems in computational models. However, in the context of grammars and languages, context-sensitive grammars and context-sensitive languages refer to the concepts described above.
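A few lines of Python (my own illustration) can check both conditions mechanically for the rules above:

    # Mechanically check each rule: CFG form needs a single non-terminal
    # on the left; the CSG condition needs |beta| >= |alpha|.
    rules = [("S", "aSb"), ("S", "ab")]

    for alpha, beta in rules:
        cfg_form = len(alpha) == 1 and alpha.isupper()
        csg_len = len(beta) >= len(alpha)
        print(f"{alpha} -> {beta}: single non-terminal LHS: {cfg_form}, "
              f"|beta| >= |alpha|: {csg_len}")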
TURING MACHINES

The formal definition of a Turing machine (TM) consists of a 7-tuple:

M = (Q, Σ, Γ, δ, q0, qaccept, qreject)

Where:

1. Q is the set of states: Q = {q0, q1, q2, ..., qn}.
2. Σ is the input alphabet (a finite set of symbols): Σ = {a1, a2, ..., am}.
3. Γ is the tape alphabet (a finite set of symbols that includes the input alphabet and a blank symbol): Σ ⊆ Γ, and it also contains a special blank symbol, usually denoted as '□'.
4. δ is the transition function: δ: Q × Γ → Q × Γ × {L, R}, where Q × Γ represents the current state and symbol, and δ(q, a) = (p, b, L) means if the machine is in state q and reads symbol 'a', it changes to state p, writes symbol 'b' on the tape, and moves the tape head one cell to the left (L) or right (R).
5. q0 is the initial state: q0 ∈ Q.
6. qaccept is the accepting state: qaccept ∈ Q. If the machine reaches this state, it halts and accepts the input.
7. qreject is the rejecting state: qreject ∈ Q. If the machine reaches this state, it halts and rejects the input.

The Turing machine starts in state q0 with the input tape head positioned at the leftmost symbol of the input string. It then reads the symbol at the current position and looks up the appropriate transition in the transition function δ. Based on the transition, it changes state, writes a new symbol on the tape, and moves the head one cell to the left or right. This continues until the machine enters qaccept or qreject.

Let's imagine a Turing machine as a special computer that can do different things step by step. It has a tape, kind of like a long strip of paper, and it can read and write symbols on this tape.

The machine also has a little "head" that can move along the tape and read the symbol at the position it's currently on. It has some rules that tell it what to do based on the symbol it sees and the state it's in.

The machine starts at a special place called the starting state. It looks at the symbol on the tape where the head is and follows the rules to decide what to do next. It might change the symbol on the tape, move the head left or right, or change to a different state.

This process keeps going until the machine reaches a special state called the accepting state, which means it's done and it says "yes" to whatever it was trying to figure out. Or it might reach another special state called the rejecting state, which means it's done and it says "no".

Turing machines are really powerful because they can do all kinds of things, just like regular computers. They can solve math problems, simulate other computers, and lots more. They help us understand how computers work and what they can do.

So, in simple words, a Turing machine is like a special computer that can read and write symbols on a tape, follow rules to decide what to do, and figure out answers to different problems.
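As a minimal sketch (my own toy example, not from the notes), the simulator below follows the 7-tuple directly; the machine defined by delta simply flips every 0 and 1 on the tape and then accepts.

    # A minimal simulator following the 7-tuple above. delta maps
    # (state, symbol) -> (new state, symbol to write, move L/R).
    # This toy machine flips every 0/1, then halts and accepts.
    BLANK = "□"
    delta = {
        ("q0", "0"): ("q0", "1", "R"),
        ("q0", "1"): ("q0", "0", "R"),
        ("q0", BLANK): ("qaccept", BLANK, "R"),
    }

    def run(input_string):
        tape = dict(enumerate(input_string))   # sparse tape: cell -> symbol
        state, head = "q0", 0
        while state not in ("qaccept", "qreject"):
            state, written, move = delta[(state, tape.get(head, BLANK))]
            tape[head] = written
            head += 1 if move == "R" else -1
        return state, "".join(tape[i] for i in sorted(tape))

    print(run("0110"))  # ('qaccept', '1001□')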
SUMMARY

DESIGNING A REGULAR EXPRESSION FROM A LANGUAGE

Designing a regular expression (regex) involves constructing a pattern that matches strings belonging to a specific language. Regular expressions are a powerful tool used for pattern matching and string manipulation.

One useful guideline is to include anchors: use anchors such as "^" (caret) and "$" (dollar sign) to indicate the start and end of a string, respectively. These help to ensure that the regex matches the entire string and not just a part of it.
PUMPING LEMMA

The pumping lemma is a fundamental tool in the theory of formal languages used to analyze and prove properties of regular and context-free languages. Specifically, it helps to show that certain languages are not regular or context-free.

The pumping lemma is used as a technique to prove that certain languages are not regular or not context-free by demonstrating a contradiction when the pumping conditions are violated.

For context-free languages, the lemma states that for every CFL L there is a constant p such that any string s ∈ L with |s| ≥ p can be split as s = uvwxy with |vwx| ≤ p and |vx| > 0, where uv^i wx^i y ∈ L for every i ≥ 0.

For example, consider the language L = {a^n b^n c^n} and the string s = a^p b^p c^p. If we divide it as s = uvwxy, where |vwx| ≤ p and |vx| > 0, then pumping up or down will result in the number of 'a's, 'b's, and 'c's becoming unbalanced, violating the condition of L = {a^n b^n c^n}.

Hence, by applying the pumping lemma, we can conclude that L = {a^n b^n c^n} is not a context-free language (CFL).
EXP (Exponential Time): EXP is the class of decision problems that can be solved by a deterministic Turing machine in exponential time. These problems require resources that grow exponentially with the input size. As a result, solving EXP problems is generally considered difficult and computationally expensive.

Features :-

• Exponential Time Complexity: Problems in the EXP class require exponential time to solve. This means that the running time of any algorithm that solves these problems grows exponentially with the size of the input. As the input size increases, the resources required to solve EXP problems increase dramatically.

• High Computational Complexity: EXP represents a class of computationally difficult problems. The exponential time complexity indicates that solving these problems is generally considered challenging and computationally expensive. It often involves exploring a vast search space or performing repeated computations.

• Resources Grow Exponentially: EXP problems require exponentially growing resources, such as time, memory, or computational power, to find a solution. As the input size increases, the amount of time and memory needed to solve EXP problems increases exponentially.

• Relationship to Other Complexity Classes: EXP is known to be a superclass of both P and NP. This means that any problem in P or NP can be solved in exponential time, as P and NP are subsets of EXP. Additionally, EXP-hard and EXP-complete problems represent the hardest problems within the EXP class, and they serve as benchmarks for the difficulty level of problems in EXP.

• Intractable Nature :- Due to the exponential time complexity, solving EXP problems is generally considered intractable. It often involves exhaustive search or brute-force techniques, which become infeasible for larger input sizes. As a result, finding optimal solutions for EXP problems is often impractical or even impossible in practice.

• In summary, the EXP class represents problems that require exponential time resources to solve. These problems have high computational complexity and go beyond the efficiency boundaries of P and NP. EXP problems are generally considered intractable and require exponentially growing resources as the input size increases.

In summary, the complexity classes P, NP, PSPACE, and EXP categorize decision problems based on their computational complexity and the resources required to solve them. P represents efficiently solvable problems, NP includes problems with efficiently verifiable solutions, PSPACE encompasses problems solvable within polynomial space, and EXP consists of problems that require exponential time resources.
HARDNESS AND COMPLETENESS

Hardness:

• Hardness refers to the level of difficulty or computational intractability of a problem.
• A problem is considered hard if solving it is at least as difficult as solving any other problem in that complexity class.
• For example, an NP-hard problem is as hard as the hardest problems in the NP class. This means that if an efficient algorithm exists for any NP-hard problem, it can be used to solve any problem in NP.

Completeness:

• Completeness refers to a problem's property that makes it representative or complete for a particular complexity class.
• A problem is considered complete for a class if it is both in the class and captures the essential characteristics of that class.
• For example, a problem is NP-complete if it is in the NP class and every problem in NP can be reduced to it in polynomial time.
• In other words, an NP-complete problem represents the "hardest" problems in NP and serves as a benchmark for the difficulty level of all problems in NP.

To summarize, hardness refers to the difficulty or intractability of a problem within a complexity class, while completeness refers to a problem's property of representing the essential characteristics of a complexity class. Hardness provides a measure of how challenging a problem is within its class, and completeness establishes a problem as a representative for a specific complexity class.

Reductions

Reducibility is a fundamental concept in complexity theory that enables us to compare the computational complexity of different problems. It provides a way to establish relationships between problems and determine their relative difficulty. Let's delve into the details of reducibility:

1. Reducibility between Problems :-
• Reducibility allows us to analyze the computational complexity of one problem in terms of another problem.
• If problem A is reducible to problem B, it means that an algorithm that solves problem B can be used to solve problem A.
• The reduction provides a mapping or transformation from instances of problem A to instances of problem B.
• This mapping must be efficient, typically done in polynomial time, meaning that the transformation can be performed in a reasonable amount of time.

2. Implications of Reducibility :-
• If problem A is reducible to problem B, it implies that problem B is at least as difficult as problem A in terms of computational complexity.
• If an efficient algorithm exists for solving problem B, it can be utilized to solve problem A by applying the reduction.
• In other words, if problem A is hard, then problem B must be at least as hard as problem A.
3. Comparing Complexity :-
• By establishing reducibility relationships, we can classify problems into different complexity classes based on their computational difficulty.
• For example, if problem A is reducible to problem B and problem B is known to be hard, then problem A is placed in the same or a lower complexity class than problem B.
• This allows us to compare the complexity of problems and understand their relative difficulty within a given complexity class or hierarchy.

4. Types of Reductions :-
• There are different types of reductions used in complexity theory, such as polynomial-time reductions, logarithmic-space reductions, and many-one reductions.
• Polynomial-time reductions are most commonly employed, as they provide efficient mappings between problems that can be computed in polynomial time.
• These reductions are crucial for defining completeness and hardness within complexity classes, as well as establishing relationships between classes.

In summary, reducibility is a powerful concept that allows us to compare the computational complexity of different problems. It enables us to determine the relative difficulty of problems, classify them into complexity classes, and establish relationships between classes. Reducibility provides a way to map one problem to another, indicating that the latter problem is at least as hard as the former. By using efficient reductions, we gain insights into the computational landscape of problems and their complexities.
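As an illustration of a polynomial-time reduction (a standard textbook pair, with a deliberately naive brute-force "solver" standing in for an algorithm for problem B), INDEPENDENT SET on a graph reduces to CLIQUE on the complement graph:

    from itertools import combinations

    # INDEPENDENT SET reduces to CLIQUE: a set of vertices is independent
    # in G exactly when it is a clique in the complement of G, so the
    # reduction just maps G to its complement graph (polynomial time).
    def complement(vertices, edges):
        all_pairs = {frozenset(p) for p in combinations(vertices, 2)}
        return all_pairs - {frozenset(e) for e in edges}

    def has_clique(vertices, edges, k):        # stand-in solver for B
        edges = {frozenset(e) for e in edges}
        return any(all(frozenset(p) in edges for p in combinations(g, 2))
                   for g in combinations(vertices, k))

    def has_independent_set(vertices, edges, k):   # problem A, via B
        return has_clique(vertices, complement(vertices, edges), k)

    V = ["u", "v", "w", "x"]
    E = [("u", "v"), ("v", "w")]
    print(has_independent_set(V, E, 3))  # True: {u, w, x} is independent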
HIERARCHY AND RELATIONSHIPS BETWEEN COMPLEXITY CLASSES

Hierarchy and relationships between complexity classes play a crucial role in understanding the relative difficulty and computational properties of problems. Here's an explanation of hierarchy and relationships between complexity classes :-

Hierarchy :-

• Complexity classes can be organized in a hierarchy based on the resources required to solve the problems within each class.
• The hierarchy reflects the relationship between classes in terms of their computational power and the amount of resources they utilize.
• In general, complexity classes higher in the hierarchy encompass a wider range of problems that require more resources or have higher computational complexity than classes lower in the hierarchy.

Relationships :-

1. Inclusion :-
• One fundamental relationship between complexity classes is inclusion, where one class is contained within another.
• For example, P is contained within PSPACE, which means that any problem that can be solved in polynomial time (P) can also be solved using polynomial space (PSPACE).
• Similarly, NP is contained within PSPACE, indicating that any problem with a solution verifiable in polynomial time (NP) can also be solved using polynomial space (PSPACE).

2. Reduction :-
• Reductions relate classes by transforming problems in one class into problems in another; if every problem in one class reduces to some problem in a second class, the second class is at least as powerful as the first.

3. Completeness :-
• Completeness establishes a problem as representative or complete for a specific complexity class.
• For example, a problem is NP-complete if it is in the NP class and every problem in NP can be reduced to it in polynomial time.
• NP-complete problems serve as benchmarks for the difficulty level of problems within NP, as any NP-complete problem is as hard as the hardest problems in NP.

4. Equivalence :-
• Equivalence denotes that two complexity classes are essentially the same in terms of the problems they contain and their computational power.
• For instance, if P = NP, it implies that all problems in NP can be solved in polynomial time, making P and NP equivalent.
• However, the question of whether P = NP, like other complexity class equivalences, remains an unsolved problem in computer science.

In summary, the hierarchy and relationships between complexity classes provide insights into the relative difficulty and computational properties of problems.

HERE IS A BRIEF EXPLANATION ABOUT HARDNESS AND COMPLETENESS

Hardness :-

In computer science, hardness refers to the difficulty of solving a particular problem. It is a measure of how much time and computational resources are required to find a solution for that problem. If a problem is hard, it means that there is no known efficient algorithm (a step-by-step process) that can solve the problem quickly for all possible inputs.

It's important to understand that "hardness" in this context refers to the difficulty of solving the problem, not the difficulty of the problem itself.
Example :- The Traveling Salesman Problem (TSP)

One classic example of a hard problem is the Traveling Salesman Problem (TSP). Imagine a salesperson who needs to visit multiple cities and return to their starting point while traveling the shortest distance possible. The TSP asks, "What is the shortest possible route that visits each city exactly once and returns to the starting city?" This problem is notoriously difficult because the number of possible routes grows extremely quickly with the number of cities (for n cities there are (n-1)!/2 distinct round trips), making it hard to find the best solution quickly as the number of cities increases.
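The brute-force sketch below (distances invented for the example) makes that growth concrete: it inspects every permutation of the remaining cities, so the work grows factorially with the number of cities.

    from itertools import permutations

    # Brute-force TSP: try every ordering of the remaining cities, which
    # is (n-1)! tours for n cities; this is why the approach stops being
    # feasible very quickly. The distances here are invented.
    dist = {("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
            ("B", "C"): 6, ("B", "D"): 4, ("C", "D"): 8}

    def d(x, y):
        return dist.get((x, y)) or dist[(y, x)]

    def shortest_tour(cities):
        start, rest = cities[0], cities[1:]
        best = None
        for order in permutations(rest):           # (n-1)! candidates
            tour = (start,) + order + (start,)
            length = sum(d(a, b) for a, b in zip(tour, tour[1:]))
            best = length if best is None else min(best, length)
        return best

    print(shortest_tour(["A", "B", "C", "D"]))  # 23 for these distances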
Completeness :-

Completeness, on the other hand, relates to a special property of some problems that allows them to serve as a benchmark for the entire class of problems they belong to. If a problem is complete, it means that every other problem in that class can be efficiently transformed into an instance of that complete problem. Solving the complete problem would then provide a solution for all the other problems in the class.

The Hamiltonian Cycle Problem is an example of an NP-complete problem. It asks whether a given graph contains a cycle that visits every vertex exactly once. It is in NP, since a proposed cycle can be verified in polynomial time, and it is NP-hard, so if we find an efficient algorithm for it, we can efficiently solve any problem in NP.

Summary :-

In summary, hardness describes how difficult it is to solve a specific problem efficiently, while completeness refers to the property of a problem that makes it capable of solving all other problems in its class efficiently. NP-complete problems are the hardest problems within the NP class, and they play a significant role in understanding the difficulty of various computational problems and their relationships.