
Type Safety

You created an ArrayList containing three objects of the Student type. Then, you
added a new Student object to it. But what would happen if you added a new object
of any other type (for example, a String or Professor type) to the same
ArrayList? Let’s find out in this video.
So, you saw that you got an error when you tried to add a new object of
the String type to the ArrayList. But before you start fixing this error and making
your code type-safe, please answer the following questions. The exercise below will
give you more clarity on the reason behind the run-time error.
You now know that you can create an ArrayList containing elements of different
types by declaring the element type as ‘Object’.
 
In most cases, you should create an ArrayList of a single data type (like the
Student ArrayList in our example). Otherwise, if you create an ArrayList that
holds plain Object references, you will need to cast each element back to its original
type (i.e. Object → Student) when you retrieve it. However, you may encounter run-time
errors if you accidentally store an element that cannot be cast (like the String →
Student example you saw above).
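For instance, here is a minimal sketch (with a hypothetical, stripped-down Student class) of how a raw ArrayList of Object compiles fine and still fails at run time with a ClassCastException:

import java.util.ArrayList;

class Student {
    String name;
    Student(String name) { this.name = name; }
}

public class TypeSafetyDemo {
    public static void main(String[] args) {
        ArrayList<Object> list = new ArrayList<Object>();
        list.add(new Student("Aryan"));
        list.add("not a student");               // a String slips in without any compile-time error

        // every element must be cast back from Object to Student
        Student first = (Student) list.get(0);   // works
        Student second = (Student) list.get(1);  // throws ClassCastException at run time
    }
}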
 
To deal with this, let’s learn about a special type of ArrayList in the next segment.
ArrayList Using Generics
In this segment, there are two major issues you will learn to deal with:
1. You need to avoid typecasting multiple times.
2. You should lower the chances of potential run-time errors while adding
incompatible elements to an ArrayList and typecasting them.
So, let’s look at the Java ArrayList, which has a wonderful solution to both these
issues.
This special ArrayList, where you cannot add elements of different data types, is
referred to as an ArrayList using Generics.
 
Earlier, when you could add elements of different types to the ArrayList, it was
non-generic in nature. Generics ensure that you can only add elements of a single
data type to an ArrayList. If you try to add an element of any other data type, you
will get a compile-time error.
 
Advantages of using generics
1. The ArrayList is restricted to a specific data type; elements of other data types
cannot be added.
2. Typecasting is not required.
3. Potential run-time errors are converted into compile-time errors.
 
Format to Declare ArrayList Using Generics
The ArrayList class can be declared using generics in the manner shown below:
ArrayList<datatype> listName = new ArrayList<datatype>();

 
Here, the data type mentioned must always be a non-primitive (reference) type, for example,
Student, String, etc., which are declared as classes. Primitive data types such as int,
char, and double cannot be used here. If you want to store primitive values in an
ArrayList, you’ll need to use their wrapper classes, such as Integer, Double,
Float, and Boolean.
So, if you need to create the Generics ArrayList with elements of the following data
types, you can use the following formats:
 
1. ArrayList of int-type values —
ArrayList<Integer> list = new ArrayList<Integer>( );

 
2. ArrayList of double-type values —
ArrayList<Double> list = new ArrayList<Double>( );

 
3. ArrayList of float-type values —
ArrayList<Float> list = new ArrayList<Float>( );

 
Now, it’s time to write some code to create an ArrayList using generics.

import java.util.ArrayList;

class Source {

    public static void main(String[] args) {

        // a generic ArrayList that can only hold Float elements
        ArrayList<Float> random = new ArrayList<Float>();

        random.add(2f);
        random.add(4f);
        random.add(5f);
        random.add(10f);
        random.add(99.9f);

        printRandom(random);
    }

    public static void printRandom(ArrayList<Float> abc) {
        for (Float o : abc) {
            System.out.println(o);
        }
    }
}
Operations on ArrayList: I
Now that you have learnt how to create an ArrayList with a specific data type (using
generics), let’s learn to perform the following basic operations:
1. Adding an element at any arbitrary position
2. Removing an element
3. Searching for an element
Let’s start with the first operation — adding an element to any arbitrary position in
ArrayList.
The following methods, predefined in the ArrayList class, will help you add elements
to it:
1. add(Object o): This appends the specified object ‘o’ to the end of ArrayList. Its
return type is Boolean, which returns TRUE when the element is added to the
list.
2. add(int index, Object o): This inserts the specified object into the specified
position in ArrayList.
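For instance, a minimal sketch of both add(...) methods on a generic ArrayList (the class name and element values are illustrative):

import java.util.ArrayList;

public class AddDemo {
    public static void main(String[] args) {
        ArrayList<Integer> list = new ArrayList<Integer>();
        boolean appended = list.add(10);   // appends 10 at the end, returns true
        list.add(20);
        list.add(1, 15);                   // inserts 15 at index 1, shifting later elements right
        System.out.println(list);          // [10, 15, 20]
        System.out.println(appended);      // true
    }
}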
 
Answer the following questions to strengthen your understanding of the appropriate
uses of these functions.
Operations on ArrayList: II
Let’s now look at the second operation — removing an element from any index of
ArrayList.
You can remove elements from ArrayList using the following methods:
1. remove(int index): It removes the element from ArrayList, at the specified
index.
2. clear( ): It removes all the elements from ArrayList.
Let’s now move to the third operation — searching for an element in ArrayList.
 
You can search for an element in ArrayList using the following method:
 
contains(Object o): It searches for the element in ArrayList and returns ‘true’ if the
element is present.
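A minimal sketch of these remove and search methods (the class name and values are illustrative):

import java.util.ArrayList;

public class RemoveSearchDemo {
    public static void main(String[] args) {
        ArrayList<String> list = new ArrayList<String>();
        list.add("A");
        list.add("B");
        list.add("C");

        list.remove(1);                          // removes the element at index 1 ("B")
        System.out.println(list.contains("C"));  // true, "C" is still present
        System.out.println(list.contains("B"));  // false, "B" was removed

        list.clear();                            // removes all elements
        System.out.println(list.isEmpty());      // true
    }
}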
Additional Reading
You can go through the Method Summary and Method Detail tables on this page to
find out more about other methods in the ArrayList class. You can use these methods
to perform different operations.
 
Now, it’s time to perform a minor operation on the ArrayList code given below.
Lists and Polymorphism
Till now, you learnt about ArrayList and LinkedList in detail. Now, let’s find out what
lists are exactly and how they are related to both these data structures.
In reality, List is an interface that is implemented by
the ArrayList and LinkedList classes. This is the reason why you can instantiate
both 'ArrayList' and 'LinkedList' by declaring the type of variable as List.

Refer to the diagram below to understand how all these classes and interfaces are
linked to a larger interface named Collection.

Figure 1 - Hierarchy of classes and interfaces under the Collection Interface


The diagram above denotes the following steps:
1. The List interface extends the Collection interface, or List is the child interface
of Collection.
2. AbstractList implements the List interface, which is further extended by the
ArrayList and LinkedList classes. Or, the ArrayList and LinkedList classes are
implementations of the List interface.
3. The AbstractList class is extended by the ArrayList and LinkedList classes. Or
in other words, ArrayList and LinkedList are the subclasses of the Abstract
class.
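For example, a minimal sketch of this polymorphism, where the same List reference can point to either implementation:

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ListPolymorphismDemo {
    public static void main(String[] args) {
        List<Integer> numbers = new ArrayList<Integer>();
        numbers.add(1);
        numbers.add(2);

        numbers = new LinkedList<Integer>();   // swap the implementation; code written against List still works
        numbers.add(3);
        System.out.println(numbers);           // [3]
    }
}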

 
Performance Measurement - I
Let’s start by looking at the code which will help you to compare ArrayList with
LinkedList while performing certain operations.
Our professor has created a skeleton of the code to estimate the time taken by both
ArrayList and LinkedList to perform an operation, so that their performances can be
compared.
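The snippet below is not the professor's skeleton, but a minimal sketch of how such a timing comparison could look; the list size, the index accessed, and the class name are illustrative:

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class GetPerformanceDemo {
    public static void main(String[] args) {
        List<Integer> arrayList = new ArrayList<Integer>();
        List<Integer> linkedList = new LinkedList<Integer>();
        for (int i = 0; i < 100000; i++) {
            arrayList.add(i);
            linkedList.add(i);
        }

        long start = System.nanoTime();
        arrayList.get(50000);                  // index-based access on ArrayList
        System.out.println("ArrayList get: " + (System.nanoTime() - start) + " ns");

        start = System.nanoTime();
        linkedList.get(50000);                 // LinkedList must walk node by node to the index
        System.out.println("LinkedList get: " + (System.nanoTime() - start) + " ns");
    }
}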
Since we now have the basic skeleton code for measuring the performance in terms of
the time taken to perform an action, let’s start with the first operation — get
operation.
 The get(int index) method returns the element at the specified index in the list.

So, you measured the performance of both the lists for the ‘get’ operation. Here,
you also saw that in order to retrieve an element from any specific index,
ArrayList is faster than LinkedList.

 Before we go into the explanation for this, let’s compare the lists for one
more operation.
Performance Measurement - II
Let’s continue our experiment of comparing the performance of ArrayList and
LinkedList with respect to other operations as well. Let’s continue with the add
operation.
 add(Object o) appends the specified object ‘o’ to the end of any list.
 add(int index, Object o) inserts the specified object at the specified position in
the list.

 You measured the performance of both the lists for the add operation as well.
Here, you saw that in order to add a new element at an arbitrary position, LinkedList
is faster than ArrayList.

 In the next few pages of this session, we will dive deeper into the reason for the
difference in the performance of these two lists.
  
Data structure and algorithm
Definition: An algorithm is a method of solving a problem through a sequence of instructions.
Algorithm 1
Go through the ThreadLocalRandom class, which is used by the professor in the video
below to generate a random number in a specified range.
ThreadLocalRandom is a class in Java that extends the Random class. Below are
some methods of the ThreadLocalRandom class used in the video:
current()
Returns the current thread's ThreadLocalRandom.
nextInt(int origin, int bound)
Returns a pseudorandom int value between the specified origin (inclusive) and the
specified bound (exclusive).
Therefore, “ThreadLocalRandom.current().nextInt(int origin, int bound)” returns a
random value in the specified range.
 
Now, let's continue our discussion on algorithms and look into an imaginary problem
scenario where you can write an algorithm to solve the same.
import java.util.concurrent.ThreadLocalRandom;
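The import above belongs to a snippet that was not fully captured here; the following is a minimal sketch of generating a random number in a range with ThreadLocalRandom (the class name RandomDemo and the range are illustrative):

import java.util.concurrent.ThreadLocalRandom;

public class RandomDemo {
    public static void main(String[] args) {
        // origin is inclusive, bound is exclusive: a value in [1, 101), i.e. 1 to 100
        int value = ThreadLocalRandom.current().nextInt(1, 101);
        System.out.println(value);
    }
}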
Parameters for Algorithm Efficiency
Time complexity: it is a relation between the input size and the running time (number of operations).

In the previous segment, you learnt about two different algorithms finding duplicate
student IDs in the registered student data. Now, in order to find the efficient algorithm
between the two, you need certain parameters to compare them.
In this video, you will be introduced to the parameters to measure the efficiency of an
algorithm.
How to declare a linked list?
It is simple to declare an array, as it is of a single type, while the declaration of a linked list is
a bit more involved than that of an array. A linked list node contains two parts of different
types: one is a simple variable, while the other is a pointer variable. We can
declare the linked list by using a user-defined structure.

The declaration of linked list is given as follows -

struct node
{
    int data;
    struct node *next;
};

In the above declaration, we have defined a structure named as node that contains two
variables, one is data that is of integer type, and another one is next that is a pointer
which contains the address of next node.

A linked list is mainly useful for three things:

Non-contiguous memory - it connects the previous data element with the next one through pointers.
Variable size - size is not a big concern; we can add as many elements as we want,
as long as the system memory is not full.
Insertion and deletion - they take O(1) constant time.
Search - the time complexity of searching a linked list is O(n).
Types of Linked list
Linked list is classified into the following types -

o Singly-linked list - Singly linked list can be defined as the collection of an


ordered set of elements. A node in the singly linked list consists of two parts: data
part and link part. Data part of the node stores actual information that is to be
represented by the node, while the link part of the node stores the address of its
immediate successor.
o Doubly linked list - Doubly linked list is a complex type of linked list in which a
node contains a pointer to the previous as well as the next node in the sequence.
Therefore, in a doubly-linked list, a node consists of three parts: node data,
pointer to the next node in sequence (next pointer), and pointer to the previous
node (previous pointer).

o Circular singly linked list - In a circular singly linked list, the last node of the list
contains a pointer to the first node of the list. We can have circular singly linked
list as well as circular doubly linked list.
o Circular doubly linked list - Circular doubly linked list is a more complex type of
data structure in which a node contains pointers to its previous node as well as
the next node. Circular doubly linked list doesn't contain NULL in any of the
nodes. The last node of the list contains the address of the first node of the list.
The first node of the list also contains the address of the last node in its previous
pointer.
Advantages of Linked list
The advantages of using the Linked list are given as follows -

o Dynamic data structure - The size of the linked list may vary according to the
requirements. Linked list does not have a fixed size.
o Insertion and deletion - Unlike arrays, insertion, and deletion in linked list is easier.
Array elements are stored in the consecutive location, whereas the elements in the linked
list are stored at a random location. To insert or delete an element in an array, we have
to shift the elements for creating the space. Whereas, in linked list, instead of shifting, we
just have to update the address of the pointer of the node.
o Memory efficient - The size of a linked list can grow or shrink according to the
requirements, so memory consumption in linked list is efficient.
o Implementation - We can implement both stacks and queues using linked list.
Singly Linked list
It is the commonly used linked list in programs. If we are talking about the linked list, it means it
is a singly linked list. The singly linked list is a data structure that contains two parts, i.e., one is
the data part, and the other one is the address part, which contains the address of the next or the
successor node. The address part in a node is also known as a pointer.

Suppose we have three nodes, and the addresses of these three nodes are 100, 200 and 300
respectively. The representation of three nodes as a linked list is shown in the below figure:

We can observe in the above figure that there are three different nodes having address
100, 200 and 300 respectively. The first node contains the address of the next node, i.e.,
200, the second node contains the address of the last node, i.e., 300, and the third node
contains the NULL value in its address part as it does not point to any node. The pointer
that holds the address of the initial node is known as a head pointer.

The linked list, which is shown in the above diagram, is known as a singly linked list as it
contains only a single link. In this list, only forward traversal is possible; we cannot
traverse in the backward direction as it has only one link in the list.

Representation of the node in a singly linked list


struct node
{
    int data;
    struct node *next;
};

Doubly linked list


As the name suggests, the doubly linked list contains two pointers. We can define the doubly
linked list as a linear data structure with three parts: the data part and two address parts.
In other words, a doubly linked list is a list in which a single node has three parts: one data
part, a pointer to its previous node, and a pointer to the next node.

Suppose we have three nodes, and the address of these nodes are 100, 200 and 300,
respectively. The representation of these nodes in a doubly-linked list is shown below:

As we can observe in the above figure, the node in a doubly-linked list has two address
parts; one part stores the address of the next while the other part of the node stores
the previous node's address. The initial node in the doubly linked list has
the NULL value in the address part, which provides the address of the previous node.

Representation of the node in a doubly linked list

struct node
{
    int data;
    struct node *next;
    struct node *prev;
};

In the above representation, we have defined a user-defined structure named a


node with three members, one is data of integer type, and the other two are the
pointers, i.e., next and prev of the node type. The next pointer variable holds the
address of the next node, and the prev pointer holds the address of the previous node.
The type of both the pointers, i.e., next and prev is struct node as both the pointers are
storing the address of the node of the struct node type.

Floyd’s Cycle Finding Algorithm

class CycleDetector {   // wrapper class added so the snippet compiles

    // Floyd's algorithm: 'slow' moves one node at a time, 'fast' moves two.
    // If the list has a cycle, the two pointers eventually meet.
    public static boolean hasCycle(ListNode head) {
        ListNode slow = head;
        ListNode fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
            if (slow == fast) {
                return true;
            }
        }
        return false;
    }
}

class ListNode {
    public int data;
    public ListNode next;

    public ListNode(int data) {
        this.data = data;
        this.next = null;
    }
}
// to get the input into a linked list //

Scanner input = new Scanner(System.in);    // requires: import java.util.Scanner;

int n = input.nextInt();                   // number of values to read
ListNode head = null;
if (n > 0) {
    head = new ListNode(input.nextInt());  // the first value becomes the head
    ListNode temp = head;
    for (int i = 1; i < n; i++) {          // read the remaining n - 1 values
        temp.next = new ListNode(input.nextInt());
        temp = temp.next;
    }
}
Linked List Implementation
public class Main {

    static ListNode head;

    public static void main(String[] args) {
        addFirst(2);
        addFirst(3);
        addLast(4);
        printList();
        deleteFirst();
        printList();
    }

    public static void addFirst(int data) {
        ListNode newNode = new ListNode(data);
        if (head == null) {
            head = newNode;
            return;
        }
        newNode.next = head;
        head = newNode;
    }

    public static void addLast(int data) {
        ListNode newNode = new ListNode(data);
        if (head == null) {
            head = newNode;
            return;
        }
        ListNode currNode = head;
        while (currNode.next != null) {
            currNode = currNode.next;
        }
        currNode.next = newNode;
    }

    public static void deleteFirst() {
        if (head == null) {
            System.out.println("List is Empty");
            return;
        }
        head = head.next;
    }

    public static void deleteLast() {
        if (head == null) {
            System.out.println("List is Empty");
            return;
        }
        if (head.next == null) {
            head = null;
            return;
        }
        ListNode secondLast = head;
        ListNode lastNode = head.next;

        while (lastNode.next != null) {
            secondLast = secondLast.next;
            lastNode = lastNode.next;
        }
        secondLast.next = null;
    }

    public static void printList() {
        if (head == null) {
            System.out.println("List is Empty");
            return;
        }
        ListNode currNode = head;
        while (currNode != null) {
            System.out.print(currNode.data + "-->");
            currNode = currNode.next;
        }
        System.out.println("NULL");
    }
}

class ListNode {
    public int data;
    public ListNode next;

    public ListNode(int data) {
        this.data = data;
        this.next = null;
    }
}
So, the parameters that determine the efficiency of an algorithm are the time
taken (running time) and the memory space required to execute an algorithm.
 
Both the time taken and the memory space required to execute an algorithm are
calculated as functions of the input size. Since you typically don’t know what the
input size will be beforehand, you simply use the variable ‘n’ to represent the
potential input size of the algorithm.
 
Now, the question that arises is — how do you calculate the time taken to execute
an algorithm?
Each instruction in an algorithm takes a specific amount of time to execute, and certain
instruction sets are executed more than once, i.e., instruction sets inside a 'for' or 'while'
loop. To analyse an algorithm, you must calculate the number of times an instruction
set is executed with respect to the input size (n), rather than the exact time values. This
is because the time taken to execute an algorithm depends on various external factors,
such as processor speed and the compiler, and these external factors can vary from
computer to computer.
 
Note: When calculating the number of times an instruction set is executed, software
developers and computer scientists tend to consider the worst case, i.e. the specific
case in which the instruction set is executed a maximum number of times.
 
There are three cases in the analysis of an algorithm
1. Best case
2. Average case
3. Worst Case
Before going into the depth of the above three cases, first go through the program
below.
import java.util.Scanner;

public class Demo {

    public static void main(String[] args) {
        int n, req_no, flag = 0, i;
        Scanner s = new Scanner(System.in);
        int a[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 0};
        n = a.length;
        System.out.println("There are numbers from 0 to 9");
        System.out.print("So, Enter the element you want to find:");

        // taking the required number from the user
        req_no = s.nextInt();

        // searching the array for the required number
        for (i = 0; i < n; i++) {
            // if the required number is found
            if (a[i] == req_no) {
                flag = 1;
                break;
            }
            // if the required number is not found
            else {
                flag = 0;
            }
        }
        if (flag == 1) {
            System.out.println("Entered number found at position:" + (i + 1));
        } else {
            System.out.println("Sorry! Entered number not found");
        }
    }
}

Input 1:
1
Output:
Entered number found at position:1
 
Input 2:
0
Output:
Entered number found at position:10
 
Input 3:
15
Output:
Sorry! Entered number not found
 
Best case:
The lower bound on the time taken by an algorithm is the best case. In the above
program, for Input 1, the output was ‘Entered number found at position:1’, which means
that the number was found at the first position itself. The ‘for’ loop in the program
iterates up to n times until it finds the required value in the array at the ith index. Since
the required value is found at the first index, the ‘for’ loop terminates in the first
iteration itself; hence, this is the best case for this program. The number of instructions
executed in the best case is constant.
 
Average case:
In this case, consider all possible different types of input data and calculate the time
taken by the algorithm for each of those inputs. Now, add all those calculated time
values and divide the sum by the total number of inputs; the obtained value is the
average time.
 
Worst case:
The upper bound of the time taken by an algorithm is the worst case. In the above
program, for Input 2, the output was ‘Entered number found at position:10’, and for
Input 3, the output was ‘Sorry! Entered number not found’. For both these inputs, the
program searches through the entire array, so it takes the maximum time for the
algorithm to execute. Hence, this is the worst case.

Most of the time, the worst-case time complexity of an algorithm is used for
algorithm analysis because the worst-case time complexity guarantees the upper
bound on the time that the algorithm takes, i.e., the maximum time taken by the
algorithm.
Think and answer the question below to understand the worst case in an algorithm. 
Data Structures

Abstract Data Types vs Data Structures

1. An abstract data type is an entity that defines the capabilities of a data arrangement
(e.g., List).

2. A data structure is an implementation of an abstract data type
(e.g., ArrayList, LinkedList).

1. Accessing an element (by index):

 ArrayList: O(1)

 LinkedList: O(N)

2. Insertion/Deletion at the beginning:

 ArrayList: O(N)

 LinkedList: O(1)

3. Insertion/Deletion at an arbitrary position:

 ArrayList: O(N)
 LinkedList: O(N)

Introduction to Stacks
STACK:
 
Stacks follow LIFO (Last In First Out) order.
 
Here are the most common definitions:
1. Push: Inserting an element into the stack
2. Pop: Deletes the topmost element from the stack
3. Peek: Returns the topmost element in the stack
 
Operation Worst Time Complexity

Push O(1)

Pop O(1)

Peek O(1)

 
 
QUEUE:
 
Queues follow FIFO (First In First Out) order.
Here are the most common definitions:
1. Enqueue: Adding an element to the queue
2. Dequeue: Removing an element from the queue.
3. Peek: Returns the frontmost element in the queue
 
Operation Worst Time Complexity

Enqueue O(1)

Dequeue O(n)

Peek O(1)

So far, we have studied “last in, first out”. A few examples would be:

The ‘last in, first out’ property of the stack data structure makes it more run-
time efficient than a simple array or a linked list in finding the last function called.

Which of the following options represents ‘last in, first out’? 

(Note: More than one option may be correct.)

A pile of clothes, where one piece of clothing is kept on top of another

Feedback:
When clothes are kept in a pile, the piece of clothing that goes in last sits at the top of
the pile. Therefore, when you want to get a cloth from the pile, you pick whatever is at
the top. This was the last item that you added to the pile. Therefore, a pile of clothes is
an example of 'last in, first out'.

The Undo option in text editors (MS Word/Google Docs)

Feedback:
When you use the Undo option in a text editor, the last change that you made is the
first to be reverted. Therefore, such options are examples of 'last in, first out'.
In simple words, the task that goes in first is completed last, and the task that goes in
last is completed first.

1. Elements are inserted and removed from a single end only (the top of the stack).
2. The order of insertion and deletion follows the LIFO (last in, first out) order.
Insertion of elements is called push.
Removal of elements is called pop.

If ‘n’ elements are pushed into a stack, then the element that is pushed in first
will be the last one to be removed (popped) from the stack.

The element that is pushed in first will be at the bottom of the stack if more elements
are pushed in after it. The element that is pushed in last will be the first one to be
popped from the stack. Therefore, the element that is pushed in first will be the last one
to be removed from the stack.

Program Stack
The program stack is also referred to as a call stack, a run-time stack or an execution
stack.
There is an extremely common error called ‘stack overflow’, which occurs when you
use up more memory for a stack than your program is supposed to. For example,
when you frame a recursive logic erroneously and give an infinite recursive call, your
compiler will throw a stack overflow error when the size of the stack grows to exceed
its maximum allowed size.

Brief Note About Parsing


When you were writing programs, you would have encountered errors when you did
not use the complete syntax – for instance, a missing bracket or a missing semicolon.
You knew that these were syntax errors. Such rules are an extremely significant part
of the constructs (grammatical rules) followed while using a programming language.
If the input program does not follow any of these language constructs, then the
compiler throws an error.
 
One of the parts of parsing is ‘matching parentheses’. You would have noticed that in
Java, you have to enclose all the method arguments within a pair of opening and
closing parentheses. Matching parentheses is one of the important tasks that is
performed by a parser. You will learn more about it in later segments.
 
Stack class in java
There is a Stack class in Java which implements the stack data structure. The class
provides the following functions.
 push(object element) - inserts the element onto the top of the stack

 pop() - removes and returns the element on top of the stack

 isEmpty() - returns true if the stack is empty, otherwise returns false

 peek() - returns the element on the top of the stack

 search(object element) - searches for the element in the stack and returns its location
in the stack; if the element is not present, it returns ‘-1’


Comprehension
There are two kinds of exceptions you could encounter in stacks:
 Underflow: Trying to pop/peek an element from an empty stack
 Overflow: Trying to push an element into a stack that is already at its max
capacity
You try to avoid these exceptions by writing checks in your code or by handling
exceptions. For the underflow, you saw that a condition was framed, where a pop is
not allowed from an empty stack. On the other hand, you learnt that a stack overflow
is caused when you've used up more memory for a stack than your program was
supposed to use. An instance of this is when there are infinite recursive calls and the
compiler shows a stack overflow due to the overflow in the program stack. Attempt
the questions below:

Stack<Integer> st = new Stack<Integer>();
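Building on the declaration above, a minimal usage sketch of the Stack methods listed earlier (the class name and values are illustrative):

import java.util.Stack;

public class StackDemo {
    public static void main(String[] args) {
        Stack<Integer> st = new Stack<Integer>();
        st.push(10);                         // inserts 10 on top of the stack
        st.push(20);
        st.push(30);

        System.out.println(st.peek());       // 30, top element without removing it
        System.out.println(st.pop());        // 30, removes and returns the top element
        System.out.println(st.search(10));   // 2, 1-based position counted from the top
        System.out.println(st.isEmpty());    // false
    }
}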

Parenthesis Matching Problem


Parenthesis examples: [], {}, ().
Parentheses should match in the program; otherwise, it will throw errors.
[ = opening parenthesis
] = closing parenthesis
import java.util.Scanner;

public class ParenthesisMatch {        // wrapper class added so the snippet compiles

    public static void main(String[] args) throws Exception {
        Scanner input = new Scanner(System.in);
        System.out.println(match(input.next()));
    }

    // counter-based matching for a single bracket type: '(' and ')'
    public static boolean match(String paren) {
        int count = 0;
        char[] parens = paren.toCharArray();

        for (char c : parens) {
            if (c == '(') {
                count++;
            } else if (c == ')') {
                if (count > 0) {
                    count--;
                } else {
                    return false;      // a ')' with no matching '(' before it
                }
            } else {
                return false;          // any other character is rejected
            }
        }
        if (count == 0) {
            return true;
        } else {
            return false;
        }
    }
}
Code for the parenthesis problem

// code handling all bracket characters

import java.util.Scanner;

public class BracketMatch {            // wrapper class added so the snippet compiles

    public static void main(String[] args) throws Exception {
        Scanner input = new Scanner(System.in);
        System.out.println(match(input.next()));
    }

    // a single counter shared by all three bracket types
    public static boolean match(String paren) {
        int count = 0;
        char[] parens = paren.toCharArray();

        for (char c : parens) {
            if (c == '(' || c == '[' || c == '{') {
                count++;
            } else if (c == ')' || c == ']' || c == '}') {
                if (count > 0) {
                    count--;
                } else {
                    return false;      // a closing bracket with nothing open
                }
            } else {
                return false;
            }
        }
        if (count == 0) {
            return true;
        } else {
            return false;
        }
    }
}
Parenthesis Matching Problem – II
In the previous segment, you saw a solution to the parenthesis matching problem, but
you used only a single type of bracket, i.e., '()'. However, your compilers include more
types of brackets. So, we will begin this segment with the following video, where we
will see how the previous algorithm would have fared if we had considered two
different types of brackets: '{}' and '()'.
public class TwoBracketMatch {         // wrapper class and method signature added so the snippet compiles

    public static boolean match(String parens) throws Exception {
        int count1 = 0, count2 = 0;    // one counter per bracket type
        char[] chars = parens.toCharArray();
        for (char c : chars) {
            if (c == '(') {
                count1++;
            } else if (c == ')') {
                if (count1 > 0) {
                    count1--;
                } else {
                    return false;
                }
            } else if (c == '{') {
                count2++;
            } else if (c == '}') {
                if (count2 > 0) {
                    count2--;
                } else {
                    return false;
                }
            } else {
                throw new Exception("Invalid character " + c);
            }
        }
        if (count1 == 0 && count2 == 0) {
            return true;
        } else {
            return false;
        }
    }
}

Solving the Parenthesis Problem Using Stacks
In the previous segments, you saw how to solve the parenthesis matching problem.
Nevertheless, the solutions that you saw are not perfect. As a better alternative, you
can actually use the stack data structure to solve this problem, which is what your
compiler actually does.
 
But before that, you will learn how stacks work for simple parentheses, i.e., (). You
can first attempt the problem given below to quickly revise what you learnt about
stacks earlier
import java.util.Stack;
import java.util.LinkedList;
import java.util.EmptyStackException;

public class ParenthesisStack {

    /*
    NOTE
    ----
    We implement the parenthesis matching algorithm using a stack.
    */

    // you could also initialise the stack from the inbuilt class
    // private static Stack<Character> stack = new Stack<Character>();

    private static MyStack<Character> stack = new MyStack<Character>();

    public static void main(String[] args) {
        try {
            System.out.println(match("()"));
            System.out.println(match("((((((()))))))"));
            System.out.println(match("(((((()))))))"));
            System.out.println(match("(((((()))))))"));
            System.out.println(match("(((()(((()))))))"));
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }

    public static boolean match(String parens) throws Exception {
        for (char c : parens.toCharArray()) {
            if (c == '(') {
                stack.push('(');
            } else if (c == ')') {
                if (!stack.isEmpty()) {
                    stack.pop();
                } else {
                    return false;
                }
            } else {
                throw new Exception("Unexpected character " + c);
            }
        }
        if (stack.isEmpty()) {
            return true;
        } else {
            return false;
        }
    }
}

class MyStack<T> {
    private LinkedList<T> list = new LinkedList<T>();

    public void push(T e) {
        this.list.add(e);
    }

    public T pop() {
        if (this.list.size() > 0) {
            T e = list.get(list.size() - 1);
            list.remove(list.size() - 1);
            return e;
        }
        throw new EmptyStackException();
    }

    public Boolean isEmpty() {
        return this.list.size() == 0;
    }
}
Why do you think using stacks is a better alternative to using counters in the
parenthesis matching problem?

The counter-based approach to the parenthesis matching problem cannot account for different
combinations of parentheses, brackets, braces, etc. For example, if the string is '{{{(}}}}', then the
counter-based approach would label it as correct, when, in fact, it is an incorrect string.

Creating a Stack Using an Array and OOP
import java.util.EmptyStackException;

public class Main {

    public static void main(String[] args) {
        StackUsingArray<String> stack = new StackUsingArray<>(5);
        System.out.println(stack.isEmpty());
        stack.push("Aryan");
        stack.push("Nayra");
        stack.push("Jagdish");
        stack.push("Vinita");

        System.out.println(stack.isEmpty());
        System.out.println(stack.peek());
    }
}

class StackUsingArray<T> {
    public T[] array;
    public int capacity;
    public int index;                // index of the current top element; -1 means empty

    @SuppressWarnings("unchecked")
    public StackUsingArray(int capacity) {
        this.capacity = capacity;
        array = (T[]) new Object[capacity];
        index = -1;
    }

    public boolean isEmpty() {       // O(1)
        return index == -1;
    }

    public boolean isFull() {        // O(1)
        return index == this.capacity - 1;
    }

    public void push(T data) {       // O(1)
        if (isFull()) {
            throw new StackOverflowError("Stack is already full");
        }
        this.array[++index] = data;
    }

    public T pop() {                 // O(1)
        if (isEmpty()) {
            throw new EmptyStackException();
        }
        return this.array[index--];  // return the top element, then move the top down
    }

    public T peek() {                // O(1)
        if (isEmpty()) {
            throw new EmptyStackException();
        }
        return this.array[index];
    }
}
Queues
The queue data structure follows the “first in, first out” (FIFO) principle.
For example, consider standing in a line to purchase a ticket: the person who comes in
first gets the ticket and leaves the line first.
“Queue Functions”
1. Enqueue: puts an element into the queue - add().
2. Dequeue: removes the element that was put in first - remove().
3. We can check the size of the queue - size().
4. We can check whether the queue is empty or not - isEmpty().
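A minimal sketch of these queue functions using java.util.Queue (the class name and values are illustrative):

import java.util.LinkedList;
import java.util.Queue;

public class QueueDemo {
    public static void main(String[] args) {
        Queue<String> tickets = new LinkedList<String>();
        tickets.add("Aryan");                   // enqueue
        tickets.add("Nayra");
        tickets.add("Jagdish");

        System.out.println(tickets.remove());   // dequeue: "Aryan", the first person in line
        System.out.println(tickets.size());     // 2
        System.out.println(tickets.isEmpty());  // false
    }
}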

Which of the algorithms below can use a queue data structure?

Sending emails to a user

Emails are sent in FIFO order. So, if you send two emails to a user, then they will receive
them in the same order in which they were sent.

Processing booking requests from multiple users


Typically, booking requests are processed based on the FIFO principle and can be stored in a
queue.

Ticket-Booking System Using Queues

Using queues:
1. Queues prevent failure when thousands of requests are made.
2. Each new request is enqueued in the queue.
3. All requests in the queue are periodically processed in FIFO order (first in, first out).
Implement Stack using 2 Queues by making the push operation costly
The idea is to keep the newly entered element at the front of ‘q1’ so that the pop
operation dequeues from ‘q1’. ‘q2’ is used to put every new element in front of ‘q1’.
Follow the below steps to implement the push(s, x) operation: 
Enqueue x to q2.
One by one dequeue everything from q1 and enqueue to q2.
Swap the queues of q1 and q2.
Follow the below steps to implement the pop(s) operation: 
Dequeue an item from q1 and return it.

import java.util.*;

public class Main {

    public static void main(String[] args) {
        Stack.push(1);
        Stack.push(2);
        Stack.push(3);
        System.out.println("Size = " + Stack.size());
        System.out.println(Stack.search(3));
        System.out.println("element at the top which has been removed = " + Stack.pop());
        System.out.println("Now the top element would be = " + Stack.top());
    }
}

class Stack {
    static Queue<Integer> q1 = new LinkedList<>();
    static Queue<Integer> q2 = new LinkedList<>();

    // push is the costly operation: the new element is moved to the front of q1
    static void push(int x) {
        q2.add(x);

        while (!q1.isEmpty()) {
            q2.add(q1.peek());
            q1.remove();
        }
        // swap q1 and q2 so that q1 always holds the stack contents, newest at the front
        Queue<Integer> q = q1;
        q1 = q2;
        q2 = q;
    }

    static int pop() {
        if (q1.isEmpty()) {
            return -1;
        }
        return q1.remove();
    }

    static int search(int x) {
        if (q1.contains(x)) {
            System.out.print("Element is present = ");
            return q1.element();      // returns the front (top) element of the stack
        } else {
            return -1;
        }
    }

    static int size() {
        if (q1.isEmpty()) {
            return -1;
        }
        return q1.size();
    }

    static int top() {
        if (q1.isEmpty()) {
            return -1;
        }
        return q1.peek();
    }
}

Time Complexity:
 Push operation: O(N), as all the elements need to be popped out of the
queue (q1) and pushed back into the queue (q2).
 Pop operation: O(1), as we only need to remove the front element from
the queue.
Auxiliary Space: O(N), As we use two queues for the implementation of a
Stack.

Implement Stack using 2 Queues by making the pop() operation costly:
The new element is always enqueued to q1. In the pop() operation, if q2 is empty,
then all the elements except the last one are moved to q2. Finally, the last element
is dequeued from q1 and returned.
 Follow the below steps to implement the push(s, x) operation: 
 Enqueue x to q1 (assuming the size of q1 is unlimited).
 Follow the below steps to implement the pop(s) operation: 
 One by one dequeue everything except the last element from
q1 and enqueue to q2.
 Dequeue the last item of q1, the dequeued item is the result,
store it.
 Swap the names of q1 and q2
 Return the item stored in step 2.

import java.util.*;

public class Main {
    public static void main(String[] args) {
        Stack.push(1);
        Stack.push(2);
        Stack.push(3);
        System.out.println("Size = " + Stack.size());
        System.out.println(Stack.search(3));
        Stack.pop();
        System.out.println("Now the top element would be = " + Stack.top());
    }
}

class Stack {

    static Queue<Integer> q1 = new LinkedList<>();
    static Queue<Integer> q2 = new LinkedList<>();

    // this method adds an element to the stack; push is cheap here
    static void push(int x) {
        q1.add(x);
    }

    // this method removes the top element; pop is the costly operation
    static void pop() {
        if (q1.isEmpty()) {
            return;
        }
        while (q1.size() != 1) {       // move everything except the last element to q2
            q2.add(q1.peek());
            q1.remove();
        }
        q1.remove();                   // discard the last element (the top of the stack)
        Queue<Integer> q = q1;
        q1 = q2;
        q2 = q;
    }

    // this method searches for an element
    static int search(int x) {
        if (q1.contains(x)) {
            System.out.print("element is present = ");
            return q1.element();
        } else {
            return -1;
        }
    }

    // this method returns the size of the stack
    static int size() {
        if (q1.isEmpty()) {
            return -1;
        }
        return q1.size();
    }

    // this method returns the top value
    static int top() {
        if (q1.isEmpty()) {
            return -1;
        }
        while (q1.size() != 1) {       // move everything except the last element to q2
            q2.add(q1.peek());
            q1.remove();
        }
        int temp = q1.peek();          // the last element is the top of the stack

        q1.remove();
        q2.add(temp);

        Queue<Integer> q3 = q1;
        q1 = q2;
        q2 = q3;
        return temp;
    }
}
 Push operation: O(1), As, on each push operation the new element
is added at the end of the Queue.
 Pop operation: O(N), As, on each pop operation, all the elements are
popped out from the Queue (q1) except the last element and pushed
into the Queue (q2).
Auxiliary Space: O(N) since 2 queues are used.

Implement Stack using 1 queue:


Use only one queue and make it act as a stack, in a modified version of the approach
discussed above.
Follow the below steps to implement the idea: 
 The idea behind this approach is to make one queue and push the first
element in it. 
 After the first element, we push the next element and then push the
first element again and finally pop the first element. 
 So, according to the FIFO rule of the queue, the second element that
was inserted will be at the front and then the first element as it was
pushed again later and its first copy was popped out. 
 So, this acts as a Stack and we do this at every step i.e. from the initial
element to the second last element, and the last element will be the
one which we are inserting and since we will be pushing the initial
elements after pushing the last element, our last element becomes the
first element.

import java.util.*;

public class Main {
    public static void main(String[] args) {
        Stack.push(1);
        Stack.push(2);
        Stack.push(3);
        System.out.println("Size = " + Stack.size());
        System.out.println(Stack.search(3));

        System.out.println("Now the top element is = " + Stack.top());
    }
}

class Stack {

    static Queue<Integer> q1 = new LinkedList<>();

    static int size() {
        if (q1.isEmpty()) {
            return -1;
        }
        return q1.size();
    }

    static void push(int x) {
        int s = q1.size();
        q1.add(x);

        // rotate the queue so the newest element moves to the front
        for (int i = 0; i < s; i++) {
            q1.add(q1.remove());
        }
    }

    static int pop() {
        if (q1.isEmpty()) {
            return -1;
        }
        return q1.remove();
    }

    static int search(int j) {
        if (!q1.contains(j)) {
            return -1;
        }
        return j;
    }

    static int top() {
        if (q1.isEmpty()) {
            return -1;
        }
        return q1.peek();
    }
}
Time complexity: O(N) where N is size of stack
Auxiliary Space: O(N)
Identify a Palindromic String
// requires: import java.util.LinkedList; import java.util.Queue;
public static void checkPalindrome(String input) {
    // reversed copy of the input
    StringBuffer buffer = new StringBuffer(input);
    buffer.reverse();
    String name = buffer.toString();

    // a queue gives the characters back in their original (FIFO) order
    Queue<Character> input1 = new LinkedList<>();
    for (int i = 0; i < input.length(); i++) {
        input1.add(input.charAt(i));
    }
    String original = "";
    while (!input1.isEmpty()) {
        original = original + input1.remove();
    }

    // the string is a palindrome if it equals its reverse
    if (original.equals(name)) {
        System.out.println("It is a palindrome");
    } else {
        System.out.println("it is not a palindrome");
    }
}
using Queue
Time complexity is O(n) and space complexity is O(n²).
Identify a Palindromic String
// requires: import java.util.Stack;
public static void checkPalindrome(String input) {
    // a stack gives the characters back in reverse (LIFO) order
    Stack<Character> input1 = new Stack<>();
    for (int i = 0; i < input.length(); i++) {
        input1.push(input.charAt(i));
    }
    String reverseString = "";
    while (!input1.isEmpty()) {
        reverseString = reverseString + input1.pop();
    }

    // the string is a palindrome if it equals its reverse
    if (reverseString.equals(input)) {
        System.out.println("It is a palindrome");
    } else {
        System.out.println("it is not a palindrome");
    }
}

using stack
Time complexity is O(n) and space complexity is O(n²).
Detect Duplicate Parentheses
// requires: import java.util.Stack;
public static String parenthesis(String inputString) {
    Stack<Character> stack = new Stack<>();
    for (int i = 0; i < inputString.length(); i++) {
        char c = inputString.charAt(i);

        if (c != ')') {
            stack.push(c);                 // push every character except a closing bracket
        } else {
            char top = stack.peek();
            stack.pop();
            int count = 0;

            // pop back to the matching '(' and count how many characters were enclosed
            while (top != '(') {
                top = stack.peek();
                stack.pop();
                count++;
            }
            if (count <= 1) {              // nothing (or a single token) inside the pair => redundant
                return "String contains duplicate parenthesis";
            }
        }
    }
    return "string does not contain duplicate parenthesis";
}
time complexity O(n)
space complexity O(n)
Hashing and Hash Functions
Through the following video, you will learn about the concepts of hash tables and
hash functions.
As we have been discussing, the hash table is mainly used to achieve a constant
run-time complexity. You will now see how a hash function is actually used to realise
this run-time of O(1).

A hash table is a data structure that stores records (or elements) according to its
associated “keys”. This is done by designing a hash function that takes the given keys
as input and outputs array indices at which the records are stored. In other words, each
record is stored using the array index obtained by hashing its key.
 
Note that hash tables store data in the form of keys and values. Each array slot contains a
hashed key along with a pointer that tells you the location at which the associated
record is stored.
 
An important point to note:
 
One important feature that a hash function must have is that it must compute fast,
since a time-expensive hash function would defeat the overall purpose of the fast
retrieval of a hash table. Now that you know what a hash function is, let’s move on to
a different mathematical hash function to add a new dimension to this topic.
 
Let’s say you’re working with mathematical operations such as divide, a modulus,
etc., which play a great role in a lot of hash functions. For instance, the hash function
H = i % 10 maps any integer to the range [0-9], depending on the remainder you
get on dividing i by 10. E.g., for i = 23, H = 3, since 23 divided by 10 leaves a
remainder of 3.
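A minimal sketch of such a modulus hash function (the class name is illustrative):

public class HashFunctionDemo {
    // a simple modulus hash function: maps any non-negative integer key to an index in [0, 9]
    static int hash(int key) {
        return key % 10;
    }

    public static void main(String[] args) {
        System.out.println(hash(23));   // 3, since 23 divided by 10 leaves remainder 3
        System.out.println(hash(56));   // 6
        System.out.println(hash(100));  // 0
    }
}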
 
You heard the term cardinality in this session, which is basically a measure of a set's
size, i.e., the number of elements in the set. E.g., if you are given a set
A = {1, 3, 4, 6, 2, 5}, the cardinality of set A is 6.
Three major things help us understand a hash function:
Hash keys = {1, 2, 3, 4, 5, 6, 7, 8}
Hash table = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Hash functions = {k mod 10, k mod n, mid-square, folding}
With the help of these, you can easily see how and where the data gets stored.

Hash collision
There are basically two approaches to resolving collisions:
1. Open hashing (chaining approach)
2. Closed hashing (linear probing approach)

1. In open hashing, suppose two values such as 24 and 56 produce the same remainder
under the hash function; the colliding records are then stored in a chain (linked list) at that index.
2. In closed hashing, if the computed slot is occupied, the record is stored in the next
empty slot. A simple and clear concept.

Introduction to HashMap
Similarities between Hashtable and HashMap are as follows:
1. Both are the implementations of Map interface in java.
2. Both of them perform similar functions.
3. Both do not maintain any order of elements.
 
Differences between Hashtable and HashMap are as follows:
1. Hashtable is used in the older versions of Java; HashMap exists only in newer versions,
i.e., it has been part of Java since version 1.2.

2. Hashtable doesn’t allow a key to be null; HashMap allows at most one key to be null.

3. Hashtable doesn’t allow storing a null value; HashMap allows storing any number of null values.

4. Hashtable is a bit slower; HashMap is faster.

We declare the HashMap in Java by using the below instruction:


HashMap<keyDataType, valueDataType> hashMapName = new HashMap<keyDataType, valueDataType>();

 
Some of the commonly used methods of HashMap are:
Methods Operations

put(key,value) This method adds the specified key with the specified value to the HashMap.

remove(key) If the key is present in the HashMap, then it removes the key along with the value mapped to it.

containsKey(key) If there is any mapping to the specified key, then it returns true.

size() This returns the number of key-value mappings present in the HashMap.

isEmpty() If there is no key-value mapping present in the HashMap, it returns true.

clear() Removes all mappings present in the HashMap.

get(key) Returns the value mapped to the specified key in the HashMap.

keySet() Returns the set of keys present in the HashMap.
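A minimal sketch of the commonly used HashMap methods listed above (the class name, keys, and values are illustrative):

import java.util.HashMap;

public class HashMapDemo {
    public static void main(String[] args) {
        HashMap<Integer, String> map = new HashMap<Integer, String>();
        map.put(1, "Aryan");                     // adds key 1 with the value "Aryan"
        map.put(2, "Nayra");
        map.put(3, "Jagdish");

        System.out.println(map.get(2));          // "Nayra"
        System.out.println(map.containsKey(3));  // true
        map.remove(1);                           // removes key 1 and its value
        System.out.println(map.size());          // 2
        System.out.println(map.isEmpty());       // false
    }
}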

A HashMap should be used if you want to search within a large amount of data;
it is not as useful for small amounts of data.

A HashMap does not store the data in the same order in which you inserted it.

A LinkedHashMap will help you store the data in the same order in which you
inserted it.
// assuming a Map<Integer, String> named map has already been populated
for (Map.Entry<Integer, String> entry : map.entrySet()) {
    System.out.print(entry.getValue());
}
Set<Integer> keys = map.keySet();
for (int key : keys) {
    System.out.println(key + map.get(key));
}
These loops iterate over the map's entries and keys, respectively.

Introduction to HashSet
Implementations of the Set
There are three implementations of the Set interface in Java they are:
1. HashSet:
 This is the most commonly used implementation of the set. Here the
elements are stored randomly, and duplicates are not allowed.
2. LinkedHashSet
 Here, the order of the elements is maintained on the basis of their
insertion order, and no duplicates are allowed.
3. TreeSet
 Here, the order of the elements is maintained by the inbuilt ordering or
by the explicit comparator (which can arrange it in any sorted order) of
TreeSet. Here as well duplicates are not allowed.

HashSet<T> set = new HashSet<>();

set.add(x);        // adds x to the set
set.contains(x);   // true/false
set.remove(x);     // removes a single element
set.isEmpty();     // true/false
set.size();        // number of elements
set.clear();       // empties the set
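A minimal runnable sketch of these HashSet operations (the class name and values are illustrative):

import java.util.HashSet;

public class HashSetDemo {
    public static void main(String[] args) {
        HashSet<Integer> set = new HashSet<Integer>();
        set.add(10);
        set.add(20);
        set.add(10);                          // duplicate, ignored

        System.out.println(set.size());       // 2
        System.out.println(set.contains(20)); // true
        set.remove(20);                       // removes a single element
        System.out.println(set.isEmpty());    // false
        set.clear();                          // empties the set
    }
}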

Linear vs. Non-Linear Data Structures
In the previous module, you learnt how to store linear data. But, what if the data is not
linear?
 What if there exists a relation between data points that are not linear?

 What if the data has a hierarchical structure?

 How do you store such data?


In such a case, you need to use a non-linear data structure. 
 
Let's look at some of the differences between a linear data structure and a non-
linear data structure.
 
Criteria: Sequence/Arrangement
Linear: The elements exhibit some sort of sequence, i.e., they are arranged in a linear order.
Non-Linear: The elements do not exhibit any sequence and are not arranged in any linear order.

Criteria: No. of Levels
Linear: There is a single level and all the elements are present on this level.
Non-Linear: There are multiple levels and the elements are distributed across levels.

Criteria: Traversals
Linear: It is possible to traverse through all the elements in a single pass.
Non-Linear: It is not possible to traverse through all the elements in a single pass; multiple passes/runs are needed.

Criteria: Examples
Linear: Arrays, Stacks, Queues, Linked Lists, etc.
Non-Linear: Trees, Graphs, etc.

 
Introduction to Trees
Tree Vocabulary
Let's now learn about various components of a tree and what we call them. Go
through the tree given below which follows a hierarchical order:
 
All the elements in a tree are called nodes. In the tree given above, the nodes are the
ones with the values 1, 2, 3, 4, 5, 6, and 7.
Nodes can be of different types. The topmost node in a tree is called the root node.
From the root node, descend some other nodes. The root node is called a parent if
there is at least one or more nodes descending from it directly. In that case, we can
call root node 1 as the parent node having node 2 as a child node. Node 1 is also
the parent of nodes 3 and 4. Similarly, node 2 is the parent of nodes 5 and 6 and node
4 is the parent of node 7. A node that does not have any child is called a leaf node. In
the tree given above, nodes 3, 5, 6, and 7 are called leaf nodes.
 
The nodes at the same level descending from the same parent are called siblings. In
the tree given above, nodes 2, 3, and 4 are siblings.
Following is the vocabulary for the tree given above:
Nodes: 1, 2, 3, 4, 5, 6, 7
Root Node: 1
Parent Nodes & their Children Nodes:
Parent - 1,
Children - 2, 3, 4
Parent - 2,
Children - 5, 6
Parent - 4, Child - 7 
Leaf Nodes: 3, 5, 6, 7
Sibling Nodes: (2, 3, 4), (5, 6)

Properties of Trees
1. No Cycle in Tree
An important point to remember is that a tree cannot have any cycle. This means that,
in a tree, a child node cannot have two parent nodes; if it did, a cycle would be formed.
Go through the figure given below:
 
 
 
You can see that the child node 4 has two parents - node 2 and node 3, which builds a
cycle in the figure given above. Thus, the above figure cannot be considered a tree.

2. No. of edges = No. of nodes - 1

An edge is a line connecting two nodes in a tree. Every node except the root has a
unique edge from its parent to itself. Thus, if there are N nodes in a tree, the number
of edges will be N - 1, i.e., one edge per node except for the root.

Trees vs. Arrays vs. Linked Lists


Let's now discuss the important points of differentiation between the arrays, linked
lists and trees. These points are enlisted in the table given below.
 
Criterion: Access (or Search)
Comparison: Linked Lists < Trees < Arrays
Justification: Accessing (or searching for) an element in a tree is quicker than in linked lists but slower than in arrays.

Criterion: Insertion / Deletion
Comparison: Arrays < Trees < Linked Lists (Unordered)
Justification: Inserting or deleting an element in a tree is quicker than in arrays but slower than in unordered linked lists.

Criterion: Number of Elements
Comparison: Linked Lists = Trees ≠ Arrays
Justification: Like linked lists and unlike arrays, trees can consist of as many elements as needed at run time.


Introduction to Binary Tree
In the last video, you learned about binary trees. A binary tree is a tree in which each
node has at most 2 children. In other words, each node can have either 0, 1, or 2 children.
 
When it comes to the vocabulary of a binary tree which can have a maximum of two
children, we call the child on the left side as the left child and the child on the right
side as the right child. 

Consider the binary tree given below:


 

 
In the binary tree given above, the root node 1 is a parent which has two child nodes -
2 and 3. 2 is the left child of 1 and 3 is the right child of 1. 4 is the left child of parent
node 2 and 5 is the right child of parent node 2. 3 has the right child as 6.
Binary Tree
Let’s assume that a node in a tree structure has data, a pointer to the left child, and a
pointer to the right child as its parts. If the tree’s physical representation with nodes is as
shown in the figure, find its logical representation from the given options.
Types of Binary Trees
Full Binary Tree:
Let's understand from Srishti what a full binary tree is in the following video.
In a full binary tree, every node has 2 child nodes except the leaf nodes.
Example 1:
 

 
In the binary tree given above, all the nodes have 2 children except the leaf nodes at
the last level. Thus, it is a full binary tree.
 
Example 2:
 
 
The binary tree given above is a full binary tree because all the nodes have 2 children
except the leaf nodes - node 2, node 5, and the nodes at the last level. Remember that
a node is a leaf node if it does not have any child. A leaf node does not necessarily
need to be at the last level.
Complete Binary Tree:
Now here you will understand the concept of a complete binary tree.
In a complete binary tree, all the levels are completely filled. The exception is the
last level, which may or may not be completely filled. This means that nodes in the
second-last level may or may not have both their children.
With this exception, it is also necessary that the keys in the last level are placed as far
left as possible. In other words, a node in the second-last level cannot have a right
child unless it also has a left child, and the nodes to its left are filled first.

Example 1:
 

 
 
In the example given above, all the levels are completely filled. Even the last level is
completely filled. Thus, the tree given above is a complete binary tree.
 
Example 2:
 

 
 
In the example given above, all the levels are completely filled except the last level.
The node 3 at the second last level does not have any child. Thus, both the conditions
hold true for the above binary tree to be a complete binary tree.
 
Example 3:
 
 
In the example given above, the last level is not completely filled but that is an
exception and is allowed. However, there are two violations of the rule here. The first
violation is that the second last level is not completely filled because node 8 does not
have 2 children. The second violation is that the last level does not have the leaf nodes
completely on the left side because of node 4 containing a child without node 1
containing 2 children. Because of these violations, the binary tree given above is not a
complete binary tree. Note that even if one of these violations would have occurred,
still the above binary tree would not be called a complete binary tree.
 

Perfect Binary Tree:


Now is the time to understand a perfect binary tree. Let's learn it from Srishti when a
binary tree can be called a perfect binary tree.
In a perfect binary tree, all the nodes must have two children nodes except the
leaf nodes and all the leaf nodes must be at the same level.
Representation of Binary Tree
public class Main {   // wrapper class and main method added so the snippet compiles
    public static void main(String[] args) {
        Tree tree = new Tree();
        tree.root = new Node(1);
        tree.root.left = new Node(2);
        tree.root.right = new Node(3);

        tree.root.left.right = new Node(5);
        tree.root.left.left = new Node(4);

        tree.root.right.right = new Node(7);
        tree.root.right.left = new Node(6);
        System.out.println(tree);
    }
}

class Node {           // this class has a data field, a left side, and a right side
    int data;          // value contained inside a node
    Node left, right;  // left & right children of a node

    // constructor to set the data of a node to the passed value and make it a leaf node
    Node(int data) {
        this.data = data;
        left = right = null;
    }
}

class Tree {
    Node root;   // root node of the binary tree

    // constructor to create an empty tree with no root node
    Tree() {
        root = null;
    }
}
Tree Traversal: Depth-First
Search (DFS)
There are two ways of traversing a tree:
1. Depth First Search (DFS)
2. Breadth First Search (BFS)
 
Traversal – Visiting each of the nodes in some order and performing some action on each node in the process.

The depth of a tree describes how far down its nodes go from the root. For example, if the root node has two children and each of those children has two more children, the depth is 3 (counting the root's level as 1).

Depth(tree) = max(level(n)), taken over all nodes n

It is important to note that sometimes we consider the level of the root node as 0, while at other times we consider the level of the root node as 1. We usually follow the latter.

There are basically three orders in DFS (depth-first traversal):

Preorder traversal
Inorder traversal
Postorder traversal

1. Preorder means the action is performed at the node before its left and right subtrees are visited.
2. Inorder means the action is performed after the left subtree has been visited but before the right subtree is visited.
3. Postorder means the action is performed after both subtrees have been visited.

The following points summarize this idea well:


 Inorder (stage 2, stage 1, stage 3)

 Preorder (stage 1, stage 2, stage 3)

 Postorder (stage 2, stage 3, stage 1)


So, as discussed in the video, there are three stages at which a node is visited:
 Stage 1: When only the node is visited and none of its right and left subtrees are visited

 Stage 2: When its left subtree is visited

 Stage 3: When both its left and right subtrees are visited
Depending on which stage the action is performed at, the algorithm becomes a
particular variant of depth-first traversal. An action can be anything. It may be
printing its value, comparing its value with some other node(s) etc.
DFS: Pseudocode & Code
Preorder code
public void preOrderDFS(Node node) {
    if (node == null) {
        return;
    }
    System.out.print(node.data + " ");
    preOrderDFS(node.left);
    preOrderDFS(node.right);
}

Inorder code
public void inOrderDFS(Node node) {
    if (node == null) {
        return;
    }
    inOrderDFS(node.left);
    System.out.print(node.data + " ");
    inOrderDFS(node.right);
}

Postorder code
public void postOrderDFS(Node node) {
    if (node == null) {
        return;
    }
    postOrderDFS(node.left);
    postOrderDFS(node.right);
    System.out.print(node.data + " ");
}
Finding height
public int height(Node node) {
    if (node == null) {
        return 0;
    } else {
        int leftDepth = height(node.left);
        int rightDepth = height(node.right);
        if (leftDepth > rightDepth)
            return (leftDepth + 1);
        else
            return (rightDepth + 1);
    }
}
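The sketch below is a minimal usage example, assuming the traversal methods and height() shown above are placed inside the Tree class defined earlier (the class name TraversalDemo is only illustrative). The expected output for the sample 7-node tree is shown in the comments.

public class TraversalDemo {
    public static void main(String[] args) {
        Tree tree = new Tree();
        tree.root = new Node(1);
        tree.root.left = new Node(2);
        tree.root.right = new Node(3);
        tree.root.left.left = new Node(4);
        tree.root.left.right = new Node(5);
        tree.root.right.left = new Node(6);
        tree.root.right.right = new Node(7);

        System.out.print("Preorder:  ");
        tree.preOrderDFS(tree.root);    // 1 2 4 5 3 6 7
        System.out.print("\nInorder:   ");
        tree.inOrderDFS(tree.root);     // 4 2 5 1 6 3 7
        System.out.print("\nPostorder: ");
        tree.postOrderDFS(tree.root);   // 4 5 2 6 7 3 1
        System.out.println("\nHeight: " + tree.height(tree.root)); // 3
    }
}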
Tree Traversal: Breadth-First
Search (BFS)
In simple words, BFS traverses the tree one level of children at a time.

So, in a BFS, the nodes at a certain level are visited before moving on to the next
level. So basically, you first visit the root node, then all the nodes at level 1, then all
the nodes at level 2, and so on. Hence, in this algorithm, you move along the breadth
of a tree, before hopping on to the next level. This is the reason why Breadth-First
Search (BFS) traversal is also called the Level-Order traversal.
BFS (Recursive): Pseudocode &
Code
The following method prints the nodes level by level, by calling printNodesAtLevel() once for every level from 1 to the height of the tree:
public void levelOrderOrBFS() {
    int h = height(root);
    for (int i = 1; i <= h; i++) {
        printNodesAtLevel(root, i, 1);
    }
}

This helper prints a node's data when level == currentLevel; otherwise, it recurses into both subtrees with currentLevel incremented by 1:
public void printNodesAtLevel(Node root, int level, int currentLevel) {
    if (root == null) {
        return;
    }
    if (level == currentLevel) {
        System.out.print(root.data + " ");
    } else {
        printNodesAtLevel(root.left, level, currentLevel + 1);
        printNodesAtLevel(root.right, level, currentLevel + 1);
    }
}

Hence, this is also called level-order traversal.

Insert node
// builds a complete binary tree from an array, placing element i's children at indices 2*i + 1 and 2*i + 2
public Node insertNode(int[] elementsArr, Node node, int i) {
    if (i < elementsArr.length) {
        node = new Node(elementsArr[i]);
        node.left = insertNode(elementsArr, node.left, 2 * i + 1);
        node.right = insertNode(elementsArr, node.right, 2 * i + 2);
    }
    return node;
}
Iterative format
public void printNodesLevel() { // iterative form using a queue (java.util.Queue and java.util.LinkedList)
    Queue<Node> q = new LinkedList<>();
    q.add(root);

    while (!q.isEmpty()) {
        Node node = q.peek();
        q.remove();
        System.out.print(node.data + " ");
        if (node.left != null) {
            q.add(node.left);
        }
        if (node.right != null) { // note: this must be a separate 'if', not 'else if';
            q.add(node.right);    // otherwise the right child is skipped whenever a left child exists
        }
    }
}

Finding maximum
public int findMax(Node root) {
    if (root == null) {
        return Integer.MIN_VALUE; // so that an empty subtree never wins the comparison
    } else {
        int data = root.data;
        int leftSide = findMax(root.left);
        int rightSide = findMax(root.right);
        if (leftSide > data) {
            data = leftSide;
        }
        if (rightSide > data) { // separate 'if' so that both subtrees are compared
            data = rightSide;
        }
        return data;
    }
}
Mirror a Tree
// iterative (level-order) version: swap the left and right children of every node
public void swap(Node node) {
    if (node == null)
        return;

    Queue<Node> queue = new LinkedList<>();
    queue.add(node);

    while (queue.size() > 0) {
        Node temp = queue.peek();
        queue.remove();

        // swap the two children of the current node
        Node temp1 = temp.left;
        temp.left = temp.right;
        temp.right = temp1;

        if (temp.left != null)
            queue.add(temp.left);
        if (temp.right != null)
            queue.add(temp.right);
    }
}

// recursive version
public Node swap1(Node node) {
    if (node == null) {
        return node;
    }
    Node temp = node.left;
    node.left = node.right;
    node.right = temp;

    swap1(node.left);  // recurse on the node's own children, not on the root's
    swap1(node.right);
    return node;
}
Summary
In this segment you learnt the following:
1. The evolution of tree-based data structures was driven by the non-linearity of data. Every data point in the tree is connected to several other data points such that there is a specific relation between every connection.
2. Tree vocabulary: Nodes are all the elements in the tree, with the topmost known as the root node. The node from which other nodes descend is the parent node, and the descendant is the child node. A node with no child is a leaf node, and the line that connects two nodes is an edge.
3. We then discussed binary trees, where you learnt that a node in a binary tree can have at most 2 children: a left child and a right child.
     1. The maximum number of nodes at level l of a binary tree is 2^(l−1), and the maximum number of nodes in a binary tree of height h is 2^h − 1.
     2. The minimum height h of a binary tree with n nodes is log2(n+1).
     3. The minimum number of levels l in a binary tree with L leaves is log2(L) + 1.
4. Types of binary trees were discussed:
     1. Full binary tree – every node has 2 child nodes except the leaf nodes.
     2. Complete binary tree – all the levels are completely filled (except possibly the last level, which is filled from the left).
     3. Perfect binary tree – all the nodes must have two children except the leaf nodes, and all the leaf nodes must be at the same level.
5. Two traversal techniques were discussed:
     1. Depth-First Search traversal – the nodes are visited along a branch, going as deep as possible before backtracking.
          a. Pre-order traversal: first the node and then its left and right subtrees are visited.
          b. In-order traversal: first the node's left subtree, then the node and then its right subtree are visited.
          c. Post-order traversal: the node's left and right subtrees are visited, followed by the node.
     2. Breadth-First Search traversal / level-order traversal (recursive and iterative approaches) – the nodes at a certain level are visited before moving on to the next level, i.e. you move along the breadth of the tree.
6. Applications of traversal, such as the mirror of a tree and spiral level-order traversal, were also discussed in detail.
Introduction to BSTs
To summarise, the two types of tree data structures we discussed till now are as
follows:
1. Binary Tree:
In this tree, every node can have at most two children. 
 
2. Binary Search Tree:
A Binary Search Tree is also called an ordered binary tree because of the
following properties:
 The values of all the nodes in the left subtree are less than the root node.
 The values of all the nodes in the right subtree are greater than the root
node.
 Each subtree in the binary search tree is itself a binary search tree.

You will also learn how search in a (balanced) binary search tree has a time complexity of O(log n).

How to search for a key

public boolean search(Node root, int key) {
    if (root == null) {
        return false;
    }
    if (key == root.data) {
        return true;
    }
    if (key < root.data) {
        return search(root.left, key);
    } else {
        return search(root.right, key);
    }
}

In simple terms: if the key is less than root.data, the search continues in the left subtree; if it is greater than root.data, the search continues in the right subtree.

Inserting Node in BST


So, whenever a new node is added to a BST, it has to be added as a leaf node. It cannot be an internal node. Also, adding a new node to a BST is similar to searching for a node in the BST. Hence, for a balanced BST, the ‘insert’ operation has a run-time efficiency of O(log n).
void insert(int key) {
    root = insert(root, key);
}

public Node insert(Node root, int key) {
    if (root == null) {
        root = new Node(key);
        return root;
    }
    if (key > root.data) {
        root.right = insert(root.right, key);
    } else if (key < root.data) {
        root.left = insert(root.left, key);
    }
    return root;
}
Deleting a Node from BST
package bintree;

public class Node<T> {

    protected T value;
    protected Node<T> parent;
    protected Node<T> left;
    protected Node<T> right;

    public Node(T value, Node<T> parent, Node<T> left, Node<T> right) {
        this.value = value;
        this.parent = parent;
        this.left = left;
        this.right = right;
    }

    public T getValue() {
        return this.value;
    }

    public Node<T> getParent() {
        return this.parent;
    }

    public Node<T> getLeft() {
        return this.left;
    }

    public Node<T> getRight() {
        return this.right;
    }

    public String toString() {
        return this.value.toString();
    }

    public void setValue(T value) {
        this.value = value;
    }

    public void setParent(Node<T> node) {
        this.parent = node;
    }

    public void setRight(Node<T> node) {
        this.right = node;
    }

    public void setLeft(Node<T> node) {
        this.left = node;
    }
}
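Note that the class above only defines a generic node; the delete operation itself is not shown in the text. As a rough sketch (using the simpler int-based Node from the earlier segments rather than the generic bintree.Node), BST deletion typically handles three cases: a leaf node, a node with one child, and a node with two children, where the node's value is replaced by its inorder successor.

public Node delete(Node root, int key) {
    if (root == null) {
        return null;
    }
    if (key < root.data) {
        root.left = delete(root.left, key);
    } else if (key > root.data) {
        root.right = delete(root.right, key);
    } else {
        // found the node to be deleted
        if (root.left == null) return root.right;  // no child, or only a right child
        if (root.right == null) return root.left;  // only a left child
        // two children: copy the inorder successor (smallest key in the right subtree)
        Node successor = root.right;
        while (successor.left != null) {
            successor = successor.left;
        }
        root.data = successor.data;
        // delete the successor from the right subtree
        root.right = delete(root.right, successor.data);
    }
    return root;
}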
Lowest Common Ancestor in
BST
public Node Ancestor(Node root, int n1, int n2) {
    if (root == null) {
        return null;
    }
    if (n1 > root.data && n2 > root.data) {
        return Ancestor(root.right, n1, n2);
    }
    if (n1 < root.data && n2 < root.data) {
        return Ancestor(root.left, n1, n2);
    }
    return root;
}
Introduction: Priority Queues
and Heaps
The topics covered in this session are —
 Priority queues with example
 Priority queues ADT
 Implementation of priority queues using LinkedList and ArrayList
 Heaps
 Basic operations performed on heaps
 Sorting

Example of a Priority Queue - 1


Now that you have an idea about priority queues and where to implement them, let’s
look at an example of a waitlisted ticket confirmation system for a sports club match.
 
Let’s say that a sports club is running its next season of matches and for that, it is
selling tickets in four categories — Club, Season, Online and Counter. The priority
given to each category is in decreasing order, i.e. Club gets the highest priority
while Counter gets the lowest.
 
But some buyers received waitlisted tickets because the matches got overbooked. As a
result, some people may cancel their ticket bookings, and you can give those tickets to
others on the waiting list. So, your job is to find out the order in which these waitlisted
tickets are confirmed.
 
Before you move on to find out the answer to this question, let’s first define the
problem statement and find the solution using Java code. You can refer to the code file attached on the first page of the module (it is also attached below). It includes the code that will be used in the video, and it will help your learning if you keep it open in your IDE at the same time.

Example of a Priority Queue - 2
Now that you have an idea about where priority queues are implemented, let’s look at
an example of assigning bank privileges to customers using a priority queue.
 
Let’s say that a bank is ready to provide privileges to some of its customers. The bank
has decided that the customers will get privileges on the basis of their bank balances.
Now, your job is to determine the order in which the privileges will be assigned to the
customers.
 
Let’s define the problem statement and find the solution using Java code. You can
refer to the code file attached at the first page of the module. It includes the code that
will be used in the video and will help your learning if you simultaneously keep it
open on your IDE.

Priority Queue ADT


In the previous videos, you solved an example of assigning privileges to bank
customers using priority queues.
 
Let’s look at the Abstract Data Type (ADT):
 
It is a type (or class) for objects whose behavior is defined by a set of values and a
set of operations.
 
Similar to interfaces, the definition of ADT only states what operations are to be
performed but doesn’t say how these operations will be carried out. It does not specify
how data will be organized in memory and what algorithms will be used to carry out
the operations.
 
The user of the data type need not know how the data type is implemented. For example, we have been using int, float, char and other primitive data types with knowledge only of the values they can take and the operations that can be performed on them, without any idea of how these primitive data types are implemented in Java.
Similarly, we can think of ADT as a black box, which hides the inner structure and the
internal design of a data type from its users.
 
For example, here’s List ADT:
 
A list contains elements of the same type arranged in sequential order, and the
following operations can be performed on a list.
 get(): Returns an element from the list at any given position.
 insert(): Inserts an element at any position in the list.
 remove(): Removes the first occurrence of an element from a non-empty list.
 removeAt(): Removes the element at a specified location from a non-empty
list.
 replace(): Replaces an element at any position with another element.
 size(): Returns the number of elements in the list.
 isEmpty(): Returns ‘true’ if the list is empty; otherwise, returns ‘false’.
 isFull(): Returns ‘true’ if the list is full; otherwise, returns ‘false’.
 element(): Returns the element at the head of the queue; throws NoSuchElementException when the queue is empty.
  
 There are two types of priority queues — max priority queues and min priority queues. In a max priority queue, the largest element (the element with the highest value) is given the highest priority, whereas in a min priority queue, the smallest element (the element with the lowest value) is given the highest priority (a short example using Java's built-in PriorityQueue follows below).
 For the remainder of this session, you will study min priority queues. You can refer to the code file attached on the first page of the module. It includes the code that will be used in the video, and it will help your learning if you keep it open in your IDE at the same time.
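For reference, Java's standard library already provides a priority queue: java.util.PriorityQueue behaves as a min priority queue by default and can act as a max priority queue when given a reversed Comparator. A minimal usage sketch (separate from the module's code file):

import java.util.Comparator;
import java.util.PriorityQueue;

public class PriorityQueueDemo {
    public static void main(String[] args) {
        // min priority queue: the smallest element has the highest priority
        PriorityQueue<Integer> minPQ = new PriorityQueue<>();
        minPQ.add(40);
        minPQ.add(10);
        minPQ.add(25);
        System.out.println(minPQ.poll()); // 10 (smallest removed first)

        // max priority queue: reverse the natural ordering with a Comparator
        PriorityQueue<Integer> maxPQ = new PriorityQueue<>(Comparator.reverseOrder());
        maxPQ.add(40);
        maxPQ.add(10);
        maxPQ.add(25);
        System.out.println(maxPQ.poll()); // 40 (largest removed first)
    }
}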

Introduction to Heaps
In the previous videos, you learnt about the list-based implementation of priority
queues. And here is a chart with the time complexities for each basic operation carried
out by ordered and unordered list implementation of priority queues.
 

Types of heaps
1. Min heap: In a min heap, every parent node's value is less than or equal to the values of its left and right children. It takes O(1) time to find the minimum, since it is always at the root.
2. Max heap: A max heap is exactly the opposite of a min heap: every parent node's value is greater than or equal to the values of its left and right children.
In this lecture, you saw that the binary heap data structure could implement the basic operations of a priority queue in O(log n) time, which is a performance improvement over the list implementation of the priority queue, for which the time complexity was O(n).
You learnt the basic structure of heaps and two of its properties:
 They are complete binary trees
 Order property: They maintain a hierarchical (level-wise) order among the
nodes of their trees (i.e. min heap or max heap).

The binary heap data structure is always built on a complete binary tree.


Insertion and Removal of Heap
Elements
 
There are two heapify operations:
 HeapifyUp is used during insertions
 HeapifyDown is used during deletions
This was a good insight: apart from the root node, the last node is the most important node, since all new node insertions and deletions are done at the last position of a heap. In this lecture, you learnt that a heap's properties might get violated after inserting a new element. So, you must apply the ‘heapify’ operations to rectify any heap property violations, i.e. to maintain the ‘heapness’ (the heap properties) of the heap.
Now you have seen how deletion takes place in a heap. Below is an image showing a step-by-step process of how the removal of elements takes place in the heap discussed in the last video.
 During insertion into a min heap, the new element is first placed at the last position (the last position matters because all insertions and deletions are done there). The new element is then compared with its parent: if the parent's value is not less than or equal to it, the two are swapped, and this swapping continues up the tree until the violation is resolved.

 
Note: The minimum value, which is at the root node, cannot be removed
directly. First, it is swapped with the last node and then it is deleted from the last
node because, as discussed, all the insertion and deletions are done at the last node
only.
 
Also, you have seen a complete example containing both insertion and deletion
operations on a heap.

Implementation of a
Complete Binary Tree
In the previous videos, you have seen the pictorial representation of heaps, how they
are made, and how they perform their operations.
 
Now, let’s see how heaps can be represented using arrays.
If the array index starts from 0 and the parent node's index is j, then its child nodes are at indices 2*j + 1 and 2*j + 2. Conversely, for a node at index i, the parent's index is j = (i - 1)/2 (integer division); thus, this is the correct option.

Now you have seen how any heap is implemented as an array. In the array, the root node is the initial element, and for a node at index i:
 Arr[(i - 1) / 2] returns its parent node.
 Arr[(2 * i) + 1] returns its left child node.
 Arr[(2 * i) + 2] returns its right child node.
These are the basic operations on the min heap side (an array-based sketch follows this list):
 getMin(): Returns the root element of the min heap. The time complexity of this operation is O(1).
 extractMin(): Removes the minimum element from the min heap. The time complexity of this operation is O(log n), as it needs to maintain the heap property (by calling heapify()) after removing the root.
 insert(): Inserting a new key takes O(log n) time. We add the new key at the end of the tree. If the new key is larger than its parent, we don't need to do anything; otherwise, we traverse up to fix the violated heap property.
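To connect these operations to the array representation discussed above, here is a minimal, illustrative array-based min-heap sketch (not the module's code file; the class and method names are assumptions). insert() performs a heapify-up and extractMin() performs a heapify-down, using the 0-indexed parent/child formulas:

import java.util.ArrayList;

class MinHeap {
    private final ArrayList<Integer> heap = new ArrayList<>();

    public int getMin() {             // O(1): the minimum is always at the root
        return heap.get(0);
    }

    public void insert(int key) {     // O(log n): add at the end, then heapify-up
        heap.add(key);
        int i = heap.size() - 1;
        while (i > 0 && heap.get((i - 1) / 2) > heap.get(i)) {
            swap(i, (i - 1) / 2);
            i = (i - 1) / 2;
        }
    }

    public int extractMin() {         // O(log n): move the last element to the root, then heapify-down
        int min = heap.get(0);
        heap.set(0, heap.get(heap.size() - 1));
        heap.remove(heap.size() - 1);
        heapifyDown(0);
        return min;
    }

    private void heapifyDown(int i) {
        int smallest = i, left = 2 * i + 1, right = 2 * i + 2;
        if (left < heap.size() && heap.get(left) < heap.get(smallest)) smallest = left;
        if (right < heap.size() && heap.get(right) < heap.get(smallest)) smallest = right;
        if (smallest != i) {
            swap(i, smallest);
            heapifyDown(smallest);
        }
    }

    private void swap(int a, int b) {
        int tmp = heap.get(a);
        heap.set(a, heap.get(b));
        heap.set(b, tmp);
    }
}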

What is meant by Heapify? 


Heapify is the process of creating a heap data structure from a binary tree represented using an array. It is used to create a min heap or a max heap. To build the heap, heapify is applied starting from the last non-leaf node, whose index is n/2 − 1, and moving backwards to the root. Heapify is usually implemented using recursion.

Heap Sort
Heap sort is a comparison-based sorting technique based on Binary Heap data
structure. It is similar to the selection sort where we first find the minimum
element and place the minimum element at the beginning. Repeat the same
process for the remaining elements.

 Heap sort is an in-place algorithm.
 Its typical implementation is not stable, but it can be made stable.
 It is typically 2-3 times slower than a well-implemented quicksort. The reason for the slowness is a lack of locality of reference.
Advantages of heap sort:
 Efficiency – The time required to perform heap sort grows as n log n, whereas simpler sorting algorithms such as selection sort or insertion sort slow down quadratically as the number of items to sort increases. This sorting algorithm is very efficient.
 Memory usage – Memory usage is minimal because, apart from what is necessary to hold the initial list of items to be sorted, it needs no additional memory space to work.
 Simplicity – It is simpler to understand than some other equally efficient sorting algorithms because it can be implemented without advanced concepts such as recursion.
Heap sort takes O(n log n) time overall: building the heap takes O(n), and each of the n removals from the heap takes O(log n). A sketch is given below.
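The following is a rough heap-sort sketch on a plain int array. It uses the common max-heap variant: building a max heap and repeatedly moving the current maximum to the end of the array sorts it in ascending order in place, whereas the description above speaks in terms of repeatedly extracting the minimum; the underlying idea is the same.

public class HeapSort {
    public static void sort(int[] arr) {
        int n = arr.length;
        // build a max heap: heapify every non-leaf node, starting from index n/2 - 1
        for (int i = n / 2 - 1; i >= 0; i--) {
            heapify(arr, n, i);
        }
        // repeatedly move the current maximum (root) to the end and shrink the heap
        for (int end = n - 1; end > 0; end--) {
            int tmp = arr[0];
            arr[0] = arr[end];
            arr[end] = tmp;
            heapify(arr, end, 0);
        }
    }

    // restore the max-heap property for the subtree rooted at i, within the first 'size' elements
    private static void heapify(int[] arr, int size, int i) {
        int largest = i, left = 2 * i + 1, right = 2 * i + 2;
        if (left < size && arr[left] > arr[largest]) largest = left;
        if (right < size && arr[right] > arr[largest]) largest = right;
        if (largest != i) {
            int tmp = arr[i];
            arr[i] = arr[largest];
            arr[largest] = tmp;
            heapify(arr, size, largest);
        }
    }

    public static void main(String[] args) {
        int[] data = {12, 11, 13, 5, 6, 7};
        sort(data);
        System.out.println(java.util.Arrays.toString(data)); // [5, 6, 7, 11, 12, 13]
    }
}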

Introduction to Graphs
There are basically two types of components in a graph:
Nodes
Edges
Nodes represent objects or entities, and edges represent the various kinds of relationships between these entities. Nodes are also called vertices, and edges are also called arcs.
1. Graphs are used in Google Maps, where different locations are represented as nodes and the roads connecting the locations act as the edges between these nodes.
2. Recommendations on e-commerce websites are generated using graph theory. When a person checks out a particular category of product, similar kinds of products appear as suggestions to the user. Here, the products act as nodes, and the similarity between them forms a relationship represented as an edge that leads to related products (nodes).

These are two among many real-life examples where you can use graphs to depict real-world scenarios.

There are basically two types of relationships:

1. Symmetric relationship: if A has a relationship with B, then B also has the same relationship with A (for example, being siblings). Such relationships are represented by UNDIRECTED GRAPHS.
2. Asymmetric relationship: if A has a relationship with B, it is not necessary that B has the same relationship with A (for example, A follows B). Such relationships are represented by DIRECTED GRAPHS.

A graph is an abstract data structure representing the relationship between multiple


nodes (entities) by connecting the nodes with edges.
 
G = (V, E). Here, G is a graph, which is a pair of two sets, where:
1. V is a finite set of vertices (nodes), and
2. E is a finite set of an ordered pair of the form (u, v), called an edge. The pair (u,
v) indicates that there is an edge from vertex u to vertex v.
 
Graphs play a critical role in the application of social networks, transportation
networks, etc. Certain applications of the graph data structure include the following:
1. Graphs help with identifying contacts on social networking websites, such as
Facebook; the nodes can be different users, and the edges represent the
relationships between them.
2. Graphs help with finding the shortest distance between any two given
locations; the nodes can be the different locations in a city, and the edges
represent the routes available between the locations.
 
Graph data structures do not have any restrictions like those of the tree data structure
on how different nodes are connected. Tree data structures have usage restrictions in
representing real-world scenarios, as each child node can have one parent node only.
Let’s take a look at the two data structures in the diagram given below.

Figure 1

So, you have now been introduced to the concepts of graphs and have also looked at
various real-world scenarios where graph data structures are used extensively.
 
In the forthcoming video, you will learn about the classification of graphs and the
differences between the graph and the tree data structure.

The chart given below shows the classification of graphs.

Figure 2

 
Undirected graphs: Graphs that show a symmetrical relationship between two
connected nodes paired by an edge representing a simple line.
 
 
G for the above graph is (V, E), where:
 
V = {1, 2, 3, 4} and
E = {(1, 2), (2, 4), (4, 1), (1, 3)}.
 
 
Directed graphs: Graphs that show an asymmetrical relationship between two
connected nodes paired by an edge that indicates the direction of the relationship from
one node to the other.
 

 
Note: The pair representing the edge is ordered, and (u, v) ≠ (v, u) in the case of a
directed graph (digraph). In the case of an undirected graph, the order does not matter.
 
G for the graph above is (V, E), where:
V = {A, B, C, D, E} and
E = {(A, C), (B, C), (C, D), (E, D), (B, E)}.
 
Directed acyclic graphs (DAGs): These graphs fall under the sub-category of
directed graphs. The only difference between a directed graph and a DAG (a directed
acyclic graph) is that DAGs do not have cycles; this means if you start from any node
in the graph and traverse through its connecting edges, then you can never return to
the same node where you started.
 
 
 
Trees: These are restricted forms of graphs, and they fit the category of directed
acyclic graphs with the restriction that each child node can have only one parent node
in the structure.
 
 
 
Connected graph: A graph is connected if a path exists from every vertex to every
other vertex.
 
 
 
Disconnected graph: A graph is disconnected if at least two nodes exist such that
there is no path connecting them.
 
 
There does not exist any path from the set of vertices {1, 2, 4, 3} to vertex {5}.
Terminology
Neighbours: If two nodes are adjacent to each other and connected by an edge, then
those nodes are called neighbours.
 
Degree: The number of edges that are connected to a node is called the degree of the
node.
 
Now, let us consider the undirected graph given below. We will now discuss how to
determine the degree of each node in this graph.

Figure 3
 

Node      Neighbours       Degree
Node 1    {2, 3, 4, 5}     4
Node 2    {1, 4}           2
Node 3    {1}              1
Node 4    {1, 2, 5}        3
Node 5    {1, 4}           2

 
In the case of directed graphs, the degree can be classified as:
 In-degree: The number of incoming edges to a node
 Out-degree: The number of outgoing edges from a node
 
For a directed graph:
Degree = In-degree (Edges pointing to the vertex) + Out-degree (Edges pointing away
from the vertex).
 
Path: When a series of vertices are connected by a sequence of edges between two
specific nodes in a graph, the sequence is called a path. For example, in the
graph above, {2, 1, 4, 5} indicates the path between nodes 2 and 5, and the
intermediate nodes are 1 and 4.
 
Weighted graph: A graph in which the edges contain some weights or values is
called a weighted graph.
 
Example: If the nodes in a graph are considered to be cities, and the edges are
considered to be the paths between the cities, then the weights of these edges can be
considered to be the distance between these cities.
 
 
 
Unweighted graph: A graph in which edges contain no weight is called an
unweighted graph.
 
 
So, now that you have a clear understanding of a graph’s properties, test your knowledge by attempting the questions given below.
Depth-First Search (DFS) – I

 
Here’s an image to explain the traversal of the DFS algorithm step-by-step on
an example graph.

Example
In the video, you saw the pseudocode for the depth-first search of a graph. Now, let’s
take a look at the steps in finding the DFS traversal of the graph given in Step 1 in
the image below by taking node 1 as the starting node of the traversal.
 
 
 
Let us apply the pseudocode discussed in the video.
 
The steps in the image above are explained below:
Step 1: Run the dfs() method on node ‘1’ and add that node to the visited list
Step 2: The dfs(1) method recursively calls for all the unvisited neighbours of node
‘1’:
1. Here, the unvisited neighbours of node ‘1’ are {2, 3}.
2. Let us assume that dfs(1) recursively calls for node ‘2’ first and adds the node
to the visited list.
Step 3: The dfs(2) method recursively calls for all the unvisited neighbours of node
‘2’:
1. Here, the unvisited neighbours of node ‘2’ are {3, 4}.
2. Let us assume that dfs(2) recursively calls for node ‘3’ first and adds the node
to the visited list.
Step 4: The dfs(3) method recursively calls for all the unvisited neighbours of node
‘3’. Since there are no remaining unvisited neighbours of node ‘3’, it returns.
Step 5: The dfs(2) method recursively calls for all the unvisited neighbours of node
‘2’:
1. Here, the unvisited neighbour of node ‘2’ is {4}. So, dfs(2) recursively calls for
node ‘4’ and adds the node to the visited list.
Step 6: The dfs(4) method recursively calls for all the remaining unvisited neighbours
of node ‘4’. Since there are no remaining unvisited neighbours of node ‘4’, it returns.
Step 7: The dfs(2) method recursively calls for all the remaining unvisited neighbours
of node ‘2’. Since there are no remaining unvisited neighbours of node ‘2’, it returns.
Step 8: The dfs(1) method recursively calls for all the remaining unvisited neighbours
of node ‘1’. Since there are no remaining unvisited neighbours of node ‘1’, it returns.
The visited list is the DFS of the graph.
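The course's own code file is not reproduced here, but a minimal sketch of this recursive DFS, assuming an adjacency-list representation (a Map from each node to its list of neighbours) and using the edges implied by the walkthrough above (1-2, 1-3, 2-3, 2-4), could look like this:

import java.util.*;

public class GraphDFS {
    private final Map<Integer, List<Integer>> adj = new HashMap<>();

    public void addEdge(int u, int v) {    // undirected edge
        adj.computeIfAbsent(u, k -> new ArrayList<>()).add(v);
        adj.computeIfAbsent(v, k -> new ArrayList<>()).add(u);
    }

    public void dfs(int node, Set<Integer> visited) {
        visited.add(node);                 // mark the node as visited
        System.out.print(node + " ");
        for (int neighbour : adj.getOrDefault(node, Collections.emptyList())) {
            if (!visited.contains(neighbour)) {
                dfs(neighbour, visited);   // recursively visit each unvisited neighbour
            }
        }
    }

    public static void main(String[] args) {
        GraphDFS g = new GraphDFS();
        g.addEdge(1, 2);
        g.addEdge(1, 3);
        g.addEdge(2, 3);
        g.addEdge(2, 4);
        g.dfs(1, new HashSet<>());         // one possible output: 1 2 3 4
    }
}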
Depth-First Search (DFS) – II
In the previous segment, you learnt about the depth-first traversal of a graph and
the pseudocode for the same in detail.
 
Now, since you have already seen the DFS of a graph using recursion, and we know
that there is a relation between recursion and stack, let’s see the DFS of a graph using
stacks with an example.
 
Let’s take a look at the steps in finding the DFS traversal of the graph given in Step 1
in the image below by taking node 1 as the starting node of the traversal.
 

 
 
The steps in the image above are explained below:
 
Step 1: Push the starting node, which is node 1 here, to stack.
Step 2: Pop the stack; the popped element here is ‘1’:
1. If the popped element is not on the visited list, then add it to the visited list:
1. So, ‘1’ is added to the visited list.
2. Now, push all the neighbours of the popped element that are not on the visited
list:
1. Therefore, 2 and 3 are pushed to the stack.
Step 3: Pop the stack; the popped element here is ‘3’:
1. If the popped element is not on the visited list, then add it to the visited list:
1. So, ‘3’ is added to the visited list.
2. Now, push all the neighbours of the popped element that are not on the visited
list:
1. Therefore, 2 is pushed to the stack.
Step 4: Pop the stack; the popped element here is ‘2’:
1. If the popped element is not on the visited list, then add it to the visited list:
1. So, ‘2’ is added to the visited list.
2. Now, push all the neighbours of the popped element that are not on the visited
list:
1. Therefore, 4 is pushed to the stack.
Step 5: Pop the stack; the popped element here is ‘4’:
1. If the popped element is not on the visited list, then add it to the visited list:
1. So, ‘4’ is added to the visited list.
2. Now, push all the neighbours of the popped element that are not on the visited
list:
1. Since there are no neighbours of the popped element that are not on the
visited list, do nothing.
Step 6: Pop the stack; the popped element here is ‘2’:
1. If the popped element is not on the visited list, then add it to the visited list:
1. Since ‘2’ is already on the visited list, do nothing.
2. Now, push all the neighbours of the popped element that are not on the visited
list:
1. Since there are no neighbours of the popped element that are not on the
visited list, do nothing.
 
Since the stack is empty, the visited list is the DFS of the graph in Step 1.
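As an illustration, the stack-based procedure described above can be sketched as follows (the adjacency-list Map and the method name are assumptions, not the module's code):

import java.util.*;

public class GraphDFSIterative {
    public static void dfs(Map<Integer, List<Integer>> adj, int start) {
        Deque<Integer> stack = new ArrayDeque<>();
        Set<Integer> visited = new LinkedHashSet<>(); // keeps insertion order for printing
        stack.push(start);

        while (!stack.isEmpty()) {
            int node = stack.pop();
            if (!visited.contains(node)) {
                visited.add(node);
                // push all neighbours of the popped element that are not yet on the visited list
                for (int neighbour : adj.getOrDefault(node, Collections.emptyList())) {
                    if (!visited.contains(neighbour)) {
                        stack.push(neighbour);
                    }
                }
            }
        }
        System.out.println(visited); // the visited list is the DFS of the graph
    }
}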
 
Now, test your understanding of the DFS algorithm by attempting the questions given
below.
Breadth-First Search (BFS) – I
In the two previous segments, you learnt about the depth-first search (DFS) algorithm
for graphs. 
 
Now, we will begin this segment with the forthcoming video, which will introduce
you to another graph traversal technique called breadth-first search (BFS).
Breadth-first search is a traversing algorithm, where traversing begins from the start
node and then explores the immediate neighbours of the start node. Then the
traversing moves towards the next-level neighbours of the graph structure. As the
name suggests, traversal across the graph happens breadthwise.
 
To implement a BFS, you need to consider the stage of each node. Nodes, in general,
are considered to be in the following three different stages:
 Not visited
 Visited
 Completed
 
In the BFS algorithm, nodes are marked as visited during traversal to avoid the
infinite loops caused due to the possibilities of cycles in a graph structure.
 
Now, you learnt about the introduction of the BFS algorithm. In the next segment, you
will learn about the BFS algorithm in more detail.
Breadth-First Search (BFS) – II
Here is an image to explain the traversal of the BFS algorithm step by step on an
example graph.
 

The pseudocode of the BFS algorithm is given below.


Procedure bfs(n)
    Q ← new Queue
    visited ← { }
    enqueue(Q, n)
    add n to visited set
    while Q is not empty
        n ← dequeue(Q)
        for all n` ∈ neighbours(n)
            if (n` ∉ visited) then
                enqueue(Q, n`)
                add n` to visited set
            end if
        end for
    end while
end procedure

Step 1: The start node is enqueued and also marked as visited in the following set of
instructions:
 enqueue (Q, n)
 Add n to visited set.
Step 2: The ‘while’ loop instruction set is executed when the queue is not empty.
Step 3: For each iteration of the ‘while’ loop, a node gets dequeued.
Step 4: Now, the ‘for’ loop runs until all the unvisited neighbours of the dequeued
node (n) are enqueued and marked as visited.
Step 5: For the first iteration of the ‘while’ loop, all the neighbour nodes of the start
node are enqueued. And for the second iteration, all the next-level unvisited neighbour
nodes of one of the neighbour nodes are enqueued.
 
In this way, all the neighbour nodes are enqueued and visited level-wise from the start
node. And after a certain number of iterations, all the nodes are dequeued, and the
algorithm ends.
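A minimal Java sketch of this pseudocode, assuming the same adjacency-list representation used in the DFS sketches above, might look like this:

import java.util.*;

public class GraphBFS {
    public static void bfs(Map<Integer, List<Integer>> adj, int start) {
        Queue<Integer> queue = new LinkedList<>();
        Set<Integer> visited = new HashSet<>();

        queue.add(start);
        visited.add(start);

        while (!queue.isEmpty()) {
            int node = queue.remove();
            System.out.print(node + " ");   // nodes are printed level by level
            for (int neighbour : adj.getOrDefault(node, Collections.emptyList())) {
                if (!visited.contains(neighbour)) {
                    queue.add(neighbour);
                    visited.add(neighbour); // mark as visited when enqueued, to avoid enqueuing twice
                }
            }
        }
    }
}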
  Summary
In this session, we covered the following topics:
 What is a graph abstract data type?
 Different types of graphs:
 Undirected graphs
 Directed graphs:
 Directed acyclic graphs
 Differences between graphs and trees
 Depth-first search:
 Pseudocode
 Breadth-first search:
 Pseudocode

Introduction to Edge Lists

So, in the video, you learnt what an edge list is. You also learnt that an edge list fails
in a scenario wherein you have an isolated node in your graph. To overcome this
problem, you will be introduced to a second graph implementation, an ‘adjacency
matrix’, in the next segment.
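As an illustration (a rough sketch, not the course's code file), an edge list can be stored as a simple list of (u, v) pairs. Notice that an isolated node, having no edges, never appears anywhere in this representation, which is exactly the limitation mentioned above.

import java.util.ArrayList;
import java.util.List;

public class EdgeListGraph {
    // each edge is just a pair of node labels
    static class Edge {
        final int u, v;
        Edge(int u, int v) { this.u = u; this.v = v; }
    }

    private final List<Edge> edges = new ArrayList<>();

    public void addEdge(int u, int v) {
        edges.add(new Edge(u, v));
    }

    // getting all neighbours of a node requires scanning every edge: O(E)
    public List<Integer> getAllNeighbours(int node) {
        List<Integer> neighbours = new ArrayList<>();
        for (Edge e : edges) {
            if (e.u == node) neighbours.add(e.v);
            if (e.v == node) neighbours.add(e.u); // undirected: check both ends of the edge
        }
        return neighbours;
    }
}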
 
But before moving on to the segment, attempt the question below to assess your
understanding of the concepts covered so far.
Performance Characteristics
of an Adjacency Matrix
The most important aspect of any algorithm is its performance characteristics, which
determine how it would perform in a given situation or under given circumstances.
Therefore, you need to take into account an algorithm’s performance when you make
your choice.
 
So, in the forthcoming video, we will calculate the time complexity of certain
operations on an adjacency matrix. Precisely, we will calculate the time complexity of
the following four operations:
 getAllNodes: Get all the nodes of the graph
 addNode: Add a node to the graph
 addEdge: Add an edge between two specified nodes
 getAllNeighbours: Get all the neighbours of a specified node
So, in the video, you learnt:
 To calculate the time complexity of certain important operations on an
adjacency matrix
 The practices that you can follow to improve the time complexities of the
following methods: addEdge and getAllNeighbours.
 
What are Dense Graphs and Sparse Graphs?
Let us now discuss ‘dense graphs’ and ‘sparse graphs’ in detail.
 
Dense graphs: A dense graph is a graph in which the number of edges is close to the
maximum possible edges for a given set of nodes. Given below is an example of a
dense graph.

Figure 1

 
Sparse graphs: Sparse graphs are connected graphs with a small number of edges connecting the nodes. In a sparse graph, there may or may not be an edge between a given pair of nodes; typically, the number of edges is of the same order as the number of vertices (roughly n edges for n vertices).
 
Figure 2

Now, let’s see what the time complexity of removing an edge from an adjacency
matrix is if all the vertices are not integral values.
 
The adjacency matrix and vertex list of the above weighted directed graph are given
below.

Adjacency Matrix and Vertex List


If you, for instance, want to remove an edge from X to U, then:
1. You have to traverse through the vertex list and find the indexes of X and U; here, they are 3 and 0, respectively:
 If the size of the vertex list is V, then this step takes linear time, which is O(V) in the worst case.
2. Now, set the (3, 0) cell to ‘0’:
 This step takes constant time.
 
So, the total time taken to remove an edge is O(V).
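For illustration, here is a rough sketch of an adjacency-matrix graph whose vertices are arbitrary labels, so a vertex list is needed to map a label to its row/column index; that linear lookup is what makes addEdge and removeEdge O(V). The class and method names are assumptions, not the module's code:

import java.util.ArrayList;
import java.util.List;

public class AdjacencyMatrixGraph {
    private final List<String> vertices = new ArrayList<>(); // vertex list: index -> label
    private final int[][] matrix;                            // matrix[i][j] = edge weight (0 means no edge)

    public AdjacencyMatrixGraph(int maxVertices) {
        matrix = new int[maxVertices][maxVertices];
    }

    public void addNode(String label) {                       // O(1)
        vertices.add(label);
    }

    public void addEdge(String from, String to, int weight) { // O(V), because of the index lookups
        matrix[indexOf(from)][indexOf(to)] = weight;
    }

    public void removeEdge(String from, String to) {          // O(V) to find the indexes, O(1) to clear the cell
        matrix[indexOf(from)][indexOf(to)] = 0;
    }

    private int indexOf(String label) {                       // linear scan of the vertex list: O(V)
        return vertices.indexOf(label);
    }
}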
