Ads M Tech Mid 2

A hash table is a data structure that allows for quick insertion, lookup, and removal of key-value pairs using a hash function to map keys to array indices. Key concepts include load factor, hash functions, collision resolution techniques, and dynamic resizing, with implementations available in various programming languages. Open addressing techniques for collision resolution include linear probing, quadratic probing, and double hashing, each with its own method for calculating probe sequences.

1. Explain the features of Hash Table functions and representations?

What is Hash Table?

A Hash table is defined as a data structure used to insert, look up, and remove key-value pairs
quickly. It operates on the hashing concept, where each key is translated by a hash function into
a distinct index in an array. The index functions as a storage location for the matching value. In
simple words, it maps the keys with the value.

Hash Function and Table

What is Load factor?

A hash table’s load factor is the number of elements stored in the table relative to the table’s size. If the load factor is high, the table becomes crowded, leading to more collisions and longer search times. An ideal load factor can be maintained with a good hash function and proper table resizing.

What is a Hash function?

A function that translates keys to array indices is known as a hash function. A good hash function should distribute the keys evenly across the array to reduce collisions and ensure quick lookup speeds.

 Integer universe assumption: The keys are assumed to be integers within a certain
range according to the integer universe assumption. This enables the use of basic hashing
operations like division or multiplication hashing.

 Hashing by division: This straightforward hashing technique uses the remainder of the key divided by the array’s size as the index. It performs well when the array size is a prime number and the keys are evenly spaced out.
 Hashing by multiplication: This hashing operation multiplies the key by a constant between 0 and 1, takes the fractional part of the result, and then multiplies that fraction by the array’s size to determine the index. It also works well when the keys are evenly distributed.
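Both methods above can be sketched in Python; the constant A ≈ 0.618 is an illustrative choice often suggested in textbooks, not prescribed by the text:

```python
import math

def hash_division(key, m):
    # Division method: index is the remainder of key / table size.
    # Works best when m is a prime number.
    return key % m

def hash_multiplication(key, m, A=0.6180339887):
    # Multiplication method: multiply the key by a constant 0 < A < 1,
    # take the fractional part, then scale by the table size.
    frac = (key * A) % 1
    return math.floor(m * frac)

print(hash_division(58, 11))        # index 3
print(hash_multiplication(58, 11))  # some index in 0..10
```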

Choosing a hash function:

Selecting a decent hash function is based on the properties of the keys and the intended
functionality of the hash table. Using a function that evenly distributes the keys and reduces
collisions is crucial.

Criteria based on which a hash function is chosen:

 To ensure that the number of collisions is kept to a minimum, a good hash function
should distribute the keys throughout the hash table in a uniform manner. This implies
that for all pairings of keys, the likelihood of two keys hashing to the same position in the
table should be rather constant.

 To enable speedy hashing and key retrieval, the hash function should be computationally
efficient.

 It ought to be challenging to deduce the key from its hash value. As a result, attempts to
guess the key using the hash value are less likely to succeed.

 A hash function should be flexible enough to adjust as the data being hashed changes. For
instance, the hash function needs to continue to perform properly if the keys being hashed
change in size or format.

Collision resolution techniques:

Collisions happen when two or more keys map to the same array index. Separate chaining and open addressing (with variants such as double hashing) are common techniques for resolving collisions.
 Open addressing: collisions are handled by probing for the next empty slot in the table. If the slot given by the hash function is already occupied, subsequent slots are probed until an empty one is found. There are several variants of this approach, including linear probing, quadratic probing, and double hashing.

 Separate Chaining: In separate chaining, each slot of the hash table holds a linked list of the objects that hash to it. If two keys hash to the same slot, both are stored in that slot’s linked list. This method is fairly simple to implement and can handle many collisions.

 Robin Hood hashing: To reduce the length of probe chains, collisions in Robin Hood hashing are addressed by swapping keys during insertion. When a new key hashes to an already-occupied slot, the algorithm compares how far each of the two keys is from its ideal slot. If the existing key is closer to its ideal slot than the new key is to its own, the two are swapped, and insertion continues with the displaced key. This tends to reduce the average chain length and the variance in probe lengths.

Dynamic resizing:

This feature enables the hash table to expand or contract in response to changes in the number of
elements contained in the table. This promotes a load factor that is ideal and quick lookup times.
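As a sketch of the resizing idea (the class name, the 0.75 threshold, and the doubling growth factor are illustrative choices, not prescribed by the text):

```python
class ResizingHashTable:
    """Minimal chained hash table that doubles its capacity
    whenever the load factor exceeds 0.75. A sketch only."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.size = 0
        self.buckets = [[] for _ in range(capacity)]

    def _index(self, key):
        return hash(key) % self.capacity

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # update existing key
                return
        bucket.append((key, value))
        self.size += 1
        if self.size / self.capacity > 0.75:   # load factor check
            self._resize(2 * self.capacity)

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

    def _resize(self, new_capacity):
        # Collect all pairs, rebuild the bucket array, and rehash.
        old_items = [pair for bucket in self.buckets for pair in bucket]
        self.capacity = new_capacity
        self.buckets = [[] for _ in range(new_capacity)]
        for k, v in old_items:
            self.buckets[self._index(k)].append((k, v))
```

Rehashing on resize is necessary because every index depends on the current capacity.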

Example Implementation of Hash Table

Python, Java, C++, and Ruby are just a few of the programming languages that support hash
tables. They can be used as a customized data structure in addition to frequently being included
in the standard library.

Example: hashIndex = key % noOfBuckets


Insert: Move to the bucket corresponding to the above-calculated hash index and insert the new
node at the end of the list.
Delete: To delete a node from the hash table, calculate the hash index for the key, move to the
bucket corresponding to that index, and search the list in that bucket to find and remove the
node with the given key (if found).
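The insert and delete steps above can be sketched with a list of chains (the name noOfBuckets follows the example formula; the bucket count 7 is an illustrative choice):

```python
noOfBuckets = 7
buckets = [[] for _ in range(noOfBuckets)]   # each bucket is a chain

def insert(key):
    # Move to the bucket for hashIndex = key % noOfBuckets and
    # append the new node at the end of the chain.
    buckets[key % noOfBuckets].append(key)

def delete(key):
    # Compute the hash index, then search that bucket's chain
    # and remove the node with the given key, if present.
    chain = buckets[key % noOfBuckets]
    if key in chain:
        chain.remove(key)

insert(15)   # 15 % 7 = 1
insert(22)   # 22 % 7 = 1 -> collides, chained in the same bucket
delete(15)
```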

2. Explain the probe sequence techniques present in Open Addressing?

Three techniques are commonly used to compute the probe sequence required for open
addressing:

1. Linear Probing.
2. Quadratic Probing.
3. Double Hashing.

1. Linear Probing:

It is a scheme in computer programming for resolving collisions in hash tables.

Suppose a new record R with key k is to be added to the memory table T, but the memory
location with hash address H (k) is already filled.

The natural way to resolve the collision is to assign R to the first available location following
T [h]. We assume that the table T with m locations is circular, so that T [0] comes after T [m-1].

The above collision resolution scheme is called "Linear Probing".

Linear probing is simple to implement, but it suffers from an issue known as primary clustering.
Long runs of occupied slots build up, increasing the average search time. Clusters arise because
an empty slot preceded by i full slots gets filled next with probability (i + 1)/m, so long runs of
occupied slots tend to get longer.

Given an ordinary hash function h': U → {0, 1, ..., m-1}, the method of linear probing uses the
hash function

h (k, i) = (h' (k) + i) mod m

where m is the size of the hash table, h' (k) = k mod m, and i = 0, 1, ..., m-1.

Given key k, the first slot probed is T [h' (k)]. We next probe slot T [h' (k) + 1], and so on, up to
slot T [m-1]; then we wrap around to slots T [0], T [1], ..., until finally slot T [h' (k) - 1]. Since
the initial probe position determines the entire probe sequence, only m distinct probe sequences
are used with linear probing.

Example: Consider inserting the keys 24, 36, 58,65,62,86 into a hash table of size m=11 using
linear probing, consider the primary hash function is h' (k) = k mod m.

Solution: Initial state of hash table

Insert 24. We know h (k, i) = [h' (k) + i] mod m


Now h (24, 0) = [24 mod 11 + 0] mod 11
= (2+0) mod 11 = 2 mod 11 = 2
Since T [2] is free, insert key 24 at this place.

Insert 36. Now h (36, 0) = [36 mod 11 + 0] mod 11


= [3+0] mod 11 = 3
Since T [3] is free, insert key 36 at this place.

Insert 58. Now h (58, 0) = [58 mod 11 +0] mod 11


= [3+0] mod 11 =3
Since T [3] is not free, so the next sequence is
h (58, 1) = [58 mod 11 +1] mod 11
= [3+1] mod 11= 4 mod 11=4
T [4] is free; Insert key 58 at this place.

Insert 65. Now h (65, 0) = [65 mod 11 +0] mod 11


= (10 +0) mod 11= 10
T [10] is free. Insert key 65 at this place.

Insert 62. Now h (62, 0) = [62 mod 11 +0] mod 11


= [7 + 0] mod 11 = 7
T [7] is free. Insert key 62 at this place.

Insert 86. Now h (86, 0) = [86 mod 11 + 0] mod 11


= [9 + 0] mod 11 = 9
T [9] is free. Insert key 86 at this place.
Thus, after inserting all the keys, the occupied slots are T [2] = 24, T [3] = 36, T [4] = 58,
T [7] = 62, T [9] = 86 and T [10] = 65.
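The worked example above can be replayed with a short sketch (the helper name is illustrative):

```python
def linear_probe_insert(table, key):
    # h(k, i) = (k mod m + i) mod m, trying i = 0, 1, ... until a free slot.
    m = len(table)
    for i in range(m):
        idx = (key % m + i) % m
        if table[idx] is None:
            table[idx] = key
            return idx
    raise RuntimeError("table is full")

T = [None] * 11
for k in [24, 36, 58, 65, 62, 86]:
    linear_probe_insert(T, k)
# 58 collides with 36 at slot 3 and is placed in slot 4.
```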
2. Quadratic Probing:

Suppose a record R with key k has the hash address H (k) = h. Then, instead of searching the
locations with addresses h, h+1, h+2, ..., we search the locations with addresses

h, h+1, h+4, h+9, ..., h+i^2

Quadratic Probing uses a hash function of the form

h (k,i) = (h' (k) + c1i + c2i2) mod m

Where (as in linear probing) h' is an auxiliary hash function c1 and c2 ≠0 are auxiliary constants
and i=0, 1...m-1. The initial position is T [h' (k)]; later position probed is offset by the amount
that depend in a quadratic manner on the probe number i.

Example: Consider inserting the keys 74, 28, 36,58,21,64 into a hash table of size m =11 using
quadratic probing with c1=1 and c2=3. Further consider that the primary hash function is h' (k) =
k mod m.

Solution: For Quadratic Probing, we have

h (k, i) = (k mod m + c1 i + c2 i^2) mod m

This is the initial state of hash table

Here c1 = 1 and c2 = 3, so

h (k, i) = (k mod m + i + 3i^2) mod m
Insert 74.

h (74,0)= (74 mod 11+0+3x0) mod 11


= (8 +0+0) mod 11 = 8
T [8] is free; insert the key 74 at this place.

Insert 28.

h (28, 0) = (28 mod 11 + 0 + 3 x 0) mod 11


= (6 +0 + 0) mod 11 = 6.
T [6] is free; insert key 28 at this place.

Insert 36.

h (36, 0) = (36 mod 11 + 0 + 3 x 0) mod 11


= (3 + 0+0) mod 11=3
T [3] is free; insert key 36 at this place.

Insert 58.

h (58, 0) = (58 mod 11 + 0 + 3 x 0) mod 11


= (3 + 0 + 0) mod 11 = 3
T [3] is not free, so next probe sequence is computed as
h (58, 1) = (58 mod 11 + 1 + 3 x 1^2) mod 11
= (3 + 1 + 3) mod 11
=7 mod 11= 7
T [7] is free; insert key 58 at this place.

Insert 21.

h (21, 0) = (21 mod 11 + 0 + 3 x 0) mod 11


= (10 + 0 + 0) mod 11 = 10
T [10] is free; insert key 21 at this place.

Insert 64.

h (64, 0) = (64 mod 11 + 0 + 3 x 0) mod 11


= (9 + 0 + 0) mod 11 = 9.
T [9] is free; insert key 64 at this place.

Thus, after inserting all the keys, the occupied slots are T [3] = 36, T [6] = 28, T [7] = 58,
T [8] = 74, T [9] = 64 and T [10] = 21.
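The quadratic probing example can likewise be replayed (the helper name is illustrative):

```python
def quadratic_probe_insert(table, key, c1=1, c2=3):
    # h(k, i) = (k mod m + c1*i + c2*i^2) mod m, trying i = 0, 1, ...
    m = len(table)
    for i in range(m):
        idx = (key % m + c1 * i + c2 * i * i) % m
        if table[idx] is None:
            table[idx] = key
            return idx
    raise RuntimeError("no free slot found along the probe sequence")

T = [None] * 11
for k in [74, 28, 36, 58, 21, 64]:
    quadratic_probe_insert(T, k)
# 58 collides with 36 at slot 3; the i = 1 probe lands it in slot 7.
```

Note that, unlike linear probing, a quadratic probe sequence is not guaranteed to visit every slot, so insertion can fail even when the table is not completely full.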

3. Double Hashing:

Double Hashing is one of the best techniques available for open addressing because the
permutations produced have many of the characteristics of randomly chosen permutations.

Double hashing uses a hash function of the form

h (k, i) = (h1(k) + i h2 (k)) mod m


Where h1 and h2 are auxiliary hash functions and m is the size of the hash table.

Typically h1 (k) = k mod m and h2 (k) = k mod m', where m' is slightly less than m (say m-1 or m-2).

Example: Consider inserting the keys 76, 26, 37,59,21,65 into a hash table of size m = 11 using
double hashing. Consider that the auxiliary hash functions are h1 (k)=k mod 11 and h2(k) = k
mod 9.

Solution: Initial state of Hash table is

1. Insert 76.
h1(76) = 76 mod 11 = 10
h2(76) = 76 mod 9 = 4
h (76, 0) = (10 + 0 x 4) mod 11
= 10 mod 11 = 10
T [10] is free, so insert key 76 at this place.

2. Insert 26.
h1(26) = 26 mod 11 = 4
h2(26) = 26 mod 9 = 8
h (26, 0) = (4 + 0 x 8) mod 11
= 4 mod 11 = 4
T [4] is free, so insert key 26 at this place.

3. Insert 37.
h1(37) = 37 mod 11 = 4
h2(37) = 37 mod 9 = 1
h (37, 0) = (4 + 0 x 1) mod 11 = 4 mod 11 = 4
T [4] is not free, the next probe sequence is
h (37, 1) = (4 + 1 x 1) mod 11 = 5 mod 11 = 5
T [5] is free, so insert key 37 at this place.

4. Insert 59.
h1(59) = 59 mod 11 = 4
h2(59) = 59 mod 9 = 5
h (59, 0) = (4 + 0 x 5) mod 11 = 4 mod 11 = 4
Since, T [4] is not free, the next probe sequence is
h (59, 1) = (4 + 1 x 5) mod 11 = 9 mod 11 = 9
T [9] is free, so insert key 59 at this place.
5. Insert 21.
h1(21) = 21 mod 11 = 10
h2(21) = 21 mod 9 = 3
h (21, 0) = (10 + 0 x 3) mod 11 = 10 mod 11 = 10
T [10] is not free, the next probe sequence is
h (21, 1) = (10 + 1 x 3) mod 11 = 13 mod 11 = 2
T [2] is free, so insert key 21 at this place.

6. Insert 65.
h1(65) = 65 mod 11 = 10
h2(65) = 65 mod 9 = 2
h (65, 0) = (10 + 0 x 2) mod 11 = 10 mod 11 = 10
T [10] is not free, the next probe sequence is
h (65, 1) = (10 + 1 x 2) mod 11 = 12 mod 11 = 1
T [1] is free, so insert key 65 at this place.
Thus, after inserting all the keys, the occupied slots are T [1] = 65, T [2] = 21, T [4] = 26,
T [5] = 37, T [9] = 59 and T [10] = 76.
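The double hashing example can be replayed the same way (the helper name is illustrative; m' = 9 matches the example's h2):

```python
def double_hash_insert(table, key, m2=9):
    # h(k, i) = (h1(k) + i * h2(k)) mod m,
    # with h1(k) = k mod m and h2(k) = k mod m2.
    m = len(table)
    for i in range(m):
        idx = (key % m + i * (key % m2)) % m
        if table[idx] is None:
            table[idx] = key
            return idx
    raise RuntimeError("no free slot found along the probe sequence")

T = [None] * 11
for k in [76, 26, 37, 59, 21, 65]:
    double_hash_insert(T, k)
# 37, 59, 21 and 65 all collide on their first probe but step by
# different amounts h2(k), which is what breaks up clustering.
```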

3. How to implement Priority Queue – using Heap and Array?

A Priority Queue is a data structure that allows you to insert elements with a priority, and
retrieve the element with the highest priority.

You can implement a priority queue using either an array or a heap. Both array and heap-based
implementations of priority queues have their own advantages and disadvantages. Arrays are
generally easier to implement, but they can be slower because inserting and deleting elements
requires shifting the elements in the array. Heaps are more efficient, but they can be more
complex to implement. You can also refer to the Difference between Heaps and Sorted Array for
a general comparison between the two.

Heap-based implementation of a priority queue

It involves creating a binary heap data structure and maintaining the heap property as elements
are inserted and removed. In a binary heap, the element with the highest priority is always the
root of the heap. To insert an element, you would add it to the end of the heap and then perform
the necessary heap operations (such as swapping the element with its parent) to restore the heap
property. To retrieve the highest priority element, you would simply return the root of the heap.

To implement a priority queue using a heap, we can use the following steps:
 Create a heap data structure (either a max heap or a min-heap)
 To insert an element into the priority queue, add the element to the heap using the heap’s
insert function. The heap will automatically rearrange the elements to maintain the heap
property.
 To remove the highest priority element (in a max heap) or the lowest priority element (in
a min-heap), use the heap’s remove function. This will remove the root of the tree and
rearrange the remaining elements to maintain the heap property.

The MaxHeap class has the following functions:

 __init__: Initializes the heap as an empty list


 insert: Inserts a value into the heap and calls the heapify_up function to maintain the
heap property
 remove: Removes the root of the heap (the maximum value in a max heap) and calls the
heapify_down function to maintain the heap property
 heapify_up: Starting from the bottom of the heap, compares each node to its parent and
swaps them if necessary to maintain the heap property
 heapify_down: Starting from the root of the heap, compares each node to its children
and swaps them if necessary to maintain the heap property.

The PriorityQueue class has the following functions:

 __init__: Initializes the priority queue with an empty MaxHeap


 insert: Inserts a value into the priority queue with a given priority. The priority is stored
along with the value in a tuple, and the tuple is inserted into the MaxHeap
 remove: Removes the highest priority value from the priority queue by calling the
remove function on the MaxHeap, and returns only the value (not the priority)
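The MaxHeap and PriorityQueue classes described above can be sketched as follows (a minimal sketch; when priorities tie, the tuple comparison falls back to comparing values, so values are assumed comparable):

```python
class MaxHeap:
    def __init__(self):
        self.heap = []                      # heap stored as a flat list

    def insert(self, value):
        self.heap.append(value)
        self.heapify_up(len(self.heap) - 1)

    def remove(self):
        # Assumes a non-empty heap: swap root with the last element,
        # pop it, then sift the new root down.
        root = self.heap[0]
        last = self.heap.pop()
        if self.heap:
            self.heap[0] = last
            self.heapify_down(0)
        return root

    def heapify_up(self, i):
        # Compare each node to its parent, swapping while it is larger.
        parent = (i - 1) // 2
        while i > 0 and self.heap[i] > self.heap[parent]:
            self.heap[i], self.heap[parent] = self.heap[parent], self.heap[i]
            i, parent = parent, (parent - 1) // 2

    def heapify_down(self, i):
        # Compare each node to its children, swapping with the larger one.
        n = len(self.heap)
        while True:
            largest, l, r = i, 2 * i + 1, 2 * i + 2
            if l < n and self.heap[l] > self.heap[largest]:
                largest = l
            if r < n and self.heap[r] > self.heap[largest]:
                largest = r
            if largest == i:
                return
            self.heap[i], self.heap[largest] = self.heap[largest], self.heap[i]
            i = largest


class PriorityQueue:
    def __init__(self):
        self.heap = MaxHeap()

    def insert(self, value, priority):
        # Store (priority, value) so tuples compare by priority first.
        self.heap.insert((priority, value))

    def remove(self):
        return self.heap.remove()[1]        # return only the value
```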

Array-based implementation of a priority queue:

It involves creating an array of elements and keeping it sorted in ascending or descending order
of priority. To insert an element, you would need to shift the elements in the array to make room
for the new element and then insert it at the appropriate position based on its priority. To retrieve
the highest priority element, you would simply return the first element in the array.

To implement a priority queue using arrays, we can use the following steps:

 Create an array to store the elements of the priority queue


 To insert an element into the priority queue, add the element to the end of the array
 To remove the highest priority element (or the lowest, depending on the ordering),
perform the following steps:
 Find the index of the highest (or lowest) priority element in the array
 Swap the element at that index with the element at the end of the array
 Remove the element at the end of the array
Data structure   Insert     Search     Find min   Delete min
Sorted array     O(n)       O(log n)   O(1)       O(n)
Min heap         O(log n)   O(n)       O(1)       O(log n)
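A sorted-array variant matching the "Sorted array" row of the table can be sketched with Python's bisect module (keeping the array sorted ascending so the maximum sits at the end; names are illustrative):

```python
import bisect

class ArrayPriorityQueue:
    """Sorted-array priority queue. The array stays sorted ascending
    by priority, so the highest-priority element is always last."""

    def __init__(self):
        self.items = []            # list of (priority, value), kept sorted

    def insert(self, value, priority):
        # O(n): insort shifts elements to keep the array sorted.
        bisect.insort(self.items, (priority, value))

    def remove_max(self):
        # O(1): the maximum priority sits at the end of the array.
        return self.items.pop()[1]
```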

 Both arrays and heaps can be used to implement priority queues, but heaps are generally
more efficient because they offer faster insertion and retrieval times. The choice of data
structure will depend on the specific requirements of your application. It is important to
consider the trade-offs between the ease of implementation and the performance of the
data structure when deciding which one to use.

4 . Explain Brief Introduction to Binary Search Tree algorithm ?

A Binary Search Tree (or BST) is a data structure used in computer science for organizing and
storing data in a sorted manner. Each node in a Binary Search Tree has at most two children, a
left child and a right child, with the left child containing values less than the parent node and the
right child containing values greater than the parent node. This hierarchical structure allows for
efficient searching, insertion, and deletion operations on the data stored in the tree.

Binary Search Tree

Introduction to Binary Search Trees:

 Introduction to BST

 Applications of BST

Basic Operations on BST:

 Insertion in BST

 Searching in BST

 Deletion in BST
 BST Traversals

 Minimum in BST

 Maximum in BST

 Floor in BST

 Ceil in BST

 Inorder Successor in BST

 Inorder Predecessor in BST

 Handling duplicates in BST

Algorithm to search an element in Binary search tree

Search (root, item)

Step 1 - if (item = root → data) or (root = NULL)
             return root
         else if (item < root → data)
             return Search(root → left, item)
         else
             return Search(root → right, item)
         END if

Step 2 - END
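The algorithm above maps directly to Python (the small Node class and the sample tree, built from key values used in the deletion examples below, are illustrative):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def search(root, item):
    # Found the item, or fell off the tree (item not present).
    if root is None or root.data == item:
        return root
    if item < root.data:
        return search(root.left, item)   # smaller keys live on the left
    return search(root.right, item)      # larger keys live on the right

# Sample tree:  45 -> (12, 79), 79 -> (55, 90)
root = Node(45)
root.left = Node(12)
root.right = Node(79)
root.right.left = Node(55)
root.right.right = Node(90)
```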

Deletion in Binary Search tree

In a binary search tree, we must delete a node while keeping in mind that the BST property is not
violated. To delete a node from a BST, three possible situations can occur -

 The node to be deleted is a leaf node,

 The node to be deleted has only one child, or
 The node to be deleted has two children.

We will understand the situations listed above in detail.


When the node to be deleted is the leaf node

It is the simplest case to delete a node in BST. Here, we have to replace the leaf node with NULL
and simply free the allocated space.

We can see the process of deleting a leaf node from a BST in the below image. Suppose we have
to delete node 90; as the node to be deleted is a leaf node, it will be replaced with NULL, and the
allocated space will be freed.

When the node to be deleted has only one child

In this case, we have to replace the target node with its child, and then delete the child node. It
means that after replacing the target node with its child node, the child node will now contain the
value to be deleted. So, we simply have to replace the child node with NULL and free up the
allocated space.

We can see the process of deleting a node with one child from a BST in the below image.
Suppose we have to delete the node 79; as the node to be deleted has only one child, it will be
replaced with its child 55.

So, the replaced node will now be a leaf node that can be easily deleted.

When the node to be deleted has two children


This case of deleting a node in a BST is the most complex of the three. In such a case, the steps
to be followed are listed as follows -

 First, find the inorder successor of the node to be deleted.

 After that, repeatedly swap the node with its inorder successor until the target node
reaches a leaf of the tree.
 Finally, replace the node with NULL and free up the allocated space.

The inorder successor is required when the right child of the node is not empty. We can obtain
the inorder successor by finding the minimum element in the right child of the node.

We can see the process of deleting a node with two children from a BST in the below image.
Suppose we have to delete node 45, which is the root node; as the node to be deleted has two
children, it will be replaced with its inorder successor. Node 45 then moves down to a leaf of the
tree, where it can be deleted easily.

Now let's understand how insertion is performed on a binary search tree.

Insertion in Binary Search tree

A new key in a BST is always inserted at a leaf. To insert an element, we start searching from the
root node; if the key to be inserted is less than the root's key, we search for an empty location in
the left subtree; otherwise, we search for an empty location in the right subtree and insert the
data there. Insertion in a BST is similar to searching, as we always maintain the rule that the left
subtree is smaller than the root and the right subtree is larger than the root.

Now, let's see the process of inserting a node into BST using an example.
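The insertion and deletion procedures can be sketched together. For the two-children case this sketch uses the common value-copy variant (copy the inorder successor's key, then delete the successor from the right subtree), which achieves the same result as the repeated swapping described above:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.left = self.right = None

def insert(root, data):
    # New keys always end up at a leaf position.
    if root is None:
        return Node(data)
    if data < root.data:
        root.left = insert(root.left, data)
    elif data > root.data:
        root.right = insert(root.right, data)
    return root

def delete(root, key):
    if root is None:
        return None
    if key < root.data:
        root.left = delete(root.left, key)
    elif key > root.data:
        root.right = delete(root.right, key)
    else:
        # Cases 1 and 2: zero or one child -> splice the node out.
        if root.left is None:
            return root.right
        if root.right is None:
            return root.left
        # Case 3: two children -> copy the inorder successor
        # (minimum of the right subtree), then delete it there.
        succ = root.right
        while succ.left:
            succ = succ.left
        root.data = succ.data
        root.right = delete(root.right, succ.data)
    return root

root = None
for k in [45, 12, 79, 55, 90]:
    root = insert(root, k)
root = delete(root, 90)   # case 1: leaf
root = delete(root, 79)   # case 2: one child, replaced by 55
root = delete(root, 45)   # case 3: two children, successor is 55
```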
The complexity of the Binary Search tree

Let's see the time and space complexity of the Binary search tree. We will see the time
complexity for insertion, deletion, and searching operations in best case, average case, and worst
case.

1. Time Complexity

Operations   Best case time complexity   Average case time complexity   Worst case time complexity
Insertion    O(log n)                    O(log n)                       O(n)
Deletion     O(log n)                    O(log n)                       O(n)
Search       O(log n)                    O(log n)                       O(n)

Where 'n' is the number of nodes in the given tree.

2. Space Complexity

Operations   Space complexity
Insertion    O(n)
Deletion     O(n)
Search       O(n)

 The space complexity of all operations of Binary search tree is O(n).

5. what is AVL Tree ? and Explain Rotations in AVL Tree ?

An AVL tree is defined as a self-balancing Binary Search Tree (BST) where the difference
between the heights of the left and right subtrees of any node cannot be more than one.

The difference between the heights of the left subtree and the right subtree for any node is known
as the balance factor of the node.

The AVL tree is named after its inventors, Georgy Adelson-Velsky and Evgenii Landis, who
published it in their 1962 paper “An algorithm for the organization of information”.

Example of AVL Trees:

AVL tree

The above tree is AVL because the differences between the heights of left and right subtrees for
every node are less than or equal to 1.

Operations on an AVL Tree:

 Insertion

 Deletion

 Searching [It is similar to performing a search in BST]


Rotating the subtrees in an AVL Tree:

An AVL tree may rotate in one of the following four ways to keep itself balanced:

Left Rotation:

When a node is added into the right subtree of the right subtree, if the tree gets out of balance, we
do a single left rotation.

Left-Rotation in AVL tree

Right Rotation:

If a node is added to the left subtree of the left subtree and the AVL tree gets out of balance, we
do a single right rotation.

Right-Rotation in AVL Tree

Left-Right Rotation:
A left-right rotation is a combination in which a left rotation is performed first, followed by a
right rotation.

Left-Right Rotation in AVL tree

Right-Left Rotation:

A right-left rotation is a combination in which a right rotation is performed first, followed by a
left rotation.

Right-Left Rotation in AVL tree
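The four rotation cases above can be sketched with explicit height bookkeeping (function names are illustrative; a minimal sketch, not a full AVL implementation):

```python
class AVLNode:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None
        self.height = 1

def height(n):
    return n.height if n else 0

def balance_factor(n):
    # height(left) - height(right); must stay within {-1, 0, 1}.
    return height(n.left) - height(n.right)

def update(n):
    n.height = 1 + max(height(n.left), height(n.right))

def rotate_right(z):
    # Single right rotation: z's left child y comes up, z moves down.
    y = z.left
    z.left = y.right
    y.right = z
    update(z)          # z is now lower, so update it first
    update(y)
    return y

def rotate_left(z):
    # Single left rotation: z's right child y comes up, z moves down.
    y = z.right
    z.right = y.left
    y.left = z
    update(z)
    update(y)
    return y

def rebalance(n):
    update(n)
    bf = balance_factor(n)
    if bf > 1:                                   # left-heavy
        if balance_factor(n.left) < 0:
            n.left = rotate_left(n.left)         # Left-Right case
        return rotate_right(n)                   # Left-Left case
    if bf < -1:                                  # right-heavy
        if balance_factor(n.right) > 0:
            n.right = rotate_right(n.right)      # Right-Left case
        return rotate_left(n)                    # Right-Right case
    return n

def insert(root, key):
    if root is None:
        return AVLNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return rebalance(root)
```

For example, inserting 10, 20, 30 in order triggers the Right-Right case, and a single left rotation makes 20 the new root.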

Advantages of AVL Tree:

1. AVL trees can balance themselves and therefore provide O(log n) time complexity for
search, insert and delete.

2. It is a BST only (with balancing), so items can be traversed in sorted order.

3. Since the balancing rules are strict compared to Red Black Tree, AVL trees in general
have relatively less height and hence the search is faster.
4. AVL tree is relatively less complex to understand and implement compared to Red Black
Trees.

Disadvantages of AVL Tree:

1. It is more difficult to implement than a normal BST, though easier than a Red-Black tree.

2. AVL trees are less used than Red-Black trees: due to their rather strict balance, insertion
and removal operations are more complicated, as more rotations are performed.

Applications of AVL Tree:

1. The AVL Tree is used as a first example of a self-balancing BST when teaching DSA, as it
is easier to understand and implement than the Red-Black tree.

2. It suits applications where insertions and deletions are less common but data lookups are
frequent, along with other BST operations like sorted traversal, floor, ceil, min and max.

3. (By contrast, the Red-Black tree is more commonly implemented in language libraries,
e.g. map and set in C++, and TreeMap and TreeSet in Java.)

4. AVL Trees can be used in real-time environments where predictable and consistent
performance is required.

Introduction to Red-Black Tree

Binary search trees are a fundamental data structure, but their performance can suffer if the tree
becomes unbalanced. Red Black Trees are a type of balanced binary search tree that use a set
of rules to maintain balance, ensuring logarithmic time complexity for operations like insertion,
deletion, and searching, regardless of the initial shape of the tree. Red Black Trees are self-
balancing, using a simple color-coding scheme to adjust the tree after each modification.

Red-Black Tree
6. Basic Operations on RED – BLACK Trees and also rotations ?

What is a Red-Black Tree?


A Red-Black Tree is a self-balancing binary search tree where each node has an additional
attribute: a color, which can be either red or black. The primary objective of these trees is to
maintain balance during insertions and deletions, ensuring efficient data retrieval and
manipulation.

Properties of Red-Black Trees


A Red-Black Tree has the following properties:

1. Node Color: Each node is either red or black.

2. Root Property: The root of the tree is always black.

3. Red Property: Red nodes cannot have red children (no two consecutive red nodes on any path).

4. Black Property: Every path from a node to its descendant null nodes (leaves) has the same
number of black nodes.

5. Leaf Property: All leaves (NIL nodes) are black.

These properties ensure that the longest path from the root to any leaf is no more than twice as
long as the shortest path, maintaining the tree’s balance and efficient performance.

Example of Red-Black Tree:


Basic Operations on Red-Black Tree:
The basic operations on a Red-Black Tree include:

1. Insertion

2. Search

3. Deletion

4. Rotation

1. Insertion
Inserting a new node in a Red-Black Tree involves a two-step process: performing a standard
binary search tree (BST) insertion, followed by fixing any violations of Red-Black properties.

Insertion Steps

1. BST Insert: Insert the new node like in a standard BST.

2. Fix Violations:

o If the parent of the new node is black, no properties are violated.

o If the parent is red, the tree might violate the Red Property, requiring fixes.
Fixing Violations During Insertion

After inserting the new node as a red node, we might encounter several cases depending on the
colors of the node’s parent and uncle (the sibling of the parent):

 Case 1: Uncle is Red: Recolor the parent and uncle to black, and the grandparent to red. Then
move up the tree to check for further violations.

 Case 2: Uncle is Black:

o Sub-case 2.1: Node is a right child: Perform a left rotation on the parent.

o Sub-case 2.2: Node is a left child: Perform a right rotation on the grandparent and
recolor appropriately.

2. Searching
Searching for a node in a Red-Black Tree is similar to searching in a standard Binary Search
Tree (BST). The search operation follows a straightforward path from the root to a leaf,
comparing the target value with the current node’s value and moving left or right accordingly.

Search Steps

1. Start at the Root: Begin the search at the root node.

2. Traverse the Tree:

o If the target value is equal to the current node’s value, the node is found.

o If the target value is less than the current node’s value, move to the left child.

o If the target value is greater than the current node’s value, move to the right child.

3. Repeat: Continue this process until the target value is found or a NIL node is reached (indicating
the value is not present in the tree).

3. Deletion
Deleting a node from a Red-Black Tree also involves a two-step process: performing the BST
deletion, followed by fixing any violations that arise.

Deletion Steps

1. BST Deletion: Remove the node using standard BST rules.


2. Fix Double Black:

o If a black node is deleted, a “double black” condition might arise, which requires specific
fixes.

Fixing Violations During Deletion

When a black node is deleted, we handle the double black issue based on the sibling’s color and
the colors of its children:

 Case 1: Sibling is Red: Rotate the parent and recolor the sibling and parent.

 Case 2: Sibling is Black:

o Sub-case 2.1: Sibling’s children are black: Recolor the sibling and propagate the double
black upwards.

o Sub-case 2.2: At least one of the sibling’s children is red:

 If the sibling’s far child is red: Perform a rotation on the parent and sibling, and
recolor appropriately.

 If the sibling’s near child is red: Rotate the sibling and its child, then handle as
above.

4. Rotation
Rotations are fundamental operations in maintaining the balanced structure of a Red-Black Tree
(RBT). They help to preserve the properties of the tree, ensuring that the longest path from the
root to any leaf is no more than twice the length of the shortest path. Rotations come in two
types: left rotations and right rotations.

1. Left Rotation

A left rotation at node x moves x down to the left and its right child y up to take x’s place.

Before Rotation:

x
\
y
/ \
a b

After Left Rotation:

y
/ \
x b
\
a

Left Rotation Steps:

1. Set y to be the right child of x.

2. Move y’s left subtree to x’s right subtree.

3. Update the parent pointer of the moved subtree.

4. Update x’s old parent to point to y instead of x.

5. Set y’s left child to x.

6. Update x’s parent to y.

Pseudocode of Left Rotation:


// Utility function to perform a left rotation around node x
void leftRotate(Node* x)
{
    Node* y = x->right;        // y will take x's place
    x->right = y->left;        // move y's left subtree to x's right
    if (y->left != nullptr) {
        y->left->parent = x;
    }
    y->parent = x->parent;     // link y to x's old parent
    if (x->parent == nullptr) {
        root = y;              // x was the root
    }
    else if (x == x->parent->left) {
        x->parent->left = y;
    }
    else {
        x->parent->right = y;
    }
    y->left = x;               // put x on y's left
    x->parent = y;
}

2. Right Rotation

A right rotation at node x moves x down to the right and its left child y up to take x’s place.
Before Right Rotation:

    x
   /
  y
 / \
a   b

After Right Rotation:

    y
   / \
  a   x
     /
    b

Right Rotation Steps:

1. Set y to be the left child of x.

2. Move y’s right subtree to x’s left subtree.

3. Update the parent pointer of the moved subtree.

4. Update x’s old parent to point to y instead of x.

5. Set y’s right child to x.

6. Update x’s parent to y.

