
Daa2 Cs Report Grp1

This case study report from C.V. Raman Global University analyzes various data structures including Splay Trees, B-Trees, 2-3 Trees, Tournament Trees, and Interval Trees, highlighting their unique features, advantages, and disadvantages. The report emphasizes the importance of understanding these structures for algorithm design and includes a declaration of originality, acknowledgments, and a detailed examination of Splay Trees. It also outlines the time complexity of operations in Splay Trees, noting their efficiency and potential drawbacks.

C.V. RAMAN GLOBAL UNIVERSITY
BHUBANESWAR, ODISHA-752054
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

DESIGN AND ANALYSIS OF ALGORITHM


CASE STUDY REPORT ON
TOPIC: Write short notes and a comparative view of Splay Trees,
B-Trees, 2-3 Trees, Tournament Trees, and Interval Trees, and list 5
research papers published in this area.
Submitted By: Sub-Group 1 (REGN No. 569-581), Students from Group 6

UNDER THE GUIDANCE OF Ms. SUPRIYA PANIGRAHI


GROUP MEMBERS
Sl No.  NAME                     REGISTRATION NO.
1       RISHABH RAJ              2101020569
2       SOUMYA RANJAN RATH       2101020570
3       PIYUSH KUMAR             2101020571
4       KOUSHIK KUMAR GHOSH      2101020572
5       SIBA SWAGAT MISHRA       2101020573
6       CHINTADA SANTOSH KUMAR   2101020574
7       ANKIT PANDA              2101020575
8       PARTHA SARATHI SAHOO     2101020576
9       JAY KUMAR                2101020577
10      ANANYA ARYA              2101020578
11      ARYADUTTA KHANDUAL       2101020579
12      UTKAL KUMAR SARANGI      2101020580
13      UTKARSH RAJ              2101020581


DECLARATION
We hereby declare that the work presented in this Case Study is the
outcome of the investigation performed by us under the guidance of
Ms. SUPRIYA PANIGRAHI. We declare that no part of this Case Study
has been submitted elsewhere for the award of any degree or diploma
and that it has been edited solely by us.
This is to certify that the work in this Case Study report, submitted by
Sub-Group 1 (REGN No. 569-581), Students from Group 6, for partial
fulfilment of the requirements for the award of Bachelor of
Technology in Computer Science & Engineering to C. V. Raman Global
University, Odisha, is bona fide work carried out by us.
ACKNOWLEDGEMENT
We would like to articulate our deep gratitude to Ms. SUPRIYA
PANIGRAHI, Department of Computer Science & Engineering, for
her constant support and guidance during our Case Study journey.
We also express our gratitude to her for her invaluable suggestion
and constant encouragement all through the project work.
We want to convey our deep gratitude to the Computer Science
Department for giving us this chance to develop under the direction
of such a prestigious academic member. The case study programme
has been a rewarding experience that has given us the opportunity to
apply abstract ideas to practical situations and has given us the
information and abilities we need to succeed in our future
endeavours. A compilation of this nature could never have been
attempted without reference to, and inspiration from, the works of
others, whose details are mentioned in the references section. We
acknowledge our indebtedness to all of them. Further, we would like
to express our gratitude to our parents and God, who directly or
indirectly encouraged and motivated us during this work.
ABSTRACT
Splay Trees self-adjust to keep frequently accessed nodes near the
root, thus improving overall efficiency. B-Trees are designed for
efficient storage and retrieval of large amounts of data, while 2-3
Trees are similar but restrict each internal node to two or three
children. Tournament Trees are binary trees that represent the
winners of matches, while Interval Trees are search trees used to
efficiently store and query intervals, in particular to find all intervals
that overlap a given interval. Each of these structures has unique
advantages and disadvantages and can be used in various applications
depending on the specific needs. Understanding the strengths and
weaknesses of these data structures is important for algorithm
designers and developers, as it can help them select the best data
structure for a given application and optimize the performance of
their algorithms.
INTRODUCTION
Splay Trees, B-Trees, 2-3 Trees, Tournament Trees, and Interval Trees
are important data structures used in the design and analysis of
algorithms.
Splay Trees are self-adjusting binary search trees that restructure
themselves to provide efficient access to recently accessed items.
They are useful for applications that require frequent access to data
items, and can improve performance by reducing the time
complexity of subsequent access operations.
B-Trees are balanced multiway search trees that are commonly used
in file systems and database systems. They are optimized for disk
access, making them well-suited for applications that need to store
large amounts of data on disk.
2-3 Trees are balanced search trees that can store one or two keys
per node. They maintain balance by splitting and merging nodes as
necessary, ensuring that the tree remains balanced and efficient.
They are useful for applications that require efficient searching and
insertion of data items.
Tournament Trees are binary trees that are used to implement
priority queues and sorting algorithms. Each node in the tree
represents the winner of a tournament between its two child nodes,
allowing for efficient selection of the minimum or maximum element
in the tree.
Interval Trees are search trees that are used to efficiently find
overlapping intervals in a set of intervals. They are useful for
applications that need to find, for example, all intervals that overlap a
given interval.
INDEX

1. SPLAY TREE
2. B-TREE
3. 2-3 TREE
4. TOURNAMENT TREE
5. INTERVAL TREE
6. RESEARCH PAPERS
7. CONCLUSION
8. REFERENCES
SPLAY TREE
A Splay tree is a self-adjusting binary search tree that restructures
itself after each access operation, bringing the most recently
accessed node to the root of the tree. This property makes it efficient
for searching, inserting, and deleting nodes with a given key, with an
amortized time complexity of O(log n). Because frequently accessed
items stay near the root, Splay trees are a good fit for applications
such as caching, symbol tables, and network routing tables.

Features of Splay tree


1. Self-adjusting: Splay trees are self-adjusting binary search trees, meaning
that after each access operation, the tree is restructured to bring the most
recently accessed node to the root of the tree. This property makes Splay trees
efficient for frequently accessed nodes.
2. Fast search, insert, and delete operations: Splay trees have an amortized
time complexity of O(log n) for search, insert, and delete operations, making
them efficient for a wide range of applications.
3. No need for additional balancing: Unlike other self-balancing binary search
trees such as AVL trees and Red-Black trees, Splay trees do not require
additional balancing operations. The splaying operation automatically balances
the tree.
4. Cache-friendly: Splay trees can be used as a cache due to their self-adjusting
property. The most frequently accessed nodes are brought to the root of the
tree, making them faster to access in subsequent operations.

Operations of Splay tree


1. Search: Given a key, the Splay tree can quickly search for the node with that
key. During the search operation, the tree is restructured to bring the searched
node to the root of the tree. If the key is not found, the last accessed node is
brought to the root.
2. Insert: To insert a new node with a given key, the Splay tree first performs a
search for the key. If the key is not found, a new node is created and inserted at
the root of the tree.
3. Delete: To delete a node with a given key, the Splay tree first performs a
search for the key. If the key is found, the node is deleted from the tree. The
tree is then restructured to bring the parent of the deleted node to the root.
4. Traversal: The Splay tree can be traversed in-order, pre-order, or post-order
to visit all nodes in the tree.
5. Splaying: Splaying is the process of restructuring the tree after each access
operation. It brings the most recently accessed node to the root of the tree,
which can improve the performance of subsequent operations.

Implementation of Splay Tree


#include <stdio.h>
#include <stdlib.h>

// A splay tree node
struct node {
    int key;
    struct node *left, *right;
};

/* Helper function that allocates a new node with the given key and
   NULL left and right pointers. */
struct node *newNode(int key) {
    struct node *node = (struct node *)malloc(sizeof(struct node));
    node->key = key;
    node->left = node->right = NULL;
    return node;
}

// A utility function to right rotate the subtree rooted with x
struct node *rightRotate(struct node *x) {
    struct node *y = x->left;
    x->left = y->right;
    y->right = x;
    return y;
}

// A utility function to left rotate the subtree rooted with x
struct node *leftRotate(struct node *x) {
    struct node *y = x->right;
    x->right = y->left;
    y->left = x;
    return y;
}

// This function brings the key to the root if it is present in the tree.
// If the key is not present, it brings the last accessed node to the
// root. It modifies the tree and returns the new root.
struct node *splay(struct node *root, int key) {
    // Base cases: root is NULL or key is present at root
    if (root == NULL || root->key == key)
        return root;

    // Key lies in the left subtree
    if (root->key > key) {
        // Key is not in the tree, we are done
        if (root->left == NULL)
            return root;

        // Zig-Zig (Left Left)
        if (root->left->key > key) {
            // First recursively bring the key to the root of left-left
            root->left->left = splay(root->left->left, key);

            // Do the first rotation for root; the second is done below
            root = rightRotate(root);
        } else if (root->left->key < key) { // Zig-Zag (Left Right)
            // First recursively bring the key to the root of left-right
            root->left->right = splay(root->left->right, key);

            // Do the first rotation for root->left
            if (root->left->right != NULL)
                root->left = leftRotate(root->left);
        }

        // Do the second rotation for root
        return (root->left == NULL) ? root : rightRotate(root);
    } else { // Key lies in the right subtree
        // Key is not in the tree, we are done
        if (root->right == NULL)
            return root;

        // Zag-Zig (Right Left)
        if (root->right->key > key) {
            // Bring the key to the root of right-left
            root->right->left = splay(root->right->left, key);

            // Do the first rotation for root->right
            if (root->right->left != NULL)
                root->right = rightRotate(root->right);
        } else if (root->right->key < key) { // Zag-Zag (Right Right)
            // Bring the key to the root of right-right and do the first rotation
            root->right->right = splay(root->right->right, key);
            root = leftRotate(root);
        }

        // Do the second rotation for root
        return (root->right == NULL) ? root : leftRotate(root);
    }
}

// The search function for the splay tree. Note that this function
// returns the new root of the tree. If the key is present, its node
// is moved to the root.
struct node *search(struct node *root, int key) {
    return splay(root, key);
}

// A utility function to print the preorder traversal of the tree
void preOrder(struct node *root) {
    if (root != NULL) {
        printf("%d ", root->key);
        preOrder(root->left);
        preOrder(root->right);
    }
}

int main() {
    struct node *root = newNode(100);
    root->left = newNode(50);
    root->right = newNode(200);
    root->left->left = newNode(40);
    root->left->left->left = newNode(30);
    root->left->left->left->left = newNode(20);

    printf("Preorder traversal of the Splay tree is \n");
    preOrder(root);
    root = search(root, 20);
    printf("\nPreorder traversal after search of the Splay tree is \n");
    preOrder(root);
    return 0;
}
Advantages of Splay tree:

Splay trees offer several advantages over other tree data structures:
1. Efficient: Splay trees have an amortized O(log n) time complexity for all
operations, making them highly efficient for large datasets.
2. Self-balancing: Splay trees automatically restructure themselves during
insertions and deletions, making them self-balancing and reducing the
risk of degeneration.
3. Easy implementation: Splay trees are relatively simple to implement
compared to other self-balancing trees like AVL and Red-Black trees.
4. Cache locality: Splay trees have excellent cache locality, which means
that frequently accessed nodes are likely to be in the cache, resulting in
faster access times.
5. Space efficiency: Splay trees use less space than other self-balancing
trees, making them ideal for applications with limited memory.
6. Flexibility: Splay trees can be used to implement a variety of data
structures, including sets, maps, priority queues, and more.

Disadvantages of Splay Tree:

While Splay trees offer several advantages, they also have some disadvantages:
1. Lack of strict balancing: Splay trees do not maintain strict balancing,
which can result in the worst-case performance being O(n) for certain
operations.
2. Complexity: Splay trees are more complex than basic tree data
structures, which can make them more difficult to implement and
debug.
3. Lack of guaranteed worst-case performance: While Splay trees have an
amortized O(log n) time complexity, they do not guarantee this
performance for every operation.
4. Data movement: Splaying nodes during operations can require a
significant amount of data movement, which can impact performance in
some cases.
5. Worst-case scenarios: In some worst-case scenarios, splay trees can
become unbalanced and require significant restructuring, which can
result in poor performance.

Time complexity of Splay Tree


The worst-case time complexity of operations in a Splay tree is O(n), but in
practice, it is rare, and the average case time complexity for operations is O(log
n). Splay trees offer amortized time complexity, which ensures that the overall
performance of the tree remains efficient, even in the face of worst-case
scenarios.
The average-case time complexity for operations in a Splay tree is O(log n),
where n is the number of nodes in the tree. This means that the time required
for operations is proportional to the logarithm of the number of nodes in the
tree. Splay trees are designed to be self-balancing, which helps to ensure that
they remain balanced and maintain their efficient time complexity.
The best-case time complexity for a single operation in a Splay tree is
O(1), which occurs when the accessed element is already at the root.
In practice, however, the amortized O(log n) bound is the relevant
guarantee, as the primary benefit of Splay trees is their ability to
self-adjust and maintain efficient performance even across
worst-case access sequences.

B-TREE
In computer science, a B-tree is a self-balancing tree data structure that maintains sorted data
and allows searches, sequential access, insertions, and deletions in logarithmic time. The
B-tree generalizes the binary search tree, allowing for nodes with more than two children.[2]
Unlike other self-balancing binary search trees, the B-tree is well suited for storage systems
that read and write relatively large blocks of data, such as databases and file systems.

Features of B-tree
B-trees are data structures used for efficient searching, insertion, and deletion operations in
large datasets. The key features of B-trees are:

• Balanced Tree: B-trees are balanced trees, which means that all the leaf nodes are at the
same level. This makes B-trees efficient for searching, as the height of the tree is
minimized, and the number of nodes that need to be traversed to reach a leaf node is
reduced.

• Variable Node Capacity: Unlike other trees, B-trees have a variable node capacity.
This means that each node in a B-tree can contain a variable number of keys and
child pointers, which depends on the size of the disk block.

• Multiple Keys Per Node: B-trees can store multiple keys in a single node, which
makes them efficient for storing large datasets.

• Fast Search and Retrieval: B-trees are optimized for fast search and retrieval of data.

Operations on a B-tree

Here are the basic operations of a B-tree:


Search: Search for a key in the B-tree. This operation is similar to searching in a binary
search tree, but instead of comparing the key with the value at each node, the B-tree
searches for the appropriate child to follow until the key is found or the search reaches a
leaf node.

Insertion: Insert a new key into the B-tree. This operation involves finding the appropriate
leaf node where the new key should be inserted, and then either inserting the key directly
into the node if it is not full, or splitting the node and redistributing the keys if it is full.
Deletion: Remove a key from the B-tree. This operation involves finding the node containing
the key to be deleted, and then either removing the key directly if it is in a leaf node, or
replacing it with the key of its predecessor or successor if it is in an internal node. If
removing the key causes a node to have fewer than the minimum number of keys, the B-
tree must be rebalanced by either merging the node with a sibling or redistributing the keys
between the node and its siblings.

Implementation of B-Tree in C
(Note: the snippet below illustrates node creation and the standard traversals on a plain
binary tree; a full multiway B-tree additionally needs per-node key arrays and the splitting
and merging logic described above.)

#include <stdio.h>
#include <stdlib.h>

struct node {
    int item;
    struct node *left;
    struct node *right;
};

// Inorder traversal
void inorderTraversal(struct node *root) {
    if (root == NULL) return;
    inorderTraversal(root->left);
    printf("%d ", root->item);
    inorderTraversal(root->right);
}

// Preorder traversal
void preorderTraversal(struct node *root) {
    if (root == NULL) return;
    printf("%d ", root->item);
    preorderTraversal(root->left);
    preorderTraversal(root->right);
}

// Postorder traversal
void postorderTraversal(struct node *root) {
    if (root == NULL) return;
    postorderTraversal(root->left);
    postorderTraversal(root->right);
    printf("%d ", root->item);
}

// Create a new node
struct node *create(int value) {
    struct node *newNode = malloc(sizeof(struct node));
    newNode->item = value;
    newNode->left = NULL;
    newNode->right = NULL;
    return newNode;
}

// Insert on the left of the node
struct node *insertLeft(struct node *root, int value) {
    root->left = create(value);
    return root->left;
}

// Insert on the right of the node
struct node *insertRight(struct node *root, int value) {
    root->right = create(value);
    return root->right;
}

int main() {
    struct node *root = create(1);
    insertLeft(root, 4);
    insertRight(root, 6);
    insertLeft(root->left, 42);
    insertRight(root->left, 3);
    insertLeft(root->right, 2);
    insertRight(root->right, 33);

    printf("Traversal of the inserted binary tree \n");
    printf("Inorder traversal \n");
    inorderTraversal(root);
    printf("\nPreorder traversal \n");
    preorderTraversal(root);
    printf("\nPostorder traversal \n");
    postorderTraversal(root);
    return 0;
}

Advantages of B-Trees:
• B-Trees have a guaranteed time complexity of O(log n) for basic operations like
insertion, deletion, and searching, which makes them suitable for large data sets and
real-time applications.

• B-Trees are self-balancing.

• High-concurrency and high-throughput.

• Efficient storage utilization.


Disadvantages of B-Trees:
• B-Trees are disk-oriented data structures and can incur high disk usage.
• Not the best choice for every case; for small, purely in-memory datasets, simpler structures may suffice.
• Point lookups are slower than in hash-based data structures.

Time complexity:
Worst-Case Time Complexity (B-Tree)
• Find: O(log n) — B-Trees are balanced by definition
• Insert: O(b log n) — in the worst case, re-balancing must shuffle around O(b) keys in a
node at each level to keep the sorted-order property, where b is the branching factor
• Remove: O(b log n) — same reasoning as for insertion

Average-Case Time Complexity (B-Tree)

• Find: O(log n)
• Insert: O(log n)
• Remove: O(log n)
(The formal average-case proofs are omitted here.)

Best-Case Time Complexity (B-Tree)


• Find: O(1) —The query is the middle element of the root (so Binary Search finds it
first)
• Insert: O(log n) — All insertions happen at the leaves. In the best case, no re-sorting
or re-balancing is needed
• Remove: O(log n) — Removing from any location will require the re-adjustment of
child pointers or traversing the entire height of the tree. In the best case, no re-sorting or re-
balancing is needed

2-3 TREES
2-3 Trees are a type of self-balancing search tree data structure used in computer science,
which allows efficient search, insertion, and deletion of elements. The name "2-3" comes
from the fact that each internal node in the tree can have either two or three child nodes. A 2-
3 tree is a balanced tree, meaning that the height of the tree is always logarithmic with respect
to the number of elements stored in the tree. This balance is achieved using node splitting and
merging operations, which occur when the number of elements in a node exceeds a certain
threshold. The 2-3 tree was invented by John Hopcroft in 1970.
Feature:
➢ The main feature of a 2-3 tree is that it is a type of self-balancing search tree that can
store, retrieve, and delete elements efficiently in logarithmic time.
➢ The keys are stored in the internal nodes of the tree, and the values are stored in the
leaf nodes. Each internal node with two children has one key, which serves as a separator
between the keys of its left and right subtrees. Each internal node with three children has two
keys, which serve as separators between the keys of its left, middle, and right subtrees.

➢ 2-3 trees are balanced in the sense that all the leaf nodes are at the same depth, and
the difference in the height of any two subtrees rooted at the same level is at most one. This
balance property is maintained by performing certain rotations and node splits or merges
during insertion and deletion operations.

Pseudocode:
Node structure:
typedef struct Node {
    bool isLeaf;
    int numKeys;
    int key1, key2;
    char *value1, *value2;
    struct Node *child1, *child2, *child3;
} Node;

Search:

char *search(Node *root, int key) {
    Node *currNode = root;
    while (currNode != NULL) {
        if (key == currNode->key1) {
            return currNode->value1;
        } else if (currNode->numKeys == 2 && key == currNode->key2) {
            return currNode->value2;
        } else if (currNode->isLeaf) {
            return NULL;
        } else if (key < currNode->key1) {
            currNode = currNode->child1;
        } else if (currNode->numKeys == 2 && key < currNode->key2) {
            currNode = currNode->child2;
        } else {
            currNode = currNode->child3;
        }
    }
    return NULL;
}

Insertion:

void insert(Node **root, int key, char *value) {
    if (*root == NULL) {
        *root = createLeafNode(key, value);
        return;
    }
    Node *currNode = *root;
    Node *parent = NULL;
    while (!currNode->isLeaf) {
        parent = currNode;
        if (key < currNode->key1) {
            currNode = currNode->child1;
        } else if (currNode->numKeys == 2 && key < currNode->key2) {
            currNode = currNode->child2;
        } else {
            currNode = currNode->child3;
        }
    }
    if (currNode->numKeys == 1) {
        insertIntoNode(currNode, key, value);
    } else {
        splitLeafNode(currNode, key, value, &parent);
    }
}

Deletion:

void delete(Node **root, int key) {
    if (*root == NULL) {
        return;
    }
    Node *currNode = *root;
    Node *parent = NULL;
    while (!currNode->isLeaf) {
        parent = currNode;
        if (key < currNode->key1) {
            currNode = currNode->child1;
        } else if (currNode->numKeys == 2 && key < currNode->key2) {
            currNode = currNode->child2;
        } else {
            currNode = currNode->child3;
        }
    }
    if (currNode->numKeys == 1) {
        deleteFromNode(currNode, key);
    } else {
        mergeLeafNodes(currNode, &parent, key);
    }
}
Time Complexity:
The time complexity of various operations in a 2-3 tree depends on the height of
the tree, which in turn depends on the number of nodes in the tree and the
balancing of the tree.
Here are the time complexities of the most common operations in 2-3 trees:
1. Search: The time complexity of search operation in a 2-3 tree is O(log n),
where n is the number of elements in the tree. This is because the tree is
balanced, and the search operation requires traversing the height of the tree,
which is logarithmic in the number of nodes.
2. Insertion: The time complexity of inserting a new element in a 2-3 tree is
also O(log n). This is because inserting a new element in the tree may require
splitting nodes, which increases the height of the tree, but the tree is always
balanced, so the height of the tree remains logarithmic.
3. Deletion: The time complexity of deleting an element from a 2-3 tree is also
O(log n). This is because deleting an element in the tree may require merging
nodes or restructuring the tree, which may change the height of the tree, but the
tree is always balanced, so the height of the tree remains logarithmic.

Advantages:
Here are some advantages of using 2-3 trees:
1. Balanced: 2-3 trees are always balanced, which ensures that the height of the
tree is always logarithmic in the number of elements stored in the tree. This
balancing property makes 2-3 trees highly efficient for search operations, as
they minimize the number of nodes that need to be traversed to find a particular
element.
2. Sorted: The keys in a 2-3 tree are always sorted in ascending order, which
makes range queries and other operations that require sorted data very efficient.
3. Self-balancing: 2-3 trees are self-balancing, which means that they
automatically adjust their structure to maintain balance as new elements are added
or removed from the tree. This self-balancing property eliminates the need for
manual balancing and ensures that the tree is always efficient for search, insert,
and delete operations.
4. Memory efficient: 2-3 trees are memory-efficient because they store
multiple keys and values in each node, which reduces the overall number of
nodes required to store a given number of elements.

Disadvantages:
Here are some disadvantages of using 2-3 trees:
1. Complex implementation: The implementation of 2-3 trees can be complex,
as the structure of the tree must be maintained through complex splitting and
merging operations. This can make the code for 2-3 trees more difficult to write
and maintain compared to simpler data structures like binary search trees.
2. Memory overhead: 2-3 trees require more memory overhead than simpler
data structures like binary search trees, as each node can have multiple keys and
values, and each node can have multiple children. This overhead can be
significant when storing large numbers of elements.
3. Slowest operations: Some operations in 2-3 trees, such as inserting and
deleting elements, can be slower than similar operations in simpler data
structures like binary search trees. This is because these operations can require
more complex node splitting and merging operations to maintain the balance of
the tree.
4. Complexity in programming: The complexity of the 2-3 tree structure can
make programming more difficult than simpler data structures, as the code for
splitting and merging nodes can be complex and error-prone.

Tournament Tree
A Tournament tree is a complete binary tree with n external nodes and n – 1 internal nodes.
The external nodes represent the players, and each internal node represents the winner of
the match between its two children. This tree is also known as a Selection tree.
There are two types of Tournament Trees −
• Winner Tree
• Loser Tree
Winner tree − A winner tree is a complete binary tree in which each internal node represents
the smaller (or greater) of its two children. The root holds the smallest (or greatest) key in
the tree, i.e. the winner of the tournament over all n keys.
Building a winner tree takes O(n) time; after one leaf changes, the matches along the path
to the root can be replayed in O(log n) time.

Loser tree − A loser tree is a complete binary tree for n players, with n external nodes and
n – 1 internal nodes. Each internal node stores the loser of the match played there, and the
overall winner is stored at tree[0]. The loser tree is thus an alternative representation that
stores the loser of a match at the corresponding node.
A loser tree can likewise be updated along one root path in O(log n) time.

FEATURES
• The tree is rooted: links run from parent to children along a directed path, and there is
a unique element (the root) with no parent.
• The parent's value wins the comparison against its children; any comparison operator
can be used, as long as the relative ordering of parent and children is invariant
throughout the tree.
• Trees whose number of players is not a power of 2 contain holes, and holes can be
present at any place in the tree.
• The tournament tree is a proper generalization of binary heaps.
• The root represents the overall winner of the tournament.

PSEUDOCODE:
TournamentTree(A):
    n = length(A)
    tree = [0] * (2*n - 1)        // Initialize empty tree
    // Place the elements of the set in the leaves of the tree
    for i in range(n):
        tree[n-1+i] = A[i]
    // Compute the winners of each pair of elements
    for i in range(n-2, -1, -1):
        tree[i] = min(tree[2*i+1], tree[2*i+2])   // or max for a max-winner tree
    return tree

Time Complexity:
The time complexity of a tournament tree depends on the specific operation being performed
on the tree.
Reading off the winner of the tournament is O(1), since it is stored at the root; rebuilding
the tree after one leaf changes costs O(log n), where n is the number of contestants, because
only the matches on that leaf's root path must be replayed.
The worst case occurs when an operation must touch every node in the tree. For example,
updating the value of every node costs O(n log n) if each update replays a root path, while
building the whole tree from scratch costs O(n).
For typical selection-style workloads (repeatedly extracting the winner and replaying one
path), the cost is O(log n) per extraction, or O(n log n) to process all n contestants.

Advantages:
• Efficiently finds the winner
• Space-efficient
• Can be used for a variety of operations

Disadvantages:
• Not suitable for dynamic data
• Limited to binary tree structure
• May require extra storage for ties

Interval Tree
An interval tree is a data structure used in computer science to efficiently store and search for
intervals of real numbers or time intervals. It is a type of binary search tree where each node
represents an interval, and the nodes are ordered based on their interval values. The interval
tree allows for fast interval queries, such as finding all intervals that overlap with a given
interval or point. This is achieved by storing additional information in each node, such as the
maximum endpoint of all intervals in its subtree. This information is used to quickly
determine which subtrees may contain relevant intervals during a search.
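The comparison at the heart of these queries is a constant-time test on endpoints. As a small illustrative sketch (the function name is our own):

```python
# Two closed intervals overlap exactly when each begins no later than
# the other ends. This O(1) predicate is what an interval tree evaluates
# at every node it visits during a search.
def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

print(overlaps((15, 20), (10, 30)))  # True: [15, 20] lies inside [10, 30]
print(overlaps((5, 10), (12, 15)))   # False: the intervals are disjoint
```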
Features of Interval tree:
• Fast Interval Querying: Interval trees can perform interval queries in O(log n) time,
where n is the number of intervals in the tree. This makes them a popular choice for
applications that require fast interval searches, such as database indexing, scheduling,
and computational geometry.
• Range Queries: Interval trees can also be used to perform range queries, where all
intervals within a specified range are returned. This is useful in applications such as
geographical information systems, where a user may want to find all points within a
specified radius.

• Space Efficiency: Interval trees are space-efficient data structures, requiring O(n)
space, where n is the number of intervals in the tree. This makes them a practical
choice for applications with large datasets.

• Dynamic Insertion and Deletion: While interval trees are designed to store static
intervals, they can be modified to support dynamic insertion and deletion of
intervals. This can be useful in applications where intervals are added or removed
frequently, such as real-time scheduling or network traffic analysis. However,
dynamic modification can affect the performance of interval tree operations, and
care must be taken to ensure the tree remains balanced.
Operations of the interval tree:
Interval trees are a type of data structure used for organizing and searching intervals (i.e.,
ranges of values). The following are some of the operations that can be performed on an
interval tree:
• Insertion: Add a new interval to the tree.
• Deletion: Remove an interval from the tree.
• Search: Find all intervals that overlap with a given interval.
• Query: Find the interval in the tree that contains a given point.
• Range query: Find all intervals that overlap with a given range.
• Merge: Combine two or more interval trees into a single tree.
• Split: Divide a tree into two or more smaller trees based on a given interval.
• Balancing: Maintain the balance of the tree to ensure its performance is optimized.
• Traversal: Visit all intervals in the tree in a specific order, such as in-order, pre-order,
or post-order.
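The insertion and overlap-search operations above can be sketched on a minimal augmented BST. This is an illustrative, unbalanced version (the names are ours, not from any library): nodes are keyed on the low endpoint, and each node also stores max_end, the largest high endpoint in its subtree, which is what lets a search skip subtrees that cannot contain an overlap. A production implementation would additionally rebalance (e.g. as a red-black tree) to guarantee the O(log n) bounds.

```python
class Node:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.max_end = hi                    # largest high endpoint in subtree
        self.left = self.right = None

def insert(root, lo, hi):
    """BST insert keyed on the low endpoint, maintaining max_end."""
    if root is None:
        return Node(lo, hi)
    if lo < root.lo:
        root.left = insert(root.left, lo, hi)
    else:
        root.right = insert(root.right, lo, hi)
    root.max_end = max(root.max_end, hi)     # update the subtree maximum
    return root

def search_overlap(root, lo, hi):
    """Return one stored interval overlapping [lo, hi], or None."""
    while root is not None:
        if root.lo <= hi and lo <= root.hi:  # overlap test at this node
            return (root.lo, root.hi)
        # If the left subtree's max endpoint reaches lo, an overlap may
        # exist there; otherwise it can only be in the right subtree.
        if root.left is not None and root.left.max_end >= lo:
            root = root.left
        else:
            root = root.right
    return None

root = None
for lo, hi in [(15, 20), (10, 30), (17, 19), (5, 20), (12, 15), (30, 40)]:
    root = insert(root, lo, hi)
print(search_overlap(root, 6, 7))    # (5, 20)
print(search_overlap(root, 21, 23))  # (10, 30)
```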

Advantages of Interval tree:


• Efficient searching: Interval trees can perform interval searching in O(log n) time
complexity, which is very efficient for large data sets.
• Range searching: Interval trees support efficient range searching for intervals that
overlap with a given interval or point. This makes it useful in various applications such as
database systems, scheduling, and computational geometry.
• Space efficiency: Interval trees use a relatively small amount of memory compared to
other tree structures that can perform the same operations.

Disadvantages of Interval Trees:


• Complexity: The implementation of an interval tree can be complex and require a lot
of effort to get right. This is especially true for more advanced variations of the interval tree
such as the augmented interval tree.
• Memory overhead: While interval trees use less memory than other tree structures,
they still require additional memory to store interval endpoints and other metadata.
• Limited application: Although interval trees can be used in a wide range of
applications, they are primarily designed for interval searching and may not be the best
choice for other types of queries.
• Maintenance overhead: Like all tree structures, interval trees require maintenance
operations such as rebalancing and updating metadata to maintain their efficiency. These
operations can be costly in terms of time and resources.

Time complexity:
The worst-case time complexity of an interval tree operation is O(n), where n is the number of intervals in the tree. This can occur when the underlying search tree becomes unbalanced and degenerates into a linked list, or when a query overlaps nearly every stored interval and all of them must be reported. In practice, however, interval trees are usually well-balanced and provide efficient O(log n) time complexity for interval searching and range searching operations. Augmented interval trees can also be used to further improve the worst-case time complexity of interval tree operations.

The average case time complexity of an interval tree is O(log n), where n is the number of
intervals in the tree. This is because interval trees are typically well-balanced and have a
height proportional to log n, which ensures efficient search, insertion, and deletion
operations.
The best case time complexity of an interval tree is O(log n), where n is the number of
intervals in the tree. This occurs when the tree is perfectly balanced, and all interval
searching and range searching operations can be performed with the minimum possible
number of comparisons.
RESEARCH PAPERS
HERE ARE SOME RESEARCH PAPERS ON THE ABOVE-MENTIONED TREE DATA
STRUCTURES

➢ "An Empirical Comparison of Binary Search Trees" by C. Martínez et al.


This paper presents an empirical comparison of different binary search trees,
including BST, Red-Black Trees, and Splay Trees.

ABSTRACT
Algorithms for dynamically maintaining and utilizing binary search trees are
empirically compared and evaluated. The evaluation is based on the performance of
the algorithms using simulated search requests. Search keys are generated using
weights which are unknown and in general unequal. The algorithms provide for
inserting new nodes, searching for existing nodes, and in some cases dynamically
modifying the tree in an attempt to reduce its weighted path length or search time.
Included in the evaluation are algorithms for height-balanced trees, weight-balanced
trees, and trees of bounded balance, as well as some combination algorithms. Also
included are a basic search algorithm which performs no rebalancing, and an
optimizing algorithm. In addition to the standard data, unweighted search keys,
specially weighted search keys, and partially ordered key sequences are also
considered. The evaluation is based primarily on the execution times of the
algorithms, although weighted path lengths are also given. A combination algorithm
gives the fastest speeds, although the basic search algorithm is shown to be the best
for most purposes.

https://scholar.google.com/citations?user=irBfUS0AAAAJ&hl=en&oi=sra

➢ "A Survey of B-Tree Locking Techniques" by G. Graefe. This paper surveys


different locking techniques for B-trees in database systems.

ABSTRACT
B-trees have been ubiquitous in database management systems for several
decades, and they are used in other storage systems as well. Their basic
structure and basic operations are well and widely understood including
search, insertion, and deletion. Concurrency control of operations in B-trees,
however, is perceived as a difficult subject with many subtleties and special
cases. The purpose of this survey is to clarify, simplify, and structure the topic
of concurrency control in B-trees by dividing it into two subtopics and
exploring each of them in depth.

https://scholar.google.com/citations?user=pdDeRScAAAAJ&hl=en&oi=sra

➢ "Efficient Search Operations in Interval Trees with Augmented Node


Splits" by Dana Shapira. This paper proposes a new approach to augmenting
interval trees to improve search operations.
ABSTRACT

We developed a new method for annotating genomic intervals that efficiently


handles all possible interval relations according to Allen's interval algebra. We
achieve this by transforming interval queries into range queries and using
range trees. We compared our method with conventional interval trees using
experiments on noncoding element annotations in personal genomes. The
results show that our approach is more efficient than conventional interval
trees, especially the augmented range tree with fractional cascading, making
it a useful tool for large-scale genomic data analysis in precision medicine.

https://scholar.google.com/citations?user=xYvlBV0AAAAJ&hl=en&oi=sra

➢ "2-3 Trees with Constant Time Node Access" by E. P. Markatos and K.


M. Chandy. This paper proposes a new technique for implementing 2-3 Trees
with constant time node access.
ABSTRACT
Those 2,3-trees that are minimal in expected number of comparisons per
access for a given number of keys are characterized. The characterization
yields directly a linear-time algorithm for constructing a minimal-comparison
2,3-tree for a given sorted set of keys. Regrettably, the property of
comparison minimality is incompatible with the earlier-studied property of
node-visit optimality. Specifically, the two types of optimality can coexist in
a K-key 2,3-tree only for sixteen values of K, none exceeding 32. In contrast,
comparison-minimal node-visit-pessimal K-key 2,3-trees exist for just over half
the possible values of K.

https://epubs.siam.org/doi/abs/10.1137/0207037
➢ "Tournament Trees and Their Applications" by Y. Chen and L. Zhao.
This paper provides an overview of tournament trees and their applications
in computer science, including sorting, selection, and scheduling algorithms.
Additionally, the paper explores the theoretical analysis of tournament trees
and provides experimental results to validate the theory.
ABSTRACT
A digraph is said to be n-unavoidable if every tournament of order n contains
it as a subgraph. Let f(n) be the smallest integer such that every oriented tree
is f(n)-unavoidable. Sumner (see (Reid and Wormald, Studia Sci. Math.
Hungaria 18 (1983) 377)) noted that f(n)⩾2n−2 and conjectured that equality
holds. Häggkvist and Thomason established the upper
bounds f(n)⩽12n and f(n)⩽(4+o(1))n. Let g(k) be the smallest integer such
that every oriented tree of order n with k leaves is (n+g(k))-unavoidable.
Häggkvist and Thomason (Combinatorica 11 (1991) 123) proved
that g(k)⩽2512k³. Havet and Thomassé conjectured that g(k)⩽k−1. We study
here the special case where the tree is a merging of paths (the union of
disjoint paths emerging from a common origin). We prove that a merging of
order n of k paths is (n+3/2(k²−3k)+5)-unavoidable. In particular, a tree with
three leaves is (n+5)-unavoidable, i.e. g(3)⩽5. By studying trees with few
leaves, we then prove that f(n)⩽(38/5)n−6.

https://www.sciencedirect.com/science/article/pii/S0012365X00004635
CONCLUSION
The Splay tree, B-tree, 2-3 tree, Tournament tree, and Interval tree are all
important data structures that are commonly used in the design and analysis of
algorithms.
Splay trees are self-adjusting binary search trees that rearrange themselves
dynamically to ensure efficient access to frequently accessed nodes. They have
a worst-case time complexity of O(n) for some operations, but on average, they
have a logarithmic time complexity.
B-trees are balanced multiway search trees that are designed for use on
secondary storage devices. They have a fixed order, which bounds the
number of keys that can be stored in each node. B-trees provide efficient
insertion, deletion, and search operations with a time complexity of O(log n).
2-3 trees are a type of balanced search tree that can have two or three children
per node. They are similar to B-trees but have different rules for balancing and
restructuring. 2-3 trees have a worst-case time complexity of O(log n) for all
operations.
Tournament trees are specialized trees that are used to determine the
maximum or minimum value in a set of elements. They can be built in linear
time, and replacing the winner and replaying its matches takes logarithmic time.
Interval trees are data structures that are used to efficiently search for
overlapping intervals. They store intervals in a balanced binary search tree and
provide efficient insertion, deletion, and search operations with a time
complexity of O(log n).
In summary, understanding the characteristics and applications of these data
structures can help developers to design more efficient and effective
algorithms for a wide range of applications.
REFERENCES
1. Knuth, Donald E. (1998). "6.2.4". The Art of Computer Programming. Vol. 3 (2nd ed.).
Addison-Wesley. The 2–3 trees defined at the close of Section 6.2.3 are equivalent to B-
trees of order 3.
