NLM Big O Exam 2 Summary

Uploaded by Jessica Milner

Here is a cheat sheet for the time complexities of various data structures, operations, and algorithms, as described in the sources:

Priority Queues

A priority queue is a data structure that allows you to insert elements with an associated priority
and efficiently retrieve the element with the highest priority.

Priority Queue with Unsorted List Implementation


| Operation   | Time Complexity | Notes |
|-------------|-----------------|-------|
| insert(e)   | O(1)            | Adding an element to the end of the list is a constant-time operation. |
| removeMin() | O(n)            | Finding the minimum element in an unsorted list requires a linear search. |
| min()       | O(n)            | Similar to removeMin(), finding the minimum element takes O(n) time. |
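The trade-off above can be sketched with a minimal Python class; the name `UnsortedListPQ` and its method names are illustrative, not taken from the sources:

```python
class UnsortedListPQ:
    """Priority queue backed by an unsorted Python list (a sketch)."""

    def __init__(self):
        self._data = []

    def insert(self, e):
        # O(1): append to the end of the list.
        self._data.append(e)

    def min(self):
        # O(n): linear scan for the smallest element.
        return min(self._data)

    def remove_min(self):
        # O(n): find the minimum, then remove it.
        smallest = min(self._data)
        self._data.remove(smallest)
        return smallest
```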

Priority Queue with Sorted List Implementation


| Operation   | Time Complexity | Notes |
|-------------|-----------------|-------|
| insert(e)   | O(n)            | Inserting into a sorted list requires finding the correct position, which can take O(n) time in the worst case. |
| removeMin() | O(1)            | Removing the minimum element from the front of the sorted list is a constant-time operation. |
| min()       | O(1)            | Accessing the minimum element at the front of the sorted list takes constant time. |
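A minimal sorted-list sketch makes the inverted trade-off visible (class and method names are illustrative). Note that while the abstract operation removeMin() is O(1) for a sorted sequence, Python's `list.pop(0)` actually shifts elements; keeping the list in descending order and popping from the end would avoid that:

```python
import bisect


class SortedListPQ:
    """Priority queue backed by a Python list kept sorted ascending (a sketch)."""

    def __init__(self):
        self._data = []

    def insert(self, e):
        # O(n): bisect locates the slot in O(log n), but shifting
        # elements to make room is linear in the worst case.
        bisect.insort(self._data, e)

    def min(self):
        # O(1): the smallest element sits at the front.
        return self._data[0]

    def remove_min(self):
        # O(1) for the abstract operation; pop(0) on a Python list
        # shifts elements, which a deque or reversed order avoids.
        return self._data.pop(0)
```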

Priority Queue with Heap Implementation


| Operation   | Time Complexity | Notes |
|-------------|-----------------|-------|
| insert(e)   | O(log n)        | Inserting an element involves adding it to the end and performing an "upheap" operation to maintain the heap property, which takes logarithmic time. |
| removeMin() | O(log n)        | Removing the minimum element involves replacing it with the last element and performing a "downheap" operation to restore the heap property, which also takes logarithmic time. |
| min()       | O(1)            | Accessing the minimum element at the root of the heap takes constant time. |
| size        | O(1)            | |
| empty       | O(1)            | |
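These bounds can be demonstrated with Python's standard `heapq` module, which implements a binary min-heap on a list (the wrapper class below is an illustrative sketch, not from the sources):

```python
import heapq


class HeapPQ:
    """Priority queue backed by a binary min-heap (a sketch using heapq)."""

    def __init__(self):
        self._heap = []

    def insert(self, e):
        # O(log n): heappush appends then "upheaps" (sifts up).
        heapq.heappush(self._heap, e)

    def min(self):
        # O(1): the minimum is always at the root, index 0.
        return self._heap[0]

    def remove_min(self):
        # O(log n): heappop moves the last element to the root
        # and "downheaps" (sifts down) to restore the heap property.
        return heapq.heappop(self._heap)

    def __len__(self):
        # size: O(1), delegated to the list's stored length.
        return len(self._heap)
```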

Binary Trees

A binary tree is a tree data structure where each node has at most two children, referred to as
the left and right children.

Operations in a Binary Tree Implemented with a Linked Structure


| Operation | Time Complexity | Notes |
|-----------|-----------------|-------|
| left, right, parent, isExternal, isRoot | O(1) | These operations involve accessing pointers directly, which takes constant time. |
| size, empty | O(1) | These operations likely rely on a stored size value, making them constant time. |
| root | O(1) | Accessing the root pointer directly takes constant time. |
| expandExternal, removeAboveExternal | O(1) | These operations involve pointer manipulation and take constant time. |
| positions | O(n) | This operation involves traversing the tree to collect all positions, which takes linear time. |
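A linked binary tree node and the linear-time positions traversal can be sketched as follows (the `Node` class and `positions` function are illustrative names):

```python
class Node:
    """One node of a linked binary tree: value plus three pointers."""

    def __init__(self, value, parent=None):
        self.value = value
        self.parent = parent  # parent/left/right are O(1) pointer accesses
        self.left = None
        self.right = None


def positions(root):
    # O(n): a preorder traversal that visits every node exactly once.
    if root is None:
        return []
    return [root] + positions(root.left) + positions(root.right)
```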

Binary Search Trees (BSTs)

A binary search tree is a type of binary tree where nodes are ordered based on their keys,
allowing efficient search, insertion, and deletion operations.

Operations in a Binary Search Tree


| Operation | Time Complexity | Notes |
|-----------|-----------------|-------|
| find(k) | O(log n) average, O(n) worst | In a balanced BST, searching takes logarithmic time. However, in an unbalanced tree, the worst case can degrade to linear time. |
| insert(k, v) | O(log n) average, O(n) worst | Similar to searching, insertion in a balanced BST is efficient but can degrade in an unbalanced tree. |
| erase(k) | O(log n) average, O(n) worst | Deletion involves finding the node and potentially restructuring the tree, which is efficient in a balanced tree but can take linear time in the worst case. |
| size | O(1) | |
| empty | O(1) | |
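Both find and insert follow a single root-to-leaf path, which is why the cost is O(height): logarithmic when balanced, linear when the tree degenerates into a chain. A minimal sketch (names are illustrative):

```python
class BSTNode:
    """One node of an unbalanced binary search tree."""

    def __init__(self, key, value):
        self.key, self.value = key, value
        self.left = self.right = None


def bst_insert(root, key, value):
    # Average O(log n); degrades to O(n) if insertions arrive in
    # sorted order and the tree becomes a chain.
    if root is None:
        return BSTNode(key, value)
    if key < root.key:
        root.left = bst_insert(root.left, key, value)
    elif key > root.key:
        root.right = bst_insert(root.right, key, value)
    else:
        root.value = value  # key already present: overwrite
    return root


def bst_find(root, key):
    # Follows one root-to-leaf path: O(height) comparisons.
    while root is not None:
        if key == root.key:
            return root.value
        root = root.left if key < root.key else root.right
    return None
```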

AVL Trees

An AVL tree is a self-balancing binary search tree that ensures a balanced structure to
guarantee logarithmic time complexity for operations.

Operations in an AVL Tree


| Operation | Time Complexity | Notes |
|-----------|-----------------|-------|
| find(k) | O(log n) | Due to the self-balancing property, search operations in an AVL tree always take logarithmic time. |
| insert(k, v) | O(log n) | Insertion involves adding the node and potentially performing rotations to maintain balance, which takes logarithmic time. |
| erase(k) | O(log n) | Similar to insertion, deletion can involve rotations to rebalance the tree but still guarantees logarithmic time complexity. |
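The rebalancing mentioned above is done with constant-time rotations. The sketch below (illustrative names, not a full AVL implementation) shows the balance-factor check and a single left rotation, the O(1) building blocks an AVL tree applies along the O(log n) insertion path:

```python
class AVLNode:
    """AVL tree node storing its own subtree height."""

    def __init__(self, key):
        self.key = key
        self.left = self.right = None
        self.height = 1


def height(node):
    return node.height if node else 0


def balance_factor(node):
    # The AVL property keeps this in {-1, 0, +1} for every node.
    return height(node.left) - height(node.right)


def rotate_left(x):
    # O(1) pointer surgery: promote x's right child y above x.
    y = x.right
    x.right, y.left = y.left, x
    x.height = 1 + max(height(x.left), height(x.right))
    y.height = 1 + max(height(y.left), height(y.right))
    return y  # y is the new subtree root
```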

Hash Tables

A hash table is a data structure that uses a hash function to map keys to indices in an array,
allowing efficient access, insertion, and deletion operations.

Operations in a Hash Table


| Operation | Time Complexity | Notes |
|-----------|-----------------|-------|
| put(k, v) | O(1) average, O(n) worst | In the average case, inserting into a hash table is a constant-time operation. However, if many collisions occur, the worst-case complexity can become linear. |
| find(k) | O(1) average, O(n) worst | Similar to insertion, finding an element in a hash table is typically very efficient but can degrade in the worst case due to collisions. |
| erase(k) | O(1) average, O(n) worst | Deleting an element involves finding it and removing it, which is usually fast but can be impacted by collisions. |
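A separate-chaining sketch makes the collision behavior concrete: each bucket holds a list of pairs, so the worst case (every key hashing to one bucket) turns each operation into a linear scan. Class and method names below are illustrative:

```python
class ChainedHashMap:
    """Hash table with separate chaining (a sketch, fixed bucket count)."""

    def __init__(self, buckets=8):
        self._buckets = [[] for _ in range(buckets)]

    def _bucket(self, k):
        # Map the key's hash to one of the bucket lists.
        return self._buckets[hash(k) % len(self._buckets)]

    def put(self, k, v):
        # Average O(1); O(n) if every key collides into one bucket.
        bucket = self._bucket(k)
        for i, (key, _) in enumerate(bucket):
            if key == k:
                bucket[i] = (k, v)  # overwrite existing key
                return
        bucket.append((k, v))

    def find(self, k):
        # Scan only the one bucket the key hashes to.
        for key, value in self._bucket(k):
            if key == k:
                return value
        return None

    def erase(self, k):
        bucket = self._bucket(k)
        for i, (key, _) in enumerate(bucket):
            if key == k:
                del bucket[i]
                return True
        return False
```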
Other Algorithms

Binary Search

Binary search is an algorithm for finding a target value in a sorted array by repeatedly dividing
the search interval in half.

| Operation | Time Complexity | Notes |
|-----------|-----------------|-------|
| binarySearch(array, left, right, target) | O(log n) | Since the search space is halved in each step, binary search has a logarithmic time complexity. |
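An iterative version matching the signature above can be sketched as follows (returning -1 when the target is absent is an illustrative convention):

```python
def binary_search(array, left, right, target):
    """Search a sorted array within [left, right] inclusive."""
    # Each iteration halves the interval, hence O(log n) steps.
    while left <= right:
        mid = (left + right) // 2
        if array[mid] == target:
            return mid
        if array[mid] < target:
            left = mid + 1   # target lies in the right half
        else:
            right = mid - 1  # target lies in the left half
    return -1  # target not present
```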

Sorting with a Priority Queue

Sorting a list of elements can be done using a priority queue by inserting all elements into the
queue and then removing them in sorted order.

| Algorithm | Time Complexity | Notes |
|-----------|-----------------|-------|
| PQ-Sort(S, C) | O(n log n) | With a heap-based priority queue, inserting all n elements takes O(n log n) time, and removing them in sorted order also takes O(n log n) time, resulting in an overall complexity of O(n log n). (With an unsorted- or sorted-list priority queue, one of the two phases costs O(n^2) instead.) |
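The two phases can be sketched with a heap-based priority queue via `heapq` (the function name `pq_sort` is illustrative; this is essentially heap-sort):

```python
import heapq


def pq_sort(values):
    """Sort by inserting everything into a PQ, then draining it."""
    # Phase 1: n insertions, O(log n) each -> O(n log n).
    heap = []
    for v in values:
        heapq.heappush(heap, v)
    # Phase 2: n removeMin calls, O(log n) each -> O(n log n).
    return [heapq.heappop(heap) for _ in range(len(heap))]
```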

Huffman Coding

Huffman coding is a data compression algorithm that uses a priority queue to build a Huffman
tree, assigning variable-length codes to characters based on their frequencies.

| Step | Time Complexity | Notes |
|------|-----------------|-------|
| Calculate Frequency | O(n) | Counting the frequency of each character requires a linear pass through the data. |
| Build Trees | O(n) | Creating one-node trees for each unique character takes linear time. |
| Insert Trees into Priority Queue | O(n log n) | Inserting n elements into a priority queue takes O(n log n) time. |
| Merge Trees | O(n log n) | Merging n trees involves n-1 merge operations, each taking O(log n) time in a priority queue. |
| Generate Codes | O(n) | Traversing the final tree to assign codes takes linear time. |
| Overall | O(n log n) | The dominant step in Huffman coding is the merging of trees, resulting in an overall complexity of O(n log n). |
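The steps above can be sketched end to end with `heapq` and `collections.Counter`. This is a compact illustration, not a full encoder: leaves are single-character strings, internal nodes are pairs, and the tie-breaking counter only exists to keep heap comparisons well defined:

```python
import heapq
from collections import Counter


def huffman_codes(text):
    """Return a prefix-free code (char -> bit string) for text."""
    # Step 1: O(n) pass to count character frequencies.
    freq = Counter(text)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Steps 2-3: one leaf per character, inserted into a min-heap.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    # Step 4: n-1 merges of the two lightest trees, O(log n) each.
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tick, (t1, t2)))
        tick += 1
    # Step 5: traverse the final tree, appending 0 for left, 1 for right.
    codes = {}

    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix

    walk(heap[0][2], "")
    return codes
```

More frequent characters end up closer to the root and therefore receive shorter codes.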

The time complexity of the algorithms and operations can vary depending on the specific
implementation and the characteristics of the data being processed. However, this cheat sheet
provides a general overview of the expected performance based on the information in the
sources.
