Time Complexity
Authors: Darren Yao, Benjamin Qi
Evaluating a program's time complexity, or how fast your program runs.

TABLE OF CONTENTS
Complexity Calculations
Common Complexities and Constraints
Constant Factor

RESOURCES
IUSACO 3 - Algorithm Analysis
This module is based on this resource.
CPH 2 - Time Complexity
Intro and examples
PAPS 5 - Time Complexity
More in-depth; in particular, Section 5.2 gives a formal definition of Big O.

In programming contests, your program needs to finish running within a certain time limit
in order to receive credit. For USACO, this limit is 2 seconds for C++ submissions, and 4
seconds for Java/Python submissions. A conservative estimate for the number of
operations the grading server can handle per second is 10^8, but it could be closer to
5 · 10^8 given good constant factors.

Complexity Calculations
We want a method to calculate how many operations it takes to run each algorithm, in
terms of the input size n. Fortunately, this can be done relatively easily using Big O
Notation, which expresses worst-case time complexity as a function of n as n gets
arbitrarily large. Complexity is an upper bound for the number of steps an algorithm
requires as a function of the input size. In Big O notation, we denote the complexity of a
function as O(f(n)), where constant factors and lower-order terms are generally omitted
from f(n). We'll see some examples of how this works below.
The following code is O(1), because it executes a constant number of operations.
CPP
int a = 5;
int b = 7;
int c = 4;
int d = a + b + c + 153;

Input and output operations are also assumed to be O(1). In the following examples, we
assume that the code inside the loops is O(1).
The time complexity of loops is the number of iterations that the loop runs. For example,
the following code examples are both O(n).
CPP
for (int i = 1; i <= n; i++) {
// constant time code here
}

CPP
int i = 0;
while (i < n) {
// constant time code here
i++;
}

Because we ignore constant factors and lower order terms, the following examples are
also O(n):
CPP
for (int i = 1; i <= 5*n + 17; i++) {
// constant time code here
}

CPP
for (int i = 1; i <= n + 457737; i++) {
// constant time code here
}

We can find the time complexity of multiple loops by multiplying together the time
complexities of each loop. This example is O(nm), because the outer loop runs O(n)
iterations and the inner loop O(m).
CPP
for (int i = 1; i <= n; i++) {
for (int j = 1; j <= m; j++) {
// constant time code here
}
}

In this example, the outer loop runs O(n) iterations, and the inner loop runs anywhere
between 1 and n iterations (which is a maximum of n). Since Big O notation calculates
worst-case time complexity, we treat the inner loop as a factor of n. (More precisely, the
total number of iterations is n + (n-1) + ... + 1 = n(n+1)/2, which is still O(n^2).) Thus,
this code is O(n^2).
CPP
for (int i = 1; i <= n; i++) {
for (int j = i; j <= n; j++) {
// constant time code here
}
}

If an algorithm contains multiple blocks, then its time complexity is the worst time
complexity out of any block. For example, the following code is O(n^2).
CPP
for (int i = 1; i <= n; i++) {
for(int j = 1; j <= n; j++) {
// constant time code here
}
}
for (int i = 1; i <= n + 58834; i++) {
// more constant time code here
}

The following code is O(n^2 + m), because it consists of two blocks of complexity O(n^2)
and O(m), and neither of them is a lower-order function with respect to the other.
CPP
for (int i = 1; i <= n; i++) {
for (int j = 1; j <= n; j++) {
// constant time code here
}
}
for (int i = 1; i <= m; i++) {
// more constant time code here
}

Common Complexities and Constraints
Complexity factors that come from some common algorithms and data structures are as
follows:
Warning!
Don't worry if you don't recognize most of these! They will all be introduced later.
Mathematical formulas that just calculate an answer: O(1)
Binary search: O(log n)
Ordered set/map or priority queue: O(log n) per operation
Prime factorization of an integer, or checking primality or compositeness of an integer
naively: O(√n)
Reading in n items of input: O(n)
Iterating through an array or a list of n elements: O(n)
Sorting: usually O(n log n) for default sorting algorithms (mergesort,
Collections.sort , Arrays.sort )
Java Quicksort Arrays.sort function on primitives: O(n^2) worst case.
See Introduction to Data Structures for details.
Iterating through all subsets of size k of the input elements: O(n^k). For example,
iterating through all triplets is O(n^3).
Iterating through all subsets: O(2^n)
Iterating through all permutations: O(n!)
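As one concrete instance from the list above, here is a minimal sketch of naive primality checking. It runs in O(√n) time because trial division only needs to test divisors up to √n: any factor larger than √n would pair with one smaller than √n.

```cpp
#include <cassert>

// Naive primality check by trial division. We only test divisors d with
// d * d <= n, so the loop runs O(sqrt(n)) times.
bool is_prime(long long n) {
	if (n < 2) return false;  // 0 and 1 are not prime
	for (long long d = 2; d * d <= n; d++) {
		if (n % d == 0) return false;  // found a nontrivial factor
	}
	return true;
}
```

The same d * d <= n loop bound gives an O(√n) algorithm for prime factorization as well.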
Here are conservative upper bounds on the value of n for each time complexity. You might
get away with more than this, but this should allow you to quickly check whether an
algorithm is viable.

n            Possible complexities
n ≤ 10       O(n!), O(n^7), O(n^6)
n ≤ 20       O(2^n · n), O(n^5)
n ≤ 80       O(n^4)
n ≤ 400      O(n^3)
n ≤ 7500     O(n^2)
n ≤ 7 · 10^4 O(n√n)
n ≤ 5 · 10^5 O(n log n)
n ≤ 5 · 10^6 O(n)
n ≤ 10^18    O(log^2 n), O(log n), O(1)
Warning!
A significant portion of Bronze problems will have n ≤ 100. This doesn't give much of a hint
regarding the intended time complexity. The intended solution could still be O(n)!

Constant Factor
Constant factor refers to the idea that different operations with the same complexity take
slightly different amounts of time to run. For example, three addition operations take a bit
longer than a single addition operation. Another example is that although binary search
and set insertion are both O(log n), binary searching is noticeably faster.
Constant factor is entirely ignored in Big O notation. This is fine most of the time, but if
the time limit is particularly tight, you may receive TLE (Time Limit Exceeded) with the
intended complexity. When this happens, it is important to keep the constant factor in
mind. For example, a piece of code that iterates through all ordered triplets runs in
O(n^3) time, but could be sped up by a factor of 6 by iterating through unordered triplets
instead.
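To make that factor of 6 concrete, the sketch below (the function names are ours) counts loop iterations for both versions. Both are O(n^3), but the unordered version does roughly one sixth of the work, since each unordered triple {i, j, k} corresponds to 3! = 6 ordered ones.

```cpp
// Count the iterations of a loop over all ordered triplets (i, j, k):
// exactly n^3 iterations.
long long ordered_triplets(int n) {
	long long count = 0;
	for (int i = 1; i <= n; i++)
		for (int j = 1; j <= n; j++)
			for (int k = 1; k <= n; k++) count++;
	return count;
}

// Count the iterations of a loop over all unordered triplets i < j < k:
// exactly n(n-1)(n-2)/6 iterations, about 6 times fewer.
long long unordered_triplets(int n) {
	long long count = 0;
	for (int i = 1; i <= n; i++)
		for (int j = i + 1; j <= n; j++)
			for (int k = j + 1; k <= n; k++) count++;
	return count;
}
```

For n = 100, the ordered version does 1,000,000 iterations while the unordered one does only 161,700.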
For now, don't worry about optimizing constant factors -- just be aware of them.
