COMP90038 2022S1 A1 Solutions
Problems
1. [7 Marks] Consider the following recursive function, which takes an unordered array of integers A and an integer k, and returns True if k appears in A and False otherwise.
You may assume the input array is of length n = 3ᵐ for some positive integer m, so that the argument for each recursive call is exactly a third of the size of the input array.
W(n) = 3W(n/3) + 2
     = 3(3W(n/3²) + 2) + 2
     = 3² × W(n/3²) + 3 × 2 + 2
     = 3²(3W(n/3³) + 2) + 3 × 2 + 2
     = 3³ × W(n/3³) + 3² × 2 + 3 × 2 + 2
     ⋮
     = 3ᵏ × W(n/3ᵏ) + 3ᵏ⁻¹ × 2 + ··· + 3¹ × 2 + 2
Since n = 3ᵐ, let k = m; then 3ᵏ = n and n/3ᵏ = 1, so
W(n) = n × W(n/n) + 2 × (3ᵏ − 1)/2
     = 2n + n − 1        (using the base case W(1) = 2)
     = 3n − 1
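The listing of ThirdsSearch is not reproduced above; the following is a minimal Python sketch of a function consistent with this recurrence. The name thirds_search, its parameters, and the single-element base case are assumptions, not the original listing.

# Hypothetical sketch only: recurses on the three thirds of the current range
# and combines the results, matching W(n) = 3W(n/3) + 2.
def thirds_search(A, k, lo=0, hi=None):
    if hi is None:
        hi = len(A) - 1
    if lo == hi:                                   # assumed base case: one element
        return A[lo] == k
    third = (hi - lo + 1) // 3
    a = thirds_search(A, k, lo, lo + third - 1)                # first third
    b = thirds_search(A, k, lo + third, lo + 2 * third - 1)    # middle third
    c = thirds_search(A, k, lo + 2 * third, hi)                # last third
    return a or b or c                             # all three calls are made first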
(b) Use O, Ω, Θ to bound the time complexity of ThirdsSearch, using T (n) for its runtime.
You must include an upper and lower bound. Full marks will be awarded for solutions
with the tightest possible bounds.
Solution:
This algorithm is not input sensitive, so the above expression applies to all inputs of size
n. The worst case and the best case are the same.
So W(n) ∈ Ω(n) and W(n) ∈ O(n), hence W(n) ∈ Θ(n). Thus T(n) ∈ Θ(n).
(c) Mr. Clever suggested the following change to the algorithm listed above: if a is True
then we can immediately return True without the need to evaluate b or c; similarly, we
may check b before evaluating c. With this change, what would be the best case and
worst case complexity of ThirdsSearch? Briefly justify your answers.
Solution:
The worst case is the same, Θ(n); for example, consider the case where k does not appear in the array (or appears only at index n − 1).
The best case, however, is now Θ(log₃ n), which occurs when only the first branch needs to be evaluated at every level of the recursion (for example, when k is at index 0).
So the improved algorithm is Ω(log n) and O(n).
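A Python sketch of Mr. Clever's variant, under the same assumptions as the sketch in part (a): each call returns as soon as one third succeeds, so the remaining recursive calls are skipped.

# Hypothetical sketch only: early returns skip the remaining recursive calls.
def thirds_search_sc(A, k, lo=0, hi=None):
    if hi is None:
        hi = len(A) - 1
    if lo == hi:
        return A[lo] == k
    third = (hi - lo + 1) // 3
    if thirds_search_sc(A, k, lo, lo + third - 1):              # branch a
        return True
    if thirds_search_sc(A, k, lo + third, lo + 2 * third - 1):  # branch b
        return True
    return thirds_search_sc(A, k, lo + 2 * third, hi)           # branch c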
2. [6 Marks] We define two non-negative integers to be close friends if their octal representations (without leading zeros) are permutations of each other. For example, 519 (octal 1007) and 3592 (octal 7010) are close friends, while 347 (octal 533) and 1579 (octal 3053) are not. As a special case, an integer is considered to be a close friend of itself.
Design an algorithm CloseFriend(num1, num2) in pseudocode that takes two non-negative integers as input and returns True if the two integers are close friends, and False otherwise.
NOTE: Partial marks will be awarded to working but less efficient implementations.
Solution:
To convert a natural number from decimal to octal, we may repeatedly divide the number by 8 and take the remainder. The remainders, read in reverse order, form the octal representation of the original number.
We may use an additional array to store the number of occurrences of each possible octal digit (0 to 7). In the implementation below, we increment the count of a digit as we process each digit of num1, and decrement it when handling num2. If the two input integers are close friends, we end up with an array of all zeros.
function CloseFriend(num1, num2)
    freq ← [0, 0, 0, 0, 0, 0, 0, 0]    ▷ to store the frequency of each octal digit 0–7
    while num1 > 0 do
        freq[num1 % 8] ← freq[num1 % 8] + 1    ▷ % stands for the modulus operation
        num1 ← num1 // 8    ▷ // stands for integer division
    while num2 > 0 do
        freq[num2 % 8] ← freq[num2 % 8] − 1
        num2 ← num2 // 8
    for i ← 0 to 7 do
        if freq[i] ≠ 0 then
            return False
    return True
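A direct Python translation of the pseudocode, useful for checking the examples from the problem statement (the function name close_friend is an arbitrary choice):

def close_friend(num1, num2):
    freq = [0] * 8                    # frequency of each octal digit 0-7
    while num1 > 0:
        freq[num1 % 8] += 1           # count the digits of num1
        num1 //= 8
    while num2 > 0:
        freq[num2 % 8] -= 1           # cancel against the digits of num2
        num2 //= 8
    return all(count == 0 for count in freq)

print(close_friend(519, 3592))   # True: octal 1007 vs 7010
print(close_friend(347, 1579))   # False: octal 533 vs 3053
print(close_friend(0, 0))        # True: an integer is its own close friend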
3. [8 Marks] In this task, we will implement a queue using only the stack ADT. You may assume:
• in addition to the usual push(x) and pop() operations, the stack ADT also comes with
a size() operation, which returns the number of elements currently stored;
• size(), push(x) and pop() are constant-time operations.
(a) Write pseudocode for the functions enqueue(x) and dequeue(). Partial marks will be
awarded to working but less efficient implementations.
Hint: you should not need more than three stacks.
Solution:
Two stacks are needed here. We will call them S1 and S2.
function enqueue(x)
    S1.push(x)

function dequeue()
    if S2.size() = 0 then
        while S1.size() > 0 do
            S2.push(S1.pop())
    return S2.pop()
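A runnable Python sketch of the same idea, using lists as the two stacks (append and pop at the end of a list play the role of the constant-time push and pop assumed by the stack ADT):

class TwoStackQueue:
    def __init__(self):
        self.s1 = []                   # receives enqueued elements
        self.s2 = []                   # serves dequeued elements

    def enqueue(self, x):
        self.s1.append(x)              # push onto the first stack

    def dequeue(self):
        if not self.s2:                # second stack empty: transfer everything
            while self.s1:
                self.s2.append(self.s1.pop())
        return self.s2.pop()           # front of the queue is on top of s2

q = TwoStackQueue()
for x in [1, 2, 3]:
    q.enqueue(x)
print(q.dequeue(), q.dequeue(), q.dequeue())   # prints: 1 2 3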
(b) Use O, Ω and/or Θ to make the strongest possible claims about the complexity of your
enqueue(x) and dequeue() functions respectively, considering both the best case and
the worst case. Briefly justify your answers.
Solution:
The complexity of enqueue(x) is Θ(1) as we are pushing the element into the first stack.
The complexity of dequeue() is dependent on the state of the second stack. If the second
stack is empty, we will need to move all elements from the first stack to the second stack,
requiring Θ(n) time where n is the number of elements stored in the first stack. If the
second stack is non-empty, the dequeue() operation only takes constant time to pop an
element from the second stack. Therefore, the time complexity of dequeue() is Ω(1)
and O(n), where n is the number of elements stored in the queue when we perform a
dequeue.
(c) Suppose n elements are enqueued and dequeued in some order. What is the worst case
complexity for this entire process (calling enqueue(x) and dequeue() n times)? Briefly
justify your answer.
Solution:
We may consider the operations performed on each element. Each element is pushed onto the first stack once, later moved exactly once (popped from the first stack and pushed onto the second), and eventually popped from the second stack. Thus four stack operations are performed on each element, giving Θ(n) as the overall complexity for enqueuing and dequeuing n elements.
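As an illustrative sanity check (not part of the original solution), the following Python snippet counts the individual stack operations for one particular order, n enqueues followed by n dequeues, and observes exactly 4n operations:

def count_stack_operations(n):
    s1, s2, ops = [], [], 0
    for x in range(n):                 # n enqueues: one push each
        s1.append(x)
        ops += 1
    for _ in range(n):                 # n dequeues
        if not s2:
            while s1:                  # each moved element: one pop + one push
                s2.append(s1.pop())
                ops += 2
        s2.pop()                       # one pop per dequeue
        ops += 1
    return ops

print(count_stack_operations(1000))    # 4000, i.e. 4n stack operations in total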
4. [9 Marks] Consider the ‘traverse the maze problem’ that we discussed in the first lecture.
Suppose the maze is given as an n × n matrix M, where M[i][j] = 1 means position (i, j) is open and M[i][j] = 0 means it is blocked; the starting point is (0, 0) and the goal point is (n − 1, n − 1).
(a) Explain, in English, how we may model this problem using a graph. Your answer should
at least include what is considered to be a vertex/edge in the graph, how to store the
graph, etc.
Solution:
The graph can be constructed such that each vertex represents a valid position (i, j) in the maze (0 ≤ i, j ≤ n − 1) and each edge represents a possible move. An edge between nodes (i, j) and (i, j + 1) exists if M[i][j] = M[i][j + 1] = 1, for 0 ≤ i ≤ n − 1 and 0 ≤ j ≤ n − 2. There is also an edge between nodes (i, j) and (i + 1, j) if M[i][j] = M[i + 1][j] = 1, for 0 ≤ i ≤ n − 2 and 0 ≤ j ≤ n − 1. Otherwise there is no edge connecting two nodes.
The graph does not necessarily need to be stored using additional space: when traversing the graph, all neighbours of a node (i, j) can be generated in constant time by following the above rules.
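A short Python sketch of such a constant-time neighbour generator (the function name and the representation of M as a list of lists are assumptions):

def neighbours(M, i, j):
    n = len(M)
    result = []
    # At most four candidate moves: up, down, left, right.
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < n and 0 <= nj < n and M[i][j] == 1 and M[ni][nj] == 1:
            result.append((ni, nj))
    return result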
(b) Explain, in English, how to determine whether there is a path from the starting point
to the goal point. Use Big-Oh notation to express the worst case complexity of your
approach, in terms of n.
Solution:
We may slightly modify a DFS (or BFS) algorithm for this task. The idea is that, starting from a single node and visiting its neighbours, their neighbours, and so on, we eventually visit all nodes in the same connected component. This allows us to check whether the starting point and the goal point belong to the same component of the graph, and thus whether a path connects them.
From an implementation point of view, we may first mark all nodes with 0 (unvisited) and call DFSExplore with the node representing the starting point (0, 0) as the input. Once the function call returns, we check whether the goal point (n − 1, n − 1) is still marked with 0. There is no path from the starting point to the goal point if the goal point is still marked with 0; otherwise a path exists.
In the worst case, we need to visit all positions in the maze, resulting in O(n²) time complexity. An alternative way to see this: DFS has a worst-case time complexity of O(|V| + |E|); in this problem |V| = n² and |E| < 4|V|, so |E| ∈ O(|V|), and thus O(|V| + |E|) = O(|V|) = O(n²).
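A possible Python sketch of this reachability check. It uses an iterative DFS with an explicit stack rather than the recursive DFSExplore described above; the function name and the visited matrix are assumptions.

def has_path(M):
    n = len(M)
    if M[0][0] != 1 or M[n - 1][n - 1] != 1:
        return False
    visited = [[False] * n for _ in range(n)]
    visited[0][0] = True
    stack = [(0, 0)]
    while stack:
        i, j = stack.pop()
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and M[ni][nj] == 1 and not visited[ni][nj]:
                visited[ni][nj] = True
                stack.append((ni, nj))
    return visited[n - 1][n - 1]       # reachable iff the goal was ever visited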
(c) Explain, in English, how to find a shortest path (minimal number of movements) from
the starting point to the goal point.
Solution:
The existence of a path can be first confirmed using the algorithm discussed in part (b).
A modified BFS algorithm is suitable for finding a shortest path. An additional n × n array P[0..n − 1][0..n − 1] is needed to keep track of how each node was reached. For example, P[i][j] = (k, l) indicates that position (i, j) can be reached by moving from position (k, l).
Start traversing the graph by adding (0, 0) to the traversal queue. Every time we find an unvisited neighbour node (i, j), we set P[i][j] to the node we are currently visiting, indicating that (i, j) should be reached by moving from the current position.
Once the traversal queue becomes empty, we may backtrack along the path using the array P. The goal point (n − 1, n − 1) is the last node in the path. The value stored in P[n − 1][n − 1] (say, (u, v)) gives the second-last node (u, v) in the path. We can then access P[u][v] to get the third-last node in the path. Following the same process until we reach the starting point (0, 0) allows us to reconstruct the shortest path.
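A Python sketch of the BFS with predecessor tracking and path reconstruction described above (the function name and the predecessor array P as a list of lists are assumptions; collections.deque serves as the traversal queue):

from collections import deque

def shortest_path(M):
    n = len(M)
    if M[0][0] != 1 or M[n - 1][n - 1] != 1:
        return None
    P = [[None] * n for _ in range(n)]         # P[i][j]: predecessor of (i, j)
    visited = [[False] * n for _ in range(n)]
    visited[0][0] = True
    queue = deque([(0, 0)])
    while queue:
        i, j = queue.popleft()
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and M[ni][nj] == 1 and not visited[ni][nj]:
                visited[ni][nj] = True
                P[ni][nj] = (i, j)             # (ni, nj) is reached from (i, j)
                queue.append((ni, nj))
    if not visited[n - 1][n - 1]:
        return None                            # no path exists
    path = [(n - 1, n - 1)]
    while path[-1] != (0, 0):                  # backtrack via predecessors
        path.append(P[path[-1][0]][path[-1][1]])
    path.reverse()
    return path                                # list of positions from start to goal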