Chapter 5: Operating System, Quiz With Answers
FACULTY OF COMPUTING
OPERATING SYSTEMS ( I )
Quiz Four: Open Book
Full name_________________________________________ID#:_____
1. Which of the following scheduling policies allow the OS to interrupt the currently
running process and move it to the Ready state?
A) FIFO B) FCFS
C) non-preemptive D) preemptive
Ans: D
2. A risk with _________ is the possibility of starvation for longer processes, as long as
there is a steady supply of shorter processes.
A) SRTF B) SJF
C) FIFO D) FCFS
Ans: B
3. In the case of the __________ policy, the scheduler always chooses the process that has
the shortest expected remaining processing time.
A) SRTF B) FCFS
C) SPN D) HRRN
Ans: A
4. The operating system must make _________ types of scheduling decisions with respect to
the execution of processes.
A) six B) four
C) five D) three
Ans: D
6. ____ is the number of processes that are completed per time unit.
A) CPU utilization
B) Response time
C) Turnaround time
D) Throughput
Ans: D
7. ____ scheduling is approximated by predicting the next CPU burst with an exponential
average of the measured lengths of previous CPU bursts.
A) Multilevel queue
B) RR
C) FCFS
D) SJF
Ans: D
9. Which of the following scheduling algorithms must be nonpreemptive?
A) SJF
B) RR
C) FCFS
D) priority algorithms
Ans: C
12. Which of the following statements is false with regard to the Linux CFS scheduler?
A) Each task is assigned a proportion of CPU processing time.
B) Lower numeric values indicate higher relative priorities.
C) There is a single, system-wide value of vruntime.
D) The scheduler doesn't directly assign priorities.
Ans: C
13. The Linux CFS scheduler identifies _____________ as the interval of time during which
every runnable task should run at least once.
A) virtual run time
B) targeted latency
C) nice value
D) load balancing
Ans: B
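To make the CFS questions above more concrete, here is a small conceptual sketch in C (not the actual kernel code): the scheduler always dispatches the task with the smallest vruntime, and each task accrues vruntime at a rate scaled by a weight standing in for its nice value. The task names, weights, and slice length are made-up illustrative values.

    #include <stdio.h>

    #define NTASK 3

    /* Conceptual sketch of CFS, not kernel code: always run the task with the
     * smallest vruntime, and charge it vruntime scaled by a weight that stands
     * in for its nice value (a heavier weight means vruntime grows more
     * slowly, so the task receives a larger proportion of CPU time). */
    struct task {
        const char *name;
        double vruntime;
        double weight;
    };

    static int pick_min_vruntime(const struct task t[], int n)
    {
        int best = 0;
        for (int i = 1; i < n; i++)
            if (t[i].vruntime < t[best].vruntime)
                best = i;
        return best;
    }

    int main(void)
    {
        struct task t[NTASK] = {
            { "A", 0.0, 1.0 },   /* baseline weight                     */
            { "B", 0.0, 2.0 },   /* higher weight: higher-priority task */
            { "C", 0.0, 0.5 },   /* lower weight: lower-priority task   */
        };
        double slice = 10.0;     /* ms of real CPU time per dispatch */

        for (int round = 0; round < 6; round++) {
            int i = pick_min_vruntime(t, NTASK);
            t[i].vruntime += slice / t[i].weight; /* heavier tasks accrue vruntime more slowly */
            printf("ran %s, vruntime now %5.1f\n", t[i].name, t[i].vruntime);
        }
        return 0;
    }

Running the loop shows the higher-weight task B being dispatched most often and the low-weight task C least often, which is the proportional sharing described in questions 12 and 13.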
15. __________ involves the decision of which kernel thread to schedule onto which CPU.
A) Process-contention scope
B) System-contention scope
C) Dispatcher
D) Round-robin scheduling
Ans: B
16. With _______, a thread executes on a processor until a long-latency event (e.g., a memory stall) occurs.
A) coarse-grained multithreading
B) fine-grained multithreading
C) virtualization
D) multicore processors
Ans: A
18. The ______ occurs in first-come-first-served scheduling when a process with a long
CPU burst occupies the CPU.
A) dispatch latency
B) waiting time
C) convoy effect
D) system-contention scope
Ans: C
19. The rate of a periodic task in a hard real-time system is ____, where p is a period and t
is the processing time.
A) 1/p
B) p/t
C) 1/t
D) pt
Ans: A
21. Giving each process a slice of time before being preempted is a technique known as
________ .
Ans: Time slicing
22. A common technique for predicting a future value on the basis of a time series of past
values is __________ .
Ans: Exponential Averaging
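As an aside to question 22, exponential averaging predicts the next CPU burst as tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n). The sketch below uses an assumed initial guess of 10, alpha = 0.5, and made-up measured burst lengths.

    #include <stdio.h>

    /* Predict the next CPU burst with exponential averaging:
     *   tau_next = alpha * last_burst + (1 - alpha) * tau_prev
     * alpha, tau_0, and the measured bursts are illustrative values only. */
    int main(void)
    {
        double alpha = 0.5;
        double tau = 10.0;                           /* initial guess tau_0 */
        double bursts[] = { 6.0, 4.0, 6.0, 13.0, 13.0, 13.0 };
        int n = sizeof bursts / sizeof bursts[0];

        for (int i = 0; i < n; i++) {
            printf("predicted %.2f, measured %.2f\n", tau, bursts[i]);
            tau = alpha * bursts[i] + (1.0 - alpha) * tau;
        }
        printf("next prediction: %.2f\n", tau);
        return 0;
    }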
23. Explain the concept of a CPU-I/O burst cycle.
Ans:
The lifecycle of a process can be considered to consist of a number of bursts belonging to two
different states. All processes consist of CPU cycles and I/O operations. Therefore, a process
can be modeled as switching between bursts of CPU execution and I/O wait.
What role does the dispatcher play in CPU scheduling?
Ans:
The dispatcher gives control of the CPU to the process selected by the short-term scheduler. To
perform this task, a context switch, a switch to user mode, and a jump to the proper location in
the user program are all required. The dispatcher should be as fast as possible. The time
lost to the dispatcher is termed dispatch latency.
24. Explain the difference between response time and turnaround time. These times are
both used to measure the effectiveness of scheduling schemes.
Ans:
Turnaround time is the sum of the periods a process spends waiting to get into memory,
waiting in the ready queue, executing on the CPU, and doing I/O. Turnaround time essentially
measures the amount of time it takes to execute a process. Response time, on the other hand,
is a measure of the time that elapses between a request and the first response produced.
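A small worked sketch of the distinction, using hypothetical times for a single process (and ignoring I/O for simplicity):

    #include <stdio.h>

    /* Hypothetical timeline for one process (all times in ms): it arrives at 0,
     * first gets the CPU at 4, finishes at 20, and needs 9 ms of CPU in total. */
    int main(void)
    {
        int arrival = 0, first_run = 4, completion = 20, burst = 9;

        int turnaround = completion - arrival;   /* total time in the system      */
        int waiting    = turnaround - burst;     /* time spent in the ready queue */
        int response   = first_run - arrival;    /* time until the first response */

        printf("turnaround = %d ms, waiting = %d ms, response = %d ms\n",
               turnaround, waiting, response);
        return 0;
    }

Here the turnaround time is 20 ms, the waiting time 11 ms, and the response time only 4 ms, which shows why an interactive system cares about response time even when turnaround is long.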
25. What effect does the size of the time quantum have on the performance of an RR
algorithm?
Ans:
At one extreme, if the time quantum is extremely large, the RR policy is the same as the FCFS
policy. If the time quantum is extremely small, the RR approach is called processor sharing
and creates the appearance that each of n processes has its own processor running at 1/n the
speed of the real processor.
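The quantum's effect can be seen in a small round-robin simulation sketch; the four burst lengths are arbitrary example values, all processes are assumed to arrive at time 0, and context-switch overhead is ignored, so a quantum larger than every burst reproduces the FCFS schedule exactly.

    #include <stdio.h>

    #define N 4

    /* Round-robin simulation for N processes that all arrive at time 0.
     * Burst lengths are example values; context-switch time is ignored. */
    static double rr_avg_wait(const int burst[], int n, int quantum)
    {
        int remaining[N], completion[N];
        int time = 0, done = 0;

        for (int i = 0; i < n; i++)
            remaining[i] = burst[i];

        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0)
                    continue;
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                time += slice;
                remaining[i] -= slice;
                if (remaining[i] == 0) {
                    completion[i] = time;
                    done++;
                }
            }
        }

        double total_wait = 0.0;
        for (int i = 0; i < n; i++)
            total_wait += completion[i] - burst[i];  /* waiting = turnaround - burst */
        return total_wait / n;
    }

    int main(void)
    {
        int burst[N] = { 24, 3, 3, 7 };
        int quanta[] = { 1, 4, 10, 100 };

        for (int q = 0; q < 4; q++)
            printf("quantum = %3d  ->  average waiting time = %.2f\n",
                   quanta[q], rr_avg_wait(burst, N, quanta[q]));
        return 0;
    }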
26. Explain the process of starvation and how aging can be used to prevent it.
Ans:
Starvation occurs when a process is ready to run but is stuck waiting indefinitely for the CPU.
This can be caused, for example, when higher-priority processes prevent low-priority
processes from ever getting the CPU. Aging involves gradually increasing the priority of a
process so that a process will eventually achieve a high enough priority to execute if it waited
for a long enough period of time.
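A minimal sketch of aging, using made-up priorities and an arbitrary waiting-time threshold; in this sketch a smaller number means a higher priority:

    #include <stdio.h>

    #define NPROC 3
    #define AGE_THRESHOLD 10   /* arbitrary number of ticks before a boost */

    struct proc {
        const char *name;
        int priority;    /* smaller number = higher priority in this sketch */
        int wait_ticks;  /* how long the process has waited in the ready queue */
    };

    /* Aging sketch: on every scan, a process that has waited at least
     * AGE_THRESHOLD ticks has its priority number decreased (priority raised),
     * so a low-priority process cannot starve forever. */
    static void age_ready_queue(struct proc ready[], int n)
    {
        for (int i = 0; i < n; i++) {
            if (ready[i].wait_ticks >= AGE_THRESHOLD && ready[i].priority > 0) {
                ready[i].priority--;      /* raise priority one step    */
                ready[i].wait_ticks = 0;  /* restart the aging clock    */
            }
        }
    }

    int main(void)
    {
        struct proc ready[NPROC] = {
            { "A", 1, 0 }, { "B", 5, 12 }, { "C", 9, 30 }
        };

        age_ready_queue(ready, NPROC);
        for (int i = 0; i < NPROC; i++)
            printf("%s: priority %d\n", ready[i].name, ready[i].priority);
        return 0;
    }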
27. How does asymmetric multiprocessing differ from symmetric multiprocessing (SMP)?
Ans:
In asymmetric multiprocessing, all scheduling decisions, I/O, and other system activities are
handled by a single processor, whereas in SMP, each processor is self-scheduling.
28. Describe two general approaches to load balancing.
Ans:
With push migration, a specific task periodically checks the load on each processor and, if it
finds an imbalance, evenly distributes the load by moving processes from overloaded to idle
or less-busy processors. Pull migration occurs when an idle processor pulls a waiting task
from a busy processor. Push and pull migration are often implemented in parallel on
load-balancing systems.
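An illustrative sketch of push migration only (not any particular kernel's implementation): a periodic balancer compares per-CPU run-queue lengths and moves one task from the busiest queue to the least busy one. The queue lengths are made-up numbers.

    #include <stdio.h>

    #define NCPU 4

    /* Push-migration sketch: queue_len[i] is the number of ready tasks on CPU i.
     * The balancer finds the busiest and least busy CPUs and "moves" one task.
     * A real kernel would move actual task structures between run queues. */
    static void push_migrate(int queue_len[], int ncpu)
    {
        int busiest = 0, idlest = 0;

        for (int i = 1; i < ncpu; i++) {
            if (queue_len[i] > queue_len[busiest]) busiest = i;
            if (queue_len[i] < queue_len[idlest])  idlest = i;
        }
        if (queue_len[busiest] - queue_len[idlest] > 1) {
            queue_len[busiest]--;
            queue_len[idlest]++;
            printf("moved one task from CPU %d to CPU %d\n", busiest, idlest);
        }
    }

    int main(void)
    {
        int queue_len[NCPU] = { 6, 1, 3, 0 };

        push_migrate(queue_len, NCPU);
        for (int i = 0; i < NCPU; i++)
            printf("CPU %d: %d tasks\n", i, queue_len[i]);
        return 0;
    }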
29. In Windows, how does the dispatcher determine the order of thread execution?
Ans:
The dispatcher uses a 32-level priority scheme to determine the execution order. Priorities are
divided into two classes. The variable class contains threads having priorities from 1 to 15, and
the real-time class contains threads having priorities from 16 to 31. The dispatcher uses a
queue for each scheduling priority, and traverses the set of queues from highest to lowest until
it finds a thread that is ready to run. The dispatcher executes an idle thread if no ready thread
is found.
30. What is deterministic modeling, and when is it useful in evaluating a scheduling algorithm?
Ans:
Deterministic modeling takes a particular predetermined workload and defines the
performance of each algorithm for that workload. Deterministic modeling is simple, fast, and
gives exact numbers for comparison of algorithms. However, it requires exact numbers for
input, and its answers apply only in those cases. The main uses of deterministic modeling are
describing scheduling algorithms and providing examples to indicate trends.
31. What are the two types of latency that affect the performance of real-time systems?
Ans:
Interrupt latency refers to the period of time from the arrival of an interrupt at the CPU to the
start of the routine that services the interrupt. Dispatch latency refers to the amount of time
required for the scheduling dispatcher to stop one process and start another.
32. What are the advantages of the EDF scheduling algorithm over the rate-monotonic
scheduling algorithm?
Ans:
Unlike the rate-monotonic algorithm, EDF scheduling does not require that processes be
periodic, nor must a process require a constant amount of CPU time per burst. The appeal of
EDF scheduling is that it is theoretically optimal: it can schedule processes so that each
process meets its deadline requirements and CPU utilization is 100 percent.
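A minimal sketch of the EDF dispatch decision, with made-up task names and absolute deadlines:

    #include <stdio.h>

    #define NTASK 3

    struct task {
        const char *name;
        int deadline;   /* absolute deadline in ms from now (illustrative values) */
    };

    /* EDF: return the index of the ready task with the earliest deadline. */
    static int edf_pick(const struct task ready[], int n)
    {
        int best = 0;
        for (int i = 1; i < n; i++)
            if (ready[i].deadline < ready[best].deadline)
                best = i;
        return best;
    }

    int main(void)
    {
        struct task ready[NTASK] = {
            { "T1", 50 }, { "T2", 20 }, { "T3", 35 }
        };

        printf("dispatch %s next\n", ready[edf_pick(ready, NTASK)].name);
        return 0;
    }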
33. Using the following scheduling algorithms with the following processes,
calculate Average Waiting Time
a. First Come First Served Scheduling
b. Shortest Job First Scheduling
c. Shortest Job First Scheduling (non-preemptive)
d. Shortest Remaining Time First Scheduling (preemptive)
e. Priority Scheduling
f. Round Robin Scheduling with time quantum = 8ms
g. Priority Scheduling with Round Robin time quantum = 15ms
Answers:

a. First Come First Served:
   | P1 | P2 | P3 | P4 | P5 |
   0    8    12   21   26   45

b. Shortest Job First:
   | P2 | P4 | P1 | P3 | P5 |
   0    4    9    17   26   45

c. Shortest Job First (non-preemptive):
   | P1 | P2 | P4 | P3 | P5 |
   0    8    12   17   26   45

d. Shortest Remaining Time First (preemptive); the row above the chart shows the remaining
   burst of the running process after each slice:
     6    0    0    0    0    0
   | P1 | P2 | P4 | P1 | P3 | P5 |
   0    2    6    11   17   26   45
   Waiting times: P1 = 9, P2 = 0, P3 = 13, P4 = 1, P5 = 20; AWT = 8.6 ms

e. Priority Scheduling:
   | P4 | P2 | P3 | P1 | P5 |
   0    5    9    18   26   45
   Waiting times: P1 = 18, P2 = 5, P3 = 9, P4 = 0, P5 = 26; AWT = 11.6 ms

f. Round Robin Scheduling with time quantum = 8 ms (remaining burst after each slice shown
   above the chart):
     0    0    1    0    11   0    3    0
   | P1 | P2 | P3 | P4 | P5 | P3 | P5 | P5 |
   0    8    12   20   25   33   34   42   45

g. Priority Scheduling with Round Robin, time quantum = 15 ms (remaining burst after each
   slice shown above the chart):
     0    0    0    0    4    0
   | P4 | P2 | P3 | P1 | P5 | P5 |
   0    5    9    18   26   41   45
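As a cross-check of the charts above, the CPU bursts can be inferred as P1 = 8, P2 = 4, P3 = 9, P4 = 5, and P5 = 19 ms. The sketch below recomputes the FCFS and SJF average waiting times under the simplifying assumption that every process arrives at time 0; the quiz's original process table is not reproduced in this copy, so these bursts are an inference rather than official data.

    #include <stdio.h>

    #define N 5

    /* CPU bursts inferred from the Gantt charts above (P1..P5), treating every
     * process as arriving at time 0. This is an assumption for illustration. */
    static const int burst[N] = { 8, 4, 9, 5, 19 };

    static double avg_wait(const int order[])
    {
        int time = 0;
        double total = 0.0;

        for (int i = 0; i < N; i++) {
            total += time;             /* waiting time of the process run i-th */
            time += burst[order[i]];
        }
        return total / N;
    }

    int main(void)
    {
        int fcfs[N] = { 0, 1, 2, 3, 4 };   /* P1, P2, P3, P4, P5               */
        int sjf[N]  = { 1, 3, 0, 2, 4 };   /* P2, P4, P1, P3, P5 (shortest first) */

        printf("FCFS average waiting time: %.1f ms\n", avg_wait(fcfs));
        printf("SJF  average waiting time: %.1f ms\n", avg_wait(sjf));
        return 0;
    }

It prints an FCFS average waiting time of 13.4 ms and an SJF average of 11.2 ms, consistent with the start times in charts (a) and (b).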
True/False Questions
34. First-come-first-served (FCFS) is a simple scheduling policy that tends to favor I/O
bound processes over processor bound processes.
Ans: False
35. One problem with a pure priority scheduling scheme is that lower-priority processes
may suffer starvation.
Ans: True
36. FCFS performs much better for short processes than long ones.
Ans: False
37. Round robin is particularly effective in a general purpose time sharing system or
transaction processing system.
Ans: True
38. In preemptive scheduling, the sections of code affected by interrupts must be guarded
from simultaneous use.
Ans: True
39. In RR scheduling, the time quantum should be small with respect to the
context-switch time.
Ans: False
40. The most complex scheduling algorithm is the multilevel feedback-queue algorithm.
Ans: True
41. Load balancing is typically only necessary on systems with a common run queue.
Ans: False
42. Systems using a one-to-one model (such as Windows, Solaris, and Linux) schedule
threads using process-contention scope (PCS).
Ans: False
43. Solaris and Windows assign higher-priority threads/tasks longer time quantums and
lower-priority tasks shorter time quantums.
Ans: False
44. A Solaris interactive thread with priority 15 has a higher relative priority than an
interactive thread with priority 20.
Ans: False
45. A Solaris interactive thread with a time quantum of 80 has a higher priority than an
interactive thread with a time quantum of 120.
Ans: True
46. SMP systems that use multicore processors typically run faster than SMP systems that
place each processor on a separate physical chip.
Ans: True
47. Windows 7 User-mode scheduling (UMS) allows applications to create and manage
threads independently of the kernel.
Ans: True
49. Load balancing algorithms have no impact on the benefits of processor affinity.
Ans: False
50. A multicore system allows two (or more) threads that are in compute cycles to execute
at the same time.
Ans: True
53. In Pthread real-time scheduling, the SCHED_FIFO class provides time slicing among
threads of equal priority.
Ans: False
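For reference, requesting the SCHED_FIFO class from Pthreads can be sketched as below. On Linux this normally requires real-time privileges, so the sketch falls back to the default policy when the request is rejected; the priority value 10 is an arbitrary choice.

    /* compile with: cc -pthread sched_fifo.c */
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        (void)arg;
        puts("running under the requested policy (or the default if it was denied)");
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_attr_t attr;
        struct sched_param param = { .sched_priority = 10 };  /* arbitrary value */

        pthread_attr_init(&attr);
        /* Ask for SCHED_FIFO explicitly instead of inheriting the creator's policy. */
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        pthread_attr_setschedparam(&attr, &param);

        if (pthread_create(&tid, &attr, worker, NULL) != 0) {
            /* Typically fails without real-time privileges; use the defaults. */
            fprintf(stderr, "SCHED_FIFO not permitted; using the default policy\n");
            pthread_create(&tid, NULL, worker, NULL);
        }
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }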
54. In the Linux CFS scheduler, the task with the smallest value of vruntime is considered to
have the highest priority.
Ans: True
55. The length of a time quantum assigned by the Linux CFS scheduler is dependent upon
the relative priority of a task.
Ans: False
56. The Completely Fair Scheduler (CFS) is the default scheduler for Linux systems.
Ans: True