
SIMAD UNIVERSITY

FACULTY OF COMPUTING

OPERATING SYSTEMS (I)
Quiz Four: Open Book

Chapter 05: CPU Scheduling


Deadline Expired: 28 March 2021, Time: 18:00 (Mogadishu Time)

Full name_________________________________________ID#:_____

Multiple Choice Questions

1. Which of the following scheduling policies allow the OS to interrupt the currently
running process and move it to the Ready state?

A) FIFO B) FCFS

C) non-preemptive D) preemptive
Ans: D

2. A risk with _________ is the possibility of starvation for longer processes, as long as
there is a steady supply of shorter processes.

A) SRTF B) SJF

C) FIFO D) FCFS

Ans: B

3. In the case of the __________ policy, the scheduler always chooses the process that has
the shortest expected remaining processing time.

A) SRTF B) FCFS

C) SPN D) HRRN

Ans: A
4. The operating system must make _________ types of scheduling decisions with respect to
the execution of processes.

A) six B) four

C) five D) three
Ans: D

5. Which of the following is true of cooperative scheduling?


A) It requires a timer.
B) A process keeps the CPU until it releases the CPU either by terminating or by switching to
the waiting state.
C) It incurs a cost associated with access to shared data.
D) A process switches from the running state to the ready state when an interrupt occurs.

Ans: B

6. ____ is the number of processes that are completed per time unit.
A) CPU utilization
B) Response time
C) Turnaround time
D) Throughput

Ans: D

7. ____ scheduling is approximated by predicting the next CPU burst with an exponential
average of the measured lengths of previous CPU bursts.
A) Multilevel queue
B) RR
C) FCFS
D) SJF

Ans: D

8. The ____ scheduling algorithm is designed especially for time-sharing systems.


A) SJF
B) FCFS
C) RR
D) Multilevel queue

Ans: C
9. Which of the following scheduling algorithms must be nonpreemptive?
A) SJF
B) RR
C) FCFS
D) priority algorithms

Ans: C

10. Which of the following is true of multilevel queue scheduling?


A) Processes can move between queues.
B) Each queue has its own scheduling algorithm.
C) A queue cannot have absolute priority over lower-priority queues.
D) It is the most general CPU-scheduling algorithm.

Ans: B

11. The default scheduling class for a process in Solaris is ____.


A) time sharing
B) system
C) interactive
D) real-time

Ans: A

12. Which of the following statements is false with regard to the Linux CFS scheduler?
A) Each task is assigned a proportion of CPU processing time.
B) Lower numeric values indicate higher relative priorities.
C) There is a single, system-wide value of vruntime.
D) The scheduler doesn't directly assign priorities.

Ans: C

13. The Linux CFS scheduler identifies _____________ as the interval of time during which
every runnable task should run at least once.
A) virtual run time
B) targeted latency
C) nice value
D) load balancing
Ans: B

14. In Little's formula, λ represents the ____.


A) average waiting time in the queue
B) average arrival rate for new processes in the queue
C) average queue length
D) average CPU utilization

Ans: B
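For reference, Little's formula states that in a steady-state system the average queue length n, the average arrival rate λ, and the average waiting time W satisfy n = λ × W. For example, if on average 7 processes arrive per second and the queue normally holds 14 processes, the average waiting time per process is W = 14 / 7 = 2 seconds.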

15. __________ involves the decision of which kernel thread to schedule onto which CPU.
A) Process-contention scope
B) System-contention scope
C) Dispatcher
D) Round-robin scheduling

Ans: B

16. With _______ a thread executes on a processor until a long-latency event (e.g., a memory stall) occurs.
A) coarse-grained multithreading
B) fine-grained multithreading
C) virtualization
D) multicore processors

Ans: A

17. A significant problem with priority scheduling algorithms is _____.


A) complexity
B) starvation
C) determining the length of the next CPU burst
D) determining the length of the time quantum

Ans: B

18. The ______ occurs in first-come-first-served scheduling when a process with a long
CPU burst occupies the CPU.
A) dispatch latency
B) waiting time
C) convoy effect
D) system-contention scope

Ans: C

19. The rate of a periodic task in a hard real-time system is ____, where p is a period and t
is the processing time.
A) 1/p
B) p/t
C) 1/t
D) pt

Ans: A

Short Answer Questions

20. The simplest scheduling policy is __________ .


Ans: first-come-first-served (FCFS)

21. Giving each process a slice of time before being preempted is a technique known as
________ .
Ans: Time slicing

22. A common technique for predicting a future value on the basis of a time series of past
values is __________ .
Ans: Exponential Averaging
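For reference, exponential averaging predicts the next CPU burst as τ(n+1) = α · t(n) + (1 - α) · τ(n), where t(n) is the length of the most recent burst, τ(n) is the previous prediction, and α controls the weight given to recent history. A minimal Python sketch, illustrative only, assuming the commonly used value α = 0.5:

    # Exponential averaging of CPU burst lengths (illustrative helper).
    def predict_next_burst(prev_prediction, measured_burst, alpha=0.5):
        # alpha weights the most recent measurement against past history.
        return alpha * measured_burst + (1 - alpha) * prev_prediction

    # Example: previous prediction 10 ms, measured burst 6 ms -> new prediction 8 ms.
    print(predict_next_burst(10, 6))  # 8.0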

23. Explain the concept of a CPU–I/O burst cycle.

Ans:
The lifecycle of a process can be viewed as a sequence of bursts belonging to two different states: every process alternates between CPU execution and I/O operations. A process can therefore be modeled as switching between bursts of CPU execution and periods of I/O wait.

23. What role does the dispatcher play in CPU scheduling?

Ans:
The dispatcher gives control of the CPU to the process selected by the short-term scheduler. To
perform this task, a context switch, a switch to user mode, and a jump to the proper location in
the user program are all required. The dispatch should be made as fast as possible. The time
lost to the dispatcher is termed dispatch latency.

24. Explain the difference between response time and turnaround time. These times are
both used to measure the effectiveness of scheduling schemes.

Ans:
Turnaround time is the sum of the periods a process spends waiting to get into memory,
waiting in the ready queue, executing on the CPU, and doing I/O. Turnaround time essentially
measures the amount of time it takes to execute a process. Response time, on the other hand,
is a measure of the time that elapses between a request and the first response produced.
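In symbols (standard definitions, given here for reference): turnaround time = completion time - arrival time; waiting time = turnaround time - total CPU burst time; response time = time of first CPU allocation - arrival time.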

25. What effect does the size of the time quantum have on the performance of an RR
algorithm?

Ans:
At one extreme, if the time quantum is extremely large, the RR policy is the same as the FCFS
policy. If the time quantum is extremely small, the RR approach is called processor sharing
and creates the appearance that each of n processes has its own processor running at 1/n the
speed of the real processor.
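As a rough numerical illustration (hypothetical figures): with a 10 ms quantum and a 1 ms context-switch cost, about 1/11 ≈ 9% of CPU time is lost to switching; raising the quantum to 100 ms cuts that overhead to under 1%, but pushes RR behaviour toward FCFS.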

26. Explain the process of starvation and how aging can be used to prevent it.

Ans:
Starvation occurs when a process is ready to run but is stuck waiting indefinitely for the CPU.
This can be caused, for example, when higher-priority processes prevent low-priority
processes from ever getting the CPU. Aging involves gradually increasing the priority of a waiting process so that it will eventually reach a high enough priority to execute, provided it waits long enough.
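A minimal Python sketch of aging (illustrative only; it assumes a scheme in which larger numbers mean higher priority, which is not how every system numbers priorities):

    # Illustrative aging: periodically raise the priority of every waiting process
    # so that no ready process can starve indefinitely.
    class Process:
        def __init__(self, name, priority):
            self.name = name
            self.priority = priority

    def age_ready_queue(ready_queue, boost=1, ceiling=99):
        # Called periodically by the scheduler (e.g., once per second).
        for p in ready_queue:
            p.priority = min(p.priority + boost, ceiling)

    queue = [Process("A", 3), Process("B", 90)]
    age_ready_queue(queue)
    print([(p.name, p.priority) for p in queue])  # [('A', 4), ('B', 91)]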

27. Explain the fundamental difference between asymmetric and symmetric multiprocessing.

Ans:
In asymmetric multiprocessing, all scheduling decisions, I/O, and other system activities are
handled by a single processor, whereas in SMP, each processor is self-scheduling.
28. Describe two general approaches to load balancing.

Ans:
With push migration, a specific task periodically checks the load on each processor and, if it
finds an imbalance, evenly distributes the load by moving processes from overloaded to idle
or less-busy processors. Pull migration occurs when an idle processor pulls a waiting task
from a busy processor. Push and pull migration are often implemented in parallel on
load-balancing systems.

29. In Windows, how does the dispatcher determine the order of thread execution?

Ans:
The dispatcher uses a 32-level priority scheme to determine the execution order. Priorities are
divided into two classes. The variable class contains threads having priorities from 1 to 15, and
the real-time class contains threads having priorities from 16 to 31. The dispatcher uses a
queue for each scheduling priority, and traverses the set of queues from highest to lowest until
it finds a thread that is ready to run. The dispatcher executes an idle thread if no ready thread
is found.

30. What is deterministic modeling and when is it useful in evaluating an algorithm?

Ans:
Deterministic modeling takes a particular predetermined workload and defines the
performance of each algorithm for that workload. Deterministic modeling is simple, fast, and
gives exact numbers for comparison of algorithms. However, it requires exact numbers for
input, and its answers apply only in those cases. The main uses of deterministic modeling are
describing scheduling algorithms and providing examples to indicate trends.
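For example (hypothetical workload): three processes with burst times of 10, 5, and 2 ms, all arriving at time 0, give an average waiting time of (0 + 10 + 15) / 3 ≈ 8.3 ms under FCFS but only (0 + 2 + 7) / 3 = 3 ms under SJF, so deterministic modeling shows SJF is better for that particular workload.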

31. What are the two types of latency that affect the performance of real-time systems?


Ans:
Interrupt latency refers to the period of time from the arrival of an interrupt at the CPU to the
start of the routine that services the interrupt. Dispatch latency refers to the amount of time
required for the scheduling dispatcher to stop one process and start another.

32. What are the advantages of the EDF scheduling algorithm over the rate-monotonic
scheduling algorithm? 

Ans:
Unlike the rate-monotonic algorithm, EDF scheduling does not require that processes be
periodic, nor must a process require a constant amount of CPU time per burst. The appeal of
EDF scheduling is that it is theoretically optimal: it can schedule processes so that each process meets its deadline requirements and CPU utilization is 100 percent.

33. Using each of the following scheduling algorithms with the processes given below, calculate the average waiting time:
a. First Come First Served Scheduling
b. Shortest Job First Scheduling
c. Shortest Job First Scheduling (non-preemptive)
d. Shortest Remaining Time First Scheduling (preemptive)
e. Priority Scheduling
f. Round Robin Scheduling with time quantum = 8ms
g. Priority Scheduling with Round Robin time quantum = 15ms

Process   Arrival Time   Burst Time   Priority
P1        0.0            8            3
P2        2.0            4            2
P3        4.0            9            2
P4        5.0            5            1
P5        6.0            19           3

Answers:

First Come First Served Scheduling

Gantt chart: P1 [0-8], P2 [8-12], P3 [12-21], P4 [21-26], P5 [26-45]

Waiting times: P1 = 0, P2 = 8, P3 = 12, P4 = 21, P5 = 26; AWT = 13.4 ms


Shortest Job First Scheduling

Gantt chart: P2 [0-4], P4 [4-9], P1 [9-17], P3 [17-26], P5 [26-45]

Waiting times: P1 = 9, P2 = 0, P3 = 17, P4 = 4, P5 = 26; AWT = 11.2 ms

Shortest Job First Scheduling (non-preemptive)

Gantt chart: P1 [0-8], P2 [8-12], P4 [12-17], P3 [17-26], P5 [26-45]

Waiting times: P1 = 0, P2 = 8, P3 = 17, P4 = 12, P5 = 26; AWT = 12.6 ms


Shortest Remaining Time First Scheduling (preemptive)

Gantt chart: P1 [0-2] (preempted with 6 ms remaining), P2 [2-6], P4 [6-11], P1 [11-17], P3 [17-26], P5 [26-45]

Waiting times: P1 = 9, P2 = 0, P3 = 13, P4 = 1, P5 = 20; AWT = 8.6 ms

Priority Scheduling

Gantt chart: P4 [0-5], P2 [5-9], P3 [9-18], P1 [18-26], P5 [26-45]

Waiting times: P1 = 18, P2 = 5, P3 = 9, P4 = 0, P5 = 26; AWT = 11.6 ms

Round Robin Scheduling with time quantum = 8ms

Gantt chart: P1 [0-8], P2 [8-12], P3 [12-20] (1 ms remaining), P4 [20-25], P5 [25-33] (11 ms remaining), P3 [33-34], P5 [34-42] (3 ms remaining), P5 [42-45]

Waiting times: P1 = 0, P2 = 8, P3 = 25, P4 = 20, P5 = 26; AWT = 15.8 ms

Priority Scheduling with Round Robin time quantum = 15ms

Gantt chart: P4 [0-5], P2 [5-9], P3 [9-18], P1 [18-26], P5 [26-41] (4 ms remaining), P5 [41-45]

Waiting times: P1 = 18, P2 = 5, P3 = 9, P4 = 0, P5 = 26; AWT = 11.6 ms
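
The figures above can also be cross-checked programmatically. A small Python sketch (illustrative only, not part of the quiz) that recomputes the SRTF waiting times from its Gantt chart, using waiting time = completion time - arrival time - burst time:

    # Illustrative check of the SRTF answer: each slice is (process, start, end).
    slices = [("P1", 0, 2), ("P2", 2, 6), ("P4", 6, 11),
              ("P1", 11, 17), ("P3", 17, 26), ("P5", 26, 45)]
    arrival = {"P1": 0, "P2": 2, "P3": 4, "P4": 5, "P5": 6}
    burst   = {"P1": 8, "P2": 4, "P3": 9, "P4": 5, "P5": 19}

    completion = {}
    for proc, start, end in slices:
        completion[proc] = end  # the last slice of each process determines completion

    waiting = {p: completion[p] - arrival[p] - burst[p] for p in burst}
    print(waiting)                               # {'P1': 9, 'P2': 0, 'P3': 13, 'P4': 1, 'P5': 20}
    print(sum(waiting.values()) / len(waiting))  # 8.6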

True/False Questions

34. First-come-first-served (FCFS) is a simple scheduling policy that tends to favor I/O-bound processes over processor-bound processes.
Ans: False
35. One problem with a pure priority scheduling scheme is that lower-priority processes
may suffer starvation.
Ans: True

36. FCFS performs much better for short processes than long ones.
Ans: False

37. Round robin is particularly effective in a general-purpose time-sharing system or transaction-processing system.
Ans: True

38. In preemptive scheduling, the sections of code affected by interrupts must be guarded
from simultaneous use.

Ans: True

39. In RR scheduling, the time quantum should be small with respect to the
context-switch time.

Ans: False

40. The most complex scheduling algorithm is the multilevel feedback-queue algorithm.

Ans: True

41. Load balancing is typically only necessary on systems with a common run queue.

Ans: False

42. Systems using a one-to-one model (such as Windows, Solaris, and Linux) schedule
threads using process-contention scope (PCS).

Ans: False

43. Solaris and Windows assign higher-priority threads/tasks longer time quanta and lower-priority tasks shorter time quanta.
Ans: False

44. A Solaris interactive thread with priority 15 has a higher relative priority than an interactive thread with priority 20.

Ans: False

45. A Solaris interactive thread with a time quantum of 80 has a higher priority than an
interactive thread with a time quantum of 120.

Ans: True

46. SMP systems that use multicore processors typically run faster than SMP systems that place each processor on a separate physical chip.

Ans: True

47. Windows 7 User-mode scheduling (UMS) allows applications to create and manage threads independently of the kernel.

Ans: True

48. Round-robin (RR) scheduling degenerates to first-come-first-served (FCFS) scheduling if the time quantum is too long.

Ans: True

49. Load balancing algorithms have no impact on the benefits of processor affinity.

Ans: False

50. A multicore system allows two (or more) threads that are in compute cycles to execute
at the same time.
Ans: True

51. Providing a preemptive, priority-based scheduler guarantees hard real-time functionality.

Ans: False

52. In hard real-time systems, interrupt latency must be bounded.

Ans: True

53. In Pthread real-time scheduling, the SCHED_FIFO class provides time slicing among
threads of equal priority.

Ans: False

54. In the Linux CFS scheduler, the task with the smallest value of vruntime is considered to have the highest priority.

Ans: True

55. The length of a time quantum assigned by the Linux CFS scheduler is dependent upon
the relative priority of a task.

Ans: False

56. The Completely Fair Scheduler (CFS) is the default scheduler for Linux systems.

Ans: True

End of Quiz Four
