ANY THREE SCHEDULING ALGORITHMS

1. First-Come, First-Served (FCFS):

• How it works: Processes are executed in the order they arrive in the ready
queue. The first process that arrives gets executed first, and the next waits until
the CPU is free.
• Advantages: Simple and easy to implement.
• Disadvantages: Can lead to the convoy effect, where shorter processes get
stuck waiting behind longer ones, causing inefficiency.
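A rough Python sketch of FCFS (not part of the original notes; the fcfs helper and the sample jobs P1–P3 are made up for illustration, and all jobs are assumed to arrive at time 0 with known burst times):

    # FCFS: processes run strictly in arrival order.
    def fcfs(processes):
        # processes: list of (name, burst_time) in arrival order;
        # returns (name, waiting_time) pairs.
        clock = 0
        waits = []
        for name, burst in processes:
            waits.append((name, clock))   # the wait so far equals the start time
            clock += burst
        return waits

    # A long job arriving first makes every later job wait (the convoy effect).
    print(fcfs([("P1", 24), ("P2", 3), ("P3", 3)]))  # [('P1', 0), ('P2', 24), ('P3', 27)]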

2. Shortest Job Next (SJN) / Shortest Job First (SJF):

• How it works: The process with the shortest execution time is selected for
execution next.
• Advantages: Minimizes average waiting time; among non-preemptive policies it is
provably optimal for that metric when burst times are known.
• Disadvantages: Requires knowledge of the execution time in advance, and can
lead to starvation for longer processes.
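A similar illustrative sketch for non-preemptive SJF (again an assumption-laden example, not from the notes: all jobs arrive at time 0 and burst times are known in advance):

    # Non-preemptive SJF: always pick the shortest remaining job next.
    def sjf(processes):
        # processes: list of (name, burst_time); returns (name, waiting_time) pairs.
        clock = 0
        waits = []
        for name, burst in sorted(processes, key=lambda p: p[1]):
            waits.append((name, clock))
            clock += burst
        return waits

    print(sjf([("P1", 24), ("P2", 3), ("P3", 3)]))  # [('P2', 0), ('P3', 3), ('P1', 6)]

With these three jobs the average waiting time drops from 17 under FCFS to 3 under SJF, which is why SJF is optimal for that metric when burst times are known.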

3. Round Robin (RR):

• How it works: Each process gets an equal time slice (called a quantum) to
execute. After its time slice, the process is moved to the back of the queue, and
the next process is selected.
• Advantages: Fair and responsive, particularly good for time-sharing systems.
• Disadvantages: Performance depends on the length of the time quantum. If it’s
too short, there’s a lot of context switching; if too long, it behaves like FCFS.
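A minimal Round Robin sketch (illustrative only; the round_robin helper, the quantum of 4, and the sample jobs are assumptions, with every process arriving at time 0):

    from collections import deque

    # Round Robin: run each process for at most one quantum, then requeue it if unfinished.
    def round_robin(processes, quantum):
        # processes: list of (name, burst_time); returns (name, completion_time) pairs.
        queue = deque(processes)
        clock = 0
        finished = []
        while queue:
            name, remaining = queue.popleft()
            run = min(quantum, remaining)
            clock += run
            if remaining > run:
                queue.append((name, remaining - run))  # unfinished: back of the queue
            else:
                finished.append((name, clock))
        return finished

    print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4))
    # [('P2', 7), ('P3', 10), ('P1', 30)]

Raising the quantum toward the longest burst collapses the schedule into FCFS order, while a very small quantum multiplies the number of times each job is requeued, i.e. the context switches.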

Operations on a Process

The basic operations on a process (creation, dispatching, blocking, and termination) move it through the following states:

1. New State: A process starts in the new state while it is being created.
2. Ready State: After creation, it moves to the ready state, where it waits for the
CPU to execute it. Multiple processes can be in this state, waiting for their turn.
3. Running State: When the CPU selects a process for execution, it moves to the
running state, where it starts executing its instructions.
4. Waiting State: If the running process needs to wait for an I/O operation or
another event, it moves to the waiting state until the event is completed.
5. Terminated State: After the process finishes its execution, it moves to the
terminated state, and its resources are released.
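One compact way to picture this life cycle is as a small state machine. The Python sketch below is purely illustrative: the ProcessState names and the TRANSITIONS table are assumptions based on the five states above, not an OS API.

    from enum import Enum

    class ProcessState(Enum):
        NEW = "new"
        READY = "ready"
        RUNNING = "running"
        WAITING = "waiting"
        TERMINATED = "terminated"

    # Allowed moves in the five-state model described above.
    TRANSITIONS = {
        ProcessState.NEW: {ProcessState.READY},                 # admitted by the OS
        ProcessState.READY: {ProcessState.RUNNING},             # dispatched to the CPU
        ProcessState.RUNNING: {ProcessState.READY,              # preempted
                               ProcessState.WAITING,            # blocked on I/O or an event
                               ProcessState.TERMINATED},        # finished
        ProcessState.WAITING: {ProcessState.READY},             # awaited event completed
        ProcessState.TERMINATED: set(),                         # no further transitions
    }

    def can_move(current, target):
        return target in TRANSITIONS[current]

    print(can_move(ProcessState.RUNNING, ProcessState.WAITING))  # True
    print(can_move(ProcessState.WAITING, ProcessState.RUNNING))  # False: must become ready first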

Principles of Concurrency in Operating Systems

Concurrency in operating systems refers to the ability of the OS to manage and
execute multiple tasks or processes at the same time, improving system efficiency and
responsiveness. Modern OSes support concurrency to enhance multitasking, real-time
processing, and parallel computing.

Advantages of Concurrency:

1. Improved Performance: By running multiple tasks in parallel, concurrency
boosts overall system performance, reducing processing time.
2. Better Resource Utilization: System resources like CPU, memory, and I/O
devices are utilized more efficiently as multiple tasks run simultaneously.
3. Enhanced Responsiveness: Particularly useful for real-time systems and
interactive applications (e.g., gaming), concurrency lets the system keep responding
to user input while other work continues in the background.
4. Scalability: Concurrency helps systems handle a growing number of tasks and users
without degrading performance.
5. Fault Tolerance: Because tasks run independently, a failure in one task does not
have to halt the others, improving system fault tolerance.

Problems in Concurrency:

1. Race Conditions: Occur when multiple processes or threads access shared data
at the same time and at least one of them modifies it, so the outcome depends
unpredictably on the order of execution.
2. Deadlocks: Happen when processes wait indefinitely for each other to release
resources, resulting in a circular wait and preventing progress.
3. Starvation: When a process is unable to access a required resource due to
other processes continuously using it, causing the process to make no progress.
4. Priority Inversion: A low-priority process holds a resource needed by a high-
priority process, blocking the latter and causing performance issues.
5. Deadlock Avoidance: Methods to prevent deadlocks may lead to inefficient
resource usage or even cause starvation of some processes.
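To make the first problem concrete, here is a small illustrative Python sketch (not from the original notes) of a race condition on a shared counter, along with the usual fix using a lock; the function names and the iteration count are made up for demonstration:

    import threading

    counter = 0
    lock = threading.Lock()

    def unsafe_increment(n):
        global counter
        for _ in range(n):
            counter += 1          # read-modify-write on shared data: a race

    def safe_increment(n):
        global counter
        for _ in range(n):
            with lock:            # the lock makes each increment atomic
                counter += 1

    threads = [threading.Thread(target=safe_increment, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # 200000 with the lock; updates can be lost if unsafe_increment is used instead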
