Rts Notes 2

The document discusses several approaches to real-time scheduling, including the clock-driven, weighted round robin, and priority-driven approaches. It also covers static and dynamic systems as well as real-time scheduling algorithms such as earliest deadline first and least slack time.

What is real-time scheduling?

Real-time scheduling manages tasks and processes in a time-sensitive manner. Its main objective is to ensure that tasks are completed within their deadlines, enabling the system to function properly and provide timely responses.

What is a scheduler?

A scheduler is a software or hardware component that decides which processes run, and in what order, based on their priorities. It is responsible for managing the execution of processes as they move between states. Its primary objective is to ensure that all tasks in the system are processed in a timely manner so that the system functions properly and provides responses on time.

Common approaches to real-time scheduling.

Clock-Driven Approach

The clock-driven approach is one of the approaches used in real-time scheduling. In this approach, the scheduler makes decisions based on the system clock rather than the arrival of tasks. Here's how it works:

1. Fixed Time Slots: The scheduling timeline is divided into fixed time slots, often referred to as frames or time quanta. Each time slot has a predetermined duration.

2. Task Scheduling: Real-time tasks are assigned to these time slots based on their
deadlines and execution requirements. The scheduler determines which tasks will
execute during each time slot.
3. Periodic Execution: Tasks that have periodic deadlines are assigned to
appropriate time slots based on their periodicity. For example, if a task has a
deadline every 100 milliseconds, it will be scheduled to execute every 100
milliseconds within the assigned time slot.

4. Interrupt Handling: If an interrupt occurs during a time slot, the scheduler may
need to preempt the current task and handle the interrupt before resuming the
scheduled tasks.

5. Predictable Behavior: Since the scheduling decisions are based on the system
clock and the fixed time slots, the behavior of the system becomes more
predictable, making it suitable for real-time applications where timing guarantees
are critical.

6. Constraints: However, this approach may lead to inefficient use of resources if tasks do not fully utilize their allocated time slots. Additionally, it may not be suitable for handling aperiodic tasks with unpredictable arrival times, as the scheduler operates based on fixed intervals.

Overall, the clock-driven approach provides determinism and predictability in real-time systems, making it useful for applications with stringent timing requirements.
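
To make the frame-based idea concrete, below is a minimal sketch of a clock-driven (cyclic executive) loop in Python. The frame length, the task functions, and the frame table are illustrative assumptions, not values from these notes; a real system would build the frame table offline from the tasks' periods and execution times.

# A sketch of a clock-driven (cyclic executive) loop with a hypothetical frame table.
import time

FRAME_MS = 100  # fixed frame (time slot) length in milliseconds

def read_sensor():    print("sensor read")
def update_display(): print("display updated")
def log_data():       print("data logged")

# Schedule built offline: which tasks run in which frame of the major cycle.
FRAME_TABLE = [
    [read_sensor, update_display],  # frame 0
    [read_sensor],                  # frame 1
    [read_sensor, log_data],        # frame 2
    [read_sensor],                  # frame 3
]

def cyclic_executive(n_major_cycles=2):
    frame = 0
    next_wakeup = time.monotonic()
    for _ in range(n_major_cycles * len(FRAME_TABLE)):
        for task in FRAME_TABLE[frame]:
            task()                                  # run every task assigned to this frame
        frame = (frame + 1) % len(FRAME_TABLE)
        next_wakeup += FRAME_MS / 1000.0            # decisions follow the clock, not task arrivals
        time.sleep(max(0.0, next_wakeup - time.monotonic()))

if __name__ == "__main__":
    cyclic_executive()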

Weighted Round Robin (WRR) approach.

The Weighted Round Robin (WRR) approach is a scheduling algorithm commonly used in network routers and load balancers. It is a variation of the Round Robin (RR) algorithm that assigns weights to different tasks or processes to determine their priority in the scheduling queue. Here's how the Weighted Round Robin approach works:
1. Assignment of Weights: Each task or process in the scheduling queue is
assigned a weight, which represents its relative importance or resource
requirement compared to other tasks. Higher weights indicate higher priority.

2. Round Robin Selection: Tasks are scheduled in a round-robin manner, where each task is given a turn to execute for a specified time quantum.

3. Weighted Selection: In the Weighted Round Robin approach, the scheduler takes into account the weights assigned to tasks when selecting the next task to execute. Tasks with higher weights are given more execution time compared to tasks with lower weights.

4. Execution: During each scheduling cycle, the scheduler selects the next task
based on the round-robin order and the assigned weights. The selected task is
allowed to execute for a predetermined time quantum.

5. Fairness and Resource Allocation: The Weighted Round Robin approach aims to
achieve fairness in resource allocation by considering the relative weights of tasks.
Tasks with higher weights receive proportionally more CPU time compared to
tasks with lower weights.

6. Dynamic Adjustments: The weights assigned to tasks can be adjusted dynamically based on changing system conditions or priorities. This flexibility allows the scheduler to adapt to varying workload demands and resource availability.

Overall, the Weighted Round Robin approach provides a balance between fairness in resource allocation and flexibility in task prioritization, making it suitable for environments where tasks have different resource requirements or priorities.
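
As an illustration, here is a minimal sketch of weighted round-robin selection in Python. The task names and weights are hypothetical; in a router or load balancer the "tasks" would be traffic queues and the weights their configured shares.

from typing import List, Tuple

def weighted_round_robin(tasks: List[Tuple[str, int]], cycles: int) -> List[str]:
    """Return the execution order: each task gets 'weight' quanta per round."""
    order = []
    for _ in range(cycles):
        for name, weight in tasks:
            order.extend([name] * weight)   # higher weight -> more quanta per round
    return order

if __name__ == "__main__":
    tasks = [("A", 3), ("B", 2), ("C", 1)]  # A receives three times the CPU share of C
    print(weighted_round_robin(tasks, cycles=2))
    # ['A', 'A', 'A', 'B', 'B', 'C', 'A', 'A', 'A', 'B', 'B', 'C']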

Priority-Driven Approach

The Priority-Driven Approach is a scheduling method where tasks are executed based on their priority levels. Tasks with higher priority levels are given precedence over tasks with lower priority levels. Here's how the Priority-Driven Approach works:

1. Task Prioritization: Each task in the system is assigned a priority level. The
priority level typically indicates the importance or urgency of the task.

2. Priority Queue: Tasks are organized into a priority queue, where tasks with
higher priority levels are placed at the front of the queue, and tasks with lower
priority levels are placed at the back.

3. Task Selection: The scheduler selects the next task to execute from the priority
queue based on the task's priority level. Tasks with the highest priority are
selected first for execution.

4. Preemption: If a higher-priority task becomes available for execution while a lower-priority task is running, the lower-priority task may be preempted, and the higher-priority task will be scheduled to run instead. This ensures that critical tasks are executed promptly when needed.

5. Priority Adjustment: The priority levels of tasks may be adjusted dynamically based on changing system conditions or task characteristics. For example, tasks that have missed deadlines or have been waiting for an extended period may have their priorities boosted to ensure timely execution.

6. Resource Allocation: The Priority-Driven Approach allows for efficient resource allocation by ensuring that tasks critical to the system's operation are executed with higher priority, while lower-priority tasks may be delayed or deferred as needed.

Overall, the Priority-Driven Approach provides a mechanism for scheduling tasks based on their importance and urgency, allowing critical tasks to be executed promptly to meet system requirements and deadlines.
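
The sketch below illustrates priority-driven selection with a priority queue in Python. The task names and priority values are hypothetical; here a lower number means a higher priority, and ties are broken by arrival order.

import heapq

class PriorityScheduler:
    def __init__(self):
        self._queue = []    # min-heap ordered by (priority, arrival order)
        self._counter = 0   # tie-breaker that preserves arrival order

    def add_task(self, priority: int, name: str) -> None:
        heapq.heappush(self._queue, (priority, self._counter, name))
        self._counter += 1

    def next_task(self):
        """Pop and return the name of the highest-priority ready task, or None."""
        if self._queue:
            _, _, name = heapq.heappop(self._queue)
            return name
        return None

if __name__ == "__main__":
    sched = PriorityScheduler()
    sched.add_task(2, "log_telemetry")
    sched.add_task(0, "handle_alarm")     # most urgent
    sched.add_task(1, "update_display")
    while (task := sched.next_task()) is not None:
        print("running:", task)           # handle_alarm, update_display, log_telemetry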

Static and dynamic systems

1. Static System:
- A static system is one in which the output or behavior does not change over
time in response to external stimuli or inputs.
- In a static system, the relationships between inputs and outputs remain
constant and do not vary with time.
2. Dynamic System:
- A dynamic system is one in which the output changes over time in response to
varying inputs, disturbances, or environmental conditions.
- In a dynamic system, the relationships between inputs and outputs are time-
varying, and the behavior of the system evolves with time.
- Examples of dynamic systems include online reservation systems and traffic management systems.
In summary, the main difference between static and dynamic systems lies in how
they respond to changes over time. Static systems have fixed relationships
between inputs and outputs, while dynamic systems exhibit time-varying behavior
in response to changing conditions.

EDF (Earliest Deadline First) algorithm:


The EDF (Earliest Deadline First) algorithm is a scheduling algorithm designed for
real-time systems. In this algorithm, tasks are scheduled based on their deadlines.
The algorithm prioritizes tasks to be executed next according to their deadlines,
meaning the task with the earliest deadline is executed first.

Here's the basic workflow of the EDF algorithm:

1. Task Arrival: When a new task arrives, its arrival time along with its deadline is
noted.

2. Deadline Computation: An absolute deadline is computed for each task, typically as the task's arrival time plus its relative deadline (the time allowed between arrival and required completion).

3. Scheduling: When scheduling tasks, the scheduler prioritizes tasks based on their deadlines. The task with the earliest deadline is executed first.

4. Execution: Tasks are executed according to their priorities. If a task misses its
deadline, it indicates a scheduling violation.
Some key points of the EDF algorithm are:

- Flexibility: The algorithm is flexible, as it computes deadlines upon the arrival of each new task.

- Optimality: The EDF algorithm is optimal for preemptive scheduling on a single processor: if a feasible schedule exists for a task set, EDF will also produce one.

- Overhead: There is overhead in computing deadlines since it's done upon the
arrival of each new task.

Overall, the EDF algorithm is an important scheduling algorithm for real-time systems that prioritizes deadlines and provides optimal solutions if feasible schedules exist.
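
Below is a minimal sketch of EDF dispatching in Python, simulated non-preemptively on one processor. The job names, release times, execution times, and absolute deadlines are illustrative assumptions; a preemptive EDF scheduler would additionally re-evaluate the choice whenever a new job is released.

def edf_order(jobs):
    """Simulate non-preemptive EDF on one processor.
    jobs: list of dicts with 'name', 'release', 'exec', 'deadline' (absolute)."""
    time_now = 0
    pending = sorted(jobs, key=lambda j: j["release"])
    order = []
    while pending:
        ready = [j for j in pending if j["release"] <= time_now]
        if not ready:
            time_now = pending[0]["release"]            # idle until the next release
            continue
        job = min(ready, key=lambda j: j["deadline"])   # earliest absolute deadline first
        time_now += job["exec"]
        order.append((job["name"], time_now, time_now > job["deadline"]))
        pending.remove(job)
    return order

if __name__ == "__main__":
    jobs = [
        {"name": "J1", "release": 0, "exec": 3, "deadline": 7},
        {"name": "J2", "release": 1, "exec": 2, "deadline": 5},
        {"name": "J3", "release": 2, "exec": 1, "deadline": 10},
    ]
    for name, finish, missed in edf_order(jobs):
        print(name, "finished at", finish, "MISSED" if missed else "on time")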

LST (Least Slack Time) algorithm:

The LST (Least Slack Time) algorithm is another scheduling algorithm used in real-time systems. In this algorithm, tasks are prioritized based on their "slack time." Slack time is the amount of time a task can be delayed without missing its deadline: the time remaining until the deadline minus the task's remaining execution time.

Here's the basic workflow of the LST algorithm:

1. Task Arrival: Upon the arrival of new tasks, their arrival time and deadlines are
noted.
2. Slack Time Computation: Slack time is computed for each task as the time remaining until its deadline minus its remaining execution time.

3. Scheduling: Tasks are prioritized based on their slack time. The task with the
least slack time is executed first.

4. Execution: Tasks are executed according to their priorities. If a task misses its
deadline, it indicates a scheduling violation.

Key points of the LST algorithm include:

- Slack Time: This algorithm relies on determining the slack time for tasks, which
can incur additional overhead.

- Flexibility: LST is flexible because it prioritizes tasks based on their remaining slack rather than on deadlines alone.

- Optimality: Like EDF, the LST algorithm is optimal for preemptive scheduling on a single processor, but it can cause frequent context switches when several tasks have similar slack values.

Overall, the LST algorithm is another important scheduling algorithm for real-time
systems that manages tasks based on their remaining time, thereby improving
system performance and efficiency.
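
As a small illustration, here is a sketch of least-slack-time selection in Python. The task names, deadlines, and remaining execution times are hypothetical; slack is computed as time to deadline minus remaining execution time, matching the definition above.

def slack(task, now):
    """Slack = time remaining until the deadline minus remaining execution time."""
    return (task["deadline"] - now) - task["remaining"]

def pick_next(ready_tasks, now):
    """Choose the ready task with the least slack."""
    return min(ready_tasks, key=lambda t: slack(t, now))

if __name__ == "__main__":
    now = 0
    ready = [
        {"name": "T1", "deadline": 10, "remaining": 4},  # slack = 6
        {"name": "T2", "deadline": 7,  "remaining": 5},  # slack = 2, so T2 runs first
        {"name": "T3", "deadline": 12, "remaining": 3},  # slack = 9
    ]
    print("next task:", pick_next(ready, now)["name"])   # T2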

Rate Monotonic Algorithm (RMA):


The Rate Monotonic Algorithm (RMA) is another real-time scheduling algorithm commonly used in real-time systems. In RMA, each task is assigned a fixed priority based on its rate (the inverse of its period): tasks with higher rates, i.e., shorter periods, are given higher priorities.

Here's an overview of how the Rate Monotonic Algorithm works:

1. Task Assignment: Each task is assigned a priority based on its rate of execution.
Tasks with shorter periods (higher frequency) are assigned higher priorities. The
priority assignment is fixed and determined before execution.

2. Scheduling: During scheduling, the task with the highest priority (shortest
period) is selected for execution. If multiple tasks have the same priority, the task
with the earliest arrival time is chosen.

3. Execution: Tasks are executed according to their priorities. The task scheduler
ensures that higher priority tasks preempt lower priority tasks if necessary.

Key points of the Rate Monotonic Algorithm include:

- Determinism: RMA is deterministic, meaning the task schedule is known and fixed before execution begins. This deterministic nature simplifies analysis and ensures that deadlines are met.

- Optimality: The Rate Monotonic Algorithm is optimal among fixed-priority algorithms. It guarantees schedulability for a set of n periodic tasks as long as the total CPU utilization U = sum(C_i / T_i) does not exceed the Liu and Layland bound n(2^(1/n) - 1), which approaches roughly 69% as n grows.

- Priority Inversion: RMA can suffer from priority inversion issues, where lower
priority tasks hold resources needed by higher priority tasks. Proper resource
locking mechanisms or priority inheritance protocols are required to mitigate
priority inversion.

Overall, the Rate Monotonic Algorithm is a widely used scheduling algorithm in real-time systems, offering simplicity, determinism, and optimality under certain conditions. However, it's important to address priority inversion concerns when implementing RMA in practical systems.
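
The sketch below shows rate-monotonic priority assignment and the Liu and Layland utilization test in Python. The (execution time, period) pairs are illustrative assumptions; the test is sufficient but not necessary, so a task set that fails it may still be schedulable under an exact response-time analysis.

def rm_priorities(tasks):
    """Shorter period means higher priority; returns names from highest to lowest priority."""
    return [name for name, (_, period) in sorted(tasks.items(), key=lambda kv: kv[1][1])]

def ll_bound(n):
    """Liu and Layland utilization bound for n tasks: n * (2**(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

def rm_guaranteed(tasks):
    """Sufficient (not necessary) schedulability test for rate-monotonic priorities."""
    utilization = sum(c / t for c, t in tasks.values())
    return utilization <= ll_bound(len(tasks)), utilization

if __name__ == "__main__":
    tasks = {"T1": (1, 4), "T2": (2, 8), "T3": (1, 10)}   # (execution time, period)
    print(rm_priorities(tasks))                           # ['T1', 'T2', 'T3']
    ok, u = rm_guaranteed(tasks)
    print(f"U = {u:.2f}, bound = {ll_bound(len(tasks)):.2f}, guaranteed schedulable = {ok}")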

Online and offline scheduling


Online and offline scheduling are two approaches to task scheduling, each with its
own characteristics and applications.

1. Offline Scheduling:

- In offline scheduling, the entire set of tasks is known in advance, including their
arrival times, execution times, and deadlines.

- The scheduler has complete information about the tasks before making
scheduling decisions.

- Offline scheduling algorithms aim to find an optimal or near-optimal schedule for the given set of tasks.

- These algorithms often involve complex analysis and optimization techniques to determine the best schedule.
- Examples of offline scheduling algorithms include EDF (Earliest Deadline First),
LLF (Least Laxity First), and RMA (Rate Monotonic Algorithm).

2. Online Scheduling:

- In online scheduling, tasks arrive dynamically over time, and the scheduler
makes scheduling decisions without complete knowledge of future tasks.

- The scheduler must make decisions based on the current state of the system
and any available information about future tasks.

- Online scheduling algorithms aim to make efficient and effective scheduling decisions in real time, often with limited information.

- These algorithms typically focus on heuristics and approximation techniques to make quick decisions.

- Examples of online scheduling algorithms include Round Robin, Shortest Job Next (SJN), and Shortest Remaining Time (SRT).

Comparison:

- Predictability and flexibility: Offline scheduling is more predictable since it has complete information about all tasks in advance, whereas online scheduling is more flexible because it adapts to dynamic task arrivals.

- Optimality: Offline scheduling algorithms can aim for optimality since they have complete information, while online scheduling algorithms often prioritize efficiency and responsiveness over optimality.

- Complexity: Offline scheduling algorithms can be more complex since they involve analyzing the entire task set, while online scheduling algorithms often rely on simpler heuristics due to real-time constraints.

- Resource Usage: Offline scheduling algorithms may be more efficient in terms of resource usage since they can plan ahead, while online scheduling algorithms may need to react quickly to changing conditions, potentially leading to suboptimal resource usage.

Overall, the choice between offline and online scheduling depends on factors such
as the system's requirements, the predictability of task arrivals, and the desired
level of optimality versus responsiveness.

"Sporadic" and "aperiodic" are two terms used to describe different types of tasks
in real-time systems based on their arrival patterns and deadlines:

1. Sporadic Tasks:

- Sporadic tasks are recurring tasks whose arrivals are separated by at least a known minimum inter-arrival time, but whose exact arrival times vary.

- These tasks have deadlines associated with them, but the deadlines are
typically relative to their arrival times.
- The time between consecutive arrivals of sporadic tasks is not constant, and
they may arrive in bursts or clusters.

- An example of a sporadic task is a sensor-event processing task in a monitoring system where readings arrive no more often than once every 100 milliseconds, but the actual time between readings may vary.

2. Aperiodic Tasks:

- Aperiodic tasks are tasks that do not follow a regular periodic pattern in their
arrivals.

- These tasks arrive at irregular intervals, and their arrival times are not
predictable.

- Aperiodic tasks often have deadlines associated with them as well, but the
deadlines may be absolute or relative depending on the application.

- Examples of aperiodic tasks include user input events in an interactive system, interrupts from external devices, or requests for processing from other tasks.

Comparison:

- Arrival Pattern: Sporadic tasks recur with a guaranteed minimum separation between arrivals, while aperiodic tasks arrive at irregular intervals without following any specific pattern.

- Predictability: Sporadic tasks offer some predictability because of their bounded minimum inter-arrival time, while aperiodic tasks are less predictable since their arrivals are irregular.

- Scheduling: Both sporadic and aperiodic tasks require scheduling in real-time systems to meet their deadlines, but different scheduling algorithms and strategies may be used to handle each type effectively.

- Resource Usage: Sporadic tasks may have better utilization of resources since
they arrive in bursts, allowing for efficient resource allocation during active
periods. Aperiodic tasks, on the other hand, may lead to underutilization of
resources during idle periods if not managed effectively.

In real-time systems, it's essential to design scheduling algorithms that can handle
both sporadic and aperiodic tasks efficiently to meet timing constraints and
ensure system reliability and performance.
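
As a small illustration of the difference, here is a Python sketch that treats a task as sporadic by enforcing a declared minimum inter-arrival time and flagging arrivals that violate it; an aperiodic task would have no such guarantee. The 100 ms minimum gap and the arrival times are hypothetical.

MIN_INTERARRIVAL = 0.100   # seconds: declared minimum gap for the sporadic task

class SporadicGuard:
    """Flags releases that violate the declared minimum inter-arrival time."""
    def __init__(self, min_gap):
        self.min_gap = min_gap
        self.last_release = None

    def admit(self, arrival_time):
        if self.last_release is not None and arrival_time - self.last_release < self.min_gap:
            return False                  # arrived too soon: treat as overload and reject
        self.last_release = arrival_time
        return True

if __name__ == "__main__":
    guard = SporadicGuard(MIN_INTERARRIVAL)
    for t in (0.00, 0.05, 0.12, 0.30):
        print(f"arrival at {t:.2f}s admitted: {guard.admit(t)}")
    # 0.00 True, 0.05 False (only 50 ms after the previous one), 0.12 True, 0.30 True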
