RTOS Unit III Notes

UNIT III - REAL TIME MODELS AND LANGUAGES

Event Based – Process Based and Graph Based Models – Real Time
Languages – RTOS Tasks – RT Scheduling – Interrupt Processing –
Synchronization – Control Blocks – Memory Requirements.

Event Based Model:


 An event is any significant occurrence or change in state for system
hardware or software. An event is not the same as an event notification,
which is a message or notification sent by the system to notify another
part of the system that an event has taken place.

 The source of an event can be internal or external. Events can be generated by a user (a mouse click or keystroke), by an external source (such as a sensor output), or by the system itself (such as loading a program).

How does event-driven architecture work?

 Event-driven architecture is made up of event producers and event consumers. An event producer detects or senses an event and represents the event as a message. It does not know the consumer of the event, or the outcome of the event.
 After an event has been detected, it is transmitted from the event producer
to the event consumers through event channels, where an event
processing platform processes the event asynchronously. Event
consumers need to be informed when an event has occurred. They might
process the event or may only be impacted by it.
 The event processing platform will execute the correct response to an
event and send the activity downstream to the right consumers. This
downstream activity is where the outcome of an event is seen.

Event-driven architecture models:

An event-driven architecture may be based on either a pub/sub model or an event streaming model.

Pub/sub model
This is a messaging infrastructure based on subscriptions to an event stream.
With this model, after an event occurs, or is published, it is sent to subscribers
that need to be informed.
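To make the producer/consumer flow concrete, here is a minimal pub/sub dispatcher sketched in C. The Event type, the subscribe()/publish() functions, and the display_consumer callback are illustrative assumptions of this sketch, not a standard API.

    #include <stdio.h>

    #define MAX_SUBSCRIBERS 8

    typedef struct { int id; const char *payload; } Event;
    typedef void (*Subscriber)(const Event *);    /* consumer callback */

    static Subscriber subscribers[MAX_SUBSCRIBERS];
    static int num_subscribers = 0;

    /* A consumer registers interest in published events. */
    void subscribe(Subscriber s) {
        if (num_subscribers < MAX_SUBSCRIBERS)
            subscribers[num_subscribers++] = s;
    }

    /* The producer publishes without knowing who consumes the event. */
    void publish(const Event *e) {
        for (int i = 0; i < num_subscribers; i++)
            subscribers[i](e);
    }

    static void display_consumer(const Event *e) {
        printf("event %d: %s\n", e->id, e->payload);
    }

    int main(void) {
        subscribe(display_consumer);
        Event e = { 1, "sensor reading ready" };
        publish(&e);    /* the subscriber is informed of the event */
        return 0;
    }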

Event streaming model


With an event streaming model, events are written to a log. Event consumers
don’t subscribe to an event stream. Instead, they can read from any part of the
stream and can join the stream at any time.

There are a few different types of event streaming:

 Event stream processing uses a data streaming platform, like Apache Kafka, to ingest events and process or transform the event stream. Event stream processing can be used to detect meaningful patterns in event streams.
 Simple event processing is when an event immediately triggers an action
in the event consumer.
 Complex event processing requires an event consumer to process a
series of events in order to detect patterns.
Benefits of event-driven architecture:

 An event-driven architecture can help organizations achieve a flexible system that can adapt to changes and make decisions in real time. Real-time situational awareness means that business decisions, whether manual or automated, can be made using all of the available data that reflects the current state of your systems.

 Events are captured as they occur from event sources such as Internet of
Things (IoT) devices, applications, and networks, allowing event
producers and event consumers to share status and response information
in real time.

 Organizations can add event-driven architecture to their systems and applications to improve the scalability and responsiveness of applications and access to the data and context needed for better business decisions.

Process Based Model:

 Process is the most central concept of any operating system (OS).

 Process is an abstraction of a running program.

 Process is an executing program, including the current values of the program counter, registers, and variables.

 Today, almost every computer can do many things at the same time. For instance, you can download files from the Internet, use your Facebook account, and copy songs to a pen drive, all at the same time. This is called multitasking.
 In the process model, all the runnable software on the computer is organized into a number of sequential processes. Each process has its own virtual Central Processing Unit (CPU).

 The real Central Processing Unit (CPU) switches back and forth from process to process. This switching back and forth is called multiprogramming.

 A process is basically an activity. It has a program, input, output, and a state.

Process Creation:

There are four principal events that cause processes to be created:

 System initialization
 Execution of a process creation system call by a running process
 A user request to create a new process
 Initiation of a batch job

Generally, there are some processes that are created whenever an operating
system is booted. Some of those are foreground processes and others are
background processes.

 A foreground process is one that interacts with computer users or computer programmers.

 Background processes have some specific functions.

 In Unix systems, the ps program can be used to list all running processes; in Windows, the Task Manager shows which programs are currently running on the system.

 In addition to the processes that are created at the boot time, new
processes can also be created.

Sometimes a running process will issue system calls to create one or more new processes to help it do its work.

Process Termination:

 When a process has been created, it starts running and does its work.

 The new process will generally terminate due to one of the following conditions:

 Normal exit: the process terminates because it has done its work successfully.

 Error exit: the process terminates because of an error caused by the process, sometimes due to a program bug.

 Fatal exit: the process terminates because it discovers a fatal error.

 Killed by another process: the process terminates because another process executes a system call that tells the operating system (OS) to kill it.
Process Hierarchies:

 In some computer systems, when a process creates another process, the parent process and child process continue to be associated in certain ways. The child process can itself create more processes, forming a process hierarchy.

 In Unix systems, a process group is formed by a process and all of its children and further descendants.

 Whenever a computer user sends a signal from the keyboard, that signal
is then delivered to all the members of the process group that are
currently associated with the keyboard.

OS Process States:

 Since each process is an independent entity with its own program counter and internal state, processes sometimes need to interact with other processes.

 Sometimes, a process may generate output that is used by some other process as its input.

 The state diagram below shows the three states a process may be in.

 Running - Actually using the Central Processing Unit at that instant
 Ready - Runnable, temporarily stopped to let another process run
 Blocked - Unable to run until some external event happens

RTOS Task States:


A task can exist in one of the following states:

 Running

When a task is actually executing, it is said to be in the Running state. It is currently utilising the processor. If the processor on which the RTOS is running has only a single core, then there can only be one task in the Running state at any given time.

 Ready

Ready tasks are those that are able to execute (they are not in the
Blocked or Suspended state) but are not currently executing because a
different task of equal or higher priority is already in the Running state.

 Blocked

A task is said to be in the Blocked state if it is currently waiting for either a temporal or an external event. For example, if a task calls vTaskDelay() it will block (be placed into the Blocked state) until the delay period has expired - a temporal event. Tasks can also block to wait for a queue, semaphore, event group, or task notification event. Tasks in the Blocked state normally have a 'timeout' period, after which the task will time out and be unblocked, even if the event the task was waiting for has not occurred.

Tasks in the Blocked state do not use any processing time and cannot be
selected to enter the Running state.

 Suspended

Like tasks that are in the Blocked state, tasks in the Suspended state cannot be selected to enter the Running state, but tasks in the Suspended state do not have a timeout. Instead, tasks only enter or exit the Suspended state when explicitly commanded to do so through the vTaskSuspend() and vTaskResume() API calls respectively.
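A minimal FreeRTOS-style sketch of these states follows. vTaskDelay(), vTaskSuspend(), and vTaskResume() are the API calls named above; the task body and the 100 ms delay are illustrative assumptions.

    #include "FreeRTOS.h"
    #include "task.h"

    static void vWorkerTask(void *pvParameters)
    {
        (void) pvParameters;
        for (;;)
        {
            /* Running state: the task is executing on the processor. */
            /* ... do some periodic work here ... */

            /* Blocked state: wait for a temporal event (a 100 ms delay).
               The task uses no CPU time until the delay expires. */
            vTaskDelay(pdMS_TO_TICKS(100));
        }
    }

    /* Another task can force the Suspended state explicitly: */
    void vPauseAndResume(TaskHandle_t xWorker)
    {
        vTaskSuspend(xWorker);   /* no timeout: stays Suspended until told */
        vTaskResume(xWorker);    /* back to the Ready state */
    }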

Figure: Valid task state transitions.


RT Scheduling:

Real-time systems are systems that carry out real-time tasks. These tasks need to be performed immediately, with a certain degree of urgency. In particular, these tasks are related to controlling certain events or reacting to them. Real-time tasks can be classified as hard real-time tasks and soft real-time tasks.

A hard real-time task must be performed at a specified time; missing that deadline could lead to huge losses. In soft real-time tasks, a specified deadline can be missed, because the task can be rescheduled or completed after the specified time.

In real-time systems, the scheduler is considered the most important component; it is typically a short-term task scheduler. The main focus of this scheduler is to reduce the response time associated with each of the associated processes rather than handling deadlines.

If a preemptive scheduler is used, the real-time task needs to wait until the corresponding task's time slice completes. In the case of a non-preemptive scheduler, even if the highest priority is allocated to the task, it needs to wait until the completion of the current task, which may be slow or of lower priority and can lead to a longer wait.

A better approach is designed by combining both preemptive and non-preemptive scheduling. This can be done by introducing time-based interrupts in priority-based systems: the currently running process is interrupted at fixed time intervals, and if a higher-priority process is present in the ready queue, it is executed by preempting the current process, as sketched below.
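A minimal sketch of this tick-based preemption in C; the task table, priorities, and function names are illustrative assumptions, not part of any particular RTOS.

    #include <stdio.h>

    #define NUM_TASKS 3

    typedef struct { const char *name; int priority; int ready; } Task;

    static Task tasks[NUM_TASKS] = {
        { "logger",  1, 1 },
        { "control", 3, 0 },
        { "ui",      2, 1 },
    };

    /* Called from the periodic timer interrupt: return the task that
       should run next. If a ready task outranks the one currently
       running, the current task is preempted. */
    static Task *on_timer_tick(Task *running)
    {
        Task *next = running;
        for (int i = 0; i < NUM_TASKS; i++) {
            if (tasks[i].ready &&
                (next == NULL || tasks[i].priority > next->priority)) {
                next = &tasks[i];
            }
        }
        return next;   /* highest-priority ready task wins the CPU */
    }

    int main(void)
    {
        Task *running = on_timer_tick(NULL);   /* "ui" runs first */
        printf("running: %s\n", running->name);
        tasks[1].ready = 1;                    /* "control" becomes ready */
        running = on_timer_tick(running);      /* next tick preempts "ui" */
        printf("running: %s\n", running->name);
        return 0;
    }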

Based on schedulability, implementation (static or dynamic), and the result (self or dependent) of the analysis, scheduling algorithms are classified as follows.
1. Static table-driven approaches:
These algorithms usually perform a static analysis associated with scheduling and capture the schedules that are advantageous. This helps in providing a schedule that can point out, at run time, the task with which execution must be started.

2. Static priority-driven preemptive approaches:
Similar to the first approach, this type of algorithm also uses static analysis of scheduling. The difference is that instead of selecting a particular schedule, it provides a useful way of assigning priorities among various tasks in preemptive scheduling.

3. Dynamic planning-based approaches:
Here, feasible schedules are identified dynamically (at run time). A task carries a certain fixed time interval, and a process is executed if and only if it satisfies the time constraint.

4. Dynamic best effort approaches:
These types of approaches consider deadlines instead of feasible schedules, so a task is aborted if its deadline is reached. This approach is widely used in most real-time systems.

INTERRUPT PROCESSING:

An interrupt is an event that alters the sequence in which a processor executes instructions. It is generated by the hardware of the computer system. When an interrupt occurs:

 The operating system gains control.
 The operating system saves the state of the interrupted process. In many systems this information is stored in the interrupted process's PCB.
 The operating system analyses the interrupt and passes control to the appropriate routine to handle the interrupt.
 The interrupt handler routine processes the interrupt.
 The state of the interrupted process is restored.
 The interrupted process executes.
An interrupt may be initiated by a running process (called a trap), in which case it is said to be synchronous with the operation of the process; or it may be caused by some event that may or may not be related to the running process, in which case it is said to be asynchronous with the operation of the process.
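The steps above can be summarized in a conceptual C sketch. The PCB layout, the handler table, and save_context()/restore_context() (normally written in assembly, stubbed here) are illustrative assumptions, not any real kernel's API.

    #include <stdio.h>

    typedef struct { unsigned long pc, sp, regs[16]; } Context;
    typedef struct { int pid; Context ctx; } PCB;

    typedef void (*InterruptHandler)(int);
    static InterruptHandler handler_table[256];   /* one routine per interrupt class */

    /* Stubs standing in for low-level routines normally written in assembly. */
    static void save_context(Context *ctx)          { (void) ctx; }
    static void restore_context(const Context *ctx) { (void) ctx; }

    static void io_interrupt_handler(int n) { printf("I/O interrupt %d handled\n", n); }

    void os_dispatch_interrupt(int int_number, PCB *interrupted)
    {
        save_context(&interrupted->ctx);        /* save state in the process's PCB */
        handler_table[int_number](int_number);  /* analyse and route to the handler */
        restore_context(&interrupted->ctx);     /* restore state; the process resumes */
    }

    int main(void)
    {
        PCB proc = { 42, {0} };
        handler_table[5] = io_interrupt_handler;  /* register a handler for class 5 */
        os_dispatch_interrupt(5, &proc);          /* simulate an interrupt occurring */
        return 0;
    }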

INTERRUPT CLASSES:

There are six interrupt classes. They are

* SVC (Supervisor Call) interrupts.

These are initiated by a running process that executes the SVC instruction. An SVC is a user-generated request for a particular system service, such as performing input/output, obtaining more storage, or communicating with the system operator.

* I/O interrupts:

These are initiated by the input/output hardware. They signal to the CPU that the status of a channel or device has changed. For example, they are caused when an I/O operation completes or when an I/O error occurs.

* External interrupts:

These are caused by various events, including the expiration of a quantum on an interrupting clock or the receipt of a signal from another processor on a multiprocessor system.

* Restart interrupts:

These occur when the operator presses the restart button or on the arrival of a restart signal-processor instruction from another processor on a multiprocessor system.
* Program check interrupts:

These may occur when a program's machine-language instructions are executed. The problems include division by zero, arithmetic overflow or underflow, data in the wrong format, an attempt to execute an invalid operation code, an attempt to reference a memory location that does not exist, or an attempt to reference a protected resource.

* Machine check interrupts:

These are caused by malfunctioning hardware.

Synchronization:
 Synchronization and messaging provide the necessary communication between tasks in one system and tasks in another system. The event flag is used to synchronize internal activities, while message queues and mailboxes are used to send text messages between systems (a queue-based sketch follows this list). Common data areas utilize semaphores.

 Synchronization is classified into two categories: resource synchronization and activity synchronization. Resource synchronization determines whether access to a shared resource is safe, and, if not, when it will be safe. Activity synchronization determines whether the execution of a multithreaded program has reached a certain state and, if it hasn't, how to wait for and be notified when this state is reached.
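Before looking at each category, here is a minimal FreeRTOS-style sketch of the queue-based messaging mentioned above; the queue length, message type, and task bodies are illustrative assumptions.

    #include "FreeRTOS.h"
    #include "task.h"
    #include "queue.h"

    static QueueHandle_t xMsgQueue;   /* holds up to 8 int messages */

    static void vProducerTask(void *pv)
    {
        (void) pv;
        int reading = 0;
        for (;;) {
            reading++;
            /* Send a message; block up to 10 ticks if the queue is full. */
            xQueueSend(xMsgQueue, &reading, 10);
            vTaskDelay(pdMS_TO_TICKS(100));
        }
    }

    static void vConsumerTask(void *pv)
    {
        (void) pv;
        int received;
        for (;;) {
            /* Block until a message arrives: activity synchronization. */
            if (xQueueReceive(xMsgQueue, &received, portMAX_DELAY) == pdPASS) {
                /* ... act on the received value ... */
            }
        }
    }

    void vStartMessagingDemo(void)   /* call before vTaskStartScheduler() */
    {
        xMsgQueue = xQueueCreate(8, sizeof(int));
        xTaskCreate(vProducerTask, "prod", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
        xTaskCreate(vConsumerTask, "cons", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
    }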
1. Resource Synchronization:

 Access by multiple tasks must be synchronized to maintain the integrity of a shared resource. This process is called resource synchronization, a term closely associated with critical sections and mutual exclusion.
 Mutual exclusion is a provision by which only one task at a time can
access a shared resource. A critical section is the section of code from
which the shared resource is accessed.

 As an example, consider two tasks trying to access shared memory. One task (the sensor task) periodically receives data from a sensor and writes the data to shared memory. Meanwhile, a second task (the display task) periodically reads from shared memory and sends the data to a display. The common design pattern of using shared memory is illustrated in Figure 1.

Figure 1: Multiple tasks accessing shared memory.

 Problems arise if access to the shared memory is not exclusive, and multiple tasks can simultaneously access it. For example, if the sensor task has not completed writing data to the shared memory area before the display task tries to display the data, the display would contain a mixture of data extracted at different times, leading to erroneous data interpretation.
 The section of code in the sensor task that writes input data to the shared
memory is a critical section of the sensor task. The section of code in the
display task that reads data from the shared memory is a critical section
of the display task. These two critical sections are called competing
critical sections because they access the same shared resource.
 A mutual exclusion algorithm ensures that one task's execution of a critical section is not interrupted by the competing critical sections of other concurrently executing tasks (a mutex-based sketch appears at the end of this subsection).
 One way to synchronize access to shared resources is to use a client-
server model, in which a central entity called a resource server is
responsible for synchronization. Access requests are made to the resource
server, which must grant permission to the requestor before the requestor
can access the shared resource. The resource server determines the
eligibility of the requestor based on pre-assigned rules or run-time
heuristics.
 While this model simplifies resource synchronization, the resource server
is a bottleneck. Synchronization primitives, such as semaphores and
mutexes, and other methods introduced in a later section of this chapter,
allow developers to implement complex mutual exclusion algorithms.
These algorithms in turn allow dynamic coordination among competing
tasks without intervention from a third party.
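Returning to the sensor/display example, the following is a minimal FreeRTOS-style sketch of resource synchronization with a mutex. The shared variable and the readSensor()/display() stubs are illustrative assumptions.

    #include "FreeRTOS.h"
    #include "task.h"
    #include "semphr.h"

    static SemaphoreHandle_t xMemMutex;   /* protects the shared memory */
    static int sharedData;                /* the shared resource */

    static int readSensor(void)      { return 42; }   /* stub for real input  */
    static void display(int value)   { (void) value; } /* stub for real output */

    static void vSensorTask(void *pv)     /* writer */
    {
        (void) pv;
        for (;;) {
            int sample = readSensor();
            xSemaphoreTake(xMemMutex, portMAX_DELAY);
            sharedData = sample;          /* critical section: write */
            xSemaphoreGive(xMemMutex);
            vTaskDelay(pdMS_TO_TICKS(10));
        }
    }

    static void vDisplayTask(void *pv)    /* reader */
    {
        (void) pv;
        for (;;) {
            xSemaphoreTake(xMemMutex, portMAX_DELAY);
            int copy = sharedData;        /* critical section: read */
            xSemaphoreGive(xMemMutex);
            display(copy);
            vTaskDelay(pdMS_TO_TICKS(50));
        }
    }

    void vStartSharedMemoryDemo(void)     /* call before vTaskStartScheduler() */
    {
        xMemMutex = xSemaphoreCreateMutex();
        xTaskCreate(vSensorTask,  "sensor",  configMINIMAL_STACK_SIZE, NULL, 2, NULL);
        xTaskCreate(vDisplayTask, "display", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    }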
2. Activity Synchronization:

 In general, a task must synchronize its activity with other tasks to execute a multithreaded program properly. Activity synchronization is also called condition synchronization or sequence control. Activity synchronization ensures that the correct execution order among cooperating tasks is used. Activity synchronization can be either synchronous or asynchronous.

 One representative activity synchronization method is barrier synchronization. For example, in embedded control systems, a complex computation can be divided and distributed among multiple tasks. Some parts of this complex computation are I/O bound, other parts are CPU intensive, and still others are mainly floating-point operations that rely heavily on specialized floating-point coprocessor hardware. These partial results must be collected from the various tasks for the final calculation. The result determines what other partial computations each task is to perform next.
 The point at which the partial results are collected and the duration of the final computation is a barrier. One task can finish its partial computation before other tasks complete theirs, but this task must wait for all other tasks to complete their computations before it can continue.
Barrier synchronization comprises three actions:
 a task posts its arrival at the barrier,
 the task waits for other participating tasks to reach the barrier, and
 the task receives notification to proceed beyond the barrier.

A later section of this chapter shows how to implement barrier synchronization using mutex locks and condition variables.
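As a preview of that section, here is one way such a barrier might look, sketched with POSIX mutex locks and condition variables; the Barrier type and function names are assumptions of this sketch, and the barrier is single-use for simplicity. The comments mark the three actions listed above.

    #include <pthread.h>

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  all_arrived;
        int arrived;   /* tasks that have reached the barrier */
        int total;     /* tasks participating in the barrier  */
    } Barrier;

    void barrier_init(Barrier *b, int total)
    {
        pthread_mutex_init(&b->lock, NULL);
        pthread_cond_init(&b->all_arrived, NULL);
        b->arrived = 0;
        b->total = total;
    }

    void barrier_wait(Barrier *b)
    {
        pthread_mutex_lock(&b->lock);
        b->arrived++;                      /* action 1: post arrival at the barrier */
        if (b->arrived == b->total)
            pthread_cond_broadcast(&b->all_arrived);   /* last task notifies all */
        else
            while (b->arrived < b->total)  /* action 2: wait for the other tasks */
                pthread_cond_wait(&b->all_arrived, &b->lock);
        /* action 3: notified; proceed beyond the barrier. */
        pthread_mutex_unlock(&b->lock);
    }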

 As shown in Figure 2, a group of five tasks participates in barrier synchronization. Tasks in the group complete their partial execution and reach the barrier at various times; however, each task in the group must wait at the barrier until all other tasks have reached the barrier. The last task to reach the barrier (in this example, task T5) broadcasts a notification to the other tasks. All tasks cross the barrier at the same time (conceptually, in a uniprocessor environment, due to task scheduling). We say 'conceptually' because in a uniprocessor environment only one task can execute at any given time. Even though all five tasks have crossed the barrier and may continue execution, the task with the highest priority will execute next.
Figure 2: Visualization of barrier synchronization.

 Another representative activity synchronization mechanism is rendezvous synchronization, which, as its name implies, is an execution point where two tasks meet. The main difference between the barrier and the rendezvous is that the barrier allows activity synchronization among two or more tasks, while rendezvous synchronization is between exactly two tasks.
 In rendezvous synchronization, a synchronization and communication
point called an entry is constructed as a function call. One task defines its
entry and makes it public. Any task with knowledge of this entry can call
it as an ordinary function call. The task that defines the entry accepts the
call, executes it, and returns the results to the caller. The issuer of the
entry call establishes a rendezvous with the task that defined the entry.
 Rendezvous synchronization is similar to synchronization using event registers, which Chapter 8 introduces, in that both are synchronous. The issuer of the entry call is blocked if that call is not yet accepted; similarly, the task that accepts an entry call is blocked when no other task has issued the entry call. Rendezvous differs from the event register in that bidirectional data movement (input parameters and output results) is possible.
 A derivative form of rendezvous synchronization, called simple
rendezvous in this book, uses kernel primitives, such as semaphores or
message queues, instead of the entry call to achieve synchronization. Two
tasks can implement a simple rendezvous without data passing by using
two binary semaphores, as shown in Figure 3.
 Both binary semaphores are initialized to 0. When task #1 reaches the rendezvous, it gives semaphore #2 and then takes semaphore #1. When task #2 reaches the rendezvous, it gives semaphore #1 and then takes semaphore #2. Task #1 has to wait on semaphore #1 if task #2 has not yet arrived, and vice versa, thus achieving rendezvous synchronization.
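A minimal FreeRTOS-style sketch of this simple rendezvous follows; the task bodies are illustrative, and both semaphores are assumed to be created with xSemaphoreCreateBinary(), which leaves them empty (0).

    #include "FreeRTOS.h"
    #include "task.h"
    #include "semphr.h"

    /* Created with xSemaphoreCreateBinary(), so both start empty (0). */
    static SemaphoreHandle_t xSem1, xSem2;

    static void vTask1(void *pv)
    {
        (void) pv;
        /* ... work before the rendezvous ... */
        xSemaphoreGive(xSem2);                  /* announce arrival to task #2 */
        xSemaphoreTake(xSem1, portMAX_DELAY);   /* wait until task #2 arrives  */
        /* ... both tasks continue from here together ... */
    }

    static void vTask2(void *pv)
    {
        (void) pv;
        /* ... work before the rendezvous ... */
        xSemaphoreGive(xSem1);                  /* announce arrival to task #1 */
        xSemaphoreTake(xSem2, portMAX_DELAY);   /* wait until task #1 arrives  */
    }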

Memory Management
 Memory Management is the process of controlling and coordinating
computer memory, assigning portions known as blocks to various
running programs to optimize the overall performance of the system.

 It is the most important function of an operating system that manages primary memory. It helps processes move back and forth between main memory and the execution disk. It helps the OS keep track of every memory location, irrespective of whether it is allocated to some process or remains free.

Uses:

 It allows you to check how much memory needs to be allocated to processes, deciding which process should get memory at what time.
 It tracks whenever memory gets freed or unallocated and updates the status accordingly.
 It allocates space to application routines.
 It also makes sure that these applications do not interfere with each other.
 It helps protect different processes from each other.
 It places programs in memory so that memory is utilized to its full extent.

Memory Management Techniques:

Single Contiguous Allocation

It is the easiest memory management technique. In this method, all of the computer's memory, except a small portion reserved for the OS, is available to a single application. The MS-DOS operating system, for example, allocates memory in this way. An embedded system also runs on a single application.

Partitioned Allocation

It divides primary memory into various memory partitions, which are mostly contiguous areas of memory. Every partition stores all the information for a specific task or job. This method consists of allotting a partition to a job when it starts and deallocating it when it ends.

Paged Memory Management

This method divides the computer's main memory into fixed-size units known as page frames. The hardware memory management unit maps pages into frames, which are allocated on a page basis.
Segmented Memory Management

Segmented memory is the only memory management method that does not provide the user's program with a linear and contiguous address space.

Segments need hardware support in the form of a segment table. The table contains the physical address of each segment in memory, its size, and other data such as access protection bits and status.

What is Swapping?

Swapping is a method in which a process is temporarily swapped from main memory to the backing store. It is later brought back into memory to continue execution.

The backing store is a hard disk or some other secondary storage device that should be big enough to accommodate copies of all memory images for all users. It must also be capable of offering direct access to these memory images.
Benefits of Swapping:

Here, are major benefits/pros of swapping:

 It offers a higher degree of multiprogramming.
 It allows dynamic relocation. For example, if address binding at execution time is being used, processes can be swapped to different locations; with compile-time or load-time binding, processes must be moved back to the same location.
 It helps achieve better utilization of memory.
 It minimizes wastage of CPU time on completion, so it can easily be applied to a priority-based scheduling method to improve its performance.
What is Memory allocation?

Memory allocation is a process by which computer programs are assigned memory or space.

Here, main memory is divided into two types of partitions:

1. Low Memory - The operating system resides in this type of memory.
2. High Memory - User processes are held in high memory.

Partition Allocation:

Memory is divided into different blocks or partitions, and each process is allocated a partition according to its requirement. Partition allocation is an ideal method to avoid internal fragmentation.

Below are the various partition allocation schemes :

1. First Fit: The partition allocated is the first sufficient free block from the beginning of main memory (see the sketch after this list).
2. Best Fit: It allocates the process to the smallest sufficient partition among the free partitions.
3. Worst Fit: It allocates the process to the largest sufficient free partition in main memory.
4. Next Fit: It is similar to First Fit, but it searches for the first sufficient partition starting from the last allocation point.
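Here is a minimal First Fit sketch in C; the partition table contents are made up for illustration. Best Fit and Worst Fit would scan the whole table and pick the smallest or largest sufficient partition instead.

    #include <stddef.h>
    #include <stdio.h>

    typedef struct { size_t size; int in_use; } Partition;

    static Partition table[] = {
        { 100, 0 }, { 500, 0 }, { 200, 0 }, { 300, 0 }, { 600, 0 },
    };
    enum { NUM_PARTITIONS = sizeof table / sizeof table[0] };

    /* First Fit: take the first free partition from the beginning of
       memory that is large enough for the request. */
    int first_fit(size_t request)
    {
        for (int i = 0; i < NUM_PARTITIONS; i++) {
            if (!table[i].in_use && table[i].size >= request) {
                table[i].in_use = 1;
                return i;
            }
        }
        return -1;   /* no free partition can satisfy the request */
    }

    int main(void)
    {
        printf("212 KB -> partition %d\n", first_fit(212));  /* partition 1 (500) */
        printf("417 KB -> partition %d\n", first_fit(417));  /* partition 4 (600) */
        return 0;
    }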

What is Paging?

Paging is a storage mechanism that allows the OS to retrieve processes from secondary storage into main memory in the form of pages. In the paging method, main memory is divided into small fixed-size blocks of physical memory called frames. The size of a frame is kept the same as that of a page to have maximum utilization of main memory and to avoid external fragmentation. Paging is used for faster access to data, and it is a logical concept.
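The page arithmetic itself is simple: a logical address is split into a page number and an offset within that page. A short sketch, assuming 4 KB pages:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096u   /* assumed page (and frame) size: 4 KB */

    int main(void)
    {
        uint32_t logical = 20500;                /* example logical address */
        uint32_t page    = logical / PAGE_SIZE;  /* page number: 20500/4096 = 5 */
        uint32_t offset  = logical % PAGE_SIZE;  /* offset within the page: 20 */
        printf("logical %u -> page %u, offset %u\n", logical, page, offset);
        return 0;
    }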

What is Fragmentation?

Processes are stored in and removed from memory, which creates free memory spaces that are too small to be used by other processes.

After some time, processes cannot be allocated to these memory blocks because the blocks are too small, and the blocks remain unused; this is called fragmentation. This problem happens in a dynamic memory allocation system when the free blocks are quite small, unable to fulfill any request.

Two types of Fragmentation methods are:

1. External fragmentation
2. Internal fragmentation

 External fragmentation can be reduced by rearranging memory contents to place all free memory together in a single block.
 Internal fragmentation can be reduced by assigning the smallest partition that is still large enough to hold the entire process.

Difference Between Static and Dynamic Loading

Static Loading:

 Static loading is used when you want to load your program statically: at compilation time the entire program is linked and compiled without the need for any external module or program dependency.
 At loading time, the entire program is loaded into memory and starts its execution.

Dynamic Loading:

 In a dynamically loaded program, references are provided and the loading is done at the time of execution.
 Routines of the library are loaded into memory only when they are required by the program.

Difference Between Static and Dynamic Linking

Here are the main differences between static and dynamic linking:

Static Linking:

 Static linking is used to combine all the modules required by a program into a single executable image. This helps the OS avoid any runtime dependency.

Dynamic Linking:

 When dynamic linking is used, it is not necessary to link the actual module or library with the program. Instead, a reference to the dynamic module is provided at the time of compilation and linking.
