
UNIT I

1.1 OS Structures:
1.1.1 Monolithic Operating System

 A monolithic kernel is an operating system architecture in which the entire operating system runs in kernel space.

 The monolithic model differs from other operating system architectures (such as the microkernel architecture) in that it alone defines a high-level virtual interface over the computer hardware.

 A set of primitives or system calls implements all operating system services, such as process management, concurrency, and memory management.

1.1.2 Layered Approach

 The operating system is divided into a number of layers (levels), each built on top of lower layers.

 The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.

 With modularity, layers are selected such that each uses functions (operations) and services of only
lower-level layers.

 The main difficulty is deciding in what order to place the layers: no layer can call upon the services of any higher layer, so many chicken-and-egg situations may arise.
1.1.3 Microkernel System Structure

 Move as much functionality as possible from the kernel into “user” space.

 Only a few essential functions remain in the kernel: primitive memory management (address spaces), I/O and interrupt management, inter-process communication (IPC), and basic scheduling.

 Mach was the first and most widely known microkernel, and it now forms a major component of Mac OS X.

Benefits:

 Easier to extend a microkernel

 Easier to port the operating system to new architectures

 More reliable (less code is running in kernel mode)

 More secure

Detriments:

 Performance overhead of user space to kernel space communication


1.1.4 Virtual Machines

 A virtual machine takes the layered approach to its logical next step: it treats the hardware and the operating system kernel as though they were all hardware.

 A virtual machine provides an interface identical to the underlying bare hardware.

 The host operating system creates the illusion that each process has its own processor (and its own virtual memory).

 Each guest is provided with a (virtual) copy of the underlying computer.

1.1.5 Exokernel

 Operating systems generally present hardware resources to applications through high-level abstractions such as (virtual) file systems.

 The idea behind exokernels is to force as few abstractions as possible on application developers,
enabling them to make as many decisions as possible about hardware abstractions.

 Exokernels are tiny, since functionality is limited to ensuring protection and multiplexing of
resources, which is considerably simpler than conventional microkernels' implementation of
message passing and monolithic kernels' implementation of high-level abstractions.

Some of the features of exokernel operating systems include:

 Better support for application control

 Separates security from management

 Abstractions are moved securely to an untrusted library operating system

 Provides a low-level interface

The benefits of the exokernel operating system include:

 Improved performance of applications

 More efficient use of hardware resources through precise resource allocation and revocation
1.2 SERVICES PROVIDED BY AN OS:

1.2.1 Services for the user


 User interface
o Almost all operating systems have a user interface (UI). This interface can take several
forms.
o One is a command-line interface (CLI), which uses text commands and a method for
entering them (say, a program to allow entering and editing of commands).
o Another is a batch interface, in which commands and directives to control those
commands are entered into files, and those files are executed.
o Most commonly, a graphical user interface (GUI) is used. Here, the interface is a
window system with a pointing device to direct I/O, choose from menus, and make
selections, and a keyboard to enter text. Some systems provide two or all three of these
variations.
 Program Execution

o The purpose of a computer system is to allow the user to execute programs, so the
operating system provides an environment where the user can conveniently run them.
The user does not have to worry about memory allocation, multitasking, or anything
else; these things are taken care of by the operating system.

o Running a program involves allocating and deallocating memory and CPU scheduling in
the case of multiple processes. These functions cannot be given to user-level programs, so
user-level programs cannot help the user run programs independently without help
from the operating system.

 I/O Operations

o Each program requires input and produces output, which involves the use of I/O. The
operating system hides the details of the underlying I/O hardware from the user; all the
user sees is that the I/O has been performed.

 File System Manipulation

o The output of a program may need to be written to new files, or input taken from
existing files. The operating system provides this service, so the user does not have to
worry about secondary-storage management: the user gives a command for reading from
or writing to a file and sees the task accomplished.

 Communications
o There are instances where processes need to communicate with each other to exchange
information. This may be between processes running on the same computer or on
different computers.

o By providing this service, the operating system relieves the user of the worry of passing
messages between processes. Even when messages need to be passed to processes on
other computers through a network, user programs can do so using the communication
facilities the operating system provides.

 Error Detection

o An error in one part of the system may cause the whole system to malfunction. To
avoid such situations, the operating system constantly monitors the system for errors,
relieving the user of the worry that errors will propagate to various parts of the
system and cause malfunctions.

o This service cannot be left to user programs, because it involves monitoring, and in
some cases altering, areas of memory, or deallocating the memory of a faulty process.

1.2.2 Services for the system


 Resource allocation
o When there are multiple users or multiple jobs running at the same time, resources must
be allocated to each of them.
o Many different types of resources are managed by the operating system. Some (such as
CPU cycles, main memory, and file storage) may have special allocation code, whereas
others (such as I/O devices) may have much more general request and release code.
 Accounting
o We want to keep track of which users use how much and what kinds of computer
resources.
o This record keeping may be used for accounting (so that users can be billed) or simply
for accumulating usage statistics.
 Protection and security
o The owners of information stored in a multiuser or networked computer system may
want to control use of that information.
o When several separate processes execute concurrently, it should not be possible for one
process to interfere with the others or with the operating system itself. Protection
involves ensuring that all access to system resources is controlled.
1.3 OS Operations:
1.3.1 Process Creation

 A parent process creates children processes, which, in turn, create other processes, forming a tree of processes.

 Resource sharing: three options are possible.
o Parent and children share all resources.
o Children share a subset of the parent's resources.
o Parent and child share no resources.

 Execution: two options are possible.
o Parent and children execute concurrently.
o Parent waits until the children terminate.

 Address space: two options are possible.
o The child is a duplicate of the parent.
o The child has a new program loaded into it.

 UNIX examples (a sketch follows this list):
o The fork system call creates a new process.
o The exec system call is used after a fork to replace the process's memory space with a new program.
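
As a concrete illustration, here is a minimal C sketch (assuming a POSIX system; running ls in the child is just an example) in which fork creates the child and exec replaces its memory image while the parent waits:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                         /* create a new process */
    if (pid < 0) {                              /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {                      /* child: a duplicate of the parent */
        execlp("ls", "ls", "-l", (char *)NULL); /* replace memory space with a new program */
        perror("exec");                         /* reached only if exec fails */
        exit(1);
    } else {                                    /* parent waits until the child terminates */
        int status;
        wait(&status);
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}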

1.3.2 Process Termination

 Process executes last statement and asks the operating system to delete it (exit)

o Output data from child to parent (via wait)

o Process’ resources are deallocated by operating system

 Parent may terminate execution of children processes (abort)

o Child has exceeded allocated resources

o Task assigned to child is no longer required


o If parent is exiting

 Some operating systems do not allow a child to continue if its parent terminates

 All children terminated - cascading termination

1.4 Types of OS:


1.4.1 Mainframe Operating System

These OSes are used in big data centers and can handle thousands of gigabytes of memory. In a batch-processing
operating system, interaction between the user and the processor is limited, or absent altogether, during
the execution of the work. Data and programs that need to be processed are bundled and collected as a ‘batch’
and executed together.

Batch processing operating systems are ideal in situations where:

 There are large amounts of data to be processed.


 Similar data needs to be processed.
 Similar processing is involved when executing the data.

1.4.2 Server OS

 They run on server machines.

 They allow many users to connect over a network simultaneously and share hardware & software
resources.

1.4.3 Multiprocessor OS

An operating system that is capable of supporting and utilizing more than one computer processor. Some
examples of multiprocessing operating systems are Linux, Unix, and Windows 2000.

1.4.4 Personal Computer OS

 Their job is to provide a good interface to a single user.

 Used mainly for spreadsheets, multimedia, word processing, and the internet.

1.4.5 Real-time Operating System

It is a multitasking operating system that aims at executing real-time applications. Real-time operating
systems often use specialized scheduling algorithms so that they can achieve a deterministic nature of
behavior. The main objective of real-time operating systems is their quick and predictable response to events.
They either have an event-driven or a time-sharing design: an event-driven system switches between tasks
based on their priorities, while a time-sharing operating system switches tasks based on clock interrupts. They
are of two types:
 Hard real time systems

 Soft real time systems

1.4.6 Embedded OS

 The operating systems designed for being used in embedded computer systems are known as
embedded operating systems.

 They are designed to operate on small machines like PDAs with less autonomy. They are able to
operate with a limited number of resources. They are very compact and extremely efficient by
design. Windows CE, FreeBSD and Minix 3 are some examples of embedded operating systems.

1.4.7 Smart Card OS


 They run on credit card sized devices.
 They have severe memory and processing power constraints.
 Most of them are built to handle a single function.
1.5 Process Management:
1.5.1 Process Control Block (PCB)

Each process is represented in the operating system by a process control block, which contains the following
information associated with each process (an illustrative sketch follows the list):

 Process state: The state could be new/ready/waiting/running/terminated.

 Program Counter: It stores the address of the next instruction to be executed.

 CPU registers: The registers like accumulators, index registers, stack pointer and all the general
purpose registers.

 CPU scheduling information: The information includes process priority and other scheduling
parameters.

 Memory-management information: the information may include the value of base and limit
registers, page tables and segment tables.

 Accounting information: the information includes the amount of CPU time used and the process number.

 I/O status information: the list of I/O devices allocated to the process.
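
The exact layout of a PCB is kernel-specific (Linux, for instance, keeps this information in a structure called task_struct); the C struct below is only an illustrative sketch of the fields listed above, with invented names and sizes:

#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {                              /* hypothetical layout, for illustration only */
    int             pid;                  /* process number (accounting) */
    enum proc_state state;                /* new / ready / running / waiting / terminated */
    uint64_t        program_counter;      /* address of the next instruction */
    uint64_t        registers[16];        /* accumulators, index registers, stack pointer, ... */
    int             priority;             /* CPU-scheduling information */
    uint64_t        base_reg, limit_reg;  /* memory-management information */
    uint64_t        cpu_time_used;        /* accounting information */
    int             io_devices[8];        /* I/O status: devices allocated to the process */
};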
1.5.2 Process States

As a process executes, it changes states

 New: The process is being created

 Running: Instructions are being executed

 Waiting: The process is waiting for some event to occur

 Ready: The process is waiting to be assigned to a processor

 Terminated: The process has finished execution

1.5.3 Context Switch

 When CPU switches to another process, the system must save the state of the old process and load
the saved state for the new process

 Context-switch time is overhead; the system does no useful work while switching

 Time dependent on hardware support

1.6 Type of Schedulers

A scheduler is a type of system software that allows you to handle process scheduling.

1.6.1 Long Term Scheduler


 Also known as the job scheduler.
 Selects processes from the job queue and loads them into memory for execution.
 It also regulates the degree of multiprogramming.
 Its main goal is to offer a balanced mix of jobs, such as processor-bound and I/O-bound jobs.
1.6.2 Medium Term Scheduler
 Medium-term scheduling is an important part of swapping.
 It handles the swapped-out processes.
 A running process that makes an I/O request may become suspended.
 To remove a suspended process from memory and make space for other processes, it is moved to secondary storage.
1.6.3 Short Term Scheduler
 Also known as the CPU scheduler.
 Its main goal is to boost system performance according to a chosen set of criteria.
 It selects from among the processes that are ready to execute and allocates the CPU to one of them.
 The dispatcher then gives control of the CPU to the process selected by the short-term scheduler.

Difference between Schedulers

Long-Term | Short-Term | Medium-Term
Also known as the job scheduler. | Also known as the CPU scheduler. | Also known as the swapping scheduler.
Either absent or minimal in a time-sharing system. | Insignificant in a time-sharing system. | An element of time-sharing systems.
Speed is less than that of the short-term scheduler. | Speed is the fastest of the three. | Offers medium speed.
Selects processes from the job pool and loads them into memory. | Selects only processes that are in the ready state. | Helps to send a process back into memory (swap-in).
Offers full control. | Offers less control. | Reduces the level of multiprogramming.

1.7 Types of Scheduling:

1.7.1 Preemptive Scheduling

 Preemptive scheduling is used when a process switches from running state to ready state or from
waiting state to ready state.

 The resources (mainly CPU cycles) are allocated to the process for a limited amount of time and
then taken away; the process is placed back in the ready queue if it still has CPU burst time
remaining.

 That process stays in the ready queue until it gets its next chance to execute.

1.7.2 Non-preemptive Scheduling


 Non-preemptive Scheduling is used when a process terminates, or a process switches from running
to waiting state.
 In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the
CPU until it terminates or reaches a waiting state.
 Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its
execution; instead, it waits until the process completes its CPU burst and only then allocates the
CPU to another process.

1.8 Scheduling Criteria

There are several different criteria to consider when trying to select the "best" scheduling algorithm for a
particular situation and environment, including:

 CPU utilization – keep the CPU as busy as possible


 Throughput – number of processes that complete their execution per time unit
 Turnaround time – amount of time to execute a particular process
 Waiting time – amount of time a process has been waiting in the ready queue
 Response time – amount of time it takes from when a request was submitted until the first response
is produced, not output (for time-sharing environment)

In general, one wants to optimize the average value of a criterion (maximize CPU utilization and throughput,
and minimize all the others). However, sometimes one wants to do something different, such as minimize
the maximum response time.

1.9 Scheduling Algorithms:


1.9.1 First Come First Serve
 It is the simplest scheduling algorithm.
 FCFS simply queues processes in the order that they arrive in the ready queue.

 Since context switches only occur upon process termination, and no reorganization of the process
queue is required, scheduling overhead is minimal.

 Throughput can be low, since long processes can hog the CPU

 Turnaround time, waiting time, and response time can be high for the same reasons (see the worked sketch after this list)

 No prioritization occurs, thus this system has trouble meeting process deadlines.

 The lack of prioritization means that as long as every process eventually completes, there is no
starvation.
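
To make the waiting-time behaviour concrete, here is a small C sketch (the burst times are made up, and all processes are assumed to arrive at time 0) that computes the average waiting and turnaround times under FCFS:

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};               /* CPU bursts in arrival (queue) order */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        total_wait += wait;                 /* process i waits for all earlier bursts */
        total_turnaround += wait + burst[i];
        wait += burst[i];
    }
    printf("average waiting time    = %.2f\n", (double)total_wait / n);
    printf("average turnaround time = %.2f\n", (double)total_turnaround / n);
    return 0;
}

For these numbers the average waiting time is 17; reordering the same bursts as 3, 3, 24 drops it to 3, which is exactly the "long processes hog the CPU" effect noted above.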

1.9.2 Shortest Job First


 In this strategy the scheduler arranges processes with the least estimated processing time remaining
to be next in the queue. This requires advance knowledge or estimations about the time required for
a process to complete.

 If a shorter process arrives during another process' execution, the currently running process may be
interrupted (known as preemption), dividing that process into two separate computing blocks. This
creates excess overhead through additional context switching. The scheduler must also place each
incoming process into a specific place in the queue, creating additional overhead.

 This algorithm is designed for maximum throughput in most scenarios.

 Waiting time and response time increase as the process's computational requirements increase.
Overall waiting time is smaller than under FCFS, however, since no process has to wait for the
termination of the longest process.

 No particular attention is given to deadlines; the programmer can only attempt to make processes
with deadlines as short as possible.
 Starvation is possible, especially in a busy system with many small processes being run.

1.9.3 Priority Scheduling


 The OS assigns a fixed priority rank to every process, and the scheduler arranges the processes in
the ready queue in order of their priority. Lower priority processes get interrupted by incoming
higher priority processes.

 It has no particular advantage in terms of throughput over FCFS scheduling.

 Waiting time and response time depend on the priority of the process. Higher priority processes
have smaller waiting and response times.

 Deadlines can be met by giving processes with deadlines a higher priority.

 Starvation of lower priority processes is possible with large amounts of high priority processes
queuing for CPU time.

1.9.4 Round-robin scheduling


 The scheduler assigns a fixed time unit per process, and cycles through them.

 RR scheduling involves extensive overhead, especially with a small time unit.

 Balanced throughput between FCFS and SJF, shorter jobs are completed faster than in FCFS and
longer processes are completed faster than in SJF.

 Fastest average response time; waiting time depends on the number of processes, not on the average
process length (see the simulation sketch after this list).

 Because of high waiting times, deadlines are rarely met in a pure RR system.

 Starvation can never occur, since no priority is given. Order of time unit allocation is based upon
process arrival time, similar to FCFS.
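
As an illustrative sketch (the burst times and the quantum of 4 are made up), this C loop simulates round-robin scheduling and prints each process's completion time:

#include <stdio.h>

int main(void) {
    int remaining[] = {24, 3, 3};           /* remaining burst time per process */
    int n = sizeof remaining / sizeof remaining[0];
    int quantum = 4, clock = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {       /* cycle through the processes in order */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            clock += slice;                 /* run process i for one time slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("P%d completes at t=%d\n", i, clock);
                done++;
            }
        }
    }
    return 0;
}

With these numbers the two short jobs finish at t=7 and t=10 instead of waiting behind the 24-unit job as they would under FCFS.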

1.9.5 Multilevel feedback queue

 This is used for situations in which processes are easily divided into different groups.
 For example, a common division is made between foreground (interactive) processes and
background (batch) processes. These two types of processes have different response-time
requirements and so may have different scheduling needs.

1.10 Interprocess Communication:

Processes frequently need to communicate with other processes. For example, in a shell pipeline, the output
of the first process must be passed to the second process, and so on down the line. Thus there is a need for
communication between processes, preferably in a well-structured way not using interrupts.
1.10.1 Message Passing vs Shared Memory

In the shared-memory model, cooperating processes establish a region of memory that both can access, and they exchange information by reading and writing data in that region. In the message-passing model, processes exchange information through send and receive primitives provided by the operating system; this is easier to use between machines connected by a network, but each exchange typically requires a kernel call, whereas shared memory is fast once the region is set up.

1.10.2 Race Conditions

Situations where two or more processes are reading or writing some shared data and the final result depends
on who runs precisely when are called race conditions. Debugging programs containing race conditions is
very complex.
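
A minimal demonstration, as a sketch using POSIX threads: two threads increment a shared counter with no synchronization. Because counter++ is a read-modify-write sequence rather than a single indivisible step, increments are lost whenever the threads interleave, and the printed total is usually below the expected 2000000:

#include <pthread.h>
#include <stdio.h>

long counter = 0;                        /* shared data */

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                       /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}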

1.10.3 Critical Sections

The problem of avoiding race conditions can also be formulated in an abstract way. Part of the time, a
process is busy doing internal computations and other things that do not lead to race conditions. However,
sometimes a process may be accessing shared memory or files. That part of the program where the shared
memory is accessed is called the critical region or critical section. If we could arrange matters such that no
two processes were ever in their critical regions at the same time, we could avoid race conditions.

1.10.4 Mutual Exclusion with Busy Waiting

In this section we will examine various proposals for achieving mutual exclusion, so that while one process
is busy updating shared memory in its critical region, no other process will enter its critical region and
cause trouble.

1.10.4.1 Disabling Interrupts


 The simplest solution is to have each process disable all interrupts just after entering its critical
region and enable them just before leaving it.
 With interrupts disabled, no clock interrupts can occur. Thus, once a process has disabled interrupts,
it can examine and update the shared memory without fear that any other process will intervene.

Disadvantages:

 It is unwise to give user processes the power to turn off interrupts.

 If the system is a multiprocessor with two or more CPUs, disabling interrupts affects only the CPU
that executed the disable instruction; the other CPUs will continue running and can still access the
shared memory.
1.10.4.2 Lock Variables
 Consider having a single, shared, (lock) variable, initially 0. When a process wants to enter its
critical region, it first tests the lock.
 If the lock is 0, the process sets it to 1 and enters the critical region. If the lock is already 1, the
process just waits until it becomes 0. Thus, a 0 means that no process is in its critical region and a 1
means that some process is in its critical region.
Unfortunately, this idea contains exactly the same fatal flaw that we saw in the spooler directory. Suppose
that one process reads the lock and sees that it is 0. Before it can set the lock to 1, another process is
scheduled, runs, and sets the lock to 1. When the first process runs again, it will also set the lock to 1, and
two processes will be in their critical regions at the same time.
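
Expressed as a C sketch, the flawed protocol looks like this; the comment marks the window in which a second process can be scheduled between the test and the set:

int lock = 0;                    /* shared: 0 = free, 1 = taken */

void enter_region(void) {
    while (lock != 0)            /* busy-wait until the lock looks free */
        ;
    /* another process can be scheduled HERE, also see lock == 0, */
    /* and then both set it to 1 and enter their critical regions */
    lock = 1;
}

void leave_region(void) {
    lock = 0;                    /* release the lock */
}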

1.10.4.3 Strict Alternation

In this proposal, an integer variable turn, initially 0, keeps track of whose turn it is to enter the critical
region. This solution requires that the two processes strictly alternate in entering their critical regions, for
example, in spooling files; neither one would be permitted to spool two files in a row. While this algorithm does
avoid all races, it is not really a serious candidate as a solution, because a process outside its critical section
can prohibit another process from entering its critical section.

1.10.4.4 TSL Instruction

Now let us look at a proposal that requires a little help from the hardware. Many computers, especially
those designed with multiple processors in mind, have an instruction

TSL RX, LOCK

(Test and Set Lock) that works as follows: it reads the contents of the memory word LOCK into register
RX and then stores a nonzero value at the memory address LOCK. The operations of reading the word and
storing into it are guaranteed to be indivisible. No other processor can access the memory word until the
instruction is finished. The CPU executing the TSL instruction locks the memory bus to prohibit other
CPUs from accessing memory until it is done.

To use the TSL instruction, we will use a shared variable, LOCK, to coordinate access to shared memory.
When LOCK is 0, any process may set it to 1 using the TSL instruction and then read or write the shared
memory. When it is done, the process sets LOCK back to 0 using an ordinary move instruction.

A four-instruction subroutine in a fictitious (but typical) assembly language is shown below. The first
instruction copies the old value of LOCK to the register and then sets LOCK to 1. Then the old value is
compared with 0. If it is nonzero, the lock was already set, so the program just goes back to the beginning
and tests it again. Sooner or later it will become 0 (when the process currently in its critical region is done
with its critical region), and the subroutine returns, with the lock set. Clearing the lock is simple. The
program just stores a 0 in LOCK. No special instructions are needed.

enter_region:

TSL REGISTER,LOCK |copy LOCK to register and set LOCK to 1

CMP REGISTER,#0 |was LOCK zero?

JNE ENTER_REGION |if it was non zero, LOCK was set, so loop
RET |return to caller; critical region entered

leave_region:

MOVE LOCK,#0 |store a 0 in LOCK

RET |return to caller
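
For comparison, modern C exposes the same indivisible read-and-set portably; in this sketch, C11's atomic_flag_test_and_set plays the role of the TSL instruction:

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;          /* shared lock word, initially clear */

void enter_region(void) {
    while (atomic_flag_test_and_set(&lock))   /* atomically read the old value and set it */
        ;                                     /* loop (busy-wait) while it was already set */
}

void leave_region(void) {
    atomic_flag_clear(&lock);                 /* store 0: release the lock */
}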

1.10.5 Sleep and Wakeup

Not only do the above approaches waste CPU time, but they can also have unexpected effects. Consider a
computer with two processes, H, with high priority and L, with low priority, which share a critical region.
The scheduling rules are such that H is run whenever it is in ready state. At a certain moment, with L in its
critical region, H becomes ready to run (e.g., an I/O operation completes). H now begins busy waiting, but
since L is never scheduled while H is running, L never gets the chance to leave its critical region, so H
loops forever. This situation is sometimes referred to as the priority inversion problem.

Now let us look at some inter process communication primitives that block instead of wasting CPU time
when they are not allowed to enter their critical regions. One of the simplest is the pair sleep and wakeup.
Sleep is a system call that causes the caller to block, that is, be suspended until another process wakes it up.
The wakeup call has one parameter, the process to be awakened. Alternatively, both sleep and wakeup each
have one parameter, a memory address used to match up sleeps with wakeups.

1.10.6 Semaphores

A semaphore is simply a non-negative variable that is shared between threads. It is used to solve the
critical-section problem and to achieve process synchronization in a multiprocessing environment.
Semaphores are of two types:

 Binary Semaphore
o This is also known as mutex lock.
o It can have only two values – 0 and 1. Its value is initialized to 1. It is used to implement the
solution of critical section problem with multiple processes.
 Counting Semaphore
o Its value can range over an unrestricted domain.
o It is used to control access to a resource that has multiple instances.

Now, look at two operations which can be used to access and change the value of the semaphore variable.

Some points regarding the P and V operations:

1. The P operation is also called the wait, sleep, or down operation, and the V operation is also called the
signal, wake-up, or up operation.

2. Both operations are atomic, and the semaphore s here is initialized to one. Atomic means that the
read, modify, and update of the variable happen as one indivisible step with no preemption, i.e.
between the read, modify, and update no other operation can run that might change the variable.

3. A critical section is surrounded by the two operations to implement process synchronization: the
critical section of a process sits between its P and V operations.

Now, let us see how a semaphore implements mutual exclusion. Let there be two processes P1 and P2 and a
semaphore s initialized to 1. If P1 enters its critical section, the value of s becomes 0. If P2 then wants to
enter its own critical section, it must wait until s > 0, which can only happen when P1 finishes its critical
section and calls the V operation on s. This way mutual exclusion is achieved; the sketch below illustrates
the binary-semaphore case.
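
As a sketch using POSIX unnamed semaphores, with two threads standing in for P1 and P2, sem_wait corresponds to the P operation and sem_post to the V operation:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s;                               /* binary semaphore, initialized to 1 */
int shared = 0;

void *proc(void *name) {
    sem_wait(&s);                      /* P: decrement; block while s == 0 */
    shared++;                          /* critical section */
    printf("%s in critical section, shared = %d\n", (char *)name, shared);
    sem_post(&s);                      /* V: increment; wake up a waiter */
    return NULL;
}

int main(void) {
    pthread_t p1, p2;
    sem_init(&s, 0, 1);                /* 0 = shared between threads; initial value 1 */
    pthread_create(&p1, NULL, proc, "P1");
    pthread_create(&p2, NULL, proc, "P2");
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    sem_destroy(&s);
    return 0;
}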

1.11 Threads:

Threads are the smallest unit of execution, and each thread has its own thread control block (TCB).

1.11.1 Threads vs Processes


 Both threads and processes are methods of parallelizing an application.
 However, processes are independent execution units that contain their own state information, use
their own address spaces, and only interact with each other via interprocess communication
mechanisms. Processes, in other words, are an architectural construct.
 By contrast, a thread is a coding construct that doesn't affect the architecture of an application. A
single process might contain multiple threads; all threads within a process share the same state and
same memory space, and can communicate with each other directly, because they share the same
variables.
 Threads are typically spawned for a short-term benefit, usually visualized as a serial task that doesn't
have to be performed in a linear manner, and are absorbed when no longer required.

 Each process has its own address space, but the threads within the same process share that address
space. Threads also share any other resources within that process. This means that it’s very easy to
share data amongst threads, but it’s also easy for the threads to step on each other, which can lead to
bad things.

 Context switching between threads is generally less expensive than in processes.

 The overhead (the cost of communication) between threads is very low relative to processes.

 Threads are easier to create than processes since they don't require a separate address space.

 Threads are considered lightweight because they use far fewer resources than processes (a minimal pthreads sketch follows).
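
With POSIX threads, for example, spawning a thread is a single library call. This minimal sketch creates one thread that shares the process's global variables with main, so no separate address space or IPC mechanism is needed:

#include <pthread.h>
#include <stdio.h>

int shared_value = 0;                  /* visible to every thread in the process */

void *thread_body(void *arg) {
    (void)arg;
    shared_value = 42;                 /* same address space: no IPC needed */
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, thread_body, NULL);  /* no new address space is built */
    pthread_join(tid, NULL);
    printf("shared_value = %d\n", shared_value);
    return 0;
}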

Advantages

 Responsiveness: If a thread gets a lot of cache misses, the other thread(s) can continue, taking
advantage of the unused computing resources, which thus can lead to faster overall execution.
 Resource sharing: If a thread cannot use all the computing resources of the CPU (because
instructions depend on each other's result), running another thread can avoid leaving these idle.
 Utilization of multiprocessor architectures. The benefits of multithreading can be greatly increased
in a multiprocessor architecture, where threads may be running in parallel on different processors. A
single threaded process can only run on one CPU, no matter how many are available.
 Economy: Allocating memory and resources for process creation is costly. Because threads share
resources of the process to which they belong, it is more economical to create and context-switch
threads.

Disadvantages

 Multiple threads can interfere with each other when sharing hardware resources such as caches or
translation lookaside buffers (TLBs).

 Execution times of a single thread are not improved but can be degraded, even when only one thread
is executing. This is due to slower frequencies and/or additional pipeline stages that are necessary to
accommodate thread-switching hardware.

 Hardware support for multithreading is more visible to software, thus requiring more changes to
both application programs and operating systems than multiprocessing.

 Thread scheduling is also a major problem in multithreading.

1.11.2 Types of threads


1.11.2.1 Kernel-Level Threads
 All thread operations are implemented in the kernel and the OS schedules all threads in the system.
 OS-managed threads are called kernel-level threads or lightweight processes.
 In this method, the kernel knows about and manages the threads. No runtime system is needed in
this case.
 The operating system kernel provides system calls to create and manage threads.

Advantages:

 Because the kernel has full knowledge of all threads, the scheduler may decide to give more time to a
process having a large number of threads than to a process having a small number of threads.
 Kernel-level threads are especially good for applications that frequently block.

Disadvantages:

 Kernel-level threads are slower and less efficient; for instance, thread operations are hundreds of
times slower than those of user-level threads.
 Since the kernel must manage and schedule threads as well as processes, it requires a full thread control
block (TCB) for each thread to maintain information about it. As a result there is significant
overhead and increased kernel complexity.
1.11.2.2 User-Level Threads
 User-Level threads are managed entirely by the run-time system (user-level library).
 The kernel knows nothing about user-level threads and manages them as if they were single-
threaded processes.
 User-level threads are small and fast: each thread is represented by a PC, registers, a stack, and a small
thread control block.

Advantages:

 The most obvious advantage of this technique is that a user-level threads package can be
implemented on an Operating System that does not support threads.
 User-level threads do not require modification to operating systems.
 Simple Management: This simply means that creating a thread, switching between threads and
synchronization between threads can all be done without intervention of the kernel.
 Fast and Efficient: Thread switching is not much more expensive than a procedure call.

Disadvantages:

 Since user-level threads are invisible to the OS, they are not well integrated with it. As a
result, the OS can make poor decisions, such as scheduling a process whose threads are all idle, or
blocking a whole process because one of its threads initiated I/O.
 There is a lack of coordination between threads and the operating system kernel; therefore, a process as a
whole gets one time slice, irrespective of whether it has one thread or 1000 threads within it. It is
up to each thread to relinquish control to other threads.
1.11.3 Multithreading models
1.11.3.1 Many-to-one
 In this model, the library maps all threads to a single lightweight process
 Mainly used in language systems, portable libraries

Advantages:

 totally portable
 easy to do with few systems dependencies

Disadvantages:
 cannot take advantage of parallelism
 may have to block for synchronous I/O (although there is a clever technique for avoiding this)

1.11.3.2 One-to-one
 In this model, the library maps each thread to a different lightweight process
 Used in LinuxThreads and other systems where LWP creation is not too expensive

Advantages:

 can exploit parallelism, blocking system calls

Disadvantages:

 thread creation involves LWP creation
 each thread takes up kernel resources, limiting the total number of threads

1.11.3.3 Many-to-many
 In this model, the library has two kinds of threads: bound and unbound
o bound threads are mapped each to a single lightweight process
o unbound threads may be mapped to the same LWP
 Probably the best of both worlds.
