Operating System
Process Management
Memory Management
Types of Operating System
Multitasking
Key points
1. A process is a program in execution.
2. Memory is a large array of words or bytes, each with its own address.
3. A file is a collection of related information defined by its creator.
4. A distributed system provides the user with access to the various resources the system maintains.
5. An RTOS typically has very little user-interface capability, and no end-user utilities.
6. A single user cannot always keep the CPU or I/O devices busy at all times.
7. A multiprocessing system is a computer hardware configuration that includes more than one independent processing unit.
8. A networked computing system is a collection of physically interconnected computers.
9. A system task, such as spooling, is also a process.
10. Interaction is achieved through a sequence of reads or writes of specific memory addresses.
System Calls
System calls provide an interface between a running program and the operating system. System calls are
generally available as assembly-language instructions. Several higher-level languages, such as C, also allow
programs to make system calls directly (a short example follows the list below).
1. Running a program involves allocating and deallocating memory.
2. CPU scheduling is needed when multiple processes run concurrently.
3. Reading from or writing to a file requires an I/O service.
4. System calls provide an interface between the process and the operating system.
5. A system call is implemented through a software interrupt (trap) into the kernel.
6. The monolithic system structure is known as “The Big Mess”.
7. There are six layers in the layered system structure.
8. The exokernel was developed at MIT.
9. In the client-server model, all the kernel does is handle the communication between clients and servers.
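As a concrete illustration of points 4 and 5, here is a minimal C sketch (assuming a POSIX system) that invokes two system calls, write() and getpid(); the library wrappers load the arguments and trap into the kernel on the program's behalf.

#include <stdio.h>
#include <unistd.h>   /* write(), getpid(): POSIX system-call wrappers */

int main(void)
{
    /* write() wraps the kernel's write system call: the C library loads
       the call number and arguments, then traps into the kernel, which
       performs the I/O on the process's behalf. */
    const char msg[] = "hello via a system call\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);

    /* getpid() is another system call: it asks the kernel for the
       identifier of the calling process. */
    printf("my pid: %d\n", (int)getpid());
    return 0;
}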
PCB (Process Control Blocks)
The operating system groups all information that it needs about a particular process into a data structure called a
process descriptor or a Process Control Block (PCB). Whenever a process is created (initialized, installed), the
operating system creates a corresponding process control block to serve as its run-time description during the
lifetime of the process. When the process terminates, its PCB is released to the pool of free cells from which new PCBs
are drawn. The dormant state is distinguished from other states because a dormant process has no PCB. A process
becomes known to the O.S. and thus eligible to compete for system resources only when it has an active PCB
associated with it.
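The exact contents of a PCB differ from one operating system to another. The C structure below is a hypothetical, simplified layout (the field names are illustrative, not taken from any real kernel) showing the kind of run-time description a kernel typically keeps.

/* Hypothetical, simplified PCB; a real kernel (e.g. Linux's task_struct)
   holds far more state. */
typedef enum { CREATED, READY, RUNNING, BLOCKED, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;            /* process identifier               */
    proc_state_t   state;          /* current scheduling state         */
    unsigned long  pc;             /* saved program counter            */
    unsigned long  regs[16];       /* saved general-purpose registers  */
    int            priority;       /* scheduling priority              */
    void          *page_table;     /* memory-management information    */
    int            open_files[16]; /* per-process open-file table      */
    struct pcb    *next;           /* link in a scheduler queue        */
} pcb_t;

int main(void)
{
    pcb_t p = { .pid = 1, .state = CREATED, .priority = 0 };
    (void)p;   /* a kernel would now link this PCB into a ready queue */
    return 0;
}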
Inter-process Communication (IPC) is a set of techniques for the exchange of data among two
or more threads in one or more processes. It involves sending information from one process to
another. Processes may be running on one or more computers connected by a network. IPC
techniques are divided into methods for message passing, synchronization, shared memory, and
Remote Procedure Calls (RPC).
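Of the message-passing techniques, a pipe between a parent and a child process is among the simplest. The C sketch below (assuming a Unix-like system) sends one message from child to parent.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>    /* pipe(), fork(), read(), write() */

int main(void)
{
    int fd[2];                     /* fd[0]: read end, fd[1]: write end */
    char buf[64] = { 0 };

    if (pipe(fd) == -1)
        return 1;

    if (fork() == 0) {             /* child: sends a message */
        close(fd[0]);
        const char msg[] = "hello from the child";
        write(fd[1], msg, sizeof msg);
        _exit(0);
    }

    close(fd[1]);                  /* parent: receives it */
    read(fd[0], buf, sizeof buf - 1);
    printf("parent got: %s\n", buf);
    wait(NULL);
    return 0;
}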
Concept of Thread
A thread, sometimes called a lightweight process (LWP), is a basic unit of resource utilization,
and consists of a program counter, a register set, and a stack. It shares with peer threads its code
section, data section, and operating-system resources such as open files and signals, collectively
known as a task.
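The POSIX threads sketch below (build with cc file.c -lpthread) illustrates the sharing: each thread has its own stack and registers, yet both read and update the same global variable in the shared data section. The update is deliberately left unsynchronized here; synchronization is covered later.

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;            /* data section: visible to all threads */

static void *worker(void *arg)
{
    shared_counter++;              /* peer threads share this variable,    */
    printf("thread %ld sees counter = %d\n",
           (long)arg, shared_counter);
    return NULL;                   /* but each has its own stack and PC    */
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}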
Multi-tasking vs. Multi-threading
Processes vs. Threads
Processes are often called tasks in embedded operating systems. Process is the entity to which processors are
assigned. The rapid switching back and forth of the CPU among processes is called multiprogramming.
A thread is a single sequential stream of execution within a process. A process can be in one of five states: created, ready, running,
blocked, and terminated.
A process control block or PCB is a data structure (a table) that holds information about a process.
Dispatch occurs when all other processes have had their share and it is time for the first process to run again.
Wakeup occurs when the external event for which a process was waiting (such as arrival of input) happens.
Admitted occurs when the process is created. Exit occurs when the process has finished execution.
CPU Scheduling
CPU scheduling is the basis of multiprogramming. By switching the CPU among several processes, the operating
system can make the computer more productive. The objective of multiprogramming is to have some process
running at all times, in order to maximize CPU utilization.
Context Switching
This act of switching from one process to another is called a “Context Switch”.
Priority Scheduling
A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Priority
can be defined either internally or externally. Internally defined priorities use some measurable quantities to
compute the priority of a process.
Priority scheduling can be preemptive or non-preemptive. A preemptive priority scheduling algorithm will preempt
the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
A non-preemptive priority scheduling algorithm will simply put the new process at the head of the ready queue.
A major problem with priority scheduling algorithms is indefinite blocking, or starvation. This can be solved by a
technique called aging, wherein the priority of a long-waiting process is gradually increased, as sketched below.
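A toy, non-preemptive C sketch of aging (the process names and starting priorities are made up): every time one process is dispatched, each process still waiting gets a small priority boost, so even the lowest-priority process eventually runs.

#include <stdio.h>

struct proc { const char *name; int priority; int done; };

/* pick the highest-priority process that has not yet run */
static int pick_highest(struct proc p[], int n)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (!p[i].done && (best < 0 || p[i].priority > p[best].priority))
            best = i;
    return best;
}

int main(void)
{
    struct proc p[] = { {"A", 5, 0}, {"B", 1, 0}, {"C", 3, 0} };
    int n = 3, next;

    while ((next = pick_highest(p, n)) >= 0) {
        printf("running %s (priority %d)\n", p[next].name, p[next].priority);
        p[next].done = 1;
        for (int i = 0; i < n; i++)    /* aging: reward the waiters */
            if (!p[i].done)
                p[i].priority++;
    }
    return 0;
}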
Round-Robin (RR)
Round-robin scheduling is really the simplest way of scheduling. All processes form a circular array and the
scheduler gives control to each process in turn, one time quantum at a time. It is, of course, very easy to implement and causes almost no
overhead compared with other algorithms, but response time can suffer for the processes that need quick service.
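A minimal round-robin simulation in C, with made-up burst times and a quantum of 2 time units, shows the circular hand-off:

#include <stdio.h>

int main(void)
{
    int burst[] = { 5, 3, 8 };    /* remaining CPU time per process */
    int n = 3, quantum = 2, left = n, t = 0;

    while (left > 0) {
        for (int i = 0; i < n; i++) {  /* visit processes in circular order */
            if (burst[i] <= 0)
                continue;
            int run = burst[i] < quantum ? burst[i] : quantum;
            printf("t=%2d: P%d runs for %d\n", t, i, run);
            t += run;
            burst[i] -= run;
            if (burst[i] == 0)
                left--;
        }
    }
    return 0;
}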
Characteristics of Scheduling Algorithms
Types of Scheduling
Key points
Synchronization Process
Process synchronization refers to the idea that multiple processes are to join up or handshake at a certain point, so
as to reach an agreement or commit to a certain sequence of actions. Synchronization involves the orderly sharing
of system resources by processes.
The key to preventing trouble involving shared storage is to find some way to prohibit more than one process from
reading and writing the shared data simultaneously. That part of the program where the shared memory is
accessed is called the Critical Section. To avoid race conditions and flawed results, one must identify the code that
forms a Critical Section in each thread. The characteristic properties of code that forms a Critical Section are:
1. Code that references one or more variables in a “read-update-write” fashion while any of those variables is
possibly being altered by another thread.
2. Code that alters one or more variables that are possibly being referenced in “read-update-write” fashion by
another thread.
3. Code that uses a data structure while any part of it is possibly being altered by another thread.
4. Code that alters any part of a data structure while it is possibly in use by another thread.
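The pthreads sketch below demonstrates property 1: the counter increment is a read-update-write on shared data, so it is wrapped in a mutex that makes the critical section exclusive (a minimal sketch; error handling omitted).

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;           /* shared variable */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* enter critical section */
        counter++;                     /* shared read-update-write */
        pthread_mutex_unlock(&lock);   /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}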
Mutual Exclusion
If you could arrange matters such that no two processes were ever in their critical sections simultaneously, you
could avoid race conditions. You need four conditions to hold to have a good solution for the critical section
problem (mutual exclusion).
1. No two processes may be simultaneously inside their critical sections.
2. No assumptions may be made about speeds or the number of CPUs.
3. No process running outside its critical section may block other processes.
4. No process should have to wait forever to enter its critical section.
Binary semaphores can assume only the value 0 or the value 1; counting semaphores, also called general
semaphores, can assume any nonnegative value.
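A POSIX counting semaphore sketch (the initial value 2 is arbitrary, chosen for illustration): at most two of the four threads can be inside at once. Initialising the semaphore to 1 instead would give a binary semaphore, i.e. a lock.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t slots;                /* counting semaphore */

static void *worker(void *arg)
{
    sem_wait(&slots);              /* P(): decrement, or block at zero */
    printf("thread %ld inside\n", (long)arg);
    sem_post(&slots);              /* V(): increment, wake a waiter */
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    sem_init(&slots, 0, 2);        /* at most 2 holders at once */
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}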
Deadlock
Deadlock occurs when you have a set of processes [not necessarily all the processes in the system], each holding
some resources, each requesting some resources, and none of them is able to obtain what it needs, i.e. to make
progress. Those processes are deadlocked because all of them are waiting for resources held by the others.
Necessary Conditions
Four conditions must hold simultaneously for a deadlock to occur: mutual exclusion, hold and wait, no preemption, and circular wait.
Deadlock Prevention
Deadlocks can be prevented by ensuring that at least one of these four conditions cannot hold. One common tactic, denying circular wait by imposing a global lock ordering, is sketched below.
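A minimal sketch, assuming POSIX threads: every acquisition of the two locks funnels through one helper that orders the mutexes by address, so the circular-wait condition can never arise.

#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

/* Deny circular wait: acquire both locks in one global order
   (ascending address), whatever order the caller names them in. */
static void lock_both(pthread_mutex_t *m1, pthread_mutex_t *m2)
{
    if ((uintptr_t)m1 > (uintptr_t)m2) {
        pthread_mutex_t *tmp = m1; m1 = m2; m2 = tmp;
    }
    pthread_mutex_lock(m1);
    pthread_mutex_lock(m2);
}

int main(void)
{
    /* lock_both(&a, &b) and lock_both(&b, &a) acquire in the same
       order, so two threads using them can never deadlock. */
    lock_both(&a, &b);
    pthread_mutex_unlock(&a);
    pthread_mutex_unlock(&b);
    return 0;
}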
Key points
Race condition is a flaw in a system of processes whereby the output of a process is unexpectedly and
critically dependent on the sequence or timing of other processes.
Mutual exclusion means that only one of the processes is allowed to execute its critical section at a time.
A mutex is a program element that allows multiple program processes to share the same resource but not
simultaneously. Semaphore is a software concurrency control tool.
Bounded Buffer Problem, readers and writers problem, sleeping barber problem, and dining philosopher problem
are some of the classical synchronization problems taken from real life situations.
Resource Allocation Graphs (RAGs): These are directed, labeled graphs used to represent the current state of a
system from the point of view of deadlocks.
Monitor: It is a software synchronization tool with a high level of abstraction that provides a convenient and
effective mechanism for process synchronization.
Logical vs. Physical Address Space
An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the
memory unit – that is, the one loaded into the memory-address register of the memory – is commonly referred to
as a physical address.
Contiguous Memory Allocation
Paging
It is a technique for increasing the memory space available by moving infrequently-used parts of a program’s
working memory from RAM to a secondary storage medium, usually hard disk. The unit of transfer is called a
page.
A memory management unit (MMU) monitors accesses to memory and splits each address into a page number
(the most significant bits) and an offset within that page (the lower bits). It then looks up the page number in its
page table. The page may be marked as paged in or paged out. If it is paged in, then the memory access can
proceed after translating the virtual address to a physical address. If the requested page is paged out, then
space must be made for it by paging out some other page, i.e. copying it to disk. The requested page is then
located in the area of the disk allocated for “swap space” and is read back into RAM. The page table is updated
to indicate that the page is paged in, and its physical address is recorded.
The MMU also records whether a page has been modified since it was last paged in. If it has not been modified,
then there is no need to copy it back to disk and the space can be reused immediately.
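The bit-splitting the MMU performs can be mimicked in a few lines of C. The sketch below assumes, hypothetically, 4 KiB pages and a made-up four-entry page table.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12                       /* 4 KiB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define PAGE_MASK  (PAGE_SIZE - 1)

int main(void)
{
    /* tiny, made-up page table: virtual page i -> physical frame */
    uint32_t frame_of[4] = { 7, 3, 5, 9 };

    uint32_t vaddr  = 0x2ABC;               /* some virtual address   */
    uint32_t vpage  = vaddr >> PAGE_SHIFT;  /* page number: high bits */
    uint32_t offset = vaddr & PAGE_MASK;    /* offset: low 12 bits    */
    uint32_t paddr  = (frame_of[vpage] << PAGE_SHIFT) | offset;

    printf("virtual 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
           vaddr, vpage, offset, paddr);
    return 0;
}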
Segmentation
It is very common for the size of program modules to change dynamically. For instance, the programmer may
have no knowledge of the size of a growing data structure. If a single address space is used, as in the paging
form of virtual memory, once the memory is allocated for modules they cannot vary in size. This restriction
results in either wastage or shortage of memory. To avoid the above problem, some computer systems are
provided with many independent address spaces. Each of these address spaces is called a segment. The address
of each segment begins with 0 and segments may be compiled separately.
Segmentation, like paging, is one of the most common ways to achieve memory protection. An instruction
operand that refers to a memory location includes a value that identifies a segment and an offset within that
segment.
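The C sketch below mimics that translation with a made-up two-entry segment table: the offset is checked against the segment's limit before the segment's base is added.

#include <stdio.h>
#include <stdint.h>

struct segment { uint32_t base, limit; };   /* one segment descriptor */

int main(void)
{
    struct segment segtab[] = {
        { 0x1000, 0x0400 },                 /* segment 0, e.g. code */
        { 0x8000, 0x0800 },                 /* segment 1, e.g. data */
    };

    uint32_t seg = 1, off = 0x0123;         /* logical address (seg, off) */

    if (off >= segtab[seg].limit) {         /* hardware limit check */
        printf("segmentation fault: offset out of range\n");
        return 1;
    }
    printf("(%u, 0x%X) -> physical 0x%X\n",
           seg, off, segtab[seg].base + off);
    return 0;
}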
Demand Paging
As there is much less physical memory than virtual memory, the operating system must be careful that it does not
use the physical memory inefficiently. One way to save physical memory is to load only the virtual pages that are
currently being used by the executing program. For example, a database program may be run to query a
database. In this case the entire database does not need to be loaded into memory, just those data records that are
being examined. Also, if the database query is a search query then it does not make sense to load the code from
the database program that deals with adding new records. This technique of only loading virtual pages into
memory as they are accessed is known as demand paging.
Thrashing
Thrashing happens when a hard drive has to move its heads over the swap area many times due to the high
number of page faults. This happens when memory accesses are causing page faults as the memory is not
located in main memory. Thrashing occurs as memory pages are swapped out to disk only to be paged in
again soon afterwards. Instead of memory accesses being satisfied mainly from main memory, they are satisfied
mainly from disk; because a disk access is required for so many memory references, the processes become very slow. This is thrashing.
RAID Structure
RAID stands for Redundant Array of Independent (or Inexpensive) Disks. It involves the configuration (setting
up) of two or more drives in combination for fault tolerance and performance. RAID disk drives are used
frequently on servers and are increasingly being found in home and office personal computers.
Disks have high failure rates and hence there is the risk of loss of data and lots of downtime for restoring and
disk replacement. To improve disk usage many techniques have been implemented. One such technology is RAID
(Redundant Array of Inexpensive Disks). Its organisation is based on disk striping (or interleaving), which uses a
group of disks as one storage unit.
RAID-0: In RAID Level 0 (also called striping), each segment is written to a different disk, until all drives in the
array have been written to. The I/O performance of a RAID-0 array is significantly better than that of a single disk. This
is true for small I/O requests, as several can be processed simultaneously, and for large requests, as multiple disk
drives can become involved in the operation. Spindle-sync will improve the performance for large I/O requests.
This level of RAID is the only one with no redundancy.
RAID-1: A RAID-1 array normally contains two disk drives. This will give adequate protection against drive failure. It is
possible to use more drives in a RAID-1 array, but the overall reliability will not be significantly affected.
RAID-1 arrays with multiple mirrors are often used to improve performance in situations where the data on the
disks is being read from multiple programs or threads at the same time. By being able to read from the multiple
mirrors at the same time, the data throughput is increased, thus improving performance. The most common use
of RAID-1 with multiple mirrors is to improve the performance of databases.
RAID-2: RAID Level 2 is an intellectual curiosity, and has never been widely used. It is more space efficient than
RAID-1, but less space efficient than other RAID levels.
Instead of using a simple parity to validate the data (as in RAID-3, RAID-4 and RAID-5), it uses a much more
complex algorithm, called a Hamming Code. A Hamming code is larger than a parity, so it takes up more disk
space, but, with proper code design, is capable of recovering from multiple drives being lost. RAID-2 is the only
simple RAID level that can retain data when multiple drives fail.
RAID-3: RAID Level 3 is defined as bytewise (or bitwise) striping with parity. Every I/O to the array will access
all drives in the array, regardless of the type of access (read/write) or the size of the I/O request.
During a write, RAID-3 stores a portion of each block on each data disk. It also computes the parity for the data,
and writes it to the parity drive. RAID-3 provides a similar level of reliability to RAID-4 and RAID-5, but offers
much greater I/O bandwidth on small requests. In addition, there is no performance impact when writing.
Unfortunately, it is not possible to have multiple operations being performed on the array at the same time, due
to the fact that all drives are involved in every operation. RAID-3 also has configuration limitations. The number
of data drives in a RAID-3 configuration must be a power of two. The most common configurations have four
or eight data drives.
RAID-4: RAID Level 4 is defined as blockwise striping with parity. The parity is always written to the same disk
drive. This can create a great deal of contention for the parity drive during write operations.
RAID-5: RAID Level 5 is defined as blockwise striping with parity. It differs from RAID-4, in that the parity data
is not always written to the same disk drive.
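The single-parity levels (RAID-3, RAID-4, RAID-5) all rest on the same XOR arithmetic. The C sketch below, with made-up block contents, computes a parity block for one stripe and then rebuilds a "failed" data block from the survivors.

#include <stdio.h>
#include <stdint.h>

#define STRIPE 8    /* bytes per block, kept tiny for illustration */

int main(void)
{
    /* three data blocks of one stripe, plus their parity block */
    uint8_t d0[STRIPE] = "AAAAAAA", d1[STRIPE] = "BBBBBBB",
            d2[STRIPE] = "CCCCCCC", parity[STRIPE], rebuilt[STRIPE];

    for (int i = 0; i < STRIPE; i++)        /* parity = d0 ^ d1 ^ d2 */
        parity[i] = d0[i] ^ d1[i] ^ d2[i];

    /* pretend the drive holding d1 failed: XOR of the survivors
       and the parity block recovers it exactly */
    for (int i = 0; i < STRIPE; i++)
        rebuilt[i] = d0[i] ^ d2[i] ^ parity[i];

    printf("rebuilt block: %s\n", rebuilt);  /* prints BBBBBBB */
    return 0;
}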
Goals of Protection
Pretty Good Privacy (PGP)
PGP can be used to sign or encrypt e-mail messages with the mere click of the mouse. Depending upon the
version of PGP, the software uses SHA or MD5 for calculating the message hash; CAST, Triple-DES, or IDEA for
encryption; and RSA or DSS/Diffie-Hellman for key exchange and digital signatures. PGP is a public-key
encryption system.
Public-key Encryption
Public-key cryptography, also known as asymmetric cryptography, is a form of cryptography in which a user has
a pair of cryptographic keys—a public key and a private key. The private key is kept secret, while the public key
may be widely distributed. The keys are related mathematically, but the private key cannot be practically derived
from the public key. A message encrypted with the public key can be decrypted only with the corresponding
private key.
Symmetric Key Encryption
Symmetric cryptography involves a single, secret key, which both the message-sender and the message-
recipient must have. It is used by the sender to encrypt the message, and by the recipient to decrypt it.
Symmetric cryptography can also be used to address the integrity and authentication requirements.
A major difficulty with symmetric schemes is that the secret key has to be possessed by both parties, and hence
has to be transmitted from whoever creates it to the other party. Moreover, if the key is compromised, all of the
message transmission security measures are undermined. The steps taken to provide a secure mechanism for
creating and passing on the secret key are referred to as ‘key management’.
The technique does not adequately address the non-repudiation requirement, because both parties have the
same secret key. This exposes each party to the risk of fraudulent falsification of a message by the other, and a
claim by either party not to have sent a message is credible, because the other may have compromised the key.
Digital Signature
Like the conventional signature, the digital signature assures all concerned that the contents of the electronic
message are authentic and were really sent by the sender on the date and time recorded. All these functions can be
performed using the public-key encryption techniques and the message digest techniques. As the message
exchange and electronic commerce applications grow, the importance of digital signatures will increase.
Signing Process
1. Prepare the message. All mail and messaging software, including messaging programs like Microsoft
Exchange, has the software needed for handling digital signatures.
2. Create a message digest for the message using the secret key, which the sender is sharing with the recipient.
3. Encrypt the message and the digest with the private key of the sender. At this stage the document is signed, as
the message is authenticated with the private key of the sender. If required, also send the digital certificate of the
sender, as it contains the public key of the sender. The sender should not encrypt this digital certificate, so as to
facilitate easy retrieval of the sender’s public key by the recipient.
4. Send the cipher text and the digital certificate to the recipient.
5. The recipient retrieves the public key of the sender using his/her private key.
6. The recipient decrypts the cipher text.
7. Recipient runs the message digest algorithm on the message, using the secret key shared with the sender.
8. Compare the computed message digest with the received message digest. If they are the same, then the
message arrived intact. Otherwise, the message was tampered with.
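The core of steps 3 and 5 to 8 can be made concrete with textbook RSA. The C toy below is purely illustrative: it uses a tiny modulus (n = 61 * 53 = 3233, public exponent e = 17, private exponent d = 2753) and a stand-in byte-rolling "digest"; real systems use SHA-class hashes and far larger keys, and this sketch covers only the sign and verify steps, not message encryption.

#include <stdio.h>
#include <stdint.h>

/* modular exponentiation: b^e mod m */
static uint64_t modpow(uint64_t b, uint64_t e, uint64_t m)
{
    uint64_t r = 1;
    b %= m;
    while (e) {
        if (e & 1)
            r = r * b % m;
        b = b * b % m;
        e >>= 1;
    }
    return r;
}

/* toy stand-in for a cryptographic hash, reduced below the modulus */
static uint64_t digest(const char *msg)
{
    uint64_t h = 0;
    while (*msg)
        h = (h * 131 + (uint8_t)*msg++) % 3233;
    return h;
}

int main(void)
{
    const uint64_t n = 3233, e = 17, d = 2753;   /* toy RSA key pair */
    const char *msg = "pay Bob 10 dollars";

    uint64_t h   = digest(msg);
    uint64_t sig = modpow(h, d, n);    /* sign digest with private key */
    uint64_t chk = modpow(sig, e, n);  /* verify with public key       */

    printf("digest=%llu signature=%llu verified=%s\n",
           (unsigned long long)h, (unsigned long long)sig,
           chk == h ? "yes" : "NO (tampered)");
    return 0;
}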