Operating systems Notes 2

COMPUTER SOFTWARE: OPERATING SYSTEMS

Software is a set of electronic instructions consisting of complex codes (also known as programs) that make the computer perform tasks. In other words, software tells the computer what to do. Some programs exist primarily for the computer's use and help the computer perform and manage its own tasks. Other types of programs exist primarily for the user and enable the computer to perform tasks, such as creating documents or drawing pictures. Computer software is divided into two major categories: system and application software.

SYSTEM SOFTWARE
Consists of the programs that control and maintain the operation of the computer and its devices. It serves as an important interface between the user, application software and hardware. Types of system software are:
a) Operating system
b) Utility programs

Operating System – a set of programs containing instructions that co-ordinate all the activities among the computer's hardware resources. Basic functions of the operating system include:
• Memory management
• Process Management
• File Management
• Device Management
• Providing security

Utility Programs – They are included with most operating systems and provide the
following functions:
• Managing files
• Viewing images
• Securing a computer from unauthorized access,
• Uninstalling programs.
• Scanning disks (detects and corrects physical and logical problems on a hard disk and searches for and removes unnecessary files).

• Disk Defragmenter – reorganizes the files and unused space on a computer's hard disk so that the operating system accesses data more quickly and programs run faster.
• Diagnostic Utility – compiles technical information about your computer's hardware and certain system software programs and prepares a report outlining any identified problems.
• Backup utility – allows users to copy or back up selected files or an entire hard disk to another storage medium.

Examples of operating systems

Category: Operating system names
Stand-alone: DOS, Windows 3.x, Windows 95, Windows NT, Windows 98, Windows ME, Windows XP, Windows Vista, Macintosh O/S, UNIX, LINUX
Network: NetWare, Windows 2000 Server, Windows 2003 Server, UNIX, LINUX, Solaris
Embedded: Windows CE, Windows Mobile, Palm O/S, Embedded LINUX, Symbian O/S

What is an Operating System?


A program that controls the execution of programs and acts as an intermediary between applications and the computer hardware. As an intermediary, it provides an environment in which the user can execute programs.
Operating system goals and objectives:
▪ To execute user programs and make solving user problems easier.
▪ To make the computer system convenient to use.
▪ To use the computer hardware in an efficient manner (optimal use of computing resources).
▪ Ability to evolve: an OS should be constructed in such a way as to permit the effective development, testing and introduction of new system functions without interfering with service.

Computer System Components
The hardware and software used in providing applications to users can be viewed in a layered or hierarchical fashion.
There are 4 main components of a Computer System, namely;

1. Hardware – provides basic computing resources (CPU, memory, I/O devices).


2. Operating system – controls and coordinates the use of the hardware among the
various application programs for the various users.
3. Applications programs – define the ways in which the system resources are used
to solve the computing problems of the users (compilers, database systems, video
games, business programs).
4. Users – try to solve different problems (people, machines, other computers). Users view the computer system in terms of a set of applications.
Classes of operating systems
They can be designed as:
➢ Proprietary o/s – designed for a specific computer architecture, e.g. MS DOS and PC DOS, which run on IBM and compatible computers using the Intel series of microprocessors.

➢ Generic o/s – designed for use by a wide variety of computer architectures, e.g. UNIX, which runs on both Intel and Motorola microprocessors.
Operating system functions
a) Process Management
Process - fundamental concept in OS
a. Process is a program in execution.
b. Process needs resources - CPU time, memory, files/data and I/O devices.
OS is responsible for the following process management activities.
• Process creation and deletion
• Process suspension and resumption
• Process synchronization and interprocess communication
• Process interactions - deadlock detection, avoidance and correction

b) Main Memory Management


Main Memory is an array of addressable words or bytes that is quickly
accessible.
Main Memory is volatile.
OS is responsible for:
• Allocating and deallocating memory to processes.
• Managing multiple processes within memory - keep track of which parts
of memory are used by which processes. Manage the sharing of memory
between processes.
• Determining which processes to load when memory becomes available.

c) File / Secondary storage Management


Since primary storage is expensive and volatile, secondary storage is required
for backup.
Disk is the primary form of secondary storage.
OS performs storage allocation, free-space management and disk scheduling.

d) Input /Output (Device) System management


I/O system in the OS consists of

• Buffer caching and management
• Device driver interface that abstracts device details
• Drivers for specific hardware devices

e) Protection System (Security)


Protection mechanisms control access of programs and processes to user and
system resources.
Protect user from himself, user from other users, system from users.
Protection mechanisms must:
• Distinguish between authorized and unauthorized use.
• Specify access controls to be imposed on use.
• Provide mechanisms for enforcement of access control.
• Security mechanisms provide trust in the system and privacy: authentication, certification, encryption, etc.
f) File System Management
File is a collection of related information defined by creator - represents
programs and data.
OS is responsible for
• File creation and deletion
• Directory creation and deletion
• Supporting primitives for file/directory manipulation.
• Mapping files to disks (secondary storage).
• Backup files on archival media (tapes).

Operating System Definitions


▪ Resource Allocator – the OS manages and allocates resources (hardware & software: CPU, memory & file storage space, I/O devices, etc.) to specific programs and users. In case of conflicting requests for resources, the OS decides which requests are allocated resources to ensure fair & efficient computer usage.

▪ Control Program – OS controls the execution of user programs to prevent errors
& improper use of the computer. Also concerned with operation and control of
I/O devices.
▪ Kernel – OS is the one program running at all times on the computer-usually
called the kernel (with all else being application programs).

Operating System Capabilities (“Types of o/s”)


A particular o/s may incorporate one or more of these capabilities:
a) Single user processing – allows only one user at a time to access a computer e.g. DOS
b) Multi-user processing – allows two or more users to access a computer at the same time. The actual number of users depends on the hardware and o/s design e.g. UNIX
c) Single tasking – allows one program to execute at a time, and that program must finish executing before the next program can begin e.g. DOS.
d) Context switching – allows several programs to reside in memory but only one to be active at a time. The active program is in the foreground and the others are in the background.
e) Multitasking / Multiprogramming – allows a single CPU to execute what appears to be more than one program at a time. The CPU switches its attention between 2 or more programs in main memory as it receives requests for processing from one program and then another. This happens so quickly that the programs appear to execute simultaneously / concurrently.
f) Multiprocessing / Parallel processing – allows the simultaneous or parallel execution of programs by a computer that has 2 or more CPUs.
g) Multithreading – supports several simultaneous functions within the same application.
h) Inter-processing / Dynamic linking – allows any change made in one application to be automatically reflected in any related linked applications, e.g. a link between word processing and financial applications (linked together).
i) Time sharing – allows multiple users to access a single computer; found on large computer o/s where many users need access at the same time.
j) Virtual storage – an o/s with the virtual storage capability, called virtual memory, allows you to use a secondary storage device as an extension of main memory.
k) Real-time processing – allows a computer to control or monitor the task performance of other machines and people by responding to input data in a specified amount of time. To control processes, an immediate response is usually necessary.
OPERATING SYSTEM DESIGN AND IMPLEMENTATION

Design and Implementation of OS is not “solvable”, but some approaches have proven
successful. Internal structure of different Operating Systems can vary widely. The design starts by
defining goals and specifications and this is affected by choice of hardware and type of system.
In guiding the design of operating systems, user and system goals are taken into consideration.
➢ User goals –operating system should be convenient to use, easy to learn, reliable, safe, and
fast.
➢ System Goals – operating systems should be easy to design, implement, and maintain, as well as flexible, reliable, error-free, and efficient
Important principles to separate during design are:
➢ Policy: What will be done? (decide what will be done)
➢ Mechanism: How to do it? (determine how to do something)
The separation of policy from mechanism is a very important principle since it allows maximum
flexibility if policy decisions are to be changed later.

Types of structures
a) Simple Structure / Monolithic systems
It has also been subtitled "The Big Mess", i.e. no structure: the o/s is viewed as a collection of procedures which can call each other. There is no information hiding (every procedure is visible to every other procedure). This suggests a basic structure for the o/s which includes:
➢ Main program – invokes the service procedures
➢ A set of service procedures – which carry out system calls
➢ A set of utility procedures that help the service procedures
Example: MS-DOS
MS-DOS was written to provide the most functionality in the least space. It is not divided into modules. Although MS-DOS has some structure, its interfaces and levels of functionality are not well separated (no information hiding).

b) Layered Approach
The operating system is divided into a number of layers (levels), each built on top of lower layers.
The bottom layer (layer 0), is the hardware; the highest (layer N) is the user interface. With
modularity, layers are selected such that each uses functions (operations) and services of only
lower-level layers.

Example: the layers developed by Dijkstra (1968) in the Netherlands

Layer  Function
(5)    Operator
(4)    User programs
(3)    Input/output management
(2)    Operator / process communication
(1)    Memory and device management
(0)    Processor allocation and multiprogramming

Example is UNIX which had a simple layered structure consisting of:
➢ Systems programs
➢ The kernel
i) Consists of everything below the system-call interface and above the
physical hardware
ii) Provides the file system, CPU scheduling, memory management, and other
operating-system functions; a large number of functions for one level
c) Modules
Most modern operating systems implement kernel modules. This uses an object-oriented approach. Each core component is separate, and each talks to the others over known interfaces. Each is loadable as needed within the kernel. Overall it is similar to layers but more flexible.

An example is the Solaris modular approach.

d) Virtual Machines
A virtual machine takes the layered approach to its logical conclusion. It treats hardware and the
operating system kernel as though they were all hardware. A virtual machine provides an
interface identical to the underlying bare hardware. The operating system creates the illusion of
multiple processes, each executing on its own processor with its own (virtual) memory.
The resources of the physical computer are shared to create the virtual machines. CPU scheduling
can create the appearance that users have their own processor. Spooling and a file system can
provide virtual card readers and virtual line printers. A normal user time-sharing terminal serves
as the virtual machine operator’s console

(a) Non virtual machine (b) virtual machine


The virtual-machine concept provides complete protection of system resources since each virtual
machine is isolated from all other virtual machines. This isolation, however, permits no direct
sharing of resources.
A virtual-machine system is a perfect vehicle for operating-systems research and development.
System development is done on the virtual machine, instead of on a physical machine and so does
not disrupt normal system operation.
The virtual-machine concept is difficult to implement due to the effort required to provide an exact duplicate of the underlying machine. An example is the LINUX implementation. The technique is also used in the Java Virtual Machine.

e) Exokernels
Gives each user a clone of the actual computer, but with a subset of resources. Thus one virtual machine might get disk blocks 0 to 1023, the next one might get blocks 1024 to 2047, and so on. At the bottom layer a kernel-mode program (the exokernel) runs, which allocates resources to the virtual machines and monitors their utilization.

f) Client-Server Model / Microkernel System Structure

Involves moving code up to higher levels and removing as much as possible from the o/s, leaving a minimal kernel, i.e. implementing most of the o/s functions in user processes (a client process sends a request to a server process).

[Figure: client processes and server processes (terminal server, file server, memory server, etc.) run in user mode; the kernel, which passes their messages, runs in kernel mode.]
The kernel handles communication between clients and servers. The server processes run as user-mode processes and not in kernel mode, so they do not have direct access to the hardware (a faulty server is therefore less likely to crash the whole system). The model has also been adapted to distributed systems.

[Figure: in a distributed system, a client on Machine 1 sends a message through its kernel and across the network to a file server on Machine 2.]

Benefits:
➢ Easier to extend a microkernel
➢ Easier to port the operating system to new architectures
➢ More reliable (less code is running in kernel mode)
➢ More secure

Detriments:
➢ Performance overhead of user space to kernel space communication

PROCESS MANAGEMENT
A process can be thought of as a program in execution. It needs certain resources, including CPU time, memory, files and I/O devices, to accomplish its task.

The o/s is responsible for the following activities in connection with process management:
• Creation and deletion of both user and system processes
• Suspension and resumption of processes
• Provision of mechanism for process synchronization
• Provision of mechanism for process communication
• Provision of mechanisms for deadlock handling.

Process Model
A process is more than a program. It also includes the current activity and temporary data such as global values. A program is a passive entity, e.g. the contents of a file stored on disk, whereas a process is an active entity.
Process state
As a process executes, it changes state. The state of a process is defined in part by the current
activity of that process. Its states may be:
i) New – process is being created
ii) Ready – waiting to be assigned to a processor
iii) Waiting – process waiting for some event to occur.
iv) Running – Instructions are being executed
v) Terminated – finished execution.

Diagram of process state
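
As a rough illustration of the states above, the sketch below (in Python) models a process that may only move along the usual transitions between these states; the exact transition set is an assumption for illustration, not something prescribed by these notes.

# Illustrative process-state model; the allowed transitions are assumed.
ALLOWED = {
    "new": {"ready"},
    "ready": {"running"},
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready"},
    "terminated": set(),
}

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = "new"

    def move_to(self, new_state):
        # Reject transitions the state diagram does not allow.
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process(1)
for s in ("ready", "running", "waiting", "ready", "running", "terminated"):
    p.move_to(s)
    print(p.pid, p.state)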

INTER-PROCESS COMMUNICATION
O/s must provide inter-process communication facility (IPC) which provides a mechanism to
allow processes to communicate and to synchronize their actions to prevent Race conditions (a
condition where several processes access and manipulate the same data concurrently and the
outcome of the execution depends on the particular order in which the access takes place).

To guard against this, mechanisms are necessary to ensure that only one process at a time can be manipulating the data. Some of these mechanisms / techniques include:

Critical Section Problem


It is a segment of code in which a process may be changing common variables, updating a table, writing a file, etc. Each process has its critical section, while the other portion of its code is referred to as the remainder section.

The important feature of the system is that when one process is executing in its critical section, no other process is allowed to execute in its critical section. Each process must request permission to enter its critical section.
A solution to critical section problem must satisfy the following:
i) Mutual Exclusion – If one process is executing in its critical section then no other process can be executing in its critical section.
ii) Progress – If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter next.
iii) Bounded Waiting – There must exist a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted (prevents busy waiting).
Busy waiting occurs when one process is executing in its critical section and a second process repeatedly tests to confirm whether the first is through. It is an acceptable technique only when the anticipated waits are brief; otherwise it wastes CPU cycles.

Techniques used to handle Critical section problem


1. Semaphores
The fundamental principle is this: 2 or more processes can cooperate by means of simple signals, such that a process can be forced to stop at a specified place until it has received a specific signal. Any complex coordination requirement can be satisfied by the appropriate structure of signals.
For signaling, special variables called semaphores are used. To receive a signal via a semaphore s, a process executes the primitive wait P(s); if the corresponding signal V(s) has not yet been transmitted, the process is suspended until transmission takes place.

To achieve the desired effect we can view the semaphore as a variable that has an integer value upon which 3 operations are defined:
a) A semaphore may be initialized to a non-negative value.
b) The wait operation decrements the semaphore value. If the value becomes negative, the process executing the wait is blocked.
c) The signal operation increments the semaphore value. If the value is not positive, then a process blocked by a wait operation is unblocked.
A binary semaphore accepts only 2 values, 0 or 1. For both general and binary semaphores, a queue is used to hold processes waiting on the semaphore.
A strong semaphore specifies the order in which processes are removed from the queue, e.g. a FIFO policy.
A weak semaphore doesn't specify the order in which processes are removed from the queue.
Example of implementation:
wait (S);
Critical Section
signal (S);
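
A minimal sketch of this wait/signal pattern using Python's threading.Semaphore, initialised to 1 so that it behaves as a binary semaphore guarding a critical section; the shared counter and the number of threads are illustrative assumptions.

import threading

S = threading.Semaphore(1)   # binary semaphore: initialised to 1
shared_counter = 0           # data manipulated inside the critical section

def worker(n_iterations=10000):
    global shared_counter
    for _ in range(n_iterations):
        S.acquire()              # wait(S): blocks while the semaphore is 0
        shared_counter += 1      # critical section
        S.release()              # signal(S): wakes one waiting thread

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shared_counter)  # 40000 - mutual exclusion preserved the count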

2. Message passing
Provides a means for co-operating processes to communicate when they are not using shared
memory environment. Processes generally send and receive messages by using system calls such
as: send (receiverprocess; message) or Receive (senderprocess; message).

A blocking send must wait for the receiver to receive the message. A non-blocking send enables the sender to continue with other processing even if the receiver has not yet received the message. This requires buffering mechanisms to hold messages until the receiver receives them.
Message passing can be flawless on some computers, but in a distributed system messages can be garbled or even lost; thus acknowledgement protocols are used.

One complication in distributed systems with send / receive message passing is naming processes unambiguously so that send and receive calls reference the proper processes.
Process creation and destruction can be coordinated through some centralized naming
mechanisms but this can introduce considerable transmission overhead as individual machines
request permission to use new names.
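
A minimal sketch of blocking and non-blocking sends between two cooperating threads, using a bounded queue.Queue as the buffering mechanism; the message format, queue size and sentinel value are illustrative assumptions.

import queue, threading

mailbox = queue.Queue(maxsize=2)   # bounded buffer between sender and receiver

def sender():
    for i in range(5):
        mailbox.put(("sender", i))          # blocking send: waits if the buffer is full
    try:
        mailbox.put_nowait(("sender", 99))  # non-blocking send: fails instead of waiting
    except queue.Full:
        pass
    mailbox.put(None)                        # sentinel: no more messages

def receiver():
    while True:
        msg = mailbox.get()                  # blocking receive
        if msg is None:
            break
        print("received", msg)

t1, t2 = threading.Thread(target=sender), threading.Thread(target=receiver)
t1.start(); t2.start(); t1.join(); t2.join()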

3. Monitors
It is a high-level synchronization construct characterized by a set of programmer-defined operations. It is made up of declarations of variables and the bodies of procedures or functions that implement operations on a given type. Its data cannot be used directly by the various processes; the construct ensures that only one process at a time can be active within a monitor.
Schematic view of a monitor

Processes desiring to enter the monitor when it is already in use must wait. This waiting is
automatically managed by the monitor. Data inside the monitor is accessible only to the process
inside it. There is no way for processes outside the monitor to access monitor data. This is called
information hiding.
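
Python has no monitor keyword, but the idea can be sketched with an object whose data is hidden inside it and whose methods all acquire a single condition variable, so only one thread is active inside the monitor at a time; the bounded-counter example below is an assumption chosen purely for illustration.

import threading

class BoundedCounter:
    """Monitor-style object: private data, one lock, condition for waiting."""
    def __init__(self, limit):
        self._limit = limit
        self._value = 0
        self._cond = threading.Condition()   # lock + wait/notify in one object

    def increment(self):
        with self._cond:                     # only one thread active in the monitor
            while self._value >= self._limit:
                self._cond.wait()            # wait until there is room
            self._value += 1
            self._cond.notify_all()

    def decrement(self):
        with self._cond:
            while self._value <= 0:
                self._cond.wait()
            self._value -= 1
            self._cond.notify_all()

c = BoundedCounter(limit=3)
c.increment(); c.increment(); c.decrement()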

Event Counters
Introduced to enable process synchronization without the use of mutual exclusion. An event count keeps track of the number of occurrences of events of a particular class of related events. It is an integer counter that does not decrease. Operations that allow processes to reference event counts are Advance (event count), Read (event count) and Await (event count, value).
Advance (E) – signals the occurrence of an event of the class of events represented by E by incrementing the event count E by 1.
Read (E) – obtains the value of E; because Advance (E) operations may be occurring during Read (E), it is only guaranteed that the returned value will be at least as great as E was before the read started.
Await (E, V) – blocks the process until the value of E becomes at least V; this avoids the need for busy waiting.
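
A minimal sketch of an event count offering the Advance, Read and Await operations described above; implementing the blocking Await with a condition variable is an implementation assumption.

import threading

class EventCount:
    """Non-decreasing counter of event occurrences."""
    def __init__(self):
        self._value = 0
        self._cond = threading.Condition()

    def advance(self):
        # Advance(E): signal one occurrence by incrementing E by 1.
        with self._cond:
            self._value += 1
            self._cond.notify_all()

    def read(self):
        # Read(E): the value returned is at least as great as E was before the call.
        with self._cond:
            return self._value

    def await_value(self, v):
        # Await(E, V): block until E >= V, without busy waiting.
        with self._cond:
            while self._value < v:
                self._cond.wait()

E = EventCount()
threading.Timer(0.1, E.advance).start()   # advance E from another thread shortly
E.await_value(1)                          # returns once advance() has run
print(E.read())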
Other techniques used include:
Using Peterson’s algorithm, Dekker’s algorithm and Bakery’s Algorithm

DEADLOCKS
Occurs when several processes compete for a finite number of resources i.e. a process is waiting
for a particular event that will not occur. The event here may be resource acquisition and release.

[Figure: resource-allocation graph – one process holds resource R1 and requests R2, while the other process holds R2 and requests R1.]
This system is deadlocked because each process holds a resource being requested by the other
process and neither process is willing to release the resource it holds. (Leads to deadly embrace).

1. Deadlock in spooling systems


A spooling system is used to improve system throughput by disassociating a program from the slow operating speeds of devices such as printers, e.g. lines of text are sent to a disk before printing starts. If the disk space is small and the jobs are many, a deadlock may occur.

Deadlocks Characterization
Four necessary conditions for deadlock to exist:
1. Mutual Exclusion condition – processes claim exclusive control of the resources they require.
2. Wait for Condition (Hold and Wait) –Processes hold resources already allocated to them
while waiting for additional resources.
3. No preemption condition – resources cannot be removed from the processes holding them
until the resources are used to completion.
4. Circular wait condition – A circular chain of processes exists in which each process holds one
or more resources that are requested by the next process in the chain

MAJOR AREAS OF DEADLOCK RESEARCH IN COMPUTER SCIENCE


a) Deadlock Detection
b) Deadlock Recovery
c) Deadlock Prevention

DEADLOCK DETECTION
This is the process of actually determining that a deadlock exists and of identifying the processes
involved in the deadlock. (i.e. determine if a circular wait exists).
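
A minimal sketch of detecting a circular wait: build a wait-for graph in which an edge P → Q means P is waiting for a resource held by Q, and look for a cycle. The example graph mirrors the two-process deadlock pictured above; the function and graph shape are illustrative assumptions.

def has_cycle(wait_for):
    """Return True if the wait-for graph (dict: process -> set of processes
    it is waiting on) contains a cycle, i.e. a possible deadlock."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {p: WHITE for p in wait_for}

    def visit(p):
        colour[p] = GREY
        for q in wait_for.get(p, ()):
            if colour.get(q, WHITE) == GREY:            # back edge -> circular wait
                return True
            if colour.get(q, WHITE) == WHITE and visit(q):
                return True
        colour[p] = BLACK
        return False

    return any(colour[p] == WHITE and visit(p) for p in wait_for)

# P1 waits for P2, P2 waits for P1: the deadlock pictured above.
print(has_cycle({"P1": {"P2"}, "P2": {"P1"}}))   # True
print(has_cycle({"P1": {"P2"}, "P2": set()}))    # False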
DEADLOCK RECOVERY
Once a deadlock has been detected, several alternatives exist:
a) Ostrich Algorithm (Bury head in the sand and assume things would just work out).
b) Informing the operator who will deal with it manually
c) Let the system recover automatically

There are 2 options for breaking a deadlock:

1. Process termination – killing some processes


Methods available include:
a) Abort all deadlocked processes – has a great expense
b) Abort one process at a time until the deadlock cycle is eliminated.
A scheduling algorithm may be necessary to identify the process to abort.
2. Resource Preemption
Preempt some resources from processes and give these resources to other processes until the deadlock cycle is broken. Issues to be addressed include:
➢ Selecting a victim, i.e. a resource to preempt.
➢ Roll back – a process from which a resource is preempted needs to be rolled back to some safe state and restarted from that state once the deadlock is over.
➢ Starvation – guarantee that resources will not always be preempted from the same process.

DEADLOCK PREVENTION
Aims at getting rid of conditions that cause deadlock. Methods used include:
a) Denying Mutual exclusion – mutual exclusion should only be allowed for non-shareable resources. For shareable resources, e.g. read-only files, processes attempting to open them should be granted simultaneous access. Thus for shareable resources we do not allow mutual exclusion.
b) Denying Hold and wait (wait for condition) – To deny this we must guarantee that
whenever a process requests a resource it does not hold any other resources.
c) Denying “No preemption”
If a process that is holding some resource requests another resource that cannot be
immediately allocated to it (i.e. it must wait) then all resources currently being held are
preempted. The process will be restarted only when it can regain its old resources as well as the new ones that it is requesting.

d) Denying Circular wait
All resources are uniquely numbered and processes must request resources in linear ascending order. This has been implemented in many o/s but has some difficulties, i.e.:
- Addition of new resources requires rewriting of existing programs so as to assign the unique numbers.
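
A minimal sketch of this rule: resources are given unique numbers (the numbering below is an assumption) and a request is refused if it is not strictly higher than the highest-numbered resource the process already holds, which makes a circular wait impossible.

# Every resource gets a unique number; requests must be made in ascending order.
RESOURCE_ORDER = {"tape_drive": 1, "disk": 5, "printer": 12}

class OrderedRequester:
    def __init__(self):
        self.highest_held = 0    # number of the highest-ranked resource held

    def request(self, resource):
        rank = RESOURCE_ORDER[resource]
        if rank <= self.highest_held:
            raise RuntimeError(
                f"{resource} (#{rank}) requested out of order; "
                f"already holding a resource ranked #{self.highest_held}")
        self.highest_held = rank
        print("granted", resource)

p = OrderedRequester()
p.request("tape_drive")
p.request("printer")
# p.request("disk")  # would raise: disk (#5) is ranked below printer (#12)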

DEVICE MANAGEMENT (INPUT/OUTPUT MANAGEMENT)


Device management encompasses the management of I/O devices such as printers, keyboards, mice, disk drives, tape drives, modems, etc. The devices normally transfer and receive alphanumeric characters in ASCII, which uses 7 bits to code 128 characters, i.e. A–Z, a–z, 0–9 and 32 special printable characters such as % and *.
Major differences between I/O devices
a) Manner of operation – some are electromechanical (printer), other electromagnetic
(disk) while others are electronic (RAM).
b) Data transfer rates
c) Data representation – data code formats
d) Units of transfer – some may use bytes, others blocks.
e) Error conditions - Nature of errors are varied.
The major objective of I/O management is to resolve these differences by supervising and
synchronizing all input and output transfers.
Basic functions of I/O management
1. Keeping track of the status of all devices, which require special mechanisms.
2. Deciding the policy that determines who gets a device, for how long and when. There are 3 basic techniques for implementing these policies:
(i) Dedicated – a device is assigned to a single process.
(ii) Shared – A device is shared by many processes.
(iii) Virtual – A technique whereby one physical device is simulated on another physical
device (RAM on hard disk).
3. Allocation – it physically assigns the device to a process.
4. De-allocation of resources for processes.

Communication with a device takes place over three buses:
Address bus – carries the device address that identifies the device being communicated with
Data bus – carries data to or from a device
Control bus – transmits I/O commands

There are four types of commands that are transmitted:


(a) Control command – activates device and informs it what to do e.g. rewind
(b) Status command – used to test conditions in the interface and peripherals e.g. checking
for errors.
(c) Data output command – causes the interface to transfer data to the device.
(d) Data input command – interface receives an item of data from device.

INPUT – OUTPUT PROCESSOR (IOP)


Instead of having each interface communicate with the processor a computer may incorporate one
or more external processors and assign them the task of communicating directly with all I/O
devices.

It takes care of input/output tasks, relieving the main processor of the housekeeping chores involved in I/O transfers. In addition, the IOP can perform other processing tasks such as arithmetic, logic and code translation.

TECHNIQUES FOR PERFORMING I/O OPERATIONS (Modes of transfer)


a) Programmed I/O – Occurs as a result of I/O instructions written in the computer
program. Each data transfer is initiated by an instruction in program. Once it has been
initiated the CPU is required to monitor the interface to see when a transfer can again be
made.
Disadvantage: CPU stays in a program loop until the I/O unit indicates that it is ready for
data transfer (time consuming).
b) Interrupt Driven (Interrupt initiated I/O)
Requires the interface to issue an interrupt request signal only when data are available from the device. In the meantime the CPU can proceed to execute another program; the interface meanwhile keeps monitoring the device. When the interface determines that the device is ready, it generates an interrupt request to the computer, which momentarily stops the task it is processing, branches to a service program to process the I/O transfer, and then returns to the original task.
c) Direct Memory Access – a transfer technique which removes the CPU from control of the transfer of data from a fast storage device to memory. This is because at times the transfer is limited by the speed of the CPU. Removing the CPU from the path and letting the peripheral device manage the memory buses directly would improve the speed of transfer. The CPU has no control of the memory buses during this operation. A DMA
controller takes over the buses to manage the transfer directly between I/O device and
memory.

SOFTWARE CONSIDERATIONS
Computers must have software routines for controlling peripherals and for transfer of data
between the processor and peripherals. I/O routines must issue control commands to activate the
peripheral and to check the device status to determine when it is ready for data transfer.

DISKS
They include floppy disks, hard disks, disk caches, RAM disks and laser optical disks (DVDs, CD-ROMs). On DVDs and CD-ROMs, sectors form a long spiral that spirals out from the center of the disk. On floppy disks and hard disks, sectors are organized into a number of concentric circles or tracks. As one moves out from the center of the disk, the tracks get larger.

Disk Hardware
This refers to the disk drive that accesses information in the disk. The actual details of disk I/O
operation depend on the computer system, the operating system and nature of I/O channel and
disk controller hardware.

VIRTUAL DEVICES
It is a technique whereby one physical device is simulated on another physical device. They include:
a) Buffers / Buffering
A buffer is an area of primary storage for holding data during I/O transfers e.g. printer buffer.
b) Spooling
A technique whereby a high speed device is interposed between a running program and a low
speed device involved with the program in input/ output. Instead of writing to a printer, outputs
are written to a disk.

c) Caching
Cache memory is usually a memory that is smaller and faster than main memory and that is interposed between main memory and the processor. It reduces average memory access times by exploiting the principle of locality.

MEMORY MANAGEMENT
It is necessary in order to improve both the utilization of the CPU and the speed of the computer's response to its users.
Memory allocation techniques
1. Paging
Main memory (physical memory) is broken into fixed blocks called frames. The processes
(logical memory) are also broken into blocks of same size called pages. When a process is
executed its pages are loaded into any available memory frames from the backing storage.
The allocation of frames to processes is tracked by a free-frame list and a page table.
Example

Before allocation: the free-frame list contains frames 14, 13, 18, 20 and 15, and the new process has pages 0–3 in backing store.
After allocation: page 0 is loaded into frame 14, page 1 into frame 13, page 2 into frame 18 and page 3 into frame 20; frame 15 remains on the free-frame list.

New process page table:
Page  Frame No
0     14
1     13
2     18
3     20
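
A minimal sketch of the address translation implied by the page table above; the page/frame size of 1024 bytes is an assumption, while the frame numbers come from the example.

PAGE_SIZE = 1024                              # assumed page/frame size in bytes
page_table = {0: 14, 1: 13, 2: 18, 3: 20}     # from the example above

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]                  # error (page fault) if page not mapped
    return frame * PAGE_SIZE + offset

# Byte 100 of page 2 lives at byte 100 of frame 18.
print(translate(2 * PAGE_SIZE + 100))         # 18 * 1024 + 100 = 18532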

Advantages
➢ Minimize fragmentation

Disadvantages
➢ Page-mapping hardware may increase the cost of the computer
➢ Memory used to store the various tables may be large
➢ Some memory may remain unused if what remains is not sufficient for a frame.

2. Segmentation
Processes are divided into variable-sized segments determined by the sizes of the process's program, sub-routines, data structures, etc. A segment is a logical grouping of information, such as a routine, an array or a data area, determined by the programmer. Each segment has a name and a length.
Example: User’s view of a program

[Figure: a user's view of a program in its logical address space (in backing store) as segments – main program, subroutine, Sqrt, stack and symbol table; a segment table records each segment's limit and base, mapping the segments into non-contiguous areas of main memory.]
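
A minimal sketch of translating a (segment, offset) pair through a segment table of (limit, base) entries; the table values below are illustrative assumptions rather than the exact figures from the diagram.

# segment number -> (limit, base); values are illustrative.
segment_table = {
    0: (1000, 1400),   # e.g. main program
    1: (400, 6300),    # e.g. Sqrt routine
    2: (1100, 3200),   # e.g. stack
}

def translate(segment, offset):
    limit, base = segment_table[segment]
    if offset >= limit:
        raise MemoryError("offset beyond segment limit (addressing error trap)")
    return base + offset

print(translate(0, 53))    # 1400 + 53 = 1453
print(translate(1, 399))   # 6300 + 399 = 6699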

Advantages
➢ Eliminates internal fragmentation
➢ Allows dynamic growth of segments
➢ Facilitates the loading of only one copy of shared routines.
Disadvantages
➢ Considerable compaction overheads incurred in order to support dynamic growth and
eliminate fragmentation.
➢ Maximum segment size is limited to the size of main memory
➢ Can cause external fragmentation when all blocks of free memory are too small to
accommodate a segment.

3. Partitioned Allocation
Ensures the main memory accommodates both o/s and the various user processes i.e. memory is
divided into 2 partitions one for resident o/s and one for user processes. It protects o/s code and
data from changes (accidental or malicious) by user processes.
The o/s takes into account the memory requirements of each process and amount of available
memory space in determining which processes are allocated memory.

Advantages
➢ Eliminates fragmentation and makes it possible to allocate more partitions, which allows a higher degree of multiprogramming
Disadvantages
➢ Relocation hardware increases the cost of computers
➢ Compaction time may be substantial
➢ Job/partition size is limited to the size of main memory

4. Overlays
Used when process size is larger than amount of memory allocated (keeps in memory only those
instructions and data that are needed at any given time)
Advantages
➢ Do not require any special support from the o/s, i.e. can be implemented by the users with simple file structures
Disadvantages
➢ Programmers must design and program an overlay structure properly

5. Swapping
User programs do not remain in main memory until completion. In some systems one job occupies main storage at a time. That job runs, and when it can no longer continue it relinquishes both storage and CPU to the next job.

Thus the entire storage is dedicated to one job for a brief period; the job is then removed (swapped out or rolled out) and the next job is brought in (swapped in or rolled in). A job will normally be swapped in and out many times before it is completed. Swapping guarantees reasonable response times.

VIRTUAL MEMORY
It is a technique where a part of secondary storage is addressed as main memory. It allows
execution of processes that may not be completely in memory i.e. programs can be larger than
memory.

FILE MANAGEMENT
File system – concerned with managing secondary storage space particularly disk storage. It
consists of two distinct parts:
➢ A collection of files each storing related data
➢ A directory structure, which organizes and provides information about all the files in the system.
A file – is a named collection of related information that is recorded on secondary storage.

File Naming
A file is named for convenience of its human users and a name is usually a string of characters
e.g. “good.c”. When a file is named it becomes independent of the process, the user and even the
system that created it i.e. another user may edit the same file and specify a different name.

File Types
An o/s should recognize and support different file types so that it can operate on a file in reasonable ways.
A file's type can be indicated as part of the file name. The name is split into 2 parts – a name and an extension, usually separated by a period character. The extension indicates the type of file and the type of operations that can be performed on that file.

File Type – Usual extensions
Executable – .exe, .com
Source Code – .c, .p, .pas, .bas, .vbp, .prg, .java
Text – .txt, .rtf, .doc
Graphics – .bmp, .jpg, .mtf
Spreadsheet – .xls
Archive – .zip, .arc, .rar (compressed files)

File Attributes
Apart from the name, other attributes of files include:
• Size – amount of information stored in the file

• Location – Is a pointer to a device and to the location of file on the device
• Protection – Access control to information in file (who can read, write, execute and so
on)
• Time, date and user identification – this information is kept for creation, last modification and last use. These data can be useful for protection, security and usage monitoring.
• Volatility – frequency with which additions and deletions are made to a file
• Activity – Refers to the percentage of file’s records accessed during a given period of
time.

File Operations
File can be manipulated by operations such as:
• Open – prepare a file to be referenced
• Close – prevent further reference to a file until it is reopened
• Create – Build a new file
• Destroy – Remove a file
• Copy – create another version of file with a new name.
• Rename – change name of a file.
• List – Print or display contents.
• Move – change location of file
Individual data items within the file may be manipulated by operations like:
• Read – Input a data item to a process from a file
• Write – Output a data item from a process to a file
• Update – modify an existing data item in a file
• Insert – Add a new data item to a file
• Delete – Remove a data item from a file
• Truncate – Delete some data items; the file retains all its other attributes
File Structure
Refers to the internal organization of the file. File types may indicate structure. Certain files must conform to a required structure that is understood by the o/s. Some o/s have file systems that support multiple structures, while others impose (and support) only a minimal number of file structures, e.g. MS DOS and UNIX. UNIX considers each file to be a sequence of 8-bit bytes. The Macintosh o/s supports a minimal number of file structures and expects executables to contain 2 parts – a resource fork and a data fork. The resource fork contains information of importance to the user, e.g. the labels of any buttons displayed by a program.
File Organization
Refers to the manner in which the records of a file are arranged on secondary storage.
The most popular schemes are:-
a) Sequential
Records placed in physical order. The next record is the one that physically follows the previous
record. It is used for records on magnetic tape.
b) Direct
Records are directly (randomly) accessed by their physical addresses on a direct access storage device (DASD). The application user places the records on the DASD in any order appropriate for a particular application.
c) Indexed Sequential
Records are arranged in logical sequence according to a key contained in each record. The system
maintains an index containing the physical addresses of certain principal records. Indexed
sequential records may be accessed sequentially in key order or they may be accessed directly by a search through the system-created index. It is usually used on disk.
d) Partitioned
It is a file of sequential sub files. Each sequential sub file is called a member. The starting address
of each member is stored in the file directory. Partitioned files are often used to store program
libraries.
FILE ALLOCATION METHODS
Deals with how to allocate disk space to different files so that the space is utilized effectively and
files can be accessed quickly. Methods used include:
1. Contiguous Allocation
Files are assigned to contiguous areas of secondary storage. A user specifies in advance the size
of area needed to hold a file to be created. If the desired amount of contiguous space is not
available the file cannot be created.
Advantages
i) Speeds up access since successive logical records are normally physically adjacent to one another.
ii) File directories are relatively straightforward to implement: for each file, merely retain the address of the start of the file and its length.
Disadvantages
i) Difficult to find space for a new file, especially if the disk is fragmented into a number of separate holes (blocks).
To prevent this, a user can run a repackaging routine (disk defragmenter).
ii) Another difficulty is determining how much space is needed for a file, since if too little space is allocated the file cannot be extended or cannot grow.

Some o/s use a modified contiguous allocation scheme where a contiguous chunk of
space is allocated initially and then when that amount is not large enough, another chunk
of contiguous space, an extent is added.
2. Non Contiguous Allocation
It is used since files do tend to grow or shrink over time and because users rarely know in advance how large a file will be. Techniques used include:
a) Linked Allocation
It solves all problems of contiguous allocation. Each file is a linked list of disk blocks, the disk
blocks may be scattered anywhere on the disk. The directory contains a pointer to the first and
last disk block of the file. Each Block contains a pointer to the next block.
Advantage
• Eliminates external fragmentation
Disadvantages
• It is effective only for sequential-access files, i.e. where you have to access all blocks.
• Space may be wasted on pointers since they take 4 bytes. The solution is to collect blocks into multiples (clusters) and allocate clusters rather than individual blocks.
• Not very reliable, since loss or damage of a pointer would lead to failure.
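
A minimal sketch of linked allocation: the directory records a file's first (and last) block, each block stores a pointer to the next, and reading the file means following the chain; the block numbers and contents below are assumptions.

# disk block number -> (data, pointer to next block or None)
disk = {
    9:  ("File blo", 16),
    16: ("ck two; ", 1),
    1:  ("the end.", None),
}
directory = {"report.txt": {"start": 9, "end": 1}}

def read_file(name):
    """Follow the chain of pointers from the first block to the last."""
    block = directory[name]["start"]
    data = ""
    while block is not None:
        chunk, block = disk[block]
        data += chunk
    return data

print(read_file("report.txt"))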
b) Indexed Allocation
It tries to support efficient direct access which is not possible in linked allocation. Pointers to the
blocks are brought together into one location (index block). Each file has its own index block,
which is an array of disk block addresses.
Advantage
• Supports Direct access without suffering from external fragmentation

Disadvantage
• Suffers from wasted space due to pointer overhead of the index block
• The biggest problem is determining how large the index block should be
The mechanisms to deal with this include:-
i) Linked scheme – index block is normally one disk block; to allow large files
we may link together several index blocks.
ii) Multilevel index – use a separate index block to point to the index blocks
which point to the files blocks themselves
iii) Combined scheme – keeps the first, say, 15 pointers of the index block in the file's index block. The first 12 of these pointers point to direct blocks, i.e. they contain the addresses of blocks that contain data of the file. The next 3 pointers point to indirect blocks, which contain addresses of further blocks containing data.
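
A minimal sketch of indexed allocation: each file's index block is an array of disk-block addresses, so the i-th logical block can be reached with a single lookup rather than by walking a chain; the block numbers are assumptions.

disk = {19: "block A", 7: "block B", 23: "block C"}

# Each file has one index block: an array of disk-block addresses.
index_blocks = {"data.bin": [19, 7, 23]}

def read_block(name, logical_block):
    """Direct access: one lookup in the index block, one disk access."""
    address = index_blocks[name][logical_block]
    return disk[address]

print(read_block("data.bin", 2))   # "block C", without touching blocks 0 and 1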

FILE MANAGEMENT TECHNIQUES


File Implementation
It is concerned with issues such as file storage and access on the most common secondary storage medium, the hard disk. It explores ways to allocate disk space, to recover freed space, to track the locations of data and to interface other parts of the o/s to secondary storage.
Directory implementation
The selection of directory allocation and directory management algorithms has a large effect on
the efficiency, performance and reliability of the file system.
Algorithms used:
a) Linear list – the simplest method of implementing a directory. It uses a linear list of file names with pointers to the data blocks. It requires a linear search to find a particular entry.
b) Hash table – a linear list still stores the directory entries, but a hash data structure is also used. The hash table takes a value computed from the file name and returns a pointer to the file name in the linear list.
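
A minimal sketch of the hash-table scheme: the directory entries still live in a linear list, but a hash table maps a file name straight to its position in that list, avoiding a linear search; the entry format is an assumption.

directory_entries = []     # linear list of (file name, first data block)
name_to_index = {}         # hash table: file name -> position in the list

def add_entry(name, first_block):
    name_to_index[name] = len(directory_entries)
    directory_entries.append((name, first_block))

def lookup(name):
    # One hash lookup instead of scanning the whole linear list.
    return directory_entries[name_to_index[name]]

add_entry("notes.txt", 42)
add_entry("os.pdf", 7)
print(lookup("os.pdf"))    # ('os.pdf', 7)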

Free Space Management


To keep track of free disk space the system maintains a free space list which records all disk
blocks that are free (i.e. those not allocated to some file or directory). To create a file we search
the free space list for the required amount of space and allocate that space to the new file. This
space is then removed from the free space list. Techniques used to manage disk space include:-
a) Bit vector – the free-space list is implemented as a bit map or bit vector. Each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0 (see the sketch after this list).
b) Linked List – links together all the free disk blocks, keeping a pointer to the first free
block in a special location on the disk and caching it in memory. This first block contains
a pointer to the next free disk block and so on.
c) Grouping – Is a modification of free list approach and stores the addresses of n free
blocks in the first free block. The first n-1 of these blocks are actually free blocks and so
on. The importance of this implementation is that the addresses of a large number of free
blocks can be found quickly unlike in the standard linked list approach.
d) Counting – takes advantage of the fact that generally several contiguous blocks may be allocated or freed simultaneously, particularly when space is allocated with the contiguous allocation algorithm or through clustering. Thus, rather than keeping a list of n free disk addresses, we can keep the address of the first free block and the number n of free contiguous blocks that follow the first block. Although each entry requires more space than would a simple disk address, the overall list will be shorter as long as the count is generally greater than 1.
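
A minimal sketch of the bit-vector scheme from item (a) above, using the convention stated there (1 = free, 0 = allocated); the number of blocks is an assumption.

N_BLOCKS = 16
# Bit vector: bit i is 1 if block i is free, 0 if it is allocated.
bitmap = [1] * N_BLOCKS

def allocate_block():
    """Find the first free block, mark it allocated and return its number."""
    for i, bit in enumerate(bitmap):
        if bit == 1:
            bitmap[i] = 0
            return i
    raise RuntimeError("disk full")

def free_block(i):
    bitmap[i] = 1

b = allocate_block()
print("allocated block", b)
free_block(b)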

Efficiency and Performance


This is essential when it comes to file and directory implementation, since disks tend to be a major bottleneck in system performance, i.e. they are the slowest main computer component. To improve performance, disk controllers include enough local memory to create an on-board cache that may be sufficiently large to store an entire track at a time.
File Sharing
In a multi-user system there is almost always a requirement for allowing files to be shared among a number of users. Two issues arise:
i) Access rights – the system should provide a number of options so that the way in which a particular file is accessed can be controlled. Access rights assigned to a particular file include: read, execute, append, update, delete.
ii) Simultaneous access – a discipline is required when access to append or update a file is granted to more than one user. One approach is to allow a user to lock the entire file when it is to be updated.
