Unit-4 OS Notes

The document covers key concepts of file systems, including file organization, access methods, directory structures, and free-space management. It also discusses I/O management, disk management, and the importance of protection and security in operating systems, along with case studies of UNIX and Windows file systems. Additionally, it explains file attributes, operations, and various directory structures, as well as disk formatting and management techniques.


UNIT-IV

File Systems: file concept, file organization and access methods, allocation methods,
directory structure, free-space management
I/O Management: I/O hardware, polling, interrupts, DMA, kernel I/O subsystem
(scheduling, buffering, caching, spooling and device reservation)
Disk Management: disk structure, disk scheduling (FCFS, SSTF, SCAN,C-SCAN) , disk
reliability, disk Performance parameters
Protection and Security: Goals of protection and security, security attacks,
authentication, program threats, system threats, threat monitoring.
Case studies: UNIX file system, Windows file system

FILE SYSTEM INTERFACE


The file system provides the mechanism for on-line storage of and access to both data
and programs of the operating system and all the users of the computer system. The file system
consists of two distinct parts: a collection of files, each storing related data, and a directory
structure, which organizes and provides information about all the files in the system.

FILE CONCEPT
A file is a collection of related information that is recorded on secondary storage. From
a user's perspective, a file is the smallest allotment of logical secondary storage, and data
cannot be written to secondary storage unless they are within a file.
Four terms are in common use when discussing files: Field, Record, File and Database
A field is the basic element of data. An individual field contains a single value, such as
an employee’s last name, a date, or the value of a sensor reading. It is characterized by
its length and data type.
A record is a collection of related fields that can be treated as a unit by some application
program. For example, an employee record would contain such fields as name, social
security number, job classification, date of hire, and so on.
A file is a collection of similar records. The file is treated as a single entity by users and
applications and may be referenced by name.
A database is a collection of related data. A database may contain all of the information
related to an organization or project, such as a business or a scientific study. The database
itself consists of one or more types of files.

File Attributes:
A file has the following attributes:
➢ Name: The symbolic file name is the only information kept in human readable form.
➢ Identifier: This unique tag, usually a number, identifies the file within the file system; it
is the non-human-readable name for the file.
➢ Type: This information is needed for those systems that support different types.
➢ Location: This information is a pointer to a device and to the location of the file on that
device.
➢ Size: The current size of the file (in bytes, words, or blocks), and possibly the maximum
allowed size are included in this attribute.
➢ Protection: Access-control information determines who can do reading, writing,
executing, and so on.
➢ Time, date, and user identification: This information may be kept for creation,
modification and last use. These data can be useful for protection, security, and usage
monitoring.
File Operations:
The operating system can provide system calls to create, write, read, reposition, delete, and
truncate files. The file operations are described as follows:
Creating a file: Two steps are necessary to create a file. First, space in the file system must
be found for the file. Second, an entry for the new file must be made in the directory. The
directory entry records the name of the file and the location in the file system, and possibly
other information.
Writing a file: To write a file, we make a system call specifying both the name of the file and
the information to be written to the file. Given the name of the file, the system searches the
directory to find the location of the file. The system must keep a write pointer to the location
in the file where the next write is to take place. The write pointer must be updated whenever
a write occurs.
Reading a file: To read from a file, we use a system call that specifies the name of the file and
where (in main memory) the next block of the file should be put. Again, the directory is
searched for the associated directory entry, and the system needs to keep a read pointer to
the location in the file where the next read is to take place. Once the read has taken place,
the read pointer is updated.
Repositioning within a file: The directory is searched for the appropriate entry, and the
current-file-position pointer is set to a given value. Repositioning within a file does not need
to involve any actual I/O. This file operation is also known as a file seek.
Deleting a file: To delete a file, we search the directory for the named file. Having found the
associated directory entry, we release all file space, so that it can be reused by other files,
and erase the directory entry.
Truncating a file: The user may want to erase the contents of a file but keep its attributes.
Rather than forcing the user to delete the file and then recreate it, this function allows all
attributes to remain unchanged, except for file length, but lets the file be reset to length
zero and its file space released.

File Types: A common technique for implementing file types is to include the type as part of
the file name. The name is split into two parts: a name and an extension. The system uses the
extension to indicate the type of the file and the type of operations that can be done on that file.
ACCESS METHODS
When a file is used, this information must be accessed and read into computer memory.
The information in the file can be accessed in several ways. There are two major access methods
as follows:
Sequential Access: Information in the file is processed in order, one record after the other.
A read operation reads the next portion of the file and automatically advances a file pointer,
which tracks the I/O location. Similarly, a write appends to the end of the file and advances to
the end of the newly written material (the new end of file). Sequential access is based on a tape
model of a file, and works as well on sequential-access devices as it does on random-access
ones.
Direct Access: A file is made up of fixed length logical records that allow programs to read
and write records rapidly in no particular order. The direct-access method is based on a disk
model of a file, since disks allow random access to any file block. For direct access, the file is
viewed as a numbered sequence of blocks or records. A direct-access file allows arbitrary blocks
to be read or written. There are no restrictions on the order of reading or writing for a direct-
access file. For the direct-access method, the file operations must be modified to include the
block number as a parameter. Thus, we have read n, where n is the block number, rather than
read next, and write n rather than write next.
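As a rough sketch (using ordinary Python file I/O and a hypothetical fixed record size, not any particular OS interface), direct access amounts to computing an offset from the record number and seeking there before each read or write:

```python
import io

RECORD_SIZE = 64  # bytes per fixed-length logical record (illustrative value)

def read_record(f, n):
    """'read n': position at record n, then read it."""
    f.seek(n * RECORD_SIZE)           # jump straight to the n-th record
    return f.read(RECORD_SIZE)

def write_record(f, n, data):
    """'write n': position at record n, then write it."""
    assert len(data) == RECORD_SIZE
    f.seek(n * RECORD_SIZE)
    f.write(data)

# Demo on an in-memory "disk file": write records out of order, read back.
f = io.BytesIO(bytes(3 * RECORD_SIZE))
write_record(f, 2, b"C" * RECORD_SIZE)
write_record(f, 0, b"A" * RECORD_SIZE)
print(read_record(f, 2)[:1])          # b'C'
```

Because every record has the same length, no record needs to be read before another: the offset of record n is always n times the record size.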

DIRECTORY STRUCTURE
A directory is an object that contains the names of file system objects. File system allows
the users to organize files and other file system objects through the use of directories.
The structure created by placement of names in directories can take a number of forms:
Single-level tree, Two-level tree, multi-level tree or cyclic graph.
1. Single-Level Directory: The simplest directory structure is the single-level directory.
All files are contained in the same directory, which is easy to support and understand. A
single-level directory has significant limitations, when the number of files increases or when
the system has more than one user. Since all files are in the same directory, they must have
unique names.

2. Two-Level Directory: In the two-level directory structure, each user has its own user
file directory (UFD). Each UFD has a similar structure, but lists only the files of a single
user. When a user job starts or a user logs in, the system's master file directory (MFD) is
searched. The MFD is indexed by user name or account number, and each entry points to
the UFD for that user.
When a user refers to a particular file, only his own UFD is searched. Different users may
have files with the same name, as long as all the file names within each UFD are unique. To
create a file for a user, the operating system searches only that user's UFD to ascertain whether
another file of that name exists. To delete a file, the operating system confines its search to the
local UFD; thus, it cannot accidentally delete another user's file that has the same name.

3. Tree-structured directories: A tree structure is a more powerful and flexible approach
that organizes files and directories hierarchically. There is a master directory, which has under
it a number of user directories. Each of these user directories may have subdirectories and
files as entries. This is true at any level: that is, at any level, a directory may consist of entries
for subdirectories and/or entries for files.

4. Acyclic-Graph Directories:
An acyclic graph allows directories to have shared subdirectories and files. The same file
or subdirectory may be in two different directories. An acyclic graph is a natural generalization
of the tree structured directory scheme.
A shared file (or directory) is not the same as two copies of the file. With two copies, each
programmer can view the copy rather than the original, but if one programmer changes the file,
the changes will not appear in the other's copy.
Shared files and subdirectories can be implemented in several ways. A common way is
to create a new directory entry called a link. A link is a pointer to another file or subdirectory.

5. General Graph Directory:
When we add links to an existing tree-structured directory, the tree structure is
destroyed, resulting in a simple graph structure.
Free Space Management:
The file system is responsible for allocating free blocks to files, so it has to keep track of
all the free blocks present on the disk. There are two main approaches for managing the free
blocks on the disk.

1. Bit Vector

In this approach, the free-space list is implemented as a bit map (bit vector). It contains one
bit for each block on the disk.

If the block is free, the bit is 1; otherwise it is 0. Initially all the blocks are free, so each bit
in the bit map contains 1.

As space allocation proceeds, the file system allocates blocks to files and sets the
corresponding bits to 0.
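A minimal sketch of the bit-vector approach (the class name and disk size are illustrative, not a real file-system structure):

```python
class BitVector:
    """Free-space bit map: bit i is 1 if block i is free, 0 if allocated."""
    def __init__(self, n_blocks):
        self.bits = [1] * n_blocks   # initially every block is free

    def allocate(self):
        """Find the first free block, mark it allocated, return its number."""
        for i, b in enumerate(self.bits):
            if b == 1:
                self.bits[i] = 0
                return i
        raise RuntimeError("disk full")

    def free(self, i):
        self.bits[i] = 1

fs = BitVector(8)
a = fs.allocate()    # block 0
b = fs.allocate()    # block 1
fs.free(a)
print(fs.bits)       # [1, 0, 1, 1, 1, 1, 1, 1]
```

The main attraction of the bit map is that finding the first free block, or a run of consecutive free blocks, is a simple linear scan.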

2. Linked List

It is another approach for free space management. This approach suggests linking together all the
free blocks and keeping a pointer in the cache which points to the first free block.

Therefore, all the free blocks on the disks will be linked together with a pointer. Whenever a block
gets allocated, its previous free block will be linked to its next free block.
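The linked-list approach can be sketched the same way; here each free block's "next" pointer is modeled with a dictionary, and the cached head pointer tracks the first free block (all names are ours):

```python
class FreeList:
    """Linked-list free-space management: each free block stores the index
    of the next free block; 'head' caches the first free block."""
    def __init__(self, n_blocks):
        # next_free[i] = free block linked after block i (None ends the chain)
        self.next_free = {i: i + 1 for i in range(n_blocks - 1)}
        self.next_free[n_blocks - 1] = None
        self.head = 0

    def allocate(self):
        if self.head is None:
            raise RuntimeError("disk full")
        block = self.head
        self.head = self.next_free.pop(block)   # unlink the allocated block
        return block

    def free(self, block):
        self.next_free[block] = self.head       # push onto front of chain
        self.head = block

fl = FreeList(4)
print(fl.allocate(), fl.allocate())   # 0 1
fl.free(0)
print(fl.head)                        # 0
```

Unlike the bit map, traversing the free list requires reading the blocks themselves, but only the head pointer must be kept in memory.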

DISK MANAGEMENT
The operating system is responsible for several aspects of disk management.
Disk Formatting
A new magnetic disk is a blank slate. It is just platters of a magnetic recording material.
Before a disk can store data, it must be divided into sectors that the disk controller can read and
write. This process is called low-level formatting (or physical formatting).
Low-level formatting fills the disk with a special data structure for each sector. The data
structure for a sector consists of a header, a data area, and a trailer. The header and trailer
contain information used by the disk controller, such as a sector number and an error-
correcting code (ECC).
To use a disk to hold files, the operating system still needs to record its own data
structures on the disk. It does so in two steps. The first step is to partition the disk into one or
more groups of cylinders. The operating system can treat each partition as though it were a
separate disk. For instance, one partition can hold a copy of the operating system's executable
code, while another holds user files. After partitioning, the second step is logical formatting
(or creation of a file system). In this step, the operating system stores the initial file-system data
structures onto the disk.
Boot Block
When a computer is powered up or rebooted, it needs to have an initial program to run.
This initial program is called bootstrap program. It initializes all aspects of the system (i.e. from
CPU registers to device controllers and the contents of main memory) and then starts the
operating system.
To do its job, the bootstrap program finds the operating system kernel on disk, loads that
kernel into memory, and jumps to an initial address to begin the operating-system execution.
For most computers, the bootstrap is stored in read-only memory (ROM). This location
is convenient, because ROM needs no initialization and is at a fixed location that the processor
can start executing when powered up or reset. And since ROM is read only, it cannot be infected
by a computer virus. The problem is that changing this bootstrap code requires changing the
ROM hardware chips.
For this reason, most systems store a tiny bootstrap loader program in the boot ROM,
whose only job is to bring in a full bootstrap program from disk. The full bootstrap program
can be changed easily: a new version is simply written onto the disk. The full bootstrap
program is stored in a partition at a fixed location on the disk, called the boot blocks. A disk
that has a boot partition is called a boot disk or system disk.
Bad Blocks
Since disks have moving parts and small tolerances, they are prone to failure. Sometimes
the failure is complete, and the disk needs to be replaced, and its contents restored from backup
media to the new disk.
More frequently, one or more sectors become defective. Most disks even come from the
factory with bad blocks. Depending on the disk and controller in use, these blocks are handled
in a variety of ways.
The controller maintains a list of bad blocks on the disk. The list is initialized during the
low-level format at the factory, and is updated over the life of the disk. The controller can be
told to replace each bad sector logically with one of the spare sectors. This scheme is known as
sector sparing or forwarding.

DISKS STRUCTURE
Magnetic disks provide the bulk of secondary storage for modern computer systems.
Each disk platter has a flat circular shape, like a CD. Common platter diameters range from 1.8
to 5.25 inches. The two surfaces of a platter are covered with a magnetic material. We store
information by recording it magnetically on the platters.
A read-write head "flies" just above each surface of every platter. The heads are attached
to a disk arm, which moves all the heads as a unit. The surface of a platter is logically divided
into circular tracks, which are subdivided into sectors. The set of tracks that are at one arm
position forms a cylinder. There may be thousands of concentric cylinders in a disk drive, and
each track may contain hundreds of sectors. The storage capacity of common disk drives is
measured in gigabytes.
(Structure of a magnetic disk (hard disk drive))
When the disk is in use, a drive motor spins it at high speed. Most drives rotate 60 to 200
times per second. Disk speed has two parts. The transfer rate is the rate at which data flow
between the drive and the computer. The positioning time (or random-access time) consists of
seek time and rotational latency. The seek time is the time to move the disk arm to the desired
cylinder. And the rotational latency is the time for the desired sector to rotate to the disk head.
Typical disks can transfer several megabytes of data per second and they have seek times and
rotational latencies of several milliseconds.
Capacity of Magnetic disks(C) = S x T x P x N
Where S= no. of surfaces = 2 x no. of disks, T= no. of tracks in a surface,
P= no. of sectors per track, N= size of each sector
Transfer Time: The transfer time to or from the disk depends on the rotation speed of the disk
in the following fashion: T = b / ( r x N)
Where T= transfer time, b=number of bytes to be transferred, N= number of bytes on a track,
r= rotation speed, in revolutions per second.
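To make the two formulas concrete, here is a worked example; the geometry figures (platters, tracks, sectors, rotation speed) are assumed round numbers, not a real drive's specification:

```python
# Capacity: C = S x T x P x N
S = 2 * 4            # surfaces: 2 per platter, 4 platters (assumed)
T = 1024             # tracks per surface
P = 64               # sectors per track
N = 512              # bytes per sector
C = S * T * P * N
print(C)             # 268435456 bytes (256 MB)

# Transfer time: T = b / (r x N), where N here is bytes on a track
b = 1024 * 512       # bytes to be transferred (1024 sectors)
r = 120              # rotation speed: 120 rev/s (7200 RPM)
N_track = P * N      # 64 sectors x 512 bytes = 32768 bytes on a track
t = b / (r * N_track)
print(round(t, 4))   # 0.1333 seconds
```

Note that b / N_track is the number of track revolutions' worth of data, and dividing by r converts revolutions into seconds.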
Modern disk drives are addressed as large one-dimensional arrays of logical blocks,
where the logical block is the smallest unit of transfer. The one-dimensional array of logical
blocks is mapped onto the sectors of the disk sequentially. Sector 0 is the first sector of the first
track on the outermost cylinder. The mapping proceeds in order through that track, then
through the rest of the tracks in that cylinder, and then through the rest of the cylinders from
outermost to innermost.
By using this mapping, we can convert a logical block number into an old-style disk
address that consists of a cylinder number, a track number within that cylinder, and a sector
number within that track. In practice, it is difficult to perform this translation, for two reasons.
First, most disks have some defective sectors, but the mapping hides this by substituting spare
sectors from elsewhere on the disk. Second, the number of sectors per track is not a constant on
some drives.
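Ignoring those two complications, the translation is simple arithmetic. The sketch below assumes an idealized disk with no spare sectors and a constant number of sectors per track (exactly the assumptions the text says real drives violate):

```python
def lba_to_chs(lba, tracks_per_cylinder, sectors_per_track):
    """Map a logical block number to (cylinder, track, sector) on an
    idealized disk: no spare sectors, constant sectors per track."""
    sectors_per_cylinder = tracks_per_cylinder * sectors_per_track
    cylinder, rem = divmod(lba, sectors_per_cylinder)
    track, sector = divmod(rem, sectors_per_track)
    return cylinder, track, sector

# Tiny example geometry: 2 surfaces (tracks per cylinder), 4 sectors per track.
print(lba_to_chs(0, 2, 4))   # (0, 0, 0)  -> first sector, outermost cylinder
print(lba_to_chs(9, 2, 4))   # (1, 0, 1)  -> second cylinder after filling 8
```

Each cylinder holds tracks_per_cylinder x sectors_per_track blocks, so the cylinder number is the quotient and the remainder is split across track and sector, matching the outermost-to-innermost mapping described above.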
On media such as CDs and DVDs, the density of bits per track is uniform, so the drive
increases its rotation speed as the head moves toward the inner tracks to keep the data rate
constant. This is called constant linear velocity (CLV). Alternatively, the disk rotation speed
can stay constant, and the density of bits decreases from inner tracks to outer tracks to keep
the data rate constant. This method is used in hard disks and is known as constant angular
velocity (CAV).

DISK SCHEDULING
The seek time is the time for the disk arm to move the heads to the cylinder containing
the desired sector. The rotational latency is the time waiting for the disk to rotate the desired
sector to the disk head. The disk bandwidth is the total number of bytes transferred divided by
the total time between the first request for service and the completion of the last transfer.
We can improve both the access time and the bandwidth by scheduling the servicing of
disk I/O requests in a good order. Several algorithms exist to schedule the servicing of disk I/O
requests as follows:
1. FCFS Scheduling
The simplest form of disk scheduling is first-come, first-served (FCFS) scheduling, which
processes requests from the queue in first-in, first-out (FIFO) order. We illustrate this with a
request queue (cylinders 0-199): 98, 183, 37, 122, 14, 124, 65, 67
Suppose the disk head starts at cylinder 53.

(FCFS disk scheduling)
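The total head movement for this example can be computed with a short sketch (the function name is ours):

```python
def fcfs(requests, head):
    """FCFS: service requests in arrival order; return total head movement."""
    total = 0
    for r in requests:
        total += abs(r - head)   # seek distance to the next request
        head = r
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs(queue, 53))   # 640 cylinders of total head movement
```

The wild swing from 183 down to 37 and back up to 122 illustrates why FCFS, though fair, is generally not the fastest schedule.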


2. SSTF Scheduling
It stands for shortest-seek-time-first (SSTF) algorithm. The SSTF algorithm selects the
request with the minimum seek time from the current head position. Since seek time increases
with the number of cylinders traversed by the head, SSTF chooses the pending request closest
to the current head position. We illustrate this with a request queue (0-199): 98, 183, 37, 122, 14,
124, 65, 67
Suppose the disk head starts at cylinder 53.
(SSTF disk scheduling)
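SSTF can be sketched as a greedy loop that always picks the closest pending request (the function name is ours):

```python
def sstf(requests, head):
    """SSTF: repeatedly service the pending request closest to the head."""
    pending, order, total = list(requests), [], 0
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))  # closest request
        pending.remove(nxt)
        total += abs(nxt - head)
        head = nxt
        order.append(nxt)
    return order, total

order, total = sstf([98, 183, 37, 122, 14, 124, 65, 67], 53)
print(order)   # [65, 67, 37, 14, 98, 122, 124, 183]
print(total)   # 236
```

Compared with FCFS's 640 cylinders, SSTF cuts the head movement to 236 for this queue, though like SJF it can starve requests far from the head.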
3. SCAN Scheduling
In the SCAN algorithm, the disk arm starts at one end of the disk, and moves toward the
other end, servicing requests as it reaches each cylinder, until it gets to the other end of the disk.
At the other end, the direction of head movement is reversed, and servicing continues. The head
continuously scans back and forth across the disk. We illustrate this with a request queue (0-
199): 98, 183, 37, 122, 14, 124, 65, 67
Suppose the disk head starts at cylinder 53.

(SCAN disk scheduling)
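A sketch of SCAN, assuming (as in the usual illustration of this example) that the head is initially moving toward cylinder 0 and sweeps all the way to each end:

```python
def scan(requests, head, max_cyl=199):
    """SCAN (elevator): head moves toward cylinder 0 servicing requests,
    reverses at the end, then sweeps back up. Returns order and movement."""
    lower = sorted(r for r in requests if r < head)
    upper = sorted(r for r in requests if r >= head)
    order = lower[::-1] + upper               # down to 0, then back up
    total = head + (upper[-1] if upper else 0)  # head->0, then 0->last request
    return order, total

order, total = scan([98, 183, 37, 122, 14, 124, 65, 67], 53)
print(order)   # [37, 14, 65, 67, 98, 122, 124, 183]
print(total)   # 236 (53 cylinders down to 0, then 183 back up)
```

Requests just behind the head wait longest, since the arm must sweep to the end and return, which motivates C-SCAN below.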


4. C-SCAN Scheduling
Circular SCAN (C-SCAN) scheduling is a variant of SCAN designed to provide a more
uniform wait time. Like SCAN, C-SCAN moves the head from one end of the disk to the other,
servicing requests along the way. When the head reaches the other end, it immediately returns
to the beginning of the disk, without servicing any requests on the return trip. The CSCAN
scheduling algorithm essentially treats the cylinders as a circular list that wraps around from
the final cylinder to the first one. We illustrate this with a request queue (0-199):
98, 183, 37, 122, 14, 124, 65, 67. Suppose the disk head starts at cylinder 53.
(C-SCAN disk scheduling)
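A sketch of C-SCAN for the same queue; here the total counts the full return sweep from cylinder 199 back to 0, a convention that varies between textbooks:

```python
def c_scan(requests, head, max_cyl=199):
    """C-SCAN: sweep from the head up to max_cyl servicing requests, jump
    back to cylinder 0 without servicing, then continue upward."""
    upper = sorted(r for r in requests if r >= head)
    lower = sorted(r for r in requests if r < head)
    order = upper + lower                     # circular: wrap around to 0
    # head -> end, end -> 0 (return trip), 0 -> last low request
    total = (max_cyl - head) + max_cyl + (lower[-1] if lower else 0)
    return order, total

order, total = c_scan([98, 183, 37, 122, 14, 124, 65, 67], 53)
print(order)   # [65, 67, 98, 122, 124, 183, 14, 37]
print(total)   # 382, counting the 199-cylinder return trip
```

Because every request waits at most one full circular sweep, the wait time is more uniform than under SCAN.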
5. LOOK Scheduling
In practice, neither the SCAN nor the C-SCAN algorithm is implemented this way. More
commonly, the arm goes only as far as the final request in each direction. Then, it reverses
direction immediately, without going all the way to the end of the disk. These versions of SCAN
and C-SCAN are called LOOK and C-LOOK scheduling, because they look for a request before
continuing to move in a given direction. We illustrate this with a request queue
(0-199): 98, 183, 37, 122, 14, 124, 65, 67
Suppose the disk head starts at cylinder 53.

(C-LOOK disk scheduling)
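A sketch of C-LOOK, matching the figure above: the head goes up only as far as the final request, jumps back to the lowest pending request, and continues upward (the jump distance is counted in the total here):

```python
def c_look(requests, head):
    """C-LOOK: sweep up to the last request, jump to the lowest pending
    request, then continue upward. Returns order and head movement."""
    upper = sorted(r for r in requests if r >= head)
    lower = sorted(r for r in requests if r < head)
    order = upper + lower
    total = 0
    if upper:
        total += upper[-1] - head                        # sweep up to 183
    if lower:
        total += (upper[-1] if upper else head) - lower[0]  # jump 183 -> 14
        total += lower[-1] - lower[0]                    # sweep 14 -> 37
    return order, total

order, total = c_look([98, 183, 37, 122, 14, 124, 65, 67], 53)
print(order)   # [65, 67, 98, 122, 124, 183, 14, 37]
print(total)   # 322
```

Compared with C-SCAN's 382, avoiding the empty travel to cylinders 199 and 0 saves 60 cylinders of movement on this queue.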


Selection of a Disk-Scheduling Algorithm
➢ SSTF is common and has a natural appeal.
➢ Performance depends on the number and types of requests.
➢ Requests for disk service can be influenced by the file allocation method.
➢ The disk-scheduling algorithm should be written as a separate module of the operating
system, allowing it to be replaced with a different algorithm if necessary. Either SSTF or
LOOK is a reasonable choice for the default algorithm.
I/O SYSTEM
The role of the operating system in computer I/O is to manage and control I/O
operations and I/O devices. A device communicates with a computer system by sending
signals over a cable or even through the air. The device communicates with the machine via a
connection point. If one or more devices use a common set of wires for communication, the
connection is called a bus.
When device A has a cable that plugs into device B, and device B has a cable that plugs
into device C, and device C plugs into a port on the computer, this arrangement is called a daisy
chain. A daisy chain usually operates as a bus.
Buses are used widely in computer architecture as follows:

(A typical PC bus structure)


This figure shows a PCI bus (the common PC system bus) that connects the
processor-memory subsystem to the fast devices, and an expansion bus that connects relatively
slow devices such as the keyboard and serial and parallel ports. In the upper-right portion of
the figure, four disks are connected together on a SCSI bus plugged into a SCSI controller.
A controller is a collection of electronics that can operate a port, a bus, or a device. A
serial-port controller is a simple device controller. It is a single chip (or portion of a chip) in the
computer that controls the signals on the wires of a serial port.
Since the SCSI protocol is complex, the SCSI bus controller is often implemented as a
separate circuit board (or a host adapter) that plugs into the computer. It contains a processor,
microcode, and some private memory to enable it to process the SCSI protocol messages.
Some devices have their own built-in controllers. This board is the disk controller. It has
microcode and a processor to do many tasks, such as bad-sector mapping, prefetching,
buffering, and caching.
An I/O port consists of four registers, called the status, control, data-in, and data-out
registers.
➢ The status register contains bits that can be read by the host. These bits indicate states
such as whether the current command has completed, whether a byte is available to be
read from the data-in register, and whether there has been a device error.
➢ The control register can be written by the host to start a command or to change the mode
of a device.
➢ The data-in register is read by the host to get input.
➢ The data-out register is written by the host to send output.

The external devices can be grouped into three categories:

1. Human readable: Suitable for communicating with the computer user. Examples include
printers and terminals, the latter consisting of video display, keyboard, and perhaps other
devices such as a mouse.
2. Machine readable: Suitable for communicating with electronic equipment. Examples are
disk drives, USB keys, sensors, controllers, and actuators.
3. Communication: Suitable for communicating with remote devices. Examples are digital line
drivers and modems.

ORGANIZATION OF THE I/O FUNCTION (I/O Techniques)


There are three techniques for performing I/O operations:
1. Programmed I/O: The processor issues an I/O command, on behalf of a process, to an I/O
module; that process busy waits for the operation to be completed before proceeding.
2. Interrupt-driven I/O: The processor issues an I/O command on behalf of a process. There
are two possibilities. If the I/O instruction from the process is non blocking, then the
processor continues to execute instructions from the process that issued the I/O command.
If the I/O instruction is blocking, then the next instruction that the processor executes is
from the OS, which will put the current process in a blocked state and schedule another
process.
3. Direct memory access (DMA): A DMA module controls the exchange of data between main
memory and an I/O module. The processor sends a request for the transfer of a block of
data to the DMA module and is interrupted only after the entire block has been transferred.
Interrupt-driven I/O
The basic interrupt mechanism works as follows.
The CPU hardware has a wire called the interrupt-request line that the CPU senses after
executing every instruction.
When the CPU detects that a controller has asserted a signal on the interrupt request line, the
CPU saves a small amount of state, such as the current value of the instruction pointer, and
jumps to the interrupt-handler routine at a fixed address in memory.
The interrupt handler determines the cause of the interrupt, performs the necessary
processing, and executes a return from interrupt instruction to return the CPU to the
execution state prior to the interrupt.
The above mechanism concludes that the device controller raises an interrupt by asserting a
signal on the interrupt request line, the CPU catches the interrupt and dispatches to the interrupt
handler, and the handler clears the interrupt by servicing the device.

(Diagram of Interrupt-driven I/O cycle)
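The raise/dispatch/clear cycle above can be mimicked in a toy model; the vector number (14) and the handler text are made up for illustration, and real hardware dispatches through a vector table rather than a Python dictionary:

```python
handlers = {}   # interrupt "vector table": vector number -> handler routine
log = []

def register(vector, fn):
    handlers[vector] = fn

def raise_interrupt(vector):
    saved = "instruction pointer"      # CPU saves a small amount of state
    handlers[vector]()                 # dispatch to the interrupt handler
    log.append("return from interrupt, restore " + saved)

# The device driver registers its handler at a fixed vector.
register(14, lambda: log.append("disk handler: service device, clear line"))
raise_interrupt(14)                    # controller asserts the request line
print(log)
```

The key property modeled here is the separation of roles: the controller only raises the interrupt, the CPU only dispatches, and the handler alone knows how to service and clear the device.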


In a modern operating system, we need more sophisticated interrupt-handling features as
follows:
➢ First, we need the ability to defer interrupt handling during critical processing.
➢ Second, we need an efficient way to dispatch to the proper interrupt handler for
a device.
➢ Third, we need multilevel interrupts, so that the operating system can distinguish
between high- and low-priority interrupts, and can respond with the appropriate
degree of urgency.
In modern computer hardware, these three features are provided by the CPU and by the
interrupt-controller hardware.
Direct Memory Access
The DMA unit is capable of mimicking the processor and of taking over control of the
system bus just like a processor. It needs to do this to transfer data to and from memory over
the system bus. The DMA technique works as follows:
• When the processor wishes to read or write a block of data, it issues a command to the
DMA module by sending to the DMA module the following information:
➢ Whether a read or write is requested, using the read or write control line between
the processor and the DMA module.
➢ The address of the I/O device involved, communicated on the data lines.
➢ The starting location in memory to read from or write to, communicated on the
data lines and stored by the DMA module in its address register.
➢ The number of words to be read or written, again communicated via the data lines
and stored in the data count register.
• Then the processor continues with other work. It has delegated this I/O operation to
the DMA module.
• The DMA module transfers the entire block of data, one word at a time, directly to or
from memory, without going through the processor.
• When the transfer is complete, the DMA module sends an interrupt signal to the
processor. Thus, the processor is involved only at the beginning and end of the transfer.
(Steps in a DMA transfer)
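The steps above can be sketched as a toy model: the CPU issues a single command, the DMA module moves the whole block word by word, and the CPU sees exactly one interrupt at the end. Names and sizes are illustrative only:

```python
class DMA:
    def __init__(self, memory):
        self.memory = memory

    def transfer(self, device_data, start_addr, count, on_done):
        """Move 'count' words from the device into memory at start_addr."""
        for i in range(count):                 # no CPU involvement per word
            self.memory[start_addr + i] = device_data[i]
        on_done()                              # single completion interrupt

memory = [0] * 8
events = []
DMA(memory).transfer([7, 8, 9], start_addr=2, count=3,
                     on_done=lambda: events.append("interrupt: done"))
print(memory)   # [0, 0, 7, 8, 9, 0, 0, 0]
```

Contrast this with programmed I/O, where the processor itself would execute the per-word copy loop and busy-wait on the device between words.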

APPLICATION I/O INTERFACE

I/O system calls encapsulate device behaviors in generic classes. Each general kind is
accessed through a standardized set of functions, called an interface. The differences are
encapsulated in kernel modules called device drivers that internally are custom tailored to each
device, but that export one of the standard interfaces. Devices vary in many dimensions:
➢ Character-stream or block
➢ Sequential or random-access
➢ Sharable or dedicated
➢ Speed of operation
➢ Read-write, read-only, or write-only
(Diagram of a kernel I/O structure)
The purpose of the device-driver layer is to hide the differences among device controllers
from the I/O subsystem of the kernel, much as the I/O system calls encapsulate the behavior
of devices in a few generic classes that hide hardware differences from applications. Making
the I/O subsystem independent of the hardware simplifies the job of the operating-system
developer. It also benefits the hardware manufacturers. They either design new devices to be
compatible with an existing host controller interface (such as SCSI-2), or they write device
drivers to interface the new hardware to popular operating systems. Thus, new peripherals can
be attached to a computer without waiting for the operating-system vendor to develop support
code.

KERNEL I/O SUBSYSTEM


Kernels provide many services related to I/O. Several services (i.e. scheduling,
buffering, caching, spooling, device reservation and error handling) are provided by the
kernel's I/O subsystem and build on the hardware and device driver infrastructure.
1. I/O Scheduling: It is used to schedule a set of I/O requests that means to determine a
good order in which to execute them. Scheduling can improve overall system performance and
can reduce the average waiting time for I/O to complete.
2. Buffering: A buffer is a memory area that stores data while they are transferred between
two devices or between a device and an application. Buffering is done for three reasons.
➢ One reason is to cope with a speed mismatch between the producer and consumer of a data
stream.
➢ A second use of buffering is to adapt between devices that have different data-transfer sizes.
➢ A third use of buffering is to support copy semantics for application I/O.
3. Caching: A cache is a region of fast memory that holds copies of data. Access to the cached
copy is more efficient than access to the original. Cache is used to improve the I/O efficiency
for files that are shared by applications or that are being written and reread rapidly. The
difference between a buffer and a cache is that a buffer may hold the only existing copy of a
data item, whereas a cache just holds a copy on faster storage of an item that resides
elsewhere. Caching and buffering are distinct functions, but sometimes a region of memory
can be used for both purposes.
4. Spooling: A spool is a buffer that holds output for a device, such as a printer, that cannot
accept interleaved data streams. Although a printer can serve only one job at a time, several
applications may wish to print their output concurrently, without having their output mixed
together. The operating system solves this problem by intercepting all output to the printer.
Each application's output is spooled to a separate disk file. When an application finishes
printing, the spooling system queues the corresponding spool file for output to the printer.
The spooling system copies the queued spool files to the printer one at a time. The
operating system provides a control interface that enables users and system administrators to
display the queue, to remove unwanted jobs before those jobs print, to suspend printing while
the printer is serviced, and so on.
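The spooling scheme above can be sketched as a small simulation (a "spool file" here is just a list of lines): each job is spooled separately, and the printer drains completed jobs one at a time, in order.

```python
from collections import deque

print_queue = deque()           # completed spool files awaiting the printer

def spool_job(name, pages):
    # Each application's output goes to its own spool file, so jobs
    # never interleave even if applications print concurrently.
    spool_file = [f"{name}: page {p}" for p in range(1, pages + 1)]
    print_queue.append((name, spool_file))

def run_printer():
    served = []
    while print_queue:          # the printer serves one whole job at a time
        name, _spool_file = print_queue.popleft()
        served.append(name)
    return served

spool_job("report", 2)
spool_job("invoice", 1)
print("printed in order:", run_printer())
```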
5. Device Reservation:
The kernel provides support for exclusive device access by enabling a process to allocate an
idle device and to deallocate it when it is no longer needed. Many operating systems provide
functions that enable processes to coordinate exclusive access among themselves. The kernel
must also watch for deadlock when granting such exclusive access.
Transforming I/O request to Hardware Operations
Consider reading a file from disk for a process:
➢ Determine device holding file
➢ Translate name to device representation
➢ Physically read data from disk into buffer
➢ Make data available to requesting process
➢ Return control to process
The following steps describe the life cycle of a blocking read request:
1. A process issues a blocking read() system call to a file descriptor of a file that has been
opened previously.
2. The system-call code in the kernel checks the parameters for correctness. In the case of input,
if the data are already available in the buffer cache, the data are returned to the process and
the I/O request is completed.
3. Otherwise, a physical I/O needs to be performed, so the process is removed from the run
queue and is placed on the wait queue for the device, and the I/O request is scheduled.
Eventually, the I/O subsystem sends the request to the device driver. Depending on the
operating system, the request is sent via a subroutine call or via an in-kernel message.
4. The device driver allocates kernel buffer space to receive the data, and schedules the I/O.
Eventually, the driver sends commands to the device controller by writing into the device
control registers.
5. The device controller operates the device hardware to perform the data transfer.
6. The driver may poll for status and data, or it may have set up a DMA transfer into kernel
memory. We assume that the transfer is managed by a DMA controller, which generates an
interrupt when the transfer completes.
7. The correct interrupt handler receives the interrupt via the interrupt-vector table, stores any
necessary data, signals the device driver, and returns from the interrupt.
8. The device driver receives the signal, determines which I/O request completed, determines
the request's status, and signals the kernel I/O subsystem that the request has been
completed.
9. The kernel transfers data or returns codes to the address space of the requesting process,
and moves the process from the wait queue back to the ready queue.
10. Moving the process to the ready queue unblocks the process. When the scheduler assigns
the process to the CPU, the process resumes execution at the completion of the system call.
(The life cycle of an I/O request)
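The steps above can be condensed into a small simulation (Python stands in for kernel code; the buffer cache is a dictionary and the "physical read" is a stub):

```python
buffer_cache = {"/etc/motd": b"welcome\n"}   # data already cached (step 2)

def physical_read(path):
    # Stand-in for steps 3-8: schedule the request, program the device,
    # take the completion interrupt, and collect the data.
    return b"(data read from disk)"

def blocking_read(path):
    if path in buffer_cache:                 # step 2: satisfied from cache
        return buffer_cache[path], "cache"
    data = physical_read(path)               # steps 3-8: physical I/O
    buffer_cache[path] = data                # step 9: data kept for reuse
    return data, "device"

print(blocking_read("/etc/motd"))        # served from the buffer cache
print(blocking_read("/var/log/syslog"))  # requires physical I/O
```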

Interrupt: An interrupt is a hardware mechanism by which a device notifies the CPU that it
requires attention. An interrupt can take place at any time. When the CPU receives an
interrupt signal on the interrupt-request line, it stops the current process and responds to
the interrupt by passing control to an interrupt handler, which services the device.

Polling: Polling is not a hardware mechanism but a protocol in which the CPU steadily checks
whether a device needs attention. Instead of the device telling the CPU that it needs
servicing, the CPU keeps asking each I/O device whether it requires CPU processing. The CPU
continuously checks every device attached to it to detect whether any device needs attention.
Each device has a command-ready bit that indicates its status: if the bit is set to 1, the
device has a command to be executed; if the bit is 0, it has no pending command.
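A toy simulation of the busy-wait described above (the device records and bit names are illustrative): on each pass the CPU inspects every device's command-ready bit and services whichever devices have it set.

```python
devices = [
    {"name": "keyboard", "command_ready": 0},
    {"name": "disk",     "command_ready": 1},
    {"name": "printer",  "command_ready": 1},
]

def poll_once(devs):
    serviced = []
    for dev in devs:                    # CPU checks each attached device
        if dev["command_ready"] == 1:   # bit set: a command is pending
            serviced.append(dev["name"])
            dev["command_ready"] = 0    # clear the bit after servicing
    return serviced

print(poll_once(devices))   # the pass that finds work to do
print(poll_once(devices))   # a wasted pass: nothing was pending
```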
The differences between interrupt and polling:

1. In an interrupt, the device notifies the CPU that it requires attention; in polling, the
CPU steadily checks whether the device needs attention.
2. An interrupt is a hardware mechanism, not a protocol; polling is a protocol, not a hardware
mechanism.
3. In an interrupt, the device is serviced by an interrupt handler; in polling, the device is
serviced by the CPU.
4. An interrupt can take place at any time; in polling, the CPU checks the device at regular
intervals.
5. In an interrupt, the interrupt-request line indicates that the device requires servicing;
in polling, the command-ready bit gives that indication.
6. With interrupts, the processor is disturbed only when some device interrupts it; with
polling, the processor wastes countless cycles by repeatedly checking the command-ready bit
of every device.

Direct Memory Access (DMA) :

A DMA controller is a hardware device that allows I/O devices to access memory directly,
with minimal participation by the processor. The DMA controller needs the usual interface
circuits to communicate with the CPU and the I/O devices.
Fig-1 below shows the block diagram of the DMA controller. The unit communicates with the
CPU through the data bus and control lines. The CPU selects a register within the DMA
controller through the address bus by enabling the DS (DMA select) and RS (register select)
inputs. The RD (read) and WR (write) inputs are bidirectional. When the BG (bus grant) input
is 0, the CPU can communicate with the DMA registers. When BG is 1, the CPU has relinquished
the buses and the DMA controller can communicate directly with memory.
DMA Controller Registers:
The DMA controller has three registers as follows.
• Address register – It contains the address to specify the desired location in
memory.
• Word count register – It contains the number of words to be transferred.
• Control register – It specifies the transfer mode.
Note –

All registers in the DMA appear to the CPU as I/O interface registers. Therefore, the
CPU can both read and write into the DMA registers under program control via the data
bus.

(Block diagram of the DMA controller)

Explanation:

The CPU initializes the DMA by sending the following information through the data bus:
• The starting address of the memory block where the data are available (for a read)
or where data are to be stored (for a write).
• The word count, which is the number of words in the memory block to be read or
written.
• A control word to define the mode of transfer, such as read or write.
• A control signal to begin the DMA transfer.
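The register set and initialization sequence can be sketched as a simulation (field names follow the text; the transfer loop merely mimics the controller decrementing the word count once per word moved):

```python
class DMAController:
    def __init__(self):
        self.address = 0       # address register: current memory location
        self.word_count = 0    # word count register: words left to move
        self.control = 0       # control register: transfer mode

    def init_transfer(self, start_address, count, mode):
        # The CPU writes these values over the data bus (program control).
        self.address, self.word_count, self.control = start_address, count, mode

    def run(self):
        while self.word_count > 0:   # one word per simulated bus cycle
            self.address += 1
            self.word_count -= 1
        return "interrupt: DMA transfer complete"

dma = DMAController()
dma.init_transfer(start_address=0x1000, count=4, mode=0)  # mode 0 = read (assumed encoding)
print(dma.run(), "| final address:", hex(dma.address))
```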
Protection

A mechanism that controls the access of programs, processes, or users to the resources defined
by a computer system is referred to as protection. Protection is particularly useful in multi-
programming operating systems, as it allows multiple users to safely share a common logical
namespace, such as a directory or files.

Goals of Protection in Operating System


Various goals of protection in the operating system are as follows:

1. The policies define how processes access the computer system's resources, such as the
CPU, memory, software, and even the operating system. Defining them is the responsibility
of both the operating-system designer and the application programmer. These policies may
be modified at any time.
2. Protection is a technique for safeguarding data and processes from harmful or intentional
infiltration. It includes protection policies established by the system itself, set by
management, or imposed individually by programmers to ensure that their programs are
protected to the greatest extent possible.
3. It also provides a multiprogramming OS with the security that its users expect when
sharing common space such as files or directories.

Security

Security refers to providing a protection system to computer system resources such as CPU,
memory, disk, software programs and most importantly data/information stored in the
computer system. If a computer program is run by an unauthorized user, then he/she may
cause severe damage to computer or data stored in it. So a computer system must be protected
against unauthorized access, malicious access to system memory, viruses, worms etc.

The goal of Security System


There are several goals of system security. Some of them are as follows:

1. Integrity

Unauthorized users must not be allowed to access the system's objects, and users with insufficient
rights should not modify the system's critical files and resources.
2. Secrecy

The system's objects must only be available to a small number of authorized users. The system
files should not be accessible to everyone.

3. Availability

All system resources must be accessible to all authorized users, i.e., no single user/process
should be able to consume all system resources. If such a situation arises, denial of service
may occur. In this case, malware may monopolize system resources and prevent legitimate
processes from accessing them.

System Authentication
One-time passwords, encrypted passwords, and cryptography are used to create a strong
password and a formidable authentication source.

1. One-time Password

A one-time password is valid for only a single login, so a fresh password is required at
every login by the user. It combines two values to grant the user access: the system creates
a random number, and the user supplies a matching one. An algorithm generates the value for
both the system and the user, and the outputs are matched using a common function.
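A simplified HOTP-style sketch of this idea (the shared secret and counter are invented; real systems use standardized schemes such as RFC 4226): both sides derive the same short code from a secret and a per-login counter, so a captured code is useless for the next login.

```python
import hmac, hashlib

SECRET = b"shared-secret"      # assumed to be provisioned out of band

def one_time_password(counter):
    digest = hmac.new(SECRET, str(counter).encode(), hashlib.sha256).hexdigest()
    return digest[:6]          # short, human-typable code

counter = 41                               # advances after every login
server_otp = one_time_password(counter)
user_otp = one_time_password(counter)      # user's token computes the same value
print("login ok:", hmac.compare_digest(server_otp, user_otp))
print("old code reused after counter advances:",
      one_time_password(counter + 1) == server_otp)  # stale codes are rejected
```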

2. Encrypted Passwords

It is also a very effective technique for authenticating access. Passwords are encrypted
before being transferred and checked over the network, so the data can pass without being
usable if intercepted.
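One common realization (a sketch with illustrative parameters) stores only a salted hash of each password; login re-derives the hash and compares, so the plaintext never needs to be stored:

```python
import hashlib, hmac, os

def store_password(password):
    salt = os.urandom(16)                         # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest                           # only these are stored

def check_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest) # constant-time comparison

salt, digest = store_password("s3cret")
print("correct password accepted:", check_password("s3cret", salt, digest))
print("wrong password rejected:", not check_password("guess", salt, digest))
```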

3. Cryptography

It's another way to ensure that unauthorized users can't access data transferred over a
network. It aids in the secure transmission of data by introducing the concept of a key to
protect the data. The key is crucial here: when a user sends data, he encodes it with a
machine that has the key, and the receiver must decode the data with the same key. As a
result, even if the data is stolen in transit, there is a good chance the unauthorized user
will not be able to read it.
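The role of the key can be demonstrated with a toy cipher (XOR with a repeating key, which is NOT secure and is used only to illustrate the concept): the same key that encodes the data is needed to decode it.

```python
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so one function both encodes and decodes.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"k3y"                                  # shared by sender and receiver
ciphertext = xor_crypt(b"transfer $100", key)
print("on the wire:", ciphertext)             # unintelligible if intercepted
print("with the key:", xor_crypt(ciphertext, key))
print("with a wrong key:", xor_crypt(ciphertext, b"bad"))
```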

Program threats
The operating system's processes and kernel carry out the specified task as directed. Program
Threats occur when a user program causes these processes to do malicious operations. The
common example of a program threat is that when a program is installed on a computer, it could
store and transfer user credentials to a hacker. There are various program threats. Some of them
are as follows:

1.Virus

A virus may replicate itself on the system. Viruses are extremely dangerous and can
modify or delete user files as well as crash computers. A virus is a small piece of code
embedded in a system program. As the user interacts with the program, the virus becomes
embedded in other files and programs, potentially rendering the system inoperable.

2. Trojan Horse

This type of application captures user login credentials. It stores them to transfer them to a
malicious user who can then log in to the computer and access system resources.

3. Logic Bomb

A logic bomb is a situation in which software only misbehaves when particular criteria are met;
otherwise, it functions normally.

4. Trap Door

A trap door is when a program that is supposed to work as expected has a security weakness in
its code that allows it to do illegal actions without the user's knowledge.

System Threats
System threats are described as the misuse of system services and network connections to cause
user problems. These threats may be used to trigger the program threats over an entire network,
known as program attacks. System threats create an environment in which OS resources and user
files may be misused. There are various system threats. Some of them are as follows:

1. Port Scanning

It is a method by which a cracker determines a system's vulnerabilities before an attack. It
is a fully automated process that involves connecting to specific ports via TCP/IP. To hide
the attacker's identity, port-scanning attacks are launched from zombie systems: previously
independent systems that now serve the attacker while appearing to work normally for their
owners.
2. Worm

The worm is a process that can choke a system's performance by exhausting all system resources.
A Worm process makes several clones, each consuming system resources and preventing all other
processes from getting essential resources. Worm processes can even bring a network to a halt.

3. Denial of Service

Denial-of-service attacks usually prevent users from legitimately using the system. For
example, if a denial-of-service attack is executed against the browser's content settings, a
user may be unable to access the internet.
