CSC 205 - 1 Introduction 2023-2024

This document discusses operating systems and provides an overview of computer system components and organization. It describes the key components of a computer system including the processor, memory, I/O devices, and buses. The processor controls the operation of the computer by fetching and executing instructions. Modern processors use techniques like pipelining and superscalar execution to improve performance. The operating system manages these hardware resources and provides services to users and applications.


Operating Systems

Dr Wilson Sakpere
Department of Computer Science
Lead City University, Ibadan, Nigeria
Phone: 09137792962 | Email: sakpere.wilson@lcu.edu.ng
Contents
• Class Introduction
Course Objectives
Assessment
Suggested Texts
• Computer System Overview
Top-Level View of Computer Components
Computer System Components
Computer System Organisation
 Motherboard
 Processor
 Multithreading
 Multicore Processors
 Memory
 Disks
 I/O Devices
 Buses
Booting the Computer
Computer System Operation
2
Course Objectives
• At the end of this course, the student should be able to:
• describe and explain the concepts, structure, design and role
of operating systems.
• describe the impact of operating system design on application
system design and performance.
• demonstrate competency in recognising and using operating
system features.
• explain the performance trade-offs inherent in OS
implementation.

3
Assessment
• Class Attendance 20 marks
• Assignments 10 marks
• Test 10 marks
• Final Examination 60 marks

4
Suggested Texts
• Operating Systems: Design and Implementation, 3rd Edition,
Andrew S. Tanenbaum and Albert S. Woodhull, 2006.
• Operating System Concepts, 10th Edition, Abraham Silberschatz,
Peter Baer Galvin and Greg Gagne, 2018.
• Operating Systems, 3rd Edition, H. M. Deitel, P. J. Deitel and D.
R. Choffnes, 2004.
• Modern Operating Systems, 4th Edition, Andrew S. Tanenbaum
and Herbert Bos, 2015.
• Understanding Operating Systems, 7th Edition, Ann McIver
McHoes and Ida M. Flynn, 2014.

5
Computer System Overview
• An operating system (OS) exploits the hardware resources of
one or more processors to provide a set of services to system
users.
• The OS also manages secondary memory and I/O
(input/output) devices on behalf of its users.
• So, it is important to understand the underlying computer
system hardware before exploring operating systems.
• At a top level, a computer consists of processor, memory, and
I/O components, with one or more modules of each type.
• These components are interconnected to achieve the main
function of the computer, which is to execute programs.
6
Top-Level View of Computer Components

7
Computer System Components
• One of the processor’s functions is to exchange data with memory.
• Thus, it makes use of two internal (to the processor) registers: a memory
address register (MAR), which specifies the address in memory for the
next read or write; and a memory buffer register (MBR), which contains
the data to be written into memory, or receives the data read from
memory.
• Similarly, an I/O address register (I/OAR) specifies a particular I/O
device. An I/O buffer register (I/OBR) is used for the exchange of data
between an I/O module and the processor.
• A memory module consists of a set of locations, defined by sequentially
numbered addresses. Each location contains a bit pattern that can be
interpreted as either an instruction or data.
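As a rough sketch of the register exchange just described, the following models a read and a write going through MAR and MBR registers. The class and method names are invented for illustration, not drawn from any real architecture:

```python
class Memory:
    def __init__(self, size):
        self.cells = [0] * size  # each location holds a bit pattern (here, an int)

class CPU:
    def __init__(self, memory):
        self.memory = memory
        self.mar = 0  # memory address register: address for the next read/write
        self.mbr = 0  # memory buffer register: data exchanged with memory

    def read(self, address):
        self.mar = address
        self.mbr = self.memory.cells[self.mar]  # memory delivers data into the MBR
        return self.mbr

    def write(self, address, value):
        self.mar = address
        self.mbr = value
        self.memory.cells[self.mar] = self.mbr  # MBR contents stored at the MAR address

mem = Memory(16)
cpu = CPU(mem)
cpu.write(5, 42)
print(cpu.read(5))  # 42
```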
8
Computer System Components…
• An I/O module transfers data from external devices to processor and
memory, and vice versa. It contains internal buffers for temporarily
storing data until they can be sent on.

9
Computer System Organisation
• There are four main structural elements of the computer system:
• Processor: Controls the operation of the computer and performs its data
processing functions. When there is only one processor, it is often referred
to as the central processing unit (CPU).
• Main memory: Stores data and programs. This memory is typically volatile;
that is, when the computer is shut down, the contents of the memory are
lost. Main memory is also referred to as real memory or primary memory.
• I/O modules: Move data between the computer and its external
environment. The external environment consists of a variety of devices,
including secondary memory devices (e.g., disks), communications
equipment, and terminals.
• System bus: Provides communication among processors, main memory,
and I/O modules.
10
Computer System Organisation…
• One or more CPUs and device controllers connect through a common bus that
provides access to shared memory.
• The CPUs and devices execute concurrently, competing for memory cycles.

11
Motherboard

12
Processor
• The ‘‘brain’’ of the computer is the Central Processing Unit (CPU); when a
computer has more than one, the more general term processor is often used. The
CPU fetches instructions from memory and executes them. The basic cycle of every
CPU is to fetch the first instruction from memory, decode it to determine its
type and operands, execute it, and then fetch, decode, and execute subsequent
instructions. The cycle is repeated until the program finishes. In this way,
programs are carried out.
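The fetch-decode-execute cycle can be sketched with a toy machine. The opcodes below (LOAD, ADD, STORE, HALT) are invented for illustration and do not correspond to any real instruction set:

```python
def run(program, memory):
    pc = 0   # program counter: index of the next instruction
    acc = 0  # a single accumulator register
    while pc < len(program):
        op, operand = program[pc]  # fetch
        pc += 1                    # point the program counter at the successor
        if op == "LOAD":           # decode and execute
            acc = memory[operand]  # load a word from memory into the register
        elif op == "ADD":
            acc += memory[operand]
        elif op == "STORE":
            memory[operand] = acc  # store the register back into memory
        elif op == "HALT":
            break
    return acc

memory = [7, 5, 0]
program = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]
run(program, memory)
print(memory[2])  # 12
```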
• Each CPU has a specific set of instructions that it can execute. Thus, an x86
processor cannot execute ARM programs and an ARM processor cannot
execute x86 programs. Because accessing memory to get an instruction or data
word takes much longer than executing an instruction, all CPUs contain some
registers inside to hold key variables and temporary results. Thus, the
instruction set generally contains instructions to load a word from memory into
a register and store a word from a register into memory. Other instructions
combine two operands from registers, memory, or both into a result, such as
adding two words and storing the result in a register or in memory.

13
Processor…
• Also, most computers have several special registers that are visible to the
programmer. One of these is the program counter, which contains the memory
address of the next instruction to be fetched. After that instruction has been
fetched, the program counter is updated to point to its successor.
• Another register is the stack pointer, which points to the top of the current stack
in memory. The stack contains one frame for each procedure that has been
entered but not yet exited. A procedure’s stack frame holds those input
parameters, local variables, and temporary variables that are not kept in registers.
• Yet another register is the Program Status Word (PSW). This register contains the
condition code bits, which are set by comparison instructions, the CPU priority,
the mode (user or kernel), and various other control bits. User programs may
normally read the entire PSW but typically may write only some of its fields. The
PSW plays an important role in system calls and I/O.

14
Processor…
• When time multiplexing the CPU, the operating system will often stop the
running program to (re)start another one. Every time it stops a running
program, the operating system must save all the registers so they can be
restored when the program runs later.
• To improve performance, CPU designers have long abandoned the simple
model of fetching, decoding, and executing one instruction at a time. Many
modern CPUs have facilities for executing more than one instruction at the
same time. For example, a CPU might have separate fetch, decode, and
execute units, so that while it is executing instruction n, it could also be
decoding instruction n + 1 and fetching instruction n + 2. Such an organisation
is called a pipeline. In most pipeline designs, once an instruction has been
fetched into the pipeline, it must be executed, even if the preceding
instruction was a conditional branch that was taken.
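A quick back-of-the-envelope calculation shows why pipelining pays off: once a three-stage pipeline is full, it completes one instruction per cycle, so n instructions take roughly n + 2 cycles instead of 3n (assuming one cycle per stage and no stalls):

```python
def cycles_unpipelined(n, stages=3):
    # each instruction passes through all stages before the next one starts
    return n * stages

def cycles_pipelined(n, stages=3):
    # fill the pipeline once (stages - 1 cycles), then retire one per cycle
    return n + (stages - 1)

print(cycles_unpipelined(100))  # 300
print(cycles_pipelined(100))    # 102
```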

15
Processor…
• A superscalar CPU has multiple execution units, for example, one for
integer arithmetic, one for floating-point arithmetic, and one for
Boolean operations.

Figures: (a) a three-stage pipeline; (b) a superscalar CPU


16
Processor…
• Most CPUs, except very simple ones used in embedded systems, have
two modes, kernel mode and user mode. Usually, a bit in the PSW
controls the mode.
• When running in kernel mode, the CPU can execute every instruction in
its instruction set and use every feature of the hardware. On desktop
and server machines, the operating system normally runs in kernel
mode, giving it access to the complete hardware. On most embedded
systems, a small piece runs in kernel mode, with the rest of the
operating system running in user mode.
• User programs always run in user mode, which permits only a subset of
the instructions to be executed and a subset of the features to be
accessed. Generally, all instructions involving I/O and memory
protection are disallowed in user mode. Setting the PSW mode bit to
enter kernel mode is also forbidden, of course.
17
Processor…
• To obtain services from the operating system, a user program must make a
system call, which traps into the kernel and invokes the operating system. The
TRAP instruction switches from user mode to kernel mode and starts the
operating system. When the work has been completed, control is returned to
the user program at the instruction following the system call. The system call
mechanism is a special kind of procedure call that has the additional property
of switching from user mode to kernel mode.
• It is worth noting that computers have traps other than the instruction for
executing a system call. Most of the other traps are caused by the hardware
to warn of an exceptional situation such as an attempt to divide by 0 or a
floating-point underflow. In all cases the operating system gets control and
must decide what to do. Sometimes the program must be terminated with an
error. Other times the error can be ignored (an underflowed number can be
set to 0). Finally, when the program has announced in advance that it wants
to handle certain kinds of conditions, control can be passed back to the
program to let it deal with the problem.
18
Multithreading
• Multithreading (or hyperthreading) allows the CPU to hold the state of two
different threads (each with its own registers and program counter) while
sharing the functional units, and to switch back and forth on a nanosecond
time scale. For
example, if one of the processes needs to read a word from memory
(which takes many clock cycles), a multithreaded CPU can just switch to
another thread. Multithreading does not offer true parallelism. Only one
process at a time is running, but thread-switching time is reduced to the
order of a nanosecond.
• Multithreading has implications for the operating system because each
thread appears to the operating system as a separate CPU. Consider a
system with two actual CPUs, each with two threads. The operating system
will see this as four CPUs. If there is only enough work to keep two CPUs
busy at a certain point in time, it may inadvertently schedule two threads
on the same CPU, with the other CPU completely idle. This choice is far
less efficient than using one thread on each CPU.
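The scheduling pitfall above can be made concrete. Assuming a hypothetical mapping of four logical CPUs onto two physical cores, placing both runnable threads on the same core leaves the other core idle:

```python
# Hypothetical topology: logical CPUs 0,1 are the two hardware threads of
# core 0; logical CPUs 2,3 belong to core 1.
LOGICAL_TO_CORE = {0: 0, 1: 0, 2: 1, 3: 1}

def cores_used(assignment):
    # how many distinct physical cores a set of logical-CPU assignments touches
    return len({LOGICAL_TO_CORE[cpu] for cpu in assignment})

print(cores_used([0, 1]))  # 1  (bad: both threads share one core, core 1 idle)
print(cores_used([0, 2]))  # 2  (good: one thread per physical core)
```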
19
Multicore Processors
• Many CPU chips now have four, eight or more complete processors or cores
on them. The multicore chips in the Figure below effectively carry four
minichips on them, each with its own independent CPU.

Figures: (a) a quad-core chip with a shared L2 cache; (b) a quad-core chip with separate L2 caches
20
Multicore Processors…
• Some processors, like Intel Xeon Phi and the Tilera TilePro, already
have more than 60 cores on a single chip.
• Making use of such a multicore chip will definitely require a
multiprocessor operating system. In terms of sheer core counts, however,
nothing beats a modern Graphics Processing Unit (GPU).
• A GPU is a processor with thousands of tiny cores. They are very good
for many small computations done in parallel, like rendering polygons
in graphics applications.
• They are not so good at serial tasks. They are also hard to program.
While GPUs can be useful for operating systems (e.g., encryption or
processing of network traffic), it is not likely that much of the
operating system itself will run on the GPUs.
21
Memory
• The second major component in any computer is the memory. The memory
should be extremely fast (faster than executing an instruction so that the CPU
is not held up by the memory), abundantly large, and cheap. No current
technology satisfies all of these goals, so a different approach is taken. The
memory system is constructed as a hierarchy of layers, as shown below.

A typical memory hierarchy. The numbers are very rough approximations


22
Memory…
• The top layers have higher speed, smaller capacity, and greater cost per
bit than the lower ones, often by factors of a billion or more.
• The top layer consists of the registers internal to the CPU. They are
made of the same material as the CPU and are thus just as fast as the
CPU. Consequently, there is no delay in accessing them. The storage
capacity available in them is typically 32 × 32 bits on a 32-bit CPU and 64
× 64 bits on a 64-bit CPU. Less than 1 KB in both cases. Programs must
manage the registers (i.e., decide what to keep in them) themselves, in
software.
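The capacity claim above is simple arithmetic: 32 registers of 32 bits each is 128 bytes, and 64 registers of 64 bits is 512 bytes, both well under 1 KB:

```python
def register_file_bytes(count, width_bits):
    # total register-file capacity: number of registers times bytes per register
    return count * width_bits // 8

print(register_file_bytes(32, 32))  # 128
print(register_file_bytes(64, 64))  # 512
```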

23
Cache Memory
• The cache memory is mostly controlled by the hardware. Main memory
is divided up into cache lines, typically 64 bytes, with addresses 0 to 63
in cache line 0, 64 to 127 in cache line 1, 128 to 191 in cache line 2, and
so on. The most heavily used cache lines are kept in a high-speed cache
located inside or very close to the CPU.
• When the program needs to read a memory word, the cache hardware
checks to see if the line needed is in the cache. If it is, called a cache hit,
the request is satisfied from the cache and no memory request is sent
over the bus to the main memory.
• Cache hits normally take about two clock cycles. Cache misses have to
go to memory, with a substantial time penalty. Cache memory is limited
in size due to its high cost. Some machines have two or even three levels
of cache, each one slower and bigger than the one before it.
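The address-to-line arithmetic above, plus hit/miss counting, can be sketched as follows. The TinyCache class is a deliberately simplified stand-in with no eviction or associativity:

```python
LINE_SIZE = 64

def cache_line(address):
    # addresses 0-63 map to line 0, 64-127 to line 1, and so on
    return address // LINE_SIZE

class TinyCache:
    """Simplified sketch: records which lines are cached, counts hits/misses."""
    def __init__(self):
        self.lines = set()
        self.hits = 0
        self.misses = 0

    def read(self, address):
        line = cache_line(address)
        if line in self.lines:
            self.hits += 1    # cache hit: no bus request goes to main memory
        else:
            self.misses += 1  # cache miss: fetch the whole line from memory
            self.lines.add(line)

cache = TinyCache()
for addr in (0, 8, 63, 64, 100):
    cache.read(addr)
print(cache.hits, cache.misses)  # 3 2
```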

24
Cache Memory…
• Caching plays a major role in many areas of computer science, not just
caching lines of RAM. Whenever a resource can be divided into pieces,
some of which are used much more heavily than others, caching is often
used to improve performance.
• Operating systems use it all the time. For example, most operating
systems keep (pieces of) heavily used files in main memory to avoid
having to fetch them from the disk repeatedly. Similarly, the results of
converting long path names into the disk address where the file is
located can be cached to avoid repeated lookups.
• Finally, when the address of a Web page (URL) is converted to a network
address (IP address), the result can be cached for future use. Many
other uses exist.

25
Cache Memory…
• Caches are such a good idea that modern CPUs have two of them. The first
level or L1 cache is always inside the CPU and usually feeds decoded
instructions into the CPU’s execution engine. Most chips have a second L1
cache for very heavily used data words. The L1 caches are typically 16 KB
each. In addition, there is often a second cache, called the L2 cache, that
holds several megabytes of recently used memory words. The difference
between the L1 and L2 caches lies in the timing. Access to the L1 cache is
done without any delay, whereas access to the L2 cache involves a delay of
one or two clock cycles.
• On multicore chips, the designers must decide where to place the caches.
From Figure (a) in slide 20, a single L2 cache is shared by all the cores. This
approach is used in Intel multicore chips. In Figure (b), each core has its own
L2 cache. This approach is used by AMD. Each strategy has its pros and cons.
For example, the Intel shared L2 cache requires a more complicated cache
controller but the AMD way makes keeping the L2 caches consistent more
difficult.
26
Main Memory
• Main memory is the workhorse of the memory system. It is usually called
Random Access Memory (RAM), and core memory in some quarters. All CPU
requests that cannot be satisfied out of the cache go to main memory. In
addition to the main memory, many computers have a small amount of
nonvolatile random-access memory. Unlike RAM, nonvolatile memory does
not lose its contents when the power is switched off.
• Read Only Memory (ROM) is programmed at the factory and cannot be
changed afterward. It is fast and inexpensive. On some computers, the
bootstrap loader used to start the computer is contained in ROM. Also, some
I/O cards come with ROM for handling low-level device control.
• Electrically Erasable Programmable ROM (EEPROM) and flash memory are
also nonvolatile, but in contrast to ROM can be erased and rewritten.
However, writing them takes orders of magnitude more time than writing
RAM, so they are used in the same way ROM is, only with the additional
feature that it is now possible to correct bugs in programs they hold by
rewriting them in the field.
27
Main Memory…
• Flash memory is also commonly used as the storage medium in portable
electronic devices. It serves as film in digital cameras and as the disk in
portable music players, to name just two uses. Flash memory is intermediate
in speed between RAM and disk. If it is erased too many times, it wears out.
• Another kind of memory is Complementary Metal–Oxide–Semiconductor
(CMOS), which is volatile. Many computers use CMOS memory to hold the
current time and date. The CMOS memory and the clock circuit that
increments the time in it are powered by a small battery, so the time is
correctly updated, even when the computer is unplugged. The CMOS memory
can also hold the configuration parameters, such as which disk to boot from.
CMOS is used because it draws so little power that the original factory-
installed battery often lasts for several years. However, when it begins to fail,
the computer can appear to have Alzheimer’s disease, forgetting things that it
has known for years, like which hard disk to boot from.
28
Disks
• Disk storage is cheaper than RAM and larger as well. The only problem is that
the time to randomly access data on it is slower. The reason is that a disk is a
mechanical device.

Structure of a disk drive


29
Disks…
• A disk consists of one or more metal platters that rotate at 5400, 7200,
10,800 RPM or more. A mechanical arm pivots over the platters from
the corner. Information is written onto the disk in a series of concentric
circles. At any given arm position, each of the heads can read an annular
region called a track. Together, all the tracks for a given arm position
form a cylinder.
• Each track is divided into some number of sectors, typically 512 bytes
per sector. On modern disks, the outer cylinders contain more sectors
than the inner ones. Moving the arm from one cylinder to the next takes
about 1 msec. Moving it to a random cylinder typically takes 5 to 10
msec, depending on the drive. Once the arm is on the correct track, the
drive must wait for the needed sector to rotate under the head, an
additional delay of 5 msec to 10 msec, depending on the drive’s RPM.
Once the sector is under the head, reading or writing occurs at a rate of
50 MB/sec on low-end disks to 160 MB/sec on faster ones.
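Those figures let us estimate a random single-sector read. Taking assumed values of a 9 msec seek, a 7200 RPM drive (average rotational delay is half a rotation), and a 100 MB/sec transfer rate:

```python
def random_read_ms(seek_ms, rotational_ms, transfer_mb_per_s, sector_bytes=512):
    # transfer time for one sector, converted from seconds to milliseconds
    transfer_ms = sector_bytes / (transfer_mb_per_s * 1_000_000) * 1000
    return seek_ms + rotational_ms + transfer_ms

# average rotational delay at 7200 RPM: half of one 8.33 ms rotation
avg_rotational_7200 = 60_000 / 7200 / 2
print(round(avg_rotational_7200, 2))                       # 4.17
print(round(random_read_ms(9, avg_rotational_7200, 100), 2))  # 13.17
```

Note that the mechanical delays dominate: the actual 512-byte transfer takes only about 5 microseconds.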
30
Disks…
• Solid State Disks (SSDs) do not have moving parts nor contain platters in the
shape of disks, but store data in (flash) memory. The only way in which they
resemble disks is that they also store a lot of data that is not lost when the
power is off.
• Many computers support a scheme known as virtual memory. This scheme
makes it possible to run programs larger than physical memory by placing them
on the disk and using main memory as a kind of cache for the most heavily
executed parts. This scheme requires remapping memory addresses on the fly to
convert the address the program generated to the physical address in RAM
where the word is located. This mapping is done by a part of the CPU called the
Memory Management Unit (MMU).
• The presence of caching and the MMU can have a major impact on performance.
In a multiprogramming system, when switching from one program to another,
sometimes called a context switch, it may be necessary to flush all modified
blocks from the cache and change the mapping registers in the MMU. Both are
expensive operations and programmers try hard to avoid them.
31
I/O Devices
• I/O devices also interact heavily with the operating system. They generally
consist of two parts: a controller and the device itself. The controller is a chip
or a set of chips that physically controls the device. It accepts commands from
the operating system and carries them out. In many cases, the actual control
of the device is complicated and detailed, so it is the job of the controller to
present a simpler (but still very complex) interface to the operating system. To
do this work, controllers often contain small embedded computers that are
programmed to do their work.
• The other piece is the actual device itself. Devices have simple interfaces, both
because they cannot do much and to keep them standard. Standardisation is
needed so that any SATA disk controller can handle any SATA disk,
for example. SATA stands for Serial ATA and ATA stands for AT Attachment,
while AT stands for Advanced Technology. So, SATA stands for Serial Advanced
Technology Attachment.
32
I/O Devices…
• SATA is currently the standard type of disk on many computers. Since the
actual device interface is hidden behind the controller, all that the
operating system sees is the interface to the controller, which may be
quite different from the interface to the device.
• Because each type of controller is different, different software is needed
to control each one. The software that talks to a controller, giving it
commands and accepting responses, is called a device driver. Each
controller manufacturer must supply a driver for each operating system it
supports. Thus, a scanner may come with drivers for OS X, Windows 7, 8,
10 and Linux, for example.
• To be used, the driver must be put into the operating system so it can run
in kernel mode. Drivers can run outside the kernel, and operating systems
like Linux and Windows nowadays do offer some support for doing so.
Most of the drivers still run below the kernel boundary.
33
I/O Devices…
• Every controller has a small number of registers that are used to communicate
with it. For example, a minimal disk controller might have registers for
specifying the disk address, memory address, sector count, and direction
(read or write). To activate the controller, the driver gets a command from the
operating system, then translates it into the appropriate values to write into
the device registers. The collection of all the device registers forms the I/O
port space.
• On some computers, the device registers are mapped into the operating
system’s address space (the addresses it can use), so they can be read and
written like ordinary memory words. On such computers, no special I/O
instructions are required, and user programs can be kept away from the
hardware by not putting these memory addresses within their reach. On
other computers, the device registers are put in a special I/O port space, with
each register having a port address. On these machines, special IN and OUT
instructions are available in kernel mode to allow drivers to read and write the
registers. The former scheme eliminates the need for special I/O instructions
but uses up some of the address space. The latter uses no address space but
requires special instructions. Both systems are widely used.
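The contrast between the two schemes can be sketched as follows; the addresses, port numbers, and register names are all hypothetical:

```python
device_registers = {"status": 0, "data": 0}   # a hypothetical device

# Scheme 1: memory-mapped I/O -- registers appear at ordinary addresses
address_space = {0xFFFF0000: "status", 0xFFFF0004: "data"}

def mem_read(addr):
    # a read of a mapped address goes to the device, not to RAM
    name = address_space.get(addr)
    return device_registers[name] if name else 0

# Scheme 2: a separate I/O port space, reached only via special
# instructions (modeled here as functions available in kernel mode)
ports = {0x60: "data", 0x64: "status"}

def inb(port):
    # models an IN instruction reading one device register
    return device_registers[ports[port]]

device_registers["data"] = 0x41
print(mem_read(0xFFFF0004), inb(0x60))  # 65 65
```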
34
I/O Devices…
• Input and output can be done in three different ways. In the simplest method, a user
program issues a system call, which the kernel then translates into a procedure call
to the appropriate driver. The driver then starts the I/O and sits in a tight loop
continuously polling the device to see if it is done (usually there is some bit that
indicates that the device is still busy). When the I/O has completed, the driver puts
the data (if any) where they are needed and returns. The operating system then
returns control to the caller. This method is called busy waiting and has the
disadvantage of tying up the CPU polling the device until it is finished.
• The second method is for the driver to start the device and ask it to give an interrupt
when it is finished. At that point, the driver returns. The operating system then
blocks the caller if need be and looks for other work to do. When the controller
detects the end of the transfer, it generates an interrupt to signal completion.
• The third method for doing I/O makes use of special hardware: a Direct Memory
Access (DMA) chip that can control the flow of bits between memory and some
controller without constant CPU intervention. The CPU sets up the DMA chip, telling
it how many bytes to transfer, the device and memory addresses involved, and the
direction, and lets it go. When the DMA chip is done, it causes an interrupt.
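The first method, busy waiting, can be sketched with a simulated device whose status register exposes a busy bit. The FakeDevice class and its timing are invented for illustration:

```python
class FakeDevice:
    def __init__(self, cycles_needed):
        self.cycles_needed = cycles_needed
        self.busy = False
        self.data = None

    def start_read(self):
        self.busy = True
        self._left = self.cycles_needed

    def tick(self):                  # one unit of device time passing
        if self.busy:
            self._left -= 1
            if self._left == 0:
                self.data = "sector-contents"
                self.busy = False    # the status register now says "done"

def busy_wait_read(device):
    device.start_read()
    polls = 0
    while device.busy:  # the CPU is tied up polling until the device finishes
        device.tick()
        polls += 1
    return device.data, polls

data, polls = busy_wait_read(FakeDevice(5))
print(data, polls)  # sector-contents 5
```

In the interrupt-driven and DMA methods, those wasted polling cycles go to other work instead.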
35
Buses
• As processors and memories got faster, the ability of a single bus to handle all the
traffic of a minicomputer was strained to the breaking point. Something had to be
done. As a result, additional buses were added, both for faster I/O devices and for
CPU-to-memory traffic. This evolution led to the structure of the large x86 system shown below.

The structure of a large x86 system


36
Buses…
• The x86 system has many buses (e.g., cache, memory, PCIe, PCI, USB, SATA
and DMI), each with a different transfer rate and function. The operating
system must be aware of all of them for configuration and management. The
main bus is the Peripheral Component Interconnect Express (PCIe) bus.
• The PCIe bus can transfer tens of gigabits per second. It is much faster than its
predecessors. It is also very different in nature. Before PCIe, most buses were
parallel and shared. A shared bus architecture means that multiple devices
use the same wires to transfer data. Thus, when multiple devices have data to
send, you need an arbiter to determine who can use the bus. In contrast, PCIe
makes use of dedicated, point-to-point connections.
• A parallel bus architecture as used in traditional PCI means that you send each
word of data over multiple wires. In contrast to this, PCIe uses a serial bus
architecture and sends all bits in a message through a single connection,
known as a lane. This is much simpler, because you do not have to ensure that
all data arrive at the destination at the same time. Parallelism is still used,
because you can have multiple lanes in parallel. As the speed of peripheral
devices like network cards and graphics adapters increases rapidly, the PCIe
standard is upgraded every 3–5 years.
37
Buses…
• For the older PCI standard, legacy devices are hooked up to a separate hub processor.
In the future, it is possible that all PCI devices will attach to yet another hub that in
turn connects them to the main hub, creating a tree of buses. In the configuration of
slide 36, the CPU talks to memory over a fast DDR3 bus, to an external graphics
device over PCIe and to all other devices via a hub over a Direct Media Interface
(DMI) bus. The hub in turn connects all the other devices, using the Universal Serial
Bus (USB) to talk to USB devices, the SATA bus to interact with hard disks and DVD
drives, and PCIe to transfer Ethernet frames. Moreover, each of the cores has a
dedicated cache and a much larger cache that is shared between them. Each of these
caches introduces another bus.
• The USB was invented to attach all the slow I/O devices, such as the keyboard and
mouse, to the computer. USB uses a small connector with four to eleven wires
(depending on the version), some of which supply electrical power to the USB devices
or connect to ground. USB is a centralised bus in which a root device polls all the I/O
devices every 1 msec to see if they have any traffic. USB 1.0 could handle an
aggregate load of 12 Mbps, USB 2.0 increased the speed to 480 Mbps, and USB 3.0
tops at no less than 5 Gbps. Any USB device can be connected to a computer, and it
will function immediately without requiring a reboot.
38
Buses…
• The Small Computer System Interface (SCSI) bus is a high-performance bus intended for fast
disks, scanners, and other devices needing considerable bandwidth. Nowadays, we find
them mostly in servers and workstations. They can run at up to 640 MB/sec. The operating
system must know what peripheral devices are connected to the computer and configure
them. This led Intel and Microsoft to design a PC system called plug and play. Before plug
and play, each I/O card had a fixed interrupt request level and fixed addresses for its I/O
registers. For example, the keyboard was interrupt 1 and used I/O addresses 0x60 to 0x64,
the floppy disk controller was interrupt 6 and used I/O addresses 0x3F0 to 0x3F7, and the
printer was interrupt 7 and used I/O addresses 0x378 to 0x37A, and so on.
• The problem is when a sound card and a modem card both happened to use, say, interrupt
4. They would conflict and would not work together. The solution was to include Dual in-line
Package (DIP) switches or jumpers on every I/O card and instruct the user to set them to
select an interrupt level and I/O device addresses that did not conflict with any other in the
user’s system. However, this is tedious. What plug and play does is have the system
automatically collect information about the I/O devices, centrally assign interrupt levels and
I/O addresses, and then tell each card what its numbers are.

39
Booting the Computer
• Every PC contains a motherboard. On the motherboard is a program
called the system BIOS (Basic Input Output System). The BIOS contains
low-level I/O software, including procedures to read the keyboard, write
to the screen, and do disk I/O, among other things. Nowadays, it is held
in a flash RAM, which is nonvolatile, but can be updated by the
operating system when bugs are found in the BIOS.
• When the computer is booted, the BIOS is started. It first checks to see
how much RAM is installed and whether the keyboard and other basic
devices are installed and responding correctly. It starts out by scanning
the PCIe and PCI buses to detect all the devices attached to them. If the
devices present are different from when the system was last booted, the
new devices are configured.

40
Booting the Computer…
• The BIOS then determines the boot device by trying a list of devices stored in the
CMOS memory. The user can change this list by entering a BIOS configuration program
just after booting. Typically, an attempt is made to boot from a CD-ROM (or
sometimes USB) drive, if one is present. If that fails, the system boots from the hard
disk. The first sector from the boot device is read into memory and executed. This
sector contains a program that normally examines the partition table at the end of the
boot sector to determine which partition is active. Then a secondary boot loader is
read in from that partition. This loader reads in the operating system from the active
partition and starts it.
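The partition-table step can be illustrated with the classic MBR layout: a 512-byte sector whose four 16-byte partition entries start at offset 446, ending with the 0x55 0xAA signature; a partition is active (bootable) if the first byte of its entry is 0x80:

```python
import struct

def active_partition(sector: bytes):
    assert len(sector) == 512
    assert sector[510:512] == b"\x55\xaa", "not a valid boot sector"
    for i in range(4):
        entry = sector[446 + 16 * i : 446 + 16 * (i + 1)]
        if entry[0] == 0x80:  # boot-indicator flag marks the active partition
            # the partition's starting LBA is a little-endian u32 at offset 8
            start_lba = struct.unpack_from("<I", entry, 8)[0]
            return i, start_lba
    return None

# Build a fake boot sector with partition 1 marked active, starting at LBA 2048
sector = bytearray(512)
sector[510:512] = b"\x55\xaa"
entry = 446 + 16 * 1
sector[entry] = 0x80
struct.pack_into("<I", sector, entry + 8, 2048)
print(active_partition(bytes(sector)))  # (1, 2048)
```

A real boot loader does the same walk over the four entries, then loads the secondary loader from the active partition's start sector.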
• The operating system then queries the BIOS to get the configuration information. For
each device, it checks to see if it has the device driver. If not, it asks the user to insert
a CD-ROM containing the driver (supplied by the device’s manufacturer) or to
download it from the Internet. Once it has all the device drivers, the operating system
loads them into the kernel. Then it initialises its tables, creates whatever background
processes are needed, and starts up a login program or GUI.
41
Computer System Operation
• I/O devices and the CPU can execute concurrently.
• Each device controller manages a particular device type.
• Each device controller has a local buffer.
• Each device controller type has an operating system device driver to
manage it.
• CPU moves data from/to main memory to/from local buffers.
• I/O is from the device to local buffer of controller.
• Device controller informs CPU that it has finished its operation by
causing an interrupt.

42
Homework 1
• Discuss the various types of operating systems that we
may have.

• Submit a Word or PDF file on Piazza in the hw1 folder and name the file
in the following format: LastName-Initials-MatricNo-hw1. For example,
Doe-JB-LCUUG20123456-hw1

43