CS 4348 HW1
2.
Interrupts are provided primarily as a way to improve processor utilization. For example, most I/O
devices are much slower than the processor. Suppose that the processor is transferring data to a
printer using the instruction cycle scheme. After each write operation, the processor must pause
and remain idle until the printer catches up. The length of this pause may be on the order of many
thousands or even millions of instruction cycles. Clearly, this is a very wasteful use of the
processor.
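A rough sketch of the contrast in C (the memory-mapped printer registers PRINTER_STATUS / PRINTER_DATA and their addresses are made up for illustration, not from the book): the programmed-I/O version burns the idle time in a spin loop, while the interrupt-driven version only touches the printer inside a handler, leaving the processor free to do other work in between.

#include <stdint.h>

/* Hypothetical memory-mapped printer registers (addresses invented for this sketch). */
#define PRINTER_STATUS (*(volatile uint8_t *)0x40000000u)
#define PRINTER_DATA   (*(volatile uint8_t *)0x40000004u)
#define PRINTER_READY  0x01u

/* Programmed I/O: after every character the processor spins until the slow
 * printer catches up -- the wasted idle time described above.             */
void print_busy_wait(const char *buf, int n)
{
    for (int i = 0; i < n; i++) {
        while ((PRINTER_STATUS & PRINTER_READY) == 0)
            ;                            /* thousands of wasted instruction cycles */
        PRINTER_DATA = (uint8_t)buf[i];  /* write one character                    */
    }
}

/* Interrupt-driven I/O: this handler runs only when the printer signals it is
 * ready for the next character, so the processor does useful work in between. */
static const char *out_buf;
static int out_pos, out_len;

void printer_interrupt_handler(void)
{
    if (out_pos < out_len)
        PRINTER_DATA = (uint8_t)out_buf[out_pos++];
}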
3.
• Performance: If the work to be done by a computer can be organized such that some portions of
the work can be done in parallel, then a system with multiple processors will yield greater
performance than one with a single processor of the same type.
• Incremental growth: A user can enhance the performance of a system by adding an additional
processor.
• Scaling: Vendors can offer a range of products with different price and performance
characteristics based on the number of processors configured in the system.
4.
- The use of multiple levels of memory in a hierarchy, each with different access times
and storage capacities, allows the processor to access data quickly and efficiently.
- The memory hierarchy arranges many layers of memory into a hierarchy, with each
level giving a trade-off between access speed and storage capacity, enabling more
efficient memory access.
- Thanks to the hierarchical organization of memory, the CPU can swiftly access frequently
used data from the fast cache memory and only access slower memory when necessary. As a
result, performance improves because the time needed to retrieve data is reduced.
- By providing distinct levels of memory with different speeds and capacities, the memory
hierarchy enables more efficient memory access: the processor can quickly access frequently
used data from fast memory, which decreases the average time needed to access data (a small
worked example follows this list).
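A small worked example of that trade-off (the 95% hit ratio and the 0.1 µs / 1 µs access times are assumed numbers, not given in the question): as long as most references hit the fast level, the average access time stays close to the fast level's time.

#include <stdio.h>

int main(void)
{
    double t_fast = 0.1;   /* access time of the fast level, microseconds (assumed) */
    double t_slow = 1.0;   /* access time of the slow level, microseconds (assumed) */
    double hit    = 0.95;  /* fraction of references found in the fast level (assumed) */

    /* A miss pays for the fast-level lookup plus the slow-level access. */
    double avg = hit * t_fast + (1.0 - hit) * (t_fast + t_slow);

    printf("average access time = %.3f us\n", avg);   /* prints 0.150 us */
    return 0;
}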
5.
Spatial locality refers to the tendency of execution to involve a number of memory locations that
are clustered. This reflects the tendency of a processor to access instructions sequentially.
Spatial locality also reflects the tendency of a program to access data locations sequentially,
such as when processing a table of data.
Temporal locality refers to the tendency for a processor to access memory locations that have
been used recently. For example, when an iteration loop is executed, the processor executes the
same set of instructions repeatedly.
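A tiny C illustration of both kinds of locality (the array size and pass count are made up): the inner walk over table[] touches consecutive addresses (spatial locality), and the repeated outer passes reuse the same instructions and the same sum variable (temporal locality).

#include <stdio.h>

#define N 1024

int main(void)
{
    int table[N];
    long sum = 0;

    for (int i = 0; i < N; i++)           /* fill the table */
        table[i] = i;

    for (int pass = 0; pass < 10; pass++)     /* temporal locality: same code and data reused */
        for (int i = 0; i < N; i++)           /* spatial locality: sequential addresses       */
            sum += table[i];

    printf("%ld\n", sum);
    return 0;
}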
6.
(b) Discuss the impact on speed if the microprocessor has:
i. a 16-bit local address bus and a 32-bit data bus
ii. a 32-bit local address bus and a 16-bit data bus
i. With a 16-bit local address bus, the CPU can directly address only 2^16 = 65,536 memory
locations. Reaching memory beyond that range requires sending an address in multiple parts,
which costs extra memory cycles and reduces performance. The 32-bit data bus, however, lets
the CPU transfer 32 bits of data in a single cycle, which speeds up data transfers.
ii. With a 32-bit local address bus, the CPU can directly address 2^32 = 4,294,967,296 memory
locations, and a complete address can be sent in a single cycle, so memory locations are
reached with fewer memory cycles. The 16-bit data bus, however, limits how much data can be
transferred per cycle: a 32-bit value takes two bus cycles (sketched below), which slows down
processing.
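A sketch of point ii in C (bus_read16 is a made-up stand-in for one 16-bit bus transfer, not a real API): fetching a single 32-bit operand over a 16-bit data bus costs two transfers instead of one.

#include <stdint.h>

/* Stub standing in for one 16-bit transfer over the data bus (assumed for the sketch). */
static uint16_t bus_read16(uint32_t address)
{
    (void)address;
    return 0;
}

/* With a 16-bit data bus, one 32-bit word needs two bus cycles:
 * one for the low half-word and one for the high half-word.    */
uint32_t read_word_over_16bit_bus(uint32_t address)
{
    uint32_t lo = bus_read16(address);
    uint32_t hi = bus_read16(address + 2);
    return (hi << 16) | lo;
}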
(c) How many bits are needed for the program counter and the instruction
register?
32 bits: the instruction register must be able to hold a complete 32-bit instruction, and the
program counter must be wide enough to hold a full memory address.
7.
4800 bps / 8 bits per character = 600 characters per second
(1/684,000) / (1/600) × 100 = 600/684,000 × 100 ≈ 0.088%
8.
What is the kernel of an operating system?
A portion of the operating system that includes the most heavily used portions of software.
Generally, the kernel is maintained permanently in main memory. The kernel runs in a privileged
mode and responds to calls from processes and interrupts from devices.
What is a monolithic kernel?
A large kernel containing virtually the complete operating system, including scheduling, file
system, device drivers, and memory management. All the functional components of the kernel
have access to all of its internal data structures and routines. Typically, a monolithic kernel is
implemented as a single process, with all elements sharing the same address space.
What is a microkernel?
A small, privileged operating system core that provides process scheduling, memory
management, and communication services and relies on other processes to perform some of the
functions traditionally associated with the operating system kernel.
10.
1) Process Isolation: The OS must prevent independent processes from interfering with each
other’s memory, both data and instructions.
2) Automatic allocation and management: Programs should be dynamically allocated across the
memory hierarchy as required. Allocation should be transparent to the programmer. Thus, the
programmer is relieved of concerns relating to memory limitations, and the OS can achieve
efficiency by assigning memory to jobs only as needed.
3) Support of modular programming: Programmers should be able to define program modules,
and to dynamically create, destroy, and alter the size of modules.
4) Protection and access control: Sharing of memory, at any level of the memory hierarchy,
creates the potential for one program to address the memory space of another. This is desirable
when sharing is needed by particular applications. At other times, it threatens the integrity of
programs and even of the OS itself. The OS must allow portions of memory to be accessible in
various ways by various users.
5) Long-term storage: Many application programs require means for storing information for
extended periods of time, after the computer has been powered down.
11.
A virtual address is the address of a storage location in virtual memory, whereas a real address
is a physical address in main memory.
12.
13.
• Scheduling: Any processor may perform scheduling, which complicates the task of enforcing a
scheduling policy and ensuring that corruption of the scheduler data structures is avoided. If
kernel-level multithreading is used, then the opportunity exists to schedule multiple threads from
the same process simultaneously on multiple processors. Multiprocessor scheduling will be
examined in Chapter 10.
• Synchronization: With multiple active processes having potential access to shared address
spaces or shared I/O resources, care must be taken to provide effective synchronization.
Synchronization is a facility that enforces mutual exclusion and event ordering. A common
synchronization mechanism used in multiprocessor operating systems is locks, and will be
described in Chapter 5.
• Memory management: Memory management on a multiprocessor must deal with all of the
issues found on uniprocessor computers, and will be discussed in Part Three. In addition, the OS
needs to exploit the available hardware parallelism to achieve the best performance. The paging
mechanisms on different processors must be coordinated to enforce consistency when several
processors share a page or segment and to decide on page replacement. The reuse of physical
pages is the biggest problem of concern; that is, it must be guaranteed that a physical page can
no longer be accessed with its old contents before the page is put to a new use.
• Reliability and fault tolerance: The OS should provide graceful degradation in the face of
processor failure. The scheduler and other portions of the OS must recognize the loss of a
processor and restructure management tables accordingly.
14)
a) Total up time: 91 + 67 + 180 + 4 + 103 + 44 + 315 + 153 = 957 minutes
15)
a)
Job1: 37 seconds
Job2: 73 seconds
Job3: 162 seconds
Job4: 142 seconds
Job5: 205 seconds
b)
5 jobs / 205 seconds ≈ 0.024 jobs per second
c)
181/205 ≈ 0.883, i.e., about 88% processor utilization
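A quick check of parts (b) and (c) in C, assuming 205 seconds is when the last job finishes and 181 seconds is the total time the processor spent actually executing jobs:

#include <stdio.h>

int main(void)
{
    double jobs      = 5.0;
    double makespan  = 205.0;   /* finish time of the last job, seconds                 */
    double busy_time = 181.0;   /* total time the processor was busy, seconds (assumed) */

    printf("throughput  = %.3f jobs/s\n", jobs / makespan);          /* ~0.024  */
    printf("utilization = %.1f%%\n", 100.0 * busy_time / makespan);  /* ~88.3%  */
    return 0;
}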