CS 4348 HW1

The document outlines key concepts in computer architecture, including the instruction cycle, the role of interrupts for processor efficiency, and the advantages of multiprocessor systems. It discusses memory hierarchy, locality principles, and the differences between virtual and real addresses, as well as multithreading and operating system kernel types. Additionally, it covers essential operating system functions such as process isolation, memory management, and reliability.


1.

- The instruction cycle is the processing required to execute a single instruction.
- It consists of two stages: the fetch stage and the execute stage.
- The program counter (PC) holds the memory address of the next instruction to be fetched. The processor increments the PC after each fetch so that the PC points to the next instruction in sequence. The fetched instruction is loaded into the instruction register (IR), where the processor interprets it to determine the required action.
- In the execute stage, the processor decodes the instruction in the IR and performs the specified operation. These actions fall into four categories: processor-memory transfers (moving data between the processor and memory), processor-I/O transfers (moving data to or from a peripheral device), data processing (performing arithmetic or logic operations on data), and control operations (altering the sequence of instruction execution). Executing a single instruction may involve a combination of these actions.
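The fetch-execute loop can be sketched in code. The toy machine below is purely illustrative: its opcode set (LOAD, ADD, STORE, HALT) and memory layout are invented and do not correspond to any real ISA.

```python
# A toy-machine sketch of the fetch-execute cycle. Opcodes and memory
# layout are invented for illustration only.

memory = {
    0: ("LOAD", 10),    # AC <- memory[10]       (processor-memory transfer)
    1: ("ADD", 11),     # AC <- AC + memory[11]  (data processing)
    2: ("STORE", 12),   # memory[12] <- AC       (processor-memory transfer)
    3: ("HALT", None),  # stop                   (control operation)
    10: 5, 11: 7, 12: 0,
}

pc = 0  # program counter: address of the next instruction
ac = 0  # accumulator

while True:
    ir = memory[pc]       # fetch stage: load the instruction into the IR
    pc += 1               # increment the PC to point at the next instruction
    opcode, operand = ir  # decode the IR contents
    if opcode == "LOAD":  # execute stage: perform the specified action
        ac = memory[operand]
    elif opcode == "ADD":
        ac += memory[operand]
    elif opcode == "STORE":
        memory[operand] = ac
    elif opcode == "HALT":
        break

print(memory[12])  # 12
```

Each pass through the loop is one instruction cycle: fetch (with the PC increment), then decode and execute.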

2.

Interrupts are provided primarily as a way to improve processor utilization. For example, most I/O
devices are much slower than the processor. Suppose that the processor is transferring data to a
printer using the instruction cycle scheme. After each write operation, the processor must pause
and remain idle until the printer catches up. The length of this pause may be on the order of many
thousands or even millions of instruction cycles. Clearly, this is a very wasteful use of the
processor.
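The scale of the waste can be sketched with a back-of-the-envelope calculation; the figures below (a 100 MIPS processor, 10 ms per printer write) are invented for illustration:

```python
# Assumed figures for illustration only: a processor executing 10^8
# instructions per second, and a printer that takes 10 ms per write.
instr_per_sec = 100_000_000
printer_delay_s = 0.010

# Instructions the processor could have executed while busy-waiting
# on a single write operation.
wasted_per_write = int(instr_per_sec * printer_delay_s)
print(wasted_per_write)  # 1000000
```

With an interrupt-driven scheme, the processor would instead issue the write, continue with other work, and run only a short interrupt-handler routine when the printer signals completion.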

3.

• Performance: If the work to be done by a computer can be organized so that some portions of the work can be done in parallel, then a system with multiple processors will yield greater performance than one with a single processor of the same type.

• Availability: In a symmetric multiprocessor, because all processors can perform the same functions, the failure of a single processor does not halt the machine. Instead, the system can continue to function at reduced performance.

• Incremental growth: A user can enhance the performance of a system by adding an additional processor.

• Scaling: Vendors can offer a range of products with different price and performance characteristics based on the number of processors configured in the system.
4.

- A memory hierarchy arranges several levels of memory, each offering a different trade-off between access speed and storage capacity: small, fast memories sit closest to the processor, backed by progressively larger and slower ones.
- Because programs exhibit locality of reference, the processor can satisfy most accesses from the fast levels (registers and cache) and fall back on slower memory only when necessary.
- As a result, the average time needed to retrieve data approaches that of the fastest level while the total capacity approaches that of the largest, making memory access both fast and efficient.

5.

Spatial locality refers to the tendency of execution to involve a number of memory locations that are clustered. This reflects the tendency of a processor to access instructions sequentially. Spatial locality also reflects the tendency of a program to access data locations sequentially, such as when processing a table of data.

Temporal locality refers to the tendency of a processor to access memory locations that have been used recently. For example, when an iteration loop is executed, the processor executes the same set of instructions repeatedly.
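Both kinds of locality appear in a simple table traversal; the sketch below uses Python for illustration (in a language with contiguous arrays, the elements of a row occupy adjacent addresses):

```python
# Spatial and temporal locality in a table traversal (illustrative).
table = [[i * 10 + j for j in range(10)] for i in range(10)]  # 10x10 table

total = 0
for row in table:       # rows visited in order
    for value in row:   # spatial locality: adjacent elements read sequentially
        total += value  # temporal locality: `total` and loop code reused each pass

print(total)  # 4950
```

Caches exploit spatial locality by fetching whole blocks of adjacent data at once, and temporal locality by keeping recently used blocks resident.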

6. (a) What is the maximum directly addressable memory capacity (assuming memory is byte addressable)?

2^25 = 33554432 bytes (32 MB)

(b) Discuss the impact on speed if the microprocessor has: i. a 16-bit local address bus and a 32-bit data bus; ii. a 32-bit local address bus and a 16-bit data bus

i. With a 16-bit local address bus, the CPU can directly address only 2^16 = 65536 memory locations. Reaching memory beyond that range requires extra steps to form addresses (for example, transmitting an address in two parts), which reduces processing speed. The 32-bit data bus, however, lets the CPU transfer 32 bits of data in a single memory cycle, which speeds up data movement.

ii. With a 32-bit local address bus, the CPU can directly address 2^32 = 4294967296 memory locations, so addresses can be issued in a single step and fewer memory cycles are needed. The 16-bit data bus, however, limits the amount of data transferred per cycle: a 32-bit quantity takes two memory cycles, which reduces processing speed.

(c) How many bits are needed for the program counter and the instruction
register?

The instruction register needs 32 bits to hold a full 32-bit instruction; the program counter needs at least 25 bits to hold any address in the 2^25-byte memory space.
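The addressable-memory figures in parts (a) and (b) all follow from the rule that an n-bit address can name 2^n distinct byte addresses:

```python
# An n-bit address can name 2**n distinct byte addresses.

def addressable_bytes(address_bits):
    """Maximum directly addressable memory for a given address width."""
    return 2 ** address_bits

print(addressable_bytes(25))  # 33554432   -> part (a), 32 MB
print(addressable_bytes(16))  # 65536      -> part (b) i
print(addressable_bytes(32))  # 4294967296 -> part (b) ii
```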

7.

4800 bps / 8 bits per character = 600 characters per second
(1/684000) / (1/600) × 100 = 600/684000 × 100 ≈ 0.088%

8.

Cache access: 0.8 × 20 ns = 16 ns
Main memory access: 0.2 × 0.8 × 40 ns = 6.4 ns
Disk access: 0.2 × 0.2 × 87 ns = 3.48 ns
Average access time: 16 + 6.4 + 3.48 = 25.88 ns
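This is an expected-value computation over the three levels, each term weighted by the probability of reaching that level (the 0.8 hit rates are from the problem); note that the main-memory term evaluates to 0.2 × 0.8 × 40 = 6.4 ns:

```python
# Expected access time over three memory levels, weighted by the
# probability of reaching each one.
t_cache, t_mem, t_disk = 20, 40, 87  # access times in ns, from the problem
hit = 0.8                            # hit rate at cache, and again at memory

avg = (hit * t_cache                      # 0.8 * 20       = 16.0 ns
       + (1 - hit) * hit * t_mem          # 0.2 * 0.8 * 40 = 6.4 ns
       + (1 - hit) * (1 - hit) * t_disk)  # 0.2 * 0.2 * 87 = 3.48 ns

print(round(avg, 2))  # 25.88
```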

9. What is the kernel of an operating system?

A portion of the operating system that includes the most heavily used portions of software. Generally, the kernel is maintained permanently in main memory. The kernel runs in a privileged mode and responds to calls from processes and interrupts from devices.

What is a monolithic kernel?

A large kernel containing virtually the complete operating system, including scheduling, file
system, device drivers, and memory management. All the functional components of the kernel
have access to all of its internal data structures and routines. Typically, a monolithic kernel is
implemented as a single process, with all elements sharing the same address space.

What is a microkernel?

A small, privileged operating system core that provides process scheduling, memory
management, and communication services and relies on other processes to perform some of the
functions traditionally associated with the operating system kernel.

10.

1) Process Isolation: The OS must prevent independent processes from interfering with each
other’s memory, both data and instructions.
2) Automatic allocation and management: Programs should be dynamically allocated across the memory hierarchy as required. Allocation should be transparent to the programmer. Thus, the programmer is relieved of concerns relating to memory limitations, and the OS can achieve efficiency by assigning memory to jobs only as needed.

3) Support of modular programming: Programmers should be able to define program modules, and to dynamically create, destroy, and alter the size of modules.

4) Protection and access control: Sharing of memory, at any level of the memory hierarchy,
creates the potential for one program to address the memory space of another. This is desirable
when sharing is needed by particular applications. At other times, it threatens the integrity of
programs and even of the OS itself. The OS must allow portions of memory to be accessible in
various ways by various users.

5) Long-term storage: Many application programs require means for storing information for
extended periods of time, after the computer has been powered down.

11.

A virtual address is the address of a storage location in virtual memory, whereas a real address is a physical address in main memory.

12.

Multithreading is a technique in which a process, executing an application, is divided into threads that can run concurrently. Multithreading is useful for applications that perform a number of essentially independent tasks that do not need to be serialized. For example, while you edit a document in Microsoft Word, one thread can handle your keystrokes while another runs the spell checker in the background.
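A minimal sketch of the idea in Python: two essentially independent tasks running concurrently within a single process. The task bodies here are trivial placeholders.

```python
# Two independent tasks run as threads within one process.
import threading

results = []

def decode_audio():    # placeholder for a music-playback task
    results.append("audio chunk decoded")

def handle_input():    # placeholder for a document-editing task
    results.append("keystroke processed")

threads = [threading.Thread(target=decode_audio),
           threading.Thread(target=handle_input)]
for t in threads:
    t.start()          # both threads now run concurrently
for t in threads:
    t.join()           # wait for both to finish

print(len(results))  # 2
```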

13.

• Simultaneous concurrent processes or threads: Kernel routines need to be reentrant to allow several processors to execute the same kernel code simultaneously. With multiple processors executing the same or different parts of the kernel, kernel tables and management structures must be managed properly to avoid data corruption or invalid operations.

• Scheduling: Any processor may perform scheduling, which complicates the task of enforcing a scheduling policy and ensuring that corruption of the scheduler data structures is avoided. If kernel-level multithreading is used, then the opportunity exists to schedule multiple threads from the same process simultaneously on multiple processors. Multiprocessor scheduling will be examined in Chapter 10.

• Synchronization: With multiple active processes having potential access to shared address
spaces or shared I/O resources, care must be taken to provide effective synchronization.
Synchronization is a facility that enforces mutual exclusion and event ordering. A common
synchronization mechanism used in multiprocessor operating systems is locks, which will be described in Chapter 5.
• Memory management: Memory management on a multiprocessor must deal with all of the
issues found on uniprocessor computers, and will be discussed in Part Three. In addition, the OS
needs to exploit the available hardware parallelism to achieve the best performance. The paging
mechanisms on different processors must be coordinated to enforce consistency when several
proces- sors share a page or segment and to decide on page replacement. The reuse of physical
pages is the biggest problem of concern; that is, it must be guaranteed that a physical page can
no longer be accessed with its old contents before the page is put to a new use.

• Reliability and fault tolerance: The OS should provide graceful degradation in the face of
processor failure. The scheduler and other portions of the OS must recognize the loss of a
processor and restructure management tables accordingly.
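The synchronization concern above can be illustrated with a lock protecting a shared counter (Python's `threading.Lock` serves as the mutual-exclusion primitive here):

```python
# Mutual exclusion with a lock: two threads increment a shared counter.
# Without the lock, the read-modify-write on `counter` could interleave
# across threads and lose updates; the lock serializes the critical section.
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:        # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000
```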

14)
a) Total Up time: 91 + 67 + 180 + 4 + 103 + 44 + 315 + 153 = 957 minutes

b) Total Down time: 1440 minutes - 957 minutes = 483 minutes

c) MTTF = (24 hours x 60 minutes) / 7 failures


= 1440 / 7 ≈ 205.7 minutes ≈ 3.43 hours

d) MTTR = Total down time / Number of failures


= 483 minutes / 7 failures
= 69 minutes
Therefore, the MTTR is 69 minutes.

e) Availability: 957/1440 = 0.6646=66.46%
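The figures above can be checked with a short computation; the sketch below treats part (c) as total elapsed time per failure, matching the formula used above (1440/7 ≈ 205.7 minutes, roughly 3.43 hours):

```python
# Recomputing parts (a)-(e) of the reliability problem.
up_times = [91, 67, 180, 4, 103, 44, 315, 153]  # minutes, from the problem
total_minutes = 24 * 60                          # one day
failures = 7

up = sum(up_times)                 # part (a): total up time
down = total_minutes - up          # part (b): total down time
mttf = total_minutes / failures    # part (c): elapsed minutes per failure
mttr = down / failures             # part (d): mean time to repair
availability = up / total_minutes  # part (e)

print(up, down, round(mttf, 1), mttr, round(availability, 4))
# 957 483 205.7 69.0 0.6646
```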

15)

a)
Job1: 37 seconds
Job2: 73 seconds
Job3: 162 seconds
Job4: 142 seconds
Job5: 205 seconds

b)
5/205 = 0.024 jobs per second

c)
181/205 ≈ 0.883, so about 88%
