Chapter 11
Multiprocessor
11.1 Introduction and Characteristics of Multiprocessor
A system in which two or more processing units (CPUs or IOPs) are connected to memory and I/O
devices is known as a multiprocessor system.
Multiprocessors are classified as multiple instruction stream, multiple data stream (MIMD) systems.
A multiprocessor system is controlled by one operating system that provides interaction between processors
and all the components of the system.
Characteristics of multiprocessors
Multiple processing elements
The system is controlled by a single operating system
The system is more reliable: a failure in one processor does not halt the entire system
The system derives its high performance from the fact that computations can proceed in parallel in one of two
ways:
Multiple independent jobs can be made to operate in parallel.
A single job can be partitioned into multiple parallel tasks.
Types of multiprocessor
Multiprocessors are classified by the way their memory is organized. They are:
Shared memory or tightly coupled multiprocessor: the multiprocessor system in which all processing
elements share a common memory. In this type, a processor has no private main memory of its own, but
each processor has its own cache memory.
Distributed memory or loosely coupled multiprocessor: the multiprocessor system in which each
processing element has its own local memory; the processors communicate by exchanging messages over
an interconnection network.
Interconnection Structures
2. Multiport Memory
It uses separate buses between each memory module and each CPU.
Each processor has direct, independent access to the memory modules through its own bus connected to
each module.
The modules must have internal control logic to determine which port will have access to memory at any
given time.
Memory access conflicts are resolved by assigning fixed priorities to each memory port: CPU1 has
priority over CPU2, CPU2 has priority over CPU3, and CPU4 has the lowest priority, as sketched below.
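As a rough illustration of this fixed-priority rule, the following C sketch (an assumption made for illustration, not part of the text) picks the port that wins one memory module when several CPUs request it in the same cycle; the lower-numbered CPU always wins.

#include <stdbool.h>
#include <stdio.h>

#define NUM_CPUS 4

/* Returns the CPU (0 = CPU1 ... 3 = CPU4) granted access to one memory
 * module this cycle, or -1 if no port is requesting. A lower index means
 * a higher fixed priority, matching the rule described above. */
int grant_port(const bool request[NUM_CPUS]) {
    for (int cpu = 0; cpu < NUM_CPUS; cpu++)
        if (request[cpu])
            return cpu;              /* highest-priority requester wins */
    return -1;                       /* module is idle this cycle */
}

int main(void) {
    /* CPU2 and CPU4 both request the same module in this cycle. */
    bool request[NUM_CPUS] = { false, true, false, true };
    int winner = grant_port(request);
    if (winner >= 0)
        printf("module granted to CPU%d\n", winner + 1);   /* CPU2 */
    return 0;
}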
3. Crossbar Switch
A crossbar switch (also known as cross-point switch or matrix switch) is a switch connecting multiple inputs
to multiple outputs in a matrix manner.
The crossbar switch organization consists of a number of cross points placed at the intersections
between the processor buses and the memory module paths (see the sketch below).
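To make the cross-point organization concrete, the hypothetical C sketch below models one cycle of a 4x4 crossbar: each memory module independently grants one requesting processor (fixed priority among processors is assumed here purely for illustration), so requests aimed at different modules are all served in the same cycle.

#include <stdio.h>

#define NUM_CPUS    4
#define NUM_MODULES 4

/* request[c][m] is nonzero when CPU c wants module m this cycle.
 * grant[m] receives the CPU granted to module m, or -1 if none.
 * Each module decides on its own, so non-conflicting requests
 * proceed in parallel -- the key property of the crossbar. */
void crossbar_cycle(const int request[NUM_CPUS][NUM_MODULES],
                    int grant[NUM_MODULES]) {
    for (int m = 0; m < NUM_MODULES; m++) {
        grant[m] = -1;
        for (int c = 0; c < NUM_CPUS; c++) {
            if (request[c][m]) {     /* fixed priority: lowest CPU wins */
                grant[m] = c;
                break;
            }
        }
    }
}

int main(void) {
    int request[NUM_CPUS][NUM_MODULES] = {
        { 1, 0, 0, 0 },   /* CPU1 -> module 0 */
        { 0, 0, 1, 0 },   /* CPU2 -> module 2 */
        { 1, 0, 0, 0 },   /* CPU3 -> module 0 (conflicts with CPU1) */
        { 0, 0, 0, 1 },   /* CPU4 -> module 3 */
    };
    int grant[NUM_MODULES];
    crossbar_cycle(request, grant);
    for (int m = 0; m < NUM_MODULES; m++)
        if (grant[m] >= 0)
            printf("module %d <- CPU%d\n", m, grant[m] + 1);
    return 0;
}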
5. Hypercube Interconnection
The hypercube or binary n-cube multiprocessor structure is a loosely coupled system composed of
N = 2^n processors interconnected in an n-dimensional binary cube.
Each processor forms a node of the cube. Each processor has direct communication paths to n other
neighbor processors. These paths correspond to the edges of the cube.
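The cube structure is easy to work with in software because a node's n neighbors are simply the addresses that differ from it in exactly one bit. The C sketch below (illustrative only; the routing rule shown is the usual dimension-by-dimension XOR walk, not something specified in the text) lists a node's neighbors and traces a route along the cube's edges for n = 3, i.e. N = 2^3 = 8 processors.

#include <stdio.h>

/* Print the n neighbors of `node` in a binary n-cube: each neighbor's
 * address differs from `node` in exactly one bit (one cube edge). */
void print_neighbors(unsigned node, unsigned n) {
    for (unsigned d = 0; d < n; d++)
        printf("neighbor across dimension %u: %u\n", d, node ^ (1u << d));
}

/* Walk from src to dst along cube edges, fixing one differing address
 * bit per hop (dimension-ordered routing). */
void route(unsigned src, unsigned dst, unsigned n) {
    unsigned cur = src;
    printf("route: %u", cur);
    for (unsigned d = 0; d < n; d++) {
        if ((cur ^ dst) & (1u << d)) {   /* bit d differs, hop across it */
            cur ^= 1u << d;
            printf(" -> %u", cur);
        }
    }
    printf("\n");
}

int main(void) {
    unsigned n = 3;            /* a 3-cube has N = 2^3 = 8 processors */
    print_neighbors(0, n);     /* node 000 connects to 001, 010, 100  */
    route(0, 7, n);            /* 000 -> 001 -> 011 -> 111            */
    return 0;
}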
Dynamic Arbitration Algorithms
ii) Polling: In a polling procedure, the bus controller sequences through the processor addresses;
when a requesting processor recognizes its own address, it takes control of the bus, and after the
transfer the polling continues by choosing a different processor. The polling sequence is normally
programmable, and as a result, the selection priority can be altered under program control. (The four
dynamic policies listed here are sketched in code after this list.)
iii) Least Recently Used (LRU): The least recently used (LRU) algorithm gives the highest priority to
the requesting device that has not used the bus for the longest interval. With this, no processor is
favored over any other since the priorities are dynamically changed to give every device an
opportunity to access the bus.
iv) First-Come, First-Serve (FIFO) scheme: In the first come, first serve scheme, requests are served
in the order received. To implement this algorithm, the bus controller establishes a queue arranged
according to the time that the bus requests arrive.
v) Rotating Daisy-Chain: The rotating daisy chain procedure is a dynamic extension of the daisy chain
algorithm. The priority of each arbiter for a given bus cycle is determined by its position along the
bus priority line relative to the arbiter whose processor currently controls the bus. Once an arbiter
releases the bus, it receives the lowest priority.
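A compact software model of these four dynamic policies might look like the following C sketch (hypothetical; the data layout and CPU numbering are assumptions made for illustration). Each function returns the index of the CPU granted the bus, or -1 if no CPU is requesting.

#include <stdbool.h>
#include <stdio.h>

#define NCPU 4

/* ii) Polling: grant the first requester found in a programmable poll order. */
int poll_grant(const int order[NCPU], const bool req[NCPU]) {
    for (int i = 0; i < NCPU; i++)
        if (req[order[i]])
            return order[i];
    return -1;
}

/* iii) LRU: grant the requester whose last bus use lies furthest in the past. */
int lru_grant(const bool req[NCPU], unsigned long last_used[NCPU],
              unsigned long now) {
    int win = -1;
    for (int c = 0; c < NCPU; c++)
        if (req[c] && (win < 0 || last_used[c] < last_used[win]))
            win = c;
    if (win >= 0)
        last_used[win] = now;        /* winner becomes most recently used */
    return win;
}

/* iv) FIFO: the controller keeps a queue of requests in arrival order and
 * serves the one at the head. */
int fifo_grant(int queue[], int *count) {
    if (*count == 0)
        return -1;
    int win = queue[0];
    for (int i = 1; i < *count; i++)  /* shift the queue forward */
        queue[i - 1] = queue[i];
    (*count)--;
    return win;
}

/* v) Rotating daisy chain: the scan starts just past the CPU that held the
 * bus, so the previous master ends up with the lowest priority. */
int rotating_grant(const bool req[NCPU], int prev_master) {
    for (int i = 1; i <= NCPU; i++) {
        int c = (prev_master + i) % NCPU;
        if (req[c])
            return c;
    }
    return -1;
}

int main(void) {
    bool req[NCPU]  = { true, false, true, false };   /* CPU1 and CPU3 request */
    int order[NCPU] = { 2, 0, 3, 1 };                 /* programmable sequence */
    unsigned long last_used[NCPU] = { 10, 4, 7, 9 };  /* cycle of last bus use */
    int queue[NCPU] = { 2, 0 };                       /* CPU3 requested first  */
    int qcount = 2;

    printf("polling  grants CPU%d\n", poll_grant(order, req) + 1);        /* CPU3 */
    printf("LRU      grants CPU%d\n", lru_grant(req, last_used, 11) + 1); /* CPU3 */
    printf("FIFO     grants CPU%d\n", fifo_grant(queue, &qcount) + 1);    /* CPU3 */
    printf("rotating grants CPU%d\n", rotating_grant(req, 2) + 1);        /* CPU1 */
    return 0;
}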
Interprocessor Synchronization
The instruction set of a multiprocessor contains basic instructions that are used to implement
communication and synchronization between cooperating processes.
Synchronization refers to the special case where the data used to communicate between processors is control
information. Synchronization is needed to enforce the correct sequence of processes and to ensure mutually
exclusive access to shared writable data.
A number of hardware mechanisms for mutual exclusion have been developed. One of the most popular
methods is through the use of a binary semaphore.
A properly functioning multiprocessor system must provide a mechanism that will guarantee orderly access
to shared memory and other shared resources. This is necessary to protect data from being changed
simultaneously by two or more processors. This mechanism has been termed mutual exclusion.
Mutual exclusion must be provided in a multiprocessor system to enable one processor to exclude or lock
out access to a shared resource by other processors when it is in a critical section. A critical section is a
program sequence that, once begun, must complete execution before another processor accesses the same
shared resource.
A binary variable called a semaphore is often used to indicate whether or not a processor is executing a
critical section. A semaphore is a software controlled flag that is stored in a memory location that all
processors can access. When the semaphore is equal to 1, it means that a processor is executing a critical
program, so that the shared memory is not available to other processors.
When the semaphore is equal to 0, the shared memory is available to any requesting processor. Processors
that share the same memory segment agree by convention not to use the memory segment unless the
semaphore is equal to 0, indicating that memory is available.
They also agree to set the semaphore to 1 when they are executing a critical section and to clear it to 0 when
they are finished.
A semaphore can be initialized by means of a test and set instruction in conjunction with a hardware lock
mechanism.
Assume that the semaphore is a bit in the least significant position of a memory word whose address is
symbolized by SEM. Let the mnemonic TSL designate the "test and set while locked" operation.
The instruction TSL SEM will be executed in two memory cycles (the first to read and the second to write)
without interference as follows:
R ← M[SEM]    Test semaphore
M[SEM] ← 1    Set semaphore
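In C, the same test-and-set behaviour is available through an atomic exchange, which reads the old semaphore value and writes 1 in one indivisible step. The sketch below (a minimal illustration, assuming a C11 compiler that provides <stdatomic.h> and <threads.h>; not code from the text) uses it as a busy-wait lock around a critical section.

#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

atomic_int sem = 0;         /* the shared semaphore word (SEM)           */
long shared_counter = 0;    /* shared writable data protected by SEM     */

void lock(void) {
    /* atomic_exchange plays the role of TSL SEM: it returns the previous
     * value of the semaphore and writes 1, with no other processor able
     * to step in between the read and the write. */
    while (atomic_exchange(&sem, 1) == 1)
        ;                   /* semaphore was already 1: busy-wait        */
}

void unlock(void) {
    atomic_store(&sem, 0);  /* clear the semaphore when leaving          */
}

int worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        lock();             /* enter the critical section                */
        shared_counter++;   /* mutually exclusive update of shared data  */
        unlock();           /* leave the critical section                */
    }
    return 0;
}

int main(void) {
    thrd_t t1, t2;
    thrd_create(&t1, worker, NULL);
    thrd_create(&t2, worker, NULL);
    thrd_join(t1, NULL);
    thrd_join(t2, NULL);
    printf("shared_counter = %ld\n", shared_counter);   /* expect 200000 */
    return 0;
}

A processor (modelled here by a thread) enters its critical section only when the exchange returns 0, exactly the convention described above; a return value of 1 means another processor is already inside, so the requester keeps testing until the semaphore is cleared.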