Platform Technologies

MODULE 2

COMPUTER ARCHITECTURE AND ORGANIZATION AND COMPUTING INFRASTRUCTURE

KENNETH JAY L. DUGARIA

MODULE 2 is composed of Two (2) Lessons.

Lesson 1 is Architecture and Organization, with a Pre-Test and Five (5) Activities.

Lesson 2 is on Computing Infrastructure. It is composed of a Pre-Test and Three (3) Activities.

References are also included in this module.



Lesson 1: Architecture and Organization

At the end of this lesson, YOU are expected to:

▪ draw a block diagram, including interconnections, of the main parts of a computer;


▪ describe how a computer stores and retrieves information to/from memory and hard drives;
and
▪ define the terms: bus, handshaking, serial, parallel, data rate.

Instruction: Write your name, course, and year level (excluding the score) on the blanks provided.
Before proceeding to the content, please answer the questions below. These items will test your
knowledge of Computer Architecture and Organization. Write the answer on the space provided.

Name: Gertrude S. Sanchez  Course & Year: BSIT 2C  Score:

1. What is Computer Architecture?


Answer: Computer architecture refers to those attributes of a system visible to a programmer or, put
another way, those attributes that have a direct impact on the logical execution of a program.
Computer organization refers to the operational units and their interconnections that realize the
architectural specifications. Examples of architectural attributes include the instruction set, the number
of bits used to represent various data types (e.g., numbers, characters), I/O mechanisms, and
techniques for addressing memory. Organizational attributes include those hardware details
transparent to the programmer, such as control signals; interfaces between the computer and
peripherals; and the memory technology used.

2. What is Computer Organization?

Answer: Computer organization is frequently called microarchitecture. Computer architecture
comprises logical functions such as instruction sets, registers, data types, and addressing modes.

3. What are the main parts of a computer?

Answer:
A. A motherboard
B. A Central Processing Unit (CPU)
C. A Graphics Processing Unit (GPU), also known as a video card
D. Random Access Memory (RAM), also known as volatile memory
E. Storage: Solid State Drive (SSD) or Hard Disk Drive (HDD)

4. What is a hard drive?



Answer: A hard drive is the hardware component that stores all of your digital content. Your documents, pictures,
music, videos, programs, application preferences, and operating system represent digital content stored on a hard
drive. Hard drives can be external or internal.

5. What is a bus?
Answer: A bus is a transmission path, made of a set of conducting wires, over which data or
information in the form of electric signals is passed from one component to another in a computer.
The bus can be of three types – Address bus, Data bus and Control bus.


Assembly-Level Machine Organization

In describing computers, a distinction is often made between computer architecture and computer
organization. Although it is difficult to give precise definitions for these terms, a consensus exists
about the general areas covered by each (e.g., see [VRAN80], [SIEW82], and [BELL78a]); an
interesting alternative view is presented in [REDD76].
Computer architecture refers to those attributes of a system visible to a programmer or, put another
way, those attributes that have a direct impact on the logical execution of a program. Computer
organization refers to the operational units and their interconnections that realize the architectural
specifications. Examples of architectural attributes include the instruction set, the number of bits used
to represent various data types (e.g., numbers, characters), I/O mechanisms, and techniques for
addressing memory. Organizational attributes include those hardware details transparent to the
programmer, such as control signals; interfaces between the computer and peripherals; and the
memory technology used.
For example, it is an architectural design issue whether a computer will have a multiply instruction. It is
an organizational issue whether that instruction will be implemented by a special multiply unit or by
a mechanism that makes repeated use of the add unit of the system. The organizational decision may
be based on the anticipated frequency of use of the multiply instruction, the relative speed of the
two approaches, and the cost and physical size of a special multiply unit.
Historically, and still today, the distinction between architecture and organization has been an
important one. Many computer manufacturers offer a family of computer models, all with the same
architecture but with differences in organization. Consequently, the different models in the family have
different price and performance characteristics. Furthermore, a particular architecture may span many
years and encompass a number of different computer models, its organization changing with
changing technology. A prominent example of both these phenomena is the IBM System/370
architecture. This architecture was first introduced in 1970 and included a number of models. The
customer with modest requirements could buy a cheaper, slower model and, if demand increased, later
upgrade to a more expensive, faster model without having to abandon software that had already been
developed. Over the years, IBM has introduced many new models with improved technology to replace
older models, offering the customer greater speed, lower cost, or both. These newer models retained
the same architecture so that the customer’s software investment was protected. Remarkably, the
System/370 architecture,

with a few enhancements, has survived to this day as the architecture of IBM’s mainframe product line.
In a class of computers called microcomputers, the relationship between architecture and
organization is very close. Changes in technology not only influence organization but also result in
the introduction of more powerful and more complex architectures. Generally, there is less of a
requirement for generation-to-generation compatibility for these smaller machines. Thus, there is more
interplay between organizational and architectural design decisions.

Memory Organization in Computer Architecture


A memory unit is the collection of storage units or devices together. The memory unit stores the
binary information in the form of bits. Generally, memory/storage is classified into 2 categories:

• Volatile Memory: This loses its data when power is switched off.
• Non-Volatile Memory: This is permanent storage and does not lose any data when power is
switched off.


Memory Hierarchy

The total memory capacity of a computer can be visualized as a hierarchy of components. The memory
hierarchy system consists of all storage devices contained in a computer system, from the slow
auxiliary memory to the fast main memory and the smaller cache memory.
Auxiliary memory access time is generally 1000 times that of the main memory; hence it is at
the bottom of the hierarchy.
The main memory occupies the central position because it is equipped to communicate directly
with the CPU and with auxiliary memory devices through the Input/Output processor (I/O).
When programs not residing in main memory are needed by the CPU, they are brought in from
auxiliary memory. Programs not currently needed in main memory are transferred into auxiliary
memory to provide space for other programs that are currently in use.

The cache memory is used to store program data which is currently being executed in the CPU.
The approximate access-time ratio between cache memory and main memory is about 1 to 7~10.

Memory Access Methods


Each memory type is a collection of numerous memory locations. To access data from any memory,
it must first be located, and then the data is read from the memory location. Following are the
methods to access information from memory locations:

1. Random Access: Main memories are random access memories, in which each memory
location has a unique address. Using this unique address any memory location can be
reached in the same amount of time in any order.
2. Sequential Access: This method allows memory access in a sequence or in order.


3. Direct Access: In this mode, information is stored in tracks, with each track having a separate
read/write head.
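The contrast between random and sequential access can be sketched with hypothetical timings (the nanosecond figures below are invented for illustration, not measurements):

```python
# Hypothetical timings, chosen only to illustrate the contrast.
RAM_ACCESS_NS = 100   # random access: constant cost for any address
TAPE_STEP_NS = 1000   # sequential access: cost of passing one record

def random_access_time(address):
    """Any location is reached in the same time, in any order."""
    return RAM_ACCESS_NS

def sequential_access_time(position):
    """Every earlier record must be passed over to reach `position`."""
    return TAPE_STEP_NS * (position + 1)

# The first and the ten-thousandth RAM location cost the same;
# the ten-thousandth tape record costs 10,000 steps.
print(random_access_time(0), random_access_time(9999))
print(sequential_access_time(0), sequential_access_time(9999))
```

This is why main memories can be addressed in any order, while tape-like devices favor in-order reads.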

Main Memory
The memory unit that communicates directly with the CPU, auxiliary memory, and cache memory is
called main memory. It is the central storage unit of the computer system. It is a large and fast
memory used to store data during computer operations. Main memory is made up of RAM and
ROM, with RAM integrated circuit chips holding the major share.

• RAM: Random Access Memory


o DRAM: Dynamic RAM is made of capacitors and transistors and must be refreshed
every 10~100 ms. It is slower and cheaper than SRAM.
o SRAM: Static RAM has a six-transistor circuit in each cell and retains data until powered off.
o NVRAM: Non-Volatile RAM retains its data even when turned off. Example: Flash memory.
• ROM: Read Only Memory is non-volatile and serves as permanent storage for information. It
also stores the bootstrap loader program, used to load and start the operating system when the
computer is turned on. PROM (Programmable ROM), EPROM (Erasable PROM) and EEPROM
(Electrically Erasable PROM) are some commonly used ROMs.


Auxiliary Memory
Devices that provide backup storage are called auxiliary memory. For example: Magnetic disks
and tapes are commonly used auxiliary devices. Other devices used as auxiliary memory are
magnetic drums, magnetic bubble memory and optical disks.
It is not directly accessible to the CPU, and is accessed using the Input/Output channels.

Cache Memory
The data or contents of main memory that are used again and again by the CPU are stored in the
cache memory so that the data can be accessed in a shorter time.
Whenever the CPU needs to access memory, it first checks the cache memory. If the data is not found
in cache memory, the CPU moves on to the main memory. It also transfers blocks of recent data
into the cache and keeps deleting old data in the cache to accommodate the new.

Hit Ratio
The performance of cache memory is measured in terms of a quantity called hit ratio. When the CPU
refers to memory and finds the word in cache, it is said to produce a hit. If the word is not found in
cache and must be fetched from main memory, it counts as a miss.
The ratio of the number of hits to the total CPU references to memory is called hit ratio.
Hit Ratio = Hit/(Hit + Miss)
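As a quick check of the formula, here is a toy cache simulation; the FIFO eviction policy and the reference string are arbitrary illustrations, not part of the module:

```python
def hit_ratio(references, cache_size):
    """Run a reference string through a tiny FIFO cache and
    return Hit / (Hit + Miss)."""
    cache, hits = [], 0
    for addr in references:
        if addr in cache:
            hits += 1                 # word found in cache: a hit
        else:                         # a miss: fetch from main memory
            if len(cache) == cache_size:
                cache.pop(0)          # evict the oldest block (FIFO)
            cache.append(addr)
    return hits / len(references)

# 6 references, 3 hits -> hit ratio = 3 / (3 + 3) = 0.5
print(hit_ratio([1, 2, 1, 3, 1, 2], cache_size=3))  # 0.5
```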


Associative Memory
It is also known as content-addressable memory (CAM). It is a memory chip in which each bit
position can be compared. The content is compared in each bit cell, which allows very fast table
lookup. Since the entire chip can be compared, contents are stored randomly without considering an
addressing scheme. These chips have less storage capacity than regular memory chips.


Interfacing and Communication


Interface is the path for communication between two components. Interfacing is of two types,
memory interfacing and I/O interfacing.
Memory Interfacing
When executing any instruction, the microprocessor needs to access the memory to read
instruction codes and the data stored in memory. For this, both the memory and
the microprocessor require some signals to read from and write to registers.
The interfacing process includes some key factors to match with the memory requirements and
microprocessor signals. The interfacing circuit therefore should be designed in such a way that it
matches the memory signal requirements with the signals of the microprocessor.
I/O Interfacing
There are various communication devices like the keyboard, mouse, printer, etc. So, we need to
interface the keyboard and other devices with the microprocessor by using latches and buffers. This
type of interfacing is known as I/O interfacing.


Block Diagram of Memory and I/O Interfacing

8085 Interfacing Pins


Following is the list of 8085 pins used for interfacing with other devices −

• A15 - A8 (Higher Address Bus)


• AD7 - AD0 (Lower Address/Data Bus)
• ALE
• RD
• WR
• READY
Ways of Communication − Microprocessor with the Outside World
There are two ways in which the microprocessor can communicate with the outside world.

• Serial Communication Interface


• Parallel Communication interface
Serial Communication Interface − In this type of communication, the interface gets a single
byte of data from the microprocessor and sends it bit by bit to the other system serially, and
vice versa.
Parallel Communication Interface − In this type of communication, the interface gets a byte
of data from the microprocessor and sends all its bits to the other system simultaneously (in
parallel), and vice versa.
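The two interfaces can be contrasted in a short sketch; the clock ticks and wire counts here are simplified assumptions for illustration:

```python
def to_bits(byte):
    """Split a byte into its 8 bits, most significant bit first."""
    return [(byte >> i) & 1 for i in range(7, -1, -1)]

def send_serial(byte):
    """One data wire: the byte leaves one bit per clock tick."""
    return [[bit] for bit in to_bits(byte)]     # 8 ticks, 1 bit each

def send_parallel(byte):
    """Eight data wires: all bits leave in a single clock tick."""
    return [to_bits(byte)]                      # 1 tick, 8 bits

# Sending the byte 0x41 ('A'): 8 ticks serially, 1 tick in parallel.
print(len(send_serial(0x41)))    # 8
print(len(send_parallel(0x41)))  # 1
```

The trade-off is wires versus time: serial needs fewer conductors, parallel moves the same byte in a fraction of the ticks.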

Functional Organization
Computer: A computer is a combination of hardware and software resources which integrate
together and provide various functionalities to the user. Hardware comprises the physical components
of a computer, like the processor, memory devices, monitor, keyboard, etc., while software is the set
of programs or instructions that are required by the hardware resources to function properly. A few
basic components aid the working cycle of a computer, i.e. the Input-Process-Output Cycle, and these
are called the functional components of a computer. It needs certain input, processes that input, and
produces the desired output. The input unit takes the input, the central processing unit does the
processing of data, and the output unit produces the output. The memory unit holds the data and
instructions during processing.
Digital Computer: A digital computer can be defined as a programmable machine which reads
the binary data passed as instructions, processes this binary data, and displays a calculated digital
output. Therefore, digital computers are those that work on digital data.
Details of Functional Components of a Digital Computer


• Input Unit: The input unit consists of input devices that are attached to the computer. These
devices take input and convert it into binary language that the computer understands. Some of the
common input devices are keyboard, mouse, joystick, scanner etc.
• Central Processing Unit (CPU): Once the information is entered into the computer by the
input device, the processor processes it. The CPU is called the brain of the computer because it is
the control center of the computer. It first fetches instructions from memory and then interprets
them so as to know what is to be done. If required, data is fetched from memory or an input device.
Thereafter the CPU executes the required computation and then either stores the output
or displays it on the output device. The CPU has three
main components which are responsible for different functions – the Arithmetic Logic Unit (ALU),
the Control Unit (CU), and Memory Registers.
• Arithmetic and Logic Unit (ALU): The ALU, as its name suggests, performs mathematical
calculations and takes logical decisions. Arithmetic calculations include addition, subtraction,
multiplication, and division. Logical decisions involve comparison of two data items to see which
one is larger, smaller, or equal.
• Control Unit: The control unit coordinates and controls the flow of data in and out of the CPU,
and also controls the operations of the ALU, memory registers, and input/output units. It is
responsible for carrying out all the instructions stored in the program. It decodes the fetched
instruction, interprets it, and sends control signals to input/output devices until the required
operation is done properly by the ALU and memory.
• Memory Registers: A register is a temporary unit of memory in the CPU. These are used to store
the data which is directly used by the processor. Registers can be of different sizes (16-bit, 32-
bit, 64 bit and so on) and each register inside the CPU has a specific function like storing data,
storing an instruction, storing address of a location in memory etc. The user registers can be
used by an assembly language programmer for storing operands, intermediate results etc.
Accumulator (ACC) is the main register in the ALU and contains one of the operands of an operation
to be performed in the ALU.
• Memory: Memory attached to the CPU is used for storage of data and instructions and is
called internal memory. The internal memory is divided into many storage locations, each of
which can store data or instructions. Each memory location is of the same size and has an
address. With the help of the address, the computer can read any memory location easily
without having to search the entire memory. When a program is executed, its data is copied to
the internal memory and stored there until the end of the execution. The internal
memory is also called Primary memory or Main memory. Because the time to access data is
independent of its location in memory, this memory is also called Random-Access Memory (RAM).
• Output Unit: The output unit consists of output devices that are attached with the computer.
It converts the binary data coming from CPU to human understandable form. The common output
devices are monitor, printer, plotter etc.
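The input-process-output cycle described above can be sketched as a toy accumulator machine; the three opcodes and the memory layout are invented for illustration and do not model any real CPU:

```python
# A toy accumulator machine with three invented opcodes.
LOAD, ADD, STORE = 1, 2, 3

def run(program, memory):
    acc = 0                                  # accumulator (ACC) in the ALU
    for opcode, address in program:          # control unit: fetch + decode
        if opcode == LOAD:
            acc = memory[address]            # memory -> ACC
        elif opcode == ADD:
            acc += memory[address]           # ALU arithmetic
        elif opcode == STORE:
            memory[address] = acc            # ACC -> memory
    return memory

# Input values 5 and 7 sit in memory; the program adds them
# and stores the result back: the Input-Process-Output cycle.
mem = {0: 5, 1: 7, 2: 0}
run([(LOAD, 0), (ADD, 1), (STORE, 2)], mem)
print(mem[2])  # 12
```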

Interconnection between Functional Components

A computer consists of an input unit that takes input, a CPU that processes the input, and an output
unit that produces output. All these devices communicate with each other through a common bus. A
bus is a transmission path, made of a set of conducting wires, over which data or information in the
form of electric signals is passed from one component to another in a computer. The bus can be of
three types – Address bus, Data bus and Control bus.
Following figure shows the connection of various functional components:


The address bus carries the address location of the data or instruction. The data bus carries data from
one component to another and the control bus carries the control signals. The system bus is the
common communication path that carries signals to/from CPU, main memory and input/output devices.
The input/output devices communicate with the system bus through the controller circuit which helps
in managing various input/output devices attached to the computer.
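A toy model can show how the three buses cooperate on a memory transfer; the signal names ("WR", "RD") are illustrative, not tied to any real processor:

```python
# Toy model of memory transfers over the three buses.
memory = [0] * 16

def bus_cycle(address_bus, data_bus, control_bus):
    """Address bus selects the location, data bus carries the value,
    control bus says whether to read or write."""
    if control_bus == "WR":
        memory[address_bus] = data_bus
    elif control_bus == "RD":
        return memory[address_bus]

bus_cycle(5, 42, "WR")           # CPU writes 42 to location 5
print(bus_cycle(5, None, "RD"))  # 42
```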


Multiprocessing and Alternative Architecture

A multiprocessor system is defined as "a system with more than one processor", and, more
precisely, "a number of central processing units linked together to enable parallel processing to take
place".
The key objective of a multiprocessor is to boost a system's execution speed. The other objectives are
fault tolerance and application matching.
The term "multiprocessor" can be confused with the term "multiprocessing". While multiprocessing is a
type of processing in which two or more processors work together to execute multiple programs
simultaneously, multiprocessor refers to a hardware architecture that allows multiprocessing.
Multiprocessor systems are classified according to how processor memory access is handled and
whether system processors are of a single type or various ones.
Multiprocessor System Types
There are many types of multiprocessor systems:
• Loosely coupled multiprocessor system
• Tightly coupled multiprocessor system
• Homogeneous multiprocessor system
• Heterogeneous multiprocessor system
• Shared memory multiprocessor system
• Distributed memory multiprocessor system
• Uniform memory access (UMA) system
• cc–NUMA system
• Hybrid system – shared system memory for global data and local memory for local data


Loosely-Coupled (Distributed Memory) Multiprocessor System


In loosely-coupled multiprocessor systems, each processor has its own local memory, input/output (I/O)
channels, and operating system. Processors exchange data over a high-speed communication
network by sending messages via a technique known as "message passing". Loosely-coupled
multiprocessor systems are also known as distributed-memory systems, as the processors do not
share physical memory and have individual I/O channels.

System characteristics


• These systems are able to perform multiple-instructions-on-multiple-data (MIMD) programming.


• This type of architecture allows parallel processing.
• The distributed memory is highly scalable.

Tightly-Coupled (Shared Memory) Multiprocessor System


A tightly-coupled multiprocessor system has shared memory closely connected to the processors.
A symmetric multiprocessing system is a system with centralized shared memory, called main
memory (MM), operating under a single operating system with two or more homogeneous processors.
There are two types of systems:
• Uniform memory-access (UMA) system
• NUMA system
A uniform memory-access (UMA) system can in turn be:
• a heterogeneous multiprocessing system, or
• a symmetric multiprocessing system (SMP)
Heterogeneous multiprocessor system
A heterogeneous multiprocessing system contains multiple, but not homogeneous, processing
units – central processing units (CPUs), graphics processing units (GPUs), digital signal processors
(DSPs), or any type of application-specific integrated circuits (ASICs). The system architecture
allows any accelerator – for instance, a graphics processor – to operate at the same processing
level as the system's CPU.
Symmetric multiprocessor system

A symmetric multiprocessor system (SMP) is a system with a pool of homogeneous processors
running under a single OS (operating system) with a centralized, shared main memory. Each
processor, executing different programs and working on different sets of data, has the ability to
share common resources (memory, I/O devices, interrupt system, and so on) that are connected
using a system bus, a crossbar, a mix of the two, or an address bus and data crossbar.
Each processor has its own cache memory that acts as a bridge between the processor and main
memory. The function of the cache is to alleviate the need for main-memory data access, thus
reducing system-bus traffic. Use of shared memory allows for a uniform memory-access time
(UMA).
cc-NUMA system
It is known that the SMP system has limited scalability. To overcome this limitation, the architecture
called "cc-NUMA" (cache coherency–non-uniform memory access) is normally used. The main
characteristic of a cc-NUMA system is having shared global memory that is distributed to each node,
although the effective "access" a processor has to the memory of a remote component subsystem, or
"node", is slower compared to local memory access, which is why the memory access is "non-uniform".
A cc–NUMA system is a cluster of SMP systems – each called a "node", which can have a single
processor, a multi-core processor, or a mix of the two, of one or other kinds of architecture –
connected via a high-speed "connection network". This link can be a single or double-reverse ring,
a multi-ring, point-to-point connections, or a mix of these (e.g. IBM Power Systems); a bus
interconnection (e.g. NUMAq); a "crossbar"; a "segmented bus" (NUMA Bull HN ISI ex Honeywell);
a "mesh router"; etc.
cc-NUMA is also called "distributed shared memory" (DSM) architecture.
The difference in access times between local and remote memory can also be an order of
magnitude, depending on the kind of connection network used (faster in segmented-bus, crossbar,
and point-to-point interconnections; slower in serial ring connections).



Performance Enhancements

What Is Computer Performance?


Fast, inexpensive computers are now essential to numerous human endeavors. But less well
understood is the need not just for fast computers but also for ever-faster and higher-performing
computers at the same or better costs. Exponential growth of the type and scale that have fueled the
entire information technology industry is ending. In addition, a growing performance gap between
processor performance and memory bandwidth, thermal-power challenges and increasingly expensive
energy use, threats to the historical rate of increase in transistor density, and a broad new class of
computing applications pose a wide-ranging new set of challenges to the computer industry.
Meanwhile, societal expectations for increased technology performance continue apace and show no
signs of slowing, and this underscores the need for ways to sustain exponentially increasing
performance in multiple dimensions. The essential engine that has met this need for the last 40 years
is now in considerable danger, and this has serious implications for our economy, our military, our
research institutions, and our way of life.

Embedded Computing Performance


The design of desktop systems often places considerable emphasis on general CPU performance in
running desktop workloads. Particular attention is paid to the graphics system, which directly
determines which consumer games will run and how well. Mobile platforms, such as laptops and
notebooks, attempt to provide enough computing horsepower to run modern operating systems well—
subject to the energy and thermal constraints inherent in mobile, battery-operated devices—but tend
not to be used for serious gaming, so high-end graphics solutions would not be appropriate. Servers run
a different kind of workload from either desktops or mobile platforms, are subject to substantially
different economic constraints in their design, and need no graphics support at all. Desktops and
mobile platforms tend to value legacy compatibility (for example, that existing operating systems and
software applications will continue to run on new hardware), and this compatibility requirement affects
the design of the systems, their economics, and their use patterns.


Although desktops, mobile, and server computer systems exhibit important differences from one
another, it is natural to group them when comparing them with embedded systems. It is difficult to
define embedded systems accurately because their space of applicability is huge—orders of magnitude
larger than the general-purpose computing systems of desktops, laptops, and servers. Embedded
computer systems can be found everywhere: a car’s radio, engine controller, transmission controller,
airbag deployment, antilock brakes, and dozens of other places. They are in the refrigerator, the
washer and dryer, the furnace controller, the MP3 player, the television set, the alarm clock, the
treadmill and stationary bike, the Christmas lights, the DVD player, and the power tools in the garage.
They might even be found in ski boots, tennis shoes, and greeting cards. They control the elevators and
heating and cooling systems at the office, the video surveillance system in the parking lot, and the
lighting, fire protection, and security systems.

WHY PERFORMANCE MATTERS


Humans design machinery to solve problems. Measuring how well machines perform their tasks is of
vital importance for improving them, conceiving better machines, and deploying them for economic
benefit. Such measurements often speak of a machine’s performance, and many aspects of a
machine’s operations can be characterized as performance. For example, one aspect of an
automobile’s performance is the time it takes to accelerate from 0 to 60 mph; another is its average
fuel economy. Braking ability, traction in bad weather conditions, and the capacity to tow trailers are
other measures of the car’s performance.
Computer systems are machines designed to perform information processing and computation. Their
performance is typically measured by how much information processing they can accomplish per
unit time, but there are various perspectives on what type of information processing to consider
when measuring performance and on the right time scale for such measurements. Those
perspectives reflect the broad array of uses and the diversity of end users of modern computer
systems. In general, the systems are deployed and valued on the basis of their ability to improve
productivity. For some users, such as scientists and information technology specialists, the
improvements can be measured in quantitative terms. For others, such as office workers and
casual home users, the performance and resulting productivity gains are more qualitative. Thus, no
single measure of performance or productivity adequately characterizes computer systems for all
their possible uses.

PERFORMANCE AS MEASURED BY RAW COMPUTATION


The classic formulation for raw computation in a single CPU core identifies operating frequency,
instruction count, and instructions per cycle (IPC) as the fundamental low-level components of
performance. Each has been the focus of a considerable amount of research and discovery in the last
20 years. Although detailed technical descriptions of them are beyond the intended scope of this
report, the brief descriptions below will provide context for the discussions that follow.
• Operating frequency defines the basic clock rate at which the CPU core runs. Modern high-end
processors run at several billion cycles per second. Operating frequency is a function of the
low-level transistor characteristics in the chip, the length and physical characteristics of the
internal chip wiring, the voltage
that is applied to the chip, and the degree of pipelining used in the microarchitecture of the
machine. The last 15 years have seen dramatic increases in the operating frequency of CPU
cores. As an unfortunate side effect of that growth, the maximum operating frequency has often
been used as a proxy for performance by much of the popular press and industry marketing
campaigns. That can be misleading because there are many other important low-level and
system-level measures to consider in reasoning about performance.

• Instruction count is the number of native instructions—instructions written for that specific CPU—
that must be executed by the CPU to achieve correct results with a given computer program.
Users typically write programs in high-level programming languages—such as Java, C, C++,
and C#—and then use a compiler to
translate the high-level program to native machine instructions. Machine instructions are specific
to the instruction set architecture (ISA) that a given computer architecture or architecture family
implements. For a given high-level program, the machine instruction count varies when it
executes on different computer systems because of differences in the underlying ISA, in the
microarchitecture that implements the ISA, and in the tools used to compile the program.
Although this section of the report focuses mostly on the low-level raw performance measures,
the compiler and other modern software system technologies must also be understood to
assess performance fully.
• Instructions per cycle refers to the average number of instructions that a particular CPU core
can execute and complete in each cycle. IPC is a strong function of the underlying
microarchitecture, or machine organization, of the CPU core. Many modern CPU cores use
superscalar, out-of-order designs that can issue and complete several instructions in a single cycle.

COMPUTATION AND COMMUNICATION’S EFFECTS ON PERFORMANCE
The raw computational capability of CPU cores is an important component of system-level
performance, but it is by no means the only one. To complete any useful tasks, a CPU core must
communicate with memory, a broad array of input/output devices, other CPU cores, and in many cases
other computer systems. The overhead and latency of that communication in effect delays
computational progress as the CPU waits for data to arrive and for system-level interlocks to clear.
Such delays tend to reduce peak computational rates to effective computational rates substantially. To
understand effective performance, it is important to understand the characteristics of the various forms
of communication used in modern computer systems.

In general, CPU cores perform best when all their operands (the inputs to the instructions) are stored in
the architected registers that are internal to the core. However, in most architectures, there tend to be
few such registers because of their relatively high cost in silicon area. As a result, operands must often
be fetched from memory before the actual computation specified by an instruction can be completed.
For most computer systems today, the amount of time it takes to access data from memory is more
than 100 times the single cycle time of the CPU core. And, worse yet, the gap between typical CPU
cycle times and memory-access times continues to grow. That imbalance would lead to a devastating
loss in performance of most programs if there were not hardware caches in these systems. Caches
hold the most frequently accessed parts of main memory in special hardware structures that have
much smaller latencies than the main memory system; for example, a typical level-1 cache has an
access time that is only 2-3 times slower than the single cycle time of the CPU core. They leverage a
principle called locality of reference that characterizes common data-access patterns exhibited by
most computer programs. To accommodate large working sets that do not fit in the first-level cache,
many computer systems deploy a hierarchy of caches. The later levels of caches tend to be
increasingly large (up to several megabytes), but as a result they also have longer access times and
resulting latencies.
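The benefit of such a hierarchy can be estimated with the standard average-memory-access-time formula. The cycle counts below are assumed for illustration, not taken from any particular system:

```python
def amat_cycles(hit_time, miss_rate, miss_penalty):
    """Average memory access time = hit time + miss rate * miss penalty."""
    return hit_time + miss_rate * miss_penalty

# Assumed hierarchy: an L1 hit costs 3 cycles and misses 5% of the time;
# an L1 miss goes to a larger L2 (12 cycles) that misses 20% of the time
# to main memory (150 cycles).
l2_latency = amat_cycles(12, 0.20, 150)          # effective L2 latency
effective_latency = amat_cycles(3, 0.05, l2_latency)
```

Even with a fairly high L1 miss rate, the effective latency stays far below the raw main-memory latency, which is exactly why caches rescue performance.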

Amdahl’s Law
Amdahl’s law sets the limit to which a parallel program can be sped up. Programs can be thought of as
containing one or more parallel sections of code that can be sped up with suitably parallel hardware
and a sequential section that cannot be sped up. Amdahl’s law is
Speedup = 1 / [(1 – P) + P/N],
where P is the proportion of the code that runs in parallel and N is the number of processors.
The way to think about Amdahl’s law is that the faster the parallel sections of the code run, the more
the remaining sequential code looms as the performance bottleneck. In the limit, if the parallel section
is responsible for 80 percent of the run time, and that section is sped up infinitely (so that it runs in
zero time), the other 20 percent now constitutes the entire run time. It would therefore have been sped
up by a factor of 5, but after that no amount of additional parallel hardware will make it go any faster.
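The formula translates directly into code; a short sketch that reproduces the 80-percent example above:

```python
def amdahl_speedup(p, n):
    """Speedup = 1 / ((1 - P) + P/N), where P is the fraction of the
    run time that is parallelizable and N is the number of processors."""
    return 1.0 / ((1.0 - p) + p / n)

amdahl_speedup(0.8, 4)          # 2.5x with four processors
amdahl_speedup(0.8, 1_000_000)  # approaches the limit of 1 / (1 - 0.8) = 5x
```

No matter how large N grows, the sequential 20 percent caps the speedup at a factor of 5.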

THE INTERPLAY OF SOFTWARE AND PERFORMANCE
Although the amazing raw performance gains of the microprocessor over the last 20 years have
garnered most of the attention, the overall performance and utility of computer systems are strong
functions of both hardware and software. In fact, as computer systems have deployed more hardware,
they have depended more and more on software technologies to harness their computational
capability. Software has exploited that capability directly and indirectly. Software has directly
exploited increases in computing capability by adding new features to existing software, by
solving larger problems more accurately, and by solving previously unsolvable problems. It has
indirectly exploited the capability through the use of abstractions in high-level programming languages,
libraries, and virtual-machine execution environments. By using high-level programming languages and
exploiting layers of abstraction, programmers can of memory (and possibly by the bus that carries the
traffic between the CPU and memory.) A third benchmark could be constructed that hammers on the
I/O subsystem with little dependence on the speed of either the CPU or the memory.

Handling most real workloads relies on all three computer subsystems, and their performance metrics
therefore reflect the combined speed of all three. Speed up only the CPU by 10 percent, and the
workload is liable to speed up, but not by 10 percent—it will probably speed up in a prorated way
because only the sections of the code that are CPU-bound will speed up. Likewise, speed up the
memory alone, and the workload performance improves, but typically much less than the memory
speedup in isolation. Numerous other pieces of computer systems make up the hardware. The CPU
architectures and microarchitectures encompass instruction sets, branch-prediction algorithms, and
other techniques for higher performance. Storage (disks and memory) is a central component. Memory,
flash drives, traditional hard drives, and all the technical details associated with their performance
(such as bandwidth, latency, caches, volatility, and bus overhead) are critical for a system’s overall
performance. In fact, information storage (hard-drive capacity) is understood to be increasing even
faster than transistor counts on the traditional Moore’s law curve, but it is unknown how long this will
continue. Switching and interconnect components, from switches to routers to T1 lines, are part of
every level of a computer system. There are also hardware interface devices (keyboards, displays, and
mice). All those pieces can contribute to what users perceive as the “performance” of the system
with which they are interacting.

Time to Solution
Consider a jackhammer on a city street. Assume that using a jackhammer is not a pastime enjoyable in
its own right—the goal is to get a job done as soon as possible. There are a few possible avenues for
improvement: try to make the jackhammer’s chisel strike the pavement more times per second; make
each stroke of the jackhammer more effective, perhaps by putting more power behind each stroke;
or think of ways to have the jackhammer drive multiple chisels per stroke. All three possibilities
have analogues in computer design, and all three have been and continue to be used. The notion of
“getting the job done as soon as possible” is known in the computer industry as time to solution and
has been the traditional metric of choice for system performance since computers were invented.
Modern computer systems are designed according to a synchronous, pipelined schema.
Synchronous means occurring at the same time. Synchronous digital systems are based on a
system clock, a specialized timer signal that coordinates all activities in the system. Early
computers had clock frequencies in the tens of kilohertz. Contemporary microprocessor designs
routinely sport clocks with frequencies in the 3-GHz range and above. To a first
approximation, the higher the clock rate, the higher the system performance. System designers cannot
pick arbitrarily high clock frequencies, however—there are limits to the speed at which the transistors
and logic gates can reliably switch, limits to how quickly a signal can traverse a wire, and serious
thermal power constraints that worsen in direct proportion to the clock frequency. Just as there are
physical limits on how fast a jackhammer’s chisel can be driven downward and then retracted for the
next blow, higher computer clock rates generally yield faster time-to-solution results, but there are
several immutable physical constraints on the upper limit of those clocks, and the attainable
performance speedups are not always proportional to the clock-rate improvement.
How much a computer system can accomplish per clock cycle varies widely from system to system and
even from workload to workload in a given system. More complex computer-instruction sets, such as
Intel’s x86, contain instructions that intrinsically accomplish more than a simpler instruction set, such
as that embodied in the ARM processor in a cell phone; but how effective the complex instructions are
is a function of how well a compiler can use them. Recent additions to historical instruction sets—such
as Intel’s SSE 1, 2, 3, and 4—attempt to accomplish more work per clock cycle by operating on grouped
data in a packed format (the equivalent of a jackhammer that drives multiple chisels per
stroke). Substantial system-performance improvements, such as factors of 2-4, are available to
workloads that happen to fit the constraints of the instruction-set extensions.
There is a special case of time-to-solution workloads: those which can be successfully sped up with
dedicated hardware accelerators. Graphics processing units (GPUs)—such as those from NVIDIA, from
ATI, and embedded in some Intel chipsets—are examples. These processors were designed originally to
handle the demanding computational and memory bandwidth requirements of 3D graphics but more
recently have evolved to include more general programmability features. With their intrinsically
massive floating-point horsepower, 10 or more times higher than is available in the general-purpose
(GP) microprocessor, these chips have become the execution engine of choice for some important
workloads. Although GPUs are just as constrained by the exponentially rising power dissipation of
modern silicon as are the GPs, GPUs are 1-2 orders of magnitude more energy-efficient for suitable
workloads and can therefore accomplish much more processing within a similar power budget.
Applying multiple jackhammers to the pavement has a direct analogue in the computer industry that
has recently become the primary development avenue for the hardware vendors: “multicore.” The
computer industry’s pattern has been for the hardware makers to leverage a new silicon process
technology to make a software-compatible chip that is substantially faster than any previous chips.
The new, higher-performing systems are then capable of
executing software workloads that would previously have been infeasible; the attractiveness of the new
software drives demand for the faster hardware, and the virtuous cycle continues. A few years ago,
however, thermal power dissipation grew to the limits of what air cooling can accomplish and began
to constrain the attainable system performance directly. When the power constraints threatened to
diminish the generation-to-generation performance enhancements, chipmakers Intel and AMD turned
away from making ever more complex microarchitectures on a single chip and began placing multiple
processors on a chip instead. The new chips are called multicore chips. Current chips have several
processors on a single die, and future generations will have even more.

Activity 1.
Instruction: Write your name, course and year level, excluding the score, on the blanks provided.
Write your answers in the spaces provided.

Name: Gertrude Sanchez     Course & Year: BSIT 2C     Score:
1. What is the relationship of Amdahl’s Law and Overall Computer Performance?


Answer: Amdahl’s law bounds overall computer performance: the overall performance improvement
gained by optimizing a single part of a system is limited by the fraction of time that the improved
part is actually used.

2. What are the differences between Loosely Coupled and Tightly Coupled Multiprocessor
System?

Answer: Tightly coupled systems share a single memory space and exchange information through that shared
common memory. Loosely coupled multiprocessors consist of distributed memory, where each processor has
its own memory and I/O channels. The processors communicate with each other via message passing or
interconnection switching.

3. What is an I/O interfacing?


Answer: An input-output interface is a method for transferring information between internal storage
(memory) and external peripheral devices. A peripheral device is one that provides input to, or accepts
output from, the computer; such devices are also called input-output devices.

4. What are the differences between serial and parallel communication interface?
Answer: The main difference between the serial and parallel interfaces is how they transmit data. A serial
interface sends or receives data one bit at a time over a series of clock pulses. A parallel interface
sends and receives 4, 8, or 16 bits of data at a time over multiple transmission lines.
5. What are the differences between volatile and non-volatile memory?


Answer: Volatile memory loses its contents when power is removed; it serves as temporary working
storage that the processor can both read and write (for example, RAM). Non-volatile memory retains
its contents even without power and serves as persistent storage (for example, ROM and flash memory).

Lesson 2: Computing Infrastructures

At the end of this lesson, YOU are expected to:

▪ estimate the power requirements for a computer system;


▪ explain the need for power and heat budgets within an IT environment;
▪ classify and describe the various types of servers and services required within
organizations; and
▪ describe the need for hardware and software integration.

Instruction: Write your name, course and year level, excluding the score, on the blanks
provided. Before proceeding to the content, please answer the questions below. These items
will test your knowledge of computing infrastructures. Write your answers in the spaces provided.

Name: Gertrude Sanchez     Course & Year: BSIT 2C     Score:

1. What is a server?
Answer: A server is a computer program or device that provides a service to another computer program
and its user, also known as the client. In a data center, the physical computer that a server program
runs on is also frequently referred to as a server.

2. What is a server farm?


Answer: A server farm or server cluster is a collection of computer servers, usually maintained by an organization to
supply server functionality far beyond the capability of a single machine. They often consist of thousands of computers which
require a large amount of power to run and to keep cool.

3. Are you familiar with hardware and software integration?


Answer: Yes
4. In connection to question no 3, what is hardware and software integration?


Answer: Software constantly stands by to await user input, at which point it then performs the necessary action by
sending a signal to the hardware. Hardware performs the required action by accessing memory stored as bits on memory
chips.

5. When it comes to servers, what’s the use of power and heat?


Answer: Capabilities of servers and their power consumption have increased over time. Multiply the power servers
consume by the number of servers in use today and power consumption emerges as a significant expense for many
companies. The main power consumers in a server are the processors and memory. Server processors are capping
and controlling their power usage, but the amount of memory used in a server is growing and with that growth, more
power is consumed by memory. In addition, today’s power supplies are very inefficient and waste power at the wall
socket and when converting AC power to DC. Also, when servers are in operation, the entire chassis will heat up;
cooling is required to keep the components at a safe operating temperature, but cooling takes additional power. This
article explains how servers consume power, how to estimate power usage, the mechanics of cooling, and other
related topics.
Power and Heat Budgets

Power Requirements

Each server, when properly configured and installed, must receive sufficient incoming power to supply
all installed components. The data center should be able to provide a stable, dual-current path to the
installed equipment. In addition, the power infrastructure must be designed to maintain system
uptime even during disruption of the main power source. It is important to use dedicated breaker
panels for all power circuits that supply power to your servers. The power system should be designed
to provide sufficient redundancy, eliminate all single points of failure, and allow the isolation of a
single server for testing or maintenance without affecting the power supplied to other servers.

Power Sources

It is important to secure multiple sources of power when possible. Ideally, multiple utility feeds should
be provided from different substations or power grids. This setup provides power redundancy and
backup.

The servers provide power input fault tolerance via redundant power supplies. Therefore, it is prudent
to feed the primary power supply of each server from one power grid, and to feed the redundant
supplies from a different power grid. If the primary power grid goes offline, the backup power grid
will provide power to the redundant supplies to keep the servers operating.

UPS and Backup Generator

Using an online uninterruptible power supply (UPS) and a backup power generator provides a good
strategy for obtaining an uninterruptible source of power. The online UPS filters, conditions, and
regulates the power. It protects the servers from fluctuating voltages, surges and spikes, and noise that
might be on the power line. The battery backup for the UPS should be capable of maintaining the
critical load of the data center for a minimum of 15 minutes during a power failure. This is typically
sufficient time to transfer power to an alternate feed or to the power generator.

The backup power generator should be able to carry the load of both the computer equipment and the
supporting heating, ventilation, and air conditioning (HVAC) equipment. The generator should include dual
power distribution switch gear with automatic transfer switching. To offset the possibility of a generator
failure, power system designers often include a temporary generator for secondary backup.

Grounding

Grounding design must address both the electrical service and the installed equipment. A properly
designed grounding system should have as low an impedance as is practically achievable for proper
operation of electronic devices as well as for safety. It is important to use a continuous, dedicated
ground for the entire power system to avoid a ground differential between various grounds. Grounding
design in the United States should comply with Article 250 of the U.S. National Electrical Code unless
superseded by local codes. Use an antistatic wrist strap when working inside the chassis.

All properly installed Sun servers are grounded through the power cable. However, there are reasons
for installing an additional mechanism to equalize potential. Problematic or deficient conduits can
negatively affect another server, especially with respect to the possibility of spreading voltages.
Additional grounding points help to avoid leakage currents, which in turn helps prevent system malfunctions.
Therefore, additional cables might be used to connect Sun servers and cabinets to the data center's
potential equalization rail. Enlist the aid of a qualified electrician to install grounding cables.

Emergency Power Control

A primary power switch that can disconnect all electronic equipment in the data center is specified by
NFPA 70 and NFPA 75 (National Fire Protection Association specifications) at each point of entry to the
data center. The primary switch should disconnect power to all servers and related electronic
equipment, HVAC equipment, UPS, and batteries. Multiple disconnects for separate parts of the
power systems are also acceptable, but in both cases, the switches must be unobstructed and
clearly marked.

Power Constraints

All Sun servers are shipped with a sufficient number of power supplies to provide all power needed by
all Sun supported configurations.

Sun does not test many third-party products that are compatible with Sun servers. Therefore, Sun
makes no representations about those products or about the power requirements for products not
supplied by Sun.

Power constraints can occur in two areas:

• Total power consumption

• Current limit of the power outlet

To maintain a safe facility, you must ensure that the current draw does not exceed the maximum
current limit for your power outlet. In the United States and Canada, the maximum is 80% of the
outlet's total capacity.

See the site planning product specifications for the maximum input current and power
consumption for your server.
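The 80% rule reduces to simple arithmetic. In the sketch below, the 20 A circuit rating and the per-server current draw are assumed values for illustration only; always use the figures from your site planning specifications:

```python
def usable_current_amps(circuit_rating_amps, derating=0.80):
    """US/Canada rule: draw at most 80% of the outlet's rated capacity."""
    return circuit_rating_amps * derating

def servers_per_circuit(circuit_rating_amps, server_draw_amps):
    """How many identical servers fit on one circuit within the derated limit."""
    return int(usable_current_amps(circuit_rating_amps) // server_draw_amps)

usable_current_amps(20)       # a 20 A circuit should carry at most 16 A
servers_per_circuit(20, 4.2)  # 3 servers drawing 4.2 A each fit within 16 A
```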

Power Supplies

Most servers shipped by Sun are configured with one or more power supplies, which are sufficient to
support the maximum load of the server.

The servers provide "N+1" power supply redundancy to maintain system uptime. An N+1
redundant power supply configuration does not add to the power capacity of the servers. "N"
represents the number of power supplies needed to power a fully configured server. The "1"
means that there is one additional power supply in the server to handle the load if a supply fails.
When the server is operating normally, all of the power supplies are turned on, even the redundant
supplies.

In a 1+1 configuration (that is, two power supplies are installed, each capable of providing enough
power for the entire server), both supplies are turned on and are delivering power. Each supply
delivers approximately 50% of the power needed by the server. If one supply fails, the supply that is
still online will deliver 100% of the power needed to keep the server running.

In a 2+1 configuration (that is, three power supplies are installed, with two power supplies
delivering enough power for the entire server), all three power supplies are turned on and are
delivering power. Each supply delivers approximately 33% of the power needed by the server. If one
supply fails, the supplies that are still online will each provide 50% of the power needed to keep the
server running.
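The load sharing described above can be sketched as follows; the 900 W load is an arbitrary example figure:

```python
def per_supply_load_watts(total_load_watts, supplies_online):
    """Online supplies share the load equally; redundancy changes only
    how many supplies are online, not the total load."""
    if supplies_online < 1:
        raise ValueError("at least one supply must be online")
    return total_load_watts / supplies_online

# 2+1 configuration carrying a 900 W load:
per_supply_load_watts(900, 3)  # 300 W each (about 33% of the load)
per_supply_load_watts(900, 2)  # 450 W each (50%) after one supply fails
```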

Most power supplies cannot support the maximum values on all inputs at the same time because
that would exceed the power supply's total output capacity. The load must be distributed among the
power supplies in a way that does not exceed their individual maximum values.

The servers have built-in protection against exceeding the output capacity of the individual power
supply. Be sure to consult the server documentation to learn how the servers will react during a power
overload.

PCI Bus Power

The PCI slots in the Sun servers comply with PCI Local Bus Specification Revision 2.1. The PCI buses in
the servers are designed to provide either 15 watts or 25 watts of power, depending on the server,
multiplied by the number of PCI slots in the PCI chassis. Thus, a 15-watt per slot, four-slot PCI chassis
has a total of 60 watts of power available. These 60 watts can be used in any manner that conforms to
the PCI standard. A single PCI slot can support a card that requires up to 25 watts. Each slot in the Sun
Fire V490 server can supply up to 25 watts of power. The total power used by all six slots in the V490
must not exceed 90 watts.

Here are some examples of how you might populate a 60-watt, four-slot PCI chassis:

• Example 1 - You install four 15-watt cards, which would use up all of the 60 watts of available
power and all slots in the PCI chassis.
• Example 2 - You install two 22-watt cards plus one 15-watt card. This combination of cards would
use 59 watts of the 60 watts available. In all probability, you would have to leave the fourth slot
empty, since PCI cards typically require more than 1 watt.
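A quick way to check a proposed card mix against both the chassis budget and the per-slot cap; the 60 W and 25 W limits follow the figures given above:

```python
def within_pci_budget(card_watts, chassis_budget=60, per_slot_max=25):
    """True if every card respects the per-slot cap and the total
    stays within the chassis power budget."""
    return (all(w <= per_slot_max for w in card_watts)
            and sum(card_watts) <= chassis_budget)

within_pci_budget([15, 15, 15, 15])  # Example 1: exactly 60 W -> True
within_pci_budget([22, 22, 15])      # Example 2: 59 W -> True
within_pci_budget([22, 22, 15, 2])   # adding even a 2 W card exceeds 60 W
```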

Heat Output and Cooling

Servers and related equipment generate a considerable amount of heat in a relatively small area. This
is because every watt of power used by a server is dissipated into the air as heat. The amount of
heat output per server varies, depending on the server configuration.

The heat load in a data center is seldom distributed uniformly and the areas generating the most heat
can change frequently. Further, data centers are full of equipment that is highly sensitive to
temperature and humidity fluctuations.

See the site planning product specifications for your server's heat output, temperature, and humidity
specifications.

Proper cooling and related ventilation of a server within a cabinet is affected by many variables,
including the cabinet and door construction, cabinet size, and thermal dissipation of any other
components within the cabinet. Therefore, it is the responsibility of the data center manager to ensure
that the cabinet's ventilation system is sufficient for all the equipment mounted in the cabinet.

Do not use a server's nameplate power ratings when calculating the server's heat release. The purpose
of the nameplate power ratings is solely to indicate the server's hardware limits for maximum power
draw.

Chassis Airflow

The flow of air through the server is essential to the proper cooling of the server. Even though the data
center air might be at a safe and steady temperature at one location, the temperature of the air
entering each server is critical. Problems sometimes arise for these reasons:

• One server is positioned so that its hot exhaust air is directed into the intake air of another server,
thus preheating the intake air of the second server.
• Servers are sometimes mounted in cabinets that restrict airflow excessively. This might occur
because the cabinets have solid front or rear doors, inadequate plenums, or they might have
cooling fans that work against the fans in the servers themselves.

• A server might be mounted in a cabinet above a device that generates a great amount of heat.

Almost all Sun servers draw in ambient air for cooling from the front and discharge heated exhaust air
to the rear. The servers require the front and back cabinet doors to be at least 63% open for
adequate airflow. This can be accomplished by removing the doors, or by ensuring that the doors have
a perforated pattern that provides at least 63% open area. In addition, maintain a minimum of 1.5-inch
(3.8-cm) clearance between the servers and front and back doors of a cabinet.

The servers are equipped with fans that route cool air throughout the chassis. As long as the necessary
air conditioning is provided in the data center to dissipate the heat load, and sufficient space and door
openings are provided at the front and back of the servers, the fans will enable the rack mounted
servers to work within the operational temperature specifications. Again, see the site planning product
specifications for your server's temperature specifications.

Units of Measurement

A standard unit for measuring the heat generated within, or removed from, a data center is the
British Thermal Unit (Btu). The heat produced by electronic devices such as servers is usually
expressed as the number of Btu generated in an hour (Btu/hr).

Watts (W) is also a term used to express heat output and cooling. One watt is equal to 3.412
Btu/hr. For example, if you use 100 watts of power, you generate 341.2 Btu/hr.

Air conditioning capacity is also measured in Btu/hr or watts. Large air conditioning systems are rated
in tons. One ton of air conditioning is a unit of cooling equal to 12,000 Btu/hr or 3517 watts.
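These conversions are fixed constants and can be captured directly:

```python
BTU_HR_PER_WATT = 3.412
BTU_HR_PER_TON = 12_000

def watts_to_btu_hr(watts):
    """Convert electrical power dissipated as heat into Btu per hour."""
    return watts * BTU_HR_PER_WATT

def btu_hr_to_tons(btu_hr):
    """Convert a heat load into tons of air conditioning."""
    return btu_hr / BTU_HR_PER_TON

watts_to_btu_hr(100)    # 341.2 Btu/hr, matching the example above
btu_hr_to_tons(12_000)  # exactly 1 ton of air conditioning
```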

Determining Heat Output and Cooling

The site planning product specifications provides the minimum, typical, and maximum heat output and
cooling requirements for base configurations of your server. These specifications are the
measured power ratings, which are calculated for the base server configurations as defined by
Sun. It is important to realize the nameplate ratings are only a reference to the servers' hardware limits
that could accommodate future components. Do not use these values to calculate the servers'
current power and cooling requirements.

In addition to the heat load generated by the servers, some cabinets include fans, power sequencers,
and other devices that generate heat. Be sure to obtain the heat output values of these devices from
your cabinet supplier. Also, when calculating data center cooling requirements, be sure to include heat
dissipation for all equipment in the room.

To determine the heat output and cooling requirements of the rackmounted servers, add the Btu or
watts for each server in the rack. For example, if one server is putting out 1000 Btu/hr (293 watts) and
another one is putting out 2000 Btu/hr (586 watts), the total heat generated is 3000 Btu/hr (879 watts).
The air conditioning equipment then should be properly sized to cool at least 3000 Btu/hr (879 watts)
to accommodate these two servers.
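In code, sizing the cooling for a rack is a simple sum over the servers it holds, as in the two-server example above:

```python
def rack_heat_btu_hr(server_outputs_btu_hr):
    """Total heat a rack dumps into the room; cooling must be sized
    to remove at least this much."""
    return sum(server_outputs_btu_hr)

rack_heat_btu_hr([1000, 2000])          # 3000 Btu/hr for the example rack
rack_heat_btu_hr([1000, 2000]) / 3.412  # about 879 watts
```

Remember to add the heat output of fans, power sequencers, and other cabinet devices to this sum when sizing room cooling.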

Using Rack Location Units to Determine Heat Output and Cooling

In the book Enterprise Data Center Design and Methodology by Rob Snevely
(available at http://www.sun.com/books/blueprints.series.html) the concept of using rack
location units (RLUs) to determine heat output and cooling requirements in the data center is
discussed. A rack location is the specific location on the data center floor where services that can
accommodate power, cooling, physical space, network connectivity, functional capacity, and rack
weight requirements are delivered. Services delivered to the rack location are specified in units of
measure, such as watts or Btus, thus forming the term rack location unit.

Since today's data centers house hundreds or thousands of servers with widely varying power and
cooling requirements, RLUs can help you determine where greater or less power and cooling
services are needed. RLUs can also help you determine how to locate the racks to maximize services.
Using square footage calculations for power and cooling assumes that power and cooling loads are
the same across the entire room. Using RLUs lets you divide the data center into areas that need
unique power and cooling services.

To determine RLUs for heat output and cooling, you must add together the heat output and cooling
requirements for all servers installed in the rack. Then assess the RLUs for adjacent racks. For
example, suppose you had 24,000 square feet of space in the data center. You might have a 12,000-
square foot area where 600 PCs output 552,000 Btu/hour and need 46 Btu/hour of cooling per square
foot. Another 6000-square foot area might contain 48 severs which output 1,320,000 Btu/hour and
need 220 Btu/hour of cooling per square foot. A third 6000-square foot area might contain 12 high-end
servers which output 972,000 Btu/hour and need 162 Btu/hour of cooling per square foot.

Using a square footage calculation for this example yields a cooling requirement for all three sections of 2,844,000 Btu/hour, or 118.5 Btu/hour of cooling per square foot. This average would exceed the 46 Btu/hour per square foot needed by the PCs, yet fall far short of the cooling required in both server areas. Knowing the RLUs for power and cooling enables the data center manager to adjust the physical design, the power and cooling equipment, and rack configurations within the facility to meet the systems' requirements.
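The RLU arithmetic in this example can be checked with a short script (a sketch in Python; the area names are illustrative):

```python
# Reproducing the RLU example above: three areas of the data center, each
# with its own floor area and heat load. Btu/hour of cooling per square
# foot is simply heat output divided by floor area.
areas = [
    {"name": "PC area",       "sq_ft": 12_000, "btu_hr": 552_000},
    {"name": "server area",   "sq_ft": 6_000,  "btu_hr": 1_320_000},
    {"name": "high-end area", "sq_ft": 6_000,  "btu_hr": 972_000},
]
for a in areas:
    a["btu_hr_per_sq_ft"] = a["btu_hr"] / a["sq_ft"]
    print(a["name"], a["btu_hr_per_sq_ft"])

total_btu = sum(a["btu_hr"] for a in areas)        # 2,844,000 Btu/hour
avg = total_btu / sum(a["sq_ft"] for a in areas)   # 118.5 Btu/hr per sq ft
print(total_btu, avg)
```

The per-area figures (46, 220, and 162 Btu/hour per square foot) show why a single room-wide average of 118.5 would over-cool one area and under-cool the other two.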

Servers
In computing, a server is a piece of computer hardware or software (computer program) that provides
functionality for other programs or devices, called "clients". This architecture is called the client–server
model. Servers can provide various functionalities, often called "services", such as sharing data or
resources among multiple clients, or performing computation for a client. A single server can serve
multiple clients, and a single client can use multiple servers. A client process may run on the same
device or may connect over a network to a server on a different device. Typical servers are database
servers, file servers, mail servers, print servers, web servers, game servers, and application servers.
Client–server systems are today most frequently implemented by (and often identified with) the
request–response model: a client sends a request to the server, which performs some action and
sends a response back to the client, typically with a result or acknowledgment. Designating a
computer as "server-class hardware" implies that it is specialized for running servers on it. This often
implies that it is more powerful and reliable than standard personal computers, but alternatively, large computing clusters may be composed of many relatively simple, replaceable server components.
History
The use of the word server in computing comes from queueing theory, where it dates to the mid-20th
century, being notably used in Kendall (1953) (along with "service"), the paper that introduced
Kendall's notation. In earlier papers, such as Erlang (1909), more concrete terms such as
"[telephone] operators" are used.

In computing, "server" dates at least to RFC 5 (1969),[4] one of the earliest documents describing
ARPANET (the predecessor of the Internet), and is contrasted with "user", distinguishing two types of host: "server-host" and "user-host". The use of "serving" also dates to early documents, such as RFC 4, contrasting "serving-host" with "using-host".
The Jargon File defines "server" in the common sense of a process performing service for
requests, usually remote, with the 1981 (1.1.0) version reading:
SERVER n. A kind of DAEMON which performs a service for the requester, which often runs on a
computer other than the one on which the server runs.

Operation
(Figure: a network based on the client–server model, in which multiple individual clients request services and resources from centralized servers.)
Strictly speaking, the term server refers to a computer program or process (running program).
Through metonymy, it refers to a device used for (or a device dedicated to) running one or several
server programs. On a network, such a device is called a host. In addition to server, the words serve
and service (as noun and as verb) are frequently used, though servicer and servant are not. [a] The
word service (noun) may refer to either the abstract form of functionality, e.g. Web service.
Alternatively, it may refer to a computer program that turns a computer into a server, e.g. Windows
service. Originally used as "servers serve users" (and "users use servers"), in the sense of "obey", today one often says that "servers serve data", in the same sense as
"give". For instance, web servers "serve [up] web pages to users" or "service their requests".
The server is part of the client–server model; in this model, a server serves data for clients. The nature
of communication between a client and server is request and response. This is in contrast with peer-to-
peer model in which the relationship is on-demand reciprocation. In principle, any computerized
process that can be used or called by another process (particularly remotely, particularly to share a
resource) is a server, and the calling process or processes is a client. Thus, any general-purpose
computer connected to a network can host servers. For example, if files on a device are shared by
some process, that process is a file server. Similarly, web server software can run on any capable
computer, and so a laptop or a personal computer can host a web server.
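As a concrete illustration of that last point, Python's built-in http.server module can turn an ordinary laptop into a small web server. This is a minimal sketch, not a production setup; the handler class and response text are invented for the example:

```python
# A tiny web server running on whatever machine executes this script.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from a tiny web server"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

# Port 0 asks the operating system for any free port.
server = ThreadingHTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The same machine now acts as a client, sending a request and
# receiving a response, exactly the request-response model above.
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    status, text = resp.status, resp.read().decode()
server.shutdown()
print(status, text)
```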
While request–response is the most common client-server design, there are others, such as the
publish–subscribe pattern. In the publish-subscribe pattern, clients register with a pub-sub server,
subscribing to specified types of messages; this initial registration may be done by request-response.
Thereafter, the pub-sub server forwards matching messages to the clients without any further
requests: the server pushes messages to the client, rather than the client pulling messages from the
server as in request-response.
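The publish-subscribe flow described above can be sketched in a few lines (an in-process toy, not a real messaging system; all names are illustrative):

```python
# Minimal publish-subscribe sketch: clients register interest in message
# types once, then the "server" pushes matching messages to them without
# any further requests from the clients.
from collections import defaultdict

class PubSubServer:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        # The initial registration step, analogous to a request-response.
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The server pushes to every subscriber; clients do not poll.
        for callback in self.subscribers[topic]:
            callback(message)

received = []
server = PubSubServer()
server.subscribe("alerts", received.append)
server.publish("alerts", "disk nearly full")   # delivered to the subscriber
server.publish("metrics", "cpu=40%")           # no subscriber, dropped
print(received)
```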

Purpose
The role of a server is to share data as well as to share resources and distribute work. A server
computer can serve its own computer programs as well; depending on the scenario, this could be part
of a quid pro quo transaction, or simply a technical possibility. The following table shows several
scenarios in which a server is used.

Server type, purpose, and clients:

Application server
Purpose: Hosts web apps (computer programs that run inside a web browser), allowing users on the network to run and use them without having to install a copy on their own computers. Unlike what the name might imply, these servers need not be part of the World Wide Web; any local network would do.
Clients: Computers with a web browser

Catalog server
Purpose: Maintains an index or table of contents of information that can be found across a large distributed network, such as computers, users, files shared on file servers, and web apps. Directory servers and name servers are examples of catalog servers.
Clients: Any computer program that needs to find something on the network, such as a domain member attempting to log in, an email client looking for an email address, or a user looking for a file

Communications server
Purpose: Maintains an environment needed for one communication endpoint (user or device) to find other endpoints and communicate with them. It may or may not include a directory of communication endpoints and a presence detection service, depending on the openness and security parameters of the network.
Clients: Communication endpoints (users or devices)

Computing server
Purpose: Shares vast amounts of computing resources, especially CPU and random-access memory, over a network.
Clients: Any computer program that needs more CPU power and RAM than a personal computer can probably afford. The client must be a networked computer; otherwise, there would be no client–server model.

Database server
Purpose: Maintains and shares any form of database (organized collections of data with predefined properties that may be displayed in a table) over a network.
Clients: Spreadsheets, accounting software, asset management software, or virtually any computer program that consumes well-organized data, especially in large volumes

Fax server
Purpose: Shares one or more fax machines over a network, thus eliminating the hassle of physical access.
Clients: Any fax sender or recipient

File server
Purpose: Shares files and folders, storage space to hold files and folders, or both, over a network.
Clients: Networked computers are the intended clients, even though local programs can be clients

Game server
Purpose: Enables several computers or gaming devices to play multiplayer video games.
Clients: Personal computers or gaming consoles

Mail server
Purpose: Makes email communication possible in the same way that a post office makes snail mail communication possible.
Clients: Senders and recipients of email

Media server
Purpose: Shares digital video or digital audio over a network through media streaming (transmitting content in a way that the portions received can be watched or listened to as they arrive, as opposed to downloading an entire file and then using it).
Clients: User-attended personal computers equipped with a monitor and a speaker

Print server
Purpose: Shares one or more printers over a network, thus eliminating the hassle of physical access.
Clients: Computers in need of printing something

Sound server
Purpose: Enables computer programs to play and record sound, individually or cooperatively.
Clients: Computer programs of the same computer, and network clients

Proxy server
Purpose: Acts as an intermediary between a client and a server, accepting incoming traffic from the client and sending it to the server. Reasons for doing so include content control and filtering, improving traffic performance, preventing unauthorized network access, or simply routing the traffic over a large and complex network.
Clients: Any networked computer

Virtual server
Purpose: Shares hardware and software resources with other virtual servers. It exists only as defined within specialized software called a hypervisor. The hypervisor presents virtual hardware to the server as if it were real physical hardware. Server virtualization allows for a more efficient infrastructure.
Clients: Any networked computer

Web server
Purpose: Hosts web pages. A web server is what makes the World Wide Web possible. Each website has one or more web servers.
Clients: Computers with a web browser

Almost the entire structure of the Internet is based upon a client–server model. High-level root
nameservers, DNS, and routers direct the traffic on the internet. There are millions of servers
connected to the Internet, running continuously throughout the world and virtually every action taken
by an ordinary Internet user requires one or more interactions with one or more servers. There are
exceptions that do not use dedicated servers; for example, peer-to-peer file sharing and some
implementations of telephony (e.g. pre-Microsoft Skype).

Hardware
Hardware requirements for servers vary widely, depending on the server's purpose and its software. Servers are, more often than not, more powerful and expensive than the clients that connect to
them. Since servers are usually accessed over a network, many run unattended without a computer
monitor or input device, audio hardware and USB interfaces. Many servers do not have a graphical
user interface (GUI). They are configured and managed remotely. Remote management can be
conducted via various methods including Microsoft Management Console (MMC), PowerShell, SSH, and browser-based out-of-band management systems such as
Dell's iDRAC or HP's iLO.
Large servers
Large traditional single servers would need to be run for long periods without interruption. Availability
would have to be very high, making hardware reliability and durability extremely important. Mission-
critical enterprise servers would be very fault tolerant and use specialized hardware with low failure
rates in order to maximize uptime. Uninterruptible power supplies might be incorporated to guard
against power failure. Servers typically include hardware redundancy such as dual power supplies,
RAID disk systems, and ECC memory, along with extensive pre-boot memory testing and verification.
Critical components might be hot swappable, allowing technicians to replace them on the running
server without shutting it down, and to guard against overheating, servers might have more
powerful fans or use water cooling. They will often be able to be configured, powered up and down or
rebooted remotely, using out-of-band management, typically based on IPMI. Server casings are usually
flat and wide, and designed to be rack-mounted, either on 19-inch racks or on Open Racks.
These types of servers are often housed in dedicated data centers. These will normally have very
stable power and Internet and increased security. Noise is also less of a concern, but power
consumption and heat output can be a serious issue. Server rooms are equipped with air
conditioning devices.


Server Farms
A server farm or server cluster is a collection of computer servers – usually maintained by an
organization to supply server functionality far beyond the capability of a single machine. Server
farms often consist of thousands of computers which require a large amount of power to run and to
keep cool. At the optimum performance level, a server farm has enormous costs (both financial and
environmental) associated with it. Server farms often have backup servers, which can take over the
function of primary servers in the event of a primary-server failure. Server farms are typically
collocated with the network switches and/or routers which enable communication between the different
parts of the cluster and the users of the cluster. The computers, routers, power supplies, and related electronics are typically mounted on 19-inch racks in a server room or data center.
Server farms are commonly used for cluster computing. Many modern supercomputers comprise
giant server farms of high-speed processors connected by either Gigabit Ethernet or custom
interconnects such as Infiniband or Myrinet. Web hosting is a common use of a server farm; such a
system is sometimes collectively referred to as a web farm. Other uses of server farms include
scientific simulations (such as computational fluid dynamics) and the rendering of 3D computer
generated imagery (see render farm).
Server farms are increasingly being used instead of or in addition to mainframe computers by large
enterprises, although server farms do not yet reach the same reliability levels as mainframes. Because
of the sheer number of computers in large server farms, the failure of an individual machine is a
commonplace event, and the management of large server farms needs to take this into account by
providing support for redundancy, automatic failover, and rapid reconfiguration of the server cluster.
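The failover idea in the paragraph above can be sketched very simply (a toy illustration; the server names and health-check function are invented):

```python
# Failover sketch: traffic goes to the first server that passes a health
# check, so a backup automatically takes over when the primary fails.
def pick_server(servers, is_healthy):
    for s in servers:
        if is_healthy(s):
            return s
    raise RuntimeError("no healthy server available")

servers = ["primary", "backup-1", "backup-2"]
down = {"primary"}  # simulate a primary-server failure

# With the primary down, requests are routed to the first backup.
print(pick_server(servers, lambda s: s not in down))
```

Real server farms implement this with dedicated load balancers and health-check protocols, but the routing decision is essentially the one shown.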
Performance
The performance of the largest server farms (thousands of processors and up) is typically limited by
the performance of the data center's cooling systems and the total electricity cost rather than by the
performance of the processors. Computers in server farms run 24/7 and consume large amounts of
electricity, and for this reason, the critical design parameter for both large and continuous systems
tends to be performance per watt rather than cost of peak performance or (peak performance / (unit *
initial cost)). Also, for high-availability systems that must run 24/7 (unlike supercomputers, which can be power-cycled on demand and also tend to run at much higher utilization), more attention is placed on power-saving features such as variable clock speed and the ability to turn off computer parts, processor parts, and entire computers (Wake-on-LAN and virtualization) according to demand without bringing down services. The network connecting the servers in a server farm is also an essential factor in the overall performance, especially when running applications that process massive volumes of data.


Performance per watt


The EEMBC EnergyBench, SPECpower, and the Transaction Processing Performance Council's TPC-Energy
are benchmarks designed to predict performance per watt in a server farm. The power used by each
rack of equipment can be measured at the power distribution unit. Some servers include power
tracking hardware so the people running the server farm can measure the power used by each server.
The power used by the entire server farm may be reported in terms of power usage effectiveness or
data center infrastructure efficiency.
According to some estimates, for every 100 watts spent on running the servers, roughly another
50 watts is needed to cool them. For this reason, the siting of a server farm can be as important as processor selection in achieving power efficiency. Iceland, which has a cold climate all year as well as
cheap and carbon-neutral geothermal electricity supply, is building its first major server farm hosting
site. Fiber optic cables are being laid from Iceland to North America and Europe to enable companies
there to locate their servers in Iceland. Other countries with favorable conditions, such as Canada,[8]
Finland, Sweden and Switzerland, are trying to attract cloud computing data centers. In these
countries, heat from the servers can be cheaply vented or used to help heat buildings, thus reducing
the energy consumption of conventional heaters.
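The power usage effectiveness (PUE) mentioned above relates directly to the 100-watt estimate: PUE is total facility power divided by IT equipment power. A one-line sketch (the function name is illustrative):

```python
# PUE = total facility power / IT equipment power.
# If every 100 W of server load needs roughly another 50 W for cooling
# and other overhead, as estimated above, PUE is about 1.5.
def pue(it_watts, overhead_watts):
    return (it_watts + overhead_watts) / it_watts

print(pue(100, 50))
```

A perfectly efficient facility would have a PUE of 1.0, with every watt going to the IT equipment itself.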

Hardware and Software Integration


Software integration is the process of bringing together various types of software sub-systems
so that they create a unified single system. Software integration can be required for a number of
reasons, such as:
• Migrating from a legacy system to a new database system, including cloud-based data storage
• Setting up a data warehouse, where data needs to be moved through an ETL process from its production system to the data storage system
• Linking different systems, such as various databases and file-based systems
• Joining various stand-alone systems to make it easier to replicate processes and gain uniform results
This kind of application integration is increasingly necessary for companies who use distinct systems to
perform various tasks. These operations can include anything from recording sales, keeping track of
supplier information, and storing customer data. To incorporate all of these different systems and applications into one
system, where data can be collected and analyzed, requires specialized functionality.
A data integration tool seeks to provide a solution for cloud-based data repositories, where large
amounts of data from disparate sources need to be collated, processed, and analyzed as one. By using
such tools, companies can combine and utilize all of their data.

There are four methods which are used for software integration:
• Vertical integration, which integrates software based on the specifically required functionality
• Star system integration, which interconnects each sub-system with the rest of the sub-systems
• Enterprise Service Bus (ESB), where a custom-made sub-system is created which allows a
variety of different systems to communicate with each other simultaneously
• Common data format integration is independent of applications so that all data is in one
format and so doesn’t have to be converted into others depending on the application using it.
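Common data format integration can be illustrated with a small sketch (the record shapes and field names are invented for the example):

```python
# Two systems emit customer records in different shapes. Each is
# normalized once into a common format, so downstream applications
# never need per-source conversion logic.
def from_crm(rec):
    return {"id": rec["customer_id"], "name": rec["full_name"]}

def from_billing(rec):
    return {"id": rec["acct"], "name": f'{rec["first"]} {rec["last"]}'}

crm_record = {"customer_id": 7, "full_name": "Ada Lovelace"}
billing_record = {"acct": 7, "first": "Ada", "last": "Lovelace"}

# Both sources now produce identical records in the common format.
print(from_crm(crm_record) == from_billing(billing_record))
```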


Activity 2.
Instruction: Write your name, course and year level, excluding the score on the blanks provided.
Write the answer on a space provided.

Name: Gertrude Sanchez          Course & Year: BSIT 2C          Score:

1. What is the main purpose of a server?


Answer: A server stores, sends, and receives data. In essence, it "serves" something else and exists to
provide services. A computer, software program, or even a storage device may act as a server, and it may
provide one service or several.

2. Why computer performance matters?


Answer: Performance matters to your users and your customers. A fast website generates more money, reduces bounce rate, generates more clicks, ensures better ranking in organic search, increases conversions, and is a key factor in making sure users come back.

3. What’s the difference between virtual and web server?


Answer: The key difference between virtual machine and server is that a virtual machine is a software
similar to a physical computer that can run an operating system and related applications while a server is a
device or software that can provide services requested by the other computers or clients in the network.

4. Why do server farms exist?


Answer: A server farm is designed to provide a massive and redundant source of computing power for
computing-intensive applications. Server farms generally consist of thousands of servers, but their size can
vary in different organizations and based on underlying requirements.

5. Why is it necessary to know the heat and power requirement of a server?
Answer: The design of your electrical power system must ensure that adequate, high-quality power is
provided to each server and all peripherals at all times. Power system failures can result in server shutdown
and possible loss of data.


Activity 2.

Instruction: Write your name, course and year level, excluding the score on the blanks provided.
Write the answer on a space provided.

Name: Gertrude Sanchez          Course & Year: BSIT 2C          Score:

1. Define a server rack. Why is it necessary to have a rack for the server?
Answer: A rack server is a computer optimised for server operation. It is designed and manufactured to fit into
a rectangular mounting system, also called a rack mount. It contains several mounting slots and is also known
as a rack bay. A rack server is designed to hold a hardware unit securely in place with screws.

Lab 1.

Open a computer and identify the cooling systems within the computer.
Answer: Computer cooling systems are passive or active systems designed to regulate and dissipate the heat generated by a computer, so as to maintain optimal performance and protect the computer from the damage that would occur from overheating.



REFERENCES

E. Books

Silberschatz, Abraham, et al., Copyright © 2018: Operating System Concepts, 10th Edition

Stallings, William, Copyright © 2013: Computer Organization and Architecture: Designing for Performance, 9th Edition

Laan, Sjaak, Copyright © 2017: IT Infrastructure Architecture - Infrastructure Building Blocks and
Concepts 3rd Edition

F. Electronic Media

https://nptel.ac.in/courses/106/108/106108101/

https://www.geeksforgeeks.org/introduction-of-operating-system-set-1/

https://en.wikipedia.org/wiki/Server_(computing)

