CSC 111 Lecture Notes 2019


RESTRICTED

COURSE TITLE: INTRODUCTION TO COMPUTERS


COURSE CODE: CSC 111
SESSION: 2019/2020
COURSE AIMS
The aim of this course is to provide students with the basic understanding of the
computer and its applications in everyday life.
COURSE OBJECTIVES
The specific objectives of this course are to:
• Provide a basic understanding of the historical evolution of the computer, the types of computers and the classification of computers.
• Enable students to understand the components of the computer – the hardware and software.
• Help students to identify the different categories of computer software and their uses.
• Introduce students to computer programming, with emphasis on the building blocks and stages of programming and the writing of computer programs using Visual Basic.
• Enable students to identify and appreciate the areas of application of computers in society, thereby encouraging them to regard the computer as a tool for human use rather than a master.
• Create awareness, at this early stage of the study of computers, of the potential threats that computer viruses pose to the smooth operation of computers.

Module 1: Understanding the computer


Unit 1: Basic Concepts.
Unit 2: Historical overview of the Computer.
Unit 3: Classification of Computers

Module 2: Computer Hardware


Unit 4: Hardware Components

Module 3: Computer Software


Unit 5: Computer Software

Module 4: Programming the Computer


Unit 6: Computer Languages


Unit 7: Basic Principles of Computer Programming


Unit 8: Flowcharts and Algorithms

Module 5: Threats to the Computer


Unit 9: Computer Virus

SYNOPSES OF THE STUDY UNITS


Unit 1: This unit presents the definition of the computer, a basic understanding of data processing, the concept of data and information, methods of data processing and the characteristics of a computer.

Unit 2: It gives a brief history of computer technology, the evolution of the computer and the generations of computers.

Unit 3: You are introduced to the classification of computers. This involves classification based on size, type of signal and purpose. At the end of the unit you will be able to differentiate one class of computer from the others.

Unit 4: In this unit you will be familiarized with the hardware components of the computer. This will enable you to appreciate the importance of each component to the overall smooth operation of the computer.

Unit 5: This unit introduces computer software in some detail. You will learn about system software, language translators such as compilers, utility software, and application programs in greater detail.

Unit 6: In this unit you will learn about computer programming languages such as
low level language (machine language and assemblers) and the high level
languages.

Unit 7: You will be introduced to computer programming in this unit. Topics covered include the concept of problem solving with computers, principles of programming and stages of programming.

Unit 8: This unit builds on Unit 7 by discussing the use of flowcharts and algorithms in computer programming. These two concepts are essential ingredients in the writing of well-structured computer programs.

Unit 9: This is the concluding unit of the course. It presents a discussion of the computer virus as one of the major threats to the smooth operation of computers. Detailed discussions of the computer virus, its mode of transmission, detection, prevention and cure are presented.


MODULE 1: UNDERSTANDING THE COMPUTER


In this module we shall discuss the following topics:
• Basic concepts
• Historical overview of the development of computers
• Generations of computers
• Classification of computers
STUDY UNIT 1: BASIC CONCEPTS
Table of Contents
Definition of the computer
Basic understanding of data processing
The concept of data and information
Methods of data processing
Characteristics of a computer
1.0 Introduction
The computer is fast becoming the universal machine of the 21st century. Early computers were large in size and too expensive to be owned by individuals. Thus they were confined to laboratories and a few research institutes, and they could only be programmed by computer engineers. The basic applications were confined to undertaking complex calculations in science and engineering. Today, the computer is no longer confined to the laboratory. Computers, and indeed computing, have become embedded in almost every item we use. Computing is fast becoming ubiquitous. Its applications transcend science, engineering, communication, space science, aviation, financial institutions, social sciences, humanities, the military, transportation, manufacturing and the extractive industries, to mention but a few. This unit presents background information about computers.
2.0 Objectives
The objective of this unit is to enable students to understand the following basic
concepts:
(a) Definition of the computer
(b) Basic understanding of data processing
(c) The concept of data and information
(d) Methods of data processing
(e) Characteristics of a computer
3.0 Definitions
Computer: A computer is basically defined as a tool or machine used for
processing data to give required information. It is capable of:
a. taking input data through the keyboard (input unit)

b. storing the input data in a diskette, hard disk or other medium
c. processing it at the central processing unit (CPU) and
d. giving out the result (output) on the screen or the Visual Display Unit (VDU).

Figure 3.0: Schematic diagram to define a computer
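The four capabilities listed above form the classic input–process–output cycle. It can be sketched in a few lines of Python (an illustration added to these notes, not part of the original; the doubling step is an arbitrary stand-in for "processing"):

```python
# A minimal sketch of the input -> process -> output cycle.
def process(data):
    # The "processing" here simply doubles each number (the CPU stage).
    return [x * 2 for x in data]

input_data = [1, 2, 3]          # input unit (e.g. keyboard)
stored = list(input_data)       # storage (e.g. memory or disk)
result = process(stored)        # central processing unit
print(result)                   # output unit (e.g. screen / VDU)
```

However trivial, the sketch shows the separation of stages that the schematic diagram above describes: data enters, is held in a store, is transformed, and is presented back to the user.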


Data: The term data refers to facts about a person, object or place, e.g. name, age, complexion, school, class, height etc.
Information: This refers to processed data or a meaningful statement, e.g. net pay of workers, examination results of students, a list of successful candidates in an examination or interview etc.
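Using the notes' own example of workers' net pay, the step from data to information can be sketched in Python. The names, salaries and tax rate below are invented purely for illustration:

```python
# Raw data: facts about workers (name, gross pay) plus a flat tax rate.
# All figures are hypothetical.
workers = [("Ada", 50000), ("Bayo", 40000)]
TAX_RATE = 0.10

# Processing turns data into information: each worker's net pay.
net_pay = {name: gross * (1 - TAX_RATE) for name, gross in workers}
print(net_pay)   # {'Ada': 45000.0, 'Bayo': 36000.0}
```

The list of names and salaries is mere data; the computed net pay is information, because it answers a meaningful question.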
3.1 Methods of Data Processing
The following are the three major methods that have been widely used for data
processing over the years:
a. Manual method
b. Mechanical method and
c. Computer method.
Manual Method
The manual method of data processing involves the use of chalk, wall, pen, pencil and the like. These devices or tools facilitate human effort in recording, classifying, manipulating, sorting and presenting data or information. Manual data processing entails considerable human effort. Thus, the manual method is cumbersome, tiresome, boring, frustrating and time consuming. Furthermore, the processing of data by the manual method is likely to be affected by human errors, and when there are errors, the reliability, accuracy, neatness, tidiness and validity of the data would be in doubt. The manual method does not allow for the processing of large volumes of data on a regular and timely basis.
Mechanical Method
The mechanical method of data processing involves the use of machines such as the typewriter, Roneo (stencil duplicating) machines, adding machines and the like. These machines facilitate human effort in recording, classifying, manipulating, sorting and presenting data or information. The mechanical operations are basically routine in nature; there is virtually no creative thinking. The operations are noisy, hazardous, error prone and untidy, and the mechanical method does not allow for the processing of large volumes of data continuously and on a timely basis.


Computer Method
The computer method of carrying out data processing has the following major
features:
a. Data can be steadily and continuously processed.
b. The operations are practically noiseless.
c. There is a store where data and instructions can be kept temporarily and permanently.
d. Errors can be easily and neatly corrected.
e. Output reports are usually very neat and can be produced in various forms, such as graphs, diagrams, pictures etc.
f. Accuracy and reliability are highly enhanced.
Below are further attributes of the computer which make it an indispensable tool for human beings:

3.2 Characteristics of a Computer


1. Speed: The computer can manipulate large data at incredible speed and
response time can be very fast.
2. Accuracy: Its accuracy is very high and its consistency can be relied upon.
Errors committed in computing are mostly due to human rather than
technological weakness. There are in-built error detecting schemes in the
computer.
3. Storage: It has both internal and external storage facilities for holding data and
instructions. This capacity varies from one machine to the other. Memories are
built up in K (Kilo) modules where K = 1024 memory locations.
4. Automatic: Once a program is in the computer’s memory, it can run automatically each time it is opened. The user has little or no further instruction to give.
5. Reliability: Being a machine, a computer does not suffer human traits of
tiredness and lack of concentration. It will perform the last job with the same
speed and accuracy as the first job every time even if ten million jobs are involved.
6. Flexibility: It can perform any type of task once it can be reduced to logical
steps. Modern computers can be used to perform a variety of functions like on-
line processing, multi-programming, real time processing etc.

UNIT 2: HISTORICAL OVERVIEW OF THE COMPUTER


Table of content
• A brief history of computer technology.
• Evolution of the computer.

• Generations of computer.
1.0 Introduction
The computer as we know it today has evolved over the ages. An attempt is
made in this unit to present in chronological order the various landmarks and
milestones in the development of the computer. Based on the milestone
achievement of each era the computer evolution is categorized into
generations. The generational classification, however, is not rigid, as one generation may be found overlapping with the next.
2.0 Objectives
The objective of this unit is to enable the student to know the processes leading
to the emergence of the modern computer. There can be no present without the
past just as the future depends on the present. By the end of this unit, students
should be able to appreciate and visualize the direction of research in computer
technology in the near future.
3.0 A Brief History of Computer Technology
A complete history of computing would include a multitude of diverse devices
such as the ancient Chinese abacus, the Jacquard loom (1805) and Charles
Babbage’s “analytical engine” (1834). It would also include discussion of
mechanical, analog and digital computing architectures. As late as the 1960s,
mechanical devices, such as the Marchant calculator, still found widespread
application in science and engineering. During the early days of electronic
computing devices, there was much discussion about the relative merits of
analog vs. digital computers. In fact, as late as the 1960s, analog computers were
routinely used to solve systems of finite difference equations arising in oil reservoir
modeling. In the end, digital computing devices proved to have the power,
economics and scalability necessary to deal with large scale computations.
Digital computers now dominate the computing world in all areas ranging from
the hand calculator to the supercomputer and are pervasive throughout society.
Therefore, this brief sketch of the development of scientific computing is limited
to the area of digital, electronic computers.

The evolution of digital computing is often divided into generations. Each generation is characterized by dramatic improvements over the previous generation in the technology used to build computers, the internal organization of computer systems, and programming languages. Although not usually associated with computer generations, there has been a steady improvement in algorithms, including algorithms used in computational science. The following


history has been organized using these widely recognized generations as mileposts.

3.1 First Generation Electronic Computers (1937 – 1953)


Three machines have been promoted at various times as the first electronic
computers. These machines used electronic switches, in form of vacuum tubes,
instead of electromechanical relays. In principle the electronic switches were more reliable, since they would have no moving parts that would wear out, but the technology was still new at that time and the tubes were comparable to relays in reliability. Electronic components had one major benefit, however: they could “open” and “close” about 1,000 times faster than mechanical switches.
The earliest attempt to build an electronic computer was by J. V. Atanasoff, a
professor of physics and mathematics at Iowa State, in 1937. Atanasoff set out to
build a machine that would help his graduate students solve systems of partial
differential equations. By 1941, he and graduate student Clifford Berry had
succeeded in building a machine that could solve 29 simultaneous equations with
29 unknowns. However, the machine was not programmable, and was more of
an electronic calculator.
A second early electronic machine was Colossus, designed by Alan Turing for the British military in 1943. This machine played an important role in breaking codes used by the German army in World War II. Turing’s main contribution to the field of computer science was the idea of the Turing Machine, a mathematical formalism widely used in the study of computable functions. The existence of Colossus was kept secret until long after the war ended, and the credit due to Turing and his colleagues for designing one of the first working electronic computers was slow in coming.
The first general purpose programmable electronic computer was the Electronic Numerical Integrator and Computer (ENIAC), built by J. Presper Eckert and John V. Mauchly at the University of Pennsylvania. Work began in 1943, funded by the Army Ordnance Department, which needed a way to compute ballistics during World War II.
The machine wasn’t completed until 1945, but then it was used extensively for
calculations during the design of the hydrogen bomb. By the time it was
decommissioned in 1955 it had been used for research on the design of wind
tunnels, random number generators, and weather prediction. Eckert, Mauchly,
and John Von Neumann, a consultant to the ENIAC project, began work on a
new machine before ENIAC was finished. The main contribution of EDVAC, their
new project, was the notion of a stored program. There is some controversy over

who deserves the credit for this idea, but none over how important the idea was to the future of general purpose computers. ENIAC was controlled by a set
of external switches and dials; to change the program required physically altering
the settings on these controls. These controls also limited the speed of the internal
electronic operations. Through the use of a memory that was large enough to
hold both instructions and data, and using the program stored in memory to
control the order of arithmetic operations, EDVAC was able to run orders of
magnitude faster than ENIAC. By storing instructions in the same medium as data,
designers could concentrate on improving the internal structure of the machine
without worrying about matching it to the speed of an external control.

Regardless of who deserves the credit for the stored program idea, the EDVAC
project is significant as an example of the power of interdisciplinary projects that
characterize modern computational science. By recognizing that functions, in the
form of a sequence of instructions for a computer, can be encoded as numbers,
the EDVAC group knew the instructions could be stored in the computer’s
memory along with numerical data. The notion of using numbers to represent functions was a key step used by Gödel in his incompleteness theorem in 1931, work with which Von Neumann, as a logician, was quite familiar. Von Neumann’s
background in logic, combined with Eckert and Mauchly’s electrical engineering
skills, formed a very powerful interdisciplinary team.
Software technology during this period was very primitive. The first programs were
written out in machine code, i.e. programmers directly wrote down the numbers
that corresponded to the instructions they wanted to store in memory. By the
1950s programmers were using a symbolic notation, known as assembly
language, then hand translating the symbolic notation into machine code. Later
programs known as assemblers performed the translation task.
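The translation step that programmers first did by hand, and later left to assemblers, can be illustrated with a toy assembler. The mnemonics and numeric opcodes below are invented for illustration; a real assembler maps the actual instruction set of a particular machine:

```python
# A toy assembler: translate symbolic mnemonics into numeric opcodes.
# The instruction set here is hypothetical.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0xFF}

def assemble(program):
    """Turn a list of (mnemonic, operand) pairs into flat machine code."""
    machine_code = []
    for mnemonic, operand in program:
        machine_code.append(OPCODES[mnemonic])  # look up the opcode
        machine_code.append(operand)            # copy the operand through
    return machine_code

source = [("LOAD", 10), ("ADD", 11), ("STORE", 12), ("HALT", 0)]
print(assemble(source))   # [1, 10, 2, 11, 3, 12, 255, 0]
```

The output list of numbers is what an early programmer would have written out directly in machine code; the symbolic source is what assembly language made possible.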
As primitive as they were, these first electronic machines were quite useful in
applied science and engineering. Atanasoff estimated that it would take eight
hours to solve a set of equations with eight unknowns using a Marchant
calculator, and 381 hours to solve 29 equations for 29 unknowns. The Atanasoff-
Berry computer was able to complete the task in under an hour. The first problem
run on the ENIAC, a numerical simulation used in the design of the hydrogen
bomb, required 20 seconds, as opposed to forty hours using mechanical
calculators. Eckert and Mauchly later developed what was arguably the first
commercially successful computer, the UNIVAC; in 1952, 45 minutes after the polls
closed and with 7% of the vote counted, UNIVAC predicted Eisenhower would
defeat Stevenson with 438 electoral votes (he ended up with 442).

3.2 Second Generation (1954 – 1962)


The second generation saw several important developments at all levels of
computer system design, from the technology used to build the basic circuits to
the programming languages used to write scientific applications.
Electronic switches in this era were based on discrete diode and transistor
technology with a switching time of approximately 0.3 microseconds. The first
machines to be built with this technology include TRADIC at Bell Laboratories in
1954 and TX-0 at MIT’s Lincoln Laboratory. Memory technology was based on
magnetic cores which could be accessed in random order, as opposed to
mercury delay lines, in which data was stored as an acoustic wave that passed
sequentially through the medium and could be accessed only when the data
moved by the I/O interface.
Important innovations in computer architecture included index registers for
controlling loops and floating point units for calculations based on real numbers.
Prior to this, accessing successive elements in an array was quite tedious and often
involved writing self-modifying code (programs which modified themselves as
they ran; at the time viewed as a powerful application of the principle that
programs and data were fundamentally the same, this practice is now frowned
upon as extremely hard to debug and is impossible in most high level languages).
Floating point operations were performed by libraries of software routines in early
computers, but were done in hardware in second generation machines.
During this second generation many high level programming languages were
introduced, including FORTRAN (1956), ALGOL (1958), and COBOL (1959).
Important commercial machines of this era include the IBM 704 and 7094. The
latter introduced I/O processors for better throughput between I/O devices and
main memory.
The second generation also saw the first two supercomputers designed
specifically for numeric processing in scientific applications. The term
“supercomputer” is generally reserved for a machine that is an order of
magnitude more powerful than other machines of its era. Two machines of the
1950s deserve this title. The Livermore Atomic Research Computer (LARC) and the
IBM 7030 (aka Stretch) were early examples of machines that overlapped
memory operations with processor operations and had primitive forms of parallel
processing.

3.3 Third Generation (1963 – 1972)


The third generation brought huge gains in computational power. Innovations in
this era include the use of integrated circuits, or ICs (semiconductor devices with

several transistors built into one physical component), semiconductor memories


starting to be used instead of magnetic cores, microprogramming as a technique
for efficiently designing complex processors, the coming of age of pipelining and
other forms of parallel processing, and the introduction of operating systems and
time-sharing.
The first ICs were based on small-scale integration (SSI) circuits, which had around
10 devices per circuit (or “chip”), and evolved to the use of medium-scale integration (MSI) circuits, which had up to 100 devices per chip. Multilayered
printed circuits were developed and core memory was replaced by faster, solid
state memories. Computer designers began to take advantage of parallelism by
using multiple functional units, overlapping CPU and I/O operations, and
pipelining (internal parallelism) in both the instruction stream and the data stream.
In 1964, Seymour Cray developed the CDC 6600, which was the first architecture
to use functional parallelism. By using 10 separate functional units that could
operate simultaneously and 32 independent memory banks, the CDC 6600 was
able to attain a computation rate of 1 million floating point operations per second
(1 Mflops). Five years later CDC released the 7600, also developed by Seymour
Cray. The CDC 7600, with its pipelined functional units, is considered to be the first
vector processor and was capable of executing at 10 Mflops. The IBM 360/91, released during the same period, was roughly twice as fast as the CDC 6600. It employed instruction look-ahead, separate floating point and integer functional units and a pipelined instruction stream. The IBM 360/195 was comparable to the
CDC 7600, deriving much of its performance from a very fast cache memory. The
SOLOMON computer, developed by Westinghouse Corporation, and the ILLIAC
IV, jointly developed by Burroughs, the Department of Defense and the University
of Illinois, were representative of the first parallel computers. The Texas Instruments
Advanced Scientific Computer (TI-ASC) and the STAR-100 of CDC were pipelined
vector processors that demonstrated the viability of that design and set the
standards for subsequent vector processors.
Early in this third generation, Cambridge and the University of London cooperated in the development of CPL (Combined Programming Language, 1963). CPL was, according to its authors, an attempt to capture only the important features of the complicated and sophisticated ALGOL. However, like ALGOL, CPL was large, with many features that were hard to learn. In an attempt at further simplification, Martin Richards of Cambridge developed a subset of CPL called BCPL (Basic Combined Programming Language, 1967).


3.4 Fourth Generation (1972 – 1984)


The next generation of computer systems saw the use of large scale integration (LSI – 1,000 devices per chip) and very large scale integration (VLSI – 100,000 devices per chip) in the construction of computing elements. At this scale entire processors would fit onto a single chip, and for simple systems the entire computer (processor, main memory, and I/O controllers) could fit on one chip. Gate delays dropped to about 1 ns per gate. Semiconductor memories replaced core memories as the main memory in most systems; until this time the use of semiconductor memory in most systems was limited to registers and cache.
During this period, high speed vector processors, such as the CRAY 1, CRAY X-MP
and CYBER 205 dominated the high performance computing scene.
Computers with large main memory, such as the CRAY 2, began to emerge. A
variety of parallel architectures began to appear; however, during this period the
parallel computing efforts were of a mostly experimental nature and most
computational science was carried out on vector processors. Microcomputers
and workstations were introduced and saw wide use as alternatives to time-
shared mainframe computers.
Developments in software include very high level languages such as FP
(functional programming) and Prolog (programming in logic). These languages
tend to use a declarative programming style as opposed to the imperative style
of Pascal, C, FORTRAN, et al. In a declarative style, a programmer gives a
mathematical specification of what should be computed, leaving many details
of how it should be computed to the compiler and/or runtime system. These
languages are not yet in wide use, but are very promising as notations for
programs that will run on massively parallel computers (systems with over 1,000
processors). Compilers for established languages started to use sophisticated
optimization techniques to improve code, and compilers for vector processors
were able to vectorize simple loops (turn loops into single instructions that would
initiate an operation over an entire vector).
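The contrast between the imperative and declarative styles can be sketched in Python (a hypothetical illustration added to these notes; FP and Prolog programs themselves look quite different):

```python
# Imperative style: spell out *how* to compute, step by step.
def sum_of_squares_imperative(xs):
    total = 0
    for x in xs:
        total += x * x
    return total

# Declarative style: state *what* is wanted; the generator expression
# leaves the looping details to the language runtime.
def sum_of_squares_declarative(xs):
    return sum(x * x for x in xs)

print(sum_of_squares_imperative([1, 2, 3]))   # 14
print(sum_of_squares_declarative([1, 2, 3]))  # 14
```

Both functions compute the same result; the declarative version gives the compiler or runtime more freedom to decide the order of operations, which is exactly the property that makes declarative notations attractive for parallel machines.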
Two important events marked the early part of this fourth generation: the development of the C programming language and the UNIX operating system,
both at Bell Labs. In 1972, Dennis Ritchie, seeking to meet the design goals of CPL
and generalize Thompson’s B, developed the C language. Thompson and Ritchie
then used C to write a version of UNIX for the DEC PDP-11. This C-based UNIX was
soon ported to many different computers, relieving users from having to learn a
new operating system each time they changed computer hardware. UNIX or a
derivative of UNIX is now a de facto standard on virtually every computer system.

11
RESTRICTED
RESTRICTED

3.5 Fifth Generation (1984 – 1990)


The development of the next generation of computer systems is characterized
mainly by the acceptance of parallel processing. Until this time, parallelism was
limited to pipelining and vector processing, or at most to a few processors sharing
jobs. The fifth generation saw the introduction of machines with hundreds of
processors that could all be working on different parts of a single program. The
scale of integration in semiconductors continued at an incredible pace; by 1990 it was possible to build chips with a million components, and semiconductor memories became standard on all computers.
Other new developments were the widespread use of computer networks and
the increasing use of single-user workstations.
Scientific computing in this period was still dominated by vector processing. Most
manufacturers of vector processors introduced parallel models, but there were
very few (two to eight) processors in these parallel machines. In the area of
computer networking, both wide area network (WAN) and local area network
(LAN) technology developed at a rapid pace, stimulating a transition from the
traditional mainframe computing environment towards a distributed computing
environment in which each user has their own workstation for relatively simple tasks (editing and compiling programs, reading mail) but shares large, expensive resources such as file servers and supercomputers.
RISC technology (a style of internal organization of the CPU) and plummeting
costs for RAM brought tremendous gains in computational power of relatively low
cost workstations and servers. This period also saw a marked increase in both the
quality and quantity of scientific visualization.

3.6 Sixth Generation (1990 to date)


Transitions between generations in computer technology are hard to define,
especially as they are taking place. Some changes, such as the switch from
vacuum tubes to transistors, are immediately apparent as fundamental changes,
but others are clear only in retrospect. Many of the developments in computer
systems since 1990 reflect gradual improvements over established systems, and
thus it is hard to claim they represent a transition to a new “generation”, but other
developments will prove to be significant changes.
In this section, we offer some assessments about recent developments and
current trends that we think will have a significant impact on computational
science.
This generation is beginning with many gains in parallel computing, both in the
hardware area and in improved understanding of how to develop algorithms to

exploit diverse, massively parallel architectures. Parallel systems now compete with vector processors in terms of total computing power, and most experts expect parallel systems to dominate the future.
Combinations of parallel/vector architectures are well established, and one corporation (Fujitsu) has announced plans to build a system with over 200 of its high end vector processors. Manufacturers have set themselves the goal of achieving teraflops (10^12 arithmetic operations per second) performance by the middle of the decade, and it is clear this will be obtained only by a system with a
thousand processors or more. Workstation technology has continued to improve,
with processor designs now using a combination of RISC, pipelining, and parallel
processing. As a result it is now possible to procure a desktop workstation that has
the same overall computing power (100 megaflops) as fourth generation
supercomputers. This development has sparked an interest in heterogeneous
computing: a program started on one workstation can find idle workstations
elsewhere in the local network to run parallel subtasks.
One of the most dramatic changes in the sixth generation is the explosive growth
of wide area networking. Network bandwidth has expanded tremendously in the
last few years and will continue to improve for the next several years. T1
transmission rates are now standard for regional networks, and the national
“backbone” that interconnects regional networks uses T3. Networking technology is becoming more widespread than its original strong base in universities and government laboratories, as it is rapidly finding application in K-12 education,
community networks and private industry. A little over a decade after the warning
voiced in the Lax report, the future of a strong computational science
infrastructure is bright.
4.0 Conclusion
The development of the computer spans many generations, with each generation chronicling the landmark achievements of its period.

UNIT 3: CLASSIFICATION OF COMPUTERS


Table of contents
1. Categories of computers
2. Classification by type
a. Digital computer
b. Analog computer
c. Hybrid computer
3. Classification by purpose
a. Special purpose
b. General purpose
4. Classification by size
a. Micro computers
b. Mini computers
c. Mainframe
d. Super computers

1.0 Introduction
The computer has passed through many stages of evolution from the days of the
mainframe computers to the era of microcomputers. Computers have been
classified based on different criteria. In this unit, we shall classify computers based
on three popular methods.

2.0 Objectives
The objectives of this unit are to:
i. Classify computers based on size, type of signal and purpose.
ii. Study the features that differentiate one class of computer from the
others.

3.0 Categories of Computers


Although there are no industry standards, computers are generally classified in
the following ways:


3.1 Classification By Type


There are basically three types of electronic computers. These are the Digital,
Analog and Hybrid computers.

Digital Computer
A digital computer represents its variables in the form of digits. It counts the data
it deals with: data, whether representing numbers, letters or other symbols, are
converted into binary form on input to the computer. The data undergo
processing, after which the binary digits are converted back to alphanumeric
form for output for human use. Because business applications like inventory
control, invoicing and payroll deal with discrete values (separate, disunited,
discontinuous), they are best processed with digital computers. As a result, digital
computers are mostly used in commercial and business places today.
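The binary conversion on input and output described above can be sketched in a few lines of Python. This is an illustrative sketch only: the 8-bit codes used are standard character codes, and real computers perform this conversion in hardware.

```python
# Illustrative sketch of the conversion a digital computer performs:
# alphanumeric input is converted to binary form on input, and the
# binary digits are converted back to alphanumeric form for output.

def to_binary(text):
    """Convert each character to its 8-bit binary character code."""
    return [format(ord(ch), "08b") for ch in text]

def from_binary(bits):
    """Convert 8-bit binary codes back to characters."""
    return "".join(chr(int(b, 2)) for b in bits)

encoded = to_binary("PAY")      # input converted to binary form
print(encoded)                  # ['01010000', '01000001', '01011001']
print(from_binary(encoded))     # PAY (output converted back for human use)
```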

Analog Computer
An analog computer measures rather than counts. This type of computer sets up
a model of a system. The common type represents its variables in terms of
electrical voltages and sets up circuits analogous to the equations connecting
the variables. The answer can be obtained either by using a voltmeter to read
the value of the variable required, or by feeding the voltage into a plotting
device. Analog computers hold data in the form of physical variables rather than
numerical quantities. In theory, an analog computer gives an exact answer
because the answer has not been approximated to the nearest digit; in practice,
when we try to obtain the answer using a digital voltmeter, we often find that the
accuracy is less than that which could have been obtained from the analog
computer itself.
The analog computer is almost never used in business systems. It is used by
scientists and engineers to solve systems of partial differential equations. It is also
used in the controlling and monitoring of systems in production, in such areas as
hydrodynamics and rocketry.
There are two useful properties of this computer once it is programmed:
1. It is simple to change the value of a constant or coefficient and study the
effect of such changes.
2. It is possible to link certain variables to a time pulse to study changes with
time as a variable, and chart the result on an X-Y plotter.

Hybrid Computer
In some cases, the user may wish to obtain the output from an analog computer
as processed by a digital computer, or vice versa. To achieve this, he sets up a
hybrid machine in which the two are connected and the analog computer may
be regarded as a peripheral of the digital computer. In such a situation, a hybrid
system attempts to gain the advantages of both the digital and the analog
elements in the same machine. This kind of machine is usually a special-purpose
device built for a specific task. It needs a conversion element which accepts
analog inputs and outputs digital values; such converters are called digitizers. A
converter from digital back to analog is also needed.
The hybrid computer has the advantage of giving real-time response on a
continuous basis. Complex calculations can be dealt with by the digital elements,
thereby requiring a large memory and giving accurate results after programming.
Hybrid computers are mainly used in aerospace and process-control applications.
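The job of a digitizer can be sketched as quantizing a continuous voltage onto a fixed number of discrete levels. The 0-5 V range and 8-bit resolution below are assumptions chosen purely for illustration:

```python
# Illustrative sketch of a digitizer (analog-to-digital converter):
# a continuous voltage is mapped onto one of a fixed number of
# discrete levels. The 0-5 V range and 8-bit resolution are assumed.

def digitize(voltage, v_max=5.0, bits=8):
    """Quantize a voltage in [0, v_max] to an integer code."""
    levels = 2 ** bits - 1                   # 255 steps for 8 bits
    code = round(voltage / v_max * levels)   # nearest representable level
    return max(0, min(levels, code))         # clamp to the valid range

print(digitize(2.5))   # 128: mid-scale voltage, about half of 255
print(digitize(5.0))   # 255: full-scale voltage
```

Note that the digital code is only an approximation of the continuous input, which is exactly the digital/analog accuracy trade-off discussed above.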

3.2 Classification By Purpose


Depending on their flexibility in operation, computers are classified as either
special purpose or general purpose.
Special Purpose Computers
A special-purpose computer is one that is designed to solve a restricted class of
problems. Such computers may even be designed and built to handle only one
job; in such machines, the steps or operations that the computer follows may be
built into the hardware. Most of the computers used for military purposes fall into
this class. Other examples of special-purpose computers include:
• Computers designed specifically to solve navigational problems.
• Computers designed for tracking airplanes or missiles.
• Computers used for process-control applications in industries such as oil
refining, chemical manufacture, steel processing and power generation.
• Computers used as robots in factories like vehicle assembly plants and
glass industries.
General Attributes of Special Purpose Computers
Special-purpose computers are usually very efficient for the tasks for which they
are specially designed.
They are much less complex than general-purpose computers. The simplicity of
the circuitry stems from the fact that provision is made only for limited facilities.
They are much cheaper than the general-purpose type since they involve fewer
components and are less complex.

General-Purpose Computers
General-purpose computers are computers designed to handle a wide range of
problems.
Theoretically, a general-purpose computer can be adapted, by means of some
easily alterable instructions, to handle any problem that can be solved by
computation. In practice, however, there are limitations imposed by memory size,
speed and the type of input/output devices. Examples of areas where general-
purpose computers are employed include the following:
a. Payroll
b. Banking
c. Billing
d. Sales analysis
e. Cost accounting
f. Manufacturing scheduling
g. Inventory control

General Attributes of General-Purpose Computers

General-purpose computers are more flexible than special-purpose computers;
they can handle a wide spectrum of problems.
They are less efficient than special-purpose computers due to such problems as:
· inadequate storage;
· low operating speed;
· coordination of the various tasks and subsections may take time.
General-purpose computers are also more complex than special-purpose ones.

3.3 Classification By Size


In the past, the capacity of computers was measured in terms of physical size.
Today, however, physical size is not a good measure of capacity, because
modern technology has made it possible to achieve compactness.
A better measure of capacity today is the volume of work that a computer can
handle. The volume of work that a given computer handles is closely tied to its
cost and to its memory size. Therefore, most authorities today accept the
purchase or rental price as the standard for ranking computers.
Here, both memory size and cost shall be used to rank (classify) computers into
the following main categories:

1. Micro Computers
Microcomputers, also known as single-board computers, are the cheapest class
of computers. In the microcomputer we do not have a Central Processing Unit
(CPU) as we have in the larger computers; rather, we have a microprocessor chip
as the main data-processing unit. They are the cheapest and smallest, and can
operate under normal office conditions. Examples are IBM, Apple, Compaq,
Hewlett-Packard (HP), Dell and Toshiba machines.

Different Types of Personal Computers (Micro Computers)


Normally, personal computers are placed on a table or desk, hence they are
referred to as desktop personal computers. Other types are also available under
the category of personal computers. They are:
Laptop Computers
These are small, battery-operated types. The screen is used to cover the system
while the keyboard is installed flat on the system unit. They can be carried about
like a box when closed after operation and can be operated in vehicles while on
a journey.
Notebook Computer
This is like a laptop computer but smaller in size. Though small, it comprises all the
components of a full system.
Palmtop Computer
The palmtop computer is far smaller in size. All the components are as complete
as in any of the above, but made smaller so that it can be held in the palm.

Uses of Personal Computers


Personal computers can perform the following functions:
· They can be used to produce documents like memos, reports, letters and briefs.
· They can be used for budgeting and accounting tasks.
· They can analyse numeric data.
· They can create illustrations.
· They can be used for electronic mail.
· They can help in making schedules and planning projects.
· They can assist in searching for specific information from lists or reports.

Advantages of Personal Computers


· The computer is versatile; it can be used in any establishment.
· It has a fast speed for processing data.
· It can deal with several data items at a time.
· It can attend to several users at the same time, and is thereby able to
process several jobs at a time.
· It is capable of storing large volumes of data.
· Operating a computer is less fatiguing.
· Networking is possible, that is, the linking of two or more computers together.

Disadvantages of Personal Computers


· The computer is costly to maintain.
· It is very fragile and complex to handle.
· It requires special skill to operate.
· With everyday invention and innovation, computers quickly become
obsolete.
· It can lead to unemployment when widely used in less developed
countries.
· Some computers cannot function properly without the aid of a cooling
system, e.g. air-conditioning or a fan, in some locations.

2. Mini Computers
A minicomputer is a medium-sized computer with moderate cost, available
indigenously and used for large-volume applications. It is a midsize multi-
processing system capable of serving up to 250 users simultaneously.
3. Workstations
A workstation is a computer used for engineering applications (CAD/CAM),
desktop publishing, software development, and other applications which require
a moderate amount of computing power and relatively high-quality graphics
capabilities.
Workstations generally come with a large, high-resolution graphics screen, a
large amount of RAM, inbuilt network support, and a graphical user interface.
Most workstations also have a mass storage device such as a disk drive, but a
special type of workstation, called a diskless workstation, comes without one.
Common operating systems for workstations are UNIX and Windows NT. Like PCs,
workstations are single-user computers, but they are typically linked together to
form a local-area network, although they can also be used as stand-alone
systems.

4. Mainframe
Mainframe computers, often called number crunchers, have large memory and
are very expensive. They can execute up to 100 MIPS (Million Instructions Per
Second). They are large systems used by many people for a variety of purposes.
They have large storage and high computing speed (though relatively lower than
supercomputers). They are used in applications like weather forecasting, space
applications, etc. They support a large number of terminals for use by a variety of
users simultaneously, but are expensive.

5. Super Computers
These have extremely large storage capacities and computing speeds which are
at least 10 times faster than those of other computers. They are used for large-
scale numerical problems in scientific and engineering disciplines such as
electronics, weather forecasting, etc. The first supercomputer was developed in
the U.S.A. by Cray Research. In India, the indigenous supercomputer was
developed under the name Param.

MODULE 2: COMPUTER HARDWARE


UNIT 4: HARDWARE COMPONENTS

1.0 Introduction
Your Personal Computer (PC) is really a collection of separate items working
together as a team, with you as the captain. Some of these components are
essential; others simply make working more pleasant or efficient. Adding extra
items expands the variety of tasks you can accomplish with your machine.

2.0 The System Unit


The system unit is the main unit of a PC. It is the computer itself, while other units
attached to it are regarded as peripherals. It could be viewed as the master
conductor orchestrating your PC's operation. It is made up of several
components such as the motherboard, processor, buses, memory and power
supply unit. Over the years, novices have confused the system unit with the CPU.
This is not correct: the CPU (Central Processing Unit), or simply the processor, is
one component within the system unit, and it is not the only thing that makes up
the system unit. Hence, it is wrong to equate the system unit with the CPU.

Architecture of computers
A computer is made up of a group of interrelated entities working together to
achieve a common goal, so it is a 'system'. All types of computers follow the same
basic physical structure and perform the following five basic operations for
converting raw input data into information useful to their users: take input, store
data, process data, output information, and control the workflow.

Fig 3.0: General design (Architecture) of the system unit

Input Unit
This unit contains the devices with which we enter data into the computer. It
forms the link between the user and the computer. The input devices translate
information into a form understandable by the computer.

Output Unit
The output unit consists of the devices with which we get information from the
computer. It is the link between the computer and its users. Output devices
translate the computer's output into a form understandable by users.

CPU (Central Processing Unit)


The CPU controls the operation of all parts of the computer. It has the following
components: the Control Unit, the ALU (Arithmetic Logic Unit) and the memory
unit.
a. The Control Unit controls the operations of all parts of the computer. It is
responsible for controlling the transfer of data and instructions among the
other units of the computer.
b. The ALU (Arithmetic Logic Unit) consists of two subsections, namely the
arithmetic section and the logic section. The arithmetic section performs
arithmetic operations like addition, subtraction, multiplication and division;
all complex operations are done by making repetitive use of these
operations. The logic section performs logic operations such as comparing,
selecting, matching and merging of data.
c. The Memory or Storage Unit stores instructions, data and intermediate
results. It supplies information to the other units of the computer when
needed.
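The claim that complex operations are built by repetitive use of the simple ones can be illustrated with a short sketch: multiplication reduced to repeated addition. This is illustrative only; real ALUs use faster hardware methods.

```python
# Illustrative sketch: a complex operation (multiplication) built by
# repetitive use of a simple one (addition), as described for the
# arithmetic section of the ALU.

def multiply(a, b):
    """Multiply two non-negative integers using only addition."""
    result = 0
    for _ in range(b):      # add 'a' to the running total, 'b' times
        result += a
    return result

print(multiply(6, 7))       # 42
```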

Computer - Input Devices


An input device is any peripheral (a piece of computer hardware equipment)
which provides data and control signals to an information processing system such
as a computer or other information appliance. Input devices translate data from
a form that humans understand to one that the computer can work with. The
most common are the keyboard and mouse. Examples of input devices are:
1. Keyboard 2. Mouse (pointing device) 3. Microphone
4. Touch screen 5. Scanner 6. Webcam
7. Touchpad 8. MIDI keyboard 9. Camera
10. Graphics tablet 11. Electronic whiteboard 12. Pen input
13. Video capture hardware 14. Trackball 15. Barcode reader
16. Digital camera 17. Joystick 18. Gamepad

Note: The most commonly used keyboard is the QWERTY keyboard. A standard
keyboard generally has 104 keys.

Computer - Output Devices


An output device is any peripheral that receives data from a computer, usually
for display, projection, or physical reproduction. Output devices are used to send
data from a computer to another device or to a user.
In other words, an output device is any piece of computer hardware equipment
used to communicate the results of data processing carried out by an
information processing system (such as a computer), converting the
electronically generated information into human-readable form.
The following are a few of the important output devices used in a computer:
monitors, commonly called Visual Display Units (VDUs), graphic plotters,
projectors, speakers and printers.
Examples of output devices are:
1. Monitor 2. LCD projection panels
3. Printers (all types) 4. Computer Output Microfilm (COM)
5. Plotters 6. Speaker(s)

Computer Memory
A memory is just like a human brain; it is used to store data and instructions.
Computer memory is the storage space in the computer where the data to be
processed and the instructions required for processing are stored. The memory is
divided into a large number of small parts called cells. Each location or cell has a
unique address, which varies from zero to the memory size minus one. For
example, if a computer has 64K words, then this memory unit has 64 * 1024 =
65536 memory locations, and the addresses of these locations vary from 0 to
65535.
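The memory-size arithmetic above can be checked, and the idea of uniquely addressed cells modelled, in a few lines of Python (a sketch, with the memory modelled as a simple list):

```python
# The memory-size arithmetic from the text: a 64K-word memory has
# 64 * 1024 = 65536 cells, with addresses running from 0 to 65535.

cells = 64 * 1024
print(cells)               # 65536
print(cells - 1)           # 65535, the highest address

# Model the memory as a list: each index acts as a unique cell address.
memory = [0] * cells
memory[65535] = 42         # store a value at the highest address
print(memory[65535])       # 42
```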
Memory is primarily of three types:
• Cache Memory
• Primary Memory/Main Memory
• Secondary Memory
Cache Memory
Cache memory is a very high-speed semiconductor memory which can speed
up the CPU. It acts as a buffer between the CPU and main memory. It is used to
hold those parts of data and programs which are most frequently used by the
CPU. These parts of data and programs are transferred from disk to cache
memory by the operating system, from where the CPU can access them.
Advantages
• Cache memory is faster than main memory.


• It consumes less access time as compared to main memory.


• It stores the program that can be executed within a short period of
time.
• It stores data for temporary use.
Disadvantages
• Cache memory has limited capacity.
• It is very expensive.
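The buffering role of cache can be sketched with a small dictionary acting as a fast store in front of a slower "main memory". The three-entry capacity and the values stored are assumptions for illustration only:

```python
# Illustrative sketch of cache as a buffer between the CPU and main
# memory: recently used items are kept in a small, fast store.

main_memory = {addr: addr * 10 for addr in range(100)}  # stands in for the slow store
cache = {}                                              # small, fast store
CAPACITY = 3                                            # assumed capacity

def read(addr):
    if addr in cache:                # cache hit: fast path
        return cache[addr], "hit"
    value = main_memory[addr]        # cache miss: go to main memory
    if len(cache) >= CAPACITY:       # full: evict the oldest entry
        cache.pop(next(iter(cache)))
    cache[addr] = value              # keep it for the next access
    return value, "miss"

print(read(5))   # (50, 'miss')  first access comes from main memory
print(read(5))   # (50, 'hit')   repeat access is served from the cache
```

Eviction here is first-in, first-out for simplicity; real caches use more refined replacement policies.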

Primary Memory (Main Memory)


Main memory is where programs and data are kept when the processor is actively
using them. When programs and data become active, they are copied from
secondary memory into main memory, where the processor can interact with
them; a copy remains in secondary memory.
Main memory holds only the data and instructions on which the computer is
currently working. It has limited capacity, and data is lost when power is switched
off. It is generally made up of semiconductor devices. These memories are not as
fast as registers. The data and instructions required to be processed reside in main
memory.
Main memory is intimately connected to the processor, so moving instructions and
data into and out of the processor is very fast.
Main memory is sometimes called RAM, which stands for Random Access
Memory. "Random" means that the memory cells can be accessed in any order.
Properly speaking, however, "RAM" refers to the type of silicon chip used to
implement main memory.
When people say that a computer has "512 megabytes of RAM" they are talking
about how big its main memory is. One megabyte of memory is enough to hold
approximately one million (10^6) characters of a word processing document.
Nothing permanent is kept in main memory; sometimes data are placed in main
memory for just a few seconds, only as long as they are needed.
Characteristics of Main Memory
• They are semiconductor memories
• Usually volatile memory because it loses information when power is
removed.
• Data is lost in case power is switched off.
• It is the working memory of the computer.
• Faster than secondary memories.

• A computer cannot run without primary memory.

Secondary Memory
Secondary memory is where programs and data are kept on a long-term basis.
Common secondary storage devices are the hard disk and optical disks.
• The hard disk has enormous storage capacity compared to main
memory.
• The hard disk is usually contained inside the case of a computer.
• The hard disk is used for long-term storage of programs and data.
• Data and programs on the hard disk are organized into files.
• A file is a collection of data on the disk that has a name.
Secondary memory is also known as external or non-volatile memory; an
example is ROM, which stands for Read Only Memory. Secondary memory is
slower than main memory and is used for storing data/information permanently.
The CPU does not access these memories directly; instead, they are accessed
via input-output routines. The contents of secondary memories are first transferred
to main memory, and then the CPU can access them. Examples: disk, CD-ROM,
DVD, etc.
A hard disk might have a storage capacity of 500 gigabytes (room for about
500 x 10^9 characters). This is about 100 times the capacity of main memory.
However, a hard disk is slow compared to main memory; if the disk were the only
type of memory, the computer system would slow down to a crawl. The reason
for having two types of storage is this difference in speed and capacity. Large
blocks of data are copied from disk into main memory; the operation is slow, but
lots of data is copied. The processor can then quickly read and write small
sections of that data in main memory. When it is done, a large block of data is
written to disk. Often, while the processor is computing with one block of data in
main memory, the next block of data from disk is read into another section of
main memory and made ready for the processor. One of the jobs of an operating
system is to manage main storage and disks in this way.

Primary memory                          Secondary memory

▪ Fast                                  ▪ Slow
▪ Expensive                             ▪ Cheap
▪ Low capacity                          ▪ Large capacity
▪ Works directly with the processor     ▪ Not connected directly to the processor


Characteristic of Secondary Memory


1. These are magnetic and optical memories.
2. Secondary memory is known as backup memory.
3. It is non-volatile memory because it retains information when power is
removed.
4. Data is permanently stored even if power is switched off.
5. It is used for the storage of data in a computer.
6. A computer may run without secondary memory.
7. It is slower than primary memory.

Examples of Secondary Memory Storage


1. Hard drive (HD): A hard disk is part of a unit, often called a "disk drive," "hard
drive," or "hard disk drive," that stores and provides relatively quick access to
large amounts of data on an electromagnetically charged surface or set
of surfaces.

A Hard Disk Drive


2. Optical Disk: An optical disc drive (ODD) is a disk drive that uses laser light
as part of the process of reading or writing data to or from optical discs.
Some drives can only read from discs, but recent drives are commonly both
readers and recorders, also called burners or writers. Compact discs, DVDs,
and Blu-ray discs are common types of optical media which can be read
and recorded by such drives. Optical drive is the generic name; drives are
usually described as "CD", "DVD", or "Blu-ray", followed by "drive", "writer", etc.
There are three main types of optical media: CD, DVD, and Blu-ray disc.
CDs can store up to 700 megabytes (MB) of data and DVDs can store up
to 8.4 GB of data. Blu-ray discs, which are the newest type of optical media,
can store up to 50 GB of data. This storage capacity is a clear advantage
over the floppy disk storage media (a magnetic medium), which only has a
capacity of 1.44 MB.
3. Flash Disk: A storage module made of flash memory chips. Flash disks
have no mechanical platters or access arms, but the term "disk" is used
because the data are accessed as if they were on a hard drive; the disk
storage structure is emulated.

S.No.  Unit            Description
1      Kilobyte (KB)   1 KB = 1024 Bytes
2      Megabyte (MB)   1 MB = 1024 KB
3      Gigabyte (GB)   1 GB = 1024 MB
4      Terabyte (TB)   1 TB = 1024 GB
5      Petabyte (PB)   1 PB = 1024 TB
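The relationships in the table can be computed directly; the sketch below assumes the 1024-based units shown above:

```python
# The unit table expressed in code: each unit is 1024 times the one
# before it, so converting to bytes multiplies by a power of 1024.

units = ["B", "KB", "MB", "GB", "TB", "PB"]

def to_bytes(value, unit):
    """Convert a value in the given unit to bytes (1 KB = 1024 B)."""
    return value * 1024 ** units.index(unit)

print(to_bytes(1, "KB"))     # 1024
print(to_bytes(1, "GB"))     # 1073741824, i.e. 1024 * 1024 * 1024
print(to_bytes(1.44, "MB"))  # the floppy disk capacity mentioned earlier
```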

Module 3: Computer Software


UNIT 7: COMPUTER SOFTWARE
Table of contents
1. System software
- Operating system
- Types of operating system
2. Language translators
3. Assemblers
4. Interpreters
5. Compilers
6. Utility software

1.0 Introduction
Computer hardware is driven by software; the usefulness of the computer
depends on the programs that are written to manipulate it. Computer software
comes in different forms: the operating system, utility software, language
translators and application software. This unit therefore presents detailed
discussions of each category of computer software.

2.0 Objectives
The objectives of this unit are to:
i. Identify the different types of computer software.
ii. Discuss the importance of each type of software.

3.0 Computer Software


The physical components of the computer are called the hardware, while all the
other resources or parts of the computer that are not hardware are referred to
as the software. Software is the set of programs that makes the computer system
active; in essence, the software is the set of programs that run on the computer.
Then, what is a program? A program is a series of coded instructions showing the
logical steps the computer follows to solve a given problem.

3.1 Classification of Computer Software


The computer software could be divided into two major groups namely System
Software (Programs) and Application Software (Programs).

3.1.1 System Software


This refers to the suites of programs that facilitate the optimal use of the
hardware systems and/or provide a suitable environment for the writing, editing,
debugging, testing and running of user programs. Usually, every computer system
comes with a collection of these suites of programs, which are provided by the
hardware manufacturer.

3.1.1.2 Operating System


An operating system is a program that acts as an interface between a user of a
computer and the computer hardware. The purpose of an operating system is to
provide an environment in which a user may execute programs.

The operating system is the first component of the systems programs that interests
us here. Systems programs are programs written for direct execution on computer
hardware in order to make the power of the computer fully and efficiently
accessible to applications programmers and other computer users. Systems
programming is different from application programming because it requires an
intimate knowledge of the computer hardware as well as of the end users' needs.
Moreover, systems programs are often larger and more complex than application
programs, although that is not always the case. Since systems programs provide
the foundation upon which application programs are built, it is most important
that systems programs are reliable, efficient and correct. In a computer system,
the hardware provides the basic computing resources. The applications
programs define the way in which these resources are used to solve the
computing problems of the users. The operating system controls and coordinates
the use of the hardware among the various systems programs and application
programs for the various users.

The basic resources of a computer system are provided by its hardware, software
and data. The operating system provides the means for the proper use of these
resources in the operation of the computer system. It simply provides an
environment within which other programs can do useful work.

We can view an operating system as a resource allocator. A computer system
has many resources (hardware and software) that may be required to solve a
problem: CPU time, memory space, file storage space, input/output devices, etc.

The operating system acts as the manager of these resources and allocates them
to specific programs and users as necessary for their tasks. Since there may be
many, possibly conflicting, requests for resources, the operating system must
decide which requests are allocated resources so as to operate the computer
system fairly and efficiently. An operating system is also a control program: it
controls the execution of user programs to prevent errors and improper use of the
computer.

Operating systems exist because they are a reasonable way to solve the problem
of creating a usable computing system. The fundamental goal of a computer
system is to execute user programs and solve user problems.

The primary goal of an operating system is convenience for the user. Operating
systems exist because they are supposed to make it easier to compute with an
operating system than without one. This is particularly clear when you look at
operating systems for small personal computers.

A secondary goal is the efficient operation of the computer system. This goal is
particularly important for large, shared multi-user systems, and operating systems
can address it. It is known that these two goals, convenience and efficiency, are
sometimes contradictory.

While there is no universally agreed upon definition of the concept of an
operating system, we offer the following as a reasonable starting point: a
computer's operating system (OS) is a group of programs designed to serve two
basic purposes:

1. To control the allocation and use of the computing system’s resources among
the various users and tasks, and

2. To provide an interface between the computer hardware and the programmer
that simplifies and makes feasible the creation, coding, debugging, and
maintenance of application programs.

Types of operating system

Modern computer operating systems may be classified into groups distinguished
by the nature of the interaction that takes place between the computer user and
his or her program during its processing. Three classical groups are the batch,
time-shared and real-time operating systems.

i. Batch processing operating system

In a batch processing operating system environment, users submit jobs to a
central place where these jobs are collected into a batch, and subsequently
placed on an input queue at the computer where they will be run. In this case,
the user has no interaction with the job during its processing, and the computer's
response time is the turnaround time: the time from submission of the job until
execution is complete and the results are ready for return to the person who
submitted the job.
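The batch idea can be sketched as a simple first-in, first-out queue of jobs; the job names and run times below are invented for illustration:

```python
# Illustrative sketch of batch processing: jobs are queued and run
# one after another with no user interaction. Turnaround time is
# measured from submission until the job completes.

from collections import deque

jobs = deque([("payroll", 3), ("billing", 2), ("report", 4)])  # (name, run time)

clock = 0
while jobs:
    name, run_time = jobs.popleft()   # take the next job from the input queue
    clock += run_time                 # the job runs to completion
    print(f"{name}: turnaround time = {clock}")
# payroll finishes at time 3, billing at 5, report at 9
```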

ii. Time sharing operating system

Another mode for delivering computing services is provided by time sharing
operating systems. In this environment a computer provides computing services
to several or many users concurrently on-line. Here, the various users share the
central processor, the memory, and the other resources of the computer system
in a manner facilitated, controlled, and monitored by the operating system. The
user, in this environment, has nearly full interaction with the program during its
execution, and the computer's response time may be expected to be no more
than a few seconds.

iii. Real time operating system

The third class of operating systems, real time operating systems, are designed to
service those applications where response time is of the essence in order to
prevent error, misrepresentation or even disaster. Examples of real time operating
systems are those which handle airline reservations, machine tool control, and
the monitoring of a nuclear power station. The systems, in this case, are designed
to be interrupted by external signals that require the immediate attention of the
computer system.

In fact, many computer operating systems are hybrids, providing for more than
one of these types of computing service simultaneously. It is especially common
to have a background batch system running in conjunction with one of the other
two on the same computer.

iv. Distributed operating system

A distributed operating system, in contrast, is one that appears to its users as a
traditional uniprocessor system, even though it is actually composed of multiple
processors. In a true distributed system, users should not be aware of where their
programs are being run or where their files are located; that should all be handled
automatically and efficiently by the operating system.

v. Network operating systems

Network operating systems are not fundamentally different from single processor
operating systems. They obviously need a network interface controller and some
low-level software to drive it, as well as programs to achieve remote login and
remote files access, but these additions do not change the essential structure of
the operating systems.

True distributed operating systems require more than just adding a little code to a
uniprocessor operating system, because distributed and centralized systems differ
in critical ways. Distributed systems, for example, often allow programs to run on
several processors at the same time, thus requiring more complex processor
scheduling algorithms in order to optimize the amount of parallelism achieved.


Language Translator
A programming language is a set of notations in which we express our instructions to the computer. At the initial stage of computer development, programs were written in machine language using the binary system, i.e. 0 and 1. Such programs were hard to write, read, debug and maintain. In an attempt to solve these problems, other computer languages were developed. However, computers can run programs written only in machine language. There is therefore the need to translate programs written in these other languages to machine language. The programs that translate other languages to machine language are called language translators. The initial program written in a language different from machine language is called the source program and its equivalent in machine language is called the object program.

Three classes of language translators are Assemblers, Interpreters and Compilers.

1. Assemblers: An Assembler is a computer program that accepts a source
program written in assembly language, reads it, and translates the entire
program into an equivalent program in machine language called the
object program or object code. Each machine has its own assembly
language, meaning that the assembly language of one machine cannot
run on another machine.
2. Interpreter: An Interpreter is a program that accepts a program in a source
language and reads, translates and executes it, line by line, into machine
language.
3. Compilers: A Compiler is a computer program that accepts a source
program in one high-level language, reads and translates the entire user’s
program into an equivalent program in machine language, called the
object program or object code.
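The practical difference between a compiler and an interpreter can be sketched with Python's own translation machinery: a compiler translates the entire source before anything runs, so it reports a syntax error without executing a single line, while line-by-line interpretation executes the good lines before stumbling on the bad one. This is only an analogy using Python's built-in compile and exec functions, not how a classic compiler is built.

```python
# Sketch of compiler vs interpreter behaviour, using Python's own
# translation machinery as an analogy (not a real compiler).
source = "x = 1\nprint(x)\ny = )broken("   # last line is a syntax error

# Compiler-like: translate the WHOLE program first.
# Nothing executes, because translation fails up front.
try:
    compile(source, "<demo>", "exec")
except SyntaxError:
    print("compiler: syntax error found before any line ran")

# Interpreter-like: take the program line by line.
# The first two lines run before the error is discovered.
for line in source.splitlines():
    try:
        exec(line)
    except SyntaxError:
        print("interpreter: error found only when that line was reached")
```

Note how the interpreter-like loop still prints the value of x: the early lines have already executed by the time the faulty line is reached.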

Utility Software

This is a set of commonly used programs in data processing departments, also called service or general-purpose programs.

They perform the following operations.

i. File Conversion: This covers data transfer from any medium to another,
making an exact copy or simultaneously editing and validating.
ii. File Copy: It makes an exact copy of a file from one medium to another or
from an area of a medium to another area of the same medium.

iii. Housekeeping Operations: These include programs to clear areas of storage, write file labels and update common data. They are not involved in solving the problem at hand. They are operations that must be performed before and after actual processing.

Application software

Application software is a set of programs designed to solve problems of a specific nature. It could either be supplied by the computer manufacturer or, in some cases, the users produce their own application programs called USER PROGRAMS. Hence, application software can be subdivided into two classes, namely Generalized and User-defined software.

Under the Generalized software we have, as examples, Word Processing programs, e.g. WordPerfect, WordStar, Microsoft Word; Desktop Publishing, e.g. Ventura, PageMaker, CorelDraw; and Spreadsheet programs, e.g. Lotus 1-2-3, Excel, Super-Q. Under the User-defined class, we could have packages written for a particular company or organization, for accounting, payroll or some other specialized purpose.

i. Word Processor: A Word Processor is used to create, edit, save and print reports. It affords the opportunity to make amendments before printing is done. During editing, a character, word, sentence or a number of lines can be removed or inserted as the case may be. Another facility available is spell checking. A document can be printed as many times as needed. Word processors are mainly used to produce letters, mailing lists, labels, greeting cards, business cards, reports, manuals and newsletters. Examples are: WordPerfect, WordStar, Display Writer, Professional Writer, LOTUS Manuscript, MS-Word, LocoScript, MM Advantage II etc.
ii. Spreadsheet: An application mainly designed for numerical figures and reports. Spreadsheets contain columns and rows in which numbers can be entered. It is possible to change numbers before printing is done. Other features of spreadsheets are the ability to use formulas to calculate, sum and average functions, automatic recalculation, and the capacity to display reports in graphical modes. Spreadsheets are used for budgets, tables, cost analysis, financial reports, tax and statistical analysis. Examples are: Lotus 1-2-3, SuperCalc, MS Multiplan, MS-Excel, VP Planner etc.
iii. Integrated Packages: These are programs or packages that perform a variety of different processing operations, using data that is compatible with whatever operation is being carried out. They perform a number of operations like word processing, database management and spreadsheeting. Examples are: Office Writer, Logistic Symphony, Framework, Enable, Ability, SmartWare II, Microsoft Works V2.
iv. Graphic Packages: These are packages that enable you to bring out
images, diagrams and pictures. Examples are PM, PM Plus, Graphic Writer,
Photoshop.
v. Database Packages: It is software for designing, setting up and
subsequently managing a database. (A database is an organized
collection of data that allows for modification taking care of different users
view). Examples are Dbase II, III, IV, FoxBASE, Rbase Data Perfect, Paradox
III, Revelation Advanced and MS-Access.
vi. Statistical Packages: These are packages that can be used to solve statistical problems, e.g. Statgraphics, SPSS (Statistical Package for the Social Sciences).
vii. Desktop Publishing: These are packages that can be used to produce
books and documents in standard form. Examples are PageMaker,
Ventura, Publishers, Paints Brush, Xerox Form Base, News Master II, Dbase
Publisher.
viii. Game Packages: These are packages that contain a lot of games for children and adults. Examples are Chess, Scrabble, Monopoly, Tune Trivia, Star Trek 2, California Games, Soccer Game, War Game, Spy Catcher, Dracula in London.
ix. Communication Packages: Examples are Carbon Plus, Data talk V3.3, Cross
talk, SAGE Chit Chat, Data Soft.

There are so many packages around, virtually for every field of study but these
are just to mention a few of them. Advantages of these packages include quick
and cheaper implementation, time saving, minimum time for its design, they have
been tested and proven to be correct, they are usually accompanied by full
documentation and are also very portable.

User Programs

This is a suite of programs written by programmers for computer users. They are required for the operation of their individual business or tasks. An example is a payroll package developed for the salary operations of a particular company.


4.0 Conclusion

Apart from the operating system, we need language translators for us to be able to program and use the computer effectively. Since computers do not understand natural languages, there is the need to have language translators such as assemblers, interpreters and compilers. Utility programs such as file conversion and ScanDisk, on the other hand, enable us to maintain and enhance the operations of the computer. Application and user programs such as word processors, spreadsheets and the like help us to perform specific tasks on the computer.

MODULE 4: PROGRAMMING THE COMPUTER

This topic shall be discussed under the following sub-topics:

· Computer programming languages


· Basic principles of computer programming
· Flowcharts
· Algorithms

UNIT 6: COMPUTER LANGUAGES


Table of content
Machine language
Assembly language
High level symbolic language
Very high level symbolic language

1.0 Introduction

In this unit, we shall take a look at computer programming with emphasis on:

a) The overview of computer programming languages.


b) Evolutionary trends of computer programming languages.
c) Programming computers in a Beginner's All-purpose Symbolic Instruction
Code (BASIC) language environment.

2.0 Objective

The objective of this unit is to introduce the student to the background information
about programming the Computer.


3.0 Overview of Computer Programming Languages

Basically, human beings cannot speak or write in computer language, and since
computers cannot speak or write in human language, an intermediate language
had to be devised to allow people to communicate with the computers. These
intermediate languages, known as programming languages, allow a computer
programmer to direct the activities of the computer. These languages are
structured around unique set of rules that dictate exactly how a programmer
should direct the computer to perform a specific task.

With their powers of reasoning and logic, human beings have the capability to accept an instruction and understand it in many different forms. Since a computer must be programmed to respond to specific instructions, instructions cannot be given in just any form. Programming languages standardize the instruction process. The rules of a particular language tell the programmer how the individual instructions must be structured and what sequence of words and symbols must be used to form an instruction. An instruction generally consists of two parts:

(a) An operation code.

(b) Some operands.

The operation code tells the computer what to do, such as add, subtract, multiply and divide. The operands tell the computer the data items involved in the operation. The operands in an instruction may consist of the actual data that the computer may use to perform an operation, or the storage address of data. Consider for example the instruction: a = b + 5. The '=' and '+' are operation codes while 'a', 'b' and '5' are operands. The 'a' and 'b' are storage addresses of actual data while '5' is actual data.
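The split of a = b + 5 into operation codes and operands can be made visible with Python's dis module, which shows the lower-level instructions Python itself translates the statement into. The exact instruction names vary between Python versions, so the output shown in the comments is only typical.

```python
# Show the operation-code/operand structure of a = b + 5 by
# disassembling the lower-level code Python translates it into.
# (Instruction names differ slightly between Python versions.)
import dis

dis.dis(compile("a = b + 5", "<demo>", "exec"))
# Typical output includes instructions such as:
#   LOAD_NAME   b      <- operand fetched from a storage address
#   LOAD_CONST  5      <- operand that is actual data
#   BINARY_ADD         <- the operation code for '+'
#   STORE_NAME  a      <- result stored at the address named 'a'
```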

Some computers use many types of operation codes in their instruction format
and may provide several methods for doing the same thing. Other computers use
fewer operation codes, but have the capacity to perform more than one
operation with a single instruction.

There are four basic types of instructions namely:

a) input-output instructions;
b) arithmetic instructions;
c) branching instructions;
d) logic instructions.


An input instruction directs the computer to accept data from a specific input
device and store it in a specific location in the store. An output instruction tells the
computer to move a piece of data from a computer storage location and record
it on the output medium.

All of the basic arithmetic operations can be performed by the computer. Since
arithmetic operations involve at least two numbers, an arithmetic operation must
include at least two operands.

Branch instructions cause the computer to alter the sequence of execution of instructions within the program. There are two basic types of branch instructions, namely unconditional branch instructions and conditional branch instructions. An unconditional branch instruction or statement will cause the computer to branch to a statement regardless of the existing conditions. A conditional branch statement will cause the computer to branch to a statement only when certain conditions exist.

Logic instructions allow the computer to change the sequence of execution of instructions, depending on conditions built into the program by the programmer. Typical logic operations include: shift, compare and test.
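The effect of unconditional and conditional branch instructions can be sketched with a tiny instruction-set simulator. The instruction names (DEC, JNZ, JMP, PRINT, HLT) and the one-register machine model are invented purely for illustration; they do not belong to any real machine.

```python
# Tiny simulator illustrating branch instructions. The instruction
# set (DEC, PRINT, JMP, JNZ, HLT) is invented for illustration.
def run(program):
    acc = 3          # accumulator register holding the data value
    pc = 0           # program counter: index of the next instruction
    while True:
        op, arg = program[pc]
        pc += 1                      # normal sequence: next instruction
        if op == "DEC":
            acc -= 1
        elif op == "PRINT":
            print(acc)
        elif op == "JMP":            # unconditional branch: always taken
            pc = arg
        elif op == "JNZ":            # conditional branch: taken only
            if acc != 0:             # when the accumulator is non-zero
                pc = arg
        elif op == "HLT":
            return

program = [
    ("PRINT", None),   # 0
    ("DEC",   None),   # 1
    ("JNZ",   0),      # 2: loop back while the accumulator is non-zero
    ("HLT",   None),   # 3
]
run(program)   # prints 3, 2, 1
```

The JNZ instruction at position 2 is what turns the straight-line sequence into a loop: execution branches back to position 0 until the condition fails.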

3.1 Types of Programming Language

The effective utilization and control of a computer system is primarily through the
software of the system. We note that there are different types of software that
can be used to direct the computer system. System software directs the internal
operations of the computer and applications software allows the programmer to
use the computer to solve user made problems. The development of
programming techniques has become as important to the advancement of
computer science as the developments in hardware technology. More
sophisticated programming techniques and a wider variety of programming
languages have enabled computers to be used in an increasing number of
applications.

Programming languages, the primary means of human-computer communication, have evolved from early stages, where programmers entered instructions in the computer's own notation, towards languages similar to those used in the application area. Computer programming languages can be classified into the following categories:

a) Machine language


b) Assembly language
c) High level symbolic language
d) Very high level symbolic language.

3.1.1 Machine Language

The earliest forms of computer programming were carried out by using languages that were structured according to the way the computer stores data, that is, in a binary number system. Programmers had to construct programs that used instructions written in binary notation, 1 and 0. Writing programs in this fashion is tedious, time-consuming and susceptible to errors.

Each instruction in a machine language program consists, as mentioned before, of two parts, namely an operation code and operands. An added difficulty in machine language programming is that the operands of an instruction must tell the computer the storage address of the data to be processed. The programmer must designate storage locations for both instructions and data as part of the programming process. Furthermore, the programmer has to know the location of every switch and register that will be used in executing the program, and must control their functions by means of instructions in the program.

A machine language program allows the programmer to take advantage of all the features and capabilities of the computer system for which it was designed. It is also capable of producing the most efficient program as far as storage requirements and operating speeds are concerned. Few programmers today write application programs in machine language. A machine language is computer dependent: an IBM machine language will not run on an NCR machine, a DEC machine or an ICL machine. Machine language is the First Generation (computer) Language (1GL).

3.1.2 Assembly (Low Level) Language

Since machine language programming proved to be a difficult and tedious task, a symbolic way of expressing machine language instructions was devised. In assembly language, the operation code is expressed as a combination of letters, sometimes called mnemonics, rather than binary numbers. This allows the programmer to remember the operation codes more easily than when they are expressed strictly as binary numbers. The storage address or location of the operands is expressed as a symbol rather than the actual numeric address. After the computer has read the program, system software is used to establish the


actual locations for each piece of data used by the program. The most popular
assembly language is the IBM Assembly Language.

Because the computer understands and executes only machine language programs, the assembly language program must be translated into machine language. This is accomplished by using a system software program called an assembler. The assembler accepts an assembly language program and produces a machine language program that the computer can actually execute. The schematic diagram of the translation process from assembly language into machine language is shown in the diagram below.
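The assembler's job of translating mnemonics and symbolic operands into numeric machine words can be sketched with a toy example. The three-letter mnemonics, the opcode numbers and the opcode-plus-address word format below are all invented for illustration (and shown in Python rather than as a real assembler); every real machine defines its own.

```python
# A toy assembler: translates mnemonic instructions into numeric
# machine words. The mnemonics and opcode numbers are invented
# for illustration; every real machine defines its own.
OPCODES = {"LOD": 1, "ADD": 2, "SUB": 3, "STO": 4, "HLT": 5}

def assemble(source_lines):
    """Translate the whole source program into a list of machine
    words (opcode * 100 + operand address): the object program."""
    object_program = []
    for line in source_lines:
        parts = line.split()
        mnemonic = parts[0]
        operand = int(parts[1]) if len(parts) > 1 else 0
        object_program.append(OPCODES[mnemonic] * 100 + operand)
    return object_program

source = ["LOD 10", "ADD 11", "STO 12", "HLT"]
print(assemble(source))   # [110, 211, 412, 500]
```

As with a real assembler, the entire source program is translated in one pass before anything is executed, producing an object program of pure numbers.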

Although assembly language programming offers an improvement over machine language programming, it is still an arduous task, requiring the programmer to write programs based on a particular computer's operation codes. An assembly language program developed and run on an IBM computer would fail to run on ICL computers. Consequently, porting programs from one computer installation to another which houses different makes or types of computers was not possible. The low level languages are, generally, described as the Second Generation (computer) Language (2GL).

3.1.3 High Level Language

The difficulty of programming and the time required to program computers in assembly languages and machine languages led to the development of high-level languages. These symbolic languages, sometimes referred to as problem-oriented languages, reflect the type of problem being solved rather than the computer being used to solve it. Machine and assembly language programming is machine dependent, but high-level languages are machine independent; that is, a high-level language program can be run on a variety of computers.

While the flexibility of high-level languages is greater than that of machine and assembly languages, there are still strict rules on exactly how instructions are to be formulated and written. Only a specific set of numbers, letters and special characters may be used to write a high-level program, and special rules must be observed for punctuation. High level language instructions do resemble English language statements and the mathematical symbols used in ordinary mathematics. Among the existing and popular high-level programming languages are Fortran, Basic, Cobol, Pascal, Algol, Ada and PL/1. The schematic
diagram of the translation process of a high-level language into the machine


language is shown in the diagram below. The high-level languages are, generally,
described as Third Generation (computer) Language (3GL).

3.1.4 Very High Level Language

Programming aids or programming tools are provided to help programmers do their programming work more easily. Examples of programming tools are:

(a) Program development systems that help users to learn programming, and to program in a powerful high level language. Using a computer screen (monitor) and keyboard under the direction of an interactive computer program, users are helped to construct application programs.

(b) A program generator or application generator that assists computer users to write their own programs by expanding simple statements into program code.

(c) Database management system.

(d) Debuggers, which are programs that help computer users to locate errors (bugs) in the application programs they write.

The very high level language generally described as the Fourth Generation
(computer) Language (4GL), is an ill-defined term that refers to software intended
to help computer users or computer programmers to develop their own
application programs more quickly and cheaply. A 4GL, by using a menu system
for example, allows users to specify what they require, rather than describe the
procedures by which these requirements are met.

The detailed procedure by which the requirements are met is handled by the 4GL software and is transparent to the users. A 4GL offers the user an English-like set of commands and simple control structures in which to specify general data processing or numerical operations. A program is translated into a conventional high-level language such as Cobol, which is passed to a compiler. A 4GL is, therefore, a non-procedural language. The program flows are not designed by the programmer but by the fourth-generation software itself. Each user request is for a result rather than a procedure to obtain the result.

UNIT 7: BASIC PRINCIPLES OF COMPUTER PROGRAMMING

Table of Content
Problem solving with Computer.
Principles of programming.
Stages of programming.

1.0 Introduction
Computer programming is both an art and a science. In this unit, students shall be exposed to some of the art and science of computer programming, including principles of programming and stages of programming.

2.0 Objectives

The objective of this unit is to expose students to the principles of programming and the stages involved in writing computer programs.

3.0 Problem Solving with The Computer

The computer is a general-purpose machine with a remarkable ability to process information. It has many capabilities, and its specific function at any particular time is determined by the user. This depends on the program loaded into the computer memory being utilized by the user.

Computer programming is the act of writing a program which a computer can execute to produce the desired result. A program is a series of instructions assembled to enable the computer to carry out a specified procedure. A computer program is the sequence of simple instructions into which a given problem is reduced and which is in a form the computer can understand, either directly or after interpretation.

3.1 Programming Methodology


Principles of good Programming
It is generally accepted that a good Computer program should have the
characteristics shown below:
i. Accuracy: The Program must do what it is supposed to do correctly and
must meet the criteria laid down in its specification.
ii. Reliability: The Program must always do what it is supposed to do, and
never crash.
iii. Efficiency: Optimal utilization of resources is essential. The program must use the available storage space and other resources in such a way that system speed is not wasted.
iv. Robustness: The Program should cope with invalid data and not stop
without an indication of the cause of the source of error.
v. Usability: The Program must be easy enough to use and be well
documented.
vi. Maintainability: The Program must be easy to amend having good
structuring and documentation.

vii. Readability: The Code of a program must be well laid out and explained
with comments.

3.2 Stages of Programming

The preparation of a computer program involves a set of procedures. These steps can be classified into eight major stages, viz:
i. Problem Definition
ii. Devising the method of solution
iii. Developing the method using suitable aids, e.g. pseudo code or flowchart.
iv. Writing the instructions in a programming language
v. Transcribing the instructions into “machine sensible” form
vi. Debugging the program
vii. Testing the program
viii. Documenting all the work involved in producing the program.
(i) Problem definition
The first stage requires a good understanding of the problem. The programmer (i.e. the person writing the program) needs to thoroughly understand what is required of a problem. A complete, precise and unambiguous statement of the problem to be solved must be stated. This will entail a detailed specification which lays down the input, processes and output required.
(ii) Devising the method of solution
The second stage involved is spelling out the detailed algorithm. The use of a
computer to solve problems (be it scientific or business data processing problems)
requires that a procedure or an algorithm be developed for the computer to
follow in solving the problem.
(iii) Developing the method of solution
There are several methods for representing or developing methods used in solving
a problem. Examples of such methods are: algorithms, flowcharts, pseudo code,
and decision tables.
(iv) Writing the instructions in a programming language
After outlining the method of solving the problem, a proper understanding of the
syntax of the programming language to be used is necessary in order to write the
series of instructions required to get the problem solved.
(v) Transcribing the instructions into machine sensible form
After the program is coded, it is converted into machine sensible form or machine
language. There are manufacturer-written programs that translate users'
programs (source programs) into machine language (object code). These are
called translators. Compilers translate the entire program into instructions that the machine can execute at a go, while

interpreters accept a program and execute it line by line. During translation, the translator carries out a syntax check on the source program to detect errors that may arise from wrong use of the programming language.
(vi) Program debugging
A program seldom executes successfully the first time. It normally contains a few errors (bugs). Debugging is the process of locating and correcting errors. There are three classes of errors:
a. Syntax errors: Caused by mistakes in coding (illegal use of a feature of the programming language).
b. Logic errors: Caused by faulty logic in the design of the program. The program will work, but not as intended.
c. Execution errors: The program works as intended, but illegal input or other circumstances at run-time make the program stop.
There are two basic levels of debugging. The first level, called desk checking or dry running, is performed after the program has been coded and entered or key punched. Its purpose is to locate and remove as many logical and clerical errors as possible.
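The three classes of error can be illustrated with small Python fragments. The faulty lines below are deliberately invented examples, not code from this course.

```python
# Deliberate examples of the three classes of program error.

# (a) Syntax error: illegal use of the language. The line below
#     would be rejected by the translator before the program runs:
#         prnt "total"        <- not valid Python

# (b) Logic error: the program runs but does the wrong thing.
def average_wrong(numbers):
    return sum(numbers) / 2          # bug: should divide by len(numbers)

def average(numbers):
    return sum(numbers) / len(numbers)

print(average_wrong([2, 4, 6]))      # 6.0  (wrong answer, no crash)
print(average([2, 4, 6]))            # 4.0  (correct)

# (c) Execution (run-time) error: a legal program, but illegal input.
try:
    print(average([]))               # dividing by len([]) == 0
except ZeroDivisionError:
    print("execution error: division by zero at run time")
```

Note that only the syntax error is caught by the translator; the logic error produces no message at all, which is why desk checking and testing are needed.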

The program is then read (or loaded) into the computer and processed by a language translator. The function of the translator is to convert the program statements into the binary code of the computer, called the object code. As part of the translation process, the program statements are examined to verify that they have been coded correctly; if errors are detected, a series of diagnostics referred to as an error message list is generated by the language translator. With this list in hand, the programmer enters the second level of debugging.

The error message list helps the programmer to find the cause of errors and make the necessary corrections. At this point, the program may still contain keying errors, as well as clerical errors or logic errors. The programming language manual will be very useful at this stage of program development.
After corrections have been made, the program is again read into the computer
and again processed by the language translator. This is repeated over and over
again until the program is error-free.
(vii) Program testing
The purpose of testing is to determine whether a program consistently produces
correct or expected results. A program is normally tested by executing it with a
given set of input data (called test data), for which correct results are known.


For effective testing of a program, the testing procedure is broken into three
segments.
a. The program is tested with inputs that one would normally expect for
an execution of the program.
b. Valid but slightly abnormal data is injected (used) to determine the
capabilities of the program to cope with exceptions. For example,
minimum and maximum values allowable for a sales-amount field
may be provided as input to verify that the program processed them
correctly.
c. Invalid data is inserted to test the program’s error-handling routines.
If the result of the testing is not adequate, then minor logic errors still
abound in the program. The programmer can use any of these three
alternatives to locate the bugs.
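The three testing segments can be sketched as test cases for a hypothetical routine that validates a sales-amount field, echoing the example above. The routine, its name and its 0 to 10,000 limits are all invented for illustration.

```python
# Sketch of the three-segment testing procedure for a hypothetical
# sales-amount routine. The 0..10_000 limits are invented.
def process_sales_amount(amount):
    """Return the amount accepted for processing, or raise on bad input."""
    if not isinstance(amount, (int, float)):
        raise TypeError("sales amount must be a number")
    if amount < 0 or amount > 10_000:
        raise ValueError("sales amount out of range")
    return amount

# (a) Normal, expected input.
assert process_sales_amount(250) == 250

# (b) Valid but extreme input: the minimum and maximum allowable values.
assert process_sales_amount(0) == 0
assert process_sales_amount(10_000) == 10_000

# (c) Invalid input: exercises the error-handling routines.
for bad in (-1, 10_001, "abc"):
    try:
        process_sales_amount(bad)
        print("error-handling failed for", bad)
    except (TypeError, ValueError):
        pass   # expected: the routine rejected the bad input

print("all three testing segments passed")
```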
(viii) Program documentation
Documentation of the program should be developed at every stage of the
programming cycle. The following are documentations that should be done for
each program.
1. Problem Definition Step
· A clear statement of the problem
· The objectives of the program (what the program is to accomplish)
· Source of request for the program.
· Person/official authorizing the request.
2. Planning the Solution Step
· Flowchart, pseudocode or decision tables
· Program narrative
· Descriptive of input, and file formats
3. Program source coding sheet
4. User’s manual to aid persons who are not familiar with the program to apply
it correctly. It contains a description of the program and what it is designed
to achieve.
5. Operator’s manual to assist the computer operator to successfully run the
program. This manual contains:
a. Instructions about starting, running and terminating the program.
b. Message that may be printed on the console or VDU (terminal) and their
meanings.
c. Setup and take down instruction for files.
Advantages of Program documentation


i. It provides all necessary information for anyone who comes in contact with
the program.
ii. It helps the supervisor in determining the program’s purpose, how long the
program will be useful and future revision that may be necessary.
iii. It simplifies program maintenance (revision or updating)
iv. It provides information as to the use of the program to those unfamiliar with
it.
v. It provides operating instructions to the computer operator.
4.0 Conclusion
The intelligence of a computer derives to a large extent from the quality of the
programs.
In this unit, we have attempted to present in some details, the principles and the
stages involved in writing a good computer program.

STUDY UNIT 7: FLOWCHART AND ALGORITHMS


Table of Content
Flowchart
Algorithms

1.0 Introduction
In this unit you are introduced to the principles of flowcharts and algorithms. The importance of these concepts is presented, and the detailed steps and activities involved are also presented.

2.0 Objectives
The objective of this unit is to enable the student to grasp the principles of good programming practice through flowcharting and algorithms.

3.0 Flowchart
A Flowchart is a graphical representation of the major steps of work in process. It displays in separate boxes the essential steps of the program and shows by means of arrows the directions of information flow. The boxes, most often referred to as illustrative symbols, may represent documents, machines or actions taken during the process. The area of concentration is on where or who does what, rather than on how it is done. A flowchart can also be said to be a graphical representation of an algorithm; that is, it is a visual picture which gives the steps of an algorithm and also the flow of control between the various steps.
3.1 Flowchart Symbols


Flowcharts are drawn with the help of symbols. The following are the most commonly used flowchart symbols and their functions:

1. Terminal Symbol: used to show the START or STOP point in a flowchart. It indicates the beginning or ending of a flowchart.
2. Process Symbol: used to indicate processing, or points where calculations or computations are carried out in a flowchart, e.g. Sum = A + B + C.
3. Input/Output Symbol: used to indicate input or output instructions in a flowchart.
4. Decision Symbol: used for decision making. It indicates points in a flowchart where decisions are made, and has two or more lines leaving the box. These lines are labelled with the different decision results, that is, 'TRUE' or 'FALSE', 'Yes' or 'No', or 'NEGATIVE' or 'ZERO'.
5. Subroutine Symbol: used for one or more named operations or program steps specified in a subroutine, function or procedure.
6. On-page Connector: used for entry to or exit from another part of a flowchart. It connects two parts of a flowchart on the same page.
7. Off-page Connector: used for entry to or exit from a page. It connects two parts of a flowchart on different pages.
8. Direction Symbol: used to show the direction of flow of program logic. Direction symbols link the other symbols and show the operation sequence and the direction of data flow.
9. Comment Symbol: used to add comments or explanatory notes to a flowchart.

3.2 Algorithms
Before a computer can be put to any meaningful use, the user must be able to come out with or define a finite sequence of operations or activities (logically ordered) which gives an unambiguous method of solving a problem or finding out that no solution exists. Such a set of operations is known as an ALGORITHM.

An algorithm, named after the ninth century scholar Abu Ja'far Muhammad ibn Musa al-Khwarizmi, is defined as follows:

• An algorithm is a set of rules for carrying out a calculation, either by hand or by machine.
• An algorithm is a finite step-by-step procedure to achieve a required result.
• An algorithm is a sequence of computational steps that transform the input into the output.
• An algorithm is a sequence of operations performed on data that have to be organized in data structures.
• An algorithm is an abstraction of a program to be executed on a physical machine (model of computation).
The most famous algorithm in history dates from the time of the ancient Greeks: Euclid's algorithm for calculating the greatest common divisor of two integers.
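As an illustration, Euclid's algorithm can be sketched in a few lines of Python (a minimal sketch; the function name gcd is our own choice):

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    until the remainder b becomes zero; a is then the GCD."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # prints 12
```

Each pass through the loop shrinks the pair of numbers, so the procedure is guaranteed to terminate, which is one of the defining properties of an algorithm listed above.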

An algorithm can therefore be characterized by the following:

i. It is a finite set or sequence of actions.
ii. The sequence of actions has a unique initial action.
iii. Each action in the sequence has a unique successor.
iv. The sequence terminates with either a solution or a statement that the problem is unresolvable.
An algorithm can therefore be seen as a step-by-step method of solving a
problem.

Example
Write an algorithm to compute a customer’s bill. Note bill = unit price x Quantity.
Solution
1. Input values for Unit price and Quantity
2. Compute value for Bill

3. Print value of Bill
4. Stop
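The four steps above can be sketched directly in Python (a minimal sketch; the function and variable names, and the input figures, are our own):

```python
def compute_bill(unit_price, quantity):
    # Step 2: Bill = Unit price x Quantity
    return unit_price * quantity

# Step 1: input values for Unit price and Quantity (example figures)
unit_price = 25.0
quantity = 4

# Step 3: print the value of Bill; Step 4: the program then stops
print("Bill =", compute_bill(unit_price, quantity))  # Bill = 100.0
```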
3.3 Flowcharting the Problem
The digital computer does not do any thinking and cannot make unplanned
decisions. Every step of the problem has to be taken care of by the program. A
problem which can be solved by a digital computer need not be described by
an exact mathematical equation, but it does need a certain set of rules that the
computer can follow. If a problem needs intuition or guessing, or is so badly
defined that it is hard to put into words, the computer cannot solve it. You have
to define the problem and set it up for the computer in such a way that every
possible alternative is taken care of. A typical flowchart consists of special boxes in which the activities or operations for the solution of the problem are written. The boxes are linked by arrows which show the sequence of operations. The flowchart acts as an aid to the programmer, who follows the flowchart design to write the program.

Example 1: The diagram below represents a program flowchart for finding the sum of the first five natural numbers (i.e. 1, 2, 3, 4, 5).


Example 2: Draw the flowchart for computing a customer's bill. Hint: Bill = unit price x Quantity.

Start

Enter Unit Price, Quantity

Compute Bill

Output Bill

End

Example 3: Draw a flowchart to find the sum of the first 50 natural numbers.


Example 4
Draw a flowchart to find the largest of three numbers A, B, and C.


Flowchart for finding out the largest of three numbers
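The decision boxes in such a flowchart translate into simple comparisons. A minimal Python sketch (the function name is our own):

```python
def largest_of_three(a, b, c):
    """Return the largest of A, B and C by pairwise comparison,
    mirroring the decision boxes of the flowchart."""
    largest = a
    if b > largest:   # decision: is B greater than the current largest?
        largest = b
    if c > largest:   # decision: is C greater than the current largest?
        largest = c
    return largest

print(largest_of_three(12, 45, 7))  # prints 45
```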

3.4 Pseudocode
Pseudocode is a program design aid that serves the function of a flowchart in expressing the detailed logic of a program. Sometimes a program flowchart is inadequate for expressing the control flow and logic of a program. With pseudocode, a program's algorithm can be expressed as English-language statements. These statements can be used both as a guide when coding the program in a specific language and as documentation for review by others. Because there are no rigid rules for constructing pseudocode, the logic of the program can be expressed without conforming to any particular programming language. A series of structured words is used to express the major program functions; these structured words are the basis for writing programs using a technique called "structured programming".

Example: Construct pseudocode to find the sum of the first 50 natural numbers.

BEGIN
STORE 0 TO SUM
STORE 1 TO COUNT
DO WHILE COUNT not greater than 50
ADD COUNT to SUM
INCREMENT COUNT by 1
ENDWHILE
OUTPUT SUM
END
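The pseudocode above maps line by line onto a loop in, for example, Python (a sketch for comparison; the variable names are our own):

```python
# STORE 0 TO SUM
total = 0
# STORE 1 TO COUNT
count = 1
# DO WHILE COUNT not greater than 50
while count <= 50:
    # ADD COUNT to SUM
    total += count
    # INCREMENT COUNT by 1
    count += 1
# ENDWHILE; OUTPUT SUM
print(total)  # prints 1275
```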
4.0 Conclusion
Flowcharts, pseudocode and algorithms are essential ingredients in the writing of good programs. Done properly, they lead to a reduction in program errors and help minimize the time spent in debugging. In addition, they make logic errors easier to trace and discover.

5.0 Summary
In this unit we have learnt that:
i. A Flowchart is a graphical representation of the major steps of work in
process. It displays in separate boxes the essential steps of the program and
shows by means of arrows the directions of information flow.
ii. Pseudocode is a program design aid that serves the function of a flowchart
in expressing the detailed logic of a program.
iii. An algorithm is a set of rules for carrying out a calculation, either by hand or by machine.

STUDY UNIT 8: COMPUTER VIRUS


Table of Contents
Computer virus
How to detect computer virus
Mode of transmission of computer virus

1.0 Introduction
One of the biggest fears of computer users is the virus. Viruses are malicious programs designed entirely for destruction and havoc, created by people who know a great deal about programming or about computers.

2.0 Objectives
The objective of this unit is to introduce students to the concept of computer virus,
its mode of transmission, detection, prevention and cure.

3.0 Computer Virus


A computer virus is one of the greatest threats to computers and computer applications. Once a virus is written, it is generally distributed through shareware, pirated software, e-mail or other means of transporting data. Once the virus infects someone's computer, it starts infecting other data, destroying data, overwriting data or corrupting software.
These programs are called viruses because they spread like a human virus: once you have become infected, either by downloading something from the Internet or by sharing software, any disk or writable medium placed into the computer will in turn be infected. When that disk is put into another computer, that computer is then infected. If the infected person puts files on the Internet and hundreds of people download those files, they all become infected, and the process continues, infecting thousands if not millions of people.

MODE OF TRANSMISSION OF COMPUTER VIRUS

The majority of viruses are contracted via flash drives, when information is passed from one source and then put onto your computer. Viruses can infect disks, and when an infected disk is put into your computer, the computer then becomes infected with that virus.
It is also well established that a majority of viruses are contracted from e-mail attachments and over the Internet: the user receives an e-mail with an attached file, opens the file and becomes infected.
VIRUS PROPERTIES
1. Your computer can be infected even if files are merely copied. Because some viruses are memory resident, as soon as an external storage device or program is loaded into memory, the virus attaches itself to memory.
2. Viruses can be polymorphic. Some viruses can modify their own code, which means one virus could have many similar variants.
3. Viruses can be memory resident or non-memory resident. A memory-resident virus first attaches itself to memory and then infects the computer; a non-memory-resident virus requires a program to be run in order to infect the computer.
4. A virus can be a stealth virus. Stealth viruses first attach themselves to files on the computer and then attack the computer; this causes the virus to spread more rapidly.
5. Viruses can carry other viruses, infecting a system with more than one virus at a time. Because viruses are generally written by different individuals and do not infect the same locations in memory or the same files, multiple viruses can be stored in one file, diskette or computer.
6. A virus can make the system show no outward signs. Some viruses hide the changes they make; for example, when infecting a file, the file stays the same size.
7. Viruses can stay on the computer even if it is formatted. Viruses can infect different portions of the computer, such as the CMOS memory.
HOW VIRUSES MAY AFFECT FILES
Viruses can affect any file; however, they usually attack .com, .exe, .sys, .bin and .pif files, or data files. Although a virus can infect any file, it generally infects executable files or data files, such as Word or Excel documents, which are opened frequently.
• A virus can increase a file's size, though this change can be hidden. When infecting files, viruses generally increase the size of the file, but with more sophisticated viruses these changes can be hidden.
• A virus can delete a file as the file is run. Because most files are loaded into memory and then run, once the program is in memory the virus can delete the file.
• A virus can corrupt files randomly. Some destructive viruses are not designed to destroy specific data but instead randomly delete or corrupt files.
• A virus can cause write-protect errors when executing .exe files from a write-protected disk. Viruses may need to write themselves to the files they execute; because of this, if a diskette is write-protected, you may receive a write-protection error.
• A virus can convert .exe files to .com files. A virus may use a separate file to run the program and rename the original file to another extension, so that the .com is run before the .exe.
• A virus can reboot the computer when a file is run. Some viruses are designed to reboot the computer when an infected file is run.
DETECTING VIRUSES
The most commonly used method of protecting against and detecting viruses is to purchase a third-party application designed to scan for all types of viruses; such an application is known as an antivirus.
Alternatively, a user can look at various aspects of the computer for possible signs indicating that a virus is present. While this method can detect some viruses, it cannot clean the computer or determine exactly which virus, if any, is present.
4.0 Conclusion

Computer viruses are perhaps the greatest threat to the computer. If not detected and promptly removed, a virus attack could lead to the total breakdown of the computer. With the aid of the discussion in this unit, students should be able to prevent, detect and clean viruses in a computer installation.
5.0 Summary
In this unit we have learnt the following:
(a) That computer viruses are programs written by programmers with the aim of
causing havoc to the computer.
(b) Computer viruses could lead to malfunctioning and total breakdown of the
computer.
(c) Computer viruses are transferred from one computer to another through the use of infected storage media, such as flash drives or CD-ROMs, or across a computer network (the Internet).
(d) There are antivirus packages specially written to prevent, detect and clean
viruses.

