History of Computer
One of the earliest machines designed to assist people with calculations was the abacus, which is still in use some 5,000 years after its invention.
Blaise Pascal:
In 1642 Blaise Pascal (a famous French mathematician) invented an adding machine based on mechanical gears, in which numbers were represented by the cogs on the wheels.
Charles Babbage:
In the 1830s Charles Babbage designed the Analytical Engine, a general-purpose calculating machine whose design anticipated the modern computer. It was to have five components:
1. An input device
2. Storage for numbers waiting to be processed
3. A processor or number calculator
4. A unit to control the task and the sequence of its calculations
5. An output device
Augusta Ada Byron (later Countess of Lovelace) was an associate of Babbage who has become known as the first computer programmer.
Herman Hollerith:
Herman Hollerith developed a punched-card tabulating machine that was used to process the 1890 US census; the company he founded later became part of IBM.
Howard Aiken:
Howard Aiken, working with IBM, built the Harvard Mark I (completed in 1944), a large electromechanical computer.
At about the same time (the late 1930s) John Atanasoff of Iowa State University and his assistant Clifford Berry built the first digital computer that worked electronically, the ABC (Atanasoff-Berry Computer). This machine was basically a small calculator.
John Mauchly and J. Presper Eckert:
John Mauchly and J. Presper Eckert of the University of Pennsylvania built the ENIAC (Electronic Numerical Integrator and Computer), completed in 1946, the first general-purpose electronic computer.
In the late 1940s John von Neumann (at the time a special consultant to the ENIAC team) developed the EDVAC (Electronic Discrete Variable Automatic Computer), which pioneered the "stored program concept". This allowed programs to be read into the computer and so gave birth to the age of general-purpose computers.
This generation is often described as starting with the delivery of the first commercial computer to a business client. This happened in 1951 with the delivery of the UNIVAC to the US Bureau of the Census. This generation lasted until about the end of the 1950s (although some machines stayed in operation much longer than that). The main defining feature of the first generation of computers was that vacuum tubes were used as internal computer components. Vacuum tubes are generally about 5-10 centimetres in length, and the large numbers of them required in computers resulted in huge and extremely expensive machines that often broke down (as tubes failed).
The other main improvement of this period was the development of computer languages. Assembler languages, or symbolic languages, allowed programmers to specify instructions in words, which were then translated into a form that the machines could understand (typically a series of 0s and 1s: binary code). Higher-level languages also came into being during this period. Whereas assembler languages had a one-to-one correspondence between their symbols and actual machine functions, higher-level language commands often represent complex sequences of machine codes. Two higher-level languages developed during this period (Fortran and COBOL) are still in use today, though in a much more developed form.
In 1965 the first integrated circuit (IC) was developed, in which a complete circuit of hundreds of components was able to be placed on a single silicon chip 2 or 3 mm square. Computers using these ICs soon replaced transistor-based machines. Again, one of the major advantages was size, with computers becoming more powerful and at the same time much smaller and cheaper. Computers thus became accessible to a much larger audience. An added advantage of smaller size is that electrical signals have much shorter distances to travel, and so the speed of computers increased.
Another feature of this period is that computer software became much more powerful and flexible, and for the first time more than one program could share the computer's resources at the same time (multitasking). The majority of programming languages used today are often referred to as 3GLs (third-generation languages), even though some of them originated during the second generation.
The boundary between the third and fourth generations is not very clear-cut at all. Most of the developments since the mid-1960s can be seen as part of a continuum of gradual miniaturisation. In 1970 large-scale integration was achieved, where the equivalent of thousands of integrated circuits was crammed onto a single silicon chip. This development again increased computer performance (especially reliability and speed) whilst reducing computer size and cost. Around this time the first complete general-purpose microprocessor became available on a single chip. In 1975 Very Large Scale Integration (VLSI) took the process one step further: complete computer central processors could now be built onto one chip. The microcomputer was born. Such chips are far more powerful than ENIAC and are only about 1 cm square, whilst ENIAC filled a large building.
During this period Fourth Generation Languages (4GLs) came into existence. Such languages are a step further removed from the computer hardware in that they use language much like natural language. Many database languages can be described as 4GLs. They are generally much easier to learn than 3GLs.
The fifth generation is usually defined in terms of artificial intelligence, with human-like reasoning and the understanding of natural language as key components of the definition. As you may have guessed, this goal has not yet been fully realised, although significant progress has been made towards various aspects of these goals.
Parallel Computing:
Up until recently most computers were serial computers. Such computers had a single processor chip containing a single processor. Parallel computing is based on the idea that if more than one task can be processed simultaneously on multiple processors, then a program will be able to run more rapidly than it could on a single processor. The supercomputers of the 1990s, such as the Cray computers, were extremely expensive to purchase (usually over $1,000,000) and often required cooling by liquid helium, so they were also very expensive to run. Clusters of networked computers (e.g. a Beowulf cluster of PCs running Linux) have been, since 1994, a much cheaper solution to the problem of fast processing of complex computing tasks. By 2008, most new desktop and laptop computers contained more than one processor on a single chip (e.g. the Intel "Core 2 Duo" released in 2006 or the Intel "Core 2 Quad" released in 2007). Having multiple processors does not necessarily mean that parallel computing will work automatically. The operating system must be able to distribute programs between the processors (e.g. recent versions of Microsoft Windows and Mac OS X can do this). An individual program will only be able to take advantage of multiple processors if the computer language it is written in is able to distribute tasks within a program between multiple processors. For example, OpenMP supports parallel programming in Fortran and C/C++.