LDP - UNIT-1 (Introduction of C)

Chapter 1 introduces computer programming, detailing the history and evolution of computers from the abacus to modern digital systems. It outlines the generations of computers, the types of computers (including PCs, workstations, and supercomputers), and provides an overview of computer architecture, including the CPU, memory, and input/output devices. The chapter concludes with descriptions of various input and output devices used in computing.

CHAPTER 1 INTRODUCTION TO COMPUTER PROGRAMMING

1.1 INTRODUCTION

1.1.1 HISTORY OF COMPUTERS


→ The abacus, sometimes called “the first automatic computer,” is the earliest known tool of computing.
→ It is thought to have been invented in Babylon, circa 2400 BCE.
→ The abacus generally features a table or tablet with beaded strings.
→ The abacus is still in use today in China and Japan. Only quite recently (in the 1990s) did the availability and sophistication of the hand-held calculator largely supplant it.
→ Around 1115 BCE the Chinese invented the South Pointing Chariot, the first device to use the differential gear, which is believed to have given rise to the first analog computers.

Fig 1: example of abaci


→ Analog computers are a form of computer that use electrical, mechanical, or
hydraulic means to model the problem being solved (simulation).
→ Analog computers are believed to have been first developed by the Greeks with the Antikythera mechanism, which was used for astronomy. The Antikythera mechanism was discovered in 1901 and has been dated to circa 100 BCE.
→ Analog computers are not like today’s computers. Modern computers are digital
in nature and are immensely more sophisticated.

→ There are still analog computers in use, such as the ones for research at the
University of Indiana and the Harvard Robotics Laboratory.
→ Charles Babbage and Ada Lovelace together are often thought of as the founders
of modern computing.
→ Babbage invented the Difference Engine, and, more importantly, the Analytical
Engine. The latter is often recognized as a key step towards the formation of the modern
computer.
→ Ada Lovelace, daughter of the famous poet Lord Byron, is known for describing, in algorithms, the processes the Analytical Engine was intended to carry out. In this sense she is considered a pioneer of computer programming.
→ Ever since the abacus was developed, humans have been using devices to aid in the act of computation.
→ From the 1500s through the 1800s many breakthroughs in computational hardware were made, including mechanical calculators and punch-card technology (used to this day).
→ In the late 1800s the first “programmable” computers appeared, using punch-card technology. To be programmable, a machine had to be able to simulate the computations of any other machine by altering its computational process.
→ From the 1930s to the 1960s “desktop calculators” were developed, making the transition from mechanical to electronic.
→ By the 1940s the age of analog computers was about to give way to the age of digital computers.
→ Charles Babbage laid the foundations of Computer Science, but it was Alan Turing of
England who is regarded as the “Father of Computer Science”.
→ He provided a new concept of both algorithms and the process of calculations with the
invention of his Turing Machine.
→ The Turing Machine is a basic abstract symbol manipulating device that can be used to
simulate the logic of any computer that could possibly be constructed. It was not
actually constructed, but its theory yielded many insights.

1.1.2 GENERATIONS OF COMPUTER

The Zeroth Generation

→ The term "Zeroth generation" refers to the period of development of computing that predated the commercial production and sale of computer equipment.

→ The period might be dated as extending from the mid-1800s. In particular, this period witnessed the emergence of the first electronic digital computers, beginning with the ABC (Atanasoff–Berry Computer).

→ The development of the EDVAC, the first design to fully implement the idea of the stored program and the serial execution of instructions, set the stage for the evolution of commercial computing and operating system software. The hardware component technology of this period was the electronic vacuum tube.

→ The actual operation of these early computers took place without the benefit of an operating system. Early programs were written in machine language, and each contained code for initiating operation of the computer itself.

→ This approach was clearly inefficient and depended on the varying competencies of the individual programmers acting as operators.

The First Generation, 1951-1956


→ The first generation marked the beginning of commercial computing.

→ The first generation was characterized by the high-speed vacuum tube as the active component technology. Operation continued without the benefit of an operating system for a time.
→ The mode of operation was called "closed shop" and was characterized by the appearance of hired operators who would select the job to be run, perform the initial program load, run the user's program, and then select another job, and so forth.

→ Programs began to be written in higher-level, procedure-oriented languages, and thus the operator's routine expanded.

The Second Generation, 1956-1964

→ The second generation of computer hardware was most notably characterized by transistors replacing vacuum tubes as the hardware component technology. In addition, some very important changes in hardware and software architectures occurred during this period.

→ For the most part, computer systems remained card- and tape-oriented. Significant use of random-access devices, that is, disks, did not appear until towards the end of the second generation.

→ The most significant innovations addressed the problem of excessive central processor delay due to waiting for input/output operations. Recall that programs were executed by processing the machine instructions in a strictly sequential order.

→ As a result, the CPU, with its high-speed electronic components, was often forced to wait for the completion of I/O operations that involved mechanical devices (card readers and tape drives) orders of magnitude slower.

→ These hardware developments led to enhancements of the operating system. I/O and data-channel communication and control became functions of the operating system, both to relieve the application program of this burden and to improve efficiency.
The Third Generation, 1964-1979
→ The third generation officially began in April 1964 with IBM's announcement of its System/360 family of computers.

→ Hardware technology began to use integrated circuits (ICs), which yielded significant advantages in both speed and economy. Operating system development continued with the introduction and widespread adoption of multiprogramming.

→ This was marked first by the appearance of more sophisticated I/O buffering in the form of spooling operating systems. These systems worked by introducing two new system programs: a system reader to move input jobs from cards to disk, and a system writer to move job output from disk to printer, tape, or cards.

The Fourth Generation, 1979 – Present

→ The fourth generation is characterized by the appearance of the personal computer and the workstation. Miniaturization of electronic circuits and components continued, and Large-Scale Integration (LSI), the component technology of the third generation, was replaced by Very Large-Scale Integration (VLSI), which characterizes the fourth generation.

→ Improvements in hardware miniaturization and technology have come so fast that we now have inexpensive workstation-class computers capable of supporting multiprogramming and time-sharing. Hence the operating systems that support today's personal computers and workstations look much like those that were available for the minicomputers of the third generation.
Fig 2: generations of computer
1.2 TYPES OF COMPUTER

PC (Personal Computer)

→ A PC can be defined as a small, relatively inexpensive computer designed for an individual user. PCs are based on microprocessor technology, which enables manufacturers to put an entire CPU on one chip.
→ Businesses use personal computers for word processing, accounting, desktop publishing, and for running spreadsheet and database management applications.

Workstation
→ A workstation is a computer used for engineering applications (CAD/CAM), desktop publishing, software development, and other applications that require a moderate amount of computing power and relatively high-quality graphics capabilities.

→ Workstations generally come with a large, high-resolution graphics screen, a large amount of RAM, built-in network support, and a graphical user interface. Most workstations also have a mass storage device such as a disk drive, but a special type of workstation, called a diskless workstation, comes without one.
Minicomputer
→ A minicomputer is a midsize multiprocessing system capable of supporting up to 250 users simultaneously.

Mainframe
→ A mainframe is a very large and expensive computer capable of supporting hundreds or even thousands of users simultaneously. A mainframe executes many programs concurrently and supports many simultaneous executions of programs.
Supercomputer
→ Supercomputers are among the fastest computers currently available. They are very expensive and are employed for specialized applications that require immense amounts of mathematical calculation (number crunching): for example, weather forecasting, scientific simulations, animated graphics, fluid dynamics calculations, nuclear energy research, and electronic design.

1.3 BASIC BLOCK DIAGRAM OF COMPUTER

Fig 3: BLOCK DIAGRAM OF COMPUTER


→ The block diagram of a computer gives a pictorial representation of how the computer works inside.
→ In other words, the block diagram shows how the computer works, from feeding in the data to getting the result.
→ The control unit (CU) and the arithmetic & logic unit (ALU) together are called the Central Processing Unit (CPU).

The Processor Unit (CPU)


→ It is the brain of the computer system.
→ All major calculations and comparisons are made inside the CPU, and it is also responsible for activating and controlling the operation of the other units.
→ This unit consists of two major components: the arithmetic logic unit (ALU) and the control unit (CU).

→ The fundamental operation of most CPUs is to execute a sequence of stored instructions called a program.

→ 1. The program is represented by a series of numbers that are kept in some kind of computer memory.
→ 2. There are four steps that nearly all CPUs use in their operation: fetch, decode, execute, and write back. (A minimal sketch of this cycle appears below.)
→ 3. Fetch:
• Retrieving an instruction from program memory.
• The location in program memory is determined by a program counter (PC).
• After an instruction is fetched, the PC is incremented by the length of the instruction word in terms of memory units.
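
To make the fetch-decode-execute cycle concrete, here is a minimal sketch of it in C: a toy "CPU" whose one-word instructions and opcode encoding (opcode × 100 + operand) are invented purely for illustration and are not taken from any real machine.

#include <stdio.h>

/* Toy instruction set, invented for illustration: each instruction is
   one int, encoded as (opcode * 100 + operand). */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_PRINT = 3 };

int main(void)
{
    /* "Program memory": LOAD 5, ADD 7, PRINT, HALT */
    int memory[] = { 105, 207, 300, 0 };
    int pc  = 0;   /* program counter */
    int acc = 0;   /* accumulator register */

    for (;;) {
        int instruction = memory[pc++];    /* fetch, then advance the PC */
        int opcode  = instruction / 100;   /* decode */
        int operand = instruction % 100;

        switch (opcode) {                  /* execute and write back */
        case OP_LOAD:  acc = operand;         break;
        case OP_ADD:   acc = acc + operand;   break;
        case OP_PRINT: printf("%d\n", acc);   break;
        case OP_HALT:  return 0;
        }
    }
}

Running this toy program prints 12: the value 5 is loaded into the accumulator, 7 is added, and the result is written out — one fetch-decode-execute round per instruction.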
→ Arithmetic Logic Unit (ALU)

→ The arithmetic logic unit performs all arithmetic operations such as addition, subtraction, multiplication, and division. It also performs logic operations for comparisons.
→ Control Unit (CU):
→ The control unit of a CPU controls the entire operation of the computer. It also controls all devices connected to the CPU, such as memory and input/output devices.
→ The CU fetches instructions from memory, decodes each instruction, interprets it to determine the tasks to be performed, and sends suitable control signals to the other components to carry out the steps needed to execute the instruction.
Input/Output Unit:
→ The input/output unit consists of devices used to transmit information between the external world and computer memory.
→ The information fed through the input unit is stored in the computer's memory for processing, and the final result stored in memory can be recorded or displayed on the output medium.

Memory Unit
→ The memory unit is an essential component of a digital computer. It is where all data, intermediate results, and final results are stored.
→ The data read from the main storage or an input unit are transferred to the computer's memory, where they are available for processing.
→ The memory unit is used to hold the instructions to be executed and the data to be processed.
→ Memory is often used as a shorter synonym for Random Access Memory (RAM). This kind of memory is located on one or more microchips that are physically close to the microprocessor in your computer.
→ Most desktop and notebook computers sold today include at least 512 megabytes of RAM (which is really the minimum needed to install an operating system). RAM is upgradeable, so you can add more when your computer runs slowly.
TYPES OF RAM:

→ There are two types of RAM used in PCs: dynamic and static RAM.

→ Dynamic RAM (DRAM): The information stored in dynamic RAM has to be refreshed every few milliseconds; otherwise it is lost. DRAM has a higher storage capacity and is cheaper than static RAM.

→ Static RAM (SRAM): The information stored in static RAM need not be refreshed; it remains stable as long as the power supply is provided. SRAM is costlier but faster than DRAM.

Disk Storage Unit


→ Data and instructions entering a computer system through an input device have to be stored inside the computer before actual processing starts.
→ The two types of storage units are the primary and the secondary storage unit.

1.4 INPUT DEVICES


Input devices are used for accepting the data on which operations are to be performed. Examples are the keyboard, mouse, trackball, etc.

1.4.1 Keyboard
→ A standard keyboard includes alphanumeric keys, function keys, modifier keys, cursor-movement keys, a spacebar, an escape key, a numeric keypad, and some special keys such as Page Up and Page Down.
→ The alphanumeric keys include the number keys and the alphabet keys.
→ The function keys help perform specific tasks, such as refreshing a web page or searching for a file.
→ Modifier keys, such as the Shift and Control keys, modify the casing style of a character or symbol.
→ Cursor-control keys and function keys are common features on general-purpose keyboards. Function keys allow users to enter frequently used operations in a single keystroke, and cursor-control keys can be used to select displayed objects or coordinate positions by positioning the screen cursor.

1.4.2 Mouse

→ A mouse is a small hand-held box used to position the screen cursor. Wheels or rollers on the bottom of the mouse can be used to record the amount and direction of movement.
→ Another method for detecting mouse motion is with an optical sensor. In these systems, the mouse is moved over a special mouse pad that has a grid of horizontal and vertical lines. The optical sensor detects movement across the lines in the grid.
→ The mouse allows the user to select elements on the screen, such as tools, icons, and buttons, by pointing at and clicking on them.
→ The mouse consists of two buttons, a wheel at the top, and a ball at the bottom.
→ When the ball moves, the cursor on the screen moves in the direction in which the ball rotates.
→ The left button of the mouse is used to select an element; the right button, when clicked, displays special options such as open, explore, and shortcut menus.
→ The wheel is used to scroll down in a document or a web page.

1.4.3 Scanner
→ Drawings, graphs, colour and black-and-white photos, or text can be stored for computer processing with an image scanner by passing an optical scanning mechanism over the information to be stored.
→ A scanner is an input device that converts documents and images into digitized images understandable by the computer system.
→ The digitized images can be produced as black-and-white images, gray-scale images, or coloured images.
→ In the case of coloured images, an image is considered a collection of dots, with each dot representing a combination of red, green, and blue colours in varying proportions.
→ The scanner uses the colour description of each dot to produce the digitized image.
→ The following types of scanners exist:
1. Flatbed scanner
2. Drum scanner
3. Slide scanner
4. Handheld scanner

1.4.4 Joysticks

→ A joystick consists of a small vertical lever (called the stick), mounted on a base, that is used to steer the screen cursor around.
→ Most joysticks select screen positions with actual stick movement; others respond to pressure on the stick. Some joysticks are mounted on a keyboard; others function as stand-alone units.
→ In another type of movable joystick, the stick is used to activate switches that cause the screen cursor to move at a constant rate in the direction selected.
1.4.5 Digitizer
→ A common device for drawing, painting, or interactively selecting coordinate positions on an object is a digitizer. These devices can be used to input coordinate values in either a two-dimensional or a three-dimensional space.
→ Typically, a digitizer is used to scan over a drawing or object and to input a set of discrete coordinate positions, which can be joined with straight-line segments to approximate curve or surface shapes.
1.5 OUTPUT DEVICES
The data, processed by the CPU, is made available to the end user by the output
devices. The most commonly used output devices are:
1. Monitor
2. Printer
3. Speaker
4. Plotter

1.5.1 Monitor
→ A monitor is the most commonly used output device; it produces the visual displays generated by the computer. It is also known as the screen.
→ The monitor, connected using cables, is attached to the video card placed in an expansion slot of the motherboard. Display devices are used for the visual presentation of textual and graphical information.
→ Monitors can be classified as cathode ray tube (CRT) monitors or liquid crystal display (LCD) monitors. CRT monitors are large and occupy more space, whereas LCD monitors are thin, lightweight, and occupy less space.
→ The inner side of a CRT screen contains red, green, and blue phosphors. When the beam of electrons strikes the screen, it illuminates the phosphors and produces the image.
→ To change the colour displayed by the monitor, the intensity of the beam striking the screen is varied.
→ Monitors can be categorised by size and resolution. Resolution is the number of pixels on the screen.
1.5.2 Printer
→ The printer is an output device that transfers the text displayed on the screen onto a sheet of paper that can be used by the end user.
→ A printer is an external device connected to the computer using cables.
→ The printer/print device software is used to convert a document into a form understandable by the printer.
→ The performance of a printer is measured in terms of the dots per inch (DPI) and pages per minute (PPM) it produces.
→ Printers can be classified as dot matrix printers, inkjet printers, and laser printers.
→ Dot matrix printers: They are impact printers used in low-quality, high-volume applications like invoice printing. A dot matrix printer strikes pins against a ribbon to produce an impression on the paper. This striking motion of the pins helps in making carbon copies of a text.
→ Inkjet printers: They generate high-quality photographic prints but are slower than dot matrix printers. They are not impact printers. The ink cartridges are attached to the printer head, which moves horizontally from left to right. The printout is formed as ink from the cartridges is sprayed onto the paper: the ink in an inkjet is heated to create a bubble, and the bubble bursts at high pressure, emitting a jet of ink onto the paper and thus producing the image.
→ Laser printers: Laser printers generate high-quality photographic prints and are fast, as they contain a microprocessor, ROM, and RAM, which can be used to store textual information. The printer uses a cylindrical drum, a toner, and a laser beam. The toner stores the ink that is used in generating the output. The fonts used for printing in a laser printer are stored in the ROM.

1.5.3 Speaker
→ The speaker is an electromechanical transducer that converts an electrical signal into sound.
→ Speakers are attached to a computer as output devices to provide audio output, such as warning sounds and Internet audio. A computer can have built-in or attached speakers to warn end users with audio error messages and alerts.
→ The sound card used in the computer system decides the quality of the audio we hear from music CDs or over the Internet.
→ Computer speakers vary widely in quality and price.

1.5.4 Plotter
→ The plotter is another commonly used output device; it is connected to a computer to print large documents, such as engineering or construction drawings. Plotters use multiple ink pens or inkjets with colour cartridges for printing. The computer transmits binary signals to the print heads of the plotter.

→ Each binary signal contains the coordinates of where a print head needs to be positioned for printing. Plotters are classified on the basis of how they work, as follows:
→ Drum plotter: Used to draw perfect circles and other graphic images, it uses a drawing arm to draw the image. The drum plotter moves the paper back and forth through a roller while the drawing arm moves across the paper.

→ Flatbed plotter: A flatbed plotter has a flat drawing surface and two drawing arms that move across the paper.

→ Inkjet plotter: Spray nozzles are used to generate images by spraying droplets of ink onto the paper. However, the spray nozzles can get clogged and require regular cleaning, resulting in a high maintenance cost.

1.6 SOFTWARE AND HARDWARE CONCEPTS


1.6.1 Software
→ Software is defined as a computer program: logical instructions used for performing a particular task on a computer system using its hardware components. Computer programs can be classified under the following two categories:
System software:

→ System software is the type of software that acts as the interface between application software and the system. Low-level languages are used to write system software.
→ System software maintains the system resources and provides the path for application software to run. An important point is that without system software, the system cannot run. It is general-purpose software.
→ It refers to a computer program that manages and controls the hardware components of a computer. It is also responsible for the proper functioning of the application software on the computer system.
→ System programs include general programs written to provide an environment for developing new application software.
→ There are several types of system software, such as operating systems and utility programs.

The following are the various functions of system software:

1. Process management
2. Memory management
3. Secondary storage management
4. I/O system management
5. File management

Application software

→ Application software is a computer program that is executed on top of the system software.
→ It is designed and developed for performing specific tasks and is also known as an end-user program.
→ Application software is unable to run without the system software, such as the operating system and utility programs.
→ It resides above the system software: the user first deals with the system software and only then with the application software.

→ The end user uses application software for a specific purpose. It can be programmed for simple as well as complex tasks. It can either be installed locally or accessed online. It can be a single program or a group of small programs referred to as an application suite. Some examples of application software are word processing software, spreadsheet software, presentation software, graphics software, CAD/CAM, and email clients.

1.6.2 Hardware

→ The physical devices that make up the computer are called hardware. The hardware units are responsible for entering, storing, and processing the given data and then displaying the output to the users.
→ The basic hardware units of a general-purpose computer are the keyboard, mouse, memory, CPU, monitor, and printer.
→ The CPU is the main component inside the computer; it is responsible for performing the various operations and also for managing the input and output devices. It includes two components for its functioning: the arithmetic logic unit (ALU) and the control unit (CU).

1.7 LANGUAGE TRANSLATORS


• Programs are written mostly in high-level languages like Java, C++, or Python; this form is called source code.
• A computer understands instructions only in machine code, i.e., in the form of 0s and 1s, and it is a tedious task to write a computer program directly in machine code.
• Source code cannot be executed directly by the computer and must be converted into machine language to be executed.
• Hence, special translator system software, called a language translator, is used to translate a program written in a high-level language into machine code; the translated program is called the object program (or object code).
• A language translator can be any of the following three types:
1) Compiler
2) Interpreter
3) Assembler

1.7.1 COMPILER
• A compiler is a computer program that transforms code written in a high-level programming language into machine code.
• It translates human-readable code into a language a computer processor understands (binary 1 and 0 bits).
• The computer processes the machine code to perform the corresponding tasks.

• A program given to a compiler must comply with the syntax rules of the programming language in which it is written.
• However, the compiler is only a program and cannot fix errors found in that program.
• So, if you make a mistake, you need to correct the syntax of your program; otherwise, it will not compile.
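
For instance, the classic program below is human-readable C source code; a compiler such as gcc translates the whole file into machine code before the program ever runs (the file name hello.c and the commands in the comment are simply one common convention):

/* hello.c -- compiled to machine code, then run:
 *   gcc hello.c -o hello     (translate the source into the executable "hello")
 *   ./hello                  (run the resulting machine code)
 */
#include <stdio.h>

int main(void)
{
    printf("Hello, world!\n");  /* becomes machine instructions after compilation */
    return 0;
}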
1.7.2 INTERPRETER
• An interpreter is a computer program which converts each high-level program statement into machine code as the program runs.
• This can include source code, pre-compiled code, and scripts.
• Both compilers and interpreters do the same job: converting a higher-level programming language to machine code.
• However, a compiler converts the code into machine code (creating an executable) before the program runs, whereas an interpreter converts code into machine code while the program is running.

1.7.3 ASSEMBLER
• An assembler translates a program written in assembly language into machine language, an even lower-level language that the processor can directly understand.
• An assembler enables software and application developers to access, operate, and manage a computer's hardware architecture and components.

• An assembler primarily serves as the bridge between symbolically coded instructions written in assembly language and the computer's processor, memory, and other computational components.
• An assembler works by assembling and converting assembly-language source code into object code, or an object file, constituting a stream of zeros and ones of machine code that is directly executable by the processor.
1.7.4 COMPILER VS. INTERPRETER
• Translation: An interpreter translates the program one statement at a time; a compiler scans the entire program and translates it as a whole into machine code.
• Analysis vs. execution time: Interpreters usually take less time to analyze the source code, but the overall execution time is comparatively slower than with compilers. Compilers usually take longer to analyze the source code, but the overall execution time is comparatively faster.
• Memory: An interpreter generates no intermediate object code and hence is memory efficient; a compiler generates intermediate object code, which further requires linking and hence more memory.
• Examples: Programming languages like JavaScript, Python, and Ruby use interpreters; programming languages like C, C++, and Java use compilers.

1.8 PROGRAMMING LANGUAGES

• A language is a medium of interaction between two parties. It is a system of communication, either spoken or written, between any two objects.
• A programming language is the language of computers. Through a programming language, we can communicate with a computer system.
• Computers can only understand binary, but humans are not comfortable with the binary number system.
• Humans cannot interact fluently with computers in the language of 0s and 1s; programming languages act as an interface between computers and humans.
• Programming languages are used to create programs. A computer program is intended to perform some specific task through the computer or to control the behaviour of the computer.
• Using a programming language, we write instructions that the computer should perform. Instructions are usually written using characters, words, symbols, and decimal numbers. These instructions are later encoded into the language the computer understands, i.e., binary, so that the computer can understand the instructions given by humans and perform the specified task.
• Programming languages are classified into the categories described below.
1.8.1 LOW LEVEL LANGUAGE
• Low-level languages, abbreviated LLL, are languages close to the machine-level instruction set. They provide little or no abstraction from the hardware.
• A low-level programming language interacts directly with the registers and memory.
• Since instructions written in low-level languages are machine dependent, programs developed using them are machine dependent and not portable.
• Low-level languages do not require a compiler or interpreter to translate the source to machine code.
• Low-level languages are further classified into two categories: machine language and assembly language.
1) Machine Language
• Machine language is the language closest to the hardware. It consists of instructions that are executed directly by the computer. These instructions are sequences of binary bits. Each instruction performs a very specific and small task. Instructions written in machine language are machine dependent and vary from computer to computer.

• Example: SUB AX, BX = 00001011 00000001 00100010 is an instruction to subtract the values of the two registers AX and BX.

2) Assembly Language
• Assembly language is an improvement over machine language. Similar to machine language, assembly language also interacts directly with the hardware. Instead of using a raw binary sequence to represent an instruction, assembly language uses mnemonics.

• Mnemonics relieve programmers from having to remember the binary sequence for each instruction, since English-like words such as ADD, MOV, and SUB are easy to remember.

• Example: ADD R2, 10

1.8.2 HIGH LEVEL LANGUAGE

• A high-level language is abbreviated HLL. High-level languages are similar to human language. Unlike low-level languages, high-level languages are programmer friendly and easy to code, debug, and maintain.
• A high-level language provides a higher level of abstraction from machine language.
• High-level programs require compilers or interpreters to translate the source code into machine language. The same source code written in a high-level language can be compiled to multiple machine languages; thus, high-level languages are machine independent.
• Today almost all programs are developed using a high-level programming language. We can develop a variety of applications using high-level languages: desktop applications, websites, system software, utility software, and many more.
• High-level languages are grouped into two categories based on execution model:
1) Compiled languages
2) Interpreted languages
• High-level languages are grouped into three categories based on programming paradigm:
1) Structured languages
2) Procedural languages
3) Object-oriented languages

1.8.3 LOW LEVEL LANGUAGE VS. HIGH LEVEL LANGUAGE

• Speed: Low-level languages are faster; high-level languages are comparatively slower.
• Memory: Low-level languages are memory efficient; high-level languages are not.
• Learning: Low-level languages are difficult to learn; high-level languages are easy to learn.
• Prerequisites: Programming at a low level requires additional knowledge of the computer architecture; programming at a high level does not.
• Portability: Low-level languages are machine dependent and not portable; high-level languages are machine independent and portable.
• Abstraction: Low-level languages provide little or no abstraction from the hardware; high-level languages provide high abstraction.
• Errors: Low-level languages are more error prone; high-level languages are less error prone.
• Debugging: Debugging and maintenance are difficult in low-level languages and comparatively easier in high-level languages.
• Typical use: Low-level languages are generally used for developing system software (operating systems) and embedded applications; high-level languages are used to develop a variety of applications such as desktop applications, websites, and mobile apps.
1.9 FLOWCHART AND ALGORITHM
1.9.1 INTRODUCTION
A typical programming task can be divided into two phases:
Problem Solving Phase
• Developing the algorithm.
• Drawing the flowchart.
Implementation Phase
• Implementing the program in some programming language.
1.9.2 ALGORITHM
• Definition
• "An algorithm is a sequence of steps, written in the form of English phrases, that specify the tasks to be performed while solving a problem."
• It helps the programmer break the solution of a problem down into a number of sequential steps.
• Corresponding to each step, a statement is written in a programming language.
• Qualities of a Good Algorithm
• Input and output should be defined precisely.
• Each step in the algorithm should be clear and unambiguous.
• The algorithm should be the most effective among the many different ways to solve a problem.
• An algorithm shouldn't include computer code. Instead, it should be written in such a way that it can be used in different programming languages.
• Advantages of an Algorithm
• It is a step-wise representation of a solution to a given problem, which makes it easy to understand.
• An algorithm uses a definite procedure.
• It is not dependent on any programming language, so it is easy to understand even for anyone without programming knowledge.
• Every step in an algorithm has its own logical sequence, so it is easy to debug.
• By using an algorithm, the problem is broken down into smaller pieces or steps; hence, it is easier for the programmer to convert it into an actual program.
• Disadvantages of an Algorithm
• Writing an algorithm takes a long time.
1.9.3 ALGORITHM EXAMPLES
Example 1: Write an algorithm to add two integers and display the result.
Step 1: Start
Step 2: Declare variables num1, num2 and sum.
Step 3: Read values for num1, num2.
Step 4: Add num1 and num2 and assign the result to a variable sum.
Step 5: Display sum
Step 6: Stop
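
A minimal C implementation of this algorithm, with roughly one statement per step (illustrative only):

#include <stdio.h>

int main(void)
{
    int num1, num2, sum;             /* Step 2: declare variables */
    printf("Enter two integers: ");
    scanf("%d %d", &num1, &num2);    /* Step 3: read the values */
    sum = num1 + num2;               /* Step 4: add and assign to sum */
    printf("sum = %d\n", sum);       /* Step 5: display sum */
    return 0;                        /* Step 6: stop */
}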

Example 2: Write an algorithm to find out whether a given number is even or odd.
Step 1: Start
Step 2: Take any number and store it in n
Step 3: If n is a multiple of 2 then print "even" else print "odd"
Step 4: Stop
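
In C, the test "n is a multiple of 2" is naturally written with the remainder operator %; a small sketch:

#include <stdio.h>

int main(void)
{
    int n;
    printf("Enter a number: ");
    scanf("%d", &n);
    if (n % 2 == 0)                  /* remainder 0 means a multiple of 2 */
        printf("even\n");
    else
        printf("odd\n");
    return 0;
}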

Example 3: Write an algorithm to find out whether a given number is prime or not.
Step 1: Start
Step 2: Read number n
Step 3: Set f=0
Step 4: For i=2 to n-1
Step 5: If n mod i=0 then
Step 6: Set f=1 and break
Step 7: Loop
Step 8: If f=0 then print 'The given number is prime' else print 'The given number is not
prime'
Step 9: Stop
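
A C sketch of the same procedure, mirroring the flag f and the loop from 2 to n-1 (like the algorithm itself, it assumes the user enters a number n >= 2):

#include <stdio.h>

int main(void)
{
    int n, i, f = 0;                 /* Step 3: set f = 0 */
    printf("Enter a number: ");
    scanf("%d", &n);
    for (i = 2; i <= n - 1; i++) {   /* Step 4: for i = 2 to n-1 */
        if (n % i == 0) {            /* Step 5: n mod i = 0? */
            f = 1;                   /* Step 6: set f = 1 and break */
            break;
        }
    }
    if (f == 0)                      /* Step 8 */
        printf("The given number is prime\n");
    else
        printf("The given number is not prime\n");
    return 0;
}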

Example 4: Write an algorithm to find the largest number amongst three numbers.


Step 1: Start
Step 2: Declare variables a, b and c.
Step 3: Read variables a, b and c.
Step 4: If a > b then
            If a > c then display "a is the largest number"
            Else display "c is the largest number"
        Else
            If b > c then display "b is the largest number"
            Else display "c is the largest number"
Step 5: Stop
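
The nested selection in Step 4 translates directly into nested if-else statements in C:

#include <stdio.h>

int main(void)
{
    int a, b, c;
    printf("Enter three numbers: ");
    scanf("%d %d %d", &a, &b, &c);
    if (a > b) {
        if (a > c)
            printf("%d is the largest number\n", a);
        else
            printf("%d is the largest number\n", c);
    } else {
        if (b > c)
            printf("%d is the largest number\n", b);
        else
            printf("%d is the largest number\n", c);
    }
    return 0;
}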

Example 5: Write an algorithm to determine whether the given year is a leap year or not.

Step 1: Start
Step 2: Read year
Step 3: If the year is divisible by 4 then go to Step 4 else go to Step 7
Step 4: If the year is divisible by 100 then go to Step 5 else go to Step 6
Step 5: If the year is divisible by 400 then go to Step 6 else go to Step 7
Step 6: Print "Leap year"
Step 7: Print "Not a leap year"
Step 8: Stop
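
The three divisibility tests of Steps 3-5 map onto nested if-else statements in C:

#include <stdio.h>

int main(void)
{
    int year;
    printf("Enter a year: ");
    scanf("%d", &year);
    if (year % 4 == 0) {             /* Step 3: divisible by 4? */
        if (year % 100 == 0) {       /* Step 4: divisible by 100? */
            if (year % 400 == 0)     /* Step 5: divisible by 400? */
                printf("Leap year\n");
            else
                printf("Not a leap year\n");
        } else {
            printf("Leap year\n");   /* divisible by 4 but not by 100 */
        }
    } else {
        printf("Not a leap year\n"); /* not divisible by 4 */
    }
    return 0;
}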

Example 6: Write an algorithm to determine whether a given string is a palindrome or not.

Step 1: Start
Step 2: Accept a string from the user (str).
Step 3: Calculate the length of string str (len).
Step 4: Initialize looping counters left=0, right=len-1, and chk='t'
Step 5: Repeat steps 6-8 while left<right and chk='t'
Step 6: If str(left)=str(right) goto Step 8 else goto Step 7
Step 7: Set chk='f'
Step 8: Set left=left+1 and right=right-1
Step 9: If chk='t' goto Step 10 else goto Step 11
Step 10: Display "The string is a palindrome" and goto Step 12
Step 11: Display "The string is not a palindrome"
Step 12: Stop
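
A C sketch of this algorithm, moving the two counters toward the middle of the string (the buffer size of 100 is an arbitrary choice for illustration):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char str[100];
    int left, right;
    char chk = 't';                       /* Step 4: assume palindrome */

    printf("Enter a string: ");
    scanf("%99s", str);                   /* Step 2: accept a string */

    left = 0;                             /* Step 4: left counter */
    right = (int)strlen(str) - 1;         /* Steps 3-4: len - 1 */

    while (left < right && chk == 't') {  /* Step 5 */
        if (str[left] != str[right])      /* Steps 6-7 */
            chk = 'f';
        left++;                           /* Step 8: move both counters */
        right--;
    }

    if (chk == 't')                       /* Steps 9-11 */
        printf("The string is a palindrome\n");
    else
        printf("The string is not a palindrome\n");
    return 0;
}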

1.9.4 FLOWCHART
• Definition
• "A flowchart is a pictorial representation of a process, describing the sequence and flow of control and information within the process."
• The flow of information is represented in the flowchart in a step-by-step form.
• A flowchart uses different symbols to depict different activities.
• Advantages of a Flowchart
• A flowchart is a good way of conveying the logic of the system.
• It facilitates the analysis of the problem.
• It provides proper documentation.
• It makes identification of errors and bugs easy.
• It directs the program development.
• Maintenance of the program becomes easy.

• Disadvantages of a Flowchart
• Complex logic can result in a complex flowchart.
• A flowchart must be redrawn to incorporate modifications and alterations.

• The various symbols used in a flowchart are shown below.


1.9.5 FLOWCHART EXAMPLES
Example 1: Draw a flowchart to add two integers and display the result.

Example 2: Draw a flowchart to find out whether a given number is even or odd.
Example 3: Draw a flowchart to find out whether a given number is prime or not.

Example 4: Draw a flowchart to find the largest number amongst three numbers.


Example 5: Draw a flowchart to determine whether the given year is a leap year or not.

Example 6: Draw a flowchart to determine whether a given string is a palindrome or not.
1.9.6 COMPARISON BETWEEN ALGORITHM AND FLOWCHART
