Introduction to Computers & Programming in C
Unit 1: Introduction to Computer, Programming & Algorithms
June 20, 2023
Contents
1 Introduction to Computer Programming & Algorithms
1.6 Assembler
1.7 Compiler
1.8 Interpreter
1.9 Linker
1.10 Loader
1.11 Algorithm
1.14 Flowcharts
This unit covers:
• Defining a computer and its components
• Algorithms
A computer is an electronic device that accepts data, performs operations, displays results, and stores the data or results as needed. It is a combination of hardware and software resources that integrate to provide various functionalities to the user. Hardware comprises the physical components of a computer, such as the processor, memory devices, monitor, keyboard, etc.
1. Input Unit
2. Central Processing Unit (CPU)
3. Output Unit
1. Input Unit The input unit consists of input devices that are attached to the computer. These devices take input and convert it into the binary language that the computer understands. Some common input devices are the keyboard, mouse, scanner, etc.
(a) The Input Unit is formed by attaching one or more input devices to a computer.
(b) A user inputs data and instructions through input devices such as a keyboard, mouse, etc.
(c) The input unit is used to provide data to the processor for further processing.
2. Central Processing Unit(CPU) Once the information is entered into the computer by the input device, the processor
processes it. The CPU is called the brain of the computer because it is the control centre of the computer. It first
fetches instructions from memory and then interprets them so as to know what is to be done. If required, data is
fetched from memory or input device. Thereafter CPU executes or performs the required computation, and then
either stores the output or displays it on the output device. The CPU has three main components, which are responsible for different functions: the Arithmetic Logic Unit (ALU), the Control Unit (CU), and memory registers.
(a) Arithmetic Logic Unit (ALU): The ALU is a digital circuit that is used to perform arithmetic and logical operations.
(b) Control Unit: The Control unit coordinates and controls the data flow in and out of the CPU, and also controls
all the operations of ALU, memory registers and also input/output units. It is also responsible for carrying out
all the instructions stored in the program. It decodes the fetched instruction, interprets it and sends control
signals to input/output devices until the required operation is done properly by ALU and memory.
i. The Control Unit is a component of the central processing unit of a computer that directs the operation
of the processor.
ii. It instructs the computer’s memory, arithmetic and logic unit, and input and output devices on how to respond to the instructions that have been sent to the processor.
iii. In order to execute the instructions, the components of a computer receive signals from the control unit.
iv. It is also called the central nervous system or brain of the computer.
(c) Memory Registers: A register is a temporary unit of memory in the CPU. These are used to store the data,
which is directly used by the processor. Registers can be of different sizes(16 bit, 32 bit, 64 bit and so on) and
each register inside the CPU has a specific function, like storing data, storing an instruction, storing address
of a location in memory etc. The user registers can be used by an assembly language programmer for storing
operands, intermediate results etc. The Accumulator (ACC) is the main register in the ALU and contains one of the operands of the operation to be performed. (Registers hold data only temporarily; it is the storage unit that keeps data and instructions permanently so that they are available whenever required.)
3. Output Unit The output unit consists of output devices that are attached to the computer. It converts the binary
data coming from the CPU to human understandable form. The common output devices are monitor, printer,
plotter, etc.
(a) The output unit displays or prints the processed data in a user-friendly format.
(b) The output unit is formed by attaching the output devices of a computer.
(c) The output unit accepts the information from the CPU and displays it in a user-readable form.
Characteristics of a Computer
• Speed: Computers can perform millions of calculations per second. The computation speed is extremely fast.
• Accuracy: Because computers operate on pre-programmed software, there is no room for human error.
• Diligence: They can perform complex and long calculations at the same time and with the same accuracy.
• Versatile: Computers are designed to be versatile. They can carry out multiple operations at the same time.
• Storage: Computers can store a large amount of data/ instructions in its memory, which can be retrieved at any
point of time.
How does a computer work? A computer processes data by following a series of steps. Input devices capture user commands
and data, sending them to the central processing unit (CPU). The CPU executes instructions, manipulating data stored
in temporary memory (RAM). The operating system manages hardware resources and software, enabling applications to
run. Results are sent to output devices for user interaction. Storage devices store data and programs for long-term use.
Computer architecture describes how a computer’s hardware and software components interact with each other in executing the machine’s purpose of processing data. Examples of computer architectures include the Von Neumann architecture (a) and the Harvard architecture (b).
Computers are integral to any organization’s infrastructure, from office equipment to remote devices like cell phones and wearables. Computer architecture establishes the principles governing how hardware and software connect to make a computer work; it is the core of its functioning. It defines the machine interface for which programming languages and associated processors are designed.
Figure 1: Von Neumann (a) and Harvard (b) architectures.
Two predominant approaches to architecture are the Complex Instruction Set Computer (CISC) and the Reduced Instruction Set Computer (RISC).
CISC processors have one processing unit, auxiliary memory, and a large register set with hundreds of unique commands, simplifying programming by executing tasks with single instructions. However, a single complex instruction may require multiple clock cycles to execute.
RISC architecture emerged to create high-performance computers with simpler hardware and a small set of instructions designed for faster execution.
How does computer architecture work? Computer architecture allows a computer to compute, retain, and retrieve information. This data can be digits in a spreadsheet, lines of text in a file, dots of color in an image, or patterns of sound.
• Purpose of computer architecture: Everything a system performs, from online surfing to printing, involves the transmission and processing of numbers. A computer’s architecture is merely a mathematical system intended to collect, transmit, store, and process numbers.
• Data in numbers: The computer stores all data as numerals. When a developer is engrossed in machine learning
code and analyzing sophisticated algorithms and data structures, it is easy to forget this.
• Manipulating data: The computer manages information using numerical operations. It is possible to display an
image on a screen by transferring a matrix of digits to the video memory, with every number reflecting a pixel of
color.
• Multifaceted functions: The components of a computer architecture include both software and hardware. The
processor — hardware that executes computer programs — is the primary part of any computer.
• Booting up: At the most elementary level of a computer design, programs are executed by the processor whenever
the computer is switched on. These programs configure the computer’s proper functioning and initialize the different
hardware sub-components to a known state. This software is known as firmware, since it is persistently preserved in the computer’s non-volatile memory.
• Support for temporary storage: Memory is also a vital component of computer architecture, with several types
often present in a single system. The memory is used to hold programs (applications) while they are being executed, along with the data they operate on.
• Support for permanent storage: A computer system can also include devices for storing data or exchanging information with the external world. These provide text input through the keyboard, the presentation of information on a monitor, and the transfer of programs and data from or to a disc drive.
• User-facing functionality: Software governs the operation and functioning of a computer. Several software ‘layers’
exist in computer architecture. Typically, a layer would only interface with layers below or above it.
Computer Hardware Hardware refers to the physical components of a computer, i.e., any part of the computer that we can physically touch. These are the primary electronic devices used to build up the computer. Examples of hardware in a computer are the Processor, Memory Devices, Monitor, Printer, Keyboard, Mouse, and so on.
1. Input Devices: Input Devices are those devices through which a user enters data and information into the Computer
or simply, User interacts with the Computer. Examples of Input Devices are Keyboard, Mouse, Scanner, etc.
2. Output Devices: Output Devices are devices that are used to show the result of the task performed by the user. Examples of Output Devices are the Monitor, Printer, etc.
3. Storage Devices: Storage Devices are devices that are used for storing data, and they are also known as Secondary Storage Devices. Examples of Storage Devices are CDs, DVDs, Hard Disks, etc.
4. Internal Components: Internal Components consist of important hardware devices present inside the system, such as the CPU and motherboard.
Computer Software Software is a collection of instructions, procedures, and documentation that performs different tasks on a computer system. We can also say that computer software is programming code executed on a computer processor. The code can be machine-level code or code written for an operating system. Examples of software are MS Word, Excel, etc.
1. System Software: System Software is a component of Computer Software that directly operates with Computer
Hardware which has the work to control the Computer’s Internal Functioning and also takes responsibility for
controlling Hardware Devices such as Printers, Storage Devices, etc. Types of System Software include Operating Systems, Device Drivers, etc.
2. Application Software: Application Software is software that runs on top of the system software. It
performs a specific task for users. Application Software basically includes Word Processors, Spreadsheets, etc.
Types of Application software include General Purpose Software, Customized Software, etc.
Programming language can be divided into three categories based on the levels of abstraction:
1. Low-level Language:
The low-level language is a programming language that provides no abstraction from the hardware and is represented in machine instructions, i.e., 0s and 1s.
There are two types of low-level programming language: machine-level language and assembly language.
(a) Machine-level language: A machine-level language is one that consists of a set of binary instructions that are either 0 or 1. Because computers can only read machine instructions in binary digits, i.e., 0 and 1, the instructions sent to the computer must be in binary code.
i. It is difficult for programmers to write programs in machine instructions, hence creating a program in a machine-level language is a complex task.
ii. It is prone to errors because it is difficult to comprehend, and it requires a lot of upkeep.
iii. A machine-level language is not portable, since each computer has its own set of machine instructions; a program written for one machine will therefore not run on another.
(b) Assembly language: Some commands in the assembly language are human-readable, such as move, add, sub, and so on. The challenges we had with machine-level language are mitigated to some extent by using assembly language.
i. Assembly language instructions are easier to write and understand since they use English words like move, add, and sub.
ii. We need a translator that transforms assembly language into machine code, since computers can only understand machine code.
iii. Assemblers are the translators that are utilized to translate the code. Because the data is stored in computer registers, and the computer must be aware of the varied sets of registers, the assembly language is still machine-dependent.
2. High-level Language:
A high-level language is a programming language that allows a programmer to create programs that are not dependent on the type of computer they are running on. High-level languages are distinguished from machine-level languages by their resemblance to human languages. When writing a program in a high-level language, complete attention can be given to the logic of the problem. To convert a high-level language to a low-level language, a compiler or interpreter is required.
(a) Because it is written in English-like words, a high-level language is simple to read, write, and maintain.
(b) The purpose of high-level languages is to overcome the drawbacks of low-level languages, namely portability.
(c) C and C++ are used for general purposes and are very popular.
3. Middle-level Language:
Programming languages with features of both low-level and high-level programming languages are referred to as middle-level languages. C, C++, and Java are often cited as examples of middle-level programming languages, since they combine high-level constructs with the ability to work close to the hardware.
1.6 Assembler
In computer science, an assembler is a program that converts assembly language into machine code. The output of an assembler is called an object file, which contains a combination of machine instructions as well as the data required to place these instructions in memory.
Assembly Language It is a low-level programming language in which there is a very strong correspondence between the instructions in the language and the machine code instructions of the computer’s hardware.
1.7 Compiler
1. A compiler is a program that translates source code from a high-level programming language into a lower-level, computer-understandable language (e.g. assembly language, object code, or machine code) to create an executable program.
2. It is more intelligent than an interpreter because it goes through the entire code at once.
3. It is platform-dependent.
4. It helps to detect errors, which are displayed after the compiler has read the entire code.
5. In other words, we can say that a compiler turns a high-level language into binary language or machine code at once.
1.8 Interpreter
1. An interpreter is also a program, like a compiler, but it converts a high-level language into machine code one statement at a time.
2. An interpreter goes through one line of code at a time, executes it, then moves on to the next line, and keeps going until there is an error in a line or the code has completed.
3. Interpreted code typically runs several times slower than compiled code, and the interpreter stops at the line where an error occurs.
4. Also, a compiler saves the machine code permanently for future use, but an interpreter does not; it translates the program again each time it is run.
1.9 Linker
For code to run, we need to include the header files or library files it depends on, which are pre-defined; if they are not included at the beginning of the program, the compiler will generate errors and the code will not work.
A linker is a program that takes one or more object files created by the compiler and combines them into one executable file. Linking is performed both at compile time and at load time. Compile time is when a high-level language is turned into machine code, and load time is when the code is loaded into memory by the loader.
1. Dynamic Linking:
(a) In dynamic linking there are more chances of error and failure.
(b) Dynamic linking keeps shared library code out of the executable to save RAM, so shared libraries are needed at run time.
2. Static Linking:
(a) In static linking there are fewer chances of error and no chance of linking failure at run time.
1.10 Loader
A loader is a program that loads the machine code of a program into system memory. It is the part of the OS responsible for loading programs, and it marks the very beginning of a program’s execution. Loading a program involves reading the contents of an executable file into memory. Only after the program is loaded does the operating system start it, by passing control to the loaded program code. Every OS that supports loading has a loader as part of its functionality.
1.11 Algorithm
What is an Algorithm?
An algorithm is a set of commands that must be followed for a computer to perform calculations or other problem-solving operations. According to its formal definition, an algorithm is a finite set of instructions carried out in a specific order to perform a particular task. It is not the entire program or code; it is the simple logic of a problem, represented as an informal description in the form of a flowchart or pseudocode.
1. Problem: A problem can be defined as a real-world problem, or an instance of one, for which you need to create a program or solution.
2. Algorithm: An algorithm is defined as a step-by-step process that will be designed for a problem.
3. Input: After designing an algorithm, the algorithm is given the necessary and desired inputs.
4. Processing unit: The input will be passed to the processing unit, producing the desired output.
Algorithms are step-by-step procedures designed to solve specific problems and perform tasks efficiently in the realm
of computer science and mathematics. These powerful sets of instructions form the backbone of modern technology and
govern everything from web searches to artificial intelligence. Here’s how algorithms work:
1. Input: Algorithms take input data, which can be in various formats, such as numbers, text, or images.
2. Processing: The algorithm processes the input data through a series of logical and mathematical operations, transforming it into the desired form.
3. Output: After the processing is complete, the algorithm produces an output, which could be a result, a decision, or another piece of information.
4. Efficiency: A key aspect of algorithms is their efficiency, aiming to accomplish tasks quickly and with minimal
resources.
5. Optimization: Algorithm designers constantly seek ways to optimize their algorithms, making them faster and more
reliable.
6. Implementation: Algorithms are implemented in various programming languages, enabling computers to execute them.
Example: Now, use an example to learn how to write algorithms.
Problem 1: Create an algorithm that multiplies two numbers and displays the output.
Step 1 - Start
Step 2 - Declare three integers x, y and z
Step 3 - Get the values of x and y
Step 4 - Multiply the values of x and y
Step 5 - Store the result in z
Step 6 - Print z
Step 7 - Stop
Algorithms instruct programmers on how to write code. In addition, the algorithm can be written as:
Step 1 - Start mul
Step 2 - Get values of x and y
Step 3 - z ← x * y
Step 4 - Display z
Step 5 - Stop
In algorithm design and analysis, the second method is typically used to describe an algorithm. It allows the analyst
to analyze the algorithm while ignoring all unwanted definitions easily. They can see which operations are being used and
how the process is progressing. It is optional to write step numbers. To solve a given problem, you create an algorithm.
As a result, many solution algorithms for a given problem can be derived. The following step is to evaluate the proposed algorithms and implement the most suitable one. A good algorithm should have the following qualities:
1. Correctness: It must produce the correct and accurate output for all valid inputs.
2. Clarity: The algorithm should be easy to understand and comprehend, making it maintainable and modifiable.
3. Scalability: It should handle larger data sets and problem sizes without a significant decrease in performance.
4. Reliability: The algorithm should consistently deliver correct results under different conditions and environments.
5. Optimality: Striving for the most efficient solution within the given problem constraints.
6. Adaptability: Ideally, it can be applied to a range of related problems with minimal adjustments.
7. Simplicity: Keeping the algorithm as simple as possible while meeting its requirements, avoiding unnecessary complexity.
The efficiency of an algorithm is analyzed along two dimensions:
1. Time Complexity The amount of time required to complete an algorithm’s execution is called time complexity. The
big O notation is used to represent an algorithm’s time complexity. The asymptotic notation for describing time
complexity, in this case, is big O notation. The time complexity is calculated primarily by counting the number of
steps required to complete the execution. Let us look at an example of time complexity.
mul = 1;
// this loop computes the multiplication of n numbers
for i = 1 to n
    mul = mul * i;
// when the loop ends, mul holds the multiplication of the n numbers
return mul;
The time complexity of the loop statement in the preceding code is at least n, and as the value of n grows, so does the time complexity. The rest of the code, i.e., return mul, has constant complexity, because it does not depend on the value of n and produces its result in a single step. The worst-case time complexity is generally considered, because it is the maximum time required for any given input size.
2. Space Complexity The amount of space an algorithm requires to solve a problem and produce an output is called its space complexity. Space complexity, like time complexity, is expressed in big O notation. The space required is the sum of the auxiliary space and the space taken by the input:
Space Complexity = Auxiliary Space + Input Size
Finally, after understanding what an algorithm is and how it is analyzed, let us look at flowcharts.
1.14 Flowcharts:
A flowchart is a diagram that illustrates the steps, sequences, and decisions of a process or workflow. While there are
many different types of flowcharts, a basic flowchart is the simplest form of a process map. It’s a powerful tool that can
be used in multiple fields for planning, visualizing, documenting, and improving processes.
Figure 4:
Now, we will discuss some examples on flowcharting. These examples will help in proper understanding of flowcharting
technique. This will help you in program development process in next unit of this block.
Problem 2: Flowchart for an algorithm which gets two numbers and prints the sum of their values.
Figure 6: Flowchart for Problem 2.
Problem 3: Flowchart for finding the greater of two numbers.
Figure 7: Flowchart for Problem 3.
Problem 4: Flowchart to calculate the average of 25 exam scores.
Figure 8: Flowchart for Problem 4.