Algorithm Basics_090158

The document provides an overview of algorithms, their design, characteristics, and the importance of analyzing their complexities in terms of time and space. It discusses the steps involved in problem development, the significance of pseudocode, and the role of functions and recursion in programming. Additionally, it emphasizes the need for scalable solutions and the trade-offs between time and memory efficiency in algorithm design.

Algorithm and Complexities

General Introduction

An algorithm is a set of steps or operations for solving a problem by performing calculation, data processing, and automated reasoning tasks. An algorithm is an efficient method that can be expressed within a finite amount of time and space.

An algorithm is the best way to represent the solution of a particular problem in a simple and efficient way. If we have an algorithm for a specific problem, we can implement it in any programming language, meaning that the algorithm is independent of any particular programming language.

Algorithm Design

The most important aspect of algorithm design is creating an algorithm that solves a problem using minimum time and space. A problem can be approached in different ways: some approaches are efficient with respect to time consumption, whereas other approaches are more memory efficient.

However, one has to keep in mind that time consumption and memory usage cannot always be optimized simultaneously. If we require an algorithm to run in less time, we may have to use more memory, and if we require an algorithm to run with less memory, it may need more time.

Problem Development Steps

The following steps are involved in solving computational problems.

 Problem definition

 Development of a model

 Specification of an Algorithm

 Designing an Algorithm

 Checking the correctness of an Algorithm

 Analysis of an Algorithm

 Implementation of an Algorithm

 Program testing

 Documentation

Characteristics of Algorithms

The main characteristics of algorithms are as follows:

 Algorithms must have a unique name

 Algorithms should have an explicitly defined set of inputs and outputs

 Algorithms are well-ordered with unambiguous operations

 Algorithms halt in a finite amount of time; an algorithm must not run indefinitely, i.e., it must end at some point

Pseudocode
Pseudocode gives a high-level description of an algorithm without the ambiguity associated
with plain text but also without the need to know the syntax of a particular programming
language.
The running time can be estimated in a general, machine-independent manner by using pseudocode to represent the algorithm as a set of fundamental operations, which can then be counted.
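
As an illustration of counting fundamental operations, consider a routine that finds the maximum of n numbers. This sketch is not part of the original text and is written in C rather than pseudocode; the comments tally the operations that dominate the running time.

/* A minimal sketch: find the maximum of n numbers and count operations. */
int findMax(const int a[], int n) {
    int max = a[0];                  /* 1 assignment                       */
    for (int i = 1; i < n; i++) {    /* loop runs n - 1 times              */
        if (a[i] > max)              /* 1 comparison per iteration         */
            max = a[i];              /* at most 1 assignment per iteration */
    }
    return max;                      /* 1 return                           */
}

Counting these operations gives about n - 1 comparisons plus at most n - 1 assignments, so the running time estimate grows linearly with n regardless of the machine or compiler used.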

Use of Data Structures and Algorithms Makes Your Code Scalable

Time is precious.

Suppose Alice and Bob are trying to solve a simple problem of finding the sum of the first 10^11 natural numbers. While Bob was writing the algorithm, Alice implemented it, proving that it is as simple as criticizing Donald Trump.

Algorithm (by Bob)

Initialize sum = 0

for every natural number n in range 1 to 10^11 (inclusive):

add n to sum

sum is your answer

Code (by Alice)

// Note: the exact total (about 5 * 10^21) does not fit even in 64 bits;
// this version is shown only to illustrate its running time.
long long findSum() {
    long long sum = 0;
    for (long long v = 1; v <= 100000000000LL; v++) {   // 10^11 iterations
        sum += v;
    }
    return sum;
}

Alice and Bob are feeling euphoric that they could build something of their own in almost no time. Let's look at how their solution actually performs.

Two of the most valuable resources for a computer program are time and memory.

The time taken by the computer to run a code is:

Time to run code = number of instructions * time to execute each instruction

The number of instructions depends on the code you used, and the time taken to execute
each instruction depends on your machine and compiler.

In this case, the total number of instructions executed (let's say x) is x = 1 + (10^11 + 1) + 10^11 + 1, which is x = 2 * 10^11 + 3.
Let us assume that a computer can execute y = 10^8 instructions in one second (this can vary depending on machine configuration). The time taken to run the above code is

Time taken to run code = x / y ≈ 2000 seconds, i.e., more than half an hour

Is it possible to optimize the algorithm so that Alice and Bob do not have to wait more than half an hour every time they run this code?

I am sure that you already guessed the right method. The sum of the first N natural numbers is given by the formula:

Sum = N * (N + 1) / 2

Converting it into code will look something like this:

long long sum(long long N) {
    // Closed-form formula; note that for N as large as 10^11 the result
    // exceeds the 64-bit range, so a wider type would be needed in practice.
    return N * (N + 1) / 2;
}

This code executes in just one instruction and gets the task done no matter how large the value is. Even if the value were greater than the total number of atoms in the universe, it would find the result in no time.

The time taken to solve the problem in this case is 1/y (which is 10 nanoseconds). By the
way, fusion reaction of a hydrogen bomb takes 40-50 ns, which means your program will
complete successfully even if someone throws a hydrogen bomb on your computer at the
same time you ran your code. :)

Note: Computers take a few instructions (not 1) to compute multiplication and division. I
have said 1 just for the sake of simplicity.

More on Scalability

Scalability is "scale" plus "ability": the quality of an algorithm or system to handle problems of larger size.

Consider the problem of setting up a classroom for 50 students. One of the simplest solutions is to book a room and get a blackboard and a few pieces of chalk, and the problem is solved.

But what if the size of problem increases? What if the number of students increased
to 200?

The solution still holds but it needs more resources. In this case, you will probably need a
much larger room (probably a theater), a projector screen and a digital pen.

What if the number of students increased to 1000?

The solution fails or uses a lot of resources when the size of the problem increases. This means your solution wasn't scalable.

What is a scalable solution then?


Consider a site like Khan Academy: millions of students can watch videos and read answers at the same time, and no significant extra resources are required. That is scalability: the ability of a solution to handle problems of larger size without a proportional increase in resources.

Our first solution for finding the sum of the first N natural numbers wasn't scalable, because its running time grew linearly with the size of the problem. Such algorithms are also known as linearly scalable algorithms.

Our second solution was very scalable and didn't require any more time to solve a problem of larger size. Such algorithms are known as constant time algorithms.

Memory is expensive

Memory is not always available in abundance. When dealing with code or systems that must store or produce a lot of data, it is critical for your algorithm to save memory wherever possible. For example, while storing data about people, you can save memory by storing only their date of birth, not their age. You can always calculate the age on the fly from the date of birth and the current date.
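
As a small illustrative sketch (not from the original text; the struct and field names are made up for the example), the age can be derived on demand from a stored year of birth:

#include <stdio.h>
#include <time.h>

/* Illustrative only: store the fixed year of birth and compute the age
   on the fly. Using just the year keeps the sketch short, so the result
   can be off by one before the person's birthday. */
struct person {
    char name[32];
    int  birth_year;        /* never changes, unlike age */
};

int current_age(const struct person *p) {
    time_t now = time(NULL);
    struct tm *t = localtime(&now);
    return (t->tm_year + 1900) - p->birth_year;
}

int main(void) {
    struct person alice = {"Alice", 1998};
    printf("%s is about %d years old\n", alice.name, current_age(&alice));
    return 0;
}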

Here are some examples of what learning algorithms and data structures enables you to do:

Example 1: Age Group Problem

Problems like finding the people of a certain age group can easily be solved with a slightly modified version of the binary search algorithm (assuming that the data is sorted).

The naive algorithm, which goes through all the persons one by one and checks whether each falls in the given age group, is linearly scalable. Binary search, on the other hand, is a logarithmically scalable algorithm: if the size of the problem is squared, the time taken to solve it is only doubled.

Suppose the binary search algorithm takes 1 second to find all the people of a certain age in a group of 1,000, while the naive algorithm, checking one person per second, takes about 1,000 seconds. Then for a group of 1 million people,

 the binary search algorithm will take only about 2 seconds to solve the problem
 the naive algorithm might take 1 million seconds, which is around 12 days.

The same binary search idea can also be used to find the square root of a number.
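
A minimal sketch of the idea (not taken from the original text; the array contents and the lower_bound helper are illustrative, and the ages are assumed to be sorted):

#include <stdio.h>

/* Return the index of the first element >= key in a sorted array,
   or n if no such element exists (a "lower bound" binary search). */
int lower_bound(const int ages[], int n, int key) {
    int lo = 0, hi = n;                 /* half-open search range [lo, hi) */
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (ages[mid] < key)
            lo = mid + 1;               /* answer lies to the right of mid */
        else
            hi = mid;                   /* mid could be the answer */
    }
    return lo;
}

int main(void) {
    int ages[] = {12, 15, 18, 18, 21, 25, 30, 34};    /* sorted ages */
    int n = sizeof(ages) / sizeof(ages[0]);

    /* People in the age group 18 to 25 lie between these two indices. */
    int first = lower_bound(ages, n, 18);
    int last  = lower_bound(ages, n, 26);
    printf("%d people are aged between 18 and 25\n", last - first);
    return 0;
}

Each probe halves the remaining range, which is where the logarithmic scaling comes from.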

Example 2: Rubik's Cube Problem

Imagine you are writing a program to find the solution of a Rubik's cube.

This cute looking puzzle has an annoying 43,252,003,274,489,856,000 positions, and these are just positions! Imagine the number of paths one can take to reach the wrong positions.

Fortunately, the way to solve this problem can be represented by the graph data structure. A graph algorithm known as Dijkstra's algorithm finds the shortest path between two nodes; applied to the graph of cube states, it lets you reach the solved position in the minimum number of moves.
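
A minimal sketch of Dijkstra's algorithm on a tiny weighted toy graph (the adjacency matrix is invented for illustration; a real Rubik's cube state graph is far too large to store explicitly):

#include <stdio.h>
#include <limits.h>

#define V 5                      /* number of vertices in the toy graph */

/* Dijkstra's algorithm on an adjacency matrix: computes the shortest
   distance from src to every vertex. A 0 in the matrix means "no edge". */
void dijkstra(const int graph[V][V], int src, int dist[V]) {
    int visited[V] = {0};
    for (int i = 0; i < V; i++)
        dist[i] = INT_MAX;
    dist[src] = 0;

    for (int count = 0; count < V - 1; count++) {
        /* Pick the unvisited vertex with the smallest known distance. */
        int u = -1;
        for (int i = 0; i < V; i++)
            if (!visited[i] && (u == -1 || dist[i] < dist[u]))
                u = i;
        if (dist[u] == INT_MAX) break;   /* remaining vertices unreachable */
        visited[u] = 1;

        /* Relax every edge leaving u. */
        for (int v = 0; v < V; v++)
            if (graph[u][v] && !visited[v] && dist[u] + graph[u][v] < dist[v])
                dist[v] = dist[u] + graph[u][v];
    }
}

int main(void) {
    int graph[V][V] = {
        {0, 4, 1, 0, 0},
        {4, 0, 2, 5, 0},
        {1, 2, 0, 8, 0},
        {0, 5, 8, 0, 3},
        {0, 0, 0, 3, 0},
    };
    int dist[V];
    dijkstra(graph, 0, dist);
    for (int i = 0; i < V; i++)
        printf("shortest distance from 0 to %d = %d\n", i, dist[i]);
    return 0;
}

Since every move of the cube has the same cost, a plain breadth-first search would also find the minimum number of moves; Dijkstra's algorithm generalizes this to graphs with weighted edges.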

Example 3: DNA Problem

DNA is a molecule that carries genetic information. It is made up of smaller units represented by the characters A, C, T and G.

Imagine yourself working in the field of bioinformatics. You are assigned the work of finding
out the occurrence of a particular pattern in a DNA strand.

It is a famous problem in computer science academia, and the simplest algorithm takes time proportional to

(number of characters in DNA strand) * (number of characters in pattern)

A typical DNA strand has millions of such units; say the pattern has just 100. The KMP algorithm can get this done in time proportional to

(number of characters in DNA strand) + (number of characters in pattern)

Replacing the * with a + makes a huge difference.


Considering that the pattern was 100 characters long, your algorithm is now roughly 100 times faster. If your pattern were 1,000 characters long, the KMP algorithm would be almost 1,000 times faster; if you were previously able to find the occurrence of a pattern in 1 second, it will now take you just about 1 ms. Put another way, instead of matching 1 strand, you can match 1,000 strands of similar length in the same time.
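
A minimal sketch of the KMP idea (not from the original text; the example strand and pattern are invented, and the fixed-size LPS table assumes a short pattern):

#include <stdio.h>
#include <string.h>

/* Build the longest-proper-prefix-which-is-also-suffix (LPS) table. */
void build_lps(const char *pat, int m, int *lps) {
    int len = 0;                    /* length of the previous longest prefix-suffix */
    lps[0] = 0;
    for (int i = 1; i < m; ) {
        if (pat[i] == pat[len]) {
            lps[i++] = ++len;
        } else if (len != 0) {
            len = lps[len - 1];     /* fall back without advancing i */
        } else {
            lps[i++] = 0;
        }
    }
}

/* Print every index where pat occurs in txt; runs in O(n + m) time. */
void kmp_search(const char *txt, const char *pat) {
    int n = strlen(txt), m = strlen(pat);
    int lps[256];                   /* assumes the pattern is shorter than 256 chars */
    build_lps(pat, m, lps);

    for (int i = 0, j = 0; i < n; ) {
        if (txt[i] == pat[j]) {
            i++; j++;
            if (j == m) {                        /* full match found */
                printf("pattern found at index %d\n", i - j);
                j = lps[j - 1];
            }
        } else if (j != 0) {
            j = lps[j - 1];                      /* reuse the already-matched prefix */
        } else {
            i++;
        }
    }
}

int main(void) {
    kmp_search("ACGTACGTTACGA", "ACGT");
    return 0;
}

The lps table records, for each prefix of the pattern, the length of its longest proper prefix that is also a suffix; this is what lets the search avoid re-examining characters of the strand.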

Functions
A function is a block of code that performs a specific task.

Suppose you need to create a program that draws a circle and colors it. You can create two functions to solve this problem:

 create a circle function


 create a color function

Dividing a complex problem into smaller chunks makes our program easier to understand and reuse.
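
A minimal sketch of that decomposition (the function names, parameters and printf bodies are placeholders, not a real drawing API):

#include <stdio.h>

/* Placeholder implementations: each sub-task gets its own function. */
void createCircle(double radius) {
    printf("drawing a circle of radius %.1f\n", radius);
}

void color(const char *fillColor) {
    printf("filling the circle with %s\n", fillColor);
}

int main(void) {
    createCircle(2.5);      /* first sub-task  */
    color("red");           /* second sub-task */
    return 0;
}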

Types of function
There are two types of function in C programming:

 Standard library functions

 User-defined functions

Standard library functions


The standard library functions are built-in functions in C programming.

These functions are defined in header files. For example,

 The printf() is a standard library function to send formatted output to the screen
(display output on the screen). This function is defined in the stdio.h header file.
Hence, to use the printf() function, we need to include the stdio.h header file
using #include <stdio.h>.
 The sqrt() function calculates the square root of a number. The function is defined
in the math.h header file.
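
For example, a short program that uses both of these standard library functions (a sketch; on some systems the math library must be linked with -lm):

#include <stdio.h>
#include <math.h>      /* for sqrt() */

int main(void) {
    double x = 25.0;
    /* printf() comes from stdio.h, sqrt() from math.h */
    printf("square root of %.1f is %.1f\n", x, sqrt(x));
    return 0;
}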

User-defined function
You can also create functions as per your need. Such functions created by
the user are known as user-defined functions.

How does a user-defined function work?


#include <stdio.h>

void functionName()
{
    ... .. ...
}

int main()
{
    ... .. ...
    functionName();
    ... .. ...
}

The execution of a C program begins from the main() function.

When the program encounters the call functionName();, control jumps to void functionName(), and the program starts executing the code inside functionName().

Control then jumps back to the main() function once the code inside the function definition has been executed.

Note, function names are identifiers and should be unique.

This is just an overview of user-defined functions. Visit these pages to learn more on:

 User-defined Function in C programming


 Types of user-defined Functions

Advantages of user-defined function


1. The program will be easier to understand, maintain and debug.
2. Reusable code that can be used in other programs
3. A large program can be divided into smaller modules. Hence, a large project can be
divided among many programmers.

C User-defined functions

A function is a block of code that performs a specific task.

C allows you to define functions according to your need. These functions are known as
user-defined functions. For example:

Suppose you need to create a circle and color it, depending upon the radius and color.
You can create two functions to solve this problem:

 createCircle() function
 color() function

Example: User-defined function


Here is an example that adds two integers. To perform this task, we have created a user-defined function addNumbers().
#include <stdio.h>
int addNumbers(int a, int b);          // function prototype

int main()
{
    int n1, n2, sum;

    printf("Enter two numbers: ");
    scanf("%d %d", &n1, &n2);

    sum = addNumbers(n1, n2);          // function call
    printf("sum = %d", sum);

    return 0;
}

int addNumbers(int a, int b)           // function definition
{
    int result;
    result = a + b;
    return result;                     // return statement
}

Recursion
A function that calls itself is known as a recursive function, and this technique is known as recursion.

How does recursion work?


void recurse()
{
    ... .. ...
    recurse();
    ... .. ...
}

int main()
{
    ... .. ...
    recurse();
    ... .. ...
}

The recursion continues until some condition is met to stop it.

To prevent infinite recursion, an if...else statement (or a similar approach) can be used, where one branch makes the recursive call and the other doesn't.

Example: Sum of Natural Numbers Using Recursion


#include <stdio.h>
int sum(int n);

int main() {
    int number, result;

    printf("Enter a positive integer: ");
    scanf("%d", &number);

    result = sum(number);

    printf("sum = %d", result);
    return 0;
}

int sum(int n) {
    if (n != 0)
        // sum() function calls itself
        return n + sum(n - 1);
    else
        return n;
}
Output
Enter a positive integer: 3
sum = 6

Initially, sum() is called from the main() function with number passed as an argument.
Suppose the value of n inside sum() is 3 initially. During the next function call,
2 is passed to sum(). This process continues until n is equal to 0.
When n is equal to 0, the if condition fails and the else part is executed, returning 0;
the partial sums then add up as the calls return, and the final sum is ultimately
returned to the main() function.

Advantages and Disadvantages of Recursion
Recursion makes a program elegant. However, if performance is vital, use loops
instead, as recursion is usually slower and consumes additional stack memory for each call.
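
For comparison, a loop-based version of the same sum of natural numbers might look like this (a sketch, not part of the original text):

/* Iterative version: no function-call overhead and no risk of
   stack overflow for large n. */
int sumIterative(int n) {
    int total = 0;
    for (int i = 1; i <= n; i++)
        total += i;
    return total;
}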

ANALYSIS OF ALGORITHMS

In theoretical analysis of algorithms, it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. The term "analysis of algorithms" was coined by Donald Knuth. Algorithm analysis is an important part of computational complexity theory, which provides a theoretical estimate of the resources an algorithm requires to solve a specific computational problem. Most algorithms are designed to work with inputs of arbitrary length. Analysis of algorithms is the determination of the amount of time and space resources required to execute them. Usually, the efficiency or running time of an algorithm is stated as a function relating the input length to the number of steps, known as time complexity, or to the volume of memory, known as space complexity.

The Need for Analysis

We will discuss the need for analysis of algorithms and how to choose a better algorithm for
a particular problem, since one computational problem can be solved by different algorithms.
By considering an algorithm for a specific problem, we can begin to develop pattern
recognition so that similar types of problems can be solved with its help.
Algorithms are often quite different from one another, even though their objective
is the same. For example, we know that a set of numbers can be sorted using
different algorithms. The number of comparisons performed by one algorithm may differ from
that of others for the same input; hence, the time complexity of those algorithms may differ. At the
same time, we need to calculate the memory space required by each algorithm.
Analysis of an algorithm is the process of analyzing its problem-solving capability
in terms of the time and space required (the size of memory needed for storage during
implementation). However, the main concern of analysis of algorithms is the required time
or performance. Generally, we perform the following types of analysis:

Worst-case: The maximum number of steps taken on any instance of size n.

Best-case: The minimum number of steps taken on any instance of size n.

Average case: The average number of steps taken over all instances of size n.

Amortized: The average cost per operation over a worst-case sequence of operations on an input of size n.
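
As a concrete illustration of these cases (this example is not from the original text), consider a simple linear search:

#include <stdio.h>

/* Linear search: returns the index of key in arr, or -1 if absent.
   Best case:    1 comparison  (key is the first element).
   Worst case:   n comparisons (key is last or not present).
   Average case: about n/2 comparisons when the key is present. */
int linearSearch(const int arr[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;
    return -1;
}

int main(void) {
    int data[] = {7, 3, 9, 4, 1};
    int n = sizeof(data) / sizeof(data[0]);
    printf("best case  -> index %d\n", linearSearch(data, n, 7));  /* first element   */
    printf("worst case -> index %d\n", linearSearch(data, n, 8));  /* key not present */
    return 0;
}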
To solve a problem, we need to consider time as well as space complexity, as a program
may run on a system where memory is limited but ample time is available, or vice versa.
In this context, consider bubble sort and merge sort: bubble sort does
not require additional memory, whereas merge sort requires additional space. Although the time
complexity of bubble sort is higher than that of merge sort, we may need to apply bubble
sort if the program has to run in an environment where memory is very limited.
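
A minimal sketch of bubble sort, highlighting that it sorts in place with only O(1) extra memory (merge sort, by contrast, needs an auxiliary array of size n):

/* In-place bubble sort: O(n^2) time but O(1) extra memory. */
void bubbleSort(int a[], int n) {
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - 1 - i; j++)
            if (a[j] > a[j + 1]) {       /* swap adjacent out-of-order pair */
                int tmp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
}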
