
Career Fair - 2024

Interview Preparation Kit

Batch 19

Faculty of Information Technology

University of Moratuwa
Table of Contents
CHAPTER 1 : Core Computer Science Fundamentals........................................................ 1
1.1 - Data Structures .............................................................................................................. 1
1.1.1 - Arrays ..................................................................................................................... 1
1.1.2 - Linked Lists ............................................................................................................ 2
1.1.3 - Stacks ...................................................................................................................... 5
1.1.4 - Queues .................................................................................................................... 6
1.2 - Algorithms ..................................................................................................................... 7
1.2.1 - Sorting Algorithms: ................................................................................................ 7
1.2.1.1 Bubble Sort .................................................................................................... 7
1.2.1.2 Selection Sort ................................................................................................. 9
1.2.1.3 Insertion Sort ................................................................................................ 12
1.3 - Big O notation and Time Complexity Analysis .......................................................... 14
1.3.1 - Time Complexity .................................................................................................. 14
1.3.2 - Space Complexity ................................................................................................. 15
1.4 - Object-Oriented Programming (OOP) principles ........................................................ 17
1.4.1 - Encapsulation........................................................................................................ 17
1.4.2 - Inheritance ............................................................................................................ 17
1.4.3 - Polymorphism....................................................................................................... 17
1.4.4 - Abstraction............................................................................................................ 18
1.5 - Programming Languages ............................................................................................. 18
1.5.1 - C............................................................................................................................ 18
1.5.2 - Java ....................................................................................................................... 19
CHAPTER 2 : Software Design and Architecture.............................................................. 21
2.1 - Software Development Methodologies ....................................................................... 21
2.1.1 - Agile Methodology ............................................................................................... 21
CHAPTER 3 : Databases & Data Management ................................................................. 25
3.1 - SQL.............................................................................................................................. 25
3.1.1 - What is SQL?........................................................................................................ 25
3.1.2 - Indexing and Query Optimization ........................................................................ 32
CHAPTER 4 : Web Development ........................................................................................ 33
4.1 - Web Development Fundamentals ................................................................................ 33
4.2 - ReactJS ........................................................................................................................ 33

4.3 - NodeJS ......................................................................................................................... 48
4.4 - Spring boot .................................................................................................................. 48
4.4.1 - What is a spring boot? (intro) ............................................................................... 48
4.4.2 - Differences Between Spring and Spring Boot? .................................................... 48
4.4.3 - Core concepts of Spring boot ............................................................................... 49
4.4.3.1 Dependency injection................................................................................... 49
4.4.3.2 Inversion Of Control (IOC) ......................................................................... 51
4.4.3.3 Spring Boot Annotations.............................................................................. 52
CHAPTER 5 : Mobile Development .................................................................................... 56
5.1 - ReactNative ................................................................................................................. 56
5.1.1 - What is React Native? .......................................................................................... 56
5.1.2 - React Native CLI vs Expo .................................................................................... 57
5.1.3 - Components .......................................................................................................... 58
5.1.4 - Props ..................................................................................................................... 58
5.1.5 - State ...................................................................................................................... 58
5.1.5.1 State Management ........................................................................................ 59
5.1.6 - Lifecycle Methods ................................................................................................ 61
5.1.7 - Navigation ............................................................................................................ 61
5.1.8 - Styling in React Native ......................................................................................... 62
5.1.9 - AsyncStorage ........................................................................................................ 63
5.1.10 - API Integration ................................................................................................... 64
5.1.11 - Testing in React Native ...................................................................................... 65
5.1.12 - Performance Optimization .................................................................................. 67
5.1.13 - Best Practices and Patterns ................................................................................. 67
5.2 - Flutter........................................................................................................................... 68
5.2.1 - Overview .............................................................................................................. 68
5.2.2 - Introduction .......................................................................................................... 68
5.2.3 - Download flutter. .................................................................................................. 68
5.2.4 - Create a flutter project. ......................................................................................... 68
5.2.5 - Dart ....................................................................................................................... 69
5.2.5.1 Dart programming language Basic .............................................................. 69
5.2.6 - Flutter widgets ...................................................................................................... 73
5.2.6.1 main.dart ...................................................................................................... 73
5.2.6.2 AppBar ......................................................................................................... 75
5.2.6.3 Text .............................................................................................................. 75
5.2.6.4 Icon .............................................................................................................. 76
5.2.6.5 Container ...................................................................................................... 76
5.2.6.6 Center ........................................................................................................... 77
5.2.6.7 ImageAsset ................................................................................................... 77
5.2.6.8 sizedBox ....................................................................................................... 79
5.3 - Create flutter layouts ................................................................................................... 80
CHAPTER 6 : Operating Systems ....................................................................................... 82
CHAPTER 7 : Security ......................................................................................................... 91
7.1 - Authorization & Authentication .................................................................................. 91
7.1.1 - Authentication ...................................................................................................... 91
7.1.1.1 Authentication Methods ............................................................................... 91
7.1.1.2 OAuth ........................................................................................................... 91
7.1.2 - Authorization ........................................................................................................ 92
7.1.2.1 Authorization methods ................................................................................. 92
7.2 - Password Storing, Hashing & Salt .............................................................................. 93
7.2.1 - Password Storing .................................................................................................. 93
7.2.2 - Password Hashing................................................................................................. 93
7.2.3 - Password Salt ........................................................................................................ 93
7.3 - API Security ................................................................................................................ 94
7.3.1 - JWT ...................................................................................................................... 94
7.4 - .env file usage in Public Repositories .......................................................................... 97
7.4.1 - What is a `.env` file .............................................................................................. 97
7.4.2 - Why use a `.env` file............................................................................................. 97
7.4.3 - Using `.env` with GitHub ..................................................................................... 97
7.4.3.1 Secrets Management in GitHub ................................................................... 98
7.4.4 - Best Practices ........................................................................................................ 98
CHAPTER 8 : Cloud Computing & Devops ..................................................................... 101
8.1 - Introduction to Cloud Computing.............................................................................. 101
8.1.1 - What is Cloud Computing? ................................................................................ 101
8.1.2 - Cloud Scalability ................................................................................................ 103
8.1.3 - Cloud Elasticity .................................................................................................. 104
8.1.4 - How cloud services hosted ................................................................................. 105
8.1.5 - Types of Cloud Computing ................................................................................ 107
8.1.6 - Deployment models for cloud computing .......................................................... 108
8.1.7 - Benefits of cloud computing............................................................................... 110
CHAPTER 9 : Testing and Quality Assurance ................................................................. 112
9.1 - Software testing Fundamentals .................................................................................. 112
9.2 - Quality Assurance...................................................................................................... 114
9.3 - Test Automation ........................................................................................................ 114

CHAPTER 1 : Core Computer Science Fundamentals

1.1 - Data Structures

1.1.1 - Arrays

Arrays are used to store and organize a collection of elements.

• Arrays have a fixed size, meaning that once they are created, their size cannot be
changed. If you need to store a different number of elements, you would typically need
to create a new array.
• The elements in an array are stored in contiguous memory locations. This
characteristic allows for efficient random access to any element based on its index.
• Each element in an array is accessed using an index. The index indicates the position
of the element in the array. In many programming languages, array indices start from
0.

• Arrays store elements of the same data type. For example, an array of integers will
only contain integer values.
• One dimensional array

• Two dimensional array
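The properties above can be illustrated with a short Java sketch (the variable names here are my own, chosen for illustration):

```java
public class ArrayDemo {
    public static void main(String[] args) {
        // One-dimensional array: fixed size, same data type, indices start at 0
        int[] numbers = {10, 20, 30, 40};
        System.out.println(numbers[2]);   // random access by index: 30

        // Two-dimensional array: an array of arrays
        int[][] grid = {
            {1, 2, 3},
            {4, 5, 6}
        };
        System.out.println(grid[1][0]);   // row 1, column 0: 4
    }
}
```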

1.1.2 - Linked Lists

A linked list is a linear data structure in which elements are not stored at contiguous
memory locations. Instead, it forms a series of connected nodes, where each node stores its data
and the address of the next node in the sequence.

• Data - It holds the actual value or data associated with the node.
• Next pointer - It stores the memory address (reference) of the next node in the
sequence.
• Head - The linked list is accessed through the head node, which points to the first node
in the list.
• Tail - The last node in the list points to NULL or nullptr, indicating the end of the list.
This node is known as the tail node.

Compared to arrays, linked lists offer dynamic memory allocation, efficient memory
utilization, and efficient insertion and deletion operations.
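As a minimal sketch of these ideas in Java (the names `Node`, `addFirst`, and `size` are my own, not from the kit):

```java
// A minimal singly linked list (illustrative, not production code).
public class SinglyLinkedList {
    // Each node stores the data and a reference (next pointer) to the next node.
    static class Node {
        int data;
        Node next;   // null for the tail node
        Node(int data) { this.data = data; }
    }

    Node head;   // the list is accessed through the head; null when empty

    // Insert at the front: O(1), no shifting of elements is needed.
    void addFirst(int value) {
        Node node = new Node(value);
        node.next = head;
        head = node;
    }

    // Traverse from head to tail (the node whose next is null).
    int size() {
        int count = 0;
        for (Node cur = head; cur != null; cur = cur.next) count++;
        return count;
    }

    public static void main(String[] args) {
        SinglyLinkedList list = new SinglyLinkedList();
        list.addFirst(3);
        list.addFirst(2);
        list.addFirst(1);
        System.out.println(list.size());   // 3
    }
}
```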

Advantages of linked list

• Linked lists can easily grow or shrink in size during program execution.
• Inserting or deleting elements in a linked list is generally more efficient than in an
array, especially when dealing with large datasets.
• Linked lists can efficiently utilize memory, as memory is allocated on demand. Unlike
arrays, linked lists do not require pre-allocation of memory for a specific size.
• Linked lists do not require elements to be stored in contiguous memory locations.
• Linked lists do not suffer from the issue of wasted space that can occur in arrays due
to the need to allocate a fixed-size block of memory.
• Reordering elements in a linked list involves updating pointers, making it easier to
rearrange the structure.
Disadvantages of linked list

• Random Access: Unlike arrays, linked lists do not allow direct access to elements by
index. Traversal is required to reach a specific node.
• Extra Memory: Linked lists require additional memory for storing the pointers,
compared to arrays.
Difference between arrays and linked lists

Memory Allocation

• Array: Elements are stored in contiguous memory locations. Memory is allocated at
declaration, and the array is a fixed-size data structure.
• Linked list: Elements are stored in non-contiguous memory locations. Memory is
allocated for each node individually, and the list can dynamically grow or shrink.

Size

• Array: Fixed size, determined at the time of declaration; the size cannot be changed
during runtime.
• Linked list: Can dynamically adjust its size by adding or removing nodes; not
constrained by a fixed size.

Insertion and Deletion

• Array: Inserting or deleting elements, especially in the middle, can be inefficient,
because elements may need to be shifted to accommodate changes.
• Linked list: More efficient; adding or removing a node only involves updating
pointers, without shifting other elements.

Random Access

• Array: Constant-time random access; accessing an element at a specific index is fast
because the elements are stored in contiguous memory.
• Linked list: Slower than arrays; the list must be traversed to find a specific element,
which takes linear time.

Memory Overhead

• Array: Less memory overhead, since only the actual data values are stored.
• Linked list: Additional memory overhead, since each node also stores a
pointer/reference.

1.1.3 - Stacks

A stack is a linear data structure that follows the Last In, First Out (LIFO) principle: the
last element added to the stack is the first one to be removed.

Imagine a pile of plates in a canteen. When you need a plate, you take the one from the top. So,
the plate at the bottom stays there the longest. This is like a rule - the Last In is the First Out,
or you can say the First In is the Last Out. It's like a stack of plates where the newest one goes
on top, and you always use the one on the top first.
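The plate analogy can be sketched with the standard `java.util.ArrayDeque`, a common stack implementation in Java (the plate strings here are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackDemo {
    public static void main(String[] args) {
        Deque<String> plates = new ArrayDeque<>();
        plates.push("bottom plate");   // first in
        plates.push("middle plate");
        plates.push("top plate");      // last in

        // LIFO: the last plate pushed is the first one popped.
        System.out.println(plates.pop());   // "top plate"
        System.out.println(plates.pop());   // "middle plate"
    }
}
```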

1.1.4 - Queues

A queue is a linear data structure that follows the First In, First Out (FIFO) principle: the
first element added to the queue is the first one to be removed.

Queues are commonly used for tasks like managing tasks in a print queue, handling requests
in a web server, or in scenarios where elements must be processed in the order they are
received.

In Java, one of the commonly used classes that implements the Queue interface is LinkedList.
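A small Java sketch of FIFO behaviour, using LinkedList through the Queue interface as mentioned above (the print-job names are illustrative):

```java
import java.util.LinkedList;
import java.util.Queue;

public class QueueDemo {
    public static void main(String[] args) {
        // LinkedList is a common implementation of the Queue interface.
        Queue<String> printQueue = new LinkedList<>();
        printQueue.offer("job-1");   // enqueue at the tail
        printQueue.offer("job-2");
        printQueue.offer("job-3");

        // FIFO: jobs are processed in the order they arrived.
        System.out.println(printQueue.poll());   // "job-1" (removed)
        System.out.println(printQueue.peek());   // "job-2" (next, not removed)
    }
}
```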

1.2 - Algorithms

1.2.1 - Sorting Algorithms:

1.2.1.1 Bubble Sort


Bubble Sort is one of the simplest sorting algorithms: it works by repeatedly swapping adjacent
elements that are in the wrong order. It is not suitable for large data sets, as its average and
worst-case time complexity is O(n²).

In the Bubble Sort algorithm:

• Traverse from the left, comparing adjacent elements; the larger of the two is moved to
the right.
• In this way, the largest element reaches the rightmost end after the first pass.
• The process is repeated to place the second largest element, and so on, until the data
is sorted.
How does Bubble Sort Work?

Input: arr[] = {6, 0, 5}

First Pass: The largest element is placed in its correct position, i.e., the end of the array.

Second Pass: Place the second largest element at its correct position.

Third Pass: Place the remaining two elements at their correct positions.

Implementation of Bubble Sort
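The implementation appears as a figure in the original kit; as a sketch, one straightforward Java version (the class name and the early-exit `swapped` flag are my additions) might look like this:

```java
public class BubbleSort {
    // Repeatedly swap adjacent elements that are out of order.
    // After pass i, the i-th largest element is in its final position.
    public static void sort(int[] arr) {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            boolean swapped = false;
            for (int j = 0; j < n - 1 - i; j++) {
                if (arr[j] > arr[j + 1]) {
                    int tmp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = tmp;
                    swapped = true;
                }
            }
            if (!swapped) break;   // no swaps: already sorted (gives the O(n) best case)
        }
    }

    public static void main(String[] args) {
        int[] arr = {6, 0, 5};   // the example input from the text
        sort(arr);
        System.out.println(java.util.Arrays.toString(arr));   // [0, 5, 6]
    }
}
```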

1.2.1.2 Selection Sort
Selection sort is a simple, in-place sorting algorithm that works by repeatedly selecting the
smallest (or largest) element from the unsorted portion of the list and moving it to the end of
the sorted portion of the list.

How does Selection Sort Algorithm work?

Let's consider the following array as an example: arr[] = {64, 25, 12, 22, 11}

First pass:

For the first position in the sorted array, the whole array is traversed sequentially from index
0 to 4. 64 is currently stored at the first position; after traversing the whole array, it is clear
that 11 is the lowest value.

Thus, swap 64 with 11. After one iteration, 11, which happens to be the least value in the
array, appears in the first position of the sorted list.

Second Pass:

For the second position, where 25 is present, again traverse the rest of the array sequentially.

After traversing, we find that 12 is the second lowest value in the array and should appear in
the second place, so swap 25 with 12.

Third Pass:

Now, for the third position, where 25 is present, again traverse the rest of the array and find
the third lowest value.

While traversing, 22 turns out to be the third lowest value, and it should appear in the third
place, so swap 22 with the element at the third position.

Fourth pass:

Similarly, for the fourth position, traverse the rest of the array and find the fourth lowest
element. Since 25 is the fourth lowest value, it is placed at the fourth position.

Fifth Pass:

Finally, the largest value in the array is automatically left at the last position. The resulting
array is sorted.

Implementation of Selection Sort
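The implementation appears as a figure in the original kit; a possible Java version (the class and method names are mine) is:

```java
public class SelectionSort {
    // On each pass, find the smallest element in the unsorted portion
    // and swap it into the next position of the sorted portion.
    public static void sort(int[] arr) {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            int minIndex = i;
            for (int j = i + 1; j < n; j++) {
                if (arr[j] < arr[minIndex]) minIndex = j;
            }
            int tmp = arr[i];          // swap the minimum into position i
            arr[i] = arr[minIndex];
            arr[minIndex] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] arr = {64, 25, 12, 22, 11};   // the example input from the text
        sort(arr);
        System.out.println(java.util.Arrays.toString(arr));   // [11, 12, 22, 25, 64]
    }
}
```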

1.2.1.3 Insertion Sort
Insertion sort is a simple sorting algorithm that works much like sorting playing cards in your
hands. The array is virtually split into a sorted and an unsorted part; values from the unsorted
part are picked and placed at the correct position in the sorted part.

To sort an array of size N in ascending order, iterate over the array and compare the current
element (the key) to its predecessor. If the key is smaller than its predecessor, compare it to
the elements before it, moving the greater elements one position up to make space for the
key.

Implementation of Insertion Sort
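The implementation appears as a figure in the original kit; a possible Java version (class and method names mine) is:

```java
public class InsertionSort {
    // Grow a sorted prefix: take the next key and shift larger
    // elements one position right until the key's position is found.
    public static void sort(int[] arr) {
        for (int i = 1; i < arr.length; i++) {
            int key = arr[i];
            int j = i - 1;
            while (j >= 0 && arr[j] > key) {
                arr[j + 1] = arr[j];   // shift the greater element one position up
                j--;
            }
            arr[j + 1] = key;          // insert the key into its correct position
        }
    }

    public static void main(String[] args) {
        int[] arr = {12, 11, 13, 5, 6};
        sort(arr);
        System.out.println(java.util.Arrays.toString(arr));   // [5, 6, 11, 12, 13]
    }
}
```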

1.3 - Big O notation and Time Complexity Analysis

1.3.1 - Time Complexity

Time complexity is a measure of the amount of time an algorithm takes to complete as a
function of the length of the input. It is an essential concept in algorithm design and
optimization, providing a theoretical estimate of the scalability and efficiency of an algorithm.

Big O Notation: This is the most common metric for describing an algorithm's time
complexity. It describes the upper bound of the running time, representing the worst-case
scenario. Big O notation simplifies analysis by focusing on the most significant factors and
ignoring constants and smaller terms.

Examples and Calculations:

1. Linear Search Algorithm:
o Description: Searches each item in a list sequentially until the target is found or
the list ends.
o Time Complexity: O(n), where n is the number of elements in the list.
o Calculation: In the worst case, every element is checked once, so the time
taken grows linearly with the size of the input.
2. Binary Search Algorithm:
o Description: Searches a sorted array by repeatedly dividing the search interval
in half.
o Time Complexity: O(log n), where n is the number of elements in the array.
o Calculation: Each step halves the array size, so the maximum number of steps
is proportional to the logarithm of the array size.
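The two search algorithms above can be sketched in Java as follows (the class and method names are my own):

```java
public class SearchDemo {
    // Linear search: check each element once; O(n) in the worst case.
    public static int linearSearch(int[] arr, int target) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == target) return i;
        }
        return -1;   // not found
    }

    // Binary search on a SORTED array: halve the interval each step; O(log n).
    public static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // written this way to avoid (lo + hi) overflow
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;   // not found
    }

    public static void main(String[] args) {
        int[] a = {2, 4, 7, 9, 15};
        System.out.println(linearSearch(a, 9));   // 3
        System.out.println(binarySearch(a, 9));   // 3
    }
}
```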

Time Complexities of Sorting Algorithms:

Algorithm        Worst Case    Best Case

Bubble Sort      O(n²)         O(n)

Insertion Sort   O(n²)         O(n)

Selection Sort   O(n²)         O(n²)

Merge Sort       O(n log n)    O(n log n)

Quick Sort       O(n²)         O(n log n)

Possible Interview Questions from Time Complexity:

1. Explain the difference between O(n) and O(log n) time complexities.

O(n) Time Complexity: This denotes linear time complexity. Here, the execution time of an
algorithm increases linearly with the size of the input data. In simple terms, if the input data
doubles, the time taken for the algorithm also doubles. This is typical for algorithms that need
to examine each element of the input data once, such as a linear search in an array.

O(log n) Time Complexity: This denotes logarithmic time complexity. In this case, the
execution time of an algorithm increases logarithmically with the size of the input data. This
means that when the input data size increases exponentially, the time taken increases only
linearly. Algorithms with O(log n) complexity are much faster than O(n) for large data sets.

2. How does time complexity affect the scalability of an algorithm?

The time complexity of an algorithm is crucial in determining its scalability, that is, how well
it performs as the size of the input data increases. Algorithms with lower time complexity (like
O(1), O(log n)) scale better because their execution time increases more slowly as the input
size grows. They are more suitable for large data sets.

On the other hand, algorithms with higher time complexity (like O(n^2), O(2^n)) do not scale
as well. Their execution time increases rapidly with the input size, making them less practical
for large data sets. These algorithms can become a bottleneck in processing and are often targets
for optimization.

3. Can you provide an example of an algorithm with O(1) time complexity?


• Accessing an element in an array by index. In an array, each element is stored at a
specific index, and accessing any element by its index is done in constant time,
regardless of the size of the array.
• A hash table lookup, where data is accessed via a key. Theoretically, in the best case,
this operation has an O(1) time complexity because it doesn't depend on the number of
elements in the hash table.
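A tiny Java illustration of both O(1) examples above (the names are illustrative; the hash-table lookup is average-case O(1), assuming few collisions):

```java
import java.util.HashMap;
import java.util.Map;

public class ConstantTimeDemo {
    public static void main(String[] args) {
        int[] arr = {10, 20, 30};
        // Array access by index: one address computation, O(1)
        // regardless of the array's size.
        int third = arr[2];   // 30

        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);
        // Hash table lookup by key: average-case O(1), independent
        // of the number of elements stored.
        int age = ages.get("alice");   // 30

        System.out.println(third + " " + age);
    }
}
```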

1.3.2 - Space Complexity

Space complexity is a measure of the amount of memory an algorithm needs in terms of the
size of the input data. Like time complexity, it helps in evaluating the efficiency of an
algorithm, focusing on the resources it requires.

Big O Notation for Space Complexity: Similar to time complexity, Big O notation is used to
describe the upper limit of the space complexity, focusing on the most significant factors.

Examples and Calculations:

1. Array Sum Algorithm:
o Description: Computes the sum of all elements in an array.
o Space Complexity: O(1), constant space.
o Calculation: The algorithm uses a fixed amount of space (a single variable for
the sum) regardless of the input array size.
2. Merge Sort Algorithm:
o Description: A divide-and-conquer algorithm that sorts an array by dividing it
into halves, sorting them, and then merging them.
o Space Complexity: O(n), where n is the number of elements in the array.
o Calculation: Merge sort requires additional space proportional to the size of the
input array for the merging process.
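The array-sum example above, sketched in Java (class and method names mine), uses one accumulator no matter how large the input is, hence O(1) extra space:

```java
public class ArraySum {
    // A single accumulator regardless of input size: O(1) auxiliary space.
    public static long sum(int[] arr) {
        long total = 0;   // the only extra storage used
        for (int value : arr) total += value;
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3, 4}));   // 10
    }
}
```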

Space Complexities of Sorting Algorithms:

Algorithm        Space Complexity   Notes

Bubble Sort      O(1)               No additional space is used

Insertion Sort   O(1)               Only requires additional space for the
                                    element being inserted

Selection Sort   O(1)               No additional space is used

Merge Sort       O(n)               Requires additional space to merge sorted
                                    subarrays

Quick Sort       O(log n)           Stack space due to recursion, in the best
                                    or average case

Possible Interview Questions from Space Complexity:

1. How does space complexity affect an algorithm's memory usage?

Space complexity is critical in scenarios where memory is a constrained resource. An
algorithm with higher space complexity uses memory less efficiently: allocation costs
increase, and large data sets can exhaust available memory. This, in turn, can cause
performance problems such as longer garbage-collection pauses in languages that have
garbage collection, or even out-of-memory errors.

2. Can you differentiate between in-place and out-of-place sorting algorithms in
terms of space complexity?

• In-Place Sorting Algorithms: An in-place sorting algorithm rearranges the elements
within the array using only a small, constant amount of extra storage. Its space
complexity is O(1), meaning the memory required does not grow with the size of the
input. Examples of in-place sorting algorithms include Bubble Sort, Selection Sort,
and Insertion Sort.
• Out-of-Place Sorting Algorithms: An out-of-place sorting algorithm, on the other
hand, requires additional storage whose size depends on the size of the input; the
extra space is used to hold copies of the elements while they are sorted. This leads to
space complexities larger than O(1), such as the O(n) of Merge Sort, where n is the
size of the input array.

1.4 - Object-Oriented Programming (OOP) principles

1.4.1 - Encapsulation

Encapsulation is the bundling of variables and the methods that operate on them inside a
class. It also enables data hiding: fields are kept private and exposed through getters and
setters. A completely encapsulated class can be created in Java by making all the class
variables private. Data hiding, increased flexibility, and reusability are some advantages of
encapsulation.
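As an illustrative sketch (the `BankAccount` example is my own, not from the kit), a fully encapsulated Java class might look like this:

```java
// All fields are private; access goes through getters and setters.
public class BankAccount {
    private double balance;   // hidden from code outside the class

    public double getBalance() {
        return balance;       // read-only access to the hidden field
    }

    public void deposit(double amount) {
        // A setter-style method can validate input: one benefit of data hiding.
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        balance += amount;
    }

    public static void main(String[] args) {
        BankAccount acc = new BankAccount();
        acc.deposit(100.0);
        System.out.println(acc.getBalance());   // 100.0
    }
}
```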

1.4.2 - Inheritance

Inheritance is a core concept of object-oriented programming that eliminates redundant code.
We therefore use inheritance for method overriding and code reusability. Subclass and
superclass are the two most important terms used with inheritance in Java.

Superclass: the base/parent class from which a subclass inherits its features.

Subclass: an extended class that inherits fields and methods from another class.

Reference Diagrams:
https://miro.medium.com/v2/resize:fit:1400/format:webp/1*_D2AWEp18oxFG2IJj4tbWg.png
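A minimal Java sketch of superclass, subclass, and overriding (the `Animal`/`Dog` names are illustrative):

```java
class Animal {                      // superclass (base/parent class)
    String describe() { return "an animal"; }
}

class Dog extends Animal {          // subclass: inherits Animal's features
    @Override
    String describe() {             // method overriding; super calls the parent version
        return "a dog, which is also " + super.describe();
    }
}

public class InheritanceDemo {
    public static void main(String[] args) {
        Animal pet = new Dog();
        System.out.println(pet.describe());   // a dog, which is also an animal
    }
}
```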

1.4.3 - Polymorphism

Having many forms is the simple definition of polymorphism. In the real world, one person
can play different roles: the same person can be a brother, a father, a husband, or a friend,
but their behaviour changes according to the situation. This is the meaning of polymorphism.
There are two types of polymorphism.

Compile-time polymorphism (static polymorphism): This form of polymorphism is created
through method overloading. Methods are overloaded when there are multiple methods with
the same name but different parameter lists (methods with different method signatures).

Runtime polymorphism (dynamic polymorphism): A call to an overridden method is resolved at runtime rather than at compile time. This kind of polymorphism is accomplished through method overriding: declaring a method in a subclass that already exists in the parent class. Constructors, static methods, and final methods cannot be overridden; overriding is allowed only for instance methods inherited by the child class.
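A minimal overriding sketch (Animal and Dog are illustrative names):

```java
// Method overriding: the subclass redeclares a method that already
// exists in the parent class; the call is dispatched at runtime based
// on the actual object's type, not the reference type.
class Animal {
    String sound() { return "some sound"; }
}

class Dog extends Animal {
    @Override
    String sound() { return "woof"; }   // replaces the inherited version
}
```

With `Animal a = new Dog();`, the call `a.sound()` returns "woof" even though the reference type is Animal; that runtime dispatch is what makes this dynamic polymorphism.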

1.4.4 - Abstraction

Data abstraction is the process of hiding implementation details and showing only essential information to the user. The abstract keyword (a non-access modifier) is used for methods and classes. Abstract classes cannot be used to create objects, and abstract methods can only be declared inside an abstract class, although an abstract class may also contain non-abstract methods. Abstract methods have no body; the body is provided by the subclass that inherits them. Abstract classes should be extended, and their abstract methods should be overridden.
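A short sketch of these rules (Shape and Square are hypothetical names for illustration):

```java
// An abstract class cannot be instantiated; it declares an abstract
// method (no body) that subclasses must implement, alongside an
// ordinary non-abstract method.
abstract class Shape {
    abstract double area();          // no body: subclasses provide it

    String describe() {              // non-abstract methods are allowed
        return "area = " + area();
    }
}

class Square extends Shape {
    double side;
    Square(double side) { this.side = side; }

    @Override
    double area() { return side * side; }
}
```

`new Shape()` would be a compile error, but a Square can be used through a Shape reference, and describe() works because the subclass supplied the area() body.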

1.5 - Programming Languages

1.5.1 - C

1. How does the execution of a C program differ from that of a Java program?
In C, the program is typically compiled to machine code, and the resulting executable interacts
directly with the operating system. There is no intermediate bytecode or virtual machine as in
Java.

2. Explain the concept of a process and a thread in the context of C programming.


A process in C is an independent program with its own memory space, while a thread is a
lightweight, independent unit of execution within a process. Processes do not share memory,
but threads within the same process do.

3. What is a pointer in C, and how is it different from other data types?


A pointer in C is a variable that stores the memory address of another variable. It allows indirect
access to the value or data stored at that address. Pointers provide greater flexibility but require
careful handling to avoid errors.

4. Discuss the difference between the stack and the heap in C.


The stack in C is used for storing function call information, local variables, and control flow data. The heap is used for dynamic memory allocation, where memory is allocated and deallocated manually by the programmer using functions like malloc() and free().

5. Explain the steps involved in the compilation of a C program.


Compilation involves preprocessing, compiling, assembling, and linking. Preprocessing
includes handling directives like #include, compiling translates source code to assembly,
assembling converts assembly to machine code, and linking combines object files into an
executable.

6. What is the role of header files in C, and how are they different from source files?
Header files contain function prototypes, macro definitions, and declarations used in a program.
Source files contain the actual code. Header files are included using #include in source files to
share declarations among multiple source files.

7. How does C achieve platform-specific compilation for different operating systems?
C achieves platform-specific compilation by using conditional compilation directives like #ifdef and #endif. Different sections of code can be included or excluded based on the target platform during compilation.

1.5.2 - Java

1. What is garbage collection in Java?


Garbage collection in Java is a process of automatic memory management where the Java
Virtual Machine (JVM) identifies and frees up the memory occupied by objects that are no
longer reachable or in use by the program.

2. How does the garbage collector determine which objects are eligible for collection?
The garbage collector considers objects that are not reachable through references from the root
of the object graph (e.g., local variables, static variables, etc.) as eligible for garbage collection.
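Reachability can be observed from Java code with a WeakReference, which tracks an object without keeping it alive. This is a sketch under the assumption that System.gc() is only a request; the class name is our own.

```java
import java.lang.ref.WeakReference;

// Sketch of reachability: an object is eligible for collection only
// once no strong reference from a GC root (here, a local variable)
// can reach it.
class ReachabilityDemo {
    static boolean wasReachableWhileReferenced() {
        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<>(strong);

        boolean reachable = (weak.get() != null); // still strongly reachable

        strong = null;   // drop the last strong reference: now eligible
        System.gc();     // only a *request*; collection is not guaranteed,
                         // so we assert nothing about weak.get() afterwards
        return reachable;
    }
}
```

While the local variable `strong` holds the object, the weak reference still sees it; after `strong = null` the object becomes eligible, and some later GC cycle may clear the weak reference.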

3. Explain the difference between garbage collection in Java and manual memory
management in languages like C.
In Java, the garbage collector automatically manages memory by reclaiming unused objects,
while in languages like C, developers must manually allocate and deallocate memory using
functions like malloc() and free().

4. What is the Java Virtual Machine (JVM)?


The JVM is an abstract machine that provides a runtime environment for Java programs to
execute. It interprets Java bytecode or translates it to native machine code for execution.

5. Describe the life cycle of a Java program's execution in the JVM.


The life cycle typically involves the loading of classes, bytecode verification, execution,
garbage collection, and unloading of classes. The JVM manages these processes to execute
Java programs.

6. What are the differences between the stack and the heap in Java?
The stack is used for storing method call information, local variables, and partial results, while
the heap is used for dynamic memory allocation, where objects are created and stored.
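The practical consequence can be sketched as follows (class and method names are our own): primitives passed to a method are copied onto the callee's stack frame, while objects live on the heap and are shared through references.

```java
class StackHeapDemo {
    // `n` is a stack-frame copy; reassigning it never affects the caller.
    static void bumpPrimitive(int n) {
        n = n + 1;
    }

    // The reference is copied, but both copies point at the same heap
    // array, so the mutation is visible to the caller.
    static void bumpArray(int[] values) {
        values[0] = values[0] + 1;
    }
}
```

After calling both methods, the caller's primitive is unchanged, but the array element has been incremented in place.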

7. Explain the concept of multithreading in Java.
Multithreading in Java allows multiple threads of execution to run concurrently within the same
program. Each thread represents an independent flow of control, sharing the same resources
and memory space.
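A minimal sketch of two threads sharing the same heap data: both run the same Runnable and increment one shared counter, guarded by a lock. The class name is our own.

```java
class TwoThreadsDemo {
    static int runTwoThreads() {
        int[] shared = new int[1];   // heap data visible to both threads

        Runnable task = () -> {
            synchronized (shared) {  // serialize access to the counter
                shared[0]++;
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();               // wait for both threads to finish
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return shared[0];
    }
}
```

The synchronized block matters: without it, the two increments could interleave and one update could be lost.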

8. Explain the difference between compile-time and runtime in Java.


Compile-time refers to the period when the source code is translated into bytecode or machine
code. Runtime is when the compiled code is executed by the Java Virtual Machine (JVM).

9. What is the Just-In-Time (JIT) compiler in Java, and how does it improve
performance?
The JIT compiler in Java translates bytecode into native machine code at runtime. This can
lead to improved performance as the JVM can adapt the code to the underlying hardware and
execute it more efficiently.

10. How does Java support platform independence through the compilation and
runtime process?
Java achieves platform independence by compiling source code into an intermediate bytecode,
which is then interpreted or compiled at runtime by the JVM. This allows the same bytecode
to run on different platforms with a compatible JVM.

CHAPTER 2 : Software Design and Architecture

2.1 - Software Development Methodologies

2.1.1 - Agile Methodology

Agile methodology is a project management framework that breaks projects down into
several dynamic phases, commonly known as sprints. The Agile framework is an iterative
methodology. After every sprint, teams reflect and look back to see if there was anything that
could be improved so they can adjust their strategy for the next sprint.

The Agile Manifesto is a document that focuses on four values and 12 principles for Agile
software development. It was published in February 2001 by 17 software developers who
needed an alternative to the more linear product development process.

Agile’s four main values are:

Individuals and interactions over processes and tools - Agile teams value team collaboration and teamwork over working independently and doing things "by the book."

Working software over comprehensive documentation - The software that Agile teams develop
should work. Additional work, like documentation, is not as important as developing good
software.

Customer collaboration over contract negotiation - Customers are extremely important within
the Agile methodology. Agile teams allow customers to guide where the software should go.
Therefore, customer collaboration is more important than the finer details of contract
negotiation.

Responding to change over following a plan - One of the major benefits of Agile project
management is that it allows teams to be flexible. This framework allows for teams to quickly
shift strategies and workflows without derailing an entire project.

Benefits of using Agile methodology

Agile is one of the most popular approaches to project management because it is flexible, it is
adaptable to changes and it encourages customer feedback.

Many teams embrace the Agile approach for the following reasons:

Rapid progress: By effectively reducing the time it takes to complete various stages of a project, teams can elicit feedback in real time and produce working prototypes or demos throughout the process.

Customer and stakeholder alignment: By focusing on customer concerns and stakeholder feedback, the Agile team is well positioned to produce results that satisfy the right people.

Continuous improvement: As an iterative approach, Agile project management allows teams to chip away at tasks until they reach the best end result.

What is a lifecycle in Agile?

The Agile software development life cycle is the structured series of stages that a product goes
through as it moves from beginning to end. It contains six phases: concept, inception, iteration,
release, maintenance, and retirement.

The Agile life cycle will vary slightly depending on the project management methodology
chosen by a team. For example, Scrum teams work in short time periods known as sprints,
which are similar to iterations. They also have clearly defined roles, such as Scrum master. On
the other hand, Kanban teams have more of a continuous flow with no required roles. Another
example is Extreme Programming, where teams tend to work in shorter iterations and place an
extra focus on engineering practices.

However, the goal of all software development teams is the same: to deliver working software
to users on time.

Let’s dive deeper into the six phases of Agile life cycle.

Concept

First up is the concept phase. Here, a product owner will determine the scope of their project.
If there are numerous projects, they will prioritize the most important ones. The product owner
will discuss key requirements with a client and prepare documentation to outline them,
including what features will be supported and the proposed end results. It is advisable to keep
the requirements to a minimum as they can be added to in later stages. In the concept stage, the
product owner will also estimate the time and cost of potential projects. This detailed analysis
will help them to decide whether or not a project is feasible before commencing work.

Inception

Once the concept is outlined, it is time to build the software development team. A product
owner will check their colleagues’ availability and pick the best people for the project while
also providing them with the necessary tools and resources. They can then start the design
process. The team will create a mock-up of the user interface and build the project architecture.
The inception stage involves further input from stakeholders to fully flesh out the requirements
on a diagram and determine the product functionality. Regular check-ins will help to ensure
that all requirements are built into the design process.

Iteration

Next up is the iteration phase, also referred to as construction. It tends to be the longest phase
as the bulk of the work is carried out here. The developers will work with UX designers to
combine all product requirements and customer feedback, turning the design into code. The
goal is to build the bare functionality of the product by the end of the first iteration or sprint.
Additional features and tweaks can be added in later iterations. This stage is a cornerstone of
Agile software development, enabling developers to create working software quickly and make
improvements to satisfy the client.

Release

The product is almost ready for release. But first, the quality assurance team needs to perform
some tests to ensure the software is fully functional. These Agile team members will test the
system to ensure the code is clean — if potential bugs or defects are detected, the developers
will address them swiftly. User training will also take place during this phase, which will
require more documentation. When all of this is complete, the product’s final iteration can then
be released into production.

Maintenance

The software will now be fully deployed and made available to customers. This action moves
it into the maintenance phase. During this phase, the software development team will provide
ongoing support to keep the system running smoothly and resolve any new bugs. They will
also be on hand to offer additional training to users and ensure they know how to use the
product. Over time, new iterations can take place to refresh the existing product with upgrades
and additional features.

Retirement

There are two reasons why a product will enter the retirement phase: either it is being replaced
with new software, or the system itself has become obsolete or incompatible with the
organization over time. The software development team will first notify users that the software
is being retired. If there is a replacement, the users will be migrated to the new system. Finally,
the developers will carry out any remaining end-of-life activities and remove support for the
existing software.

Each phase of the Agile life cycle contains numerous iterations to refine deliverables and
deliver great results. Let’s take a look at how this iteration workflow works within each phase:

The Agile iteration workflow

Agile iterations are usually between two and four weeks long, with a final completion date.
The workflow of an Agile iteration will typically consist of five steps:

• Plan requirements
• Develop product
• Test software
• Deliver iteration
• Incorporate feedback
Each Agile phase will contain numerous iterations as software developers repeat their
processes to refine their product and build the best software possible. In essence, these
iterations are smaller cycles within the overarching Agile life cycle.

The Agile life cycle is a key structural model for software development teams, enabling them
to stay on course as they move their product from conception to retirement. To support all
activities in the Agile cycle, team members need to have access to the appropriate resources
and tools, including an Agile project management platform.

CHAPTER 3 : Databases & Data Management

3.1 - SQL

3.1.1 - What is SQL?

Standard Programming Language: Designed for managing data in relational databases.

Data Manipulation: Used for querying, updating, inserting, and deleting data.

Relational Database Management: Essential for RDBMS operations.

Querying Data: Allows selection and retrieval of data from databases.

Data Definition: Enables creation, alteration, and deletion of tables and database structures.

Data Control: Manages access and permissions for database users.

Wide Usage: Integral for data analysis, business intelligence, and managing data-driven
applications.

Cross-Platform: Compatible with database systems like MySQL, PostgreSQL, and SQL
Server.

What are the different types of statements supported by SQL?

SQL supports various types of statements, each serving different purposes:

Data Definition Language (DDL): These statements define and modify the database structure.
Examples include:

CREATE: To create databases and tables.

ALTER: To modify existing database structures.

DROP: To delete databases and tables.

Data Manipulation Language (DML): These statements handle data within the database.
They include:

SELECT: To retrieve data from a database.

INSERT: To add new data to tables.

UPDATE: To modify existing data.

DELETE: To remove data from tables.

Data Control Language (DCL): These are used to control access to data in the database:
GRANT: To give users access privileges to the database.

REVOKE: To remove access privileges.

Transaction Control Language (TCL): These statements manage the changes made by DML
statements:

COMMIT: To save the work done.

ROLLBACK: To undo changes.

SAVEPOINT: To set a save point within a transaction.

What is DBMS?

A Database Management System (DBMS) is software designed to store, retrieve, define, and
manage data in a database. It provides users and programmers with a systematic way to create,
retrieve, update, and manage data. DBMS ensures the data is consistently organized and
remains easily accessible. It typically supports defining, creating, querying, updating, and
administrating databases. DBMSes are crucial in modern computing environments, handling
the data for various applications, from business information systems to personal databases.

Data Storage: Manages data storage in databases.

Data Retrieval: Allows retrieval of data efficiently.

Data Manipulation: Enables operations like insert, update, delete, and query.

Data Integrity: Maintains data accuracy and consistency.

Data Security: Controls data access and authorization.

Backup and Recovery: Manages data backup and restoration.

Multi-User Environment: Supports multiple users concurrently.

Data Abstraction: Provides an abstract view of data to users.

What is RDBMS?

RDBMS stands for Relational Database Management System. An RDBMS stores data in a collection of tables, which are related to one another by common fields (columns). It also provides relational operators to manipulate the data stored in the tables.

Based on the Relational Model: Organizes data into tables with rows and columns.

Table Relationships: Uses keys (primary and foreign) to establish relationships between tables.

Data Retrieval with SQL: Employs SQL for querying and managing data.

ACID Properties: Ensures Atomicity, Consistency, Isolation, and Durability in transactions.


Normalization: Supports data normalization to reduce redundancy.

Examples:

MySQL: Popular in web applications.

PostgreSQL: Known for advanced features and reliability.

Oracle Database: Widely used in enterprise environments.

Microsoft SQL Server: Common in .NET environments.

Why do we use SQL constraints? Which constraints can we use while creating a database
in SQL?

SQL constraints are used to specify rules for the data in a table, ensuring the accuracy and
reliability of the data within the database. Here are the key constraints used in SQL with
examples:

PRIMARY KEY: Uniquely identifies each row in a table.

Example: CREATE TABLE Students (ID int PRIMARY KEY, name varchar(255));

FOREIGN KEY: Ensures referential integrity of the data in one table to match values in another
table.

Example: CREATE TABLE Orders (OrderID int, OrderNo int, CustID int FOREIGN
KEY REFERENCES Customers(CustID));

UNIQUE: Ensures all values in a column are different.

Example: CREATE TABLE Users (UserID int, Email varchar(255) UNIQUE);

NOT NULL: Ensures that a column cannot have a NULL value.

Example: CREATE TABLE Employees (EmpID int NOT NULL, name varchar(255));

CHECK: Ensures the value in a column meets a specific condition.

Example: CREATE TABLE Products (ProductID int, Price decimal CHECK (Price
> 0));

DEFAULT: Provides a default value for a column when no value is specified.

Example: CREATE TABLE Orders (OrderID int, Status varchar(255) DEFAULT 'Pending');

These constraints are integral in maintaining data integrity and consistency in a relational
database.

What are the different JOINS used in SQL?

In SQL, different types of JOINs are used to combine rows from two or more tables based on
a related column between them:

INNER JOIN: Returns rows when there is a match in both tables.

Example: SELECT * FROM table1 INNER JOIN table2 ON table1.key = table2.key;

LEFT JOIN (or LEFT OUTER JOIN): Returns all rows from the left table, and the matched
rows from the right table. Unmatched rows in the left table will have NULL in the columns of
the right table.

Example: SELECT * FROM table1 LEFT JOIN table2 ON table1.key = table2.key;

RIGHT JOIN (or RIGHT OUTER JOIN): Opposite of LEFT JOIN. Returns all rows from the
right table, with matched rows from the left table. Unmatched rows in the right table will have
NULL in the columns of the left table.

Example: SELECT * FROM table1 RIGHT JOIN table2 ON table1.key = table2.key;

FULL OUTER JOIN: Combines LEFT JOIN and RIGHT JOIN. Returns rows when there is a
match in one of the tables.

Example: SELECT * FROM table1 FULL OUTER JOIN table2 ON table1.key = table2.key;

Each JOIN type is used for specific scenarios depending on the data relationships and the
desired output.

What are the Aggregate Functions available there in SQL?

SQL provides several aggregate functions to perform calculations on a set of values, returning
a single value. Some of the key aggregate functions include:

COUNT: Counts the number of rows in a column.

Example: SELECT COUNT(EmployeeID) FROM Employees;

SUM: Calculates the total sum of a numeric column.

Example: SELECT SUM(Salary) FROM Employees;

AVG: Determines the average value of a numeric column.

Example: SELECT AVG(Salary) FROM Employees;

MAX: Finds the maximum value in a column.

Example: SELECT MAX(Salary) FROM Employees;

MIN: Finds the minimum value in a column.

Example: SELECT MIN(Salary) FROM Employees;

These functions are essential for data analysis and reporting in SQL.

What is the difference between “Primary Key” and “Unique Key”?

Uniqueness: Both keys ensure uniqueness of the values in the column, but a primary key doesn't
allow NULL values, while a unique key can contain a single NULL value.

Number per Table: A table can have only one primary key, but it can have multiple unique
keys.

Purpose: The primary key is used to uniquely identify each record in a table, while the unique
key is to prevent duplicate values in a column, not necessarily to uniquely identify a record.

Indexing: By default, a primary key is a clustered index, and a unique key is a non-clustered
index.

What is a trigger?

A trigger in SQL is a type of stored procedure that automatically executes in response to certain events on a table or view. Commonly, triggers are used for maintaining data integrity and enforcing business rules.

Example:

Let's say you have a table Orders, and you want to automatically update the OrderCount column
in another table Customer every time a new order is placed.

Create Tables:

CREATE TABLE Customer (
    CustomerID int PRIMARY KEY,
    OrderCount int
);

CREATE TABLE Orders (
    OrderID int PRIMARY KEY,
    CustomerID int
);

Create Trigger:

CREATE TRIGGER UpdateOrderCount
AFTER INSERT ON Orders
FOR EACH ROW
BEGIN
    UPDATE Customer
    SET OrderCount = OrderCount + 1
    WHERE CustomerID = NEW.CustomerID;
END;

In this example, the trigger UpdateOrderCount is set to fire after an insert operation on Orders.
It increases the OrderCount in the Customer table for the corresponding customer.

What is an SQL view?

In SQL, a view is a virtual table based on the result set of an SQL statement. It contains rows
and columns, just like a real table, and can be used in the same way as a table but does not
store data itself.

Example:

Suppose you have a table Employees with columns EmployeeID, Name, Department, and
Salary. You can create a view to show only the Name and Department of each employee:

CREATE VIEW DepartmentView AS
SELECT Name, Department
FROM Employees;

You can then query this view like a regular table:

SELECT * FROM DepartmentView;

This view provides a simplified view of the Employees table, showing only names and
departments.

Advantages of database views

A database view allows you to simplify complex queries: a view is defined by an SQL statement that can join many underlying tables. You can use a database view to hide the complexity of the underlying tables from end-users and external applications. Through a database view, you only have to use simple SQL statements instead of complex ones with many joins.

A database view helps limit data access to specific users. You may not want a subset of sensitive data to be queryable by all users. You can use a database view to expose only non-sensitive data to a specific group of users.

A database view provides an extra security layer. Security is a vital part of any relational database management system. A view offers additional protection: you can create a read-only view that exposes read-only data to specific users, so those users can only query the data, not modify it.

A database view enables backward compatibility. Suppose you have a central database that many applications use. One day, you decide to redesign the database to adapt to new business requirements. You remove some tables and create new tables, and you don't want the changes to affect other applications. In this scenario, you can create database views with the same schema as the legacy tables that you will remove.

Disadvantages of database views

Performance: querying data from a database view can be slow, especially if the view is created based on other views.

Tables dependency: a view is created based on underlying tables of the database. Whenever you change the structure of the tables that a view is associated with, you have to change the view as well.

What are stored procedures?

Stored Procedures in SQL are pre-written SQL commands which are saved and stored in the
database. They can be executed whenever needed, which makes them efficient for repetitive
tasks. Stored procedures can also include control-of-flow language, allowing for complex
processing logic, and they can accept parameters, making them versatile for various operations.

Example:

Suppose you frequently need to update the salary of an employee. Instead of writing the update
statement every time, you can create a stored procedure like this:

CREATE PROCEDURE UpdateSalary
    @EmployeeID int,
    @NewSalary decimal
AS
BEGIN
    UPDATE Employees
    SET Salary = @NewSalary
    WHERE EmployeeID = @EmployeeID;
END;

To use this stored procedure, you simply call it with the appropriate parameters:

EXEC UpdateSalary 123, 50000;

3.1.2 - Indexing and Query Optimization

Normalization

What is normalization?

Normalization in the context of a relational database is a process designed to minimize redundancy and dependency by organizing data into multiple related tables. It involves structuring a database in a way that reduces data duplication and improves data integrity. Normalization is achieved through a series of steps known as normal forms, starting from the first normal form (1NF) and going up to the fifth normal form (5NF). Each normal form addresses a specific type of anomaly and dependency, ensuring that the database is efficiently structured and supports consistent data retrieval and updating.

What are all the different normalizations?

First Normal Form (1NF): Ensures each table cell contains only a single value and each record
is unique.

Second Normal Form (2NF): Requires being in 1NF and that all non-key attributes are fully functionally dependent on the primary key.

Third Normal Form (3NF): Must be in 2NF and all attributes must be dependent only on the
primary key, not on other non-key attributes.

Boyce-Codd Normal Form (BCNF): A stronger version of 3NF where every determinant must
be a candidate key.

CHAPTER 4 : Web Development

4.1 - Web Development Fundamentals

4.2 - ReactJS

React is an open-source JavaScript library used for building user interfaces, particularly for
single-page applications. It's maintained by Facebook and a community of individual
developers and companies.

Single Page Application (SPA)

• A Single Page Application (SPA) is a type of web application or website that interacts
with the user by dynamically rewriting the current page rather than loading entire new
pages from the server.
• This approach avoids interruption of the user experience between successive pages,
making the application behave more like a desktop application.
• In an SPA, most resources (HTML, CSS, JavaScript) are only loaded once during the
initial loading of the site. Only data is transmitted back and forth between the client and
server.
• SPAs typically handle most of the user interface logic in the browser, using JavaScript.
When the URL in the browser is updated, it navigates to different views within the
application without refreshing the application.
Virtual DOM

The Virtual DOM (Document Object Model) is a concept of React that provides a more
efficient way of updating the view in a web application. Virtual DOM is a lightweight copy of
the real DOM. It's a JavaScript representation of the actual DOM.

Directly manipulating the real DOM is slow and inefficient because each change can trigger a series of re-renders in the browser. Therefore, when your React app first renders, React creates a Virtual DOM that represents the UI.

When something changes in your app (like user input or data fetching), React updates the Virtual DOM. React then compares the updated Virtual DOM with the previous version to figure out what has changed. Once React knows what has changed, it updates only those parts of the real DOM. This selective updating makes it much faster.

Example:

• Suppose you have a list of items rendered on a page, and you add one more item.
• Without a Virtual DOM, the whole list might need to be re-rendered to include the new
item.
• With a Virtual DOM, React understands that only one item needs to be added to the list,
so it updates only that part of the real DOM.

Advantages of using React

1. Component-Based Architecture: React uses a component-based architecture, allowing


developers to build reusable UI components.

Example: If you have a button that appears in several places across your app,
you can create a single Button component in React. This component can be
reused wherever a button is needed, ensuring consistency and reducing code
duplication.
2. Strong Community and Ecosystem: React has a large community and ecosystem. This
includes a vast array of libraries, tools, and extensions.

Example: Libraries like Redux for state management or React Router for
navigation enhance React’s capabilities, and the large community means
abundant resources, tutorials, and third-party tools.

3. Flexible and Integrable: React can be used in a variety of projects, from small widgets
to large-scale applications.

Example: You can integrate React into existing projects or use it for just a part
of an application, like a single widget in a webpage otherwise not built in React.

Code Splitting

Code splitting in React is a technique used to split a large JavaScript bundle into smaller
chunks, which can then be loaded on demand. This improves the initial load time of the
application, as users only download the code they need for the page they're visiting.

• Dynamic Imports: Suppose you have a component HeavyComponent that is large and
not always needed.

• Route-based Code Splitting: Here, the Home and About components are only loaded
when the user navigates to their respective routes, reducing the initial load time of the
app.

Class components and Functional components

Class components and functional components are two different ways of writing components in
React. Each has its own characteristics and use cases.

Class Components

Class components are ES6 classes that extend from React.Component. They can hold and
manage local state and lifecycle methods.

Functional Components

Functional components are simpler and written as JavaScript functions. They do not have their
own state or lifecycle methods by default; React Hooks were introduced later to bring these capabilities to functional components.

Class components often require more code and boilerplate. They need a constructor to initialize
state and lifecycle methods for side effects. Functional components are generally more succinct
and easier to read, with less boilerplate. They encourage the use of plain JavaScript functions.

State and Props in React

State

State is a local data storage that is private to the component and can be changed within the
component. It's used for data that changes over time or due to user interactions. State is
managed within the component (similar to variables declared within a function).

• count is a state variable in the Counter component. It starts at 0 and is updated every
time the button is clicked.

Props

Props are used to pass data and functions from parent to child components, making components
reusable and dynamic.

• App component renders the Welcome component and passes a prop name with the
value "Alice". The Welcome component then uses this prop to display the greeting.
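Because functional components are essentially functions of their props, the App/Welcome example described above can be approximated without JSX. The component names follow the description; real React would render these through JSX rather than direct calls.

```javascript
// Welcome receives data from its parent through props (modelled here as
// a plain object argument) and uses it to produce its output.
function Welcome(props) {
  return `Hello, ${props.name}`;
}

// App "renders" Welcome and passes the prop name="Alice" down to it.
function App() {
  return Welcome({ name: "Alice" });
}
```

Changing the prop value in App changes what Welcome displays, which is what makes the child reusable.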

Lifting State up

"Lifting state up" is a common pattern in React for managing shared state across multiple
components. When two or more child components need access to the same state, you "lift" this
state up to their closest common ancestor.

• Imagine two sibling components, ComponentA and ComponentB, both need access to
and the ability to modify a piece of state, say data. If data is managed independently in
both ComponentA and ComponentB, keeping them synchronized becomes complex
and error-prone.
• Instead, you move the data state to their closest common ancestor, let's call it
ParentComponent. ParentComponent passes the state down to ComponentA and
ComponentB as props.
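A minimal sketch of the pattern in plain JavaScript, with component names as in the description above (the update flow is simplified to a single pass; React would re-render automatically):

```javascript
// The shared value lives in the parent; children receive it as a prop,
// and changes flow through a callback the parent passed down.
function ComponentA({ data }) {
  return `A sees: ${data}`;
}

function ComponentB({ data, onChange }) {
  onChange(data + "!"); // B requests a change via the parent's callback
  return `B sees: ${data}`;
}

function renderParent() {
  let data = "shared";
  const setData = (next) => { data = next; };
  ComponentB({ data, onChange: setData }); // B updates the shared state
  return ComponentA({ data });             // A then sees the updated value
}
```

Because the single source of truth is the parent, A and B can never drift out of sync.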

React Hooks

Hooks were introduced in React 16.8 and enable you to use state and other React features
without writing a class. Here are some of the most commonly used hooks.

• useState
• useEffect
• useLayoutEffect
• useMemo
• useRef
• useContext

useState Hook

It provides a way to declare state variables in functional components. When the state changes,
the component automatically re-renders, reflecting the new state in the UI.

Declare a state variable and a function to update it:

• const [stateValue, setStateValue] = useState(initialValue)


o stateValue is the current value of the state.
o setStateValue is the function used to update the state.
o initialValue is the initial value of the state.

The count state starts at 0. Every time the button is clicked, setCount is called with the new
count (count + 1). This updates the count state, causing the Counter component to re-render
and display the new count.
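The [value, setter] contract described above can be modelled in a few lines of plain JavaScript. This is only a teaching sketch, not how React actually stores hook state:

```javascript
// Toy useState: state survives "re-renders" because it lives outside the
// component function, keyed by the order in which hooks are called.
function makeHookRuntime() {
  const slots = [];
  let cursor = 0;
  function useState(initialValue) {
    const index = cursor++;
    if (slots.length <= index) slots.push(initialValue); // first render only
    const setState = (next) => { slots[index] = next; };
    return [slots[index], setState];
  }
  // Call before each simulated render so hooks line up by call order.
  useState.startRender = () => { cursor = 0; };
  return useState;
}
```

A "render" reads the current value; calling the setter and rendering again reflects the new value, mirroring the Counter behaviour described above.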

useEffect Hook

useEffect is called after the component renders.

• It takes two arguments: a function (the effect) and a dependencies array.


o The effect function is executed after every completed render by default.
o The dependencies array can limit this effect to only re-run when certain values
have changed.

• The Counter component consists of a count state and a button to increment the count.
• The useEffect hook is used to log a message to the console each time the count changes.
• The message Counter updated: ${count} is logged every time the button is clicked and
the state updates.
• The effect runs after each render that results from count state changes, due to the [count]
dependency array.

useLayoutEffect Hook

The useLayoutEffect hook in React is quite similar to useEffect, but it fires synchronously after
all DOM mutations. This hook ensures that any changes it enacts on the DOM (like adjusting
element sizes or positions) are done before the browser has a chance to paint, preventing any
flickering or layout shifting.

• useEffect fires after layout and paint, making it non-blocking for UI updates;
useLayoutEffect fires immediately after DOM mutations, before paint.
• This means useLayoutEffect will block the painting process until the code inside it has
executed.

Suppose you want to measure the width of a DOM element right after it is rendered and then
update the component state based on this measurement.

• We use useLayoutEffect to measure the width of the div right after it has been rendered.
• boxRef is a reference to the div element.
• When the component mounts, useLayoutEffect executes, measures the width of the div,
and updates the state with that width.
• The measurement is done before the browser has a chance to paint, preventing any
flicker that might have occurred if the width changes significantly.

useMemo Hook

The useMemo hook in React is used for performance optimization. It memoizes (caches)
expensive function results between renders and only recalculates the result when one of the
dependencies has changed. This helps avoid expensive calculations on every render when the
inputs haven't changed.

• The hook takes a function and an array of dependencies.

• useMemo will only recompute the memoized value when one of the dependencies has
changed.
• Otherwise, it returns the memoized value from the previous render.

• The function inside useMemo performs a time-consuming calculation.


• It's dependent on the num prop, meaning the calculation will only re-run when num
changes.
• If the num prop stays the same between renders, the previously calculated result is used,
avoiding unnecessary recalculations.
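The dependency-comparison logic behind useMemo can be sketched in plain JavaScript. This is a simplified model; real React additionally ties the cache to the component instance and uses an Object.is-style comparison.

```javascript
// Recompute only when some dependency differs from the previous render;
// otherwise return the cached value.
function createMemo() {
  let lastDeps = null;
  let lastValue;
  return function memo(compute, deps) {
    const changed =
      lastDeps === null ||
      deps.length !== lastDeps.length ||
      deps.some((dep, i) => dep !== lastDeps[i]);
    if (changed) {
      lastValue = compute(); // the "expensive calculation" runs here
      lastDeps = deps;
    }
    return lastValue; // cached result when deps are unchanged
  };
}
```

With [num] as the dependency array, the compute function runs only when num changes, exactly as the bullets above describe.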

useRef Hook

The useRef hook in React is used to access and interact with DOM elements directly. It can
also hold any mutable value that persists across renders without causing a re-render when it
changes.

• In this example, useRef is used to create a reference (inputEl) to a text input element.
• When the button is clicked, we use the inputEl reference to set focus to the input
element.

useContext Hook

The useContext hook in React is used for consuming context in functional components.

Context provides a way to pass data through the component tree without having to pass props
down manually at every level. This is particularly useful for sharing data that can be considered
“global” for a tree of React components, such as current authenticated user, theme, or preferred
language.

• First, you create a Context using React.createContext()

• Context data is "provided" to a tree of components using the Provider

• In functional components, you can use the useContext hook to access the context value.
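The provide/consume flow can be modelled in plain JavaScript. This is a conceptual sketch only; React's context is tied to the component tree and to re-rendering, which this ignores.

```javascript
// A context holds a current value; a Provider sets it for everything
// "rendered" inside it, and use() reads whatever is currently provided.
function createContext(defaultValue) {
  let current = defaultValue;
  return {
    provide(value, renderChildren) {
      const previous = current;
      current = value;           // like <ThemeContext.Provider value={...}>
      try {
        return renderChildren(); // children read the provided value
      } finally {
        current = previous;      // restore the value outside the Provider
      }
    },
    use: () => current,          // like useContext(ThemeContext)
  };
}

const ThemeContext = createContext("light");
```

Components deep inside provide() can call ThemeContext.use() without the value being threaded through every intermediate function, which is the point of context.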

React Routing

React handles routing using third-party libraries, the most popular one being React Router.
React Router enables the creation of single-page applications (SPAs) where the URL changes
without reloading the page, improving the user experience.

• Setting Up Routes
o Import BrowserRouter, Route, and Switch from react-router-dom.
o Wrap your application in BrowserRouter.
o Use Route components within Switch to define different routes.
o (Note: in React Router v6, Switch was replaced by Routes.)

• The Router wraps the entire application.
• The Switch component is used to render only the first Route that matches the current
location.
• Route components define the path and the component to render for that path.
• The Link component is used for navigation without causing a page reload.
• React Router allows for dynamic routing, where parts of the URL can be parameters.
o The Product component displays a product ID, which is a dynamic part of the
URL.

Prop Drilling

Prop drilling in React refers to the process of passing data through multiple levels of
components to get it from one part of an application to another.

It often occurs in deeply nested component structures where intermediate components simply
pass down props without using them. This can lead to less maintainable and
harder-to-understand code.

How to Avoid Prop Drilling

1. Context API:
The Context API in React allows you to share data globally across all levels of
the component tree.
This avoids the need to pass props through every level.

2. State Management Libraries:
Libraries like Redux or MobX help manage state outside of the component tree.
They can be overkill for simple scenarios but are very effective for complex
applications.

4.3 - NodeJS

4.4 - Spring Boot

4.4.1 - What is Spring Boot? (intro)

• Spring Boot is essentially a framework for rapid application development built on top
of the Spring Framework.
• It is used to create stand-alone spring-based applications that you can just run because
it needs very little spring configuration.
• Create stand-alone Spring applications that can be started using java -jar.
• Embed Tomcat, Jetty, or Undertow directly; you don't need to deploy WAR files.
• It automatically configures Spring whenever possible.

4.4.2 - Differences Between Spring and Spring Boot?

• Spring is a web application framework based on Java. It provides tools and libraries to
create a complete, customized web application.
• Spring Boot, by contrast, is a Spring module used to create Spring application projects
that can just run, with minimal configuration.

• Purpose: Spring is a comprehensive framework for Java development; Spring Boot
simplifies the development of production-ready applications.
• Configuration: Spring uses manual configuration via XML or annotations; Spring Boot
favours convention over configuration, reducing manual setup.
• Dependency Management: Spring requires manual management using tools like Maven
or Gradle; Spring Boot provides starters with predefined dependencies for common use
cases.
• Project Setup: Spring involves configuring various components and managing
dependencies; Spring Boot needs minimal setup thanks to convention over configuration.
• Ease of Development: Spring offers flexibility but may require more manual
configuration; Spring Boot is designed for rapid development and reduces boilerplate
code.
• Microservices: Spring can be used for building microservices, but manual configuration
is required; Spring Boot simplifies microservices development and integrates well with
Spring Cloud.
• Opinionated Approach: Spring is less opinionated and provides flexibility; Spring Boot
is more opinionated and follows convention over configuration.

4.4.3 - Core concepts of Spring boot

4.4.3.1 Dependency injection


• Dependency Injection (DI) in a Spring Boot application is facilitated by the Spring IoC
(Inversion of Control) container. The IoC container is responsible for managing and
wiring together the various components (beans) in your application. Spring Boot, being
built on top of the Spring Framework, inherits and simplifies the Dependency Injection
capabilities.
• Dependency injection happens in the following steps:
a. Component Scanning:
i. Spring Boot automatically scans certain packages for components
(beans) using the @SpringBootApplication annotation. This
annotation is commonly placed on the main class of your application.
ii. Components such as @Controller, @Service, @Repository, and
@Component are discovered during the scanning process.
b. Bean Declaration:
i. Classes annotated with @Component, @Service, @Repository, etc.,
are considered as Spring beans. These classes are automatically
registered in the IoC container during component scanning.
c. Dependency Injection:
i. Spring Boot uses various annotations to perform Dependency Injection.
The most common one is @Autowired. By annotating a field,
constructor, or a method with @Autowired, you indicate that Spring
should inject the required dependency at runtime.

o Extra
ii. Alternatively, starting from Spring 4.3, if a class has only a single
constructor, Spring injects its dependencies automatically and the
@Autowired annotation can be omitted on that constructor.

iii. Constructor Injection (Recommended):
· Constructor injection is the recommended way to inject
dependencies in Spring Boot. It ensures that the required
dependencies are available when the bean is created, providing
better immutability and making it easier to reason about the state
of the object.
iv. Qualifier Annotation:
· In cases where multiple beans of the same type exist, you may
use the @Qualifier annotation to specify which bean should be
injected.

4.4.3.2 Inversion Of Control (IOC)

• In Spring Boot, as in the broader Spring Framework, Inversion of Control (IoC) is a


fundamental design principle that promotes the separation of concerns and enhances
modularity in your application.
• The IoC container in Spring Boot is responsible for managing the lifecycle of beans
(objects) and controlling their dependencies. This is achieved through dependency
injection, a key aspect of IoC.
• It gets the information about the objects from a configuration file (XML) or Java
Code or Java Annotations and Java POJO class. These objects are called Beans.

Since the control of Java objects and their lifecycle is not handled by the developers,
the pattern is named Inversion of Control.
• Main features of IoC:
o Creating objects for us
o Managing our objects
o Helping our application to be configurable
o Managing dependencies
• The two main types of IoC containers in Spring are:
1. BeanFactory
2. ApplicationContext
• BeanFactory
o The most basic version of the IoC containers.
• ApplicationContext
o ApplicationContext extends the features of BeanFactory (additional features).
o An ApplicationContext is often created through annotations like
@SpringBootApplication.

4.4.3.3 Spring Boot Annotations


• Spring Boot annotations are special markers used in Java code that provide metadata to
the Spring Boot framework. These annotations help,
o configure the Spring Boot application.
o define beans.
o manage dependencies.
o enable certain features.
o simplify various aspects of application development.
Annotations play a crucial role in Spring Boot by reducing the need for XML
configurations and promoting a more concise and readable code style.

• In simpler terms, Spring Boot annotations are like special instructions written in Java
code that tell the Spring Boot framework how to set up and run your application. They
make it easier to do things like create components, connect to databases, and handle
web requests without writing a lot of extra configuration code.
• Some examples for Spring boot annotations used in RESTful APIs

o @Controller - Represents a Spring MVC controller. It handles HTTP requests
and defines methods to process those requests. In simpler terms, it marks
the controller class that is responsible for processing incoming REST API
requests, preparing a model, and returning the view.

o @ResponseBody - indicates that the return value of a method should be
serialized directly into the HTTP response body.
o @RestController – Combines @Controller and @ResponseBody. It is used for
creating RESTful APIs, where methods return data directly serialized into
the response body.

o @RequestMapping - used to map HTTP requests to specific handler methods.
It is applied at the class level and/or method level to define how incoming
requests should be handled by the application.
At the Class Level:

You can use @RequestMapping at the class level to specify a base URI
for all handler methods within the class. This allows you to group related
endpoints under a common base path.

At the Method Level:

At the method level, @RequestMapping is used to specify the HTTP method(s)
that a particular handler method should respond to. It supports various HTTP
methods like GET, POST, PUT, DELETE, etc.

• @RequestParam VS @PathVariable
o @PathVariable - Used to extract values from URI templates (parts of the
URL enclosed in curly braces {}).

o @RequestParam - Used to extract values from the query parameters of the
URL (e.g. ‘/books?category=fiction’).

CHAPTER 5 : Mobile Development

5.1 - ReactNative

5.1.1 - What is React Native?

React Native is an open-source mobile application framework developed by Facebook that
enables developers to build native mobile apps using JavaScript and React. It offers a
native-like experience on both iOS and Android platforms.
Cross-Platform Nature:
One of the key features of React Native is its cross-platform compatibility. Developers can use
a single codebase to create applications that run seamlessly on both iOS and Android devices.
This is achieved through the use of a common language (JavaScript) and a set of reusable
components, while still allowing for platform-specific customization when necessary.
React Native is like React, but it uses native components instead of web components as building
blocks. So to understand the basic structure of a React Native app, you need to understand some
of the basic React concepts, like JSX, components, state, and props.

• import React from 'react'; → import React to be able to use JSX, which will then be
transformed to the native components of each platform.
• import {Text, View} from 'react-native'; → import the Text and View components from
react-native

5.1.2 - React Native CLI vs Expo

React Native CLI

• React Native CLI (Command Line Interface) is a tool that enables developers to create, manage,
and build React Native projects.
• It provides a set of commands for various development tasks, including project initialization,
running the app on emulators or physical devices, linking native modules, and building the app
for deployment.
• When using the React Native CLI, developers have more control over the native modules,
configurations, and the entire build process.

Pros:
• Full control over the project
• Native module integration
• Allows writing native code
• Developers have more control over the build process

Cons:
• Setup complexity
• The initial setup process and development turnaround time might be slower
• Apps built with React Native CLI might have a larger size compared to Expo

Expo

• Expo is a set of tools, services, and a framework built around React Native that aims to simplify
the development of React Native applications.
• It provides an opinionated workflow that abstracts away much of the complexity of native
development, allowing developers to focus on building features without dealing with the
intricacies of configuring native modules and build processes.

Pros:
• Quick development with minimal setup
• Simplified workflow
• Provides built-in support for Over-the-Air updates
• Includes various built-in services such as push notifications and asset hosting

Cons:
• Limited access to native code
• Dependency limitations on Expo services
• Reduced control over the build process

The choice between React Native CLI and Expo depends on various factors, including project
requirements, development preferences, and the desired level of control over the development
environment.
Use React Native CLI if:
• Full control over native modules, build process, and access to native code is crucial.
• The project requires custom native modules not supported by Expo.
• Ejecting to React Native CLI is anticipated for future flexibility.

Use Expo if:


• Quick development with minimal setup is a priority.
• The project does not require extensive custom native modules.
• Built-in Expo services align with the project's requirements.
• Over-the-Air updates and simplified workflows are essential.

5.1.3 - Components

React Native is built around components, which are reusable UI elements that you can assemble
to create your app. The sample code defines HelloWorldApp, a new component. When you're
building a React Native app, you'll be making new components a lot; anything you see on the
screen is some sort of component. A component can be either a functional component or a class
component.

5.1.4 - Props

Most components can be customized when they are created, with different parameters. These
creation parameters are called props. Props are read-only properties that are passed from a
parent component to a child component.

5.1.5 - State

Unlike props, which are read-only and should not be modified, state allows React components
to change their output over time in response to user actions, network responses, and anything
else. State values are also variables, with the difference that they are not passed as parameters;
instead, the component initializes and manages them internally.
You can use the state of your components both in classes and in functional components (via
hooks).

5.1.5.1 State Management
State management allows you to manage and control the data within your components. In React
Native, state management is typically handled by using the built-in useState hook or by
integrating external state management libraries like Redux.
1. useState Hook

The useState hook is part of the React Hooks API and allows functional components to
manage state. In the example below, the useState hook is used to manage the state of the
count variable.

2. Redux
Redux is a popular state management library for React Native (and React). There are three
main pieces: actions, reducers, and the store.

First create the Redux store.

Action - The addTodo action adds a new todo to the state.

Reducer

Wrap the App with provider.
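Since the store/action/reducer snippets above are not reproduced here, the three pieces can be sketched in plain JavaScript. The ADD_TODO names mirror the description; a real app would use the redux package (and react-redux's Provider) rather than this hand-rolled store.

```javascript
// Store: holds the state and exposes getState/dispatch/subscribe.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action); // the reducer computes the next state
      listeners.forEach((listener) => listener());
    },
    subscribe: (listener) => listeners.push(listener),
  };
}

// Action creator: an action is a plain object describing "what happened".
const addTodo = (text) => ({ type: "ADD_TODO", payload: text });

// Reducer: a pure function from (state, action) to the next state.
function todosReducer(state, action) {
  switch (action.type) {
    case "ADD_TODO":
      return [...state, action.payload]; // never mutate the old state
    default:
      return state;
  }
}
```

Dispatching addTodo("...") runs the reducer and notifies subscribers, which is how connected components know to re-render.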

5.1.6 - Lifecycle Methods

• Mounting Phase: Methods called when a component is being created and inserted into
the DOM. → constructor(), render(), componentDidMount()
• Updating Phase: Methods called when a component is being re-rendered as a result of
changes to either props or state. → render(), componentDidUpdate()
• Unmounting Phase: Method called when a component is being removed from the DOM.
→ componentWillUnmount()

5.1.7 - Navigation

Navigation is a key part of any mobile app, and React Native offers several libraries to help
you navigate between screens. React Navigation is a popular library for handling navigation in
React Native applications.
Navigator Types:
• Stack Navigator: Manages navigation using a stack, where each screen is pushed onto
the stack when navigated to and popped off when navigating back.

• Tab Navigator: Allows navigation between screens using tabs.

• Drawer Navigator: Implements a navigation drawer for side menu navigation.

5.1.8 - Styling in React Native

React Native uses a subset of CSS properties and introduces its own set of styling conventions.
Styles are typically defined using the StyleSheet module provided by React Native.

Flexbox
Flexbox is a layout model that allows designing complex layouts more efficiently and
dynamically. React Native uses a simplified version of the Flexbox model to define layouts
and handle the distribution of space along a single axis.

In this example, the container uses the flex property to distribute available space among its
child components (box1, box2, box3). The flexDirection, justifyContent, and alignItems
properties control the layout direction and alignment.
• flex: Specifies how a component should grow relative to its siblings.
• flexDirection: Defines the primary axis along which the components will be placed.
• justifyContent: Determines the distribution of space along the primary axis.
• alignItems: Aligns components along the secondary axis.

5.1.9 - AsyncStorage

AsyncStorage is a simple, asynchronous, unencrypted, persistent, key-value storage system


that is globally available to an application. AsyncStorage is commonly used for storing small
amounts of data like user preferences, authentication tokens, or application settings persistently
across app launches.

5.1.10 - API Integration

1. Fetch API - The Fetch API is a modern, flexible JavaScript API for making network
requests.

2. Axios - popular JavaScript library for making HTTP requests. It simplifies the process
and provides additional features.

Key Features of Axios:
• Promise-Based: Axios uses promises, making it easy to handle asynchronous
operations.
• Request and Response Interceptors: Allows manipulation of requests or responses
globally before they are sent or received.
• Automatic JSON Parsing: Automatically parses JSON responses.

Choosing between Fetch API and Axios often depends on project requirements and developer
preferences. Axios simplifies common tasks and provides a consistent API, making it a
popular choice.
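As a minimal sketch of the Fetch-API pattern described above. The URL and the injectable fetchImpl parameter are illustrative additions so the logic can be tested without a network; real code would usually call fetch directly.

```javascript
// Fetch JSON with basic error handling: reject on a non-2xx status,
// otherwise parse the response body as JSON.
async function getJson(url, fetchImpl = globalThis.fetch) {
  const response = await fetchImpl(url);
  if (!response.ok) {
    throw new Error(`Request failed with HTTP ${response.status}`);
  }
  return response.json();
}
```

In an app this would be called as getJson('https://example.com/api/items'). Axios wraps the same flow, parsing JSON and rejecting on error statuses for you.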

5.1.11 - Testing in React Native

1. Unit Testing: Jest for unit testing React Native applications


• Jest is a JavaScript testing framework widely used for unit testing React and
React Native applications.
• It is designed to be fast, easy to set up, and provides a comprehensive suite of
testing utilities.
• Key Features:
o Snapshot Testing: Captures the output of components and ensures it
does not change unexpectedly.
o Mock Functions: Allows creating mock functions and tracking their
usage.
o Async Testing: Supports testing asynchronous code using promises or
async/await.

Add a test script to your package.json and run ‘npm test’ to execute the tests.

2. UI Testing: Testing libraries like Detox for UI testing:


• Detox is an end-to-end testing library for React Native applications.
• It allows writing UI tests that simulate real user interactions and interactions with native
modules.
• Key Features:
o Declarative API: Describes user interactions in a declarative way.
o Support for Async Operations: Handles asynchronous operations, waits for
animations, and network requests.
o Fast and Reliable: Parallelizes test runs, making them fast and reliable.

5.1.12 - Performance Optimization

1. Code Splitting:
• Code splitting is a technique used to improve application performance by
breaking the code into smaller chunks and loading only the necessary parts
when they are needed.
• In React Native, code splitting can be achieved using dynamic imports or
React's React.lazy for components.

2. React Native Debugger: Debugging tools for performance analysis


• React Native Debugger is a standalone debugging tool that provides a variety
of tools for debugging and profiling React Native applications.
• It includes features like the React DevTools, Redux DevTools, and a
performance monitor.

5.1.13 - Best Practices and Patterns

1. Folder Structure - A commonly recommended folder structure for React Native projects
includes;

2. Code Patterns
• Functional Components and Hooks: Prefer functional components and use
hooks for state and side effects.
• Redux for State Management: Use Redux for managing global state in larger
applications.
• Consistent Styling: Adopt a consistent styling approach, either using inline
styles or a styling library like styled-components.
• Error Handling: Implement robust error handling and use tools like react-error-
boundary for graceful error recovery.
• Optimized Image Loading: Use tools like react-native-fast-image for optimized
image loading.
• Performance Monitoring: Integrate tools like Firebase Performance Monitoring
for analyzing and optimizing app performance.

5.2 - Flutter

5.2.1 - Overview

• Introduction
• Installing Flutter (Android Studio)
• Dart basics
• Advanced layout planning
• Flutter widget basics
• Advanced Flutter widgets

5.2.2 - Introduction

Why do we need Flutter?

When we create a mobile app for iOS using a language like Swift, that code cannot be reused
for Android development; for Android we would need Java or Kotlin. Suppose we build the
iOS app in Swift and the Android app in Java: every time the app needs an update, we must
update two separate code bases separately. This is where Flutter comes in, with a single code
base for both platforms. Another advantage of Flutter is how easy it makes building responsive
UIs, and it comes with a superb widget collection. Finally, the Dart language is relatively easy
to learn.

5.2.3 - Download Flutter

https://docs.flutter.dev/get-started/install

Next, download and install Android Studio.

After installing Android Studio, install the Dart and Flutter plugins from the IDE.

5.2.4 - Create a Flutter project

Next, we are going to create a Flutter app from VS Code.

Here we need to install the Dart and Flutter extensions.

The Flutter app can then be created using the command below:

flutter create app_name

After creating the Flutter app, find the lib folder inside the project. In the lib folder there is a
file called main.dart; the Flutter app lives in that file.

Now run the app: Run -> Run Without Debugging.

Extensions needed for Flutter:

• Awesome Flutter Snippets (for Flutter IntelliSense)
• Better Comments

5.2.5 - Dart

5.2.5.1 Dart programming language Basic


It is quite similar to JavaScript.

An easy way to learn Dart is the online code editor called DartPad:

https://dartpad.dev/?

Dart programs have a main function, which is the function that is called first.

You can read more about Dart's variables and data types in the official documentation.

String interpolation

We can embed a variable's value in a string with the $ sign (e.g. 'Hello $name').

Conditional statement

Functions

Another example for functions

The most commonly used data structure in Dart is the List.

There are special functions (methods) for working with Lists.

Classes

When creating an object in Dart, the new keyword is optional.

5.2.6 - Flutter widgets

- MaterialApp()
- Scaffold()

Folder structure

android – this folder has platform-specific code for Android.

lib – Dart files are included inside this lib folder.

test – unit tests live inside the test folder.

web – the Flutter SDK can create web apps, so code related to the web app is
available in this folder.

windows – this folder holds the code needed to build for the Windows platform.

linux/macos – build code for those platforms.

pubspec.yaml – dependencies and metadata (images, Google Fonts).

5.2.6.1 main.dart

The library called material.dart lets us work with Flutter's pre-built widgets.

The main function runs our app.

As the next step we should define the widgets. There are two types of widgets:

• Stateless widgets
• Stateful widgets

Stateful widgets - widgets whose state can be altered once they are built are called stateful
widgets.

Stateless widgets - widgets whose state cannot be altered once they are built are called stateless
widgets.

As a first attempt, a stateless widget is created. In that case the output is a black screen.

Let's consider the reason for the black screen.

Flutter has a widget tree, and the MaterialApp widget holds all the other widgets. In the previous
code it was not used, so we need to return it inside the build method.

MaterialApp has many properties, which can be found in the Flutter documentation.

The Scaffold widget is the widget we can use for creating a basic skeleton. It creates the
layout using other widgets.

Now the layout has reached its initial stage.

Next, let's try working with several widgets.

• AppBar
• Text
• Icon
• Column
• Row

5.2.6.2 AppBar

5.2.6.3 Text
The AppBar contains the title property; the title can be displayed with the help of the Text
widget.

We can also use several other properties inside the AppBar widget.

5.2.6.4 Icon
The Scaffold widget has another property called body. Here a notification bell Icon is displayed.

5.2.6.5 Container
We can create customized shapes with the Container widget.

If needed, we can decorate the container using the BoxDecoration class.

5.2.6.6 Center
If we need to centre an object, we can wrap it with the Center widget.

5.2.6.7 ImageAsset
We create a dedicated assets folder in the root directory and insert the images into it.

To use assets in the app, the file called pubspec.yaml should be updated.

Declaring assets/ means everything inside the assets folder is imported.

Next, we are going to insert an image.

If needed, we can customize the image size.

Several images can be added vertically; for that purpose we can use the Column widget.

We can also change the background color to black.

5.2.6.8 SizedBox
If we need a gap between two components, we can use the SizedBox widget.

5.3 - Create flutter layouts

Columns and rows can be used to create the above layout.

Consider building the green screen: it has 5 rows, the 2nd row has 2 columns, and the 4th row
has 3 columns. All the UI elements are placed inside a single column, so as the first step we
create a Column widget and use Containers inside it.

The width of the first child among the Column's children is set to double.infinity.

The Row widget helps to create the next line; using several Containers inside the Row, several
cards can be created.

The layout can be fixed using padding and by fixing the width of the Column.

Applying the same concept with the Column, Row, and Container widgets, the whole layout
can be implemented.

CHAPTER 6 : Operating Systems

What is an Operating System?


• An operating system acts as an interface between the software and different parts of the
computer or the computer hardware. The operating system is designed in such a way
that it can manage the overall resources and operations of the computer.
• Purpose - It provides an environment to the user so that the user can perform its task
in convenient and efficient way.
• Lies in the category of system software.
• Fully integrated set of specialized programs that handle all the operations of the
computer.
• Examples of Operating Systems are Windows, Linux, Mac OS, Android, iOS
Structure of a Computer System:

What is an Operating System used for?
• The operating system helps in making effective use of the computer's software as well
as its hardware. Without an OS, it becomes very difficult for any application to be
user-friendly. The Operating System provides the user with an interface that makes any
application attractive and user-friendly.
• The operating System comes with a large number of device drivers that make OS
services reachable to the hardware environment.
• Each and every application present in the system requires the Operating System. The
operating system works as a communication channel between system hardware and
system software.
• The operating system lets an application work with the hardware without knowing the actual hardware configuration. It is one of the most important parts of the system and hence is present in every device, large or small.

What does an Operating System do?


1. Program execution - It is the Operating System that manages how a program is going
   to be executed. It loads the program into memory, after which it is executed. The order
   in which programs are executed depends on the CPU scheduling algorithm; a few
   examples are FCFS and SJF. While programs are executing, the Operating System also
   handles deadlocks, ensuring that processes waiting on shared resources do not block
   one another indefinitely. The Operating System is responsible for the smooth execution
   of both user and system programs and utilizes the various resources available for the
   efficient running of all types of functionalities.
2. Input Output Operations - Operating System manages the input-output operations
and establishes communication between the user and device drivers. Device drivers are
software that is associated with hardware that is being managed by the OS so that the
sync between the devices works properly. It also provides access to input-output
devices to a program when needed.
3. Communication Between Processes - The Operating System provides mechanisms
   that let cooperating processes exchange data and coordinate their actions, typically
   through shared memory or message passing. This inter-process communication (IPC)
   allows processes to work together without interfering with one another.
4. Resource management - The operating system manages and allocates memory, CPU
time, and other hardware resources among the various programs and processes running
on the computer.
System resources are shared between various processes. It is the Operating system that manages
resource sharing. It also manages the CPU time among processes using CPU Scheduling
Algorithms. It also helps in the memory management of the system. It also controls input-output
devices. The OS also ensures the proper use of all the available resources by deciding which
resource is to be used by whom.

5. Process management - The operating system is responsible for starting, stopping, and
managing processes and programs. It also controls the scheduling of processes and
allocates resources to them.
Let’s understand process management in a simple way. Imagine our kitchen stove as the CPU,
where all cooking (execution) really happens, and the chef as the OS, who uses the kitchen
stove (CPU) to cook different dishes (programs). The chef (OS) has to cook different dishes
(programs), so he ensures that no particular dish (program) takes an unnecessarily long time
and that all dishes (programs) get a chance to be cooked (executed). The chef (OS) basically
schedules time for all dishes (programs) so that the kitchen (the whole system) runs smoothly,
and thus all the different dishes (programs) are cooked (executed) efficiently.
6. CPU scheduling - Allows one process to use the CPU while another process is delayed
   (in standby) due to the unavailability of a resource such as I/O, thus making full use of
   the CPU. The purpose of CPU Scheduling is to make the system more efficient, faster,
   and fairer.
7. Process synchronization - coordination of execution of multiple processes in a multi-
process system to ensure that they access shared resources in a controlled and
predictable manner. It aims to resolve the problem of race conditions and other
synchronization issues in a concurrent system.
8. Memory management - The operating system manages the computer’s primary
memory and provides mechanisms for optimizing memory usage.
Let’s understand memory management by the OS in a simple way. Imagine a cricket team with
a limited number of players. The team manager (OS) decides whether an upcoming player will
be in the playing 11, the playing 15, or not included in the team at all, based on his performance.
In the same way, the OS first checks whether an incoming program fulfils all the requirements
to get memory space; if everything is in order, it checks how much memory space will be
sufficient for the program and then loads the program into memory at a certain location. It thus
prevents programs from using unnecessary memory.

9. File management - The operating system is responsible for organizing and managing
the file system, including the creation, deletion, and manipulation of files and
directories.
The operating system helps in managing files also. If a program needs access to a file, it is the
operating system that grants access. These permissions include read-only, read-write, etc. It
also provides a platform for the user to create and delete files. The Operating System is
responsible for making decisions regarding the storage of all types of data and files (floppy
disk, hard disk, pen drive, etc.) and decides how the data should be manipulated and stored.

10. Device management - The operating system manages input/output devices such as
printers, keyboards, mice, and displays. It provides the necessary drivers and interfaces
to enable communication between the devices and the computer.
11. Security and Privacy- The operating system provides a secure environment for the
user, applications, and data by implementing security policies and mechanisms such as
access controls and encryption.

Security: The OS keeps our computer safe from unauthorized users by adding a security layer.
Security is essentially a layer of protection that shields the computer from threats such as
viruses and hackers. The OS provides defenses like firewalls and anti-virus software and helps
ensure the safety of the computer and personal information.

Privacy: The OS gives us the facility to keep our essential information hidden, like having a
lock on a door where only we can enter. It respects our secrets and provides the facility to keep
them safe.

12. Networking - The operating system provides networking capabilities such as
    establishing and managing network connections, handling network protocols, and
    sharing resources such as printers and files over a network.
13. User Interface - The operating system provides a user interface that enables users to
interact with the computer system. This can be a Graphical User Interface (GUI), A
Command-Line Interface (CLI), or a combination of both.
14. Backup and Recovery - The operating system provides mechanisms for backing up
data and recovering it in case of system failures, errors, or disasters.
15. Virtualization - The operating system provides virtualization capabilities that allow
multiple operating systems or applications to run on a single physical machine. This
can enable efficient use of resources and flexibility in managing workloads.
16. Performance monitoring - The operating system provides tools for monitoring and
optimizing system performance, including identifying bottlenecks, optimizing resource
usage, and analyzing system logs and metrics.
17. Error Handling - The Operating System also handles errors occurring in the CPU, in
    input-output devices, and elsewhere. It tries to ensure that errors do not occur
    frequently, fixes them when they do, and prevents processes from reaching deadlock.
    A well-secured OS can also act as a countermeasure against external breaches of the
    computer system.
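Several of the duties above (process creation, execution, and communication between processes) can be sketched with Python's standard `multiprocessing` module, which asks the operating system to create a child process and passes a result back through a queue. This is a toy illustration, not a view of OS internals:

```python
from multiprocessing import Process, Queue

def worker(q: Queue) -> None:
    # This function runs in a separate OS process.
    q.put(sum(range(10)))  # send the result back via message passing

if __name__ == "__main__":
    q: Queue = Queue()
    p = Process(target=worker, args=(q,))
    p.start()         # the OS creates and schedules the child process
    result = q.get()  # inter-process communication through a queue
    p.join()          # wait for the child process to terminate
    print(result)     # 45
```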
Objectives of Operating Systems
• Convenient to use: One of the objectives is to make the computer system more
convenient to use in an efficient manner.
• User Friendly: To make the computer system more interactive with a more convenient
interface for the users.
• Easy Access: To provide easy access to users for using resources by acting as an
intermediary between the hardware and its users.
• Management of Resources: For managing the resources of a computer in a better and
faster way.
• Controls and Monitoring: By keeping track of who is using which resource, granting
resource requests, and mediating conflicting requests from different programs and
users.
• Fair Sharing of Resources: Providing efficient and fair sharing of resources between
the users and programs.

Types of Operating Systems
• Batch Operating System: A Batch Operating System is a type of operating system
that does not interact with the computer directly. There is an operator who takes similar
jobs having the same requirements and groups them into batches.
Examples: payroll system, a bank statement, etc.

• Time-sharing Operating System: A Time-sharing Operating System is a type of
  operating system that allows many users to share computer resources (maximum
  utilization of the resources).
• Distributed Operating System: A Distributed Operating System is a type of operating
  system that manages a group of different computers and makes them appear to be a single
computer. These operating systems are designed to operate on a network of computers.
They allow multiple users to access shared resources and communicate with each other
over the network. Examples include Microsoft Windows Server and various
distributions of Linux designed for servers.
• Network Operating System: Network Operating System is a type of operating system
that runs on a server and provides the capability to manage data, users, groups, security,
applications, and other networking functions.
• Real-time Operating System: Real-time Operating System is a type of operating
system that serves a real-time system and the time interval required to process and
respond to inputs is very small. These operating systems are designed to respond to
events in real time. They are used in applications that require quick and deterministic
responses, such as embedded systems, industrial control systems, and robotics.
• Multiprocessing Operating System: Multiprocessor Operating Systems are used in
operating systems to boost the performance of multiple CPUs within a single computer
system. Multiple CPUs are linked together so that a job can be divided and executed
more quickly.
• Single-User Operating Systems: Single-User Operating Systems are designed to
support a single user at a time. Examples include Microsoft Windows for personal
computers and Apple macOS.
• Multi-User Operating Systems: Multi-User Operating Systems are designed to
support multiple users simultaneously. Examples include Linux and Unix.
• Embedded Operating Systems: Embedded Operating Systems are designed to run on
devices with limited resources, such as smartphones, wearable devices, and household
appliances. Examples include Google’s Android and Apple’s iOS.
• Cluster Operating Systems: Cluster Operating Systems are designed to run on a group
of computers, or a cluster, to work together as a single system. They are used for high-
performance computing and for applications that require high availability and
reliability. Examples include Rocks Cluster Distribution and OpenMPI.

Introduction to Process Management
If the operating system supports multiple users, then the services in this area are very important.
The operating system has to keep track of all running processes, schedule them, and dispatch
them one after another, while each user feels that they have full control of the CPU. In an
operating system, process management refers to the techniques used to create, schedule,
monitor, and terminate processes and to control their access to resources so that tasks are
executed efficiently and effectively.

Good process management improves CPU utilization and system throughput and reduces
response time. It involves analyzing the performance of running processes, identifying
bottlenecks, and making scheduling decisions that optimize the flow of work through the
system.

Some of the system calls in this category are as follows:

• Create a child process identical to the parent
• Terminate a process
• Wait for a child process to terminate
• Change the priority of the process
• Block the process
• Ready the process
• Dispatch a process
• Suspend a process
• Resume a process
• Delay a process
• Fork a process
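A few of these calls can be illustrated from user space with Python's standard `subprocess` module: creating a child process, waiting for it to terminate, and inspecting its exit status (the child program here is a made-up one-liner):

```python
import subprocess
import sys

# Create a child process that runs a short, illustrative Python program.
child = subprocess.Popen(
    [sys.executable, "-c", "print('hello from child')"],
    stdout=subprocess.PIPE,
    text=True,
)

out, _ = child.communicate()  # wait for the child process to terminate
print(out.strip())            # hello from child
print(child.returncode)       # 0 indicates normal termination
```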

What does a Process look like in Memory?
Text Section: Contains the compiled program code; the current activity of the process is
represented by the value of the Program Counter.

Stack: The stack contains temporary data, such as function parameters, return addresses, and
local variables.

Data Section: Contains the global variables.

Heap Section: Memory dynamically allocated to the process during its run time.

Key components of Process Management

• Process mapping: Creating visual representations of processes to understand how tasks
  flow, identify dependencies, and uncover improvement opportunities.
• Process analysis: Evaluating processes to identify bottlenecks, inefficiencies, and areas
  for improvement.
• Process redesign: Making changes to existing processes or creating new ones to
  optimize workflows and enhance performance.
• Process implementation: Introducing the redesigned processes into the organization
  and ensuring proper execution.
• Process monitoring and control: Tracking process performance, measuring key
  metrics, and implementing control mechanisms to maintain efficiency and
  effectiveness.

What is a process?
In computing, a process is the instance of a computer program that is being executed by
one or many threads. It contains the program code and its activity. Depending on the operating
system (OS), a process may be made up of multiple threads of execution that execute
instructions concurrently.
How is process memory used for efficient operation?
The process memory is divided into four sections for efficient operation:

• The text section holds the compiled program code, which is read from non-volatile
  storage when the program is launched.
• The data section holds global and static variables, allocated and initialized before main
  executes.
• The heap is used for flexible, or dynamic, memory allocation and is managed by calls
  to new, delete, malloc, free, etc.
• The stack is used for local variables; space on the stack is reserved for local variables
  when they are declared.

What is Process Scheduling?


Process Scheduling is the activity of the process manager that handles the removal of an active
process from the CPU and the selection of another process, based on a specific strategy.

Process scheduling is an integral part of multiprogramming operating systems. Such operating
systems allow more than one process to be loaded into executable memory at a time, and the
loaded processes share the CPU using time multiplexing.

There are three types of process schedulers:

• Long-term or Job Scheduler
• Short-term or CPU Scheduler
• Medium-term Scheduler

Why do we need to schedule processes?

• Scheduling is important in many different computing environments. One of the most
  important cases is deciding which programs will run on the CPU. This task is handled
  by the computer's Operating System (OS), and there are many different policies we can
  choose from.
• Process Scheduling allows the OS to allocate CPU time to each process. Another
  important reason to use a process scheduling system is that it keeps the CPU busy at all
  times, which gives programs a lower response time.
• Considering that there may be hundreds of programs that need to run, the OS must
  launch a program, stop it, switch to another program, and so on. The way the OS
  switches the CPU from one program to another is called "context switching". By
  context-switching programs in and out of the CPU quickly, the OS can give the user the
  illusion that all of their programs are running at once.
• So now that we know only one program can run on a given CPU at a time, and that the
  OS can swap programs in and out using context switches, how do we choose which
  programs to run, and in what order?
• That's where scheduling comes in! First, you choose a metric, such as "turnaround
  time", which we define as the interval between a task entering the system and its
  completion. Second, you find a schedule that optimizes that metric - we want our tasks
  to finish as soon as possible.
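The turnaround-time metric can be made concrete with a small First-Come-First-Served (FCFS) sketch; the arrival and burst times below are made up for illustration:

```python
# First-Come-First-Served: processes run to completion in arrival order.
# (pid, arrival_time, burst_time) -- illustrative values only.
processes = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)]

clock = 0
for pid, arrival, burst in sorted(processes, key=lambda p: p[1]):
    start = max(clock, arrival)   # CPU may sit idle until the process arrives
    clock = start + burst         # the process runs to completion
    turnaround = clock - arrival  # completion time minus arrival time
    waiting = turnaround - burst  # time spent ready but not running
    print(pid, turnaround, waiting)
# Prints: P1 5 0, then P2 7 4, then P3 7 6
```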

CHAPTER 7 : Security

7.1 - Authorization & Authentication

7.1.1 - Authentication

Authentication is the process of verifying the identity of a user, system, or application. It
ensures that the entity claiming an identity is indeed who it says it is.

7.1.1.1 Authentication Methods


1. Username and Password:
Example: When you log in to your email account, you provide a username (email address) and
a password to prove your identity.

2. Biometric Authentication:
Example: Your smartphone may use fingerprint or face recognition to authenticate you.

3. Multi-Factor Authentication (MFA):


Example: After entering a password, you might receive a one-time code on your phone that
you need to enter to complete the login.

4. Token-based Authentication:
Example: OAuth, where you log in to a third-party service using your credentials, and the
service provides a token that is then used for authentication to other services.

7.1.1.2 OAuth
OAuth, which stands for "Open Authorization," is an open standard and framework for
authorization in the context of software engineering. It provides a way for third-party
applications to access resources on a user's behalf without exposing the user's credentials (such
as passwords). OAuth is commonly used in scenarios where a user wants to grant a third-party
application limited access to their resources, like their data on another website or service.

How it works:

Imagine you have a favorite app or website, and you want to use another cool app, but they
need some of your information from the first one. Instead of giving your password to the new
app (which is risky), OAuth helps you grant access without sharing your password.

1. You want to use a new app:


Let's say you found a fitness app that wants to track your runs but needs access to your running
history from your favorite running website.

2. The new app asks for permission:


Instead of asking for your password, the fitness app says, "Hey, to get your running history,
can I have permission?" This is done by redirecting you to the running website's login page.

3. You log in to your running website:


You're now on the running website's login page. You log in with your username and password
directly on the running website, not on the fitness app.

4. Permission granted:
The running website asks, "Hey, the fitness app wants to see your running history. Is that
okay?" If you say yes, the running website gives the fitness app a special key (like a digital
keycard) called an access token.

5. Access granted with the keycard (access token):

The fitness app can now use this access token to get your running history from the running
website. However, it can only do what you allowed it to do—view your running history, not
post on your behalf or access your personal details.

So, OAuth is like a secure middleman that helps different apps work together without sharing
your passwords. It keeps your information safe while allowing apps to do cool things with your
data.

7.1.2 - Authorization

Authorization is the process of granting or denying access rights and permissions to resources
after a user or system has been authenticated. It ensures that authenticated users have the
appropriate privileges to perform specific actions or access certain information.

7.1.2.1 Authorization methods


1. Role-based Access Control (RBAC):
Example: In a company's system, a manager may have access to financial reports, while a
regular employee can only view project-related documents.

2. Attribute-based Access Control (ABAC):


Example: A document-sharing platform may grant access based on attributes like user role,
department, or project membership.

3. Rule-based Access Control:


Example: A firewall rule that allows or denies traffic based on predefined criteria like IP
addresses or port numbers.
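The RBAC idea can be sketched in a few lines; the roles and permission names below are hypothetical:

```python
# Hypothetical role -> permissions mapping for an RBAC check.
ROLE_PERMISSIONS = {
    "manager": {"view_financial_reports", "view_project_docs"},
    "employee": {"view_project_docs"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Grant access only if the role carries the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("manager", "view_financial_reports"))   # True
print(is_authorized("employee", "view_financial_reports"))  # False
```

ABAC works similarly, except the decision function would inspect several attributes (department, project, time of day) rather than a single role.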

7.2 - Password Storing, Hashing & Salt

7.2.1 - Password Storing

Password storing refers to the way user passwords are managed and kept secure in a system.

• Importance: Storing passwords securely is crucial to prevent unauthorized access and
  protect user data.
• Best Practice: Avoid storing passwords in plain text, as it exposes them to potential
  breaches.

7.2.2 - Password Hashing

Hashing is a one-way process that transforms a password into a fixed-length string of
characters, which is typically a hash code.

E.g.

Hashing ensures that even if the hashed password is compromised, it is challenging to reverse-
engineer the original password.
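As a sketch of the idea using Python's standard hashlib (the password is illustrative, and plain SHA-256 is shown only to demonstrate one-way, fixed-length hashing; real systems should use a slow, salted scheme, as described in the next section):

```python
import hashlib

password = "hunter2"  # illustrative only; never hard-code real passwords
digest = hashlib.sha256(password.encode("utf-8")).hexdigest()

# The digest is always 64 hex characters, regardless of password length,
# and the same input always produces the same digest.
print(len(digest))  # 64
```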

7.2.3 - Password Salt

A salt is a random value unique to each user that is combined with their password before
hashing.

E.g.

Salting prevents attackers from using precomputed tables (rainbow tables) and enhances
security, even if users have the same password.
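A minimal salted-hashing sketch using Python's standard `hashlib.pbkdf2_hmac` (the iteration count and salt length are illustrative choices): because each user gets a random salt, identical passwords produce different stored hashes, which is exactly what defeats rainbow tables.

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 applies the hash many times, slowing down brute-force attacks.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt_a, salt_b = os.urandom(16), os.urandom(16)  # unique random salt per user

# Same password, different salts -> different stored hashes.
print(hash_password("hunter2", salt_a) != hash_password("hunter2", salt_b))  # True
```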
To learn more, please refer to: https://auth0.com/blog/adding-salt-to-hashing-a-better-way-to-store-passwords/

7.3 - API Security

APIs (Application Programming Interfaces) are interfaces that allow different software systems
to communicate with each other. Securing APIs is crucial to prevent unauthorized access and
protect sensitive data. Here are key aspects of API security:

Authentication and Authorization: Implement robust authentication and authorization
mechanisms for API endpoints. Use tokens (JWT or OAuth) to verify the identity and
permissions of the requester.

HTTPS: Use HTTPS to encrypt data transmitted between the client and the server, ensuring
data integrity and confidentiality.

Input Validation: Validate and sanitize input data to prevent injection attacks (e.g., SQL
injection or Cross-Site Scripting).

Rate Limiting: Implement rate limiting to prevent abuse and DDoS attacks on your API.
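As an illustration of the rate-limiting idea, here is a simple token-bucket sketch (the capacity and refill rate are made up; production APIs typically enforce this in a gateway or middleware):

```python
import time

class TokenBucket:
    """Allow up to `capacity` requests, refilled at `rate` tokens per second."""

    def __init__(self, capacity: float, rate: float) -> None:
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request rejected: the client exceeded the limit

bucket = TokenBucket(capacity=3, rate=1.0)
print([bucket.allow() for _ in range(5)])  # first three allowed, then rejected
```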

7.3.1 - JWT

JSON Web Tokens (JWT) are a compact, URL-safe means of representing claims between two
parties. They are commonly used for authentication and authorization in API security. Here's a
detailed explanation of how to use JWT tokens in API security.

1. Token Creation (Authentication):

When a user logs in or authenticates, a JWT is generated on the server side. This JWT typically
contains information such as user ID, role, and expiration time. The token is then signed with
a secret key to ensure its integrity.

2. Token Transmission:

Once the JWT is created, it is sent to the client (usually in the response body or header) as part
of the authentication process.

3. Token Verification (Authorization):

When the client makes subsequent requests to the API, it includes the JWT in the request
header. The server then verifies the token's authenticity using the secret key. If the token is
valid, the server extracts the claims and processes the request.

4. Token Expiration:

JWTs often have an expiration time (exp claim). Clients should check the expiration and, if
expired, request a new token. This adds an extra layer of security, reducing the risk of token
misuse.

5. Token Payload (Claims):

The payload of a JWT contains claims, which are statements about an entity (typically the user)
and additional metadata.

6. Revoking Tokens:

Since JWTs are stateless, revoking tokens can be challenging. To handle token revocation,
consider using token blacklisting or short-lived tokens with frequent refreshes.

7. Securing the Secret Key:

Keep the secret key used to sign the JWTs secure. Any compromise of the key could lead to
unauthorized access.
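The create/transmit/verify cycle in steps 1-3 can be sketched with Python's standard library alone. This is an HS256-style HMAC signature over hypothetical claims, shown for illustration only; real services should use a vetted JWT library rather than hand-rolled code:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"keep-this-key-safe"  # hypothetical signing key; store it securely

def b64url(data: bytes) -> bytes:
    # JWT uses base64url encoding without '=' padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign(payload: dict) -> str:
    # A JWT is header.payload.signature, each part base64url-encoded.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify(token: str) -> bool:
    # Recompute the signature and compare in constant time.
    signing_input, _, sig = token.rpartition(".")
    expected = b64url(hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest())
    return hmac.compare_digest(expected.decode(), sig)

token = sign({"sub": "user-42", "role": "member"})
print(verify(token))        # True
print(verify(token + "x"))  # False: the token was tampered with
```

Note how tampering with even one character invalidates the signature; this is what lets the server trust the claims without a database lookup.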

7.4 - .env file usage in Public Repositories

Using a .env file in a GitHub repository involves storing configuration variables and sensitive
information in an external file. This is a common practice to keep sensitive data, like API keys
or database credentials, separate from the source code. Here's a detailed explanation of how to
use a .env file in a GitHub repository.

7.4.1 - What is a `.env` file

A `.env` file is a plain text configuration file that typically stores environment variables. Each
variable in the file has a key-value pair, separated by an equal sign (`=`).

7.4.2 - Why use a `.env` file

Security: It allows you to keep sensitive information separate from your source code,
preventing accidental exposure of secrets.

Configuration Management: It makes it easy to manage configuration settings for different
environments (development, testing, production) without modifying code.

7.4.3 - Using `.env` with GitHub

Version Control: By default, you should NOT include your `.env` file in version control. Add
it to your `.gitignore` file to prevent it from being tracked by Git.

Environment Variable Setup: In your application code, you need to read values from the
`.env` file and set them as environment variables. This is typically done in your application's
configuration or startup process.

Example in Node.js using the `dotenv` package,
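The same idea can be sketched in a language-neutral way: read KEY=VALUE pairs from a `.env` file and export them as environment variables. This minimal Python loader illustrates what packages like dotenv do, but is not the package itself (the file contents and key name are made up):

```python
import os
import tempfile

def load_env(path: str) -> None:
    """Minimal .env loader: one KEY=VALUE pair per line, '#' starts a comment."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demo with a throwaway file (the key name is illustrative).
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write("# secrets stay out of source control\nDEMO_API_KEY=abc123\n")

load_env(fh.name)
print(os.environ["DEMO_API_KEY"])  # abc123
```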

7.4.3.1 Secrets Management in GitHub
GitHub Secrets: If you need to use environment variables in GitHub Actions or workflows,
you can use GitHub Secrets. These are encrypted variables that you can set in the GitHub
repository settings.

Encrypted `.env` files: For other use cases, you might want to encrypt your `.env` file and
store the encrypted version in the repository. GitHub provides a feature called Encrypted
Secrets that you can use to store sensitive files securely.

7.4.4 - Best Practices

Never expose secrets: Always ensure that sensitive information is kept confidential, and never
expose it unintentionally.

Use .env.example: Include a `.env.example` file in your repository with placeholders for each
variable. Developers can copy this file to create their own `.env` files.

Example GitHub Actions Workflow,
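A workflow of the kind described below might look roughly like this. This is a hedged sketch: the trigger, job, and step names are assumptions, and only the secret names (`API_KEY`, `DATABASE_URL`, `SECRET_KEY`) come from the surrounding text:

```yaml
name: CI
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run with secrets as environment variables
        env:
          API_KEY: ${{ secrets.API_KEY }}
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
          SECRET_KEY: ${{ secrets.SECRET_KEY }}
        run: ./deploy.sh   # hypothetical script that reads the variables
```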

In this example, the secrets (`API_KEY`, `DATABASE_URL`, `SECRET_KEY`) are
accessed as environment variables during the workflow run. These secrets should be added to
the GitHub repository's settings.

Remember, the key to using a `.env` file securely in a GitHub repository is to carefully manage
sensitive information and follow best practices for secrets and configuration management.

CHAPTER 8 : Cloud Computing & DevOps

8.1 - Introduction to Cloud Computing

8.1.1 - What is Cloud Computing?

Let’s understand cloud computing with a simple real-world example. Suppose you are planning
to go on a trip with your friends. After finalizing the schedule, you book a bus or some other
transportation option to travel to your destination. Imagine that, on the morning of the trip,
several other friends suddenly join. What do you do now? The vehicle booked earlier has
insufficient space, so you have to add one or more vehicles depending on the number of friends
who joined. The image below illustrates this scenario. So how is this scenario related to cloud
computing?

Example

Before mapping this scenario to cloud computing, see the definition of cloud computing from
the AWS website:

“Cloud computing is the on-demand delivery of IT resources over the Internet with pay-as-you-
go pricing. Instead of buying, owning, and maintaining physical data centers and servers, you
can access technology services, such as computing power, storage, and databases, on an as-
needed basis from a cloud provider like Amazon Web Services (AWS).”

Now let’s try to map the example to starting a website. Suppose you have an idea to develop
an e-commerce web application to sell products, and after developing the application you have
to deploy it. Since you don’t have a server, you choose a cloud provider like AWS, GCP, or
Azure, and then pick a virtual machine (VM). Here we can select a VM according to our
demand (e.g., the number of users), and those cloud providers are always willing to supply
their services on demand.

This is similar to planning the trip and booking the bus first. But, as mentioned earlier, we had
to book more buses because more friends joined our trip. This corresponds to the number of
users reaching your application increasing over time. At some point, our application may not
be able to handle all the users because of a shortage of resources.

What we do then is spin up one or more additional servers according to demand; this is called
scaling up in cloud computing. Over time, if demand decreases, there can be a surplus of
resources. Then we can remove the additional servers and keep resources matched to demand;
this is called scaling down. The most important point is that we pay only for the time the servers
run (the pay-as-you-go model). In cloud computing, we don’t need to configure scaling up and
down manually; we just define a rule that listens to our website traffic and, depending on
increases and decreases in traffic, spins up more servers or removes existing ones.

Figure 8.1 Vertical Scaling

8.1.2 - Cloud Scalability

There are two types of scalability.

01) Vertical Scalability (Scaling Up)

Let’s suppose we use a server with 1 GB of RAM and a 500 GB HDD to host our web application.
As traffic increases over time, we decide to increase the capacity of our server by raising the
RAM to 8 GB and the hard drive to 1 TB. This procedure is called scaling up. On the other
hand, when traffic decreases, we can also reduce the resources that we allocate.

02) Horizontal Scalability (Scaling Out)

In this approach, we do not upgrade our existing server to enhance performance. What we do
is spin up more similar servers, and we use an application load balancer (e.g., Amazon Elastic
Load Balancer) to distribute traffic among those servers.

Cloud providers allow us to manage horizontal scaling and vertical scaling automatically; we
just need to define a rule for when to scale up and down (Auto Scaling).
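Such an auto-scaling rule can be sketched as a simple decision function; the thresholds and fleet-size limits below are made up for illustration:

```python
def desired_servers(current: int, cpu_utilization: float) -> int:
    """Toy auto-scaling rule: add a server above 70% average CPU,
    remove one below 30%, within a fixed min/max fleet size."""
    if cpu_utilization > 0.70:
        return min(current + 1, 10)  # scale out, capped at 10 servers
    if cpu_utilization < 0.30:
        return max(current - 1, 1)   # scale in, but keep at least 1
    return current                   # demand is in the comfortable band

print(desired_servers(3, 0.85))  # 4
print(desired_servers(3, 0.10))  # 2
print(desired_servers(3, 0.50))  # 3
```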

8.1.3 - Cloud Elasticity

Scalability and Elasticity are two different concepts that look similar and go hand in hand.
Basically, elasticity is the ability to automatically expand or shrink infrastructure resources in
response to a sudden increase or decrease in demand, while ensuring the workload is managed
efficiently.

On an e-commerce website, we do not always have the same number of customers; traffic varies
from time to time, so sometimes our resources may not be enough to handle higher traffic. On
the other hand, if we provision resources for peak traffic, they will not always be used. So,
what can we do?

Isn’t it better to have a mechanism that allocates resources as needed when traffic increases
and reduces them when traffic decreases? Definitely. That is what we call Cloud Elasticity:
the cloud provider manages resources according to demand, and we just need to define a rule
for it. Cool, huh?

Figure 8.2 Cloud Elasticity

8.1.4 - How cloud services are hosted

Cloud providers have their own large distributed data centers around the world. Inside these
data centers there are many physical machines, which provide isolated environments for our
requirements. To do that, they use virtualization technology. Now let’s see how it happens.

Suppose we have a physical server that consists of processors, memory, graphics, network
interfaces, storage, etc. On top of that we have an OS, and our applications run on top of the
OS. In this arrangement, the server can host only one operating system. If we want multiple
operating systems and wish to distribute the hardware resources among them, we can add a
virtualization layer on top of the physical hardware. The virtualization layer is itself
software, which allows us to run multiple operating systems on top of the physical hardware
layer. In cloud computing, virtualization is mainly done with the help of hypervisors, or
virtual machine monitors (VMMs), which are programs that build and take care of virtual
machines (VMs). When we create virtual machines on top of the virtualization layer, each one is
given virtual hardware (virtualized processors, RAM, network, etc.). Each virtual machine
receives a share of the host's computational resources, such as CPU, memory, and storage, and
runs its own guest operating system and applications. Because of this, users do not need to buy
their own hardware and infrastructure; they can manage everything from an administrative
console provided by the cloud provider.

8.1.5 - Types of Cloud Computing

There are mainly three types of cloud computing services.

01) Infrastructure as a Service (IaaS)

In this service, we get only infrastructure facilities from the cloud provider: servers, hard
drives, RAM, CPU, virtual machines, etc. We then have to install an operating system (ex:
Windows, Linux), database servers (ex: MySQL, a NoSQL server), and any other software and
servers needed to run our application. Here we have full control over our virtual machine, but
because of that we also have to handle many administrative tasks (ex: we need to manually
update antivirus software).

02) Platform as a Service (PaaS)

Here we get a virtual machine with the required software pre-installed.

Ex: we can get a virtual machine with a Linux server and MySQL database already installed. Then
we just need to define the DB schema and create the tables; there is no need to install MySQL
ourselves. Compared to IaaS, PaaS involves less administrative work.

Ex: — Azure app services, AWS Elastic Beanstalk

03) Software as a Service (SaaS)

In SaaS, the cloud provider hosts complete software applications. Users do not have to do any
administrative work; they can use the services directly.

Ex: — Google apps, Microsoft Office 365

8.1.6 - Deployment models for cloud computing

When selecting a cloud strategy, a company must consider factors such as required cloud
application components, preferred resource management tools, and any legacy IT infrastructure
requirements.

The three cloud computing deployment models are cloud-based, on-premises, and hybrid.

Cloud-Based Deployment

• Run all parts of the application in the cloud.
• Migrate existing applications to the cloud.
• Design and build new applications in the cloud.

In a cloud-based deployment model, you can migrate existing applications to the cloud, or you
can design and build new applications in the cloud. You can build those applications on low-
level infrastructure that requires your IT staff to manage them. Alternatively, you can build
them using higher-level services that reduce the management, architecting, and scaling
requirements of the core infrastructure. For example, a company might create an application
consisting of virtual servers, databases, and networking components that are fully based in the
cloud.

On-Premises Deployment

• Deploy resources by using virtualization and resource management tools.
• Increase resource utilization by using application management and virtualization
technologies.

On-premises deployment is also known as a private cloud deployment. In this model, resources
are deployed on premises by using virtualization and resource management tools.

For example, you might have applications that run on technology that is fully kept in your on-
premises data center. Though this model is much like legacy IT infrastructure, its incorporation
of application management and virtualization technologies helps to increase resource
utilization.

Hybrid Deployment

• Connect cloud-based resources to on-premises infrastructure.
• Integrate cloud-based resources with legacy IT applications.

In a hybrid deployment, cloud-based resources are connected to on-premises infrastructure. You
might want to use this approach in a number of situations. For example, you have legacy
applications that are better maintained on premises, or government regulations require your
business to keep certain records on premises.

For example, suppose that a company wants to use cloud services that can automate batch data
processing and analytics. However, the company has several legacy applications that are more
suitable on premises and will not be migrated to the cloud. With a hybrid deployment, the
company would be able to keep the legacy applications on premises while benefiting from the
data and analytics services that run in the cloud.

8.1.7 - Benefits of cloud computing

Consider why a company might choose to take a particular cloud computing approach when
addressing business needs.

Trade upfront expense for variable expense

Upfront expense refers to data centres, physical servers, and other resources that you would
need to invest in before using them. Variable expense means you only pay for computing
resources you consume instead of investing heavily in data centres and servers before you know
how you’re going to use them.

By taking a cloud computing approach that offers the benefit of variable expense, companies
can implement innovative solutions while saving on costs.

Stop spending money to run and maintain data centres.

Computing in data centres often requires you to spend more money and time managing
infrastructure and servers.

A benefit of cloud computing is the ability to focus less on these tasks and more on your
applications and customers.

Stop guessing capacity.

With cloud computing, you don’t have to predict how much infrastructure capacity you will
need before deploying an application.

For example, you can launch Amazon EC2 instances when needed, and pay only for the
compute time you use. Instead of paying for unused resources or having to deal with limited
capacity, you can access only the capacity that you need. You can also scale in or scale out in
response to demand.

Benefit from massive economies of scale

By using cloud computing, you can achieve a lower variable cost than you can get on your
own.

Because usage from hundreds of thousands of customers can aggregate in the cloud, providers,
such as AWS, can achieve higher economies of scale. This economy of scale translates into
lower pay-as-you-go prices.

Increase speed and agility.

The flexibility of cloud computing makes it easier for you to develop and deploy applications.
This flexibility provides you with more time to experiment and innovate. When computing in
data centres, it may take weeks to obtain new resources that you need. By comparison, cloud
computing enables you to access new resources within minutes.

Go global in minutes.

The global footprint of the AWS Cloud enables you to deploy applications to customers around
the world quickly, while providing them with low latency. This means that even if you are
located in a different part of the world than your customers, customers are able to access your
applications with minimal delays. AWS also provides services that you can use to deliver
content to customers around the world with low latency.

CHAPTER 9 : Testing and Quality Assurance

9.1 - Software Testing Fundamentals

Software Testing Life Cycle (STLC):

• Requirement Analysis: Understand project goals, user needs, and functional/non-functional
requirements.
• Test Planning: Define testing scope, schedule, resources, and methodologies.
• Test Design:
• Black-Box Testing: Focuses on user perspective and functionality without internal code
knowledge.
• White-Box Testing: Examines internal structure and logic of the code.
• Grey-Box Testing: Combines elements of both approaches.
• Test Case Creation: Develop specific steps to verify requirements and identify potential
issues.
• Test Execution: Manually or automatically run tests according to the plan.
• Defect Management: Report, track, and resolve issues efficiently.
• Test Result Analysis: Evaluate overall test coverage, identify trends, and make
recommendations.
• Closure: Verify defect fixes and document lessons learned.
Types of Testing

"Types of testing" refers to the various approaches and categories used to evaluate software
applications and systems. Each type focuses on different aspects, ensuring the software
functions as intended, meets quality standards, and delivers a positive user experience.

• Functional Testing: Ensures features work as intended (unit, integration, system).
• Non-Functional Testing: Evaluates performance, security, usability, etc.
• Regression Testing: Verifies existing functionality remains intact after changes.
• Automation Testing: Uses scripts to automate repetitive tasks and increase efficiency.
Categories of Testing:

Testing types can be broadly categorized based on different criteria:

Based on Execution:

❖ Static Testing: Analyzes code, requirements, and design documents without executing
the program. Focuses on finding logical errors, code smells, and potential security
vulnerabilities. Examples: code reviews, static code analysis tools.
❖ Dynamic Testing: Executes the program and evaluates its behavior under various
conditions. Ensures functionality, performance, usability, and security. Examples: unit
testing, integration testing, performance testing.

Based on Functionality:

❖ Functional Testing: Verifies features work as per requirements and specifications. Focuses
on user interactions, inputs, and outputs. Examples: unit testing, integration testing,
system testing, acceptance testing.
❖ Non-Functional Testing: Evaluates characteristics beyond core functionality, such as
performance, security, usability, and accessibility. Examples: performance testing,
security testing, usability testing, accessibility testing.
Based on Scope:

❖ Unit Testing: Tests individual units of code (e.g., functions, classes) in isolation.
Ensures individual units work correctly.
❖ Integration Testing: Tests how different units interact and integrate with each other.
Verifies data flow and communication between units.
❖ System Testing: Tests the entire system as a whole, including all integrated components
and functionalities. Evaluates overall system behavior.
❖ Acceptance Testing: Performed by end-users or stakeholders to validate the system
meets their needs and requirements. Provides final approval for deployment.
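The difference between the unit and integration levels can be seen in a small sketch; the shopping-cart functions below are invented purely for illustration.

```python
# Unit under test: computes the total for a single order line.
def line_total(price, qty):
    return price * qty

# A second unit that depends on the first.
def cart_total(items):
    return sum(line_total(price, qty) for price, qty in items)

# Unit test: exercises line_total in isolation.
def test_line_total():
    assert line_total(2.5, 4) == 10.0

# Integration test: verifies the two units cooperate correctly.
def test_cart_total():
    assert cart_total([(2.5, 4), (1.0, 3)]) == 13.0

test_line_total()
test_cart_total()
```

System and acceptance testing would then exercise the whole checkout flow end to end, which is beyond a snippet of this size.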

Based on Methodology:

❖ Black-Box Testing: Tests the system from the user's perspective, without knowledge of
internal code structure. Focuses on inputs, outputs, and expected behavior.
❖ White-Box Testing: Tests the system with knowledge of internal code structure.
Explores logic, data flow, and implementation details.
❖ Grey-Box Testing: Combines elements of black-box and white-box testing, utilizing
some internal knowledge to enhance test coverage.
Based on Other Properties:

❖ Regression Testing: Ensures existing functionalities remain intact after code changes.
❖ Smoke Testing: Basic tests to verify critical functionalities before further testing.
❖ Sanity Testing: Quick tests to ensure major features work after changes.
❖ Mutation Testing: Introduces intentional errors into code to verify if tests can detect
them.
❖ Fault Injection Testing: Simulates hardware or software failures to assess system
resilience.
❖ Exploratory Testing: Ad-hoc testing driven by tester creativity and knowledge, often
used for finding edge cases.
❖ Security Testing: Aims to identify and exploit vulnerabilities that could compromise
system security.
❖ Accessibility Testing: Verifies the system is usable by people with disabilities.
❖ Usability Testing: Evaluates how easy and satisfying it is for users to interact with the
system.
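The idea behind mutation testing can be demonstrated in miniature: inject a small deliberate fault (a "mutant") and check that the test suite detects, or "kills", it. The functions below are invented for the example.

```python
# Original unit, and a mutant with a deliberately injected fault:
# the ">=" boundary has been mutated to ">", as mutation tools do automatically.
def is_adult(age):
    return age >= 18

def is_adult_mutant(age):
    return age > 18  # injected fault

def suite_passes(fn):
    """Run the test suite against an implementation; True if all checks pass."""
    try:
        assert fn(18) is True   # boundary case -- this is what kills the mutant
        assert fn(17) is False
        assert fn(30) is True
        return True
    except AssertionError:
        return False

original_survives = suite_passes(is_adult)         # suite passes on real code
mutant_killed = not suite_passes(is_adult_mutant)  # suite catches the fault
```

If the suite had omitted the boundary case fn(18), the mutant would survive, revealing a gap in test coverage.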

9.1.1 - Best Practices & Methodologies:

• Agile Testing: Adapts to rapid development cycles (e.g., Kanban, TDD).
• Continuous Integration/Continuous Delivery (CI/CD): Integrates testing phases
seamlessly into development process.
• Exploratory Testing: Unstructured, creative testing to find unforeseen issues.
• Test-Driven Development (TDD): Writes tests before code to guide implementation.
• Behavior-Driven Development (BDD): Focuses on user stories and expected behavior.
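The TDD rhythm of red (failing test), green (minimal implementation), refactor can be sketched as follows; the slugify function is a hypothetical example.

```python
# Red: the test is written first. At this point slugify does not exist,
# so running the test would fail -- that failure drives the implementation.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Green: the minimum implementation that makes the test pass.
def slugify(title):
    return "-".join(title.lower().split())

# Refactor: clean up the code while keeping the test green.
test_slugify()
```

The test now doubles as executable documentation of the expected behavior.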

9.2 - Quality Assurance

Quality Management:

• Quality Standards & Models: Follow established guidelines (e.g., ISO 9001, CMMI)
for process improvement.
• Quality Control vs. Quality Assurance: QA emphasizes prevention, while QC focuses
on detection and correction.
• Defect Prevention & Process Improvement: Identify recurring issues and implement
measures to minimize them.
• Risk Management & Mitigation: Proactively assess and address potential risks to
project quality.
Test Management:

• Test Strategy & Planning: Define overall testing approach and objectives.
• Test Resource Management: Effectively allocate human and technical resources.
• Test Budget & Cost Estimation: Accurately predict testing costs and manage
expenditure.
• Metrics & Reporting: Track key performance indicators (KPIs) and report progress to
stakeholders.
• Communication & Collaboration: Maintain clear communication with developers,
stakeholders, and other teams.
9.3 - Test Automation

Test automation falls under the Software Testing Fundamentals umbrella, specifically within
the subtopic of Types of Testing. However, due to its importance and growing use, it often
merits its own separate section or course material.

Test automation involves using software tools to automate the execution and evaluation of test
cases that would otherwise be run manually. It helps:

• Increase efficiency and speed up testing: Run repetitive tests quickly and more often.
• Improve test coverage: Execute more test cases than manually feasible.
• Enhance accuracy and consistency: Reduce human error in test execution.
• Free up testing resources: Allow testers to focus on exploratory and critical testing.
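A tiny data-driven suite shows these benefits in practice: one script checks many input/output pairs that would be tedious to verify by hand after every change. The validate_email function is invented for the example and deliberately simplistic.

```python
import re

# Function under test: a deliberately simple email check
# (real-world email validation is far more involved).
def validate_email(address):
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None

# Data-driven cases: automation re-runs all of them in milliseconds
# every time the code changes.
CASES = [
    ("user@example.com", True),
    ("no-at-sign.com", False),
    ("two@@example.com", False),
    ("trailing@dot.", False),
]

def run_suite():
    """Return the failing (input, expected) pairs; an empty list means success."""
    return [(addr, want) for addr, want in CASES
            if validate_email(addr) is not want]
```

In a CI/CD pipeline, the same suite would run automatically on every commit.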

There are numerous test automation tools available, each with its own strengths and
weaknesses. Choosing the right tool depends on factors such as your project needs, budget,
and your team's technical expertise. Here's an overview of some popular options, categorized
as open-source, commercial, and cloud-based:

Open-Source:

• Selenium: A widely used, powerful framework supporting various programming languages and
browsers. Offers browser automation, API testing, and more.
• Appium: Designed for automating native, hybrid, and web mobile applications.
Supports multiple platforms and programming languages.
• Robot Framework: Keyword-driven framework known for its simplicity and
readability. Integrates well with other tools and languages.
• Cucumber: Behavior-driven development (BDD) tool focusing on describing tests in
plain language, promoting collaboration between testers and developers.
• Cypress: Modern web testing tool offering ease of use, fast setup, and automatic waiting
capabilities.
Commercial:

• Tricentis Tosca: Comprehensive platform covering multiple testing types, including
functional, performance, and mobile testing. Offers AI-powered features and integrations.
• HP UFT (Unified Functional Testing): Long-standing tool supporting desktop, web,
and mobile app testing. Offers record and playback functionality, visual scripting, and
advanced capabilities.
• Micro Focus Silk Test: Offers various modules for web, mobile, API, and service-
oriented architecture (SOA) testing. Known for its visual test creation and data-driven
testing features.
• Ranorex Studio: User-friendly tool that supports desktop, web, and mobile app testing.
Offers visual test automation, keyword-driven testing, and integration capabilities.
Cloud-Based:

• Kobiton: Cloud-based platform for mobile app testing across various devices and
platforms. Offers automated testing, manual testing tools, and data insights.

• BrowserStack: Cloud-based platform for web and mobile app testing across various
browsers, devices, and operating systems. Includes automated and manual testing
features.
• Sauce Labs: Cloud-based platform for automated and manual testing of web, mobile,
and API applications. Provides access to a wide range of devices and operating systems.

