Cloud computing is the delivery of computer system resources (e.g., servers, storage, databases, networking, software, analytics) over the internet.
The National Institute of Standards and Technology (NIST) defines cloud computing as "a pay-per-use model for enabling available, convenient and on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."
Redundancy: Since many different machines are providing the same service, that service can
keep running even if one (or more) of the computers goes down.
3. Bring out the differences between private cloud and public cloud.
Public Cloud: Public clouds are owned and operated by third-party service providers who deliver
computing resources like servers and storage over the Internet.
Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP) are prime
examples. They offer scalability and flexibility, making them ideal for many businesses.
Private Cloud: A private cloud refers to cloud computing resources used exclusively by one
business or organization.
It can be hosted on-site or by a third-party provider, offering enhanced security and control,
which is crucial for industries with strict data regulations.
1. Public cloud: when the computing infrastructure and resources are shared with the public via the internet, it is known as a public cloud. Private cloud: when the computing infrastructure and resources are shared within a private network, it is known as a private cloud.
2. Public cloud: a multi-tenant model in which the network is managed by the service provider. Private cloud: a single-tenant model in which the network is handled by an in-house team.
3. Public cloud: the data of several enterprises is stored. Private cloud: the data of a single enterprise is stored.
4. Public cloud: supports activity performed over the public network or internet. Private cloud: supports activity performed over the private network.
5. Public cloud: scalability is high. Private cloud: scalability is limited.
6. Public cloud: performance is low to medium. Private cloud: performance is high.
4. Define SOA.
SOA, or service-oriented architecture, defines a way to make software components reusable and interoperable through service interfaces. It is a method of software development that uses software components called services to create business applications.
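As a rough sketch of the idea (not from the source; all class and method names are hypothetical), the following Python example shows a service interface: the business application depends only on the interface, so any service implementing it is reusable and interchangeable.

```python
from abc import ABC, abstractmethod

class PaymentService(ABC):
    """Hypothetical service interface: callers depend only on this contract."""
    @abstractmethod
    def charge(self, account_id: str, amount: float) -> bool: ...

class CardPaymentService(PaymentService):
    def charge(self, account_id: str, amount: float) -> bool:
        # One interchangeable implementation of the service.
        print(f"Charging card account {account_id}: {amount}")
        return True

def checkout(service: PaymentService) -> None:
    # The business application is written against the interface,
    # so any conforming service can be plugged in.
    service.charge("acct-42", 19.99)

checkout(CardPaymentService())
```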
5. Write the names of Web services tools.
XML-RPC, UDDI, SOAP, REST, and Google Maps
1. Distributed Systems
A distributed system connects multiple independent computers that are physically located in different places but communicate by sending messages over a network. Some examples of distributed systems are Ethernet (a LAN technology), telecommunication networks, and parallel processing. The basic functions of distributed systems are:
• Resource Sharing — Resources like data, hardware, and software can be shared between the systems.
• Openness — The software is designed to be open so that it can be shared and extended.
• Fault Detection — An error or failure in the system is detected and can be corrected.
2. Mainframe Computing
It was developed in the year 1951 and provides powerful features. Mainframe computing is still in existence due to its ability to deal with large amounts of data. For a company that needs to access and share a vast amount of data, this kind of computing is preferred. Among the four types of computers, the mainframe computer performs very fast and lengthy computations easily. The services handled by mainframes are bulk processing and exchange of large-sized data. Despite its performance, mainframe computing is very expensive.
3. Cluster Computing
In cluster computing, several computers are connected so that they act as a single computing unit. Tasks are performed concurrently by each computer, also known as a node, connected to the network. The activities performed by any single node are known to all the nodes of the cluster, which can increase performance, transparency, and processing speed.
Cluster computing came into existence to reduce cost compared to mainframes. We can also resize a cluster by adding or removing nodes.
4. Grid Computing
It was introduced in the year 1990. As in cluster computing, the structure includes different computers or nodes, but in this case the nodes are placed in different geographical places while being connected to the same network over the internet.
The computing methods seen so far use homogeneous nodes located in the same place; in grid computing, the nodes may belong to different organizations. Grid computing minimized some of the problems of cluster computing, but the distance between the nodes raised a new problem.
5. Web 2.0
This computing lets users generate their own content and collaborate with other people or share information using social media, for example Facebook, Twitter, and Orkut. Web 2.0 is the second-generation World Wide Web (WWW) combined with web services, and it is the computing type in use today.
6. Virtualization
It came into existence about 40 years ago and has become a core technique used in IT firms. It employs a software layer over the hardware and, using this layer, provides the customer with cloud-based services.
7. Utility Computing
Utility computing provides resources based on the need of the user: users, companies, or clients can rent data storage and other computing resources according to their business need and pay only for what they use.
6.b) Examine in detail about the multi core CPUs and multithreading technologies
Multicore CPUs:
Unlike multi-processor systems that make use of multiple processors to carry out concurrent operations, multicore CPUs make use of multiple cores within a single processor. The idea here is to divide the processor into multiple cores (dual, quad, etc.) to carry out operations in parallel. The main advantage of these systems is that they improve the potential performance of the overall system. A major example is Intel processors, whose processing speed increased from 10 MHz to 4 GHz. This value is considered the limit for most or all CMOS-based chips because of power constraints. These constraints can be mitigated by employing ILP (instruction-level parallelism) mechanisms, which are based on superscalar architecture and speculative execution.
Some systems use many-core GPUs (graphics processing units) that make use of thousands of processor cores. These GPUs are capable of managing instructions of varying magnitudes, similar to a multicore CPU.
Some processors based on multicore and multithreaded processing are the Intel i7, AMD Opteron, IBM POWER6, and many more.
Multithreading: Multithreading is a feature that enables multiple threads to execute on a single processor in an overlapping manner. A thread is an atomic unit of a process, and many threads usually make up a process. In a multithreaded environment, the resources of a processor are shared by multiple threads, while each thread gets its own copy of certain functional units or resources. These generally include a register file, a separate program counter (PC), and a separate page table to enable virtual memory access, which in turn allows multiple programs to execute simultaneously by sharing the memory.
Multithreading
To enable multithreading, the hardware must be able to perform thread switching, which is more efficient than switching processes, since a process consists of many threads and a process switch usually takes many clock cycles. As threads are lightweight, they can execute and switch among themselves during execution. Therefore, they are considered faster and more efficient than processes. There are two types:
1. Fine-grained multithreading
2. Coarse-grained multithreading.
1. Fine-Grained Multithreading: In this approach, threads are switched on each instruction, so the execution of several threads is interleaved. The delay caused by the switch operation is very small. The next thread is chosen from a pool of ready threads in a round-robin fashion, skipping any thread that is stalled at that point. The approach is most effective when threads are switched at every clock cycle.
Advantage: Fine-grained multithreading can efficiently recover the throughput losses that come from both short and long stalls, since another thread executes while one thread is stalled.
Disadvantage: The execution of an individual thread is slowed down, since other threads are executed in its place even when it is ready to proceed.
2. Coarse-Grained Multithreading: The key idea of coarse-grained multithreading is that it switches away from a thread only when it encounters a costly stall, replacing it with a new thread. A costly stall consumes more clock cycles than the time taken to remove the frozen thread and insert a new thread into the pipeline, so the switch pays for itself.
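As a rough illustration of the multithreading described above (not from the source; thread names and the simulated stall are invented), the following Python sketch runs several threads of one process in an overlapping manner while they share the process's memory:

```python
import threading
import time

counter_lock = threading.Lock()
counter = 0  # shared process memory, visible to all threads

def worker(name: str) -> None:
    global counter
    for _ in range(3):
        time.sleep(0.01)          # simulate a stall (e.g., waiting on I/O)
        with counter_lock:        # threads share memory, so guard updates
            counter += 1
        print(f"{name} ran; counter={counter}")

threads = [threading.Thread(target=worker, args=(f"thread-{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

While one thread sleeps (stalls), the others keep running, which is exactly the throughput recovery that multithreading aims for.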
Publish/Subscribe systems are nowadays considered a key technology for information diffusion. Each participant in a publish/subscribe communication system can play the role of a publisher or a subscriber of information. Publishers produce information in the form of events, which are then consumed by subscribers. Subscribers can declare their interest in a subset of the whole information by issuing subscriptions. There are two major roles, publisher and subscriber, and two delivery strategies:
Push strategy: It is the responsibility of the publisher to notify all the subscribers, e.g., by method invocation.
Pull strategy: The publisher simply makes the message available for a specific event, and it is the responsibility of the subscribers to check whether there are messages for the events they have registered for. Subscriptions are used to filter out part of the events produced by publishers.
In software architecture, the Publish/Subscribe pattern is a message pattern and a network-oriented architectural pattern. It describes how two different parts of a message-passing system connect and communicate with each other.
There are three main components to the Publish/Subscribe model:
Publishers: Broadcast messages, with no knowledge of the subscribers.
Subscribers: 'Listen' for messages on topics/categories they are interested in, without any knowledge of who the publishers are.
Event bus: Transfers the messages from the publishers to the subscribers.
Pub/Sub messaging allows developers to create decoupled applications easily with a reliable communication method and enables event-driven architectures. Event-driven architecture (EDA) is a software design pattern that enables a system to detect events (such as a transaction or site visit) and act on them in real time or near real time. This pattern replaces the traditional request/response architecture, where services would have to wait for a reply before they could move on to the next task.
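A minimal sketch of the three components above, assuming a single in-process event bus (all class, topic, and handler names are hypothetical):

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Toy event bus: routes published events to topic subscribers."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: str) -> None:
        # The publisher never sees the subscribers; the bus decouples them.
        for handler in self._subscribers[topic]:
            handler(message)

bus = EventBus()
bus.subscribe("orders", lambda msg: print(f"billing got: {msg}"))
bus.subscribe("orders", lambda msg: print(f"shipping got: {msg}"))
bus.publish("orders", "order #17 created")   # push-style delivery
```

Note that the publisher only names a topic; adding or removing subscribers requires no change to the publishing code, which is the decoupling the pattern promises.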
In computing, virtualization refers to the act of creating a virtual (rather than actual) version of something, like computer hardware platforms, operating systems, storage devices, and computer network resources.
Virtualization has a prominent impact on cloud computing. In cloud computing, users store data in the cloud, but with the help of virtualization they gain the extra benefit of sharing the infrastructure. Cloud vendors take care of the required physical resources, but providers charge a significant amount for these services, which affects every user or organization. Virtualization helps users or organizations maintain the services a company requires through external (third-party) providers, which helps in reducing costs to the company. This is the way virtualization works in cloud computing.
Taxonomy of Virtualization
Virtualization is technology that you can use to create virtual representations of servers, storage, networks, and other physical machines. Virtualization software mimics the functions of physical hardware to run multiple virtual machines simultaneously on a single physical machine. Virtualization is mainly used to emulate the execution environment, storage, and networks.
The execution environment is classified into two levels:
1. Process-level — implemented on top of an existing operating system.
2. System-level — implemented directly on top of the hardware.
Drawbacks of adopting the cloud:
• High Initial Investment: Clouds have a very high initial investment, but it is also true that they help in reducing companies' costs over time.
• Learning New Infrastructure: As companies shift from servers to the cloud, they require highly skilled staff who can work with the cloud easily; for this, you have to hire new staff or train current staff.
• Risk of Data: Hosting data on third-party resources can put the data at risk, as it has a chance of being attacked by a hacker or cracker.
Characteristics of Virtualization
• Managed Execution: In particular, sharing, aggregation, emulation, and isolation are the
most relevant features.
2. Network Virtualization: The ability to run multiple virtual networks, each with a separate control plane and data plane, co-existing on top of one physical network. The virtual networks can be managed by individual parties and kept confidential from each other. Network virtualization provides a facility to create and provision virtual networks, logical switches, routers, firewalls, load balancers, Virtual Private Networks (VPNs), and workload security within days or even weeks.
5. Server Virtualization: This is a kind of virtualization in which the masking of server resources takes place. The central (physical) server is divided into multiple virtual servers by changing the identity number and processors, so each virtual system can run its operating system in an isolated manner, while each sub-server knows the identity of the central server. This increases performance and reduces operating cost by deploying the main server's resources as sub-server resources. It is beneficial for virtual migration, reducing energy consumption, reducing infrastructure costs, and so on.
6. Data Virtualization: This is the kind of virtualization in which data is collected from various sources and managed in a single place, without users needing to know technical details such as how the data is collected, stored, and formatted. The data is arranged logically so that its virtual view can be accessed remotely by interested people, stakeholders, and users through various cloud services. Many large companies provide such services, for example Oracle, IBM, AtScale, and CData.
8. a) Explain in detail the underlying principles of parallel and distributed computing. Tabulate the differences between these computing models.
Parallel computing, also known as parallel processing, speeds up a computational task by dividing it into smaller jobs across multiple processors inside one computer. Distributed computing, on the other hand, uses a distributed system, such as the internet, to increase the available computing power and enable larger, more complex tasks to be executed across multiple machines.
Parallel computing is the process of performing computational tasks across multiple processors
at once to improve computing speed and efficiency. It divides tasks into sub-tasks and executes
them simultaneously through different processors.
There are three main types, or “levels,” of parallel computing: bit, instruction, and task.
• Bit-level parallelism: Uses larger “words,” which is a fixed-sized piece of data handled as a
unit by the instruction set or the hardware of the processor, to reduce the number of
instructions the processor needs to perform an operation.
• Instruction-level parallelism: Employs a stream of instructions to allow processors to
execute more than one instruction per clock cycle (the oscillation between high and low
states within a digital circuit).
• Task-level parallelism: Runs computer code across multiple processors so that multiple tasks run at the same time on the same data (see the sketch after this list).
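As a small illustration of task-level parallelism (not from the source; the prime-counting function and inputs are invented), the following Python sketch uses a process pool to run the same CPU-bound task on several inputs across multiple processors at once:

```python
from multiprocessing import Pool

def count_primes(limit: int) -> int:
    # Deliberately CPU-bound work so parallelism actually helps.
    return sum(1 for n in range(2, limit)
               if all(n % d for d in range(2, int(n ** 0.5) + 1)))

if __name__ == "__main__":
    limits = [50_000, 60_000, 70_000, 80_000]
    with Pool() as pool:                      # one worker per CPU core by default
        results = pool.map(count_primes, limits)  # tasks run simultaneously
    print(dict(zip(limits, results)))
```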
Distributed computing is the process of connecting multiple computers via a local network or
wide area network so that they can act together as a single ultra-powerful computer capable of
performing computations that no single computer within the network would be able to perform
on its own.
Distributed computers offer two key advantages:
• Easy scalability: Just add more computers to expand the system.
• Redundancy: Since many different machines are providing the same service, that service
can keep running even if one (or more) of the computers goes down.
1. Parallel computing: involves the use of multiple processors within a single computer to work on a problem. Distributed computing: involves the use of multiple computers across a network to work together on a problem.
2. Parallel computing: if one processor fails, others can continue working. Distributed computing: if one computer fails, others can continue working.
3. Parallel computing: the computer can have shared memory. Distributed computing: each computer has its own memory.
Hypervisors
The hypervisor is the virtualization software that you install on your physical machine. It is a
software layer that acts as an intermediary between the virtual machines and the underlying
hardware or host operating system. The hypervisor coordinates access to the physical
environment so that several virtual machines have access to their own share of physical
resources.
For example, if the virtual machine requires computing resources, such as computer processing
power, the request first goes to the hypervisor. The hypervisor then passes the request to the
underlying hardware, which performs the task.
The following are the two main types of hypervisors.
Type 1 hypervisors
A type 1 hypervisor—also called a bare-metal hypervisor—runs directly on the computer
hardware. It has some operating system capabilities and is highly efficient because it interacts
directly with the physical resources.
Type 2 hypervisors
A type 2 hypervisor runs as an application on computer hardware with an existing operating
system. Use this type of hypervisor when running multiple operating systems on a single
machine.
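As a toy model only (not how a real hypervisor is implemented; all names are invented), the sketch below mirrors the mediation described above: a virtual machine's resource request goes to the hypervisor, which checks the pool of physical resources before granting the VM its share.

```python
class Hypervisor:
    """Toy model: mediates VM access to a fixed pool of physical CPUs."""
    def __init__(self, physical_cpus: int) -> None:
        self.free_cpus = physical_cpus
        self.allocations: dict[str, int] = {}

    def request_cpus(self, vm_name: str, count: int) -> bool:
        # The VM never touches hardware directly; the hypervisor
        # decides whether the shared physical pool can satisfy this.
        if count > self.free_cpus:
            return False
        self.free_cpus -= count
        self.allocations[vm_name] = self.allocations.get(vm_name, 0) + count
        return True

hv = Hypervisor(physical_cpus=8)
print(hv.request_cpus("vm-a", 4))  # True: granted from the shared pool
print(hv.request_cpus("vm-b", 6))  # False: only 4 CPUs remain
```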
a) Instruction Set Architecture Level
The basic emulation method is through code interpretation. An interpreter program interprets
the source instructions to target instructions one by one. One source instruction may require
tens or hundreds of native target instructions to perform its function. Obviously, this process is
relatively slow. For better performance, dynamic binary translation is desired. This approach
translates basic blocks of dynamic source instructions to target instructions. The basic blocks can
also be extended to program traces or super blocks to increase translation efficiency. Instruction
set emulation requires binary translation and optimization. A virtual instruction set architecture
(V-ISA) thus requires adding a processor-specific software translation layer to the compiler.
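A minimal sketch of the one-instruction-at-a-time interpretation described above (both "instruction sets" here are invented for illustration): each source instruction expands into several native target steps, which is why pure interpretation is slow and why translating whole basic blocks pays off.

```python
# Invented two-instruction "source ISA": each one expands into
# several invented "target" micro-steps when interpreted.
TRANSLATIONS = {
    "INC R1": ["load R1", "add 1", "store R1"],
    "DBL R1": ["load R1", "mul 2", "store R1"],
}

def interpret(program: list[str]) -> list[str]:
    executed = []
    for instr in program:                  # one source instruction at a time
        executed.extend(TRANSLATIONS[instr])
    return executed

print(interpret(["INC R1", "DBL R1"]))
# A dynamic binary translator would instead translate this whole
# basic block once, cache the result, and reuse it on later runs.
```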
b) Hardware Abstraction Level
Hardware-level virtualization is performed right on top of the bare hardware. On the one
hand, this approach generates a virtual hardware environment for a VM. On the other hand, the
process manages the underlying hardware through virtualization. The idea is to virtualize a
computer’s resources, such as its processors, memory, and I/O devices. The intention is to
upgrade the hardware utilization rate by multiple users concurrently. The idea was implemented
in the IBM VM/370 in the 1960s. More recently, the Xen hypervisor has been applied to
virtualize x86-based machines to run Linux or other guest OS applications. We will discuss
hardware virtualization approaches in more detail in Section 3.3.
e) User-Application Level
Virtualization at the application level virtualizes an application as a VM. On a traditional OS, an
application often runs as a process. Therefore, application-level virtualization is also known
as process-level virtualization. The most popular approach is to deploy high-level language (HLL)
VMs. In this scenario, the virtualization layer sits as an application program on top of the
operating system, and the layer exports an abstraction of a VM that can run programs written
and compiled to a particular abstract machine definition. Any program written in the HLL and
compiled for this VM will be able to run on it. The Microsoft .NET CLR and Java Virtual Machine
(JVM) are two good examples of this class of VM. Other forms of application-level virtualization
are known as application isolation, application sandboxing, or application streaming. The process
involves wrapping the application in a layer that is isolated from the host OS and other
applications.
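As a toy illustration of an HLL VM (the instruction set below is invented, not the JVM's or the CLR's): a program "compiled" to an abstract stack machine can run on any host that implements the interpreter.

```python
def run(bytecode: list[tuple]) -> int:
    """Toy stack-based abstract machine: any host that implements this
    interpreter can run any program compiled to this instruction set."""
    stack = []
    for op, *args in bytecode:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# "Compiled" form of the expression (2 + 3) * 4 for the toy VM.
program = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
print(run(program))  # 20
```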