Unit 1
● Cloud users can simply log on to the network without installing anything. They do not pay for
hardware and maintenance; the service providers pay for the physical equipment and its maintenance.
● The concept of cloud computing becomes much more understandable when one thinks about what
modern IT environments always require: the ability to add scalable capacity or extra capabilities to
their infrastructure dynamically, without investing money in new infrastructure, without training
new personnel, and without licensing new software.
● The cloud model is composed of three components.
● In general, first-generation computers were built using hard-wired circuits and vacuum tubes.
● Data were stored using paper punch cards.
1.3.1.2 Second Generation Computers
● Another general-purpose computer of this era was ENIAC (Electronic Numerical
Integrator and Computer), which was built in 1946. It was the first Turing-complete digital
computer capable of being reprogrammed to solve a full range of computing problems.
● ENIAC was composed of 18,000 thermionic valves, weighed over 60,000 pounds, and
consumed 25 kilowatts of electrical power. It was capable of performing one lakh (100,000)
calculations per second.
● Vannevar Bush wrote a visionary description of the potential uses of information
technology in his description of an automated library system called MEMEX.
● Bush introduced the concept of the MEMEX in the late 1930s as a microfilm-based device in
which an individual could store all of his books and records.
● The speed of computation never increases linearly with cost; it is proportional to the square root
of the system cost. Therefore, the faster a system becomes, the more expensive it is to increase its
speed further.
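● As a quick check of this relation (writing S for speed and C for system cost, purely for illustration), the statement can be expressed as

    S \propto \sqrt{C} \quad\Longleftrightarrow\quad C \propto S^{2}

  so doubling the speed of a system roughly quadruples its cost.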
Data-centered styles: Repository, Blackboard
Data-flow styles: Pipe and filter, Batch sequential
● The repository architectural style is the most relevant reference model in this category. It is
characterized by two main components: the central data structure, which represents the current
state of the system, and a collection of independent components, which operate on the central
data.
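● A minimal Python sketch of the repository idea (the component names and data are illustrative, not taken from any particular system): a central data structure holds the current state, and independent components operate only on that central data.

    # Central data structure representing the current state of the system.
    repository = {"readings": [1.0, 2.5, 3.5], "average": None, "alert": False}

    # Independent components that operate only on the central data.
    def averager(repo):
        repo["average"] = sum(repo["readings"]) / len(repo["readings"])

    def alerter(repo):
        repo["alert"] = repo["average"] is not None and repo["average"] > 2.0

    for component in (averager, alerter):
        component(repository)

    print(repository)  # average is ~2.33, so the alert flag is set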
● The batch sequential style is characterized by an ordered sequence of separate programs
executing one after the other. These programs are chained together: the output generated by one
program after its completion, most likely in the form of a file, is provided as input to the next
program.
● The pipe and filter style is a variation of the previous style for expressing the activity of a
software system as a sequence of data transformations. Each component of the processing chain
is called a filter, and the connection between one filter and the next is represented by a data
stream.
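● A minimal Python sketch of the pipe-and-filter idea (the filters and data are invented for illustration): each function is a filter, and the data stream between filters is modelled with generators. In a batch sequential version, each step would instead run to completion and write a file that the next program reads.

    # Each filter transforms the incoming data stream and yields a new stream.
    def strip_lines(lines):
        for line in lines:
            yield line.strip()

    def keep_errors(stream):
        for line in stream:
            if "ERROR" in line:
                yield line

    def to_upper(stream):
        for line in stream:
            yield line.upper()

    raw = ["INFO boot\n", "ERROR disk full\n", "INFO done\n"]
    pipeline = to_upper(keep_errors(strip_lines(raw)))
    print(list(pipeline))  # ['ERROR DISK FULL']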
● The rule-based architectural style is characterized by representing the abstract execution
environment as an inference engine. Programs are expressed in the form of rules or predicates
that hold true.
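● A tiny Python sketch of this style (the facts and rules are illustrative): the "inference engine" is just a loop that keeps applying condition/conclusion rules to a set of known facts until nothing new can be derived.

    # Facts known to the system and rules of the form (conditions, conclusion).
    facts = {"cpu_load_high", "night_time"}
    rules = [
        ({"cpu_load_high"}, "scale_out_candidate"),
        ({"scale_out_candidate", "night_time"}, "defer_scaling"),
    ]

    # A very small forward-chaining inference engine.
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # now also contains the two derived facts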
● The core feature of the interpreter style is the presence of an engine that is used to interpret
pseudo-code expressed in a format acceptable to the interpreter. The interpretation of
the pseudo-program constitutes the execution of the program itself.
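● A minimal Python sketch of the interpreter style (the instruction set is invented for illustration): an engine walks over a pseudo-program expressed in a format it understands and executes it.

    # A pseudo-program in a format the engine accepts: (opcode, argument) pairs.
    program = [("PUSH", 4), ("PUSH", 5), ("ADD", None), ("PRINT", None)]

    def interpret(program):
        stack = []
        for opcode, arg in program:
            if opcode == "PUSH":
                stack.append(arg)
            elif opcode == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif opcode == "PRINT":
                print(stack[-1])

    interpret(program)  # prints 9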
● The top-down style is quite representative of systems developed with imperative
programming, which leads to a divide-and-conquer approach to problem resolution.
● The object-oriented style encompasses a wide range of systems that have been designed and
implemented by leveraging the abstractions of object-oriented programming.
● The layered system style allows the design and implementation of software systems in terms of
layers, which provide a different level of abstraction of the system.
● Each layer generally operates with at most two other layers: the one that provides a lower
abstraction level and the one that provides a higher abstraction level.
● In the communicating processes architectural style, components are represented by
independent processes that leverage IPC facilities for coordination management.
● In the event systems architectural style, on the other hand, the components of the system
are loosely coupled and connected through events.
● System architectural styles cover the physical organization of components and
processes over a distributed infrastructure. They provide two fundamental reference styles:
client/server and peer-to-peer.
● The client/server model features two major components: a server and a client. These two
components interact with each other through a network connection using a given protocol. The
communication is unidirectional. The client issues a request to the server, and after processing the
request the server returns a response.
● The important operations in the client-server paradigm are request and accept (client side), and
listen and response (server side), as sketched below.
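● A minimal Python sketch of these operations using TCP sockets on the local machine (the port number and messages are arbitrary): the server listens for connections and returns a response, while the client issues a request and accepts the reply.

    import socket
    import threading

    HOST, PORT = "127.0.0.1", 50007            # arbitrary local address and port

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, PORT))
    srv.listen()                               # server side: listen for clients

    def serve_one():
        conn, _ = srv.accept()                 # wait for one incoming connection
        with conn:
            request = conn.recv(1024)
            conn.sendall(b"response to " + request)   # server side: response

    threading.Thread(target=serve_one).start()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"request")                # client side: issue the request
        print(cli.recv(1024).decode())         # client side: accept the response

    srv.close()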
● The client/server model is suitable in many-to-one scenarios.
● In general, multiple clients are interested in such services and the server must be
appropriately designed to efficiently serve requests coming from different clients. This
consideration has implications on both client design and server design.
● For the client design, there are two models: Thin client model and Fat client model.
● In the thin-client model, the load of data processing and transformation is put on the server side, and
the client has a light implementation that is mostly concerned with retrieving and returning the data
it is asked for, with no considerable further processing.
● In the fat-client model, the client component is also responsible for processing and transforming
the data before returning it to the user, whereas the server features a fairly light implementation that
is mostly concerned with the management of access to the data.
● The three major components in the client-server model are presentation, application logic, and
data storage.
● Presentation, application logic, and data maintenance can be seen as conceptual layers, which are
more appropriately called tiers.
● The mapping between the conceptual layers and their physical implementation in modules and
components allows differentiating among several types of architectures, which go under the name of
multi-tiered architectures.
● Two major classes are Two-tier architecture and Three-tier architecture.
● Two-tier architecture partitions the systems into two tiers, which are located one in the client
component and the other on the server. The client is responsible for the presentation tier by providing
a user interface. The server concentrates the application logic and the data store into a single tier.
● Three-tier architecture separates the presentation of data, the application logic, and the data storage
into three tiers. This architecture is generalized into an N-tier model in case it is necessary to further
divide the stages composing the application logic and storage tiers.
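● An illustrative Python sketch of the three-tier separation (the class and function names are invented): presentation, application logic, and data storage live in separate components, so each tier could later be deployed on its own machine.

    # Data-storage tier: only knows how to store and retrieve records.
    class UserStore:
        def __init__(self):
            self._users = {1: "alice", 2: "bob"}

        def get(self, user_id):
            return self._users.get(user_id)

    # Application-logic tier: business rules, talks only to the storage tier.
    class UserService:
        def __init__(self, store):
            self._store = store

        def greeting(self, user_id):
            name = self._store.get(user_id)
            return f"Hello, {name}!" if name else "Unknown user"

    # Presentation tier: user interface, talks only to the logic tier.
    def render(service, user_id):
        print(service.greeting(user_id))

    render(UserService(UserStore()), 1)  # Hello, alice!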
● The peer-to-peer model introduces a symmetric architecture in which all the components,
called peers, play the same role and incorporate both the client and server capabilities of the
client/server model.
● The most relevant example of peer-to-peer systems is constituted by file sharing
applications such as Gnutella, BitTorrent, and Kazaa.
1.4.4 Models for inter-process communication
● There are several different models in which processes can interact with each other; these
map to different abstractions for IPC. Among the most relevant models are shared memory,
remote procedure call (RPC), and message passing.
● Message passing introduces the concept of a message as the main abstraction of the model.
The entities exchanging information explicitly encode the data to be exchanged in the form of a
message. The structure and the content of a message vary according to the model. A well-known
example of this model is the Message-Passing Interface (MPI); OpenMP, by contrast, follows the
shared memory model.
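● A minimal sketch of message passing between two processes using Python's standard multiprocessing module (MPI itself would look similar but requires an extra library such as mpi4py): data is exchanged only by explicitly sending and receiving messages, never through shared variables.

    from multiprocessing import Process, Queue

    def worker(inbox, outbox):
        msg = inbox.get()                        # explicitly receive a message
        outbox.put({"reply": msg["data"] * 2})   # explicitly send a reply message

    if __name__ == "__main__":
        to_worker, from_worker = Queue(), Queue()
        p = Process(target=worker, args=(to_worker, from_worker))
        p.start()
        to_worker.put({"data": 21})              # the message encodes the data to exchange
        print(from_worker.get())                 # {'reply': 42}
        p.join()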
● The remote procedure call (RPC) paradigm extends the concept of a procedure call beyond the
boundaries of a single process, thus triggering the execution of code in remote processes.
In this case, an underlying client/server architecture is implied: a remote process hosts a server
component, thus allowing client processes to request the invocation of methods, and returns the
result of the execution.
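● A minimal sketch of the RPC idea using Python's built-in xmlrpc modules (the port number and procedure name are arbitrary; for brevity both ends run in one script, with the server in a background thread): the server exposes a procedure and the client invokes it as if it were a local call.

    import threading
    from xmlrpc.client import ServerProxy
    from xmlrpc.server import SimpleXMLRPCServer

    def add(a, b):                         # procedure hosted by the server component
        return a + b

    server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
    server.register_function(add, "add")
    threading.Thread(target=server.serve_forever, daemon=True).start()

    proxy = ServerProxy("http://127.0.0.1:8000")
    print(proxy.add(2, 3))                 # 5 -- executed remotely, result returned to the client
    server.shutdown()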
1.4.5 Models for message-based communication
Point-to-point message model
● This model organizes the communication among single components. Each message is sent from
one component to another, and direct addressing is used to identify the message receiver. In a
point-to-point communication model it is necessary to know the location of, or how to address,
the other component in the system.
● SOA encompasses a set of design principles that structure system development and provide
means for integrating components into a coherent and decentralized system.
● SOA-based computing packages functionalities into a set of interoperable services, which
can be integrated into different software systems belonging to separate business domains.
● There are two major roles within SOA: the service provider and the service consumer.
1.5 Cloud Characteristics
From cloud computing’s various definitions, a certain set of key characteristics emerges. Figure
1.15 illustrates the key characteristics of the cloud computing paradigm.
1.5.1 On-demand Provisioning
● On-demand provisioning is the single most important characteristic of cloud computing: it allows
users to request or release resources whenever they want.
● These demands are thereafter automatically granted by a cloud provider’s service and the users
are only charged for their usage, i.e., the time they were in possession of the resources.
● The reactivity of a cloud solution with regard to resource provisioning is of prime
importance, as it is closely related to the cloud’s pay-as-you-go business model.
● It is one of the most important and valuable features of cloud computing, as the user can
continuously monitor server uptime, capabilities, allotted network storage, and the computing
capabilities themselves.
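● Purely as an illustration of what a programmatic on-demand request might look like (assuming the AWS SDK for Python, boto3, is installed and credentials are configured; the AMI ID below is a placeholder, not a real image):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Request a resource on demand.
    result = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder image ID
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = result["Instances"][0]["InstanceId"]

    # ... use the instance; billing covers only the time it is held ...

    # Release the resource when it is no longer needed.
    ec2.terminate_instances(InstanceIds=[instance_id])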
1.5.2 Universal Access
● Resources in the cloud need not only be provisioned rapidly but also accessed and managed
universally, using standard Internet protocols, typically via RESTful web services.
● This enables users to access their cloud resources using any type of device, provided
they have an Internet connection.
● Universal access is a key feature behind the cloud’s widespread adoption, not only by
professional actors but also by the general public, which is nowadays familiar with cloud-based
solutions such as cloud storage or media streaming.
● Capabilities are available over the network and accessed through standard mechanisms that
promote use by heterogeneous thin or thick client platforms such as mobile phones,
tablets, laptops, and workstations.
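● As a sketch of what such standard mechanisms look like in practice (the URL and token are hypothetical, and the requests library is assumed to be installed), any device with an Internet connection can reach the same resource through a RESTful HTTP call:

    import requests

    # Hypothetical cloud storage endpoint and credential, for illustration only.
    url = "https://api.example-cloud.com/v1/buckets/photos/objects"
    headers = {"Authorization": "Bearer <access-token>"}

    response = requests.get(url, headers=headers, timeout=10)
    if response.ok:
        for obj in response.json():
            print(obj["name"])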
1.5.5 Multitenancy
● Like the grid before it, the cloud shares its resources among different simultaneous users.
In the grid, however, users had to reserve a fixed number of physical machines in advance for a
fixed amount of time.
● In virtualized data centers, a user’s provisioned resources no longer correspond to the
physical infrastructure and can be dispatched over multiple physical machines.
● They can also run alongside other users’ provisioned resources thus requiring a lesser amount
of physical resources. Consequently, important energy savings can be made by shutting down the
unused resources or putting them in energy saving mode.
1.5.6 Resource pooling
● The provider’s computing resources are pooled to serve multiple consumers using a multi-
tenant model, with different physical and virtual resources dynamically assigned and
reassigned according to consumer demand.
● There is a sense of location independence in that the customer generally has no control or
knowledge over the exact location of the provided resources but may be able to specify
location at a higher level of abstraction (e.g., country, state, or datacenter).
● Examples of resources include storage, processing, memory, and network bandwidth.
1.5.7 Rapid elasticity and Scalability
● Elasticity is the ability of a system to include and exclude resources such as CPU cores,
memory, and virtual machine and container instances in order to adapt to load variation in real time.
● Elasticity is a dynamic property of cloud computing. There are two types of elasticity:
horizontal and vertical.
● Horizontal elasticity consists in adding or removing instances of computing resources
associated with an application.
● Vertical elasticity consists in increasing or decreasing characteristics of computing
resources, such as CPU time, cores, memory, and network bandwidth.
● Other terms, such as scalability and efficiency, are associated with elasticity; although they
are sometimes used interchangeably, their meaning is different from that of elasticity.
● Scalability is the ability of the system to sustain increasing workloads by making use of
additional resources. It is time independent: it is similar to the provisioning state in elasticity,
but time has no effect on the system (it is a static property).
● The following equations summarize the elasticity concept in cloud computing:
Auto scaling = Scalability + Automation
Elasticity = Auto scaling + Optimization
● This means that elasticity is built on top of scalability. It can be considered an
automation of the concept of scalability; in addition, it aims to optimize, as well and as quickly as
possible, the resources in use at a given time.
● Capabilities can be elastically provisioned and released, in some cases automatically, to scale
rapidly outward and inward commensurate with demand.
● To the consumer, the capabilities available for provisioning often appear to be unlimited and can
be appropriated in any quantity at any time.
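● A simplified sketch of the "Auto scaling = Scalability + Automation" idea above: a control loop watches a load metric and adds or removes instances when thresholds are crossed (the thresholds, the metric, and the scale-out/scale-in functions are illustrative stand-ins for a real cloud API).

    import random
    import time

    instances = 2                            # currently provisioned instances

    def current_load():
        return random.uniform(0.0, 1.0)      # stand-in for a real monitoring metric

    def scale_out():
        global instances
        instances += 1                       # stand-in for provisioning a new instance

    def scale_in():
        global instances
        instances = max(1, instances - 1)    # stand-in for releasing an instance

    # Horizontal elasticity: automated scaling decisions based on observed load.
    for _ in range(5):
        load = current_load()
        if load > 0.8:
            scale_out()
        elif load < 0.3 and instances > 1:
            scale_in()
        print(f"load={load:.2f} instances={instances}")
        time.sleep(0.1)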
1.5.8 Easy Maintenance
● The servers are easily maintained, and the downtime is very low; in some cases there is even
no downtime.
● Cloud platforms are updated regularly and gradually improve over time. The updates are more
compatible with the devices, perform faster than older versions, and come with bug fixes.
1.5.9 High Availability
● The capabilities of the cloud can be modified according to use and can be extended considerably.
The cloud analyzes storage usage and allows the user to buy extra cloud storage, if needed, for a
very small amount.
1.5.10 Security
● Cloud security is one of the best features of cloud computing. The cloud creates snapshots of the
data stored so that the data is not lost even if one of the servers gets damaged.
● The data is stored within storage devices that cannot easily be hacked or utilized by any
other person. The storage service is quick and reliable.