
U19ADS501 - Cloud Computing


UNIT I HISTORY OF CLOUD COMPUTING 9


Overview of Distributed Computing, Cluster computing, Grid computing. Technologies for
Network based systems- System models for Distributed and cloud computing- Software
environments for distributed systems and clouds.
UNIT II CLOUD COMPUTING DEPLOYMENT MODELS AND VIRTUALIZATION 9
Cloud issues and challenges - Properties - Characteristics - Deployment models. Cloud
resources: Network and API - Virtual and Physical computational resources - Data-
storage. Virtualization concepts - Types of Virtualization
UNIT III CLOUD COMPUTING SERVICES 9
Infrastructure as a Service (IaaS) - Resource Virtualization: Server, Storage, Network - Case
studies. Platform as a Service (PaaS) - Cloud platform & Management: Computation, Storage
- Case studies. Software as a Service (SaaS) - Anything as a service (XaaS).
UNIT IV CLOUD COMPUTING TOOLS 9
Overview of services - Conceptual architecture - Controller - Compute - Block Storage -
Object Storage – Networking - Environment – Security - Identity service - Image service -
Installation - Google Web Services - Amazon Web Services - Microsoft Cloud Services -
OpenStack - Introduction to OpenNebula Architecture - Introduction to Aneka.
UNIT V MANAGING AND SECURING THE CLOUD 9
Administrating the cloud – Cloud Management Products – Cloud Management Standards -
Securing the cloud – Securing Data –Establishing Identity and Presence.
U19ADS501 - Cloud Computing

COURSE OUTCOMES

At the end of the course, the student will be able to

•Describe the main concepts, key technologies, strengths, and limitations of cloud
computing and the possible applications for state-of-the-art cloud computing.
•Explain the different cloud deployment models and virtualization.
•Explain the types of services that cloud computing can provide. Apply the
appropriate cloud computing solutions and recommendations according to the
applications used.
•Describe different cloud computing tools.
•Explain about the core issues of cloud computing such as security and privacy.
U19ADS501 - Cloud Computing
Unit-1

Overview of Distributed Computing, Cluster computing, Grid computing. Technologies


for Network-based systems- System models for Distributed and cloud computing-
Software environments for distributed systems and clouds.
Cloud Computing

Cloud Computing – NIST Definition:

“A model for enabling convenient, on-demand network access to a


shared pool of configurable computing resources (e.g., networks, servers, storage,
applications, and services) that can be rapidly provisioned and released with
minimal management effort or service provider interaction”

Cloud Computing – Wiki Definition:

Cloud computing is an information technology (IT) paradigm that


enables ubiquitous access to shared pools of configurable system resources and
higher-level services that can be rapidly provisioned with minimal management
effort, often over the Internet.
Cloud Computing

Cloud computing is the delivery of computing services—servers, storage,


databases, networking, software, analytics and more—over the Internet (“the
cloud”). Companies offering these computing services are called cloud providers
and typically charge for cloud computing services based on usage, similar to how
you are billed for water or electricity at home.
- Microsoft Azure

Cloud computing, often referred to as simply “the cloud,” is the delivery of on-
demand computing resources — everything from applications to data centers —
over the internet on a pay-for-use basis.
- IBM
Overview of Distributed Computing

• Distributed computing is a field of computer science that studies distributed


systems. A distributed system is a system whose components are located on
different networked computers, which then communicate and coordinate their
actions by passing messages to each other. The components interact with each
other in order to achieve a common goal. (Wiki)
• A distributed system is a networked collection of independent machines that
can collaborate remotely to achieve one goal. In contrast, distributed computing
is the cloud-based technology that enables this distributed system to operate,
collaborate, and communicate.

https://www.ibm.com/docs/en/txseries/8.2?topic=overview-what-is-distributed-computing
Overview of Distributed Computing

• A distributed computer system consists of multiple software components that


are on multiple computers, but run as a single system. The computers that are
in a distributed system can be physically close together and connected by a local
network, or they can be geographically distant and connected by a wide area
network. A distributed system can consist of any number of possible
configurations, such as mainframes, personal computers, workstations,
minicomputers, and so on. The goal of distributed computing is to make such a
network work as a single computer. (IBM)
• Distributed systems offer many benefits over centralized systems, including the
following:
• Scalability-The system can easily be expanded by adding more machines as needed.
• Redundancy-Several machines can provide the same services, so if one is
unavailable, work does not stop.
Overview of Distributed Computing

• The client/server model


• A common way of organizing software to run on distributed systems is to
separate functions into two parts: clients and servers.
• A client is a program that uses services that other programs provide.
• The programs that provide the services are called servers.
• The client makes a request for a service, and a server performs that service.

A common design of client/server systems uses three tiers:


• A client that interacts with the user
• An application server that contains the business logic of the application
• A resource manager that stores data
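A minimal three-tier sketch in Python (an assumed illustration, not part of the original material; the item names, port, and data are made up): an in-memory dictionary stands in for the resource manager, a small HTTP handler holds the business logic, and a client issues the request on behalf of the user.

import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Tier 3: resource manager (an in-memory dict standing in for a database)
INVENTORY = {"widget": 12, "gadget": 0}

# Tier 2: application server holding the business logic
class AppServer(BaseHTTPRequestHandler):
    def do_GET(self):
        item = self.path.lstrip("/")
        available = INVENTORY.get(item, 0) > 0          # business rule: in stock?
        body = json.dumps({"item": item, "available": available}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):                       # silence default request logging
        pass

server = HTTPServer(("localhost", 8080), AppServer)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Tier 1: client that interacts with the user (here it simply prints the result)
with urlopen("http://localhost:8080/widget") as resp:
    print(resp.read().decode())      # {"item": "widget", "available": true}

server.shutdown()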
Overview of Distributed Computing

Three-tiered client/server architecture


Cluster Computing

Cluster computing
•Cluster computing is a type of computing where a group of computers or nodes
are interconnected through Local Area Network (LAN) so that they may function
as one unit.
•Some critical cluster computing applications include Earthquake Simulation,
Petroleum Reservoir Simulation, Weather Forecasting, and Google Search Engine.
Classification of Clusters
•High-Availability Clusters (HA), High-Performance-Computing Clusters (HPC), and
Load-Balancing Clusters(LB).
•A High-Availability (HA) cluster is a configuration in which several computers work
hand in hand so that if one machine fails, its requests are redirected
to other computers in the network. This results in zero downtime, and it
is for this reason that High-Availability clusters are also known as Failover Clusters.
Cluster Computing

• High-Performance Computing (HPC) clusters are groups of computers that
work hand in hand to offer greater processing power, more storage, higher speeds,
and support for larger datasets.
• One of the most popular techniques of transferring data between the
computers in a cluster is the MPI (Message-Passing Interface).
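As a hedged sketch of message passing between cluster nodes, the following uses mpi4py (a Python binding for MPI; the choice of binding and the toy payload are assumptions, since the slide names MPI only in general terms). It would typically be launched with something like: mpiexec -n 2 python mpi_demo.py

from mpi4py import MPI

comm = MPI.COMM_WORLD          # all processes taking part in the cluster job
rank = comm.Get_rank()         # this process's id

if rank == 0:
    comm.send({"payload": [1, 2, 3]}, dest=1, tag=0)   # node 0 sends a message
elif rank == 1:
    data = comm.recv(source=0, tag=0)                  # node 1 receives it
    print("received:", data)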
Grid Computing

• Grid Computing can be defined as a network of computers working together
to perform a task that would be difficult for a single machine. All
machines on that network work under the same protocol to act as a virtual
supercomputer.
• Grid Computing is a subset of distributed computing, where a virtual
supercomputer comprises machines on a network connected by some bus,
mostly Ethernet or sometimes the Internet.
Working:
A Grid computing network mainly consists of these three types of machines
Control Node:
A computer, usually a server or a group of servers which administrates the
whole network and keeps the account of the resources in the network pool.
Provider:
A computer that contributes its resources to the network resource pool.
User:
The computer that uses the resources on the network.
Grid Computing

Grid Computing

Advantages of Grid Computing:


•It is not centralized, as there are no servers required, except the control node,
which is used only for controlling and not for processing.
•Multiple heterogeneous machines, i.e., machines with different operating
systems, can use a single grid computing network.
•Tasks can be performed in parallel across various physical locations, and the users
don't have to pay for them (with money).
Disadvantages of Grid Computing:
•The software of the grid is still in the evolution stage.
•A super-fast interconnect between computing resources is the need of the hour.
•Licensing across many servers may make it prohibitive for some applications.
•Many groups are reluctant (unwilling) to share resources.
Cluster vs Grid Computing

•Cluster: nodes must be homogeneous, i.e., they should have the same type of hardware and operating system. Grid: nodes may have different operating systems and hardware; machines can be homogeneous or heterogeneous.
•Cluster: computers are dedicated to the same work and perform no other task. Grid: computers contribute their unused processing resources to the grid computing network.
•Cluster: computers are located close to each other. Grid: computers may be located at a huge distance from one another.
•Cluster: computers are connected by a high-speed local area network. Grid: computers are connected using a low-speed bus or the Internet.
•Cluster: computers are connected in a centralized network topology. Grid: computers are connected in a distributed or decentralized network topology.
•Cluster: the whole system has a centralized resource manager. Grid: every node manages its resources independently.
•Cluster: the whole system functions as a single system. Grid: every node is autonomous, and anyone can opt out anytime.
•Cluster computing is used in areas such as WebLogic Application Servers, databases, etc. Grid computing is used in areas such as predictive modeling, automation, simulations, etc.
Cloud vs Grid Computing

1. Cloud computing follows a client-server computing architecture, while grid computing follows a distributed computing architecture.
2. Cloud computing is managed in a centralized manner, while grid computing is decentralized.
3. In cloud computing, resources are used in a centralized pattern, while in grid computing resources are used in a collaborative pattern.
4. Cloud computing is more flexible than grid computing.
5. In cloud computing, the users pay for what they use, while in grid computing the users do not pay for use.
6. Cloud computing is a highly accessible service, while grid computing is a less accessible service.
7. Cloud computing can be accessed through standard web protocols, while grid computing is accessed through grid middleware.
8. Cloud computing is service-oriented, while grid computing is application-oriented.
Technologies for network-based systems
Multicore CPUs and Multithreading Technologies
 In Figure below, processor speed is measured in millions of instructions per second
(MIPS) and network bandwidth is measured in megabits per second (Mbps) or
gigabits per second (Gbps). The unit GE refers to 1 Gbps Ethernet bandwidth.

 Each core is essentially a processor with its own private cache (L1 cache). Multiple
cores are housed in the same chip with an L2 cache that is shared by all cores.
 In the future, multiple CMPs (chip multiprocessors) could be built on the same CPU
chip, with even the L3 cache on the chip. Multicore and multithreading technologies are
found in many high-end processors, including the Intel i7 and Xeon, AMD
Opteron, Sun Niagara, IBM Power 6, and the X cell processors.

Technologies for Network-based Systems

33 year Improvement in Processor and Network Technologies


Technologies for network-based systems
Multithreading Technology
The figure below shows the dispatch of five independent threads of instructions to
four pipelined data paths (functional units) in each of the following five processor
categories, from left to right: a four-issue superscalar processor, a fine-grain
multithreaded processor, a coarse-grain multithreaded processor, a two-core CMP,
and a simultaneous multithreaded (SMT) processor.

Each row represents the issue slots for a single execution cycle:
•A filled box indicates that the processor found an instruction to execute in that issue slot on that cycle.
•An empty box denotes an unused slot.
Technologies for network-based systems
Multithreading Processors
• Four-issue superscalar processor (e.g., Sun UltraSPARC I)
– Implements instruction level parallelism (ILP) within a single
processor.
– Executes more than one instruction during a clock cycle by
sending multiple instructions to redundant functional units.
• Fine-grain multithreaded processor
– Switch threads after each cycle
– Interleave instruction execution
– If one thread stalls, others are executed
• Coarse-grain multithreaded processor
– Executes a single thread until it reaches certain situations
• Simultaneous multithreaded (SMT) processor
– Instructions from more than one thread can execute in any given
pipeline stage at a time.

Technologies for network-based systems
GPU Computing
 A GPU is a graphics coprocessor or accelerator mounted on a computer’s graphics card
or video card. A GPU offloads the CPU from tedious graphics tasks in video editing
applications. The world’s first GPU, the GeForce 256, was marketed by NVIDIA in 1999.
 Early GPUs functioned as coprocessors attached to the CPU. Today, the NVIDIA GPU
has been upgraded to 128 cores on a single chip. Furthermore, each core on a GPU can
handle eight threads of instructions.
 GPUs are designed to handle large numbers of floating-point operations in parallel.
 Conventional GPUs are widely used in mobile phones, game consoles, embedded
systems, PCs, and servers.

Technologies for network-based systems
Memory and Storage Technology
 The upper curve in Figure below plots the growth of DRAM chip capacity from 16
KB in 1976 to 64 GB in 2011.
 For hard drives, capacity increased from 260 MB in 1981 to 250 GB in 2004. The
Seagate Barracuda XT hard drive reached 3 TB in 2011.
 A storage area network (SAN) connects servers to network storage such as disk
arrays.
 Network attached storage (NAS) connects client hosts directly to the disk arrays.

Virtual Machines and Virtualization Middleware
Virtual Machines
 Virtual machines (VMs) offer novel solutions to underutilized resources,
application inflexibility, software manageability, and security concerns in existing
physical machines.
 Figure below illustrates the architectures of three VM configurations.

A virtual machine (VM) is the virtualization/emulation of a computer system. Virtual


machines are based on computer architectures and provide functionality of a physical
computer. Their implementations may involve specialized hardware, software, or a
combination.
Virtual Machines and Virtualization Middleware
• Benefits
– Cross platform compatibility
– Increase Security
– Enhance Performance
– Simplify software migration
• Virtual software placed between underlying machine and conventional
software
– Conventional software sees different ISA from the one supported by
the hardware
• Virtualization process involves:
– Mapping of virtual resources (registers and memory) to real hardware
resources
– Using real machine instructions to carry out the actions specified by
the virtual machine instructions

Virtual Machines and Virtualization Middleware
VM Primitive Operations
Low-level VMM operations are illustrated in Figure below.
 First, the VMs can be multiplexed between hardware machines, as shown in Figure
(a).
 Second, a VM can be suspended and stored in stable storage, as shown in Figure
(b).
 Third, a suspended VM can be resumed or provisioned to a new hardware
platform, as shown in Figure (c).
 Finally, a VM can be migrated from one hardware platform to another, as shown in
Figure (d).
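A hedged sketch of these primitive operations using the libvirt Python bindings (my own choice of tooling; the connection URIs, VM name, and file path are hypothetical, and the multiplexing in Figure (a) is handled by the hypervisor itself rather than shown here).

import libvirt

conn = libvirt.open("qemu:///system")            # connect to a local hypervisor (assumed URI)
dom = conn.lookupByName("demo-vm")               # an existing VM named "demo-vm" (assumed)

dom.suspend()                                    # pause the running VM
dom.save("/var/tmp/demo-vm.state")               # suspend to stable storage (Figure b)
conn.restore("/var/tmp/demo-vm.state")           # resume/provision it again (Figure c)

# Live-migrate the running VM to another hardware platform (Figure d)
dest = libvirt.open("qemu+ssh://other-host/system")   # assumed destination host
dom = conn.lookupByName("demo-vm")
dom.migrate(dest, libvirt.VIR_MIGRATE_LIVE, None, None, 0)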

Data Center Virtualization for Cloud Computing
• Data center virtualization typically uses virtualization software along with cloud
computing technology to replace traditional physical servers and other equipment
housed in a physical data center.
• A data center that uses virtualization, sometimes referred to as
a Software-Defined Data Center (SDDC), allows organizations to control their entire
IT framework as a singular entity, often from a central interface.
• The approach can trim capital and operational costs; improve agility, flexibility and
efficiency; save IT staff time and resources; and allow businesses to focus on core
business and IT issues.
• Research firm MarketersMedia reports that the global data center virtualization
market is projected to grow by 8 percent annually from 2017 through 2023. That
would make data center virtualization a U.S. $10 billion market.
• Virtualization of a data center and all its hardware components—including servers,
storage, and appliances—isn’t a new concept (it dates back to the 1960s).
• But now, advances in cloud computing, software and other components have
made the concept viable and even desirable.

Data Center Virtualization for Cloud Computing
Data Center Disaster Recovery
• Disaster recovery and business continuity-the benefits:
• Faster backups and recovery. In many cases, it's possible to accomplish the task in
hours rather than days when data is virtualized.
• Better visibility into assets. A well-designed virtualized environment with the right
tools can aid in identifying and managing documents, files and other data.
• Failover is simplified. If it’s necessary to switch to a redundant system or go back
to a known working state, virtualization can help by speeding recovery time. It can
also provide a platform for testing systems before moving software back into a
production state.
• A smaller footprint. Fewer servers, storage systems, and other devices
translate directly into lower costs for disaster recovery.

Data Center Virtualization for Cloud Computing
Data Center Virtualization Products

• VMware vSphere, VMware NSX, VMware vSAN


• Red Hat JBoss Enterprise Data Services: The open source leader is a well
respected name in the enterprise data center.
• IBM InfoSphere Information Server: Arguably the leading legacy IT name, IBM
certainly knows data center virtualization.
• NEC Nblock: NEC sells a solution that provides the building blocks to construct a
virtual data center.
• CDW Software Defined Data Center: The CDW solution aims to offer flexibility and
scalability that covers the complete data center.

System models for distributed and cloud
computing
In Table below, massive systems are classified into four groups: clusters, P2P
networks, computing grids, and Internet clouds over huge data centers.

1. Clusters of cooperative computers
A computing cluster consists of interconnected stand-alone computers which work cooperatively
as a single integrated computing resource.
Cluster Architecture
 Figure below shows the architecture of a typical server cluster built around a low-latency,
high bandwidth interconnection network. This network can be as simple as a SAN (e.g.,
Myrinet) or a LAN (e.g., Ethernet).
 To build a larger cluster with more nodes, the interconnection network can be built with
multiple levels of Gigabit Ethernet, Myrinet, or InfiniBand switches.

1. Clusters of cooperative computers
Single-System Image
 An SSI is an illusion created by software or hardware that presents a collection of
resources as one integrated, powerful resource. SSI makes the cluster appear like a
single machine to the user.
 Cluster designers desire cluster middleware to support SSI at various levels, including
the sharing of CPUs, memory, and I/O across all cluster nodes.
 A cluster with multiple system images is nothing but a collection of independent
computers.
Hardware, Software, and Middleware Support
 Clusters exploring massive parallelism are commonly known as MPPs.
 The building blocks are computer nodes (PCs, workstations, servers, or SMP), special
communication software such as PVM or MPI, and a network interface card in each
computer node.
 Most clusters run under the Linux OS. The computer nodes are interconnected by a
high-bandwidth network (such as Gigabit Ethernet, Myrinet, InfiniBand, etc.).
 Special cluster middleware supports are needed to create SSI or high availability (HA).
 Both sequential and parallel applications can run on the cluster, and special parallel
environments are needed to facilitate use of the cluster resources.

1. Clusters of cooperative computers
Major Cluster Design Issues
 Middleware or OS extensions were developed at the user space to achieve SSI at
selected functional levels.
 Without this middleware, cluster nodes cannot work together effectively to
achieve cooperative computing.
 The software environments and applications must rely on the middleware to
achieve high performance.

2. Grid computing infrastructures
 Internet services such as the Telnet command enable a local computer to connect
to a remote computer.
 A web service such as HTTP enables remote access to remote web pages.
 Grid computing is envisioned to allow close interaction among applications
running on distant computers simultaneously.
Computational Grids
 A computing grid offers an infrastructure that couples computers,
software/middleware, special instruments, and people and sensors together.
 The grid is often constructed across LAN, WAN, or Internet backbone networks at
a regional, national, or global scale.
 The computers used in a grid are primarily workstations, servers, clusters, and
supercomputers. Personal computers, laptops, and PDAs can be used as access
devices to a grid system.
 The NSF TeraGrid in the US, EGEE in Europe, and ChinaGrid in China were built for various
distributed scientific grid applications.

2. Grid computing infrastructures
• Figure below shows an example computational grid built over multiple resource sites owned
by different organizations.

Grid Families
National grid projects are followed by industrial grid platform development by
IBM, Microsoft, Sun, HP, Dell, Cisco, EMC, Platform Computing, and others.
New grid service providers (GSPs) and new grid applications have emerged
rapidly, similar to the growth of Internet and web services in the past two
decades.

3. Peer-to-Peer (P2P) Network

• A distributed system architecture


• Each computer in the network can act as a client or server for other
network computers.
• No centralized control
• Typically many nodes, but unreliable and heterogeneous
• Nodes are symmetric in function
• Take advantage of distributed, shared resources (bandwidth, CPU, storage)
on peer-nodes
• Fault-tolerant, self-organizing
• Operate in dynamic environment, frequent join and leave is the norm
3. Peer-to-Peer (P2P) Network

Overlay network - computer network built on top of another network.


•Nodes in the overlay can be thought of as being connected by virtual or logical links,
each of which corresponds to a path, perhaps through many physical links, in the
underlying network.
•For example, distributed systems such as cloud computing, peer-to-peer networks,
and client-server applications are overlay networks because their nodes run on top of
the Internet.
3. Peer-to-Peer (P2P) Network



4. The Cloud
• Historical roots in today’s
Internet apps
• Search, email, social networks
• File storage (Live Mesh, MobileMe, Flickr, …)
• A cloud infrastructure provides a framework to manage
scalable, reliable, on-demand access to applications
• A cloud is the “invisible” backend to many of our mobile
applications
• A model of computation and data storage based on “pay
as you go” access to “unlimited” remote data center
capabilities



Basic Concept of Internet Clouds
• Cloud computing is the use of computing resources (hardware and software)
that are delivered as a service over a network (typically the Internet).
• The name comes from the use of a cloud-shaped symbol as an abstraction for
the complex infrastructure it contains in system diagrams.
• Cloud computing entrusts remote services with a user's data, software and
computation.



The Next Revolution in IT - Cloud Computing

• Classical Computing
– Buy & Own: hardware, system software, applications, often to meet peak needs
– Install, Configure, Test, Verify, Evaluate
– Manage
– …
– Finally, use it
– $$$$....$ (high CapEx)
– Every 18 months?
• Cloud Computing
– Subscribe
– Use
– $ - pay for what you use, based on QoS
(Courtesy of Raj Buyya, 2012)



Software Environments For Distributed Systems & Clouds
1. Service-oriented architecture (SOA)
• SOA is an evolution of distributed computing based on the
request/reply design paradigm for synchronous and
asynchronous applications.
• An application's business logic or individual functions are
modularized and presented as services for consumer/client
applications.
• Key to these services - their loosely coupled nature;
– i.e., the service interface is independent of the implementation.
• Application developers or system integrators can build
applications by composing one or more services without
knowing the services' underlying implementations.
– For example, a service can be implemented either in .Net or J2EE, and
the application consuming the service can be on a different platform or
language.
SOA key characteristics:
• SOA services have self-describing interfaces in platform-independent XML
documents.
– Web Services Description Language (WSDL) is the standard used to describe the services.
• SOA services communicate with messages formally defined via XML Schema (also
called XSD).
– Communication among consumers and providers or services typically happens in
heterogeneous environments, with little or no knowledge about the provider.
– Messages between services can be viewed as key business documents processed in an
enterprise.
• SOA services are maintained in the enterprise by a registry that acts as a directory
listing.
– Applications can look up the services in the registry and invoke the service.
– Universal Description, Definition, and Integration (UDDI) is the standard used for service
registry.
• Each SOA service has a quality of service (QoS) associated with it.
– Some of the key QoS elements are security requirements, such as authentication and
authorization, reliable messaging, and policies regarding who can invoke services.
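As a hedged illustration of consuming such a self-describing service from Python, the sketch below uses the zeep SOAP library; the WSDL URL and operation name are hypothetical placeholders, not a real service.

from zeep import Client

# zeep downloads and parses the WSDL (the platform-independent interface description),
# so the consumer never needs to know whether the provider is implemented in .NET or J2EE.
client = Client("http://services.example.com/StockQuote?wsdl")   # hypothetical WSDL
result = client.service.GetLastTradePrice(tickerSymbol="ACME")   # hypothetical operation
print(result)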
Scenario: Online Shopping Website and a
Mobile App
Context: You have an online shopping website (let's call it "ShopEase") that maintains its own
product catalog, including details like product names, descriptions, prices, and stock availability.
There’s also a mobile app (let’s call it "ShopEase Mobile") that users can use to browse products
and make purchases.

How Web Services Facilitate This:


1.ShopEase Website Web Service:
– Purpose: The ShopEase website exposes a web service that allows external
applications to interact with its product catalog and order processing system.
– Functionality: This web service provides endpoints to:
• Retrieve product details (e.g., name, description, price).
• Check stock availability.
• Place orders.
• Get order status updates.
2. Web Service Example:
•Endpoint: https://api.shopease.com/products
•Request to Get Product Details:
– Request URL: https://api.shopease.com/products?productId=12345
– Request Method: GET
– Response: JSON or XML containing product details like:
{
"productId": "12345",
"name": "Wireless Mouse",
"description": "Ergonomic wireless mouse",
"price": 29.99,
"stock": 100
}
3. ShopEase Mobile App:
•Purpose: The mobile app needs to display product information to users and handle
purchases.
•Interaction with Web Service:
– When a user opens the app and searches for products, the app makes a request to the
ShopEase web service to get product details.
– Example Request: The app sends a GET request to
https://api.shopease.com/products?productId=12345 to fetch details for a specific product.
– Example Response: The app receives the product details in JSON format and displays
them to the user.
4. Placing an Order:
•When a user decides to purchase a product, the app sends a POST request to the
ShopEase web service to place an order.
•Request URL: https://api.shopease.com/orders
•Request Body: JSON or XML with order details:
•{ "userId": "user789", "productId": "12345", "quantity": 1, "paymentMethod":
"credit_card" }
•Response: Confirmation of the order, including an order ID and estimated delivery
time.

5. Order Status:
•The app can periodically check the status of an order by querying the web service.
•Request URL: https://api.shopease.com/orders/status?orderId=98765
•Response: JSON or XML with order status information.
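A minimal client-side sketch of these calls using the Python requests library (the endpoints mirror the hypothetical ShopEase URLs above; response field names such as "orderId", "name", and "price" are assumptions based on the example responses).

import requests

BASE = "https://api.shopease.com"    # hypothetical service from the scenario

# Steps 2/3: fetch product details
product = requests.get(f"{BASE}/products", params={"productId": "12345"}).json()
print(product["name"], product["price"])

# Step 4: place an order
order = requests.post(f"{BASE}/orders", json={
    "userId": "user789", "productId": "12345",
    "quantity": 1, "paymentMethod": "credit_card",
}).json()

# Step 5: poll the order status (assumes the order response contains "orderId")
status = requests.get(f"{BASE}/orders/status",
                      params={"orderId": order["orderId"]}).json()
print(status)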
Layered Architecture for Web Services
Service-Oriented Architecture
(Figure: a service-oriented grid architecture in which grid service consumers - applications, programming environments, Grid Explorer, Schedule Advisor, Trade Manager, and the Grid Resource Broker - interact through core middleware services (sign-on, job control, trading, accounting, QoS and pricing algorithms, resource reservation, resource allocation, deployment agent, JobExec, storage, and misc. services) with grid service providers such as the Grid Market Information Service, Data Catalogue, Grid Bank, Health Monitor, Trade Server, and Grid Nodes 1…N offering resources R1, R2, …, Rm.)
Different Types of SOA’s
2. Distributed Operating Systems
• A distributed operating system is a model in which distributed applications
run on multiple computers linked by a communication network. A
distributed operating system is an extension of the network operating system
that supports higher levels of communication and integration of the machines
on the network.
• This system looks to its users like an ordinary centralized operating system but
runs on multiple, independent central processing units (CPUs).
• To promote resource sharing and fast communication among node machines,
it is best to have a distributed OS that manages all resources coherently and
efficiently.
Distributed systems provide the following advantages:
1. Sharing of resources
2. Reliability
3. Communication
4. Computation speedup
2. Distributed Operating Systems
• Tanenbaum identifies three approaches for distributing resource management
functions in a distributed computer system.

• The first approach is to build a network OS over a large number of


heterogeneous OS platforms. Such an OS offers the lowest transparency to
users, and is essentially a distributed file system, with independent computers
relying on file sharing as a means of communication.

• The second approach is to develop middleware to offer a limited degree of


resource sharing, similar to the MOSIX/OS developed for clustered systems.

• The third approach is to develop a truly distributed OS to achieve higher use or


system transparency.
2. Distributed Operating Systems
Given below are some examples of distributed operating systems:

1. IRIX operating system: the implementation of UNIX System V, Release 3, for
Silicon Graphics multiprocessor workstations.
2. DYNIX operating system, running on Sequent Symmetry multiprocessor
computers.
3. AIX operating system for IBM RS/6000 computers.
4. Solaris operating system for SUN multiprocessor workstations.
5. Mach/OS, a multithreading and multitasking UNIX-compatible operating
system.
6. OSF/1 operating system, developed by the Open Software Foundation; UNIX
compatible.
2. Distributed Operating Systems
3. Parallel and Distributed Programming Models
Parallel and Distributed Programming Models and Tool Sets
3. Parallel and Distributed Programming Models
a) Message-Passing Interface (MPI)
•This is the primary programming standard used to develop parallel and
concurrent programs to run on a distributed system.
•MPI is essentially a library of subprograms that can be called from C or
FORTRAN to write parallel programs running on a distributed system.
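A short hedged sketch of the same programming model from Python via mpi4py (the slides describe the C/FORTRAN bindings; the Python binding and the toy computation are assumptions). Each process computes a partial result, and MPI combines the results on rank 0.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()                     # id of this process
size = comm.Get_size()                     # number of cooperating processes

partial = sum(range(rank, 1000, size))     # each process sums its own slice of 0..999
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print("sum of 0..999 computed by", size, "processes:", total)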
b) MapReduce
•This is a web programming model for scalable data processing on large clusters
over large data sets.
•The model is applied mainly in web-scale search and cloud computing
applications.
•The user specifies a Map function to generate a set of intermediate key/value
pairs. Then the user applies a Reduce function to merge all intermediate values
with the same intermediate key.
•MapReduce is highly scalable to explore high degrees of parallelism at different
job levels.
•A typical MapReduce computation process can handle terabytes of data on tens
of thousands or more client machines
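A tiny single-machine illustration of the Map and Reduce functions described above (word count); a real MapReduce framework such as Hadoop would run the same two functions over a cluster, with the sort below standing in for the shuffle stage.

from itertools import groupby
from operator import itemgetter

def map_fn(document):
    # emit intermediate (key, value) pairs: one (word, 1) per word
    for word in document.split():
        yield (word, 1)

def reduce_fn(word, counts):
    # merge all intermediate values sharing the same intermediate key
    return (word, sum(counts))

docs = ["the cloud", "the grid and the cloud"]
intermediate = sorted(p for d in docs for p in map_fn(d))      # shuffle/sort stage
results = [reduce_fn(key, (v for _, v in group))
           for key, group in groupby(intermediate, key=itemgetter(0))]
print(results)   # [('and', 1), ('cloud', 2), ('grid', 1), ('the', 3)]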
3. Parallel and Distributed Programming Models
c) Hadoop Library
•Hadoop offers a software platform that was originally developed by a Yahoo!
group.
•The package enables users to write and run applications over vast amounts of
distributed data.
•Users can easily scale Hadoop to store and process petabytes of data in the web
space.
d) Open Grid Services Architecture (OGSA)
•OGSA was proposed as a common standard for general public use of grid services.
•Key features include a distributed execution environment, Public Key
Infrastructure (PKI) services using a local certificate authority (CA), trust
management, and security policies in grid computing.
e) Globus Toolkits and Extensions
•Globus is a middleware library jointly developed by the U.S. Argonne National
Laboratory and the USC Information Sciences Institute over the past decade.
•This library implements some of the OGSA standards for resource discovery,
allocation, and security enforcement in a grid environment.
3. Parallel and Distributed Programming Models
Grid Standards and Toolkits for Scientific and Engineering Applications
