CC_NOTES[1]
Internet Technologies:
1. Web Services: Open standards for Web services facilitated the integration of diverse applications
across different platforms, offering a standardized method for applications to communicate and share
data.
2. Service-Oriented Architectures (SOA): SOA promoted loosely coupled, protocol-independent, and
standards-based distributed computing. It described services in a standardized language, encouraging
interoperability.
3. Web 2.0: The Web 2.0 paradigm emphasized programmable aggregation of information and
services, allowing for the creation of complex applications through the combination of building
blocks and APIs.
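As a small illustration of the Web 2.0 idea of composing applications from building blocks and APIs, the Python sketch below aggregates data from two hypothetical REST endpoints (the URLs and JSON fields are assumptions, not real services):

```python
# Web 2.0-style mashup sketch: combine two web services into one building block.
# The endpoints and JSON fields below are hypothetical placeholders.
import requests

def weather_map_mashup(city):
    weather = requests.get("https://api.example-weather.test/v1/current",
                           params={"city": city}, timeout=5).json()
    geo = requests.get("https://api.example-geo.test/v1/locate",
                       params={"city": city}, timeout=5).json()
    # Aggregate the two responses into a single result for the application.
    return {"city": city,
            "temperature_c": weather.get("temp_c"),
            "coordinates": (geo.get("lat"), geo.get("lon"))}

if __name__ == "__main__":
    print(weather_map_mashup("Mumbai"))
```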
Grid Computing:
Definition: Grid computing is a distributed computing approach that aggregates resources from
different administrative domains, allowing transparent access to these resources. Its primary aim is to
enhance the performance of scientific applications, like climate modeling, drug design, and protein
analysis, by sharing computational and storage resources.
Key Aspects:
Standard Web Services: Grid computing relies on standard Web services-based protocols to discover,
access, allocate, monitor, account for, and manage distributed resources as a single virtual system.
This standardization is crucial for efficient grid operation.
OGSA: The Open Grid Services Architecture (OGSA) defines core capabilities and behaviors to
address key concerns in grid systems, promoting uniformity and interoperability.
Examples: TeraGrid and EGEE are prominent production grids that aim to accelerate a range of
scientific applications. These grids enable researchers to harness vast computational resources
distributed across different institutions.
Utility Computing:
Definition: Utility computing involves users assigning a "utility" value to their computing jobs,
which reflects their specific requirements, such as deadlines and importance. They pay a service
provider based on this utility, while providers aim to maximize their own utility, often linked to
profit. Utility computing creates a marketplace where resources are allocated based on the perceived
value of jobs.
Key Aspects:
Resource Allocation: Utility computing optimizes resource allocation by considering job-specific
utility values, ensuring that higher-value jobs receive prioritized access to resources.
Flexible Resource Management: Users can set their utility value based on their job's specific
requirements, offering flexibility in resource allocation.
Examples: Amazon Web Services (AWS) is a prime example of utility computing, where users pay
for computing resources based on their specific needs. AWS offers a variety of services that can be
scaled up or down as required, making it a cost-effective solution for many businesses.
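A minimal sketch of the utility computing idea above, assuming a single provider with limited capacity and a simple "highest utility per CPU-hour first" policy (the job fields and the policy are illustrative, not an actual AWS mechanism):

```python
# Utility-based job scheduling sketch: jobs declare a utility value and the
# provider allocates limited capacity so as to favour high-value work.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    utility: float      # value the user assigns (reflects deadline/importance)
    cpu_hours: float    # resources the job needs

def allocate(jobs, capacity_cpu_hours):
    # Greedy policy: serve jobs with the highest utility per CPU-hour first.
    scheduled, remaining = [], capacity_cpu_hours
    for job in sorted(jobs, key=lambda j: j.utility / j.cpu_hours, reverse=True):
        if job.cpu_hours <= remaining:
            scheduled.append(job.name)
            remaining -= job.cpu_hours
    return scheduled

jobs = [Job("climate-run", 90, 40), Job("report-batch", 20, 10), Job("test-suite", 5, 30)]
print(allocate(jobs, capacity_cpu_hours=50))   # higher-value jobs win the capacity
```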
Q.3 Explain Hardware Virtualization with suitable diagram.
Ans :
Hardware virtualization is a technology that allows multiple operating systems and software stacks
to run on a single physical platform. This is achieved by using a software layer known as the Virtual
Machine Monitor (VMM), or hypervisor, which mediates access to the physical hardware. The
hypervisor presents each guest operating system with a Virtual Machine (VM), which is a set of
virtual platform interfaces.
There are three basic capabilities regarding the management of workload in a virtualized system:
Isolation: Each program runs in its own isolated space (VM), enhancing security,
reliability, and performance control. Failures in one VM don't affect others.
Consolidation: Multiple different tasks run on one machine, optimizing system use and
overcoming software and hardware compatibility issues during upgrades.
Migration: Applications can be moved between machines (migration) for maintenance,
balancing loads, or disaster recovery by packaging the entire state of an operating
system into a VM.
A hardware-virtualized server hosting three virtual machines, each running a distinct operating
system and user-level software stack.
Several VMM platforms exist that form the basis of many utility or cloud computing environments.
The most notable ones are VMware, Xen, and KVM.
1. VMware ESXi: A bare-metal hypervisor from VMware that installs directly on the physical
server and provides advanced virtualization of processor, memory, and I/O.
2. Xen: An open-source hypervisor that pioneered the para-virtualization concept, allowing the
guest operating system to interact with the hypervisor for improved performance.
3. KVM (Kernel-based Virtual Machine): A Linux virtualization subsystem that is part of the
mainline Linux kernel, leveraging hardware-assisted virtualization to support unmodified
guest operating systems.
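As a sketch of how such a hypervisor can be driven programmatically, the example below uses the libvirt Python bindings to start a KVM guest from a minimal XML definition. The connection URI and the XML are placeholders, a real guest would also need disk and network devices, and a configured libvirt/KVM host is assumed:

```python
# Sketch: starting a transient KVM guest through libvirt.
# The XML is intentionally minimal (no disk/network devices) and illustrative only.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local KVM hypervisor
dom = conn.createXML(DOMAIN_XML, 0)     # create and start the transient guest
print("Started guest:", dom.name())
conn.close()
```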
Q.4 Define Autonomic computing with its properties.
Ans :
Autonomic computing is a field of research that aims to reduce human involvement in the
operation of complex computing systems. The goal is for these systems to manage
themselves, guided by high-level human input.
Autonomic, or self-managing, systems operate based on several key components:
Monitoring probes and gauges (sensors): These collect data about the system’s
operation.
Adaptation engine (autonomic manager): This computes optimizations based
on the data collected by the sensors.
Effectors: These implement changes in the system based on the optimizations
computed by the autonomic manager.
IBM’s Autonomic Computing Initiative has defined four key properties of autonomic
systems:
1. Self-Configuration: The system can configure itself based on high-level policies
provided by humans.
2. Self-Optimization: The system can optimize its own performance and resources to
achieve the best possible results.
3. Self-Healing: The system can detect and correct faults and problems without
human intervention.
4. Self-Protection: The system can anticipate and protect itself from threats and
attacks.
IBM also suggested a reference model for autonomic control loops of autonomic managers,
called MAPE-K (Monitor, Analyse, Plan, Execute—Knowledge).
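A minimal sketch of a MAPE-K control loop; the sensor values, thresholds, and effector actions are illustrative stand-ins, not part of any IBM product:

```python
# MAPE-K autonomic control loop sketch (sensor, policy, and effector are illustrative).
import random

KNOWLEDGE = {"cpu_high": 0.80, "cpu_low": 0.20}   # shared knowledge base (policies)

def monitor():                 # sensors/probes collect data about the system
    return {"cpu": random.random()}

def analyse(metrics):          # compare observations against the knowledge base
    if metrics["cpu"] > KNOWLEDGE["cpu_high"]:
        return "scale_out"
    if metrics["cpu"] < KNOWLEDGE["cpu_low"]:
        return "scale_in"
    return None

def plan(symptom):             # choose a concrete change for the detected symptom
    return {"scale_out": "+1 instance", "scale_in": "-1 instance"}.get(symptom)

def execute(change):           # effectors apply the change to the managed system
    if change:
        print("Applying change:", change)

for _ in range(3):             # one autonomic manager iteration per tick
    execute(plan(analyse(monitor())))
```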
Q.5 Explain the cloud computing stack using suitable diagram.
Ans : The cloud computing stack, often referred to as the cloud computing service
models, is a conceptual framework that categorizes cloud computing services into three
distinct layers based on the level of abstraction and the service model provided by cloud
providers.
These layers are explained as follows :
Infrastructure as a Service (IaaS):
IaaS offers virtualized computing resources like VMs, storage, and networking on
demand.
Users have control over VM configurations, and it follows a pay-as-you-go model.
Example: AWS Elastic Compute Cloud (EC2); a short provisioning sketch follows the PaaS item below.
Platform as a Service (PaaS):
PaaS abstracts the underlying infrastructure, providing a development and
deployment platform.
Developers focus on code, and it offers programming languages and frameworks.
Example: Google AppEngine.
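As an illustration of the IaaS layer above, the boto3 sketch below requests an EC2 virtual machine on demand; the AMI ID and key pair name are placeholders and AWS credentials are assumed to be configured:

```python
# IaaS sketch: provisioning a virtual machine on AWS EC2 with boto3
# (AMI ID and key name are placeholders; credentials are assumed to be set up).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",        # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",         # placeholder key pair
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)
```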
4. Storage Virtualization:
• Abstracts logical storage from physical devices, creating virtual disks
independent of hardware and location.
• Commonly used with storage area networks (SANs) and storage
controllers.
5. Interface to Public Clouds:
• Integrates resources from public clouds to meet varying workload
demands.
• Transparent use of leased resources by applications.
6. Virtual Networking:
• Creates isolated virtual networks like VLANs, enhancing network
management and security.
• Supports secure VPN connections for local and remote VMs.
7. Dynamic Resource Allocation:
• Monitors resource utilization and reallocates resources among VMs to
match supply and demand.
• Aids in energy conservation and SLA optimization (a minimal rebalancing sketch follows this list).
8. Virtual Clusters:
• Holistically manages groups of VMs for on-demand computing clusters.
• Useful for multi-tier Internet applications.
9. Reservation and Negotiation Mechanism:
• Supports advance reservations and complex leasing terms to meet resource
demands.
• Accommodates scenarios where resources are in high demand with
negotiation features.
10. High Availability and Data Recovery:
• Offers high availability features to minimize downtime through VM
failover.
• Implements redundancy and synchronized VMs for mission-critical
applications.
• Utilizes data backup mechanisms, often with incremental backups and
proxies, to protect VM images.
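A minimal sketch of the dynamic resource allocation behaviour from item 7: utilization is monitored and a VM is moved from an overloaded host to the least-loaded one (the hosts, loads, and 80% threshold are illustrative values):

```python
# Dynamic resource allocation sketch: move a VM off an overloaded host.
OVERLOAD_THRESHOLD = 0.80

hosts = {
    "host-a": {"load": 0.92, "vms": ["vm1", "vm2", "vm3"]},
    "host-b": {"load": 0.35, "vms": ["vm4"]},
}

def rebalance(hosts):
    busiest = max(hosts, key=lambda h: hosts[h]["load"])
    idlest = min(hosts, key=lambda h: hosts[h]["load"])
    if hosts[busiest]["load"] > OVERLOAD_THRESHOLD and hosts[busiest]["vms"]:
        vm = hosts[busiest]["vms"].pop()       # pick a VM to migrate
        hosts[idlest]["vms"].append(vm)        # "migrate" it to the least-loaded host
        print(f"Migrating {vm}: {busiest} -> {idlest}")

rebalance(hosts)
```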
Developer Tools:
Description: Tools and development environments that simplify the process of building,
deploying, and managing applications on the platform.
Examples: Google App Engine provides an Eclipse-based IDE, while Microsoft Azure offers
tools for Visual Studio.
Persistence Options:
Description: Methods for storing and retrieving application data, including support for
relational databases and distributed storage solutions.
Examples: Google App Engine uses BigTable, while Salesforce's Force.com uses its own
object database.
Automatic Scaling:
Description: PaaS platforms often feature auto-scaling capabilities, enabling applications to
adapt dynamically to varying levels of traffic and resource demand.
Examples: Platforms like Google App Engine and Heroku provide automatic scaling (a scaling sketch follows this list).
Backend Infrastructure:
Description: Information about the underlying infrastructure and services supporting the
PaaS, including components like load balancers, databases, and data storage.
Examples: Google App Engine leverages its own infrastructure, while Microsoft Azure relies
on Microsoft's data centers.
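A minimal sketch of the automatic scaling behaviour described above, assuming a simple requests-per-second capacity model (the numbers are illustrative and not tied to App Engine or Heroku internals):

```python
# Auto-scaling sketch: choose an instance count from observed traffic.
import math

CAPACITY_PER_INSTANCE = 100    # requests/second one instance can serve (assumed)
MIN_INSTANCES, MAX_INSTANCES = 1, 20

def desired_instances(requests_per_second):
    needed = math.ceil(requests_per_second / CAPACITY_PER_INSTANCE)
    return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))

for rps in (40, 250, 1200, 5000):
    print(f"{rps} req/s -> {desired_instances(rps)} instance(s)")
```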
Q.11 Explain challenges and risks of cloud computing.
Ans :
Challenges and Risks in Cloud Computing:
Security, Privacy, and Trust:
Issue: Cloud services are often public, making them susceptible to attacks.
Challenges: Ensuring cloud environments are as secure as in-house systems. Trust in
providers is crucial for privacy.
Legal Considerations: Data centers' physical locations can affect data management and
compliance with local laws.
Data Lock-In and Standardization:
Concern: Users worry about being locked into one cloud provider, making it hard to move
their data.
Solution: Standardization efforts aim to create open standards for cloud computing. Examples
include the Cloud Computing Interoperability Forum (CCIF) and the Unified Cloud Interface
(UCI).
Availability, Fault-Tolerance, and Disaster Recovery:
User Expectations: Users expect service availability, performance, and clear measures for
handling system failures.
SLAs: Service Level Agreements (SLAs) are needed to specify service details, including
availability and performance guarantees (a small availability-to-downtime sketch follows this answer).
Resource Management and Energy Efficiency:
Resource Allocation: Efficiently managing physical resources shared among virtual machines
is a challenge.
Dynamic VM Mapping: Policies for mapping VMs must consider factors like CPU, memory,
and network bandwidth.
Data Management: Handling large data quantities in VM management activities requires
efficient mechanisms.
Energy Efficiency: Data centers consume substantial electricity, impacting costs and the
environment. Dynamic resource management can optimize performance and minimize energy
consumption.
These challenges and risks must be carefully addressed by cloud providers, developers, and users to
ensure the benefits of cloud computing while mitigating potential issues related to security, data
management, and resource optimization.
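As a small worked example of the SLA point above, the sketch below converts an availability guarantee into the downtime it allows per month (the SLA figures and the 30-day month are illustrative assumptions):

```python
# SLA sketch: translate an availability guarantee into allowed downtime per month.
def allowed_downtime_minutes(availability, period_minutes=30 * 24 * 60):
    return (1 - availability) * period_minutes

for sla in (0.99, 0.999, 0.9999):
    print(f"{sla:.2%} availability -> {allowed_downtime_minutes(sla):.1f} min downtime/month")
```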
Q.12 Explain the promise of the cloud computing services with suitable
diagram.
Ans :
1. Simplicity and Ease of Use: Cloud computing offers simplicity, uniformity, and
ease of use through abstractions, making it accessible without the need to
understand underlying complexities.
2. Economic Savings: Small and medium enterprises can achieve substantial
economic savings by using cloud computing for cyclical IT needs, as documented in
numerous success stories.
3. Cloudonomics: The economic benefits and trade-offs of leveraging cloud services,
known as "cloudonomics," have become a topic of deep interest among IT managers
and technology architects.
4. Offloading Non-Mission-Critical Needs: The promise of the cloud, both in terms
of economics and technology, allows enterprises to move non-mission-critical IT
needs to cloud services, particularly those that are web-oriented, seasonal,
parallelizable, and not highly security-sensitive.
5. Successful Adoption by Startups: Many startups have successfully established
their IT departments exclusively using cloud services, achieving high returns on
investment (ROI).
6. Pilots in Large Enterprises: Large enterprises are running successful pilots for
cloud adoption, including experimenting with running complex applications like SAP
on cloud offerings.
7. Predicted Widespread Adoption: Industry analysts predict that a significant
percentage of top enterprises will have migrated the majority of their IT needs to
cloud offerings, demonstrating the widespread impact and benefits of cloud
computing.
The following features help you configure a VPC to provide the connectivity that your
applications need (a small provisioning sketch follows this list):
1. Subnets: Subnets are like address ranges within your virtual private cloud (VPC).
Each subnet is confined to a single Availability Zone and serves as a place to deploy
AWS resources.
2. IP Addressing: You can assign IP addresses (IPv4 and IPv6) to your VPCs and
subnets. You can even bring your existing public IP addresses to AWS and assign
them to your resources, like EC2 instances or load balancers.
3. Routing: Route tables determine where network traffic from your subnets or
gateways should go. They act like roadmaps for directing data within your network.
4. Gateways and Endpoints: Gateways, like internet gateways, connect your VPC to
external networks (e.g., the internet). VPC endpoints enable private connections to
AWS services without exposing them to the internet.
5. Peering Connections: VPC peering lets you connect resources in different VPCs
securely, allowing them to communicate as if they were in the same network.
6. Traffic Mirroring: You can copy network traffic from your instances and send it to
security and monitoring tools for in-depth analysis.
7. Transit Gateways: Think of transit gateways as central hubs that facilitate traffic
routing between your VPCs, VPN connections, and Direct Connect connections. They
simplify network architecture.
8. VPC Flow Logs: These logs capture details about the IP traffic flowing to and from
network interfaces within your VPC, helping with troubleshooting and security
analysis.
9. VPN Connections: AWS VPN lets you establish secure connections between your
VPCs and your on-premises networks.
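The boto3 sketch below ties several of the features above together by creating a VPC, a subnet, an internet gateway, and a default route; the CIDR blocks and region are placeholders and AWS credentials are assumed to be configured:

```python
# VPC provisioning sketch with boto3 (CIDR blocks and region are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VPC and a subnet inside it.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# Attach an internet gateway so the subnet can reach external networks.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Route all outbound traffic through the internet gateway.
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)

print("VPC:", vpc_id, "Subnet:", subnet_id)
```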
Q.14 Draw and explain cloud computing service offering and deployment
models.
Ans :
Cloud Service Models:
1. Infrastructure as a Service (IaaS):
Definition: IaaS is a cloud computing service offering that provides virtualized and scalable
hardware resources such as computing power, storage, and network infrastructure.
Users: Targeted at IT professionals and system administrators.
Examples: Amazon Web Services (AWS) with services like EC2 (Elastic Compute Cloud) and
S3 (Simple Storage Service).
2. Platform as a Service (PaaS):
Definition: PaaS abstracts the underlying infrastructure and provides a development and
deployment platform on which applications are built and run.
Users: Application developers.
Examples: Google App Engine, Microsoft Azure.
3. Software as a Service (SaaS):
Definition: SaaS delivers software applications to end-users via the internet, with the
underlying cloud infrastructure providing support.
Users: Architects and end-users of large software packages.
Examples: Salesforce.com, Gmail, Yahoo Mail, Facebook, Twitter, and other cloud-supported
applications.
Cloud Deployment Models:
1. Public Clouds:
Definition: Public clouds are cloud environments provided by cloud vendors to the general
public or multiple organizations. These environments offer shared resources, such as virtual
servers and storage, over the internet.
Characteristics: Scalable, cost-effective, and maintained by the service provider.
Examples: Amazon Web Services (AWS), Google Cloud, Microsoft Azure.
2. Private Clouds:
Definition: Private clouds are cloud infrastructures exclusively operated and used by a single
organization. They are hosted on-premises or by a third-party provider, and access is
restricted to that organization.
Characteristics: Enhanced control, customization, and privacy.
Examples: On-premises data centers or cloud solutions dedicated to a single enterprise.
3. Hybrid Clouds:
Definition: Hybrid clouds combine elements of both public and private clouds, allowing data
and applications to be shared between them. Organizations can use hybrid models to meet
specific business needs.
Characteristics: Flexibility, security, and the ability to balance on-premises and cloud
resources.
Examples: Combining public cloud services with private data centers or proprietary cloud
solutions.
Q.15 Explain Challenges in the Cloud with suitable diagram.
Ans :
1. Simplistic View vs. Complexity: Cloud service offerings present a simplified view of IT, programming,
and resource usage, but the underlying systems face significant complexity and challenges.
Failure-Prone: Cloud systems operate in an environment where failures can occur, which
necessitates robust mechanisms for fault tolerance and recovery.
Heterogeneity: Cloud environments often consist of diverse hardware and software
components, making system management more complex.
Resource Hogging: Efficient resource allocation and management are essential to ensure
optimal performance.
Security Shortcomings: Cloud systems must address serious security concerns, such as data
privacy and protection.
3. Idealized View vs. Realistic Management: Cloud services often promise features such as network
reliability, zero network latency, and infinite bandwidth; these assumptions must be managed
realistically to avoid well-known design and implementation fallacies.
4. Security Challenges: Security is a paramount concern in cloud computing, involving issues related to
data protection, compliance, and trust. The Cloud Security Alliance is actively addressing these
challenges.