Cloud Computing (BCA)
1. Grid Computing
○ Definition: Grid computing involves connecting multiple, geographically
dispersed computers to form a single virtual supercomputer. It utilizes idle
resources to perform tasks that require significant computational power.
○ Use Case: Commonly used in scientific research, simulations, and big data
processing.
○ Key Feature: Resources are shared but remain independent.
2. Cluster Computing
○ Definition: Cluster computing refers to the use of a group of tightly
connected computers (nodes) that work together as a single system. These
nodes are usually in close physical proximity.
○ Use Case: Popular in industries requiring high availability and fault tolerance,
like e-commerce and financial services.
○ Key Feature: High-speed interconnects and homogeneous hardware.
3. Distributed Computing
○ Definition: A distributed system uses multiple interconnected computers
that work together to solve problems but are not necessarily in close
proximity. Each node works independently while communicating with others.
○ Use Case: Applications like online banking, internet search engines, and
social networks.
○ Key Feature: Emphasis on scalability and resource sharing.
4. Utility Computing
○ Definition: A pay-per-use model where computing resources, storage, and
services are provided like a utility (e.g., electricity). Users only pay for what
they consume.
○ Use Case: Used for on-demand scaling in small businesses and startups.
○ Key Feature: Cost efficiency and scalability (the billing arithmetic is sketched after this list).
5. Cloud Computing
○ Definition: Cloud computing provides on-demand access to computing
resources (like servers, storage, and applications) over the internet,
eliminating the need for local hardware or software.
○ Use Case: Used for data storage (e.g., Google Drive), computing (e.g., AWS),
and applications (e.g., Microsoft Office 365).
○ Key Feature: Scalability, flexibility, and availability.
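Both utility and cloud computing bill on metered consumption, as noted above. Here is a minimal sketch of that pay-per-use arithmetic; all rates and usage figures are hypothetical, for illustration only:

```python
# Minimal sketch of utility-style pay-per-use billing.
# All rates and usage figures below are hypothetical.

RATES = {
    "compute_hours": 0.05,   # $ per instance-hour
    "storage_gb":    0.02,   # $ per GB-month
    "bandwidth_gb":  0.09,   # $ per GB transferred out
}

def monthly_bill(usage: dict) -> float:
    """Sum metered usage against per-unit rates; pay only for what you use."""
    return sum(RATES[item] * amount for item, amount in usage.items())

# A small startup's month: 2 instances * 730 h, 50 GB stored, 120 GB egress.
print(f"${monthly_bill({'compute_hours': 1460, 'storage_gb': 50, 'bandwidth_gb': 120}):.2f}")
# -> $84.80
```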
Challenges of Cloud Computing
Security and Privacy
○ Cloud services store data on remote servers, so there's always a risk of data
breaches, hacking, or unauthorized access.
○ Sensitive information, like personal data or company secrets, could be
exposed if the provider’s security isn’t strong enough.
○ Users often worry about who can access their data and how it’s being used.
Example: If a company’s financial data is stored in the cloud and the server is
hacked, it can lead to huge losses or reputation damage.
Downtime
○ Cloud services depend on the provider's availability and the internet connection; an outage at either makes hosted applications unreachable.
Example: An e-commerce site hosted on the cloud going offline during a sale can
lead to lost revenue and angry customers.
Vendor Lock-in
○ Once you choose a cloud provider and build your systems around their
platform, it’s hard and expensive to switch to another provider.
○ Different providers use different technologies, so moving data or
applications might require rewriting them to fit the new system.
○ This limits flexibility and forces you to stick with one company, even if they
increase prices or reduce services.
Example: A business using AWS for years may find it nearly impossible to switch
to Google Cloud because of technical differences and the cost of migration.
Migrating into a Cloud
Introduction
Migrating into a cloud means moving your data, applications, and IT infrastructure from
on-premises systems (or other environments) to cloud-based platforms. It’s done to take
advantage of the cloud's scalability, cost-effectiveness, and efficiency. However, this
process involves careful planning, as it can affect the way an organization operates.
Cloud migration isn’t just about copying data—it’s about redesigning systems to fit the
cloud environment, which might involve changes to architecture, security, and application
performance.
● Rehosting (Lift-and-Shift):
This approach involves moving applications and data "as-is" to the cloud, without
any changes to the architecture. It’s quick and cost-effective, but might not fully
utilize the benefits of cloud services.
● Replatforming:
In this method, small modifications are made to optimize applications for the
cloud. For instance, switching to a cloud-compatible database while keeping most
of the architecture intact.
● Refactoring:
Applications are rebuilt to take full advantage of cloud capabilities like
auto-scaling or serverless architectures. This approach can be costly but results in
better performance and flexibility.
● Rebuilding:
Applications are completely redesigned and developed from scratch for the cloud.
It’s the most expensive and time-consuming method but offers maximum benefits.
● Retiring:
Some applications or systems that are no longer needed may be retired instead of
migrating them to reduce costs and complexity.
● Retaining:
In some cases, certain workloads might remain on-premises due to security,
compliance, or performance requirements.
Virtualization
Types of Virtualization
1. Hardware Virtualization:
○ Creates virtual machines by using a hypervisor.
○ Examples: VMware, Hyper-V.
2. Operating System Virtualization:
○ Allows multiple isolated user-space instances on the same OS.
○ Examples: Docker, OpenVZ.
3. Storage Virtualization:
○ Combines multiple physical storage devices into a single logical storage unit.
○ Examples: SAN, NAS.
4. Network Virtualization:
○ Abstracts and manages physical network resources as virtual networks.
○ Examples: SDN (Software-Defined Networking), VLANs.
5. Application Virtualization:
○ Allows applications to run in a virtualized environment without installation on
the physical machine.
○ Examples: Citrix, ThinApp.
Virtualization and Cloud Computing
Pros of Virtualization
● Cost Savings: Reduces the need for physical hardware and lowers maintenance
costs.
● Efficiency: Utilizes hardware resources effectively by running multiple VMs on a
single machine.
● Flexibility: Easily scales resources up or down as needed.
● Disaster Recovery: Simplifies backup and restoration of systems due to
encapsulated VMs.
● Environment Simulation: Enables testing and development in isolated
environments.
Cons of Virtualization
● Performance Overhead: Running workloads through a hypervisor adds some overhead compared to bare-metal execution.
● Single Point of Failure: A failure of the physical host can take down all VMs running on it.
● Complexity and Cost: Enterprise virtualization software, licensing, and the skills to manage it add expense.
● Security Risks: A compromised hypervisor can expose every guest VM.
Hypervisor Technology
● Xen:
○ Open-source hypervisor used in many cloud platforms like AWS.
○ Efficient for large-scale virtualization with high scalability.
● VMware:
○ Industry leader in virtualization technology.
○ Offers products like VMware ESXi (Type 1 hypervisor) and VMware
Workstation (Type 2 hypervisor).
● Microsoft Hyper-V:
○ Built into Windows Server and Windows 10/11.
○ Offers seamless integration with Microsoft’s ecosystem and supports both
Linux and Windows VMs.
Capacity Planning
Introduction
Capacity planning involves determining the necessary resources (like computing power,
storage, and network bandwidth) required to handle current and future demands. It’s a
crucial step in ensuring that IT infrastructure can scale efficiently and meet the needs
of users without over-provisioning resources (which leads to unnecessary costs) or
under-provisioning (which leads to performance issues). Effective capacity planning
ensures that resources are allocated appropriately to support workloads efficiently.
Elasticity vs Scalability
● Elasticity:
○ Refers to the ability of a system to automatically adjust its resources
based on the workload.
○ It’s dynamic and allows the system to quickly scale up when demand
increases or scale down when demand decreases.
○ Example: Cloud services like AWS automatically scale your computing
resources up or down based on traffic.
● Scalability:
○ Refers to a system’s ability to handle increasing load by adding more
resources (either vertically or horizontally) without affecting performance.
○ Vertical Scaling (Scaling Up): Increasing the power of an existing server
(e.g., adding more RAM or CPU).
○ Horizontal Scaling (Scaling Out): Adding more servers to distribute the load
evenly.
○ Example: Expanding a database cluster by adding more servers to handle
more transactions.
● Baseline Measurements:
○ A baseline is a reference point for measuring performance. It's the set of
initial conditions under which a system operates optimally.
○ These measurements are taken under normal conditions to understand how
the system performs before any scaling or changes are made.
○ Baselines help in identifying when the system is underperforming or when
resources need to be scaled.
○ Example: Monitoring CPU usage, memory, and disk space usage on a server
under normal load.
● System Metrics (a monitoring sketch follows this list):
○ CPU Utilization: Measures the percentage of CPU being used. High CPU
usage could indicate the need for more processing power.
○ Memory Usage: Measures the percentage of RAM in use. High memory usage
might require more memory or optimization of the application.
○ Disk I/O: Measures how quickly data is read from or written to disk. A high
disk I/O could indicate the need for faster storage or more efficient data
management.
○ Network Throughput: Measures the amount of data being sent and received
over the network. High network throughput suggests that bandwidth might
need to be upgraded.
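The baseline-versus-threshold idea above can be made concrete. This is a minimal sketch that assumes the third-party psutil package (pip install psutil); the baseline and ceiling values are illustrative, not recommendations:

```python
# Sketch: capture current system metrics and flag when scaling may be needed.
import psutil

BASELINE = {"cpu": 35.0, "memory": 50.0}   # hypothetical normal-load readings (%)
CEILING  = {"cpu": 80.0, "memory": 85.0}   # hypothetical scale-out thresholds (%)

def sample_metrics() -> dict:
    return {
        "cpu": psutil.cpu_percent(interval=1),      # % CPU over a 1 s window
        "memory": psutil.virtual_memory().percent,  # % RAM in use
    }

current = sample_metrics()
for metric, value in current.items():
    drift = value - BASELINE[metric]
    status = "SCALE" if value >= CEILING[metric] else "ok"
    print(f"{metric}: {value:.1f}% (baseline {BASELINE[metric]}%, drift {drift:+.1f}) -> {status}")
```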
Load Testing
● Load testing involves simulating normal and peak load conditions to measure how a
system behaves under varying levels of stress.
● It helps determine if a system can handle the expected load and identifies
potential bottlenecks.
● Load testing helps set performance baselines and ensures that the system can
manage the expected user traffic.
● Example: Testing how a website handles 1,000 users simultaneously to ensure it
performs well under traffic spikes.
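A crude load test can be sketched in a few lines: N concurrent clients fetch one URL and report latency. The URL here is a placeholder; a real test would use a dedicated tool (e.g., JMeter or Locust) against a staging environment:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # placeholder target
USERS = 100                      # simulated concurrent users

def hit(_):
    start = time.perf_counter()
    try:
        urllib.request.urlopen(URL, timeout=10).read()
        return time.perf_counter() - start
    except OSError:
        return None              # failed request

with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(hit, range(USERS)))

ok = [r for r in results if r is not None]
print(f"{len(ok)}/{USERS} succeeded; worst latency {max(ok):.3f}s" if ok else "all failed")
```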
Resource Ceilings
● A resource ceiling is the maximum capacity a system can handle before it starts
to experience performance degradation.
● When a system reaches its resource ceiling, it may require additional resources,
optimization, or scaling to avoid issues like slow response times or crashes.
● Example: If a server can handle 100 requests per second, but its CPU and memory
are maxed out beyond this point, it has reached its resource ceiling.
Server and Instance Types
● Server Types:
○ Servers are classified based on their specifications (CPU, RAM, storage
capacity) and their intended use (e.g., web server, database server).
○ Choosing the right server type ensures that the system can handle its
expected load and workload efficiently.
● Instance Types:
○ Cloud platforms like AWS, Google Cloud, and Azure offer various instance
types with different CPU, memory, and storage configurations.
○ For example, a compute-optimized instance is ideal for CPU-heavy
workloads, while a memory-optimized instance is better for applications
that require a lot of memory.
● Example: For a database application, an instance with high memory and fast disk
storage might be necessary, while a web server could use a smaller instance with a
balance of compute and memory.
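The workload-to-instance mapping above can be sketched as a simple decision rule. The family names are generic ("compute-optimized" etc.); real providers use their own SKUs, so treat this purely as an illustration of the decision, not a sizing guide:

```python
# Sketch: map a workload profile to a generic instance family.
def pick_instance(cpu_heavy: bool, memory_gb: int) -> str:
    if cpu_heavy:
        return "compute-optimized"
    if memory_gb >= 64:
        return "memory-optimized"
    return "general-purpose"

print(pick_instance(cpu_heavy=False, memory_gb=128))  # database -> memory-optimized
print(pick_instance(cpu_heavy=True,  memory_gb=8))    # batch encoder -> compute-optimized
```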
Network Capacity
● Network capacity refers to the amount of data that can be transmitted over a
network in a given period of time.
● It’s crucial to plan for network bandwidth requirements, especially for cloud
applications and services that transfer large amounts of data.
● Example: A video streaming platform needs to ensure high network bandwidth to
stream videos without buffering, while a simple website might have lower network
requirements.
● Key Considerations:
○ Latency: Delay in data transmission, which can affect user experience.
○ Throughput: The amount of data transferred over the network per unit of
time.
○ Packet Loss: Loss of data packets can disrupt communication and affect
performance.
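Back-of-envelope transfer-time arithmetic is often all capacity planning needs at first. Note the bits/bytes distinction: link speeds are usually quoted in megabits per second, file sizes in bytes:

```python
# Sketch: how long does a transfer take on a given link?
def transfer_seconds(size_gb: float, link_mbps: float) -> float:
    bits = size_gb * 8 * 1000**3         # GB -> bits (decimal units)
    return bits / (link_mbps * 1000**2)  # bits / (bits per second)

# Streaming example: a 4.5 GB video over a 100 Mbps link.
print(f"{transfer_seconds(4.5, 100):.0f} s")   # -> 360 s
```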
Service Level Agreements (SLAs)
Inspiration
Service Level Agreements (SLAs) in cloud computing are designed to establish clear
expectations between cloud service providers and their customers regarding the quality,
availability, and performance of services. The inspiration behind SLAs is to provide a
formal contract that defines the levels of service customers can expect and to hold the
service provider accountable for meeting those standards. This ensures that both
parties are aligned in terms of performance, security, and reliability expectations.
Service Level Objectives (SLOs) are specific, measurable targets that contribute to
achieving an SLA. Traditional approaches to SLO management typically involve:
● Manual Monitoring:
Performance metrics and system health are often monitored manually or through
basic monitoring tools. Providers and customers would manually track the
availability, uptime, and response time of services.
● Periodic Reporting:
Reports are created at set intervals (e.g., monthly or quarterly) to check if the
service meets the agreed-upon SLOs. These reports may be time-consuming to
generate and are subject to human error.
● Reactive Problem Solving:
Providers would typically respond to performance issues or failures only after they
occur, based on alerts or complaints from users.
Types of SLAs
SLAs can vary depending on the type of service provided. Some common types include:
● Service-Based SLA:
○ Applies to all customers who use a specific service.
○ The terms and conditions are the same for all customers who use the
service.
○ Example: A cloud storage service might guarantee 99.9% uptime for all its
users.
● Customer-Based SLA:
○ Tailored to a specific customer or customer group, taking into account the
unique needs or requirements of the customer.
○ Example: An enterprise customer may have a custom SLA for data security
or dedicated resources in a cloud-based virtual private server.
● Multi-Level SLA:
○ A combination of service-based and customer-based SLAs. It includes
different levels of agreements, such as corporate-level SLAs (covering
broad services) and service-level SLAs (for specific individual services).
○ Example: An SLA for a cloud infrastructure provider could include
corporate-level terms for uptime and data protection, and service-level
terms for specific applications or workloads.
SLA Life Cycle
The life cycle of an SLA in cloud computing includes several phases, from creation to
termination:
1. Negotiation:
○ The service provider and customer negotiate the terms and conditions of
the SLA, including service expectations, response times, and penalties for
failure to meet agreed-upon metrics.
2. Agreement:
○ Both parties sign the SLA, solidifying the commitment to the terms defined
in the agreement. This stage also includes setting up monitoring tools to
track the agreed-upon metrics.
3. Monitoring:
○ Continuous tracking of service performance against the defined SLOs (e.g.,
uptime, response time, resource usage). Automated monitoring tools are
often used in cloud environments to track these metrics in real-time.
4. Enforcement:
○ If the service provider fails to meet the agreed SLOs, penalties or
compensations may be triggered as outlined in the SLA. This might include
service credits, refunds, or contract termination (see the credit calculation sketched after this list).
5. Review and Renewal:
○ The SLA is periodically reviewed and renewed. It may be modified based on
the customer’s evolving needs or any changes in the service capabilities of
the provider. The review also involves assessing if the current agreement
still reflects the desired outcomes and service quality.
6. Termination:
○ The SLA is terminated if the service is no longer required, or at the end of
the contract term. In some cases, an SLA may be replaced with a new one if
the cloud service evolves.
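The enforcement arithmetic referenced above can be sketched as follows: compute achieved uptime for a month and look up a service credit. The credit tiers here are hypothetical; real providers publish their own schedules:

```python
# Sketch of SLA enforcement: uptime % -> service credit (% of monthly bill).
MINUTES_PER_MONTH = 30 * 24 * 60
CREDIT_TIERS = [          # (uptime floor %, credit %), hypothetical values
    (99.9, 0),            # SLO met: no credit
    (99.0, 10),
    (95.0, 25),
    (0.0, 100),
]

def service_credit(downtime_minutes: float) -> tuple[float, int]:
    uptime = 100 * (1 - downtime_minutes / MINUTES_PER_MONTH)
    for floor, credit in CREDIT_TIERS:
        if uptime >= floor:
            return uptime, credit
    return uptime, 100

uptime, credit = service_credit(downtime_minutes=90)  # a 1.5 h outage
print(f"uptime {uptime:.3f}% -> {credit}% credit")    # 99.792% -> 10% credit
```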
SLA Management in Cloud Computing
Cloud computing introduces new challenges and opportunities for SLA management:
● Dynamic Scaling:
Cloud services often scale dynamically based on demand, which means that
managing SLAs requires continuous monitoring of resource allocation, ensuring that
performance standards are met during scaling events.
● Automated Monitoring and Alerts:
Cloud providers typically use automated monitoring tools that track system
performance in real-time, alerting both the provider and the customer of any SLA
violations. This reduces manual intervention and ensures quicker responses to
issues (a minimal monitoring sketch follows this list).
● Granular SLAs:
Cloud SLAs are often more granular, with specific terms for various service layers
(e.g., compute, storage, networking) and individual services (e.g., databases,
applications). This allows customers to have greater control and customization
based on their usage patterns.
● Third-Party Services:
In cloud environments, customers may use third-party services that are
integrated into their cloud infrastructure. These third parties may have their own
SLAs, creating a more complex web of agreements to manage.
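A minimal sketch of the automated monitoring idea above: periodically probe a service endpoint and alert when a response-time SLO is breached. The URL, interval, and SLO value are placeholders; production systems use a monitoring stack (e.g., Prometheus plus alerting) rather than a loop like this:

```python
import time
import urllib.request

URL = "http://localhost:8080/health"   # placeholder health endpoint
SLO_SECONDS = 0.5                      # hypothetical response-time objective

def probe() -> float | None:
    start = time.perf_counter()
    try:
        urllib.request.urlopen(URL, timeout=5).read()
        return time.perf_counter() - start
    except OSError:
        return None                    # endpoint unreachable

for _ in range(3):                     # a few sample cycles
    latency = probe()
    if latency is None:
        print("ALERT: endpoint down (availability SLO at risk)")
    elif latency > SLO_SECONDS:
        print(f"ALERT: latency {latency:.2f}s exceeds {SLO_SECONDS}s SLO")
    else:
        print(f"ok: {latency:.2f}s")
    time.sleep(1)                      # real monitors use a configured interval
```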
Securing Cloud Services
Securing cloud services is critical as organizations move their operations to the cloud.
Cloud security involves protecting cloud environments, applications, data, and systems
from potential cyber threats. It requires both technology and policy frameworks to
ensure confidentiality, integrity, and availability of data and services.
Cloud Security
Cloud security refers to the set of policies, technologies, and controls designed to
protect data, applications, and services hosted in the cloud. This involves securing
various layers of the cloud environment—such as infrastructure, platforms, applications,
and data—against potential threats like hacking, data breaches, and service disruptions.
● Identity and Access Management (IAM): Ensuring only authorized users can
access the cloud services.
● Data Encryption: Protecting data both at rest and in transit to prevent
unauthorized access.
● Firewalls and Security Groups: Controlling traffic entering and exiting cloud
services.
Securing Data
Cloud data security involves securing sensitive data stored and processed in the cloud.
Cloud data must be protected against unauthorized access, loss, or corruption, and
cloud services must comply with privacy regulations.
● Data Encryption: Encrypt data during transit and at rest to prevent unauthorized
access, even if the cloud infrastructure is compromised.
○ At Rest: Data stored on cloud servers is encrypted to prevent unauthorized
access while it's stored.
○ In Transit: Data is encrypted while being transferred over networks to
prevent interception.
● Access Control: Implement strong access controls (authentication and
authorization) to ensure only authorized users can access sensitive data.
● Data Masking and Tokenization: Data masking obscures sensitive information
so that unauthorized users can't view it. Tokenization replaces sensitive data with
a unique identifier (token) that is useless without access to the token-to-value
mapping (both are sketched after this list).
● Backup and Recovery: Ensure regular backups are made and that recovery
procedures are in place to recover data in the event of a breach or disaster.
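A minimal sketch of masking versus tokenization. The vault here is an in-memory dict purely for illustration; a real tokenization service keeps the mapping in a hardened, access-controlled store:

```python
import secrets

def mask_card(number: str) -> str:
    """Masking: hide all but the last four digits; irreversible by design."""
    return "*" * (len(number) - 4) + number[-4:]

_vault: dict[str, str] = {}   # token -> original value (the sensitive mapping)

def tokenize(value: str) -> str:
    """Tokenization: replace the value with a random token; reversible only via the vault."""
    token = secrets.token_hex(8)
    _vault[token] = value
    return token

def detokenize(token: str) -> str:
    return _vault[token]

card = "4111111111111111"
print(mask_card(card))                      # ************1111
tok = tokenize(card)
print(tok, "->", detokenize(tok) == card)   # random token -> True
```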
Brokered Cloud Storage Access
Brokered cloud storage access refers to third-party services that manage data access
between users and cloud storage providers. These brokers act as intermediaries,
managing who can access the data and under what conditions.
● Access Control: The broker must enforce strict access controls to ensure that
only authorized users can access data stored in the cloud.
● Encryption: Data handled by brokers must be encrypted, both in transit and at
rest, to ensure its confidentiality and integrity.
● Audit Trails: Brokers should maintain logs of who accessed the data, when, and for
what purpose, to provide visibility into potential threats and comply with
regulatory requirements.
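A broker-side audit trail can be sketched as follows: every data access is checked against an allow-list and logged with who, what, and when. The names and the in-memory permission store are illustrative only:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("broker.audit")

PERMISSIONS = {"alice": {"reports/q1.csv"}}   # user -> objects they may read

def brokered_read(user: str, obj: str) -> bool:
    allowed = obj in PERMISSIONS.get(user, set())
    audit.info("%s user=%s object=%s allowed=%s",
               datetime.now(timezone.utc).isoformat(), user, obj, allowed)
    return allowed

brokered_read("alice", "reports/q1.csv")    # logged, allowed=True
brokered_read("mallory", "reports/q1.csv")  # logged, allowed=False
```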
Cloud Storage Location and Tenancy
Cloud storage location refers to where data is physically stored, which can have security
implications. Tenancy refers to the sharing of resources by different customers in a
cloud environment.
Key considerations:
● Location of Data:
○ Choose cloud providers that offer transparency about where your data is
stored. Some jurisdictions have strict laws about where certain data types
(e.g., personal data) can be stored (e.g., EU's GDPR laws).
○ Data Residency: Certain industries or regions may require data to be stored
in specific geographic locations for compliance reasons.
● Single-Tenant vs Multi-Tenant:
○ Single-Tenant: The cloud resources (e.g., storage, servers) are dedicated to
one customer, reducing the risk of data exposure to other customers.
○ Multi-Tenant: Multiple customers share the same resources. Adequate
isolation mechanisms are essential to prevent data leakage or unauthorized
access between tenants.
● Data Segregation: Cloud providers need to ensure that tenants' data is isolated
from others, preventing cross-tenant data access or leakage.
Encryption
Encryption is one of the most critical methods for ensuring data security in the cloud.
● Encryption at Rest:
○ Ensures that data is encrypted when stored on the cloud provider’s servers.
If attackers gain access to storage, they cannot read the data without the
decryption key.
● Encryption in Transit:
○ Data transferred between the client and the cloud, or between cloud
services, is encrypted during transmission to prevent interception.
● Key Management:
○ Proper management of encryption keys is crucial. Organizations can choose
to manage keys themselves (with a customer-managed key model) or allow
the cloud provider to manage them (cloud-managed keys).
○ Hardware Security Modules (HSM): Physical devices used to manage
encryption keys securely and prevent unauthorized access.
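A minimal sketch of encryption at rest, assuming the third-party cryptography package (pip install cryptography). Fernet provides symmetric authenticated encryption; in practice the key would live in a key-management service or HSM, never beside the data:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in production: fetch from a KMS/HSM
cipher = Fernet(key)

record = b"account=12345; balance=9,000"
stored = cipher.encrypt(record)      # what lands on disk is ciphertext
print(stored[:20], b"...")           # unreadable without the key

assert cipher.decrypt(stored) == record   # authorized read with the key
```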
Auditing and Compliance
Auditing and compliance ensure that cloud services meet security standards and
regulatory requirements. Regular audits are critical to verify that cloud providers
adhere to security policies and that organizations are compliant with laws and industry
standards.
● Audit Logs:
○ Cloud service providers should maintain detailed logs of who accessed what
data and when. These logs help track suspicious activities and meet
compliance requirements.
● Compliance Standards:
○ Cloud providers must comply with regulations like GDPR, HIPAA, ISO
27001, and SOC 2.
○ Customers should ensure their cloud provider offers services that are
certified against relevant standards to guarantee that data privacy,
integrity, and availability are protected.
● Penetration Testing and Vulnerability Scanning:
○ Regular penetration testing and vulnerability scanning ensure that potential
security holes are discovered and patched before they can be exploited.
● Third-Party Audits:
○ Independent audits by third-party firms help validate that the cloud
provider meets security and compliance standards, offering more
transparency and assurance to customers.
Best Practices for Securing Cloud Services
1. Data Encryption: Always encrypt sensitive data both at rest and in transit using
robust encryption standards.
2. Access Management: Implement strict identity and access management controls
(IAM) to ensure only authorized users can access cloud services and data. This
includes using multi-factor authentication (MFA).
3. Regular Audits: Regularly audit cloud usage and access logs to identify suspicious
activity and ensure compliance with security policies and regulations.
4. Backup and Disaster Recovery Plans: Ensure that data is regularly backed up, and
disaster recovery procedures are in place for data recovery after a breach or
system failure.
5. Monitoring: Continuously monitor the cloud environment for any unusual activity,
performance issues, or security threats using automated tools.
6. Secure API Access: Ensure that APIs are secure and only accessible by
authenticated and authorized users to prevent unauthorized access.
7. Vendor Risk Management: Ensure that cloud service providers meet security and
compliance standards, and understand their security practices and how they align
with your organization’s needs.
8. Network Security: Use firewalls, virtual private networks (VPNs), and other
network security measures to protect data as it moves through and between cloud
services.
9. Data Residency and Location Awareness: Be aware of the cloud provider’s data
residency policies and ensure compliance with regional data protection laws.
10. Security Awareness Training: Educate employees about cloud security best
practices, such as recognizing phishing attacks and handling sensitive information
safely, to reduce human error.
Amazon Web Services (AWS)
● Compute Services: On-demand virtual servers and compute capacity (e.g., Amazon EC2).
● Storage Services: Scalable object, block, and archival storage (e.g., Amazon S3, Amazon EBS).
● Communication Services: Messaging, queuing, and notification services that connect distributed components (e.g., Amazon SQS, Amazon SNS).
Google App Engine
Google App Engine (GAE) is a Platform-as-a-Service (PaaS) solution for developing and
hosting web applications in Google-managed data centers. It is designed to handle
applications written in various languages, like Python, Java, Go, and PHP.
● Managed Platform:
Google App Engine provides an environment where developers can focus purely on
writing code without worrying about infrastructure management, such as server
provisioning and scaling.
● Scalability:
Google App Engine automatically scales applications depending on the incoming
traffic. If the number of users increases, GAE provisions new instances of the
application without manual intervention.
● Services and APIs:
GAE offers several services, including storage (Google Cloud Datastore), user
authentication, and task scheduling, making it easy for developers to integrate
with Google’s ecosystem.
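A minimal sketch of the kind of web handler deployed on App Engine's Python standard environment, assuming Flask (a common choice; the deployment also needs an app.yaml, omitted here):

```python
# main.py -- minimal App Engine-style web handler (sketch).
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from App Engine!"

if __name__ == "__main__":
    # Local testing only; on App Engine a production server runs the app.
    app.run(host="127.0.0.1", port=8080, debug=True)
```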
Cost Model
● Pay-As-You-Go:
Google App Engine uses a pay-per-use pricing model, meaning you only pay for the
resources your application consumes. This includes the number of instances,
storage, and bandwidth used.
● Free Tier:
GAE provides a free tier that allows developers to experiment with basic
resources without incurring any costs. It offers limited instances and storage.
● Scaling Costs:
As the application scales, the costs increase based on the additional resources
used. However, the automatic scaling feature ensures that the app remains
cost-efficient by only using resources when necessary.
Microsoft Azure
Microsoft Azure is a cloud platform offering a broad set of cloud services, including
compute, storage, databases, networking, and analytics.
Azure Core Concepts
● Resource Group:
A container for managing related Azure resources (e.g., virtual machines,
databases). Resource groups help organize and manage resources by project or
environment.
● Azure Virtual Machines (VM):
Virtual machines that provide scalable compute resources. Azure VMs support
Windows and Linux-based operating systems and allow users to run applications in
the cloud.
● Azure Storage:
Offers scalable and secure cloud storage solutions, including Blob Storage (for
unstructured data), Table Storage (for NoSQL data), and Disk Storage (for VM
disks).
● Azure Virtual Network (VNet):
Enables secure and isolated communication between Azure resources, such as VMs,
web apps, and databases.
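As a sketch of working with Azure Storage programmatically, here is a blob upload using the azure-storage-blob SDK (pip install azure-storage-blob); the connection string, container, and blob names are placeholders:

```python
from azure.storage.blob import BlobServiceClient

conn_str = "<your-storage-connection-string>"   # placeholder
service = BlobServiceClient.from_connection_string(conn_str)
blob = service.get_blob_client(container="backups", blob="report.csv")

with open("report.csv", "rb") as data:
    blob.upload_blob(data, overwrite=True)      # write the blob, replacing any existing one
```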
SQL Azure
SQL Azure (now Azure SQL Database) is Microsoft's managed relational database
service, providing SQL Server capabilities in the cloud without the need to manage the
underlying infrastructure.
Summary
● Amazon Web Services (AWS): Offers a wide variety of cloud services, including
compute, storage, and communication services, with a strong emphasis on
scalability and reliability.
● Google App Engine: A platform-as-a-service (PaaS) for developing and hosting web
applications, providing automatic scaling and integrated Google Cloud services.
● Microsoft Azure: A versatile cloud platform offering compute, storage, and
networking services. It integrates well with Microsoft products and provides both
public and hybrid cloud solutions.
Each of these platforms offers unique capabilities and benefits, depending on the needs
of the organization, from basic cloud storage to complex enterprise applications.