
A Private Cloud is a cloud computing model that offers the same advantages of public cloud—scalability, flexibility, and resource sharing—but is dedicated to a single organization. It provides an exclusive cloud environment where resources like servers, storage, and networking are not shared with other entities, offering a high level of control, security, and customization.

Here’s a detailed breakdown of the private cloud:

1. Definition

• A private cloud is a cloud infrastructure operated solely for a single organization. It can be
hosted on-premises or externally by a third-party provider.

• It delivers similar benefits to public cloud services, such as scalability and self-service, but
with additional control and security, making it ideal for organizations with strict regulatory or
security requirements.

2. Types of Private Cloud

• On-Premises Private Cloud:

o The organization manages and hosts the cloud infrastructure in its own data centers. It offers maximum control but requires significant investment in hardware, software, and personnel.

• Hosted Private Cloud:

o A third-party provider hosts the infrastructure but allocates it exclusively to the organization. It reduces the complexity of managing hardware while providing private cloud benefits.

• Managed Private Cloud:

o The organization outsources the management of its private cloud to a provider, which handles maintenance, updates, and scaling, allowing the organization to focus on its core business.

3. Key Features

• Dedicated Environment: Resources like storage, computing power, and networking are
dedicated to a single organization, ensuring maximum privacy and control.

• Customization: Private clouds can be customized to meet specific organizational needs, including specialized security protocols, compliance requirements, or performance optimizations.

• Scalability: Like public clouds, private clouds can scale up or down based on demand, but the resources are only for the organization’s use, ensuring predictable performance.

• Security: Since the infrastructure is isolated, it offers enhanced security compared to public clouds, with more granular control over data and infrastructure.

• Compliance: Private clouds are preferred by industries like finance, healthcare, and
government where strict compliance (e.g., HIPAA, GDPR) is mandatory. They allow
organizations to meet data residency, privacy, and security requirements more easily.

4. Technologies Involved

• Virtualization: Underpins most private cloud implementations, allowing multiple virtual machines (VMs) to run on a single piece of hardware, improving resource utilization and flexibility.

• Software-Defined Networking (SDN): Allows for programmable, scalable, and flexible network management in the private cloud environment.

• Automation Tools: Private clouds use tools like VMware, OpenStack, and Microsoft Azure
Stack to automate provisioning, configuration, and management tasks, streamlining
operations.
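
For OpenStack-based private clouds, for instance, provisioning against these layers can be scripted. Below is a minimal sketch using the openstacksdk Python library; the clouds.yaml entry name ("private-cloud") and the network names are assumptions for illustration, not fixed conventions.

    import openstack

    # Authenticate via a "private-cloud" entry in clouds.yaml (assumed name).
    conn = openstack.connect(cloud="private-cloud")

    # Create a tenant network and subnet through Neutron's API,
    # the SDN layer described above.
    network = conn.network.create_network(name="demo-net")
    subnet = conn.network.create_subnet(
        network_id=network.id,
        name="demo-subnet",
        ip_version=4,
        cidr="192.168.10.0/24",
    )
    print(network.id, subnet.cidr)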

5. Advantages

• Control and Customization: Complete control over the cloud environment, with the ability
to customize hardware, networking, and security to meet organizational needs.

• Security and Compliance: Enhanced security features like encryption, firewalls, and
network segmentation, making it easier to comply with regulations.

• Performance: Dedicated resources ensure consistent, predictable performance since there is no competition for bandwidth or processing power.

• Cost Efficiency: For large enterprises with the resources to manage a private cloud, it can be more cost-effective in the long run than public cloud services, especially for consistent workloads.

• Regulatory Compliance: Private clouds offer better control over data residency and
privacy, making it easier to adhere to local and international regulatory requirements.

6. Disadvantages

• Higher Upfront Costs: Significant initial investment in hardware, software, and personnel is
required for on-premises private clouds.

• Management Complexity: The organization must manage, maintain, and update the cloud
infrastructure, which can require specialized IT staff and complex tools.

• Scaling Limitations: Unlike public clouds, which can scale infinitely, private clouds are
limited by the organization's own hardware and infrastructure. Scaling requires purchasing
additional hardware.

7. Use Cases

• Highly Regulated Industries: Financial services, healthcare, and government sectors that
require strict security and regulatory compliance often prefer private clouds.

• Data-Critical Applications: Enterprises with mission-critical applications that demand high security, performance, and reliability use private clouds for full control over their environment.

• Large Enterprises: Companies with sufficient resources for hardware, infrastructure, and
management teams prefer private clouds for their flexibility and long-term cost savings.

8. Comparison to Other Models

• Private Cloud vs. Public Cloud:

o Public cloud is cheaper and easier to scale but comes with less control and security
than private clouds.

• Private Cloud vs. Hybrid Cloud:

o Hybrid cloud combines both private and public cloud elements. Organizations
might use private clouds for critical workloads and public clouds for less sensitive
operations.

• Private Cloud vs. Community Cloud:

o A community cloud is a private cloud shared by multiple organizations with similar needs, like several government agencies. A private cloud is exclusively for one organization.

9. Popular Providers
• VMware vCloud Suite: Offers private cloud infrastructure with virtualization, management,
and automation features.

• Microsoft Azure Stack: An extension of Microsoft’s public cloud, Azure Stack allows
businesses to create their private cloud using Microsoft technologies.

• OpenStack: An open-source cloud computing platform used by many organizations to build and manage private clouds.

• AWS Outposts: AWS brings its public cloud services into your own data centers, allowing
businesses to build a hybrid or private cloud with AWS infrastructure.

10. Conclusion

• Private clouds are ideal for organizations that prioritize control, security, and compliance.
Though they come with higher costs and management overhead, they offer unparalleled
benefits for businesses in regulated industries or those with unique computing needs.

Private Cloud – Benefits

1. Enhanced Security

• Private clouds provide a high level of security as they operate in isolated environments,
ensuring sensitive data and critical applications are protected from external threats.

2. Increased Control

• Organizations have full control over the hardware, software, and network, allowing them to
tailor the cloud environment to their specific needs, including configuration and
management.

3. Regulatory Compliance

• Private clouds enable organizations to meet strict regulatory standards and compliance
requirements (e.g., GDPR, HIPAA), especially in sectors like finance and healthcare.

4. Customizable Infrastructure

• Since the private cloud is dedicated to one organization, it can be customized for unique
business needs, allowing for specialized configurations, storage, and networking setups.

5. Consistent Performance

• Resources are not shared with other tenants, meaning there is no competition for
bandwidth or compute power, resulting in predictable and reliable performance.

6. Cost Efficiency for Large Enterprises

• While upfront costs can be high, private clouds can become cost-effective in the long run
for large enterprises with consistent workloads, reducing dependency on external services.
7. Data Privacy

• Data is stored on dedicated infrastructure, which reduces the risk of unauthorized access
and ensures that data remains private and secure, which is crucial for sensitive information.

8. Scalability

• Although not as limitless as the public cloud, private clouds offer scalability by allowing
organizations to expand resources (compute, storage) when needed, in line with business
growth.

9. High Availability

• Private clouds can be designed with high redundancy and failover mechanisms to ensure
that services and applications are always available, even in case of hardware failure.

10. Flexible Deployment Models

• Private clouds offer flexible deployment options, whether on-premises, hosted externally by
a provider, or managed by a third party, giving organizations the choice of how they want to
run their cloud environment.

Private Cloud – Challenges

Here are some key challenges of implementing and managing a Private Cloud:

1. High Initial Costs

• Setting up a private cloud requires significant upfront investment in hardware, infrastructure, and software. These costs can be a barrier for small to mid-sized businesses.

2. Complex Management

• Private clouds demand specialized IT staff and expertise to manage and maintain the
infrastructure, including software updates, network management, and hardware
maintenance.

3. Scalability Limitations

• Unlike public clouds, which offer nearly infinite scalability, private clouds are limited by the
organization’s hardware and resources. Expanding the infrastructure requires purchasing
additional hardware.

4. Capacity Planning
• Proper planning is essential to avoid under-utilization or over-provisioning of resources.
Organizations must accurately predict demand, which can be difficult, especially for
growing businesses.

5. Ongoing Maintenance

• Managing a private cloud requires continuous maintenance, including updates, patches, security monitoring, and hardware replacements, which can be resource-intensive.

6. Limited Flexibility

• While the private cloud provides control and customization, the flexibility to quickly scale
up and down like public cloud services may be limited by the available resources on hand.

Private Cloud – Types

1. On-Premises Private Cloud

• Description: The cloud infrastructure is hosted and maintained within the organization’s
own data center. The company has full control over its setup, configuration, and
maintenance.

• Advantages: Complete control over data, hardware, and security. Suitable for organizations
with strict data governance and compliance needs.

• Challenges: Requires significant capital investment and ongoing maintenance costs. The
organization needs skilled IT staff to manage the infrastructure.

2. Hosted Private Cloud

• Description: The private cloud is hosted off-site by a third-party provider, but the
infrastructure is exclusively dedicated to one organization. The hosting provider manages
the hardware and physical infrastructure.

• Advantages: Reduces the burden of managing hardware and physical infrastructure. Provides high availability and security without the need for on-site data centers.

• Challenges: Less control compared to on-premises solutions. The organization depends on the provider for infrastructure management.

3. Managed Private Cloud

• Description: A third-party provider not only hosts the cloud infrastructure but also
manages it on behalf of the organization. This includes everything from software updates to
security management.

• Advantages: Offloads day-to-day IT operations, allowing the organization to focus on business processes. Simplifies management and maintenance.

• Challenges: Costs can be higher than self-managed options, and the organization might lose some control over how the infrastructure is managed.

4. Virtual Private Cloud (VPC)

• Description: A private cloud that exists within a public cloud environment but is logically
isolated from other tenants. It uses virtualization technology to create private networks and
resources within the public cloud provider’s infrastructure.

• Advantages: Combines the scalability of the public cloud with the security and isolation of
a private environment. Organizations benefit from the flexibility and cost-effectiveness of
the public cloud while maintaining control over their resources.

• Challenges: The organization still depends on the public cloud provider for some elements
of infrastructure and may face vendor lock-in.
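
As a concrete illustration of logical isolation inside a public cloud, the sketch below uses AWS's boto3 library to create a VPC and a subnet; the region and CIDR ranges are placeholder assumptions, not recommendations.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    # Create the isolated address space, then a subnet inside it.
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
    vpc_id = vpc["Vpc"]["VpcId"]
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

    print(vpc_id, subnet["Subnet"]["SubnetId"])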

5. Community Cloud

• Description: A private cloud shared by multiple organizations with similar goals or requirements, such as government agencies or financial institutions. The infrastructure is managed and operated by one of the participating organizations or a third-party provider.

• Advantages: Cost-sharing among multiple organizations reduces the financial burden. Provides a tailored solution to meet specific industry regulations or requirements.

• Challenges: Limited customization and control for each organization, as resources and
decisions are shared among the community members.

6. Hybrid Private Cloud

• Description: A combination of private cloud and public cloud elements. Organizations may
use a private cloud for sensitive data and applications, while leveraging public cloud
resources for less critical operations.

• Advantages: Provides the flexibility to balance cost and performance. Organizations can
scale quickly by integrating public cloud services when additional capacity is needed.

• Challenges: Managing both private and public cloud infrastructure can be complex and
may require advanced networking and integration skills.

These types allow businesses to choose a private cloud solution that best meets their security,
compliance, scalability, and cost requirements.

Here are some key disadvantages of a Private Cloud:

1. High Initial Costs

• Setting up a private cloud requires significant capital investment in hardware, software, and
infrastructure. The cost is typically much higher than using public cloud services, especially
for small or medium-sized businesses.

2. Complex Management
• Managing a private cloud is more complex than using public cloud services. It requires
skilled IT personnel to handle tasks such as maintenance, security, and updates. The
organization is responsible for all infrastructure management.

3. Limited Scalability

• While private clouds offer scalability, they are limited by the available hardware and
infrastructure. Scaling up requires purchasing additional equipment, which can take time
and be costly, unlike public clouds, which can scale instantly.

4. Maintenance Overhead

• Ongoing maintenance is a major responsibility, including updates, patches, hardware replacements, and security management. This can create significant overhead in terms of time, resources, and costs.

5. Security Responsibility

• Although private clouds offer a secure environment, the responsibility for implementing and
managing security falls entirely on the organization. This includes encryption, firewalls,
access control, and compliance with regulations.

6. Disaster Recovery Complexity

• Setting up disaster recovery and backup solutions can be more complex in a private cloud
environment. It requires additional infrastructure and planning, often leading to higher
costs and more management complexity.

Private Cloud Services

Private cloud services refer to a range of offerings that provide cloud computing
functionalities within a private infrastructure, ensuring security, control, and customization.
These services can vary based on what an organization needs. Here are the key private cloud
services:

1. Infrastructure as a Service (IaaS)

• Description: IaaS allows organizations to rent or provision computing infrastructure like virtual machines, storage, and networking within a private environment. It gives users full control over their infrastructure while maintaining security.

• Examples: VMware vSphere, Microsoft Hyper-V, OpenStack.

• Benefits: Customization of the infrastructure, security, and full control over data and
applications.

2. Platform as a Service (PaaS)

• Description: PaaS provides a platform allowing users to develop, run, and manage
applications without worrying about the underlying infrastructure. It is ideal for developers
who need a secure environment to build and deploy applications.
• Examples: Red Hat OpenShift, Cloud Foundry.

• Benefits: Speeds up development processes, automates infrastructure management, and offers scalability.

3. Software as a Service (SaaS)

• Description: SaaS in a private cloud is tailored specifically for an organization, offering hosted software solutions that are securely deployed within the private environment. This is ideal for critical business applications where privacy and data security are priorities.

• Examples: Private deployments of applications like Microsoft 365 or Salesforce for specific
organizations.

• Benefits: Secure access to applications, with the privacy of private cloud infrastructure.

4. Backup as a Service (BaaS)

• Description: BaaS provides backup solutions to ensure that data is securely stored and
easily recoverable in the event of data loss or a disaster. The service is hosted within a
private cloud environment, ensuring data confidentiality.

• Examples: Veeam Backup & Replication, CommVault.

• Benefits: Reliable, secure backup with quick recovery options, avoiding the risks associated
with public cloud backups.

5. Disaster Recovery as a Service (DRaaS)

• Description: DRaaS ensures business continuity by replicating and hosting servers and data
in a private cloud environment to provide failover in case of disasters, ensuring that
business-critical operations can continue.

• Examples: Zerto, VMware Site Recovery Manager.

• Benefits: Robust disaster recovery plans with minimal downtime, secure data
management, and easy failover/failback.

6. Storage as a Service (STaaS)

• Description: STaaS offers scalable storage solutions within a private cloud environment,
allowing organizations to store large volumes of data securely and efficiently.

• Examples: Dell EMC Storage, HPE 3PAR.

• Benefits: Secure and scalable storage with custom options for different data types,
including backups, archives, and real-time storage.

7. Database as a Service (DBaaS)

• Description: DBaaS provides secure, scalable, and managed database solutions within a
private cloud infrastructure. Organizations can focus on their data and applications while
leaving database management to the service provider.
• Examples: Oracle Cloud Database, IBM Db2, Microsoft SQL Server in private clouds.

• Benefits: Efficient database management, security, and easy scaling based on requirements.

8. Identity as a Service (IDaaS)

• Description: IDaaS provides secure identity management solutions in a private cloud, allowing organizations to manage user access, authentication, and authorization across different platforms.

• Examples: Okta, Microsoft Azure Active Directory (in private cloud setups).

• Benefits: Secure, centralized identity management with support for multi-factor authentication, role-based access control, and integration with existing applications.

9. Monitoring as a Service (MaaS)

• Description: MaaS offers real-time monitoring of private cloud infrastructure, applications, and security. It helps organizations stay aware of performance issues, security threats, and compliance risks.

• Examples: Nagios, SolarWinds (configured for private environments).

• Benefits: Proactive infrastructure management, ensuring uptime and performance optimization.

10. Security as a Service (SECaaS)

• Description: SECaaS provides advanced security measures, including firewalls, intrusion detection systems (IDS), and encryption, tailored for a private cloud environment. This service ensures that security protocols are up-to-date and managed effectively.

• Examples: Cisco SecureX, Palo Alto Networks, McAfee (configured for private clouds).

• Benefits: Enhanced security features with continuous monitoring, automated threat detection, and data protection.

These private cloud services provide businesses with tailored solutions that combine the flexibility of cloud computing with the privacy and control of dedicated infrastructure.

VM Migration

VM Migration refers to the process of moving a virtual machine (VM) from one physical host or
environment to another. This is done to optimize performance, maintain uptime, or ensure resource
efficiency. There are different types of VM migrations, each serving different purposes. Here are the
main types:

1. Cold Migration

• Description: In cold migration, the virtual machine is powered off before being moved from
one host to another.
• Use Case: When downtime is acceptable, and there’s no need for the VM to be running
during the migration.

• Advantages: Simpler to perform, lower resource overhead.

• Disadvantages: Requires VM shutdown, causing downtime.

2. Live Migration

• Description: Live migration allows moving a running VM from one physical host to another
without stopping the VM. The VM’s memory, state, and storage are transferred while it
remains active.

• Use Case: Used when high availability is required, and downtime is not acceptable.

• Advantages: No downtime, continuous operation.

• Disadvantages: More resource-intensive, network latency can affect migration speed.
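
On OpenStack, for example, an administrator can trigger a live migration through the Nova API. A minimal sketch with the openstacksdk Python library follows; the cloud name and server name are assumptions, and the call normally requires admin privileges.

    import openstack

    conn = openstack.connect(cloud="private-cloud")  # admin credentials assumed

    server = conn.compute.find_server("app-vm-01")   # hypothetical server name
    # host=None lets the scheduler pick the target host; block_migration=True
    # copies local disks along with the memory state.
    conn.compute.live_migrate_server(server, host=None, block_migration=True)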

3. Storage Migration

• Description: Storage migration refers to moving the VM’s data (virtual disks or storage) from
one storage location to another, either within the same host or across different storage
systems.

• Use Case: Used when upgrading storage systems, balancing storage load, or moving data to
faster storage.

• Advantages: Improves performance and optimizes storage utilization.

• Disadvantages: Potential for high I/O impact during the migration process.

4. Hot Migration

• Description: Hot migration is similar to live migration, where the VM remains powered on
while it is transferred from one host to another.

• Use Case: When you want to move a VM without interrupting its services or stopping it.

• Advantages: Minimal service disruption, near-zero downtime.

• Disadvantages: Requires efficient network bandwidth and resources.

5. Hybrid Migration

• Description: Hybrid migration is a combination of live migration and cold migration techniques. Part of the VM’s state is moved while it’s running, and some parts are moved while it’s powered off.

• Use Case: Useful in scenarios where partial downtime is acceptable, but you want to
minimize it.

• Advantages: Balances downtime and complexity, improves flexibility.


• Disadvantages: More complicated than either live or cold migration alone.

6. Cross-Data Center Migration

• Description: This involves migrating VMs between data centers, typically over wide-area
networks (WAN). The migration can be live or offline depending on the network speed and
latency.

• Use Case: Used when moving workloads between geographically separated data centers
for load balancing, disaster recovery, or regulatory reasons.

• Advantages: Enables geographic redundancy and disaster recovery.

• Disadvantages: High complexity and network latency can cause delays in the migration
process.

7. Manual Migration

• Description: In manual migration, an administrator manually moves the VM from one host
to another. This can be done either while the VM is powered off (cold) or powered on (live).

• Use Case: Typically used in smaller environments or when automated tools are unavailable.

• Advantages: Simple to perform in small environments.

• Disadvantages: Not scalable, more prone to human errors.

8. Automatic Migration

• Description: Automatic migration uses tools like VMware DRS (Distributed Resource
Scheduler) or Microsoft Hyper-V to automatically move VMs based on workload balancing
or fault tolerance needs.

• Use Case: Used for dynamic environments where workloads frequently change and need to
be optimized.

• Advantages: Optimizes resource usage, reduces manual intervention.

• Disadvantages: Requires advanced automation tools and configuration.

9. Host-to-Host Migration

• Description: This type involves migrating a VM from one physical host to another in the
same data center or cluster. It can be either live or cold.

• Use Case: Typically used for hardware maintenance or balancing the load across multiple
hosts in a cluster.

• Advantages: Ensures optimized hardware utilization, easy in well-connected environments.

• Disadvantages: Requires both hosts to be compatible and part of the same cluster.

10. Cloud-to-Cloud Migration


• Description: Moving a VM or workload from one cloud provider to another (e.g., from AWS to
Azure). This can involve reconfiguring the VM for the new environment.

• Use Case: When shifting workloads between cloud environments to avoid vendor lock-in or
for cost optimization.

• Advantages: Cloud flexibility, competitive pricing options.

• Disadvantages: May involve downtime, different cloud platforms might require reconfiguration.

Each type of VM migration has its specific use cases and challenges, and the choice of method depends on factors like downtime tolerance, resource availability, and migration complexity.

Let's break down each stage of Hot/Live VM Migration and explain it step by step:

Stage 1: Reservation

• Process: A migration request is sent from the source host (Host A) to the target host (Host
B). During this process, Host B checks if it has the necessary resources (CPU, memory,
storage) to accommodate the migrating VM.

• Outcome:

o If resources are available, Host B reserves a VM container of the required size.

o If resources are insufficient, the migration request is rejected, and the VM continues
to run unaffected on Host A.

Stage 2: Iterative (Repetitive) Pre-Copy

• Process: In this phase, the VM’s memory pages from Host A are transferred to Host B in a
series of iterations. Initially, all memory pages are copied to Host B.

• Subsequent Iterations: After the initial copy, only the memory pages that have been
modified (referred to as "dirty pages") during the transfer process are sent over in the
subsequent iterations.

• Goal: To minimize the amount of data that needs to be transferred during the actual switch-
over (stop-and-copy phase) by iterating until the number of dirty pages becomes very small.

Stage 3: Stop and Copy

• Process: At this stage, the source VM (on Host A) is temporarily stopped. The remaining dirty
pages (those that were changed during the pre-copy phase) are copied to Host B. This is a
quick process as most of the data has already been transferred during the pre-copy phase.

• Goal: To transfer the final state of the VM to Host B while keeping the downtime as short as
possible.
Stage 4: Commitment

• Process: Once the remaining memory pages are copied, the migration process reaches the
commitment stage. The target VM (on Host B) is now ready to take over the operation.

• Action: At this point, the migration either proceeds to completion or, if any error occurs, the
migration can be aborted, and the VM continues to run on Host A.

Stage 5: Activation

• Process: Host B activates the VM using the copied data, and the VM resumes operation on
the new host. All network connections and processes are switched to Host B.

• Outcome: The migration is considered successful once the VM is activated and running on
Host B. The resources on Host A are freed, and the migration process is complete.
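
The iterative pre-copy logic of Stages 2 and 3 can be illustrated with a small, self-contained Python simulation. The page counts and dirty rate below are made-up numbers for illustration, not measurements.

    import random

    TOTAL_PAGES = 100_000   # assumed VM memory size, in pages
    DIRTY_RATE = 0.05       # assumed pages dirtied per page transferred
    STOP_THRESHOLD = 100    # enter stop-and-copy once the dirty set is this small
    MAX_ROUNDS = 30

    dirty = set(range(TOTAL_PAGES))   # Stage 2, round 1: copy every page
    for round_no in range(1, MAX_ROUNDS + 1):
        transferred = len(dirty)
        # While we copy, the still-running VM dirties pages again; the number
        # dirtied scales with how long the copy takes, i.e. with pages sent.
        dirty = {random.randrange(TOTAL_PAGES)
                 for _ in range(int(transferred * DIRTY_RATE))}
        print(f"round {round_no}: sent {transferred} pages, "
              f"{len(dirty)} dirtied during transfer")
        if len(dirty) <= STOP_THRESHOLD:
            break

    # Stage 3: pause the VM and send the small remaining dirty set.
    print(f"stop-and-copy: {len(dirty)} pages, so downtime stays brief")

Each round shrinks the dirty set, which is exactly why the final stop-and-copy pause is short.
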
Benefits of VM Migration

1. Load Balancing

o Benefit: By migrating VMs from overloaded hosts to underutilized ones, resource usage can be optimized. This helps balance the workload across multiple physical servers, ensuring consistent performance.

o Use Case: Cloud service providers use VM migration to distribute workloads across data centers, improving efficiency.

2. Fault Tolerance and High Availability

o Benefit: In case of hardware failure or system maintenance, VM migration allows transferring VMs to another host without service interruption, ensuring uptime and minimizing downtime.

o Use Case: Businesses can use live migration to prevent downtime during
hardware maintenance or upgrades.

3. Energy Efficiency

o Benefit: By consolidating VMs onto fewer physical servers, idle servers can be
powered down, reducing energy consumption in data centers.

o Use Case: Green IT initiatives use VM migration to reduce carbon footprints by maximizing server utilization.

4. Hardware Maintenance and Upgrades

o Benefit: IT administrators can perform maintenance or hardware upgrades on physical servers without shutting down VMs, ensuring continuous service availability.

o Use Case: Migration can be done during maintenance windows, minimizing impact on users.

5. Disaster Recovery

o Benefit: VM migration plays a critical role in disaster recovery strategies by allowing VMs to be moved to a backup or safe location in case of an emergency, ensuring data safety.

o Use Case: Enterprises use VM migration to migrate data and workloads to remote sites for disaster recovery and business continuity.

6. Scalability

o Benefit: VM migration enables the dynamic allocation of resources to different VMs, allowing IT teams to scale infrastructure based on application demands without downtime.
o Use Case: During peak traffic periods, VMs can be moved to more powerful
hosts for increased performance.

7. Performance Optimization

o Benefit: VM migration allows moving applications to hardware that best fits the
performance requirements of that specific application, optimizing response
times and resource usage.

o Use Case: Migrating database VMs to high-performance servers can improve application responsiveness.

8. Cost Efficiency

o Benefit: By maximizing resource usage through migration, companies can reduce the need for additional hardware, saving costs associated with purchasing, maintaining, and powering servers.

o Use Case: Data centers use VM migration to optimize hardware utilization, lowering capital and operational expenses.

9. Geographic Flexibility

o Benefit: VMs can be migrated between data centers in different geographic locations, allowing for better disaster recovery plans, localized content delivery, or regulatory compliance.

o Use Case: Enterprises with global operations can migrate VMs across different
regions to serve local user bases better or meet regulatory requirements.

10. Testing and Development

o Benefit: VM migration allows easy transfer of virtual environments for testing purposes without affecting production systems.

o Use Case: Developers can move VMs to isolated environments for testing new
software versions without disrupting live environments.

Challenges of VM Migration

1. Network Bandwidth Limitation

o Challenge: Live migration requires significant network bandwidth to transfer VM state and data between hosts. Limited bandwidth can lead to slower migration or failure.

o Impact: Slower migrations, performance degradation, and potential downtimes during migration.

2. High Resource Consumption


o Challenge: The migration process consumes CPU, memory, and storage resources on both the source and target hosts, potentially impacting performance.

o Impact: Resource contention, increased latency, and potential performance degradation on running VMs.

3. Latency and Downtime

o Challenge: Despite live migration being designed to minimize downtime, there is still a brief pause during the stop-and-copy phase, which can affect real-time applications.

o Impact: Services may face a small downtime or disruption during the final
phase of migration.

4. Complexity in Cross-Platform Migrations

o Challenge: Migrating VMs across different platforms (e.g., between different hypervisors or cloud environments) can be complex and may require reconfiguration or even application downtime.

o Impact: Additional effort in managing cross-platform compatibility, reconfiguring systems, or even rebuilding VMs.

5. Storage Dependencies

o Challenge: VMs rely on underlying storage systems, and migrating VMs with
large amounts of storage (or complex storage setups) can be slow and
resource-intensive.

o Impact: Slow migration times for large or complex VMs, leading to potential
service interruptions.

6. Security Concerns

o Challenge: During migration, VMs are in transit and may be vulnerable to security threats, such as man-in-the-middle attacks or data interception.

o Impact: Potential data breaches or exposure to unauthorized access if proper encryption and security measures are not in place.

7. Compatibility Issues

o Challenge: Incompatible hardware or software configurations between the source and target hosts can result in failed migrations or degraded performance.

o Impact: Migration failures, downtime, or performance inconsistencies post-migration.

8. Application Downtime for Non-Live Migrations


o Challenge: Cold migrations require shutting down the VM, which causes
downtime. For mission-critical applications, this downtime may not be
acceptable.

o Impact: Business continuity may be impacted by planned or unplanned downtime during migration.

9. Risk of Data Loss

o Challenge: Improperly executed migrations can lead to data corruption or loss, especially if the process is interrupted or not correctly managed.

o Impact: Business-critical data may be at risk if migrations are not performed carefully.

10. Cost of Migration Tools

o Challenge: Implementing VM migration in large infrastructures often requires sophisticated tools or platforms, which come with high costs.

o Impact: Increased operational costs for enterprises, especially those requiring frequent migrations across multiple data centers.

VM migration offers several benefits like optimizing resources, reducing downtime, and ensuring
business continuity, but it also comes with challenges such as network limitations, security risks,
and complexities in managing large-scale migrations. Proper planning and infrastructure readiness
are key to ensuring a smooth migration process.

Here are the benefits of cloud provisioning in short:

1. Scalability: Quickly scale resources up or down based on demand, ensuring optimal performance without over-provisioning.

2. Cost Efficiency: Pay only for the resources you use, reducing upfront capital expenditure on
hardware and infrastructure.

3. Speed and Agility: Provision resources on-demand in minutes, accelerating development and deployment processes.

4. Flexibility: Supports various deployment models (private, public, hybrid) tailored to different
business needs.

5. Resource Optimization: Automates resource allocation, ensuring efficient use of infrastructure without manual intervention.

6. Improved Disaster Recovery: Provision backup resources quickly to ensure business continuity during failures or disasters.

7. Enhanced Collaboration: Teams can provision resources independently, boosting productivity and reducing bottlenecks.
8. Global Accessibility: Provision cloud resources in different geographical locations for faster
access and compliance.

9. Security and Compliance: Automated provisioning ensures consistent application of security policies and compliance requirements.

10. Reduced Management Overhead: Simplifies IT operations, reducing the burden of maintaining physical hardware and infrastructure.

Types of Cloud Provisioning

1. Manual Provisioning

o Description: Resources are allocated and managed manually by IT staff.

o Use Case: Suitable for small organizations with less frequent resource changes.

2. Automated Provisioning

o Description: Uses scripts or tools to automate the provisioning process, allowing for
rapid and consistent resource allocation.

o Use Case: Ideal for large-scale deployments and dynamic environments.

3. Self-Service Provisioning

o Description: End-users can provision resources themselves through a user interface or portal, reducing IT bottlenecks.

o Use Case: Enables development teams to quickly access resources as needed.

4. Dynamic Provisioning

o Description: Resources are provisioned automatically based on real-time usage metrics, scaling in or out as required (a sketch of such a loop follows after this list).

o Use Case: Useful for applications with variable workloads to ensure optimal
performance.

5. Hybrid Provisioning

o Description: Combines on-premises resources with public or private cloud resources, allowing for flexibility and optimization.

o Use Case: Organizations with specific regulatory or performance needs can utilize
both environments.
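
Referring back to dynamic provisioning (item 4), here is a minimal Python sketch of the control loop. get_average_cpu, add_instance, and remove_instance are hypothetical hooks standing in for your monitoring and provisioning APIs, and the thresholds are illustrative only.

    import time

    SCALE_UP_AT = 80.0     # percent CPU; illustrative threshold
    SCALE_DOWN_AT = 20.0   # percent CPU; illustrative threshold
    CHECK_INTERVAL = 60    # seconds between metric polls

    def autoscale_loop(get_average_cpu, add_instance, remove_instance):
        """Poll a usage metric and provision or deprovision accordingly."""
        while True:
            cpu = get_average_cpu()      # hypothetical monitoring call
            if cpu > SCALE_UP_AT:
                add_instance()           # hypothetical provisioning call
            elif cpu < SCALE_DOWN_AT:
                remove_instance()        # hypothetical deprovisioning call
            time.sleep(CHECK_INTERVAL)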

Challenges of Cloud Provisioning

1. Complexity
o Challenge: Managing and integrating multiple provisioning types and environments
can become complicated.

o Impact: Increased risk of misconfiguration and management overhead.

2. Security Risks

o Challenge: Self-service and automated provisioning can lead to security vulnerabilities if not properly controlled.

o Impact: Potential unauthorized access and data breaches.

3. Cost Management

o Challenge: Without proper monitoring, resource usage can lead to unexpected costs due to over-provisioning or unused resources.

o Impact: Budget overruns and inefficiencies.

4. Performance Issues

o Challenge: Improperly provisioned resources can lead to performance bottlenecks or inadequate resource allocation.

o Impact: Poor application performance and user experience.

5. Compliance Challenges

o Challenge: Ensuring that provisioning processes comply with regulatory standards can be complex, especially in hybrid environments.

o Impact: Increased risk of non-compliance penalties.

6. Vendor Lock-In

o Challenge: Relying on specific cloud providers can create dependencies that are
difficult to migrate away from.

o Impact: Limited flexibility and increased costs over time.

7. Lack of Visibility

o Challenge: Difficulty in tracking and monitoring provisioned resources can lead to inefficiencies and compliance issues.

o Impact: Inability to manage resources effectively.

8. Integration with Legacy Systems

o Challenge: Integrating cloud provisioning with existing on-premises systems can be complex and resource-intensive.

o Impact: Potential disruptions and increased workload for IT teams.

9. Change Management
o Challenge: Managing changes in provisioning processes requires proper planning
and communication to avoid disruptions.

o Impact: Resistance to change and potential service outages.

10. Skill Gaps

• Challenge: Organizations may lack the necessary skills and expertise to effectively manage
cloud provisioning.

• Impact: Increased reliance on external consultants and higher operational risks.

Here are some examples of cloud provisioning across various cloud service providers and
scenarios:

1. AWS (Amazon Web Services)

• Example: Amazon EC2 Auto Scaling

o Description: Automatically adjusts the number of Amazon EC2 instances in response to incoming traffic. It provisions instances based on predefined scaling policies to handle peak loads and reduce costs during low-demand periods.
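
For example, a target-tracking policy can be attached to an existing Auto Scaling group with the boto3 library; the sketch below assumes a group named "web-asg" and an illustrative 50% CPU target.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Keep the group's average CPU near 50%; instances are provisioned
    # or terminated automatically to hold that target.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",          # hypothetical group name
        PolicyName="keep-cpu-near-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": 50.0,
        },
    )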

2. Microsoft Azure

• Example: Azure Resource Manager (ARM)

o Description: Allows users to create, update, and manage resources in Azure using
templates. ARM supports automated provisioning of multiple resources in a single
operation through declarative templates.

3. Google Cloud Platform (GCP)

• Example: Google Kubernetes Engine (GKE)

o Description: Automatically provisions and manages Kubernetes clusters for running containerized applications. GKE can automatically scale clusters based on workload demands and allows for self-service provisioning of resources.

4. IBM Cloud

• Example: IBM Cloud Schematics

o Description: Utilizes Terraform templates for automated provisioning of cloud resources. Users can define infrastructure as code, allowing for consistent and repeatable resource provisioning across environments.

5. Oracle Cloud

• Example: Oracle Cloud Infrastructure (OCI) Resource Manager


o Description: Provides automation for provisioning and managing infrastructure
using Terraform. Users can define their infrastructure in code and manage changes
easily through the resource manager.

6. VMware Cloud

• Example: VMware vRealize Automation

o Description: Enables automated provisioning of VMs and applications across on-premises and cloud environments. It allows users to create blueprints for resources that can be easily deployed and managed.

7. DigitalOcean

• Example: Droplets

o Description: DigitalOcean's virtual machines (Droplets) can be quickly provisioned via a web interface or API. Users can select different configurations and deploy a VM in seconds, enabling fast scaling.

8. Heroku

• Example: Dyno Provisioning

o Description: Heroku uses dynos to run applications. Developers can easily scale
their applications by provisioning additional dynos or changing dyno types through a
simple command in the CLI.

9. Alibaba Cloud

• Example: Elastic Compute Service (ECS)

o Description: Provides on-demand compute capacity that can be provisioned quickly. Users can choose different instance types, and it supports auto-scaling to adjust to traffic loads.

10. OpenStack

• Example: OpenStack Horizon

o Description: OpenStack's web dashboard allows users to provision and manage cloud resources, including compute, storage, and networking, through a user-friendly interface.

These examples illustrate the diverse cloud provisioning capabilities across different platforms,
highlighting automation, scalability, and ease of use for users in various scenarios.

OpenStack is an open-source cloud computing platform that enables users to deploy and manage
cloud infrastructure and services in a flexible and scalable manner. It provides a set of software
tools for building and managing cloud computing environments, typically deployed as
infrastructure-as-a-service (IaaS). Here’s a detailed overview of what OpenStack does and how it
works:

What OpenStack Does

1. Infrastructure Management:

o OpenStack allows users to create and manage virtualized computing resources (like
virtual machines), storage, and networking. It can run on standard hardware,
enabling users to turn their physical servers into a cloud environment.

2. Multi-Tenancy:

o It supports multiple users (tenants) on a single cloud infrastructure, allowing each tenant to have an isolated environment and resources, enhancing security and resource management.

3. Resource Provisioning:

o Users can dynamically provision and deprovision resources as needed, enabling efficient resource utilization based on demand.

4. Self-Service:

o Users can deploy their applications and services through self-service dashboards or
APIs, allowing for increased agility and reduced dependency on IT departments.

5. Scalability:

o OpenStack can scale out by adding more hardware or resources as demand grows,
making it suitable for large-scale deployments.

6. Modularity:

o OpenStack consists of several interconnected components (services) that can be deployed individually or together, depending on user needs.

How OpenStack Works

OpenStack operates through a set of core components, each responsible for different aspects of
cloud management. Here are the main components and their functions:

1. Nova (Compute):

o Manages the lifecycle of virtual machines, including provisioning, scheduling, and managing instances. It interacts with hypervisors (like KVM, VMware, etc.) to run the virtual machines.

2. Neutron (Networking):
o Provides network connectivity as a service. It allows users to create and manage
networks, subnets, and routers, supporting advanced networking features like load
balancing and VPNs.

3. Cinder (Block Storage):

o Manages block storage for virtual machines. It enables users to create, attach, and
manage volumes of storage, ensuring data persistence.

4. Swift (Object Storage):

o A highly scalable object storage system that allows users to store and retrieve large
amounts of unstructured data, such as images and backups.

5. Glance (Image Service):

o Provides a registry for storing and retrieving virtual machine disk images. It allows
users to create snapshots and manage images for instance deployment.

6. Horizon (Dashboard):

o A web-based user interface for OpenStack that allows users to manage and
visualize resources, services, and configurations easily.

7. Keystone (Identity Service):

o Manages authentication and authorization for users and services. It provides a centralized directory for users and their roles within the cloud environment.

8. Heat (Orchestration):

o Enables users to create and manage cloud applications using templates. It automates the deployment of resources and their interdependencies.

9. Ceilometer (Telemetry):

o Collects and monitors usage metrics and statistics across all OpenStack services,
helping with billing and capacity planning.
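
This division of labor is visible in client code. The hedged sketch below uses the openstacksdk Python library, where a single Keystone-authenticated connection exposes each service; the cloud name "private-cloud" is an assumed clouds.yaml entry.

    import openstack

    # Keystone authenticates this connection using clouds.yaml.
    conn = openstack.connect(cloud="private-cloud")

    for server in conn.compute.servers():        # Nova
        print("server:", server.name, server.status)
    for image in conn.image.images():            # Glance
        print("image:", image.name)
    for net in conn.network.networks():          # Neutron
        print("network:", net.name)
    for vol in conn.block_storage.volumes():     # Cinder
        print("volume:", vol.name, vol.size, "GB")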

Deployment and Operation

• Deployment: OpenStack can be installed on bare-metal servers, virtual machines, or in hybrid environments. Deployment tools like DevStack and Packstack can simplify the process.

• Configuration: Once installed, OpenStack is configured through a series of configuration files, allowing customization based on organizational needs.

• Management: Administrators can manage the OpenStack environment through the Horizon
dashboard or command-line tools. APIs are available for programmatic access and
automation.
• Community and Support: Being open-source, OpenStack has a vibrant community that
contributes to its development. It also offers various distributions (like Red Hat OpenStack,
Canonical’s Charmed OpenStack, etc.) that provide additional support and enterprise
features.

Conclusion

OpenStack provides a flexible and powerful platform for building and managing cloud
environments. Its modular architecture, combined with open-source principles, allows
organizations to customize their cloud infrastructure to meet their specific needs while leveraging
the benefits of scalability, multi-tenancy, and self-service capabilities.

OpenStack consists of several key components, each designed to handle specific functionalities
within the cloud infrastructure. Here’s a breakdown of the main OpenStack components:

1. Nova (Compute)

• Function: Manages the lifecycle of virtual machines (VMs).

• Features: Handles provisioning, scheduling, and management of VMs, supporting various hypervisors like KVM, VMware, and Xen.

2. Neutron (Networking)

• Function: Provides networking as a service.


• Features: Manages networks, subnets, routers, and load balancers. Enables advanced
networking functionalities like security groups and floating IPs.

3. Cinder (Block Storage)

• Function: Manages block storage resources.

• Features: Allows users to create, attach, and manage storage volumes for VMs, ensuring
data persistence across reboots and migrations.

4. Swift (Object Storage)

• Function: Provides scalable object storage.

• Features: Stores and retrieves unstructured data, like images and backups. It supports high
availability and durability.

5. Glance (Image Service)

• Function: Manages VM disk images.

• Features: Stores, retrieves, and manages images used for launching VMs, including
snapshot capabilities for existing instances.

6. Horizon (Dashboard)

• Function: Provides a web-based user interface.

• Features: Allows users and administrators to manage OpenStack resources and services
visually, including monitoring and configuring settings.

7. Keystone (Identity Service)

• Function: Manages authentication and authorization.

• Features: Provides a centralized directory for user identities, roles, and permissions across
OpenStack services, enabling secure access control.

8. Heat (Orchestration)

• Function: Manages the orchestration of cloud resources.

• Features: Enables users to define and deploy complex cloud applications through
templates, automating the resource provisioning process.

9. Ceilometer (Telemetry)

• Function: Monitors and collects metrics.

• Features: Gathers usage data across OpenStack services, facilitating billing, reporting, and
resource management through telemetry data.

10. Trove (Database as a Service)

• Function: Provides managed database services.


• Features: Allows users to provision and manage relational and non-relational databases,
such as MySQL, PostgreSQL, and MongoDB.

11. Magnum (Container Management)

• Function: Manages container orchestration engines.

• Features: Integrates with Kubernetes, Docker Swarm, and Apache Mesos, allowing users to
provision and manage container clusters.

12. Barbican (Key Management)

• Function: Manages secrets and encryption keys.

• Features: Provides a secure interface for storing and retrieving sensitive information like
encryption keys and passwords.

13. Ironic (Bare Metal Provisioning)

• Function: Provides bare metal provisioning services.

• Features: Manages physical servers as if they were virtual machines, allowing users to
deploy workloads directly on hardware.

14. Designate (DNS as a Service)

• Function: Provides DNS management.

• Features: Allows users to manage DNS records and zones within their OpenStack
environments.

15. Senlin (Cluster Management)

• Function: Manages clusters of similar resources.

• Features: Automates the lifecycle management of clusters, including scaling, healing, and
updating.

Conclusion

These components work together to create a comprehensive cloud infrastructure platform, allowing organizations to deploy, manage, and scale their cloud resources efficiently. OpenStack's modular architecture enables users to choose the components that best meet their needs, making it a versatile solution for a wide range of cloud computing applications.

Here are the pros and cons of using OpenStack for cloud infrastructure:

Pros of OpenStack

1. Open Source:

o Free to use and modify, promoting innovation and flexibility without vendor lock-in.
o A large community contributes to continuous improvements and updates.

2. Modularity:

o Composed of multiple components (like Nova, Neutron, etc.) that can be deployed
independently, allowing for customized solutions based on specific needs.

3. Scalability:

o Easily scales to accommodate growing workloads by adding more hardware or nodes, making it suitable for large enterprises and service providers.

4. Flexibility:

o Supports a wide variety of hypervisors, storage backends, and networking options, enabling users to tailor the environment to their requirements.

5. Multi-Tenancy:

o Supports multiple users and projects on a single cloud infrastructure, providing resource isolation and enhancing security.

6. Self-Service Portal:

o Offers a user-friendly dashboard (Horizon) and APIs for users to manage their
resources, enabling self-service provisioning and management.

7. Support for Various Workloads:

o Suitable for running a range of workloads, including web applications, big data, and
containerized services.

8. Integration with DevOps Tools:

o Easily integrates with various DevOps tools and CI/CD pipelines, streamlining
application deployment and management processes.

9. Robust Security Features:

o Provides multiple layers of security, including role-based access control, identity management, and network isolation.

10. Vendor Independence:

o Users can choose their hardware and software stack without being tied to a specific
vendor, allowing for cost-effective solutions.

Cons of OpenStack

1. Complexity:
o Setting up and configuring OpenStack can be complex, requiring substantial
technical knowledge and expertise.

2. Resource Intensive:

o Requires significant resources (CPU, memory, storage) to run efficiently, which may
lead to higher infrastructure costs.

3. Steep Learning Curve:

o New users may find it challenging to understand the architecture, components, and
management processes, necessitating training and support.

4. Lack of Comprehensive Documentation:

o While there is a wealth of documentation, some areas may lack depth, making it
challenging to find solutions to specific issues.

5. Variable Performance:

o Performance can vary based on configuration and the underlying hardware, leading
to inconsistencies in resource availability.

6. Limited Vendor Support:

o While the community is active, official vendor support can be limited compared to
proprietary solutions, which may affect critical deployments.

7. Integration Challenges:

o Integrating OpenStack with existing enterprise systems or legacy applications can be complex and time-consuming.

8. Frequent Updates:

o Regular updates and changes can introduce instability or require continuous adjustments, affecting ongoing operations.

9. Potential for Configuration Errors:

o Given the complexity of components, misconfigurations can occur, leading to security vulnerabilities or performance issues.

10. Fragmentation:

o Different OpenStack distributions may introduce compatibility issues, leading to confusion regarding features and support.

Conclusion

OpenStack offers a powerful and flexible cloud computing platform suitable for various
organizations, from startups to large enterprises. However, potential users should carefully
consider the complexities and challenges associated with its deployment and management to
ensure it aligns with their operational capabilities and business goals.

The following is a detailed outline of how to set up a private cloud on Google Cloud Platform (GCP), organized into clear sections.

Private Cloud on Google Cloud Platform (GCP)

A private cloud on GCP is a dedicated environment that provides a level of control and security
similar to a traditional on-premises data center. This environment is isolated from other customers,
offering enhanced security and compliance.

Core Components of a Private Cloud in GCP

1. VPC Network:

o The fundamental building block that provides a logical network for your resources.

2. VM Instances:

o Virtual machines running applications and workloads within the VPC network.

3. Firewall Rules:

o Control network traffic in and out of your VPC, ensuring security and isolation.

4. Cloud Storage:

o Provides persistent storage for your data, including files, images, and other content.

5. Cloud SQL:

o Fully managed relational database service for your applications.

6. Cloud DNS:

o Scalable and reliable DNS service for your domain names.

7. Cloud Load Balancing:

o Distributes traffic across multiple VM instances, improving performance and availability.

8. Cloud Identity and Access Management (IAM):

o Offers fine-grained control over access to your resources.


Steps to Create a Private Cloud on GCP

1. Create a VPC Network

• Go to the VPC Networks page in the GCP console.

• Click "Create VPC Network."

• Provide details:

o Name and description for your VPC network.

o Region and subnet configuration.

• Set up firewall rules:

o Default Allow: Allow all internal traffic within the VPC network.

o Ingress rules: Allow incoming traffic from external networks (e.g., SSH for remote
access or HTTP for web servers).

o Egress rules: Allow outgoing traffic from your VPC network (e.g., outbound internet
access).
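
Besides the console, the same VPC can be created programmatically. Here is a minimal sketch using the google-cloud-compute Python client; the project ID, resource names, region, and CIDR range are placeholders.

    from google.cloud import compute_v1

    PROJECT = "my-project"   # placeholder project ID

    # Custom-mode VPC: subnets are created explicitly.
    network = compute_v1.Network()
    network.name = "private-vpc"
    network.auto_create_subnetworks = False
    compute_v1.NetworksClient().insert(
        project=PROJECT, network_resource=network
    ).result()   # wait for the operation to finish

    subnet = compute_v1.Subnetwork()
    subnet.name = "private-subnet"
    subnet.ip_cidr_range = "10.10.0.0/24"
    subnet.network = f"projects/{PROJECT}/global/networks/private-vpc"
    compute_v1.SubnetworksClient().insert(
        project=PROJECT, region="us-central1", subnetwork_resource=subnet
    ).result()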

2. Create VM Instances

• Go to the VM Instances page in the GCP console.

• Click "Create Instance."

• Provide details:

o Name and description for your VM instance.

o Choose the machine type, zone, and boot disk.

• Configure network interfaces:

o Assign them to your VPC network.

• Set up boot disk and network configuration:

o Choose a boot disk image (e.g., Ubuntu, CentOS) or create a custom image.

3. Configure Firewall Rules (Detailed)

• Create rules to allow necessary traffic:

o Specify: Source and destination IP ranges, protocols, ports, and actions.

o Example rules:

▪ SSH access: Allow inbound TCP traffic on port 22 from specific IP addresses.
▪ HTTP/HTTPS access: Allow inbound TCP traffic on ports 80 and 443 from the
internet.

▪ Database access: Allow inbound TCP traffic on specific ports (e.g., 3306 for
MySQL) from specific IP addresses.
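
The SSH rule above can also be expressed in code. This is a hedged sketch with the google-cloud-compute Python client; the project, network name, and source range are placeholders.

    from google.cloud import compute_v1

    PROJECT = "my-project"   # placeholder project ID

    rule = compute_v1.Firewall()
    rule.name = "allow-ssh-admin"
    rule.network = f"projects/{PROJECT}/global/networks/private-vpc"
    rule.direction = "INGRESS"
    rule.source_ranges = ["203.0.113.0/24"]   # placeholder admin range

    allowed = compute_v1.Allowed()
    allowed.I_p_protocol = "tcp"   # field name as generated by this client
    allowed.ports = ["22"]
    rule.allowed = [allowed]

    compute_v1.FirewallsClient().insert(
        project=PROJECT, firewall_resource=rule
    ).result()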

4. Set Up Cloud Storage, Cloud SQL, and Cloud DNS

• Cloud Storage:

o Create Cloud Storage buckets for data.

• Cloud SQL:

o Set up instances for your databases, choosing a database engine (e.g., MySQL,
PostgreSQL).

o Configure database settings and user permissions.

• Cloud DNS:

o Create DNS zones for your domain names and add DNS records.
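
Of these three services, Cloud Storage is the simplest to script. A minimal sketch with the google-cloud-storage library (the bucket name is a placeholder and must be globally unique):

```python
# Minimal sketch: create a regional Cloud Storage bucket.
from google.cloud import storage

client = storage.Client(project="my-project")
bucket = client.create_bucket("my-private-cloud-data", location="us-central1")
print(f"Created bucket {bucket.name} in {bucket.location}")
```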

5. Set Up Cloud Load Balancing (Optional)

• Create a load balancer to distribute traffic across multiple VM instances.

• Choose load balancer type (e.g., HTTP(S), TCP) and configuration.

• Configure health checks to monitor the health of your VM instances.

6. Configure Cloud Identity and Access Management (IAM)

• Create IAM roles and assign them to users or groups.

• Grant or deny permissions based on roles.

• Implement strong authentication and authorization practices.

7. Connect to Your Private Cloud

• Use SSH or other methods to connect to your VM instances and manage resources within
the private cloud.

This structured approach outlines the key components and steps required to set up a private cloud
on Google Cloud Platform effectively.

The following is a structured breakdown of the steps involved in setting up an OpenStack cloud environment:

Setting Up an OpenStack Cloud Environment

Step 1: Configure the Keystone (Identity Service)

• Overview: Keystone is a major project within the OpenStack software stack, responsible for
identity management.

• Functionality:

o Provides identity, token, credential, catalog, and policy services.

o Manages user permissions and tracks users.

o Maintains a service catalog detailing available services and their API endpoints.

• Installation:

o All installations are performed on the controller node (e.g., 10.208.X.X).

o After installation, restart the service.

o Add the admin and demo users, along with various services and their endpoint
URLs.
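
These post-installation steps can also be performed programmatically. The sketch below uses the openstacksdk library (one common client, not the only option); every credential and endpoint value is a placeholder for your own deployment:

```python
# Minimal sketch: authenticate against Keystone and create the demo user.
import openstack

conn = openstack.connect(
    auth_url="http://10.208.X.X:5000/v3",   # Keystone endpoint on the controller node
    project_name="admin",
    username="admin",
    password="ADMIN_PASS",
    user_domain_name="Default",
    project_domain_name="Default",
)

demo = conn.identity.create_user(name="demo", password="DEMO_PASS", enabled=True)
print(demo.id)
```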

Step 2: Configure the Glance (Image Service)

• Overview: Glance enables users to access, retrieve, and store images and snapshots.

• Default Storage Location: Images and snapshots are stored at /var/lib/glance/images/ on the controller nodes.

• Services:

o glance-api: Accepts API requests for image discovery, retrieval, and storage.

o glance-registry: Stores, processes, and retrieves metadata about images.

• Action: Create a Glance database user.
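
Once Glance is running, an image upload can be scripted with the same openstacksdk connection (`conn`) from the Keystone sketch; the image name and file are placeholders:

```python
# Minimal sketch: upload a qcow2 image to Glance via the openstacksdk cloud layer.
image = conn.create_image(
    "cirros-0.6",
    filename="cirros-0.6.2-x86_64-disk.img",  # assumed to exist locally
    disk_format="qcow2",
    container_format="bare",
    wait=True,  # block until Glance reports the image as active
)
print(image.status)
```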

Step 3: Configure Nova Services

• Overview: Nova is the core service and the heart of OpenStack, responsible for managing
compute resources.

• Functionality:

o Initially managed networking, virtualization, and other tasks.

o As OpenStack evolved, many functionalities were separated into distinct services.

o Supports single or multi-node installations with comprehensive task management.

Step 4: Add a Networking Service (Neutron Service)

• Overview: Networking is a critical component for the success of any cloud.


• Configuration:

o OpenStack provides various options and compatibility with different vendors.

o Utilize Neutron with the ML2 plugin.

Step 4.2: Configuration at Network Node (10.208.X.X)

• Consideration: The network node must have three NICs (Network Interface Cards):

o One for external access.

o One for management.

o One for instance tunneling.
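
With Neutron in place, a tenant network and subnet can be defined through the API. A sketch with openstacksdk, reusing the earlier connection; the CIDR and names are placeholders:

```python
# Minimal sketch: create a tenant network and an IPv4 subnet through Neutron.
net = conn.network.create_network(name="demo-net")
subnet = conn.network.create_subnet(
    name="demo-subnet",
    network_id=net.id,
    ip_version=4,
    cidr="192.168.10.0/24",
)
print(net.id, subnet.cidr)
```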

Step 5: Add the Dashboard at Controller Node (10.208.X.X)

• Overview: Although OpenStack is primarily managed via the command line, it also provides
a GUI dashboard named Horizon.

• Functionality: Horizon allows users to:

o Deploy images.

o Configure virtual networks.

o Manage other cloud resources easily.

Step 6: Launch an Instance

• Overview: After completing the major setup processes, it's time to launch an instance.

• Preparation: Before launching, ensure that:

o An image is uploaded.

o A virtual network is created.

• Note: The setup steps are more time-consuming the first time; subsequent instance
launches are simpler.
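
Assuming the connection, image, and network from the earlier sketches, launching an instance reduces to a few calls (the flavor name is a placeholder that must exist in your deployment):

```python
# Minimal sketch: boot an instance once an image and a virtual network exist.
flavor = conn.compute.find_flavor("m1.small")
image = conn.compute.find_image("cirros-0.6")

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": net.id}],
)
server = conn.compute.wait_for_server(server)  # wait until the instance is ACTIVE
print(server.status)
```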
Data Centers

1. Why Are Data Centers Important?

• Data Storage: Centralized data storage ensures that businesses can store large volumes of
data securely and access it efficiently.

• Business Continuity: Data centers provide backup and disaster recovery services to
maintain operations in case of failures or disasters.

• Scalability: They offer the infrastructure for scaling IT resources (e.g., compute, storage) as
the demand grows.

• Data Processing: Data centers house high-performance computing systems for processing
large-scale data (e.g., for analytics, AI).

• Secure Connectivity: Data centers offer secure networks, connecting businesses globally
while safeguarding against cyber threats.

• Efficiency: Centralized management of IT resources reduces costs, energy usage, and maintenance overhead.

2. Evolution of Modern Data Centers:

• 1960s - Mainframes: Early data centers housed large mainframes in controlled environments with heavy cooling systems.

• 1980s - Client-Server Era: With the rise of client-server computing, businesses began
using smaller, distributed systems, increasing the number of data centers.

• 1990s - Internet Age: The explosion of internet usage created the need for larger-scale data
centers. Virtualization started to gain traction, optimizing server usage.

• 2000s - Cloud Computing: The advent of cloud computing led to massive data centers
managed by cloud providers (e.g., AWS, Azure, Google Cloud). Organizations began to
migrate to the cloud.

• Present - Edge Computing: Modern data centers now integrate edge computing to process
data closer to the source, reducing latency and improving response times. The rise of IoT
and AI has also influenced data center designs.

3. Inside a Data Center:

Data centers are composed of several critical infrastructures:

a. Compute Infrastructure:

• Servers: High-performance machines that process data and run applications.

• Virtualization: Techniques like hypervisors to run multiple virtual machines (VMs) on a single physical server, optimizing hardware utilization.
• Containers: Lightweight, virtualized application environments (e.g., Docker, Kubernetes) for
efficient resource allocation.

b. Storage Infrastructure:

• Storage Area Networks (SAN): High-speed, dedicated networks connecting storage devices with servers.

• Network-Attached Storage (NAS): File-based storage solutions accessible over a standard network.

• Direct-Attached Storage (DAS): Storage devices attached directly to servers.

• Cloud Storage: Remote storage resources provided over the internet by cloud services.

c. Network Infrastructure:

• Routers and Switches: Manage data traffic and ensure efficient communication within the
data center.

• Firewalls: Secure the network from cyber threats by controlling incoming and outgoing
traffic.

• Load Balancers: Distribute network or application traffic across multiple servers to ensure
reliability and performance.

• Cabling: Fiber-optic or copper cables to connect servers, storage, and networking devices.

d. Support Infrastructure:

• Power Supply: Redundant power systems (generators, UPS) to ensure uninterrupted operation.

• Cooling Systems: Air conditioning and cooling towers to regulate temperature and prevent
overheating.

• Fire Suppression: Automatic systems to detect and suppress fires, protecting sensitive
equipment.

• Security Systems: Physical security like biometric access, surveillance, and alarms to
safeguard the facility.

• Monitoring Systems: Tools to monitor performance, temperature, and power usage in real-
time.

1. Data Center Levels and Tiers (Tier 1, 2, 3, 4)

The Tier Classification system, established by the Uptime Institute, is used to rate the
performance, redundancy, and uptime of data centers. There are four tiers that indicate varying
levels of reliability and infrastructure investment:

Tier 1 - Basic Capacity:

• Description: This is the most basic data center infrastructure.

• Features:

o Non-redundant capacity for power and cooling.

o Single path for power and cooling distribution.

o No backup components.

• Uptime Guarantee: 99.671% (about 28.8 hours of downtime per year).

• Use Case: Small businesses or non-mission-critical operations.

• Limitation: Susceptible to outages from planned maintenance or unexpected events.

Tier 2 - Redundant Capacity:

• Description: Offers some level of redundancy in critical components.

• Features:

o Redundant components (e.g., UPS, generators).

o Single path for power and cooling distribution.

o Partial protection against unexpected outages.

• Uptime Guarantee: 99.741% (about 22 hours of downtime per year).

• Use Case: Mid-sized companies with some tolerance for downtime.

• Limitation: Still vulnerable to maintenance downtime or system failures.

Tier 3 - Concurrently Maintainable:

• Description: A data center where maintenance can be performed without taking the
system offline.

• Features:

o Multiple paths for power and cooling, but only one active at a time.

o Redundant components.

o Concurrent maintenance possible without downtime.


• Uptime Guarantee: 99.982% (about 1.6 hours of downtime per year).

• Use Case: Larger enterprises and organizations with high availability needs.

• Limitation: More expensive to implement and maintain than Tier 1 or 2.

Tier 4 - Fault Tolerant:

• Description: The highest level, ensuring continuous operation even during unplanned
events.

• Features:

o Fully redundant infrastructure, with multiple active paths for power and cooling.

o Can tolerate any single failure without downtime.

o Fault-tolerant site infrastructure.

• Uptime Guarantee: 99.995% (about 26 minutes of downtime per year).

• Use Case: Critical services like banking, e-commerce, and cloud providers.

• Limitation: Very high implementation and operational costs.
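
The downtime figures quoted for each tier follow directly from the uptime percentages: annual downtime = (1 − uptime) × 8,760 hours, where 8,760 = 365 × 24. A quick check in Python:

```python
# Annual downtime implied by each tier's uptime guarantee.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

tiers = {"Tier 1": 99.671, "Tier 2": 99.741, "Tier 3": 99.982, "Tier 4": 99.995}
for tier, uptime in tiers.items():
    hours = (1 - uptime / 100) * HOURS_PER_YEAR
    print(f"{tier}: {hours:5.1f} h/year (~{hours * 60:.0f} minutes)")
# Tier 1: ~28.8 h, Tier 2: ~22.7 h, Tier 3: ~1.6 h, Tier 4: ~26 minutes
```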

2. Types of Data Center Services

a. Colocation Services:

• Description: Businesses rent physical space in a data center to house their own servers
and networking equipment.

• Benefits:

o Cost Savings: Reduces the need to build and maintain an in-house data center.

o Scalability: Easy to expand or contract as needed.

o Security: Access to a secure, managed facility with built-in physical and cybersecurity measures.

• Limitations:

o Limited Control: Customers may not have complete control over facility
operations.

o Connectivity Costs: High bandwidth requirements can increase costs.

b. Cloud Data Centers:

• Description: These are hosted by third-party cloud providers (e.g., AWS, Azure, Google
Cloud) offering infrastructure, platforms, and software as services.
• Benefits:

o Scalability: Instantly scale resources up or down based on demand.

o Flexibility: Pay-as-you-go pricing models reduce capital expenditure.

o Global Access: Data and applications are accessible from anywhere.

• Limitations:

o Latency: May experience latency issues depending on the location of the cloud
provider’s data center.

o Security: Sensitive data stored in third-party cloud providers may raise compliance
and security concerns.

c. Managed Hosting:

• Description: A service provider leases dedicated servers to customers, managing the hardware, software, and security on behalf of the client.

• Benefits:

o Hands-Off Management: The service provider handles updates, patches, and monitoring.

o High Performance: Dedicated hardware ensures optimized performance.

o Security: Advanced security measures such as firewalls and intrusion detection are
managed by the host.

• Limitations:

o Cost: Can be expensive compared to other hosting models.

o Limited Customization: Managed services might restrict deep customization of servers.

d. Edge Data Centers:

• Description: These are smaller, decentralized data centers located close to the end-users
or data sources, primarily for low-latency applications.

• Benefits:

o Low Latency: Process data closer to users, improving response times for
applications like IoT or autonomous vehicles.

o Distributed Architecture: Reduces the load on central data centers and networks.

• Limitations:

o Limited Capacity: Edge data centers may lack the extensive capacity of traditional
data centers.
o Maintenance: More challenging to manage and maintain when distributed across
locations.

3. Benefits and Limitations of Data Center Services

Benefits:

• Cost Efficiency: Data center services reduce the need for on-premise hardware and
facilities, reducing upfront capital costs.

• Scalability: Companies can scale infrastructure based on demand, avoiding overprovisioning.

• Expert Management: Access to trained professionals managing complex infrastructure, security, and compliance.

• Business Continuity: Data centers offer built-in disaster recovery and redundancy to
ensure continuous operation.

• Focus on Core Business: Outsourcing data center management allows businesses to focus on their primary objectives rather than IT infrastructure.

Limitations:

• Dependence on Third Parties: Organizations often lose full control over infrastructure
when relying on external providers.

• Security Concerns: Storing sensitive data offsite or in the cloud can raise concerns over
privacy and regulatory compliance.

• Customization: Some managed or cloud services may limit customization of hardware or software environments.

• Connectivity: Remote data centers may introduce latency or downtime risks, especially in
regions with poor connectivity.

• Costs: While operational expenses might be reduced, scaling services (especially in the
cloud) can lead to higher-than-expected operational costs.

How does AWS manage its data centers?

Amazon Web Services (AWS) manages its data centers with a focus on reliability, scalability,
security, and efficiency. AWS operates one of the largest and most complex infrastructures in the
world to support cloud services. Here’s an overview of how AWS manages its data centers:

1. Global Infrastructure Setup

• Regions and Availability Zones (AZs): AWS data centers are organized into geographic
regions (such as North America, Europe, and Asia-Pacific), each consisting of multiple
availability zones. Each AZ is essentially a cluster of physically separate data centers,
offering redundancy and fault isolation.

o Regions: AWS has 32+ regions across the globe, allowing users to deploy their
applications closer to their end-users.

o Availability Zones: AZs within each region are independent and isolated from one
another to prevent a single point of failure. This ensures high availability even if one
data center or AZ goes offline.

2. Redundancy and High Availability

• Redundant Power and Cooling: AWS data centers are equipped with multiple layers of
redundant power sources, including backup generators and uninterruptible power supply
(UPS) systems. Cooling is also designed redundantly to ensure temperature control for
hardware reliability.

• Multiple Network Paths: AWS uses multiple network connections and paths to ensure that if
one path fails, traffic can be rerouted. This improves uptime and network resilience.

• Data Replication: Data is replicated across multiple availability zones and sometimes
across regions to provide high availability and fault tolerance.

3. Security Management

• Physical Security: AWS data centers are equipped with:

o 24/7 Surveillance: CCTV, guards, and motion detection ensure only authorized
personnel can access the facility.

o Biometric Scanning and Badge Access: Only authorized staff can enter secure areas
using multi-factor authentication methods like biometric scanning and keycards.

o Fire Detection and Suppression: Systems to detect smoke, heat, or fire early and
automatically suppress them.

• Logical Security: AWS uses various encryption techniques for data at rest and in transit to
secure user data. They also enforce strict access controls, audits, and monitoring systems
to detect and respond to threats.

4. Energy Efficiency and Sustainability

• Energy Optimization: AWS optimizes energy usage with advanced cooling technologies,
such as evaporative cooling, and uses low-power servers to reduce its carbon footprint.

• Green Energy Initiatives: AWS is committed to renewable energy and aims to achieve 100%
renewable energy for its global infrastructure by 2025. They invest in wind farms, solar farms, and
other sustainable energy projects.

• Data Center Designs: AWS continually redesigns its data centers to optimize power usage
effectiveness (PUE) and reduce energy consumption.
5. Monitoring and Maintenance

• Real-Time Monitoring: AWS uses advanced monitoring systems for real-time visibility into
data center conditions. Monitoring includes hardware performance, power supply,
temperature, network health, and security events.

• Automated Management: AWS employs automation for system maintenance, such as patch management, infrastructure scaling, and performance tuning. Tools like AWS CloudWatch monitor workloads and automatically scale resources based on traffic and performance demands.

• Regular Maintenance: Scheduled maintenance and updates are performed without disrupting operations, leveraging multiple availability zones to prevent downtime during updates.

6. Data Durability and Backup

• Data Replication: AWS uses replication to achieve high durability (e.g., 99.999999999% for
Amazon S3). Data is stored across multiple devices and AZs, ensuring that even in the case
of hardware failures, the data remains intact.

• Backup and Disaster Recovery: AWS has built-in backup solutions and disaster recovery
mechanisms that replicate data across geographically dispersed regions, minimizing the
impact of regional failures or natural disasters.

7. Elastic and Scalable Infrastructure

• Elastic Compute and Storage: AWS data centers are designed for elasticity, allowing users
to scale their compute and storage resources on demand. Services like EC2 (compute) and
S3 (storage) automatically scale based on user demand without manual intervention.

• Auto-Scaling: AWS Auto-Scaling dynamically adjusts the number of compute resources (e.g., virtual machines) based on traffic and load, optimizing costs and performance for users.
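
As a concrete illustration of this behavior, a target-tracking scaling policy can be attached to an Auto Scaling group with boto3; the group name, policy name, and target value below are placeholders:

```python
# Minimal sketch: target-tracking policy keeping average CPU near 50%.
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",            # placeholder group name
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,                   # AWS adds/removes instances around this
    },
)
```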

8. Compliance and Certifications

• Certifications: AWS data centers comply with numerous international standards, including:

o ISO 27001, 27017 (Security management standards)

o SOC 1, SOC 2, and SOC 3 (Service Organization Control)

o PCI-DSS (Payment Card Industry Data Security Standard)

o FedRAMP (Federal Risk and Authorization Management Program) for government use

• Audit and Transparency: AWS provides audit reports and transparency regarding its data
center security and operational procedures, ensuring that customers meet their regulatory
requirements.
9. Automation and Software-Defined Infrastructure

• AWS Control Plane: AWS uses a software-defined infrastructure with automation tools to
manage large-scale resources and services efficiently. This includes deploying, configuring,
and scaling infrastructure resources without manual intervention.

• Self-Healing: AWS automates recovery from hardware failures by automatically replacing or migrating instances in case of any fault, minimizing downtime.

10. Edge Locations and Latency Reduction

• Edge Computing: AWS offers AWS Local Zones and AWS Outposts for edge computing,
bringing compute and storage closer to customers, reducing latency for applications like
video streaming, gaming, and IoT services.

• Content Delivery Network (CDN): AWS uses Amazon CloudFront to deliver content from
edge locations, reducing latency and improving speed for end-users.

Summary:

AWS manages its data centers with a strong emphasis on global infrastructure (regions and
availability zones), redundancy, security, energy efficiency, and automation. By utilizing advanced
technologies for monitoring, scaling, and recovery, AWS provides a highly reliable and secure
environment to support its vast array of cloud services.

AWS Data Center - Security Layers

AWS follows a defense-in-depth approach to secure its data centers, applying multiple security layers to ensure protection at various levels. Here's a breakdown of these security layers:

1. Perimeter Security Layer

• Fencing and Barriers: AWS data centers have physical barriers like high fences, gates, and
walls to protect the facility's perimeter.

• Guard Patrols: Security personnel continuously monitor and patrol the perimeter.
• Surveillance Systems: AWS uses 24/7 CCTV surveillance with infrared and motion-
detection cameras around the perimeter to detect unauthorized access.

• Entry Points Control: The number of entry points into the facility is limited and tightly
controlled. Only approved personnel can access the data center, and their identities are
verified through badges or biometric systems.

• Anti-Vehicle Defenses: Bollards and crash barriers are placed at the perimeter to prevent
unauthorized vehicle entry.

2. Infrastructure Security Layer

• Access Control: Inside the data center, access is highly restricted using multi-factor
authentication mechanisms, such as biometrics (fingerprints or iris scanning) and RFID-
based badges.

• Physical Segmentation: Different parts of the infrastructure (like server rooms) are
separated, and only authorized personnel can access specific areas.

• Monitoring and Logging: AWS continuously monitors infrastructure for unusual activity,
unauthorized access attempts, and logs all access to sensitive areas.

• Fire Suppression Systems: The infrastructure layer includes fire detection and
suppression systems, such as smoke detectors and waterless fire extinguishing systems, to
protect equipment from fire damage.

• Redundant Power Systems: Backup generators, UPS (Uninterruptible Power Supplies), and
redundant power lines ensure continuous operation during outages.

3. Data Security Layer

• Encryption: Data at rest is encrypted using AES-256 encryption. Data in transit is encrypted
using SSL/TLS to protect against interception.

• Data Replication: Data is often replicated across multiple availability zones to ensure high
availability and fault tolerance.

• Access Control: AWS provides strict access control mechanisms for user data, including
the use of Identity and Access Management (IAM) to enforce the principle of least privilege.

• Data Durability: Services like Amazon S3 provide eleven 9s (99.999999999%) of durability, ensuring minimal risk of data loss.

• Auditing and Logging: AWS CloudTrail and AWS Config are used to track API calls and
configuration changes, ensuring data activity is traceable and auditable.
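
As a small illustration of the encryption point, default server-side encryption can be enforced on an S3 bucket with boto3 (the bucket name is a placeholder):

```python
# Minimal sketch: enforce AES-256 server-side encryption by default on a bucket.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="example-secure-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```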

4. Environmental Security Layer

• Temperature and Humidity Control: HVAC (Heating, Ventilation, and Air Conditioning)
systems are deployed to maintain the optimal environment for server performance, with
redundant systems in place to ensure uptime.
• Water Detection Systems: Sensors are installed to detect leaks or flooding that could
damage infrastructure.

• Seismic and Structural Design: AWS data centers are built in locations and structures that
are designed to withstand natural disasters like earthquakes and floods.

• Fire Detection and Suppression: In addition to infrastructure-based fire systems, the environmental layer includes early smoke detection systems and environmentally friendly fire suppression that doesn’t harm equipment (e.g., FM-200 or Argonite systems).

Summary:

AWS uses perimeter, infrastructure, data, and environmental layers to build a comprehensive
security strategy for its data centers. Each layer is fortified with multiple tools and protocols to
ensure that physical and digital assets are protected from intrusion, environmental threats, and
data loss.

Cloud Management - Definition

Cloud management refers to the set of tools, processes, and technologies used to monitor,
manage, and optimize cloud computing resources and services. It encompasses the control of
public, private, and hybrid cloud environments, enabling organizations to oversee resource
provisioning, cost, performance, security, and compliance.

Goals of Cloud Management

1. Resource Optimization: Efficiently utilize and allocate cloud resources to avoid wastage
and ensure performance.

2. Cost Efficiency: Control and reduce cloud spending by managing pay-per-use models and
identifying underutilized resources.

3. Performance Monitoring: Continuously monitor applications, services, and resources to ensure optimal performance and uptime.

4. Security and Compliance: Ensure that cloud environments meet security protocols and
comply with regulatory standards like GDPR, HIPAA, etc.

5. Scalability: Enable easy scaling of cloud resources based on demand, ensuring agility and
flexibility.
6. Automation: Automate routine tasks like provisioning, backup, and disaster recovery to
reduce manual efforts.

7. Governance: Enforce policies and controls for cloud usage, ensuring that the organization
follows best practices and meets internal and external requirements.

Challenges of Managing a Cloud Environment

1. Complexity: Multi-cloud and hybrid cloud environments add layers of complexity, making it
harder to manage diverse resources, services, and configurations.

2. Cost Overruns: Unmanaged cloud environments can lead to unanticipated costs due to
pay-as-you-go models and uncontrolled resource usage.

3. Security: Protecting sensitive data, managing access control, and ensuring compliance in a
cloud environment are ongoing challenges.

4. Performance Variability: Ensuring consistent performance across distributed cloud resources can be difficult, especially in a multi-cloud setup.

5. Resource Sprawl: Without proper oversight, cloud resources can multiply, leading to
inefficient usage and difficulties in monitoring them.

6. Visibility: Gaining full visibility into resource utilization, billing, and performance across
cloud platforms can be challenging.

7. Data Governance: Ensuring data integrity, encryption, and proper storage policies in a
cloud environment can be hard to enforce consistently.

Cloud Management Features

1. Provisioning and Automation: Automating resource allocation (compute, storage, networking) and application deployment.

2. Monitoring and Alerts: Real-time tracking of system performance, usage patterns, and
resource health, with automated alerts for anomalies or failures.

3. Cost Management and Reporting: Tools to track cloud spend, generate reports, and
identify cost-saving opportunities by optimizing resource use.

4. Security Management: Implementing firewalls, access control, encryption, and ensuring compliance with data protection laws.

5. Backup and Disaster Recovery: Automated data backups and recovery processes to
ensure business continuity in case of failures.

6. Governance and Policy Management: Enforcing governance policies for cloud usage,
access control, and compliance.
7. Scalability and Elasticity: Dynamically scaling resources up or down based on demand
without manual intervention.

8. Service Integration: Seamless integration with third-party services like DevOps tools,
monitoring platforms, and security services.

How Does Cloud Management Work?

Cloud management typically involves:

1. Centralized Dashboard: Provides a unified view of all cloud resources across various
platforms (AWS, Azure, GCP) and helps manage multiple environments from a single point.

2. Automated Workflows: Automates routine tasks such as provisioning, scaling, backups, monitoring, and compliance checks.

3. Monitoring Tools: Track and report key metrics like CPU utilization, memory, storage, and
network performance to ensure health and performance.

4. Cost Tracking: Continuously track spending, offering suggestions for optimizing costs
based on resource usage trends.

5. Security Controls: Implement access controls, monitor for security threats and vulnerabilities, and ensure encryption of sensitive data.

6. Integration with DevOps: Continuous integration and deployment pipelines, along with
infrastructure-as-code, can be managed to ensure agility in cloud-based development
environments.
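
As one concrete example of the monitoring piece of this workflow, a CloudWatch alarm can be created with boto3; the instance ID and SNS topic ARN are placeholders:

```python
# Minimal sketch: alarm when average CPU stays above 80% for two 5-minute periods.
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-1",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder ARN
)
```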

Cloud Management Strategies

1. Centralized Control: Use a cloud management platform (CMP) that integrates different
cloud providers (AWS, Azure, GCP) into one dashboard, providing visibility across all
environments.

2. Automation: Implement automation for tasks like provisioning, monitoring, scaling, and
cost management. Tools like AWS CloudFormation and Terraform help automate
infrastructure management.

3. Multi-Cloud Strategy: Utilize multiple cloud platforms to avoid vendor lock-in and optimize
costs and performance based on workloads. Multi-cloud management solutions like
VMware or CloudBolt can assist.

4. Cost Management: Continuously monitor cloud usage, identify underused resources, and
implement cost-optimization techniques, such as right-sizing instances and leveraging spot
instances.
5. Security and Compliance Management: Ensure security protocols (encryption, access
control, firewalls) are consistently applied across cloud environments. Use cloud-native
tools (e.g., AWS GuardDuty) or third-party platforms to monitor and secure cloud
environments.

6. Monitoring and Performance Optimization: Implement real-time monitoring and performance optimization strategies using tools like AWS CloudWatch, Azure Monitor, or third-party APM solutions (e.g., Datadog).

7. Governance and Policy Enforcement: Establish and enforce governance policies that
control access, resource usage, and compliance with regulatory requirements. Tools like
AWS Config, Azure Policy, or GCP’s Resource Manager can help maintain policy adherence.

8. Backup and Disaster Recovery Planning: Ensure regular backups and disaster recovery
plans are in place. Use cloud-native services like AWS Backup, Azure Backup, or GCP
snapshots to automate these processes.

Cloud Automation

Cloud automation refers to the use of software and tools to automate cloud management tasks,
such as provisioning, configuring, scaling, monitoring, and decommissioning cloud resources.
Automation helps streamline repetitive processes, improving efficiency and reducing human
intervention.

Why Use Cloud Automation?

1. Efficiency: Automation eliminates manual tasks, reducing time spent on repetitive processes like resource provisioning, monitoring, and scaling.

2. Consistency: Automating tasks ensures that processes are executed the same way every
time, leading to fewer errors and more predictable results.

3. Cost Savings: By automatically optimizing cloud resources, scaling up or down based on demand, and avoiding resource wastage, organizations save on operational costs.

4. Scalability: Automation enables organizations to scale their cloud resources dynamically and instantly to handle fluctuating workloads.

5. Improved Performance: Real-time monitoring and automated response mechanisms ensure performance issues are handled quickly, improving the overall system's reliability.

6. Security and Compliance: Automated tools help ensure security protocols, configurations, and compliance regulations are applied consistently across all environments.

Types of Cloud Automation


1. Provisioning Automation: Automatically deploying infrastructure (virtual machines,
storage, databases) based on predefined templates or policies.

o Example: Auto-deploying VMs or containers based on demand.

2. Scaling Automation: Automatically adjusting the number of instances or resources based on the workload (horizontal scaling) or enhancing existing resources (vertical scaling).

o Example: Auto-scaling web servers based on incoming traffic.

3. Monitoring and Alerting Automation: Automating the monitoring of system health and performance metrics, and sending alerts when thresholds are breached.

o Example: CloudWatch alarms triggering resource adjustments.

4. Backup and Disaster Recovery Automation: Automatically scheduling backups, maintaining replicas, and enabling quick recovery during system failures.

o Example: Automatic daily database backups in cloud environments.

5. Security Automation: Automating security tasks such as patch management, vulnerability scanning, and applying security policies.

o Example: Automated updates for security patches and configurations.

6. Configuration Management: Automating the configuration of cloud resources, ensuring all deployments follow a standard template and are version controlled.

o Example: Automating environment setup using tools like Ansible or Chef.

Benefits of Cloud Automation

1. Increased Speed and Agility: Automation accelerates cloud deployment and management, reducing the time it takes to set up resources and deliver services.

2. Reduced Human Errors: By automating tasks, the risk of manual errors is significantly
reduced, improving overall system reliability.

3. Cost Efficiency: Automating tasks such as provisioning, scaling, and monitoring optimizes
resource usage, leading to reduced costs by avoiding under- or over-provisioning.

4. Scalability: Cloud automation helps scale infrastructure instantly in response to demand, maintaining application performance during traffic spikes.

5. Better Resource Utilization: Automation tools continuously optimize cloud resources, ensuring minimal wastage.

6. Improved Security: Consistent application of security policies and automated patch management ensures cloud environments remain secure.
Cloud Automation Challenges

1. Complexity: Setting up cloud automation can be complex, requiring the right tools,
configurations, and policies tailored to business needs.

2. Initial Setup Costs: Though automation saves money long-term, the upfront cost of
implementing automated solutions can be high.

3. Vendor Lock-in: Many automation tools are specific to cloud service providers, making it
difficult to switch platforms without reconfiguring automation tools.

4. Maintenance and Updates: Automated systems require regular updates and monitoring to
ensure they run optimally and adapt to changing needs.

5. Security Risks: While automation improves security, poorly configured automation scripts
can introduce vulnerabilities if not managed properly.

6. Lack of Expertise: Skilled personnel are required to set up and manage automation
processes, which may be a challenge for some organizations.

Differences Between Cloud Automation and Cloud Orchestration

1. Cloud Automation:

o Focuses on automating individual tasks like resource provisioning, scaling, and monitoring.

o Handles repetitive, low-level tasks.

o Example: Automatically creating a virtual machine or scaling a web application based on CPU usage.

2. Cloud Orchestration:

o Focuses on coordinating multiple automated tasks into workflows, managing the relationships between these tasks.

o Involves high-level management and coordination of complex processes.

o Example: Orchestrating the deployment of a multi-tier application, where database, application servers, and networking need to be coordinated.

Aspect | Cloud Automation | Cloud Orchestration
Scope | Individual tasks | Multiple tasks and workflows
Complexity | Simpler | More complex
Example Use Case | Auto-scaling based on CPU | Coordinating full application deployments

Cloud Automation Use Cases

1. Auto-Scaling: Automatically adding or removing cloud instances based on current demand, ensuring optimal performance without manual intervention.

2. CI/CD Pipelines: Automating the deployment and testing of code changes through
continuous integration and continuous deployment workflows.

3. Disaster Recovery: Automating backup and recovery processes to ensure critical data is
protected and recoverable in case of failure.

4. DevOps Processes: Automating infrastructure as code (IaC) for DevOps teams, using tools
like Terraform or AWS CloudFormation to provision environments consistently.

5. Security Enforcement: Automating patch management, vulnerability scans, and access controls to ensure cloud infrastructure meets security policies.

6. Load Balancing: Automatically redistributing network traffic across multiple servers to ensure even distribution and prevent overload.

7. Compliance Audits: Automating the enforcement and verification of compliance standards to ensure cloud environments follow necessary regulations.

Cloud Automation Tools

1. Terraform (by HashiCorp):

o Infrastructure as Code (IaC) tool that automates the provisioning of cloud resources.

o Works with multiple cloud providers like AWS, Azure, and GCP.

2. AWS CloudFormation:

o AWS service for automating resource provisioning using templates.

o Helps in deploying and managing AWS infrastructure as code.

3. Azure Resource Manager (ARM):

o Automates the provisioning and management of Azure resources using templates.

o Enables resource grouping for easier management.

4. Ansible (by RedHat):

o Configuration management tool that automates the setup of cloud resources, infrastructure, and applications.

o Works across multiple cloud environments.

5. Chef:
o A configuration management tool used for automating infrastructure and
application deployment.

o Helps ensure consistency across cloud environments.

6. Puppet:

o Automates infrastructure management by defining desired states for infrastructure elements, ensuring consistency.

o Often used in cloud and hybrid environments.

7. Google Cloud Deployment Manager:

o Tool for managing Google Cloud resources through automation.

o Enables users to define resources as code using templates.

8. Kubernetes:

o Automates the deployment, scaling, and management of containerized applications in the cloud.

o Ensures high availability and scalability of microservices.

9. Jenkins:

o A popular CI/CD tool that automates the building, testing, and deployment of
applications in cloud environments.

Summary:

Cloud automation simplifies and speeds up cloud resource management, ensuring efficiency,
scalability, and consistency across environments. It differs from cloud orchestration, which
coordinates complex workflows of automated tasks. By using automation tools like Terraform, AWS
CloudFormation, and Ansible, organizations can ensure better resource utilization, lower costs,
and enhanced security. Despite challenges such as complexity and initial setup costs, the benefits
of cloud automation, including reduced errors and improved performance, make it indispensable
for modern cloud operations.

Cloud Infrastructure Security

Cloud Infrastructure Security involves protecting cloud-based infrastructure from various threats
and vulnerabilities while ensuring compliance with regulations and maintaining data integrity.

Goals of Cloud Infrastructure Security


1. Data Protection: Safeguard sensitive information from unauthorized access and breaches.

2. Compliance: Ensure adherence to legal and regulatory standards (e.g., GDPR, HIPAA).

3. Availability: Maintain high availability and minimize downtime caused by attacks or failures.

4. Integrity: Ensure data integrity and prevent tampering or corruption.

5. Risk Management: Identify, assess, and mitigate risks associated with cloud infrastructure.

Importance of Cloud Infrastructure Security

• Data Breaches: Cloud environments are prime targets for cybercriminals; effective security
reduces the risk of data breaches.

• Trust: Organizations must ensure the security of customer data to maintain trust and
reputation.

• Compliance: Regulatory compliance is crucial to avoid legal repercussions and financial penalties.

• Operational Continuity: Security measures help prevent disruptions and ensure business
continuity.

• Cost Management: Effective security reduces the financial impact of data loss and
recovery efforts.

How Does Cloud Infrastructure Security Work?

• Defense-in-Depth: Implementing multiple layers of security controls (physical, network, application, and data security) to protect cloud assets.

• Encryption: Encrypting data at rest and in transit to protect sensitive information from
unauthorized access.

• Monitoring and Logging: Continuous monitoring of cloud environments for suspicious activities and maintaining logs for audit purposes.

• Access Control: Enforcing strict access controls and user authentication to limit who can
access resources.

• Incident Response: Developing and implementing an incident response plan to quickly address security breaches.

Types of Cloud Infrastructure Security


1. Network Security: Protects the integrity and usability of networks and data, involving
firewalls, intrusion detection systems (IDS), and virtual private networks (VPNs).

2. Application Security: Focuses on securing applications from vulnerabilities throughout their lifecycle, including secure coding practices and regular updates.

3. Data Security: Involves data encryption, data masking, and data loss prevention to protect
sensitive information.

4. Identity and Access Management (IAM): Manages user identities and access to resources,
ensuring that only authorized individuals can access sensitive data.

5. Endpoint Security: Protects devices that access cloud services, including mobile devices
and IoT devices, from threats.

Benefits of Cloud Infrastructure Security

1. Enhanced Data Protection: Reduces the risk of data breaches and unauthorized access.

2. Improved Compliance: Helps organizations meet regulatory requirements more effectively.

3. Operational Efficiency: Automates security processes, reducing manual effort and enhancing response times.

4. Increased Trust: Builds customer confidence in the organization’s ability to protect their
data.

5. Cost Savings: Minimizes the potential costs associated with data breaches and
compliance fines.

Cloud Infrastructure Security Best Practices

1. Implement Strong Access Controls: Use IAM to enforce the principle of least privilege.

2. Regularly Update and Patch: Ensure all systems, applications, and services are up to date
to protect against vulnerabilities.

3. Data Encryption: Encrypt sensitive data at rest and in transit to secure it from unauthorized
access.

4. Conduct Security Audits: Regularly assess the security posture of cloud environments
through audits and vulnerability assessments.

5. Monitor and Respond to Threats: Use monitoring tools to detect and respond to
suspicious activities in real-time.
5 Key Components of Cloud Infrastructure Security

1. Identity and Access Management (IAM):

o Controls user access to cloud resources through authentication and authorization policies.

o Implements multi-factor authentication (MFA) to enhance security.

2. Network Security:

o Involves firewalls, VPNs, and network segmentation to protect data in transit.

o Monitors network traffic for anomalies and potential threats.

3. Data Security:

o Protects sensitive information through encryption, data masking, and backup solutions.

o Implements data loss prevention (DLP) strategies to prevent data breaches.

4. Endpoint Security:

o Secures devices that connect to cloud services, including laptops, smartphones, and IoT devices.

o Uses antivirus software, endpoint detection, and response (EDR) tools.

5. Application Security:

o Focuses on securing applications from development to deployment, including secure coding practices and application testing.

o Regularly updates and patches applications to mitigate vulnerabilities.

Tools for Cloud Infrastructure Security

1. AWS Identity and Access Management (IAM): Manages user access and permissions in
AWS environments.

2. Azure Security Center: Provides unified security management and advanced threat
protection across hybrid cloud environments.

3. Google Cloud Identity: Offers IAM capabilities to manage access to Google Cloud
resources.

4. Palo Alto Networks Prisma Cloud: Provides comprehensive security for cloud
applications, including visibility and compliance.

5. Cloudflare: Offers network security, including DDoS protection and web application
firewall (WAF) services.
6. Splunk: Provides security information and event management (SIEM) capabilities to
monitor and analyze security data.

7. IBM Security Cloud Pak for Security: Integrates security tools and data across cloud
environments for better threat visibility and response.

8. Fortinet FortiGate: A cloud-native firewall that provides advanced threat protection for
cloud networks.

9. Zscaler: A cloud security platform that offers secure internet access and private
application access for users.

10. McAfee MVISION Cloud: Protects data across cloud services with visibility and
compliance tools.

Summary

• Cloud Infrastructure Security is essential for protecting sensitive data and maintaining
trust while ensuring compliance with regulations.

• Implementing a multi-layered security approach with best practices, key components, and
appropriate tools can significantly enhance the security posture of cloud environments.

Key Concepts in Cloud Security

Encryption

• Goal: The primary aim of encryption is to make data unreadable to unauthorized individuals. Only those with the decryption keys can access and understand the encrypted information.

• Importance:

o Once data is encrypted, it becomes useless to attackers, preventing data theft and
misuse.

o Encryption can be applied to data at rest (stored data) and in transit (data being
transferred), which is crucial for secure communication and data sharing.

• Applications:

o Protect sensitive information during storage in databases.

o Secure data transfer between systems, ensuring confidentiality and integrity.
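
A tiny sketch of symmetric encryption, using Python's cryptography package (one library among many): ciphertext produced with the key is unreadable to anyone who does not hold it:

```python
# Minimal sketch: symmetric encryption/decryption with Fernet (AES-based).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the decryption key must be stored securely
f = Fernet(key)

token = f.encrypt(b"customer record: account 4217")  # useless to an attacker without the key
assert f.decrypt(token) == b"customer record: account 4217"
```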

Identity and Access Management (IAM)

• Definition: IAM is a critical security component in cloud computing that manages user
identities and access rights.
• Purpose: To verify user identities and prevent unauthorized access to cloud resources.

• Key Features:

o Identity Providers (IdP): Authenticate user identities.

o Single Sign-On (SSO): Allows users to log in once and gain access to all associated
cloud resources.

o Multi-Factor Authentication (MFA): Provides additional security by requiring more than one form of verification for user access (e.g., SMS codes, authentication apps).

o Access Control: Grants or restricts access to resources based on user roles and
permissions.
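
In AWS terms, access control of this kind is expressed as IAM policies. A least-privilege sketch with boto3, granting read-only access to a single bucket (all names and ARNs are placeholders):

```python
# Minimal sketch: create a least-privilege, read-only IAM policy for one bucket.
import json
import boto3

iam = boto3.client("iam")
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports",      # the bucket itself
                "arn:aws:s3:::example-reports/*",    # objects within it
            ],
        }
    ],
}
iam.create_policy(
    PolicyName="reports-read-only",
    PolicyDocument=json.dumps(policy_document),
)
```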

Cloud Firewalls

• Overview: Cloud firewalls act as protective barriers for cloud infrastructure, filtering
malicious traffic and preventing cyberattacks.

• Types:

o Next-Generation Firewalls (NGFW): Protect IaaS and PaaS environments by analyzing and controlling traffic.

o SaaS Firewalls: Secure Software as a Service (SaaS) applications by filtering traffic in the cloud.

Virtual Private Cloud (VPC) and Security Groups

• Virtual Private Cloud (VPC):

o A VPC provides a secure and private cloud environment within a public cloud,
allowing organizations to customize their cloud settings.

o Enables on-demand access to resources and scalability based on business needs.

• Security Groups:

o Act as virtual firewalls to control incoming and outgoing traffic for VPC resources.

o Can be configured at the instance level, allowing granular control over resource
access.
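
A short sketch of instance-level control with boto3 security groups: the group admits only web traffic, and everything else is denied by default (the VPC ID is a placeholder):

```python
# Minimal sketch: a security group allowing inbound HTTP/HTTPS only.
import boto3

ec2 = boto3.client("ec2")
sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow inbound web traffic only",
    VpcId="vpc-0abc1234def567890",  # placeholder VPC ID
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)
```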

Penetration Testing

• Definition: Cloud penetration testing involves simulating real-world attacks to identify vulnerabilities in cloud environments.
• Process:

o Ethical hackers assess various components of cloud applications to uncover security flaws.

o Document vulnerabilities, their severity (low, high, or critical), and recommend remediation measures.

• Benefits:

o Identifies security vulnerabilities in the cloud infrastructure.

o Provides insights into potential impacts of vulnerabilities.

o Assists in meeting compliance requirements.

o Enhances the overall security posture of the cloud environment.

Summary

The outlined concepts form the backbone of cloud security, ensuring that data is protected, user
access is controlled, and infrastructure is fortified against potential threats. Implementing
encryption, IAM, cloud firewalls, VPCs, and regular penetration testing collectively contributes to a
robust security framework for cloud environments.

Cloud Network Security

Cloud network security encompasses the measures and protocols used to protect cloud
computing environments from various threats and vulnerabilities. Understanding the differences
between private and public cloud network security is crucial for organizations when choosing the
right infrastructure for their needs.

Private Cloud Network Security

• Definition: A private cloud is a cloud infrastructure exclusively used by a single organization, either managed internally or by a third party. It offers a higher level of control over security configurations and policies.

• Key Features:

o Customization: Organizations can tailor security measures based on their specific requirements, regulatory needs, and risk assessments.

o Access Control: Enhanced access control mechanisms allow organizations to restrict who can access data and resources, often leveraging IAM tools.
o Network Segmentation: Private clouds allow for better segmentation of networks
to isolate sensitive data and applications, reducing the attack surface.

o Enhanced Monitoring: Organizations can implement comprehensive monitoring and logging systems to detect and respond to potential security incidents quickly.

o Data Sovereignty: Since data resides within the organization’s premises or controlled environments, it is easier to comply with local data protection regulations.

• Challenges:

o Cost: Implementing and maintaining private cloud infrastructure can be expensive due to hardware, software, and operational costs.

o Management Overhead: Requires skilled IT personnel to manage and secure the environment effectively.

Public Cloud Network Security

• Definition: A public cloud is a cloud infrastructure available to the general public and is
owned by a cloud service provider. Multiple organizations share the same resources, which
can present unique security challenges.

• Key Features:

o Shared Responsibility Model: Security is a shared responsibility between the cloud provider and the customer. Providers typically secure the infrastructure, while customers are responsible for securing their applications and data.

o Scalability: Public clouds can scale resources easily and rapidly, but security must
be carefully managed as the environment grows.

o Cost-Effective: Lower upfront costs since organizations do not need to invest in physical infrastructure.

o Built-in Security Features: Many public cloud providers offer built-in security tools,
including firewalls, encryption, and IAM capabilities, to help organizations secure
their data.

• Challenges:

o Multi-Tenancy Risks: Multiple customers share the same infrastructure, increasing the risk of data leaks and unauthorized access.

o Compliance Concerns: Organizations may struggle to meet regulatory requirements depending on how the public cloud provider manages data.

o Less Control: Limited control over the underlying infrastructure can make it
challenging to implement custom security policies.


Cloud Network Security

Cloud network security is critical for protecting sensitive data and applications hosted in the cloud.
Here’s a comprehensive overview of its benefits, best practices, solutions, architecture, and key
components.

Benefits of Cloud Network Security

1. Data Protection: Ensures sensitive data is safeguarded from unauthorized access and
breaches through encryption and access controls.

2. Regulatory Compliance: Helps organizations meet industry standards and regulations (e.g., GDPR, HIPAA) by implementing necessary security controls.

3. Improved Visibility: Offers enhanced monitoring and visibility into network activities,
enabling rapid identification and response to threats.

4. Threat Mitigation: Protects against cyber threats such as DDoS attacks, malware, and
unauthorized access through robust security measures.

5. Scalability: Provides scalable security solutions that can grow with the organization's
needs without significant investments in hardware.

6. Cost-Effectiveness: Reduces the overall cost of maintaining physical security infrastructure by utilizing cloud-based security solutions.

7. Increased Collaboration: Facilitates secure remote access, enabling teams to collaborate effectively without compromising security.

Cloud Network Security Best Practices

1. Implement Strong Access Controls:

o Use IAM policies to enforce the principle of least privilege.

o Utilize multi-factor authentication (MFA) for enhanced security.

2. Data Encryption:

o Encrypt data at rest and in transit to protect sensitive information.

o Use strong encryption protocols and regularly update encryption keys.

3. Regular Security Audits:

o Conduct periodic security assessments and audits to identify vulnerabilities.

o Monitor compliance with security policies and regulatory standards.

4. Network Segmentation:
o Segment networks to isolate sensitive data and applications, reducing the attack
surface.

o Apply strict access controls between segments.

5. Implement Firewalls and Traffic Filtering:

o Use firewalls to control incoming and outgoing traffic based on predefined security
rules.

o Regularly update firewall rules to adapt to new threats.

6. Monitoring and Logging:

o Implement continuous monitoring and logging of network activities to detect anomalies.

o Use security information and event management (SIEM) tools for analysis.

7. Educate Employees:

o Provide regular security training to employees about phishing, social engineering, and other threats.

o Foster a security-aware culture within the organization.

Cloud Network Security Solutions

1. Firewalls: Protect networks by filtering traffic based on security rules (Next-Generation Firewalls, Web Application Firewalls).

2. Intrusion Detection and Prevention Systems (IDPS): Monitor network traffic for
suspicious activities and take action against potential threats.

3. Virtual Private Network (VPN): Secure remote access to the cloud by encrypting data
transmitted between devices and the cloud.

4. Identity and Access Management (IAM): Tools that manage user identities, access rights,
and authentication.

5. Encryption Tools: Solutions that encrypt data at rest and in transit to safeguard sensitive
information.

6. Cloud Security Posture Management (CSPM): Tools that help assess and manage the
security posture of cloud environments.

Architecture of Network Level Security

• Perimeter Security: Includes firewalls, intrusion detection/prevention systems, and VPNs that establish a secure boundary around the cloud network.
• Network Segmentation: Divides the network into segments to isolate sensitive data and
applications, reducing the attack surface.

• Access Control Layer: Enforces policies for user access to various parts of the network
based on roles and permissions.

• Data Security Layer: Protects data through encryption, tokenization, and other data
protection methods.

• Monitoring Layer: Involves logging, monitoring, and analyzing network activities for threat
detection and incident response.

Network Segmentation

• Definition: Network segmentation involves dividing a larger network into smaller, manageable segments to enhance security and performance.

• Benefits:

o Enhanced Security: Limits access to sensitive areas of the network, reducing the
risk of unauthorized access.

o Improved Performance: Reduces congestion by containing broadcast traffic within segments.

o Easier Compliance: Simplifies regulatory compliance by isolating sensitive data and applications.

Traffic Filtering and Firewall Rules

• Traffic Filtering:

o Purpose: To allow or deny network traffic based on specific criteria, enhancing security.

o Techniques: Include whitelisting (allowing only approved traffic) and blacklisting (blocking known malicious traffic).

• Firewall Rules:

o Definition: Set of defined rules that govern what traffic is allowed or denied on a
network.

o Best Practices:

▪ Regularly update and review firewall rules to adapt to new threats.

▪ Implement least privilege principles to restrict access only to necessary traffic.
▪ Monitor and log all firewall activity for auditing and incident response.

Conclusion

Cloud network security is essential for safeguarding sensitive data and applications in cloud
environments. By implementing best practices, leveraging advanced security solutions, and
maintaining a robust security architecture, organizations can effectively mitigate risks and enhance
their overall security posture.

Host Level Security

Host level security refers to the measures taken to secure an individual computer or device within a
network. It is a critical aspect of overall network security, as it helps prevent unauthorized access
and protects sensitive data. Below is a detailed overview of host-level security, including its
importance, components, and specific considerations for different cloud service models.

Importance of Host Level Security

• Protection of Sensitive Information: Prevents attackers from gaining access to sensitive data stored on the device.

• Prevention of Network Attacks: Reduces the risk of the device being used to launch
attacks on other devices within the network.

• Mitigation of Malware and Threats: Helps to detect and remove malicious software,
protecting the integrity of the host.

Key Components of Host Level Security

1. Antivirus Software:

o Detects and removes malicious code.

o Monitors for suspicious activity and alerts users of potential threats.

2. Firewalls:

o Provide a barrier against unauthorized access to the network.

o Filter incoming and outgoing traffic based on predefined security rules.


3. Intrusion Detection Systems (IDS):

o Monitor network traffic for suspicious activities.

o Alert users to potential intrusions and unauthorized access attempts.

4. User Authentication:

o Ensures that only authorized users can access the network and its resources.

o Can include multi-factor authentication (MFA) for enhanced security.

5. Endpoint Security Solutions:

o Protect individual devices and hosts by monitoring and controlling access to the
network and data.

digiALERT – Host Level Security

digiALERT is a host-level security solution that encompasses various security measures and
technologies to protect individual devices within a network. It focuses on comprehensive security
strategies to ensure the integrity and confidentiality of host devices.

Host Level Security in Cloud Services

Considerations for Cloud Service Delivery Models

When assessing host security, it’s essential to consider the context of different cloud service
models, such as:

1. Infrastructure as a Service (IaaS):

o Customers are primarily responsible for securing the hosts in the cloud.

o Key areas of focus include:

▪ Virtualization Software Security: Security of the software that creates and manages virtual instances.

▪ Guest OS or Virtual Server Security: Securing the operating systems running on virtual servers.

▪ Virtual Server Security: Protection measures for the virtual servers themselves.

2. Platform as a Service (PaaS):

o Customers rely on cloud service providers for the security of the host platform.

o Host security processes are typically non-transparent to customers, limiting their visibility into the security measures in place.
3. Software as a Service (SaaS):

o Similar to PaaS, security responsibilities for the host platform rest with the cloud
service providers.

o Customers must trust that the providers have adequate security measures in place.

Specifics of IaaS Host Security

• Virtualization Software Security:

o Customers can create and terminate virtual instances.

o Involves the security of different virtualization models, including OS-level virtualization, paravirtualization, and hardware-based virtualization.

• Customer Guest OS or Virtual Server Security:

o Focuses on securing the operating systems running on virtual machines, including regular updates and patches.

• Virtual Server Security:

o Involves measures to secure the virtual servers themselves, including configuration management and access controls.

Conclusion

Host level security is essential for protecting individual devices within a network, particularly in
cloud environments. Understanding the responsibilities and best practices associated with
different cloud service models (IaaS, PaaS, and SaaS) is crucial for organizations to ensure robust
security measures are in place. By implementing strong host-level security solutions, organizations
can significantly reduce the risk of data breaches and cyber threats.

Importance of Host Level Security

1. Protection Against Unauthorized Access:

o Host level security prevents unauthorized users from accessing sensitive data and
applications stored on individual devices.

2. Data Integrity and Confidentiality:

o By safeguarding host devices, organizations can ensure the integrity and confidentiality of the data, minimizing the risk of data breaches.
3. Prevention of Malware and Attacks:

o Implementing security measures at the host level helps detect and neutralize
malware and other malicious attacks, reducing the potential impact on the overall
network.

4. Compliance with Regulations:

o Many industries have regulatory requirements regarding data protection. Host level
security helps organizations meet these compliance standards (e.g., GDPR, HIPAA).

5. Minimized Risk of Lateral Movement:

o Effective host security limits an attacker’s ability to move laterally across the
network, confining potential damage to the compromised device.

6. Enhanced Visibility and Monitoring:

o Regular monitoring of host security helps organizations identify vulnerabilities and respond quickly to potential threats, enhancing overall security posture.

Key Components of Host Level Security

1. Antivirus Software:

o Monitors the device for malicious software, provides real-time protection, and
regularly scans for threats.

2. Firewalls:

o Establish a barrier to control incoming and outgoing traffic, preventing unauthorized access to the device.

3. Intrusion Detection Systems (IDS):

o Monitor network traffic for suspicious activities, providing alerts for potential threats.

4. User Authentication:

o Employs measures such as passwords, biometrics, and multi-factor authentication (MFA) to verify user identities before granting access.

5. Patch Management:

o Regularly updates software and operating systems to fix vulnerabilities and improve security.

6. Endpoint Security Solutions:

o Protects individual devices by monitoring network traffic, controlling access, and safeguarding data stored on the device.

7. Data Encryption:

o Encrypts sensitive data at rest and in transit, ensuring that unauthorized users cannot read it (a brief encryption sketch follows this list).
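As a small illustration of encrypting data at rest, the following OpenSSL commands encrypt and decrypt a local file with AES-256; the filenames are placeholders, and the -pbkdf2 option assumes OpenSSL 1.1.1 or later:

# Encrypt a file at rest with AES-256 (key derived from a passphrase via PBKDF2).
openssl enc -aes-256-cbc -salt -pbkdf2 -in customer-data.csv -out customer-data.csv.enc
# Decrypt when the data is needed again.
openssl enc -d -aes-256-cbc -pbkdf2 -in customer-data.csv.enc -out customer-data.csv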

Best Practices for Implementing Host Level Security

1. Regular Updates and Patch Management:

o Keep operating systems and software up to date to protect against known vulnerabilities.

2. Implement Strong Password Policies:

o Use complex passwords and encourage regular password changes. Consider employing password managers for better security.

3. Enable Multi-Factor Authentication (MFA):

o Implement MFA to provide an additional layer of security beyond just usernames and passwords.

4. Conduct Regular Security Audits:

o Periodically assess security measures and policies to identify and rectify vulnerabilities.

5. Use Antivirus and Anti-Malware Solutions:

o Deploy reliable antivirus and anti-malware software to protect against threats.

6. Establish Firewall Rules:

o Configure firewalls to restrict unauthorized access, and regularly review and update firewall rules.

7. Educate Employees:

o Provide training on security awareness, phishing attacks, and safe browsing practices to foster a security-conscious culture.

8. Monitor and Log Activities:

o Implement continuous monitoring and logging to detect suspicious activities and facilitate incident response.

9. Backup Data Regularly:

o Regularly back up important data so it can be recovered after a data loss event or ransomware attack.

10. Limit User Privileges:

o Implement the principle of least privilege (PoLP) to ensure users only have access to the data and systems necessary for their roles (a minimal sketch follows this list).
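As a minimal sketch of the principle of least privilege on a Linux host (the group, user, and path names are illustrative assumptions):

# Give the finance team access to its own data directory and nothing more.
groupadd finance
usermod -aG finance alice
chown -R root:finance /srv/finance-data
chmod -R 750 /srv/finance-data   # owner: rwx, group: r-x, others: no access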

By focusing on these aspects of host level security, organizations can significantly enhance their
overall security posture and protect sensitive data from a wide range of threats.
6.1 Google Cloud Platform

Google Cloud Platform (GCP) is a suite of cloud computing services offered by Google that provides a series of modular cloud services including computing, data storage, and data analytics. GCP is a public cloud vendor, like its competitors Amazon Web Services (AWS) and Microsoft Azure. Customers are able to access computer resources housed in Google's data centers around the world for free or on a pay-per-use basis.

Google Cloud & Google Cloud Platform

Google Cloud includes a combination of services available over the internet that can help organizations go digital. Google Cloud Platform, which provides public cloud infrastructure for hosting web-based applications, is part of Google Cloud.

Google Cloud - Other Services: Google Workspace (formerly known as G Suite and Google Apps) provides identity management for organizations, Gmail, and collaboration tools; enterprise versions of Android and Chrome OS; and application programming interfaces (APIs) for machine learning and enterprise mapping services.

6.1.1 History of GCP

GCP first came online in 2008 with the launch of a product called App Engine. Google announced a developer tool that allowed customers to run their web applications on Google infrastructure. To source the feedback needed to make improvements to this preview release, App Engine was made available to 10,000 developers. These early-adopter developers could run apps with 500 MB of storage, 200 million megacycles of CPU per day, and 10 GB of bandwidth per day. By late 2011, Google pulled App Engine out of preview mode and made it an official, fully supported Google product. Today, Google Cloud Platform is one of the top public cloud vendors in the world. Google Cloud customers include Nintendo, eBay, UPS, The Home Depot, Etsy, PayPal, 20th Century Fox, and Twitter.

6.1.2 GCP - infrastructure, regions, and zones

Google's global infrastructure currently has 24 locations around the world where Google Cloud Platform resources are offered. Locations start with a region, and within a region are availability zones. These zones are isolated from a single point of failure. HTTP global load balancers are global and can receive requests from any of the Google edge locations and regions. Other resources, like storage, can be regional: the storage is distributed across multiple zones within a region for redundancy. Finally, zonal resources, including compute instances, are only available in one specific zone within one specific region. When deploying applications on GCP, you must select locations depending on the performance, reliability, scalability, and security needs of your organization.

GCP Services: Each GCP region offers a category of services, and some services are limited to specific regions. Major services of Google Cloud Platform include computing and hosting, storage and database, networking, big data, and machine learning.

GCP pros and cons
▪ GCP strengths: strong Google Cloud Platform documentation, and a global backbone network that uses advanced software-defined networking and edge-caching services to deliver fast, consistent, and scalable performance.
▪ GCP weaknesses: Google Cloud Platform has far fewer services than those offered by AWS and Azure, and GCP has an opinionated model of how its cloud services should be used.

6.1.3 GCP Compute Service: Google Compute Engine

Google Cloud offers users the facility of computing and hosting, where they can pick from the following options: work in a serverless environment; use a managed application platform; build cloud-based infrastructure to facilitate maximum control and flexibility; leverage container technologies to achieve maximum flexibility. Compute options: Compute Engine, App Engine, Cloud Functions, Kubernetes Engine, Cloud Run.
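As a quick, hedged illustration of working with zonal resources, the following gcloud commands create and connect to a Compute Engine VM; the instance name, zone, machine type, and image family are placeholder assumptions:

# Create a small Debian VM in one specific zone (a zonal resource).
gcloud compute instances create demo-vm \
    --zone=us-central1-a \
    --machine-type=e2-medium \
    --image-family=debian-12 \
    --image-project=debian-cloud

# Connect over SSH once the instance is running.
gcloud compute ssh demo-vm --zone=us-central1-a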
6.1.4 Compute Engine

Compute Engine is the compute service offered by Google Cloud. It is an IaaS (Infrastructure as a Service) offering that provides virtual machines hosted on Google's infrastructure.

When & Where to Use Compute Engine? When you need low-level access to, or fine-grained control of, the operating system, network, and other operational characteristics (e.g., a custom compiled kernel); applications with extremely consistent utilization; 1:1 container-to-VM mapping; migrating existing systems.

Google App Engine: App Engine is a PaaS (Platform as a Service) for building scalable web applications and IoT backends. It scales applications automatically based on the traffic received, and it provides users with built-in services and APIs, for example Datastore, NoSQL, user authentication APIs, etc.
When & Where to Use App Engine? App Engine is a great fit for green-field applications where server-side processing and logic are required: stateless applications, rapidly developing CRUD-heavy applications, applications composed of a few services, and deploying complex APIs.
When Not to Use App Engine: stateful applications requiring lots of in-memory state to meet performance or functional requirements; applications built with large or opinionated frameworks, or applications that have a slow start-up time; systems that require protocols other than HTTP.

Google Kubernetes Engine (GKE) is an easy-to-use cloud-based Kubernetes service for running containerized applications. Kubernetes is an open-source framework for container management and automation, based on Google's actual internal container software.
When & Where to Use Kubernetes Engine? Use GKE when you want to provide developers architectural flexibility or minimize operational costs. While GKE is the best-managed offering, you'll need in-house resources to manage your Kubernetes clusters. Good fits: applications that can be easily containerized or are already containerized; hybrid or multi-cloud environments; systems leveraging stateful and stateless services; strong CI/CD pipelines.
When Not to Use GKE: when the burden of managing the underlying infrastructure would lie on the team; applications that require very low-level access to the underlying hardware, like a custom kernel, networking, etc.

6.1.5 Cloud Functions

Cloud Functions is a lightweight compute solution for developers, for creating single-purpose, stand-alone functions that respond to cloud events without the hassle of managing the server or runtime environment. It works well for applications with bursty or variable traffic patterns, as it is highly elastic and has minimal operational overhead because it is a serverless platform.
When & Where to Use Cloud Functions? It is an excellent choice for dynamic, event-driven plumbing (connecting), such as moving data between services or reacting to log events; event-driven applications and functions; deploying simple APIs; quick data transformations (ETL).

Google Cloud Run: Cloud Run is a managed compute platform enabling users to run stateless containers that can be invoked via web requests or Pub/Sub events. Since it is serverless, it abstracts away all infrastructure management, allowing users to focus on building great applications. It provides many benefits of App Engine with the power of GKE. It can also run on your own GKE cluster if you want control over the runtime environment.
When & Where to Use Cloud Run? Stateless services that are easily containerized; event-driven applications and systems; applications that require custom system and language dependencies.
When Not to Use Google Cloud Run: highly stateful systems; systems that require protocols other than HTTP; compliance requirements that demand strict controls over the low-level environment and infrastructure (which might be okay with the Knative GKE mode).
6.2.1 IAM

Ways of Accessing GCP

Google Cloud Platform can be accessed in two ways: the Google Cloud Console, and the Cloud SDK via Cloud Shell.

6.2.2 GCP Storage Options

Google Cloud provides a full range of services to satisfy your storage needs with file, block, object, and mobile application storage options: Google Cloud Persistent Disk (block storage), Google Cloud Filestore (network file storage), Google Cloud Storage (object storage), Google Cloud Storage for Firebase, and Google Cloud Storage Transfer Service.

Google Cloud Persistent Disks (Block Storage): Block storage offers dependable and quick storage for your virtual machine instances on the Google Cloud Platform. We can back up our storage using persistent disks, which allow us to attach disks of various sorts and sizes, such as SSDs or HDDs, to the necessary virtual machines. This block storage will boost throughput and decrease latency. High durability and support for snapshots: persistent disks enable us to take a disk backup when necessary without losing any data. Flexibility: even once the disk is attached to a VM, we can still change the size of the disk, without losing the data. More secure: we can encrypt the data using a Google-managed key or customer-managed keys, and we can also restrict access to the disk to specific users, groups, or resources.

Google Cloud Filestore (Network File Storage): Filestore enables reliable performance and high availability for storing and sharing files. We can create file shares that can be mounted onto the necessary path and accessed from an instance running on GCP or on-premises. File storage is available in two types. Standard tier: provides a throughput of 800 MB/s per share, resulting in minimal latency and good performance. Premium tier: throughput of 1.2 GB/s per share; it enables SSD storage and can be particularly beneficial for applications that require high IOPS and low latency. Automatic snapshots are taken of the file storage, and since the storage is automatically backed up, we can prevent data loss.

Google Cloud Storage (Object Storage): Object storage is scalable, durable, and secure, and can be accessed from anywhere, meaning object storage is region independent. Object storage is very different from block storage and file storage: it stores data in the form of objects and is better suited for static data like videos, photos, etc. We can save our data in accordance with our needs; for example, frequently used data can be kept in Standard storage, while less frequently accessed data can be kept in Coldline and Archive for long-term access. Object storage offers data encryption, data replication, and lifecycle management, which make it more reliable, and it integrates with multiple GCP services like Google Cloud Functions, BigQuery, and AI Platform, enabling you to build powerful applications.
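The block- and object-storage options described above can be exercised from the gcloud/gsutil command line. The following is a minimal sketch; the disk, VM, and bucket names, the sizes, and the lifecycle threshold are all illustrative assumptions:

# Block storage: create a 100 GB SSD persistent disk and attach it to a VM.
gcloud compute disks create demo-data-disk \
    --size=100GB --type=pd-ssd --zone=us-central1-a
gcloud compute instances attach-disk demo-vm \
    --disk=demo-data-disk --zone=us-central1-a

# Object storage: create a bucket, then apply a lifecycle policy that moves
# objects to Coldline after 90 days. Contents of lifecycle.json:
# {"rule": [{"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
#            "condition": {"age": 90}}]}
gsutil mb -c standard -l us-central1 gs://demo-archive-bucket
gsutil lifecycle set lifecycle.json gs://demo-archive-bucket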
Types of Storage Classes: Google Cloud Platform (GCP) offers different storage classes that serve different purposes, and we can choose among them based on their performance. The following storage classes are available in GCP: Standard Storage, Nearline Storage, Coldline Storage, and Archival Storage.

Standard Storage: for frequently accessed, general-purpose data; highly available, with low latency.

Nearline Storage: for data that must be highly available but is not accessed as frequently as standard storage; data that needs to be accessed within seconds or minutes can be stored in Nearline Storage.

Coldline Storage: data that is accessed infrequently can be stored in Coldline Storage; data that needs to be accessed within hours belongs here.

Archival Storage: mainly used for storing data that is accessed infrequently and can be retained for long periods of time; a cost-effective option for data that is not accessed frequently but must be preserved for legal, regulatory, or business reasons.

Benefits of using Archival Storage. Low cost: data in Archival storage is rarely accessed, so its storage cost is very low. High durability: the durability of Archival storage is the same as that of the other storage classes. Long retention period: data in Archival storage can be kept for long periods, available for more than 8 years. Lifecycle management: lifecycle management rules can move data to Archival storage automatically.

6.2.3 Cloud Storage

Cloud Storage is a fully managed, scalable service; there is no need to provision capacity ahead of time. Each object in Cloud Storage has a URL. Cloud Storage consists of buckets that you create, configure, and use to hold your storage objects (objects are immutable: no edits, only new versions). Cloud Storage encrypts your data on the server side before it is written to disk, and uses HTTPS by default. You can move Cloud Storage objects to other GCP storage services. When you create a bucket, it is given a globally unique name; you specify a geographic location where the bucket and its contents are stored, and a default storage class.

Use Cases of Cloud Storage. Integrated repository for analytics and ML: Cloud Storage is strongly consistent, giving accuracy in analytics workloads. Media content storage and delivery: Cloud Storage provides the availability and throughput needed to stream audio or video directly to applications and websites. Backups and archives: backup data in Cloud Storage can be used for more than just recovery, because all storage classes have millisecond latency and are accessed through a single API.

Features of GCP Cloud Storage:
Object Lifecycle Management: define conditions that trigger data deletion or transition to a cheaper storage class.
Object Versioning: continue to store old copies of objects when they are deleted or overwritten.
Retention policies: define minimum retention periods that objects must be stored for before they're deleted.
Object holds: place a hold on an object to prevent its deletion.
Customer-managed encryption keys: encrypt object data with encryption keys stored by the Cloud Key Management Service and managed by you.
Customer-supplied encryption keys: encrypt object data with encryption keys created and managed by you.
Uniform bucket-level access: uniformly control access to your Cloud Storage resources by disabling object ACLs.
Requester Pays: require accessors of your data to include a project ID, to bill for network charges, operation charges, and retrieval fees.
Bucket Lock: configure a data retention policy for a Cloud Storage bucket that governs how long objects in the bucket must be retained.
Pub/Sub notifications for Cloud Storage: send notifications to Pub/Sub when objects are created, updated, or deleted.
Cloud Audit Logs with Cloud Storage: maintain admin activity logs and data access logs for your Cloud Storage resources.
Object- and bucket-level permissions: Cloud Identity and Access Management (IAM) lets you control who has access to your buckets and objects.

GCP Storage Features: high performance, internet scale, data encryption at rest, data encryption in transit by default from Google to the endpoint, and online and offline import services.

GCP - Networking: Google Cloud networking services and technologies.
Chapter – 4: Private Cloud

What is a private cloud?
Private cloud is a type of cloud computing that delivers similar advantages to public cloud, including scalability and self-service, but through a proprietary architecture. A private cloud, also known as an internal or corporate cloud, is dedicated to the needs and goals of a single organization, whereas public clouds deliver services to multiple organizations.
A private cloud is a single-tenant computing infrastructure and environment, meaning the organization using it -- the tenant -- doesn't share resources with other users. Private cloud resources can be hosted and managed by the organization in a variety of ways. The private cloud might be based on resources and infrastructure already present in an organization's on-premises data center.
The main advantage of a private cloud is that users don't share resources. Because of its proprietary nature, a private cloud computing model is best for businesses with dynamic or unpredictable computing needs that require direct control over their environments, typically to meet security, business governance or regulatory compliance requirements.

Advantages of private cloud
• increased security of an isolated network;
• increased performance due to resources being solely dedicated to one organization; and
• increased capability for customization, such as specialized services or applications that suit the particular company.
• More control: private clouds have more control over their resources and hardware than public clouds, because they are accessed only by selected users.
• Security & privacy: security and privacy are among the big advantages of cloud computing, and a private cloud improves the security level compared to the public cloud.
• Improved performance: a private cloud offers better performance, with improved speed and space capacity.

What is the difference between private cloud vs. public cloud?
A public cloud is where an independent third-party provider, such as Amazon Web Services (AWS) or Microsoft Azure, owns and maintains compute resources that customers can access over the internet. Public cloud users share these resources, a model known as a multi-tenant environment. For example, various virtual machine (VM) instances provisioned by public cloud users may share the same physical server, while storage volumes created by users may coexist on the same storage subsystem.

What is the difference between private cloud vs. hybrid cloud?
A hybrid cloud is a model in which a private cloud connects with public cloud infrastructure, enabling an organization to orchestrate workloads -- ideally seamlessly -- across the two environments. In this model, the public cloud effectively becomes an extension of the private cloud to form a single, uniform cloud. A hybrid cloud deployment requires a high level of compatibility between the underlying software and services used by both the public and private clouds.

Is it better to use a public cloud or a private cloud?
Some businesses may prefer to use a private cloud, especially if they have extremely high security standards. Using a private cloud eliminates intercompany multitenancy (there will still be multitenancy among internal teams) and gives a business more control over the cloud security measures that are put in place.
However, it may cost more to deploy a private cloud, especially if the business is managing the private cloud themselves. Often, organizations that use private clouds will end up with a hybrid cloud deployment, incorporating some public cloud services for the sake of efficiency.

Disadvantages of private cloud
Private cloud technologies -- such as increased automation and user self-service -- can bring considerable complexity to enterprise IT. These technologies typically require an IT team to rearchitect some of its data center infrastructure, as well as adopt additional software layers and management tools.
1) High cost: the cost is higher than a public cloud, because setting up and maintaining hardware resources is costly.
2) Restricted area of operations: a private cloud is accessible within the organization, so the area of operations is limited.
3) Limited scalability: private clouds can be scaled only within the capacity of internally hosted resources.
4) Skilled people: skilled people are required to manage and operate cloud services.

Types of private cloud
• Virtual private cloud: a virtual private cloud (VPC) is a cloud model that offers the benefits of a private cloud (more control and an isolated environment) with the help of public cloud resources.
• Managed private cloud: a managed private cloud is a private cloud model in which the infrastructure is not shared and its management is handled by a provider.
• Hosted private cloud: hosted private cloud vendors offer cloud servers in their own data centers and are also responsible for security management.
• On-premise private cloud: unlike hosted private clouds, on-premise cloud solutions allow users to host a cloud environment internally. For such a cloud model, it is necessary to have an internal data center to host the cloud server.

Vendors of private cloud
• Cisco. The vendor provides its Quickstart Private Cloud to create a self-service private cloud environment along with varied platforms.
• Google. The tech giant offers a virtual private cloud product that enables highly customizable network environments for hosting public or private workloads.
• AWS. Amazon Virtual Private Cloud lets users launch AWS resources in an isolated virtual network -- either on premises or through a remote managed provider -- to create a private instance of public AWS resources.
• IBM. IBM offers private cloud hardware, along with its Cloud Managed Services, cloud security tools, and cloud management and orchestration tools. IBM now owns Red Hat, with its private cloud capabilities.
• Microsoft. Azure Stack helps build and run applications across data centers and edge locations, to remote offices or even the public cloud.
• Oracle. Private Cloud Appliance X8 by Oracle enables compute and storage capabilities optimized for private cloud deployment.

A virtual private cloud (VPC) is a secure, isolated private cloud hosted within a public cloud. VPC customers can run code, store data, host websites, and do anything else they could do in an ordinary private cloud, but the private cloud is hosted remotely by a public cloud provider. (Not all private clouds are hosted in this fashion.) VPCs combine the scalability and convenience of public cloud computing with the data isolation of private cloud computing. A private cloud is single tenant: a cloud service exclusively offered to one organization. A virtual private cloud is a private cloud within a public cloud; no one else shares the VPC with the VPC customer.
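As a hedged sketch of what a VPC looks like in practice on a public cloud, the following gcloud commands create a custom-mode VPC with one isolated subnet; the network and subnet names, the region, and the address range are placeholder assumptions:

# Create a custom-mode VPC and a single isolated subnet inside it.
gcloud compute networks create demo-vpc --subnet-mode=custom
gcloud compute networks subnets create demo-subnet \
    --network=demo-vpc --region=us-central1 --range=10.10.0.0/24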
The Challenges of a Private Cloud (and How to Overcome Them)
The private cloud market is booming and is set to grow at a CAGR of 26.71% up until 2028, with an estimated growth of USD 619.08 billion. At Storm, the uptick in private cloud requests has us running around, and for good reason: Storm's managed hosting negates many of the challenges associated with private cloud. In this post we'll look at the major drawbacks of a private cloud, and how Storm's managed hosting overcomes them.
If you don't know what the cloud or a private cloud is, here's a quick recap. Imagine being able to pool the resources of all computing devices in your house -- everything that has a CPU, memory, and storage space. Instead of those individual devices, you now have one big unit with the combined resources of those individual devices. The cloud is created in the same way: the physical resources of several, tens, hundreds, or even thousands of physical servers are pooled together to create one big resource-rich unit. Software called a hypervisor can be used to create virtual devices such as servers and networking equipment using those pooled physical resources.
So what then is a private cloud? The best way to explain it is alongside the public cloud hosting model:
Public cloud: a public cloud follows a multi-tenant hosting model: just as with shared hosting, all tenants on a public cloud make use of the same processing, memory, and storage resources. Unlike shared hosting, however, these resources are dedicated to the public cloud account (when we're talking about virtual servers) and can be scaled as needed. Despite this, peak times can introduce higher latency and reduced speeds. Given the public nature of a public cloud, privacy can be an issue when compliance with data protection regulations is paramount.
Private cloud: a private cloud is cloud infrastructure built on hardware dedicated to your account. As such, the cloud infrastructure itself is also completely private, which means you get all the resources and don't have to deal with noisy neighbours. Among its many advantages, private clouds also deliver increased privacy, which in itself boosts security.

Dealing With Private Cloud Challenges

High initial costs and setup
Private clouds can come with high startup costs to build, operate, and manage the hardware infrastructure. Very often organisations have to hire and/or train staff to deliver the required expertise. To be fair, however, that only relates to on-premise installations. Private cloud setups through a cloud service provider (CSP) negate much of the hardware costs, since it's the CSP that owns and maintains the hardware. A bonus is that the CSP provides all the expertise needed to manage the physical infrastructure as well as the cloud infrastructure.

Complex, ongoing maintenance
Despite the potential for automation, cloud monitoring and maintenance still require experienced staff. This can be challenging for organisations given the ongoing skills shortage, as well as the high cost associated with training or upskilling staff.
CSPs are uniquely positioned to apply internal skill sets to DevOps tasks for a more holistic integration with cloud maintenance, including monitoring and performance management, scalability and elasticity management, disaster recovery and backups, and security.
For example, CSPs can be tasked with the implementation and management of advanced monitoring tools like AWS CloudWatch or Azure Monitor. The CSP configures custom dashboards and alerts to monitor application performance and system health, helping the organisation maintain optimal performance and quickly address potential issues before they affect operations.
In instances where organisations experience significant variability in workload due to seasonal events (such as eCommerce companies), a CSP can help implement autoscaling solutions that adjust resources automatically, ensuring responsiveness under heavy loads without overspending on idle resources during off-peak periods.

Lower scalability
Private clouds have the benefit of hardware resources entirely dedicated to them. But that can also be a disadvantage, since it means they're less scalable than public clouds with seemingly 'limitless' resources.
Given that CSPs own and maintain the hardware of a private cloud, the obvious solution would be to request more physical servers to increase the potential for scaling; the onus (and cost) is on the CSP to acquire the hardware and add it to the cloud infrastructure. Scaling can also be achieved by employing a hybrid cloud model, where workloads are run in highly scalable public cloud environments when private cloud resources reach peak capacity.
When adding more hardware is not an option for whatever reason, containerisation and virtualisation can be used; they encapsulate applications in a way that consumes fewer resources than traditional virtual machines, allow for more granular scaling, and can improve the utilisation efficiency of the underlying physical resources.

Efficient resource utilisation
Because private clouds tend to have lower overall scalability compared to the public cloud, and where a hybrid model isn't feasible, organisations that add more resources to their private cloud instances to deal with spikes in demand struggle to make efficient use of those resources outside of peak times.
Cloud service providers can employ various tactics that help organisations make more efficient use of their resources without resorting to a hybrid cloud model (which could, for example, complicate already fickle compliance issues). Some of these include:
• autoscaling solutions that automatically adjust the amount of resources based on workload needs
• resource optimisation tools that identify idle or underused resources and suggest adjustments
Ultimately, the challenges of managing a private cloud can vary significantly based on the specific infrastructure and its usage. However, skill shortages or limited budgets should not deter organisations from leveraging the cloud in a way that best suits their needs.
At Storm Internet, we are committed to a partnership model that goes beyond mere service provision. We integrate closely with our customers' businesses, offering tailored solutions that enhance growth and simplify operational complexities. Our goal is not just to provide technology, but to enable real business transformation by making cloud technology accessible and aligned with your strategic objectives. This close-knit integration ensures that every organisation can achieve its potential, regardless of its size or sector.

VM Migration

VM Provisioning Process
The common steps for provisioning a virtual server are as follows:
• Firstly, you need to select a server from a pool of available servers (physical servers with enough capacity), along with the appropriate OS template you need to provision the virtual machine.
• Secondly, you need to load the appropriate software (the operating system you selected in the previous step, device drivers, middleware, and the applications needed for the required service).
• Thirdly, you need to customize and configure the machine (e.g., IP address, gateway) and configure the associated network and storage resources.
• Finally, the virtual server is ready to start with its newly loaded software.

VM Provisioning Process contd.
To summarize, server provisioning is defining a server's configuration based on the organization's requirements for its hardware and software components (processor, RAM, storage, networking, operating system, applications, etc.).
• Normally, virtual machines can be provisioned by manually installing an operating system, by using a preconfigured VM template, by cloning an existing VM, or by importing a physical server or a virtual server from another hosting platform.
• Physical servers can also be virtualized and provisioned using P2V (physical-to-virtual) tools and techniques (e.g., virt-p2v).
• After creating a virtual machine by virtualizing a physical server, or by building a new virtual server in the virtual environment, a template can be created out of it.
• Most virtualization management vendors (VMware, XenServer, etc.) provide the data center's administrators with the ability to do such tasks in an easy way.
• Provisioning from a template is an invaluable feature, because it reduces the time required to create a new virtual machine.
• Administrators can create different templates for different purposes. For example, you can create a Windows 2003 Server template for the finance department, or a Red Hat Linux template for the engineering department. This enables the administrator to quickly provision a correctly configured virtual server on demand.
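On a KVM/libvirt host, template-based provisioning can be sketched with the virt-clone and virt-install tools; the VM names, ISO path, and sizes below are illustrative assumptions, not part of the original text:

# Clone an existing template VM into a new, correctly configured guest.
virt-clone --original rhel-template --name engineering-vm01 --auto-clone

# Or build a new guest from installation media instead of a template.
virt-install --name finance-vm01 --memory 4096 --vcpus 2 \
    --disk size=40 --cdrom /isos/win2003-server.iso --os-variant win2k3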
VIRTUAL MACHINE MIGRATION SERVICES (Live Migration and High Availability)
• Live migration (also called hot or real-time migration) can be defined as the movement of a virtual machine from one physical host to another while it is powered on.
• When it is properly carried out, this process takes place without any noticeable effect from the end user's point of view (a matter of milliseconds).
• One of the most significant advantages of live migration is that it facilitates proactive maintenance in case of failure, because a potential problem can be resolved before the disruption of service occurs.
• Live migration can also be used for load balancing, in which work is shared among computers in order to optimize the utilization of available CPU resources.

Live Migration Anatomy: the Xen Hypervisor Algorithm
• The Xen hypervisor illustrates the live migration mechanism: how memory and virtual machine state are transferred over the network from one host A to another host B, and the logical steps that are executed when migrating an OS.
• In this research, the migration process has been viewed as a transactional interaction between the two hosts involved:

LIVE MIGRATION STAGES
Stage-0: Pre-Migration. An active virtual machine exists on the physical host A.
Stage-1: Reservation. A request is issued to migrate an OS from host A to host B (a precondition is that the necessary resources exist on B and a VM container of that size).
Stage-2: Iterative Pre-Copy. While the VM keeps running on A, its memory pages are copied to B in rounds; pages dirtied during one round are re-sent in the next.
Stage-3: Stop-and-Copy. The running OS instance at A is suspended, and its network traffic is redirected to B. As described in reference 21, the CPU state and any remaining inconsistent memory pages are then transferred. At the end of this stage there is a consistent suspended copy of the VM at both A and B. The copy at A is considered primary and is resumed in case of failure.

Stage-4: Commitment. Host B indicates to A that it has successfully received a consistent OS image. Host A acknowledges this message as the commitment of the migration transaction.
Stage-5: Activation. The migrated VM on B is now activated. Post-migration code runs to reattach the device drivers to the new machine and advertise the moved IP addresses.

This approach to failure management ensures that at least one host has a consistent VM image at all times during migration:
1. The original host remains stable until the migration commits, and the VM may be suspended and resumed on that host with no risk of failure.
2. A migration request essentially attempts to move the VM to a new host; on any sort of failure, execution is resumed locally, aborting the migration.

LIVE MIGRATION TIMELINE
(Figure: timeline of the live migration stages listed above.)

LIVE MIGRATION VENDOR IMPLEMENTATION EXAMPLE
There are many VM management and provisioning tools that provide a live VM migration facility, two of which are VMware VMotion and Citrix XenServer "XenMotion".
VMware VMotion:
a) Automatically optimizes and allocates an entire pool of resources for maximum hardware utilization, flexibility, and availability.
b) Performs hardware maintenance without scheduled downtime, along with migrating virtual machines away from failing or underperforming servers.
Citrix XenServer "XenMotion":
Based on the Xen live migrate utility, it gives the IT administrator the facility to move a running VM from one XenServer to another in the same pool without interrupting the service (hypothetically zero-downtime server maintenance), making it a highly available service and also a good feature for balancing workloads in virtualized environments.

REGULAR / COLD MIGRATION
• Cold migration is the migration of a powered-off virtual machine. With cold migration:
o You have the option of moving the associated disks from one datastore to another.
o The virtual machines are not required to be on shared storage.
Compared with live migration: 1) live migration needs shared storage for the virtual machines in the server pool, but cold migration does not; 2) in live migration of a virtual machine between two hosts there are certain CPU compatibility checks, but in cold migration these checks do not apply.
• Cold migration (as implemented in VMware products) is easy to perform and is summarized as follows:
o The configuration files, including the NVRAM file (BIOS settings), log files, and the disks of the virtual machine, are moved from the source host to the destination host's associated storage area.
o The virtual machine is registered with the new host.
o After the migration is completed, the old version of the virtual machine is deleted from the source host.
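VMotion and XenMotion are proprietary, but the same live and cold migrations can be sketched with the open-source libvirt toolchain; the guest and host names are placeholders, and shared storage is assumed for the live case:

# Live migration: move a running KVM guest to host B over SSH.
virsh migrate --live demo-vm qemu+ssh://hostB/system

# Cold migration: shut the guest down, then transfer its definition to host B.
virsh shutdown demo-vm
virsh migrate --offline --persistent demo-vm qemu+ssh://hostB/system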
Generative AI Use-cases for Enterprise
1. Will Generative AI replace me at my job?
2. Introduction to Generative AI
3. Generative AI and Use Cases
4. Tools and Technologies
5. Auto-GPT and Autonomous Agents
6. Architecture Implications
7. Safe and Responsible AI
8. Questions and Open Discussion

Generative AI
Generative AI is a subset of Artificial Intelligence that leverages machine learning techniques to generate new content. This could be in the form of text, images, music, voice, or even new models. The generated output is typically novel and created from random or user-provided data.

Examples of Generative AI Output - Advice
Q. Find legal weaknesses in the following document ```These Terms of Use apply when you use the services of OpenAI, L.L.C. … ```
A.
1. Ambiguity in Terms: While the document outlines the terms of use and restrictions, there may be terms that are not explicitly defined or clear. For example, the term "Services" is defined broadly, which might be interpreted differently by different parties. Similarly, the term "applicable laws" is not specified and could vary significantly depending on jurisdiction.
2. Acceptance of Terms: The document assumes that by using the Services, users agree to the Terms. In some jurisdictions, active consent may be required instead of implied consent.
3. Assignment of Rights: In section 3a, the document states that OpenAI assigns to the user all its rights, title and interest in and to Output. It may need further clarification whether this includes intellectual property rights as well.
4. Use of Personal Information: The document mentions that OpenAI can use Content to provide and maintain the Services, comply with applicable law, and enforce policies. However, it is not explicitly mentioned how long this data will be retained, which might be an issue, especially for EU users under GDPR rules.

OpenStack Architecture and Components

What is OpenStack?
OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed and provisioned through APIs with common authentication mechanisms.
OpenStack is a free and open-source software platform for cloud computing that supports all types of cloud computing platforms, public and private. It is mostly deployed as infrastructure-as-a-service; OpenStack is basically your key to building your own cloud infrastructure.

Why do we need OpenStack?
If we aren't comfortable entrusting sensitive data to a third party and we have tons of it, then an on-premise or private cloud infrastructure is the better choice. By building your own cloud in your own data center, you have more control over your data.

1. Compute (Nova)
OpenStack Compute (Nova) is a cloud computing fabric controller, which is the main part of an IaaS system. It is designed to manage and automate pools of computer resources and can work with widely available virtualization technologies. KVM, VMware, and Xen are available choices for hypervisor technology (virtual machine monitor), together with Hyper-V and Linux container technology such as LXC.[59][60]

2. Networking (Neutron)
OpenStack Networking (Neutron) is a system for managing networks and IP addresses. OpenStack Networking provides networking models for different applications or user groups. Standard models include flat networks or VLANs that separate servers and traffic. OpenStack Networking manages IP addresses, allowing for dedicated static IP addresses. Floating IP addresses let traffic be dynamically rerouted to any resource in the IT infrastructure, so users can redirect traffic during maintenance or in case of a failure.

3. Block storage (Cinder)
OpenStack Block Storage (Cinder) provides persistent block-level storage devices for use with OpenStack compute instances. The block storage system manages the creation, attaching, and detaching of block devices to servers. Block storage volumes are fully integrated into OpenStack Compute and the Dashboard, allowing cloud users to manage their own storage needs.

4. Authentication (Keystone)
OpenStack Identity (Keystone) provides a central directory of users mapped to the OpenStack services they can access. It acts as a common authentication system across the cloud operating system and can integrate with existing backend directory services such as LDAP (Lightweight Directory Access Protocol).

5. Image (Glance)
OpenStack Image (Glance) provides discovery, registration, and delivery services for disk and server images. Stored images can be used as templates. Glance can also be used to store and catalog an unlimited number of backups. The Image Service can store disk and server images in a variety of back-ends, including Swift. The Image Service API provides a standard REST interface for querying information about disk images and lets clients stream the images to new servers.

6. Object storage (Swift)
● OpenStack Object Storage (Swift) is a scalable, redundant storage system. Objects and files are written to multiple disk drives spread throughout servers in the data center, with the OpenStack software responsible for ensuring data replication and integrity across the cluster.
● Storage clusters scale horizontally simply by adding new servers. Should a server or hard drive fail, OpenStack replicates its content from other active nodes to new locations in the cluster.

7. Dashboard (Horizon)
OpenStack Dashboard (Horizon) provides administrators and users with a graphical interface to access, provision, and automate the deployment of cloud-based resources. The design accommodates third-party products and services, such as billing, monitoring, and additional management tools. The dashboard is also brandable for service providers and other commercial vendors who want to make use of it. The dashboard is one of several ways users can interact with OpenStack resources. Developers can automate access or build tools to manage resources using the native OpenStack API or the EC2 compatibility API.

8. Cloud template (Heat)
Heat is a service to orchestrate multiple composite cloud applications using templates, through both an OpenStack-native REST API and a CloudFormation-compatible Query API.

9. Telemetry (Ceilometer)
OpenStack Telemetry (Ceilometer) provides a single point of contact for billing systems, supplying all the counters they need to establish customer billing across all current and future OpenStack components.
● The delivery of counters is traceable and auditable, the counters must be easily extensible to support new projects, and agents doing data collection should be independent of the overall system.
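To show how these components cooperate, here is a hedged sketch of booting a VM with the unified OpenStack CLI: Glance stores the image, Neutron provides the network, and Nova launches the server. The resource names, image file, and flavor are illustrative assumptions:

# Upload an image (Glance), create a network and subnet (Neutron),
# then boot a server on them (Nova).
openstack image create --file cirros.img --disk-format qcow2 cirros
openstack network create demo-net
openstack subnet create demo-subnet --network demo-net --subnet-range 10.0.0.0/24
openstack server create --image cirros --flavor m1.small \
    --network demo-net demo-vm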
Introduction
One of the great things about OpenStack is all the options you have for deploying it – from homebrew to hosted OpenStack to vendor appliances to OpenStack-as-a-service. Previously, Platform9 published a tech guide comparing various OpenStack deployment models. If you opt for a do-it-yourself (DIY) approach, then you face the question of which tool to use. This guide will familiarize you with the landscape of OpenStack installation tools, including an overview of the most popular ones: DevStack, RDO Packstack, OpenStack-Ansible, Fuel and TripleO.

OpenStack Architecture Overview
If you're new to OpenStack it may be helpful to review the OpenStack components. (Skip this section if you're already familiar with OpenStack.) OpenStack's design, inspired by Amazon Web Services (AWS), has well-documented REST APIs that enable a self-service, elastic Infrastructure-as-a-Service (IaaS) cloud. In addition, OpenStack is fundamentally agnostic to the underlying infrastructure and integrates well with various compute, virtualization, network and storage technologies.

How to Choose an OpenStack Deployment Model
The primary question that drives the choice of deployment model is whether your IT team has the expertise, and the inclination, to install and manage OpenStack. Depending on the desire to host your own infrastructure and avoid vendor lock-in, various deployment models are available. If there's a need for a high degree of customization, along with the flexibility to choose hypervisors, then a DIY install is probably the best option (see the highlighted section of the flowchart on the previous page).
If you choose a DIY install, there's a wide choice of open source tools that are very easy to use and can create environments for use in development, testing or production. These tools can deploy OpenStack on bare metal, virtual machines or even containers. Some even install OpenStack in a production-grade, highly available architecture. But which tools are best suited for your requirements? Read on for an overview of some of the most popular tools, followed by a handy comparison matrix to summarize the options. More detailed documentation is available on each tool's dedicated website.¹

OpenStack Installation: DevStack
DevStack is a series of extensible scripts used to quickly bring up a complete OpenStack environment suitable for non-production use. It's used interactively as a development environment. Since DevStack installs all-in-one OpenStack environments, it can be used to deploy OpenStack on a single VM, a physical server or a single LXC container. Each option is suitable depending on the hardware capacity available and the degree of isolation required. A multi-node OpenStack environment can also be deployed using DevStack, but that's not a thoroughly tested use case.
For either kind of setup, the steps involve installing a minimal version of one of the supported Linux distributions and downloading the DevStack Git repository. The repo contains a script stack.sh that must be run as a non-root user and will perform the complete install based on configuration settings.
The officially approved and tested Linux distributions are Ubuntu (LTS plus current dev release), Fedora (latest and previous release) and CentOS/RHEL 7 (latest major release). The supported databases are MySQL and PostgreSQL. RabbitMQ and Qpid are the recommended messaging services, along with Apache as the web server.

For either kind of setup, the steps involve installing a minimal version of one of the supported Linux distributions and downloading the DevStack Git repository. The repo contains a script, stack.sh, that must be run as a non-root user and performs the complete install based on configuration settings.

The officially approved and tested Linux distributions are Ubuntu (LTS plus the current development release), Fedora (latest and previous releases) and CentOS/RHEL 7 (latest major release). The supported databases are MySQL and PostgreSQL. RabbitMQ and Qpid are the recommended messaging services, with Apache as the web server. The setup defaults to a FlatDHCP network using Nova Network, or a similar configuration in Neutron.

The default services configured by DevStack are Keystone, Swift, Glance, Cinder, Nova, Nova Networking, Horizon and Heat. DevStack supports a plugin architecture to include additional services that are not included directly in the install.

Summary of the Installation Process
1. Install one of the supported Linux distributions.
2. Download DevStack from git:
   git clone https://git.openstack.org/openstack-dev/devstack
3. Make any desired changes to the configuration.
4. Add a non-root user, with sudo enabled, to run the install script:
   devstack/tools/create-stack-user.sh; su stack
5. Run the install and go grab a coffee:
   cd devstack
   ./stack.sh

Configuration Options
DevStack provides a number of configuration options that can be modified as needed. The sections below summarize some of the important ones.

local.conf
DevStack configuration is modified via the file local.conf. It's a modified .ini format file that introduces a meta-section header to carry additional information regarding the configuration files to be changed. The header has the form [[ <phase> | <config-file-name> ]], where <phase> is one of a set of phase names defined by stack.sh and <config-file-name> is the configuration filename. If the path of the config file does not exist, it is skipped. The file is processed strictly in sequence and any repeated settings will override previous values.

The defined phases are:
• local – extracts localrc from local.conf before stackrc is sourced
• post-config – runs after the layer 2 services are configured and before they are started
• extra – runs after services are started and before any files in extra.d are executed
• post-extra – runs after files in extra.d are executed

A specific meta-section, local|localrc, is used to provide a default localrc file. This allows all custom settings for DevStack to be contained in a single file. If localrc exists, it will be used instead to preserve backward compatibility. For example:

[[post-config|$NOVA_CONF]]
[DEFAULT]
use_syslog = True

[osapi_v3]
enabled = False

[[local|localrc]]
FIXED_RANGE=10.20.30.40/29
ADMIN_PASSWORD=secret
LOGFILE=$DEST/logs/stack.sh.log

openrc
openrc configures login credentials suitable for use with the OpenStack command-line tools. openrc sources stackrc at the beginning in order to pick up HOST_IP and/or SERVICE_HOST to use in the endpoints. The values shown below are the defaults:

OS_PROJECT_NAME=demo
OS_USERNAME=demo
OS_PASSWORD=secret  # the usual cautions about putting passwords in environment variables apply
HOST_IP=127.0.0.1   # typically set in the localrc section
SERVICE_HOST=$HOST_IP
OS_AUTH_URL=http://$SERVICE_HOST:5000/v2.0
# Commented out by default:
# export KEYSTONECLIENT_DEBUG=1
# export NOVACLIENT_DEBUG=1

Minimal Configuration
While stack.sh can run without a localrc section in local.conf, it's easier to repeat installs by setting a few minimal variables. Below is an example of a minimal configuration for values that are often modified. (Note: if the *_PASSWORD variables are not set, the install script will prompt for values.) It provides:
• No logging
• Pre-set passwords to prevent interactive prompts
• Network ranges moved away from the local network
• The host IP set explicitly, for when detection is unreliable

[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
#FIXED_RANGE=172.31.1.0/24
#FLOATING_RANGE=192.168.20.0/25
#HOST_IP=10.3.4.5
Service Repositories
The Git repositories used to check out the source for each service are controlled by a pair of variables set for each service. *_REPO points to the repository and *_BRANCH selects which branch to check out. These may be overridden in local.conf to pull source from a different repo. GIT_BASE points to the primary repository server.
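As an illustration, a minimal localrc sketch that pulls Nova from an alternate repository might look like the following (the fork URL and branch here are hypothetical, not from the original guide):

[[local|localrc]]
# Override the default $GIT_BASE-derived location for a single service
NOVA_REPO=https://github.com/example/nova.git
NOVA_BRANCH=stable/mitaka
# GIT_BASE itself can also be redirected, e.g. to a local mirror
GIT_BASE=https://git.openstack.org

All other services keep their defaults; only the overridden variables change where stack.sh clones from.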
Logging
By default, stack.sh output is only written to the console where it runs. It can be sent to a file, in addition to the console, by setting LOGFILE to the fully qualified name of the destination log file. Old log files are cleaned automatically if LOGDAYS is set to the number of days of log files to keep.

DevStack will log the stdout output of the services it starts. When using screen, it logs the output in the screen windows to a file. Without screen, it simply redirects stdout of the service process to a file in LOGDIR. Some of the project logs will be colorized by default; this can be turned off as shown below.

Logging all services to a single syslog can be convenient. If the destination log host is not localhost, the settings below can be used to direct the message stream to the log host.

DEST=/opt/stack
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=1
LOG_COLOR=False
LOGDIR=$DEST/logs
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516

Database Backend
The available databases are defined in the lib/databases directory. MySQL is the default database but can be replaced in the localrc section:

disable_service mysql
enable_service postgresql

Messaging Backend
Support for RabbitMQ is included. Additional messaging backends may be available via external plugins. Enabling or disabling RabbitMQ is handled via the usual service functions:

disable_service rabbit

Apache Frontend
The Apache web server can be enabled for WSGI services that support being deployed under HTTPD + mod_wsgi. Each service that can be run under HTTPD + mod_wsgi also has an override toggle available that can be set:

KEYSTONE_USE_MOD_WSGI="True"
NOVA_USE_MOD_WSGI="True"

Clean Install
By default, stack.sh only clones the project repos if they do not exist in $DEST. This can be overridden as below, which avoids having to manually remove repos to get the current branch from $GIT_BASE:

RECLONE=yes

Guest Images
Images listed in the comma-separated IMAGE_URLS variable will be downloaded and uploaded to Glance by DevStack. Default guest images are predefined for each type of hypervisor and their testing requirements in stack.sh, and can be overridden as below:

DOWNLOAD_DEFAULT_IMAGES=False
IMAGE_URLS="http://pf9.com/image1.qcow,"
IMAGE_URLS+="http://pf9.com/image2.qcow"

Instance Type
DEFAULT_INSTANCE_TYPE can be used to configure the default instance type. When this parameter is not specified, DevStack creates additional micro and nano flavors for really small instances to run Tempest tests:

DEFAULT_INSTANCE_TYPE=m1.tiny

Cinder
The logical volume group, logical volume name prefix and the size of the volume backing file are set as below:

VOLUME_GROUP="stack-volumes"
VOLUME_NAME_PREFIX="volume-"
VOLUME_BACKING_FILE_SIZE=10250M

Disable Identity API v2
The Identity API v2 is deprecated as of Mitaka and it is recommended to use only the v3 API:

ENABLE_IDENTITY_V2=False

Tempest
If Tempest has been successfully configured, a basic set of smoke tests can be run as below:

$ cd /opt/stack/tempest
$ tox -efull tempest.scenario.test_network_basic_ops

Things to Consider
DevStack is optimized for ease of use, making it less suitable for highly customized installations. DevStack supplies a monolithic installer script that installs all the configured modules. To add or remove modules, the whole environment must be torn down using unstack.sh; the updated configuration is then installed by re-running stack.sh.

DevStack installs OpenStack modules in a development environment, which is very different from a typical production deployment. It's not possible to mix and match components in a production configuration with others in a development configuration. In DevStack, dependencies are shared among all the modules, so the simple action of syncing the dependencies for one module may unintentionally update several other modules. DevStack is popular with developers working on OpenStack, most typically to test changes and verify they work in a running OpenStack deployment. Since it's easy to use, DevStack is ideal for setting up an OpenStack environment for demos or proof of concept (POC). For production-grade installs, other tools are more appropriate (see OpenStack-Ansible, Fuel or TripleO).
OpenStack Installation: RDO Packstack
The 2016 OpenStack survey report asked what tools are being used to deploy OpenStack. Puppet was at the top of the list, and Ansible came in a close second. RDO Packstack is a Puppet-based utility to install OpenStack. RDO is the Red Hat distribution of OpenStack, and it packages the OpenStack components for Fedora-based Linux.

Prerequisites for Packstack
Packstack is based on OpenStack Puppet modules. It's a good option when installing OpenStack for a POC, or when all OpenStack controller services can be installed on a single node. Packstack defines OpenStack resources declaratively and sets reasonable default values for all settings that are essential to installing OpenStack. The settings can be read or modified in a file, called the answer file in Packstack.

Packstack runs on RHEL 7 or later versions and the equivalent version of CentOS. The machine where Packstack will run needs at least 4GB of memory, at least one network adapter and an x86 64-bit processor with hardware virtualization extensions.
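To illustrate the answer-file workflow, a typical sequence (a sketch; the filename is arbitrary) is to generate the file, edit the settings of interest, and feed it back to the installer:

$ packstack --gen-answer-file=answers.txt
$ vi answers.txt   # adjust the CONFIG_* settings of interest before installing
$ packstack --answer-file=answers.txt

Running with --allinone, as shown below, simply generates the same defaults and applies them in one step.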
Install RDO Repository
To install OpenStack, first download the RDO repository rpm and install it.

On RHEL:
$ sudo yum install -y https://rdoproject.org/repos/rdo-release.rpm

On CentOS:
$ sudo yum install -y centos-release-openstack-mitaka

Install OpenStack
Install the Packstack installer and then run packstack to install OpenStack on a single node:

$ sudo yum install -y openstack-packstack
$ packstack --allinone

Once the installer completes, verify the installation by logging in at http://${YourIp}/dashboard.

Things to Consider
During the early days of our product development, Platform9 used Packstack to perform around 400 setups in a day. At this volume, the performance was not reliable and there were random timeouts. It was difficult to investigate deployment errors. In addition, it was non-trivial to customize the scripts to build and deploy our custom changes. In general, it is probably best to use Packstack for installing OpenStack on a single node during a POC, when there isn't a need to customize the install process.

OpenStack Installation: OpenStack-Ansible
Ansible is one of the top choices to deploy OpenStack. OpenStack-Ansible (OSA) deploys a production-capable OpenStack environment using Ansible and LXC containers. This approach isolates the various OpenStack services into their own containers and makes it easier to install and update OpenStack.

What is OpenStack-Ansible Deployment (OSAD)?
OSAD is a source-based installation of OpenStack, deployed via Ansible playbooks. It deploys OpenStack services in LXC containers for complete isolation between the components and services hosted on a node. OSAD is well suited for deploying production environments.

Ansible requires only SSH and Python to be available on the target host; no client or agents are installed. This makes it very easy to run Ansible playbooks to manage environments of any size or type. There are a large number of existing Ansible modules for overall Linux management, and OpenStack-Ansible playbooks can be written against the OpenStack APIs or Python CLIs.

Deployment Prerequisites
The host that will run OpenStack-Ansible needs at least 16GB of RAM and 80GB of disk space, and must run Ubuntu 14.04 or newer. It is recommended that all nodes hosting the Nova compute service have multi-core processors with hardware-assisted
virtualization extensions. All other infrastructure nodes should have multi-core processors for best performance.

Disk Requirements
• Deployment hosts – 10GB of disk space for the OpenStack-Ansible repository content and other software.
• Compute hosts – At least 100GB of disk space available, on disks with higher throughput and lower latency.
• Storage hosts – At least 1TB of disk space, on disks with the highest I/O throughput and the lowest latency.
• Infrastructure hosts – At least 100GB of disk space for the services in the OpenStack control plane.
• Logging hosts – At least 50GB of disk space for storing logs, with enough storage performance to keep up with the log traffic.
• Hosts that provide Block Storage (Cinder) volumes must have logical volume manager (LVM) support and a volume group named cinder-volumes.

Network Requirements
• Bonded network interfaces – increase performance and reliability.
• VLAN offloading – increases performance by adding and removing VLAN tags in hardware.
• 1Gb or 10Gb Ethernet – supports higher network speeds and may also improve storage performance for Cinder.
• Jumbo frames – increase network performance by allowing more data to be sent in each packet.

Software Requirements
• Ubuntu 14.04 LTS or newer
• Linux kernel > v3.13.0-34-generic
• Secure Shell (SSH) client and server
• NTP client for time synchronization
• Python 2.7 or later

Installation Workflow
Once these prerequisites are met, proceed to the actual steps of the installation. At a high level, the steps required are:
1. Prepare the deployment host
2. Prepare the target hosts
3. Configure the deployment
4. Run the foundation playbooks
5. Run the infrastructure playbooks
6. Run the OpenStack playbooks

Let's look at each step in detail below.

Prepare Deployment Host
The deployment host contains Ansible and orchestrates the installation on the target hosts. It requires Ubuntu Server 14.04 LTS 64-bit, and at least one network interface must be configured to access the Internet or suitable local repositories.
• Install the required utilities as shown below:
  $ apt-get install aptitude build-essential git ntp ntpdate openssh-server python-dev sudo
• Configure NTP to synchronize with a suitable time source.
• Configure the network so that the deployment host is on the same network designated for container management.
• Clone the OSA repository and bootstrap Ansible:
  $ git clone -b VERSION https://github.com/openstack/openstack-ansible.git /opt/openstack-ansible
  $ scripts/bootstrap-ansible.sh
• Configure SSH keys, as sketched below.
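Ansible reaches the target hosts over SSH, so the deployment host's public key must be authorized on each target. A minimal sketch (the host name is a placeholder, and your key paths may differ):

$ ssh-keygen -t rsa -b 4096        # accept the defaults to create ~/.ssh/id_rsa
$ ssh-copy-id root@<target-host>   # repeat for every target host

With key-based access in place, the playbooks can run unattended against all hosts.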
Prepare Target Hosts
OSA recommends at least five target hosts to contain the OpenStack environment and supporting infrastructure for the installation process. On each target host, perform the tasks below:
• Name the target hosts
• Install the operating system
• Generate and set up security measures
• Update the operating system and install additional software packages
• Create LVM volume groups
• Configure networking devices

Configure Deployment
Ansible configuration files have to be updated to define the target environment attributes before running the Ansible playbooks. Perform the following tasks:
• Configure target host networking to define bridge interfaces and networks
• Configure a list of target hosts on which to install the software
• Configure virtual and physical network relationships for OpenStack Networking (Neutron)
• Optionally, configure the hypervisor and the Cinder service
• Configure passwords for all services (see the sketch after this list)
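In the OSA releases of this era, service passwords are kept in /etc/openstack_deploy/user_secrets.yml, and the repository ships a helper to fill them with random values. A sketch, assuming that helper script is present in your checked-out version:

$ cd /opt/openstack-ansible/scripts
$ python pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml

Every empty key in the file is populated with a generated secret, which the playbooks then distribute to the service containers.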
Run Foundation Playbooks
This step prepares the target hosts for the infrastructure and OpenStack services by doing the following:
• Perform deployment host initial setup
• Build containers on target hosts
• Restart containers on target hosts
• Install common components into containers on target hosts

• cd to /opt/openstack-ansible/playbooks
  $ openstack-ansible setup-hosts.yml
• Deploy HAProxy:
  $ openstack-ansible haproxy-install.yml

Run Infrastructure Playbooks
The main Ansible infrastructure playbook installs the infrastructure services and performs the following operations:
• Installs Memcached and the repository server
• Installs Galera and RabbitMQ
• Installs and configures rsyslog

• cd to /opt/openstack-ansible/playbooks
  $ openstack-ansible setup-infrastructure.yml
• Confirm success with zero items unreachable or failed:
  PLAY RECAP *********************************************************
  deployment_host : ok=XX changed=0 unreachable=0 failed=0

Run OpenStack Playbooks
Finally, this step installs the OpenStack services as configured, in this order: Keystone, Glance, Cinder, Nova, Heat, Horizon, Ceilometer, Aodh, Swift, Ironic.
• cd to /opt/openstack-ansible/playbooks
  $ openstack-ansible setup-openstack.yml

Verify the Install
Since OpenStack can be consumed via either the APIs or the UI, you'll need to verify both after the install steps above complete successfully.

Verify the OpenStack APIs
The utility container provides a CLI environment for additional configuration and testing.
• Determine the utility container name:
  $ lxc-ls | grep utility
  XX_utility_container_YY
• Access the utility container:
  $ lxc-attach -n XX_utility_container_YY
• Source the admin tenant credentials:
  $ source /root/openrc
• Run an OpenStack command that uses one or more APIs. For example:
  $ openstack user list
  +----------------------------------+--------------------+
  | ID                               | Name               |
  +----------------------------------+--------------------+
Verify the UI Dashboard
• With a web browser, access the dashboard using the external load balancer IP address defined by the external_lb_vip_address option in the /etc/openstack_deploy/openstack_user_config.yml file.
• Authenticate with the admin username and the password defined by the keystone_auth_admin_password option in the file /etc/openstack_deploy/user_variables.yml.

Benefits of OpenStack-Ansible Deployment
• No dependency conflicts among services, due to the container-based architecture. Updating a service with new dependencies doesn't affect other services.
• Redundant services are deployed even on a single-node install. Galera, RabbitMQ and Keystone are deployed with redundancy, and HAProxy is installed on the host.
• Easy to do local updates or repairs to an existing installation. Ansible can destroy a container and regenerate one with a newer version of the service.
• Mix and match services by using development packages on some, while keeping the rest configured for production use.

Things to Consider
OSAD is easy to install on a single node for a POC, yet it is robust enough for a production install. Due to the containerized architecture, it is easy to upgrade services individually or all at the same time. Compared to Puppet, Ansible playbooks are easier to customize. Despite all this ease, it is still non-trivial to investigate deployment errors due to the volume of logs.

OpenStack Installation: Fuel
Fuel is an open source tool that simplifies and accelerates the initial deployment of OpenStack environments and facilitates their ongoing management. Fuel deploys an OpenStack architecture that is highly available and load balanced. It provides REST APIs, which can be accessed via a graphical or a command line interface, to provision, configure and manage OpenStack environments.

Fuel deploys a master node and multiple slave nodes. The master node is a server with the installed Fuel application that performs initial configuration, provisioning, and PXE booting of the slave nodes, as well as assigning IP addresses to the slave nodes. The slave nodes are servers provisioned by the master node. A slave node can be a controller, compute, or storage node.

This section describes how to install Fuel on Oracle VirtualBox and use it to deploy the Mirantis OpenStack environment. With the default configurations, such an environment is suitable for testing or a quick demo. For a production environment, the configuration must specify network topology and IPAM, storage, the number, type and flavor of service nodes, monitoring, any Fuel plug-ins, etc.

Fuel Installation Prerequisites
The environment must meet the following software prerequisites:
• A 64-bit host operating system with at least 8 GB RAM and 300 GB of free space, with virtualization enabled in the BIOS
• Access to the Internet or to a local repository containing the required files
• Oracle VirtualBox
• Oracle VM VirtualBox Extension Pack
• Mirantis OpenStack ISO
• Mirantis VirtualBox scripts with a version matching that of Mirantis OpenStack
• The latest versions of VirtualBox work with these specific versions or newer: Ubuntu Linux 12, Fedora 19, OpenSUSE 12.2 and Microsoft Windows x64 with cygwin x64. MacOS 10.7.5 requires VirtualBox 4.3.x.

Overview of the Installation Process
Mirantis provides VirtualBox scripts that include configurations for the virtual machine network and hardware settings. The script
provisions the virtual machines with all required settings automatically. The steps involved in the process are:
1. Install Oracle VirtualBox and the Oracle VM VirtualBox Extension Pack.
2. Download the Mirantis OpenStack ISO and place it in a directory named iso.
3. Download the Mirantis VirtualBox scripts.
4. Modify the config.sh script to specify parameters that automate the Fuel installation; for example, specify the number of virtual nodes to create, as well as how much memory, storage and CPU to allocate to each machine. The parameter names are listed below, along with their default values in parentheses (see the sketch after this list):
   ◦ vm_master_memory_mb (1536)
   ◦ vm_master_disk_mb (65 GB)
   ◦ vm_master_nat_network (192.168.200.0/24)
   ◦ vm_master_ip (10.20.0.2)
   ◦ vm_master_username (root)
   ◦ vm_master_password (r00tme)
   ◦ cluster_size
   ◦ vm_slave_cpu (1)
   ◦ vm_slave_memory_mb (1536 MB if the host system has 8 GB; 2048 MB if the host system has 16 GB)
5. Run one of the launch.sh, launch_8GB.sh or launch_16GB.sh scripts, depending on the amount of memory on the computer. Each script creates one Fuel master node; the slave nodes differ for each script:
   ◦ launch.sh – one slave node with 2048 MB RAM and two slave nodes with 1024 MB RAM each
   ◦ launch_8GB.sh – three slave nodes with 1536 MB RAM each
   ◦ launch_16GB.sh – five slave nodes with 2048 MB RAM each
6. The script installs the Fuel master node on VirtualBox and may take up to 30 minutes to finish.
7. Once the launch script completes, access the Fuel web UI to create an OpenStack environment as shown in the section below.
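Step 4 can be made concrete with a short excerpt. The values below are purely illustrative choices for a 16 GB host (this is a sketch of edits to config.sh, not the full script):

# config.sh -- selected settings
vm_master_memory_mb=2048
vm_master_ip=10.20.0.2
cluster_size=4
vm_slave_cpu=2
vm_slave_memory_mb=2048

The launch script in step 5 reads these variables when it creates the master and slave VMs.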
Create a New OpenStack Environment
After the Fuel master node is installed, the slave nodes appear as unallocated nodes in the web UI. Now the OpenStack environment can be created, configured and deployed. A single Fuel master node can deploy and manage multiple OpenStack environments, but each environment must be created separately.

To create an OpenStack environment:
1. Access the Fuel web UI at http://10.20.0.2:8443.
2. Log in to the Fuel web UI as admin. The default password is the same as set earlier.
3. Click New OpenStack environment to start the deployment wizard.
4. In the Name and Release screen, type a name for the OpenStack environment, then select an OpenStack release and an operating system on which to deploy the environment.
5. In the Compute screen, select a hypervisor. By default, Fuel uses QEMU with KVM acceleration.
6. In the Networking Setup screen, select a network topology. By default, Fuel deploys Neutron with VLAN segmentation.
7. In the Storage Backends screen, select the appropriate options. By default, Fuel deploys Logical Volume Management (LVM) for Cinder, local disk for Swift, and Swift for Glance.
8. In the Additional Services screen, select any additional OpenStack programs to deploy.
9. In the Finish screen, click Create. Fuel now creates an OpenStack environment. Before using the environment, follow the UI options to add nodes, verify network settings, and complete other configuration tasks.
Things to Consider
Fuel makes it very easy to install a test OpenStack environment using Oracle VirtualBox. The automated script will spin up a master node and configure and deploy the slave nodes that host compute, storage and other OpenStack services. Using Fuel you can also deploy multiple production-grade, highly available OpenStack environments on virtual or bare metal hardware. Fuel can be used to configure and verify network configurations, test interoperability between the OpenStack components, and easily scale the OpenStack environment by adding and removing nodes.

OpenStack Installation: TripleO
TripleO is short for "OpenStack on OpenStack," an official OpenStack project for deploying and managing a production cloud onto bare metal hardware. The first "OpenStack" in the name refers to an operator-facing deployment cloud called the Undercloud. This contains the necessary OpenStack components to deploy and manage a tenant-facing workload cloud called the Overcloud, the second "OpenStack" in the name.

The Undercloud server is a basic single-node OpenStack installation running on a single physical server, used to deploy, test, manage and update the Overcloud servers. It contains a strictly limited subset of OpenStack components, just enough to interact with the Overcloud. The Overcloud is the deployed solution and can represent a cloud for dev, test, production, etc. The Overcloud is the functional cloud available to run guest virtual machines and workloads.
[Figure: TripleO node roles and the OpenStack services each runs. The Undercloud node runs Horizon, Ceilometer, MariaDB, Nova, Heat, Keystone, Ironic, Neutron, Glance, RabbitMQ, Ceph-mon and the OpenStack clients. The Overcloud comprises a Controller node (Horizon, Glance, MariaDB, Keystone, Ceilometer, Nova API, Neutron server, Cinder API, Cinder volume, Swift proxy, Heat API, Heat engine, RabbitMQ, OpenStack clients, Neutron Open vSwitch agent), a Compute node (Nova compute with KVM, Ceilometer agent, Neutron Open vSwitch agent), a Block Storage node (Cinder volume, Ceilometer agent, Neutron Open vSwitch agent), an Object Storage node (Swift storage, Ceilometer agent, Neutron Open vSwitch agent) and a Ceph Storage node (Ceph-OSD).]

Overview of the Installation Process
1. Prepare the bare metal or virtual environment.
2. Install the Undercloud.
3. Prepare images and flavors for the Overcloud.
4. Deploy the Overcloud.

Prepare the Bare Metal or Virtual Environment
At a minimum, TripleO needs one environment for the Undercloud and one each for the Overcloud Controller and Compute. All three environments can be virtual machines, in which case each needs 4GB of memory and 40GB of disk space. If all three environments are completely on bare metal, each needs a multi-core CPU with 4GB of memory and 60GB of free disk space. For each additional Overcloud role, such as Block Storage or Object Storage, an additional bare metal machine is required. TripleO supports the following operating systems: RHEL 7.1 x86_64 or CentOS 7 x86_64.

The steps below are for a completely virtualized environment.
1. Install RHEL 7.1 Server x86_64 or CentOS 7 x86_64 on the host machine.
2. Make sure the sshd service is installed and running.
3. The user performing all of the installation steps on the virt host needs to have sudo enabled. If required, use the following commands to create a new user called stack with password-less sudo enabled. Do not run the rest of the steps in this guide as root.
   sudo useradd stack
   sudo passwd stack  # specify a password
   echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
   sudo chmod 0440 /etc/sudoers.d/stack
4. Enable the needed repositories:
   ◦ Enable epel:
     sudo yum -y install epel-release
   ◦ Enable the last known good RDO Trunk Delorean repository for the core OpenStack packages:
     sudo curl -o /etc/yum.repos.d/delorean.repo http://trunk.rdoproject.org/centos7/current-tripleo/delorean.repo
   ◦ Enable the latest RDO Trunk Delorean repository only for the TripleO packages:
     sudo curl -o /etc/yum.repos.d/delorean-current.repo http://trunk.rdoproject.org/centos7/current/delorean.repo
     sudo sed -i 's/\[delorean\]/\[delorean-current\]/' /etc/yum.repos.d/delorean-current.repo
     sudo /bin/bash -c "cat <<EOF>>/etc/yum.repos.d/delorean-current.repo
     includepkgs=diskimage-builder,instack,instack-undercloud,os-apply-config,os-cloud-config,os-collect-config,os-net-config,os-refresh-config,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tripleo,openstack-tripleo-puppet-elements,openstack-puppet-modules
     EOF"
   ◦ Enable the Delorean Deps repository:
     sudo curl -o /etc/yum.repos.d/delorean-deps.repo http://trunk.rdoproject.org/centos7/delorean-deps.repo
5. Install the Undercloud:
   sudo yum install -y instack-undercloud
6. The virt setup automatically sets up a VM for the Undercloud, installed with the same base OS as the host:
   instack-virt-setup
7. When the script has completed successfully, it will output the IP address of the VM that has now been installed with a base OS.
8. You can ssh to the VM as the root user:
   ssh root@<instack-vm-ip>
9. The VM contains a stack user to be used for installing the Undercloud. You can su - stack to switch to the stack user account.

Install Undercloud
1. Log in to the machine where you want to install the Undercloud as a non-root user:
   ssh <non-root-user>@<undercloud-machine>
2. Enable the needed repositories, using the same commands as in the section above on preparing the environment.
3. Install the yum-plugin-priorities package so that the Delorean repository takes precedence over the main RDO repositories:
   sudo yum -y install yum-plugin-priorities
4. Install the TripleO CLI, which will pull in all other necessary packages as dependencies:
   sudo yum install -y python-tripleoclient
5. Run the command to install the Undercloud:
   openstack undercloud install

Once the install has completed, take note of the files stackrc and undercloud-passwords.conf. You can source stackrc to interact with the Undercloud via the OpenStack command-line client; undercloud-passwords.conf contains the passwords used for each service in the Undercloud.

Prepare Images and Flavors for the Overcloud
1. Log in to your Undercloud virtual machine as the non-root user:
   ssh root@<undercloud-machine>
   su - stack
2. In order to use CLI commands easily, source the needed environment variables:
   source stackrc
3. Choose the image operating system: the built images will automatically have the same base OS as the running Undercloud. To choose a different OS, set NODE_DIST to 'centos7' or 'rhel7'.
4. Install the current-tripleo Delorean repo and deps repo into the Overcloud images:
   export USE_DELOREAN_TRUNK=1
   export DELOREAN_TRUNK_REPO="http://trunk.rdoproject.org/centos7/current-tripleo/"
   export DELOREAN_REPO_FILE="delorean.repo"
5. Build the required images:
   openstack overcloud image build --all
6. Load the images into the Undercloud Glance:
   openstack overcloud image upload
7. Register and configure nodes for your deployment with Ironic. The file to be imported may be in JSON, YAML or CSV format:
   openstack baremetal import instackenv.json
8. Introspect the hardware attributes of the nodes:
   openstack baremetal introspection bulk start
9. Introspection has to finish without errors. The process can take up to 5 minutes for VMs and up to 15 minutes for bare metal.
10. Create flavors, i.e. node profiles. The Undercloud will have a number of default flavors created at install time. In most cases these flavors do not need to be modified. By default, all Overcloud instances will be booted with the baremetal flavor, so all bare metal nodes must have at least as much memory, disk and CPU as that flavor. In addition, there are profile-specific flavors created which can be used with the profile-matching feature.
Deploy Overcloud
Overcloud nodes can have a nameserver configured in order to resolve hostnames via DNS. The nameserver is defined in the Undercloud's neutron subnet. If needed, define the nameserver to be used for the environment:
1. List the available subnets, and set the nameserver on the appropriate one:
   neutron subnet-list
   neutron subnet-update <subnet-uuid> --dns-nameserver <nameserver-ip>
2. By default, 1 compute and 1 control node will be deployed, with networking configured for the virtual environment. To customize this, see the output of:
   openstack help overcloud deploy
3. Run the deploy command, including any additional parameters as necessary:
   openstack overcloud deploy --templates [additional parameters]
4. When deploying the Compute node in a virtual machine, add --libvirt-type qemu, otherwise launching instances on the deployed Overcloud will fail. This command uses Heat to deploy templates. In turn, Heat uses Nova to identify and reserve the appropriate nodes. Nova uses Ironic to start up the nodes and install the correct images. Finally, the services on the nodes of the Overcloud are registered with Keystone.
5. To deploy the Overcloud with network isolation, bonds, or custom network interface configurations, follow the workflow described in Configuring Network Isolation.
6. openstack overcloud deploy generates an overcloudrc file, appropriate for interacting with the deployed Overcloud, in the current user's home directory. To use it, simply source the file.
7. To return to working with the Undercloud, source the stackrc file again.
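A sketch of switching between the two credential files (assuming both were generated into the stack user's home directory, as in the steps above):

source ~/overcloudrc     # subsequent openstack commands target the Overcloud
openstack server list    # e.g. list workload VMs running in the Overcloud
source ~/stackrc         # switch back to managing the Undercloud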
Benefits of TripleO
Since TripleO uses OpenStack components and APIs, it has the following benefits when used to deploy and operate an OpenStack private cloud:
• The APIs are well documented and come with client libraries and command line tools. Users already familiar with OpenStack find it easier to understand TripleO.
• TripleO automatically inherits all the new features, bug fixes and security updates added to the included OpenStack components, which allows more rapid feature development of TripleO itself.
• Tight integration with the OpenStack APIs provides a solid architecture that has been extensively reviewed by the OpenStack community.

Things to Consider
TripleO is one more option to deploy a production-grade OpenStack private cloud. It tries to ease the deployment process by "bootstrapping" it: a subset of OpenStack components is used to build a smaller cloud first. The benefit of this approach is that operators can use familiar OpenStack APIs to deploy the subsequent consumer-facing OpenStack cloud. While not an intuitive approach, it seems to work well for users of the Red Hat distribution of OpenStack.
Chapter 5 – Cloud Management and Security

Data Center & Cloud Management

Data Center
What is a Data Center?
• A data center is a facility that centralizes an organization's IT operations and equipment, and where it stores, manages, and disseminates its data.
• It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and various security devices.

Concerns for Data Centers
• Companies rely on their information systems to run their operations. If a system becomes unavailable, company operations may be impaired or stopped completely.
• Information security is also a concern, and for this reason a data center has to offer a secure environment which minimizes the chances of a security breach.
• A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment. This is accomplished through redundancy of the mechanical cooling and power systems (including emergency backup power generators) serving the data center, along with fiber optic cables.

Example: Telecommunications Industry Association's Telecommunications Infrastructure Standard for Data Centers
• It specifies the minimum requirements for the telecommunications infrastructure of data centers and computer rooms, including:
  ◦ single-tenant enterprise data centers, and
  ◦ multi-tenant Internet hosting data centers.
• The topology proposed in this document is intended to be applicable to any size of data center.

Typical Projects within a Data Center
• Standardization/consolidation: This project helps to reduce the number of hardware and software platforms, tools and processes within a data center. Organizations replace aging data center equipment with newer equipment that provides increased capacity and performance.
• Virtualize: There is a trend to use IT virtualization technologies to replace or consolidate multiple pieces of data center equipment, such as servers. This helps lower energy consumption. The technology is also used to create virtual desktops.
• Automating: Data center automation involves automating tasks such as provisioning, configuration, patching, release management and compliance.
• Securing: In modern data centers, the security of data on virtual systems is integrated with the existing security of physical infrastructures. The security of a modern data center must take into account physical
security, network security, and data and user security.

Data Center Levels and Tiers
Design Considerations
1. Design programming
   Design programming, also known as architectural programming, is the process of researching and making decisions to identify the scope of a design project. Other than the architecture of the building itself, there are three elements to design programming for data centers:
   1. Facility topology design (space planning)
   2. Engineering infrastructure design (mechanical systems such as cooling, and electrical systems including power)
   3. Technology infrastructure design (cable plant)
2. Modeling criteria
   Modeling criteria are used to develop future-state scenarios for space, power, cooling, and costs in the data center. The aim is to create a master plan with parameters such as number, size, location, topology, IT floor system layouts, and power and cooling technology and configurations.
3. Design recommendations
   Design recommendations/plans generally follow the modeling criteria phase. The optimal technology infrastructure is identified and planning criteria are developed, such as critical power capacities.
4. Conceptual design
   Conceptual floor layouts should be driven by IT performance requirements as well as lifecycle costs associated with IT demand, energy efficiency, cost efficiency and availability.
5. Detailed design
   Detailed design is undertaken once the appropriate conceptual design is determined. The detailed design phase should include the detailed architectural, structural, mechanical and electrical information and the specification of the facility.
6. Mechanical engineering infrastructure design
   This involves maintaining the interior environment of a data center, covering heating, ventilation and air conditioning (HVAC); humidification and dehumidification equipment; pressurization; and so on.
7. Electrical engineering infrastructure design
   Its aspects may include utility service planning; distribution, switching and bypass from power sources; uninterruptible power supply (UPS) systems; and more.

Other considerations:
8. Technology infrastructure design
9. Availability expectations
10. Site selection
11. Modularity and flexibility
12. Environmental control
13. Electrical power
14. Low-voltage cable routing
15. Fire protection
16. Security

Data Center Infrastructure Management
Data Center Infrastructure Management (DCIM) is the integration of information technology (IT) and facility management disciplines to centralize:
• monitoring,
• management, and
• intelligent capacity planning of a data center's critical systems.
Achieved through the implementation of specialized software, hardware and sensors, DCIM enables a common, real-time
monitoring and management platform for all interdependent systems across IT and facility infrastructures.

Data Center Services:
• Hardware installation and maintenance
• Managed power distribution
• Backup power systems
• Data backup and archiving
• Managed load balancing
• Controlled Internet access
• Managed e-mail and messaging
• Managed user authentication and authorization
• Diverse firewalls and anti-malware programs
• Managed outsourcing
• Managed business continuance
• Continuous, efficient technical support

Some Issues Faced by Data Centers
• Data centers strive to provide fast, uninterrupted service. Equipment failures, communication or power outages, network congestion and other problems that keep people from accessing their data and applications have to be dealt with immediately. Due to the constant demand for instant access, data centers are expected to run 24/7, which creates a host of issues.

Cloud Management
What is Cloud Management?
• Cloud management is the process of overseeing and managing an organization's cloud computing resources, services, and infrastructure. It can be performed by a company's internal IT team or a third-party service provider.

CLOUD AUTOMATION
Cloud automation reduces the repetitive manual work needed to deploy and manage cloud workloads. Automation is achieved via orchestration, which is the mechanism by which automation is implemented. Ideally, automation and orchestration can reduce complex and time-consuming steps to a single script or click. The idea is to boost operational efficiencies, accelerate application deployment and reduce human error.

Cloud automation refers to processes and tools that reduce or eliminate the manual effort used to provision and manage cloud computing workloads and services. Organizations can apply cloud automation to private, public and hybrid cloud environments.

WHY USE CLOUD AUTOMATION
Cloud automation typically covers tasks such as:
• Sizing, provisioning and configuring resources such as virtual machines (VMs)
• Establishing VM clusters and load balancing
• Creating storage logical unit numbers (LUNs)
• Invoking virtual networks
• The actual cloud deployment
• Monitoring and managing availability and performance

NOTE: To achieve cloud automation, an IT team needs to use orchestration and automation tools that run on top of its virtualized environment.

TYPES OF CLOUD AUTOMATION
Automating various tasks in the cloud removes the repetition, inefficiency and errors inherent in manual processes and intervention.

Resource allocation. Autoscaling -- the ability to scale up and down the use of compute, memory or networking resources to match demand -- is a core tenet of cloud computing. It provides elasticity in resource usage and enables the pay-as-you-go cloud cost model.
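As a concrete illustration, autoscaling bounds can themselves be managed from a script. The sketch below uses the AWS CLI against a hypothetical Auto Scaling group named web-asg (the group name and sizes are assumptions for illustration):

# Raise the capacity limits of an existing Auto Scaling group
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --min-size 2 --max-size 10 --desired-capacity 4

The cloud then adds or removes instances between these bounds as demand changes, which is what enables the pay-as-you-go model described above.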
Configurations. Infrastructure configurations can be defined through templates and code and implemented automatically. In the cloud, opportunities for integration increase with associated cloud services.

Development and deployment. Continuous software development relies on automation for various steps, from code scans and version control to testing and deployment.

Tagging. Assets can be tagged automatically based on specific criteria, context and conditions.

Security. Cloud environments can be set up with automated security controls that enable or restrict access to apps or data, and scan for vulnerabilities and unusual performance levels.

Logging and monitoring. Cloud tools and functions can be set up to log all activity involving services and workloads in an environment. Monitoring filters can be set up to look for anomalies or unexpected events.
Provisioning automation:
• Infrastructure as Code (IaC): Tools like Terraform and AWS CloudFormation allow for automated provisioning of cloud resources.
• Self-service portals: Users can provision resources through a user-friendly interface.

Cost management and optimization: Tools that automate cost monitoring, analysis, and optimization strategies to manage cloud expenses effectively.

Network configuration and management: Automating network setup and management, including VPNs, firewalls, and load balancers.

Workload automation: Automating tasks and workflows that run in the cloud, often using tools like Apache Airflow or AWS Step Functions.
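To make the IaC idea concrete, the following is a minimal sketch of a Terraform working loop (it assumes a directory already containing a valid Terraform configuration; the plan file name is arbitrary):

terraform init              # download the providers referenced by the configuration
terraform plan -out=tfplan  # compute and save the set of changes to apply
terraform apply tfplan      # provision exactly the changes that were reviewed

Because the desired state lives in version-controlled files, re-running the same plan/apply cycle is repeatable and auditable, which is the property that distinguishes IaC from hand-built environments.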
BENEFITS OF CLOUD AUTOMATION
Cloud automation:
• Saves an organization time and money
• Is faster, more secure and more scalable than manually performing tasks
• Causes fewer errors, as organizations can construct more predictable and reliable workflows
• Increases efficiency by enabling continuous deployment and automating bug detection
• Simplifies implementation compared to on-premises platforms, requiring less IT intervention
• Contributes directly to better IT and corporate governance
• Frees IT teams from repetitive and manual administrative tasks to focus on higher-level work that more closely aligns with the organization's business needs, such as integrating higher-level cloud services or developing new product features

Time savings: By automating time-consuming tasks like infrastructure provisioning, cloud automation tools allow human engineers to focus on other activities that require higher levels of expertise and cannot be easily automated.

Faster completion: Cloud automation enables tasks to be completed faster. An IaC tool can set up a hundred servers in minutes using predefined templates, for instance, whereas a human engineer might take several days to complete the same work.

Lower risk of errors: When tasks are automated, the risk of human error or oversight virtually disappears. As long as you properly configure the rules and templates that drive your automation, you will end up with clean environments.

Higher security: By a similar token, cloud automation reduces the risk that a mistake made by an engineer -- such as exposing to the public Internet an internal application that is intended only for internal use -- could lead to security vulnerabilities.

Scalability: Cloud automation is essential for any team that works at scale. It may be possible to manage a small cloud environment -- one that consists of a few virtual machines and storage buckets, for example -- using manual workflows. But if you want to scale up to hundreds of server instances, terabytes of data and thousands of users, cloud automation becomes a must.
CLOUD AUTOMATION CHALLENGES
• Internet connectivity can be all-or-nothing. Public cloud services are built on wide area networks, making the reliability of the connection a major concern and a serious consideration for discussion with the service provider.
• Cloud automation security options are often limited, which can be particularly difficult in highly regulated industries with complex compliance requirements, given the lack of customization and control flexibility.
• Limited access to back-end data can make maintenance burdensome when complex issues arise.
• Platform lock-in can be a risk. The convenience of cloud automation can lead to broad buy-in across the enterprise, with more business processes and operations committed to the platform. And the bigger that commitment, the tougher any future migration to a different platform will be.

DIFFERENCE BETWEEN CLOUD AUTOMATION AND CLOUD ORCHESTRATION
Cloud automation invokes various steps and processes to deploy and manage workloads in the cloud with minimal or no human intervention. Cloud orchestration describes how an administrator codifies and coordinates those automated tasks to occur at specific times and in specific sequences for specific purposes.

Automation refers to automating a single process or a small number of related tasks (e.g., deploying an app). Orchestration refers to managing multiple automated tasks to create a dynamic workflow (e.g., deploying an app, connecting it to a network, and integrating it with other systems).

CLOUD AUTOMATION USE CASES
Some basic examples of cloud automation include the following (a small sketch follows the list):
• Autoprovisioning cloud infrastructure resources
• Shutting down unused instances and processes, mitigating sprawl
• Performing regular data backup
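For instance, the second use case can be a one-line scheduled job. The sketch below uses the AWS CLI and assumes development instances carry a tag env=dev (the tag and the nightly schedule are illustrative assumptions):

# Stop all running instances tagged env=dev (e.g. run nightly from cron)
aws ec2 describe-instances \
  --filters "Name=tag:env,Values=dev" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" --output text \
| xargs -r aws ec2 stop-instances --instance-ids

The same pattern (query for matching resources, pipe the IDs into an action) applies to creating snapshots for regular backups.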
CLOUD AUTOMATION TOOLS
Examples of automation services from public cloud providers include the following:
• AWS Config, AWS CloudFormation and AWS Elastic Compute Cloud (EC2) Systems Manager
• Google Cloud Composer and Google Cloud Deployment Manager
• IBM Cloud Orchestrator
• Microsoft Azure Resource Manager and Microsoft Azure Automation

CONFIGURATION MANAGEMENT TOOLS
• Chef Automate
• HashiCorp Terraform
• Puppet Enterprise
• Red Hat Ansible
• Salt Open Source Software
• SaltStack Enterprise

MULTI-CLOUD MANAGEMENT TOOLS
• CloudBolt Software
• CloudSphere
• Flexera
• Morpheus Data
• Snow Software Inc.
• VMware
• Zscaler

CLOUD INFRASTRUCTURE SECURITY
• Cloud infrastructure security involves protecting the infrastructure that cloud computing services are based on, including both physical and virtual infrastructure.
• Physical infrastructure includes the network infrastructure, servers, and other physical components of cloud data centers, while Infrastructure as a Service (IaaS) offerings — such as virtualized network infrastructure, computing, and storage — comprise the virtual infrastructure made available to cloud users.
• Cloud infrastructure security is a framework for safeguarding cloud resources against internal and external threats. It protects computing environments, applications, and sensitive data from unauthorized access by centralizing authentication and limiting authorized users' access to resources.

CLOUD INFRASTRUCTURE SECURITY GOAL
• The main goal of cloud infrastructure security is to protect this virtual infrastructure against a wide range of potential security threats, both internal and external.
• By implementing policies, tools, and technologies for identifying and managing security issues, companies reduce the cost to the business, improve business continuity, and enhance regulatory compliance efforts.

IMPORTANCE OF CLOUD INFRASTRUCTURE SECURITY
• Companies are increasingly moving to the cloud, entrusting these environments with sensitive data and business-critical applications.
• As a result, cloud security is a growing component of their cybersecurity programs, and cloud infrastructure security is a crucial part of this.
• Cloud infrastructure security processes and solutions provide companies with much-needed protection against threats to their cloud infrastructure.
• These solutions can help to prevent data breaches (ensuring that sensitive data remains private by blocking unauthorized access), protect the reliability and availability of cloud services, and support regulatory compliance in the cloud.

HOW DOES IT WORK?
• In the public cloud, security is shared between the cloud provider and the customer under the cloud shared responsibility model.
• In the public cloud, the service provider is responsible for the security of the physical infrastructure in their data centers.
• Responsibility for the virtual infrastructure can be split between the public cloud customer and the provider, based on the cloud service model in use.
• For example, the cloud provider is responsible for securing the services that they provide to a cloud customer, such as the hypervisors used to host virtual machines in an IaaS environment.
• In a Software as a Service (SaaS) environment, the cloud provider is fully responsible for the security of the infrastructure stack.
• A secure cloud infrastructure includes centralized identity and access management (IAM) and granular, role-based access controls for managing access to applications and other system resources.
• This prevents unauthorized users from gaining access to digital assets and allows system administrators to limit the resources that authorized users are permitted to access.

TYPES OF CLOUD INFRASTRUCTURE SECURITY
• Public Cloud Infrastructure Security: According to the public cloud shared responsibility model, the physical infrastructure in public cloud environments is managed and protected by the cloud provider who owns it, while the virtual infrastructure is split between the cloud vendor and the customer.
• Private Cloud Infrastructure Security: Private clouds are deployed within an organization's data centers, making the organization responsible for ensuring private cloud security, including the security of the underlying infrastructure.
• Hybrid Cloud Infrastructure Security: Hybrid clouds mix public and private cloud environments. This means that responsibility for the underlying infrastructure is shared between the cloud provider (in the case of public cloud) and the cloud customer.
BENEFITS OF CLOUD INFRASTRUCTURE SECURITY
• Improved Security: Cloud infrastructure security provides additional visibility into, and protection for, the underlying infrastructure that supports an organization's cloud services. This enhanced security posture enables more rapid detection, prevention, and remediation of potential threats.
• Greater Reliability and Availability: Cyberattacks and other incidents can cause an organization's cloud-based applications to go offline or exhibit other unplanned behavior. Cloud infrastructure security helps to reduce the risk of these incidents, for example by blocking attack traffic, improving the availability and reliability of cloud environments.
• Simplified Management: Cloud infrastructure security solutions should be part of an organization's cloud security architecture. This makes it easier to monitor and manage the security of cloud environments as a whole.
• Regulatory Compliance: There are a wide variety of regulations with which cloud customers need to comply, depending on their business requirements. Many of these regulations define organizations' access to their computing environments and the sensitive data that they hold. Protecting the underlying infrastructure supporting these environments is essential for regulatory compliance.
• Decreased Operating Costs: Cloud infrastructure security can enable organizations to find and fix potential issues before they become major problems. This reduces the cost of operating cloud-based infrastructure.
• Cloud Confidence: Cloud customers who are confident in their security will move more workloads to the cloud, faster. This enables the cloud customer to more rapidly take advantage of the benefits of the cloud.

CLOUD INFRASTRUCTURE SECURITY BEST PRACTICES
• Implement security for both the control and data plane in cloud environments.
• Perform regular patching and updates to protect applications and the OS against potential exploits.
• Implement strong access controls leveraging multi-factor authentication and the principle of least privilege.
• Educate employees on the importance of cloud security and best practices for operating in the cloud.
• Encrypt data at rest and in transit across all of the organization's IT environment.
• Perform regular monitoring and vulnerability scanning to identify current threats and potential security risks.

CLOUD INFRASTRUCTURE SECURITY AND ZERO TRUST
• Zero Trust is a vital element of infrastructure security.
• Zero Trust is a security strategy designed to stop data breaches and make other cyber security attacks unsuccessful.
• All users and devices, regardless of their location, must be authenticated first and then continuously monitored to verify their authorization status.
• A comprehensive security solution built on a Zero Trust Network Access (ZTNA) architecture protects an organization's data and resources across all platforms and environments.
• With modern tools, companies can control access, monitor traffic and usage continuously, and adapt their security strategy easily, even as dynamic cloud environments change.

INFRASTRUCTURE SECURITY
Infrastructure security in cloud computing helps with:
• Data Protection
• Access Management
• Real-Time Threat Detection
• Cloud Compliance
• Scalability
• Network Security
• Application Security
• Centralized Security
• Business Continuity

KEY COMPONENTS OF CLOUD INFRASTRUCTURE SECURITY
• Identity and Access Management (IAM)
• Network Security
• Data Security
• Endpoint Security
• Application Security

IDENTITY AND ACCESS MANAGEMENT (IAM)
• Identity and access management (IAM) is a security measure that governs who can access cloud resources and what activities they can perform. IAM systems can
implement security policies, manage user identities, track all logins, and more.
• IAM mitigates insider threats by implementing least-privilege access and segregating duties. Additionally, it can also help detect unusual behavior and provide early warning signs of potential security breaches.
• An Identity-Aware Proxy (IAP) can be used to grant temporary access to a resource.

NETWORK SECURITY
• Network security in the cloud means protecting the confidentiality and availability of data as it moves across the network. As data reaches the cloud by traveling over the internet, network security becomes more critical in a cloud environment.
• Security measures for networks include firewalls and virtual private networks (VPNs), among others. In addition, all cloud providers offer a virtual private cloud (VPC) feature that allows organizations to run a private and secure network within their cloud data center.

DATA SECURITY
• Data security in the cloud involves protecting data at rest, in transit, and in use. It includes various measures such as encryption, tokenization, secure key management, and data loss prevention (DLP). Additional data security measures include adding access controls and secure configuration to cloud databases and cloud storage buckets.
• Moreover, data protection laws also play a critical role in protecting cloud data. Industry regulations like GDPR, ISO 27001, HIPAA, etc. mandate that organizations have proper security measures in place to protect user data in the cloud.

ENDPOINT SECURITY
• Endpoint security focuses on securing the user devices, or endpoints, that are used to access the cloud, such as smartphones, laptops, and tablets. With new working policies like remote work and Bring Your Own Device (BYOD), endpoint security has become a vital aspect of cloud infrastructure security.
• Organizations must ensure that users access their cloud resources with secured devices. Endpoint security measures include firewalls, antivirus software, and device management solutions. Additionally, they may include measures like user training and awareness to avoid potential security threats.

APPLICATION SECURITY
APPLICATION SECURITY
• Cloud application security is probably the most critical part of cloud
infrastructure security. It involves securing applications in the cloud against
various security threats like cross-site scripting (XSS), cross-site request
forgery (CSRF), and injection attacks.
• Cloud applications can be secured in various ways, such as secure coding
practices, vulnerability scanning, and penetration testing. Additionally, measures
like web application firewalls (WAF) and runtime application self-protection (RASP)
can provide added layers of security.

TOOLS FOR CLOUD INFRASTRUCTURE SECURITY
• Amazon Web Services (AWS) Security Hub: AWS Security Hub centralizes visibility
and offers actionable insights into security alerts. Additionally, it helps
organizations strengthen their cloud posture with advanced threat intelligence,
automated compliance checks, and seamless integration with other security tools.
• Microsoft Azure Security Center: Microsoft Azure Security Center is a
cloud-native security management tool that provides continuous security monitoring,
threat detection, and actionable recommendations to improve Azure environments. It
uses machine learning and behavioral analytics to help identify and respond to
potential threats and ensure compliance with industry standards.
• Google Cloud Security Command Center: Google Cloud Security Command Center offers
centralized access to cloud security solutions. As a result, it allows the
organization to have complete visibility and control over the resources and
services on Google Cloud Platform (GCP). Its wide range of capabilities includes
advanced threat detection technologies, real-time insights, and security analytics.
• Cisco Cloudlock: Cisco Cloudlock is an advanced cloud security platform that
operates natively in the cloud. It offers comprehensive data protection, access
controls, and threat intelligence, and it secures various cloud applications,
especially Software-as-a-Service (SaaS).
• IBM Cloud Pak for Security: IBM Cloud Pak for Security is an integrated security
platform for cloud environments that offers threat intelligence, security
analytics, and automation functionalities. As a result, it helps organizations
effectively detect, investigate, and respond to security threats in both cloud and
hybrid environments. Additionally, it uses advanced analytics and AI-driven
insights for better cloud security.
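Tools of this kind are also scriptable. As a small, hedged example of the
centralized visibility described above, the following sketch pulls active,
high-severity findings from AWS Security Hub using boto3; the filter values are
illustrative.

    import boto3

    hub = boto3.client("securityhub")

    # Fetch active high-severity findings so they can be triaged or
    # forwarded to a ticketing system.
    findings = hub.get_findings(
        Filters={
            "SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}],
            "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
        },
        MaxResults=10,
    )
    for f in findings["Findings"]:
        print(f["Title"], "-", f["Resources"][0]["Id"])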
5 ADVANCED TECHNIQUES FOR CLOUD INFRASTRUCTURE SECURITY
1. ENCRYPTION
2. IDENTITY AND ACCESS MANAGEMENT (IAM)
3. CLOUD FIREWALLS
4. VIRTUAL PRIVATE CLOUD (VPC) AND SECURITY GROUPS
5. PENETRATION TESTING

ENCRYPTION
• The goal of encryption is to make data unreadable to those who access it. Once
data is encrypted, only authorized users, i.e. individuals with the decryption
keys, will be able to read it. Since encrypted data is useless to an attacker, it
cannot be stolen or used to carry out other attacks.
• You can encrypt data while it is stored (at rest) and also when it is transferred
from one location to another (in transit). This technique is critical when
transferring data, sharing information, or securing communication between different
processes.
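A minimal sketch of at-rest encryption using the widely used Python cryptography
library (the sample plaintext is made up). Whoever holds the key can decrypt, so in
practice the key would live in a KMS or secrets manager, never alongside the data.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # store this in a KMS, not next to the data
    f = Fernet(key)

    token = f.encrypt(b"cardholder=4111...;limit=5000")  # data at rest
    print(token)             # ciphertext is useless without the key
    print(f.decrypt(token))  # only key holders recover the plaintext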

IDENTITY AND ACCESS MANAGEMENT (IAM)
• We have already established IAM above as a key component of infrastructure
security in cloud computing. The purpose of IAM tools is to verify user identity
and deny access to unauthorized parties. IAM checks the user's identity and
determines whether the user is allowed to access the cloud resources or not.
• Since IAM protocols are not based on the device or location used while attempting
to log in, they are highly useful in keeping cloud infrastructure secure.
• Key capabilities of IAM tools:
  - Identity Providers (IdP): Authenticate the identity of users.
  - Single Sign-On (SSO): Enables users to sign in once and access all cloud
resources associated with their account.
  - Multi-factor Authentication (MFA): Measures like two-factor authentication add
extra security layers for user access.
  - Access Control: Allows and restricts user access.

CLOUD FIREWALLS
• Just like traditional firewalls, cloud firewalls are a shield around the cloud
infrastructure that filters malicious traffic. Additionally, they help prevent
cyberattacks like DDoS attacks, vulnerability exploitation, and malicious bot
activity. There are basically two types of cloud firewalls:
  - Next-Generation Firewalls (NGFW): These are deployed in a data center to
protect the organization's Infrastructure-as-a-Service (IaaS) or
Platform-as-a-Service (PaaS) models.
  - SaaS Firewalls: These secure networks in the virtual space, just like
traditional firewalls, but for services hosted in the cloud, such as
Software-as-a-Service (SaaS) models.

VIRTUAL PRIVATE CLOUD (VPC) AND SECURITY GROUPS
• A virtual private cloud (VPC) provides a private cloud environment within a
public cloud domain. Additionally, a VPC creates highly configurable sections of a
public cloud. This means you can access VPC resources on demand and scale up as per
your needs.
• To secure your VPC, you can use security groups, as in the sketch below. Each
security group acts as a virtual firewall that controls the traffic flowing in and
out of the cloud. Note that these groups are implemented at the instance level, not
at the subnet level.
PENETRATION TESTING
• Cloud penetration testing is a technique to find vulnerabilities present in a
cloud environment by simulating real attacks. Organizations can appoint third-party
penetration testing companies to conduct the testing on their cloud applications.
• Penetration testers (a.k.a. ethical hackers) use a process to check each part of
the application to find where the security flaws lie. They document each
vulnerability they find, along with its impact level, and also provide
recommendations for remediation.
• Cloud penetration testing offers you:
  - Security vulnerabilities present in a cloud infrastructure
  - The impact level of the vulnerabilities (low, high, or critical)
  - Ways to address these vulnerabilities
  - A way to meet compliance needs
  - A stronger overall cloud security posture

SECURITY AND PRIVACY ISSUES IN CLOUD COMPUTING
• Infrastructure Security
• Data Security and Storage
• Identity and Access Management (IAM)
• Privacy

Infrastructure Security
• Network Level
• Host Level
• Application Level

The Network Level
• Ensuring confidentiality and integrity of your organization's data-in-transit to
and from your public cloud provider
• Ensuring proper access control (authentication, authorization, and auditing) to
whatever resources you are using at your public cloud provider
• Ensuring availability of the Internet-facing resources in a public cloud that are
being used by your organization, or have been assigned to your organization by your
public cloud providers
• Replacing the established model of network zones and tiers with domains

The Network Level - Mitigation
• Note that network-level risks exist regardless of what aspects of "cloud
computing" services are being used.
• The primary determination of risk level is therefore not which *aaS is being
used, but rather whether your organization intends to use or is using a public,
private, or hybrid cloud.

The Host Level
SaaS/PaaS
• Both the PaaS and SaaS platforms abstract and hide the host OS from end users.
• Host security responsibilities are transferred to the CSP (Cloud Service
Provider), so you do not have to worry about protecting hosts.
• However, as a customer, you still own the risk of managing information hosted in
the cloud services.

Case study: Amazon's EC2 infrastructure
• "Hey, You, Get Off of My Cloud: Exploring Information Leakage in Third-Party
Compute Clouds"
• Multiple VMs of different organizations, with virtual boundaries separating each
VM, can run within one physical server.
• Virtual machines still have internet protocol (IP) addresses, visible to anyone
within the cloud.
• VMs located on the same physical server tend to have IP addresses that are close
to each other and are assigned at the same time.
• An attacker can set up lots of his own virtual machines, look at their IP
addresses, and figure out which one shares the same physical resources as an
intended target.
• Once the malicious virtual machine is placed on the same server as its target, it
is possible to carefully monitor how access to resources fluctuates and thereby
potentially glean sensitive information about the victim.

Local Host Security
• Are local host machines part of the cloud infrastructure? They sit outside the
security perimeter.
• While cloud consumers worry about the security on the cloud provider's site, they
may easily forget to harden their own machines.

• The lack of security of local devices can:
  - Provide a way for malicious services on the cloud to attack local networks
through these terminal devices
  - Compromise the cloud and its resources for other users

Local Host Security (Cont.)
• With mobile devices, the threat may be even stronger: users misplace or have the
device stolen from them, and security mechanisms on handheld gadgets are often
insufficient compared to, say, a desktop computer. This provides a potential
attacker an easy avenue into a cloud system.
• If a user relies mainly on a mobile device to access cloud data, the threat to
availability is also increased, as mobile devices malfunction or are lost.
• Devices that access the cloud should have:
  - Strong authentication mechanisms
  - Tamper-resistant mechanisms
  - Strong isolation between applications
  - Methods to trust the OS
  - Cryptographic functionality when traffic confidentiality is required

The Application Level
• DoS
• EDoS (Economic Denial of Sustainability): an attack against the billing model
that underlies the cost of providing a service, with the goal of bankrupting the
service itself.
• End user security
• Who is responsible for Web application security in the cloud?
  - SaaS/PaaS/IaaS application security
  - Customer-deployed application security

Data Security and Storage
Several aspects of data security, including:
• Data-in-transit
  - Confidentiality + integrity using a secured protocol
  - Confidentiality with a non-secured protocol and encryption
• Data-at-rest
  - Generally not encrypted, since data is commingled with other users' data
  - Encryption if it is not associated with applications? But how about indexing
and searching? Then homomorphic encryption vs. predicate encryption?
• Processing of data, including multitenancy
  - For any application to process data, the data must not be encrypted
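The "secured protocol" option for data-in-transit usually means TLS. A minimal
standard-library sketch that opens a verified TLS connection and inspects the
negotiated protocol version and the server's certificate:

    import socket, ssl

    ctx = ssl.create_default_context()   # verifies the server certificate chain
    with socket.create_connection(("example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
            print(tls.version())                      # e.g. TLSv1.3
            print(tls.getpeercert()["subject"])       # who we are talking to

Certificate verification is what supplies the integrity half of the bullet above;
encryption alone, over an unauthenticated channel, does not rule out a
man-in-the-middle.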
Data Security and Storage (cont.)
• Data lineage
  - Knowing when and where the data was located within the cloud is important for
audit/compliance purposes, e.g., on Amazon AWS:
    Store <d1, t1, ex1.s3.amazonaws.com>
    Process <d2, t2, ec2.compute2.amazonaws.com>
    Restore <d3, t3, ex2.s3.amazonaws.com>
• Data provenance
  - Computational accuracy (as well as data integrity)
  - E.g., financial calculation: sum((((2*3)*4)/6) - 2) = $2.00?
    Correct: assuming US dollars. How about dollars of different countries? The
correct exchange rate?
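The computational-accuracy point is easy to demonstrate: binary floating point is a
poor fit for money, which is why financial code typically uses decimal arithmetic.
A small pure-Python illustration (the exchange rate is an assumed value):

    from decimal import Decimal

    # Binary floats accumulate representation error in money math:
    print(0.1 + 0.1 + 0.1)          # 0.30000000000000004
    print(Decimal("0.1") * 3)       # 0.3, exactly

    usd = Decimal(2 * 3) * 4 / Decimal(6) - 2   # the $2.00 example above
    rate = Decimal("83.20")                     # assumed USD -> INR rate
    print(usd, usd * rate)                      # 2 166.40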
Data Security and Storage (cont.)
• Data remanence
  - Inadvertent disclosure of sensitive information is possible
• Data security mitigation?
  - Do not place any sensitive data in a public cloud
  - Encrypted data is placed into the cloud?
• Provider data and its security: storage
  - To the extent that quantities of data from many companies are centralized, this
collection can become an attractive target for criminals
  - Moreover, the physical security of the data center and the trustworthiness of
system administrators take on new importance.

Why IAM?
• An organization's trust boundary will become dynamic and will move beyond its
control, extending into the service provider domain.
• Managing access for diverse user populations (employees, contractors, partners,
etc.)
• Increased demand for authentication
  - Personal, financial, and medical data will now be hosted in the cloud
  - Software applications hosted in the cloud require access control
• Need for higher-assurance authentication
  - Authentication in the cloud may mean authentication outside the firewall
  - Limits of password authentication
• Need for authentication from mobile devices

What is Privacy?
• The concept of privacy varies widely among (and sometimes within) countries,
cultures, and jurisdictions.
• It is shaped by public expectations and legal interpretations; as such, a concise
definition is elusive if not impossible.
• Privacy rights or obligations are related to the collection, use, disclosure,
storage, and destruction of personal data (or Personally Identifiable Information,
PII).

• At the end of the day, privacy is about the accountability of organizations to
data subjects, as well as the transparency of an organization's practices around
personal information.

What is the data life cycle?

What Are the Key Privacy Concerns?
• Key privacy concerns typically mix security and privacy. Some considerations to
be aware of:
  - Storage
  - Retention
  - Destruction
  - Auditing, monitoring and risk management
  - Privacy breaches
  - Who is responsible for protecting privacy?

Storage
• Is it commingled with information from other organizations that use the same CSP?
• The aggregation of data raises new privacy issues.
• Some governments may decide to search through data without necessarily notifying
the data owner, depending on where the data resides.
• Does the cloud provider itself have any right to see and access customer data?
• Some services today track user behaviour for a range of purposes, from sending
targeted advertising to improving services.

Retention
• How long is personal information (that is transferred to the cloud) retained?
• Which retention policy governs the data? Does the organization own the data, or
the CSP?
• Who enforces the retention policy in the cloud, and how are exceptions to this
policy (such as litigation holds) managed?

Destruction
• How does the cloud provider destroy PII at the end of the retention period?
• How do organizations ensure that their PII is destroyed by the CSP at the right
point and is not available to other cloud users?
• Cloud storage providers usually replicate the data across multiple systems and
sites (increased availability is one of the benefits they provide), so:
  - How do you know that the CSP didn't retain additional copies?
  - Did the CSP really destroy the data, or just make it inaccessible to the
organization?
  - Is the CSP keeping the information longer than necessary so that it can mine
the data for its own use?

Auditing, monitoring and risk management
• How can organizations monitor their CSP and provide assurance to relevant
stakeholders that privacy requirements are met when their PII is in the cloud?
• Are they regularly audited? What happens in the event of an incident?
• If business-critical processes are migrated to a cloud computing model, internal
security processes need to evolve to allow multiple cloud providers to participate
in those processes, as needed. These include processes such as security monitoring,
auditing, forensics, incident response, and business continuity.

Privacy breaches
• How do you know that a breach has occurred, and how do you ensure that the CSP
notifies you when a breach occurs?
• Who is responsible for managing the breach notification process (and the costs
associated with the process)?
• Do contracts include liability for breaches resulting from negligence of the CSP?
How is the contract enforced? How is it determined who is at fault?

Who is responsible for protecting privacy?
• Data breaches have a cascading effect.
• Full reliance on a third party to protect personal data requires an in-depth
understanding of responsible data stewardship: organizations can transfer
liability, but not accountability.
• Risk assessment and mitigation throughout the data life cycle is critical.
• There are many new risks and unknowns, so the overall complexity of privacy
protection in the cloud represents a bigger challenge.
Chapter – 4
Private Cloud

Abstract
A private cloud is a computing model that provides an organization with exclusive access to cloud resources, ensuring enhanced
security, control, and customization. Unlike public cloud environments, where resources are shared among multiple users, private
clouds are dedicated to a single entity, either hosted on-premises or by a third-party provider. They allow businesses to tailor
infrastructure to their specific needs while maintaining data privacy and compliance with regulatory standards. Private clouds offer
scalability and flexibility, enabling organizations to optimize workloads and efficiently manage resources. Though typically more
expensive to maintain, they are ideal for businesses requiring stringent data security, high performance, and full control over their
cloud environment.

Introduction
A private cloud is a cloud computing environment that operates exclusively for a
single organization. It is designed to provide many of the same benefits as public
cloud services, such as scalability, flexibility, and on-demand resource
provisioning, but with an added layer of control and security. Unlike public
clouds, where multiple organizations share resources and infrastructure, a private
cloud dedicates infrastructure to one user, ensuring that sensitive data and
applications are isolated from external entities. This makes it an attractive
option for businesses that handle confidential information or operate in highly
regulated industries, such as healthcare, finance, and government. By leveraging a
private cloud, organizations can customize their infrastructure to meet specific
performance, security, and compliance requirements while retaining full ownership
of their data. Although it may require higher upfront investment and ongoing
maintenance, the private cloud offers unmatched control over computing resources,
making it a preferred choice for companies with strict data governance policies and
the need for high availability and reliability.

What is Private Cloud?
Private cloud is a type of cloud computing that delivers similar advantages to
public cloud, including scalability and self-service, but through a proprietary
architecture. A private cloud, also known as an internal or corporate cloud, is
dedicated to the needs and goals of a single organization, whereas public clouds
deliver services to multiple organizations. A private cloud is a single-tenant
computing infrastructure and environment, meaning the organization using it--the
tenant--doesn't share resources with other users. Private cloud resources can be
hosted and managed by the organization in a variety of ways. The private cloud
might be based on resources and infrastructure already present in an organization's
on-premises data center.

Advantages of Private Cloud
The main advantage of a private cloud is that users don't share resources. Because
of its proprietary nature, a private cloud computing model is best for businesses
with dynamic or unpredictable computing needs that require direct control over
their environments, typically to meet security, business governance, or regulatory
compliance requirements. Key advantages include:
• Increased security of an isolated network.
• Increased performance due to resources being solely dedicated to one
organization.
• Increased capability for customization, such as specialized services or
applications that suit the particular company.
• More control: private cloud users have more control over their resources and
hardware than public cloud users, because the environment is accessed only by
selected users.
• Security and privacy: a private cloud improves the security level compared to the
public cloud.
• Improved performance: a private cloud offers better performance, with improved
speed and space capacity.

Disadvantages of Private Cloud
The increased automation and user self-service capabilities in private clouds can
introduce significant complexity to enterprise IT operations. Implementing these
technologies often requires IT teams to rearchitect parts of their data center
infrastructure and adopt additional software layers and management tools.
Private clouds typically incur higher costs due to the dedicated infrastructure and
specialized management required. They are limited in their area of operations,
often restricted to a single organization or location. Scalability can be limited
compared to public cloud environments, making it harder to scale resources
dynamically. Managing a private cloud demands skilled personnel to handle the
complexity and ensure smooth operation.

Types of Private Cloud

Virtual Private Cloud (VPC): A Virtual Private Cloud (VPC) is a cloud model that
combines the advantages of a private cloud with the scalability and flexibility of
public cloud resources. In a VPC, a dedicated portion of a public cloud
infrastructure is allocated to a single organization, creating an isolated
environment. This allows organizations to have more control over their resources
while still benefiting from the public cloud's capabilities. VPCs enable businesses
to run sensitive applications and store confidential data securely, providing a
balance between security and cost-effectiveness.

Managed Private Cloud: A managed private cloud is a dedicated cloud infrastructure
provided by a third-party vendor, where the organization does not share resources
with others. This model is managed by the service provider, which handles the
setup, maintenance, security, and management of the cloud environment.
Organizations benefit from the expertise of the service provider, allowing them to
focus on their core business activities. Managed private clouds can be customized
to meet specific requirements, providing a tailored solution without the need for
the organization to invest heavily in IT resources.

Hosted Private Cloud: In a hosted private cloud model, vendors provide dedicated
cloud servers within their own data centers. The vendor is responsible for the
entire infrastructure, including hardware, networking, and security management.
This allows organizations to leverage cloud technology without having to maintain
physical hardware or worry about security measures, as these are managed by the
provider. Hosted private clouds are ideal for businesses that want to avoid the
complexity of managing their own data centers while still enjoying the benefits of
a private cloud environment.

On-Premise Private Cloud: Unlike hosted solutions, an on-premise private cloud
allows organizations to build and maintain their cloud infrastructure within their
own facilities. This model provides maximum control over the hardware, software,
and security protocols, allowing organizations to tailor the environment to their
specific needs. However, it requires significant investment in physical
infrastructure, as well as ongoing maintenance and management by internal IT staff.
On-premise private clouds are well-suited for organizations with strict compliance
requirements or those that prefer to keep their data in-house for security reasons.

Challenges of Private Cloud
A private cloud can introduce challenges if an organization does not have
consistent computing needs. When resource demand is in flux, a private cloud may
not be able to scale effectively, costing the organization more money in the long
run. Here are key considerations IT stakeholders should review:

Up-front costs: Fully private clouds hosted on-site require a substantial outlay of
capital before they can bring value to the organization. The hardware required to
run a private cloud can be very expensive, and it takes an expert cloud architect
to set up, maintain, and manage the environment. Hosted private clouds, however,
can mitigate these costs substantially.

Capacity utilization: Under the private cloud computing model, the organization is
wholly responsible for maximizing capacity utilization. An under-utilized cloud
deployment can cost the business significantly.

Scalability: If the business needs additional computing power from the private
cloud, it may take extra time and money to scale up the private cloud's available
resources. Typically, this process will take longer than scaling a virtual machine
or requesting additional resources from a public cloud provider.
Figure 1: Best practices to follow while using private cloud

Private cloud Vs Public cloud
A public cloud is where an independent third-party provider, such as Amazon Web
Services (AWS) or Microsoft Azure, owns and maintains compute resources that
customers can access over the internet. Public cloud users share these resources, a
model known as a multi-tenant environment. For example, various virtual machine
(VM) instances provisioned by public cloud users may share the same physical
server, while storage volumes created by users may coexist on the same storage
subsystem.

Some businesses may prefer to use a private cloud, especially if they have
extremely high security standards. Using a private cloud eliminates intercompany
multitenancy (there will still be multitenancy among internal teams) and gives a
business more control over the cloud security measures that are put in place.
However, it may cost more to deploy a private cloud, especially if the business is
managing the private cloud itself. Often, organizations that use private clouds
will end up with a hybrid cloud deployment, incorporating some public cloud
services for the sake of efficiency.

Figure 2: Public cloud vs Private cloud


Private cloud Vs Hybrid cloud
A hybrid cloud is a model in which a private cloud connects with public cloud
infrastructure, enabling an organization to orchestrate workloads--ideally
seamlessly--across the two environments. In this model, the public cloud
effectively becomes an extension of the private cloud to form a single, uniform
cloud. A hybrid cloud deployment requires a high level of compatibility between the
underlying software and services used by both the public and private clouds.

Private and hybrid clouds are two distinct cloud computing models that cater to
different organizational needs. A private cloud provides a dedicated environment
for a single organization, offering enhanced security, control, and customization
over resources. It is ideal for businesses that handle sensitive data or have
stringent compliance requirements, as the infrastructure is not shared with others.
In contrast, a hybrid cloud combines elements of both private and public clouds,
allowing organizations to utilize the benefits of both environments. With a hybrid
model, businesses can keep critical applications and sensitive data within a
private cloud while leveraging the scalability and cost-effectiveness of public
cloud resources for less critical workloads. This flexibility enables organizations
to adapt to changing demands and optimize resource allocation, making hybrid clouds
particularly attractive for those seeking a balanced approach to cloud computing.
While private clouds offer a higher level of control and security, hybrid clouds
provide greater flexibility and the ability to scale resources dynamically,
allowing businesses to achieve their operational goals efficiently.

VM Migration Cycle
The VM migration cycle is a critical process in cloud computing and virtualization
that involves the transfer of virtual machines (VMs) from one physical host to
another, either within the same data center or across different locations. This
cycle typically begins with planning, where the organization's requirements for
performance, resource optimization, and downtime minimization are assessed.

Figure 3: VM Migration

Phases of VM Migration:
• Onboarding: Select the VM to migrate.
• Replication: Replicate data from the source VM to the target cloud.
• Set VM target details: Configure the target VM, including the project, network,
memory, and instance type.
• Test-clone: Create a clone of the source VM on the target cloud for testing.
• Cut-over: Migrate the source VM to the target cloud, which involves stopping the
source VM, replicating data, and creating the target VM.
• Finalize: Perform any final cleanup after the migration is complete.

Figure 4: Phases of VM Migration

Next, the migration process involves several key steps, including pre-migration
checks to ensure compatibility and resource availability, followed by the actual
migration, which can be performed through methods such as cold migration (shutting
down the VM before transfer), hot migration (moving the VM while it is running), or
live migration (seamlessly transferring the VM with minimal disruption). Once the
migration is complete, post-migration validation is conducted to confirm that the
VM operates correctly in its new environment, and any necessary adjustments or
optimizations are implemented. Throughout this cycle, monitoring and management
tools are essential to track performance, ensure stability, and address any issues
that may arise, ultimately enhancing the efficiency and reliability of IT
operations within an organization.

Live Migration
Steps involved in live migration of a VM:
Stage-0: Pre-Migration. An active virtual machine exists on the physical host A.
Stage-1: Reservation. A request is issued to migrate an OS from host A to host B (a
precondition is that the necessary resources exist on B and a VM container of that
size).
Stage-2: Iterative Pre-Copy. Memory pages are copied from A to B over successive
rounds while the VM continues to run on A; pages dirtied during one round are
re-sent in the next.
Stage-3: Stop-and-Copy. The running OS instance at A is suspended, and its network
traffic is redirected to B. As described in reference 21, CPU state and the
remaining inconsistent memory pages are then transferred. At the end of this stage,
there is a consistent suspended copy of the VM at both A and B. The copy at A is
still considered primary and is resumed in case of failure.
Stage-4: Commitment. Host B indicates to A that it has successfully received a
consistent OS image. Host A acknowledges this message as a commitment of the
migration transaction.
Stage-5: Activation. The migrated VM on B is now activated. Post-migration code
runs to reattach the device drivers to the new machine and advertise moved IP
addresses.

This approach to failure management ensures that at least one host has a consistent
VM image at all times during migration:
1) The original host remains stable until migration commits, and the VM may be
suspended and resumed on that host with no risk of failure.
2) A migration request essentially attempts to move the VM to a new host; on any
sort of failure, execution is resumed locally, aborting the migration.

Figure 5: Stages of VM Migration
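The iterative pre-copy stage is the heart of live migration. The following toy
simulation, under the simplifying assumption that the number of pages dirtied per
round is proportional to the number of pages sent, shows why the rounds converge
until only a small stop-and-copy set remains:

    import random

    PAGES = 1024                  # toy VM with 1024 memory pages
    DIRTY_RATE = 0.05             # pages dirtied per page-copy time unit (assumed)
    THRESHOLD = 16                # switch to stop-and-copy below this

    to_send = set(range(PAGES))   # first round: send every page
    round_no = 0
    while len(to_send) > THRESHOLD:
        round_no += 1
        sent = len(to_send)
        # The longer a round takes (more pages sent), the more pages the
        # still-running VM dirties; those must be re-sent next round.
        dirtied = int(sent * DIRTY_RATE)
        to_send = set(random.sample(range(PAGES), dirtied))
        print(f"round {round_no}: copied {sent} pages, {dirtied} dirtied meanwhile")

    print(f"stop-and-copy: suspend VM, send last {len(to_send)} pages + CPU state")

With a dirty rate below one, each round's working set shrinks geometrically, which
is exactly what keeps the final suspension window short.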


Live Migration Vendor Implementation Example
There are many VM management and provisioning tools that provide a live VM
migration facility, two of which are VMware VMotion and Citrix XenServer
"XenMotion".

VMware VMotion:
a) Automatically optimizes and allocates an entire pool of resources for maximum
hardware utilization, flexibility, and availability.
b) Performs hardware maintenance without scheduled downtime, along with migrating
virtual machines away from failing or underperforming servers.

Citrix XenServer "XenMotion":
Based on the Xen live migrate utility, it gives the IT administrator the ability to
move a running VM from one XenServer to another in the same pool without
interrupting the service (hypothetically zero-downtime server maintenance), making
it a highly available service and also a good feature for balancing workloads in
virtualized environments.

Regular/Cold Migration
Cold migration is the migration of a powered-off virtual machine. With cold
migration, you have the option of moving the associated disks from one datastore to
another, and the virtual machines are not required to be on shared storage. Two
differences from live migration: 1) live migration needs shared storage for the
virtual machines in the server pool, but cold migration does not; 2) live migration
of a virtual machine between two hosts requires certain CPU compatibility checks,
but in cold migration these checks do not apply.
Cold migration (a VMware feature) is easy to implement and is summarized as
follows: the configuration files, including the NVRAM file (BIOS settings), log
files, and the disks of the virtual machine, are moved from the source host to the
destination host's associated storage area. The virtual machine is registered with
the new host. After the migration is completed, the old version of the virtual
machine is deleted from the source host.

Figure 6: Regular Migration
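A toy illustration of the cold-migration summary above, using only the Python
standard library; the datastore paths are hypothetical, and the "register" step
stands in for the real hypervisor API call:

    import shutil
    from pathlib import Path

    SRC = Path("/vmfs/hostA/finance-vm")   # hypothetical source datastore path
    DST = Path("/vmfs/hostB/finance-vm")   # hypothetical destination path

    def cold_migrate(src: Path, dst: Path) -> None:
        """Toy sketch of the three cold-migration steps described above."""
        dst.mkdir(parents=True, exist_ok=True)
        # 1. Move configuration files (incl. NVRAM/BIOS settings), logs, disks.
        for artifact in src.glob("*"):
            shutil.copy2(artifact, dst / artifact.name)
        # 2. "Register" the VM with the new host (placeholder for the real call).
        print(f"registering {dst.name} on host B")
        # 3. Delete the old copy from the source host once the move completes.
        shutil.rmtree(src)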

Cloud Provisioning
VM provisioning is the process of setting up a virtual machine (VM) for use,
including:
• Selecting resources: Choosing the right resources for the VM.
• Loading applications and operating systems: Installing the applications and
operating systems on the VM.
• Configuring settings: Setting up the VM's configuration.
VM provisioning is the first step in a VM's life cycle. It can be done manually,
but this is time-consuming and error-prone. Instead, configuration management tools
like Ansible, Chef, and Puppet can be used to automate the process.

Steps for VM Provisioning:
• Select a server from a pool of available servers (physical servers with enough
capacity), along with the appropriate OS template you need to provision the virtual
machine.
• Load the appropriate software (the operating system you selected in the previous
step, device drivers, middleware, and the applications needed for the required
service).
• Customize and configure the machine (e.g., IP address, gateway) and configure the
associated network and storage resources.
• Finally, the virtual server is ready to start with its newly loaded software.

Provisioning from a template is an invaluable feature, because it reduces the time
required to create a new virtual machine. Administrators can create different
templates for different purposes. For example, you can create a Windows 2003 Server
template for the finance department, or a Red Hat Linux template for the
engineering department. This enables the administrator to quickly provision a
correctly configured virtual server on demand, as in the sketch below.
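A minimal sketch of template-based provisioning with AWS's boto3, where the
"template" is an AMI (an image with the OS and software preinstalled); the AMI and
subnet IDs are hypothetical placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Stamp out a correctly configured server on demand from a departmental
    # template image.
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # hypothetical template (AMI)
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",  # hypothetical network placement
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "dept", "Value": "engineering"}],
        }],
    )
    print(resp["Instances"][0]["InstanceId"])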
Provisioning Types

Manual Provisioning:
This conventional provisioning method involves hands-on allocation and
configuration by IT administrators. Although it provides a high level of control,
it can be time-intensive and less adaptable to dynamic workload changes. Use Cases:
Well-suited for static workloads with predictable resource demands.

Automated Provisioning:
Utilizing scripts or tools, automated provisioning minimizes human intervention,
expediting the deployment process and enhancing responsiveness to evolving demands.
Use Cases: Ideal for environments characterized by varying workloads, necessitating
swift and efficient resource allocation.

Dynamic Provisioning:
Also known as on-demand provisioning, dynamic provisioning stands out as the most
flexible and scalable cloud computing model. It empowers cloud service providers to
allocate resources dynamically, enabling client organizations to swiftly acquire IT
resources without manual adjustments. Cloud automation and orchestration streamline
this process, catering to diverse customer needs. Use Cases: Optimal for
applications with unpredictable or fluctuating workloads, delivering scalability
and resource optimization.

User Self-Provisioning:
Termed cloud self-service, user self-provisioning allows customers to directly
subscribe to required resources from the cloud provider via a website. Users create
an account and pay for the needed resources. Use Cases: Ideal for organizations
emphasizing autonomy and agility, offering a straightforward subscription process
without complex procurement or onboarding procedures with the cloud vendor.

Challenges of Cloud Provisioning
• Possibilities of Human Error: Manual cloud provisioning increases the risk of
human errors, potentially leading to misconfigurations and service disruptions.
Inaccurate provisioning can result in resource inefficiencies, downtime, and
compromised service reliability.
• Continuous Monitoring Difficulty: Continuous monitoring of provisioned resources
requires dedicated effort, and lapses in oversight may result in overlooked issues
or inefficient resource allocation. Inadequate monitoring can lead to
underperformance, security vulnerabilities, and difficulties in identifying and
addressing emerging issues promptly.
• Security Concerns: Ensuring robust security protocols for provisioned resources
is a persistent challenge, with evolving threats requiring constant adaptation and
proactive measures. Security lapses can expose sensitive data, compromise client
trust, and lead to regulatory non-compliance.
• Skills and Knowledge Gaps: The dynamic nature of cloud technologies necessitates
ongoing training and skill development, posing challenges in keeping teams updated
and aligned with the latest trends.
• Cost Management Complexity: Efficiently managing costs in the cloud environment,
especially with fluctuating usage patterns, presents a complex challenge for
providers and clients alike. Poor cost management may result in unexpected
expenses, undermining the cost-effectiveness of cloud provisioning for both
providers and clients.
• Lack of Automation: The absence or limited implementation of automated
provisioning processes hinders efficiency, making it challenging to meet the
dynamic demands of clients swiftly. It can lead to slower deployment times,
resource bottlenecks, and reduced agility in responding to changing client needs.
• Billing Management: Managing and reconciling billing for various provisioned
services, especially in multi-cloud or hybrid environments, presents a significant
administrative challenge. Billing inaccuracies can strain client-provider
relationships, leading to disputes and hindered trust.

Benefits of Cloud Provisioning
Cloud provisioning offers several advantages for organizations, including
scalability, cost savings, and speed. By utilizing cloud services, organizations
can quickly scale computing resources up or down based on demand, ensuring
efficient resource use. This flexibility also translates into cost savings, as
organizations only pay for what they use, avoiding large upfront investments in
on-premises technology. Moreover, cloud provisioning enables rapid deployment of
new services and applications, enhancing operational speed. Accessibility is
another
key benefit, allowing users to access cloud-based apps and data from any location
with an internet-connected device. Security is strengthened by the strict measures
employed by cloud service providers to safeguard data, while disaster recovery
options ensure reliable backups. Additionally, cloud computing reduces IT
maintenance by delegating tasks such as hardware upkeep, software updates, and
security patches to the service provider. Finally, the cloud promotes innovation
and experimentation by allowing organizations to test new technologies without
incurring high costs.

Managing Private Cloud
Private cloud management can be done by a third-party service provider or by the
organization itself. Options for managing a private cloud:
• Managed private cloud: A third-party service provider manages and maintains the
cloud's infrastructure, including physical hardware, monitoring, reporting, and
disaster recovery.
• Private cloud operating system: A private cloud operating system, like OpenStack,
can be managed through a web-based dashboard.

OpenStack
OpenStack is a cloud operating system that controls large pools of compute,
storage, and networking resources throughout a datacenter, all managed and
provisioned through APIs with common authentication mechanisms. OpenStack is a free
and open-source software platform for cloud computing that supports both public and
private clouds. Mostly deployed as infrastructure-as-a-service, OpenStack is
basically your key to building your own cloud infrastructure. If you are not
comfortable entrusting sensitive data to a third party and you have tons of it,
then an on-premise or private cloud infrastructure would be the better choice. By
building your own cloud in your own data center, you will have more control of your
data.

OpenStack Components and Architecture

Compute (Nova)
OpenStack Compute (Nova) is a cloud computing fabric controller, which is the main
part of an IaaS system. It is designed to manage and automate pools of compute
resources and can work with widely available virtualization technologies. KVM,
VMware, and Xen are available choices for hypervisor technology (virtual machine
monitor), together with Hyper-V and Linux container technology such as LXC.

Networking (Neutron)
OpenStack Networking (Neutron) is a system for managing networks and IP addresses.
OpenStack Networking provides networking models for different applications or user
groups. Standard models include flat networks or VLANs that separate servers and
traffic.

Figure 7: OpenStack Components
OpenStack Networking manages IP addresses, allowing for dedicated static IP
addresses. Floating IP addresses let traffic be dynamically rerouted to any
resources in the IT infrastructure, so users can redirect traffic during
maintenance or in case of a failure.

Block storage (Cinder)
OpenStack Block Storage (Cinder) provides persistent block-level storage devices
for use with OpenStack compute instances. The block storage system manages the
creation, attaching, and detaching of block devices to servers. Block storage
volumes are fully integrated into OpenStack Compute and the Dashboard, allowing
cloud users to manage their own storage needs.

Authentication (Keystone)
OpenStack Identity (Keystone) provides a central directory of users mapped to the
OpenStack services they can access. It acts as a common authentication system
across the cloud operating system and can integrate with existing backend directory
services like LDAP (Lightweight Directory Access Protocol).

Image (Glance)
OpenStack Image (Glance) provides discovery, registration, and delivery services
for disk and server images. Stored images can be used as templates. It can also be
used to store and catalog an unlimited number of backups. The Image Service can
store disk and server images in a variety of back-ends, including Swift. The Image
Service API provides a standard REST interface for querying information about disk
images and lets clients stream the images to new servers.

Object storage (Swift)
OpenStack Object Storage (Swift) is a scalable redundant storage system. Objects
and files are written to multiple disk drives spread throughout servers in the data
center, with the OpenStack software responsible for ensuring data replication and
integrity across the cluster. Storage clusters scale horizontally simply by adding
new servers. Should a server or hard drive fail, OpenStack replicates its content
from other active nodes to new locations in the cluster.

Dashboard (Horizon)
OpenStack Dashboard (Horizon) provides administrators and users with a graphical
interface to access, provision, and automate deployment of cloud-based resources.
The design accommodates third-party products and services, such as billing,
monitoring, and additional management tools. The dashboard is also brandable for
service providers and other commercial vendors who want to make use of it. The
dashboard is one of several ways users can interact with OpenStack resources.
Developers can automate access or build tools to manage resources using the native
OpenStack API or the EC2 compatibility API.

Cloud template (Heat)
Heat is a service to orchestrate multiple composite cloud applications using
templates, through both an OpenStack-native REST API and a
CloudFormation-compatible Query API.

Telemetry (Ceilometer)
OpenStack Telemetry (Ceilometer) provides a single point of contact for billing
systems, providing all the counters they need to establish customer billing, across
all current and future OpenStack components. The delivery of counters is traceable
and auditable, the counters must be easily extensible to support new projects, and
agents doing data collection should be independent of the overall system.

Adding a new physical machine in an OpenStack cloud

Figure 8: OpenStack Network

For a cloud infrastructure, the typical OpenStack configuration includes a "master"
controller node present on a physical machine managing other physical "slave"
machines called compute nodes.
Figure 9: OpenStack Architecture
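As noted under Horizon, the dashboard is only one way to drive OpenStack; the same
provisioning can be scripted. A short sketch using the official openstacksdk, where
the cloud profile name (from clouds.yaml) and the image, flavor, and network names
are assumptions about the local deployment:

    import openstack

    conn = openstack.connect(cloud="mycloud")   # profile name is an assumption

    # Look up the "template" pieces: image (Glance), flavor, network (Neutron).
    image = conn.compute.find_image("ubuntu-22.04")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("private")

    # Ask Nova to boot a VM from those pieces and wait until it is ACTIVE.
    server = conn.compute.create_server(
        name="demo-vm",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)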

Benefits of OpenStack
OpenStack offers several benefits that make it a valuable solution for
organizations. It enables rapid innovation by providing developers with faster
access to IT resources through its orchestration and self-service capabilities.
This, in turn, cuts down time-to-market, as end users and business units no longer
have to wait for days or weeks to access the needed services, allowing for quicker
project rollouts. OpenStack also boosts scalability and resource utilization by
allowing on-demand server management, even though it is not as scalable as public
clouds. Additionally, OpenStack simplifies regulatory compliance by enabling
private, on-premise cloud environments, giving organizations more control over
access privileges, security measures, and policies, which is crucial for securing
sensitive and regulated information.

Google Cloud – VPC
GCP VPC provides networking functionality to Compute Engine VM instances, Google
Kubernetes Engine clusters, and the App Engine flexible environment. It provides
networking for customers' cloud-based resources and services that is global,
scalable, and flexible.

VPC Networks
A Virtual Private Cloud (VPC) network is a virtual version of a physical network,
implemented inside of Google's production network using Andromeda. VPC networks,
along with their associated routes and firewall rules, are global resources, i.e.,
they are not associated with any distinct region or zone. A VPC network provides
the following:
• Provides connectivity for your Compute Engine virtual machine (VM) instances,
including Google Kubernetes Engine (GKE) clusters, App Engine flexible environment
instances, and other Google Cloud products built on Compute Engine VMs.
• Offers native Internal TCP/UDP Load Balancing and proxy systems for Internal
HTTP(S) Load Balancing.
• Connects to on-premises networks using Cloud VPN tunnels and Cloud Interconnect
attachments.
• Distributes traffic from Google Cloud external load balancers to backends.
Note: Projects can contain multiple VPC networks. Unless the user creates an
organizational policy that prohibits it, new projects start with a default network
(an auto mode VPC network) that has one subnetwork (subnet) in each region.

VPC Subnets
Each VPC network consists of one or more useful IP range partitions called subnets,
and each subnet is associated with a region. VPC networks do not have any IP
address ranges associated with them; IP ranges are defined for the subnets. Subnets
are regional resources. Each subnet defines a range of IP addresses. When a subnet
is created, its primary IP address range must be defined. Optionally, secondary IP
address ranges can also be added to a subnet, which are used only by alias IP
ranges. Each primary or secondary IP range for subnets in the VPC network needs to
be a unique valid CIDR block.

VPC IP Addresses
Resources such as VM instances and load balancers have IP addresses in Google
Cloud, which enables Google Cloud resources to communicate with other resources in
Google Cloud, in on-premises networks, or on the public internet. Google Cloud uses
labels to describe different IP address types. For example, subnet IP address
ranges must be internal IP addresses, which are addresses that are not publicly
routed. An external IP address is a publicly routed IP address that can be assigned
to the network interface of a Google Cloud VM.

VPC Firewall Rules
Firewall rules apply to both outgoing (egress) and incoming (ingress) traffic in
the network. They manage traffic even if it is entirely within the network,
including communication among VM instances. Virtual Private Cloud firewall rules
apply to a given project and network. They allow users to control which packets are
allowed to travel to which destinations. Every VPC network has two implied firewall
rules that block all incoming connections and allow all outgoing connections. When
you create a VPC firewall rule, a VPC network is specified along with a set of
components that define what the rule does. The components enable you to target
certain types of traffic based on the traffic's protocol, destination ports,
sources, and destinations.

VPC Routes
Google Cloud routes define the paths that network traffic takes from a virtual
machine (VM) instance to other destinations. These destinations can be inside your
Google Cloud Virtual Private Cloud (VPC) network (for example, in another VM) or
outside it. In a VPC network, a route consists of a single destination prefix in
CIDR format and a single next hop. When an instance in a VPC network sends a
packet, Google Cloud delivers the packet to the route's next hop if the packet's
destination address is within the route's destination range. You can create custom
static routes to direct some packets to specific destinations.

VPC Networks and Subnets
Google Cloud offers three types of VPC networks, determined by their subnet
creation mode:
• Default-mode VPC
• Auto-mode VPC
• Custom-mode VPC
Default Mode VPC: Every project is provided with a default VPC network with preset
subnets and firewall rules. Specifically, a subnet is allocated for each region
with non-overlapping CIDR blocks, together with firewall rules that allow ICMP,
RDP, and SSH ingress traffic from anywhere, as well as ingress traffic from within
the default network for all protocols and ports.

Auto Mode VPC: In this network, one subnet for each region is automatically created
within it. The default network can be understood as an auto-mode network. These
automatically created subnets use a set of predefined IP ranges with a /20 mask
that can be expanded to a /16. All of these subnets fit within the 10.128.0.0/9
CIDR block, so as new GCP regions become available, new subnets are automatically
created in those regions and added to auto-mode networks using an IP range from
that block.

Custom Mode VPC: A custom-mode network does not automatically create subnets. This
type of network provides the user with complete control over its subnets and IP
ranges. Users decide which subnets to create, in the regions they choose, using IP
ranges they specify within the RFC 1918 address space. These IP ranges cannot
overlap between subnets of the same network.
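The subnet arithmetic above can be checked with Python's standard-library ipaddress
module; the sample subnet is merely an auto-mode-style range chosen for
illustration:

    import ipaddress

    auto_block = ipaddress.ip_network("10.128.0.0/9")
    subnet = ipaddress.ip_network("10.132.0.0/20")   # illustrative /20 subnet

    print(subnet.subnet_of(auto_block))     # True: fits inside 10.128.0.0/9
    print(subnet.num_addresses)             # 4096 addresses in a /20
    print(subnet.supernet(new_prefix=16))   # 10.132.0.0/16 after expansion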

Benefits of VPC
• Flexibility to scale and control how workloads connect, both regionally and
globally.
• Bring your own IP addresses to Google's network infrastructure anywhere.
• Access VPCs without needing to replicate connectivity or management policies in
each region.
• VPC Flow Logs: help with network monitoring, forensics, real-time security
analysis, and expense optimization.
• Host globally distributed multi-tier applications by creating a VPC with subnets.
• Disaster recovery: with application replication, create backup Google Cloud
compute capacity, then revert back once the incident is over.
• Packet Mirroring: troubleshoot and inspect traffic by mirroring the network
traffic of VM instances.
• Cloud VPN: securely connect your existing network to the VPC network over IPsec.
• VPC Peering: configure private communication across the same or different
organizations without bandwidth bottlenecks or single points of failure.
• Shared VPC can be used within an organization.
Chapter 2

CLOUD MANAGEMENT SECURITY

Luciana Porcher Nedel, Flávio Rech Wagner and Richard Leithold

1.1. What is a Data Center?


A data center is a facility that centralizes an organization’s IT operations and equipment,
and where it stores, manages, and disseminates its data.
• It generally includes redundant or backup power supplies, redundant data
communications connections, environmental controls (e.g., air conditioning, fire
suppression), and various security devices.

• Data centers are centralized locations where computing and networking equipment
is concentrated for the purpose of collecting, storing, processing, distributing,
or allowing access to large amounts of data.
• As equipment got smaller and cheaper, and data processing needs increased
exponentially, organizations began networking multiple servers together to increase
processing power.
• Large numbers of these clustered servers and related equipment can be housed in a room,
an entire building or groups of buildings.
• Today’s data center is likely to have thousands of very powerful and very small servers
running 24/7.
• Data centers are sometimes referred to as server farms, and they provide
important services such as data storage, backup and recovery, data management, and
networking.
• These centers can store and serve up Web sites, run e-mail and instant messaging (IM)
services, provide cloud storage and applications, enable e-commerce transactions, power
online gaming communities and many more.

1.2. Why Data Centers?


• Demand for processing power, storage space and information is growing constantly.
• Any entity that generates or uses data has the need for data centers on some
level, including government agencies, educational bodies, telecommunications
companies, financial institutions, etc.

• A lack of fast and reliable access to data means an inability to provide vital
services or a loss of customer satisfaction.

1.3. Concerns for Data Centers


• Companies rely on their information systems to run their operations. If a system
becomes unavailable, company operations may be impaired or stopped completely.
• Information security is also a concern, and for this reason a data center has to offer a
secure environment which minimizes the chances of a security breach.
• A data center must therefore keep high standards for assuring the integrity and
functionality of its hosted computer environment. This is accomplished through
redundancy of mechanical cooling and power systems (including emergency backup
power generators) serving the data center, along with fiber optic cables.

1.4. Data Center Tier Levels


1.5. Level 1
• Customers who use a Tier 1 data center generally are not dependent on real-time
delivery of their product or service.
• Companies who need a dedicated infrastructure solution beyond installing a server
in their office for storing data could use a Tier 1 level data center.
• The level of disaster protection is the lowest in this tier.

1.6. Level 2
• Companies who need access to their data without downtime could use a Tier 2 level
data center.
• Infrastructure includes Tier I capabilities plus redundant components for power
and cooling, which may include backup UPS battery systems, chillers, generators,
and pumps.
• This gives the customers more reliability against disruptions.

1.7. Level 3
• For companies for whom delivery of their product or service in real time is
critical to their operations, such as media providers like Netflix, content
providers like Facebook, financial companies, etc.
• Maintenance and repairs can be performed without disrupting service to the
customer. For these customers, downtime is very costly.

1.8. Level 4
• Includes Tier I, Tier II, and Tier III capabilities, adding another layer of
fault tolerance.
• Power, cooling, and storage are all independently dual-powered.
• The topography of the infrastructure allows one fault anywhere in the system
without disruption to service, and the least downtime.
• For enterprises that must stay active 24/7, a Tier 4 data center is ideal.

1.9. Data Center Types


1.10. 1. Enterprise Data centers
These are built, owned, and operated by companies and are optimized for their end users.
Most often they are housed on the corporate campus.

1.11. 2. Managed Services Data centers


These data centers are managed by a third party (or a managed services provider) on
behalf of a company. The company leases the equipment and infrastructure instead of
buying it.

1.12. 3. Colocation Data centers


In colocation ("colo") data centers, a company rents space within a data center
owned by others and located off company premises. The colocation data center hosts
the infrastructure: building, cooling, bandwidth, security, etc., while the company
provides and manages the components, including servers, storage, and firewalls.

1.13. 4. Cloud data centers


In this off-premises form of data center, data and applications are hosted by a
cloud services provider such as Amazon Web Services (AWS), Microsoft (Azure), IBM
Cloud, or another public cloud provider.

1.14. Other considerations


• Technology infrastructure design
• Availability expectations
• Site selection
• Modularity and flexibility
• Environmental control
• Electrical power
• Low-voltage cable routing
• Fire protection
• Security


1.15. Data Center Infrastructure


• The data center is home to the computational power, storage, and applications
necessary to support an enterprise business.
• The data center infrastructure is central to the IT architecture, from which all
content is sourced or passes through.
• Proper planning of the data center infrastructure design is critical, and
performance, resiliency, and scalability need to be carefully considered.
• Another important aspect of the data center design is flexibility in quickly
deploying and supporting new services. Such a design requires solid initial
planning and thoughtful consideration in the areas of port density, access layer
uplink bandwidth, true server capacity, and oversubscription.
• The data center network design is based on a proven layered approach. The layered
approach is the basic foundation of the data center design that seeks to improve
scalability, performance, flexibility, resiliency, and maintenance.

1.16. Data Center Core Components


Data center design includes routers, switches, firewalls, storage systems, servers, and
application delivery controllers. Because these components store and manage business-
critical data and applications, data center security is critical in data center design. To-
gether, they provide:
1. Network infrastructure - This connects servers (physical and virtualized), data center
services, storage, and external connectivity to end-user locations.
2. Storage infrastructure - Data is the fuel of the modern data center. Storage systems are used to hold this valuable commodity.
3. Computing resources - Applications are the engines of a data center. These servers provide the processing, memory, local storage, and network connectivity that drive applications.

1.17. Data Center Design Models


1. Multi-tier Model
The multi-tier model is the most common design in the enterprise. It is based on the web, application, and database layered design supporting commerce and enterprise business ERP and CRM solutions.
- It supports many web service architectures, such as those based on Microsoft .NET or Java 2 Enterprise Edition.
- It is dominated by HTTP-based applications.
- The multi-tier model uses software that runs as separate processes on the same machine using interprocess communication (IPC), or on different machines with communication over the network.
Typically, the following three tiers are used:
O Web server
O Application
O Database
2. Server Cluster Model
In the modern data center environment, clusters of servers are used for many purposes, including high availability, load balancing, and increased computational power. All clusters have the common goal of combining multiple CPUs to appear as a unified high-performance system using special software and high-speed network interconnects. Server clusters have historically been associated with university research, scientific laboratories, and military research for unique applications, such as the following:
O Meteorology (weather simulation)
O Seismology (seismic analysis)
O Military research (weapons, warfare)

1.18. Data Center Services


Data center services provide all the supporting components necessary to the proper oper-
ation of a data center. They include all activities associated with data center implementa-
tion, maintenance and operation and involve hardware, software, processes and personnel.
- Hardware installation and maintenance
- Managed power distribution
- Backup power systems
- Data backup and archiving
- Managed load balancing
- Controlled Internet access
- Email and messaging services
- User authentication and access management
- Perimeter security, including firewalls and virus, malware, and ransomware prevention programs
- Outsourcing and colocation of data center services
- Disaster recovery and business continuity services
- Technical support
- Regulatory and standards compliance

1.19. What is data center as a service?


When an organization outgrows its existing on-premises data center and needs additional
infrastructure, it can opt for a data center as a service (DCaaS) offering to obtain the
needed services.
1.20. What is in a data center facility?
Data center components require significant infrastructure to support the center’s hardware
and software. These include power subsystems, uninterruptible power supplies (UPS),
ventilation, cooling systems, fire suppression, backup generators, and connections to ex-
ternal networks.

1.21. Data center Management


Data center management encompasses the tasks and tools organizations need to keep their
private data centers operational, secure and compliant. The person responsible for carry-
ing out these tasks is known as a data center manager. A data center manager performs
general maintenance, such as software and hardware upgrades, general cleaning or decid-
ing the physical arrangement of servers.
-They also take proactive or reactive measures against any threat or event that
harms the data center. -Data center managers in the enterprise can use data center infras-
tructure management (DCIM) solutions to simplify overall management and achieve IT
performance optimization. - These software solutions provide a centralized platform for
data center managers to monitor, measure, manage and control all data center elements in
real time.

1.22. Data center Infrastructure Management


Data Center Infrastructure Management (DCIM) is the integration of information technology (IT) and facility management disciplines to centralize:
• monitoring,
• management, and
• intelligent capacity planning of a data center's critical systems.
Achieved through the implementation of specialized software, hardware, and sensors, DCIM enables a common real-time monitoring and management platform for all interdependent systems across IT and facility infrastructures.

1.23. Data center Benefits


The combination of cloud architecture and software-defined infrastructure (SDI) offers many advantages to data centers and their users, such as:
O Optimal utilization of compute, storage, and networking resources
O Rapid deployment of applications and services
O Scalability
O Variety of services and data center solutions
O Cloud-native development

1.24. Some Issues Faced by Data Centers


O Data centers strive to provide fast, uninterrupted service.
O Equipment failure, communication or power outages, network congestion, and other problems that keep people from accessing their data and applications have to be dealt with immediately.
O Due to the constant demand for instant access, data centers are expected to run 24/7, which creates a host of issues.

1.25. Cloud Management?


O Cloud management is the process of overseeing and managing an organization’s cloud
computing resources, services, and infrastructure. It can be performed by a company’s
internal IT team or a third party service provider.
Figure 1.4. Cloud management

1.26. Cloud Management Goals


O Self-service refers to the flexibility achieved when cloud users access cloud resources,
create new ones, monitor usage and cost, and adjust resource allocations – without the
intervention of IT professionals or cloud service providers.
O Workflow automation lets operations teams manage cloud instances without human
intervention. This is a key element in any automation infrastructure used for workload
deployment and monitoring.
O Cloud analysis helps track cloud workloads and user experiences. This is essential for
the management and optimization of cloud costs and performance.

Figure 1.5. Enter Caption


1.27. How does cloud management work?
O Effective cloud management relies on two vital elements: tools and practices.
O Cloud management is conducted with software tools designed to discover, provision,
track utilization, measure performance and produce reports on the cloud resources and
services used by an organization.
O The tools are often supplied by the cloud provider itself in the form of a service.
O Tools can also be purchased from a third-party provider and deployed in a local data center or in the cloud itself inside a cloud virtual machine.

1.28. Tasks of Cloud Management


"Auditing the System Backups"
It is essential to perform audit backups in a specific time frame to verify and restore the
selected files of multiple users. Below there are two methods for performing backups. O
Performing the Backup files by the organization’s on-site computers to the disks available
in the cloud.
O The second method is backing up the files by the Cloud Service Provider.
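As a concrete illustration, the sketch below audits backups stored in object storage by flagging any backup object older than an expected window. It is a minimal sketch assuming AWS S3 via the boto3 SDK; the bucket name, prefix, and freshness window are hypothetical placeholders.

    import datetime
    import boto3  # AWS SDK for Python

    BUCKET = "example-org-backups"          # hypothetical bucket
    MAX_AGE = datetime.timedelta(days=1)    # expected backup freshness

    s3 = boto3.client("s3")
    now = datetime.datetime.now(datetime.timezone.utc)

    # Page through every object under the backups/ prefix and flag stale ones.
    stale = []
    for page in s3.get_paginator("list_objects_v2").paginate(
            Bucket=BUCKET, Prefix="backups/"):
        for obj in page.get("Contents", []):
            if now - obj["LastModified"] > MAX_AGE:
                stale.append(obj["Key"])

    if stale:
        print(f"{len(stale)} backup objects older than {MAX_AGE}: {stale}")
    else:
        print("All backups are within the expected window.")

The same pattern covers the second method as well: the audit simply runs against the bucket or vault that the cloud service provider writes to.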
"Data Flow of the System"
O The organization managers have the responsibility to develop a figure for describing a
detailed workflow process flow.
O The workflow process explains the movement of data belonging to the company through
the cloud solution. Vendor Lock-In Awareness and its Solutions
O The method to exit from the service of a specific cloud provider must be informed to
the IT managers.
O The methods are defined for authorizing the Cloud Managers for exporting an organi-
zation’s data from their device towards another cloud service provider.
"Gaining information about Providers Security Procedures"
O It is essential to know the security plans of the Cloud Provider for services like E-
Commerce Processing, Screening of an Employee and Encrypted Policy
"Monitoring Capability Planning and Scaling Capacities"
O The IT manager must have the information about the planning capacity for inspecting
whether the Cloud Service Provider can meet the future business requirements or not.
O The IT manager must manage the scaling capacities for ensuring the services which
can scale up or scale down according to the requirements.
"Monitoring the Audit Log Usage"
O For recognizing the errors in the system, the IT managers must audit the logs regularly.
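A minimal sketch of such an audit pass is shown below: plain Python that counts error and access-denial patterns in a log file. The log path and regular expressions are hypothetical; a real deployment would read from the provider's logging service rather than a local file.

    import re
    from collections import Counter

    LOG_PATH = "/var/log/cloud/audit.log"   # hypothetical log location
    PATTERNS = {
        "error": re.compile(r"\bERROR\b"),
        "denied": re.compile(r"AccessDenied|Unauthorized"),
    }

    counts = Counter()
    with open(LOG_PATH) as log:
        for line in log:
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    counts[label] += 1

    # Report how often each suspicious pattern appeared.
    for label, n in counts.items():
        print(f"{label}: {n} occurrences")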
"Solution Testing and Validation"
O When the Cloud Service Provider provides the solution, it is compulsory to test the
particular solution and check its final results for error-free solutions. A system must be
solid and reliable.

1.29. Cloud Management Platform


O A Cloud Management Platform (CMP) is a comprehensive suite of tools designed to
simplify the management of cloud environments.
O CMPs offer a unified interface for streamlined control, optimization, and orchestration
of resources.
O The cloud management platform basically works like a consolidated platform which has
many APIs for deploying, integrating and even scaling up an enterprise’s cloud ecosys-
tem.
O With a cloud management platform, companies can not only understand how every cloud asset works; they can also evaluate its performance.
O Chief executives can use the dashboards to see compliance information.
O The CMP also helps businesses analyze costs by application, provider, or region. This information gives useful insights into the usage and expenditure of cloud assets.

1.30. Why Cloud Management Platform


The Growing Complexity of Cloud Environments: Statistics reveal the escalating com-
plexity of managing cloud environments:
O Over 90% of enterprises use multiple cloud services (Source: Flexera).
O 85% of organizations operate in a multi-cloud environment (Source: RightScale State of the Cloud Report).
O 60% of IT decision-makers report that managing multi-cloud environments is a top challenge (Source: IBM).
1.31. How does a cloud management platform work?
A cloud management platform enables organizations to manage cloud resources via an
orchestration suite that automates cloud management tasks. The CMP allows for full
visibility into each cloud environment an enterprise is running – whether that’s public,
private, or both. It monitors the usage of all a company’s cloud resources in order to
ensure organisations are using them efficiently.

1.32. Cloud Management Wheel


Cloud management is made up of seven functional areas and five cross-functional attributes.
The functional areas are specific to one use case, whereas the cross-functional attributes
aim at broader goals that are common to multiple use cases.

1.33. Cloud Cost Management


Cloud cost management, also known as cloud cost optimization, is a technique used to efficiently manage and track the usage of cloud resources and optimize their use in the best possible way. It includes understanding the costs related to cloud resources and removing unused or unnecessary resources. Cloud costs grow less efficient when resources are not managed and tracked properly. Many factors affect cloud cost, such as virtual machine instances, memory and storage utilization, network traffic, web services, and software licenses.
1.34. Why is Cloud Cost Management required?
Cloud cost management helps businesses know their spending on cloud services and resources, analyze it, and make the best use of it.
By understanding the utilization and spend on cloud resources, businesses can also achieve
their objectives and goals like security, accountability, better visibility, strategic planning
etc.
O Unmanaged cloud usage increases costs.
O Surveys indicate that companies waste around 35% of their cloud spend.
O This demands getting the right instance size, shutting down unused resources, and scheduling virtual machines correctly (a minimal idle-instance check is sketched below).
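The sketch below implements the idle-instance check mentioned above. It is a minimal example assuming AWS, using boto3 and CloudWatch CPU metrics; the 5% threshold and 24-hour window are arbitrary assumptions, and actually stopping instances is left as a manual follow-up.

    import datetime
    import boto3

    IDLE_CPU_PERCENT = 5.0  # assumed "idle" threshold

    ec2 = boto3.client("ec2")
    cloudwatch = boto3.client("cloudwatch")
    end = datetime.datetime.now(datetime.timezone.utc)
    start = end - datetime.timedelta(hours=24)

    # Walk all running instances and inspect their recent CPU utilization.
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]

    for res in reservations:
        for inst in res["Instances"]:
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
                StartTime=start, EndTime=end,
                Period=3600, Statistics=["Average"],
            )["Datapoints"]
            # Flag instances whose hourly average never rose above the threshold.
            if stats and max(d["Average"] for d in stats) < IDLE_CPU_PERCENT:
                print(f"{inst['InstanceId']} looks idle; candidate for stopping.")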

1.35. Cloud Cost Management Benefits


O Scalability
O Control over cloud spending
O Best-practice implementations
O Analysis of resource usage
O Lower IT cost
O Balancing needs
O Scheduling time

1.36. Cloud Lifecycle Management


Enterprise Manager allows you to manage the entire cloud lifecycle, which includes the following:
O Planning
O Setting Up the Cloud
O Building the Cloud
O Testing and Deploying a Service
O Monitoring and Managing the Cloud
O Metering, Charging, and Optimization
1.37. Multi-Cloud Management
Multi-cloud management is the ability to manage cloud-based services across multiple vendors from a single, centralized environment. A multi-cloud management solution should create consistent workflows that help to manage the organization's infrastructure provisioning across many public cloud vendors.
Multi-cloud management is the process of tracking, securing, and optimizing a multi-
cloud deployment. It should also provide the visibility necessary to create seamless con-
nectivity and security between components.

1.38. Multi-Cloud Management Features


O Self-Service Provisioning - provides businesses with the freedom of choice
O Scheduled Tasks
O Workflow Automation
O Reliability and Flexibility
O Eliminating Vendor Lock-In
O Cost
O Reporting Features

1.39. Cloud Automation


Cloud automation uses software to configure, deploy, provision, and manage cloud com-
puting resources and infrastructure. The purpose of cloud automation is to reduce the
need for human intervention in these processes, so they are faster, more efficient, and less
error prone.
Cloud automation technology unites cloud management processes, including cloud operations, orchestration, and governance.
Cloud automation enables IT admins and cloud admins to automate manual processes and speed up the delivery of infrastructure resources on a self-service basis, according to user or business demand. Cloud automation can also be used in the software development lifecycle for code testing, network diagnostics, data security, software-defined networking (SDN), or version control in DevOps teams.
Any industry that encounters repetitive tasks can use automation, but automation is more prevalent in the manufacturing, robotics, and automotive industries, as well as in IT systems.
Cloud automation can also be implemented to support corporate WAN, VLAN, and SD-WAN deployments using software.
It uses cloud management tools to achieve tasks. Auto-provisioning servers, backing up data, or discovering and eliminating unused processes are some of the tasks that cloud automation can accomplish without real-time human interaction.
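For instance, auto-provisioning servers can be reduced to a short script. The sketch below is a minimal example assuming AWS EC2 via boto3; the AMI ID, key pair name, and security group ID are placeholders to be replaced with values valid in the target account and region.

    import boto3

    ec2 = boto3.client("ec2")

    # Launch up to two tagged servers from a placeholder machine image.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",            # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=2,
        KeyName="example-keypair",                  # placeholder key pair
        SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder group
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "provisioned-by", "Value": "automation"}],
        }],
    )
    for inst in response["Instances"]:
        print("Launched", inst["InstanceId"])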

1.40. Why automate clouds?


- Hybrid and multicloud environments add an additional layer of complexity to infrastructure, network, application, and user administration.
- IT teams need to manage both on-site and cloud-based environments, often using specialized management tools for each.
- IT operations are resource-intensive, and maintaining legacy systems and processes at the same time as new ones only increases complexity.
- Requirements and demand are outgrowing IT and business capabilities.
- The scale of technology (virtualization, cloud, containers, etc.) is too great to manage manually.
- As a result, it can be nearly impossible to effectively maintain, track, scale, and secure resources and applications by hand.
- Cloud automation streamlines tasks or processes to improve efficiency and reduce manual workload.
- It can unite hybrid and multicloud management under a single set of processes and policies to improve consistency, scalability, and speed.
Some of the repetitive tasks are:
- Sizing, provisioning, and configuring resources such as virtual machines (VMs).
- Establishing VM clusters and load balancing.
- Creating storage logical unit numbers (LUNs).
- Invoking virtual networks.
- The actual cloud deployment.
- Monitoring and managing availability and performance.

1.41. What cloud management processes can be automated?


- Resource allocation through autoscaling.
- Infrastructure configurations can be defined through templates and code.
- Continuous software development relies on automation for various steps, from code scans and version control to testing and deployment.
- Assets can be tagged automatically based on specific criteria, context, and conditions of operation (see the tagging sketch after this list).
- Cloud environments can be set up with automated security controls that enable or restrict access to apps or data, and scan for vulnerabilities.
- Logging and monitoring - cloud tools and functions can be set up to log all activity involving services and workloads in an environment.
- Scaling multiclouds - managing more than one cloud service, from more than one cloud vendor.
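The tagging sketch referenced above: a minimal example assuming AWS, in which any EBS volume missing an "owner" tag inherits the tag from the instance it is attached to. The tag key and the inheritance rule are illustrative assumptions, not a standard AWS feature.

    import boto3

    ec2 = boto3.client("ec2")

    for vol in ec2.describe_volumes()["Volumes"]:
        tags = {t["Key"]: t["Value"] for t in vol.get("Tags", [])}
        if "owner" in tags or not vol["Attachments"]:
            continue  # already tagged, or not attached to anything
        instance_id = vol["Attachments"][0]["InstanceId"]
        inst = ec2.describe_instances(InstanceIds=[instance_id])
        inst_tags = {t["Key"]: t["Value"] for t in
                     inst["Reservations"][0]["Instances"][0].get("Tags", [])}
        if "owner" in inst_tags:
            # Copy the owner tag from the instance down to its volume.
            ec2.create_tags(Resources=[vol["VolumeId"]],
                            Tags=[{"Key": "owner", "Value": inst_tags["owner"]}])
            print(f"Tagged {vol['VolumeId']} with owner={inst_tags['owner']}")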

1.42. Cloud automation vs. Cloud orchestration


- Cloud orchestration is about achieving objectives via cloud infrastructure by strategically organizing automated tasks.
- Cloud orchestration combines low-level tasks into processes and coordinates them throughout the entire infrastructure, often consisting of multiple locations or systems.
- Cloud orchestration technologies integrate automated tasks and processes into a workflow to perform specific business functions.
- Cloud automation defines the deployment and management of tasks to be automated, and cloud orchestration arranges and coordinates those defined tasks into a unified approach to accomplish intended goals.

1.43. Cloud Automation - Benefits


- Saves an organization time and money.
- Is faster, more secure, and more scalable than manually performing tasks.
- Causes fewer errors, as organizations can construct more predictable and reliable workflows.
- Increases efficiency by enabling continuous deployment and automating bug detection.
- Simplifies implementation, compared to on-premises platforms, requiring less IT intervention.
- Centralizes governance: a unified automation platform allows organizations to standardize governance across data centers.
- Boosts security: organizations can use automation to monitor and log activity across an entire IT environment.

1.44. Cloud Infrastructure Security


Cloud infrastructure security is a framework for safeguarding cloud resources against
internal and external threats. It protects computing environments, applications, and sensi-
tive data from unauthorized access by centralizing authentication and limiting authorized
users’ access to resources.
-The cloud infrastructure security approach comprises a broad set of policies, tech-
nologies, and applications.
-It includes controls that help eliminate vulnerabilities or mitigate the consequences of an
incident by automatically preventing, detecting, reducing, and correcting issues as they
occur.
- It also facilitates business continuance by aiding in disaster recovery and supports regu-
latory compliance across multiple cloud infrastructures.

1.45. Why Cloud Infrastructure Security?


• 98% of companies have experienced a cloud data breach in the past 18 months
• Security threats arising from multi-cloud strategies
• Accidental data leaks
• The need to protect the reliability and availability of cloud services
• The need to support regulatory compliance in the cloud

1.46. How does Cloud Infrastructure Security work?


• The responsibility for cloud infrastructure security depends on the customer's cloud strategy.
• For example, in the public cloud, security is shared between the cloud provider and the customer under the cloud shared responsibility model. Here the public cloud service provider is responsible for the security of the physical infrastructure in its data centers.
• Responsibility for virtual infrastructure can be split between the public cloud customer and provider based on the cloud service model in use.
• For example, the cloud provider is responsible for securing the services that it provides to a cloud customer, such as the hypervisors.
• The cloud provider is fully responsible for the security of the infrastructure stack.
• The public cloud customer is responsible for properly configuring the security settings provided by the cloud provider.
• They are also responsible for securing everything above the handover point in the cloud infrastructure stack.
• For example, a cloud customer should deploy virtual firewalls and similar network security solutions to secure traffic in an IaaS deployment.
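As an illustration of the customer's side of this split, the sketch below provisions a virtual firewall in the form of a security group. It is a minimal example assuming AWS EC2 via boto3; the VPC ID is a placeholder, and the SSH source range uses a documentation CIDR standing in for a corporate network.

    import boto3

    ec2 = boto3.client("ec2")

    # Create a customer-managed virtual firewall for web servers.
    sg = ec2.create_security_group(
        GroupName="web-tier-sg",
        Description="Customer-managed virtual firewall for web servers",
        VpcId="vpc-0123456789abcdef0",  # placeholder VPC
    )
    # Allow HTTPS from anywhere, but SSH only from the corporate range.
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[
            {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
            {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
             "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},
        ],
    )
    print("Created security group", sg["GroupId"])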

1.47. Types of Cloud Infrastructure Security


"Public Cloud Infrastructure Security:"
According to the public cloud shared responsibility model, the physical infrastructure in
public cloud environments is managed and protected by the cloud provider who owns it,
while the virtual infrastructure is split between the cloud vendor and the customer. The
cloud provider provides tools that allow the organization to secure its workloads.
An organization is responsible for:
• Securing workloads and data
• Ensuring cloud configurations remain secure
• Understanding which service level agreements (SLAs), supplied by your cloud provider, deliver relevant services and monitoring

"Private Cloud Infrastructure Security:"


Private clouds are deployed within an organization’s data centers, making the organization
responsible for ensuring private cloud security, including the security of the underlying
infrastructure.
Additional measures an organization should take to secure its private cloud:
• Use cloud-native monitoring tools to gain visibility over any anomalous behavior in your running workloads.
• Monitor privileged accounts and resources for suspicious activity to detect insider threats.
• Ensure complete isolation between virtual machines, containers, and host operating systems.
• Virtual machines should have dedicated NICs or VLANs, and hosts should communicate over the network using a separate network interface.
"Hybrid Cloud Infrastructure Security:"
Hybrid clouds mix public and private cloud environments. This means that responsibility for the underlying infrastructure is shared between the cloud provider (in the case of public cloud) and the cloud customer.
The following security considerations are important in a hybrid cloud environment:
• Ensure public cloud systems are secured using all the best practices.
• Private cloud systems should follow private cloud security best practices, as well as traditional network security measures for the local data center.
• Avoid separate security strategies and tools in each environment; adopt a single security framework that can provide controls across the hybrid environment.
• Identify all integration points between environments, treat them as high-risk components, and ensure they are secured.

1.48. Benefits of Cloud Infrastructure Security


• Improved Security • Greater Reliability and Availability • Simplified Management •
Regulatory Compliance • Decreased Operating Costs • Cloud confidence

1.49. Cloud Infrastructure Security Best Practices


Cloud infrastructure security is vital to the protection of corporate cloud environments
and the resources that they contain.
Some security best practices for the cloud include:
• Implement security for both the control and data planes in cloud environments.
• Perform regular patching and updates to protect applications and the OS against potential exploits.
• Implement strong access controls leveraging multi-factor authentication and the principle of least privilege.
• Educate employees on the importance of cloud security and best practices for operating in the cloud.
• Encrypt data at rest and in transit across all of the organization's IT environment (a minimal encryption sketch follows this list).
• Perform regular monitoring and vulnerability scanning to identify current threats and potential security risks.
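The encryption sketch referenced above: client-side symmetric encryption of data before it is written to cloud storage, assuming the Python cryptography package (Fernet). Storing the key in a secrets manager is assumed rather than shown.

    from cryptography.fernet import Fernet  # pip install cryptography

    # Generate the key once and keep it in a secrets manager, never in code.
    key = Fernet.generate_key()
    f = Fernet(key)

    # Encrypt before uploading to cloud storage; decrypt after downloading.
    plaintext = b"customer-record-123"
    ciphertext = f.encrypt(plaintext)
    assert f.decrypt(ciphertext) == plaintext
    print("ciphertext prefix:", ciphertext[:16])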

1.50. Cloud Network Security


Cloud network security refers to the security measures—technology, policies, controls,
and processes—used to protect public, private, and hybrid cloud networks. Cloud net-
work security solutions focus on securing data, applications, virtual machines, and in-
frastructure in the cloud from the risks of unauthorized access, data loss, data breaches,
service interruption, and degraded performance. Cloud network security forms one of
the foundational layers of cloud security that enables companies to embed security mon-
itoring, threat prevention, and network security controls to help manage the risks of the
dissolving network perimeter.

1.51. Why is Cloud Network Security important?


- Moving beyond a traditional on-premises perimeter, i.e. into the cloud, is a challenge.
- Trust in your cloud service provider and trust in your own systems are incredibly important concerns.
- Extending your existing network to cloud environments raises security concerns; existing security mechanisms are not good enough.
- Sensitive information is migrated to the cloud, where it becomes more vulnerable.
- These resources need to be protected in accordance with corporate security policies and applicable regulations.

1.52. How Cloud Network Security works?


-Cloud environments use software-defined networking (SDN) to route traffic through an
organization’s cloud-based infrastructure.
-Cloud network security solutions integrate with cloud platforms and virtualization so-
lutions and deploy virtual security gateways in order to achieve the visibility and control
required to perform segmentation, security monitoring and advanced threat prevention for
network traffic.
These virtual security gateways are similar in function and capability to on-premises security gateways, but are virtual and hosted in the cloud.

1.53. Cloud network security architecture – components - SDN


-Software Defined Networks (SDNs) represent the central component of cloud network
security architectures.
-They separate the control plane, also known as the logical network layer that makes
traffic routing decisions, from the underlying data plane, the mechanism that forwards
network traffic through routers.
-SDNs offer network traffic programmability and control with policy management while
leveraging network hardware resources from public cloud providers.
- Traffic moving across an SDN can be classified based on application or service type and then prioritized and forwarded based on centrally-managed policies that optimize network traffic.
- SDN technology also reduces the physical infrastructure overhead so that any organiza-
tion can quickly deploy highly secure networks either locally or globally.

1.54. How does AWS Network Security work?


-AWS cloud security works by protecting the infrastructure that runs services within the
AWS Cloud.
-It is composed of the hardware, software, networking, and facilities that run AWS Cloud
services.
-AWS is responsible for security processes such as patch management and configuration
management, servicing flaws within the infrastructure of the cloud as well as maintaining
configuration of its infrastructure devices.

1.55. Elements of cloud network security


-Integrated cloud security stacks –
Includes next-generation firewall protection, anti-virus and anti-bot tools, intrusion pre-
vention systems, controls for individual apps, IAM, and data loss prevention tools.
-Sanitization –
Systems can filter low-level traffic and remove potential threats, without the need for full-
scale inspection.
-Exploit protection –
Protection against known Zero Day Exploits, with data derived from the latest threat in-
telligence.
-Traffic inspection –
Inspection of SSL/TLS traffic passing throughout virtualized environments. Analyzes en-
crypted traffic without compromising speed.
-Centralized security administration –
Solutions cover all cloud applications and storage assets. They integrate seamlessly with
existing resources, providing total awareness of network activity.
-Segmentation –
Cloud network security applies micro-segmentation to limit user permissions and guard
confidential data.
-Remote access –
Ensures secure access for remote workers and third parties. Users can connect to cloud
assets safely from any location.
-Automation tools –
Includes automated extension to newly installed cloud services. Automated workflows
blend ease of use and security, allowing companies to harness the potential of the cloud.
-Simple integration –
Cloud security tools integrate with legacy applications, operating systems, and third-party
security systems.

1.56. Shared Responsibility Model


- The shared responsibility model (SRM) is an understanding between the cloud service provider (CSP) and an end-user of its services.
- This agreement says that a CSP will be responsible for securing the platform infrastructure of its cloud operations, while an end-user is responsible for securing the workloads running on the cloud platform.
- In AWS, while the provider works to keep its infrastructure safe, customers are in charge of IT controls such as encryption and identity and access management (IAM), patching guest operating systems, configuring databases, and employee cybersecurity training.

1.57. Benefits of cloud network security


- Enhanced protection for sensitive data
- Better visibility for administrators to monitor threats and user activity
- Simplified cloud policy management
- Robust security systems detect, contain, and neutralize malicious threats before they cause damage
- Users can automatically extend access controls and threat detection to new cloud resources
- Policy-based internet traffic routing
- Secure remote and mobile network access
- Cost-effective network scalability

1.58. Challenges of cloud network security


- Understanding shared responsibility
- Managing dynamic cloud environments
- Keeping data safe from cyber attacks
1.59. Host level security in cloud
Host level security in cloud computing is a set of tools that protect servers and worksta-
tions from attacks and other threats.
These tools can include:
- Security configurations: A thorough assessment of the security configurations of each host system
- Software installations: An evaluation of the software installations on each host system
- Access controls: An evaluation of the access controls on each host system
- Vulnerability identification: The use of tools and techniques to identify vulnerabilities, misconfigurations, and potential security gaps
- Remediation strategies: The development of detailed recommendations and remediation strategies to strengthen host security
Host level security also describes how your server is set up for the following tasks:
- Preventing attacks.
- Minimizing the impact of a successful attack on the overall system.
- Responding to attacks when they occur.
- Host level security refers to measures taken to secure an individual computer or device within a network.
- These measures may include installing and regularly updating antivirus software, using strong passwords, limiting access to authorized users, and enabling firewalls to prevent unauthorized access.
- Ensuring host level security is important because it helps prevent attackers from gaining access to sensitive information stored on the device or using it to launch attacks on other devices in the network.

1.60. Key Features of Host level security


Key components and practices of host-level security in the cloud are:
1. Hardening Hosts - Hardening involves configuring servers to minimize vulnerabilities and reduce the attack surface (a minimal hardening check is sketched after this list).
2. Patch Management - Regularly applying security patches and updates to the host oper-
ating systems and software is essential to protect against known vulnerabilities.
3. Access Controls - Implementing strict access controls ensures that only authorized
users can access the hosts.
4. Intrusion Detection and Prevention Systems (IDPS) - IDPS solutions monitor host ac-
tivities for signs of malicious behavior and can take action to prevent or mitigate attacks.
5. Encryption - Encrypting data both at rest and in transit is crucial to protect sensitive
information from unauthorized access.
6. Endpoint Protection - Deploying endpoint protection solutions on hosts helps to safe-
guard them from malware, viruses, and other threats.
7. Monitoring and Logging - Continuous monitoring and logging of host activities are
vital for detecting and responding to security incidents.
8. Backup and Recovery - Regular backups and a robust disaster recovery plan ensure
data integrity and availability in case of a security incident or hardware failure.
9. Firewall Protection: Firewalls provide a layer of protection against malicious access to
your network.
10. Antivirus Protection: To detect and remove any malicious code that may be present
on your network. Monitor for malicious activity and alert you when a threat is detected.
11. Intrusion Detection: To detect and alert you to any suspicious activity on your net-
work. This includes detecting traffic from known malicious IP addresses and preventing
unauthorized access.
12. User Authentication: To ensure that only authorized users can access your network.
To monitor user activity to ensure that no suspicious activity is occurring.
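The hardening check referenced in item 1: a minimal sketch, assuming an OpenSSH server, that parses sshd_config and warns about two settings commonly flagged in hardening guides. The file path and the set of risky settings are illustrative; real baselines come from benchmarks such as CIS.

    # Settings a hardening guide would typically flag, in lowercase.
    RISKY = {
        "permitrootlogin": "yes",
        "passwordauthentication": "yes",
    }

    def check_sshd(path="/etc/ssh/sshd_config"):
        findings = []
        with open(path) as cfg:
            for line in cfg:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue  # skip blanks and comments
                parts = line.split(None, 1)
                if len(parts) != 2:
                    continue
                key, value = parts[0].lower(), parts[1].strip().lower()
                if RISKY.get(key) == value:
                    findings.append(f"{parts[0]} {parts[1].strip()}")
        return findings

    for finding in check_sshd():
        print("hardening warning:", finding)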

1.61. Resource Management


In cloud computing, resource management refers to the process of managing the usage
of resources in a cloud environment. This includes compute resources (like CPU and
memory), storage resources (like disk space), and network resources (like bandwidth).
Resource management is crucial for optimizing the performance and cost of a cloud en-
vironment. It involves monitoring resource usage, allocating resources, and managing
resource capacity.
- Resource management is a core function of any man-made system; it affects the three basic criteria for the evaluation of a system: performance, functionality, and cost.
- Efficient resource management has a direct effect on performance and cost and an indirect effect on the functionality of the system.
-Cloud resource management requires complex policies and decisions for multi-objective
optimization.
-Effective resource management is extremely challenging due to the scale of the cloud
infrastructure and to the unpredictable interactions of the system with a large population
of users.

-Resource management becomes even more complex when resources are oversub-
scribed and users are uncooperative.
- In addition to external factors, resource management is affected by internal factors, such
as heterogeneity of hardware and software systems, the scale of the system, the failure
rates of different components, and other factors.
1.62. How are resources managed in the cloud?
The strategies for resource management associated with the basic cloud delivery models, IaaS, PaaS, SaaS, and DBaaS, are different.
- In all cases, the cloud service providers are faced with large fluctuating loads which
challenge the claim of cloud elasticity.
- In some cases, when a spike can be predicted, the resources can be provisioned in ad-
vance, e.g., for web services subject to seasonal spikes. For an unplanned spike the situa-
tion is slightly more complicated.
Auto-scaling can be used for unplanned spikes of the workload provided that:
(a) there is a pool of resources that can be released or allocated on demand; and
(b) there is a monitoring system enabling the resource management system to reallocate
resources in real time.
Auto-scaling is supported by PaaS services, such as Google AppEngine.
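A minimal sketch of conditions (a) and (b) in practice, assuming AWS EC2 Auto Scaling via boto3: a target tracking policy that keeps average CPU near a chosen setpoint by allocating and releasing instances from the pool automatically. The group name and target value are hypothetical.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Keep the group's average CPU near 60% by scaling in and out.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",          # hypothetical group
        PolicyName="cpu-target-60",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": 60.0,
        },
    )
    print("Target tracking policy attached to web-asg")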

1.63. Resource Management


- Centralized control cannot provide adequate solutions for management policies when
changes in the environment are frequent and unpredictable.
- Distributed control poses its own challenges since it requires some form of coordination
between the entities in control.
- Autonomic policies are of great interest due to the scale of the system and the unpredictability of the load, when the ratio of peak to mean resource demand can be very large.

1.64. POLICIES AND MECHANISMS FOR RESOURCE MANAGEMENT


A policy refers to the principles guiding decisions, while mechanisms represent the means
to implement policies.
Cloud resource management policies can be loosely grouped into five classes:
1. admission control, 2. capacity allocation, 3. load balancing, 4. energy optimization,
and 5. QoS guarantees
The explicit goal of an admission control policy is to prevent the system from accepting workload in violation of high-level system policies. For example, a system may not accept additional workload that would prevent it from completing work already in progress or contracted.
Capacity allocation means allocating resources to individual instances; an instance is an activation of a service.
Load balancing and energy optimization can be done locally; the two are correlated, and both affect the cost of providing services. Load balancing aims, for example, to evenly distribute the load among a set of servers. An important goal of cloud resource management is minimization of the cost of providing cloud services and, in particular, minimization of cloud energy consumption.
QoS is probably the aspect of resource management that is most difficult to address and, at the same time, possibly the most critical for the future of cloud computing.
Typically, resource management strategies jointly target performance and power consumption.
The Dynamic Voltage and Frequency Scaling (DVFS) techniques, such as Intel's SpeedStep and AMD's PowerNow, lower the voltage and the frequency to decrease power consumption.
Intel's SpeedStep slows down the CPU to save electricity and also reduces the heat and noise of the PC. These techniques have migrated to virtually all processors, including the ones used in high-performance servers.
Processor performance decreases, but at a substantially lower rate than the energy consumption, as a result of lower voltages and clock frequencies; for example, a CPU stepped down to 1.8 GHz saves 18% of the energy required for maximum performance.
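The saving follows from the standard dynamic-power model P = C * V^2 * f, where C is the switched capacitance, V the supply voltage, and f the clock frequency: because voltage scales down with frequency, power falls much faster than performance. The sketch below works through illustrative numbers; the operating points are assumptions, not vendor data.

    # Dynamic CPU power: P = C * V^2 * f (C: switched capacitance).
    def dynamic_power(capacitance, voltage, frequency_ghz):
        return capacitance * voltage ** 2 * frequency_ghz

    C = 1.0                                  # arbitrary capacitance units
    p_full = dynamic_power(C, 1.2, 2.2)      # assumed full-speed point
    p_step = dynamic_power(C, 1.0, 1.8)      # assumed stepped-down point

    print(f"power at 1.8 GHz is {p_step / p_full:.0%} of full-speed power")
    # Frequency drops about 18% (2.2 -> 1.8 GHz), yet power drops about 43%
    # in this illustrative model, because of the squared voltage term.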
Cloud resource allocation techniques must be based on a systematic approach, rather than
on ad hoc methods.
The four basic mechanisms for the implementation of resource management policies are:
• Control theory • Machine learning • Utility-based approaches • Market-oriented mechanisms
- Control theory uses feedback mechanisms to guarantee system stability and to predict
transient behavior. Feedback can only be used to predict local, rather than global behav-
ior. Kalman filters have been used for unrealistically simplified models.
- Machine learning techniques do not need a performance model of the system, a major
advantage.
- Utility-based approaches require a performance model and a mechanism to correlate
user-level performance with cost.
- Market-oriented mechanisms do not require a model of the system, e.g., combinatorial auctions for bundles of resources.
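To make the control-theory mechanism concrete, the sketch below implements a toy proportional feedback controller that adjusts a server count toward a target utilization. The gain, target, and bounds are arbitrary assumptions; production controllers add integral terms, smoothing, and a stability analysis.

    TARGET_UTILIZATION = 0.6
    GAIN = 0.5               # proportional gain; too high causes oscillation
    MIN_SERVERS, MAX_SERVERS = 2, 50

    def next_server_count(servers, measured_utilization):
        # Positive error means the system is busier than the target.
        error = measured_utilization - TARGET_UTILIZATION
        adjustment = int(GAIN * error * servers / TARGET_UTILIZATION)
        return max(MIN_SERVERS, min(MAX_SERVERS, servers + adjustment))

    # Example: 10 servers at 90% utilization -> scale out.
    print(next_server_count(10, 0.9))   # prints 12 in this sketch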
