Cloud
Private Cloud
1. Definition
• A private cloud is a cloud infrastructure operated solely for a single organization. It can be
hosted on-premises or externally by a third-party provider.
• It delivers similar benefits to public cloud services, such as scalability and self-service, but
with additional control and security, making it ideal for organizations with strict regulatory or
security requirements.
2. Hosting Models
• On-premises: The organization manages and hosts the cloud infrastructure in its own data
centers. This offers maximum control but requires significant investment in hardware,
software, and personnel.
3. Key Features
• Dedicated Environment: Resources like storage, computing power, and networking are
dedicated to a single organization, ensuring maximum privacy and control.
• Scalability: Like public clouds, private clouds can scale up or down based on demand, but
the resources are only for the organization’s use, ensuring predictable performance.
• Compliance: Private clouds are preferred by industries like finance, healthcare, and
government where strict compliance (e.g., HIPAA, GDPR) is mandatory. They allow
organizations to meet data residency, privacy, and security requirements more easily.
4. Technologies Involved
• Automation Tools: Private clouds use tools like VMware, OpenStack, and Microsoft Azure
Stack to automate provisioning, configuration, and management tasks, streamlining
operations.
5. Advantages
• Control and Customization: Complete control over the cloud environment, with the ability
to customize hardware, networking, and security to meet organizational needs.
• Security and Compliance: Enhanced security features like encryption, firewalls, and
network segmentation, making it easier to comply with regulations.
• Regulatory Compliance: Private clouds offer better control over data residency and
privacy, making it easier to adhere to local and international regulatory requirements.
6. Disadvantages
• Higher Upfront Costs: Significant initial investment in hardware, software, and personnel is
required for on-premises private clouds.
• Management Complexity: The organization must manage, maintain, and update the cloud
infrastructure, which can require specialized IT staff and complex tools.
• Scaling Limitations: Unlike public clouds, which can scale infinitely, private clouds are
limited by the organization's own hardware and infrastructure. Scaling requires purchasing
additional hardware.
7. Use Cases
• Highly Regulated Industries: Financial services, healthcare, and government sectors that
require strict security and regulatory compliance often prefer private clouds.
• Large Enterprises: Companies with sufficient resources for hardware, infrastructure, and
management teams prefer private clouds for their flexibility and long-term cost savings.
8. Comparison with Other Cloud Models
• Public cloud is cheaper and easier to scale but offers less control and security than a
private cloud.
• Hybrid cloud combines private and public cloud elements. Organizations might use private
clouds for critical workloads and public clouds for less sensitive operations.
9. Popular Providers
• VMware vCloud Suite: Offers private cloud infrastructure with virtualization, management,
and automation features.
• Microsoft Azure Stack: An extension of Microsoft’s public cloud, Azure Stack allows
businesses to create their private cloud using Microsoft technologies.
• AWS Outposts: AWS brings its public cloud services into your own data centers, allowing
businesses to build a hybrid or private cloud with AWS infrastructure.
10. Conclusion
• Private clouds are ideal for organizations that prioritize control, security, and compliance.
Though they come with higher costs and management overhead, they offer unparalleled
benefits for businesses in regulated industries or those with unique computing needs.
Key Benefits of a Private Cloud
1. Enhanced Security
• Private clouds provide a high level of security as they operate in isolated environments,
ensuring sensitive data and critical applications are protected from external threats.
2. Increased Control
• Organizations have full control over the hardware, software, and network, allowing them to
tailor the cloud environment to their specific needs, including configuration and
management.
3. Regulatory Compliance
• Private clouds enable organizations to meet strict regulatory standards and compliance
requirements (e.g., GDPR, HIPAA), especially in sectors like finance and healthcare.
4. Customizable Infrastructure
• Since the private cloud is dedicated to one organization, it can be customized for unique
business needs, allowing for specialized configurations, storage, and networking setups.
5. Consistent Performance
• Resources are not shared with other tenants, meaning there is no competition for
bandwidth or compute power, resulting in predictable and reliable performance.
6. Cost Efficiency
• While upfront costs can be high, private clouds can become cost-effective in the long run
for large enterprises with consistent workloads, reducing dependency on external services.
7. Data Privacy
• Data is stored on dedicated infrastructure, which reduces the risk of unauthorized access
and ensures that data remains private and secure, which is crucial for sensitive information.
8. Scalability
• Although not as limitless as the public cloud, private clouds offer scalability by allowing
organizations to expand resources (compute, storage) when needed, in line with business
growth.
9. High Availability
• Private clouds can be designed with high redundancy and failover mechanisms to ensure
that services and applications are always available, even in case of hardware failure.
10. Deployment Flexibility
• Private clouds offer flexible deployment options, whether on-premises, hosted externally by
a provider, or managed by a third party, giving organizations the choice of how they want to
run their cloud environment.
Here are some key challenges of implementing and managing a Private Cloud:
1. High Initial Costs
• Building a private cloud requires a significant upfront investment in hardware, software, and
skilled personnel.
2. Complex Management
• Private clouds demand specialized IT staff and expertise to manage and maintain the
infrastructure, including software updates, network management, and hardware
maintenance.
3. Scalability Limitations
• Unlike public clouds, which offer nearly infinite scalability, private clouds are limited by the
organization’s hardware and resources. Expanding the infrastructure requires purchasing
additional hardware.
4. Capacity Planning
• Proper planning is essential to avoid under-utilization or over-provisioning of resources.
Organizations must accurately predict demand, which can be difficult, especially for
growing businesses.
5. Ongoing Maintenance
• The organization is responsible for continual software updates, patching, and hardware
upkeep, which adds operational overhead.
6. Limited Flexibility
• While the private cloud provides control and customization, the flexibility to quickly scale
up and down like public cloud services may be limited by the available resources on hand.
Types of Private Cloud
1. On-Premises Private Cloud
• Description: The cloud infrastructure is hosted and maintained within the organization’s
own data center. The company has full control over its setup, configuration, and
maintenance.
• Advantages: Complete control over data, hardware, and security. Suitable for organizations
with strict data governance and compliance needs.
• Challenges: Requires significant capital investment and ongoing maintenance costs. The
organization needs skilled IT staff to manage the infrastructure.
2. Hosted Private Cloud
• Description: The private cloud is hosted off-site by a third-party provider, but the
infrastructure is exclusively dedicated to one organization. The hosting provider manages
the hardware and physical infrastructure.
3. Managed Private Cloud
• Description: A third-party provider not only hosts the cloud infrastructure but also
manages it on behalf of the organization. This includes everything from software updates to
security management.
4. Virtual Private Cloud (VPC)
• Description: A private cloud that exists within a public cloud environment but is logically
isolated from other tenants. It uses virtualization technology to create private networks and
resources within the public cloud provider’s infrastructure.
• Advantages: Combines the scalability of the public cloud with the security and isolation of
a private environment. Organizations benefit from the flexibility and cost-effectiveness of
the public cloud while maintaining control over their resources.
• Challenges: The organization still depends on the public cloud provider for some elements
of infrastructure and may face vendor lock-in.
5. Community Cloud
• Description: Cloud infrastructure shared by several organizations with common
requirements (e.g., security or compliance), rather than dedicated to a single organization.
• Challenges: Limited customization and control for each organization, as resources and
decisions are shared among the community members.
6. Hybrid Cloud
• Description: A combination of private cloud and public cloud elements. Organizations may
use a private cloud for sensitive data and applications, while leveraging public cloud
resources for less critical operations.
• Advantages: Provides the flexibility to balance cost and performance. Organizations can
scale quickly by integrating public cloud services when additional capacity is needed.
• Challenges: Managing both private and public cloud infrastructure can be complex and
may require advanced networking and integration skills.
These types allow businesses to choose a private cloud solution that best meets their security,
compliance, scalability, and cost requirements.
Disadvantages of a Private Cloud
1. High Initial Costs
• Setting up a private cloud requires significant capital investment in hardware, software, and
infrastructure. The cost is typically much higher than using public cloud services, especially
for small or medium-sized businesses.
2. Complex Management
• Managing a private cloud is more complex than using public cloud services. It requires
skilled IT personnel to handle tasks such as maintenance, security, and updates. The
organization is responsible for all infrastructure management.
3. Limited Scalability
• While private clouds offer scalability, they are limited by the available hardware and
infrastructure. Scaling up requires purchasing additional equipment, which can take time
and be costly, unlike public clouds, which can scale instantly.
4. Maintenance Overhead
• The organization bears the ongoing burden of hardware upkeep, software updates, and
patching, which adds to operational costs.
5. Security Responsibility
• Although private clouds offer a secure environment, the responsibility for implementing and
managing security falls entirely on the organization. This includes encryption, firewalls,
access control, and compliance with regulations.
6. Disaster Recovery Complexity
• Setting up disaster recovery and backup solutions can be more complex in a private cloud
environment. It requires additional infrastructure and planning, often leading to higher
costs and more management complexity.
Private cloud services refer to a range of offerings that provide cloud computing
functionalities within a private infrastructure, ensuring security, control, and customization.
These services can vary based on what an organization needs. Here are the key private cloud
services:
1. Infrastructure as a Service (IaaS)
• Description: Provides virtualized compute, storage, and networking resources on dedicated
infrastructure.
• Benefits: Customization of the infrastructure, security, and full control over data and
applications.
2. Platform as a Service (PaaS)
• Description: PaaS provides a platform allowing users to develop, run, and manage
applications without worrying about the underlying infrastructure. It is ideal for developers
who need a secure environment to build and deploy applications.
• Examples: Red Hat OpenShift, Cloud Foundry.
3. Software as a Service (SaaS)
• Examples: Private deployments of applications like Microsoft 365 or Salesforce for specific
organizations.
• Benefits: Secure access to applications, with the privacy of private cloud infrastructure.
4. Backup as a Service (BaaS)
• Description: BaaS provides backup solutions to ensure that data is securely stored and
easily recoverable in the event of data loss or a disaster. The service is hosted within a
private cloud environment, ensuring data confidentiality.
• Benefits: Reliable, secure backup with quick recovery options, avoiding the risks associated
with public cloud backups.
5. Disaster Recovery as a Service (DRaaS)
• Description: DRaaS ensures business continuity by replicating and hosting servers and data
in a private cloud environment to provide failover in case of disasters, ensuring that
business-critical operations can continue.
• Benefits: Robust disaster recovery plans with minimal downtime, secure data
management, and easy failover/failback.
6. Storage as a Service (STaaS)
• Description: STaaS offers scalable storage solutions within a private cloud environment,
allowing organizations to store large volumes of data securely and efficiently.
• Benefits: Secure and scalable storage with custom options for different data types,
including backups, archives, and real-time storage.
7. Database as a Service (DBaaS)
• Description: DBaaS provides secure, scalable, and managed database solutions within a
private cloud infrastructure. Organizations can focus on their data and applications while
leaving database management to the service provider.
• Examples: Oracle Cloud Database, IBM Db2, Microsoft SQL Server in private clouds.
8. Identity as a Service (IDaaS)
• Examples: Okta, Microsoft Azure Active Directory (in private cloud setups).
9. Security as a Service (SECaaS)
• Examples: Cisco SecureX, Palo Alto Networks, McAfee (configured for private clouds).
These private cloud services provide businesses with tailored solutions that combine the flexibility
of cloud computing with the privacy and control of dedicated infrastructure.
VM Migration
VM Migration refers to the process of moving a virtual machine (VM) from one physical host or
environment to another. This is done to optimize performance, maintain uptime, or ensure resource
efficiency. There are different types of VM migrations, each serving different purposes. Here are the
main types:
1. Cold Migration
• Description: In cold migration, the virtual machine is powered off before being moved from
one host to another.
• Use Case: When downtime is acceptable, and there’s no need for the VM to be running
during the migration.
2. Live Migration
• Description: Live migration allows moving a running VM from one physical host to another
without stopping the VM. The VM’s memory, state, and storage are transferred while it
remains active.
• Use Case: Used when high availability is required, and downtime is not acceptable.
3. Storage Migration
• Description: Storage migration refers to moving the VM’s data (virtual disks or storage) from
one storage location to another, either within the same host or across different storage
systems.
• Use Case: Used when upgrading storage systems, balancing storage load, or moving data to
faster storage.
• Disadvantages: Potential for high I/O impact during the migration process.
4. Hot Migration
• Description: Hot migration is similar to live migration, where the VM remains powered on
while it is transferred from one host to another.
• Use Case: When you want to move a VM without interrupting its services or stopping it.
5. Hybrid Migration
• Use Case: Useful in scenarios where partial downtime is acceptable, but you want to
minimize it.
6. Cross-Data-Center Migration
• Description: This involves migrating VMs between data centers, typically over wide-area
networks (WAN). The migration can be live or offline depending on the network speed and
latency.
• Use Case: Used when moving workloads between geographically separated data centers
for load balancing, disaster recovery, or regulatory reasons.
• Disadvantages: High complexity and network latency can cause delays in the migration
process.
7. Manual Migration
• Description: In manual migration, an administrator manually moves the VM from one host
to another. This can be done either while the VM is powered off (cold) or powered on (live).
• Use Case: Typically used in smaller environments or when automated tools are unavailable.
8. Automatic Migration
• Description: Automatic migration uses tools like VMware DRS (Distributed Resource
Scheduler) or Microsoft Hyper-V to automatically move VMs based on workload balancing
or fault tolerance needs.
• Use Case: Used for dynamic environments where workloads frequently change and need to
be optimized (a simple balancing heuristic is sketched below).
9. Host-to-Host Migration
• Description: This type involves migrating a VM from one physical host to another in the
same data center or cluster. It can be either live or cold.
• Use Case: Typically used for hardware maintenance or balancing the load across multiple
hosts in a cluster.
• Disadvantages: Requires both hosts to be compatible and part of the same cluster.
10. Cloud-to-Cloud Migration
• Description: Moving a VM from one cloud environment or provider to another.
• Use Case: When shifting workloads between cloud environments to avoid vendor lock-in or
for cost optimization.
Each type of VM migration has its specific use cases and challenges, and the choice of method
depends on factors like downtime tolerance, resource availability, and migration complexity.
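As a rough illustration of automatic migration (type 8 above), the Python sketch below applies a toy balancing rule in the spirit of DRS-style schedulers. The host names, loads, and the 80% threshold are made up for illustration and do not reflect any real scheduler's policy.

```python
# Toy balancing heuristic in the spirit of automatic migration: if a host's CPU
# load exceeds a threshold, move its busiest VM to the least-loaded host.
hosts = {
    "host-a": {"cpu_load": 0.92, "vms": {"vm1": 0.40, "vm2": 0.30, "vm3": 0.22}},
    "host-b": {"cpu_load": 0.35, "vms": {"vm4": 0.35}},
}

def rebalance(hosts, threshold=0.80):
    moves = []
    for name, host in hosts.items():
        if host["cpu_load"] > threshold and host["vms"]:
            # Pick the busiest VM on the overloaded host.
            vm, load = max(host["vms"].items(), key=lambda kv: kv[1])
            # Pick the least-loaded destination host.
            dest = min((h for h in hosts if h != name), key=lambda h: hosts[h]["cpu_load"])
            hosts[name]["vms"].pop(vm)
            hosts[name]["cpu_load"] -= load
            hosts[dest]["vms"][vm] = load
            hosts[dest]["cpu_load"] += load
            moves.append((vm, name, dest))
    return moves

print(rebalance(hosts))   # [('vm1', 'host-a', 'host-b')]
```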
Let's break down each stage of Hot/Live VM Migration and explain it step by step:
Stage 1: Reservation
• Process: A migration request is sent from the source host (Host A) to the target host (Host
B). During this process, Host B checks if it has the necessary resources (CPU, memory,
storage) to accommodate the migrating VM.
• Outcome:
o If Host B has sufficient resources, they are reserved and the migration proceeds.
o If resources are insufficient, the migration request is rejected, and the VM continues
to run unaffected on Host A.
Stage 2: Iterative Pre-Copy
• Process: In this phase, the VM’s memory pages from Host A are transferred to Host B in a
series of iterations. Initially, all memory pages are copied to Host B.
• Subsequent Iterations: After the initial copy, only the memory pages that have been
modified (referred to as "dirty pages") during the transfer process are sent over in the
subsequent iterations.
• Goal: To minimize the amount of data that needs to be transferred during the actual switch-
over (stop-and-copy phase) by iterating until the number of dirty pages becomes very small.
Stage 3: Stop-and-Copy
• Process: At this stage, the source VM (on Host A) is temporarily stopped. The remaining dirty
pages (those that were changed during the pre-copy phase) are copied to Host B. This is a
quick process as most of the data has already been transferred during the pre-copy phase.
• Goal: To transfer the final state of the VM to Host B while keeping the downtime as short as
possible.
Stage 4: Commitment
• Process: Once the remaining memory pages are copied, the migration process reaches the
commitment stage. The target VM (on Host B) is now ready to take over the operation.
• Action: At this point, the migration either proceeds to completion or, if any error occurs, the
migration can be aborted, and the VM continues to run on Host A.
Stage 5: Activation
• Process: Host B activates the VM using the copied data, and the VM resumes operation on
the new host. All network connections and processes are switched to Host B.
• Outcome: The migration is considered successful once the VM is activated and running on
Host B. The resources on Host A are freed, and the migration process is complete.
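The pre-copy and stop-and-copy behaviour above can be summarized in a short simulation. The page counts, dirty rate, and stopping threshold below are invented purely for illustration; real hypervisors use adaptive policies.

```python
# Simplified simulation of the iterative pre-copy + stop-and-copy phases of
# live migration. Numbers are illustrative only.
import random

def live_migrate(total_pages=10_000, dirty_rate=0.05, max_rounds=10, stop_threshold=100):
    to_send = total_pages                      # round 1: copy all memory pages
    for round_no in range(1, max_rounds + 1):
        print(f"pre-copy round {round_no}: sending {to_send} pages")
        # While pages are being copied, the running VM dirties some of them.
        to_send = int(total_pages * dirty_rate * random.uniform(0.5, 1.5))
        if to_send <= stop_threshold:
            break
    # Stop-and-copy: pause the VM and transfer the final dirty pages.
    print(f"stop-and-copy: VM paused, sending final {to_send} dirty pages")
    print("commitment + activation: VM resumes on the destination host")

live_migrate()
```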
Benefits of VM Migration
1. Load Balancing
o Benefit: Running VMs can be moved from overloaded hosts to under-utilized ones,
evening out resource usage across the cluster.
2. Hardware Maintenance
o Use Case: Businesses can use live migration to prevent downtime during
hardware maintenance or upgrades.
3. Energy Efficiency
o Benefit: By consolidating VMs onto fewer physical servers, idle servers can be
powered down, reducing energy consumption in data centers.
5. Disaster Recovery
6. Scalability
7. Performance Optimization
o Benefit: VM migration allows moving applications to hardware that best fits the
performance requirements of that specific application, optimizing response
times and resource usage.
8. Cost Efficiency
9. Geographic Flexibility
o Use Case: Enterprises with global operations can migrate VMs across different
regions to serve local user bases better or meet regulatory requirements.
10. Testing and Development
o Use Case: Developers can move VMs to isolated environments for testing new
software versions without disrupting live environments.
Challenges of VM Migration
1. Downtime Risk
o Impact: Services may face a small downtime or disruption during the final
phase of migration.
5. Storage Dependencies
o Challenge: VMs rely on underlying storage systems, and migrating VMs with
large amounts of storage (or complex storage setups) can be slow and
resource-intensive.
o Impact: Slow migration times for large or complex VMs, leading to potential
service interruptions.
6. Security Concerns
7. Compatibility Issues
VM migration offers several benefits like optimizing resources, reducing downtime, and ensuring
business continuity, but it also comes with challenges such as network limitations, security risks,
and complexities in managing large-scale migrations. Proper planning and infrastructure readiness
are key to ensuring a smooth migration process.
Cloud Provisioning
Key Benefits
2. Cost Efficiency: Pay only for the resources you use, reducing upfront capital expenditure on
hardware and infrastructure.
4. Flexibility: Supports various deployment models (private, public, hybrid) tailored to different
business needs.
Types of Cloud Provisioning
1. Manual Provisioning
o Use Case: Suitable for small organizations with less frequent resource changes.
2. Automated Provisioning
o Description: Uses scripts or tools to automate the provisioning process, allowing for
rapid and consistent resource allocation.
3. Self-Service Provisioning
4. Dynamic Provisioning
o Use Case: Useful for applications with variable workloads to ensure optimal
performance (see the sketch after this list).
5. Hybrid Provisioning
o Use Case: Organizations with specific regulatory or performance needs can utilize
both environments.
Challenges of Cloud Provisioning
1. Complexity
o Challenge: Managing and integrating multiple provisioning types and environments
can become complicated.
2. Security Risks
3. Cost Management
4. Performance Issues
5. Compliance Challenges
6. Vendor Lock-In
o Challenge: Relying on specific cloud providers can create dependencies that are
difficult to migrate away from.
7. Lack of Visibility
9. Change Management
o Challenge: Managing changes in provisioning processes requires proper planning
and communication to avoid disruptions.
10. Skills Gap
o Challenge: Organizations may lack the necessary skills and expertise to effectively manage
cloud provisioning.
Here are some examples of cloud provisioning across various cloud service providers and
scenarios:
2. Microsoft Azure
o Description: Allows users to create, update, and manage resources in Azure using
templates. ARM supports automated provisioning of multiple resources in a single
operation through declarative templates.
4. IBM Cloud
5. Oracle Cloud
6. VMware Cloud
7. DigitalOcean
• Example: Droplets
8. Heroku
o Description: Heroku uses dynos to run applications. Developers can easily scale
their applications by provisioning additional dynos or changing dyno types through a
simple command in the CLI.
9. Alibaba Cloud
10. OpenStack
These examples illustrate the diverse cloud provisioning capabilities across different platforms,
highlighting automation, scalability, and ease of use for users in various scenarios.
OpenStack is an open-source cloud computing platform that enables users to deploy and manage
cloud infrastructure and services in a flexible and scalable manner. It provides a set of software
tools for building and managing cloud computing environments, typically deployed as
infrastructure-as-a-service (IaaS). Here’s a detailed overview of what OpenStack does and how it
works:
1. Infrastructure Management:
o OpenStack allows users to create and manage virtualized computing resources (like
virtual machines), storage, and networking. It can run on standard hardware,
enabling users to turn their physical servers into a cloud environment.
2. Multi-Tenancy:
3. Resource Provisioning:
4. Self-Service:
o Users can deploy their applications and services through self-service dashboards or
APIs, allowing for increased agility and reduced dependency on IT departments.
5. Scalability:
o OpenStack can scale out by adding more hardware or resources as demand grows,
making it suitable for large-scale deployments.
6. Modularity:
OpenStack operates through a set of core components, each responsible for different aspects of
cloud management. Here are the main components and their functions:
1. Nova (Compute):
2. Neutron (Networking):
o Provides network connectivity as a service. It allows users to create and manage
networks, subnets, and routers, supporting advanced networking features like load
balancing and VPNs.
3. Cinder (Block Storage):
o Manages block storage for virtual machines. It enables users to create, attach, and
manage volumes of storage, ensuring data persistence.
4. Swift (Object Storage):
o A highly scalable object storage system that allows users to store and retrieve large
amounts of unstructured data, such as images and backups.
5. Glance (Image Service):
o Provides a registry for storing and retrieving virtual machine disk images. It allows
users to create snapshots and manage images for instance deployment.
6. Horizon (Dashboard):
o A web-based user interface for OpenStack that allows users to manage and
visualize resources, services, and configurations easily.
8. Heat (Orchestration):
9. Ceilometer (Telemetry):
o Collects and monitors usage metrics and statistics across all OpenStack services,
helping with billing and capacity planning.
• Management: Administrators can manage the OpenStack environment through the Horizon
dashboard or command-line tools. APIs are available for programmatic access and
automation.
• Community and Support: Being open-source, OpenStack has a vibrant community that
contributes to its development. It also offers various distributions (like Red Hat OpenStack,
Canonical’s Charmed OpenStack, etc.) that provide additional support and enterprise
features.
Conclusion
OpenStack provides a flexible and powerful platform for building and managing cloud
environments. Its modular architecture, combined with open-source principles, allows
organizations to customize their cloud infrastructure to meet their specific needs while leveraging
the benefits of scalability, multi-tenancy, and self-service capabilities.
OpenStack consists of several key components, each designed to handle specific functionalities
within the cloud infrastructure. Here’s a breakdown of the main OpenStack components:
1. Nova (Compute)
2. Neutron (Networking)
3. Cinder (Block Storage)
• Features: Allows users to create, attach, and manage storage volumes for VMs, ensuring
data persistence across reboots and migrations.
4. Swift (Object Storage)
• Features: Stores and retrieves unstructured data, like images and backups. It supports high
availability and durability.
5. Glance (Image Service)
• Features: Stores, retrieves, and manages images used for launching VMs, including
snapshot capabilities for existing instances.
6. Horizon (Dashboard)
• Features: Allows users and administrators to manage OpenStack resources and services
visually, including monitoring and configuring settings.
7. Keystone (Identity)
• Features: Provides a centralized directory for user identities, roles, and permissions across
OpenStack services, enabling secure access control.
8. Heat (Orchestration)
• Features: Enables users to define and deploy complex cloud applications through
templates, automating the resource provisioning process.
9. Ceilometer (Telemetry)
• Features: Gathers usage data across OpenStack services, facilitating billing, reporting, and
resource management through telemetry data.
10. Magnum (Container Orchestration)
• Features: Integrates with Kubernetes, Docker Swarm, and Apache Mesos, allowing users to
provision and manage container clusters.
11. Barbican (Key Management)
• Features: Provides a secure interface for storing and retrieving sensitive information like
encryption keys and passwords.
12. Ironic (Bare Metal Provisioning)
• Features: Manages physical servers as if they were virtual machines, allowing users to
deploy workloads directly on hardware.
13. Designate (DNS)
• Features: Allows users to manage DNS records and zones within their OpenStack
environments.
14. Senlin (Clustering)
• Features: Automates the lifecycle management of clusters, including scaling, healing, and
updating.
Conclusion
Together, these components let organizations assemble compute, networking, storage, identity,
and orchestration services into a cloud platform tailored to their needs.
Here are the pros and cons of using OpenStack for cloud infrastructure:
Pros of OpenStack
1. Open Source:
o Free to use and modify, promoting innovation and flexibility without vendor lock-in.
o A large community contributes to continuous improvements and updates.
2. Modularity:
o Composed of multiple components (like Nova, Neutron, etc.) that can be deployed
independently, allowing for customized solutions based on specific needs.
3. Scalability:
4. Flexibility:
5. Multi-Tenancy:
6. Self-Service Portal:
o Offers a user-friendly dashboard (Horizon) and APIs for users to manage their
resources, enabling self-service provisioning and management.
7. Workload Versatility:
o Suitable for running a range of workloads, including web applications, big data, and
containerized services.
8. DevOps Integration:
o Easily integrates with various DevOps tools and CI/CD pipelines, streamlining
application deployment and management processes.
9. No Vendor Lock-In:
o Users can choose their hardware and software stack without being tied to a specific
vendor, allowing for cost-effective solutions.
Cons of OpenStack
1. Complexity:
o Setting up and configuring OpenStack can be complex, requiring substantial
technical knowledge and expertise.
2. Resource Intensive:
o Requires significant resources (CPU, memory, storage) to run efficiently, which may
lead to higher infrastructure costs.
3. Steep Learning Curve:
o New users may find it challenging to understand the architecture, components, and
management processes, necessitating training and support.
4. Documentation Gaps:
o While there is a wealth of documentation, some areas may lack depth, making it
challenging to find solutions to specific issues.
5. Variable Performance:
o Performance can vary based on configuration and the underlying hardware, leading
to inconsistencies in resource availability.
6. Limited Commercial Support:
o While the community is active, official vendor support can be limited compared to
proprietary solutions, which may affect critical deployments.
7. Integration Challenges:
8. Frequent Updates:
10. Fragmentation:
Conclusion
OpenStack offers a powerful and flexible cloud computing platform suitable for various
organizations, from startups to large enterprises. However, potential users should carefully
consider the complexities and challenges associated with its deployment and management to
ensure it aligns with their operational capabilities and business goals.
This is a detailed outline of how to set up a private cloud on Google Cloud Platform (GCP).
A private cloud on GCP is a dedicated environment that provides a level of control and security
similar to a traditional on-premises data center. This environment is isolated from other customers,
offering enhanced security and compliance.
1. VPC Network:
o The fundamental building block that provides a logical network for your resources.
2. VM Instances:
o Virtual machines running applications and workloads within the VPC network.
3. Firewall Rules:
o Control network traffic in and out of your VPC, ensuring security and isolation.
4. Cloud Storage:
o Provides persistent storage for your data, including files, images, and other content.
5. Cloud SQL:
o Managed relational database service (e.g., MySQL, PostgreSQL) for applications in the VPC.
6. Cloud DNS:
o Managed DNS service for publishing and resolving domain names for your resources.
Steps to Set Up the Private Cloud
1. Create a VPC Network and Configure Firewall Rules
• Provide details:
o Default Allow: Allow all internal traffic within the VPC network.
o Ingress rules: Allow incoming traffic from external networks (e.g., SSH for remote
access or HTTP for web servers).
o Egress rules: Allow outgoing traffic from your VPC network (e.g., outbound internet
access).
2. Create VM Instances
• Provide details:
o Choose a boot disk image (e.g., Ubuntu, CentOS) or create a custom image.
o Example rules:
▪ Database access: Allow inbound TCP traffic on specific ports (e.g., 3306 for
MySQL) from specific IP addresses.
• Cloud Storage:
o Create buckets to store files, images, backups, and other content.
• Cloud SQL:
o Set up instances for your databases, choosing a database engine (e.g., MySQL,
PostgreSQL).
• Cloud DNS:
o Create DNS zones for your domain names and add DNS records.
• Use SSH or other methods to connect to your VM instances and manage resources within
the private cloud.
This structured approach outlines the key components and steps required to set up a private cloud
on Google Cloud Platform effectively.
• Overview: Keystone is a major project within the OpenStack software stack, responsible for
identity management.
• Functionality:
o Maintains a service catalog detailing available services and their API endpoints.
• Installation:
o Add the admin and demo users, along with various services and their endpoint
URLs.
• Overview: Glance enables users to access, retrieve, and store images and snapshots.
• Services:
o glance-api: Accepts API requests for image discovery, retrieval, and storage.
• Overview: Nova is the core service and the heart of OpenStack, responsible for managing
compute resources.
• Functionality:
o Provisions and manages the lifecycle of compute instances (VMs) across the
hypervisor hosts.
• Consideration: The network node must have three NICs (Network Interface Cards): one
each for the management, tenant (data), and external networks.
• Overview: Although OpenStack is primarily managed via the command line, it also provides
a GUI dashboard named Horizon.
o Deploy images.
• Overview: After completing the major setup processes, it's time to launch an instance.
o An image is uploaded.
• Note: The setup steps are more time-consuming the first time; subsequent instance
launches are simpler.
Data Centers
1. Why Are Data Centers Important?
• Data Storage: Centralized data storage ensures that businesses can store large volumes of
data securely and access it efficiently.
• Business Continuity: Data centers provide backup and disaster recovery services to
maintain operations in case of failures or disasters.
• Scalability: They offer the infrastructure for scaling IT resources (e.g., compute, storage) as
the demand grows.
• Data Processing: Data centers house high-performance computing systems for processing
large-scale data (e.g., for analytics, AI).
• Secure Connectivity: Data centers offer secure networks, connecting businesses globally
while safeguarding against cyber threats.
2. Evolution of Data Centers
• 1980s - Client-Server Era: With the rise of client-server computing, businesses began
using smaller, distributed systems, increasing the number of data centers.
• 1990s - Internet Age: The explosion of internet usage created the need for larger-scale data
centers. Virtualization started to gain traction, optimizing server usage.
• 2000s - Cloud Computing: The advent of cloud computing led to massive data centers
managed by cloud providers (e.g., AWS, Azure, Google Cloud). Organizations began to
migrate to the cloud.
• Present - Edge Computing: Modern data centers now integrate edge computing to process
data closer to the source, reducing latency and improving response times. The rise of IoT
and AI has also influenced data center designs.
3. Core Infrastructure Components
a. Compute Infrastructure:
• Servers: Physical and virtualized servers that run applications and process workloads.
b. Storage Infrastructure:
• Cloud Storage: Remote storage resources provided over the internet by cloud services.
c. Network Infrastructure:
• Routers and Switches: Manage data traffic and ensure efficient communication within the
data center.
• Firewalls: Secure the network from cyber threats by controlling incoming and outgoing
traffic.
• Load Balancers: Distribute network or application traffic across multiple servers to ensure
reliability and performance.
• Cabling: Fiber-optic or copper cables to connect servers, storage, and networking devices.
d. Support Infrastructure:
• Cooling Systems: Air conditioning and cooling towers to regulate temperature and prevent
overheating.
• Fire Suppression: Automatic systems to detect and suppress fires, protecting sensitive
equipment.
• Security Systems: Physical security like biometric access, surveillance, and alarms to
safeguard the facility.
• Monitoring Systems: Tools to monitor performance, temperature, and power usage in real-
time.
4. Data Center Levels and Tiers (Tier 1, 2, 3, 4)
The Tier Classification system, established by the Uptime Institute, is used to rate the
performance, redundancy, and uptime of data centers. There are four tiers that indicate varying
levels of reliability and infrastructure investment:
Tier 1 (Basic Capacity)
• Description: A single, non-redundant path for power and cooling.
• Features:
o No backup components.
Tier 2 (Redundant Capacity Components)
• Description: Adds redundant capacity components (e.g., backup power and cooling units)
to the basic design, but still uses a single distribution path.
• Features:
o Partial redundancy for power and cooling equipment.
Tier 3 (Concurrently Maintainable)
• Description: A data center where maintenance can be performed without taking the
system offline.
• Features:
o Multiple paths for power and cooling, but only one active at a time.
o Redundant components.
• Use Case: Larger enterprises and organizations with high availability needs.
Tier 4 (Fault Tolerant)
• Description: The highest level, ensuring continuous operation even during unplanned
events.
• Features:
o Fully redundant infrastructure, with multiple active paths for power and cooling.
• Use Case: Critical services like banking, e-commerce, and cloud providers.
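To put the tiers in perspective, the sketch below converts uptime percentages into annual downtime. The percentages used are the commonly quoted Uptime Institute figures, not values taken from this document.

```python
# Annual downtime implied by commonly cited tier uptime figures.
HOURS_PER_YEAR = 365 * 24

tiers = {
    "Tier 1": 99.671,
    "Tier 2": 99.741,
    "Tier 3": 99.982,
    "Tier 4": 99.995,
}

for tier, uptime_pct in tiers.items():
    downtime_hours = HOURS_PER_YEAR * (1 - uptime_pct / 100)
    print(f"{tier}: {uptime_pct}% uptime ≈ {downtime_hours:.1f} hours of downtime per year")
```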
5. Data Center Service Models
a. Colocation Services:
• Description: Businesses rent physical space in a data center to house their own servers
and networking equipment.
• Benefits:
o Cost Savings: Reduces the need to build and maintain an in-house data center.
• Limitations:
o Limited Control: Customers may not have complete control over facility
operations.
b. Cloud Data Centers:
• Description: These are hosted by third-party cloud providers (e.g., AWS, Azure, Google
Cloud) offering infrastructure, platforms, and software as services.
• Benefits:
o Scalability: Resources can be scaled on demand without purchasing hardware.
o Lower Upfront Costs: Pay-as-you-go pricing avoids large capital expenditure.
• Limitations:
o Latency: May experience latency issues depending on the location of the cloud
provider’s data center.
o Security: Sensitive data stored in third-party cloud providers may raise compliance
and security concerns.
c. Managed Hosting:
• Description: A provider leases dedicated infrastructure to the customer and manages it
(operating systems, security, updates) on their behalf.
• Benefits:
o Security: Advanced security measures such as firewalls and intrusion detection are
managed by the host.
• Limitations:
o Higher Cost and Less Control: Managed services typically cost more than colocation,
and day-to-day management is delegated to the provider.
d. Edge Data Centers:
• Description: These are smaller, decentralized data centers located close to the end-users
or data sources, primarily for low-latency applications.
• Benefits:
o Low Latency: Process data closer to users, improving response times for
applications like IoT or autonomous vehicles.
o Distributed Architecture: Reduces the load on central data centers and networks.
• Limitations:
o Limited Capacity: Edge data centers may lack the extensive capacity of traditional
data centers.
o Maintenance: More challenging to manage and maintain when distributed across
locations.
Benefits:
• Cost Efficiency: Data center services reduce the need for on-premise hardware and
facilities, reducing upfront capital costs.
• Business Continuity: Data centers offer built-in disaster recovery and redundancy to
ensure continuous operation.
Limitations:
• Dependence on Third Parties: Organizations often lose full control over infrastructure
when relying on external providers.
• Security Concerns: Storing sensitive data offsite or in the cloud can raise concerns over
privacy and regulatory compliance.
• Connectivity: Remote data centers may introduce latency or downtime risks, especially in
regions with poor connectivity.
• Costs: While operational expenses might be reduced, scaling services (especially in the
cloud) can lead to higher-than-expected operational costs.
Amazon Web Services (AWS) manages its data centers with a focus on reliability, scalability,
security, and efficiency. AWS operates one of the largest and most complex infrastructures in the
world to support cloud services. Here’s an overview of how AWS manages its data centers:
1. Global Infrastructure
• Regions and Availability Zones (AZs): AWS data centers are organized into geographic
regions (such as North America, Europe, and Asia-Pacific), each consisting of multiple
availability zones. Each AZ is essentially a cluster of physically separate data centers,
offering redundancy and fault isolation.
o Regions: AWS has 32+ regions across the globe, allowing users to deploy their
applications closer to their end-users.
o Availability Zones: AZs within each region are independent and isolated from one
another to prevent a single point of failure. This ensures high availability even if one
data center or AZ goes offline.
2. Redundancy and Fault Tolerance
• Redundant Power and Cooling: AWS data centers are equipped with multiple layers of
redundant power sources, including backup generators and uninterruptible power supply
(UPS) systems. Cooling is also designed redundantly to ensure temperature control for
hardware reliability.
• Multiple Network Paths: AWS uses multiple network connections and paths to ensure that if
one path fails, traffic can be rerouted. This improves uptime and network resilience.
• Data Replication: Data is replicated across multiple availability zones and sometimes
across regions to provide high availability and fault tolerance.
3. Security Management
• Physical Security:
o 24/7 Surveillance: CCTV, guards, and motion detection ensure only authorized
personnel can access the facility.
o Biometric Scanning and Badge Access: Only authorized staff can enter secure areas
using multi-factor authentication methods like biometric scanning and keycards.
o Fire Detection and Suppression: Systems to detect smoke, heat, or fire early and
automatically suppress them.
• Logical Security: AWS uses various encryption techniques for data at rest and in transit to
secure user data. They also enforce strict access controls, audits, and monitoring systems
to detect and respond to threats.
4. Energy Efficiency and Sustainability
• Energy Optimization: AWS optimizes energy usage with advanced cooling technologies,
such as evaporative cooling, and uses low-power servers to reduce its carbon footprint.
• Green Energy Initiatives: AWS is committed to renewable energy and aims to achieve 100%
renewable energy for its global infrastructure by 2025. They invest in wind, solar farms, and
other sustainable energy projects.
• Data Center Designs: AWS continually redesigns its data centers to optimize power usage
effectiveness (PUE) and reduce energy consumption.
5. Monitoring and Maintenance
• Real-Time Monitoring: AWS uses advanced monitoring systems for real-time visibility into
data center conditions. Monitoring includes hardware performance, power supply,
temperature, network health, and security events.
6. Data Durability and Disaster Recovery
• Data Replication: AWS uses replication to achieve high durability (e.g., 99.999999999% for
Amazon S3). Data is stored across multiple devices and AZs, ensuring that even in the case
of hardware failures, the data remains intact.
• Backup and Disaster Recovery: AWS has built-in backup solutions and disaster recovery
mechanisms that replicate data across geographically dispersed regions, minimizing the
impact of regional failures or natural disasters.
7. Scalability and Elasticity
• Elastic Compute and Storage: AWS data centers are designed for elasticity, allowing users
to scale their compute and storage resources on demand. Services like EC2 (compute) and
S3 (storage) automatically scale based on user demand without manual intervention.
8. Compliance and Certifications
• Certifications: AWS data centers comply with numerous international standards, including
ISO 27001, SOC 1/2/3, PCI DSS, and HIPAA, among others.
• Audit and Transparency: AWS provides audit reports and transparency regarding its data
center security and operational procedures, ensuring that customers meet their regulatory
requirements.
9. Automation and Software-Defined Infrastructure
• AWS Control Plane: AWS uses a software-defined infrastructure with automation tools to
manage large-scale resources and services efficiently. This includes deploying, configuring,
and scaling infrastructure resources without manual intervention.
10. Edge Computing and Content Delivery
• Edge Computing: AWS offers AWS Local Zones and AWS Outposts for edge computing,
bringing compute and storage closer to customers, reducing latency for applications like
video streaming, gaming, and IoT services.
• Content Delivery Network (CDN): AWS uses Amazon CloudFront to deliver content from
edge locations, reducing latency and improving speed for end-users.
Summary:
AWS manages its data centers with a strong emphasis on global infrastructure (regions and
availability zones), redundancy, security, energy efficiency, and automation. By utilizing advanced
technologies for monitoring, scaling, and recovery, AWS provides a highly reliable and secure
environment to support its vast array of cloud services.
AWS Data Center Security Layers
1. Perimeter Layer
• Fencing and Barriers: AWS data centers have physical barriers like high fences, gates, and
walls to protect the facility's perimeter.
• Guard Patrols: Security personnel continuously monitor and patrol the perimeter.
• Surveillance Systems: AWS uses 24/7 CCTV surveillance with infrared and motion-
detection cameras around the perimeter to detect unauthorized access.
• Entry Points Control: The number of entry points into the facility is limited and tightly
controlled. Only approved personnel can access the data center, and their identities are
verified through badges or biometric systems.
• Anti-Vehicle Defenses: Bollards and crash barriers are placed at the perimeter to prevent
unauthorized vehicle entry.
2. Infrastructure Layer
• Access Control: Inside the data center, access is highly restricted using multi-factor
authentication mechanisms, such as biometrics (fingerprints or iris scanning) and RFID-
based badges.
• Physical Segmentation: Different parts of the infrastructure (like server rooms) are
separated, and only authorized personnel can access specific areas.
• Monitoring and Logging: AWS continuously monitors infrastructure for unusual activity,
unauthorized access attempts, and logs all access to sensitive areas.
• Fire Suppression Systems: The infrastructure layer includes fire detection and
suppression systems, such as smoke detectors and waterless fire extinguishing systems, to
protect equipment from fire damage.
• Redundant Power Systems: Backup generators, UPS (Uninterruptible Power Supplies), and
redundant power lines ensure continuous operation during outages.
3. Data Layer
• Encryption: Data at rest is encrypted using AES-256 encryption. Data in transit is encrypted
using SSL/TLS to protect against interception.
• Data Replication: Data is often replicated across multiple availability zones to ensure high
availability and fault tolerance.
• Access Control: AWS provides strict access control mechanisms for user data, including
the use of Identity and Access Management (IAM) to enforce the principle of least privilege.
• Auditing and Logging: AWS CloudTrail and AWS Config are used to track API calls and
configuration changes, ensuring data activity is traceable and auditable.
4. Environmental Layer
• Temperature and Humidity Control: HVAC (Heating, Ventilation, and Air Conditioning)
systems are deployed to maintain the optimal environment for server performance, with
redundant systems in place to ensure uptime.
• Water Detection Systems: Sensors are installed to detect leaks or flooding that could
damage infrastructure.
• Seismic and Structural Design: AWS data centers are built in locations and structures that
are designed to withstand natural disasters like earthquakes and floods.
Summary:
AWS uses perimeter, infrastructure, data, and environmental layers to build a comprehensive
security strategy for its data centers. Each layer is fortified with multiple tools and protocols to
ensure that physical and digital assets are protected from intrusion, environmental threats, and
data loss.
Cloud Management - Definition
Cloud management refers to the set of tools, processes, and technologies used to monitor,
manage, and optimize cloud computing resources and services. It encompasses the control of
public, private, and hybrid cloud environments, enabling organizations to oversee resource
provisioning, cost, performance, security, and compliance.
1. Resource Optimization: Efficiently utilize and allocate cloud resources to avoid wastage
and ensure performance.
2. Cost Efficiency: Control and reduce cloud spending by managing pay-per-use models and
identifying underutilized resources.
4. Security and Compliance: Ensure that cloud environments meet security protocols and
comply with regulatory standards like GDPR, HIPAA, etc.
5. Scalability: Enable easy scaling of cloud resources based on demand, ensuring agility and
flexibility.
6. Automation: Automate routine tasks like provisioning, backup, and disaster recovery to
reduce manual efforts.
7. Governance: Enforce policies and controls for cloud usage, ensuring that the organization
follows best practices and meets internal and external requirements.
Challenges of Cloud Management
1. Complexity: Multi-cloud and hybrid cloud environments add layers of complexity, making it
harder to manage diverse resources, services, and configurations.
2. Cost Overruns: Unmanaged cloud environments can lead to unanticipated costs due to
pay-as-you-go models and uncontrolled resource usage.
3. Security: Protecting sensitive data, managing access control, and ensuring compliance in a
cloud environment are ongoing challenges.
5. Resource Sprawl: Without proper oversight, cloud resources can multiply, leading to
inefficient usage and difficulties in monitoring them.
6. Visibility: Gaining full visibility into resource utilization, billing, and performance across
cloud platforms can be challenging.
7. Data Governance: Ensuring data integrity, encryption, and proper storage policies in a
cloud environment can be hard to enforce consistently.
Key Features of Cloud Management Platforms
2. Monitoring and Alerts: Real-time tracking of system performance, usage patterns, and
resource health, with automated alerts for anomalies or failures.
3. Cost Management and Reporting: Tools to track cloud spend, generate reports, and
identify cost-saving opportunities by optimizing resource use.
5. Backup and Disaster Recovery: Automated data backups and recovery processes to
ensure business continuity in case of failures.
6. Governance and Policy Management: Enforcing governance policies for cloud usage,
access control, and compliance.
7. Scalability and Elasticity: Dynamically scaling resources up or down based on demand
without manual intervention.
8. Service Integration: Seamless integration with third-party services like DevOps tools,
monitoring platforms, and security services.
How Cloud Management Tools Work
1. Centralized Dashboard: Provides a unified view of all cloud resources across various
platforms (AWS, Azure, GCP) and helps manage multiple environments from a single point.
3. Monitoring Tools: Track and report key metrics like CPU utilization, memory, storage, and
network performance to ensure health and performance.
4. Cost Tracking: Continuously track spending, offering suggestions for optimizing costs
based on resource usage trends.
5. Security Controls: Implement access controls and monitor for security threats,
vulnerabilities, and ensure encryption of sensitive data.
6. Integration with DevOps: Continuous integration and deployment pipelines, along with
infrastructure-as-code, can be managed to ensure agility in cloud-based development
environments.
Best Practices for Cloud Management
1. Centralized Control: Use a cloud management platform (CMP) that integrates different
cloud providers (AWS, Azure, GCP) into one dashboard, providing visibility across all
environments.
2. Automation: Implement automation for tasks like provisioning, monitoring, scaling, and
cost management. Tools like AWS CloudFormation and Terraform help automate
infrastructure management.
3. Multi-Cloud Strategy: Utilize multiple cloud platforms to avoid vendor lock-in and optimize
costs and performance based on workloads. Multi-cloud management solutions like
VMware or CloudBolt can assist.
4. Cost Management: Continuously monitor cloud usage, identify underused resources, and
implement cost-optimization techniques, such as right-sizing instances and leveraging spot
instances (see the sketch after this list).
5. Security and Compliance Management: Ensure security protocols (encryption, access
control, firewalls) are consistently applied across cloud environments. Use cloud-native
tools (e.g., AWS GuardDuty) or third-party platforms to monitor and secure cloud
environments.
7. Governance and Policy Enforcement: Establish and enforce governance policies that
control access, resource usage, and compliance with regulatory requirements. Tools like
AWS Config, Azure Policy, or GCP’s Resource Manager can help maintain policy adherence.
8. Backup and Disaster Recovery Planning: Ensure regular backups and disaster recovery
plans are in place. Use cloud-native services like AWS Backup, Azure Backup, or GCP
snapshots to automate these processes.
Cloud Automation
Cloud automation refers to the use of software and tools to automate cloud management tasks,
such as provisioning, configuring, scaling, monitoring, and decommissioning cloud resources.
Automation helps streamline repetitive processes, improving efficiency and reducing human
intervention.
2. Consistency: Automating tasks ensures that processes are executed the same way every
time, leading to fewer errors and more predictable results.
Benefits of Cloud Automation
2. Reduced Human Errors: By automating tasks, the risk of manual errors is significantly
reduced, improving overall system reliability.
3. Cost Efficiency: Automating tasks such as provisioning, scaling, and monitoring optimizes
resource usage, leading to reduced costs by avoiding under- or over-provisioning.
Challenges of Cloud Automation
1. Complexity: Setting up cloud automation can be complex, requiring the right tools,
configurations, and policies tailored to business needs.
2. Initial Setup Costs: Though automation saves money long-term, the upfront cost of
implementing automated solutions can be high.
3. Vendor Lock-in: Many automation tools are specific to cloud service providers, making it
difficult to switch platforms without reconfiguring automation tools.
4. Maintenance and Updates: Automated systems require regular updates and monitoring to
ensure they run optimally and adapt to changing needs.
5. Security Risks: While automation improves security, poorly configured automation scripts
can introduce vulnerabilities if not managed properly.
6. Lack of Expertise: Skilled personnel are required to set up and manage automation
processes, which may be a challenge for some organizations.
Cloud Automation vs. Cloud Orchestration
1. Cloud Automation: Automates individual tasks (example use case: auto-scaling based on
CPU utilization).
2. Cloud Orchestration: Coordinates many automated tasks into end-to-end workflows
(example use case: coordinating full application deployments).
Cloud Automation Use Cases
2. CI/CD Pipelines: Automating the deployment and testing of code changes through
continuous integration and continuous deployment workflows.
3. Disaster Recovery: Automating backup and recovery processes to ensure critical data is
protected and recoverable in case of failure.
4. DevOps Processes: Automating infrastructure as code (IaC) for DevOps teams, using tools
like Terraform or AWS CloudFormation to provision environments consistently.
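The infrastructure-as-code idea in the DevOps use case above can be sketched as a desired-state reconciliation loop. This toy example is only in the spirit of tools like Terraform or CloudFormation; the role names and counts are invented and it does not reflect either tool's actual behaviour.

```python
# Toy "infrastructure as code" reconciliation: compare declared desired state
# with actual state and compute the actions needed to converge.
desired = {"web": 3, "worker": 2}        # declared instance counts per role
actual  = {"web": 1, "worker": 4, "legacy": 1}

def plan(desired, actual):
    actions = []
    for role, want in desired.items():
        have = actual.get(role, 0)
        if want > have:
            actions.append(f"create {want - have} x {role}")
        elif want < have:
            actions.append(f"destroy {have - want} x {role}")
    for role in actual.keys() - desired.keys():
        actions.append(f"destroy all {role} (not in desired state)")
    return actions

print(plan(desired, actual))
# ['create 2 x web', 'destroy 2 x worker', 'destroy all legacy (not in desired state)']
```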
Popular Cloud Automation Tools
1. Terraform:
o An infrastructure-as-code tool that works with multiple cloud providers like AWS, Azure, and
GCP.
2. AWS CloudFormation:
5. Chef:
o A configuration management tool used for automating infrastructure and
application deployment.
6. Puppet:
8. Kubernetes:
9. Jenkins:
o A popular CI/CD tool that automates the building, testing, and deployment of
applications in cloud environments.
Summary:
Cloud automation simplifies and speeds up cloud resource management, ensuring efficiency,
scalability, and consistency across environments. It differs from cloud orchestration, which
coordinates complex workflows of automated tasks. By using automation tools like Terraform, AWS
CloudFormation, and Ansible, organizations can ensure better resource utilization, lower costs,
and enhanced security. Despite challenges such as complexity and initial setup costs, the benefits
of cloud automation, including reduced errors and improved performance, make it indispensable
for modern cloud operations.
Cloud Infrastructure Security involves protecting cloud-based infrastructure from various threats
and vulnerabilities while ensuring compliance with regulations and maintaining data integrity.
Key Objectives
2. Compliance: Ensure adherence to legal and regulatory standards (e.g., GDPR, HIPAA).
5. Risk Management: Identify, assess, and mitigate risks associated with cloud infrastructure.
Why It Matters
• Data Breaches: Cloud environments are prime targets for cybercriminals; effective security
reduces the risk of data breaches.
• Trust: Organizations must ensure the security of customer data to maintain trust and
reputation.
• Operational Continuity: Security measures help prevent disruptions and ensure business
continuity.
• Cost Management: Effective security reduces the financial impact of data loss and
recovery efforts.
• Encryption: Encrypting data at rest and in transit to protect sensitive information from
unauthorized access.
• Access Control: Enforcing strict access controls and user authentication to limit who can
access resources.
3. Data Security: Involves data encryption, data masking, and data loss prevention to protect
sensitive information.
4. Identity and Access Management (IAM): Manages user identities and access to resources,
ensuring that only authorized individuals can access sensitive data.
5. Endpoint Security: Protects devices that access cloud services, including mobile devices
and IoT devices, from threats.
Benefits of Cloud Infrastructure Security
1. Enhanced Data Protection: Reduces the risk of data breaches and unauthorized access.
4. Increased Trust: Builds customer confidence in the organization’s ability to protect their
data.
5. Cost Savings: Minimizes the potential costs associated with data breaches and
compliance fines.
Best Practices
1. Implement Strong Access Controls: Use IAM to enforce the principle of least privilege.
2. Regularly Update and Patch: Ensure all systems, applications, and services are up to date
to protect against vulnerabilities.
3. Data Encryption: Encrypt sensitive data at rest and in transit to secure it from unauthorized
access.
4. Conduct Security Audits: Regularly assess the security posture of cloud environments
through audits and vulnerability assessments.
5. Monitor and Respond to Threats: Use monitoring tools to detect and respond to
suspicious activities in real-time.
5 Key Components of Cloud Infrastructure Security
2. Network Security:
3. Data Security:
4. Endpoint Security:
5. Application Security:
Popular Cloud Security Tools
1. AWS Identity and Access Management (IAM): Manages user access and permissions in
AWS environments.
2. Azure Security Center: Provides unified security management and advanced threat
protection across hybrid cloud environments.
3. Google Cloud Identity: Offers IAM capabilities to manage access to Google Cloud
resources.
4. Palo Alto Networks Prisma Cloud: Provides comprehensive security for cloud
applications, including visibility and compliance.
5. Cloudflare: Offers network security, including DDoS protection and web application
firewall (WAF) services.
6. Splunk: Provides security information and event management (SIEM) capabilities to
monitor and analyze security data.
7. IBM Security Cloud Pak for Security: Integrates security tools and data across cloud
environments for better threat visibility and response.
8. Fortinet FortiGate: A cloud-native firewall that provides advanced threat protection for
cloud networks.
9. Zscaler: A cloud security platform that offers secure internet access and private
application access for users.
10. McAfee MVISION Cloud: Protects data across cloud services with visibility and
compliance tools.
Summary
• Cloud Infrastructure Security is essential for protecting sensitive data and maintaining
trust while ensuring compliance with regulations.
• Implementing a multi-layered security approach with best practices, key components, and
appropriate tools can significantly enhance the security posture of cloud environments.
Encryption
• Importance:
o Once data is encrypted, it becomes useless to attackers, preventing data theft and
misuse.
o Encryption can be applied to data at rest (stored data) and in transit (data being
transferred), which is crucial for secure communication and data sharing.
• Applications:
o Encrypting stored data (e.g., full-disk and database encryption), protecting data in transit (e.g., TLS for web traffic), and securing backups and archives.
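As a minimal sketch of the idea above (symmetric encryption of data at rest), the Python cryptography library's Fernet recipe can be used; in a real deployment the key would be held in a managed key service rather than generated in application code.

from cryptography.fernet import Fernet  # pip install cryptography

# Encrypt a record with a symmetric key; the ciphertext is useless
# to an attacker who does not hold the key.
key = Fernet.generate_key()
f = Fernet(key)

ciphertext = f.encrypt(b"customer record: alice@example.com")
print(ciphertext)             # safe to store; unreadable without the key
print(f.decrypt(ciphertext))  # original bytes, recoverable only with the key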
Identity and Access Management (IAM)
• Definition: IAM is a critical security component in cloud computing that manages user
identities and access rights.
• Purpose: To verify user identities and prevent unauthorized access to cloud resources.
• Key Features:
o Single Sign-On (SSO): Allows users to log in once and gain access to all associated
cloud resources.
o Access Control: Grants or restricts access to resources based on user roles and
permissions.
Cloud Firewalls
• Overview: Cloud firewalls act as protective barriers for cloud infrastructure, filtering
malicious traffic and preventing cyberattacks.
• Types:
Virtual Private Cloud (VPC)
o A VPC provides a secure and private cloud environment within a public cloud, allowing organizations to customize their cloud settings.
• Security Groups:
o Act as virtual firewalls to control incoming and outgoing traffic for VPC resources.
o Can be configured at the instance level, allowing granular control over resource
access.
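A hedged example of the security-group behaviour described above, using boto3; the VPC ID and CIDR ranges are placeholders, not values from these notes.

import boto3  # pip install boto3

# A security group acting as an instance-level virtual firewall.
ec2 = boto3.client("ec2")
sg = ec2.create_security_group(
    GroupName="web-tier-sg",
    Description="Allow HTTPS from anywhere, SSH from the admin subnet only",
    VpcId="vpc-0123456789abcdef0",
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "10.0.10.0/24"}]},
    ],
)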
Penetration Testing
• Benefits:
o Identifies exploitable weaknesses before attackers do, validates existing security controls, and provides evidence for compliance and audit requirements.
Summary
The outlined concepts form the backbone of cloud security, ensuring that data is protected, user
access is controlled, and infrastructure is fortified against potential threats. Implementing
encryption, IAM, cloud firewalls, VPCs, and regular penetration testing collectively contributes to a
robust security framework for cloud environments.
Cloud network security encompasses the measures and protocols used to protect cloud
computing environments from various threats and vulnerabilities. Understanding the differences
between private and public cloud network security is crucial for organizations when choosing the
right infrastructure for their needs.
• Key Features:
o Dedicated, isolated resources give the organization full control over its network security policies and configurations.
• Challenges:
o Higher cost and management overhead, since the organization is responsible for securing and maintaining the infrastructure itself, and scaling is limited by its own hardware.
• Definition: A public cloud is a cloud infrastructure available to the general public and is
owned by a cloud service provider. Multiple organizations share the same resources, which
can present unique security challenges.
• Key Features:
o Scalability: Public clouds can scale resources easily and rapidly, but security must
be carefully managed as the environment grows.
o Built-in Security Features: Many public cloud providers offer built-in security tools,
including firewalls, encryption, and IAM capabilities, to help organizations secure
their data.
• Challenges:
o Less Control: Limited control over the underlying infrastructure can make it
challenging to implement custom security policies.
Cloud Network Security
Cloud network security is critical for protecting sensitive data and applications hosted in the cloud.
Here’s a comprehensive overview of its benefits, best practices, solutions, architecture, and key
components.
1. Data Protection: Ensures sensitive data is safeguarded from unauthorized access and
breaches through encryption and access controls.
3. Improved Visibility: Offers enhanced monitoring and visibility into network activities,
enabling rapid identification and response to threats.
4. Threat Mitigation: Protects against cyber threats such as DDoS attacks, malware, and
unauthorized access through robust security measures.
5. Scalability: Provides scalable security solutions that can grow with the organization's
needs without significant investments in hardware.
2. Data Encryption:
4. Network Segmentation:
o Segment networks to isolate sensitive data and applications, reducing the attack
surface.
5. Use Firewalls:
o Use firewalls to control incoming and outgoing traffic based on predefined security
rules.
6. Continuous Monitoring:
o Use security information and event management (SIEM) tools for analysis.
7. Educate Employees:
2. Intrusion Detection and Prevention Systems (IDPS): Monitor network traffic for
suspicious activities and take action against potential threats.
3. Virtual Private Network (VPN): Secure remote access to the cloud by encrypting data
transmitted between devices and the cloud.
4. Identity and Access Management (IAM): Tools that manage user identities, access rights,
and authentication.
5. Encryption Tools: Solutions that encrypt data at rest and in transit to safeguard sensitive
information.
6. Cloud Security Posture Management (CSPM): Tools that help assess and manage the
security posture of cloud environments.
• Access Control Layer: Enforces policies for user access to various parts of the network
based on roles and permissions.
• Data Security Layer: Protects data through encryption, tokenization, and other data
protection methods.
• Monitoring Layer: Involves logging, monitoring, and analyzing network activities for threat
detection and incident response.
Network Segmentation
• Benefits:
o Enhanced Security: Limits access to sensitive areas of the network, reducing the
risk of unauthorized access.
• Traffic Filtering:
o Inspects incoming and outgoing packets and allows or blocks them according to defined security policies.
• Firewall Rules:
o Definition: Set of defined rules that govern what traffic is allowed or denied on a
network.
o Best Practices: Start from a deny-by-default posture, allow only the ports and protocols that are required, and review rules regularly.
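To make the segmentation and rule ideas above concrete, here is a hedged boto3 sketch that splits one VPC into two subnets; the CIDR blocks are illustrative assumptions.

import boto3  # pip install boto3

# Network segmentation: one VPC split into a public web subnet and an
# isolated data subnet.
ec2 = boto3.client("ec2")
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")   # web tier
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.20.0/24")  # sensitive data tier
# Separate route tables and network ACLs per subnet would then restrict
# traffic between tiers, reducing the attack surface described above.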
Conclusion
Cloud network security is essential for safeguarding sensitive data and applications in cloud
environments. By implementing best practices, leveraging advanced security solutions, and
maintaining a robust security architecture, organizations can effectively mitigate risks and enhance
their overall security posture.
Host Level Security
Host level security refers to the measures taken to secure an individual computer or device within a
network. It is a critical aspect of overall network security, as it helps prevent unauthorized access
and protects sensitive data. Below is a detailed overview of host-level security, including its
importance, components, and specific considerations for different cloud service models.
• Prevention of Network Attacks: Reduces the risk of the device being used to launch
attacks on other devices within the network.
• Mitigation of Malware and Threats: Helps to detect and remove malicious software,
protecting the integrity of the host.
1. Antivirus Software:
2. Firewalls:
4. User Authentication:
o Ensures that only authorized users can access the network and its resources.
o Protect individual devices and hosts by monitoring and controlling access to the
network and data.
digiALERT is a host-level security solution that encompasses various security measures and
technologies to protect individual devices within a network. It focuses on comprehensive security
strategies to ensure the integrity and confidentiality of host devices.
When assessing host security, it’s essential to consider the context of different cloud service
models, such as:
• IaaS (Infrastructure as a Service):
o Customers are primarily responsible for securing the hosts in the cloud.
• PaaS (Platform as a Service):
o Customers rely on cloud service providers for the security of the host platform.
• SaaS (Software as a Service):
o Similar to PaaS, security responsibilities for the host platform rest with the cloud
service providers.
o Customers must trust that the providers have adequate security measures in place.
Conclusion
Host level security is essential for protecting individual devices within a network, particularly in
cloud environments. Understanding the responsibilities and best practices associated with
different cloud service models (IaaS, PaaS, and SaaS) is crucial for organizations to ensure robust
security measures are in place. By implementing strong host-level security solutions, organizations
can significantly reduce the risk of data breaches and cyber threats.
Importance of Host Level Security
o Host level security prevents unauthorized users from accessing sensitive data and
applications stored on individual devices.
o Implementing security measures at the host level helps detect and neutralize
malware and other malicious attacks, reducing the potential impact on the overall
network.
o Many industries have regulatory requirements regarding data protection. Host level
security helps organizations meet these compliance standards (e.g., GDPR, HIPAA).
o Effective host security limits an attacker’s ability to move laterally across the
network, confining potential damage to the compromised device.
1. Antivirus Software:
o Monitors the device for malicious software, provides real-time protection, and
regularly scans for threats.
2. Firewalls:
3. Intrusion Detection Systems (IDS):
o Monitor network traffic for suspicious activities, providing alerts for potential
threats.
4. User Authentication:
5. Patch Management:
o Regularly updates software and operating systems to fix vulnerabilities and improve
security.
6. Data Encryption:
o Encrypts sensitive data at rest and in transit, ensuring that unauthorized users
cannot read it.
o Configure firewalls to restrict unauthorized access and regularly review and update
firewall rules.
7. Educate Employees:
By focusing on these aspects of host level security, organizations can significantly enhance their
overall security posture and protect sensitive data from a wide range of threats.
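As a small, hedged illustration of one host-level building block (file integrity checking), the script below hashes critical files against a known-good baseline; the file path and digest are placeholders, not values from these notes.

import hashlib
from pathlib import Path

# Compare critical files against a known-good baseline of SHA-256 digests.
BASELINE = {"/etc/ssh/sshd_config": "expected-sha256-hex-digest"}

def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

for file_path, expected in BASELINE.items():
    actual = sha256_of(file_path)
    status = "OK" if actual == expected else "MODIFIED - investigate"
    print(f"{file_path}: {status}")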
CHAPTER 6

6.1 Google Cloud Platform
A suite of cloud computing services offered by Google that provides a series of modular cloud services including computing, data storage, and data analytics. GCP is a public cloud vendor — like competitors Amazon Web Services (AWS) and Microsoft Azure. Customers are able to access computer resources housed in Google's data centers around the world for free or on a pay-per-use basis.

Google Cloud & Google Cloud Platform
Google Cloud includes a combination of services available over the internet that can help organizations go digital. Google Cloud Platform provides public cloud infrastructure for hosting web-based applications and is part of Google Cloud.

Google Cloud - Other Services:
Google Workspace (formerly known as G Suite and Google Apps) provides identity management for organizations, Gmail, and collaboration tools. Enterprise versions of Android and Chrome OS. Application programming interfaces (APIs) for machine learning and enterprise mapping services.

6.1.1 History of GCP
GCP first came online in 2008 with the launch of a product called App Engine. Google announced a developer tool that allowed customers to run their web applications on Google infrastructure. To source the feedback needed to make improvements to this preview release, App Engine was made available to 10,000 developers. These early-adopter developers could run apps with 500 MB of storage, 200 million megacycles of CPU per day, and 10 GB of bandwidth per day. By late 2011, Google pulled App Engine out of preview mode and made it an official, fully supported Google product. Today, Google Cloud Platform is one of the top public cloud vendors in the world. Google Cloud customers include Nintendo, eBay, UPS, The Home Depot, Etsy, PayPal, 20th Century Fox, and Twitter.

6.1.2 Regions and Zones
Within GCP, resources live in a region, and within a region are availability zones. These zones are isolated from single points of failure. HTTP global load balancers are global and can receive requests from any of the Google edge locations and regions. Other resources, like storage, can be regional; the storage is distributed across multiple zones within a region for redundancy. Finally, zonal resources, including compute instances, are only available in one specific zone within one specific region. When deploying applications on GCP, you must select locations depending on the performance, reliability, scalability, and security needs of your organization.

GCP Services
Each GCP region offers a category of services. Some services are limited to specific regions. Major services of Google Cloud Platform include: Computing and hosting, Storage and database, Networking, Big Data, and Machine learning.

GCP Pros and Cons
▪ GCP strengths: Google Cloud Platform documentation. A global backbone network that uses advanced software-defined networking and edge-caching services to deliver fast, consistent, and scalable performance.
▪ GCP weaknesses: Google Cloud Platform has far fewer services than those offered by AWS and Azure. GCP has an opinionated model of how its cloud services should be used.

6.1.3 GCP Compute Service – Google Compute Engine
Google Cloud offers users the facility of computing and hosting where they can pick from the following options: work in a serverless environment, use a managed application platform, build cloud-based infrastructure to facilitate maximum control and flexibility, or leverage container technologies to achieve maximum flexibility. Compute Options: Compute Engine, App Engine, Cloud Functions, Kubernetes Engine, Cloud Run.

Cloud Storage classes: Standard Storage, Nearline Storage, Coldline Storage, Archival Storage
Standard Storage: Frequently accessed data, for general purposes. Highly available with lower latency.
Nearline Storage: Data that must be highly available but is not accessed as frequently as standard storage. Data which needs to be accessed within seconds or minutes can be stored in Nearline Storage.
Coldline Storage: Data which is accessed infrequently can be stored in Coldline Storage. Data which needs to be accessed within hours can be stored in Coldline Storage.
Archival Storage: Archival Storage is mainly used for storing data that is infrequently accessed and can be retained for long periods of time. It is a cost-effective option for storing data that is not accessed frequently but must be preserved for legal, regulatory, or business reasons.

Benefits of using Archival Storage
Low cost: The data stored in Archival Storage is not accessed frequently, so the cost of the storage is very low.
High durability: The durability of Archival Storage is the same as that of the other storage classes.
Long retention period: Data stored in Archival Storage is kept for long periods; it will be available for more than 8 years.
Lifecycle management: With the help of lifecycle management rules, data can be moved automatically to Archival Storage.

6.2.3 Cloud Storage
Cloud Storage is a fully managed, scalable service; there is no need to provision capacity ahead of time. Each object in Cloud Storage has a URL. Cloud Storage consists of buckets you create, configure, and use to hold your storage objects (which are immutable – no edits, only new versions). Cloud Storage encrypts your data on the server side before it is written to disk (and uses HTTPS by default). You can move objects from Cloud Storage to other GCP storage services. When you create a bucket, it is given a globally unique name, a geographic location where the bucket and its contents are stored, and a default storage class.

Use Cases of Cloud Storage
Integrated repository for analytics and ML: Cloud Storage is strongly consistent, giving accuracy in analytics workloads. Media content storage and delivery: Cloud Storage provides the availability and throughput needed to stream audio or video directly to applications and websites. Backups and archives: Backup data in Cloud Storage can be used for more than just recovery because all storage classes have millisecond latency and are accessed through a single API.

Features of GCP Cloud Storage
Object Lifecycle Management: Define conditions that trigger data deletion or transition to a cheaper storage class. Object Versioning: Continue to store old copies of objects when they are deleted or overwritten. Retention policies: Define minimum retention periods that objects must be stored for before they're deleted. Object holds: Place a hold on an object to prevent its deletion. Customer-managed encryption keys: Encrypt object data with encryption keys stored by the Cloud Key Management Service and managed by you. Customer-supplied encryption keys: Encrypt object data with encryption keys created and managed by you. Uniform bucket-level access: Uniformly control access to your Cloud Storage resources by disabling object ACLs. Requester Pays: Require accessors of your data to include a project ID to bill for network charges, operation charges, and retrieval fees. Bucket Lock: Configure a data retention policy for a Cloud Storage bucket that governs how long objects in the bucket must be retained. Pub/Sub Notifications for Cloud Storage: Send notifications to Pub/Sub when objects are created, updated, or deleted. Cloud Audit Logs with Cloud Storage: Maintain admin activity logs and data access logs for your Cloud Storage resources. Object- and bucket-level permissions: Cloud Identity and Access Management (IAM) allows you to control who has access to your buckets and objects.

GCP Storage Features
High performance, internet-scale, data encryption at rest, data encryption in transit by default from Google to the endpoint, and online and offline import services are available.

GCP - Networking
Google Cloud networking services or technologies
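To make the Cloud Storage operations above concrete, here is a minimal, hedged sketch using the google-cloud-storage Python client; the bucket and object names are assumptions, not values from the text.

from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client()                       # uses application default credentials
bucket = client.bucket("example-notes-bucket")  # assumed, pre-existing bucket

# Upload an object (objects are immutable; re-uploading creates a new version).
blob = bucket.blob("backups/db-dump.sql")
blob.upload_from_filename("db-dump.sql")

# Transition the object to Nearline for infrequent access.
blob.update_storage_class("NEARLINE")
print(f"Stored gs://{bucket.name}/{blob.name} ({blob.storage_class})")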
Chapter – 4 Private Cloud

What is a private cloud?
Private cloud is a type of cloud computing that delivers similar advantages to public cloud, including scalability and self-service, but through a proprietary architecture. A private cloud, also known as an internal or corporate cloud, is dedicated to the needs and goals of a single organization, whereas public clouds deliver services to multiple organizations.
A private cloud is a single-tenant computing infrastructure and environment, meaning the organization using it -- the tenant -- doesn't share resources with other users. Private cloud resources can be hosted and managed by the organization in a variety of ways. The private cloud might be based on resources and infrastructure already present in an organization's on-premises data center.
The main advantage of a private cloud is that users don't share resources. Because of its proprietary nature, a private cloud computing model is best for businesses with dynamic or unpredictable computing needs that require direct control over their environments, typically to meet security, business governance or regulatory compliance requirements.

What is the difference between private cloud vs. public cloud?
A public cloud is where an independent third-party provider, such as Amazon Web Services (AWS) or Microsoft Azure, owns and maintains compute resources that customers can access over the internet. Public cloud users share these resources, a model known as a multi-tenant environment. For example, various virtual machine (VM) instances provisioned by public cloud users may share the same physical server, while storage volumes created by users may coexist on the same storage subsystem.

What is the difference between private cloud vs. hybrid cloud?
A hybrid cloud is a model in which a private cloud connects with public cloud infrastructure, enabling an organization to orchestrate workloads -- ideally seamlessly -- across the two environments. In this model, the public cloud effectively becomes an extension of the private cloud to form a single, uniform cloud. A hybrid cloud deployment requires a high level of compatibility between the underlying software and services used by both the public and private clouds.

Is it better to use a public cloud or a private cloud?
Some businesses may prefer to use a private cloud, especially if they have extremely high security standards. Using a private cloud eliminates intercompany multitenancy (there will still be multitenancy among internal teams) and gives a business more control over the cloud security measures that are put in place.
However, it may cost more to deploy a private cloud, especially if the business is managing the private cloud themselves. Often, organizations that use private clouds will end up with a hybrid cloud deployment, incorporating some public cloud services for the sake of efficiency.

Advantage of private cloud?
• Increased security of an isolated network.
• Increased performance due to resources being solely dedicated to one organization.
• Increased capability for customization, such as specialized services or applications that suit the particular company.
• More control: private clouds have more control over their resources and hardware than public clouds because they are only accessed by selected users.
• Security & privacy: security and privacy are among the big advantages of cloud computing, and a private cloud improves the security level as compared to the public cloud.
• Improved performance: a private cloud offers better performance with improved speed and space capacity.
Private cloud requests have us running around, and for good reason: Storm's managed hosting negates many of the challenges associated with private cloud. In this post we'll look at the major drawbacks of a private cloud, and how Storm's managed hosting overcomes them.
If you don't know what the cloud or a private cloud is, here's a quick recap: imagine being able to pool the resources of all computing devices in your house. This includes everything that has a CPU, memory, and storage space. Instead of those individual devices, you now have one big unit with the combined resources of those individual devices. The cloud is created in the same way: the physical resources of several, tens, hundreds, or even thousands of physical servers are pooled together to create one big resource-rich unit. Software called a hypervisor can be used to create virtual devices such as servers and networking equipment using those pooled physical resources.
So what then is a private cloud? The best way to explain it is alongside the public cloud hosting model:
Public cloud: A public cloud follows a multi-tenant hosting model: just as with shared hosting, all tenants on a public cloud make use of the same processing, memory, and storage resources. Unlike shared hosting, however, these resources are dedicated to the public cloud account (when we're talking about virtual servers) and can be scaled as needed. Despite this, peak times can introduce higher latency and reduced speeds. Given the public nature of a public cloud, privacy can be an issue when compliance with data protection regulations is paramount.
Private cloud: A private cloud is cloud infrastructure built on hardware dedicated to your account. As such, the cloud infrastructure itself is also completely private, which means you get all the resources and don't have to deal with noisy neighbours. Among its many advantages, private clouds also deliver increased privacy which in itself boosts security.

Dealing With Private Cloud Challenges
High initial costs and setup
Private clouds can come with high startup costs to build, operate, and manage the hardware infrastructure. Very often organisations have to hire and/or train staff to deliver the required expertise. To be fair, however, that only relates to on-premise installations. Private cloud setups through a cloud service provider (CSP) negate much of the hardware costs since it's the CSP that owns and maintains the hardware. A bonus is that the CSP provides all the expertise needed to manage the physical infrastructure as well as the cloud infrastructure.
Complex, ongoing maintenance
Despite the potential for automation, cloud monitoring and maintenance still require experienced staff. This can be challenging for organisations given the ongoing skills shortage as well as the high cost associated with training or upskilling staff.
CSPs are uniquely positioned to apply internal skill sets to DevOps tasks for a more holistic integration with cloud maintenance, including monitoring and performance management, scalability and elasticity management, disaster recovery and backups, and security.
For example, CSPs can be tasked with the implementation and management of advanced monitoring tools like AWS CloudWatch or Azure Monitor. The CSP configures custom dashboards and alerts to monitor application performance and system health, helping the organisation maintain optimal performance and quickly address potential issues before they affect operations.
In instances where organisations experience significant variability in workload due to seasonal events (such as eCommerce companies), a CSP can help implement autoscaling solutions that adjust resources automatically, ensuring responsiveness under heavy loads without overspending on idle resources during off-peak periods.
Lower scalability
Private clouds have the benefit of hardware resources entirely dedicated to them. But that can also be a disadvantage since it means they're less scalable than public clouds with seemingly 'limitless' resources.
Given that CSPs own and maintain the hardware of a private cloud, the obvious solution would be to request more physical servers to increase the potential for scaling; the onus (and cost) is on the CSP to acquire the hardware and add it to the cloud infrastructure. Scaling can also be overcome by employing a hybrid cloud model where workloads are run in highly scalable public cloud environments when private cloud resources reach peak capacity.
When adding more hardware is not an option for whatever reason, containerisation and virtualisation can be used; they encapsulate applications in a way that consumes fewer resources than traditional virtual machines, allow for more granular scaling, and can improve the utilisation efficiency of the underlying physical resources.
Efficient resource utilisation
Because private clouds tend to have lower overall scalability compared to the public cloud, and where a hybrid model isn't feasible, organisations that add more resources to their private cloud instances to deal with spikes in demand struggle to make efficient use of their resources outside of those peak times.
Cloud service providers can employ various tactics that can help organisations make more efficient use of their resources without resorting to a hybrid cloud model (which could, for example, complicate already fickle compliance issues). Some of these include:
• autoscaling solutions that automatically adjust the amount of resources based on the workload needs
• resource optimisation tools that identify idle or underused resources and suggest adjustments
Ultimately, the challenges of managing a private cloud can vary significantly based on the specific infrastructure and its usage. However, skill shortages or limited budgets should not deter organisations from leveraging the cloud in a way that best suits their needs.
At Storm Internet, we are committed to a partnership model that goes beyond mere service provision. We integrate closely with our customers' businesses, offering tailored solutions that enhance growth and simplify operational complexities. Our goal is not just to provide technology, but to enable real business transformation by making cloud technology accessible and aligned with your strategic objectives. This close-knit integration ensures that every organisation can achieve its potential, regardless of its size or sector.

VM Migration
VM Provisioning Process
• The common and normal steps of provisioning a virtual server are as follows:
• Firstly, you need to select a server from a pool of available servers (physical servers with enough capacity) along with the appropriate OS template you need to provision the virtual machine.
• Secondly, you need to load the appropriate software (the operating system you selected in the previous step, device drivers, middleware, and the needed applications for the service required).
• Thirdly, you need to customize and configure the machine (e.g., IP address, gateway) and configure the associated network and storage resources.
• Finally, the virtual server is ready to start with its newly loaded software.
VM Provisioning Process contd.
• To summarize, server provisioning is defining a server's configuration based on the organization's requirements and its hardware and software components (processor, RAM, storage, networking, operating system, applications, etc.).
• Normally, virtual machines can be provisioned by manually installing an operating system, by using a preconfigured VM template, by cloning an existing VM, or by importing a physical server or a virtual server from another hosting platform.
• Physical servers can also be virtualized and provisioned using P2V (Physical to Virtual) tools and techniques (e.g., virt-p2v).
• After creating a virtual machine by virtualizing a physical server, or by building a new virtual server in the virtual environment, a template can be created out of it.
• Most virtualization management vendors (VMware, XenServer, etc.) provide the data center's administration with the ability to do such tasks in an easy way.
VM Provisioning Process contd.
• Provisioning from a template is an invaluable feature, because it reduces the time required to create a new virtual machine.
• Administrators can create different templates for different purposes. For example, you can create a Windows 2003 Server template for the finance department, or a Red Hat Linux template for the engineering department. This enables the administrator to quickly provision a correctly configured virtual server on demand.

VIRTUAL MACHINE MIGRATION SERVICES (Live Migration and High Availability)
• Live migration (which is also called hot or real-time migration) can be defined as the movement of a virtual machine from one physical host to another while being powered on.
• When it is properly carried out, this process takes place without any noticeable effect from the end user's point of view (a matter of milliseconds).
• One of the most significant advantages of live migration is the fact that it facilitates proactive maintenance in case of failure, because the potential problem can be resolved before the disruption of service occurs.
• Live migration can also be used for load balancing, in which work is shared among computers in order to optimize the utilization of available CPU resources.

Live Migration Anatomy, Xen Hypervisor Algorithm
• How live migration works, and how memory and virtual machine state are transferred over the network from one host A to another host B: the Xen hypervisor is an example of this mechanism, showing the logical steps that are executed when migrating an OS.
• In this research, the migration process has been viewed as a transactional interaction between the two hosts involved:

LIVE MIGRATION STAGES
Stage-0: Pre-Migration. An active virtual machine exists on the physical host A.
Stage-1: Reservation. A request is issued to migrate an OS from host A to host B (a precondition is that the necessary resources exist on B and a VM container of that size).
Stage-3: Stop-and-Copy. The running OS instance at A is suspended, and its network traffic is redirected to B. As described in reference 21, CPU state and remaining inconsistent memory pages are then transferred. At the end of this stage, there is a consistent suspended copy of the VM at both A and B. The copy at A is considered primary and is resumed in case of failure.
Stage-4: Commitment. Host B indicates to A that it has successfully received a consistent OS image. Host A acknowledges this message as a commitment of the migration transaction.
Stage-5: Activation. The migrated VM on B is now activated. Post-migration code runs to reattach the device drivers to the new machine and advertise moved IP addresses.
This approach to failure management ensures that at least one host has a consistent VM image at all times during migration:
1. The original host remains stable until migration commits, and the VM may be suspended and resumed on that host with no risk of failure.
2. A migration request essentially attempts to move the VM to a new host; on any sort of failure, execution is resumed locally, aborting the migration.

LIVE MIGRATION TIMELINE (figure: timeline of the live migration stages)

LIVE MIGRATION VENDOR IMPLEMENTATION EXAMPLE
There are lots of VM management and provisioning tools that provide the live migration of VM facility, two of which are VMware VMotion and Citrix XenServer "XenMotion".
VMware VMotion:
a) Automatically optimize and allocate an entire pool of resources for maximum hardware utilization, flexibility, and availability.
b) Perform hardware maintenance without scheduled downtime, along with migrating virtual machines away from failing or underperforming servers.
Citrix XenServer "XenMotion":
Based on the Xen live migrate utility, it provides the IT administrator the facility to move a running VM from one XenServer to another in the same pool without interrupting the service (hypothetically zero-downtime server maintenance), making it a highly available service and also a good feature to balance workloads on virtualized environments.

REGULAR / COLD MIGRATION
• Cold migration is the migration of a powered-off virtual machine. With cold migration:
• You have the option of moving the associated disks from one datastore to another.
• The virtual machines are not required to be on shared storage.
1) Live migration needs shared storage for the virtual machines in the server pool, but cold migration does not. 2) In live migration of a virtual machine between two hosts, there must be certain CPU compatibility checks, but in cold migration these checks do not apply.
• Cold migration (a VMware product) is easy to implement and is summarized as follows:
• The configuration files, including the NVRAM file (BIOS settings), log files, and the disks of the virtual machines, are moved from the source host to the destination host's associated storage area.
• The virtual machine is registered with the new host.
• After the migration is completed, the old version of the virtual machine is deleted from the source host.
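To relate the live-migration stages above to something runnable, the following is a rough, hedged sketch using the libvirt Python bindings (a generic KVM/QEMU path, not the Xen algorithm discussed in the text); the host and VM names are placeholders.

import libvirt  # pip install libvirt-python

# Source and destination hypervisor connections (host names are placeholders).
src = libvirt.open("qemu+ssh://hostA/system")
dst = libvirt.open("qemu+ssh://hostB/system")

dom = src.lookupByName("guest-vm")  # the powered-on VM to move

# VIR_MIGRATE_LIVE keeps the guest running while memory is copied,
# loosely mirroring the pre-copy / stop-and-copy stages described above.
new_dom = dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
print("Migrated:", new_dom.name())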
Generative AI Use-cases for Enterprise
1. Will Generative AI replace me at my job?
2. Introduction to Generative AI
3. Generative AI and Use Cases
4. Tools and Technologies
5. Auto-GPT and Autonomous Agents
6. Architecture Implications
7. Safe and Responsible AI
8. Questions and Open Discussion

Generative AI
Generative AI is a subset of Artificial Intelligence that leverages machine learning techniques to generate new content. This could be in the form of text, images, music, voice, or even new models. The generated output is typically novel and created from random or user-provided data.

Examples of Generative AI Output - Advice
Q. Find legal weaknesses in the following document ```These Terms of Use apply when you use the services of OpenAI, L.L.C. … ```
A.
1. Ambiguity in Terms: While the document outlines the terms of use and restrictions, there may be terms that are not explicitly defined or clear. For example, the term "Services" is defined broadly, which might be interpreted differently by different parties. Similarly, the term "applicable laws" is not specified and could vary significantly depending on jurisdiction.
2. Acceptance of Terms: The document assumes that by using the Services, users agree to the Terms. In some jurisdictions, active consent may be required instead of implied consent.
3. Assignment of Rights: In section 3a, the document states that OpenAI assigns to the user all its rights, title and interest in and to Output. It may need further clarification whether this includes intellectual property rights as well.
4. Use of Personal Information: The document mentions that OpenAI can use Content to provide and maintain the Services, comply with applicable law, and enforce policies. However, it is not explicitly mentioned how long this data will be retained, which might be an issue especially for EU users under GDPR rules.

OpenStack Architecture and Components
What is OpenStack?
OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed and provisioned through APIs with common authentication mechanisms.
OpenStack is a free and open-source software platform for cloud computing that supports all types of cloud computing platforms, for both public and private clouds.
It is mostly deployed as infrastructure-as-a-service; OpenStack is basically your key to building your own cloud infrastructure.
Why do we need OpenStack?
If we aren't comfortable entrusting sensitive data to a third party and we have tons of it, then an on-premise or private cloud infrastructure would be the better choice. By building your own cloud in your own data center, you will have more control of your data.
1. Compute (Nova)
OpenStack Compute (Nova) is a cloud computing fabric controller, which is the main part of an IaaS system. It is designed to manage and automate pools of computer resources and work with widely available virtualization technologies. KVM, VMware, and Xen are available choices for hypervisor technology (virtual machine monitor), together with Hyper-V and Linux container technology such as LXC.
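As a hedged illustration of driving Nova programmatically (not taken from this chapter), the openstacksdk library can boot a server; the cloud name and the image, flavor, and network IDs below are placeholder assumptions.

import openstack  # pip install openstacksdk

# Connect using a named cloud entry from clouds.yaml (name is an assumption).
conn = openstack.connect(cloud="mycloud")

# Boot a small instance on Nova; image/flavor/network IDs are placeholders.
server = conn.compute.create_server(
    name="demo-instance",
    image_id="IMAGE_ID",
    flavor_id="FLAVOR_ID",
    networks=[{"uuid": "NETWORK_ID"}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # expected to be ACTIVE once provisioning completes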
2. Networking (Neutron)
OpenStack Networking (Neutron) is a system for managing networks and IP addresses. OpenStack Networking provides networking models for different applications or user groups. Standard models include flat networks or VLANs that separate servers and traffic. OpenStack Networking manages IP addresses, allowing for dedicated static IP addresses. Floating IP addresses let traffic be dynamically rerouted to any resources in the IT infrastructure, so users can redirect traffic during maintenance or in case of a failure.
3. Block storage (Cinder)
OpenStack Block Storage (Cinder) provides persistent block-level storage devices for use with OpenStack compute instances. The block storage system manages the creation, attaching and detaching of the block devices to servers. Block storage volumes are fully integrated into OpenStack Compute and the Dashboard, allowing cloud users to manage their own storage needs.
4. Authentication (Keystone)
OpenStack Identity (Keystone) provides a central directory of users mapped to the OpenStack services they can access. It acts as a common authentication system across the cloud operating system and can integrate with existing backend directory services like LDAP (Lightweight Directory Access Protocol).
5. Image (Glance)
OpenStack Image (Glance) provides discovery, registration, and delivery services for disk and server images. Stored images can be used as a template. It can also be used to store and catalog an unlimited number of backups. The Image Service can store disk and server images in a variety of back-ends, including Swift. The Image Service API provides a standard REST interface for querying information about disk images and lets clients stream the images to new servers.
6. Object storage (Swift)
● OpenStack Object Storage (Swift) is a scalable redundant storage system. Objects and files are written to multiple disk drives spread throughout servers in the data center, with the OpenStack software responsible for ensuring data replication and integrity across the cluster.
● Storage clusters scale horizontally simply by adding new servers. Should a server or hard drive fail, OpenStack replicates its content from other active nodes to new locations in the cluster.
7. Dashboard (Horizon)
OpenStack Dashboard (Horizon) provides administrators and users with a graphical interface to access, provision, and automate deployment of cloud-based resources. The design accommodates third-party products and services, such as billing, monitoring, and additional management tools. The dashboard is also brand-able for service providers and other commercial vendors who want to make use of it. The dashboard is one of several ways users can interact with OpenStack resources. Developers can automate access or build tools to manage resources using the native OpenStack API or the EC2 compatibility API.
8. Cloud template (Heat)
Heat is a service to orchestrate multiple composite cloud applications using templates, through both an OpenStack-native REST API and a CloudFormation-compatible Query API.
9. Telemetry (Ceilometer)
OpenStack Telemetry (Ceilometer) provides a Single Point Of Contact for billing systems, providing all the counters they need to establish customer billing, across all current and future OpenStack components.
● The delivery of counters is traceable and auditable, the counters must be easily extensible to support new projects, and agents doing data collection should be independent of the overall system.

Introduction
One of the great things about OpenStack is all the options you have for deploying it – from homebrew to hosted OpenStack to vendor appliances to OpenStack-as-a-service. Previously, Platform9 published a tech guide comparing various OpenStack deployment models. If you opt for a do-it-yourself (DIY) approach, then you face the question of which tool to use. This guide will familiarize you with the landscape of OpenStack installation tools, including an overview of the most popular ones: DevStack, RDO Packstack, OpenStack-Ansible, Fuel and TripleO.
OpenStack Architecture Overview
If you're new to OpenStack it may be helpful to review the OpenStack components. (Skip this section if you're already familiar with OpenStack.) OpenStack's design, inspired by Amazon Web Services (AWS), has well-documented REST APIs that enable a self-service, elastic Infrastructure-as-a-Service (IaaS) cloud. In addition, OpenStack is fundamentally agnostic to the underlying infrastructure and integrates well with various compute, virtualization, network and storage technologies.
How to Choose an OpenStack Deployment Model
The primary question that drives the choice of deployment models is whether your IT team has the expertise, and the inclination, to install and manage OpenStack. Depending on the desire to host your infrastructure and avoid vendor lock-in, various deployment models are available. If there's a need for a high degree of customization, along with the flexibility to choose hypervisors, then a DIY install is probably the best option (see highlighted section of the flowchart on previous page).
If you choose a DIY install, there's a wide choice of open source tools that are very easy to use and can create environments for use in development, testing or production. These tools can deploy OpenStack on bare metal, virtual machines or even containers. Some even install OpenStack in a production-grade, highly available architecture. But which tools are best suited for your requirements? Read on for an overview of some of the most popular tools, followed by a handy comparison matrix to summarize the options. More detailed documentation is available on each tool's dedicated website.
OpenStack Installation: DevStack
DevStack is a series of extensible scripts used to quickly bring up a complete OpenStack environment suitable for non-production use. It's used interactively as a development environment. Since DevStack installs all-in-one OpenStack environments, it can be used to deploy OpenStack on a single VM, a physical server or a single LXC container. Each option is suitable depending on the hardware capacity available and the degree of isolation required. A multi-node OpenStack environment can also be deployed using DevStack, but that's not a thoroughly tested use case.
For either kind of setup, the steps involve installing a minimal version of one of the supported Linux distributions and downloading the DevStack Git repository. The repo contains a script stack.sh that must be run as a non-root user and will perform the complete install based on configuration settings.
The officially approved and tested Linux distributions are Ubuntu (LTS plus current dev release), Fedora (latest and previous release) and CentOS/RHEL 7 (latest major release). The supported databases are MySQL and PostgreSQL.
RabbitMQ and Qpid are the recommended messaging services, along with Apache as the web server. The setup defaults to a FlatDHCP network using Nova Network or a similar configuration in Neutron. The default services configured by DevStack are Keystone, Swift, Glance, Cinder, Nova, Nova Networking, Horizon and Heat. DevStack supports a plugin architecture to include additional services that are not included directly in the install.
Summary of the Installation Process
1. Install one of the supported Linux distributions
2. Download DevStack from git
3. git clone https://git.openstack.org/openstack-dev/devstack
4. Make any desired changes to the configuration
5. Add a non-root user, with sudo enabled, to run the install script
6. devstack/tools/create-stack-user.sh; su stack
7. Run the install and go grab a coffee
8. cd devstack
   ./stack.sh
Configuration Options
DevStack provides a bunch of configuration options that can be modified as needed. The sections below summarize some of the important ones.
local.conf
DevStack configuration is modified via the file local.conf. It's a modified .ini format file that introduces a meta-section header to carry additional information regarding the configuration files to be changed. The new header is of the form [[ '<phase>' | '<config-file-name>' ]], where <phase> is one of a set of phase names defined by stack.sh and <config-file-name> is the configuration filename. If the path of the config file does not exist, it is skipped. The file is processed strictly in sequence and any repeated settings will override previous values. The defined phases are:
• local – extracts localrc from local.conf before stackrc is sourced
• post-config – runs after the layer 2 services are configured and before they are started
• extra – runs after services are started and before any files in extra.d are executed
• post-extra – runs after files in extra.d are executed
A specific meta-section local|localrc is used to provide a default localrc file. This allows all custom settings for DevStack to be contained in a single file. If localrc exists it will be used instead to preserve backward compatibility.
[[post-config|$NOVA_CONF]]
[DEFAULT]
use_syslog = True
[osapi_v3]
enabled = False
[[local|localrc]]
FIXED_RANGE=10.20.30.40/49
ADMIN_PASSWORD=secret
LOGFILE=$DEST/logs/stack.sh.log
openrc
openrc configures login credentials suitable for use with the OpenStack command-line tools. openrc sources stackrc at the beginning in order to pick up HOST_IP and/or SERVICE_HOST to use in the endpoints. The values shown below are the default values.
OS_PROJECT_NAME=demo
OS_USERNAME=demo
OS_PASSWORD=secret # The usual cautions about putting passwords in environment variables apply
HOST_IP=127.0.0.1 # Typically set in the localrc section
SERVICE_HOST=$HOST_IP
OS_AUTH_URL=http://$SERVICE_HOST:5000/v2.0
# commented out by default
# export KEYSTONECLIENT_DEBUG=1
# export NOVACLIENT_DEBUG=1
Minimal Configuration
While stack.sh can run without a localrc section in local.conf, it's easier to repeat installs by setting a few minimal variables. Below is an example of a minimal configuration for values that are often modified. Note: if the *_PASSWORD variables are not set, the install script will prompt for values:
• No logging
• Pre-set the passwords to prevent interactive prompts
• Move network ranges away from the local network
• Set the host IP if detection is unreliable
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
#FIXED_RANGE=172.31.1.0/24
#FLOATING_RANGE=192.168.20.0/25
#HOST_IP=10.3.4.5
Service Repositories
The Git repositories used to check out the source for each service are controlled by a pair of variables set for each service. *_REPO points to the repository and *_BRANCH selects which branch to check out. These may be overridden in local.conf to pull source from a different repo. GIT_BASE points to the primary repository server.
Logging
By default stack.sh output is only written to the console where it runs. It can be sent to a file in addition to the console by setting LOGFILE to the fully qualified name of the destination log file. Old log files are cleaned automatically if LOGDAYS is set to the number of days to keep old log files.
DevStack will log the stdout output of the services it starts. When using screen, this logs the output in the screen windows to a file. Without screen this simply redirects stdout of the service process to a file in LOGDIR. Some of the project logs will be colorized by default and can be turned off as below. Logging all services to a single syslog can be convenient. If the destination log host is not localhost, the settings below can be used to direct the message stream to the log host.
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=1
LOG_COLOR=False
DEST=/opt/stack
LOGDIR=$DEST/logs
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
Database Backend
The available databases are defined in the lib/databases directory. MySQL is the default database but can be replaced in the localrc section:
disable_service mysql
enable_service postgresql
Messaging Backend
Support for RabbitMQ is included. Additional messaging backends may be available via external plugins. Enabling or disabling RabbitMQ is handled via the usual service functions:
disable_service rabbit
Apache Frontend
The Apache web server can be enabled for wsgi services that support being deployed under HTTPD + mod_wsgi. Each service that can be run under HTTPD + mod_wsgi also has an override toggle available that can be set. See examples below.
KEYSTONE_USE_MOD_WSGI="True"
NOVA_USE_MOD_WSGI="True"
Clean Install
By default stack.sh only clones the project repos if they do not exist in $DEST. This can be overridden as below and avoids having to manually remove repos to get the current branch from $GIT_BASE.
RECLONE=yes
Guest Images
Images provided in URLs, via the comma-separated IMAGE_URLS variable, will be downloaded and uploaded to glance by DevStack. Default guest images are predefined for each type of hypervisor and their testing requirements in stack.sh and can be overridden as below.
DOWNLOAD_DEFAULT_IMAGES=False
IMAGE_URLS="http://pf9.com/image1.qcow,"
IMAGE_URLS+="http://pf9.com/image2.qcow"
Instance Type
DEFAULT_INSTANCE_TYPE can be used to configure the default instance type. When this parameter is not specified, DevStack creates additional micro and nano flavors for really small instances to run Tempest tests.
DEFAULT_INSTANCE_TYPE=m1.tiny
Cinder
The logical volume group, logical volume name prefix and the size of the volume backing file are set as below.
VOLUME_GROUP="stack-volumes"
VOLUME_NAME_PREFIX="volume-"
VOLUME_BACKING_FILE_SIZE=10250M
Disable Identity API v2
The Identity API v2 is deprecated as of Mitaka and it is recommended to only use the v3 API.
ENABLE_IDENTITY_V2=False
Tempest
If Tempest has been successfully configured, a basic set of smoke tests can be run as below.
$ cd /opt/stack/tempest
$ tox -efull tempest.scenario.test_network_basic_ops
Things to Consider
DevStack is optimized for ease of use, making it less suitable for highly customized installations. DevStack supplies a monolithic installer script that installs all the configured modules. To add or remove modules, the whole environment must be torn down using unstack.sh. Then, the updated configuration is installed by re-running stack.sh. DevStack installs OpenStack modules in a development environment, which is very different from a typical production deployment. It's not possible to mix and match components in a production configuration with others in a development configuration. In DevStack, dependencies are shared among all the modules, so a simple action of syncing the dependencies for one module may unintentionally update several other modules. DevStack is popular with developers working on OpenStack, most typically used to test changes and verify they work in a running OpenStack deployment. Since it's easy to use, DevStack is ideal for setting up an OpenStack environment for use in demos or proof of concept (POC). For production-grade installs, other tools are more appropriate (see OpenStack-Ansible, Fuel or TripleO).
OpenStack Installation: RDO Packstack
The 2016 OpenStack survey report asked what tools are being used to deploy OpenStack. Puppet was at the top of the list, and Ansible came in a close second. RDO Packstack is a Puppet-based utility to install OpenStack. RDO is the Red Hat distribution for OpenStack and it packages the OpenStack components for Fedora-based Linux.
Prerequisites for Packstack
Packstack is based on OpenStack Puppet modules. It's a good option when installing OpenStack for a POC or when all OpenStack controller services may be installed on a single node. Packstack defines OpenStack resources declaratively and sets reasonable default values for all settings that are essential to installing OpenStack. The settings can be read or modified in a file, called the answer file in Packstack. Packstack runs on RHEL 7 or later versions and the equivalent version for CentOS. The machine where Packstack will run needs at least 4GB of memory, at least one network adapter and x86 64-bit processors with hardware virtualization extensions.
Install RDO Repository
To install OpenStack, first download the RDO repository rpm and install it.
On RHEL
$ sudo yum install -y https://rdoproject.org/repos/rdo-release.rpm
On CentOS
$ sudo yum install -y centos-release-openstack-mitaka
Install OpenStack
Install the Packstack installer and then run packstack to install OpenStack on a single node.
$ sudo yum install -y openstack-packstack
$ packstack --allinone
Once the installer completes, verify the installation by logging in at http://${YourIp}/dashboard.
Things to Consider
During the early days of our product development, Platform9 used Packstack to perform around 400 setups in a day. At this volume, the performance was not reliable and there were random timeouts. It was difficult to investigate deployment errors. In addition, it was non-trivial to customize the scripts to build and deploy our custom changes. In general, it is probably best to use Packstack for installing OpenStack on a single node during a POC, when there isn't a need to customize the install process.
OpenStack Installation: OpenStack-Ansible
Ansible is one of the top choices to deploy OpenStack. OpenStack-Ansible (OSA) deploys a production-capable OpenStack environment using Ansible and LXC containers. This approach isolates the various OpenStack services into their own containers and makes it easier to install and update OpenStack.
What is OpenStack-Ansible Deployment (OSAD)?
OSAD is a source-based installation of OpenStack, deployed via Ansible playbooks. It deploys OpenStack services on LXC containers for complete isolation between components and services hosted on a node. OSAD is well suited for deploying production environments. Ansible requires only SSH and Python to be available on the target host; no client or agents are installed. This makes it very easy to run Ansible playbooks to manage environments of any size or type. There are a large number of existing Ansible modules for overall Linux management, and OpenStack-Ansible playbooks can be written against the OpenStack APIs or Python CLIs.
All other infrastructure nodes should have multi-core processors for best performance.
Disk Requirements
• Deployment hosts – 10GB of disk space for the OpenStack-Ansible repository content and other software
• Compute hosts – At least 100GB of disk space available, on disks with higher throughput and lower latency
• Storage hosts – At least 1TB of disk space, on disks with the highest I/O throughput and the lowest latency
• Infrastructure hosts – At least 100GB of disk space for the services in the OpenStack control plane
• Logging hosts – At least 50GB of disk space for storing logs, with enough storage performance to keep up with the log traffic
• Hosts that provide Block Storage (Cinder) volumes must have logical volume manager (LVM) support and a volume group named cinder-volumes
Network Requirements
• Bonded network interfaces – increase performance and reliability
• VLAN offloading – increases performance by adding and removing VLAN tags in hardware
• 1Gb or 10Gb Ethernet – supports higher network speeds and may also improve storage performance for Cinder
• Jumbo frames – increase network performance by allowing more data to be sent in each packet
Software Requirements
• Ubuntu 14.04 LTS or newer
• Linux kernel v3.13.0-34-generic or later
• Secure Shell (SSH) client and server
• NTP client for time synchronization
• Python 2.7 or later
Installation Workflow
Once these prerequisites are met, proceed to the actual steps of the installation. At a high level, the steps required are:
1. Prepare deployment host
2. Prepare target hosts
3. Configure deployment
4. Run foundation playbooks
5. Run infrastructure playbooks
6. Run OpenStack playbooks
Let's look at each step in detail below.
Prepare Deployment Host
The deployment host contains Ansible and orchestrates the installation on the target hosts. It requires Ubuntu Server 14.04 LTS 64-bit. At least one network interface must be configured to access the Internet or suitable local repositories.
• Install the required utilities as shown below.
$ apt-get install aptitude build-essential git ntp ntpdate openssh-server python-dev sudo
• Configure NTP to synchronize with a suitable time source.
• Configure the network so that the deployment host is on the same network designated for container management.
• Configure SSH keys.
• Clone the OSA repository and bootstrap Ansible.
$ git clone -b VERSION https://github.com/openstack/openstack-ansible.git /opt/openstack-ansible
$ scripts/bootstrap-ansible.sh
Prepare Target Hosts
OSA recommends at least five target hosts to contain the OpenStack environment and supporting infrastructure for the installation process. On each target host, perform the tasks below:
• Name target hosts
• Install the operating system
• Generate and set up security measures
• Update the operating system and install additional software packages
• Create LVM volume groups
• Configure networking devices
Configure Deployment
Ansible configuration files have to be updated to define the target environment attributes before running the Ansible playbooks. Perform the following tasks:
• Configure target host networking to define bridge interfaces and networks
• Configure a list of target hosts on which to install the software
• Configure virtual and physical network relationships for OpenStack Networking (Neutron)
• Optionally, configure the hypervisor and Cinder service
• Configure passwords for all services
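As a hedged sketch of the last task, the OSA tree for this release ships a small helper that fills in random passwords for every service; confirm the script path and secrets file name against your checkout before relying on it:
$ cd /opt/openstack-ansible/scripts
$ python pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml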
Run Foundation Playbooks
This step will prepare the target hosts for infrastructure and OpenStack services by doing the following:
• Perform deployment host initial setup
• Build containers on target hosts
• Restart containers on target hosts
• Install common components into containers on target hosts
• cd to /opt/openstack-ansible/playbooks
$ openstack-ansible setup-hosts.yml
• Deploy HAProxy:
$ openstack-ansible haproxy-install.yml
Run Infrastructure Playbooks
The main Ansible infrastructure playbook installs infrastructure services and performs the following operations:
• Installs Memcached and the repository server
• Installs Galera and RabbitMQ
• Installs and configures Rsyslog
• cd to /opt/openstack-ansible/playbooks
$ openstack-ansible setup-infrastructure.yml
• Confirm success with zero items unreachable or failed:
PLAY RECAP *********************************************************
deployment_host : ok=XX changed=0 unreachable=0 failed=0
Run OpenStack Playbooks
Finally, this step installs the OpenStack services as configured, in this order: Keystone, Glance, Cinder, Nova, Heat, Horizon, Ceilometer, Aodh, Swift, Ironic.
• cd to /opt/openstack-ansible/playbooks
$ openstack-ansible setup-openstack.yml
Verify the Install
Since OpenStack can be consumed through either the APIs or the UI, you'll need to verify both after the install steps above complete successfully.
Verify OpenStack APIs
The utility container provides a CLI environment for additional configuration and testing.
• Determine the utility container name:
$ lxc-ls | grep utility
XX_utility_container_YY
• Access the utility container:
$ lxc-attach -n XX_utility_container_YY
• Source the admin tenant credentials:
$ source /root/openrc
• Run an OpenStack command that uses one or more APIs. For example:
$ openstack user list
+----------------------------------+--------------------+
| ID | Name |
+----------------------------------+--------------------+
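A couple of additional spot checks can be run from the same utility container to confirm that the services and endpoints registered correctly (standard OpenStack CLI commands, shown only as an illustrative follow-up):
$ openstack service list
$ openstack endpoint list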
Verify UI Dashboard
• With a web browser, access the dashboard using the external load balancer IP address defined by the external_lb_vip_address option in the /etc/openstack_deploy/openstack_user_config.yml file.
• Authenticate with the admin username and the password defined by the keystone_auth_admin_password option in the /etc/openstack_deploy/user_variables.yml file.
Benefits of OpenStack-Ansible Deployment
• No dependency conflicts among services due to the container-based architecture. Updating a service with new dependencies doesn't affect other services.
• Deploy redundant services even on a single-node install. Galera, RabbitMQ, and Keystone are deployed with redundancy, and HAProxy is installed on the host.
• Easy to do local updates or repairs to an existing installation. Ansible can destroy a container and regenerate one with a newer version of the service.
• Mix and match services by using development packages on some, while keeping the rest configured for production use.
Things to Consider
OSAD is easy to install on a single node for a POC, yet it is robust enough for a production install. Due to the containerized architecture, it is easy to upgrade services individually or all at the same time. Compared to Puppet, Ansible playbooks are easier to customize. Despite all this ease, it is still non-trivial to investigate deployment errors due to the volume of logs.
OpenStack Installation: Fuel
Fuel is an open source tool that simplifies and accelerates the initial deployment of OpenStack environments and facilitates their ongoing management. Fuel deploys an OpenStack architecture that is highly available and load balanced. It provides REST APIs, which can be accessed via a graphical or a command line interface, to provision, configure and manage OpenStack environments.
Fuel deploys a master node and multiple slave nodes. The master node is a server with the installed Fuel application that performs initial configuration, provisioning, and PXE booting of the slave nodes, as well as assigning the IP addresses to the slave nodes. The slave nodes are servers provisioned by the master node. A slave node can be a controller, compute, or storage node.
This section will describe how to install Fuel on Oracle VirtualBox and use it to deploy the Mirantis OpenStack environment. With the default configurations, such an environment is suitable for testing or a quick demo. For a production environment, the configuration must specify the network topology and IPAM, storage, the number, type and flavor of service nodes, monitoring, any Fuel plug-ins, etc.
Fuel Installation Prerequisites
The environment must meet the following software prerequisites:
• A 64-bit host operating system with at least 8 GB RAM and 300 GB of free space, with virtualization enabled in the BIOS
• Access to the Internet or to a local repository containing the required files
• Oracle VirtualBox
• Oracle VM VirtualBox Extension Pack
• Mirantis OpenStack ISO
• Mirantis VirtualBox scripts with a version matching that of Mirantis OpenStack
• The latest versions of VirtualBox work with these specific versions or newer of: Ubuntu Linux 12, Fedora 19, OpenSUSE 12.2, and Microsoft Windows x64 with cygwin x64. MacOS 10.7.5 requires VirtualBox 4.3.x
Overview of the Installation Process
Mirantis provides VirtualBox scripts that include configurations for the virtual machine network and hardware settings. The script
provisions the virtual machines with all required settings automatically. The steps involved in the process are:
1. Install Oracle VirtualBox and the Oracle VM VirtualBox Extension Pack.
2. Download the Mirantis OpenStack ISO and place it in a directory named iso.
3. Download the Mirantis VirtualBox scripts.
4. Modify the config.sh script to specify parameters that automate the Fuel installation. For example, specify the number of virtual nodes to create, as well as how much memory, storage, and CPU to allocate to each machine. The parameter names are listed below, along with their default values in parentheses (see the example after this list):
o vm_master_memory_mb (1536)
o vm_master_disk_mb (65 GB)
o vm_master_nat_network (192.168.200.0/24)
o vm_master_ip (10.20.0.2)
o vm_master_username (root)
o vm_master_password (r00tme)
o cluster_size
o vm_slave_cpu (1)
o vm_slave_memory_mb (if the host system has 8 GB, the default value is 1536 MB; if the host system has 16 GB, the default value is 2048 MB)
5. Run one of the launch.sh, launch_8GB.sh or launch_16GB.sh scripts, depending on the amount of memory on the computer. Each script creates one Fuel master node. The slave nodes differ for each script.
o launch.sh – one slave node with 2048 MB RAM and 2 slave nodes with 1024 MB RAM each
o launch_8GB.sh – three slave nodes with 1536 MB RAM each
o launch_16GB.sh – five slave nodes with 2048 MB RAM each
6. The script installs the Fuel master node on VirtualBox and may take up to 30 minutes to finish.
7. Once the launch script completes, access the Fuel web UI to create an OpenStack environment as shown in the section below.
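As a small illustration of step 4, the snippet below overrides a few defaults in config.sh before running a launch script. Only parameter names from the list above are used; the values are arbitrary examples, not recommendations:
# in config.sh, before running launch.sh / launch_8GB.sh / launch_16GB.sh
vm_master_memory_mb=2048
vm_slave_cpu=2
cluster_size=5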
Create a New OpenStack environment
After the Fuel master node is installed, the slave nodes appear as unallocated nodes in the web UI. Now the OpenStack environment can be created, configured and deployed. A single Fuel master node can deploy and manage multiple OpenStack environments, but each environment must be created separately.
To create an OpenStack environment:
1. Access the Fuel web UI at http://10.20.0.2:8443.
2. Log in to the Fuel web UI as admin. The default password is the same as set earlier.
3. Click New OpenStack environment to start the deployment wizard.
4. In the Name and Release screen, type a name for the OpenStack environment, then select an OpenStack release and an operating system on which to deploy the OpenStack environment.
5. In the Compute screen, select a hypervisor. By default, Fuel uses QEMU with KVM acceleration.
6. In the Networking Setup screen, select a network topology. By default, Fuel deploys Neutron with VLAN segmentation.
7. In the Storage Backends screen, select appropriate options. By default, Fuel deploys Logical Volume Management (LVM) for Cinder, local disk for Swift, and Swift for Glance.
8. In the Additional Services screen, select any additional OpenStack programs to deploy.
9. In the Finish screen, click Create. Fuel now creates an OpenStack environment. Before using the environment, follow the UI options to add nodes, verify network settings, and complete other configuration tasks.
Things to Consider
Fuel makes it very easy to install a test OpenStack environment using Oracle VirtualBox. The automated script will spin up a master node and configure and deploy the slave nodes that will host compute, storage and other OpenStack services. Using Fuel you can also deploy multiple production-grade, highly available OpenStack environments on virtual or bare metal hardware. Fuel can be used to configure and verify network configurations, test interoperability between the OpenStack components, and easily scale the OpenStack environment by adding and removing nodes.
OpenStack Installation: TripleO
TripleO is short for "OpenStack on OpenStack," an official OpenStack project for deploying and managing a production cloud onto bare metal hardware. The first "OpenStack" in the name is for an operator-facing deployment cloud called the Undercloud. This will contain the necessary OpenStack components to deploy and manage a tenant-facing workload cloud called the Overcloud, the second "OpenStack" in the name.
The Undercloud server is a basic single-node OpenStack installation running on a single physical server used to deploy, test, manage and update the Overcloud servers. It contains a strictly limited subset of OpenStack components, just enough to interact with the Overcloud. The Overcloud is the deployed solution and can represent a cloud for dev, test, production, etc. The Overcloud is the functional cloud available to run guest virtual machines and workloads.
Overview of the Installation Process
1. Prepare the baremetal or virtual environment.
2. Install Undercloud.
3. Prepare Images and Flavours for Overcloud.
4. Deploy Overcloud.
Prepare the Baremetal or Virtual Environment
At a minimum, TripleO needs one environment for the Undercloud and one each for the Overcloud Controller and Compute. All three environments can be virtual machines, and each would need 4GB of memory and 40GB of disk space. If all three environments are completely on bare metal, then each would need a multi-core CPU with 4GB memory and 60GB free disk space. For each additional Overcloud role, such as Block Storage or Object Storage, an additional bare metal machine would be required. TripleO supports the following operating systems: RHEL 7.1 x86_64 or CentOS 7 x86_64.
The steps below are for a completely virtualized environment.
[Figure: TripleO Undercloud and Overcloud architecture. The Undercloud node runs Horizon, Ceilometer, MariaDB, Nova, Heat, Keystone, Ironic, Neutron, Glance, RabbitMQ and the OpenStack clients. The Overcloud consists of a Controller node (Horizon, Glance, MariaDB, Keystone, Ceilometer, Nova API, Neutron server, Cinder API, Cinder volume, Swift proxy, Heat API, Heat engine, RabbitMQ, Ceph-mon, OpenStack clients, Neutron Open vSwitch agent), Compute nodes (Nova KVM, Nova compute, Ceilometer agent, Neutron Open vSwitch agent), Object Storage nodes (Swift storage, Ceilometer agent, Neutron Open vSwitch agent), Block Storage nodes (Cinder volume, Ceilometer agent, Neutron Open vSwitch agent) and Ceph Storage nodes (Ceph-OSD).]
1. Install RHEL 7.1 Server x86_64 or CentOS 7 x86_64 on the host machine.
2. Make sure the sshd service is installed and running.
3. The user performing all of the installation steps on the virt host needs to have sudo enabled. If required, use the following commands to create a new user called stack with password-less sudo enabled. Do not run the rest of the steps in this guide as root.
sudo useradd stack
sudo passwd stack # specify a password
echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
sudo chmod 0440 /etc/sudoers.d/stack
4. Enable the needed repositories:
o Enable epel:
sudo yum -y install epel-release
o Enable the last known good RDO Trunk Delorean repository for core OpenStack packages:
sudo curl -o /etc/yum.repos.d/delorean.repo http://trunk.rdoproject.org/centos7/current-tripleo/delorean.repo
o Enable the latest RDO Trunk Delorean repository only for the TripleO packages:
sudo curl -o /etc/yum.repos.d/delorean-current.repo http://trunk.rdoproject.org/centos7/current/delorean.repo
sudo sed -i 's/\[delorean\]/\[delorean-current\]/' /etc/yum.repos.d/delorean-current.repo
sudo /bin/bash -c "cat <<EOF>>/etc/yum.repos.d/delorean-current.repo
includepkgs=diskimage-builder,instack,instack-undercloud,os-apply-config,os-cloud-config,os-collect-config,os-net-config,os-refresh-config,python-tripleoclient,tripleo-common,openstack-tripleo-heat-templates,openstack-tripleo-image-elements,openstack-tripleo,openstack-tripleo-puppet-elements,openstack-puppet-modules
EOF"
o Enable the Delorean Deps repository:
sudo curl -o /etc/yum.repos.d/delorean-deps.repo http://trunk.rdoproject.org/centos7/delorean-deps.repo
5. Install the Undercloud:
sudo yum install -y instack-undercloud
6. The virt setup automatically sets up a vm for the Undercloud, installed with the same base OS as the host:
instack-virt-setup
7. When the script has completed successfully, it will output the IP address of the vm that has now been installed with a base OS.
8. You can ssh to the vm as the root user:
ssh root@<instack-vm-ip>
9. The vm contains a stack user to be used for installing the Undercloud. You can su - stack to switch to the stack user account.
Install Undercloud
1. Log in to the machine where you want to install the Undercloud as a non-root user:
ssh <non-root-user>@<undercloud-machine>
2. Enable the needed repositories using the same commands as in the section above on preparing the environment.
3. Install the yum-plugin-priorities package so that the Delorean repository takes precedence over the main RDO repositories:
sudo yum -y install yum-plugin-priorities
4. Install the TripleO CLI, which will pull in all other necessary packages as dependencies:
sudo yum install -y python-tripleoclient
5. Run the command to install the Undercloud:
openstack undercloud install
Once the install has completed, take note of the files stackrc and undercloud-passwords.conf. You can source stackrc to interact with the Undercloud via the OpenStack command-line client. undercloud-passwords.conf contains the passwords used for each service in the Undercloud.
Prepare Images and Flavours for Overcloud
1. Log into your Undercloud virtual machine as a non-root user:
ssh root@<undercloud-machine>
su - stack
2. In order to use the CLI commands easily, source the needed environment variables:
source stackrc
3. Choose the image operating system. The built images will automatically have the same base OS as the running Undercloud. To choose a different OS, set NODE_DIST to 'centos7' or 'rhel7'.
4. Install the current-tripleo delorean repo and deps repo into the Overcloud images:
export USE_DELOREAN_TRUNK=1
export DELOREAN_TRUNK_REPO="http://trunk.rdoproject.org/centos7/current-tripleo/"
export DELOREAN_REPO_FILE="delorean.repo"
5. Build the required images:
openstack overcloud image build --all
6. Load the images into the Undercloud Glance:
openstack overcloud image upload
7. Register and configure nodes for your deployment with Ironic. The file to be imported may be in JSON, YAML or CSV format:
openstack baremetal import instackenv.json
8. Introspect hardware attributes of the nodes:
openstack baremetal introspection bulk start
9. Introspection has to finish without errors. The process can take up to 5 minutes for VMs and up to 15 minutes for bare metal.
10. Create flavors, i.e. node profiles. The Undercloud will have a number of default flavors created at install time. In most cases, these flavors do not need to be modified. By default, all Overcloud instances will be booted with the baremetal flavor, so all bare metal nodes must have at least as much memory, disk, and CPU as that flavor. In addition, there are profile-specific flavors created which can be used with the profile-matching feature.
Deploy Overcloud
Overcloud nodes can have a nameserver configured in order to resolve hostnames via DNS. The nameserver is defined in the Undercloud's Neutron subnet. If needed, define the nameserver to be used for the environment:
1. List the available subnets:
neutron subnet-list
Then set the nameserver on the subnet:
neutron subnet-update <subnet-uuid> --dns-nameserver <nameserver-ip>
2. By default, 1 compute and 1 control node will be deployed, with networking configured for the virtual environment. To customize this, see the output of:
openstack help overcloud deploy
3. Run the deploy command, including any additional parameters as necessary (see the example after this list):
openstack overcloud deploy --templates [additional parameters]
4. When deploying the Compute node in a virtual machine, add --libvirt-type qemu, otherwise launching instances on the deployed Overcloud will fail. This command will use Heat to deploy templates. In turn, Heat will use Nova to identify and reserve the appropriate nodes. Nova will use Ironic to start up nodes and install the correct images. Finally, services on the nodes of the Overcloud are registered with Keystone.
5. To deploy the Overcloud with network isolation, bonds, or custom network interface configurations, follow the workflow here: Configuring Network Isolation.
6. openstack overcloud deploy generates an overcloudrc file, appropriate for interacting with the deployed Overcloud, in the current user's home directory. To use it, simply source the file.
7. To return to working with the Undercloud, source the stackrc file again.
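For step 3 above, a minimal sketch of a virtual-environment deployment might look like the following. The scale and libvirt flags are standard python-tripleoclient options for this era, but verify them with openstack help overcloud deploy for your release:
openstack overcloud deploy --templates \
  --control-scale 1 --compute-scale 1 --libvirt-type qemu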
Benefits of TripleO
Since TripleO uses OpenStack components and APIs, it has the following benefits when used to deploy and operate an OpenStack private cloud:
• The APIs are well documented and come with client libraries and command line tools. Users already familiar with OpenStack find it easier to understand TripleO.
• TripleO automatically inherits all the new features, bug fixes and security updates that are added to the included OpenStack components, which allows more rapid feature development of TripleO.
• Tight integration with the OpenStack APIs provides a solid architecture that has been extensively reviewed by the OpenStack community.
Things to Consider
TripleO is one more option to deploy a production-grade OpenStack private cloud. It tries to ease the deployment process by "bootstrapping" it with a subset of OpenStack components that build a smaller cloud first. The benefit of this approach is that operators can use familiar OpenStack APIs to deploy the subsequent consumer-facing OpenStack cloud. While not an intuitive approach, it seems to work well for users of the Red Hat distribution of OpenStack.
Chapter 5 – Cloud Management and Security
Data Center & Cloud Management
Data Center
What is a Data Center?
• A data center is a facility that centralizes an organization's IT operations and equipment, and where it stores, manages, and disseminates its data.
• It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and various security devices.
Concerns for Data Centers
• Companies rely on their information systems to run their operations. If a system becomes unavailable, company operations may be impaired or stopped completely.
• Information security is also a concern, and for this reason a data center has to offer a secure environment which minimizes the chances of a security breach.
• A data center must therefore keep high standards for assuring the integrity and functionality of its hosted computer environment. This is accomplished through redundancy of mechanical cooling and power systems (including emergency backup power generators) serving the data center, along with fiber optic cables.
Example: Telecommunications Industry Association's Telecommunications Infrastructure Standard for Data Centers
O It specifies the minimum requirements for telecommunications infrastructure of data centers and computer rooms, including
• single-tenant enterprise data centers and
• multi-tenant Internet hosting data centers.
O The topology proposed in this document is intended to be applicable to any size data center.
Typical Projects within a Data Center
O Standardization/consolidation: This project helps to reduce the number of hardware and software platforms, tools and processes within a data center. Organizations replace aging data center equipment with newer ones that provide increased capacity and performance.
O Virtualize: There is a trend to use IT virtualization technologies to replace or consolidate multiple pieces of data center equipment, such as servers. It helps lower energy consumption. This technology is also used to create virtual desktops.
O Automating: Data center automation involves automating tasks such as provisioning, configuration, patching, release management and compliance.
O Securing: In modern data centers, the security of data on virtual systems is integrated with the existing security of physical infrastructures. The security of a modern data center must take into account physical security, network security, and data and user security.
Data Center Levels and Tiers
Design Considerations
1. Design programming
O Design programming, also known as architectural programming, is the process of researching and making decisions to identify the scope of a design project.
O Other than the architecture of the building itself, there are three elements to design programming for data centers:
1. Facility topology design (space planning)
2. Engineering infrastructure design (mechanical systems such as cooling and electrical systems, including power)
3. Technology infrastructure design (cable plant)
2. Modeling criteria
Modeling criteria are used to develop future-state scenarios for space, power, cooling, and costs in the data center. The aim is to create a master plan with parameters such as number, size, location, topology, IT floor system layouts, and power and cooling technology and configurations.
3. Design recommendations
Design recommendations/plans generally follow the modeling criteria phase. The optimal technology infrastructure is identified and planning criteria are developed, such as critical power capacities, etc.
4. Conceptual design
Conceptual floor layouts should be driven by IT performance requirements as well as lifecycle costs associated with IT demand, energy efficiency, cost efficiency and availability.
5. Detailed design
Detailed design is undertaken once the appropriate conceptual design is determined. The detailed design phase should include the detailed architectural, structural, mechanical and electrical information and specification of the facility.
6. Mechanical engineering infrastructure design
This involves maintaining the interior environment of a data center, such as heating, ventilation and air conditioning (HVAC); humidification and dehumidification equipment; pressurization; and so on.
7. Electrical engineering infrastructure design
Its aspects may include utility service planning; distribution, switching and bypass from power sources; uninterruptible power source (UPS) systems; and more.
Other considerations
O 8. Technology infrastructure design
O 9. Availability expectations
O 10. Site selection
O 11. Modularity and flexibility
O 12. Environmental control
O 13. Electrical power
O 14. Low-voltage cable routing
O 15. Fire protection
O 16. Security
Data center infrastructure management
Data Center Infrastructure Management (DCIM) is the integration of information technology (IT) and facility management disciplines to centralize:
• monitoring,
• management and
• intelligent capacity planning of a data center's critical systems.
Achieved through the implementation of specialized software, hardware and sensors, DCIM enables a common, real-time monitoring and management platform for all interdependent systems across IT and facility infrastructures.
Data Center Services
Hardware installation and maintenance
Managed power distribution
Backup power systems
Data backup and archiving
Managed load balancing
Controlled Internet access
Managed e-mail and messaging
Managed user authentication and authorization
Diverse firewalls and anti-malware programs
Managed outsourcing
Managed business continuance
Continuous, efficient technical support
Some Issues Faced by Data Centers
O Data centers strive to provide fast, uninterrupted service. Equipment failure, communication or power outages, network congestion and other problems that keep people from accessing their data and applications have to be dealt with immediately. Due to the constant demand for instant access, data centers are expected to run 24/7, which creates a host of issues.
Cloud Management
What is Cloud Management?
O Cloud management is the process of overseeing and managing an organization's cloud computing resources, services, and infrastructure. It can be performed by a company's internal IT team or a third-party service provider.
CLOUD AUTOMATION
Cloud automation reduces the repetitive manual work needed to deploy and manage cloud workloads.
Automation is achieved via orchestration, which is the mechanism by which automation is implemented.
Ideally, automation and orchestration can reduce complex and time-consuming steps into a single script or click.
The idea is to boost operational efficiencies, accelerate application deployment and reduce human error.
Cloud automation refers to processes and tools that reduce or eliminate the manual effort used to provision and manage cloud computing workloads and services.
Organizations can apply cloud automation to private, public and hybrid cloud environments.
WHY USE CLOUD AUTOMATION
Orchestration and automation tools handle tasks such as:
Sizing, provisioning and configuring resources such as virtual machines (VMs)
Establishing VM clusters and load balancing
Creating storage logical unit numbers (LUNs)
Invoking virtual networks
The actual cloud deployment
Monitoring and managing availability and performance
NOTE: To achieve cloud automation, an IT team needs to use orchestration and automation tools that run on top of its virtualized environment.
TYPES OF CLOUD AUTOMATION
Automating various tasks in the cloud removes the repetition, inefficiency and errors inherent in manual processes and intervention.
Resource allocation. Autoscaling -- the ability to scale the use of compute, memory or networking resources up and down to match demand -- is a core tenet of cloud computing. It provides elasticity in resource usage and enables the pay-as-you-go cloud cost model.
Configurations. Infrastructure configurations can be defined through templates and code and implemented automatically. In the cloud, opportunities for integration increase with associated cloud services.
Development and deployment. Continuous software development relies on automation for various steps, from code scans and version control to testing and deployment.
Tagging. Assets can be tagged automatically based on specific criteria, context and conditions of operation.
Security. Cloud environments can be set up with automated security controls that enable or restrict access to apps or data, and scan for vulnerabilities and unusual performance levels.
Logging and monitoring. Cloud tools and functions can be set up to log all activity involving services and workloads in an environment. Monitoring filters can be set up to look for anomalies or unexpected events.
Provisioning Automation:
Infrastructure as Code (IaC): Tools like Terraform and AWS CloudFormation allow for automated provisioning of cloud resources (see the example after this list).
Self-service Portals: Users can provision resources through a user-friendly interface.
Cost Management and Optimization: Tools that automate cost monitoring, analysis, and optimization strategies to manage cloud expenses effectively.
Network Configuration and Management: Automating network setup and management, including VPNs, firewalls, and load balancers.
Workload Automation: Automating tasks and workflows that run in the cloud, often using tools like Apache Airflow or AWS Step Functions.
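As a hedged illustration of the Infrastructure as Code item above, a CloudFormation template can be provisioned straight from the AWS CLI; the template file and stack name below are made-up examples:
$ aws cloudformation deploy --template-file network.yaml --stack-name demo-network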
BENEFITS OF CLOUD AUTOMATION
Saves an organization time and money
Is faster, more secure and more scalable than manually performing tasks
Causes fewer errors, as organizations can construct more predictable and reliable workflows
Increases efficiency by enabling continuous deployment and automating bug detection
Simplifies implementation, compared to on-premises platforms, requiring less IT intervention
Contributes directly to better IT and corporate governance
Frees IT teams from repetitive and manual administrative tasks to focus on higher-level work that more closely aligns with the organization's business needs. This includes integrating higher-level cloud services or developing new product features.
Time savings: By automating time-consuming tasks like infrastructure provisioning, cloud automation tools allow human engineers to focus on other activities that require higher levels of expertise and cannot be easily automated.
Faster completion: Cloud automation enables tasks to be completed faster. An IaC tool can set up a hundred servers in minutes using predefined templates, for instance, whereas a human engineer might take several days to complete the same work.
Lower risk of errors: When tasks are automated, the risk of human error or oversight virtually disappears. As long as you properly configure the rules and templates that drive your automation, you will end up with clean environments.
Higher security: By a similar token, cloud automation reduces the risk that a mistake made by an engineer -- such as exposing to the public Internet an internal application that is intended only for internal use -- could lead to security vulnerabilities.
Scalability: Cloud automation is essential for any team that works at scale. It may be possible to manage a small cloud environment -- one that consists of a few virtual machines and storage buckets, for example -- using manual workflows. But if you want to scale up to hundreds of server instances, terabytes of data and thousands of users, cloud automation becomes a must.
CLOUD AUTOMATION CHALLENGES
Internet connectivity can be all-or-nothing. Public cloud services are built on wide area networks, making the reliability of the connection a major concern and a serious consideration for discussion with the service provider.
Cloud automation security options are often limited, which can be particularly difficult in highly regulated industries with complex compliance requirements, given the lack of customization and control flexibility.
Limited access to back-end data can make maintenance burdensome when complex issues arise.
Platform lock-in can be a risk. The convenience of cloud automation can lead to a broad buy-in across the enterprise, with more business processes and operations committed to the platform. And the bigger that commitment, the tougher any future migration to a different platform will be.
DIFFERENCE BETWEEN CLOUD AUTOMATION AND CLOUD ORCHESTRATION
Cloud automation invokes various steps and processes to deploy and manage workloads in the cloud with minimal or no human intervention.
Cloud orchestration describes how an administrator codifies and coordinates those automated tasks to occur at specific times and in specific sequences for specific purposes.
Automation refers to automating a single process or a small number of related tasks (e.g., deploying an app).
Orchestration refers to managing multiple automated tasks to create a dynamic workflow (e.g., deploying an app, connecting it to a network, and integrating it with other systems).
CLOUD AUTOMATION USE CASES
Some basic examples of cloud automation include the following:
Autoprovisioning cloud infrastructure resources.
Shutting down unused instances and processes, mitigating sprawl.
Performing regular data backup.
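A small, hedged sketch of the second use case: a scheduled job that stops development instances overnight. The tag key/value and schedule are invented for illustration, while the AWS CLI subcommands themselves are standard:
# crontab entry: every day at 20:00, stop running instances tagged env=dev
0 20 * * * aws ec2 describe-instances --filters "Name=tag:env,Values=dev" "Name=instance-state-name,Values=running" --query "Reservations[].Instances[].InstanceId" --output text | xargs -r aws ec2 stop-instances --instance-ids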
CLOUD AUTOMATION TOOLS
Examples of automation services from public cloud providers include the following:
AWS Config, AWS CloudFormation and AWS Elastic Compute Cloud (EC2) Systems Manager.
Google Cloud Composer, Google Cloud Deployment Manager.
IBM Cloud Orchestrator.
Microsoft Azure Resource Manager and Microsoft Azure Automation.
CONFIGURATION MANAGEMENT TOOLS
Chef Automate
HashiCorp Terraform
Puppet Enterprise
Red Hat Ansible
Salt Open Source Software
SaltStack Enterprise
MULTI-CLOUD MANAGEMENT TOOLS
CloudBolt Software
CloudSphere
Flexera
Morpheus Data
Snow Software Inc.
VMware
Zscaler
CLOUD INFRASTRUCTURE SECURITY
• Cloud infrastructure security involves protecting the infrastructure that cloud computing services are based on, including both physical and virtual infrastructure.
• Physical infrastructure includes the network infrastructure, servers, and other physical components of cloud data centers, while the Infrastructure as a Service (IaaS) offerings — such as virtualized network infrastructure, computing, and storage — comprise the virtual infrastructure made available to cloud users.
• Cloud infrastructure security is a framework for safeguarding cloud resources against internal and external threats. It protects computing environments, applications, and sensitive data from unauthorized access by centralizing authentication and limiting authorized users' access to resources.
CLOUD INFRASTRUCTURE SECURITY GOAL
• The main goal of cloud infrastructure security is to protect this virtual infrastructure against a wide range of potential security threats, including both internal and external threats.
• By implementing policies, tools, and technologies for identifying and managing security issues, companies reduce the cost to the business, improve business continuity, and enhance regulatory compliance efforts.
IMPORTANCE OF CLOUD INFRASTRUCTURE SECURITY
• Companies are increasingly moving to the cloud, entrusting these environments with sensitive data and business-critical applications.
• As a result, cloud security is a growing component of their cybersecurity programs, and cloud infrastructure security is a crucial part of this.
• Cloud infrastructure security processes and solutions provide companies with much-needed protection against threats to their cloud infrastructure.
• These solutions can help to prevent data breaches (ensuring that sensitive data remains private by blocking unauthorized access), protect the reliability and availability of cloud services, and support regulatory compliance in the cloud.
HOW DOES IT WORK?
• In the public cloud, security is shared between the cloud provider and the customer under the cloud shared responsibility model.
• In the public cloud, the service provider is responsible for the security of the physical infrastructure in their data centers.
• Responsibility for the virtual infrastructure can be split between the public cloud customer and provider based on the cloud service model in use.
• For example, the cloud provider is responsible for securing the services that they provide to a cloud customer, such as the hypervisors used to host virtual machines in an IaaS environment.
• In a Software as a Service (SaaS) environment, the cloud provider is fully responsible for the security of the infrastructure stack.
• A secure cloud infrastructure includes centralized identity and access management (IAM) and granular, role-based access controls for managing access to applications and other system resources.
• This prevents unauthorized users from gaining access to digital assets and allows system administrators to limit the resources that authorized users are permitted to access.
TYPES OF CLOUD INFRASTRUCTURE SECURITY
• Public Cloud Infrastructure Security: According to the public cloud shared responsibility model, the physical infrastructure in public cloud environments is managed and protected by the cloud provider who owns it, while the virtual infrastructure is split between the cloud vendor and the customer.
• Private Cloud Infrastructure Security: Private clouds are deployed within an organization's data centers, making the organization responsible for ensuring private cloud security, including the security of the underlying infrastructure.
• Hybrid Cloud Infrastructure Security: Hybrid clouds mix public and private cloud environments. This means that responsibility for the underlying infrastructure is shared between the cloud provider (in the case of public cloud) and the cloud customer.
BENEFITS OF CLOUD INFRASTRUCTURE SECURITY
• Improved Security: Cloud infrastructure security provides additional visibility and protection for the underlying infrastructure that supports an organization's cloud services. This enhanced security posture enables more rapid detection, prevention, and remediation of potential threats.
• Greater Reliability and Availability: Cyberattacks and other incidents can cause an organization's cloud-based applications to go offline or cause other unplanned behavior. Cloud infrastructure security helps to reduce the risk of these incidents, for example by blocking attack traffic, improving the availability and reliability of cloud environments.
• Simplified Management: Cloud infrastructure security solutions should be part of an organization's cloud security architecture. This makes it easier to monitor and manage the security of cloud environments as a whole.
• Regulatory Compliance: There are a wide variety of regulations with which cloud customers need to comply, depending on their business requirements. Many of these regulations define organizations' access to their computing environments and the sensitive data that they hold. Protecting the underlying infrastructure supporting these environments is essential for regulatory compliance.
• Decreased Operating Costs: Cloud infrastructure security can enable organizations to find and fix potential issues before they become major problems. This reduces the cost of operating cloud-based infrastructure.
• Cloud Confidence: Cloud customers who are confident in their security will move more workloads to the cloud, faster. This enables the cloud customer to more rapidly take advantage of the benefits of the cloud.
CLOUD INFRASTRUCTURE SECURITY BEST PRACTICES
• Implement security for both the control and data plane in cloud environments.
• Perform regular patching and updates to protect applications and the OS against potential exploits.
• Implement strong access controls leveraging multi-factor authentication and the principle of least privilege.
• Educate employees on the importance of cloud security and best practices for operating in the cloud.
• Encrypt data at rest and in transit across all of the organization's IT environment.
• Perform regular monitoring and vulnerability scanning to identify current threats and potential security risks.
CLOUD INFRASTRUCTURE SECURITY AND ZERO TRUST
• Zero Trust is a vital element of infrastructure security.
• Zero Trust is a security strategy designed to stop data breaches and make other cyber security attacks unsuccessful.
• All users and devices, regardless of their location, must be authenticated first and then continuously monitored to verify their authorization status.
• A comprehensive security solution built on a Zero Trust Network Access (ZTNA) architecture protects an organization's data and resources across all platforms and environments.
• With modern tools, companies can control access, monitor traffic and usage continuously, and adapt their security strategy easily—even as dynamic cloud environments change.
INFRASTRUCTURE SECURITY
• Infrastructure security in cloud computing helps with:
• Data Protection
• Access Management
• Real-Time Threat Detection
• Cloud Compliance
• Scalability
• Network Security
• Application Security
• Centralized Security
• Business Continuity
KEY COMPONENTS OF CLOUD INFRASTRUCTURE SECURITY
• Identity and Access Management (IAM)
• Network Security
• Data Security
• Endpoint Security
• Application Security
IDENTITY AND ACCESS MANAGEMENT (IAM)
• Identity and access management (IAM) is a security measure that governs who can access cloud resources and what activities they can perform. IAM systems can
implement security policies, manage user identities, track all logins, and perform many more operations.
• IAM mitigates insider threats by implementing least-privilege access and segregating duties. Additionally, it can also help detect unusual behavior and provide early warning signs of potential security breaches.
• Use of an IAP (Identity-Aware Proxy) can grant temporary access to a resource.
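As a brief, hedged illustration of least-privilege access in practice, the standard OpenStack CLI can scope a user's role to a single project. The user, project and role names below are hypothetical and deployment-specific:
$ openstack role add --project dev-project --user alice member
$ openstack role assignment list --user alice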
NETWORK SECURITY
• Network security in the cloud means protecting the confidentiality and availability of data as it moves across the network. As data reaches the cloud by traveling over the internet, network security becomes more critical in a cloud environment.
• Security measures for networks include firewalls and virtual private networks (VPNs), among others. In addition, all cloud providers offer a virtual private cloud (VPC) feature that allows organizations to run a private and secured network within their cloud data center.
DATA SECURITY
• Data security in the cloud involves protecting data at rest, in transit, and in use. It includes various measures such as encryption, tokenization, secure key management, and data loss prevention (DLP). Additional data security measures include adding access controls and secure configuration to cloud databases and cloud storage buckets.
• Moreover, data protection laws also play a critical role in protecting cloud data. Industry regulations like GDPR, ISO 27001, HIPAA, etc. mandate that organizations have proper security measures to protect user data in the cloud.
ENDPOINT SECURITY
• Endpoint security focuses on securing the user devices, or endpoints, that are used to access the cloud, such as smartphones, laptops, and tablets. With new working policies like remote work and Bring Your Own Device (BYOD), endpoint security has become a vital aspect of cloud infrastructure security.
• Organizations must ensure that users access their cloud resources with secured devices. Endpoint security measures include firewalls, antivirus software, and device management solutions. Additionally, it may include measures like user training and awareness to avoid potential security threats.
APPLICATION SECURITY
• Cloud application security is probably the most critical part of cloud infrastructure security. It involves securing applications in the cloud against various security threats like cross-site scripting (XSS), cross-site request forgery (CSRF), and injection attacks.
• Cloud applications can be secured in various ways, such as secure coding practices, vulnerability scanning, and penetration testing. Additionally, measures like web application firewalls (WAF) and runtime application self-protection (RASP) can provide added layers of security.
TOOLS FOR CLOUD INFRASTRUCTURE SECURITY
• Amazon Web Services (AWS) Security Hub: AWS Security Hub centralizes visibility and offers actionable insights into security alerts. Additionally, it helps organizations strengthen their cloud posture with advanced threat intelligence, automated compliance checks, and seamless integration with other security tools.
• Microsoft Azure Security Center: Microsoft Azure Security Center is a cloud-native security management tool that provides continuous security monitoring, threat detection, and actionable recommendations to improve Azure environments. It uses machine learning and behavioral analytics to help identify and respond to potential threats and ensure compliance with industry standards.
• Google Cloud Security Command Center: Google Cloud Security Command Center offers centralized access to cloud security solutions. As a result, it allows the organization to have complete visibility and control over the resources and services on Google Cloud Platform (GCP). Its wide range of capabilities includes advanced threat detection technologies, real-time insights, and security analytics.
• Cisco Cloudlock: Cisco Cloudlock is an advanced cloud security platform that operates natively in the cloud. It offers comprehensive data protection, access controls, and threat intelligence. It offers security measures for various cloud applications, especially Software-as-a-Service (SaaS).
• IBM Cloud Pak for Security: IBM Cloud Pak for Security is an integrated security platform for cloud environments that offers threat intelligence, security analytics, and automation functionalities. As a result, it helps organizations to effectively detect, investigate, and respond to security threats in both cloud and hybrid environments. Additionally, it uses advanced analytics and AI-driven insights for better cloud security.
5 ADVANCED TECHNIQUES FOR CLOUD INFRASTRUCTURE SECURITY
1. ENCRYPTION
2. IDENTITY AND ACCESS MANAGEMENT (IAM)
3. CLOUD FIREWALLS
4. VIRTUAL PRIVATE CLOUD (VPC) AND SECURITY GROUPS
5. PENETRATION TESTING
ENCRYPTION
• The goal of encryption is to make data unreadable for those who access it. Once data is encrypted, only authorized users, i.e. individuals with the decryption keys, will be able to read it. Since encrypted data is useless, it cannot be stolen or used to carry out other attacks.
• You can encrypt data while it is stored (at rest) and also when it is transferred from one location to another (in transit). This technique is critical when transferring data, sharing information, or securing communication between different processes.
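A minimal sketch of encrypting a file at rest before it leaves your side; the file name is hypothetical, -pbkdf2 requires OpenSSL 1.1.1 or newer, and in practice a managed key service would usually hold the key rather than a passphrase typed at a prompt:
$ openssl enc -aes-256-cbc -pbkdf2 -in customers.csv -out customers.csv.enc     # encrypt before upload
$ openssl enc -d -aes-256-cbc -pbkdf2 -in customers.csv.enc -out customers.csv  # decrypt after download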
IDENTITY AND ACCESS MANAGEMENT (IAM)
• We have already established IAM above as a key component of infrastructure security in cloud computing. The purpose of IAM tools is to verify user identity and deny access to unauthorized parties. IAM checks the user's identity and determines whether the user is allowed to access the cloud resources or not.
• Since IAM protocols are not based on the device or location used while attempting to log in, they are highly useful in keeping cloud infrastructure secure.
• Key capabilities of IAM tools:
• Identity Providers (IdP): Authenticate the identity of users.
• Single Sign-On (SSO): Enables users to sign in once and access all cloud resources associated with their account.
• Multi-Factor Authentication (MFA): Measures like 2-factor authentication add extra security layers for user access.
• Access Control: Allows and restricts user access.
CLOUD FIREWALLS
• Just like traditional firewalls, cloud firewalls are a shield around the cloud infrastructure that filters malicious traffic. Additionally, they help prevent cyberattacks like DDoS attacks, vulnerability exploitation, and malicious bot activity. There are basically two types of cloud firewalls:
• Next-Generation Firewalls (NGFW): These are deployed in a data center to protect the organization's Infrastructure-as-a-Service (IaaS) or Platform-as-a-Service (PaaS) models.
• SaaS Firewalls: These secure networks in the virtual space just like traditional firewalls, but for workloads hosted in the cloud, such as Software as a Service (SaaS) models.
VIRTUAL PRIVATE CLOUD (VPC) AND SECURITY GROUPS
• A virtual private cloud (VPC) provides a private cloud environment within a public cloud domain. Additionally, a VPC creates highly configurable sections of a public cloud. This means you can access VPC resources on demand and scale up as per your needs.
• To secure your VPC, you can use security groups. Each security group acts as a virtual firewall that controls the traffic flowing in and out of the cloud. However, these groups can be implemented at the instance level and not at the subnet level.
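A hedged sketch of a security group that only admits HTTPS traffic, using the AWS CLI; the group name, description, VPC ID and the group ID returned by the first command are placeholders:
$ aws ec2 create-security-group --group-name web-sg --description "HTTPS only" --vpc-id vpc-0abc1234
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0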
PENETRATION TESTING
• Cloud penetration testing is a technique to find vulnerabilities present in a cloud environment by simulating real attacks. Organizations can appoint third-party penetration testing companies to conduct the testing on their cloud applications.
• Penetration testers (a.k.a. ethical hackers) use a process to check each part of the application to find where the security flaws lie. They document each vulnerability they find, along with its impact level, and also provide recommendations for remediation.
• Cloud penetration testing offers you:
• Security vulnerabilities present in a cloud infrastructure
• Impact level of the vulnerabilities (low, high, or critical)
• Ways to address these vulnerabilities
• Meet compliance needs
• Strengthen overall cloud security posture
Security and Privacy Issues in Cloud Computing
Infrastructure Security
Data Security and Storage
Identity and Access Management (IAM)
Privacy
Infrastructure Security
Network Level
Host Level
Application Level
The Network Level
Ensuring confidentiality and integrity of your organization's data-in-transit to and from your public cloud provider
Ensuring proper access control (authentication, authorization, and auditing) to whatever resources you are using at your public cloud provider
Ensuring availability of the Internet-facing resources in a public cloud that are being used by your organization, or have been assigned to your organization by your public cloud providers
Replacing the established model of network zones and tiers with domains
The Network Level – Mitigation
Note that network-level risks exist regardless of what aspects of "cloud computing" services are being used
The primary determination of risk level is therefore not which *aaS is being used,
But rather whether your organization intends to use or is using a public, private, or hybrid cloud.
The Host Level
SaaS/PaaS
Both the PaaS and SaaS platforms abstract and hide the host OS from end users
Host security responsibilities are transferred to the CSP (Cloud Service Provider)
You do not have to worry about protecting hosts
However, as a customer, you still own the risk of managing information hosted in the cloud services.
Case study: Amazon's EC2 infrastructure
"Hey, You, Get Off of My Cloud: Exploring Information Leakage in Third-Party Compute Clouds"
Multiple VMs of different organizations, with virtual boundaries separating each VM, can run within one physical server
"Virtual machines" still have internet protocol (IP) addresses, visible to anyone within the cloud.
VMs located on the same physical server tend to have IP addresses that are close to each other and are assigned at the same time
An attacker can set up lots of his own virtual machines, look at their IP addresses, and figure out which one shares the same physical resources as an intended target
Once the malicious virtual machine is placed on the same server as its target, it is possible to carefully monitor how access to resources fluctuates and thereby potentially glean sensitive information about the victim
Local Host Security
Are local host machines part of the cloud infrastructure?
They sit outside the security perimeter
While cloud consumers worry about the security on the cloud provider's site, they may easily forget to harden their own machines
The lack of security of local devices can:
Provide a way for malicious services on the cloud to attack local networks through these terminal devices
Compromise the cloud and its resources for other users
Local Host Security (Cont.)
With mobile devices, the threat may be even stronger
Users misplace devices or have them stolen
Security mechanisms on handheld gadgets are oftentimes insufficient compared to, say, a desktop computer
This provides a potential attacker an easy avenue into a cloud system.
If a user relies mainly on a mobile device to access cloud data, the threat to availability is also increased as mobile devices malfunction or are lost
Devices that access the cloud should have:
Strong authentication mechanisms
Tamper-resistant mechanisms
Strong isolation between applications
Methods to trust the OS
Cryptographic functionality when traffic confidentiality is required
The Application Level
DoS
EDoS (Economic Denial of Sustainability)
An attack against the billing model that underlies the cost of providing a service, with the goal of bankrupting the service itself.
End user security
Who is responsible for Web application security in the cloud?
SaaS/PaaS/IaaS application security
Customer-deployed application security
Data Security and Storage
Several aspects of data security, including:
Data-in-transit
Confidentiality + integrity using a secured protocol
Confidentiality with a non-secured protocol and encryption
Data-at-rest
Generally not encrypted, since data is commingled with other users' data
Encryption if it is not associated with applications?
But how about indexing and searching?
Then homomorphic encryption vs. predicate encryption?
Processing of data, including multitenancy
For any application to process data, it must not be encrypted
Data Security and Storage (cont.)
Data lineage
Knowing when and where the data was located within the cloud is important for audit/compliance purposes
o E.g., Amazon AWS:
Store <d1, t1, ex1.s3.amazonaws.com>
Process <d2, t2, ec2.compute2.amazonaws.com>
Restore <d3, t3, ex2.s3.amazonaws.com>
• Data provenance
o Computational accuracy (as well as data integrity)
o E.g., a financial calculation: sum((((2*3)*4)/6) - 2) = $2.00?
o Correct, assuming US dollars. But how about dollars of different countries? With
the correct exchange rate? (See the worked sketch below.)
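As a minimal worked sketch of the same calculation in Python, using decimal arithmetic
and an illustrative, made-up exchange rate: the point is that "$2.00" is only correct
once the currency (and, for conversions, the rate and its date) are recorded as part of
the data's provenance.

    from decimal import Decimal

    # The calculation itself: (((2 * 3) * 4) / 6) - 2
    amount = (Decimal(2) * 3 * 4) / 6 - 2
    print(amount)               # 2 -> "$2.00", assuming US dollars

    # The same number means something different in another currency.
    # Illustrative, made-up rate; provenance must record which rate was
    # used and when it was obtained.
    usd_to_cad = Decimal("1.35")
    print(amount * usd_to_cad)  # 2.70 CAD under that assumed rate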
Data Security and Storage (cont.)
• Data remanence
o Inadvertent disclosure of sensitive information is possible
• Data security mitigation?
o Do not place any sensitive data in a public cloud
o Place only encrypted data into the cloud? (A client-side sketch follows below.)
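One way to act on "place only encrypted data into the cloud" is client-side encryption
before upload. A minimal sketch, assuming the third-party cryptography package; note
that, as raised above, the provider can then no longer index or search the object:

    from cryptography.fernet import Fernet

    # The key stays with the data owner and is never handed to the CSP.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    plaintext = b"customer record: alice@example.com"
    ciphertext = fernet.encrypt(plaintext)   # this ciphertext is what gets uploaded

    # Later, after fetching the object back from the cloud:
    assert fernet.decrypt(ciphertext) == plaintext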
• Provider data and its security: storage
o To the extent that quantities of data from many companies are centralized, this
collection can become an attractive target for criminals.
o Moreover, the physical security of the data center and the trustworthiness of
system administrators take on new importance.
Why IAM?
• The organization's trust boundary will become dynamic: it will move beyond the
organization's control and extend into the service provider's domain.
• Managing access for diverse user populations (employees, contractors, partners, etc.)
• Increased demand for authentication
o Personal, financial, and medical data will now be hosted in the cloud
o S/W applications hosted in the cloud require access control
• Need for higher-assurance authentication
o Authentication in the cloud may mean authentication outside the firewall
o Limits of password authentication
o Need for authentication from mobile devices
What is Privacy?
• The concept of privacy varies widely among (and sometimes within) countries,
cultures, and jurisdictions. It is shaped by public expectations and legal
interpretations; as such, a concise definition is elusive if not impossible.
• Privacy rights or obligations are related to the collection, use, disclosure, storage,
and destruction of personal data (or Personally Identifiable Information, PII).
• At the end of the day, privacy is about the accountability of organizations to data
subjects, as well as the transparency of an organization's practices around personal
information.
What is the data life cycle?
What Are the Key Privacy Concerns?
• Typically a mix of security and privacy
• Some considerations to be aware of:
o Storage
o Retention
o Destruction
o Auditing, monitoring and risk management
o Privacy breaches
o Who is responsible for protecting privacy?
Storage
• Is it commingled with information from other organizations that use the same CSP?
• The aggregation of data raises new privacy issues.
• Some governments may decide to search through data without necessarily notifying
the data owner, depending on where the data resides.
• Does the cloud provider itself have any right to see and access customer data?
• Some services today track user behaviour for a range of purposes, from sending
targeted advertising to improving services.
Retention
• How long is personal information (that is transferred to the cloud) retained?
• Which retention policy governs the data? Does the organization own the data, or
the CSP?
• Who enforces the retention policy in the cloud, and how are exceptions to this policy
(such as litigation holds) managed?
Destruction
• How does the cloud provider destroy PII at the end of the retention period?
• How do organizations ensure that their PII is destroyed by the CSP at the right point
and is not available to other cloud users?
• Cloud storage providers usually replicate the data across multiple systems and sites;
increased availability is one of the benefits they provide.
How do you know that the CSP didn’t retain additional How do you know that a breach has occurred?
copies? How do you ensure that the CSP notifies you when a
Did the CSP really destroy the data, or just make it breach occurs?
inaccessible to the organization? Who is responsible for managing the breach notification
Is the CSP keeping the information longer than process (and costs associated with the process)?
necessary so that it can mine the data for its own use? If contracts include liability for breaches resulting from
Auditing, monitoring and risk negligence of the CSP?
management How is the contract enforced?
How can organizations monitor their CSP and provide How is it determined who is at fault?
assurance to relevant stakeholders that privacy Who is responsible for protecting privacy?
requirements are met when their PII is in the cloud? Data breaches have a cascading effect
Are they regularly audited? Full reliance on a third party to protect personal data?
What happens in the event of an incident? In-depth understanding of responsible data stewardship
If business-critical processes are migrated to a cloud Organizations can transfer liability, but not
computing model, internal security processes need to accountability
evolve to allow multiple cloud providers to participate in Risk assessment and mitigation throughout the data life
those processes, as needed. cycle is critical.
These include processes such as security monitoring, auditing, Many new risks and unknowns
forensics, incident response, and business continuity The overall complexity of privacy protection in the cloud
Privacy breaches represents a bigger challenge.
Private Cloud
Chapter – 4
Abstract
A private cloud is a computing model that provides an organization with exclusive access to cloud resources, ensuring enhanced
security, control, and customization. Unlike public cloud environments, where resources are shared among multiple users, private
clouds are dedicated to a single entity, either hosted on-premises or by a third-party provider. They allow businesses to tailor
infrastructure to their specific needs while maintaining data privacy and compliance with regulatory standards. Private clouds offer
scalability and flexibility, enabling organizations to optimize workloads and efficiently manage resources. Though typically more
expensive to maintain, they are ideal for businesses requiring stringent data security, high performance, and full control over their
cloud environment.
Figure 3: VM Migration
Phases of VM Migration:
Figure 4: Phases of VM Migration
•Onboarding: Select the VM to migrate
•Replication: Replicate data from the source VM to the target cloud
•Set VM target details: Configure the target VM, including the project, network,
memory, and instance type
•Test-clone: Create a clone of the source VM on the target cloud for testing
•Cut-over: Migrate the source VM to the target cloud, which involves stopping the
source VM, replicating data, and creating the target VM
•Finalize: Perform any final cleanup after the migration is complete.
Next, the migration process involves several key steps, including pre-migration checks
to ensure compatibility and resource availability, followed by the actual migration,
which can be performed through methods such as cold migration (shutting down the VM
before transfer), hot migration (moving the VM while it is running), or live migration
(seamlessly transferring the VM with minimal disruption). Once the migration is
complete, post-migration validation is conducted to confirm that the VM operates
correctly in its new environment, and any necessary adjustments or optimizations are
implemented. Throughout this cycle, monitoring and management tools are essential to
track performance, ensure stability, and address any issues that may arise, ultimately
enhancing the efficiency and reliability of IT operations within an organization.
Live Migration
Steps involved in Live Migration of a VM:
Stage-0: Pre-Migration. An active virtual machine exists on the physical host A.
Stage-1: Reservation. A request is issued to migrate an OS from host A to host B (a
precondition is that the necessary resources exist on B, along with a VM container of
that size).
Stage-2: Iterative Pre-Copy. While the OS continues to run on A, its memory pages are
copied to B in successive rounds; pages dirtied during one round are re-sent in the
next.
Stage-3: Stop-and-Copy. The running OS instance at A is suspended, and its network
traffic is redirected to B. As described in reference 21, CPU state and the remaining
inconsistent memory pages are then transferred. At the end of this stage, there is a
consistent suspended copy of the VM at both A and B. The copy at A is still
considered primary and is resumed in case of failure.
Stage-4: Commitment. Host B indicates to A that it has successfully received a
consistent OS image. Host A acknowledges this message as a commitment of the
migration transaction.
Stage-5: Activation. The migrated VM on B is now activated. Post-migration code runs
to reattach device drivers to the new machine and advertise moved IP addresses.
This approach to failure management ensures that at least one host has a consistent VM
image at all times during migration:
1) The original host remains stable until the migration commits, and the VM may be
suspended and resumed on that host with no risk of failure.
2) A migration request essentially attempts to move the VM to a new host; on any sort
of failure, execution is resumed locally, aborting the migration.
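The iterative pre-copy and stop-and-copy stages above can be summarized as a short
control loop. The sketch below is illustrative only: live_migrate and every helper it
calls (all_memory_pages, pages_dirtied_since_last_round, receive_pages, and so on) are
hypothetical placeholders, not a real hypervisor API.

    # Simplified control loop for pre-copy live migration (Stages 2-5 above).
    # Every helper used here is a hypothetical placeholder, not a real API.
    def live_migrate(vm, host_a, host_b, max_rounds=30, dirty_threshold=50):
        dirty_pages = vm.all_memory_pages()            # first round copies everything
        for _ in range(max_rounds):                    # Stage-2: iterative pre-copy
            host_b.receive_pages(dirty_pages)
            dirty_pages = vm.pages_dirtied_since_last_round()
            if len(dirty_pages) < dirty_threshold:     # writable working set is small
                break
        vm.suspend_on(host_a)                          # Stage-3: stop-and-copy
        host_b.receive_pages(dirty_pages)
        host_b.receive_cpu_state(vm.cpu_state())
        if host_b.confirm_consistent_image():          # Stage-4: commitment
            host_b.activate(vm)                        # Stage-5: reattach drivers, advertise IPs
        else:
            host_a.resume(vm)                          # abort: fall back to the source host

The else branch reflects the failure-management property listed above: until host B
commits, host A still holds a runnable copy of the VM.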
Benefits of VPC
• Flexibility to scale and control how workloads connect, both regionally and globally.
Bring your own IP addresses to Google's network infrastructure anywhere, and access
VPCs with no need to replicate connectivity or management policies in each region.
• VPC Flow Logs: help with network monitoring, forensics, real-time security analysis,
and expense optimization.
• Subnets: host globally distributed multi-tier applications by creating a VPC with
subnets.
• Disaster Recovery: with application replication, create backup Google Cloud compute
capacity, then revert back once the incident is over.
• Packet Mirroring: mirror VPC network traffic so it can be examined for
troubleshooting and security analysis.
• VPN: securely connect your existing network to the VPC network over IPsec.
• VPC Peering: configure private communication across the same or different
organizations without bandwidth bottlenecks or single points of failure. Shared VPC
can be used within an organization.
Chapter – 2
CLOUD MANAGEMENT SECURITY
• Data centers are centralized locations where computing and networking equip-
ment is concentrated for the purpose of collecting, storing, processing, distributing or
allowing access to large amounts of data.
• As equipment got smaller and cheaper, and data processing needs increased
exponentially, organizations began networking multiple servers together to increase
processing power.
• Large numbers of these clustered servers and related equipment can be housed in a room,
an entire building or groups of buildings.
• Today’s data center is likely to have thousands of very powerful and very small servers
running 24/7.
• Data centers are sometimes referred to as server farms; they provide important
services such as data storage, backup and recovery, data management and networking.
• These centers can store and serve up Web sites, run e-mail and instant messaging (IM)
services, provide cloud storage and applications, enable e-commerce transactions, power
online gaming communities and many more.
• A lack of fast and reliable access to data means an inability to provide vital
services, or a loss of customer satisfaction.
1.6. Level 2
O Companies that need access to their data without downtime could use a Tier 2 data
center.
O Infrastructure includes Tier I capabilities with redundant components for power and
cooling, which may include backup UPS battery systems, chillers, generators and pumps.
O This gives the customers more reliability against disruptions.
1.7. Level 3
O Companies for whom real-time delivery of their product or service is critical to
their operations, such as media providers like Netflix, content providers like
Facebook, and financial companies, typically use a Tier 3 data center.
O Maintenance and repairs can be performed without disrupting service to the customer.
For these customers, downtime is very costly.
1.8. Level 4
O Includes Tier I, Tier II and Tier III capabilities, adding another layer of fault tolerance.
O Power, cooling and storage are all independently dual-powered.
O The topology of the infrastructure allows one fault anywhere in the system without
disruption to service, and with the least downtime.
O For enterprises that must stay active 24/7, a Tier 4 data center is ideal.
- Resource management becomes even more complex when resources are oversubscribed and
users are uncooperative.
- In addition to external factors, resource management is affected by internal factors, such
as heterogeneity of hardware and software systems, the scale of the system, the failure
rates of different components, and other factors.
1.62. How are resources managed in the cloud?
The strategies for resource management associated with the basic cloud delivery models
(IaaS, PaaS, SaaS, and DBaaS) are different.
- In all cases, the cloud service providers are faced with large fluctuating loads which
challenge the claim of cloud elasticity.
- In some cases, when a spike can be predicted, the resources can be provisioned in ad-
vance, e.g., for web services subject to seasonal spikes. For an unplanned spike the situa-
tion is slightly more complicated.
Auto-scaling can be used for unplanned spikes of the workload provided that:
(a) there is a pool of resources that can be released or allocated on demand; and
(b) there is a monitoring system enabling the resource management system to reallocate
resources in real time.
Auto-scaling is supported by PaaS services, such as Google App Engine.
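As a rough illustration of conditions (a) and (b), here is a minimal sketch of a
threshold-based auto-scaling loop. The monitoring and provisioning helpers
(get_average_cpu, add_instance, remove_instance) are hypothetical placeholders rather
than any specific cloud API:

    import time

    def autoscale(get_average_cpu, add_instance, remove_instance,
                  upper=0.75, lower=0.25, interval_seconds=60):
        # Reallocate resources in near real time based on observed load.
        while True:
            cpu = get_average_cpu()      # condition (b): a monitoring system
            if cpu > upper:
                add_instance()           # condition (a): allocate from the resource pool
            elif cpu < lower:
                remove_instance()        # release capacity back to the pool
            time.sleep(interval_seconds)

Real auto-scalers add safeguards such as cooldown periods and minimum/maximum instance
counts, but the core idea is this feedback loop between monitoring and provisioning.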