
CHAITANYA BHARATHI INSTITUTE OF TECHNOLOGY

(Autonomous)
R23 M.Tech. I Sem AI & DS (Main) Examination
Cloud Computing 23ADE102

Time: 3 Hours Max Marks: 60

Note: Answer ALL questions from Part-A at one place in the same order and
Part B (Internal Choice)

Part - A
(5Q X 2M = 10 Marks)

1. How does cloud computing address business challenges? Provide a brief example.
Ans: Cloud computing offers solutions to many business challenges, such as:
Cost Reduction: By eliminating the need for expensive hardware and IT infrastructure, cloud computing helps
businesses reduce both operational and capital costs.
Innovation: Cloud platforms enable rapid deployment of applications and services, allowing businesses to innovate
quickly and respond to changing market conditions.
Remote Work Enablement: Cloud services support remote work by allowing employees to access systems,
applications, and data from anywhere, fostering collaboration and productivity.
Data-Driven Decision Making: Cloud-based analytics tools help businesses process and analyze vast amounts of
data in real-time, enabling better decision-making and insights.
Global Reach: Cloud platforms allow businesses to expand their operations globally, providing services and
products to customers across multiple regions without the need for local infrastructure.
Ex: A retail company experiences seasonal spikes in website traffic during sales. Using cloud computing, it can
scale up its server resources to handle increased traffic and scale down when demand decreases, avoiding
unnecessary costs. Amazon Web Services (AWS) or Microsoft Azure allow such dynamic resource allocation,
ensuring smooth customer experiences while optimizing expenses.
2. Differentiate between scaling in traditional computing and scaling in cloud computing.
Ans: Scaling in Traditional Computing :
Scaling: Refers to the process of increasing the capacity of a system (or) adjusting computing resources to meet
workload demands.

 This means adding more CPU, memory, or storage to a single machine to handle greater workloads.
Unlike cloud computing, traditional scaling is often limited by the physical constraints of the hardware,
making it less flexible and more costly as demand grows.
Techniques:
1. Scale up: increase capacity (e.g., upgrading RAM from 16 GB to 32 GB).
2. Scale down: decrease capacity (e.g., reducing RAM from 32 GB to 16 GB).
Scaling in Cloud Computing:
This refers to the ability to dynamically adjust the resources allocated to applications and services based on
demand. This flexibility is one of the key advantages of cloud computing. The main aspects of scaling in cloud
computing are:
Types of Scaling:
Vertical Scaling (Scaling Up): This involves adding more resources (CPU, memory, storage) to an existing
instance. While simple, it has limitations, as it can only scale to the capacity of the machine (e.g., increasing
the memory of an existing system).
Horizontal Scaling (Scaling Out): This involves adding more instances or servers to distribute the load. It enhances
redundancy and availability and can effectively handle larger workloads (e.g., adding more servers to share the traffic).
Auto-Scaling: Cloud environments often support auto-scaling, which automatically adjusts resources in response to
real-time demand. This can involve adding or removing instances based on predefined metrics (e.g., CPU usage,
memory load), ensuring optimal performance and cost efficiency. For example, when creating an EC2 Auto Scaling
group, minimum and maximum resource values can be assigned, as the sketch below illustrates.
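The following is a minimal Python (boto3) sketch of that idea; the group name, launch-template ID, and availability zones are hypothetical placeholders, not values given in this question:

import boto3

# Hypothetical sketch: create an EC2 Auto Scaling group with explicit
# minimum and maximum instance counts, so AWS can add or remove
# instances between these bounds as demand changes.
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier-asg",             # placeholder name
    LaunchTemplate={
        "LaunchTemplateId": "lt-0123456789abcdef0",  # placeholder ID
        "Version": "$Latest",
    },
    MinSize=2,            # never fewer than 2 instances
    MaxSize=10,           # never more than 10 instances
    DesiredCapacity=2,    # start at the minimum
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# A target-tracking policy then scales within [MinSize, MaxSize]
# to hold average CPU utilization near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)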
3. Define what a Content Delivery Network (CDN) is and briefly explain its primary purpose.
Ans: Content Delivery Network(CDN):

 It means placing servers in various geographic locations that are optimized for delivering content
quickly and efficiently to users, reducing latency and ensuring better performance worldwide.
 A CDN is a highly distributed platform of servers that minimizes latency/delay in loading webpage
content by reducing the physical distance between the server and the user. Ex: Gmail should load
within 1-3 seconds.
 This helps users around the world view the same high-quality content without slowing down.
 The main goal of a CDN is to reduce latency and improve website performance by caching content
closer to end users. Instead of fetching data from a single central server, CDNs deliver it from the
nearest server node, leading to:
 Faster Load Times – Reduces delays by serving content from nearby locations.
 Reduced Bandwidth Usage – Offloads traffic from the origin server, minimizing costs.
 Improved Reliability – Ensures website availability even during traffic spikes or server failures.
 Enhanced Security – Protects against DDoS attacks by distributing traffic.

4. Describe the Open Virtualization Format (OVF) and its significance in cloud management.
Ans: Open Virtualization Format (OVF)
• The Open Virtualization Format (OVF) is an open standard for packaging and distributing virtual
appliances and virtual machines (VMs). OVF is designed to make it easier to create, deploy, and manage
virtualized software applications across different virtualization platforms. It was developed by the
Distributed Management Task Force (DMTF), and it is widely supported by various cloud providers,
hypervisor technologies, and virtualization platforms.

Significance in Cloud Management:

 Interoperability – OVF ensures that virtual machines can be deployed across different hypervisors
like VMware, VirtualBox, and Hyper-V without compatibility issues.
 Portability – It enables seamless migration of VMs between on-premises data centers and cloud
environments.
 Efficiency – OVF uses a compressed format, reducing the storage and bandwidth required for
transferring VM images.
 Security – Digital signatures can be included in OVF packages to ensure integrity and authenticity.
 Simplifies Deployment – It provides metadata about the virtual machine, such as CPU, memory,
and networking configurations, making cloud deployment more manageable.
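To make the packaging concrete: an OVF package is commonly distributed as a single OVA file, which is a tar archive containing the .ovf XML descriptor, a .mf manifest with checksums, and the virtual disk images. A minimal Python sketch (the file name is a placeholder) that lists such a package's contents:

import tarfile

# Hypothetical sketch: inspect an OVA (the tar-archive form of an OVF package).
# Expect to see an .ovf XML descriptor, an .mf manifest with checksums,
# and one or more virtual disk files such as .vmdk.
with tarfile.open("appliance.ova") as ova:   # placeholder file name
    for member in ova.getmembers():
        print(member.name, member.size)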

5. Describe the role of relational DBMS in the cloud. How does it differ from traditional on-premises
relational databases?
Ans: Relational Data Model: In this model, data is organized into tables (or relations), with rows representing
records and columns representing attributes. It is the most widely used model today, and SQL (Structured Query
Language) is typically used to manage relational databases.
Example: MySQL, PostgreSQL, Oracle DB use relational models.
Advantages: Highly flexible, supports complex queries, and maintains data integrity with ACID properties
(Atomicity, Consistency, Isolation, Durability).
Disadvantages: Can be inefficient for very large-scale datasets or very complex queries.

Feature        | Cloud DBMS                                                                     | On-premises RDBMS
Infrastructure | Hosted on the cloud provider's servers.                                        | Requires physical servers and storage.
Scalability    | Easily scales up or down as needed.                                            | Scaling requires hardware upgrades.
Accessibility  | Accessible from anywhere via the internet.                                     | Limited to internal networks unless configured for remote access.
Security       | Built-in encryption, access controls, and compliance with industry standards. | Requires in-house security measures and compliance management.
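As a small, self-contained illustration of the relational model described above (tables of rows and attributes queried with SQL), the following Python/SQLite sketch uses an invented employee table:

import sqlite3

# Minimal sketch of the relational model: rows = records, columns = attributes.
conn = sqlite3.connect(":memory:")          # throwaway in-memory database
conn.execute("""CREATE TABLE employee (
                    id     INTEGER PRIMARY KEY,
                    name   TEXT NOT NULL,
                    salary REAL)""")
conn.execute("INSERT INTO employee (name, salary) VALUES (?, ?)", ("Asha", 52000.0))
conn.execute("INSERT INTO employee (name, salary) VALUES (?, ?)", ("Ravi", 61000.0))
conn.commit()                               # durability: changes are persisted

# A declarative SQL query over the relation
for row in conn.execute("SELECT name, salary FROM employee WHERE salary > 55000"):
    print(row)                              # ('Ravi', 61000.0)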

Part - B
(5Q X 10M = 50 Marks)
6(a). List and briefly explain the limitations of Traditional computing approaches.
Ans: Every technology has both positive and negative aspects that are highly important to be discussed before
implementing it. The following discussion outlines the potential cons or disadvantages of Cloud Computing,
followed by the limitations of traditional computing.

Vulnerability to attacks

Storing data in the cloud may pose serious challenges of information theft since in the cloud every
data of a company is online. A security breach is something that even the best organizations have
suffered from and it is a potential risk in the cloud as well. Although advanced security measures are
deployed on the cloud, still storing confidential data in the cloud can be a risky affair, and hence
vulnerability to attacks shall be considered.
Network connectivity dependency

Cloud Computing is entirely dependent on the Internet. This direct tie-up with the Internet means that
a company needs to have reliable and consistent Internet service as well as a fast connection and
bandwidth to reap the benefits of Cloud Computing.

Downtime

Downtime is considered one of the biggest potential downsides of using Cloud Computing. The
cloud providers may sometimes face technical outages that can happen due to various reasons, such as
loss of power, low Internet connectivity, data centres going out of service for maintenance, etc.

Vendor lock-in

When in need to migrate from one cloud platform to another, a company might face some serious
challenges because of the differences between vendor platforms. Hosting and running the applications
of the current cloud platform on some other platform may cause support issues, configuration
complexities, and additional expenses. The company data might also be left vulnerable to security
attacks due to compromises that might have been made during migrations.

Limited control

Cloud customers may face limited control over their deployments. Cloud services run on remote
servers that are completely owned and managed by service providers, which makes it hard for the
companies to have the level of control that they would want over their back-end infrastructure.

May not get all the features

Some cloud providers offer only limited versions and with the most popular features. Before signing
up, it is important to know what cloud services are provided.

LIMITATIONS OF TRADITIONAL COMPUTING:


• Less User-friendly (as data can’t be accessed anywhere)
• Less cost-effective (as an individual has to buy expensive equipment)
• Offline mode (so accessing information is very slow)
• Less storage
• Does not provide any Scalability and Elasticity.
• Software is purchased and updated individually.

6(b). Discuss how metering and billing mechanisms work in cloud computing.

Ans: Metering and Billing in Cloud Computing

In cloud computing, metering and billing systems ensure that customers are charged based on the resources they
consume. This is a core feature of cloud services, allowing providers to offer flexible pricing models.

 Usage-Based Pricing: Cloud services track and meter the usage of computing resources (e.g.,
CPU time, memory, storage, data transfer). Users only pay for what they actually use, providing
cost efficiency compared to traditional fixed-price infrastructure.
 Billing Models:
 Pay-as-you-go: This model bills users based on actual resource consumption, allowing them to
scale their usage dynamically.
 Reserved Instances: Some cloud providers allow users to reserve computing capacity for a longer
period (e.g., one year or three years) at a discounted rate compared to on-demand pricing.
 Spot Instances: Some cloud providers offer unused capacity at significantly reduced prices, but
the availability of these instances can be interrupted if demand increases.
 Cost Management Tools: Cloud providers often offer tools to monitor and optimize costs,
helping businesses track their usage and avoid unexpected expenses. AWS Cost Explorer, Google
Cloud's Cost Management tools, and Azure Cost Management are examples of these services.
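As one concrete illustration of programmatic cost tracking, here is a minimal Python (boto3) sketch against the AWS Cost Explorer API; the dates are placeholders:

import boto3

# Hypothetical sketch: query metered usage costs per service for one month
# via AWS Cost Explorer, the API behind AWS's billing reports.
ce = boto3.client("ce", region_name="us-east-1")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],        # cost per service
)

for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{service}: ${float(amount):.2f}")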

7(a). Compare and contrast the three layers of traditional computing with the structure of cloud computing.
Ans: Computers and computing have become an integral part of our daily lives. Different
people use different categories of computing facilities. These computing facilities can be segmented into three
categories:
 Infrastructure
 Platform
 Application

These three categories of computing facilities form three layers in the basic architecture of
computing. Below figure represents the relationships between these three entities.
Application

Platform

Infrastructure

(Three layers of computing facilities)

Infrastructure

The bottom layer or the foundation is the ‘computing infrastructure’ facility. This includes
all physical computing devices or hardware components like the processor, memory,
network, storage devices and other hardware appliances. Infrastructure refers to computing
resources in their bare-metal form (without any layer of software installed over them, not
even the operating system). This layer needs basic amenities like electric supply, cooling
system etc.

Platform

In computing, platform is the underlying system over which applications run. It can be said
that the platform consists of the physical computing device (hardware) loaded with layer(s)
of software where the program or application can run. The term ‘computing platform’ refers
to different abstraction levels. It may consist of:
 Certain hardware components only, or
 Hardware loaded with an operating system (OS).

Application

Applications (application software) constitute the topmost layer of this layered architecture. This
layer generally provides interfaces for interaction with external systems (human or machine) and is
accessed by end users of computing. A user works on the application layer while he or she is going to
edit a document, play a game or use the calculator in a computer. At this layer, organizations access
enterprise applications using application interfaces to run their business.
Different types of people work at different layers of computing. They need to have different skill sets
and knowledge. The below figure shows the general users or subscribers of the three different
computing layers.

(Figure: elements of the three computing layers. Top: Application Interfaces. Middle: Operating System and
Application Runtimes. Bottom: Compute, Storage, Network.)

An upper layer in this architecture is dependent on underlying layer(s). Access to the topmost layer
effectively consumes facilities from all underlying layers. Thus, a user of the application layer
implicitly consumes the facilities of the platform and infrastructure layers as well.
7(b). Explain the concept of resource virtualization in cloud computing.
Ans: Resource Virtualization:
• Resource virtualization (virtualization) in cloud computing is a process that creates a virtual
version of physical resources, such as servers, storage, networks, or even applications, allowing multiple
virtual resources to operate on a single physical infrastructure.
• This abstraction layer is essential to cloud computing, enabling providers to offer scalable,
efficient, and cost-effective cloud services to end-users without requiring them to manage the underlying
physical hardware.
• Through virtualization, cloud providers can pool resources from various hardware and deliver
them on demand.


Types of Virtualization in Cloud Computing:

• Hardware/Server Virtualization:

Divides a single physical server into multiple virtual machines (VMs), each with its own OS and
applications, running independently on the same physical hardware. Allows providers to maximize
hardware utilization, isolate workloads, and offer scalable resources on demand. Hypervisors like
VMware ESXi, Microsoft Hyper-V, and KVM are common tools used.

• Storage Virtualization:

Combines multiple storage devices into a unified storage pool, which can then be divided into
virtual storage units for cloud users. Simplifies data management and improves storage scalability
and resilience.SAN (Storage Area Network) and NAS (Network-Attached Storage) are
commonly virtualized storage types.

• Network Virtualization:

Abstracts physical networking resources to create virtual networks, allowing for easier
management and greater flexibility in network configuration. Enables features like virtual LANs
(VLANs) and virtual network functions (VNFs), often managed through Software-Defined
Networking (SDN).

• Desktop Virtualization:

Allows users to access virtual desktops hosted on remote servers. These desktops can run
independently, providing flexibility and security. Tools like VMware Horizon and Citrix Virtual
Apps and Desktops are popular for desktop virtualization.

• Application Virtualization:

Separates applications from the underlying OS, allowing them to be deployed and managed
independently of the hardware or OS they run on. Commonly used for software as a service (SaaS)
applications, where users access applications in the cloud without needing local installation.

8(a).Discuss different strategies for scaling in the cloud.

Ans: There are three types of scaling:


 Vertical Scalability (Scaling-up and Scale-down)
 Horizontal scalability (Scaling-out and Scale-in)
 Auto scalability

VERTICAL SCALING (SCALING UP):

 In vertical scaling, the overall system capacity is enhanced by replacing resource components
within existing nodes.
 This type of scaling where a resource component is powered up through replacement is called
scaling up or vertical scaling
Disadvantages:
 Requires huge financial investment.
 Greater risk of hardware failure.
 Vendor lock-in and limited future upgradability.
 Low availability.

Advantages:
 Less power consumption than running multiple servers.
 Easy to manage and install.

(Figure: a computing resource scaled up by replacing it with a more powerful resource as demand increases.
A physical server with 1 CPU of 3 GHz capacity is replaced from the resource pool by a 6 GHz server, and
then by a 9 GHz server, as demand grows.)

Horizontal Scaling (Scaling Out/In)


o This strategy adds or removes multiple instances of a resource to balance the load dynamically.
o Example: Auto-scaling groups in AWS or Kubernetes clusters automatically adding more
instances when traffic spikes and reducing them when demand decreases.
o Pros: Highly scalable, improves fault tolerance and redundancy.
o Cons: Requires load balancers (a minimal round-robin sketch follows after this list) and more
complex management.
Diagonal Scaling (Hybrid Approach)
o Combines both vertical and horizontal scaling by first scaling up (adding resources to a single
instance) and then scaling out (adding more instances) as needed.
o Example: A database server might be scaled vertically until it reaches a hardware limit, after
which additional read replicas are deployed horizontally.
o Pros: Maximizes resource efficiency, balances cost and performance.
o Cons: Requires careful planning to avoid bottlenecks.
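To make the horizontal-scaling and load-balancing idea concrete, here is a small self-contained Python sketch of round-robin distribution across instances; all addresses are invented for illustration:

from itertools import cycle

# Minimal sketch: round-robin distribution of requests across
# horizontally scaled instances. Adding an instance to the list
# is the "scale out" step; removing one is "scale in".
instances = ["10.0.1.11", "10.0.1.12", "10.0.1.13"]  # invented addresses
next_instance = cycle(instances)

def route(request_id: int) -> str:
    """Pick the next backend in rotation for this request."""
    backend = next(next_instance)
    return f"request {request_id} -> {backend}"

for i in range(6):
    print(route(i))   # requests spread evenly over the three instances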

8(b). Compare load balancing strategies used by Google Cloud and Amazon EC2.
Ans: Case study on Google Cloud and Amazon Elastic Compute Cloud (EC2):
A case study on Google Cloud Platform (GCP) and Amazon Elastic Compute Cloud (EC2) offers insights into the
capabilities, advantages, and challenges of these cloud computing services. Here’s a structured overview comparing
their offerings:

1. Overview of GCP and EC2

 Google Cloud Platform (GCP): GCP is Google’s suite of cloud computing services, offering
compute, storage, networking, machine learning, and big data capabilities. It supports app
development, analytics, and deployment.
 Amazon Elastic Compute Cloud (EC2): EC2 is part of Amazon Web Services (AWS) and
provides scalable virtual server instances. EC2 allows users to rent and configure compute
capacity, making it a popular choice for applications that need to scale quickly.

2. Key Offerings and Services

 Compute Power: GCP: Offers Compute Engine for VM instances, Google Kubernetes Engine
(GKE) for container orchestration, and App Engine for serverless computing.
 EC2: Provides flexible instance types, including On-Demand, Reserved, and Spot Instances, with
tailored options for computing power, memory, and storage needs.
 Storage and Database Solutions: GCP: Includes Cloud Storage for object storage, BigQuery for
data warehousing, and Cloud SQL and Spanner for relational databases.
 EC2: Integrated with AWS services like S3 (object storage), RDS (relational database), and
DynamoDB (NoSQL database).
 Networking: GCP: Provides a global fiber network and advanced load balancing. Networking
solutions are integrated, enabling low-latency and secure global access.
 EC2: Supports Virtual Private Cloud (VPC), enabling users to set up isolated networks, and
Elastic Load Balancing (ELB) for scalable applications.

3. Use Cases and Industry Applications

 GCP: Known for data science, AI, and machine learning, GCP’s BigQuery is widely used in data-
heavy industries for advanced analytics. It’s also used by developers for mobile app backend
solutions and content delivery.
 EC2: Popular across various industries, EC2 serves large-scale e-commerce platforms, web
hosting, and enterprise applications, particularly those that require flexibility in scaling up or
down.

4. Security and Compliance

 GCP: GCP offers a security model with data encryption, Identity and Access Management (IAM),
and compliance with standards like GDPR, HIPAA, and FedRAMP. Cloud Armor and Security
Command Center provide additional protection.
 EC2: AWS focuses on a secure, isolated environment with data encryption and extensive IAM.
Security services include GuardDuty, Shield, and Inspector to help meet compliance and industry
standards.

5. Pricing Models

 GCP: Offers Pay-as-you-go pricing, with sustained use discounts and committed use contracts.
Pricing is generally competitive and includes free-tier offerings for new users.
 EC2: EC2 instances are billed based on an On-Demand, Reserved, or Spot pricing model. AWS
also offers savings plans and free-tier options for a limited time.

6. Advantages

GCP:
 Superior data analytics and machine learning capabilities.
 Fast and efficient network infrastructure, reducing latency.
 Strong integration with Google’s own services like Gmail, Maps, and Search.

EC2:

 Extensive global reach and long-standing reliability.


 Rich ecosystem of AWS services, including Lambda for serverless computing.
 High flexibility in configurations for compute power, memory, and storage.

7. Challenges and Limitations

 GCP: Smaller range of services compared to AWS and can have a steeper learning curve
for new users.
 EC2: Pricing can become complex, and while it has a broad range of services, it can
overwhelm new users with its sheer number of options.

8. Conclusion

Both GCP and EC2 offer powerful cloud computing solutions, though each has unique strengths. GCP is well-suited
for data-driven applications and machine learning, while EC2 excels with flexible compute options and an extensive
service ecosystem for traditional and enterprise-level applications.

9(a). Discuss the steps involved in capacity planning.

Ans: Capacity planning in computing is a systematic process that helps organizations determine the necessary
computing resources—such as servers, storage, network bandwidth, and software—to meet current and anticipated
workloads. This strategic planning ensures that IT infrastructure can efficiently support business operations without
overspending on unnecessary resources or facing performance issues due to inadequate capacity.

STEPS FOR CAPACITY PLANNING IN CC:


For a service provider who provides computing as a utility service, there are three basic steps for capacity planning
to add value to their system. These are also considered as the core concerns of capacity planning. The steps are
mentioned below.
Step 1. Determining the expected demand – In the first step of the capacity planning process, the service provider
must carefully examine the expected overall resource usage patterns as they vary over a period of time.
Step 2. Analysing current response to load – Next, the service provider must analyse the available resource capacity
of their system and how the applications respond to load (or overload) with current capacity, so that any requirement
of additional capacity that is to be added can be identified.
Step 3. Knowing the value of the system – Finally, the service provider must be aware about the value of the
systems to the business, so to know when adding more capacity provides value and when it does not.
Wrong capacity planning results in huge business loss in traditional computing. Cloud service consumers are also
not totally relieved of the capacity planning task; but they are at low risk if their estimation is wrong.
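As a worked illustration of Steps 1 and 2, a back-of-the-envelope sizing calculation in Python; every number is an invented assumption:

import math

# Hypothetical sizing sketch: how many instances are needed to serve an
# expected peak load, with headroom for error in the demand estimate.
peak_requests_per_sec = 4500      # assumed expected peak demand (Step 1)
capacity_per_instance = 300       # assumed measured throughput per instance (Step 2)
headroom = 0.25                   # 25% safety margin for estimation error

required = peak_requests_per_sec / capacity_per_instance
required_with_headroom = math.ceil(required * (1 + headroom))

print(f"bare minimum instances : {math.ceil(required)}")     # 15
print(f"planned capacity       : {required_with_headroom}")  # 19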

9(b). Analyze the challenges associated with capacity planning in cloud environment.
Ans: Capacity planning in a cloud environment refers to the process of forecasting, allocating, and managing
computing resources efficiently to meet current and future demands. Unlike traditional IT infrastructure, cloud
computing offers dynamic scalability, but this also introduces complexities in optimizing performance, cost, and
reliability. Organizations must carefully plan resource allocation to ensure efficient operations while avoiding
excessive costs or performance bottlenecks.
Challenges in Capacity Planning

1. Unpredictable Demand
Cloud workloads can vary significantly due to user activity, seasonal spikes, or unexpected traffic surges.
Predicting resource needs accurately becomes challenging, leading to either over-provisioning (wasting
resources) or under-provisioning (causing performance issues).
2. Cost Management
Unlike traditional computing, where resources are fixed, cloud providers charge based on usage. Improper
capacity planning can lead to higher operational costs if resources are not optimized efficiently.
3. Scalability Complexity
Deciding whether to scale resources vertically (adding more power to existing instances) or horizontally
(adding more instances) requires careful monitoring. Automation tools help, but misconfigurations can lead
to inefficiencies.
4. Multi-Tenancy Issues
In public cloud environments, multiple users share resources. Heavy usage by one tenant may impact the
performance of others, requiring effective resource allocation strategies.
5. Performance Optimization
Balancing compute power, storage, and network bandwidth for different workloads is crucial. Poor capacity
planning can lead to slow performance or downtime, affecting business operations.
6. Service-Level Agreement (SLA) Compliance
Organizations must meet SLAs for uptime and availability while scaling resources dynamically.
Mismanagement can lead to SLA violations, resulting in financial penalties and reputational damage.
7. Security and Compliance
Scaling resources while maintaining security and regulatory compliance is a challenge. Increased resource
provisioning may introduce vulnerabilities if security measures are not properly enforced

10(a). Design a Security as a Service (SECaaS) model for a cloud-based enterprise, addressing identity
management and access control. Discuss the benefits of this approach.
Ans: Identity Management and Access Control (IMAC) in cloud computing ensures that only authorized users
and systems can access specific resources while maintaining security and compliance. These concepts are
fundamental to securing cloud environments and data, as they protect sensitive information and reduce the risk of
unauthorized access.
1. Identity Management (IdM):
User Authentication: The process of verifying a user's identity using credentials such as usernames, passwords,
biometric data, or security tokens. Multi-Factor Authentication (MFA) adds extra layers of security by requiring
more than one form of verification
Single Sign-On (SSO): A centralized authentication mechanism that allows users to access multiple services and
applications with a single set of credentials, improving user convenience while maintaining security
Identity Federation: Allows users to use the same identity across different organizations or platforms, reducing the
need for multiple logins and providing seamless access control across different services
Identity as a Service (IDaaS): Cloud-based identity management services that handle user authentication and
authorization for applications in a centralized, scalable manner
2. Access Control:
Access Control Models:
Role-Based Access Control (RBAC): Users are assigned specific roles that grant them predefined permissions to
access resources. Roles could be based on job functions or responsibilities, such as "admin" or "employee"
Attribute-Based Access Control (ABAC): Access decisions are based on attributes of the user (such as department,
location, or security clearance) and the environment (e.g., time of day, device used). ABAC is more flexible and
dynamic than RBAC
Discretionary Access Control (DAC): The owner of a resource determines who has access to it, typically allowing
more flexibility for users but weaker central oversight.
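A minimal Python sketch of the RBAC idea described above; the roles and permissions are invented for illustration:

# Minimal RBAC sketch: roles map to permission sets, and an access check
# asks whether any of the user's roles grants the requested permission.
ROLE_PERMISSIONS = {
    "admin":    {"read", "write", "delete", "manage_users"},
    "employee": {"read", "write"},
    "auditor":  {"read"},
}

def is_allowed(user_roles: list[str], permission: str) -> bool:
    """Grant access if any assigned role carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

print(is_allowed(["employee"], "delete"))           # False
print(is_allowed(["employee", "admin"], "delete"))  # True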
Security-as-a-Service (SECaaS) refers to the delivery of security services through the cloud, allowing organizations
to outsource their security operations to third-party providers. This approach helps businesses to avoid the
complexities of managing security infrastructure in-house, benefiting from expert solutions that are continuously
updated and scalable. SECaaS is part of the broader trend of cloud-based service models like Software-as-a-Service
(SaaS), Infrastructure-as-a-Service (IaaS), and Platform-as-a-Service (PaaS).
Here are the key components of Security-as-a-Service:
1. Identity and Access Management (IAM):
Access Control: Provides solutions to authenticate and authorize users before granting access to applications or
systems. This includes technologies like Single Sign-On (SSO) and Multi-Factor Authentication (MFA)
User Authentication: Ensures secure user verification across multiple platforms, making it easier to control user
access remotely
2. Network Security:
Firewalls: Cloud-based firewalls filter incoming and outgoing network traffic to prevent unauthorized access
Intrusion Detection and Prevention Systems (IDS/IPS): Monitors network traffic for malicious activity and responds
by blocking suspicious actions
DDoS Protection: SECaaS providers often offer Distributed Denial of Service (DDoS) attack mitigation services to
protect against large-scale network flooding
3.Data Protection:
Encryption: SECaaS often includes tools for encrypting data in transit and at rest to ensure confidentiality and
prevent unauthorized access
Data Loss Prevention (DLP): Tools that monitor, detect, and prevent unauthorized data transfers or leakage,
enhancing data security
4. Security Monitoring and Incident Response:
24/7 Monitoring: Continuous monitoring of IT infrastructure to detect and respond to security threats in real-time
Incident Response and Remediation: SECaaS providers offer incident detection and automated responses,
minimizing the damage of potential security breaches
5. Compliance and Risk Management:
Compliance Support: SECaaS solutions often help organizations meet compliance requirements like GDPR,
HIPAA, PCI-DSS, and more, by providing security audits, encryption tools, and reporting capabilities
Risk Assessment: Providers assist businesses in evaluating security risks associated with their cloud environments,
offering tools to mitigate identified vulnerabilities
Examples of SECaaS Providers:
• Cisco Umbrella: A cloud security platform that provides threat intelligence and security features
such as DNS-layer protection, web filtering, and secure internet gateway
• Palo Alto Networks Prisma Cloud: A comprehensive cloud-native security platform that secures
applications, data, and networks across public and hybrid cloud environments
10(b). Compare the performance improvements offered by a CDN for static content vs. dynamic content.
Ans: Content Delivery Network(CDN):
• It means placing servers in various geographic locations that are optimized for delivering content quickly
and efficiently to users, reducing latency and ensuring better performance worldwide.
• A CDN is a highly distributed platform of servers that minimizes latency/delay in loading webpage content
by reducing the physical distance between the server and the user. Ex: Gmail should load within 1-3
seconds.
A Content Delivery Network (CDN) improves website performance by distributing content across multiple edge
servers located closer to end users. However, the performance enhancements vary depending on whether the content
is static or dynamic.

For static content, such as images, CSS files, JavaScript, and videos, a CDN provides significant performance
improvements by fully caching and serving these files from the nearest edge server. This reduces latency, minimizes
load on the origin server, and lowers bandwidth consumption. Since static files do not change frequently, they can
be efficiently stored at multiple locations worldwide, ensuring faster page load times and a seamless user
experience. For example, an e-commerce website can use a CDN to load product images quickly, reducing waiting
times for customers.

On the other hand, dynamic content, such as personalized dashboards, search results, and real-time updates, is more
complex to optimize with a CDN because it changes based on user interactions. Unlike static files, dynamic content
cannot be fully cached, as it requires real-time processing from the origin server. However, CDNs still enhance
performance through route optimization, dynamic edge caching, and persistent connections, reducing response times
and improving efficiency. For instance, a financial website displaying live stock prices benefits from dynamic
acceleration techniques, allowing users to receive updates faster while minimizing delays.
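The static/dynamic split is usually communicated to a CDN through HTTP Cache-Control headers. A minimal Flask (Python) sketch with illustrative routes and values, not any particular CDN's required configuration:

from flask import Flask, Response

app = Flask(__name__)

@app.route("/static/site.css")
def static_asset():
    # Static content: let the CDN edge cache it for a day.
    resp = Response("body { margin: 0; }", mimetype="text/css")
    resp.headers["Cache-Control"] = "public, max-age=86400"
    return resp

@app.route("/dashboard")
def dashboard():
    # Dynamic, per-user content: forbid shared caching; the CDN can still
    # help via route optimization and persistent connections.
    resp = Response("<h1>Your personalized dashboard</h1>", mimetype="text/html")
    resp.headers["Cache-Control"] = "private, no-store"
    return resp

if __name__ == "__main__":
    app.run()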

11(a). Compare two different cloud security design principles and discuss their respective impacts on securing
cloud-based applications and data.

Ans: SECURITY REFERENCE MODEL IN CC

Traditional data centres allow perimeterized (i.e., within organization’s own network boundary or perimeter) access
to computing resources.

• But the de-perimeterization (to open-up the interaction with outer network) and erosion of trust boundaries
that was happening in enterprise applications.

There are two important working groups that have contributed to developing a model to address security standards
in cloud computing. The following groups observed end-user requirements and, based on them, security techniques
were developed:

1. The Cloud Security Alliance (CSA)


2. Jericho Forum Group

The primary objectives of Cloud Security Alliance (CSA) include the following:
 Encourage to develop a common level of understanding between cloud service providers and service
consumers (end users) regarding the necessary security requirements.
 Developing best practices related to cloud computing security by promoting independent research in
the field.
 Initiate educational programs to spread awareness about proper usages of the services.

A cloud security certification program for users was launched by CSA in 2010 which was first of its kind. They
also offered a certification program for service providers known as ‘CSA Security, Trust and Assurance
Registry’ (STAR) for self-assessment of providers which can be conducted by several CSA-authorized third-
party agencies.

Jericho Forum Group

 The Jericho Forum was formed in 2004 as an international IT security association of companies, vendors,
government groups and academics whose mission was to improve the security of global open-network
computing environment.
 In 2003, a group of concerned CISOs (Chief Information Security Officers) formed a forum to discuss this
issue. This initiative later emerged as ‘Jericho Forum Group’. In 2004, Jericho Forum officially announced
its existence and set its office in UK.
 The term ‘de-perimeterization’ was first coined by a member of this group to describe the blurring network
boundaries. The forum concentrated to find a security solution for the collaborative computing
environment.
 In 2009, this forum proposed a security model for cloud computing that has been accepted as the global
standard. Later, The Jericho Forum became a part of another vendor-neutral industry consortium entitled as
‘The Open Group’ and declared its closure in 2013 as their objectives were fulfilled.
 The collaborations among different groups have contributed positively in development of cloud security
framework. For instance, on several occasions, Jericho Forum and the Cloud Security Alliance had worked
together to promote best practices for secured collaboration in the cloud.

Cloud Security Design Principles are key guidelines and best practices for creating secure cloud environments.
These principles help ensure that the cloud infrastructure is resilient, scalable, and protected against potential threats.
Below are some essential cloud security design principles:

1. Security by Design:

 Incorporate Security from the Start: Security should be integrated into the cloud architecture from
the beginning rather than being added after the system is built. This proactive approach helps
address vulnerabilities early
 Risk Assessment: Conduct a thorough risk analysis to understand potential threats to cloud
infrastructure and data, allowing the design of appropriate countermeasures

2. Least Privilege:

 Minimize Access: Ensure that users, applications, and systems are only granted the minimum
access necessary to perform their functions. This minimizes the risk of unauthorized access and
limits the potential impact of a breach
 Role-Based Access Control (RBAC): Use RBAC to enforce least privilege by assigning users only
the roles and permissions required for their tasks

3. Data Protection:

 Data Encryption: All sensitive data should be encrypted both at rest and in transit to prevent
unauthorized access. End-to-end encryption ensures data remains secure even if intercepted
 Data Classification: Implement policies to classify data based on its sensitivity and apply
appropriate security measures accordingly, such as encryption, access controls, and backup
policies
4. Resilience and Availability:

 Redundancy and Fault Tolerance: Cloud systems should be designed with redundancy in mind,
ensuring that services remain operational even in the event of hardware failures or other
disruptions. Multi-region deployment helps maintain availability by distributing services across
different geographical locations
 Disaster Recovery and Backup: Have a disaster recovery plan in place that includes automated
backups, replication, and the ability to restore services quickly in case of failure

5. Scalability:

 Elasticity: The cloud infrastructure should be able to scale up or down to meet varying demand
levels without compromising security. Auto-scaling solutions help ensure that the system can
handle peak loads without adding vulnerabilities
 Load Balancing: Distribute traffic across multiple resources to ensure system stability, security,
and performance during traffic spikes

11(b). Discuss how blockchain technology can enhance cloud security. Evaluate the potential benefits of
integrating blockchain with cloud services and outline use cases where this integration is particularly
advantageous.

Ans: Enhancing Cloud Security with Blockchain Technology

Blockchain technology, known for its decentralized, immutable, and transparent nature, can significantly enhance
cloud security by addressing vulnerabilities such as unauthorized access, data tampering, and central points of
failure. Integrating blockchain with cloud services provides a trustworthy and tamper-proof security framework,
making cloud computing more resilient to cyber threats.

Potential Benefits of Integrating Blockchain with Cloud Services

1. Enhanced Data Integrity

 Blockchain’s immutable ledger ensures that once data is recorded, it cannot be altered or
deleted, reducing the risk of data breaches and tampering.
 This is particularly useful in cloud storage, where data integrity is crucial.

2. Decentralized Security Model

 Unlike traditional cloud security, which relies on a central authority (e.g., the cloud
provider), blockchain uses decentralized consensus mechanisms to verify transactions
and access.
 This reduces the risk of single points of failure and insider threats.

3. Improved Access Control & Authentication

 Blockchain can replace traditional username-password authentication with decentralized
identity management using cryptographic keys and smart contracts.
 It enhances multi-factor authentication (MFA) and role-based access control (RBAC)
without depending on a central authority.

4. Auditability & Transparency

 Every transaction in a blockchain network is logged and timestamped, providing a
complete audit trail for cloud operations.
 This is beneficial for compliance with data protection regulations like GDPR, HIPAA,
and ISO 27001.

5. Enhanced Data Security with Smart Contracts

 Smart contracts can automate and enforce security policies, ensuring that cloud services
follow pre-defined security rules without human intervention.
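The immutability benefit comes from each block storing a cryptographic hash of its predecessor, so any retroactive edit breaks every later link. A minimal self-contained Python sketch of that idea (not the format of any real blockchain):

import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Build a tiny chain: each block stores the hash of the previous one.
chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]
for i, record in enumerate(["upload file A", "grant access to Bob"], start=1):
    chain.append({"index": i, "data": record, "prev": block_hash(chain[-1])})

def verify(chain: list[dict]) -> bool:
    """The chain is valid only if every stored 'prev' hash still matches."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(verify(chain))            # True
chain[1]["data"] = "tampered"   # any retroactive edit...
print(verify(chain))            # ...breaks verification: False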

Use Cases of Blockchain Integration with Cloud Services

1. Secure Cloud Storage

 Blockchain-based decentralized storage solutions like Storj, Sia, and Filecoin prevent
data manipulation by distributing data across multiple nodes, reducing dependency on a
single cloud provider.

2. Data Sharing & Secure Transactions

 Organizations can use blockchain to securely share data across cloud platforms without
exposing sensitive information to unauthorized entities.
 Example: Healthcare providers can store and share patient records securely on
blockchain-based cloud systems.

3. Identity & Access Management (IAM)

 Blockchain can provide decentralized identity verification, preventing identity theft and
unauthorized access in cloud-based applications.
 Example: Microsoft’s ION (Identity Overlay Network) uses blockchain for identity
authentication in cloud environments.

4. Cybersecurity & Fraud Prevention

 Blockchain enhances real-time threat detection in cloud security systems by verifying
transactions and blocking unauthorized changes.
 Example: Financial institutions can use blockchain-integrated cloud security to prevent
fraud in online banking.

5. Edge Computing & IoT Security

 Cloud-based IoT networks can use blockchain to secure data exchange between devices
and prevent unauthorized tampering.
 Example: Smart cities use blockchain to protect cloud-managed IoT infrastructure.

12(a). Explain the role of open standards in addressing portability and interoperability challenges in the
cloud. Discuss the advantages and potential drawbacks of relying on open standards.
Ans: Apart from security, privacy, and compliance issues, portability and interoperability are two other vital
challenges related to cloud computing.
 Portability: It is the ability to run components or systems developed for one cloud provider’s
environment on some other cloud provider’s environment. Portability concerns are highest when
these two environments are from two different vendors.
 Interoperability is about communicating between multiple clouds. Clouds can be of different
deployments (public, private and hybrid) and/or from different vendors. Diverse systems should
be able to understand one another in terms of application, data formats, configurations, service
interfaces etc.
Advantages of Relying on Open Standards

1. Avoids Vendor Lock-in: Businesses can switch between cloud providers without extensive reconfiguration,
ensuring flexibility.
2. Enhances Cross-Platform Compatibility: Open standards enable different cloud platforms to interact
smoothly, improving hybrid and multi-cloud strategies.
3. Encourages Innovation and Collaboration: Open standards foster community-driven development, allowing
continuous improvement and updates.
4. Improves Security and Compliance: Standardized security protocols ensure consistent encryption,
authentication, and data protection across clouds.
5. Cost-Effectiveness: Organizations reduce costs by avoiding proprietary solutions that require extensive
modifications.

Potential Drawbacks of Relying on Open Standards

1. Slower Adoption and Implementation: Open standards require industry-wide agreement, leading to delays
in adoption and updates.
2. Lack of Uniformity Among Cloud Providers: Some vendors may modify open standards to fit their
platforms, causing minor compatibility issues.
3. Security Risks from Open-Source Vulnerabilities: Since open standards are publicly accessible, they may
become targets for security exploits if not properly maintained.
4. Limited Advanced Features: Open standards may lag behind proprietary solutions in offering cutting-edge
features tailored to specific cloud providers.

12(b). Compare the portability and interoperability scenarios for Infrastructure as a Service (IaaS) and
Platform as a Service (PaaS) cloud models.

Ans: Portability and interoperability are critical factors in cloud computing, influencing how easily applications and
workloads can be transferred or integrated across different cloud platforms. In Infrastructure as a Service (IaaS),
portability refers to the ability to migrate virtual machines, storage, and networking configurations from one cloud
provider to another. However, challenges arise due to differences in virtualization formats (e.g., AWS AMI vs.
Azure VHD), networking settings, and security policies. To address these challenges, organizations often rely on
open virtualization standards like Open Virtualization Format (OVF), containerization technologies such as Docker
and Kubernetes, and cloud-agnostic automation tools like Terraform. These solutions help standardize infrastructure
deployment across multiple cloud environments.

In contrast, Platform as a Service (PaaS) presents greater portability challenges due to its reliance on provider-
specific development environments, APIs, and runtime frameworks. Applications built on PaaS platforms like
Google App Engine, AWS Lambda, or Microsoft Azure Functions often require modifications when migrating to
another provider due to differences in programming languages, database services, and middleware configurations.
Standardized development frameworks such as Cloud Foundry and the use of API abstraction layers can help
mitigate these dependencies, allowing for greater flexibility in moving applications across PaaS environments.
However, the lack of universally accepted standards in PaaS remains a significant hurdle to achieving full
portability.

Interoperability in IaaS is relatively easier to achieve compared to PaaS because infrastructure components like
VMs, networking, and storage can leverage standardized networking protocols, cross-cloud authentication
mechanisms, and open-source cloud platforms like OpenStack. However, vendor-specific APIs for resource
management still pose integration challenges. On the other hand, interoperability in PaaS is more complex due to
proprietary cloud services that do not always support seamless integration with other platforms. For instance,
applications built using Google Firebase may not directly integrate with AWS services without additional
middleware or API gateways. Standardization efforts like OpenAPI and Red Hat OpenShift aim to bridge these
gaps, but differences in cloud architectures still limit true cross-platform interoperability.

Overall, IaaS offers better portability and interoperability due to its lower dependency on vendor-specific services,
whereas PaaS poses significant challenges due to tightly integrated application development environments. While
solutions such as containerization, multi-cloud orchestration, and middleware technologies can improve cloud
portability and interoperability, complete standardization remains an ongoing challenge. Organizations looking to
avoid cloud lock-in must carefully plan their cloud strategies by adopting open standards and cross-platform
compatible tools.

13(a). Evaluate the impact of improved portability and interoperability on the scalability of cloud services.
Provide examples to support your evaluation.

Ans: Improved portability and interoperability significantly enhance the scalability of cloud services by allowing
organizations to efficiently distribute workloads, optimize resource utilization, and expand their operations across
multiple cloud environments. Portability ensures that applications, data, and services can be seamlessly moved
across different cloud providers, while interoperability enables diverse cloud platforms to communicate and function
together without compatibility issues. These improvements drive greater flexibility, reduce vendor lock-in, and
enhance overall cloud scalability.

One of the primary impacts of improved portability is the ability to implement a multi-cloud or hybrid-cloud
strategy. Organizations can dynamically scale their applications by distributing workloads across multiple cloud
providers, ensuring they meet demand without being restricted to a single vendor. For instance, Netflix utilizes AWS
as its primary cloud provider but also integrates Google Cloud services for specialized AI and analytics workloads.
This flexibility enables Netflix to scale its streaming services worldwide without being dependent on a single cloud
infrastructure.

Interoperability, on the other hand, enhances cloud scalability by enabling seamless integration of different cloud
services, databases, and APIs. For example, modern container orchestration platforms like Kubernetes allow
businesses to deploy applications across various cloud environments without modifications. This means companies
can scale their applications horizontally by adding resources from different cloud providers as needed. A real-world
example is Airbnb, which uses Kubernetes to ensure that its microservices-based architecture can scale globally
across multiple cloud regions without service disruptions.

Moreover, improved portability and interoperability facilitate auto-scaling and resource optimization. Cloud-native
technologies such as Terraform for infrastructure-as-code and OpenShift for cross-cloud PaaS deployment allow
businesses to scale their infrastructure dynamically based on traffic spikes. For example, Spotify leverages a multi-
cloud strategy where workloads shift between Google Cloud and AWS based on performance and cost efficiency,
ensuring optimal scalability during peak streaming hours.

13(b). Discuss the role of machine imaging in facilitating disaster recovery and business continuity in the
cloud. Discuss considerations for creating robust machine images for such scenarios.

Ans: Machine imaging plays a crucial role in disaster recovery (DR) and business continuity (BC) by enabling
organizations to quickly restore systems and applications in case of failures, cyberattacks, or natural disasters. A
machine image is a snapshot of a virtual machine (VM), including its operating system, configurations, applications,
and dependencies. Cloud providers like AWS (Amazon Machine Images - AMIs), Azure (Managed Images), and
Google Cloud (Custom Images) offer machine imaging services that help businesses maintain operational resilience.

In disaster recovery scenarios, machine images enable rapid system restoration by allowing organizations to spin up
identical instances in a different cloud region or availability zone. This minimizes downtime and ensures business
continuity. For example, if a critical application running on AWS EC2 fails due to a hardware failure, a pre-
configured AMI can be used to instantly deploy a new instance, reducing recovery time. Additionally, machine
images support automated scaling and redundancy, ensuring that services remain available even in high-traffic or
failure-prone conditions.
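A minimal Python (boto3) sketch of this DR pattern: bake an AMI from a running instance and copy it to a second region; the instance ID and names are placeholders:

import boto3

# Hypothetical sketch: create an AMI from a running instance, then copy it
# to another region so a replacement can be launched there after a failure.
source_region, dr_region = "us-east-1", "us-west-2"

ec2 = boto3.client("ec2", region_name=source_region)
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
    Name="app-server-golden-2024-01",   # placeholder image name
    NoReboot=True,                      # avoid downtime during the snapshot
)

# Copy the image into the disaster-recovery region.
ec2_dr = boto3.client("ec2", region_name=dr_region)
copy = ec2_dr.copy_image(
    Name="app-server-golden-2024-01-dr",
    SourceImageId=image["ImageId"],
    SourceRegion=source_region,
)
print("DR image:", copy["ImageId"])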

Considerations for Creating Robust Machine Images for Disaster Recovery

 Up-to-Date System Configurations: Ensure the latest OS updates, security patches, and software
versions are included to prevent vulnerabilities.
 Minimal Image Size and Dependencies: Optimize images by removing unnecessary files and
software to reduce deployment time and storage costs.
 Security and Compliance: Implement encryption for sensitive data, disable unused ports, and
harden security configurations before creating the image.
 Multi-Region Availability: Store machine images across multiple cloud regions or availability
zones to enable quick recovery in case of local failures.
 Automated Backup and Versioning: Regularly update and maintain multiple versions of machine
images to quickly roll back to a stable version if needed.
 Testing and Validation: Periodically test the deployment and functionality of machine images to
ensure reliable disaster recovery procedures.

14(a). Describe the relationship between SOA and cloud computing. How does SOA enhance the capabilities
of cloud-based systems?
Ans: Service-Oriented Architecture (SOA) is a stage in the evolution of application development and/or integration. It
defines a way to make software components reusable through interfaces.

SOA and Cloud Computing are complementary technologies, each enhancing the value and scalability of the other.

 Integration with Cloud Services: Cloud computing platforms often use service-oriented principles
to offer their services. For instance, Platform as a Service (PaaS) and Software as a Service (SaaS)
are examples of cloud solutions that can be integrated into an SOA. Cloud providers offer scalable
and flexible services that can be consumed by other applications, aligning perfectly with the SOA
model of modular service consumption.
 Cost-Effectiveness: Cloud services typically offer a pay-as-you-go model, which, combined with
the reusable nature of SOA services, helps reduce costs. This allows organizations to only pay for
the services they consume without the need for heavy upfront investments in infrastructure.
 Scalability and Elasticity: Cloud environments offer elasticity, meaning services can scale up or
down based on demand. SOA services, being modular and independent, can easily scale in a cloud
environment without requiring re-engineering of the entire system.
 Simplified Management and Automation: Cloud platforms often come with built-in automation for
provisioning, scaling, and managing services, which complements the service-based structure of
SOA. For example, cloud orchestration tools can manage the interaction between SOA services
and other cloud-based services.
 Enhanced Flexibility: Cloud computing, when paired with SOA, allows businesses to quickly
implement, update, or retire services. Cloud services provide on-demand resources, making it easy
to develop and deploy new functionalities within an existing SOA framework.

14(b). Explain the concept of data models in the context of database technology. How do data models
influence the organization of information in databases?

Ans: Data Models

A data model is a conceptual representation of the structure, organization, and relationships of data in a database. It
provides a framework for designing and managing data and is crucial in ensuring consistency, efficiency, and clarity
in handling information. Data models help in understanding how data elements relate to one another and how they
can be stored, processed, and manipulated.

Types of Data Models:

1. Hierarchical Data Model:

 This model organizes data in a tree-like structure, where each record has a single parent
and can have multiple children (known as a parent-child relationship).
 Example: XML is a real-world example of a hierarchical model.
 Advantages: Simple to design, easy to implement, and can efficiently represent a "one-to-
many" relationship.
 Disadvantages: It can be rigid and difficult to adapt to changing requirements due to its
fixed structure.
2. Network Data Model:

 Similar to the hierarchical model but allows more complex relationships by supporting
many-to-many relationships, where a child can have multiple parents.
 Example: IDMS (Integrated Database Management System) uses a network model.
 Advantages: More flexible than the hierarchical model and supports a richer set of
relationships.
 Disadvantages: More complex to design and maintain, and queries can be slower
compared to relational models.

3. Relational Data Model:

 In this model, data is organized into tables (or relations), with rows representing records
and columns representing attributes. It is the most widely used model today, and SQL
(Structured Query Language) is typically used to manage relational databases.
 Example: MySQL, PostgreSQL, Oracle DB use relational models.
 Advantages: Highly flexible, supports complex queries, and maintains data integrity with
ACID properties (Atomicity, Consistency, Isolation, Durability).
 Disadvantages: Can be inefficient for very large-scale datasets or very complex queries (a
sketch contrasting this model with the document model appears after this list).

4. Object-Oriented Data Model:

 This model combines object-oriented programming with databases, allowing for the
storage of complex data types like images, videos, and other multimedia, along with
traditional data.
 Example: ObjectDB is an example of a database that implements an object-oriented
model.
 Advantages: Handles complex data types well, and offers better modeling for certain
types of applications.
 Disadvantages: More complex to design and can lead to performance issues for certain
types of queries.

5. Entity-Relationship (ER) Model:

 The ER model is used to visually represent the relationships between different data
entities in a system. It is often used in the early stages of database design.
 Example: Tools like Microsoft Visio or Lucidchart can be used to create ER diagrams.
 Advantages: Helps visualize relationships, which aids in clearer database design.
 Disadvantages: It is more of a theoretical design tool and lacks the detailed structure
needed for actual implementation.

6. Document Data Model (NoSQL):

 This is a NoSQL model that stores data as documents, typically in JSON, BSON, or
XML format. It is suitable for semi-structured data where data formats may change over
time.
 Example: MongoDB and CouchDB are NoSQL databases using this model.
 Advantages: Flexible, can handle unstructured or semi-structured data well, and scales
horizontally.
 Disadvantages: Lacks the strict data consistency guarantees of relational models.
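
The following minimal sketch contrasts the relational and document models described above. It uses SQLite from the Python standard library to stand in for a relational database; the table schema and the sample document are illustrative.

import json
import sqlite3

# Relational model: fixed schema, rows and columns, enforced by the database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
)
conn.execute(
    "INSERT INTO customers (name, email) VALUES (?, ?)",
    ("Alice", "alice@example.com"),
)
row = conn.execute("SELECT name, email FROM customers").fetchone()
print(row)  # ('Alice', 'alice@example.com')

# Document model: schema-less JSON document; fields can vary per record.
customer_doc = {
    "name": "Alice",
    "email": "alice@example.com",
    "preferences": {"newsletter": True},  # nested data, no join table needed
}
print(json.dumps(customer_doc))

The relational row must match the declared columns, while the document can gain or drop fields per record; this is the trade-off between integrity guarantees and schema flexibility discussed above.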

15(a). Compare the benefits of SOA with traditional monolithic architecture in terms of adaptability to
changing business requirements. Discuss specific advantages of SOA in this context.
Ans: Service-Oriented Architecture (SOA) is a stage in the evolution of application development and integration
that makes software components reusable through well-defined interfaces.
Formally, SOA is an architectural approach in which applications make use of services available in the network. In
this architecture, applications are composed of services invoked through network calls over the internet, using
common communication standards to speed up and streamline service integration. Each service in SOA is a complete
business function in itself, published so that developers can easily assemble their applications from those
services. By contrast, a monolithic architecture bundles all business functions into a single deployable unit, so
any change to one function requires rebuilding and redeploying the whole application. A sketch of composing an
application from independent services follows the list below.

Advantages of SOA over a monolithic architecture for adapting to changing business requirements:
1. Service reusability: In SOA, applications are assembled from existing services, so the same services can be
reused across many applications.
2. Easy maintenance: Because services are independent of each other, they can be updated and modified without
affecting other services, whereas a change in a monolith can ripple through the entire codebase.
3. Platform independence: SOA allows a complex application to be built by combining services picked from
different sources, independent of the platform.
4. Availability: SOA facilities are readily available to anyone on request.
5. Reliability: SOA applications are more reliable because it is easier to debug small services than a single
large codebase.
6. Scalability: Services can run on different servers within an environment, which increases scalability; a
monolith must be scaled as a whole.
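
A minimal sketch of the composition described above follows. The two service URLs are hypothetical; each underlying service can be maintained, scaled, and reused independently while this client keeps consuming the same interfaces.

import json
import urllib.request

def call_service(url):
    """Consume a published service through its HTTP interface."""
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

def place_order(customer_id, item_id):
    # The application is assembled from reusable network services rather
    # than one monolithic codebase; each call crosses a defined interface.
    customer = call_service(f"http://customers.example.com/customers/{customer_id}")
    stock = call_service(f"http://inventory.example.com/items/{item_id}")
    return {"customer": customer.get("name"), "in_stock": stock.get("available")}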

15(b). Explain relational DBMS in the cloud with respect to non-relational DBMS, with an illustration.

Ans: The choice of data model depends on the nature of the data and the use case:

1. Relational models are ideal for structured data that fits well in tables with defined relationships (e.g.,
customer databases, financial systems).
2. NoSQL models like document, key-value, and graph databases are better suited for dynamic, unstructured
data that needs to scale horizontally or represent complex relationships.

Relational DBMS in Cloud

A Relational Database Management System (RDBMS) in the cloud refers to the traditional relational database
systems (such as MySQL, PostgreSQL, and SQL Server) hosted and managed in the cloud environment. These
databases store data in tables with rows and columns, and enforce ACID properties to ensure reliable transaction
processing.

Benefits:

 Managed Services: Providers like AWS, Azure, and Google Cloud offer fully managed relational
databases, handling tasks like scaling, backups, and patching automatically.
 Integration with Cloud Ecosystem: Cloud-hosted relational databases can integrate seamlessly
with other cloud services like analytics tools, AI models, and storage.
 Cost Control: Cloud-based RDBMS often offers flexible pricing models based on storage and
usage, reducing upfront costs compared to traditional database systems.

Popular Examples:

 Amazon RDS (Supports MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server)
 Microsoft Azure SQL Database
 Google Cloud SQL

Non-relational DBMS in the Cloud

Non-relational (NoSQL) databases are designed to handle unstructured or semi-structured data, offering flexibility in
terms of schema design and scalability. They are particularly useful for applications requiring high-speed data
retrieval, large volumes of diverse data types, or horizontal scalability.

Key Features:
 Flexible Schema: NoSQL databases do not require predefined schemas, which makes them ideal
for handling evolving data structures.
 Scalability: These databases are designed to scale horizontally, allowing data to be distributed
across many servers.
 Variety of Data Models: NoSQL databases come in several models, including document, key-
value, column-family, and graph databases.

Types of NoSQL Databases:

 Document Stores: Store data in JSON or BSON format. Example: MongoDB, Couchbase.
 Key-Value Stores: Store data as key-value pairs. Example: Redis, DynamoDB.
 Column-Family Stores: Organize data in columns rather than rows. Example: Apache Cassandra,
HBase.
 Graph Databases: Store data in graph structures (nodes, edges). Example: Neo4j, Amazon
Neptune.

Popular Examples:

 MongoDB Atlas (Document-Based)
 Amazon DynamoDB (Key-Value)
 Google Cloud Bigtable (Column-Family)
 Neo4j Aura (Graph)
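
As an illustration, the minimal sketch below writes the same record to a relational store and to a managed cloud key-value store. SQLite stands in for a cloud RDBMS here, and the DynamoDB table name, region, and credentials are assumptions (the table is presumed to already exist).

import sqlite3
from decimal import Decimal

import boto3

# Relational: fixed schema, SQL, and an ACID transaction.
rdbms = sqlite3.connect(":memory:")
rdbms.execute("CREATE TABLE orders (order_id TEXT PRIMARY KEY, amount REAL)")
with rdbms:  # the insert commits atomically
    rdbms.execute("INSERT INTO orders VALUES (?, ?)", ("o-1001", 49.99))

# Non-relational: schema-less item in a managed key-value store (DynamoDB).
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
orders = dynamodb.Table("orders")  # hypothetical pre-created table
orders.put_item(Item={
    "order_id": "o-1001",
    "amount": Decimal("49.99"),  # boto3 represents DynamoDB numbers as Decimal
    "tags": ["priority"],        # extra attribute needs no schema migration
})

The relational insert must conform to the declared table schema, while the DynamoDB item can carry additional attributes per record, illustrating the flexible-schema and horizontal-scaling trade-offs discussed above.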
