CC EndSem - 2023 Solution
SPPU_TE_BE_COMP
| Feature | Grid Computing | Cloud Computing |
|---|---|---|
| Cost Model | | Pay-as-you-go or subscription-based pricing model. |
| Main Use Cases | Scientific research, simulations, large-scale computations. | Web hosting, data storage, app development, AI/ML services, etc. |
| Fault Tolerance | Lower fault tolerance; failure in one node can affect the task. | High fault tolerance; cloud providers offer redundancy and backup. |
| Example Technologies | BOINC, Globus Toolkit. | AWS, Microsoft Azure, Google Cloud. |
| Ownership | Resources often belong to different organizations or individuals. | Resources are owned and managed by cloud service providers. |
Definition of Virtualization:
Virtualization is the technique of creating a virtual (software-based) version of computing resources such as servers, storage, networks, or operating systems, allowing multiple virtual machines to run on and share a single physical machine.
Advantages of Virtualization:
1. Better Resource Utilization:
o Multiple VMs share one physical machine, so hardware is not left idle.
2. Cost Savings:
o Fewer physical servers are needed, reducing hardware, power, and cooling costs.
3. Isolation:
o Each VM runs independently; a crash in one VM does not affect the others.
4. Easy Backup and Migration:
o VMs can be snapshotted, cloned, and moved between hosts with little downtime.
5. Scalability:
o New VMs can be provisioned quickly as demand grows.
Disadvantages of Virtualization:
1. Performance Overhead:
o Virtual machines may run slower than physical machines due to resource
sharing.
2. Initial Setup Cost:
o Licensing for hypervisors and high-end servers can be costly initially.
3. Security Risks:
o If a hypervisor is compromised, all VMs under it are at risk.
4. Complex Management:
o Requires skilled administrators to manage the virtual infrastructure.
5. Limited Hardware Access:
o VMs may not support certain hardware features like GPUs efficiently without
special configuration.
6. Software Licensing Issues:
o Some software may have licensing issues or restrictions in virtual environments.
OR
Q2) a) Describe virtual clustering in cloud computing? [9]
Definition:
Virtual Clustering in cloud computing refers to the grouping of multiple virtual machines
(VMs) across one or more physical servers to work together as a single logical unit (or cluster).
These VMs can cooperate to perform tasks, balance load, and ensure high availability—just
like a physical cluster, but entirely within a virtualized environment.
Key Concepts:
Example:
A company runs a virtual Hadoop cluster in the cloud with 10 VMs to process large datasets. If
one VM crashes, another can be spun up automatically without halting the job.
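The automatic-replacement behaviour in this example can be sketched in a few lines. This is a toy simulation, not a real cloud API; the names (`VirtualCluster`, `fail`, `replace_failed`) are invented for illustration.

```python
# Toy sketch of virtual-cluster failover: when a worker VM fails,
# a replacement is spun up so the cluster keeps its size and the job continues.
class VirtualCluster:
    def __init__(self, size):
        # Start `size` worker VMs, named vm-0 .. vm-(size-1).
        self.next_id = size
        self.vms = {f"vm-{i}": "running" for i in range(size)}

    def fail(self, vm):
        # Simulate a crash of one VM, then trigger replacement.
        self.vms[vm] = "failed"
        self.replace_failed()

    def replace_failed(self):
        # Remove failed VMs and spin up replacements to keep the size constant.
        for v in [v for v, s in self.vms.items() if s == "failed"]:
            del self.vms[v]
            self.vms[f"vm-{self.next_id}"] = "running"
            self.next_id += 1

    def running(self):
        return sorted(v for v, s in self.vms.items() if s == "running")

cluster = VirtualCluster(3)
cluster.fail("vm-1")           # vm-1 crashes; vm-3 is spun up automatically
print(len(cluster.running()))  # 3
```

Real clusters achieve the same effect with health checks and auto-scaling groups rather than an in-process loop.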
A hypervisor, also known as a Virtual Machine Monitor (VMM), is a software layer that
enables the creation and management of multiple virtual machines (VMs) on a single physical
host.
Role of Hypervisor in Cloud Computing:
1. Virtualization Enabler:
o Allows multiple VMs to run on a single hardware platform.
2. Resource Isolation:
o Each VM operates independently; resources like CPU and RAM are allocated securely.
3. Dynamic Resource Allocation:
o Resources can be reassigned between VMs based on load.
4. Improved Utilization:
o Maximizes hardware usage by hosting multiple systems.
5. Scalability:
o Easily add/remove virtual machines on demand.
6. Foundation for IaaS (Infrastructure as a Service):
o Services like AWS EC2 or Azure VMs are built upon hypervisors.
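The dynamic resource allocation idea in point 3 can be sketched as a simple proportional split of RAM by load. This is only an illustration of the policy, not how any real hypervisor implements it; `allocate_ram` is an invented helper.

```python
# Sketch of a hypervisor reassigning RAM between VMs based on load:
# each VM receives a share of total RAM proportional to its current load.
def allocate_ram(total_ram_gb, loads):
    """Split total_ram_gb across VMs proportionally to their load."""
    total_load = sum(loads.values())
    return {vm: round(total_ram_gb * load / total_load, 1)
            for vm, load in loads.items()}

# Two VMs share 16 GB; vm-a is three times busier than vm-b.
alloc = allocate_ram(16, {"vm-a": 3, "vm-b": 1})
print(alloc)  # {'vm-a': 12.0, 'vm-b': 4.0}
```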
Types of Hypervisors:
| Feature | Type 1 Hypervisor (Bare Metal) | Type 2 Hypervisor (Hosted) |
|---|---|---|
| Installation Location | Runs directly on physical hardware. | Runs on top of a host operating system. |
| Performance | High performance; less overhead. | Slower due to host OS layer. |
| Examples | VMware ESXi, Microsoft Hyper-V, Xen. | Oracle VirtualBox, VMware Workstation. |
| Use Case | Used in data centers and enterprise environments. | Used for development, testing, or personal use. |
| Security | More secure; fewer layers = smaller attack surface. | Less secure; relies on host OS security. |
| Hardware Access | Direct access to hardware. | Limited access via host OS. |
Summary:
1. Education
2. Healthcare
3. Business/Enterprise
4. Entertainment & Media
5. Banking & Finance
6. E-Governance
7. Software Development
8. Scientific Research
9. Storage and Backup
10. Social Networking
(i) Education:
• Cloud platforms offer e-learning tools, online classrooms, and digital libraries.
• Example: Google Classroom, Coursera, and Moodle Cloud use cloud to manage online
learning content.
• Benefits:
o Global access to resources
o Reduced infrastructure cost for schools/universities
o Real-time collaboration between teachers and students
(ii) Healthcare:
• Cloud stores Electronic Health Records (EHRs), offers telemedicine platforms, and
enables real-time health monitoring.
• Example: IBM Watson Health, Microsoft Azure for Healthcare.
• Benefits:
o Centralized access to patient records
o Improves diagnostics through AI tools
o Remote health monitoring using IoT + cloud
Summary:
Cloud computing transforms many sectors by providing on-demand, scalable, and cost-effective services, improving productivity and innovation across industries.
1. Compute Services:
• Amazon EC2 (Elastic Compute Cloud):
o Provides virtual servers (instances) for running applications.
• AWS Lambda:
o Runs code without provisioning servers (serverless).
• Elastic Beanstalk:
o PaaS for deploying web applications easily.
2. Storage Services:
3. Database Services:
6. Developer Tools:
OR
Q4) a) How the Amazon simple storage service (S3) works? Explain with
suitable diagram? [8]
Amazon S3 (Simple Storage Service) is an object storage service that enables users to store,
retrieve, and manage any amount of data at any time from anywhere on the web. It is highly
durable, scalable, and cost-effective.
1. Buckets:
o Containers for storing data in S3.
o Each user account can create multiple buckets (globally unique names).
2. Objects:
o The actual data (files) stored in buckets.
o Each object consists of:
- Data
- Metadata
- Unique key (name)
3. Keys:
o The unique identifier for each object within a bucket.
4. Storage Classes:
o Standard, Intelligent-Tiering, Glacier, etc., based on access frequency and cost.
5. Access Control:
o IAM policies and bucket policies control who can access what.
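The concepts above can be illustrated with a toy in-memory model. Real access goes through an AWS SDK such as boto3; `ToyS3` here only mirrors the semantics: globally unique bucket names, objects addressed by a unique key within a bucket, and per-object metadata.

```python
# Toy in-memory model of S3's bucket/object/key concepts (illustrative only).
class ToyS3:
    def __init__(self):
        self.buckets = {}  # bucket name -> {key: (data, metadata)}

    def create_bucket(self, name):
        # Bucket names must be unique (S3 enforces this globally).
        if name in self.buckets:
            raise ValueError("bucket names must be unique")
        self.buckets[name] = {}

    def put_object(self, bucket, key, data, metadata=None):
        # The key uniquely identifies the object within its bucket.
        self.buckets[bucket][key] = (data, metadata or {})

    def get_object(self, bucket, key):
        return self.buckets[bucket][key]

s3 = ToyS3()
s3.create_bucket("my-unique-bucket")
s3.put_object("my-unique-bucket", "reports/2023.txt", b"hello",
              {"content-type": "text/plain"})
data, meta = s3.get_object("my-unique-bucket", "reports/2023.txt")
print(data)  # b'hello'
```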
Example (connecting to an EC2 instance over SSH):
ssh -i your-key.pem ec2-user@<Public-IP>
Q5) a) What are the different types of testing in cloud computing? Explain briefly?
[9]
1. Functional Testing:
This ensures that the cloud-based application performs all required functions correctly. It
tests user interactions, APIs, and system workflows to ensure that all features behave as
expected in different cloud environments.
2. Performance Testing:
It checks how the system performs under varying load conditions. This includes:
o Load Testing: To test normal user traffic.
o Stress Testing: To test the system under high load or peak usage.
o Soak Testing: To check the application’s performance over extended periods.
3. Security Testing:
This type of testing identifies vulnerabilities such as data leaks, weak authentication,
and unauthorized access. It is crucial in cloud systems where sensitive data is often
stored offsite.
4. Compatibility Testing:
Ensures the application works across various browsers, operating systems, and devices.
This is especially important in cloud environments due to the wide range of user
configurations.
5. Scalability Testing:
Evaluates the system's ability to scale up or down based on demand. This ensures
performance is maintained during spikes in usage.
6. Disaster Recovery Testing:
Validates the cloud system’s ability to recover from failures such as crashes or outages,
ensuring business continuity and data integrity.
7. Interoperability Testing:
Ensures that the application can work with other systems, services, and cloud platforms,
especially in hybrid or multi-cloud setups.
8. Multi-Tenancy Testing:
Checks that each user or tenant in a shared cloud environment has isolated data and
access, ensuring privacy and security.
9. Regression Testing:
Confirms that recent updates or changes in the application do not introduce new bugs or
break existing functionalities.
b) Explain the different types of security risk involved in cloud computing? [9]
• Data Breaches:
Occur when unauthorized users gain access to sensitive data. This can result from
misconfigured storage, weak access controls, or vulnerabilities in the system.
• Data Loss:
Happens due to accidental deletion, system crashes, or malware. Without proper backups, this
can lead to permanent loss of important information.
• Insecure APIs:
Cloud services depend on APIs for communication and control. Poorly secured APIs can
expose systems to data manipulation, unauthorized access, or denial of service.
• Insider Threats:
Malicious or careless employees with access to systems can intentionally or accidentally cause
data leaks or service outages. These are hard to detect without proper monitoring.
• Denial of Service (DoS) Attacks:
Attackers flood cloud servers with excessive traffic, causing downtime and preventing
legitimate users from accessing services.
• Compliance Violations:
Cloud providers and users must comply with regulations like GDPR or HIPAA. Non-
compliance due to improper data handling can lead to legal issues and penalties.
OR
Cloud security services are designed to protect the data, applications, and infrastructure in a
cloud environment. These services help ensure the confidentiality, integrity, and availability of
cloud-based resources. Below are the key cloud security services:
Content Level Security (CLS) is a security mechanism designed to protect data at the
application or content level. It focuses on securing the actual content of the application or
service, rather than just the network or infrastructure. CLS ensures that sensitive data within
the content is protected from unauthorized access, modification, and tampering. Below are the
key uses and importance of CLS:
Docker uses a client-server architecture to manage containers, where the client sends requests to the server to carry out various operations related to containers. Here's how the Docker client-server architecture works:
1. Docker Client:
o The Docker client is the interface through which users interact with Docker. It is
the command-line interface (CLI) or a GUI that accepts commands from the
user and sends them to the Docker server (also known as the Docker daemon).
o Common commands include docker run, docker build, docker pull, and docker push.
o The client can run on the same system as the Docker daemon or on a remote system.
2. Docker Daemon (Server):
o The Docker daemon (also called dockerd) is the core component of Docker that manages container creation, execution, and orchestration.
o It runs as a background process on the host machine, accepting requests from Docker clients.
o The daemon is responsible for building images, running containers, managing the container lifecycle, and interacting with container registries (like Docker Hub).
o It communicates with the Docker client via REST API and responds with status messages, error codes, or container outputs.
3. Docker Registries:
o Docker registries are storage repositories where Docker images are stored and shared. The default registry is Docker Hub, but custom registries can be used.
o The Docker daemon can pull images from a registry or push created images to it. The interaction between the registry and daemon is integral to ensuring that Docker containers are properly deployed and shared.
4. Docker Engine:
o The Docker engine is the combined software that includes both the Docker
daemon and the Docker client. It is responsible for creating and managing
containers and images. The client sends commands to the daemon, and the
daemon executes those commands, such as pulling an image from a registry,
building a container, or running a container.
5. Communication:
o Communication between the Docker client and daemon happens over
HTTP/REST APIs. For instance, when the client requests to start a container, the
client sends the request to the daemon, and the daemon handles the lifecycle of
the container.
o Docker also allows remote management via the Docker API, which can be
accessed by sending HTTP requests to the Docker daemon's REST API.
6. Container Lifecycle:
o The Docker daemon controls the entire container lifecycle, including creating,
starting, stopping, and deleting containers. The client initiates actions by
providing commands such as docker run or docker stop, which the daemon executes
in the background.
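The container lifecycle in point 6 can be sketched as a toy state machine. `ToyDaemon` and `Container` are invented stand-ins for what dockerd does internally when the client sends docker run / docker stop / docker rm; real Docker is far more involved.

```python
# Toy model of the daemon-managed container lifecycle:
# created -> running -> stopped -> deleted.
class Container:
    def __init__(self, image):
        self.image = image
        self.state = "created"

    def start(self):
        self.state = "running"

    def stop(self):
        self.state = "stopped"

class ToyDaemon:
    """Stands in for dockerd: owns all containers, executes client requests."""
    def __init__(self):
        self.containers = {}

    def run(self, name, image):       # like `docker run`
        c = Container(image)
        c.start()
        self.containers[name] = c
        return c

    def stop(self, name):             # like `docker stop`
        self.containers[name].stop()

    def rm(self, name):               # like `docker rm` (container must be stopped)
        assert self.containers[name].state == "stopped"
        del self.containers[name]

daemon = ToyDaemon()
daemon.run("web", "nginx")
print(daemon.containers["web"].state)  # running
daemon.stop("web")
daemon.rm("web")
print(daemon.containers)               # {}
```

In real Docker, each of these client commands becomes an HTTP request to the daemon's REST API, which then performs the state transition.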
Mobile Cloud Computing (MCC) refers to the integration of cloud computing and mobile
devices to provide computing resources, data storage, and services to mobile users through the
cloud. It offers scalable resources that mobile devices can access remotely, enabling a wide
range of functionalities. Here’s a detailed explanation:
| Feature | Distributed Cloud Computing | Edge Computing |
|---|---|---|
| Use Cases | Typically used in scenarios where global distribution and redundancy are needed, such as content delivery networks (CDNs) and large-scale cloud computing services. | Commonly used in real-time data processing scenarios like autonomous vehicles, industrial IoT, smart cities, and manufacturing. |
| Control & Management | Managed by centralized cloud providers like AWS, Azure, or Google Cloud, which control the resources across multiple locations. | Decentralized processing, where the devices or edge servers handle local computation and only relevant data is sent to the cloud or central server. |
| Latency | Typically higher latency than edge computing because data still needs to travel to distributed cloud nodes for processing. | Designed to minimize latency, with most data being processed at the edge of the network, close to the data source. |
| Bandwidth Consumption | Since data is processed and transmitted across the cloud, there can be high bandwidth consumption for data transfer between distributed cloud locations. | By processing data locally at the edge, bandwidth usage is reduced as only processed results or small amounts of relevant data are sent to the central cloud. |
| Complexity | More complex in terms of infrastructure because it requires managing multiple cloud resources in different geographical regions. | Simpler architecture with edge devices handling processing, reducing the need for centralized cloud infrastructure. |
| Example | Cloud services that are distributed across different regions to provide redundancy, scalability, and better performance (e.g., AWS Global Infrastructure). | IoT sensors processing data locally in smart devices, autonomous vehicles making decisions in real time, or industrial machines analyzing data without relying on the cloud. |
OR
Q8) a)
Both Distributed Cloud Computing and Edge Computing involve the distribution of computing
resources across different locations. However, their core architectures and purposes differ.
Here's a detailed comparison between the two:
• Distributed Cloud: Focuses on distributing cloud services across multiple regions while
maintaining centralized management.
• Edge Computing: Focuses on processing data at or near the data source to reduce
latency and bandwidth usage.
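The bandwidth-saving point above can be sketched in a few lines: an edge node reduces many raw sensor readings to one small summary record, and only that summary is sent to the central cloud. `edge_aggregate` is an illustrative function, not part of any real edge framework.

```python
# Edge-side aggregation sketch: instead of shipping every raw reading
# to the cloud, the edge node sends only a compact summary.
def edge_aggregate(readings):
    """Run at the edge: reduce many raw readings to one summary record."""
    return {
        "count": len(readings),
        "avg": sum(readings) / len(readings),
        "max": max(readings),
    }

raw = [21.0, 21.5, 22.0, 35.0, 21.2]  # e.g. temperature samples at a sensor
summary = edge_aggregate(raw)          # only this dict travels to the cloud
print(summary["count"], summary["max"])  # 5 35.0
```

Here five readings collapse into three numbers; at realistic sampling rates the reduction in transferred data is what keeps bandwidth usage low.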
1. Definition of DevOps:
o DevOps is a set of practices, tools, and cultural philosophies that aim to improve
the collaboration between development (Dev) and IT operations (Ops) teams.
o The main goal is to shorten the development lifecycle and ensure continuous
delivery of high-quality software.
2. DevOps Lifecycle:
The DevOps lifecycle is a continuous cycle of stages where development and operations teams work together to create and maintain software applications. The stages include:
o Plan: The planning phase involves defining project requirements, user stories, and detailed specifications.
o Develop: Developers write code, build applications, and create features
according to the specifications.
o Build: Code is compiled, integrated, and prepared for deployment. Automated
build tools are often used to ensure efficient and error-free builds.
o Test: Automated testing is performed to ensure the code is bug-free and meets
the required specifications. Testing is integrated into the DevOps pipeline to
catch errors early.
o Release: The code is deployed to a production environment, ensuring that it is
ready for end users. Continuous integration/continuous deployment (CI/CD)
pipelines are used for automatic deployments.
o Deploy: Deployment tools automatically deploy code into production
environments in a controlled and monitored manner.
o Operate: Operations teams monitor and maintain the software in production,
ensuring performance, scalability, and security.
o Monitor: Continuous monitoring of the application and infrastructure for
performance metrics, errors, and user feedback. Monitoring helps identify
problems and areas for improvement.
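The stage ordering above can be sketched as a minimal pipeline runner: stages execute in order and the pipeline stops at the first failure, so a broken build never reaches Release or Deploy. The stage functions here are trivial stand-ins for real build and test steps.

```python
# Minimal CI/CD-style pipeline runner: run stages in order,
# stop at the first failing stage.
def plan():    return True
def develop(): return True
def build():   return True
def test():    return False   # simulate a failing test stage
def release(): return True

def run_pipeline(stages):
    completed = []
    for name, stage in stages:
        if not stage():
            return completed, name   # stop; report which stage failed
        completed.append(name)
    return completed, None

done, failed_at = run_pipeline(
    [("plan", plan), ("develop", develop), ("build", build),
     ("test", test), ("release", release)])
print(done, failed_at)  # ['plan', 'develop', 'build'] test
```

Real CI/CD servers (Jenkins, GitLab CI, etc.) implement this same stop-on-failure ordering, with each stage running builds, test suites, or deployments.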
3. Key Principles of DevOps:
o Collaboration: Development and operations teams work closely together, sharing
knowledge, responsibilities, and tools.
o Automation: Automating repetitive tasks such as code integration, testing, and
deployment to reduce manual errors and increase efficiency.
o Continuous Integration and Continuous Delivery (CI/CD):
- CI refers to integrating code into a shared repository frequently (multiple times a day), followed by automated testing to identify issues early.
- CD extends CI by automating the release and deployment process, ensuring code is always in a deployable state.
o Monitoring and Feedback: Continuous monitoring of applications in production and feedback loops help teams improve the software in real time based on actual user behavior and performance.
o Security (DevSecOps): Integrating security into the DevOps lifecycle to ensure
that security vulnerabilities are identified and addressed throughout
development.
4. Benefits of DevOps:
o Faster Development and Delivery: By automating manual processes and enabling continuous delivery, DevOps shortens the time required to build, test, and deploy software.
o Improved Collaboration and Communication: It bridges the gap between development and operations teams, fostering a culture of shared responsibility.
o Higher Quality: Frequent testing, continuous integration, and monitoring improve software quality by identifying and fixing bugs early in the development cycle.
o Scalability: DevOps practices make it easier to scale infrastructure and applications, allowing businesses to quickly respond to growing demands.
o Reduced Costs: Automation and improved efficiency help reduce operational costs associated with manual work and infrastructure maintenance.
5. DevOps Tools:
Various tools support different stages of the DevOps lifecycle:
o Version Control: Git, GitHub, GitLab
o Build Automation: Jenkins, Travis CI, CircleCI
o Configuration Management: Ansible, Puppet, Chef
o Containerization: Docker, Kubernetes
o Monitoring and Logging: Prometheus, Grafana, ELK Stack (Elasticsearch, Logstash, Kibana)
o Collaboration: Jira, Slack, Trello
6. Challenges in Implementing DevOps:
o Cultural Change: Shifting to a DevOps culture requires breaking down
traditional silos between development and operations teams and promoting
collaboration.
o Tool Overload: The wide variety of tools in the DevOps ecosystem can be
overwhelming and may require significant integration efforts.
o Security Risks: Automating processes can introduce security risks if not
managed carefully (e.g., in CI/CD pipelines).