Cloud Computing
QP CODE: 559
SUBJECT CODE: 35271
PART –A
(Each question carries 2 marks, Answer any FIVE questions, Q.No. 8 – Compulsory)
3. Para virtualization
14. State the limitations of virtualization.
1. If the CPU does not support hardware virtualization, some operating systems can still be run under software
virtualization, but it is generally slower. Some operating systems will not run under software virtualization at all and
require a CPU with hardware virtualization support, which adds cost if such a CPU is not already available.
2. Owning a server and reselling virtual servers on it is expensive: it means purchasing 64-bit hardware with
multiple CPUs and multiple hard drives.
3. Some limitations lie in analysis and planning; these problems can be divided into three types:
a. Technical limitation
b. Marketing strategies
c. Political strategies
4. It carries a high risk from physical faults, since a single hardware failure affects every virtual server hosted on that machine.
5. It is more complicated to set up and manage a virtual environment with highly critical servers in a production
environment; it is not as easy as managing physical servers.
6. It does not support all applications.
15. What is iSCSI?
Internet Small Computer Systems Interface is an Internet Protocol (IP)-based storage networking standard for
linking data storage facilities. It provides block-level access to storage devices by carrying SCSI commands over a
TCP/IP network. iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances. It can
be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet and can enable
location-independent data storage and retrieval.
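The block-level access that iSCSI carries over TCP/IP can be illustrated with a short sketch (a simplified, in-memory model, not an actual iSCSI implementation; the block size and device capacity here are arbitrary):

```python
BLOCK_SIZE = 512                          # bytes, a common SCSI logical block size

# A toy block device: storage is just an array of fixed-size blocks,
# addressed by logical block address (LBA) -- no files, no directories.
device = bytearray(BLOCK_SIZE * 1024)     # a 512 KiB "disk"

def write_block(lba: int, data: bytes) -> None:
    assert len(data) == BLOCK_SIZE        # SCSI writes whole blocks, not bytes
    device[lba * BLOCK_SIZE:(lba + 1) * BLOCK_SIZE] = data

def read_block(lba: int) -> bytes:
    return bytes(device[lba * BLOCK_SIZE:(lba + 1) * BLOCK_SIZE])

write_block(7, b"A" * BLOCK_SIZE)
assert read_block(7) == b"A" * BLOCK_SIZE  # the initiator sees only raw blocks
```

Any file system or database structure is layered on top of these raw blocks by the initiator, which is exactly why iSCSI is described as block-level rather than file-level storage.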
16. What is the goal of encrypted cloud storage?
The goal of encrypted cloud storage is to create a virtual private storage system that maintains confidentiality and
data integrity. Encryption should separate stored data (data at rest) from data in transit. The details depend on the
particular cloud provider: Microsoft, for example, allows up to five security accounts per client, which can be used to
create different zones, while on Amazon Web Services we can create multiple keys and rotate those keys across different
sessions.
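The integrity half of this goal can be sketched with a keyed checksum (an illustrative, standard-library-only example, not any provider's actual scheme; real confidentiality would additionally require a cipher such as AES):

```python
import hashlib
import hmac
import os

def seal(data: bytes, key: bytes) -> bytes:
    """Attach an HMAC-SHA256 tag so tampering with data at rest is detectable."""
    tag = hmac.new(key, data, hashlib.sha256).digest()
    return tag + data

def unseal(blob: bytes, key: bytes) -> bytes:
    """Verify the tag before returning the data; raise if the check fails."""
    tag, data = blob[:32], blob[32:]
    expected = hmac.new(key, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed: data was modified at rest")
    return data

key = os.urandom(32)            # per-zone key; rotation means re-sealing under a new key
blob = seal(b"customer record", key)
assert unseal(blob, key) == b"customer record"
```

Key rotation in this model is simply unsealing with the old key and re-sealing with the new one, which is the spirit of rotating keys across sessions.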
PART-C
(Each question carries marks, Answer division (a) or division (b))
Other important factors that have enabled cloud computing to evolve are virtualization technology, development
of universal high-speed bandwidth and universal software standards.
Most of the IT professionals use cloud computing as it offers increased storage space, high flexibility and very
low cost. Thus cloud computing has brought enormous benefits for users.
Secure builds
When you develop your own network, you have to buy third-party security software to get the level of protection
you want. With a cloud solution, those tools can be bundled in and available to you, and you can develop your system
with whatever level of security you desire.
Easier to test impact of security changes: this is a big one. Spin up a copy of your production environment, implement a
security change and test the impact at low cost, with minimal startup time. This is a big deal and removes a major barrier
to ‘doing’ security in production environments.
Drive vendors to create more efficient security software:
Billable CPU cycles get noticed. More attention will be paid to inefficient processes; e.g. poorly tuned security
agents. Process accounting will make a comeback as customers target ‘expensive’ processes. Security vendors that
understand how to squeeze the most performance from their software will win.
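Process accounting of this kind is easy to sketch (a toy measurement using Python's `time.process_time`; the "signature scan" workload is hypothetical):

```python
import time

def scan_naive(data: str, signatures: list[str]) -> list[str]:
    # Re-scans the whole payload once per signature: CPU cost grows
    # with both payload size and signature count -- billable cycles add up.
    return [s for s in signatures if s in data]

start = time.process_time()
hits = scan_naive("a" * 200_000 + "malware", ["malware", "trojan", "worm"] * 50)
cpu_seconds = time.process_time() - start
print(f"CPU time billed to this scan: {cpu_seconds:.4f}s, hits: {len(hits)}")
```

When every CPU second shows up on an invoice, the vendor whose agent does this scan in one pass instead of 150 has a direct, visible price advantage.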
Security Testing
Reduce cost of testing security: A SaaS provider passes on only a portion of its security testing costs, because those
costs are shared among all the cloud users. The end result is that, by being in a pool with other users you never see, you
realize a lower cost for testing. Even with Platform as a Service (PaaS), where your developers get to write the code, the
cloud's code-scanning tools check it for security weaknesses.
(OR)
17. (b) Discuss the regulatory issues of cloud computing and the government policies.
Regulatory Issues
In the case of cloud computing, regulation might be exactly what we need. Without some rules in place, there is a
chance that a provider is insecure with your service, or even shifty enough to make off with your data.
‘Sensitive Data’ is defined as personal information that relates to:
(a)passwords;
(b)financial information such as Bank account or credit card or debit card or other payment instrument details;
(c)physical, psychological and mental health condition;
(d)sexual orientation;
(e)medical records and history;
(f)biometric information;
(g) any detail relating to the above received by the body corporate for provision of services; and
(h) any information relating to (a) – (g) that is received, stored or processed by the body corporate under a lawful
contract or otherwise.
No existing regulation: currently there is no regulation specific to cloud computing. Yet comparing cloud service
providers to banks reveals similarities: banks deal with money whereas cloud service providers deal with data, and both
are of immense value to consumers and organizations alike.
Location of Stored Data – Service providers generally do not disclose the location where the service subscriber’s data
are stored. This leaves users in the dark regarding the extent of protection applied to their critical information. Although
security certifications can lessen the user’s anxiety, it remains hard to determine whether the provider’s legal and
regulatory compliance extends to the laws of the geographical location where the data is stored, in addition to the laws of
the areas where the data was collected.
If government can figure out a way to safeguard data from loss or theft, any company facing such a loss would
applaud the regulation. One instructive example is the greatest bank failure in American history: in 2008 the United
States government took control of Washington Mutual. On the other hand, there are those who think the government
should stay out of it and let competition and market forces guide cloud computing.
Law enforcement agencies have easier access to personal information on cloud data than that stored on a personal
computer. Another big problem is that people using cloud services are not aware of the privacy and security implications
for their online email accounts, their LinkedIn account, their MySpace page, and so forth. While these are popular sites for
individuals, they are still considered cloud services and their regulation may affect other cloud services.
Government Procurement
There are also questions about whether government agencies will store their data on the cloud. Procurement
regulations will have to change for government agencies to be keen on jumping on the cloud. The General Service
Administration (GSA) is making a push toward cloud computing, in an effort to reduce the amount of energy their
computers consume. The GSA is working with a vendor to develop an application that will calculate how much energy
government agencies consume.
Government Policies:
The aim of the cloud policy of government is to realise a comprehensive vision of a government cloud (GI Cloud)
environment available for use by central and state government line departments, districts and municipalities to accelerate
their ICT-enabled service improvements. As per the guidelines, both the cloud service provider (CSP) and the government
department will have to share responsibility for managing the services provisioned using the cloud computing facility.
To implement the policy, the Government of India has taken an initial step called “GI Cloud”, which has been coined
‘Meghraj’. The focus of this initiative is to accelerate delivery of e-services in the country while optimizing the
expenditure of the Government. The Ministry of Electronics and IT has issued an important guideline regarding the
location of data as follows:
“The terms and conditions of the Empanelment of the Cloud Service Provider has taken care of this requirement
by stating that all services including data will be guaranteed to reside in India”. The cloud computing service enables its
user to hire or use software, storage and servers as per requirement instead of purchasing the whole system. MeitY
(Ministry of Electronics and Information Technology) has empanelled the following companies for providing cloud
computing services to government departments:
1. Microsoft Corp.,
2. Hewlett Packard,
3. IBM India ,
4. Tata Communications,
5. Bharat Sanchar Nigam Ltd (BSNL),
6. Net Magic IT Services,
7. Sify Technologies and
8. CtrlS Data Centers.
The architectural vision of GI Cloud as mentioned above consists of a set of discrete cloud computing
environments spread across multiple locations, built on existing or new (augmented) infrastructure as given below :
Components of Meghraj
1. Setting up of State and National Clouds
2. Set up an e-Gov Appstore
3. Empanelment of Cloud Service Providers
4. Empanelment of Cloud Auditors
5. Setting up of Cloud Management Office
Setting up an eco-system for Cloud proliferation (Policies, Guidelines, templates, security norms, certification,
business models for applications, tariff & revenue models for private sector Cloud services)
Awareness workshops, training programs and migration support for cloud adoption by departments.
6. MeghRaj (GI-Cloud) service Directory
7. Setting up of Clouds by other Government entities
Cloud Deployment Models:
The empanelment of the Cloud service offerings of CSPs has been done for a combination of the Cloud Deployment
models and Service models as mentioned below:
GI Cloud Architecture
Implementation model
This model provides scalability: it easily supports an increasing number of consumers while letting each meet their
own objectives.
4. Serving new markets quickly and easily:
SaaS allows the organization to quickly and easily add programs, adapting to changes in demand at a faster
rate.
5. On demand: The solution is self serve and available for use as needed.
6. Scalable: It allows for near-infinite scalability and quick processing time.
Economic benefits of SaaS :
SaaS not only saves time but also has greater financial benefits.
1. It reduces IT expenses.
2. The implementation cost of SaaS is much lower than the traditional software.
3. It redirects the savings towards business improvements.
4. It strengthens the financial capability.
5. By utilizing SaaS, we are free to use as much of any software as we need. This gives easy and economical
access to many programs.
6. SaaS vendors release upgrades for their software, thus users need not put any effort into installing and
upgrading the software.
7. Another main benefit of SaaS is that it can quickly and easily be accessed from anywhere using a web
browser.
SaaS utilization
19. (a) Explain in detail the various aspects for the need of virtualization in cloud computing.
Virtualization is needed for the following reasons
Costs
With virtualization, administration becomes a lot easier, faster and more cost effective. Virtualization lowers
existing costs and dramatically simplifies the ownership and administration of existing IT servers. The operational
overhead of staffing, backup, and hardware and software maintenance has become very significant in IT budgets;
virtualization reduces these costs substantially. Virtualization concentrates on increasing utilization and consolidation of
equipment, thereby reducing capital costs such as cabling as well as operational costs such as power, cooling, and
hardware and software maintenance.
Thus virtualization is cost effective.
Administration
Administering virtualization has to be done in an efficient manner: since all the resources are centralized, security
issues have to be handled with particular care. Users’ access to resources like data storage, hardware or software has to
be allocated properly, and since many users share the resources, allocating them is complicated. Administration of a
virtual server is done through the virtual server administration website, through which the virtual server is assigned to
applications for access.
Here, virtual IP addresses are configured on the load balancer. When a user’s request arrives on a certain port of a
virtual IP, the load balancer distributes the incoming request among multiple servers, and the needed service is provided
to that user.
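The distribution step can be sketched as a simple round-robin policy (a toy model; the virtual IP handling and the backend addresses are hypothetical):

```python
import itertools

class VirtualIPBalancer:
    """Toy load balancer: one virtual IP fronting several real backend servers."""

    def __init__(self, servers: list[str]):
        self._next = itertools.cycle(servers)   # simple round-robin policy

    def route(self, request: str) -> str:
        server = next(self._next)               # pick the next backend in turn
        return f"{request} -> {server}"

lb = VirtualIPBalancer(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
print(lb.route("GET /index"))   # successive requests rotate through the backends
print(lb.route("GET /index"))
```

Real load balancers add health checks and weighting on top of a policy like this, but the core idea of fanning one virtual address out to many servers is the same.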
Fast Deployment
Deployment involves consolidating virtual servers and migrating physical servers. Virtualization deployment
involves several phases and careful planning. Both server and client systems can support several operating systems
simultaneously, and virtualization providers offer a reliable and easily manageable platform to large companies.
It can be built with independent, isolated units which work together without being tied to physical equipment.
Virtualization provides a much faster and more efficient way of deploying services through third-party software like
VMware, Oracle, etc. Thus it provides the fastest service to the users.
Reducing Infrastructure Cost
Virtualization essentially allows one computer to do the job of multiple computers by sharing the resources of a
single computer across multiple environments.
Virtual servers and virtual desktops allow hosting multiple operating systems and multiple applications locally
and in remote locations. It lowers the expense by efficient use of the hardware resources.
It increases server utilization rates and achieves cost savings by replacing physical resources with virtual, shared ones.
Some other reasons are
1. To run old apps
2. To access virus-infected data
3. To browse safely
4. To test software, upgrades or new configurations
5. To run Linux on top of Windows
6. To back up an entire operating system
7. To create a personal cloud computer
8. To reuse old hardware.
(OR)
19. (b) Write short notes on (i) Software virtualization (ii) network virtualization.
(i) Software virtualization
It is the virtualization of applications or computer programs. One of the most widely used software virtualization
products is Software Virtualization Solution (SVS), developed by Altiris.
Just as hardware is simulated as virtual machines, software virtualization involves creating a virtual layer or
virtual hard drive space where applications can be installed. From this virtual space, applications can be run as if they
had been installed onto the host OS.
Once the user has finished using the application, they can switch it off. When an application is switched off, any
changes that the application made to the host OS are completely reversed. This means that the registry entries and
installation directories will carry no trace of the application ever having been installed or executed.
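This reversal behaviour can be modelled as a writable layer stacked over a read-only host view (an illustrative sketch using Python's `ChainMap`, not how SVS is actually implemented; the registry keys are made up):

```python
from collections import ChainMap

# The host's "registry", which must stay untouched.
host_registry = {"HKLM/Software/Paths": "C:/Program Files"}

# The app runs against a writable virtual layer stacked over the host view.
virtual_layer: dict = {}
app_view = ChainMap(virtual_layer, host_registry)

app_view["HKLM/Software/MyApp"] = "installed"   # writes land in the virtual layer
assert app_view["HKLM/Software/MyApp"] == "installed"
assert "HKLM/Software/MyApp" not in host_registry

# "Switching off" the application discards the layer; the host is unchanged.
virtual_layer.clear()
assert "HKLM/Software/MyApp" not in app_view
```

Reads fall through to the host layer, writes stay in the discardable layer: that copy-on-write idea is what lets the virtualized application leave no trace behind.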
Benefits of software virtualization are,
The ability to run applications without making permanent registry or library changes.
The ability to run multiple versions of the same application.
The ability to install applications that would otherwise conflict with each other.
10
The ability to test new applications in an isolated environment.
It is easy to implement.
(ii) Network virtualization
Network virtualization is the process of combining hardware and software network resources and network
functionality into a single, software-based administrative entity, which is called a virtual network. Network virtualization
involves platform virtualization. It is categorized into external network virtualization and internal network
virtualization.
External network virtualization is the combining of many networks into one virtual unit. Internal network virtualization
provides network-like functionality to the software containers on a single system. Network virtualization enables
connections between applications, services, dependencies and end users to be accurately emulated in the test environment.
change is sent to the secondary and must be acknowledged before the next write can happen. The alternative is to
asynchronously mirror changes to the secondary site. This replication can be configured to happen as quickly as every
second, or every few minutes or hours. This means that the client could permanently lose some data, if the primary SAN
goes down before it has a chance to copy its data to the secondary.
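The trade-off between the two modes can be sketched as follows (a toy model of a primary/secondary SAN pair; real replication works at the block level over a storage network):

```python
import queue

secondary: list = []          # the remote mirror
pending = queue.Queue()       # changes queued for asynchronous replication

def sync_write(block: str) -> str:
    secondary.append(block)   # the change reaches the mirror...
    return "ack"              # ...before the write is acknowledged

def async_write(block: str) -> str:
    pending.put(block)        # queued; a worker flushes it every N seconds
    return "ack"              # acknowledged before the mirror has the data

def flush() -> None:
    while not pending.empty():
        secondary.append(pending.get())

sync_write("block-1")
async_write("block-2")
assert secondary == ["block-1"]   # block-2 is lost if the primary dies right now
flush()
assert secondary == ["block-1", "block-2"]
```

Synchronous mode trades write latency for a guarantee; asynchronous mode trades a window of potential data loss for faster acknowledgements.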
(OR)
20. (b) Explain in detail about object storage.
File systems or object storage
Object storage (also known as object-based storage) is a computer data storage architecture that manages data as
objects, as opposed to other storage architectures like file systems which manage data as a file hierarchy and block storage
which manages data as blocks within sectors and tracks. Each object typically includes the data itself, a variable amount
of metadata, and a globally unique identifier.
Object storage can be implemented at multiple levels, including the device level (object storage device), the
system level, and the interface level. In each case, object storage seeks to enable capabilities not addressed by other
storage architectures, like interfaces that can be directly programmable by the application, a namespace that can span
multiple instances of physical hardware, and data management functions like data replication and data distribution at
object-level granularity. Object storage systems allow retention of massive amounts of unstructured data.
Object storage is used for purposes such as storing photos on Facebook, songs on Spotify, or files in online
collaboration services, such as Dropbox. The majority of cloud storage available in the market uses the object storage
architecture. Two notable examples are Amazon Web Services S3, which debuted in 2006, and Rackspace Cloud Files. Other
major cloud storage services include IBM Bluemix, Microsoft Azure, Google Cloud Storage, Alibaba Cloud OSS, Oracle
Elastic Storage Service and DreamHost based on Ceph.
Characteristics of Object Storage
Performs best for big content and high storage throughput
Data can be stored across multiple regions
Scales to petabytes (1 PB = 1,024 TB) and beyond
Customizable metadata, not limited to a fixed number of tags
Advantages
Scalable capacity
Scalable performance
Durable
Low cost
Simplified management
Single Access Point
No volumes to manage/resize/etc.
Disadvantages
No random access to files
File-oriented Application Programming Interfaces (APIs), command line shells and utility interfaces (POSIX
utilities) do not work directly with object storage
Integration may require modification of application and workflow logic
Typically, lower performance on a per-object basis than block storage
The Object Storage is suited for the following:
Unstructured data
Media (images, music, video)
Web Content
Documents
Backups/Archives
Archival and storage of structured and semi-structured data
Databases
Sensor data
Log files
The Object Storage is not suited for the following:
Relational Databases
Data requiring random access/updates within objects
21. (a) Write short notes on (i) Brokered cloud storage access. (ii) Storage location and tenancy.
(i) Brokered cloud storage access
Cloud Broker is an entity that manages the use, performance and delivery of cloud services, and relationships
between cloud providers and cloud consumers.
All data is stored somewhere in the cloud service provider’s system and must travel over that system when sent and
received; the user controls no physical system that serves this purpose. One way to protect cloud storage is to isolate the
data from direct client access. Two services are created: one service for a broker with full access to storage but no access
to the client, and another service for a proxy with no access to storage but access to both the client and the broker. These
two services sit in the direct data path between the client and the data stored in the cloud. Under this system, when a
client makes a request for data, here’s what happens:
1. The request goes to the external service interface of the proxy.
2. The proxy, using its internal interface, forwards the request to the broker.
3. The broker requests the data from the cloud storage system.
4. The storage system returns the results to the broker.
5. The broker returns the results to the proxy.
6. The proxy completes the response by sending the requested data to the client.
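The flow above can be sketched in code (a toy model; the class names, key value and stored file are hypothetical):

```python
class Storage:
    """Cloud storage: only callers holding the trusted key may read."""

    def __init__(self, trusted_key: str):
        self._key = trusted_key
        self._data = {"report.pdf": b"..."}

    def read(self, key: str, name: str) -> bytes:
        if key != self._key:
            raise PermissionError("untrusted caller")
        return self._data[name]

class Broker:
    """Holds the trusted key; talks to storage, never to clients."""

    def __init__(self, storage: Storage, key: str):
        self._storage, self._key = storage, key

    def fetch(self, name: str) -> bytes:
        return self._storage.read(self._key, name)

class Proxy:
    """Client-facing; has no storage credentials of its own."""

    def __init__(self, broker: Broker):
        self._broker = broker

    def handle(self, request: str) -> bytes:
        return self._broker.fetch(request)   # forward the request to the broker

storage = Storage(trusted_key="k-secret")
proxy = Proxy(Broker(storage, key="k-secret"))
print(proxy.handle("report.pdf"))            # client talks only to the proxy
```

Note that the `Proxy` object never sees the key: compromising the client-facing service alone yields nothing that opens the storage system.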
Even if the proxy service is compromised, that service does not have access to the trusted key that is necessary to
access the cloud storage. The multi-key solution does not eliminate all internal service endpoints, but the proxy service
now runs at a reduced trust level. The creation of storage zones with associated encryption keys can further protect cloud
storage from unauthorized access.
It is important to know what impact a disaster or interruption would have on the stored data. Since data are stored
across multiple sites, it may not be possible to recover them in a timely manner.
(OR)
21. (b) (i) Explain virtualization security management. (ii)Explain briefly about virtual threats.
(i) Explain virtualization security management.
Historically, the development and implementation of new technology has preceded the full understanding of its
inherent security risks, and virtualized systems are no different. The global adoption of virtualization is a relatively recent
event, and threats to the virtualized infrastructure are still emerging.
A virtual machine (VM) is an operating system (OS) or application environment that is installed on software
which imitates dedicated hardware. The virtual machine (VM), the virtual machine monitor (VMM), and the hypervisor
or host OS are the minimum set of components needed in a virtual environment.
Virtualization Types:
Based on the minimum set of components, we classify the Virtual Environments in the following distinct ways.
Type 1 virtual environments are considered “full virtualization” environments and have VMs running on a
hypervisor that interacts with the hardware.
Type 2 virtual environments are also considered “full virtualization” but work with a
host OS instead of a hypervisor.
Para virtualized environments offer performance gains by eliminating some of the emulation that occurs in full
virtualization environments.
Other type designations include hybrid virtual machines (HVMs) and hardware assisted techniques.
These classifications are somewhat ambiguous in the IT community at large. The most important thing to remember
from a security perspective is that there is a more significant impact when a host OS with user applications and interfaces
is running outside of a VM at a level lower than the other VMs (i.e., a Type 2 architecture). Because of its architecture,
the Type 2 environment increases the potential risk of attacks against the host OS. For example, a laptop running VMware
with a Linux VM on a Windows XP system inherits the attack surface of both OSs, plus the virtualization code (VMM).
Virtualization Management Roles:
The roles assumed by administrators are the Virtualization Server Administrator, Virtual Machine Administrator,
and Guest Administrator. The roles assumed by administrators are configured in VMS and are defined to provide role
responsibilities.
1. Virtual Server Administrator — This role is responsible for installing and configuring the ESX Server hardware,
storage, physical and virtual networks, service console, and management applications.
2. Virtual Machine Administrator — This role is responsible for creating and configuring virtual machines, virtual
networks, virtual machine resources, and security policies. The Virtual Machine Administrator creates, maintains, and
provisions virtual machines.
3. Guest Administrator — This role is responsible for managing a guest virtual machine. Tasks typically performed by
Guest Administrators include connecting virtual devices, adding system updates, and managing applications that may
reside on the operating system.
Some of the vulnerabilities exposed to malicious-minded individuals in virtual environments are:
Shared clipboard — Shared clipboard technology allows data to be transferred between VMs and the host, providing a
means of moving data between malicious programs in VMs of different security realms.
Keystroke logging — Some VM technologies enable the logging of keystrokes and screen updates to be passed across
virtual terminals in the virtual machine, writing to host files and permitting the monitoring of encrypted terminal
connections inside the VM.
VM monitoring from the host — Because all network packets coming from or going to a VM pass through the host, the
host may be able to affect the VM by the following:
1. Starting, stopping, pausing, and restarting VMs.
2. Monitoring and configuring resources available to the VMs, including CPU, memory, disk, and network usage of VMs.
3. Adjusting the number of CPUs, amount of memory, amount and number of virtual disks and number of virtual network
interfaces available to a VM.
4. Monitoring the applications running inside the VM.
5. Viewing, copying, and modifying data stored on the VM’s virtual disks.
Virtual machine monitoring from another VM — Usually, VMs should not be able to directly access one another’s
virtual disks on the host.
Virtual machine backdoors — A backdoor, or covert communications channel, between the guest and host could allow
intruders to perform potentially dangerous operations.
Prepared by,
CHITRA M [41205211]
Part Time / Guest Lecturer
120, Government Polytechnic College,
Purasaiwakkam, Chennai – 12.