CS435-Highlighted Handouts VU-Gateway
https://youtu.be/sA3ASVZYgZM?si=oymSpxYGFX107N_p
https://youtu.be/javSLA4k6Js?si=_fkYVu9BvSQ4NbdE
https://chat.whatsapp.com/GecIrau2Nit0D4F5DjDiGM
Raja Mushtaq: +923055868956
Module No – 116: Resource Cluster Mechanism:
Module No – 117:
• Multi-Device Broker: This mechanism is used to transform the messages (received from
the heterogeneous devices of Cloud consumers) into a standard format before conveying them
to the Cloud service.
o The response messages from the Cloud service are intercepted and transformed back to
the device-specific format before being conveyed to the devices through the multi-device
broker mechanism.
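The two-way translation performed by a multi-device broker can be sketched in a few lines. The device types, field names, and schemas below are illustrative assumptions for the sketch, not part of any real broker API.

```python
# Minimal sketch of a multi-device broker: per-device field names are
# mapped to a standard format on the way in, and back on the way out.
# Device types and field names are hypothetical.

DEVICE_SCHEMAS = {
    "mobile": {"cmd": "action", "body": "payload"},
    "desktop": {"operation": "action", "data": "payload"},
}

def to_standard(device_type, message):
    """Transform a device-specific message into the standard format."""
    schema = DEVICE_SCHEMAS[device_type]
    return {schema[field]: value for field, value in message.items()}

def to_device(device_type, standard_message):
    """Transform a standard-format response back to the device format."""
    reverse = {v: k for k, v in DEVICE_SCHEMAS[device_type].items()}
    return {reverse[field]: value for field, value in standard_message.items()}
```

With this table-driven approach, supporting a new heterogeneous device only requires adding one schema entry.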
• State Management Database: It is a storage device used to temporarily store the state data of
software programs.
o State data can be (for example) the configuration and number of VMs being
employed to support a user subscription to a PaaS instance.
o In this way, the programs do not use the RAM for state-caching purposes and thus
the amount of memory consumed is lowered.
o The services can then be in a “stateless” condition.
o For example, a PaaS instance (ready-made environment) requires three VMs. If the user
pauses activity, the state data is saved in the state management database and the
underlying infrastructure is scaled in to a single VM.
o When the user resumes the activity, the state is restored by scaling out on the basis
of data retrieved from state management database.
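The pause/resume example above can be sketched as follows; the subscription identifier, class, and VM counts are illustrative assumptions for the sketch.

```python
# Sketch of the scale-in/scale-out example: on pause, the VM count is
# persisted to a state management database and the instance shrinks to a
# single VM; on resume, the saved state is restored. Names are illustrative.

state_db = {}  # stands in for the state management database

class PaaSInstance:
    def __init__(self, subscription_id, vms=3):
        self.subscription_id = subscription_id
        self.vms = vms

    def pause(self):
        state_db[self.subscription_id] = {"vms": self.vms}  # persist state
        self.vms = 1                                        # scale in

    def resume(self):
        saved = state_db.pop(self.subscription_id)          # retrieve state
        self.vms = saved["vms"]                             # scale out
```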
Lesson No. 25
CLOUD MANAGEMENT
Module No – 118: Remote Administration System
• It is a Cloud mechanism which provides the APIs and tools to the providers to develop and
deploy online portals.
• These portals also provide some administrative controls to the Cloud consumers as well.
• Usage and Administration Portal:
o Management and control of Cloud IT resources
o IT resources usage reports
• Self-Service Portal:
o The consumer can look at and choose various Cloud services
o The chosen services/package is submitted to Cloud provider for automated
provisioning
• The remote administration console can be used to:
o Configure and set up cloud services
o Provision and release IT resources for on-demand usage
o Monitor cloud service status, usage and performance
o Monitor QoS and SLA fulfillment
o Manage IT-resource leasing costs and usage fees
o Manage user accounts, security credentials, authorization and access control
o Perform capacity planning
• If allowed, a Cloud consumer can create its own front-end application using API calls of
remote administration system.
• The resource management system utilizes the virtual infrastructure manager (VIM) for
creating and managing the virtual IT resources.
• Typical tasks include:
o Managing the templates used to initialize the VMs
o Allocating and releasing the virtual IT resources
o Starting, pausing, resuming and terminating virtual IT resources in response to
allocation/release of these resources
o Coordination of IT resources for resource replication, load balancer and failover
system
o Implementation of usage and security policies for a Cloud service
o Monitoring the operational conditions of IT resources
• These tasks can be accessed by the cloud resource administrators (personnel) employed by
the cloud provider or cloud consumer.
• The provider (and/or the administrator staff of provider) can access the resource
management directly through native VIM console.
• The consumer (and/or the administrator staff of the consumer) uses the remote administration
system (created by the provider) based upon API calls of the resource management system.
• The SLA management system provides features for management and monitoring of SLA.
• Uses a monitoring agent to collect the SLA data on the basis of predefined metrics.
• The SLA monitoring agent periodically pings the service to detect and evaluate any “down”
time that occurs.
• The collected data is made available to the usage and administrative portals so that an
external and/or internal administrator can access the data for querying and reporting
purposes.
• The SLA metrics monitored are in accordance with the SLA agreement.
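A minimal sketch of such an SLA monitoring agent is shown below. The probe function is injected so the example needs no real network; the polling interval and the approximation of down-time as whole failed intervals are illustrative assumptions.

```python
# Sketch of an SLA monitoring agent: it polls the service at a fixed
# interval and accumulates down-time from failed polls, then reports the
# availability fraction for the monitored window.

def monitor(probe, polls, interval_s=60):
    """Poll `polls` times; return (downtime_seconds, availability_fraction)."""
    failures = sum(1 for _ in range(polls) if not probe())
    downtime = failures * interval_s  # each failed poll ~ one interval down
    availability = 1 - failures / polls
    return downtime, availability
```

The collected figures would then be pushed to the usage and administration portals for querying and reporting.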
• The billing management system collects and processes the data related to service usage.
• This data is used to generate consumer invoices and for the provider's accounting purposes.
• The pay-as-you-go type of billing specifically requires the usage data.
• The billing management system can cater for different pricing (pay-per-use, flat rate, per
allocation etc.) models as well as custom pricing models.
• Billing arrangement can be pre-usage or post-usage.
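The pricing models named above can be contrasted with a small sketch; all rates and the cheapest-plan selection rule are invented for illustration, not taken from any provider's price list.

```python
# Sketch of a billing management system evaluating two pricing models
# (pay-per-use vs. flat rate) for one month of usage. Rates are made up.

def pay_per_use(hours, rate_per_hour):
    return hours * rate_per_hour

def flat_rate(months, rate_per_month):
    return months * rate_per_month

def invoice(usage_hours):
    """Pick the cheaper of two hypothetical plans for one month."""
    ppu = pay_per_use(usage_hours, rate_per_hour=0.25)
    flat = flat_rate(1, rate_per_month=50.0)
    return min(ppu, flat)
```

A custom pricing model would simply be another function fed into the same comparison.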
Lesson No. 26
FUNDAMENTAL CLOUD ARCHITECTURES
Module No – 121: Resource Pooling Architecture
• It is based upon using one or more resource pools in which identical IT resources are
grouped and maintained automatically by a system which also ensures that the resource
pools remain synchronized.
• A few examples of resource pools are as follows:
o Physical server pools consisting of (ready to use) networked servers with installed OS
and other tools.
o VM (virtual server) pool/s configured by using one or more templates selected by
the consumer during provisioning.
o Cloud storage pools consisting of file/block based storage structures.
o Network pools consist of different (preconfigured) network connecting devices that
are created for redundant connectivity, load balancing and link aggregation.
o CPU pools are ready to be allocated to VMs in multiples of a single core.
▪ Dedicated pools can be created for each type of IT resource.
▪ Individual resource pools can become sub-groups of a larger pool.
▪ A resource pool can be divided into sibling pools as well as nested pools.
▪ Sibling pools are independent and isolated from each other. They may contain
different types of IT resources.
▪ Nested pools are drawn from a bigger pool and consist of the same types of
IT resources as are present in the parent pool.
• Resource pools created for different consumers are isolated from each other.
• The additional mechanisms associated with resource pooling are:
o Audit monitor: Tracks the credentials of consumers when they log in for IT resource
usage.
o Cloud Usage Monitor
o Hypervisor
o Logical Network Perimeter
o Pay-Per-Use Monitor
o Remote Administration System
o Resource Management System
o Resource Replication
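The parent/nested pool relationship described above can be sketched as a small data structure; the pool names and resource identifiers are illustrative.

```python
# Sketch of resource pool nesting: a nested pool is carved out of its
# parent and holds the same type of resources; pools created separately
# (siblings) remain independent and isolated.

class ResourcePool:
    def __init__(self, name, resources):
        self.name = name
        self.resources = list(resources)  # e.g. VM or CPU identifiers
        self.children = []

    def nest(self, name, count):
        """Carve a nested pool out of this (parent) pool's resources."""
        child = ResourcePool(name, self.resources[:count])
        del self.resources[:count]        # moved, not shared
        self.children.append(child)
        return child
```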
Module No – 122: Dynamic Scalability Architecture:
• Dynamic scalability is provided through dynamic allocation of available resources from the
resource pool.
• Scaling can be horizontal or vertical and can also be through dynamic relocation. Scaling
(considered in this topic) is preconfigured and according to some preset thresholds.
• To implement this architecture, the automated scaling listener (ASL) and Resource
Replication Mechanism are utilized.
• Cloud usage monitor and pay-per-use monitor can complement this architecture for
monitoring and billing purposes.
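The preset-threshold behaviour of the automated scaling listener can be sketched as follows; the threshold percentages and instance limits are illustrative assumptions.

```python
# Sketch of a threshold-driven automated scaling listener (ASL): when the
# observed load crosses preset thresholds, it requests one more (or one
# fewer) instance from the resource replication mechanism.

def scaling_decision(load_pct, instances, high=80, low=20,
                     min_instances=1, max_instances=10):
    """Return the new instance count for the observed load percentage."""
    if load_pct > high and instances < max_instances:
        return instances + 1  # scale out
    if load_pct < low and instances > min_instances:
        return instances - 1  # scale in
    return instances          # within thresholds: no change
```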
Module No –
• Cloud costing model for disk storage may charge on the basis of total volume of allocated
storage space instead of total space used.
• The elastic disk provisioning architecture implements a dynamic storage provisioning based
billing.
• The user is charged only for the consumed storage.
• The technique of thin-provisioning of storage is used.
• Thin-provisioning allocates the storage space dynamically for the VM’s storage.
• Requires some extra overhead when more storage space is to be allocated.
• The thin-provisioning software is required to be installed on VMs to coordinate the thin-
provisioning process with the hypervisor.
• Requires the implementation of:
o Cloud usage monitor
o Resource replication module (for converting thin-provisioning into thick or static
disk storage)
o Pay-per use monitor tracks and reports the granular billing related to disk usage.
• In order to avoid data loss and service unavailability due to disk failure, redundant storage is
applied.
• Additionally, in case of network failure, the disruptions in Cloud services can be avoided
through redundant storage.
• This is part of failover system (active-passive).
• The primary and secondary storage are synchronized so that in case of a disaster, the
secondary storage can be activated.
• A storage device gateway (part of failover system) diverts the Cloud consumers’ requests to
secondary storage device whenever the primary storage device fails.
• The primary and secondary storage locations may be geographically apart (for disaster
recovery) with a (possibly leased) network connection between the two sites.
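The storage device gateway behaviour described above can be sketched as an active-passive pair; the replica classes and the use of `ConnectionError` to signal a failed device are illustrative assumptions.

```python
# Sketch of the storage service gateway in an active-passive failover
# pair: requests go to the primary replica until it fails, then the
# synchronized secondary serves them transparently.

class Replica:
    def __init__(self, data, up=True):
        self.data, self.up = data, up

    def read(self, key):
        if not self.up:
            raise ConnectionError("storage device down")
        return self.data[key]

class StorageGateway:
    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary

    def read(self, key):
        try:
            return self.primary.read(key)
        except ConnectionError:
            return self.secondary.read(key)  # divert to secondary replica
```

The consumer only ever talks to the gateway, so the failover stays invisible to it.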
Lesson No. 27
ADVANCED CLOUD ARCHITECTURES
Module No – 125: Hypervisor Clustering Architecture:
• It balances the physical server utilization through VM migration in the hypervisor cluster
architecture.
• Avoids over/under-utilization of physical servers.
• Maintains performance of services hosted on VMs.
• Implements a capacity watchdog/monitor system consisting of:
o Cloud usage monitor
o Live VM migration module
o Capacity planner
• The cloud usage monitor tracks the usage of physical server and VMs hosted on that server.
In case of fluctuation in usage, it reports to capacity planner module.
• The capacity planner module dynamically matches the capacities of physical servers and the
resource demands of hosted VMs.
• If any VM is facing resource shortage then the capacity planner initiates the VM migration to
the suitable server with sufficient capacity.
• The following modules are integrated into this architecture:
• Automated scaling listener (for monitoring workload over VMs) and load balancer
• Logical network perimeter to comply with privacy requirements of SLA
• Resource replication for load balancing
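The capacity-watchdog decision (spot an over-utilized host, pick a migration target) can be sketched as below; the load threshold, server layout, and first-VM selection rule are illustrative assumptions.

```python
# Sketch of the capacity watchdog: per-server load comes from the usage
# monitor; a VM is moved off any over-utilized host to the least-loaded
# host that still has headroom.

def pick_migration(servers, threshold=0.85):
    """servers: {name: {"load": fraction, "vms": [...]}}.
    Return (vm, source, target), or None when no migration is needed."""
    overloaded = [n for n, s in servers.items() if s["load"] > threshold]
    if not overloaded:
        return None
    source = max(overloaded, key=lambda n: servers[n]["load"])
    candidates = [n for n in servers
                  if n != source and servers[n]["load"] < threshold]
    if not candidates:
        return None
    target = min(candidates, key=lambda n: servers[n]["load"])
    return servers[source]["vms"][0], source, target
```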
• The failure of a physical server results in the unavailability of VMs hosted on that server.
• The services deployed over the unavailable VMs are obviously disrupted.
• The Zero downtime architecture implements a failover system through which the VMs
(from the failed physical server) are dynamically shifted to another physical server without
any interruption.
• The VMs are required to be stored on a shared storage.
• The additional modules required may include:
• A situation of resource constraint may arise when two or more Cloud-consumers (sharing some
IT-resources such as a resource pool) experience a performance loss when the runtime
resource demand exceeds the capacity of the provided resources.
• Resource constraint situation may also arise for the IT-resources not configured for sharing
such as nested and/or sibling pools when one pool borrows the resources from the other
pool. The lending pool may later face resource constraints for its own consumers if the
borrowed resources are not returned in time.
• If each consumer can be assured the availability of a minimum volume of:
o Single IT resource
o Portion of an IT resource
o Multiple IT resources
▪ Then this implements a resource reservation architecture.
• In case of implementation for resource pools, the reservation system must assure that each
pool maintains a certain volume of resource/s in unborrowable form.
• The resource management system mechanism (studied earlier) can be utilized for resource
reservation.
• The resource/s volume in a pool or the capacity of a single IT resource which exceeds the
reservation threshold can be shared among the consumers.
• The resource management system manages the borrowing of IT resources across multiple
resource pools.
• The additional modules that can be implemented are:
o Cloud usage monitor
o Logical network perimeter (for resource borrowing boundary)
o Resource replication (just in case new IT resources are to be generated)
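The "unborrowable volume" rule above can be sketched in a few lines; the capacities and the single `lend` operation are illustrative assumptions.

```python
# Sketch of a reservation-aware pool: capacity above the reserved
# threshold may be lent to other pools; the reserved volume is
# unborrowable, so each consumer keeps its guaranteed minimum.

class ReservedPool:
    def __init__(self, capacity, reserved):
        self.capacity = capacity  # total units in the pool
        self.reserved = reserved  # guaranteed minimum, never lent out
        self.lent = 0

    def lend(self, units):
        """Lend units to another pool if the reservation stays intact."""
        if self.capacity - self.lent - units >= self.reserved:
            self.lent += units
            return True
        return False              # would break the guaranteed minimum
```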
• It may be possible to detect and counter some failures in Cloud environment if there is an
automated system with failure diagnosis and solution selection intelligence.
• This architecture establishes a resilient watchdog/module containing the definitions of pre-
marked events and the runtime logic to select the best (predefined) routine to cope with
those events.
• The resilient module generates alarms/reports the events which are not predefined.
• The resilient watchdog module performs the following core functions:
o Monitoring
o Identifying an event
o Executing the reactive routine/s
o Reporting
• This architecture allows the implementation of an automated recovery policy consisting of
predefined steps and may involve actions such as:
o Running a script
o Sending a message
o Restarting services
• Can be integrated into a failover system along with SLA management system.
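The event-to-routine dispatch of the resilient watchdog can be sketched as below; the event names and reactive routines are invented for illustration.

```python
# Sketch of the resilient watchdog: pre-marked events map to predefined
# reactive routines (run a script, send a message, restart a service);
# events that are not predefined raise an alarm instead.

alarms = []  # stands in for the watchdog's alarm/report channel

ROUTINES = {
    "service_unresponsive": lambda: "restart service",
    "node_down":            lambda: "run failover script",
    "queue_backlog":        lambda: "send message to admin",
}

def handle_event(event):
    """Execute the predefined routine, or alarm on an unknown event."""
    routine = ROUTINES.get(event)
    if routine is None:
        alarms.append(event)  # not predefined: report/alarm
        return None
    return routine()
```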
• The provisioning of Cloud IT-resources can be automated to save time, reduce human-related
errors and increase throughput.
• For example, a consumer can initiate the automated provisioning of 50 VMs simultaneously
instead of waiting for one VM at a time.
• The rapid provisioning architecture has a (centralized) control module complemented by:
o Server templates
o Server images (for bare-metal provisioning)
o Applications and PaaS packages (software and applications & environments)
• OS and Application baselines (configuration templates applied after installation of OS and
applications)
• Customized scripts and management modules for smooth procedures
• The following steps can be visualized during the automated rapid provisioning:
o A consumer chooses a VM package through self-service portal and submits the
provisioning request.
o The centralized provisioning module selects an available VM and initiates it through
a suitable template.
o Upon initiation, the baseline/s templates are applied.
o The VM is ready to use now.
Initial distribution of logical unit numbers across the Cloud storage devices.
The storage capacity monitoring module signals the storage capacity system to migrate a logical
unit number to another storage device.
The storage capacity system identifies the destination storage device and shifts the logical unit
number to the destination device.
The result is the even distribution of logical unit numbers across all storage devices.
Module No – 134: Direct I/O Access Architecture:
• The VMs access various physical I/O circuits/cards of the hosting physical server through
the hypervisor. This is called I/O virtualization.
• However, the hypervisor-assisted access may become a bottleneck for concurrent I/O
requests.
• The direct I/O architecture enables the VMs to access the physical I/O devices without
the intervention of the hypervisor.
• The physical server’s CPU has to be compatible with direct I/O.
• Additional modules required are:
o Cloud usage monitor
o Logical network perimeter (to allow only a limited number of VMs to use direct I/O)
o Pay-per-use monitor
• It is a type of direct I/O in which the VMs access the logical unit numbers directly.
• The VMs can also be given direct access to block-level storage.
• Network bandwidth limit may inhibit the performance and may become a bottleneck.
• It is the software which implements the dynamic scalability of network bandwidth.
• The scalability is provided on per user basis.
• Each user is connected to a separate network port.
• Automated scaling listener, elastic network capacity controller and a resource pool of
network ports are used for implementation.
• The automated scaling listener monitors the network traffic and indicates the elastic network
capacity controller to enhance the bandwidth and/or number of ports when required.
• When applied to virtual switches, then each virtual switch is configured to induct more
physical uplinks.
• Alternatively, the direct I/O can be used to enhance network bandwidth for any VM.
• The approach is to dynamically shift the logical unit number to another storage device with
larger capacity in terms of number of requests processed per second and the amount of data
being handled.
o As compared to traditional approach, it is not constrained by the availability of free
space on the physical storage device hosting the logical unit number.
o Automated scaling listener and storage management modules are required for the
implementation.
o The automated scaling listener monitors the number of requests being sent to the
logical unit numbers.
o When a pre-set threshold of number of requests to a logical unit number is reached,
the automated scaling listener signals the storage management module to shift that
logical unit number to another device with higher capacity.
o While moving a logical unit number, the connectivity/availability of data is not
interrupted.
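The request-count trigger described in the steps above can be sketched as follows; the threshold, device names, and the way a higher-capacity target is chosen are illustrative assumptions.

```python
# Sketch of request-driven LUN migration: the automated scaling listener
# counts requests per logical unit number (LUN) and, past a preset
# threshold, signals the storage management module to move the LUN to a
# higher-capacity device.

THRESHOLD = 1000  # requests per monitoring window (illustrative)

def check_luns(request_counts, devices):
    """request_counts: {lun: requests}; devices: {lun: device}.
    Returns the migrations performed as (lun, old_device, new_device)."""
    migrations = []
    for lun, count in request_counts.items():
        if count > THRESHOLD:
            old = devices[lun]
            devices[lun] = old + "-high-capacity"  # stand-in for target
            migrations.append((lun, old, devices[lun]))
    return migrations
```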
• Required when there are security and/or legal constraints regarding data migration across
different storage devices.
• The data is stored over logical unit numbers.
• This is the implementation of vertical scaling capability over a single cloud storage device.
• The single storage device optimally uses different disks with varying features/capacities.
• Different disks are graded and marked according to capacity.
• Implemented through automated scaling listener and storage management software.
• The automated scaling listener monitors the logical unit numbers.
• A logical unit number is hosted over a disk. The grade of the disk may be chosen randomly
or according to a policy.
• Upon rise of performance requirements for a logical unit number, the automated scaling
listener signals the storage management program to move the logical unit number to a disk
with higher grade.
• The Cloud storage devices need to undergo a maintenance process in order to maintain
their working potential.
• A Cloud storage device hosts multiple logical unit numbers.
• It is not practical to disconnect the storage device/s and then perform maintenance.
• In order to maintain the availability of data, this architecture temporarily copies the data
from a to-be-maintained storage device to a secondary device.
• The data is (for example) arranged/stored in the form of logical unit numbers which in-turn
are connected to different VMs and/or accessed by different consumers.
• It is therefore important that the logical unit numbers be migrated live.
• The connectivity and availability of data are maintained.
• Once the data is migrated, the primary device is made unavailable. The secondary device
serves the data requests even during migration.
• The storage service gateway forwards the consumer requests to secondary storage.
• The data is moved back to the primary storage after the maintenance is over.
• The whole process remains transparent.
Lesson No. 28
CLOUD FEDERATION & BROKERAGE
Module No – 144: Cloud Federation:
• Due to the availability of a finite number of physical resources, a single Cloud can handle a
certain number of consumers’ requests in a unit time.
• We are supposing that a time deadline exists to process a consumer’s request.
• If a Cloud infrastructure cannot meet the requests’ deadlines, then it is experiencing resource
shortage or congestion.
• At this point, the chances of SLA violation start becoming solid.
• The Cloud provider may be heading towards SLA penalties if the situation persists.
• A decision has to be made by the Cloud provider to process the consumer requests that are
in excess to the current capacity on the basis of:
• Revenue to be earned from processing the extra requests
• The cost to be paid to other provider/s
• The deadline of the requests vs. latency of remote provider
• A Cloud federation may also be created to fulfill the requests of a remote consumer through
the closest provider in that region to reduce network latency.
• Thus federation of Clouds offer a better solution to resource shortage and latency issues in
Cloud computing.
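The three-factor outsourcing decision listed above (revenue, peer cost, deadline vs. remote latency) can be sketched as a single predicate; all figures are illustrative.

```python
# Sketch of the provider's federation decision: forward an excess request
# to a federated provider only if it is profitable and the remote latency
# still lets the request meet its deadline.

def forward_to_federation(revenue, peer_cost, deadline_ms,
                          processing_ms, remote_latency_ms):
    profitable = revenue > peer_cost
    meets_deadline = processing_ms + remote_latency_ms <= deadline_ms
    return profitable and meets_deadline
```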
• Federation can be horizontal. In this, the Cloud services (IaaS, PaaS and SaaS) are
horizontally expanded.
• In vertical federation, a Cloud provider A (for example) may host a SaaS/PaaS instance of
another provider B over its own IaaS to fulfil the requests of provider A.
• Federation can also be hybrid.
Module No – 146: Cloud Brokerage:
Lesson No. 29
CLOUD DELIVERY/SERVICE MODELS’ PERSPECTIVES
Module No – 147: Cloud Provider's Perspective about IaaS:
• In this and next two modules, we shall discuss the overall perspective of Cloud provider in
establishing and managing of Cloud services. Namely:
o IaaS
o PaaS
o SaaS
• The two basic IT resources of IaaS are:
o VMs
o Cloud storage
• These are offered along with the:
o OS
o (virtual) RAM
o (virtual) CPU
o (virtual) Storage
• VMs are usually provisioned through VM images which are predefined configurations.
• Bare-metal provisioning is also provided to the consumers with administrative access.
• Snapshots of VMs can be occasionally taken for failover and replication purposes.
• A cloud may be provisioned through multiple data centers spanning at different geographical
locations and connected through highspeed networking.
• VLANs and network access control are used to isolate a networked set of VMs (into a
network perimeter) which are provisioned to a single consumer/organization.
• Cloud resource pools and resource management systems can be used to provide scalability.
• Replication is used to ensure high availability and forming a failover system.
• Multipath resource access architecture is used to provide reliability.
• Resource reservation architecture is used for provisioning of dedicated IT resources.
• Different monitors such as the pay-per-use monitor and SLA monitor continuously oversee
VM lifecycles, data storage and network usage to establish the billing system and SLA
management.
• Cloud security (encryption, authentication and authorization systems) are to be implemented.
Module No – 148: Cloud Provider's Perspective about PaaS:
Platform-as-a-Service (PaaS) is another abstraction level offered by the CSP where consumers (cloud users) deploy their
own applications developed in any programming language that is supported by the CSP-provided environment. In PaaS,
the cloud user controls the deployment, hosting, and configuration of user-created applications. Microsoft Azure and
Google App Engine are examples of PaaS services. However, in PaaS, cloud users are not responsible for maintaining
and managing the cloud Infrastructure like managing OS, server configuration, and storage.
In Infrastructure-as-a-Service (IaaS), the Cloud service providers manage and maintain a huge set of computing
resources such as processing and storage capacity. Virtualization of these resources enables Cloud service providers to
split physical resources and build dynamically resizable ad-hoc computing systems. Moreover, virtualization provides
scalability in terms of run-time lease and release of virtual resources and a high level of customization according to the
user's requirement
• PaaS instances may comprise multiple VMs and can be distributed across different data
centers.
• Pay-per-use monitor and SLA monitor can be used to collect data regarding resource usage
and failures.
• The security features of IaaS are usually ample for PaaS instances.
• SaaS instances are unique from IaaS and PaaS instances due to the existence of concurrent
users.
• The SaaS implementations depend upon scalability & workload distribution mechanisms and
non-disruptive service relocation architectures for smooth provisioning and overcoming
failures.
• Unlike the IaaS and PaaS, every SaaS deployment is unique from other implementations.
• Every SaaS deployment has different programming logic, resource requirements and
consumer workloads.
• The diverse SaaS deployments include: Wikipedia, Google talk, email, Android play store,
Google search engine etc.
• The implementation mediums include:
o Mobile apps
o REST service
o Web service
• These mediums also provide API calls. The examples include: electronic payment services
such as PayPal, mapping and routing services (Google Maps) etc.
• Mobile based SaaS implementations are usually supported by multi-device broker
mechanism for heterogeneous device-based access.
• Therefore, SaaS implementation requires the implementation of:
o Service load balancing, Dynamic failure detection and recovery, storage maintenance
window, elastic resource/network capacity and Cloud balancing architectures.
o Monitoring is usually performed through pay-per-use monitors to collect consumer
usage related data for billing
o Additional security features (as already provided by underlying IaaS environment)
may be deployed according to business logic.
• A consumer accesses the VM through a remote terminal application. The VM has to have an
OS installed.
o Remote desktop client for Windows
o SSH client for Mac and Linux based systems
• Cloud storage device can directly be connected to the VM or to a local device on-premises.
• The Cloud storage data can be handled and rendered through Networked file system, storage
area network and/or object-based storage accessible through Web-based interface.
• The administrative rights of the IaaS consumer include, controlling of:
o Scalability
o Life cycle of VM (powering-On/Off and restarting)
o Network setting (firewall and network perimeter)
o Cloud storage attachment
o Failover setting
o SLA monitoring
o Basic software installations (OS and pre installed software)
o VM initializing image selection
o Passwords and credentials management for Cloud IT-resources
o Costs
• IaaS resources are managed through remote administration portals and/or command line
interfaces through execution of code scripts.
Lesson No. 30
INTER-CLOUD RESOURCE MANAGEMENT
Module No – 153:
• The term Inter-Cloud refers to a Cloud of Clouds, just as the Internet is regarded as a network
of networks.
• Cloud computing has proliferated throughout the computing world.
• The providers are of two types:
• With extra (idle) resources
• With resource shortage
• Many providers look for getting reasonable clients to generate more revenue and to make
good use of idle resources.
• Cloud federation gives a solution to this problem.
• But a bigger picture lies in Inter-Cloud where the global federation takes place.
• The Inter-Cloud can be established where each member Cloud is connected to other
member Clouds just like Internet connects the networks.
• It is the ultimate future of Cloud federation.
• Technological giants such as IBM, HP, CISCO, RedHat etc. are actively working on
establishment of cloud-of-clouds.
• We hope that soon the issues of interoperability, inter-cloud communication, security and
workload migration will be addressed.
Lesson No. 31
CLOUD COST METRICS AND PRICING MODELS
Module No – 154:
• In next few modules, we shall discuss different cost metrics and pricing models of Cloud.
• Business Cost Metrics: The common types of metrics related to cost benefit analysis of
Cloud computing.
• Upfront Costs: Related to initial investment regarding IT resource acquiring and
installations.
Upfront costs are high for on-premises installation as compared to leasing from the Cloud.
• On-going Costs: Include the running costs of the IT resources e.g., licensing fee, electricity,
insurance and labor.
The long term ongoing costs of Cloud IT-resource can exceed the on-premises costs.
• Additional Costs: These are specialized cost metrics. These may include:
o Cost of Capital: It is the cost of raising a capital amount. It is higher if a high capital
is to be arranged in short time. The organization may have to bear some costs in
raising a large amount. This is important decision for up-front cost metrics.
o Sunk Costs: These are the costs already spent by the organization over IT-
infrastructure. If the Cloud is preferred then these costs are sunk. Hence should be
considered along with up-front cost of Cloud. Difficult to justify the leasing of Cloud
IT resources in the presence of high sunk costs.
o Integration Costs: The time and labor costs required to integrate a Cloud solution,
which include the testing of the Cloud services acquired.
o Locked-in Costs: The costs related to being dependent upon a single Cloud
provider due to a lack of interoperability among different providers. These affect the
business benefits of leasing Cloud based IT-resources.
Module No – 155:
• Cloud Usage Cost Metrics: In this module we shall study different metrics related to cost
calculation of Cloud IT resource usage.
o Network Usage: Cumulative, or separate, inbound and outbound network traffic
in bytes over the monitored time. Costing may be cumulative or separate for
inbound and outbound traffic. Many Cloud providers do not charge for inbound
traffic to encourage the consumers to shift their data towards the Cloud.
▪ May also be based upon the static IP address usage and network traffic
processed by Virtual Firewall.
o VM Usage: Related to the number of VMs and the usage of the allocated VMs. Can
be a static cost, pay-per-use, or according to the features of the VM. Applicable to IaaS
and PaaS instances.
o Cloud Storage Device Usage: It is charged by the amount of storage used. Usually the on-
demand storage allocation pattern is used to calculate the bill on a time basis, for example on an
hourly basis. Another (rarely used) billing option is to charge on the basis of I/O
operations to and from storage.
o Cloud Service Usage: The service usage can be charged on the basis of duration of
subscription, number of nominated users and/or number of transactions served by
the service.
Module No – 156: Case Study for Total Cost of Ownership (TCO) Analysis
• The TCO includes the costs of acquiring, installing and maintaining the hardware and
software to perform the IT tasks of the organization.
• In this module, we shall perform a case study to evaluate the TCO for on-premises and
Cloud based solution.
• Suppose a company wants to migrate a legacy application to PaaS. The application requires a
database server and 4 VMs hosted on 2 physical servers.
• Next we perform a TCO analysis for 3 years:
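As a rough illustration of such an analysis (the module's actual figures are not reproduced here; every number below is a made-up assumption), a 3-year comparison can be sketched as:

```python
# Hedged sketch of a 3-year TCO comparison for the case above
# (2 physical servers hosting 4 VMs plus a database server vs. PaaS).
# All cost figures are illustrative assumptions, not quoted prices.

def on_premises_tco(years=3):
    hardware = 2 * 8000 + 12000   # acquiring: 2 servers + a database server
    software = 3 * 2500           # installing: OS and database licences
    annual = 6000 + 9000          # maintaining: power/cooling + administration
    return hardware + software + annual * years

def cloud_tco(years=3):
    migration = 5000              # one-off integration/testing cost
    annual = 4 * 12 * 150 + 12 * 400  # 4 VM subscriptions + managed database
    return migration + annual * years

print(on_premises_tco(), cloud_tco())  # → 80500 41000
```

With these assumed figures the Cloud option wins over 3 years, but the conclusion flips if the assumptions change, which is exactly why a TCO analysis is performed case by case.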
• Cost management can take place across the lifecycle phases of Cloud services. These phases
may include:
o Design & Development
o Deployment
o Service Contracting
o Provisioning & Decommissioning
• The cost templates used by the providers depend upon:
o Market competition
o Overhead incurred during design, deployment and operation of the service
o Cost reduction considerations through increased sharing of IT resources
• A pricing model for Cloud services can be composed of:
o Cost metrics
o Fixed and variable rates definitions
o Discount offerings
o Cost customization possibilities
o Negotiations by consumers
o Payment options
Module No – 158:
• Case study: We shall now see an example case of different price offerings from a Cloud
provider.
Lesson No. 32
CLOUD SERVICE QUALITY METRICS
Module No – 159:
• Availability Rate Metric:
o The value in % of up-time e.g., 100%.
o Measured as total up-time/total time.
o Monitored weekly, monthly and/or yearly.
o Applied to IaaS, PaaS and SaaS.
o Expressed as cumulative value.
o E.g., 99.5% minimum
• Down-time Duration Metric:
o Expresses the maximum and average continuous down-time.
o Covers the duration of outage.
o Measured whenever the outage event occurs.
o Applied to IaaS, PaaS and SaaS.
o E.g., 1 hr max, 15 min average
• Reliability, in the context of Cloud IT-resources, refers to the probability that an IT-resource
can keep performing its intended function under predefined conditions without experiencing
failure.
o Focuses on the duration in which the service performs as expected.
o This requires the service to be operational and available during that time.
• Mean-Time Between Failures Metric:
o Expected time between two consecutive failures.
o Measured as normal operation duration/number of failures.
o Measured monthly and/or yearly.
o Applicable to IaaS and PaaS.
o E.g., 90 days average
• Service Reliability Rate Metric: It is the percentage of successful service outcomes.
o Measures the non-critical errors during the up-time.
o Measured as total number of successful responses/total number of requests.
o Measured weekly, monthly and/or yearly.
o Applicable to SaaS.
o E.g., 99.5% minimum
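The three metric families above (availability, mean-time between failures and service reliability rate) reduce to simple ratios. A minimal sketch, using made-up monitoring samples:

```python
# Minimal sketch of the three service quality metrics defined above.
# The sample inputs are invented monitoring data, not real measurements.

def availability(up_time_hrs, total_time_hrs):
    return 100 * up_time_hrs / total_time_hrs   # up-time / total time, in %

def mtbf(normal_operation_hrs, failures):
    return normal_operation_hrs / failures      # operation duration / failures

def reliability_rate(successful, total_requests):
    return 100 * successful / total_requests    # successful / total requests, in %

print(availability(719, 720))        # monthly sample: ~99.86%
print(mtbf(2160, 1))                 # quarterly sample: 2160 hrs (90 days)
print(reliability_rate(997, 1000))   # weekly sample: 99.7%
```

Note how MTBF counts time between failures, while the reliability rate counts non-critical errors that occur even while the service is up.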
• Scalability metrics are related to the IT resource’s elastic capacity, the maximum capacity that
an IT resource can reach and the adaptability of an IT resource to workload fluctuations.
• For example, a VM can be scaled up to 64 cores and 256 GB of RAM, or can be scaled out to
8 replicated instances.
• Storage Scalability (Horizontal) Metric: The permissible capacity change of a storage device in
accordance with the increase in workload.
o Measured in GB.
o Applicable to IaaS, PaaS and SaaS.
o E.g., 1000 GB maximum (automatic scaling)
• Server Scalability (Horizontal) Metric: The permissible server capacity in response to increased
workload.
o Measured in number of VMs in the resource pool.
o Applicable to IaaS, PaaS
o E.g., 1 VM minimum up to 10 VMs maximum (automated scaling)
• Server Scalability (Vertical) Metric: Measured in terms of the number of vCPUs and vRAM size in GB.
o Applicable to IaaS and PaaS.
o E.g., 256 cores maximum and 256 GB of RAM
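The horizontal server-scalability metric above implies an automated scaling rule bounded by a contracted minimum and maximum pool size. A hedged sketch (the target load, the bounds and the ceiling-division rule are all assumptions for illustration):

```python
# Sketch of automated horizontal scaling within contracted bounds,
# as in the "1 VM minimum up to 10 VMs maximum" example above.
# Target load and bounds are illustrative assumptions.

def scale_pool(current_vms, load_per_vm, target_load=70, min_vms=1, max_vms=10):
    """Grow or shrink the VM pool toward a target per-VM load, within bounds."""
    total_load = current_vms * load_per_vm
    needed = -(-total_load // target_load)     # ceiling division
    return max(min_vms, min(max_vms, int(needed)))

print(scale_pool(4, 90))   # overloaded pool scales out → 6
print(scale_pool(4, 10))   # idle pool scales in, but never below the minimum → 1
```

The `max(min, min(max, …))` clamp is what turns an elastic capacity into a *permissible* capacity change, which is what the metric actually contracts.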
• In this module, we shall discuss some of the best practices of Cloud consumers for dealing
with SLAs.
• Mapping of test-cases to the SLAs: A consumer should highlight some test cases
(disasters, performance, workload fluctuations etc.) and evaluate the SLA accordingly. The
SLA should be aligned with the consumer’s requirements of the outcome of these test-cases.
• Understanding the scope of SLA: A clear understanding of the scope of the SLA should be
made. It is possible that a software solution may be partially covered by an SLA; for example,
the database may be left uncovered.
• Documenting the guarantees: It is important to document all the guarantees at proper
granularity. Any particular guarantee requirement should also be properly and clearly
mentioned in the SLA.
• Defining penalties: The penalties and reimbursements should be clearly defined and
documented in the SLA.
• SLA monitoring by an independent party: Consider having the SLA monitored by a third
party.
• SLA monitoring data archives: The consumer may want the provider to delete the
monitored data due to privacy requirements. This should be disclosed as an assurance by the
provider in the SLA.
Lesson No. 33
CLOUD SIMULATOR
Module No – 166: CloudSim: Introduction
• Some configurations are required for the CloudSim. The important requirements are
discussed in this module.
• CloudSim requires Sun’s Java 8 or newer version. Older versions of Java are not compatible.
• You can download Java for desktops and notebooks from https://java.com/en/download/
• The CloudSim setup just needs to be unpacked before use. If you want to remove
CloudSim, delete the folder.
• The CloudSim setup comes with various coded examples which can be test-run to
understand the CloudSim architecture.
• The CloudSim site has a video tutorial explaining the step-by-step configuration and execution.
Lesson No. 34
COMPUTER SECURITY BASICS
Module No – 169: Computer Security Overview:
Module No – 172: Cryptography:
• Cryptography encodes communication so that an intruder or third party who reads the text
cannot work out its actual meaning; it is used to protect secret information.
• A firewall is a hardware and/or software based module to block unauthorized access (but
allowing authorized access) in a networked environment.
o Stands between a local network and Internet.
o Filters the harmful traffic.
• A firewall performs packet filtering on the basis of source/destination IP addresses.
• A stateful firewall checks packets on the basis of connections.
• Other types of firewalls also exist.
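Stateless packet filtering of the kind described above can be sketched in a few lines; the rule sets and addresses below are invented for illustration (drawn from the reserved documentation/TEST-NET ranges):

```python
# Toy sketch of stateless packet filtering on source/destination address.
# Rules and IP addresses are invented; real firewalls match on much more
# (ports, protocol, connection state).

BLOCKED_SOURCES = {"203.0.113.7"}        # a known-bad host
ALLOWED_DESTINATIONS = {"192.0.2.10"}    # the only service we expose

def filter_packet(src_ip, dst_ip):
    """Return True if the packet may pass, False if the firewall drops it."""
    if src_ip in BLOCKED_SOURCES:
        return False
    return dst_ip in ALLOWED_DESTINATIONS

print(filter_packet("198.51.100.4", "192.0.2.10"))   # allowed → True
print(filter_packet("203.0.113.7", "192.0.2.10"))    # blocked source → False
```

A stateful firewall would additionally remember established connections, so reply packets are matched against an existing session rather than re-evaluated rule by rule.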
• An intrusion detection system (IDS) is a software or hardware device installed on a network or a
host to detect intrusion attempts and monitor malicious activity or policy violations.
• The installation of operating system requires some security measures such as:
• Planning: The purpose, user, administrator and data to be processed on that system.
• Installation: The security measures should start from the base.
• BIOS-level access should be secured with a password.
• The OS should be patched/updated with latest critical security patches before installing any
applications.
• Remove unnecessary services, applications and protocols.
• Configure the users, groups and authentication according to security policy.
• Configure the resource controls/permissions. Avoid the default permissions and go
through all the permissions.
• Install additional security tools such as anti-virus, malware removal, intrusion detection
system, firewall etc.
• Identify the whitelisted applications which can execute on the system.
• Virtualization Security: The main concern should be:
o Isolation of all guest OSs.
o Monitoring all the guest OSs.
o Maintenance and security of the OS-images and snapshots.
• Can be implemented through:
o Clean install of hypervisor from secure and known source.
o Ensure only the administrative access to hypervisor, snapshots and OS images.
o The guest OS should be preconfigured not to allow users any modification of, or
access to, the underlying hypervisor.
o Proper mapping of virtual devices over physical devices.
o Network monitoring etc.
• Threat: A potential security breach that can affect privacy and/or cause harm.
o Can occur manually and/or automatically.
o A threat that is carried out results in an attack.
o Threats are designed to exploit known weaknesses, or vulnerabilities.
• Vulnerability: It is a (security) weakness which can be exploited.
o It exists because of:
o Insufficient protection exists and/or the protection is penetrated through an attack.
o Configuration deficiencies
o Security policy weaknesses
o User error
o Hardware or firmware weaknesses and software bugs
o Poor security architecture
• Risk: It is the possibility of harm or loss as a result of an activity.
o Measured according to:
▪ Threat level
▪ Number of possible vulnerabilities
o Can be expressed as:
▪ The probability of a threat occurring to exploit vulnerabilities
▪ The expectation of loss due to the compromise of an IT resource
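The second expression above, expected loss, can be written as a one-line calculation; the probability and loss figures below are assumptions for illustration:

```python
# Sketch of risk as expected loss: P(threat exploits a vulnerability) x loss.
# The probability and loss figures are illustrative assumptions.

def risk(threat_probability, expected_loss):
    """Expected harm from an activity over the assessed period."""
    return round(threat_probability * expected_loss, 2)

# A 5% yearly chance of compromising a resource worth 40,000 in losses:
print(risk(0.05, 40_000))  # → 2000.0
```

Quantifying risk this way lets the two factors listed above, threat level and vulnerability count, be compared across IT resources on a common scale.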
Lesson No. 35
NETWORK SECURITY BASICS
Module No – 178: Internet Security:
• It is a branch of computer security which specifically deals with threats which are Internet
based.
• The major threats include the possibilities of unauthorized access to any one or more of the
following:
o Computer system
o Email account
o Website
o Personal details and banking credentials
• Viruses and other malware
• Social engineering
• Secure Socket Layer (SSL): It is a security protocol for encrypting the communication
between a web browser and a web server.
o The website has to enable SSL over its deployment.
o The browser has to be capable of requesting a secure connection to the websites.
o Upon request, the website shares its security certificate (issued by a Certificate
Authority (CA)) with the browser which the browser confirms for validity.
o Upon confirmation of the security certificate, the browser generates the session key for
encryption and shares it with the website; after this the encrypted communication
session starts.
o Websites implementing SSL use HTTPS (https://...) in the URL instead of
HTTP (http://...) and show a padlock sign before the URL.
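From the client side, the browser's part of this handshake corresponds to what Python's standard ssl module does by default: a default context performs the CA-based certificate validation described above. A minimal sketch (the host name in the comment is only an example):

```python
# Sketch of requesting a verified TLS/SSL connection from the client side.
# ssl.create_default_context() validates the server certificate against the
# system CA store, matching the CA-issued-certificate check described above.
import ssl

context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # certificates are checked → True
print(context.check_hostname)                    # hostname must match cert → True

# To use it (not executed here): wrap a socket before speaking HTTPS, e.g.
#   with socket.create_connection(("example.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="example.com") as tls:
#           ...  # encrypted session, as in the handshake above
```

The session-key generation and exchange happen inside `wrap_socket`; application code only sees the already-encrypted channel.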
• The wireless network security is applied to wireless networks and is also known as wireless
security.
• It is used to secure the wireless communication from unauthorized access.
• There are a lot of threats for wireless networks. Such as:
o The packets can be easily eavesdropped and recorded.
o The traffic can be modified and retransmitted more easily as compared to wired
networks.
o Prone to DoS attacks at access points (APs).
• Some prominent security protocols for wireless security are:
o Wired Equivalent Privacy (WEP): Designed to provide the same level of security
as the wired networks.
▪ First standard of 802.11
▪ Uses RC4 standard to generate encryption keys of length 40-128 bits.
▪ Has a lot of security flaws, difficult to configure and can easily be cracked.
o Wi-Fi Protected Access (WPA): Introduced as an alternative to WEP while a long-
term replacement to WEP was being developed.
▪ Uses enhanced RC4 through Temporal Key Integrity Protocol (TKIP) which
improves wireless security.
▪ Backward compatible with WEP.
o Wi-Fi Protected Access 2 (WPA2): Standardized by IEEE as 802.11i; the
successor to WPA.
▪ Considered as the most secure wireless security standard available
▪ Replaces the RC4-TKIP with stronger encryption and authentication
methods:
▪ Advanced Encryption Standard (AES)
▪ Counter Mode with Cipher Block Chaining Message Authentication Code
Protocol (CCMP)
▪ Allows seamless roaming from one access point to another without
reauthentication.
Lesson No. 36
CLOUD SECURITY MECHANISMS
Module No – 183: Encryption:
• Data is by default in a human-readable format called plaintext; it may take the form of
numbers, words, text etc.
• If transmitted over network, the plaintext data is vulnerable to malicious access.
• Encryption is a digital coding system to transform the plaintext data into a protected and
nonreadable format while preserving the confidentiality and integrity.
• The algorithm used for encryption is called cypher.
• The encrypted text is also called cyphertext.
• The encryption process uses encryption key which is a string of characters. It is secretly created
and shared among authorized parties.
• The encryption key is combined with the plaintext to create the encrypted text.
• Encryption helps in countering:
o Traffic eavesdropping
o Malicious intermediary
o Insufficient authorization
o Overlapping trust boundaries
• This is because the unauthorized user finds it difficult to decrypt the intercepted messages.
• There are two basic types of encryption:
o Symmetric Encryption: It uses a single key for encryption and decryption. Also
known as secret key cryptography. Simpler procedure. Difficult to verify the sender if the
key is shared by multiple users.
o Asymmetric Encryption: Uses two different keys (private and public key pair).
Also known as public key cryptography. A message encrypted with public key can only
be decrypted by the respective private key and vice versa.
o Any party can acquire a public-private key pair. Only the public key is shared
publicly.
o The senders can use the public key of the receiver to encrypt messages. Only the
user with corresponding private key can decrypt the message.
o Successful decryption can ensure confidentiality but does not assure integrity and
authenticity of the sender as anyone can encrypt the message using public key.
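The public/private key relationship above can be illustrated with textbook RSA and deliberately tiny numbers. This is a toy for intuition only: real deployments use large keys and padding schemes, and the specific primes below are a common classroom example, not anything secure.

```python
# Toy illustration of asymmetric encryption with textbook RSA (tiny numbers).
# Shows only the key relationship described above; NOT secure as written.

p, q = 61, 53
n = p * q                           # 3233, shared by both keys
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent: modular inverse of e

plaintext = 65                      # a message encoded as a number < n
ciphertext = pow(plaintext, e, n)   # encrypt with the public key (e, n)
decrypted = pow(ciphertext, d, n)   # decrypt with the private key (d, n)
print(ciphertext, decrypted)        # → 2790 65
```

Swapping the roles (encrypting with `d`, decrypting with `e`) demonstrates the "and vice versa" property above, which is the basis of digital signatures.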
Module No – 184: Identity and Access Management (IAM):
• It is a mechanism comprising policies and procedures to track and manage the user
identities and access privileges for IT resources.
• Consist of four main components:
o Authentication: Usernames+passwords, biometric, remote authentication through
registered IP or MAC addresses.
o Authorization: Access control and IT resource availability.
o User management: Creating new user-identities, password updates and managing
privileges.
o Credential management: It establishes identities and access control rules for defined user
accounts.
• As compared to PKI, the IAM uses access control policies and assigns user privileges.
• Single Sign-On: Saves the Cloud consumer from signing in to each subsequent service when
the consumer is executing an activity which requires several Cloud services.
o A security broker authorizes the consumer and creates a security context persistent
across multiple services.
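The persistent security context above can be sketched as a broker-signed token that every service verifies without re-authenticating the user. The secret, user name and token format below are invented for illustration, using Python's standard hmac module:

```python
# Minimal sketch of a single-sign-on style security context: the broker signs
# a token once; each service verifies it without re-authenticating the user.
# The secret, user name and token format are illustrative assumptions.
import hmac, hashlib

BROKER_SECRET = b"shared-only-with-trusted-services"

def issue_token(user):
    """Security broker: authenticate once, then sign a persistent context."""
    sig = hmac.new(BROKER_SECRET, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{sig}"

def verify_token(token):
    """Any participating service: accept the token without a new sign-in."""
    user, sig = token.split(":")
    expected = hmac.new(BROKER_SECRET, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token("alice")
print(verify_token(token))                # every service accepts it → True
print(verify_token("alice:forged-sig"))   # tampered token rejected → False
```

Real SSO protocols (e.g., SAML or OpenID Connect) add expiry, audience and replay protections, but the verify-without-reauthenticating idea is the same.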
Module No – 187:
• Cloud-based Security Groups: Cloud IT resources are segmented for easy management
and provisioning to separate users and groups.
o The segmentation process creates Cloud-based security groups with separate security
policies.
o These are logical groups which act as network perimeters.
o Each Cloud-based IT resource is assigned to at least one logical cloud-based security
group.
o Multiple VMs hosted over same physical server can be allocated to different cloud-
based security groups.
o Safeguard against DoS attacks, insufficient authorization and overlapping trust
boundaries threats.
o Closely related to logical network perimeter mechanism.
• Hardened Virtual Server Images: It is a process of removing unnecessary software
components from the VM templates.
o It also includes closing unnecessary ports, removing root access and guest login and
disabling unnecessary services.
o Makes the template more secured than non-hardened server image templates.
Lesson No. 37
PRIVACY ISSUES OF CLOUD COMPUTING
Module No – 188: Lack of user control:
• Data privacy issues such as unauthorized access, secondary usage of data without
permission, retention of data and data deletion assurance occur in Cloud Computing.
• With the data of a SaaS user placed in Cloud, there is a lack of user control over that data.
• A few reasons are as follows:
o Ownership and control of infrastructure: The user has neither ownership nor control of the
underlying infrastructure of the Cloud. There is a threat of theft, misuse and
unauthorized sale of the user’s data, and no surety that the Cloud server will not
read it.
o Access and transparency: In many cases, it is not clear that a Cloud service provider
can/will access the users’ data. It is also not clear that an unauthorized access can be
detected by the Cloud user/provider.
o Control over data lifecycle: The Cloud user cannot confirm that the data deleted by the
user has actually been deleted. There is no assurance of data deletion for
terminated accounts either. There is no regulation to implement a must-erase
liability on the Cloud provider.
o Changing provider: It is not clear how to completely retrieve the data from previous
provider and how to make sure that the data is completely deleted by the previous
provider.
o Notification and redress: It is not clear how to determine the responsibility of (user or
provider for) an unauthorized access; no mechanism exists for assigning that
responsibility when unauthorized access occurs.
• The deployment and running of a Cloud service may require the recruitment of highly skilled
personnel.
• For example the STEM skills (Science, Technology, Engineering and Mathematics) should
be present in the recruited people.
• The lack of STEM skilled and/or trained persons can be a Cloud security issue.
• Such people may also lack the understanding of the privacy impact of their decisions.
• Due to the rapid spread of computing devices among employees, more employees may
now introduce a privacy threat on average.
• For example multiple employees may leave their laptops unattended with a further possibility
of unencrypted sensitive data.
• The employees can access different public Cloud services through self service portals.
• Care and control must be observed regarding public Cloud access to overcome the privacy
issues.
Module No – 190: Unauthorized Secondary Usage:
• There is a high tendency that the data stored or processed over the Cloud may be put to
unauthorized usage.
• An example of legal secondary usage is selling statistics derived from Cloud consumers’
data (with the users’ permission) for targeting advertisements.
• However an illegal secondary-usage example is the selling of sales data to competitors of the
consumer.
• Therefore it may be necessary to legally address the usage of consumer’s data by the Cloud
provider.
• So far there are no measures and means to verify the illegal secondary-usage of consumers’
data by the Cloud provider/s.
• In future, a technological solution may be implemented for checking and preventing the
unauthorized secondary usage of consumers’ data.
Module No – 191: Complexity of Regulatory Compliance:
• The global nature of Cloud computing makes it complex to abide by all the rules and
regulations in different regions of the world.
• The legal bindings regarding data location is complex to implement because the data may be
replicated on multiple locations at the same time.
• It is also possible that each replicated copy of the data is managed by a different entity,
for example backup services obtained from two different providers.
• The backup provided by a single provider may be spread across different data centers which
may or may not be within the legal location-boundary.
• The rapid provisioning architecture of the Cloud makes it impossible to predict the location
of to-be-provisioned Cloud resource such as storage and VMs.
• The cross-border movement of data while in transit is very difficult to control, especially
when the data processing is outsourced to another Cloud provider. The location
assurance of such a Cloud provider is then a complex task at runtime.
• The privacy and data protection regulations in many countries restrict the trans-border flow
of personal information of the citizens.
• These countries include the EU and European Economic Area (EEA) countries, Australia,
Canada etc.
• From EU/EEA countries, the personal information can flow to countries which have
adequate protection. These include the EU/EEA countries and Canada etc.
• The flow of personal information to other countries is restricted, unless some
rules/agreements are followed by those countries.
• For example, the information can be transferred from the EU to the USA if the receiving entity
has joined the US Safe Harbor agreement.
• If the receiving country has signed a model contract with the EU country/ies then the
personal information can flow towards the receiving country.
• So far, Cloud computing does not comply with the trans-border regulations, and there is
more to be done to implement these data flow restrictions.
• A Cloud Service Provider (CSP) may be forced to hand over the consumers’ data due to a
court writ.
• For example, in a case handled by the US court of law, with state vs. the defendant, the US
govt. was allowed the access to Hotmail service (of Microsoft) through the court orders.
• The govt. always wants to check the relevance of evidence with the case. For that, the court
can allow access to consumers’ data.
• But for private entities, this situation can be avoided through the clauses of a legal agreement
binding the CSP to disallow any access (by a non-govt. entity) to the data, or to govern
the response of the CSP to any writ from such entities.
Module No – 194: Legal Uncertainty:
• Since the Cloud computing moves ahead of the law, there are legal uncertainties about the
privacy rights in the Cloud.
• Also, it is hard to predict the outcome of applying the current legal rules regarding trans-
border flow of data to Cloud computing.
• One area of uncertainty is whether the procedure of anonymizing or encrypting
personal data requires legal consent from the owner, and whether processing related to the
enhancement of data privacy is exempt from privacy protection requirements.
• Also, it is not clear whether the anonymized data (which may or may not contain personal data)
is governed by the trans-border data flow legislation.
• In short, the legal uncertainty exists regarding the application of legal frameworks for privacy
protection upon Cloud computing.
Lesson No. 38
SECURITY ISSUES OF CLOUD COMPUTING
Module No – 196: Gap in Security:
• Although the security controls for the Cloud are the same as those of other IT environments,
the lack of user control in Cloud computing introduces security risks.
• These security risks are due to a possible lack of effort for addressing the security issues by
the Cloud service provider.
• SLAs do not include any provision of the security procedures made necessary by the
consumer or through any standard.
• The gap in security also depends upon the type of service (IaaS, PaaS & SaaS).
• The more privileges given to the consumer (for example in IaaS), the more responsibility of
security procedures lies with the consumer.
• The consumer may need to gain the knowledge of the security procedures of provider.
• The provider gives some security recommendations to IaaS and PaaS consumers.
• For SaaS, the consumer needs to implement its own identity management system for access
security.
• Generally, it is very difficult to implement protection throughout the Cloud. In few cases the
Cloud providers are bound by law for the protection of personal data of the citizens.
• It is difficult to ensure the standardized security when a Cloud provider is outsourcing
resources from other providers.
• Currently the providers take no responsibility/liability for deletion, loss or alteration of data.
• The terms of service are usually in favor of the provider.
Module No – 198: Vendor Lock-in:
• Vendor lock-in is the compulsion of relying on a single vendor because moving to another
vendor or server is impractical; for example, the data format may have to be changed
before the data can be sent, because one provider’s standards differ from another’s.
• Cloud computing in today’s time lacks interoperability standards.
• There are certain limitations, such as:
o Difference between common hypervisors.
o Gap in standard APIs for management functions.
o Lack of commonly agreed data formats.
o Issues with machine-to-machine interoperability of web services.
• The lack of standards makes it difficult to establish security frameworks for heterogeneous
environments.
• People mostly depend upon common security best practices.
• Since there is no standardized communication between Cloud providers and no standardized
data export format, it is difficult to migrate from one Cloud provider to another or to bring
back the data and process it in-house.
Module No – 199: Inadequate Data Deletion:
• So far there is no surety or confirmation functionality for the deleted data being really
deleted and non-recoverable by the service provider.
• This is due to lack of consumer control over life cycle of the data (as discussed before).
• This problem is increased by the presence of duplicate copies of the data.
• It might not be possible to delete a virtual disk completely because several consumers might
be sharing it or the data of multiple consumers resides over same disk.
• For IaaS and PaaS, the reallocation of VMs to subsequent consumers may introduce the
problem of data persistency across multiple reallocations.
• This problem exists until the VM is completely deleted.
• For SaaS, each consumer is one of the users of a multitenant application. The customer’s
data is available each time the customer logs-in.
• The data is deleted when the SaaS consumer’s subscription ends.
• There is correspondingly higher risk to customers’ data when the Cloud IT-resources (such
as VM and storage) are reused or reallocated to a subsequent consumer.
• As discussed previously, the management interfaces are available through remote access via
Internet.
• This poses an increased risk compared to traditional hosting providers.
• There can be vulnerabilities associated with browsers and remote access.
• These vulnerabilities can result in the grant of malicious access to a large set of resources.
• This increased risk is persistent even if the access is controlled by a password.
• In order to provide high level of reliability and performance, a Cloud provider makes
multiple copies of the data and store them at different locations.
• This introduces many vulnerabilities.
• There is a possibility of data loss from Storage as a Service.
• A simple solution is to place data at consumer’s premises and use the Cloud to store
(possibly encrypted) backup of data.
• A loss of data may occur before taking backup.
• A subset of the data may get separated and unlinked from the rest and thus become
unrecoverable.
• The failure/loss of data-keys may significantly destroy the data context.
• Sometimes the consumers of traditional (non-Cloud) backup service suffer a complete loss
of their data on non-payment of periodic fee.
• In general, the Cloud services show more resiliency than these traditional (non-Cloud)
services.
Module No – 202: Isolation Failure:
• The multi-tenant SaaS applications developed by Cloud providers use logical/virtual
partitioning of the data of each consumer, to keep one tenant’s data hidden from another.
• It is possible that such applications store the personal and financial data of the
consumers on the Cloud.
• The responsibility of securing this data lies with the Cloud provider.
• Due to the possibility of the failure of data separation mechanisms, the other tenants can
access the sensitive information.
• Virtualization is widely used in Cloud computing. Although the VMs are isolated from each
other, virtualization-based attacks may compromise the hosting server and hence
expose all the hosted VMs to the attacker.
Module No – 203: Missing Assurance and Transparency:
• As discussed before, the Cloud provider takes lesser liability in case of data loss.
• Therefore, the consumers should obtain some assurance from the Cloud provider regarding
the safety of their data.
• Consumers may also demand for getting the warning/s regarding any attack/unauthorized
access/loss of data.
• A few frameworks exist for security assurance in Cloud. The Cloud providers offer the
assurance on the basis of these frameworks.
• However these assurances may not be applied in case of frequent data accesses and/or in
case of some instances such as isolation failure (discussed previously).
• Still, there is no compensation offered by the Cloud providers for the incidents of data loss.
• The best assurance for data security in Cloud computing is achievable through keeping the
data in private Cloud.
• Although automated data security assurance evaluation frameworks exist, they still need
to evolve in order to comply with all the security issues discussed in this course.
• A Cloud consumer should be able to audit the data processing over Cloud to ensure that the
Cloud procedures are in compliance with the security policy of the consumer.
• Similarly the Cloud consumers may want to monitor SLA compliance by the provider but
the complexity of Cloud infrastructure makes it very difficult to extract the appropriate
information or to perform a correct analysis.
• Cloud providers could implement the internal compliance monitoring controls in addition to
external audit process.
• A ‘right to audit’ may even be allowed for those particular consumers who
have regulatory compliance responsibilities.
• Although the existing procedures for audit can be applied to Cloud computing, the
provision of a full audit trail with the public Cloud models is still an unsolved issue.
Lesson No. 39
TRUST ISSUES OF CLOUD COMPUTING
Module No – 206: Trust in the Clouds:
• Cloud consumers have to trust the Cloud mechanisms for storing and processing the
sensitive data.
• Traditionally, a security perimeter (such as a firewall) is instantiated to set up a trust boundary
within which there is self-control over computing resources and where the sensitive
data/information is stored and processed.
• The network provides trusted links to other trusted end hosts.
• This may work perfectly for the Internet but may not work for public and hybrid Clouds.
• This is because the data may be stored and/or processed beyond the security perimeter, as
in the supply chain scenarios discussed before.
• The consumers have to extend the trust boundaries to the Cloud provider.
• Therefore, the consumers should only trust the Cloud provider if the information about the
reliability of internal mechanisms is provided by trusted entities such as consumer groups,
auditors, security experts, reputed companies and established Cloud providers etc.
• The trust relationships can be the decision affecting factors for adopting/accepting a
particular security and privacy solution.
• Trust attains a higher level of importance if personal or business critical information is to be
stored in Cloud.
• Therefore, the Cloud providers have to have high trust from the consumers.
Module No – 207: Lack of Consumer Trust:
• In the past, various surveys in Europe have revealed a lack of consumer trust in the
protection of their data kept online.
• Up to 70% of Europeans were concerned about the unauthorized secondary usage of their
data.
• The survey about trust on Cloud provider showed the following statistics:
o Reputation: 29%
o Recommendation from trusted party: 27%
o Trial experience: 20%
o Contractual: 20%
o Others: 4%
• The consumer trust depends upon how well the data protection provided by the
Cloud provider matches the consumer’s expectations.
• A few such expectations include the regulatory compliance of data handling procedures and
control over the data lifecycle even in supply-chain Cloud provisioning.
• 70% of the business users (in selected regions of the world) are already using private Clouds
according to a study.
• However different surveys showed that the enterprises are concerned about:
o Data security: 70%
o SLA compliance: 75%
o Vendor lock-in: 79%
o Interoperability: 63%
Module No – 208: Weak Trust Relationships:
• The Cloud provider/s may be using a supply chain mechanism through the IT
resources of subcontractors, i.e., one Cloud provider obtains services from another provider
and offers them onward to the consumer.
• Handing the consumers’ data over to a subcontractor (a third party) may jeopardize its
security and privacy (as discussed before) and thus weakens the trust relationships.
• Even if the trust relationships in the service delivery chain are weak, at least some trust
exists so that rapid provisioning of the Cloud services can be performed.
• Significant business risks may arise when critical data is placed on cloud and the consumer
has lack of control over the passing of this data to a subcontractor.
• So the trust along the service delivery chain from the consumer to Cloud provider is non-
transitive.
• There is a lack of transparency for the consumer in the process of data flow. The consumer
may even not know the identity of the subcontractor/s.
• In fact, the ‘on-demand’ and ‘pay-as-you-go’ models may be based upon weak trust
relationships.
• This is because new providers have to be added on the go to provide the extra capacity on
short notice.
Module No – 209: Lack of Consensus About Trust Management Approaches to Be Used:
• The consensus about the use of trust management approaches for Cloud computing is
missing.
• Trust measurement is a major challenge due to the difficulty of contextual representation of
trust.
• Standardized trust models need to be created for the evaluation and assurance of
accountability.
• Almost all of the existing models for trust evaluation are inadequate for Cloud computing.
• The existing trust evaluation models for Cloud computing only partially cover the trust
categories.
• Trust models lack suitable metrics for accountability.
• There is no consensus on type of evidence required for the verification of the effectiveness
of trust mechanisms.
• In order to monitor and evaluate trust, systematic trust management is required.
• There should be a system to manage the trust.
• The trust management system should be able to measure the “trustfulness” of the Cloud
services.
• The following attributes can be considered:
o Data integrity: Consisting of security, privacy and accuracy.
o Security of consumers’ personal data.
o Credibility: Measured through QoS.
o Turnaround efficiency: The actual vs. promised turnaround-time. It is the time from
placement of consumer’s task to the finishing of that task.
o Availability of Cloud service provider’s resources and services.
o Reliability or success rate of performing the agreed-upon functions within the
agreed-upon time deadline.
o Adaptability with reference to avoidance of single point of failures through
redundant processing and data storage.
o Customer support provided by the Cloud provider.
o The consumer feedback on the service being offered.
• These attributes can be graded and trust computation can be performed. The computed
value can be saved for future comparison.
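The grading and trust computation described above can be sketched as follows. The attribute names, grades and weights are illustrative assumptions, not part of any standard trust model:

```python
# Hypothetical sketch: grade the trust attributes on a 0.0-1.0 scale and
# combine them into a weighted trust score that can be saved for future
# comparison. All names, grades and weights below are assumed values.

grades = {
    "data_integrity": 0.9,
    "credibility_qos": 0.8,
    "turnaround_efficiency": 0.7,   # actual vs. promised turnaround time
    "availability": 0.95,
    "reliability": 0.85,
    "adaptability": 0.8,
    "customer_support": 0.6,
    "consumer_feedback": 0.75,
}

# Relative importance of each attribute (assumed; sums to 1.0).
weights = {
    "data_integrity": 0.25,
    "credibility_qos": 0.10,
    "turnaround_efficiency": 0.10,
    "availability": 0.15,
    "reliability": 0.15,
    "adaptability": 0.10,
    "customer_support": 0.05,
    "consumer_feedback": 0.10,
}

def trust_score(grades, weights):
    """Weighted sum of attribute grades."""
    return sum(grades[a] * weights[a] for a in grades)

score = trust_score(grades, weights)
print(f"Computed trust score: {score:.3f}")
```

The stored score can later be recomputed from fresh grades and compared with the saved value to track whether the provider's trustfulness is improving or degrading.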
• In this module we shall briefly discuss the possible approaches to solve the privacy, security
and trust issues in Cloud.
• There are three main dimensions in this regard:
• Innovative regulatory frameworks to facilitate the Cloud operations as well as to solve the
possible issues regarding privacy, security and trust.
• Responsible company governance should be exhibited by the provider to show the intention
of safeguarding the consumer’s data, and to prove this intention through audit.
• Use of various supporting technologies for privacy enhancement, security mechanisms,
encryption, anonymization etc.
• By using a combination of these dimensions, the consumers can be reassured of the security
and privacy of their data and the Cloud provider can earn the trust.
Lesson No. 40
OPEN ISSUES IN CLOUD
Module No – 212: Overview:
• The real time applications require high performance and high degree of predictability.
• Cloud computing shows some performance issues which are similar to those of other forms
of distributed computing.
o Latency: As measured through round-trip time (the time from sending a
message to receiving a response), latency is not predictable for Internet-based
communications.
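As a self-contained illustration of round-trip time, the sketch below times a message/response exchange against a local TCP echo server. The local server is a stand-in assumption; over the Internet this measured time is exactly what the module describes as unpredictable:

```python
# Sketch: measure application-level round-trip time (RTT) by timing a
# send/receive exchange. A local echo server keeps the example
# self-contained; real Internet RTTs vary unpredictably.
import socket
import threading
import time

def echo_server(sock):
    conn, _ = sock.accept()
    data = conn.recv(1024)
    conn.sendall(data)          # echo the message back
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))   # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
start = time.perf_counter()
client.sendall(b"ping")         # send the message
reply = client.recv(1024)       # wait for the response
rtt = time.perf_counter() - start
client.close()
print(f"RTT: {rtt * 1000:.3f} ms, reply={reply!r}")
```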
o Offline Data Synchronization: For offline updates to data, synchronization
with all the copies of data on the Cloud is a problem. For example, if a user
loses Internet connectivity, the changes made offline remain unsynchronized
with the Cloud copy until the user reconnects. The solution to this problem
requires mechanisms of version control, group collaboration and
synchronization capabilities.
o Scalable Programming: The legacy applications have to be updated to fully benefit
from scalable computing capacity feature of Cloud computing.
o Data Storage Management: The consumers require control over the data life cycle
and timely information regarding any intrusion or unauthorized access to the data.
• Reliability: It is the probability that a system will offer failure-free service for a specified
period of time in a specified environment.
• It depends upon the Cloud infrastructure of the provider and the connectivity to the
subscribed services.
• Measuring the reliability of a specific Cloud will be difficult due to the complexity of Cloud
procedures.
• Several factors affect the Cloud reliability:
o Network Dependence: The Cloud relies on the network, so the unreliability of
the Internet (e.g., latency) and the associated attacks affect Cloud reliability.
o Safety-Critical Processing: The critical applications and hardware such as controls
of avionics, nuclear material and medical devices may harm the human life and/or
cause the loss of property.
▪ These are not suitable to be hosted over the Cloud.
Module No – 215:
• Economic Goals: The Cloud provides economic benefits such as saving upfront costs,
eliminating maintenance costs for the consumer, and providing consumers with economies
of scale.
• However, there are a number of economic risks associated with Cloud computing.
• SLA Evaluation: The lack of automated mechanisms for checking SLA compliance by the
provider requires the development of a common template that could cover the majority of
SLA clauses and give an overview of SLA compliance.
• This would be useful in deciding whether to invest the time and money in a manual audit.
• Portability of Workloads: An initial barrier to Cloud adoption is the need for a reliable
and secure mechanism to transfer the consumer’s data to the Cloud; porting workloads to
other providers is likewise an open issue.
• Interoperability between Cloud Providers: The consumers face, or fear, vendor
lock-in due to the lack of interoperability among different providers, which makes moving
data from one Cloud provider to another difficult.
• Disaster Recovery: The physical and/or electronic disaster recovery requires the
implementation of recovery plans for hardware as well as software based disasters so that the
provider and consumers can be saved from economic and performance losses.
Lesson No. 41
DISASTER RECOVERY IN CLOUD COMPUTING
Module No – 219: Understanding the threats:
• Disk Failure: Disk drives are electro-mechanical devices which wear out and eventually fail.
• Failure can also be due to a disaster such as fire or flood, or due to theft.
• All mechanical devices have a mean time between failures (MTBF).
• The MTBF values given by the manufacturers are usually generic values calculated for a set
of devices; an individual disk may fail well before its rated MTBF.
• Therefore, instead of relying upon the MTBF, there must be a disaster recovery plan for the
disk failure.
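To see why a generic MTBF figure is no substitute for a recovery plan, steady-state availability can be estimated as MTBF / (MTBF + MTTR). The figures below are assumed for illustration only:

```python
# Illustrative calculation (not from the handout): steady-state
# availability of a disk from MTBF and mean time to repair (MTTR).

def availability(mtbf_hours, mttr_hours):
    """Fraction of time the device is expected to be operational."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

mtbf = 5000.0   # manufacturer's generic MTBF in hours (assumed value)
mttr = 8.0      # assumed hours to replace the disk and restore data

a = availability(mtbf, mttr)
print(f"Availability: {a:.4%}")
# Even a high MTBF leaves a non-zero expectation of failure, so a
# disaster recovery plan is still needed.
```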
• The following strategies can be utilized:
o Traditional approach: It is to have a backup on separate storage. If the disk fails due
to any disaster, the data can be recovered onto a new disk from the backup. But if the
backup is also destroyed or stolen, then there is a complete loss of data. Also, the
recovery process is time consuming.
o Redundant Array of Independent Disks (RAID): It is a system consisting of multiple
disk drives. Multiple copies of data are maintained and stored in a distributed way
over the disks. If one disk fails, the disk is simply replaced and the RAID system
copies the data over the new disk. But a backup is still required, because if the entire
RAID is destroyed or stolen, there is a complete data loss.
o Cloud based data storage and backup: Cloud not only provides the facility of data
access over the Internet, but it also provides enhanced data replication. The
replication is sometimes performed by default without any extra charges. The Cloud
based backup is stored at a remote site so it is an extra advantage as compared to
onsite backup placement.
o Further, the Cloud based backup is readily available and thus reduces the downtime
as compared to recovery using traditional tape-based backup.
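As a toy illustration of how a RAID system can rebuild a failed disk, the sketch below uses XOR parity in the style of RAID 5. The handout does not name a specific RAID level, so this detail is an assumption:

```python
# Toy sketch of XOR parity, the redundancy idea behind RAID 5:
# a parity block is stored alongside the data blocks, and any one
# failed block can be reconstructed from the survivors.

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length data blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Data striped across three "disks" plus one parity block.
disk0 = b"AAAA"
disk1 = b"BBBB"
disk2 = b"CCCC"
parity = xor_blocks([disk0, disk1, disk2])

# Disk 1 fails: reconstruct it from the surviving disks and the parity.
recovered = xor_blocks([disk0, disk2, parity])
assert recovered == disk1
print("disk1 recovered:", recovered)
```

As the handout notes, parity only protects against a single disk failure; an off-site backup is still needed if the entire array is destroyed or stolen.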
• Power Failure or Disruption: Computers can be damaged due to a power surge
caused by a storm or some fault in power supply system.
• Power surge may permanently damage the disk storage.
• The user loses all the unsaved data when a power blackout happens.
• A few disaster recovery plans are as follows:
o Traditionally, surge protector devices (which absorb power surges) are used. But
these devices do not help save the (unsaved) data in case of a blackout.
o The in-house data centers can use huge and expensive uninterruptable power
supply (UPS) devices and/or generators.
o Another solution is to shift the data to another site. But this is expensive and time
consuming.
o The best option is to move the data center to Cloud. The Cloud providers have
better(and expensive) power backups and their cost is divided among the consumers.
Also, the Cloud mechanism may automatically shift the data to a remote site on
another power grid (in case of power failures of longer duration).
• Computer Viruses: While surfing the web, users may download and install
software and/or share drives such as jump drives across their computing devices.
• These devices are at risk of attacks through computer viruses and spyware.
• Traditionally, the following techniques have been used for safeguarding against the virus
attacks:
o Making sure each computer has anti-virus installed and set to auto-update to get the
most recent virus and spyware signatures.
o Restrict the user privilege to install software, so that a user cannot accidentally
download and install illegal or harmful software.
• Fire, Flood & Disgruntled Employees: The fire as well as the fire extinguishing practices
can destroy the computing resources, data and backup.
• Similarly the heavy and/or unexpected rainfall may cause an entire block or whole city
including the computing equipment to be affected by a flood.
• Similarly, a disgruntled employee can cause harm by launching a computer virus, deleting
files or leaking passwords.
• Traditionally, the office equipment is insured to lower the monetary damage. Backup is used
for data protection. Data centers use special mechanisms for fire extinguishing without
water sprinklers.
• By moving the data center to the Cloud, the consumer is freed from the effort and
expenditure of fire prevention systems as well as of data recovery. The Cloud provider
manages all these procedures and includes the cost as a minimal part of the rental.
• Unlike fire, floods cannot be avoided or put off.
• The only possibility to avoid the damage due to floods is to avoid setting up the data center
in a flood zone.
• Similarly, choose a Cloud provider which is outside any flood zone.
• Companies apply access control and backup to limit the access to data as well as the
damage to data caused by disgruntled employees.
• In Cloud, the Identity as a Service (IDaaS) based single sign-on excludes the access privileges
of terminated employees as quickly as possible to prevent any damages.
• Lost Equipment & Desktop Failure: The loss of equipment such as a laptop may
immediately lead to the loss of data and a possible loss of identity.
• If the data stored on the lost device is confidential then this may lead to even more damage.
• Traditionally the risk of damage due to lost or stolen devices is reduced by keeping backup
and to safeguard the sensitive data, login and strong password for the devices are used.
• But even strong passwords are not difficult for experienced hackers to break. Still, most
criminals are prevented from accessing the data.
• For Cloud computing, the data can be synchronized over multiple devices using the
Cloud service. Therefore, the user can get the data from the online interface or from other
synced devices.
• In case of desktop failure, the user (such as an employee of a company) becomes offline
until the worn out desktop is replaced.
• If there was no backup, the data stored on the failed desktop may become unrecoverable.
• Traditionally, data backup is kept for the desktops in an enterprise. The backup is stored on
a separate computer. In case of desktop failure, the maintenance staff tries to provide an
alternative desktop and restore the data as soon as possible.
• Whereas in Cloud, the employees work on the instances of IaaS or Desktop as a Service by
using the local desktops.
• In case of desktop failure, the employee can just walk to another computer and log in to the
Cloud service to resume the work.
• Server Failure & Network Failure: Just like the desktops, the servers can also fail.
• The replacement of a blade server is a relatively simple process, and blade servers are
mostly preferred by users.
• Of course, there has to be a replacement server in stock to swap in for the failed server.
• Traditionally the enterprises keep redundant servers to quickly replace a failed server.
• In case of Cloud computing, the providers of IaaS and PaaS manage to provide 99.9% up-
time through server redundancy and failover systems. Therefore the Cloud consumers do
not have to worry about server failure.
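The 99.9% up-time figure translates into a concrete downtime budget; the conversion below is simple arithmetic, not a provider guarantee:

```python
# Quick calculation: how much downtime per year a given up-time
# assurance actually allows.

def allowed_downtime_hours(uptime_fraction, hours_per_year=24 * 365):
    """Downtime permitted per year at the given up-time level."""
    return (1.0 - uptime_fraction) * hours_per_year

print(f"99.9%  -> {allowed_downtime_hours(0.999):.2f} h/yr")   # ~8.76 h
print(f"99.99% -> {allowed_downtime_hours(0.9999):.2f} h/yr")  # ~0.88 h
```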
• The network failure can occur due to a faulty device and will cause downtime.
• Traditionally, the users keep 3G and 4G wireless hotspot devices as a backup. While the
enterprises obtain redundant Internet connections from different providers.
• Since the Cloud consumers access the Cloud IT resources through the Internet, the
consumers have to have redundant connections and/or backup devices for connectivity.
• Same is true for the Cloud service provider. The 99.9% up-time is assured due to
backup/redundant Network connections.
• Database System Failure & phone system failure: Most of the companies rely upon
database systems to store a wide range of data.
• There are many applications dependent upon database in corporate environment such as
customers record keeping, sale-purchase and HR systems etc.
• The failure of the database will obviously make the dependent applications unavailable.
• Traditionally, companies use either a backup or replication of database instances. The
former results in downtime of the database system, while the latter results in minimal or
no downtime but is more complicated to implement.
• The Cloud based storage and database systems use replication to minimize the downtime
with the help of failover systems.
• Many companies maintain phone systems for conference calling, voice mail and call
forwarding.
• Although employees can switch to mobile phones if the phone system fails, the
customers are left unaware of the phone numbers to use to reach the company until the
phone system recovers.
• Traditionally, the solutions are applied to reduce the impact of phone failure.
• Cloud based phone systems on the other hand provide reliable and failure safe telephone
service. Internally, the redundancy is used in the implementation.
• The process of reducing risks will often have some cost, for example resource
redundancy and backups.
• This indicates that investment on risk-reduction mechanisms will be limited.
• The IT staff should therefore evaluate and classify each risk according to its impact upon the
routine operations of the company.
• A tabular representation of the risks, the probability of occurrence and the business
continuity impact can be shown.
• The next step is to formally document the disaster recovery plan (DRP).
• A template of DRP can contain the plan overview, goals and objectives, types of events
covered, risk analysis and the mitigation techniques for each type of risk identified in earlier
step.
• Data Access Standards: Before developing Cloud based applications, the consumers
should make sure that the application interfaces provided in the Cloud are generic and/or
that data adaptors can be developed, so that portability and interoperability of the Cloud
applications are possible when required (e.g., easily shifting from one provider to another).
• Data Separation: The consumer should make sure that proactive measures (such as
logical separation) are implemented at the provider’s end to separate sensitive and
non-sensitive data.
• Data Integrity: Consumers should use checksum and replication techniques to ensure the
integrity of the data and to detect any violations of it (i.e., any unintended changes).
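A minimal sketch of the checksum technique mentioned above: compute a digest before the data goes to the Cloud and verify it on retrieval. The choice of SHA-256 is an assumption:

```python
# Sketch: checksum-based integrity check. A digest computed before
# upload is compared with one computed after download; any change to
# the data changes the digest and is detected.
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest of the data (algorithm choice is an assumption)."""
    return hashlib.sha256(data).hexdigest()

original = b"customer records v1"
digest_before_upload = checksum(original)

# ... data stored in the Cloud, later retrieved ...
retrieved = b"customer records v1"
assert checksum(retrieved) == digest_before_upload   # integrity intact

tampered = b"customer records v2"
assert checksum(tampered) != digest_before_upload    # violation detected
print("integrity check passed")
```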
• Data Regulations: The consumer is responsible for ensuring that the provider complies
with all the regulations regarding data storage and processing which are applicable to the
consumer (for example, a rule that the data must be kept within a particular region).
• Data Disposition: The consumer should make sure that the provider offers mechanisms
which delete the consumer’s data whenever the consumer requests it. Also make sure that
evidence or proof of data deletion is generated.
• Data Recovery: The consumer should examine the data backup, archiving and recovery
procedures of the provider and make sure they are satisfactory.
• VM vulnerabilities: When the provider is offering Cloud IT resources in the form of VMs,
the consumer should make sure that the provider has implemented sufficient mechanisms to
avoid attacks from other VMs, physical host and network.
• Also make sure the existence of IDS/IPS systems and network segmentation techniques
such as VLANs.
• VM Migration: The consumers should plan for VM migration across different providers,
just in case; providers that support migrating VMs (and their data) to another Cloud
should be preferred.
• Time-critical Software: Since public Clouds have unreliable response times, the
consumers should avoid using the Cloud for the deployment of time-critical software
(which must respond within a fixed time).
• Safety-critical Software: Due to the unconfirmed reliability of Cloud subsystems, the use of
Cloud for deployment of safety-critical software is discouraged.
• Application Development Tools: When using the application development tools provided
by the service provider, preference should be given to tools which support the
application development lifecycle with security features integrated.
• Application Runtime Support: Before deploying an application over the Cloud, the
consumer should make sure that the library calls used in the application work correctly
and that all those libraries are dependable in terms of performance and functionality
(i.e., do not crash or hang at run time).
• Application Configuration: The consumer should make sure that the applications being
deployed over the Cloud can be configured to run in a secured environment such as in a
VLAN segment.
• Also make sure that various security frameworks can be integrated with the applications
according to requirements of security policies of the consumer.
• Standard Programming Languages: Whenever possible, the consumers should prefer
those Clouds which work with standardized programming languages and tools; providers
offering only unfamiliar, non-standard languages should be avoided.
Lesson No. 42
MIGRATING TO THE CLOUD
Module No – 231: Define System Goals and Requirements:
• The migration to Cloud should be well planned. The first step should be to define the
system goals and requirements. The following considerations are important:
o Data security and privacy requirements
o Site capacity plan: The Cloud IT resources needed initially for application to operate.
o Scalability requirements at runtime
o System uptime requirements
o Business continuity and disaster requirements
o Budget requirements
o Operating system and programming language requirements
o Type of Cloud: public, private or hybrid
o Single tenant or multitenant solution requirements
o Data backup requirements
o Client device support requirements such as for desktop, tab or smartphone
o Training requirements
o Programming API requirements
o Data export requirements
o Reporting requirements
• [Jamsa, K. (2012). Cloud computing. Jones & Bartlett Publishers]
Module No – 232: Protect existing data and know your application characteristics:
• It is highly recommended that before migrating to the Cloud, the consumer should back up
the data. This will help in restoring the data to a certain point in time.
• The consumer should discuss with provider and agree upon a periodic backup plan.
• The data life cycle and disposal terms and conditions should be finalized at the start.
• If the consumer is required to fulfill any regulatory requirements regarding data privacy,
storage and access then this should be discussed with the provider and be included in the
legal document of the Cloud agreement.
• The consumer should know the IT resource requirements of the application being deployed
over the Cloud.
• The following important features should be known:
o High and low demand periods in terms of time
o Average simultaneous users
o Disk storage requirements
o Database and replication requirements
o RAM usage
o Bandwidth consumption by the application
o Any requirement related to data caching
Module No – 233: Establish a Realistic Deployment Schedule, Review Budget and Identify
IT Governance Issues:
• Many companies use a planned schedule for Cloud migration to provide enough time for
training and testing the application after deployment.
• Some companies use a beta-release to allow employees to interact with the Cloud based
version to provide feedback and to perform testing.
• Many companies use key budget factors such as running cost of in-house datacenter, payrolls
of the IT staff, software licensing costs and hardware maintenance costs.
• This helps in calculation of total cost of ownership (TCO) of Cloud based solution in
comparison.
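A minimal sketch of such a TCO comparison using the budget factors listed above; all cost figures are assumed for illustration:

```python
# Hypothetical sketch: compare in-house annual TCO against a Cloud
# subscription using the key budget factors the module lists.
# Every figure below is an assumed example value.

def inhouse_tco(datacenter_running, it_payroll, sw_licensing, hw_maintenance):
    """Annual total cost of ownership for an in-house deployment."""
    return datacenter_running + it_payroll + sw_licensing + hw_maintenance

def cloud_tco(monthly_subscription, months=12):
    """Annual cost of an equivalent Cloud-based solution."""
    return monthly_subscription * months

inhouse = inhouse_tco(
    datacenter_running=120_000,   # power, cooling, floor space
    it_payroll=200_000,
    sw_licensing=50_000,
    hw_maintenance=30_000,
)
cloud = cloud_tco(monthly_subscription=25_000)

print(f"In-house: ${inhouse:,}/yr  Cloud: ${cloud:,}/yr")
print("Cloud is cheaper" if cloud < inhouse else "In-house is cheaper")
```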
• Many Cloud providers offer solutions at lower price than in-house deployments.
• Regarding the IT governance requirements, the following are important points:
o Identify how to align the Cloud solution with company’s business strategy.
o Identify the controls needed within and outside the Cloud based solution so that the
application can work correctly.
o Describe the access control policies for various users
o Describe how the Cloud provider logs the errors and system events and how to
access the log and performance monitoring tools made available to the consumer.
• Identify functional and non-functional requirements: Before beginning the design phase
of a Cloud application, the system requirements must be obtained and finalized.
• Personal meeting may be very helpful in this regard.
• Identification of errors and omissions at an early stage will save considerable cost and time later.
• The system requirements are of two types:
o Functional
o Non-functional
• Functional requirements: Define the specific tasks the system will perform. These are
provided by the system analyst to the designer.
• Non-functional requirements: These are usually related to quality metrics such as
performance, reliability and maintainability.
• Existing & Future capacity: If the application is being migrated to Cloud, then the current
requirement of IT resources should be evaluated and used for initial deployment as well as
for horizontal or vertical scaling configuration.
• Configuration management: Since the Cloud based solutions are accessed through any
OS, browser and device, therefore the interfaces of the application should be able to render
the contents with respect to OS, browser and user device.
• Deployment: The deployment issues such as related to OS, browser and devices should be
addressed for the initial deployment as well as for future updates.
• Environment (Green computing): Design considerations for the Cloud based solution
should include power-efficient design (reduced power consumption) in order to reduce the
environmental effect and carbon footprint of the Cloud based solution.
• Disaster recovery: The Cloud solution design should have consideration of disaster recovery
mechanisms. The potential risks for business continuity should be identified and cost
effective mitigation techniques should be configured for these risks.
• Interoperability: The design considerations should include the possibility of interoperability
between different Cloud solutions in terms of data exchange (without data mismatch
during transfer).
• Maintainability: The Cloud solution should be designed to increase the reusability of code
through loose coupling of the modules. This will lower the maintenance cost.
• Reliability: Design should include the consideration for hardware failure events. The
redundant configuration might be applied according to mean time between failure (MTBF)
for each hardware device or establish a reasonable downtime.
• Response time: The response time should be as low as possible, specifically for online
form submissions and reports.
• Robustness: It refers to the continuous working capability of the solution despite the errors
or system failure. This can be complemented with Cloud resource usage monitoring for
timely alarm for critical events.
• Security: The developer should consider the Cloud based security and privacy issues while
designing.
• Testability: Test cases should be developed to test the fulfilment of functional and non-
functional requirements of the solution.
• Usability: The design can be improved for usability by implementing a prototype and
getting users’ reviews to enhance the ease of usability of the Cloud solution.
Lesson No. 43
CLOUD APPLICATION SCALABILITY AND RESOURCE SCHEDULING
Module No – 239: Cloud Application Scalability:
• Review Load Balancing Process: Cloud based solutions should be able to scale up or
down according to demand.
o Remember, the scaling out and scaling up mean acquiring new resources and
upgrading the resources respectively. Scaling in and scaling down are exactly the
reverse of these.
o There should be a load balancer module, especially in the case of horizontal scaling.
o Load balancing or load allocating (in this regard) is performed by distribution of
workload (which can be in the form of clients’ requests) to Cloud IT resources
acquired by the Cloud solution.
o The allocation pattern can be round robin, random or a more complex
algorithm containing multiple parameters. (More on this in a later module.)
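The round-robin pattern mentioned above can be sketched in a few lines of Python. This is only a toy illustration, and the server names are hypothetical:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distributes incoming requests across a fixed pool of servers in turn."""
    def __init__(self, servers):
        self._pool = cycle(servers)

    def route(self, request):
        # Pick the next server in circular order, regardless of its load.
        server = next(self._pool)
        return server, request

balancer = RoundRobinBalancer(["vm-1", "vm-2", "vm-3"])
assignments = [balancer.route(f"req-{i}")[0] for i in range(6)]
print(assignments)  # each VM receives every third request
```

A production load balancer would also track server health and current load; round robin simply ignores both, which is why more complex, multi-parameter algorithms exist.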
• Application Design: Cloud based solutions should be designed neither for no scaling nor for
unlimited scaling.
o There should be a balanced design of Cloud application regarding scaling with
reasonable expectations.
o Both horizontal and vertical scaling options should be explored either individually or
in combination.
• Minimize objects on key pages: Identify the key pages such as home page, forms and
frequently visited pages of Cloud based solution.
o Reduce the number of objects such as graphics, animation, audio etc. from these
pages so that they can load quickly.
• Selecting measurement points: Remember the rule that 20% of the code usually performs
80% of the processing.
o Identify such code and apply scaling to it.
o Otherwise applying scaling may not have the desired performance improvements.
• Analyze database operations: The read/write operations should be analyzed for
improving performance.
o The read operations are non-conflicting and hence can be performed on replicated
databases (horizontal scaling).
o But write operations on one replica database requires the synchronization of all
database instances and hence the horizontal scaling becomes time consuming.
o The statistics of database operations should be used for decision about horizontal
scaling.
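The read/write asymmetry described above can be made concrete with a minimal sketch, assuming an in-memory key-value store stands in for a replicated database:

```python
import random

class ReplicatedStore:
    """Toy model: reads go to any replica, writes must reach all replicas."""
    def __init__(self, n_replicas):
        self.replicas = [dict() for _ in range(n_replicas)]

    def read(self, key):
        # Reads are non-conflicting: any single replica can serve them,
        # so adding replicas (horizontal scaling) speeds reads up.
        replica = random.choice(self.replicas)
        return replica.get(key)

    def write(self, key, value):
        # A write must be propagated to every replica before it completes,
        # which is why horizontal scaling slows write-heavy workloads down.
        for replica in self.replicas:
            replica[key] = value

store = ReplicatedStore(3)
store.write("user:1", "active")
print(store.read("user:1"))  # "active", from whichever replica was chosen
```

Gathering statistics on the read/write ratio, as the handout suggests, tells you whether adding replicas will help or hurt.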
• Evaluate system's data logging requirements: The monitoring system regarding the
performance and event logging may be consuming disk space and CPU.
o Evaluate the necessity of logging operations before applying or periodically
afterwards, and tune them to reduce disk storage and CPU wastage.
Module No – 241: Cloud Application Scalability:
• Capacity planning vs Scalability: Capacity planning is planning for the resources needed
at a specific time by the application.
o Scalability means acquiring additional resources to process the increasing workload.
o Both capacity planning and scalability should be performed in harmony.
• Diminishing return: The scaling should not be performed beyond a point where there is
no corresponding improvement in performance.
• Performance tuning: In addition to scaling, the application performance should be tuned
by reducing graphics, page load time and response time.
o Additionally, caching should be applied. This includes the use of faster hard disks,
serving content from RAM and optimizing the code using the 20/80 rule.
• Cost & Time-Based Resource Scheduling: Time based scheduling may miss some tasks’
deadlines or may prove to be expensive if over provisioning of IT resources is used to meet
deadlines.
o The cost based scheduling may miss some deadlines and/or cause starvation for some
tasks.
o It is better to use a hybrid approach for resource scheduling to gain cost benefits as well as
to minimize task deadline violations.
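One way to sketch such a hybrid approach is to score each candidate resource by a weighted sum of monetary cost and deadline violation. The weights, speeds and prices below are hypothetical, chosen only to illustrate the trade-off:

```python
def hybrid_schedule(task_runtime, deadline, resources, w_cost=0.5, w_time=0.5):
    """Pick the resource minimizing a weighted blend of cost and lateness."""
    def score(r):
        finish = r["available_at"] + task_runtime / r["speed"]
        lateness = max(0.0, finish - deadline)            # deadline-miss penalty
        cost = r["price_per_hour"] * (task_runtime / r["speed"])
        return w_cost * cost + w_time * lateness
    return min(resources, key=score)

resources = [
    {"name": "cheap-vm", "speed": 1.0, "price_per_hour": 0.1, "available_at": 0.0},
    {"name": "fast-vm",  "speed": 4.0, "price_per_hour": 1.0, "available_at": 0.0},
]
# A tight deadline pushes the choice toward the fast (but costlier) VM:
print(hybrid_schedule(task_runtime=4.0, deadline=2.0, resources=resources)["name"])
```

With a relaxed deadline the same function prefers the cheap VM, which is exactly the behaviour a pure time-based or pure cost-based scheduler cannot achieve on its own.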
• Profit-Based Resource Scheduling: This type of scheduling aims at increasing the profit
of Cloud provider.
o This can be done either by reducing the cost or increasing the number of
simultaneous users.
o The SLA violation is to be considered while making the profit based scheduling
decisions.
o The penalties of SLA violations may nullify the profit gained.
Module No – 248: Cloud Resource Scheduling Overview:
• SLA & QoS Based Resource Scheduling: In this scheduling, the SLA violations are
avoided and QoS is maintained.
o The more load put on IT resources, the more tasks may be completed in a unit time.
o Yet it may cause SLA violation when IT resources are overloaded.
o Hence the QoS consideration is applied to ensure SLA is not violated.
o Suitable for homogeneous tasks for which the estimation can be performed for
expected workload and expected time of completion.
• Energy-Based Resource Scheduling: The objective is to save energy at data center level
to decrease the running cost and to contribute towards environment.
o Energy consumption estimation is required for each scheduling decision. There can
be a number of possible task distribution across servers and VMs.
o Only that distribution is preferred which shows the least energy consumption for a
batch of tasks at hand.
Module No – 250: Cloud Resource Scheduling Overview:
• VM-Based Resource Scheduling: Since the VMs can host Cloud based applications and
the VMs can be migrated, the resource scheduling can be performed on VM level.
o The overall demand of all applications hosted on a VM is considered for scheduling.
If a VM is facing resource starvation, it can be migrated to another server with
available IT resources.
o The disadvantage is that there is no guarantee that the destination host will not also run
out of IT resources due to already deployed VMs.
• Introduction: Mobile devices are frequently being used throughout the world.
o Over the time, the users have started to rely more and more upon mobile devices
due to no constraints of time and location.
o The applications installed over mobiles are of various types and of various
computational requirements.
• Overview: The mobile devices are inherently constrained by resources shortage such as
processing, memory, storage, bandwidth and battery etc.
o There might be a number of situations when mobile devices become incapable of
processing or running the applications due to resource shortage.
o On the other hand, Cloud computing offers unlimited IT resources over Internet on-
the-go.
• Definition: Mobile cloud computing at its simplest, refers to an infrastructure where both
the data storage and data processing happen outside of the mobile device. Mobile cloud
applications move the computing power and data storage away from mobile phones and into
the cloud, bringing applications and Mobile Computing to not just smartphone users but a
much broader range of mobile subscribers.
o [Dinh, H. T., Lee, C., Niyato, D., & Wang, P. (2013). A survey of mobile cloud
computing: architecture, applications, and approaches. Wireless communications and
mobile computing, 13(18), 1587-1611.]
• There are various scenarios which indicate the need of a Mobile Cloud computing
environment.
• This module presents a few examples in this regard.
• Optical character recognition (OCR) can be used to extract text so that it can be translated
from one language to another. An OCR application could be installed over a mobile device
for tourists.
• But due to resource shortage over the mobile devices, a better solution is to develop a
Mobile Cloud application.
• Data sharing such as images from a site of disaster can be performed over a Mobile Cloud
application to help in developing an overall view of the site.
• The readings from sensors of multiple mobile devices spread across a vast region cannot be
otherwise collected and processed except through a Mobile Cloud application.
• Mobile Commerce: The applications of mobile commerce face the complexities such as
bandwidth limitation, device configuration and security. In order to address these issues, the
mobile commerce applications are integrated into Cloud computing.
• Mobile Learning: The mobile learning apps face the limitations in terms of high cost of
devices & data plan and network bandwidth. Along with the limitation of storage space over
mobile devices, these limitations can be overcome through shifting these applications over
Cloud. This results in rendering of larger sized tutorials, faster processing and battery
efficiency.
• Mobile Healthcare: The mobile healthcare applications based upon Mobile Cloud
computing offer the following benefits in addition to assuring the security and privacy:
o Remote monitoring of pulse rate, blood pressure etc. for patients over the Internet.
o Timely and effective cautioning and guidance to ambulances in case of medical
emergencies.
• Mobile Gaming: Rendering of contents over mobile devices while executing the game
engine over Cloud. Only the screens of the mobile devices are used, the rest is being done
on Cloud.
• [Fernando, N., Loke, S. W., & Rahayu, W. (2013). Mobile cloud computing: A survey. Future
generation computer systems, 29(1), 84-106.]
• Cost benefit analysis proves to be useful for deciding about offloading the workload to
Cloud.
• This analysis may consider the total investment (initial and running costs) and compare with
the benefits of Mobile Cloud computing.
• Considering the goals of performance, energy conservation and quality to decide which
server should receive the offload from mobile devices. Thus the cost benefit analysis in this
case is from the point of view of Cloud infrastructure. Prediction can be used to estimate the
performance, energy consumption and quality.
• The data related to devices’ energy consumption, network throughput and application
characteristics can be used to decide for offloading a task (of the profiled application) to
Cloud or execute it locally in order to (for example)
conserve battery.
• Security and privacy requirements may also be the base of task migration to Cloud.
• There are a number of data security and privacy issues in Mobile Cloud computing. These
are in addition to the security and privacy issues of Cloud computing.
• The following are the key areas for mobile Cloud security:
• Mobile devices themselves: Attacks, Virus and other malwares
• Mobile network: Related to wireless security
• Vulnerabilities in mobile Cloud applications: regarding the security and privacy bugs.
• There are some communication issues regarding Mobile Cloud computing. The researchers
have also proposed different solutions in this regard.
• Low Bandwidth: It is one of the biggest issues for mobile Cloud computing because the
radio resource for wireless networks is much scarcer as compared with traditional wired
networks.
• Availability: The availability of the service becomes an important issue when the mobile
device has lost contact with the mobile Cloud application due to network failure, congestion
or loss of signals.
• Heterogeneity: The mobile devices accessing a Mobile Cloud Computing application are of
numerous types and use various wireless technologies such as 2G, 3G etc. and WLAN. An
important issue is how to maintain the wireless connectivity along with satisfying the
requirements of Mobile Cloud computing such as high availability, scalability and energy
efficiency.
• The mobile device has to make the decision for offloading the computational workload to
the Cloud.
• If the offloading is not performed efficiently then the desired performance may not be
achieved. Also the battery may get depleted faster than executing the workload locally.
• There are two main types of computational offloading:
o Static: In which the offloading decisions (consisting of workload partitioning) are
made at the execution start of a task or a batch of tasks.
o Dynamic: The offloading decisions depend upon the run-time conditions of dynamic
parameters such as network bandwidth, congestion and battery life etc.
• The static offloading decisions may not turn out to be fruitful if the dynamic parameters
change unexpectedly.
• Better not to offload if the time/battery consumption (cost) for offloading is higher than the
cost of locally processing the task.
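The offload-or-not rule above can be expressed as a small decision function. All parameter names and the cost model are hypothetical simplifications of what a real profiler would measure:

```python
def should_offload(task_size_kb, bandwidth_kbps, local_time_s,
                   tx_energy_per_kb, local_energy_per_s):
    """Offload only if sending the task to the Cloud beats local execution
    on both time and energy."""
    offload_time = task_size_kb / bandwidth_kbps      # transfer dominates latency
    offload_energy = task_size_kb * tx_energy_per_kb  # radio energy to transmit
    local_energy = local_time_s * local_energy_per_s  # CPU energy to compute
    return (offload_time < local_time_s) and (offload_energy < local_energy)

# On a fast link the transfer is cheap, so offloading wins:
print(should_offload(500, 1000, local_time_s=5.0,
                     tx_energy_per_kb=0.01, local_energy_per_s=2.0))  # True
```

Note that the same task becomes unprofitable to offload if the bandwidth drops, which is exactly why dynamic offloading re-evaluates this decision at run time.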
• Service Availability and Performance assurance issues: Mobile devices undergo loss of
connectivity due to signal loss, network error, battery depletion etc. and thus service
availability and performance assurance become challenges.
• Data access over mobile Cloud applications may be challenging in case of low bandwidth,
signal loss and/or battery life.
• Accessing the files through mobile devices may turn out to be expensive in terms of data
transmission cost, network delays and energy consumption.
• Data access approaches are needed to be developed/polished to maintain a performance
level and to save energy.
• Some approaches have optimized the data access patterns.
• Another approach is to use mobile cloudlets which are intermediate devices acting as file
cache.
• Interoperability of data is also a challenge to provision data across heterogeneous devices
and platforms. A generic representation of data should be preferred.
• Resource Management: A mobile Cloud application can acquire all the IT resources from
Cloud. Another method is to use the cloudlets which are individual computers or even
clusters in the vicinity of the mobile device running the mobile Cloud application.
o In worst case, the mobile device resources are utilized. All these situations require
separate resource management techniques.
• Processing Power: The processing power of a single mobile device is not at all comparable
to Cloud.
• The issue is how to efficiently utilize the huge processing power of Cloud to execute the
tasks of mobile Cloud applications.
• Battery Consumption: Computational offloading becomes more energy efficient if the
code size is large, and vice versa.
• For example, offloading 500KB of code will take 5% of battery as compared to 10% battery
usage if this code is locally processed. Thus 50% of battery is saved when offloading the
code.
• But 250KB code is only 30% battery efficient if uploaded.
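The arithmetic behind the 500 KB example can be captured in one line, which also makes the comparison with the 250 KB case easy to reproduce:

```python
def battery_saving_percent(offload_cost, local_cost):
    """Percentage of battery saved by offloading instead of executing locally."""
    return 100 * (local_cost - offload_cost) / local_cost

# The 500 KB example from the text: 5% battery offloaded vs 10% locally.
print(battery_saving_percent(5, 10))  # 50.0
```

A negative result would mean offloading actually costs more battery than local execution, the situation the smaller 250 KB code size trends toward.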
• Support of mobility while assuring connectivity to the Cloud. The network connectivity
becomes very important in this case. Cloudlets can support the connectivity but these only
exist at certain locations such as cafés and malls.
• The adhoc creation of a mobile Cloud over a set of mobiles around a location depends
upon the availability of capable devices and cost benefit analysis.
• Assurance of security is an on-going challenge to ensure privacy and security and to establish
trust between the mobile device users and the service provider/resource provider.
• Conducting and managing the incentives among the resource lenders (in case of a mobile
adhoc Cloud) requires the establishment of trust, a payment method and methods to prevent
free riders.
• Typically, both the Cloud computing and Mobile Cloud computing are dependent upon
remote usage of IT resources offered by Cloud.
• Cloud computing traditionally works to provide various Cloud services such as IaaS, PaaS
and SaaS etc. to the consumers.
• Mobile Cloud computing is however more towards providing Cloud based application over
mobile devices and to deal with the connectivity, security and performance issues.
• Cloud computing deals with user requirements from a single user to an enterprise
level.
• Mobile Cloud applications are more accessed by individual users for personal computing
purposes.
• There are multiple models of Cloud computing such as Private, Public, Community and
Hybrid.
• Mobile Cloud can be setup over Cloud, Cloudlets and on adhoc basis by using the capable
and resource rich mobile devices sharing a common location on map.
Lesson No. 45
SPECIAL TOPICS IN CLOUD COMPUTING AND CONCLUSION OF COURSE
Module No – 270: Big Data Processing in Clouds: Overview of Big Data:
• The term “Big Data” refers to such enormous volume of data that cannot be processed
through traditional database technologies.
• Cloud computing infrastructure can fulfill the data storage and processing requirements to
store and analyze the Big Data.
• The data can be stored in large fault tolerant databases. Processing can be performed through
parallel and distributed algorithms.
• Cloud storage can be used to host Big Data while the processing can be done locally on
commodity computers.
• Big data Cloud applications can be built to host and process the big data on Cloud.
• There are three popular models for big data:
o Distributed Map Reduce model popularized by Hadoop
o NoSQL model used for non-relational, non-tabular storage
o SQL RDBMS model for relational tabular storage of structured data.
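The MapReduce model from the first item can be illustrated with the classic word-count example. This is a single-process sketch of the idea; a real Hadoop job would distribute the map and reduce phases across many machines:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in one document shard.
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # Reduce: sum the counts emitted for each distinct word.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

shards = ["big data on cloud", "big data processing"]
mapped = chain.from_iterable(map_phase(s) for s in shards)  # maps run in parallel
print(reduce_phase(mapped))
```

The key property is that each map call touches only its own shard, so shards can be processed on different nodes and only the small intermediate pairs travel over the network.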
• Traditional tools for big data processing for example can be deployed over Cloud.
• Top-rated Hadoop options include Apache Hadoop, SAP's HANA/Hadoop combination,
Hortonworks, Hadapt and VMware's Cloud Foundry, as well as services provided by IBM,
Microsoft and Oracle.
• For NoSQL, consider Cassandra, Hbase or MongoDB. IBM also offers NoSQL for the
cloud, and there are plenty of other NoSQL providers.
• [http://searchcloudapplications.techtarget.com/tip/How-to-choose-the-best-cloud-big-data-
platform]
• In this module we shall cover a few examples of usage of Cloud computing for Big Data
hosting and processing as case studies.
• SwiftKey: It is a smart prediction technology for mobile device virtual keyboards.
o Terabytes of data is collected and analyzed for active users around the globe for
prediction and correction of text through an artificial intelligence engine.
o Uses Amazon Simple Storage Service and Amazon Elastic Cloud to host Hadoop.
• Halo Game: More than 50 million copies have been sold worldwide.
o Collects the game usage data for the players globally for player-ranking in online
gaming tournaments.
o Windows Azure HDInsight Service (based on Hadoop) is used for this purpose.
• Nokia: The well known mobile manufacturer collects terabytes of data for analysis.
o Uses Teradata Enterprise Data Warehouse, Oracle and MySQL data marts,
visualization technologies, and Hadoop.
• In this module we shall briefly discuss a few challenges and issues related to Big Data
processing on Cloud.
• Scalability assurance for storage of rising volume of Big Data.
• Availability assurance of any data out of Big Data stored on Cloud storage is a challenge.
• Data quality refers to the possibility of data-source verification. It is a challenging task for
Big Data (for example) collected from mobile phones.
• Simultaneously handling heterogeneous data is challenging.
• Privacy issues arise when the processing of Big Data (through data mining techniques) may
lead to sensitive and personal information. Another issue is the lack of established laws and
regulations in this regard.
• Architecture & Processing: In this module we shall consider the example of multimedia
edge Cloud (MEC) consisting of cloudlets.
• The multimedia Cloud providers can use the IT resources of cloudlets which are physically
placed over the edge (means very close to the multimedia service consumers) to reduce
network latencies.
• There can be multiple MECs which are geographically distributed.
• The MECs are connected to central servers through content delivery network (CDN).
• The MECs provide multimedia services and maintain the QoS.
• Network operators often have to configure the devices (switches & routers) separately and
by using vendor specific commands.
• Thus, implementing high level network policies is hard and complex in traditional IP
networks.
• The dynamic response and reconfiguration is almost non-existent in current IP networks.
Enforcing the network policies dynamically is therefore challenging.
• Further, the control plane (the decision making and forwarding rules) and the data plane
(which performs traffic forwarding according to the decisions made by control plane) are
bundled inside the networking device.
• All this reduces the flexibility, innovation and evolution of the networking infrastructure.
• Software Defined Networking (SDN) is the new paradigm of networking that separates the
control plane from data plane.
• It reduces the limitations of traditional networks.
• The switches become the forwarding-only devices, while the control plane is handled by a
software controller.
• The controller and switch have a software interface between them.
• The controller exercises direct control over the data plane devices through a well defined
application program interface (API) such as OpenFlow.
o [Jain, R., & Paul, S. (2013). Network virtualization and software defined networking
for cloud computing: a survey. IEEE Communications Magazine, 51(11), 24-31.]
o [Kreutz, D., Ramos, F. M., Verissimo, P. E., Rothenberg, C. E., Azodolmolky, S., &
Uhlig, S. (2015). Software-defined networking: A comprehensive survey. Proceedings
of the IEEE, 103(1), 14-76.]
o [Nunes, B. A. A., Mendonca, M., Nguyen, X. N., Obraczka, K., & Turletti, T. (2014).
A survey of software-defined networking: Past, present, and future of programmable
networks. IEEE Communications Surveys & Tutorials, 16(3), 1617-1634.]
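The separation of control plane and data plane described above can be sketched as two tiny classes. This is a toy model of the concept, not the OpenFlow protocol itself; the class and rule names are hypothetical:

```python
class Controller:
    """Centralized control plane: decides and installs forwarding rules."""
    def install_rule(self, switch, match, out_port):
        switch.flow_table[match] = out_port

class Switch:
    """Forwarding-only data plane: matches packets against installed rules."""
    def __init__(self):
        self.flow_table = {}

    def forward(self, packet):
        # Unmatched packets are punted to the controller for a decision,
        # mirroring OpenFlow's table-miss behaviour.
        return self.flow_table.get(packet["dst"], "send-to-controller")

ctrl, sw = Controller(), Switch()
ctrl.install_rule(sw, "10.0.0.2", out_port=3)
print(sw.forward({"dst": "10.0.0.2"}))  # 3
print(sw.forward({"dst": "10.0.0.9"}))  # "send-to-controller"
```

Because all policy lives in the controller, changing network behaviour means updating one program instead of reconfiguring every switch with vendor-specific commands.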
• SDN has its roots in history as long ago as 80s and 90s with the development of Network
Control Point (NCP) technology.
• NCP was introduced by AT&T as probably the first established technique to separate the
data plane and control plane.
• Active Networks was another attempt to introduce computational and packet modification
capabilities to the network nodes.
• Network virtualization is a recent development which allows hypervisor like environment to
network infrastructure.
• OpenFlow based network operating systems such as ONOS have emerged to make network
administration easier and to develop/deploy new protocols and management applications.
• Each computer system needs at least one L2 NIC (Ethernet card) for communication.
• A physical system must have at least one physical NIC (pNIC).
• Each VM has at least one virtual NIC (vNIC).
• All the vNICs on a physical host (server) are interconnected through a virtual switch
(vSwitch).
• The vSwitch is connected to the pNIC.
• Multiple pNICs are connected to a physical switch (pSwitch)
• There are a number of standards available for NIC virtualization.
• A physical ethernet switch can be virtualized by implementing IEEE Bridge Port Extension
standard 802.1BR
• The VLANS can span over multiple data centers and there are several approaches to manage
the VLANS.
• A VM can be migrated across different data centers by following multiple techniques
proposed by researchers.
• The modern processors allow the implementation of software based network devices such as
L2 switch, L3 router etc.
• The rise in demand for network virtualization has attracted the virtualization software
vendors to integrate SDN features in their products.
• Centralized controllers such as Beacon can handle more than 12 million flows per second and
can fulfill the requirements of enterprise level networks and data centers for hosting Cloud.
• The SDN can be helpful in monitoring, filtering and managing the network traffic over
virtual as well as physical networks inside a Cloud hosting data center.
Module No – 285: Fog Computing:
• It is an emerging paradigm of Cloud computing.
• Fog Computing or Fog extends the Cloud computing and services to the edge of the network.
• Provides data, computing, storage and application services to end-users that can be hosted at
the network edge or end devices such as set-top-boxes or access points.
• Fog will support Internet of Everything (IoE) applications such as industrial automation,
transportation, networks of sensors and actuators etc.
• These applications demand real-time/predictable latency and mobility.
• Fog can therefore be considered a candidate technology for beyond 5G networks.
• Fog will result in the diffusion of Cloud among the client devices.
• Fog Computing is a scenario where huge number of heterogeneous, ubiquitous and
decentralized devices communicate and potentially cooperate among them and the network
to perform storage and processing tasks without the intervention of third parties.
• Network virtualization and SDN are going to be the essential parts of Fog computing.
o [Kitanov, S., Monteiro, E., & Janevski, T. (2016, April). 5G and the Fog—Survey of
related technologies and research directions. In Electrotechnical Conference
(MELECON), 2016 18th Mediterranean (pp. 1-6). IEEE]
• Cloud computing provides the computing/IT resources to the users over the Internet in a
pay-as-you-go type of business model.
• This course has covered almost all the aspects of Cloud computing and the advanced topics
related to Cloud.
• We are hopeful that you will find this course interesting, informative and comprehensive.
• We hope that the students of Cloud Computing subject will surely know the importance and
ubiquity of Cloud computing.
• We hope that this subject will become the foundation of advanced courses and an initial
source of knowledge for all in this regard.