
The Association of System Performance Professionals

The Computer Measurement Group, commonly called CMG, is a not-for-profit, worldwide organization of data processing
professionals committed to the measurement and management of computer systems. CMG members are primarily concerned
with performance evaluation of existing systems to maximize performance (e.g., response time, throughput) and with capacity
management, where planned enhancements to existing systems or the design of new systems are evaluated to find the
resources required to provide adequate performance at a reasonable cost.

This paper was originally published in the Proceedings of the Computer Measurement Group’s 2009 International Conference.

For more information on CMG please visit http://www.cmg.org

Copyright 2009 by The Computer Measurement Group, Inc. All Rights Reserved
Published by The Computer Measurement Group, Inc., a non-profit Illinois membership corporation. Permission to reprint in whole
or in any part may be granted for educational and scientific purposes upon written application to the Editor, CMG Headquarters,
151 Fries Mill Road, Suite 104, Turnersville, NJ 08012. Permission is hereby granted to CMG members to reproduce this
publication in whole or in part solely for internal distribution within the member's organization provided the copyright notice above is
set forth in full text on the title page of each item reproduced. The ideas and concepts set forth in this publication are solely those
of the respective authors, and not of CMG, and CMG does not endorse, guarantee or otherwise certify any such ideas or concepts
in any application or usage. Printed in the United States of America.
Performance modeling and capacity planning
of a Service Oriented Architecture (SOA)-based
E-business Application using Layered Queuing Networks (LQN tool)

This paper presents a method for performance modeling and capacity planning for an e-business
application built with an SOA architecture. An SOA application is built from many services and
components. Determining the best topology for deploying the SOA components for optimum performance
is always a challenging task for an SOA architect at the architectural stage of the Software Development
Life Cycle. The method proposes a proactive, Layered Queuing Network (LQN) approach to modeling an
SOA application at the architectural stage.

1. Overview

SOA-based enterprise software applications are built on highly distributed, multi-tiered architectures comprising multiple components as layers deployed in a heterogeneous environment. The inherent complexity of these SOA architectures makes it extremely difficult for system developers to predict the performance and estimate the size and capacity of the deployment environment needed to guarantee that service level agreements are met. These challenges have led to the development of a model for predicting the performance of SOA-based software applications. This paper proposes a solution for performance modeling and capacity planning of SOA-based enterprise applications based on a Layered Queuing Networks (LQN) approach. The performance of an SOA-based enterprise application is predicted and evaluated at an early stage of the Software Development Life Cycle, for example at the Proof-of-Concept (PoC) stage, before the whole application is built. The application performance is also evaluated under an insufficient configuration of the middleware servers' software resources, such as portal server threads, process server threads, database server processes, business component (EJB) pool sizes, and DB connection pool sizes. This approach helps to identify software resource bottlenecks along with hardware resource bottlenecks, with the help of the performance model, for a specified user workload distribution and characterization. With the help of this model, the best configuration for deploying the various service components on physical servers to meet the existing workload's non-functional requirements is proposed. The model helps to predict the performance and estimate the required capacity for a given SOA application. This modeling exercise is performed at the architectural stage of the SDLC with the help of PoC-built applications, as well as at the requirements-gathering stage using historic service data collected from previous projects for the current workload requirements.

The remainder of this paper is organized as follows: Section 2 gives a sketch of the various forms of performance models based on a literature survey. Section 3 describes a typical SOA application and the configuration details used for the performance modeling study. The LQN models of the given application are constructed in Sections 4 and 5. Section 5 also presents the analysis of the outputs of the application's LQN model, along with measured results for comparison purposes. Section 6 gives the summary and conclusions for this approach.

2. Literature Survey

Several approaches have been proposed for early software performance analysis over time. These performance modeling methods are used for comparing two or more system architectures, system tuning, finding performance bottlenecks, characterizing the workload on the system, determining the number and size of components (capacity planning), finding the best deployment topology at the architectural stage, and predicting the performance at future loads. To facilitate this kind of early analysis, and to make these models easily usable by developer groups, a number of methods and tools for automating the various activities involved in performance modeling have been developed.

Based on the literature survey, Queuing Network Models (QNM), Extended Queuing Network Models, and Layered Queuing Network (LQN) models, along with the tools available to realize performance modeling, have been explored. One of the popular approaches to performance modeling is the QNM; an SOA application modeled with an analytical QNM is presented in [7]. In a queuing network model, a computer system is represented as a network of queues: a collection of service centers, which symbolize the system resources, and customers, which depict the users/transactions. Initial QNMs were designed to model resource contention among independent jobs; they lacked representation of parallelism and synchronization. They were later extended to represent synchronization, simultaneous resource possession, software resources, and dynamic software characteristics, resulting in Extended Queuing Networks (EQN), Stochastic Rendezvous Networks (SRVN), and Layered Queuing Networks (LQN) [1],[3]. In this paper, the SOA application performance model is built using the LQN tool, validated against practical experimentation results, and the best deployment topology is suggested at the architectural stage.
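To make the QNM machinery described above concrete, here is a minimal illustrative sketch (ours, not from the paper or from any of the surveyed tools) of exact Mean Value Analysis (MVA) for a closed, single-class queueing network with a think-time terminal; the service demands in the example are hypothetical.

# Exact MVA for a closed, single-class queueing network (illustrative sketch).
# D: per-visit service demands (sec) at each queueing center; N: number of
# users; Z: think time (sec). Returns system throughput X and response time R.
def mva(D, N, Z=0.0):
    Q = [0.0] * len(D)                 # mean queue length at each center
    X, R_total = 0.0, 0.0
    for n in range(1, N + 1):
        # Arrival theorem: an arriving customer sees the queue of a network
        # with one customer fewer.
        R = [d * (1 + q) for d, q in zip(D, Q)]
        R_total = sum(R)
        X = n / (Z + R_total)          # interactive response-time law
        Q = [X * r for r in R]         # Little's law applied per center
    return X, R_total

# Hypothetical demands for three centers and a 1 sec think time:
X, R = mva([0.002, 0.010, 0.0015], N=200, Z=1.0)
print(f"X = {X:.1f} req/sec, R = {R:.4f} sec")

As the survey notes, this plain-QNM machinery has no notion of nested resource holding; that limitation is what the layered models address.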
3. SOA Benchmark Application

The IBM SOA solution reference architecture has been used for building this application. The BPEL process is the heart of the architecture: it orchestrates the different service components to realize a business function. As the components are built with different protocols and technologies and deployed in heterogeneous environments, a mediation component is necessary to nullify interface, data, and protocol mismatches between the BPEL process service consumer and the service providers. A typical SOA layered architecture is shown in figure 1 below.

[Figure: presentation layer (portal server, with web server and portal container), reached over HTTPS; business process layer (BPEL); mediation layer (ESB); service layer, reached over RMI/IIOP; EIS layer (DB server), reached over JDBC]
Figure 1: SOA services layered architecture

This order processing application was developed with SOA and built with BPEL order processing business processes, including the placing a new order, changing an order, and order status checking use cases, plus a mediation service that manages the interaction between the business processes and the actual services used in the process. The functionality of the mediation module consists of message processing, which includes logging and data and protocol transformations. All these BPEL, ESB, and service components are internally realized as scalable Enterprise JavaBeans by IBM WebSphere Integration Developer. For our model validation, the application is deployed with the configuration shown in figure 2. An IBM portal server is used for deploying the presentation services of the application's functions, providing web access for people and process integration to realize SOA in the true sense.

Before modeling this application we need to know the application architecture and deployment environment, identify the request processes to be modeled, identify the hardware and software resources used by the request processes, and determine their corresponding service demands on the hardware resources.

The SOA application is deployed in the deployment environment shown in figure 2. It has an IBM portal server, an IBM process server, and an IBM DB2 V9 database server (DBS) for data persistence.

The New Order and Change Order transactions in the order entry application of the customer domain were identified for modeling. The deployment diagram in figure 2 suggests that the following software and hardware resources were used during transaction processing:

A portal server thread
The CPU of the portal server
The CPU of a process server
The BPEL service EJB component pool
The ESB mediation EJB component pool
The service component pool
A database connection
The CPU of the database server
A database server process
The disk subsystem of the database server (I/O)

The following services were used in the SOA benchmark application:

New Order process service
Change Order process service
New Order mediation service
Change Order mediation service
New Order SCA component service
Change Order SCA component service
New Order data service
Change Order data service

The service demands of these two business use cases on the hardware resources were determined by profiling the application with the JProbe and SQL analyzer tools and by measuring the time spent by each service at each resource.

Figure 2: SOA Application deployment diagram


Transaction    Portal Server  PS-CPU,        PS-CPU,         PS-CPU,       DBS-CPU   DBS-I/O
Type           CPU (in sec)   BPEL Process   Mediation EJB   Service EJB   (in sec)  (in sec)
                              EJB (in sec)   (in sec)        (in sec)
New Order      0.002          0.004          0.003           0.001         0.002     0.0015
Change Order   0.003          0.002          0.004           0.006         0.016     0.0023

Table 1: Workload Service Demands
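Before building the full LQN model, the Table 1 demands already permit a back-of-envelope bottleneck check via the utilization law, U_i = X * D_i: the resource with the largest total demand saturates first and bounds throughput at 1 / max(D_i). A small illustrative sketch (ours, not part of the paper's toolchain) for the New Order row:

# Demand-based bottleneck check for the New Order transaction of Table 1.
# The three EJB layers all run on the process server CPU, so their demands add.
demands = {
    "portal server CPU": 0.002,
    "process server CPU": 0.004 + 0.003 + 0.001,  # BPEL + mediation + service
    "database CPU": 0.002,
    "database disk": 0.0015,
}
bottleneck = max(demands, key=demands.get)
x_max = 1.0 / demands[bottleneck]
print(f"bottleneck: {bottleneck}, X_max = {x_max:.0f} req/sec")
# -> process server CPU, X_max = 125 req/sec; consistent with the ~126 req/sec
#    and the saturated process server reported for Case 1 in section 5.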
4. LQN Model for SOA Benchmark Application

The formulation of the LQN model for the given SOA application is explained here. LQN modeling starts with the identification of the nodes of the acyclic graph. First, the high-level component instances in the application architecture (figure 2) were modeled as the following LQN tasks, as shown in figure 3:

· Users – The multiple users accessing the system with some think time were modeled as multiple Users reference tasks.
· Portal Server Task (PortSer) – Multiple instances of PortServer, an active task that receives requests from the users and sends them to the process server, were represented in the model.
· BPEL Process Task (BPELProc) – Multiple instances of the BPEL process, an active task that receives requests from the portal server and sends them to the ESB process, were represented in the model.
· ESB Service Task (ESBProcess) – Multiple instances of the ESB process, an active task that receives requests from the BPEL process and sends them to the service component, were represented in the model.
· Component Service Task (Service) – Multiple instances of the service component, an active task that receives requests from the ESB process and sends them to the DB server, were represented in the model.
· Database Server (DBServer) – The database connection pool, having multiple instances of database connections, was modeled in the LQN as multiple DBServer tasks.
· Database Disk (DBDisk) – DBDisk was modeled as multiple pure server tasks to model the database disk I/O operations.

Next, the hardware devices (processors and disk) shown in the application deployment diagram (figure 2) were mapped to LQN devices/processors in figure 3:

· User Processor (UserP) – Multiple users accessed the system from separate machines, therefore multiple user processors were shown in the LQN model.
· Portal Server Processor (PortalP) – All the application server instances were running on a single machine, so the portal server processor was modeled as a processor-sharing device.
· Process Server Processor (ProcessP) – All the application server instances were running on a single machine, so the process server processor was modeled as a processor-sharing device.
· Database Server Processor (DBP) – A single DB server executed multiple database instances in sharing mode, therefore it was modeled as a processor-sharing device in the LQN model.
· Database Disk (DBDiskP) – Only one database disk was considered in the deployment diagram, and as the DB disk has First-In First-Out (FIFO) scheduling, it was modeled as a single device with FIFO scheduling.

Once the nodes are identified, they are connected as per the deployment environment. The above-mentioned tasks and processors were connected using arcs referencing the links in the deployment diagram, as shown in figure 3. The transaction flow paths are used to model the entries corresponding to the services provided by each task. The NewOrder transaction was modeled by adding the corresponding entries to all the tasks with their service demands, as shown in figure 3. All the synchronous requests between entries were modeled with a probability of one for the number of calls to each entry.

[Figure: the Users reference task (on UserP) calls the Portal Server entry (0.002 s, on PortalP), which calls the BPEL Server entry (0.004 s), the ESB Server entry (0.003 s), and the Component Service entry (0.001 s), all on ProcessP, followed by the DBServer entry (0.002 s, on DBP) and the DBDisk entry (on DBDiskP); every request between entries is synchronous with a mean of one call]
Figure 3: SOA Application LQN model

The traversal path of a client request in the model is as follows. Every client request spends the user-specified think time before joining the client queue system. When a portal server thread is available, the client request consumes the thread resource and then moves to the portal server CPU queue to get serviced. It receives service from the portal server CPU for the specified service time. The request then moves on to acquire one of the BPEL process EJB pool instances. After acquiring the pool instance, the request moves to the queue of the process server CPU. If the process server CPU is available for service, the request acquires it and holds it for the service time of the BPEL process. The request releases the process server CPU and looks for an available mediation component EJB pool instance. If an instance is available, it picks that pool instance and again waits for the process server CPU for its service. After getting the process server CPU, the request holds it for the mediation component execution time and then releases it. The request then moves on to pick a service component EJB pool instance. After picking this instance, the request again seeks the service of the process server CPU for the service time of the service component execution. Once the process server CPU is available, the request picks and holds it for the service time and then releases it. The request is processed similarly in the DB server with the DB CPU and disks.
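The holding pattern just described, where a request keeps its EJB pool instance while queuing for and using the process server CPU, is exactly the simultaneous resource possession that plain QNMs cannot express. As an illustration only (our sketch, independent of the LQNS/LQSIM tools; it approximates the processor-sharing CPUs as FIFO servers and assumes the SimPy package is installed), the traversal path can be mimicked with a few lines of discrete-event simulation:

# Minimal SimPy sketch of the traversal path above, using the New Order
# demands of Table 1 and the Case 1 multiplicities of section 5.
import random
import simpy

DEMAND = {"portal": 0.002, "bpel": 0.004, "med": 0.003, "svc": 0.001,
          "db": 0.002, "disk": 0.0015}
done = []

def new_order(env, r):
    while True:
        yield env.timeout(random.expovariate(1.0))      # 1 sec mean think time
        t0 = env.now
        with r["threads"].request() as th:              # portal server thread
            yield th
            with r["portal_cpu"].request() as c:
                yield c
                yield env.timeout(DEMAND["portal"])
            for pool, key in (("bpel_pool", "bpel"), ("med_pool", "med"),
                              ("svc_pool", "svc")):
                with r[pool].request() as p:            # hold EJB instance...
                    yield p
                    with r["proc_cpu"].request() as c:  # ...while using CPU
                        yield c
                        yield env.timeout(DEMAND[key])
            with r["db_conn"].request() as d:           # DB connection held
                yield d
                with r["db_cpu"].request() as c:
                    yield c
                    yield env.timeout(DEMAND["db"])
                with r["db_disk"].request() as c:
                    yield c
                    yield env.timeout(DEMAND["disk"])
        done.append(env.now - t0)

env = simpy.Environment()
cap = {"threads": 80, "portal_cpu": 1, "bpel_pool": 70, "med_pool": 60,
       "svc_pool": 50, "proc_cpu": 1, "db_conn": 30, "db_cpu": 1, "db_disk": 1}
res = {k: simpy.Resource(env, capacity=v) for k, v in cap.items()}
for _ in range(200):                                    # 200 users, as in Case 1
    env.process(new_order(env, res))
env.run(until=300)
print(f"throughput ~ {len(done)/300:.1f} req/sec, "
      f"mean response ~ {sum(done)/len(done):.4f} sec")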

5. LQN Performance Model Analysis

The LQN performance model is built for the SOA benchmark application described above and is used for what-if analysis. The various test conditions used, and their results, are described in this section. Case 1 uses single-class models with different settings for the multiplicity of the software resources and CPUs, to identify the system bottlenecks and configure these parameters for the required performance. Case 2 presents horizontal scalability, with additional hardware added to check the model's validity. Case 3 uses multi-class models to demonstrate the accuracy of the modeling results for multiple request classes (a scenario mix).

The LQNS tool requires its own LQN-format input files to depict the above model; these are shown in Appendices A and C. After running the LQN tool, an output text file is generated which contains the response time, throughput, and all hardware and software resource utilizations. The tool reports the resource utilizations and throughputs directly, while the response time is taken from the service time of the user entry in the output file, since it includes the queuing for all processors, the service time at all processors, the queuing for all serving tasks, and the phase-one service times of all serving tasks in the request's path [3]. The analyses of these models are also given concisely for the different configurations. The performance model is accepted if the model-reported system resource utilizations and throughputs are within 10% error, and the response-time errors within 30%, of the practically measured results.
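The acceptance rule above is easy to encode; the helper below is an illustrative convenience of ours, not part of the LQNS output processing:

# Accept a modeled metric if it is within 10% of the measured value
# (30% for response times), per the validation criterion stated above.
def accept(model, test, metric):
    tol = 0.30 if metric == "response_time" else 0.10
    err = abs(model - test) / test
    return err <= tol, err

# Example with the Case 1 response time from table 2:
ok, err = accept(model=0.588, test=0.674, metric="response_time")
print(f"error = {err:.1%}, accepted = {ok}")   # error = 12.8%, accepted = True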
Case 1: Single Class of Requests

Initially, the New Order transaction was considered for study. Performance tests for this transaction were conducted on the system with 200 system users having an average think time of 1 sec, 80 portal server threads, an EJB pool size of 70 BPEL components, an EJB pool size of 60 ESB mediation components, an EJB pool size of 50 service components, and 30 DBS processes. The performance test results were captured. The simulation model was built with the same modeling parameters, and the simulations were run for long durations to obtain stable results. The performance metrics obtained from the tests and the simulation model results, with their percentage errors, are shown in table 2.

Metrics                                                LQN results  Test results  % error
Portal Server CPU % Utilization                        25.4         23            10.4%
Process Server CPU % Utilization (BPEL component)      50.74        52            2.4%
Process Server CPU % Utilization (Mediation component) 38.05        37.3          2%
Process Server CPU % Utilization (Service component)   12.68        11.28         12.4%
DB CPU % Utilization                                   25.37        23.0          10.3%
DB Disk % Utilization                                  19.03        17.0          11.94%
Response time (in sec)                                 0.588        0.674         12.7%
Throughput (req/sec)                                   126.85       125.6         0.99%

Table 2: Case 1 performance metrics

On studying the CPU utilization at the various servers, the process server was identified as the bottleneck device: the BPEL, mediation, and service component utilizations are all parts of the process server utilization, and their sum exceeds 100% (50.74% + 38.05% + 12.68% = 101.47%). Hence, to support a larger number of concurrent users, another process server needs to be added in a clustered configuration. Alternatively, since the services are loosely coupled, the process (BPEL) services may be deployed in one box while the mediation services and business service components are deployed in a separate box.

Case 2: Single Class of Requests with a Cluster

From Case 1, the bottleneck for the new order processing application, for the deployment given in figure 2, was detected to be at the process server. Therefore, in this case one more CPU was added to the process server and all the other modeling parameters remained the same. Study of the resulting performance metrics shows that by adding one process server CPU, the throughput increased to 194.6 from 126.85 and the response time dropped from 0.588 sec to 0.0277 sec. As the user load is shared by the two processors, the utilization has also come down, to 77.84% per processor. Table 3 below shows the per-processor utilization of the process server; the process server's CPUs now share the load equally. As the throughput increased, the utilization of the database tier also rose, with the DB CPU operating at 38.9% and the DB disk at 29.2%.

Metrics                                                LQN results  Test results  % error
Portal Server CPU % Utilization                        38.9         36.2          7.4%
Process Server CPU % Utilization (BPEL component)      77.8         76.5          1.6%
Process Server CPU % Utilization (Mediation component) 58.3         54.3          7.3%
Process Server CPU % Utilization (Service component)   19.4         18.7          3.74%
DB CPU % Utilization                                   38.92        34.8          11.8%
DB Disk % Utilization                                  29.19        26.7          9.3%
Response time (in sec)                                 0.027        0.035         22.8%
Throughput (req/sec)                                   194.6        192.4         1.1%

Table 3: Case 2 performance metrics
Case 3: Multiple Classes of Requests (Scenario Mix)

In this case we study our model for a mix of different classes of requests. The New Order and Change Order transactions of the order processing application were modeled and tested with 20 clients (10 per request class), an average think time of 1 sec, 10 portal server threads (5 per request class), and an infinite number of DB connections and DB disk processes. The service demands for each entry on the respective software servers, input to the models, were taken from table 1. The total number of users was uniformly distributed with a ratio of 1:1 between the two transactions. In the model, the request-processing queues of the portal server CPU, process server CPU, and DB server CPUs are scheduled in processor-sharing mode, so both classes of requests are served in fine time slices. The results of the tests and simulation models for this case are summed up in table 4 below.

Metrics                                             Model results  Test results  % error
Portal Server CPU %                                 4.14           4.34          4.6%
Process Server CPU % Utilization (BPEL component)   18.09          19.3          6.2%
DB CPU % Utilization                                14.82          15.32         3.2%
DB Disk % Utilization                               9.12           9.7           5.9%
New Order response time (in sec)                    0.015          0.0207        27.4%
New Order throughput                                9.84           9.79          0.5%
Change Order response time (in sec)                 0.0389         0.04128       5.8%
Change Order throughput                             9.63           9.60          0.3%
Overall throughput                                  9.7            9.69          0.1%

Table 4: Case 3 performance metrics

6. Summary and Conclusions

This paper presents how to use the popular analytical modeling technique, LQN, for evaluating the performance of SOA applications. It demonstrates how this technique can be used for finding the best deployment topology for the services to be hosted in a distributed environment. As the services are self-locatable and loosely coupled, they can be flexibly deployed across different physical servers, guided by the indicative utilization figures reported by the LQN tool. The tool also indicates and reports the exact number of software resources (instances) to be configured for these deployed services. The performance results obtained from the corresponding analytical tools for the SOA application models were compared with the measurement results.

7. Acknowledgement

The author would like to acknowledge Murray Woodside, Carleton University, for providing the Layered Queuing Network Solver (LQNS) for the analytical modeling of SOA-based e-business applications.

Appendix

Appendix A: LQN model

G
"SOA 4 tier application with portal server as tier1, process server with 3 logical layers, DB CPU and DB Disk"
0.00001
100
1
0.5
#End of general information
-1
#Processor information
P5
p UserP f i
p PortalP s
p ProcessP s
p DBP s
p DBDiskP f
#End of processor info
-1
#Task Information
T0
t Users r user -1 UserP z 1 m 200
t PortalSer n newOrderW -1 PortalP m 80
t BPELProcess n newOrderBPEL -1 ProcessP m 70
t ESBProcess n newOrderMed -1 ProcessP m 60
t Service n newOrderSer -1 ProcessP m 50
t DB n newOrderDB -1 DBP m 30
t DBDisk n newOrderDisk -1 DBDiskP
#End of Task information
-1
#Entry Information
E0
s user 0 0 0 -1
y user newOrderW 1 0 0 -1
s newOrderW 0.002 0 0 -1
y newOrderW newOrderBPEL 1 0 0 -1
s newOrderBPEL 0.004 0 0 -1
y newOrderBPEL newOrderMed 1 0 0 -1
s newOrderMed 0.003 0 0 -1
y newOrderMed newOrderSer 1 0 0 -1
s newOrderSer 0.001 0 0 -1
y newOrderSer newOrderDB 1 0 0 -1
s newOrderDB 0.002 0 0 -1
y newOrderDB newOrderDisk 1 0 0 -1
s newOrderDisk 0.0015 0 0 -1
#End of Entry Information
-1

Appendix B: LQN solution

Service times:

Task Name      Entry Name      Phase 1
Users          user            0.575956
PortalSer      newOrderW       0.57283
BPELProcess    newOrderBPEL    0.548351
ESBProcess     newOrderMed     0.279375
Service        newOrderSer     0.0735783
DB             newOrderDB      0.00447638
DBDisk         newOrderDisk    0.0015

Throughputs and utilizations per phase:

Task Name      Entry Name      Throughput  Phase 1    Total
Users          user            126.956     73.1214    73.1214
PortalSer      newOrderW       126.956     72.7245    72.7245
BPELProcess    newOrderBPEL    126.871     69.57      69.57
ESBProcess     newOrderMed     126.867     35.4435    35.4435
Service        newOrderSer     126.867     9.33469    9.33469
DB             newOrderDB      126.872     0.567925   0.567925
DBDisk         newOrderDisk    126.873     0.190309   0.190309

Utilization and waiting per phase for processor: UserP

Task Name      Pri  n    Entry Name      Utilization  Ph1 wait
Users          0    200  user            0            0

Utilization and waiting per phase for processor: PortalP

Task Name      Pri  n    Entry Name      Utilization  Ph1 wait
PortalSer      0    80   newOrderW       0.253913     0.000333746

Utilization and waiting per phase for processor: ProcessP

Task Name      Pri  n    Entry Name      Utilization  Ph1 wait
BPELProcess    0    70   newOrderBPEL    0.507485     0.135225
ESBProcess     0    60   newOrderMed     0.3806       0.101546
Service        0    50   newOrderSer     0.126867     0.0340536
Total processor utilization: 1.01495

Utilization and waiting per phase for processor: DBP

Task Name      Pri  n    Entry Name      Utilization  Ph1 wait
DB             0    30   newOrderDB      0.253743     0.000320638

Utilization and waiting per phase for processor: DBDiskP

Task Name      Pri  n    Entry Name      Utilization  Ph1 wait
DBDisk         0    1    newOrderDisk    0.190309     0

Appendix C: LQN model for multi-class requests

G
"Simple SOA Scenario mix application"
0.00001
100
1
0.5
#End of general information
-1
#Processor information
P5
p UserP f i
p WebP s
p ProcP s
p DBP s
p DBDiskP s
#End of processor info
-1
#Task Information
T0
t Users1 r createuser -1 UserP z 1 m 10
t Users2 r updateuser -1 UserP z 1 m 10
t WebSer1 n createW -1 WebP m 5
t WebSer2 n updateW -1 WebP m 5
t ProcSer1 n createProc -1 ProcP i
t ProcSer2 n updateProc -1 ProcP i
t DB1 n createDB -1 DBP i
t DB2 n updateDB -1 DBP i
t DBDisk1 n createDisk -1 DBDiskP i
t DBDisk2 n updateDisk -1 DBDiskP i
#End of Task information
-1
#Entry Information
E0
s createuser 0 0 0 -1
s updateuser 0 0 0 -1
y createuser createW 1 0 0 -1
y updateuser updateW 1 0 0 -1
s createW 0.002 0 0 -1
y createW createProc 1 0 0 -1
s updateW 0.003 0 0 -1
y updateW updateProc 1 0 0 -1
s createProc 0.008 0 0 -1
y createProc createDB 1 0 0 -1
s updateProc 0.012 0 0 -1
y updateProc updateDB 1 0 0 -1
s createDB 0.002 0 0 -1
y createDB createDisk 1 0 0 -1
s updateDB 0.016 0 0 -1
y updateDB updateDisk 1 0 0 -1
s createDisk 0.006 0 0 -1
s updateDisk 0.005 0 0 -1
#End of Entry Information
-1

Appendix D: LQN solution for mix scenario

Service times:

Task Name  Entry Name  Phase 1
Users1     createuser  0.0156803
Users2     updateuser  0.0382023
WebSer1    createW     0.0157304
WebSer2    updateW     0.0385064
ProcSer1   createProc  0.0136374
ProcSer2   updateProc  0.035373
DB1        createDB    0.00394649
DB2        updateDB    0.0209583
DBDisk1    createDisk  0.00155253
DBDisk2    updateDisk  0.00237693

Throughputs and utilizations per phase:

Task Name  Entry Name  Throughput  Phase 1    Total
Users1     createuser  9.84562     0.154382   0.154382
Users2     updateuser  9.63204     0.367966   0.367966
WebSer1    createW     9.84562     0.154875   0.154875
WebSer2    updateW     9.63204     0.370895   0.370895
ProcSer1   createProc  9.84562     0.134269   0.134269
ProcSer2   updateProc  9.63204     0.340714   0.340714
DB1        createDB    9.84562     0.0388556  0.0388556
DB2        updateDB    9.63204     0.201872   0.201872
DBDisk1    createDisk  9.84562     0.0152856  0.0152856
DBDisk2    updateDisk  9.63204     0.0228947  0.0228947

Utilization and waiting per phase for processor: UserP

Task Name  Pri  n   Entry Name  Utilization  Ph1 wait
Users1     0    10  createuser  0            0
Users2     0    10  updateuser  0            0
Total processor utilization: 0

Utilization and waiting per phase for processor: WebP

Task Name  Pri  n   Entry Name  Utilization  Ph1 wait
WebSer1    0    5   createW     0.0196912    4.64813e-05
WebSer2    0    5   updateW     0.0288961    6.67344e-05
Total processor utilization: 0.0485873

Utilization and waiting per phase for processor: ProcP

Task Name  Pri  n   Entry Name  Utilization  Ph1 wait
ProcSer1   0    1   createProc  0.078765     0.000845453
ProcSer2   0    1   updateProc  0.115584     0.0012073
Total processor utilization: 0.194349

Utilization and waiting per phase for processor: DBP

Task Name  Pri  n   Entry Name  Utilization  Ph1 wait
DB1        0    1   createDB    0.0196912    0.000196981
DB2        0    1   updateDB    0.154113     0.0012907
Total processor utilization: 0.173804

Utilization and waiting per phase for processor: DBDiskP

Task Name  Pri  n   Entry Name  Utilization  Ph1 wait
DBDisk1    0    1   createDisk  0.0147684    5.25261e-05
DBDisk2    0    1   updateDisk  0.0221537    7.69344e-05
Total processor utilization: 0.0369221
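As an illustrative cross-check of ours, the Appendix D processor utilizations are consistent with the Appendix C inputs through the utilization law U = X * D: each reported processor utilization equals the entry's throughput times its configured service demand.

# Utilization-law cross-check of the Appendix D solution against Appendix C.
checks = [
    # (entry/processor, throughput X, demand D from Appendix C, reported U)
    ("createProc on ProcP", 9.84562, 0.008, 0.078765),
    ("updateDB on DBP",     9.63204, 0.016, 0.154113),
    ("updateW on WebP",     9.63204, 0.003, 0.0288961),
]
for name, X, D, U in checks:
    print(f"{name}: X*D = {X * D:.6f}, reported U = {U:.6f}")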

References
1. Murray Woodside, "Layered Resources, Layered Queues and Software Bottlenecks: A Tutorial", Performance Tools 2003 Conference, Sept 2, 2003.
2. Roy Greg Franks, "Performance Analysis of Distributed Server Systems", PhD thesis, Carleton University, Canada, Dec 20, 1999.
3. Greg Franks, Peter Maly, Murray Woodside, Dorin C. Petriu, Alex Hubbard, "Layered Queuing Network Solver and Simulator User Manual", Carleton University, Canada, Dec 15, 2005.
4. Samuel Kounev and Alejandro Buchmann, "Performance Modelling of Distributed E-Business Applications using Queuing Petri Nets", IEEE International Symposium on Performance Analysis of Systems and Software, 2003.
5. http://www.perfeng.com/
6. http://www.spec.org/osg/jAppServer2001/
7. Henry H. Liu, Pat V. Crain, "An Analytic Model for Predicting the Performance of SOA-Based Enterprise Software Applications", 30th International Computer Measurement Group Conference, USA, December 5-10, 2004.

Note: Java is a trademark of Sun Microsystems in the United States, other countries, or both. DB2 is a registered trademark of IBM Corporation in the United States, other countries, or both. IBM is a registered trademark of IBM Corporation in the United States, other countries, or both. WebSphere is a registered trademark of IBM Corporation in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.
