Opentaps Developer Guide

Technical Reference
1. Developer Documentation
2. Tips and Tricks
3. API and Technical Design Reference
Recommended Reading
Domain Driven Design
Domain Driven Design Quickly
Developer Documentation

OFBiz Tutorials:
http://www.opensourcestrategies.com/ofbiz/tutorials.php

Importing Data from External Sources
In this technical reference document, we will cover the standard approach to importing
data from external sources. Everything you need for this can be found in the dataimport
module in hot-deploy.
Goals of the Data Import Module
The goal of the Data Import module is not to build a set of data import tools
against a particular "standard," but rather to recognize that each organization has
legacy or external data in its own unique format. Therefore, the Data Import module is a
set of flexible tools which you can use as a reference point for setting up your own
custom import and export. The existing services and entities can be used "as is" or with
little modification if your data happens to be similar, or you can add to and extend them
if you have additional data.
The Data Import module sets up "bridge entities" which are de-normalized and
laid out in a way that is similar to most applications' data definitions. There are no
foreign key relationships to any other opentaps entity, so any data could be imported
into them. You would use your own database's import tools to import records into the
bridge entities. Then, you would run one of the Data Import module's import services to
transform the data in the bridge entities into the opentaps system. The Data Import
services all follow a common standard:
Each row of data in a bridge entity is wrapped in its own transaction when it is
imported and succeeds or fails on its own.
When a row of data in a bridge entity is imported successfully, the importStatusId
field will be set to DATAIMP_IMPORTED
If the import failed for the row, the status will be DATAIMP_FAILED and the
importError field will contain any error messages.
The Import Framework
To support this pattern, we have created a simple and extensible import
framework. All the difficult details about setting up an import, starting transactions and
handling errors are encapsulated into the OpentapsImporter class. Additionally, we
have an interface called an ImportDecoder which is responsible for processing a single
row from the bridge entity and mapping it onto a set of Opentaps Entities.
When used properly, you will be able to focus the majority of your development on the
problem of mapping the import data into the opentaps model. You will also be able to
take advantage of polymorphism to re-use common mapping patterns or customize
existing ones for the particularities of your data.
Outline of the Import Process
A brief outline of the import process is as follows (a sketch of the decoder and importer follows this outline):

1. Break your original data into a set of suitably de-normalized CSV files. For example, put all your customer data in one CSV and all your product data in another. The goal here is to minimize the amount of data manipulation; that will be handled in the import service.

2. For each CSV file, create an opentaps Import Entity (i.e., the bridge table) that has the same fields as the CSV. Add three more fields for use by the import system: importStatusId, importError, and processedTimestamp.

3. Import your CSV data into this table using standard SQL procedures for your database.

4. Define an opentaps service that will execute your import (use-transaction="false"). You may wish to implement the opentapsImporterInterface service, which defines parameters to control the way the import runs.

5. Create an implementation of ImportDecoder, which requires a decode() method. In the decode() method, you are passed a row from the bridge entity; use the row data to create the equivalent set of opentaps entities. If there are problems that should cause the row not to be imported, throw any kind of exception: the exception message will be stored in importError, and all operations in decode() will be rolled back. Return a list of opentaps entities to persist; they will be saved in one update operation for efficiency.

6. In the service implementation, create an instance of OpentapsImporter. Specify the name of your opentaps Import Entity in the constructor, specify the ImportDecoder you just created, and run the import by calling opentapsImporter.runImport().
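To make steps 5 and 6 concrete, here is a minimal sketch. The decode() signature and the OpentapsImporter constructor arguments shown here are assumptions based on the description above, and the "ImportCustomer" bridge entity and its fields are hypothetical:

import java.util.ArrayList;
import java.util.List;
import org.ofbiz.base.util.UtilMisc;
import org.ofbiz.entity.GenericDelegator;
import org.ofbiz.entity.GenericValue;

// Hypothetical decoder for a bridge entity named "ImportCustomer".
public class CustomerImportDecoder implements ImportDecoder {

    // signature assumed: one bridge-entity row in, opentaps entities out
    public List<GenericValue> decode(GenericValue row, GenericDelegator delegator) throws Exception {
        // throwing any exception rolls this row back and stores the
        // message in its importError field
        if (row.getString("customerName") == null) {
            throw new IllegalArgumentException("Missing customer name");
        }

        // map the flat bridge row onto real opentaps entities
        List<GenericValue> toStore = new ArrayList<GenericValue>();
        String partyId = row.getString("customerId"); // hypothetical field
        toStore.add(delegator.makeValue("Party",
                UtilMisc.toMap("partyId", partyId, "partyTypeId", "PARTY_GROUP")));
        toStore.add(delegator.makeValue("PartyGroup",
                UtilMisc.toMap("partyId", partyId, "groupName", row.getString("customerName"))));
        return toStore; // persisted in one update operation
    }
}

The service implementation would then wire the decoder to the importer (again, constructor arguments assumed):

OpentapsImporter importer = new OpentapsImporter("ImportCustomer", new CustomerImportDecoder());
importer.runImport();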

Domain Driven Architecture
One problem with early versions of opentaps is that the ofbiz framework which we used
is not an object-oriented framework. Instead, it is based on a data model which is
fundamentally relational, and that data model is accessed via a map-like Java object
called GenericValue. Most of the services in the business tier used a GenericDelegator
to retrieve GenericValues from the database, performed operations on them, and then
stored them back into the database again using the same GenericDelegator.
While this lightweight architecture could do a lot of things, as opentaps grew it became
apparent that some of the application could significantly benefit from an object-oriented
architecture. A few months ago, we started down this path and thought about how to
write more object-oriented code with the ofbiz framework. More recently, after reading
about Domain Driven Design and Domain Driven Design Quickly, we realized that what
we really needed was not just object-oriented code, but rather a more formal
classification of our business logic into domains. This document explains what domain
driven architecture is, how we have implemented it, and how it could help you structure
your code.

Contents
1 What is Domain Driven Design?
2 Why Domain Driven Design?
3 Terminology
4 How Domain Driven Design is Implemented
4.1 Entity
4.2 Infrastructure
4.3 Factory
4.4 Repository
4.5 Repositories or Factories?
4.6 Services
4.7 Exceptions
4.8 Specifications
5 An Example Using Domains
6 Putting It All Together

What is Domain Driven Design?


The basic idea behind a domain is to group together all the "domain expertise," or
business knowledge, of an application and separate it from the application and its
infrastructure. It is a different way of thinking about how to organize large software
applications and complements the popular Model View Controller (MVC) architecture,
which we also use in opentaps. With the Model View Controller architecture, the
application's user interface (View) is separated from its business logic (Model), and a
Controller directs requests from the view layer to the relevant business logic found in
the model layer. The advantage of doing this is that the same business logic could then
be reused elsewhere, either in another page in the view layer or as part of other
business logic in the model layer.
MVC, however, doesn't really say how your model should be structured. Should it be
object-oriented, or should it all be written in procedural languages or just SQL? Should
they reside in separate components and packages, or could you just have one big file,
which has all of your business logic? The domain driven design answers this question
by separating the model layer ("M") of MVC into an application tier, a domain tier, and
an infrastructure tier. The infrastructure tier is used to retrieve and store data. The
domain tier is where the business knowledge or expertise is. The application tier is
responsible for coordinating the infrastructure and domain tiers to make a useful
application. Typically, it would use the infrastructure tier to obtain the data, consult the
domain tier to see what should be done, and then use the infrastructure tier again to
achieve the results.
For example, let's say that you wanted to assess late charges on all of your customers'
outstanding invoices. MVC would tell you that your application should have a screen
which shows you a list of outstanding invoices, and when the user says "Assess Late
Charges", the controller would pass the users' input parameters to business logic in the
model tier to do the dirty work of assessing those late charges.
With a domain driven design, we would look more deeply at what that dirty work actually
involved. The application tier would call upon the infrastructure tier to retrieve all the
invoices which might get assessed charges. Then, it would present that list of invoices to
the domain tier, which has the business expertise to say "Should this invoice get
charged?" and if so "How much should this invoice get charged?" The domain tier would
then return the late charges for each invoice to the application tier. The application tier
would then call on the infrastructure tier again to store the late charges into the
database.
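To make the division of labor concrete, here is a rough sketch in the spirit of this example; every class and method name in it is hypothetical:

import java.math.BigDecimal;
import java.util.List;

// Hypothetical sketch of the three tiers cooperating to assess late charges;
// none of these names come from opentaps.
public class AssessLateChargesService {

    public void assessLateCharges(InvoiceRepository repository,      // infrastructure tier
                                  LateChargePolicy policy) throws Exception { // domain tier
        // application tier: use the infrastructure tier to fetch candidates
        List<Invoice> candidates = repository.findOutstandingInvoices();

        for (Invoice invoice : candidates) {
            // domain tier: the business expertise decides if and how much
            if (policy.shouldAssessLateCharge(invoice)) {
                BigDecimal charge = policy.calculateLateCharge(invoice);
                // infrastructure tier again: store the result
                repository.storeLateCharge(invoice, charge);
            }
        }
    }
}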
Why Domain Driven Design?
Why do we want to do all this?
Organizing the application into natural domains
The first and most obvious benefit of domain driven design is that it helps us organize
our application into natural domains, so you don't have to come in contact with all the
800+ tables in opentaps and the over 1,200 services that support them. For example, a
domain driven design would allow us to break an application down into a few large
domains, such as Customer, Order, and Invoice, and hide all the details within each of
those domains from developers who don't need to work with them. Thus, if you are
working on the Order domain, you may need to know a little bit about a Customer, such
as his home address, shipping addresses, payment methods, but you don't really need
to know all the tables used to track the relationship of customer information and their
histories.
A related advantage is that it allows us to separate business tier expertise from
infrastructure expertise. Thus, if you are working primarily with implementing business
processes, you can write code which basically works with the different domains. You'll be
happy to leave the database to somebody whose job is working on the infrastructure
tier, and who's probably glad not to have to worry about your business processes.
Extending domains for your industry or company
Imagine that you worked in an industry or a company that had customers, but they did
some special things for their customers that most other companies don't. With an
object-oriented domain driven design, you will be able to extend the existing Customer
domain objects from opentaps with new methods specific to your industry or company,
while still using everything from the opentaps Customer.
Getting closer to plug-and-play applications
A potentially more valuable advantage is that domain driven design gets us closer to a
plug-and-play application. Imagine again that your application is broken down into the
Customer and Order domains, so that the Order domain interacts with customer
information only through the Customer domain. What if you wanted to use the opentaps
order entry and order management tool with another CRM application, like SugarCRM
or SalesForce.com? With good domain separation, it would be a matter of just
implementing the Customer domain objects used by the Order domain to call the new
CRM application. Alternatively, if you wanted to use opentaps CRM with a legacy order
management system, you could implement the Order domain objects used by the
Customer domain in opentaps.
Finally, by separating out the domain tier of business knowledge from the infrastructure
tier, it also allows us to deploy opentaps on a different infrastructure tier later as well.
For example, instead of using the entity engine, you could use Hibernate or even the
Google storage API instead. This frees your application from lock-in to a particular
framework.
If these advantages sound familiar, they should be. They are in fact the advantages of
encapsulation, polymorphism, and inheritance of object oriented programming. Domain
driven design is essentially a practice for realizing those advantages in a large-scale
application.
Terminology
Now let's look at some of the terminology used by Domain Driven Design, which will
serve as our starting point:

Domain is a body of business expertise. For example, you might have a domain of all business expertise about customers -- who is responsible for them, what prices they should get, how to contact them, etc.

Entity is an object which has a distinct identity. For example, a Customer entity has a distinct identity with an ID.

Value Object is an object which has no distinct identity. For example, the color of a product does not have a distinct identity if you think the "blue" of two blue shirts is the same thing.

Aggregate is a higher-level entity which can be viewed from the outside and in turn links you to other entities and value objects. For example, Customer might be an aggregate, so you can view Customer from Orders, Invoices, etc., but a Customer's addresses and phone numbers can only be retrieved by going through Customer first.

Infrastructure is where the lower-level infrastructure of your application is available. For example, it would provide you with the ability to access databases, remote web services, etc.

Factory is used to create Entities. For example, a Factory might create an Invoice entity (and its related entities and value objects) from an Order entity.

Repository is used to retrieve, store, and delete Entities from the database. For example, a Repository might help you store the Invoice (and related entities) your Factory created and then bring them back from the database.

Service is business logic that involves several domain Entities or Aggregates. For example, creating Invoices from Orders is a service.
How Domain Driven Design is Implemented
When we started to implement the domain driven design, we faced an issue common to many developers: How could we stay true to the spirit of a domain driven design, but at the same time live with our existing framework and code base?
What we did is first implement a set of foundation classes in org.opentaps.foundation.*
to support the Entity, Repository, Infrastructure, Factory, and Service concepts under the
ofbiz framework. For each of these, we implemented an interface, and then we
implemented a specific version for the ofbiz framework. Thus, we have the following
interfaces:
* org.opentaps.foundation.entity.EntityInterface
* org.opentaps.foundation.repository.RepositoryInterface
* org.opentaps.foundation.factory.FactoryInterface
* org.opentaps.foundation.infrastructure.Infrastructure
* org.opentaps.foundation.service.ServiceInterface
Then, for each of these we implemented a version for the ofbiz framework:
* org.opentaps.foundation.entity.ofbiz.Entity
* org.opentaps.foundation.repository.ofbiz.Repository
* org.opentaps.foundation.factory.ofbiz.Factory
* org.opentaps.foundation.infrastructure.ofbiz.Infrastructure
* org.opentaps.foundation.service.Service
Each of these is designed to map legacy code from ofbiz and ofbiz-based portions of
opentaps into the concepts of the domain driven design:

Entity
The Entity object is a Java class equivalent for ofbiz GenericValues. There is a package
of entity classes which are automatically generated for each entity defined by the
entitymodel XMLs, including all the original ofbiz applications, the opentaps
applications, and any of your own custom applications which extend existing entities or
add new ones.
You can use the base entities as they are, converting back and forth between GenericValues or Java Maps, or you can extend the base entities with additional methods that encapsulate business logic. For example, there is an Invoice base entity class with fields such as invoiceId, invoiceDate, and referenceNumber, and accessor methods such as getInvoiceId(), setInvoiceId(String invoiceId), etc. Then, there is an Invoice
class in the billing domain which extends the base Invoice class and has methods such
as getInvoiceTotal(), getAppliedAmount(), etc.
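As a rough sketch, such an extended entity might look like this; the getInvoiceItems() relation accessor and the total computation are illustrative, not the actual opentaps implementation:

import java.math.BigDecimal;

// Illustrative domain Invoice extending the generated base entity.
public class Invoice extends org.opentaps.domain.base.entities.Invoice {

    // sums the amount of each invoice item; assumes a generated
    // getInvoiceItems() relation accessor and an "amount" field
    public BigDecimal getInvoiceTotal() throws Exception {
        BigDecimal total = BigDecimal.ZERO;
        for (org.opentaps.domain.base.entities.InvoiceItem item : getInvoiceItems()) {
            if (item.getAmount() != null) {
                total = total.add(item.getAmount());
            }
        }
        return total;
    }
}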
Infrastructure
The Infrastructure class is a global directory of infrastructure resources, such as
database access connections, located across all the frameworks and platforms of
opentaps. Initially, it can be used to obtain the delegator and dispatcher of the ofbiz
framework, but as more applications are added to opentaps, it will also return the
infrastructure their frameworks require, including JDBC connections or hibernate
session factories. These infrastructure resources are passed to the Repository and the
Factory classes so that they can interact with the database and external Web services.
Part of the infrastructure package is the User class, which constructs a cross-framework
user object that can be used by all the different applications and their frameworks. For
example, you can create a User from the ofbiz UserLogin GenericValue, and you will also be able to create Users from legacy or external applications, Kerberos tokens, or LDAP, and return their ofbiz UserLogin. Note that this User class is not an extension of the UserLogin base entity class. The UserLogin class is designed to model user logins as data, whereas the User class is designed to pass user authentication between applications.
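For instance, constructing these objects from the ofbiz framework looks like the following snippet, which matches the usage in the examples later in this document:

// build the cross-framework objects from ofbiz's dispatcher and UserLogin
Infrastructure infrastructure = new Infrastructure(dispatcher);
User user = new User(userLogin);

// the ofbiz resources remain reachable for legacy code
GenericDelegator delegator = infrastructure.getDelegator();
GenericValue ofbizUserLogin = user.getOfbizUserLogin();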
Factory
The Factory class is designed to create Entity objects based on other parameters. For
example, you might want to create an Invoice Entity based on customer and invoice
terms, or you might want to create an Invoice Entity based on an existing Order, taking
its customer and list of items as a starting point. The Factory class is meant to be
extended to create Factories for the different domain aggregates such as Invoice,
Customer, Order, etc.
Under the ofbiz framework, the Factory often needs to use legacy services. Note that
this is an interesting issue: in a classic domain driven design, the Factory would create
the Entity as pure objects, and then the Repository would be responsible for storing
them to the database. Such separation of roles is not present in the ofbiz framework,
however, where virtually every service would access the database, create new data,
and then store it back into the database. Thus, to reuse these existing services, our Factories sometimes end up storing the objects to the database first by calling an ofbiz service, then retrieving them again and returning them as Entity objects.
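A minimal sketch of this pattern, assuming a hypothetical InvoiceFactory; the repository lookup at the end and the accessor methods on the Factory base class are illustrative:

import java.util.List;
import java.util.Map;
import org.ofbiz.base.util.UtilMisc;
import org.ofbiz.service.ServiceUtil;

// Sketch: a Factory that reuses a legacy ofbiz service. The service already
// persists the invoice, so the factory stores first, then reloads the result
// as a domain Entity.
public class InvoiceFactory extends Factory {

    public Invoice createInvoiceFromOrder(String orderId, List billItems) throws Exception {
        Map results = getInfrastructure().getDispatcher().runSync("createInvoiceForOrder",
                UtilMisc.toMap("orderId", orderId, "billItems", billItems,
                        "userLogin", getUser().getOfbizUserLogin()));
        if (ServiceUtil.isError(results)) {
            throw new FactoryException(ServiceUtil.getErrorMessage(results));
        }
        // retrieve the freshly stored invoice and return it as an Entity
        String invoiceId = (String) results.get("invoiceId");
        return getInvoiceRepository().getInvoiceById(invoiceId); // hypothetical lookup
    }
}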
Repository
The Repository is designed to help retrieve and store Entities and is meant to be
extended for the major Entities, so the foundation Repository should be extended to
CustomerRepository, InvoiceRepository, and OrderRepository to support Customer,
Invoice, and Order Entities.
For the ofbiz framework, the preferred way to retrieve and store data can be either the service dispatcher or the delegator. Therefore, the Repository can be instantiated
either with the delegator alone or with the delegator, dispatcher, and user login. The
Repository should offer a set of methods for retrieving or persisting its related Entity and
then either use the delegator or call the service to do it.
As a good design pattern, a repository should fetch objects directly and not
associations. The modeling details of a particular implementation should be hidden from
the domain. As an example, the repository should expose methods such as
getPostalAddress() and getPhoneNumber() while the association tables
PartyContactMech and InvoiceContactMech are dealt with within the implementation of
these methods.
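For example, a sketch of such a method on the InvoiceRepository; the use of the delegator here follows standard ofbiz patterns, but treat the exact fields and accessors as illustrative:

import java.util.List;
import org.ofbiz.base.util.UtilMisc;
import org.ofbiz.entity.GenericValue;

// Sketch: the repository hides the InvoiceContactMech association table
// behind a simple accessor, so domain code never sees the join entity.
public class InvoiceRepository extends Repository {

    public GenericValue getBillingAddress(String invoiceId) throws Exception {
        // walk the association entity internally
        List<GenericValue> assocs = getDelegator().findByAnd("InvoiceContactMech",
                UtilMisc.toMap("invoiceId", invoiceId,
                        "contactMechPurposeTypeId", "BILLING_LOCATION"));
        if (assocs.isEmpty()) {
            return null;
        }
        // resolve the actual PostalAddress from the association row
        return getDelegator().findByPrimaryKey("PostalAddress",
                UtilMisc.toMap("contactMechId", assocs.get(0).getString("contactMechId")));
    }
}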
Repositories or Factories?
Since almost all legacy ofbiz services store values into the database, it may not be clear
at first which one you should use. Remember that Factories are intended to create new
Entity objects, while Repositories are intended to retrieve and store them. Therefore, we follow these rules for Factories and Repositories:
Use Factories for create and Repositories for get and store.
Always return the domain's Entity object from your Factory, so it looks like a real object factory.
Factories will almost always use the service dispatcher, whereas Repositories will usually use the dispatcher but may sometimes use the delegator.
Services
Services are designed to encapsulate business logic that spans multiple Entities, such as creating an Invoice from an Order. With the opentaps Service foundation, you can create
your services as plain Java objects (POJOs), similar to the Spring framework or JBoss
Seam. When your services object is instantiated, it will be created with Infrastructure, a
User object, and the locale. From these objects, you can obtain the ofbiz framework's
delegator, dispatcher, and UserLogin GenericValue. The Service foundation class also
can load the domains directory (see below) for you, and this is done automatically if you
use the POJO Service Engine. Parameters for your service are passed into your service
via set methods, the execution of your services via a void method, errors are
propagated by exceptions, and the results of your service are passed back via get
methods. This is in contrast to the ofbiz framework, where services are defined as static
Java methods (don't  write one in minilang!), the parameters are passed in a map,
the results and any error messages are returned in a map.
Because these services are Java objects, we follow the convention to group services
with similar parameters together into one class. For example, all services which create
Invoices from Order should be in one class, and all services which create Invoices from
Shipment should be in another. This allows them to share set and get methods without
having one service class which is too long.
What domain should a service be a part of? By convention, we recommend that the service be part of the domain of its output, so all services which create Invoices should be part of the Invoice domain, whether the invoices are created from orders, shipments, or recurring agreements.
Exceptions
All exceptions should be implemented as subclasses of
org.opentaps.foundation.exceptions.FoundationException, a base class which in turn extends the ofbiz GeneralException class. EntityException, ServiceException,
RepositoryException, InfrastructureException, and FactoryException all extend the base
FoundationException class. You can in turn implement specific exceptions which
subclass these general exceptions.
The FoundationException class can be instantiated with an exception message or
another exception. It allows you to set a locale, a message UI label, and a Map context
for the message UI label:
throw new ServiceException("Service failed");
throw new ServiceException("OpentapsErorLabel", UtilMisc.toMap("orderId", orderId),
locale);
The FoundationException class's getMessage() method has been overridden to expand
the UI label with the context map and the locale.
It also allows you to set your error message in one level in your code and then localize it
at a higher level once the locale is known. For example, in your repository, you can
throw a not found exception without setting the locale, and then the POJO Service
Engine will catch that exception and automatically localize it:
// in the repository:
throw new EntityNotFoundException("OpentapsErrorLabel", UtilMisc.toMap("orderId", orderId));

// in the POJO Service Engine:
... catch (Throwable t) {
    if (t instanceof FoundationException) {
        ((FoundationException) t).setLocale(locale);
        return ServiceUtil.returnError(t.getMessage());
    }
}
Finally, the FoundationException class allows you to set whether your exception
requires a global rollback or not. By default, exceptions do require rollback, but you can
turn it off with setRequiresRollback:
ServiceException ex = new ServiceException(...);
ex.setRequiresRollback(false);
Setting requires rollback to false will cause the POJO Service Engine to use
ServiceUtil.returnFailure instead of ServiceUtil.returnError. This will cause the service to
return an error message, but it will not cause the other services in a chain to abort. You
can also use the requires rollback flag for your own exception management.
Specifications
Most developers know that they should not use literal strings in their code. For example,
we all feel that it would be bad to write code like this:
if (order.getStatusId().equals("ORDER_APPROVED")) {
    // ...
}
Our first reaction is always to define a file of string literals, and then reuse the
predefined literals:
public static final String ORDER_APPROVED = "ORDER_APPROVED";
// ...
if (order.getStatusId().equals(ORDER_APPROVED)) {
    // ...
}
This is nice: now the compiler will be able to check that our status IDs are correct, and if somebody decides to change the ID code, all we have to do is change it in one place.
But inevitably, we run into other problems with this kind of code. Somebody might
decide that instead of having one state called approved, we want to have several states,
like approved, in production, pending shipment, etc., that have the same meaning as being approved. Later, somebody else might want to have more complicated logic: an order might be considered approved if it is either in the approved state, or does not contain certain hazardous materials and is in the in-production or pending-shipment state, for example.
Now we'll have to change all our code again. A developer's job is never done!
The main problem we all face is one of logic: most ERP software, like opentaps, operates on a set of data in one logical state (i.e., orders that are approved) and transforms it into other data and other logical states (i.e., invoices that are created). The problem is that these logical states are usually denoted as strings in a database field, but they are often much more subtle and complex in real life. Thus, we developers are lulled into thinking that all logical states can be modeled as literal strings. This usually works, but in those 10% of cases where it doesn't, our code is usually not well structured enough to deal with it.
The solution is to stop scattering these literal strings and state checks throughout your code. Instead, separate the logical checking code of a domain into a separate class, so that it can be modified as needed. This is the role of Specifications: defining literal values and logical states.
In practice, we recommend having one Specification class for each domain. For
example, for the order domain, there should be an OrderSpecification class and a
corresponding interface. Because in practice specifications are usually closely related to
the way data is modeled in your database, we have kept it linked through the Repository
class of each domain. Thus, to get the OrderSpecification, use its repository:
OrderRepositoryInterface orderRepository = orderDomain.getOrderRepository();
OrderSpecificationInterface orderSpecification = orderRepository.getOrderSpecification();
We have also found the following best practices to be helpful when implementing
specifications:
If you need to check whether a condition is true or not, implement Boolean methods in
the specifications, which your domain objects can use, rather than using literal strings
directly in the domain objects. For example, instead of:
if (orderSpecification.ORDER_APPROVED.equals(order.getOrderStatusId()))
use:
if (orderSpecification.isApproved(order))
If you need to get certain values from your specification for other purposes, get lists of
values instead of string literals. For example, if you need to get OrderRole objects
related to an order in the role of customer, instead of implementing a literal like
orderSpecification.BILL_TO_CUSTOMER_ROLE, implement a method which returns a
list of potential roles: orderSpecification.billToCustomerRoleTypeIds. You can then use
the SQL IN operator to retrieve parties in all possible bill to customer roles and thus not
be constrained to use only one potential role.
In both cases, by abstracting logical states and by making type codes more general-
purpose, your code will be able to handle changing requirements much more easily.
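As a sketch, such a specification might look like this; isApproved() mirrors the example above, and the role type ids returned are illustrative:

import java.util.Arrays;
import java.util.List;

// Illustrative OrderSpecification: boolean checks and lists of values
// instead of bare string literals in domain code.
public class OrderSpecification implements OrderSpecificationInterface {

    public static final String ORDER_APPROVED = "ORDER_APPROVED";

    // true if the order is in a state the business considers approved
    public boolean isApproved(Order order) {
        return ORDER_APPROVED.equals(order.getStatusId());
    }

    // all role types that can identify the bill-to customer of an order,
    // suitable for an SQL IN constraint
    public List<String> getBillToCustomerRoleTypeIds() {
        return Arrays.asList("BILL_TO_CUSTOMER", "END_USER_CUSTOMER");
    }
}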
An Example Using Domains
Now let's consider an example. Suppose we want to create an invoice for all the order
items which are not physical products and which have been marked as performed (See
Fulfilling Orders for Services.) Using the ofbiz framework, we would first define a
service:
<service name="opentaps.invoiceNonPhysicalOrderItems" engine="java"
location="com.opensourcestrategies.financials.invoice.InvoiceServices"
invoke="invoiceNonPhysicalOrderItems">
<description>Creates an invoice from the non-physical items on the order. It will
invoice from the status in the orderItemStatusId,
or if it is not supplied, default to ITEM_PERFORMED. After the invoice is created,
it will attempt to change the items' status
to ITEM_COMPLETE.</description>
<attribute name="orderId" type="String" mode="IN" optional="false"/>
<attribute name="orderItemStatusId" type="String" mode="IN" optional="true"/>
<attribute name="invoiceId" type="String" mode="OUT" optional="false"/>
</service>
Then, we would create a static Java method for the service:
public static Map invoiceNonPhysicalOrderItems(DispatchContext dctx, Map context) {
    LocalDispatcher dispatcher = dctx.getDispatcher();
    GenericDelegator delegator = dctx.getDelegator();
    GenericValue userLogin = (GenericValue) context.get("userLogin");
    Locale locale = (Locale) context.get("locale");

    String orderId = (String) context.get("orderId");
    String orderItemStatusId = (String) context.get("orderItemStatusId");

    try {
        // validate that the order actually exists
        GenericValue order = delegator.findByPrimaryKey("OrderHeader", UtilMisc.toMap("orderId", orderId));
        if (UtilValidate.isEmpty(order)) {
            return ServiceUtil.returnError("Order [" + orderId + "] not found");
        }

        // set default item status
        if (UtilValidate.isEmpty(orderItemStatusId)) {
            Debug.logInfo("No status specified when invoicing non-physical items on order [" + orderId + "], using ITEM_PERFORMED", module);
            orderItemStatusId = "ITEM_PERFORMED";
        }

        // get the non-physical items which have been performed
        List<GenericValue> orderItems = order.getRelatedByAnd("OrderItem", UtilMisc.toMap("statusId", orderItemStatusId));
        List<GenericValue> itemsToInvoice = new ArrayList<GenericValue>();
        for (GenericValue orderItem : orderItems) {
            if (!UtilOrder.isItemPhysical(orderItem)) {
                itemsToInvoice.add(orderItem);
            }
        }

        // check if there are items to invoice
        if (UtilValidate.isEmpty(itemsToInvoice)) {
            return UtilMessage.createAndLogServiceError("OpentapsError_PerformedItemsToInvoiceNotFound", locale, module);
        }

        // create a new invoice for the order items
        Map tmpResult = dispatcher.runSync("createInvoiceForOrder",
                UtilMisc.toMap("orderId", orderId, "billItems", itemsToInvoice, "userLogin", userLogin),
                7200, false); // no new transaction
        if (ServiceUtil.isError(tmpResult)) {
            return tmpResult;
        }

        // remember the invoiceId of the new invoice before reusing tmpResult
        String invoiceId = (String) tmpResult.get("invoiceId");

        // change the status of the order items to COMPLETED
        for (GenericValue orderItem : itemsToInvoice) {
            tmpResult = dispatcher.runSync("changeOrderItemStatus",
                    UtilMisc.toMap("orderId", orderItem.getString("orderId"),
                            "orderItemSeqId", orderItem.getString("orderItemSeqId"),
                            "statusId", "ITEM_COMPLETED", "userLogin", userLogin));
            if (ServiceUtil.isError(tmpResult)) {
                return tmpResult;
            }
        }

        // return invoiceId of new invoice created
        tmpResult = ServiceUtil.returnSuccess();
        tmpResult.put("invoiceId", invoiceId);
        return tmpResult;
    } catch (GeneralException e) {
        return UtilMessage.createAndLogServiceError(e, module);
    }
}
So what's there not to love about this code?
It is closely tied to the database. Even though there's not a single line of SQL here, you
have to know that orders are stored in "OrderHeader", and that it is related to
"OrderItem", and that there are fields like statusId. You also have to use the string
literals for status, like ITEM_COMPLETED, ITEM_PERFORMED, etc.
This method depends on things spread out in different parts of the application, like the
UtilOrder class and the createInvoiceForOrder and changeOrderItemStatus services.
This code is completely dependent on the ofbiz framework's GenericValue, entity
engine delegator, and local dispatcher.
Static Java methods like this, while easier to work with than minilang, do not enjoy the
benefits of real object-oriented programming.
In other words, for somebody to write this code, they have to know a lot about the
framework, the data model, and the application tier.
Here's a re-write of everything inside the try ... catch block using the domain driven design:
// validate that the order actually exists and get list of non-physical items
OrderRepository orderRepository = new OrderRepository(new Infrastructure(dispatcher), userLogin);
Order order = orderRepository.getOrderById(orderId);

if (UtilValidate.isEmpty(orderItemStatusId)) {
    Debug.logInfo("No status specified when invoicing non-physical items on order [" + orderId + "], using [" + OrderSpecification.ITEM_STATUS_PERFORMED + "]", module);
    orderItemStatusId = OrderSpecification.ITEM_STATUS_PERFORMED;
}
List<GenericValue> itemsToInvoice = order.getNonPhysicalItemsForStatus(orderItemStatusId);

// check if there are items to invoice
if (UtilValidate.isEmpty(itemsToInvoice)) {
    return UtilMessage.createAndLogServiceError("OpentapsError_PerformedItemsToInvoiceNotFound", locale, module);
}

// create a new invoice for the order items
Map tmpResult = dispatcher.runSync("createInvoiceForOrder",
        UtilMisc.toMap("orderId", orderId, "billItems", itemsToInvoice, "userLogin", userLogin),
        7200, false); // no new transaction
if (ServiceUtil.isError(tmpResult)) {
    return tmpResult;
}

// change the status of the order items to COMPLETED
order.setItemsStatus(itemsToInvoice, OrderSpecification.ITEM_STATUS_COMPLETED);

// return invoiceId of new invoice created
String invoiceId = (String) tmpResult.get("invoiceId");

tmpResult = ServiceUtil.returnSuccess();
tmpResult.put("invoiceId", invoiceId);
return tmpResult;
This code is the programming equivalent of the missing link: it has many features of the
old code, but a few important differences as well. What we have done is push
everything related to orders to the Order Entity object, its OrderRepository, and
OrderSpecification. We don't care where the order came from, how we can get the
items of an order, or even how the status codes of an order are defined any more,
because those are all responsibilities of the Order domain objects. (Even the validation
that an order was obtained is handled by the OrderRepository, which will throw a
RepositoryException if nothing is found for the orderId.) We are also no longer tied to the delegator, although the Order domain may itself require the delegator. (The casting of itemsToInvoice to GenericValue is vestigial -- remember that our Entity object extends GenericValue, and a specific Java object may in turn extend Entity.)
We are, however, still tied to the createInvoiceForOrder service and the ofbiz service
engine. That will have to wait until the next evolutionary step (which happened the next
day). Using the Service class from above, we can implement a POJO version of this
service:
public class OrderInvoicingService extends Service {

    private static final String module = OrderInvoicingService.class.getName();

    protected String orderId = null;
    protected String invoiceId = null;

    // by default, non-physical order items in this state will be invoiced
    protected String statusIdForNonPhysicalItemsToInvoice = OrderSpecification.ITEM_STATUS_PERFORMED;

    public OrderInvoicingService(Infrastructure infrastructure, User user, Locale locale) throws ServiceException {
        super(infrastructure, user, locale);
    }

    public void setOrderId(String orderId) {
        this.orderId = orderId;
    }

    public String getInvoiceId() {
        return this.invoiceId;
    }

    /**
     * Sets the status id of non-physical order items to be invoiced by
     * invoiceNonPhysicalOrderItems; otherwise OrderSpecification.ITEM_STATUS_PERFORMED will be used.
     * @param statusId
     */
    public void setStatusIdForNonPhysicalItemsToInvoice(String statusId) {
        if (statusId != null) {
            statusIdForNonPhysicalItemsToInvoice = statusId;
        }
    }

    public void invoiceNonPhysicalOrderItems() throws ServiceException {
        try {
            // validate that the order actually exists and get list of non-physical items
            OrderRepository orderRepository = new OrderRepository(getInfrastructure(), user);
            Order order = orderRepository.getOrderById(orderId);
            List<GenericValue> itemsToInvoice = order.getNonPhysicalItemsForStatus(statusIdForNonPhysicalItemsToInvoice);

            // check if there are items to invoice
            if (UtilValidate.isEmpty(itemsToInvoice)) {
                // TODO: fix localization of errors
                throw new ServiceException("OpentapsError_PerformedItemsToInvoiceNotFound");
            }

            // create a new invoice for the order items
            // because of the way createInvoiceForOrder is written (665 lines of code!) we'd
            // have to do some re-factoring before we can add the items to an existing invoice
            Map tmpResult = getDispatcher().runSync("createInvoiceForOrder",
                    UtilMisc.toMap("orderId", orderId, "billItems", itemsToInvoice, "userLogin", user.getOfbizUserLogin()),
                    7200, false); // no new transaction
            if (ServiceUtil.isError(tmpResult)) {
                throw new ServiceException(ServiceUtil.getErrorMessage(tmpResult));
            }

            // change the status of the order items to COMPLETED
            order.setItemsStatus(itemsToInvoice, OrderSpecification.ITEM_STATUS_COMPLETED);

            // set the invoiceId of new invoice created
            this.invoiceId = (String) tmpResult.get("invoiceId");
        } catch (GeneralException ex) {
            throw new ServiceException(ex);
        }
    }
}
Then, the original Java static method simply has to pass the parameters to it, execute
the method in the OrderInvoicingService, get its result, and pass it back. Here's the
content of that try ... catch block again:
OrderInvoicingService invoicingService = new OrderInvoicingService(new Infrastructure(dispatcher), new User(userLogin), locale);
invoicingService.setOrderId(orderId);
invoicingService.setStatusIdForNonPhysicalItemsToInvoice(orderItemStatusId);
invoicingService.invoiceNonPhysicalOrderItems();

Map tmpResult = ServiceUtil.returnSuccess();
tmpResult.put("invoiceId", invoicingService.getInvoiceId());
Congratulations! Now your business logic is a POJO. You can add annotations, use
dependency injection with it, and use it with other Java frameworks now. (Is this how
that missing link felt, seeing all those primordial forests for the first time?)
Your service still uses the legacy ofbiz service createInvoiceForOrder through its getDispatcher() method, but that's not so bad: if you want to use an ofbiz service, you should use its dispatcher. In this example, however, you still had to write a static Java
method for your service, because you are using the ofbiz static Java method service
engine. With the POJO Service Engine, however, that is no longer necessary, and you
can remove the code in InvoiceServices.java and call
OrderInvoicingServices.invoiceNonPhysicalOrderItems() directly.
A final round of enhancements used the base entities instead of GenericValues, and the domains directory to load the order domain and the order repository, so this order invoicing service can function independently of the order management system. See POJO Service Engine for the code sample.
Putting It All Together
Now, let's see how we could put all this together to create applications around the
domain driven architecture. As we discussed before, related data Entities could be
grouped together as an Aggregate, which will have related Factories, Repositories, and
Services. For example, an aggregate of concepts related to invoicing might include the
Invoice, InvoiceItem, InvoiceStatus, InvoiceContactMech, InvoiceAttribute entities as
well as invoice factories, invoice repositories, and several invoicing services:

Several of these Aggregates may then form a Domain of related business knowledge.
For example, the Billing domain may consist of Invoice and Payment aggregates and
their related factories, repositories, and services. This Domain would interact with other
domains, such as Organization, Ledger, Party, and Product:
An application, such as the opentaps Financials application, could be built from several
relatively independent domains:

To keep them relatively independent of each other, an interface should be declared for
each domain, and they should return interfaces to the repositories, factories, and
services. Interfaces are not necessary for the entities, however, since entities represent
a data model, which must be implemented in the same way for all opentaps
applications. For example, Invoice will always have to have an invoice ID field, and the
getInvoiceId() method should always return the value of that field. If different underlying
invoicing systems use different types of invoice IDs, it is the responsibility of the invoice
repository to parse that and store it in the invoice ID field of Invoice. The Invoice entity
does not need to be changed. Here is an example of the interface for the billing domain,
defined in org.opentaps.domain.billing.BillingDomainInterface:
import org.opentaps.domain.billing.invoice.InvoiceRepositoryInterface;

public interface BillingDomainInterface {

public InvoiceRepositoryInterface getInvoiceRepository();


}
To make life a little easier, an abstract Domain class is available to encapsulate Infrastructure and User, so you don't have to set the Infrastructure and User after getting each repository, factory, or service. Instead, you can associate the Infrastructure and User with an actual implementation of a domain, and it will automatically populate Infrastructure and User for you. Here is the corresponding implementation in opentaps financials, org.opentaps.financials.domain.billing:
public class BillingDomain extends Domain implements BillingDomainInterface {

    public InvoiceRepository getInvoiceRepository() {
        InvoiceRepository repository = new InvoiceRepository();
        repository.setInfrastructure(getInfrastructure());
        repository.setUser(getUser());
        return repository;
    }
}
There should only be one directory of domains at any one time, so that all the opentaps
applications use the same domains. In opentaps, this domain directory is defined in the
DomainsDirectory class, and the actual domains are defined in hot-deploy/opentaps-
common/config/domains-directory.xml:
<beans>

<bean id="opentapsBillingDomain"
class="org.opentaps.financials.domain.billing.BillingDomain"/>

<bean id="domainsDirectory" class="org.opentaps.domain.DomainsDirectory">


<property name="billingDomain"><ref bean="opentapsBillingDomain"/></property>
</bean>

</beans>
Note that domains are declared explicitly in the DomainsDirectory, rather than as a
Map. To add a new domain, you must modify the DomainsDirectory class to add a new
member plus accessor (set/get) methods. To change your domains, you can just modify
this xml file. For example:
<beans>

<bean id="myBillingDomain" class="com.mycompany.domain.billing.BillingDomain"/>

<bean id="domainsDirectory" class="org.opentaps.domain.DomainsDirectory">


<property name="billingDomain"><ref bean="myBillingDomain"/></property>
</bean>

</beans>
When you restart opentaps, the new domains will be loaded.
To load your domains, use DomainsLoader, which can be instantiated with
Infrastructure and User:
// get the domain
DomainsLoader dl = new DomainsLoader(new Infrastructure(dispatcher), new User(admin));
DomainsDirectory domains = dl.loadDomainsDirectory();
BillingDomainInterface billingDomain = domains.getBillingDomain();

// now we can use it


InvoiceRepositoryInterface invoiceRepository = billingDomain.getInvoiceRepository();
Invoice invoice = invoiceRepository.getInvoiceById("10000");

Using the Query Tool
A common problem with the ofbiz entity engine and delegator is that it could be difficult
to construct complex queries. For example, try writing this query using the delegator:
SELECT OI.PRODUCT_ID, OI.QUANTITY FROM ORDER_ITEM AS OI
LEFT JOIN ORDER_HEADER AS OH ON OI.ORDER_ID = OH.ORDER_ID AND OH.ORDER_TYPE_ID = 'SALES_ORDER'
LEFT JOIN PRODUCT AS PR ON OI.PRODUCT_ID = PR.PRODUCT_ID
WHERE ((OI.STATUS_ID = 'ITEM_APPROVED') OR
       (OI.STATUS_ID <> 'ITEM_CANCELLED' AND
        OH.STATUS_ID NOT IN ('ORDER_CANCELLED', 'ORDER_REJECTED', 'ORDER_COMPLETED')))
AND OI.PRODUCT_ID IN (SELECT PRODUCT_ID FROM PRODUCT_CATEGORY_MEMBER WHERE PRODUCT_CATEGORY_ID = '100')
You will find yourself having to declare either a view entity in XML or a
DynamicViewEntity in Java, then write the conditions out in reverse Polish notation with
EntityOperators and lists of sub-conditions, and then realize that there is actually no
support for sub selects. So then you will have to write a query first and use an EntityUtil
method to filter out the list of product IDs so that you can create an EntityOperator.IN for
it.
To solve this problem, we created an opentaps query tool, which works with the ofbiz
delegator but allows you to write your queries in familiar SQL. The query tool could then
return the results of your query as a ResultSet, a List of Maps, a List of entity engine
GenericValues, or the entity engine's EntityListIterator.
Another benefit of the query tool is that it allows you to use JDBC prepared statements,
which is more efficient because the query can be reused for different parameters, rather
than a different query being passed to your database each time, which is the case with
the delegator. (See this article about JDBC performance tuning.)
This feature has been implemented in the opentaps 1.0 trunk and will be available in
future versions of opentaps, such as 1.2/1.4 or beyond. For older versions of opentaps,
such as 1.0.x, download the querytool.patch. For older versions of opentaps, such as
0.9.x, you can put the query package classes into another opentaps component such as
crmsfa.
How It Works
To use the query tool, create a QueryFactory from the delegator:
import org.opentaps.common.query.*;
QueryFactory qf = new QueryFactory(delegator); // creates QueryFactory from the default "org.ofbiz" group-name in entitygroup.xml
QueryFactory qf2 = new QueryFactory(delegator, "com.mine"); // in case you had a different group-name
Then, use your QueryFactory to create a Query. The syntax should be similar to Hibernate querying:
Query q = qf.createQuery("SELECT * FROM STATUS_ITEM WHERE STATUS_ID
LIKE 'INVOICE%'");
Note that the table name must be written exactly as it appears in SQL, including its case. On Linux, for example, MySQL will not recognize lowercase table names.
You can then run your query in the following ways:
// run the query and get result set
q.executeQuery();
ResultSet rs = q.getResultSet();

// run the query and get it back as a List of Maps or the first value as a Map
List list1 = q.list();
Map map1 = q.firstResult();

// run the query and get an EntityListIterator. Specify the entity name and optionally a list of fields
EntityListIterator eli1 = q.entityListIterator("StatusItem");
EntityListIterator eli2 = q.entityListIterator("StatusItem", UtilMisc.toList("statusId", "statusTypeId", "description"));

// run the query and get a List of GenericValues. Specify the entity name and optionally
a list of fields
List list3 = q.entitiesList("StatusItem");
List list4 = q.entitiesList("StatusItem", UtilMisc.toList("statusId", "statusTypeId",
"description"));
Because Query implements the standard JDBC PreparedStatement, you can set
parameters to your Query as if it were a PreparedStatement:
Query q2 = qf.createQuery("SELECT * FROM STATUS_ITEM WHERE STATUS_ID LIKE ? AND STATUS_TYPE_ID LIKE ?");
q2.setString(1, "%APPROVE%");
q2.setString(2, "INVOICE%");
List list5 = q2.list();
Technical Notes
When the Query is first instantiated, a PreparedStatement is created, and on the first call to a method which causes the query to be executed, such as .list(), the PreparedStatement is run, a ResultSet is obtained, converted to a List, and then closed. Subsequent calls to .list() only return the previously stored list and do not cause another query to be run. If you need to run the query again, call .clearQueryResults().
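A small usage sketch of this caching behavior (the query text is illustrative):

Query q = qf.createQuery("SELECT * FROM STATUS_ITEM");
List first = q.list();   // runs the query against the database
List cached = q.list();  // returns the stored list; no second query
q.clearQueryResults();   // discard the cached results
List fresh = q.list();   // runs the query again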
Converting the query results to GenericValues/GenericEntities requires the use of the
ofbiz entity engine's EntityListIterator. If you use the .entityListIterator(..) method, the
EntityListIterator will be returned to you, and it will handle the closing of the connection
with its own .close() method. If you use the .entitiesList(..) methods, the EntityListIterator
and the ResultSet will be automatically closed.
The ResultSet is automatically closed on finalize().
The Query and QueryFactory throw a QueryException. If GenericValues/GenericEntities
are involved, they also will throw the GenericEntityException.

Base Entity Classes

Contents
1 Introduction
2 Generating Base Entities
3 Using Base Entities
4 Localization
5 Entity Relations
6 Interacting with the Database
7 Convenience Methods
Introduction
To support the object-oriented Domain Driven Architecture, there is a set of Java entity
classes in the org.opentaps.domain.base.entities package for all the entities defined
with the ofbiz entity engine, both from the original ofbiz applications and the opentaps
applications. The entity classes contain all the fields of the entity, accessor (get/set)
methods for each field, and fromMap and toMap methods to convert the Java class to a
Map. The only exception is that all floating-point values are automatically returned as BigDecimal instead of Double; this is done in the base Entity.java.
Generating Base Entities
The entity classes are automatically generated using a freemarker template hot-
deploy/opentaps-common/templates/BaseEntity.ftl, based on the entitymodel.xml
definitions for all the entities, including view-entities and fields defined by the extend-
entity tags, and the Java types defined in the fieldtype XML files for the entity engine. To
generate the base entities, run the following from the opentaps directory:
$ ant make-base-entities
It will clear out all the files in the base entities package, start opentaps and load the
delegator, and then regenerate the Java classes based on the current entity definitions.
Using Base Entities
The base entity Java classes could be used as a replacement for the ofbiz
GenericEntity/GenericValue objects. To go from a GenericValue to a Java class, use
the Repository.loadFromGeneric methods, such as:
Repository repository = new Repository(delegator);
List enumerationEntities = repository.loadFromGeneric(Enumeration.class,
enumerations);
When you are working from a repository, you should use the loadFromGeneric method
which also sets the repository for a new object:
GenericValue value = getDelegator().findByPrimaryKey("Invoice",
UtilMisc.toMap("invoiceId", invoiceId));
Invoice invoice = (Invoice) this.loadFromGeneric(Invoice.class, value, this);
These methods use reflection to access the fromMap method of the entity classes. They can create one object from one GenericValue, or a List of objects from a List of GenericValues.
To go from a Java class back to a GenericValue, use the toMap method:
Enumeration enumeration = new Enumeration();
// set its values
GenericValue value = delegator.makeValue("Enumeration", enumeration.toMap());
For convenience, we have also implemented the get(String fieldName) and set(String fieldName, Object value) methods of the ofbiz GenericEntity/GenericValue, so you can use these classes with the "." notation in freemarker pages. For example, for an object of the Invoice class, you can use
Invoice.getInvoiceId()
or
Invoice.invoiceId
as before.
Do NOT modify or edit the generated base entity classes by hand! They are automatically regenerated every time your data model changes, so all your changes will be overwritten.
If you have more complex classes, extend these base entity classes and implement the
additional methods there.
Localization
Backward-compatible localization is supported with two special get methods for all
entities. You can specify the field name and a locale with:
get(fieldName, locale);
Or, you can specify the UI labels resource, such as FinancialsUiLabels, to use with:
get(fieldName, resourceName, locale);
Entity Relations
The auto-generated base entities also provide you with methods to traverse the entity relationships defined in the entitymodel XML files. You can either use the basic getRelated methods, which allow you to specify the class and the relationship name, such as:
getRelated(ReturnItem.class, "ReturnItem");
Or, you can use the methods which are also automatically generated from the entity
model XML definitions for the relationships. For example, if your entity had a one to one
relationship defined as:
<relation type="one" fk-name="RTN_ITEM_RTN" rel-entity-name="ReturnHeader">
Then your auto-generated base entity would have a method defined as:
public ReturnHeader getReturnHeader()
Similarly, if you define a one to many relationship, there will be a method which returns
a list rather than an object. For example,
<relation type="many" fk-name="RTN_ITEM_OISGIR" rel-entity-
name="OrderItemShipGrpInvRes">
will cause the following method to be created:
public List<? extends OrderItemShipGrpInvRes> getOrderItemShipGrpInvReses()
Note that the method names are automatically pluralized if the relationship is one to
many.
Finally, if your relationship has a title, it will also be added to the method, so:
<relation type="one" fk-name="ORDER_HDR_OFAC" title="Origin" rel-entity-
name="Facility">
will create a method:
public Facility getOriginFacility()
Interacting with the Database
The ofbiz-specific implementation of the Repository provides you with the following methods for interacting with the database, which abstract the underlying data access layer:
findOne: finds an Entity by its primary key
findOneNotNull: like findOne but throws EntityNotFoundException instead of returning
null
findList: finds a list of Entities using the arguments. Similar to findByAnd and
findByCondition of the delegator.
getFirst: gets the first Entity from a list, or null if there is no first value in the list
These methods are not available through the RepositoryInterfaces. They are made
publicly available in the ofbiz Repository classes only for convenience, in case you need
to access them in your scripts. However, please remember that in programming as in
life, too many "conveniences" eventually lead to problems. Therefore, we recommend
that as much as possible, you use these methods in your scripts only for prototyping,
and then you put the finished find methods into a Repository method and write unit tests
for it.
For example, in your script you can get the ofbiz Repository implementation, and then
use it for a query:
Repository repository = new org.opentaps.foundation.repository.ofbiz.Repository(infrastructure, user);
List<Order> orders = repository.findList(Order.class, conditions);
But unless this script is just a one-time thing, and not terribly important, you should
eventually move the find operation into a Java repository class. To keep your
repositories manageable, you can push less used methods into specialized repositories.
For example, this method could be part of an OrderViewRepository instead of the main
OrderRepository where the more commonly used order repository methods are:
public interface OrderViewRepositoryInterface {
    public List<Order> findOrdersBy__(..);
}

public class OrderViewRepository implements OrderViewRepositoryInterface {

    public List<Order> findOrdersBy__(..) {
        return repository.findList(Order.class, conditions);
    }
}
You should never use these find* methods directly in services or business logic at a higher level than your repositories.
Convenience Methods
getDistinctFieldValues: a static method on Entity which gets you a Set of the distinct
values for a field name.

POJO Service Engine
The POJO service engine is designed to allow you to mount your Java service objects
directly on to the ofbiz service engine, without having to write a static Java method to
call it. To use the POJO service engine, you have to declare your service with service
engine XML file, just like for all of the other ofbiz services, but use pojo instead of java
as the engine:
<service name="opentaps.invoiceNonPhysicalOrderItems" engine="pojo"
location="org.opentaps.financials.domain.billing.invoice.OrderInvoicingService"
invoke="invoiceNonPhysicalOrderItems">
<description>Creates an invoice from the non-physical items on the order. It will
invoice from the status in the orderItemStatusId,
or if it is not supplied, default to ITEM_PERFORMED. After the invoice is created,
it will attempt to change the items' status
to ITEM_COMPLETE.</description>
<attribute name="orderId" type="String" mode="IN" optional="false"/>
<attribute name="orderItemStatusId" type="String" mode="IN" optional="true"/>
<attribute name="invoiceId" type="String" mode="OUT" optional="false"/>
</service>
Then, you could write your service as a Java class with the following requirements:
It must extend the base org.opentaps.foundation.service.ofbiz.Service class
You must have a default constructor which takes no parameters
For each input parameter defined in your services XML, you must have a set method.
The set method must be named "set" plus the name of the variable, with the first letter
capitalized. For example, if you have "orderId" as an input parameter of your service,
you must have a "setOrderId" method. It cannot be "SetOrderId", "setorderid", or
"setorderId". This is done intentionally to enforce code consistency.
Each set method must take one parameter, and it must match the parameter type in your
services XML. For example, if your services XML specifies "java.util.List", your set
method must take a single parameter of the java.util.List class, not ArrayList, FastList,
LinkedList, etc.
For each output parameter defined in your services XML, you must define one get
method which takes no parameters. The name of the get method must be "get" plus the
name of the variable with the first letter capitalized (ie, "getInvoiceId()" for "invoiceId").
The invoke method must be a void method with no parameters.
Instead of returning ServiceUtil.returnError when there is a problem with the service,
throw exceptions such as ServiceException
You can define more than one service inside the same Java class by using different
void methods without parameters. The same Java class should be shared among
services that have similar input and output parameters.
Within the service, you can:
access other domains of the Domain Driven Architecture using the
getDomainsDirectory() method of the Service superclass.
get the ofbiz framework's delegator and dispatcher through the Infrastructure with the
getInfrastructure() method, then calling getDelegator() and getDispatcher()
get a UserLogin GenericValue for running legacy services written in the ofbiz framework
by getting the User object from the getUser() method, then calling its
getOfbizUserLogin() method
Here is a complete example:

public class OrderInvoicingService extends Service implements OrderInvoicingServiceInterface {

    private static final String module = OrderInvoicingService.class.getName();

    protected String orderId = null;
    protected String invoiceId = null;

    // by default, non-physical order items in this state will be invoiced
    protected String statusIdForNonPhysicalItemsToInvoice = OrderSpecification.ITEM_STATUS_PERFORMED;

    public OrderInvoicingService() {
        super();
    }

    public OrderInvoicingService(Infrastructure infrastructure, User user, Locale locale) throws ServiceException {
        super(infrastructure, user, locale);
    }

    public void setOrderId(String orderId) {
        this.orderId = orderId;
    }

    public String getInvoiceId() {
        return this.invoiceId;
    }

    /**
     * Sets the status id of non-physical order items to be invoiced by invoiceNonPhysicalOrderItems,
     * or OrderSpecification.ITEM_STATUS_PERFORMED will be used.
     * @param statusId
     */
    public void setOrderItemStatusId(String statusId) {
        if (statusId != null) {
            statusIdForNonPhysicalItemsToInvoice = statusId;
        }
    }

    public void invoiceNonPhysicalOrderItems() throws ServiceException {
        try {
            // validate that the order actually exists and get the list of non-physical items
            OrderDomainInterface orderDomain = getDomainsDirectory().getOrderDomain();
            OrderRepositoryInterface orderRepository = orderDomain.getOrderRepository();

            Order order = orderRepository.getOrderById(orderId);
            List<OrderItem> itemsToInvoice = order.getNonPhysicalItemsForStatus(statusIdForNonPhysicalItemsToInvoice);

            // check if there are items to invoice
            if (UtilValidate.isEmpty(itemsToInvoice)) {
                throw new ServiceException("OpentapsError_PerformedItemsToInvoiceNotFound", UtilMisc.toMap("orderId", orderId));
            }

            // create a new invoice for the order items
            // because of the way createInvoiceForOrder is written (665 lines of code!) we'd
            // have to do some re-factoring before we can add the items to an existing invoice
            Map tmpResult = getInfrastructure().getDispatcher().runSync("createInvoiceForOrder",
                    UtilMisc.toMap("orderId", orderId,
                                   "billItems", Repository.genericValueFromEntity(getInfrastructure().getDelegator(), "OrderItem", itemsToInvoice),
                                   "userLogin", getUser().getOfbizUserLogin()),
                    7200, false); // no new transaction
            if (ServiceUtil.isError(tmpResult)) {
                throw new ServiceException(ServiceUtil.getErrorMessage(tmpResult));
            }

            // change the status of the order items to COMPLETED
            order.setItemsStatus(itemsToInvoice, OrderSpecification.ITEM_STATUS_COMPLETED);

            // set the invoiceId of the new invoice created
            this.invoiceId = (String) tmpResult.get("invoiceId");

        } catch (GeneralException ex) {
            throw new ServiceException(ex);
        }
    }
}

Unit Testing
Contents
1 How to Write Unit Tests
1.1 opentaps 1.0
1.2 opentaps 0.9
2 Where are the Unit Tests?
3 Setting Up For Unit Testing
4 Unit Testing Strategies
5 A Unit Testing Tutorial
6 Creating Reference Data Sets
7 Running a Unit Test from Beanshell
8 Debugging Unit Tests with IntelliJ
9 Dealing with Concurrency
10 Warning about running Unit Tests in MySQL

How to Write Unit Tests


opentaps 1.0
For opentaps 1.0, you would write a set of Junit tests in a class, then define it in an XML
testdef file like this:
<test-suite suite-name="entitytests"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="http://www.ofbiz.org/dtds/test-suite.xsd">
    <test-case case-name="security-tests"><junit-test-suite class-name="com.opensourcestrategies.crmsfa.test.SecurityTests"/></test-case>
</test-suite>
You can define multiple tests per testdef xml file. Then, add the testdef file to your
ofbiz-component.xml, like this:
<test-suite loader="main" location="testdef/crmsfa_tests.xml"/>
Then, when you do
$ ant run-tests
your tests will be run.
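The test class itself is ordinary JUnit; a minimal sketch of what such a class might look like (it assumes the OpentapsTestCase base class discussed later, including its delegator field; the assertion is only illustrative):
public class SecurityTests extends OpentapsTestCase {
    // every test method must be named testXXX (see the unit testing tutorial below)
    public void testFrameworkIsWiredUp() {
        assertNotNull("test delegator should be configured", delegator);
    }
}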
opentaps 0.9
In opentaps 0.9, you would write your Junit test class and add it to the
base/config/test-containers.xml file, in the "junit-container" at the bottom, like this:
<container name="junit-container" class="org.ofbiz.base.container.JunitContainer">
<property name="base-test" value="org.ofbiz.base.test.BaseUnitTests"/>
<property name="entity-test" value="org.ofbiz.entity.test.EntityTestSuite"/>
<property name="service-test" value="org.ofbiz.service.test.ServiceEngineTests"/>
<property name="crm-security"
value="com.opensourcestrategies.crmsfa.test.SecurityTests"/> <!-- your unit tests -->
<!--
<property name="usps-test"
value="org.ofbiz.shipment.thirdparty.usps.UspsServicesTests"/>
<property name="jxunit-test" value="net.sourceforge.jxunit.JXTestCase"/>
-->
</container>
Then you would do
$ ant run-tests
Your tests will run alongside the existing OFBIZ test suites.
IMPORTANT:
Use a "test" delegator to point your tests to a separate database, and make sure the
"test" delegator defined in framework/entity/config/entityengine.xml points to the right database.
The opentaps tests are commented out in hot-deploy/component-load.xml by default so
don't forget to activate them.
Where are the Unit Tests?
All opentaps unit tests are located in hot-deploy/opentaps-tests
There are also a small number of unit tests from ofbiz in their respective modules, such
as framework/entity for the entity engine unit tests.
Setting Up For Unit Testing
We recommend that you create a separate database on the same database server for
testing purposes and install all demo data into the testing database. Let's say that this
database is called "opentaps_testing". Then, edit the file
framework/entity/config/entityengine.xml and define opentaps_testing as a new
datasource, called "localmysqltesting" or "localpostgrestesting". Next, load the demo
data into the testing database by editing the default delegator:
<delegator name="default" entity-model-reader="main" entity-group-reader="main"
entity-eca-reader="main" distributed-cache-clear-enabled="false">
<group-map group-name="org.ofbiz" datasource-name="localXXXtesting"/>
</delegator>
Then do an
$ ant run-install
to install all the seed and demo data into the testing database. Then you can edit the
default delegator back to your original delegator, and set the test delegator to the
testing database:

<delegator name="test" entity-model-reader="main" entity-group-reader="main" entity-eca-reader="main">
    <group-map group-name="org.ofbiz" datasource-name="localXXXtesting"/>
</delegator>
All unit tests should be run using the test delegator. This can be done by instantiating
the "test" delegator by name and using that delegator to instantiate a dispatcher, or
you can just write a test suite which extends the OpentapsTests base class, which does
this for you.
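For reference, doing this by hand with the ofbiz framework of this era looks roughly like the following sketch (the dispatcher name is arbitrary); extending the base class spares you this boilerplate:
import org.ofbiz.entity.GenericDelegator;
import org.ofbiz.service.GenericDispatcher;
import org.ofbiz.service.LocalDispatcher;

// instantiate the "test" delegator by name, then build a dispatcher on top of it
GenericDelegator delegator = GenericDelegator.getGenericDelegator("test");
LocalDispatcher dispatcher = GenericDispatcher.getLocalDispatcher("opentaps-tests", delegator);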
If you need to modify port settings for the testing instance, you should edit the file
framework/base/config/test-containers.xml.
Unit Testing Strategies
These are some strategies for unit testing:
Transaction comparison - Compare the transaction produced with a sample transaction,
possibly pre-loaded into the system. For example, posting a paycheck to the ledger and
then comparing with test data of a correct ledger transaction to make sure that they are
equivalent. Equivalence is a very important concept: it is not possible that two sets of
transactions are identical, since at a minimum they would have different ID numbers,
and they would probably reference other transactions with different IDs. For example,
each order would have a different orderId and different inventory item Ids reserved
against it. However, two orders may be considered equivalent if they have the same set
of items, prices, shipping methods, customer, addresses, tax and promotional amounts,
etc.
State change - Compare the state of the system before and after a transaction has
occurred. For example, check the inventory of an item, then ship an order, and check the
resulting inventory to make sure that it is correctly decremented. This could get very
complex: shipping an order could affect customer balances, invoices, ledger postings,
and inventory. Multiple tests could be run off the same transaction event.
Absolute state check - At all times, certain relationships must hold. For example, the
sum of all debits must equal sum of all credits.
Tests should be written against the services that create the original data. For example, if
you are writing tests against CRMSFA activity, you can use users from the demo data
set, but you should use the CRMSFA activity services to create or update your
activities. Otherwise, if you create those activities with some other method, future
changes to the services to create activities will not be covered by your unit tests.
Tests should be run against a dedicated testing database with demo and seed data
rather than production data. Therefore, the tests generally should set up their own initial
conditions and run to completion, but they do not need to "tear down" and remove all
data created by the tests. (This would be very impractical: imagine creating and
shipping an order. To tear it down would involve reverting order data, customer
information, inventory data, shipment data, invoices and payments, and accounting
entries.) A good test for the tests is that if you ran the test suite in succession multiple
times, they should pass during the second and third runs as well as the first run.
A Unit Testing Tutorial
IMPORTANT: Each unit test method must start with the word "test" -- it must be called
testXXX(), not tryXXX() or verifyXXX().
Now let's walk through a particular unit test and see how it works. The one that we're
looking at is the ProductionRunTests.java's testProductionRunTransactions method.
This particular test verifies that a standard manufacturing process is working correctly
and checks the inventory results and financial statements. As you read through the
code, you will notice that it does the following:
Sets up by first receiving the raw materials (MAT_A_COST and MAT_B_COST) into
inventory
Checks the initial state by getting the GL account balances and the initial inventory
quantities, both ATP and QOH
Runs through the production run
Checks the final state by getting the GL account balances and the inventory quantities
for raw materials and the finished product.
Verifies the following:
The change in inventory quantities is correct: raw materials are used, so their quantities
are reduced, and the finished product's quantity is increased because it is produced.
The changes in GL account balances are correct: inventory value increases are
offset by raw materials and manufacturing expenses.
The financial statements are in balance at all times.
The financial transactions created by this production run are in agreement with the
reference transactions MFGTEST-1, -2, -3. This is done by finding all new financial
transactions after the production run began, as they should only be generated
by the production run.
The unit value of the finished product is correct.
Along the way, the tests will verify that all the services are run correctly and return
success as well.
The test case uses the classes InventoryAsserts and FinancialAsserts to obtain
information and run tests on the inventory balances and financial statement values. This
is a common "delegation" pattern to separate the code for testing assertions to new
classes. It also uses methods such as assertMapDifferenceCorrect and
assertTransactionEquivalence which are inherited from the OpentapsTestCase and
FinancialsTestCase base classes.
For comparisons of GL account changes, we have set a convention so that increases in
the balances of debit GL accounts are positive, and increases in the balances of credit
GL accounts are negative values. So, if a transaction caused an inventory GL account
to increase by 100 and accounts payable to increase by 100 as well, the GL account
changes would be {INVENTORY: 100, ACCOUNTS_PAYABLE: -100}
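In a test this convention shows up in the expected-difference map passed to the assertion helper; a hypothetical usage (the account keys and amounts are illustrative):
// inventory (a debit account) increased by 100, accounts payable (a credit account) increased by 100
Map expectedChanges = UtilMisc.toMap("INVENTORY_ACCOUNT", "100", "ACCOUNTS_PAYABLE", "-100");
assertMapDifferenceCorrect(initialBalances, finalBalances, expectedChanges);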
Note that in this case we used the receiveInventoryProduct service to receive inventory,
called the various production run services, and then compared the results versus pre-
stored AcctgTrans and AcctgTransEntries. With other tests, such as those for invoices
and payments, we have used pre-existing Invoice and Payment records stored in the
hot-deploy/opentaps-tests component and merely changed their status codes to verify
the results. This brings up an interesting question: what should be set up by calling the
services, and what with pre-existing data?
Our recommendation is this:
What you are testing must be done with the services that you would normally use as
part of your application. For example, when we are testing GL postings, the user does
not actually call the "postInvoiceToGl" service anywhere on the screen. Instead, she
would set the invoice status, and the services would run behind the scenes. Therefore,
it would not do to call the "postInvoiceToGl" service directly in the test, or worse, manually
create the results of that service in the database. Instead, we should be calling the service to set the invoice
status, which is the same one accessed via the controller.
For everything else, do whatever is easiest to set up the pre-conditions for testing.
Calling receiveInventoryProduct is pretty easy compared to storing InventoryItem and
InventoryItemDetail, so we decided to use that. Calling createInvoice and
createInvoiceItem would have been many lines of code, so we just stored an invoice in
the database.
Creating Reference Data Sets
In many tests you will see comparisons against pre-stored AcctgTrans and
AcctgTransEntries. These are reference data sets which are used to compare actual
transactions' results and make sure that they are consistent with the reference.
Reference data sets are created in the following way:
Run through a set of business transactions, such as creating an invoice and marking it
as READY.
Go to Webtools > XML Data Export and select the entities to export. In this case, it
might be the Invoice, InvoiceItem, AcctgTrans, AcctgTransEntry entities. Export them
either to a file or to a browser and copy them to a file.
Edit the file of transactions and change the following:
All IDs from the system-generated 100xx to something like "XXX-TEST-###" so that
they would not cause primary key conflicts.
For AcctgTrans, change the glFiscalTypeId of all the AcctgTrans to "REFERENCE"
from "ACTUAL" so they would not interfere with actual records.
Remove references to entities which would not be part of the reference set. For
example, the invoice might be part of the reference set, but workEffortId,
inventoryItemId, etc. referenced by AcctgTransEntry would not be.
Test by loading the new entity XML into your dedicated testing database. It should
cause no conflicts.
Add it to the opentaps-tests component's ofbiz-component.xml so that it would load for
future tests and commit it!
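For illustration, a finished reference data file might contain entries along these lines (the entity names and key fields are real, but the particular IDs, accounts, and amounts are hypothetical):
<entity-engine-xml>
    <AcctgTrans acctgTransId="MFGTEST-1" glFiscalTypeId="REFERENCE"/>
    <AcctgTransEntry acctgTransId="MFGTEST-1" acctgTransEntrySeqId="00001" glAccountId="140000" debitCreditFlag="D" amount="100.00"/>
    <AcctgTransEntry acctgTransId="MFGTEST-1" acctgTransEntrySeqId="00002" glAccountId="141000" debitCreditFlag="C" amount="100.00"/>
</entity-engine-xml>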
Running a Unit Test from Beanshell
After you have written a lot of unit tests, running all of them could take a long time.
Fortunately, you can use beanshell to run just one unit test at a time to speed up your
development. To do this, you would need to telnet into your beanshell port, then
instantiate an object of the unit tests class, and run your test method:
si-chens-computer:~ sichen$ telnet localhost 9990
Trying ::1...
Connected to localhost.
Escape character is '^]'.
BeanShell 2.0b4 - by Pat Niemeyer (pat@pat.net)
bsh % import org.opentaps.tests.purchasing.MrpTests;
bsh % mrpTests = new MrpTests();
bsh % mrpTests.testMrpPurchasedProduct();
bsh %
If the test succeeded, you would see no messages on your beanshell console. If it
failed, you would see a stack trace. In both cases, you should see the log messages in
runtime/logs/ofbiz.log or runtime/logs/console.log
To make your life even simpler, you can put all of this into a .bsh file of your own, like
myMrpTests.bsh, and then just call it with the source method from the beanshell
console:
bsh % source("myMrpTests.bsh");
Debugging Unit Tests with IntelliJ
The default task for tests will do a global compile. To skip this, you can redefine the run-
tests target in build.xml as follows,
<target name="run-tests">
<java jar="ofbiz.jar" fork="true">
<arg value="test"/>
</java>
</target>
Using a debugger can help speed up development of the unit tests. You can enable
debugging by specifying the JVM arguments for your debugging system. For instance, if
you have the IntelliJ IDE, the run-tests target becomes,
<target name="run-tests">
<java jar="ofbiz.jar" fork="true">
<jvmarg value="${memory.max.param}"/>
<jvmarg value="-Xdebug"/>
<jvmarg value="-Xnoagent"/>
<jvmarg value="-Djava.compiler=NONE "/>
<jvmarg value="-
Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005"/>
<arg value="test"/>
</java>
</target>
You should be able to attach the debugger immediately after running ant run-tests.
Don't forget to recompile the component where your tests live.
Another tip is to comment out all unnecessary test suites. Unfortunately, this involves
searching every ofbiz-component.xml. One way to find them, if you're on a POSIX OS,
is to use find,
$ find . -name ofbiz-component.xml -exec grep test-suite {} \; -print
Dealing with Concurrency
Since each test method runs in a separate thread, there may be concurrency issues
when you are using demo or test data in a test. For example, if you are testing whether
a certain number of commission invoices are being generated for sales invoices,
another thread could be generating additional ones at the same time. This leads to
unexpected results which can seem very mysterious.
To avoid concurrency issues, ensure that your tests are using generated data for
comparison purposes. For example, rather than using DemoCustomer as the target of a
sales invoice, you can create a copy of DemoCustomer,
String customerPartyId = createPartyFromTemplate("DemoCustomer");
// create invoice for customerPartyId
From this point, any data that relies on the partyId is sure to be specific to that test only.
An example would be when we're checking the customer balance against
customerPartyId. If we were checking the balance of DemoCustomer, then we might be
thrown off if another thread happens to create an invoice for DemoCustomer at the
same time.
Warning about running Unit Tests in MySQL
When running unit tests in MySQL that use transactions, please be aware that an assert
failure can issue a rollback on the transaction, leaving the data in the database in an
inconsistent state and potentially allowing the test to pass even if in reality it failed.

Working with the Domain Driven Architecture
Here are some tips for working with the Domain Driven Architecture:
Contents
1 Instantiating Services and Repositories
2 Documenting your Services
3 Messages from Your Services
4 Working with Entity Field Names

Instantiating Services and Repositories
If you have a domain, the easiest way to instantiate a Service or Repository from your
domain is to use the instantiateService or instantiateRepository methods, like this:
public OrderInventoryService getOrderInventoryService() throws ServiceException {
return instantiateService(OrderInventoryService.class);
}
This will automatically set up the Infrastructure and User for your Service or Repository.
Documenting your Services
Since your domain Services do not have to be routed through the ofbiz service engine,
you should put the documentation on the Java methods so that it shows up in the
Javadocs as well. For example:
/**
* Adds a <code>LockboxBatchItemDetail</code> to an existing
<code>LockboxBatchItem</code>.
* @throws ServiceException if an error occurs
* @see #setLockboxBatchId required input <code>lockboxBatchId</code>
* @see #setItemSeqId required input <code>itemSeqId</code>
* @see #setAmountToApply required input <code>amountToApply</code>
* @see #setCashDiscount required input <code>cashDiscount</code>
* @see #setPartyId optional input <code>partyId</code>
* @see #setInvoiceId optional input <code>invoiceId</code>
*/
Messages from Your Services
Services can return success and error messages to the user.
To return an error message, simply throw a ServiceException.
To return a success message, you can use either setSuccessMessage or
addSuccessMessage. The former is meant to return a single message whereas the
latter allows you to return a list of messages.
Each method supports localization by using labels that are translated into the user
locale; refer to the API for more details.
For example:
public void serviceA() throws ServiceException {
    try {
        // perform some modifications ...
        if (some error) {
            // return an error label; the second argument is the context for substitution in the message
            throw new ServiceException("ErrorLabel", null);
        }
    } catch (SomeException e) {
        // generic errors that you do not handle are also displayed back to the user
        // some exceptions are already localized (for example the RepositoryException related to not found entities)
        throw new ServiceException(e);
    }
    setSuccessMessage("SuccessLabel");
}
Beware that nested direct service calls can override the success message; if you do not
want that, be sure to set it as the last instruction of the method.
public void serviceB() throws ServiceException {
    // perform some modifications ...
    setSuccessMessage("SuccessLabelB");
}

public void serviceA() throws ServiceException {
    // perform some modifications ...
    setSuccessMessage("SuccessLabelA");
    serviceB();
    // SuccessLabelB has overwritten the success message
}
If you need to return a list of messages use addSuccessMessage, and beware that
setSuccessMessage will overwrite the list.
public void serviceB() throws ServiceException {
    // perform some modifications ...
    addSuccessMessage("SuccessLabelB");
}

public void serviceA() throws ServiceException {
    // perform some modifications ...
    addSuccessMessage("SuccessLabelA");
    serviceB();
    // returns [SuccessLabelA, SuccessLabelB]
}
Working with Entity Field Names
Each base entity has an enumeration listing its fields, and this enumeration is used in
the framework API instead of string literals. This means you can use IDE
autocompletion, and the resulting code is no longer at risk of runtime errors caused by
typos in field names.
For example, when writing condition maps for the repository find methods, the
repository provides a map method which allows you to write the map like this:

findList(OrderItemAssoc.class, map(OrderItemAssoc.Fields.orderId, "WS10000",


OrderItemAssoc.Fields.orderItemSeqId, "00001"));
It is also used when retrieving distinct field values from a list, for example:
List<OrderItem> items = findList(OrderItem.class, map(OrderItem.Fields.orderId,
"WS10000"));
Set productIdsInOrder = Entity.getDistinctFieldValues(items,
OrderItem.Fields.productId);

Opentaps Google Web Toolkit
Contents
1 Building GWT
2 Configuring Server Side Interaction
3 Permission
4 Building Widgets with Base Panels
4.1 The Base Classes
4.2 Validation
4.3 Notification

Building GWT
The Google Web Toolkit (GWT) is built independently of opentaps. To build the GWT
widgets,
$ ant gwt
To clear the previous build,
$ ant clean-gwt
This will cause ant to look for a "gwt" target in the opentaps components' build.xml files and
build them one at a time. In the component build.xml, the following directories are
specified for building gwt:
<property name="gwt.deploy.dir" value="./webapp/crmsfagwt"/>
<property name="gwt.module.base" value="org.opentaps.gwt.crmsfa"/>
<property name="gwt.src.common" value="../opentaps-
common/src/org/opentaps/gwt"/>
<property name="gwt.src.base" value="./src/org/opentaps/gwt/crmsfa"/>
Then, when ant tries to build gwt, it will look for all the gwt modules specified in the
build.xml. Each module is specified at a path of
${gwt.deploy.dir}/${gwt.module.base}.${module}.${module}. For example, if you specify
contacts as the module to compile, then opentaps will try to compile
org.opentaps.gwt.crmsfa.contacts.contacts.gwt.xml, which should be in your src/ path.
When you have an additional GWT module to build, add it to the list of modules:
<foreach list="contacts,accounts,leads,partners" target="gwtcompile"
param="module"/>
To speed up the build during development, you can setup GWT to only compile for one
of the supported browsers. This is configured in the common module in hot-
deploy/opentaps-common/src/org/opentaps/gwt/common/common.gwt.xml. For
example, you can enable it for only Mozilla/Firefox by setting the user.agent property to
"gecko1_8":
<set-property name="user.agent" value="gecko1_8"/>
Configuring Server Side Interaction
Your GWT widgets will need to interact with server-side services to store and retrieve
data. A "best practices" pattern we have started in opentaps is to create a configuration
Java file for each server side service available for GWT client-side widgets. For
example, there is a
org.opentaps.gwt.crmsfa.contacts.client.form.configuration.QuickNewContactConfigurati
on Class which contains the server-side URL and all the form parameters for interacting
with the quick new contact service on the server. This is part of the GWT client package
and is designed to be used by all the client-side widgets. Note that the pattern is to have
one Configuration Java file for each   -side service, to be shared by many client-
side widgets which may access the same server-side service, not to have a
configuration file for each client-side widget.
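As a sketch of this pattern (the URL and parameter constants below are illustrative, not the actual contents of QuickNewContactConfiguration):
public final class QuickNewContactConfiguration {

    // the server-side request the client widgets post to (hypothetical URL)
    public static final String URL = "/crmsfa/control/gwtQuickNewContact";

    // the form parameter names the server-side service expects (hypothetical names)
    public static final String IN_FIRST_NAME = "firstName";
    public static final String IN_LAST_NAME = "lastName";

    private QuickNewContactConfiguration() { }
}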
Permission
The GWT widgets do not perform security checks, but user permissions are made
available to them in order to adapt the user interface accordingly. The real security
checks allowing a user to perform an operation or retrieve data are performed
server-side.
Client-side permission checking is handled in the following way:
The server side uses the User object to determine what permissions the currently
logged in user has and puts it into the webpage sent to the client as an object using
JavaScript. This is done in the main-decorator.bsh and header.ftl of the server.
On the client-side, the Permission class retrieves the security permissions set into the
browser via JavaScript. Your GWT widget can use its hasPermission method to check if
the user has permissions to access certain sections of your page:
if (!Permission.hasPermission(Permission.CRMSFA_CONTACT_CREATE)) {
return;
}
WARNING: Do not rely on these checks to hide sensitive data or services from a user.
Specifically, it is possible for the end user to modify the JavaScript and add permissions
for their displayed widgets. Therefore, you should always filter out sensitive data before
sending them to the client-side widgets, and every operation on the server side should
check permissions again. Client-side widget permission checking is only for hiding parts
of the user interface and should not be considered a security feature.
Building Widgets with Base Panels
Base panels are base classes providing handlers and utility methods to quickly build
forms that integrate with the application.
The Base Classes
BaseFormPanel is the base of all forms and it provides the following:
addField and addRequiredField methods that set the correct class for the labels and
set the handler that submits the form when the user hits the Enter key
a method that places a standard submit button
the three form event handlers that perform validation before submitting,
display an activity indicator when the form is submitted, and handle exceptions returned by
the server
a mechanism to notify any registered widgets when the form has been successfully
submitted
ScreenletFormPanel is the base class for forms that should fit in the left column; in all
aspects it behaves the same as BaseFormPanel, only the CSS classes applied differ.
TabbedFormPanel provides methods to create a multiple-tab form such as the one
used to present the filters available in Find Contacts. The tabs created are
SubFormPanels, which provide the same add field methods as BaseFormPanel.
Validation
BaseFormPanel provides simple validation that is automatically called before trying to
submit the form. It works by checking each field in the form against its own internal
validation method; for example, it checks that all required fields are filled, that email
address input fields have valid email addresses, etc.
In order to implement more complex validation, simply override the base validation method
(be sure to call the base implementation first to keep checking the field validation methods).
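A sketch of such an override, with hypothetical field names and with isValid standing in for whichever validation hook the base panel actually exposes (check the BaseFormPanel API for the exact method):
@Override
public boolean isValid() {
    // keep the built-in per-field validation
    if (!super.isValid()) {
        return false;
    }
    // then apply an extra rule that spans several fields
    return fromDateField.getText().compareTo(thruDateField.getText()) <= 0;
}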
Notification
In order to be notified, a widget must first implement the FormNotificationInterface.
You can register that widget in your form (using the register method), and the
notifySuccess method of that widget will get called if your form was successfully
submitted. This is normally done in the entry class, which is the place where all widgets
are loaded.
For example, the party list views implement it, allowing forms that create new parties,
such as the QuickNewContactForm, to notify the list that it should reload in order to
display the newly created entity.
The notification is set up in the contact entry point:
if (RootPanel.get(QUICK_CREATE_CONTACT_ID) != null) {
loadQuickNewContact();
// for handling refresh of lists on contact creation
if (myContactsForm != null) {
quickNewContactForm.register(myContactsForm.getListView());
}
if (findContactsForm != null) {
quickNewContactForm.register(findContactsForm.getListView());
}
}
So when the QuickNewContactForm has successfully submitted, its success handler
notifies each registered widget by calling the notifySuccess method from the
FormNotificationInterface; the registered list views respond by reloading themselves so
that the newly created contact appears.
Form macros:
How to use the opentaps Form Macros
opentaps Form Macros Documentation
Why It Exists
Many of us are constantly creating forms for our users to enter and display data. Most of
those forms share common elements: text input fields, date input fields, drop downs,
etc. etc. Wouldn't it be nice to have a tool which helps you make and manage them? At the
same time, the tool should still give you control over the design of your form, so you
don't end up with an ugly cookie-cutter look for all your forms. You should be able to
add form elements in HTML when they are appropriate, or completely change the layout
and design of your forms by just changing the HTML.
The opentaps Form Macros were created for this reason: to make writing forms easier,
while still giving you control over the final layout. The macros help you design form
elements such as input rows, select boxes, and date fields more efficiently but do not
force you to use them--you can write some form elements with them, write others in
HTML or anything else. It is completely written in Freemarker and can be accessed from
any Freemarker page, so you can combine opentaps Form Macros, HTML, and Freemarker
in the same form. It is also easy to extend or re-skin: you edit the form macros file and
make your changes there, without updating XSD definitions or Java code.
How It Works
First, you must make sure that the opentaps form macro importing tool is loaded. This
can be done by including the following code in your beanshell (.bsh) script for your
page. They can be put in main-decorator.bsh so that the form macros would work for
your entire webapp:
loader = Thread.currentThread().getContextClassLoader();
globalContext.put("import", loader.loadClass("org.opentaps.common.template.freemarker.transform.ImportTransform").newInstance());
globalContext.put("include", loader.loadClass("org.opentaps.common.template.freemarker.transform.IncludeTransform").newInstance());
The form macros are located in an FTL file in your opentaps-common directory:
hot-deploy/opentaps-common/webapp/common/includes/lib/opentapsFormMacros.ftl
To use it, simply include the form macros in your Freemarker (FTL) page, like this:
<@import location="component://opentaps-common/webapp/common/includes/lib/opentapsFormMacros.ftl"/>
<@import /> is an opentaps Freemarker extension which allows macros to be imported
into the current context from any file in your opentaps applications.
Now you are ready to use the form macros, like this:
<#list inventoryProduced as inventoryItemProduced>
  <#assign inventoryItem = inventoryItemProduced.getRelatedOne("InventoryItem")/>
  <#if inventoryItem.inventoryItemTypeId == "SERIALIZED_INV_ITEM">
    <tr class="${tableRowClass(rowIndex)}">
      <@displayLink href="EditInventoryItem?inventoryItemId=${inventoryItem.inventoryItemId}" text="${inventoryItem.inventoryItemId}"/>
      <@display text="${inventoryItem.productId}"/>
      <@inputText name="serialNumber_o_${rowIndex}" default="${inventoryItem.serialNumber?if_exists}"/>
      <@inputHidden name="_rowSubmit_o_${rowIndex}" value="Y"/>
      <@inputHidden name="inventoryItemId_o_${rowIndex}" value="${inventoryItem.inventoryItemId}"/>
    </tr>
  </#if>
</#list>

In this example, we've mixed Freemarker directives (if, list, assign), HTML and CSS
tags (tr, class), and opentaps form macros (displayLink, display, inputText,
inputHidden). The form macros are just macros for generating the appropriate HTML
around the parameterized fields and values. The list of form macros and how to use them
is given in the API below.
That's all there is to it.
The opentaps Form Macros API
Notation:
@inputHidden name value=""
means that the macro can be used as:
<@inputHidden name="facilityId">
which creates a hidden input with default value of "". Or, it can be used as:
<@inputHidden name="facilityId" value="${facilityId}">
which creates a hidden input with default value of whatever ${facilityId} is in the context.
Each attribute (ie, name or value in this case) after the name of the macro (inputHidden
in this case) is a parameter. If there is an = after the attribute, it defines a default
value.
By convention, these are standard fields for all macros which may use them:
name: name of the field.
title: descriptive title of this field, used for rows (ie, "Charge Tax?")
form: name of the current form, for javascript such as lookup widgets
list: used for dropdowns. The list of maps or GenericValue entities where the
information for the select option elements will be retrieved
key: used for dropdowns. For each map in "list", a select "option" element is generated,
and its value (the "value" attribute) comes from map or GenericValue's entry under the
key in "key"; if the value of "key" is empty, then the value of "name" is used as the
lookup key instead.
displayField: used for dropdowns. The value of lookup key used to retrieve (from each
map in "list") the display text of the generated option. Note that if "displayField" is empty,
then the macro expects a nested string that contains the FTL string that will be used as
the option display text.
default: the default value for the field. For dropdowns, this is the option that will be
initially selected by the browser (optional).
index: used for multi-row submit forms. By default, index is set to -1, which means
nothing happens. If set to a different value, then "_o_${index}" is appended to the field
name. For example, if you call a macro with name="productId" and index="5", you will
get "productId_o_5" for the name of the field.
required: for dropdowns, whether the user is required to select an option from the select
box or not (if not, then a "default" option element with an empty value is generated, in
addition to the other ones).
The form macros can be divided into two sub-groups: element and row macros.
Element macros are for creating a single cell or form element. Row macros use the
element macros for creating an entire input row. For example, an element macro might
be used to create a date entry field, which can be used in a multi-row or single submit
form. A date entry row macro might then use the date entry field element macro to
create a row with a title ("Start Date") and the element macro, all wrapped in TR and TD
tags.
Row Macros
@inputRowText name title size=30 maxlength="" default=""
Creates a text entry row for the field with name and displays the title. Optionally specify size, maxlength, and default values.

@inputRowLookup name title lookup form size=20 maxlength=20 default=""
Creates a text entry row with a lookup. The lookup URL is specified in the "lookup" parameter.

@inputHidden name value=""
Creates a hidden input for name with default value of "".

@inputRowDateTime name title form default=""
Creates a date and time input row.

@inputRowIndicator name title required=true default=""
Creates an input row with a Y/N dropdown (select) box. This uses inputIndicator (see above).

@inputRowSubmit title colspan="1"
Creates a submit button. Specify the word in the button with title and how many columns it spans.
Element Tags
@displayTitle text class="tableheadtext" width=200
Displays a title tag. Used by inputRowText.

@inputLookup name lookup form default="" size=20 maxlength=20 index=-1
An input element with a lookup button next to it. lookup is the controller request for the lookup (ie, LookupProduct).

@inputIndicator name required=true default=""
Creates a dropdown (select) box of Y/N for the name. If required=true, the user must select one.

@inputSelect name list title="" key="" displayField="" default="" index=-1 required=true defaultOptionText="" display="row|cell|block|inline"
Creates a select box. If the value of "required" is false and an empty default option element is generated, then "defaultOptionText" will contain the display text for that option. Use display="row" for a row, display="cell" for a cell, display="block" for a block, or display="inline" for an inline select box.

@inputText name size=30 maxlength="" default="" index=-1
Creates an input text box. index is not implemented yet.

@inputConfirm title href="" form="" confirmText=uiLabelMap.OpentapsAreYouSure class="buttonDangerous"
Creates a confirmation button. When the button is pressed, it will produce a popup confirmation dialogue. If the user cancels, then nothing happens. If the user confirms, then either the given form name is submitted or the user is sent to the given href link. The text in the popup window can be set with confirmText, but the buttons are browser specific. (See the javascript function confirm() for reference.)

@inputStateCountry address=null stateInputName="stateProvinceGeoId" countryInputName="countryGeoId"
Creates two dropdowns for the user to select state and country. If the country is changed, the state dropdown will be updated to show the states in that country. The default country is defined in opentaps.properties as defaultCountryGeoId. You may either pass in a PostalAddress with the address= argument, or you can specify the parameter field names with stateInputName and countryInputName. In order for this macro to work properly, the following script should be called in the implementing screen: component://opentaps-common/webapp/common/WEB-INF/actions/includes/stateDropdownData.bsh
For a row: @inputRowSelect title stateInputName="stateProvinceGeoId" countryInputName="countryGeoId"
For a cell: @inputCellSelect stateInputName="stateProvinceGeoId" countryInputName="countryGeoId"
Header and Menu Tags
@sectionHeader title headerClass="subSectionHeader" titleClass="subSectionTitle"
Creates a header for a subsection within an opentaps screen. Parameter "title" is the title that will be shown in the header, "headerClass" is the class of the section header DIV element, and "titleClass" is the class of the actual title (technically, the DIV element that contains the title, which in turn is contained within the section header DIV element). Note that additional contents, such as FTL code for menu buttons, can be "nested" within this macro.
Other Macros
@pagination viewIndex viewSize currentResultSize requestName totalResultSize=-1 extraParameters=""
Generates a pagination block for a list, eg: Previous 21-35 of 35 Next
viewIndex: Current starting record number, eg: 21
viewSize: Number of records to show, eg: 20
currentResultSize: Number of records currently showing, eg: 15
totalResultSize: Total records in result set, eg: 35. If not supplied, total results will not appear (eg: Previous 21-35 Next)
requestName: request to pass back and forth in the Previous and Next links
extraParameters: Any extra request parameters to pass in the Previous/Next links. This will be HTML-encoded by the macro.

Usage example:

In the screen definition:
<set field="viewIndex" from-field="parameters.VIEW_INDEX" type="Integer" default-value="1"/>
<set field="viewSize" from-field="parameters.VIEW_SIZE" type="Integer" default-value="20"/>

In bsh:
lotListIt = delegator.findListIteratorByCondition("Lot", ...);
lotList = lotListIt.getPartialList(viewIndex.intValue(), viewSize.intValue());
lotListIt.last();
lotsTotalSize = lotListIt.currentIndex();
context.put("lotList", lotList);
context.put("lotsTotalSize", lotsTotalSize);

In FTL:
<#assign exParams = "&doLookup=Y&supplierPartyId=" + parameters.supplierPartyId?if_exists/>
<@pagination viewIndex=viewIndex viewSize=viewSize currentResultSize=lotList?size requestName="manageLots" totalResultSize=lotsTotalSize extraParameters=exParams/>

Will generate the following code:
<div class="pagination">
<span class="paginationPrevious"><a href="/warehouse/control/manageLots?VIEW_INDEX=1&doLookup=Y&supplierPartyId=">Previous</a></span>
<span class="paginationText">3 - 4 of 7</span>
<span class="paginationNext"><a href="/warehouse/control/manageLots?VIEW_INDEX=5&doLookup=Y&supplierPartyId=">Next</a></span>
</div>

Important! The pagination macro only works with GET forms. If you have a form which feeds a page with parameter values (example: warehouse/control/backOrderedItems), the form must use the GET method, not POST. Otherwise there can be two values for the same parameter name (one passed via POST and one passed by the pagination macro in the querystring) and an ArrayList results from parameters.get(), not a String, which makes things explode.
@flexArea targetId title="" class="" style="" controlClassOpen="" controlClassClosed="" controlStyle="" state="" save=false enabled=true
Generates a clickable control block with headline, which triggers the expansion/contraction of an inner block.
targetId: DOM ID for the flexArea. Must be unique to the screen. Used to trigger the collapse/expand and to persist the state of the flexArea.
title: Headline for the control block.
class: CSS class for the opened/expanded state of the inner block. Defaults to flexAreaContainer_open.
style: CSS styles to override the expanded class of the inner block.
controlClassOpen: CSS class for the expanded state of the control block. Defaults to flexAreaControl_open.
controlClassClosed: CSS class for the collapsed state of the control block. Defaults to flexAreaControl_closed.
controlStyle: CSS styles to override both the expanded and collapsed states of the control block.
state: Controls the initial state of the flexArea on page load. If not specified and no saved state exists in the database, defaults to closed.
save: If true, the state of the flexArea will be saved to the database each time it is expanded or contracted.
enabled: If false, clicking on the control block will not expand or contract the flexArea. Specify false when another DOM element should control the expansion.

Usage example:

In bsh (include in a global bsh such as main-decorator.bsh so that every screen has access to the saved states of its flexAreas):
screenName = parameters.get("thisRequestUri");
prefMap = UtilMisc.toMap("application", opentapsApplicationName, "screenName", screenName, "userLoginId", userLogin.getString("userLoginId"));
viewPrefs = delegator.findByAnd("ViewPrefAndLocation", prefMap);
vpit = viewPrefs.iterator();
while (vpit.hasNext()) {
    viewPref = vpit.next();
    foldedStates.put(viewPref.get("domId"), viewPref.getString("viewPrefString"));
}
globalContext.put("foldedStates", foldedStates);

In FTL:

For a simple flexArea:
<@flexArea targetId="..." title="Click me to expand/contract"><div>Elements to hide and show</div></@flexArea>

For a flexArea which is always open on page load:
<@flexArea targetId="..." title="..." state="open" save=false>...</@flexArea>

For a flexArea which is hidden and closed on page load and has its expansion triggered by an external event:
<@flexArea targetId="..." title="..." controlClassClosed="hidden" state="closed" save=false enabled=false>...</@flexArea>

To override the default classes:
<@flexArea targetId="..." title="..." class="customClassForOpenInnerBlock" controlClassClosed="customClassForClosedControlBlock" controlClassOpen="customClassForOpenControlBlock">...</@flexArea>

isOpen(domId, default="")
openOrClosedClass(domId, openClassName, closedClassName, default="")
Supporting functions for the flexArea macro. Not useful separately.

Opentaps Form Macros API Reference
This document provides a reference for each of the available macros and functions
available in opentapsFormMacros.ftl. There are various kinds of macros available:
Basic Building Blocks: Macros that generate basic display and input elements. Those that
generate plain HTML are called display macros. Those that generate <input> elements are
called input macros. For convenience, these macros also come with Cell and Row flavors.
A Cell macro will wrap the display or input in <td>. All Cell macros allow you to specify
the class of this <td> using a class parameter. (Some don't yet; this is TODO.) A Row macro
will have a title argument and it will generate a two column row with the title in the first
column and the display or input in the second column. The title class is titleCell by default,
but can be overridden with the titleClass parameter.

Input Macros: The Building Block input macros have a number of convenience features
built into them. The name parameter refers to the name of the input. All input macros
accept an index parameter. If this parameter is greater than or equal to 0, then the input
will be treated as a multi submit input element. That is, the field name will have a
_o_${index} suffix. Input macros that have a default parameter have a special behavior
with regards to default values. If the default value is specified, then it will use that value
as the default instead of rendering an empty field. If it is not specified, it will attempt to
look for an existing value in the parameters map.

Larger Building Blocks: These macros generate more complicated form inputs or display
HTML. For example, there is a macro to generate a state and country dropdown which has
the ability to change the states when the country is changed. These macros may be
composed of other Building Blocks.

Global Functions: Functions in FTL are a lot like macros, but they can be invoked with
the ${} substitution notation.

Convenience Macros: There are a variety of convenience macros to perform common
things, such as rendering text in a tooltip style.
Building Blocks
Each of these macros is also available in Cell and Row versions. For example, the <inputText>
macro is also available as <inputTextCell> and <inputTextRow>. The Row version has
an additional required parameter named title.
<@display text="" class="tabletext" style="">
Displays the given text in a <span> with default class tabletext. It also accepts a CSS
style specification. This macro is generally not that useful, but it is used by other macros
to generate content. The Row and Cell versions are somewhat more useful.
<@displayLink href text class="buttontext">
Generates a hypertext link with the given href. The link text is specified by text
argument. It will transform the href depending on what the string starts with:
String starts with "javascript:" - No wrapping
String starts with "/" - Wraps in <@ofbizUrl> and passes externalLoginKey. (e.g.,
"/warehouse/control/main")
All other cases: Wraps in <@ofbizUrl>
<@displayCurrency currencyUomId="" amount=0 class="tabletext">
Renders currency in the current locale. If the amount is negative the result will be red. If
a different class is specified, negative amounts will still be red. The Cell and Row
versions of this always right align the currency via the css class currencyCell.
<@inputHidden name value index=-1>
Renders a hidden input with the given name and value. This is mostly useful for making
multi-submit hidden inputs, which would otherwise be clunky to write by hand.
<@inputText name size=30 maxlength="" default="" index=-1>
Renders a text input field with the given name.
<@inputTextarea name rows=5 cols=60 default="" index=-1>
Renders a textarea with the given name.
<@inputSelect name list key="" displayField="" default="" defaultOptionText="" required=true index=-1>
Renders a dropdown. The list is generated from the parameter 'list'. If this list contains
the display field you want to render, then you can do the following,
<@inputSelect name="statusId" list=statusItems key="statusId" displayField="description"/>
If you need to do additional processing of the list to get the text for each field, you can
do the following,
<@inputSelect name="statusTypeId" list=statusItems key="statusTypeId" ; option>
    <#assign statusType = option.getRelatedOneCache("StatusType")>
    ${statusType.description}
</@inputSelect>
This macro can generate an empty first row. To do this, set required to false. If you
want to put some text in this empty row, such as 'please select something', you can
specify it with defaultOptionText.
<@inputLookup name lookup form default="" size=20 maxlength=20 index=-1>
Like <@inputText> but also renders a lookup button next to it. The name of the lookup,
such as LookupPartyName, should be specified with the lookup parameter. The name
of the form must also be supplied with form.
<@inputCurrency name list currencyName="currencyUomId" default="" defaultCurrency="USD" index=-1>
Renders a small text input and a dropdown of currencies for entering a currency
amount. Pass in a list of Uom entities of type CURRENCY_MEASURE. Since this
macro generates two separate inputs, the name parameter refers to the text input,
where the amount is entered, and the currencyName refers to the name of the
currency dropdown. This macro might be simplified in the future since it's rare that such
flexibility is required.
<@inputSubmit title onClick="submitFormWithSingleClick(this)">
Renders a form submit button whose default onClick action is to prevent double posting.
The title parameter is the button text. You may want to override onClick to do
something fancy. Note that onClick is case sensitive.
<@inputButton title onClick="">
Renders a button with text from the title parameter. This is useful when you want a form
control with an onClick handler. Note that onClick is case sensitive.
<@inputIndicator name required=true default="" index=-1>
Renders a Y or N dropdown for indicator fields. Set required to false if you want to
allow a null value.
Larger Building Blocks
<@inputDateTime name form default="">
Renders a date time input with separate fields for hours, minutes, and AM/PM. You must
specify the name of the form with form.
<@inputSelectTaxAuthority list defaultGeoId="" defaultPartyId="" required=false>
Renders a special tax authority dropdown. This is rarely used, mostly for generating
invoice items and tax authority lists.
<@inputState name="stateProvinceGeoId" countryInputName="countryGeoId" address={}>
Renders a dropdown of states for the default application country. This dropdown is
AJAX enhanced and will change to match the states or provinces of the selected
country from the dropdown specified by countryInputName. This macro comes in Cell
and Row flavors.
If you want, you can pass in an address Map or GenericValue. The state of this address
will be used to select the default value.
<@inputCountry name="countryGeoId" stateInputName="stateProvinceGeoId" address={}>
Renders a dropdown of countries. The default country is determined from the
application property defaultCountryGeoId. This dropdown is AJAX enhanced and will
cause the state dropdown specified by stateInputName to change to match the
selected country. This macro comes in Cell and Row flavors.
If you want, you can pass in an address Map or GenericValue. The country of this
address will be used to select the default value.
<@inputStateCountry stateInputName="stateProvinceGeoId" countryInputName="countryGeoId" address={}>
Renders the <@inputState> and <@inputCountry> dropdowns together. Not really that
useful, but it's here for legacy support. This macro comes in Cell and Row flavors.
<@inputConfirm title href="" form="" confirmText=uiLabelMap.OpentapsAreYouSure class="buttonDangerous">
Renders a special link that will bring up a confirmation dialogue. If the user confirms the
dialogue, then the action will proceed.
This macro may be linked to a form or it can be standalone. If the form is specified,
then a form.submit() action will take place when the user confirms. If a href is supplied,
then the user will be sent to that link.
<@inputHiddenRowSubmit index submit=true>
Renders the special hidden variable that controls whether a given row will be processed
in a multi form. If you are making a multi input form, you must call this macro for each
row. If for some reason you want to disable a row rendered with the macro, you can set
submit to false, and the row will not be processed.
Global Functions
tableRowClass( rowIndex, rowClassOdd="rowWhite", rowClassEven="rowLightGray" )
This function returns the value of   c if the index is even, otherwise
  . Yes, this is a little backwards and will probably be fixed. It is used as
follows,
<#list fooList as foo> <tr
class="${tableRowClass(foo_index)}"></tr> </#list>
Convenience Macros
<@tooltip text="" class="tooltip">
Renders a tooltip. This message stands out and can be used for notifying the user of
important things.
<@displayError name index=-1>
Renders a message if there is an error for the given field name (and index if a
multi-form). The CSS class is errortooltip. This is used with the UtilMessage.addFieldError()
methods. A form field is not required; this can be used to render messages anywhere
on the screen. Sample use case:
// in a bsh file called before the FTL
UtilMessage.addFieldError(request, "foo", "OpentapsError_PermissionDenied");
<!-- in the FTL file --> <@displayError name="foo" />
The output will be:
Sorry, you do not have permission to perform this action.
opentaps Ajax Pagination Framework
Contents
1 Configuring the Screen Widget
2 Configuring the Pagination Query
3 Paginating in the FTL
4 Debugging
5 Notes
In this guide, we will show you how to replace a static form-widget list form with an Ajax
paginated form using the opentaps Form Macros pagination framework.
The screen is the Financials > Configure > Chart of Accounts screen and displays all
the general ledger accounts configured for a company. Originally, the list of accounts
was created with the ofbiz form widget, but because a company typically has several
hundred accounts associated with it, such a static form was not very user-friendly. It
always displayed 100 GL accounts per page, and paging through was slow.
Configuring the Screen Widget
The first step is to edit screen widget XML definition and remove the references to the
form widget. Edit the file hot-
deploy/financials/widget/financials/screens/ConfigurationScreens.xml and look for the
screen "listGlAccounts". Since the ajax pagination is done within freemarker (FTL)
templates, you can remove the following lines, which referenced the old form widget:
<container style="screenlet-body">
<include-form name="listGlAccounts"
location="component://financials/widget/financials/forms/configuration/ConfigurationFor
ms.xml"/>
</container>
You can also remove these lines:
<set field="viewIndex" from-field="parameters.VIEW_INDEX" type="Integer" default-
value="0"/>
<set field="viewSize" from-field="parameters.VIEW_SIZE" type="Integer" default-
value="100"/>
These are no longer needed because they were used to control the pagination of the list
of GL accounts from the server side, but the opentaps Ajax pagination form macro
allows the user to set pagination choices.
Configuring the Pagination Query
The second step is to configure the data for the pagination. If you are doing a query or
building a List of Maps or GenericValues, you do not need to do anything special: you
can just pass your List directly to the paginator.
You can also define a query for the paginator in a BeanShell (BSH) script, which is
typically used to retrieve data for display. This causes the paginator to run the
query for you, rather than holding the entire List of values in memory. In most
opentaps applications, the BSH script is used to do a lookup of data and then put the
resulting list or list iterator (cursor) into the context for an FTL page or form widget XML
to display, like this:
accounts = EntityUtil.filterByDate(delegator.findByAndCache("GlAccountOrganizationAndClass",
    UtilMisc.toMap("organizationPartyId", session.getAttribute("organizationPartyId")),
    UtilMisc.toList("accountCode")));
context.put("accounts", accounts);
For Ajax pagination, however, the FTL page needs to interact with the data directly, so
we need to pass the data lookup query information to the FTL. This is done by following
the closure pattern, where a function is passed to the pagination object in FTL. This
function is created in the BSH to represent what data lookup should be performed:
glAccountListBuilder(organizationPartyId) {
    entityName = "GlAccountOrganizationAndClass";
    where = UtilMisc.toList(
        new EntityExpr("organizationPartyId", EntityOperator.EQUALS, organizationPartyId),
        EntityUtil.getFilterByDateExpr()
    );
    orderBy = UtilMisc.toList("accountCode");
    return this;
}
This function essentially defines what entity (GlAccountOrganizationAndClass) will be
queried, what the conditions are in the where statement, and how the query results will
be ordered. The fields entityName, where, and orderBy are read by the paginator, which
uses them to build and run the actual lookup query. The return this; statement at the
end is what makes the function a closure in BeanShell: it returns the function's own
scope, so the paginator can access the variables defined inside it.
The next step is just to pass this function to the FTL, like this:
context.put("glAccountListBuilder",
glAccountListBuilder(session.getAttribute("organizationPartyId")));
Paginating in the FTL
The pagination of the GL accounts will be handled in an FTL file like glAccounts.ftl,
which previously only displayed a header. The first thing we will need is to import the
opentaps form macros:
<@import location="component://opentaps-
common/webapp/common/includes/lib/opentapsFormMacros.ftl"/>
Then, we simply call the <@paginate> form macro with the list we created in the BSH
script:
<@paginate name="glAccountOrganization" list=myList>
Alternatively, we can pass in the function we built in BSH:
<@paginate name="glAccountOrganization" list=glAccountListBuilder>
We will need to turn off freemarker parsing inside the paginate macro:
<#noparse>
Next, we bring in the pagination buttons, which let you scroll back and forth and to the
beginning and end of the list of results with the <@paginationNavContext/>:
<div class="subSectionHeader">
<div class="subMenuBar">
<@paginationNavContext />
</div>
</div>
That's most of the magic required. You would just build a table of your form in HTML
with headers, like any other table, but use the <@headerCell> macro to define the
heading cell as something that the user could order the list results by:
<table class="listTable">
<tr class="listTableHeader">
<@headerCell title=uiLabelMap.FinancialsGLAccountCode
orderBy="accountCode"/>
<@headerCell title=uiLabelMap.FinancialsGLAccountName
orderBy="accountName"/>
<@headerCell title=uiLabelMap.FinancialsPostedBalance
orderBy="postedBalance"/>
<td> </td>
</tr>
For columns which shouldn't be sorted, just use the HTML TD tag.
Next, you would use the FTL <#list> directive to display the individual rows. pageRows
is returned from the pagination macro:
<#list pageRows as row>
<tr class="${tableRowClass(row_index)}">
<@displayCell text=row.accountCode/>
<@displayCell text=row.accountName/>
<td class="textright" style="padding-right: 40px"><@displayCurrency
amount=row.postedBalance/></td>
<td>
<@displayLink href="reconcileAccounts?glAccountId=${row.glAccountId}"
text=uiLabelMap.FinancialsReconcile/>
<@displayLink href="updateGlAccountScreen?glAccountId=${row.glAccountId}"
text=uiLabelMap.CommonEdit/>
<@displayLink href="addSubAccountScreen?glAccountId=${row.glAccountId}"
text=uiLabelMap.FinancialsAddSubAccount/>
<@displayLink href="removeGlAccountFromOrganization?glAccountId=${row.glAccountId}&organizationPartyId=${row.organizationPartyId}"
text=uiLabelMap.FinancialsDeactivate/>
</td>
</tr>
</#list>
The tableRowClass function used in the row's class attribute assigns different CSS
classes to alternating rows. You can use either FTL or HTML to display the results, or
use one of the other form macros, such as <@displayLink> or <@displayCell>.
Finally, you would wrap up like this:
</table>
</#noparse>
</@paginate>
And that's it!
Debugging
There are a few things you should know about debugging the paginator:
The ofbiz framework caches the freemarker files, so after changing your .ftl file, make
sure you clear the cache in Webtools > Cache. Otherwise, the changes may not appear.
The paginator's content is retrieved via AJAX after the main page has loaded. Thus, if
you did a "View Page Source" on your browser, it would not show the content inside
paginate. If you are using Firefox, you can highlight the paginated area, right click on
your mouse, and click on "View Selection Source" to view the HTML code of your
paginator.
Notes
If you use an EntityListBuilder and then add additional fields, you will not be able to sort
by the fields which are not part of the database table.
The paginator can also accept additional parameters. They can be passed in as part of
the @paginate directive, like this:
<@paginate name="pendingInboundEmails" list=inboundEmails
teamMembers=teamMembers>
Then, inside of the paginator, you can access them using the parameters Map, like this:
<#if parameters.teamMembers?has_content>
...
<#list parameters.teamMembers as option>
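To round this out, here is a hedged sketch of a complete pass-through (the dropdown
field name and the partyId property are hypothetical):

<@paginate name="pendingInboundEmails" list=inboundEmails teamMembers=teamMembers>
<#noparse>
  <#if parameters.teamMembers?has_content>
    <select name="teamMemberPartyId">
      <#list parameters.teamMembers as option>
        <option value="${option.partyId}">${option.partyId}</option>
      </#list>
    </select>
  </#if>
</#noparse>
</@paginate>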
Writing an ofbiz Container
The ofbiz framework has a container architecture that allows you to set up containers
which load up system infrastructure such as the delegator (database access),
dispatcher (business logic tier), and web servers like Tomcat. The standard containers
are used to run opentaps from the Tomcat server, run the POS terminal as a desktop
application, or install data or write unit tests. You can also create your own containers
for other purposes. For example, we created a custom container to generate Java
classes for all the entities in the entitymodel XML files.
To write your own container, you would need to create two configuration files and a
Java container class. Your container will be called based on the command-line parameters
when the ofbiz framework is started. For example,
$ ant make-base-entities
causes the following target in the build.xml to be called:
<target name="make-base-entities">
<!-- some other stuff -->
<java jar="ofbiz.jar" fork="true">
<jvmarg value="${memory.max.param}"/>
<arg value="pojoentities"/>
</java>
</target>
which in turn calls ofbiz with the following command-line parameters:
$ java -jar ofbiz.jar pojoentities
This means that it will load the pojoentities container. When ofbiz starts up, it will look for a
file called pojoentities.properties in the org.ofbiz.base.start package. This file is in fact
located in framework/base/src/start/org/ofbiz/base/start/pojoentities.properties and
defines the runtime configuration of ofbiz. For example, it defines where the
configuration files are, where the log directory is, and whether the container will shut
itself down automatically. The most important definition is an XML file which configures
the container:
# --- Location (relative to ofbiz.home) for (normal) container configuration
ofbiz.container.config=framework/base/config/pojoentities-containers.xml
This file is in framework/base/config and defines the loaders for the container and then
the container classes. In this case, it is
org.opentaps.domain.container.PojoGeneratorContainer. The XML file also specifies
the parameters for the container, which in this case is the name of the delegator and
where the template and output file path for the POJOs are:
<container name="pojo-generator-container"
class="org.opentaps.domain.container.PojoGeneratorContainer">
<property name="delegator-name" value="default"/>
<property name="template" value="hot-deploy/opentaps-common/templates/BaseEntity.ftl"/>
<property name="output-path" value="hot-deploy/opentaps-common/src/org/opentaps/domain/base/entities/"/>
</container>
Finally, your container must implement a standard init method, and the start() method
will carry out the logic of the container. You can get the configuration properties of your
container like this:
public boolean start() throws ContainerException {
    ContainerConfig.Container cfg = ContainerConfig.getContainer(containerName, configFile);
    ContainerConfig.Container.Property delegatorNameProp = cfg.getProperty("delegator-name");
    ContainerConfig.Container.Property outputPathProp = cfg.getProperty("output-path");
    ContainerConfig.Container.Property templateProp = cfg.getProperty("template");
    // rest of container actions
}
And everything else can follow from there.
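To tie the pieces together, here is a hedged sketch of a minimal container class. The
init/start/stop methods follow the org.ofbiz.base.container.Container interface that the
standard containers implement; the class name and property handling are invented for
illustration:

package org.opentaps.domain.container;

import org.ofbiz.base.container.Container;
import org.ofbiz.base.container.ContainerConfig;
import org.ofbiz.base.container.ContainerException;

public class MyCustomContainer implements Container {

    private String configFile;
    private String containerName = "my-custom-container"; // must match the name in the containers XML

    // called once by the ofbiz loader with the command-line args and the containers XML path
    public void init(String[] args, String configFile) throws ContainerException {
        this.configFile = configFile;
    }

    // the container's real work happens here; return true on success
    public boolean start() throws ContainerException {
        ContainerConfig.Container cfg = ContainerConfig.getContainer(containerName, configFile);
        ContainerConfig.Container.Property delegatorNameProp = cfg.getProperty("delegator-name");
        // ... perform the container logic using the configured properties ...
        return true;
    }

    // called at shutdown; release any resources held by the container
    public void stop() throws ContainerException {
    }
}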
Creating and Applying Patches
Contents
1 Creating Patches
1.1 Patch of Changes that I Made
1.2 Patch of Specific Revision of Opentaps
2 Applying Patches
2.1 Dealing with Patch Rejects
Creating Patches
Patch of Changes that I Made
To make a patch of the changes you made to opentaps, you can use the svn diff
command from a terminal or command prompt.
First, ensure you are in the root directory of opentaps,
prompt> cd opentaps
To verify that you're in the right directory, ensure that it contains the build.xml and
startofbiz.sh files. Next, execute the svn diff command,
prompt> svn diff
It will print the patch of all changes you made to the screen. To save the output to a file
instead, use a redirect,
prompt> svn diff > mychanges.patch
This command will create a mychanges.patch file that contains all changes you made to
opentaps.
If you wish to see changes of only one file or directory, you can specify the file or
directory explicitly,
prompt> svn diff applications/product
This command will make a patch of all your changes to the applications/product/
directory and its children.
Patch of Specific Revision of Opentaps
Let's say you want to create a patch against a specific revision of opentaps, such as the
bugfix revision 9593. In order to do this, you will need either a complete checkout of
opentaps that's fully up to date or internet access to the opentaps subversion repository.
Since it's simpler to use the online opentaps subversion repository, we will go over this
technique here.
To make the patch, use the svn diff command and use the -c argument to specify the
revision. You must also specify the location of the opentaps repository from the trunk
directory. The full command is as follows,
prompt> svn diff -c 9593 svn://svn.opentaps.org/opentaps/versions/1.0/trunk >
bugfix.patch
A file named bugfix.patch is created, containing the changes made in revision 9593 of opentaps.
Applying Patches
If you get a patch, you can use it to modify your files with the patch command. patch is a
standard UNIX command, and a Windows version is also available. First ensure that
you are in the root directory of opentaps,
prompt> cd opentaps
It should contain the build.xml and startofbiz.sh files.
We recommend copying the patch file to this directory for convenience. For instance, if
you have the bugfix.patch patch file from the above example, copy it into the opentaps
root directory. Also make sure the patch is not compressed (.zip or .gz).
Next, use the patch command with -p0 arguments as follows,
prompt> patch -p0 < bugfix.patch
If you did not copy the patch file to the opentaps root directory, you will have to specify
the full path to your patch file,
prompt> patch -p0 < /path/to/bugfix.patch
Assuming you have made no major changes that would conflict with the patch, it should
be applied without errors. You can check to see if the patch was applied correctly using
svn diff.
Dealing with Patch Rejects
Sometimes the patch might fail to be applied to a certain file. In this case, a rejection file
is created with information about what caused the problem. Rejection files have the
same name and location as the file that was not patched, except that they carry the
extension .rej.
Avoiding Database Deadlocks
In a high-volume scenario, repeated access to the database can cause a database
deadlock, where a table becomes unavailable. This can in turn cause the system to
become stuck and critical functions to fail.
The actual occurrence of database deadlocks depends somewhat on the database
itself. However, there are good general practices that you can follow to reduce your
risk. To minimize the risk of deadlocks, the article Reducing SQL Server Deadlocks
recommends the following:
Keep transactions as short as possible. One way to help accomplish this
is to reduce the number of round trips between your application and SQL
Server by using stored procedures or keeping transactions with a single
batch. Another way of reducing the time a transaction takes to complete
is to make sure you are not performing the same reads over and over
again. If your application does need to read the same data more than
once, cache it by storing it in a variable or an array, and then
re-reading it from there, not from SQL Server.
In opentaps, we recommend that you follow these best practices when writing more
complex business logic:
Do not rely on the automatic transaction management of the ofbiz service engine. If you
have business logic which requires repeated access to the database, both to retrieve
and store data, turn off transactions at the service level by using the
use-transaction="false"
flag in the service XML definition. Manage the transaction inside of your Java code.
As a general rule, if you find yourself needing to set a longer transaction timeout for
your service, such as by using the
transaction-timeout="600"
parameter in the service XML definition, then your service should be rewritten to avoid
potential deadlocks.
Put all of your database access code together, especially writes to the database, and
put them inside a transaction block.
Your code should be structured as much as possible to follow this three-part pattern: get
data, process data, store data. A transaction should only be opened in the part of the
code where you're actually storing data.
For example, this hypothetical service has a greater risk of database deadlocks:
<!-- services XML definition -->
<service name="myService" engine="java" location="org.opentaps.MyServices" invoke="myService"/>

// inside of myService
List orderHeaders = delegator.findByAnd("OrderHeader", UtilMisc.toMap("statusId", "ORDER_APPROVED"));
for (GenericValue orderHeader: orderHeaders) {
    List orderItems = orderHeader.getRelatedByAnd("OrderItem", UtilMisc.toMap("statusId", "ITEM_APPROVED"));
    for (GenericValue orderItem: orderItems) {
        delegator.create(myEntity, myMapOfValues);
        // or
        Map tmpResult = dispatcher.runSync("someOrderRelatedService", parameterMapValues);
    }
}
Essentially, this code is going through the database again and again to read and write
data. What if you had 1,000 approved orders with an average of 10 items per order?
You would be doing 1,000 SELECT queries and nesting inside of each SELECT 10
INSERT queries. This is very risky and will probably lead to a deadlock, especially if
several threads start trying to run the same service and do all those selects and inserts
around the same time.
A better way to do it would be like this (remember this is just an example and is not
meant to run in real life):
<!-- services XML definition -->
<service name="myService" engine="java" location="org.opentaps.MyServices" invoke="myService" use-transaction="false"/>

// inside of myService

// group all your select queries together
List orderHeaders = delegator.findByAnd("OrderHeader", UtilMisc.toMap("statusId", "ORDER_APPROVED"));
List orderIds = EntityUtil.getFieldFromEntityList(orderHeaders, "orderId", true); // get List of distinct orderIds
List orderItems = delegator.findByAnd("OrderItem", UtilMisc.toList(
    new EntityExpr("orderId", EntityOperator.IN, orderIds),
    new EntityExpr("statusId", EntityOperator.EQUALS, "ITEM_APPROVED")));

// make a list of values to store
List valuesToCreate = new LinkedList();
for (GenericValue orderItem: orderItems) {
    // do something
    GenericValue newValue = delegator.makeValue(myEntity, myMapOfValues);
    valuesToCreate.add(newValue);
}

// one transaction to store all your values
TransactionUtil.begin();
delegator.storeAll(valuesToCreate);
TransactionUtil.commit();
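In production code you would also guard the store so that a failure rolls the transaction
back. A minimal hedged sketch using the same ofbiz TransactionUtil API (the error
message string is invented):

boolean beganTransaction = false;
try {
    beganTransaction = TransactionUtil.begin();
    delegator.storeAll(valuesToCreate);
    TransactionUtil.commit(beganTransaction);
} catch (GenericEntityException e) {
    // undo any partial writes instead of leaving the data half-stored
    TransactionUtil.rollback(beganTransaction, "Failed to store values", e);
}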

Database Tips
Contents
1 General
2 PostgreSQL Tips
2.1 Monitoring PostgreSQL Deadlocks
2.2 Checking Open PostgreSQL Connections
3 MySQL Tips
3.1 Table Name Case Sensitivity
3.2 UTF-8 Support
4 DB2 Tips
4.1 DB2 Basics
4.2 Making DB2 Work
General
J2EE Transaction Management =>
http://www.javaworld.com/jw-07-2000/jw-0714-transaction.html
A transaction can be defined as an indivisible unit of work comprised of several
operations, all or none of which must be performed in order to preserve data integrity.
For example, a transfer of $100 from your checking account to your savings account
would consist of two steps: debiting your checking account by $100 and crediting your
savings account with $100. To protect data integrity and consistency -- and the interests of
the bank and the customer -- these two operations must be applied together or not at
all. Thus, they constitute a transaction.
ACID properties of transactions
All transactions share these properties: atomicity, consistency, isolation, and durability
(represented by the acronym ACID).
Atomicity: This implies indivisibility; any indivisible operation (one which will either
complete fully or not at all) is said to be atomic.
Consistency: A transaction must transition persistent data from one consistent state to
another. If a failure occurs during processing, the data must be restored to the state it
was in prior to the transaction.
Isolation: Transactions should not affect each other. A transaction in progress, not yet
committed or rolled back (these terms are explained at the end of this section), must be
isolated from other transactions. Although several transactions may run concurrently, it
should appear to each that all the others completed before or after it; all such
concurrent transactions must effectively end in sequential order.
Durability: Once a transaction has successfully committed, state changes committed
by that transaction must be durable and persistent, despite any failures that occur
afterwards.
A transaction can thus end in two ways: a commit, the successful execution of each
step in the transaction, or a rollback, which guarantees that none of the steps are
executed due to an error in one of those steps.
Transaction isolation levels
The isolation level measures concurrent transactions' capacity to view data that have
been updated, but not yet committed, by another transaction. If other transactions were
allowed to read data that are as-yet uncommitted, those transactions could end up with
inconsistent data were the transaction to roll back, or end up waiting unnecessarily were
the transaction to commit successfully.
A higher isolation level means less concurrency and a greater likelihood of performance
bottlenecks, but also a decreased chance of reading inconsistent data. A good rule of
thumb is to use the highest isolation level that yields an acceptable performance level.
The following are common isolation levels, arranged from lowest to highest:
ReadUncommitted: Data that have been updated but not yet committed by a
transaction may be read by other transactions.
ReadCommitted: Only data that have been committed by a transaction can be read by
other transactions.
RepeatableRead: Only data that have been committed by a transaction can be read by
other transactions, and multiple reads will yield the same result as long as the data have
not been committed.
Serializable: This, the highest possible isolation level, ensures a transaction's exclusive
read-write access to data. It includes the conditions of ReadCommitted and
RepeatableRead and stipulates that all transactions run serially to achieve maximum
data integrity. This yields the slowest performance and least concurrency. The term
serializable in this context is absolutely unrelated to Java's object-serialization
mechanism and the java.io.Serializable interface.
Transaction support in J2EE
The Java 2 Enterprise Edition (J2EE) platform consists of the specification, compatibility
test suite, application-development blueprints, and reference implementation. Numerous
vendors provide application servers/implementations based on the same specification.
J2EE components are meant to be specification-centric rather than product-centric (they
are built to a specification, rather than around a particular application-server product).
J2EE applications include components that avail of the infrastructural services provided
by the J2EE container and server, and therefore need to focus only on "business logic."
J2EE supports flexible deployment and customization in the target production
environment, using declarative attributes provided by a deployment descriptor. J2EE
aims to protect IT efforts and reduce application-development costs. J2EE components
may be built in-house or procured from outside agencies, which can result in flexibility
and cost benefits for your IT department.
Transaction support is an important infrastructural service offered by the J2EE platform.
The specification describes the Java Transaction API (JTA), whose major interfaces
include javax.transaction.UserTransaction and javax.transaction.TransactionManager.
The UserTransaction is exposed to application components, while the underlying
interaction between the J2EE server and the JTA TransactionManager is transparent to
the application components. The TransactionManager implementation supports the
server's control of (container-demarcated) transaction boundaries. The JTA
UserTransaction and JDBC's transactional support are both available to J2EE
application components.
The J2EE platform supports two transaction-management paradigms: declarative
transaction demarcation and programmatic transaction demarcation.
 
Declarative transaction demarcation
Declarative transaction management refers to a non-programmatic demarcation of
transaction boundaries, achieved by specifying within the deployment descriptor the
transaction attributes for the various methods of the container-managed EJB
component. This is a flexible and preferable approach that facilitates changes in the
application's transactional characteristics without modifying any code. Entity EJB
components must use this container-managed transaction demarcation.
What is a transaction attribute?
A transaction attribute supports declarative transaction demarcation and conveys to the
container the intended transactional behavior of the associated EJB component's
method. Six transactional attributes are possible for container-managed transaction
demarcation:
Required: A method with this transactional attribute must be executed within a JTA
transaction; depending on the circumstances, a new transaction context may or may not
be created. If the calling component is already associated with a JTA transaction, the
container will invoke the method in the context of said transaction. If no transaction is
associated with the calling component, the container will automatically create a new
transaction context and attempt to commit the transaction when the method completes.
RequiresNew: A method with this transactional attribute must be executed in the
context of a new transaction. If the calling component is already associated with a
transaction context, that transaction is suspended, a new transaction context is created,
and the method is executed in the context of the new transaction, after whose
completion the calling component's transaction is resumed.
NotSupported: A method with this transactional attribute is not intended to be part of a
transaction. If the calling component is already associated with a transaction context,
the container suspends that transaction, invokes the method unassociated with a
transaction, and upon completion of the method, resumes the calling component's
transaction.
Supports: A method with this transactional attribute supports the calling component's
transactional situation. If the calling component does not have any transactional context,
the container will execute the method as if its transaction attribute was NotSupported. If
the calling component is already associated with a transactional context, the container
will execute the method as if its transactional attribute was Required.
Mandatory: A method with this transactional attribute must only be called from the
calling component's transaction context. Otherwise, the container will throw a
javax.transaction.TransactionRequiredException.
Never: A method with this transactional attribute should never be called from a calling
component's transaction context. Otherwise, the container will throw a
java.rmi.RemoteException.
Methods within the same EJB component may have different transactional attributes for
optimization reasons, since all methods may not need to be transactional. The isolation
level of entity EJB components with container-managed persistence is constant, as the
DBMS default cannot be changed. The default isolation level for most relational
database systems is usually ReadCommitted.
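For illustration, here is a hedged sketch of how such an attribute is declared in the
ejb-jar.xml deployment descriptor (the bean name is hypothetical):

<!-- assign the Required attribute to all methods of OrderBean -->
<assembly-descriptor>
  <container-transaction>
    <method>
      <ejb-name>OrderBean</ejb-name>
      <method-name>*</method-name>
    </method>
    <trans-attribute>Required</trans-attribute>
  </container-transaction>
</assembly-descriptor>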
Programmatic transaction demarcation
Programmatic transaction demarcation is the hard coding of transaction management
within the application code. Programmatic transaction demarcation is a viable option for
session EJBs, servlets, and JSP components. A programmatic transaction may be
either a JDBC or JTA transaction. For container-managed session EJBs, it is possible --
though not in the least recommended -- to mix JDBC and JTA transactions.
JDBC transactions
JDBC transactions are controlled by the DBMS's transaction manager. The JDBC
Connection -- the implementation of the java.sql.Connection interface -- supports
transaction demarcation. JDBC connections have their auto-commit flag turned on by
default, resulting in the commitment of individual SQL statements immediately upon
execution. However, the auto-commit flag can be programmatically changed by calling
the setAutoCommit() method with the argument false. Afterward, SQL statements may
be serialized to form a transaction, followed by a programmatic commit() or rollback().
Thus, JDBC transactions are delimited with the commit or rollback. A particular DBMS's
transaction manager may not work with heterogeneous databases. JDBC drivers that
support distributed transactions provide implementations for
javax.transaction.xa.XAResource and two new interfaces of JDBC 2.0,
javax.sql.XAConnection and javax.sql.XADataSource.
JTA transactions
JTA transactions are controlled and coordinated by the J2EE transaction manager. JTA
transactions are available to all the J2EE components -- servlets, JSPs, and EJBs -- for
programmatic transaction demarcation. Unlike JDBC transactions, in JTA transactions
the transaction context propagates across the various components without additional
programming effort. In J2EE server products, which support the distributed two-phase
commit protocol, a JTA transaction can span updates to multiple diverse databases with
minimal coding effort. However, JTA supports only flat transactions, which have no
nested (child) transactions.
The javax.transaction.UserTransaction interface defines methods that allow applications
to define transaction boundaries and explicitly manage transactions. The
UserTransaction implementation also provides the application components -- servlets,
JSPs, EJBs (with bean-managed transactions) -- with the ability to control transaction
boundaries programmatically. EJB components can access UserTransaction via
EJBContext using the getUserTransaction() method. The methods specified in the
UserTransaction interface include begin(), commit(), getStatus(), rollback(),
setRollbackOnly(), and setTransactionTimeout(int seconds). The J2EE server provides
the object that implements the javax.transaction.UserTransaction interface and makes it
available via JNDI lookup. The isolation level of session EJB components and entity
EJB components that use bean-managed persistence may be programmatically
changed using the setTransactionIsolation() method; however, changing the isolation
level in mid-transaction is not recommended.
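As a hedged sketch of programmatic JTA demarcation (the JNDI name
java:comp/UserTransaction is the standard location; the class name and business logic
are placeholders):

import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

public class TransferService {
    public void transfer() throws Exception {
        // the J2EE server binds its UserTransaction implementation in JNDI
        UserTransaction ut = (UserTransaction) new InitialContext().lookup("java:comp/UserTransaction");
        ut.begin();
        try {
            // ... JDBC work or EJB calls here all enlist in the same JTA transaction ...
            ut.commit();
        } catch (Exception e) {
            ut.rollback(); // none of the enlisted work becomes permanent
            throw e;
        }
    }
}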
Optional aspects of J2EE transactional support
Some aspects of the J2EE platform are optional, which may be due to evolving
standards and introducing new concepts gradually (in terms of Internet time). For
example, in the EJB 1.0 specification, entity beans (and container-managed
persistence) were a relatively new concept and an optional feature. Support for entity
beans became mandatory about a year later in the EJB 1.1 specification because of
high market acceptance and demand. As products mature and support more
sophisticated features, non-trivial features may be made a mandatory part of the
specification. The following are some optional transaction-related aspects:
About the author: Sanjay Mahapatra is a Sun Certified Java programmer (JDK 1.1) and
architect (Java Platform 2). He currently works for Cook Systems International, a
consulting and systems integration vendor for the Java 2 Platform.
Multiple JDBC data sources within a transaction context: The J2EE 1.2 specification
does not require a J2EE server
implementation to support access to multiple JDBC databases within a transaction
context (and support the two-phase commit protocol). The
javax.transaction.xa.XAResource interface is a Java mapping of the industry-standard
XA interface based on X/Open CAE specification. (See Resources.) X/Open is a
consortium of vendors who aim to define a Common Applications Environment that
supports application portability. Support for the multiple JDBC data sources,
javax.transaction.xa.XAResource, two-phase commit, etc., is optional in the current
specification, though the next version will likely mandate such support. Sun
Microsystems's J2EE reference implementation, for instance, supports access to
multiple JDBC databases within the same transaction using the two-phase commit
protocol.
Transactional support for application clients and applets: The J2EE 1.2
specification does not require that transactional support be made available to
application clients and applets. Some J2EE servers may provide such support in their
J2EE server products. As a design practice, transaction management within application
clients should be avoided as much as possible, in keeping with the thin client and three-
tier model. Also, a transaction, being a precious resource, must be distributed sparingly.
Inter-Web-component transaction context propagation: The J2EE 1.2 specification
does not mandate that the transaction context be propagated between Web
components. Typically, Web components like servlets and JSPs need to make calls on
(session) EJB components, rather than to other Web components.
Transactional support and portability
In the interest of component portability, it is important for you -- the designer and
developer -- to understand which aspects of transactional support are mandatory and
which are optional. In the J2EE model, components are written against a specification
and are meant to be deployed on J2EE-compliant application servers from various
vendors -- all in the interest of protecting IT investment and cross-J2EE-server
portability. But if a crucial transactional functionality needs an optional transactional
feature, take adequate care to declare, document, and highlight the dependency clearly,
explicitly, and as early as possible.
Conclusion
J2EE's declarative transaction demarcation approach is more elegant than
programmatic transaction demarcation. At the same time, using declarative transaction
demarcation means relinquishing control of the isolation level, since one is limited to the
default level provided in the DBMS. If you must use programmatic transaction
demarcation, JTA transactions are generally preferred over JDBC transactions. JTA
transactions, however, cannot be nested.
optional and mandatory aspects of transactional support in the J2EE platform. Against
this background, your application's specific transactional needs will naturally govern
your choice of transaction management strategy.
Nested Transactions
Introduction to nested transactions
Using transactions as we have to this point does not always allow applications the
granularity of error isolation that may be desired. If the transaction aborts, all changes
are rolled back. As our example application is now written, this is the desired behavior.
However, for more complicated transactions, we may want a finer granularity in error
isolation. For example, we may not want to undo all parts of a transaction due to an
error in one operation.
As an example, consider the billing part of our application. Currently, the billing
algorithm is the following:
The order server makes an RPC to the billing server.
The billing server, using PPC, queries the billing database on the mainframe and
decrements the account balance.
If the customer has insufficient funds in the billing database, the transaction is aborted.
Any other PPC failures also result in the transaction being aborted.
The abort in Step 3 results in all parts of the transaction being aborted. Not only is the
change to the billing database backed out; changes to the inventory database are
backed out, and the shipping request is dequeued. This algorithm is used for orders
from all customers.
However, suppose we want to extend credit to preferred customers. These customers
are listed in a preferred customer database, which also records the current credit and
maximum credit for preferred customers. The preferred customer database is local; thus
our application does not have to access the mainframe for preferred customers. To use
this database, we change the billing algorithm as follows:
The order server first checks the preferred customer database (which could be another
RDBMS or an SFS file).
If the customer has an entry in that database, we increment the "current credit" amount
by the amount of the order.
If the current order places that customer over the credit limit, we abort the transaction to
back out any changes we made to the database.
Only if the customer is not a preferred customer or does not have sufficient credit do we
make an RPC to the billing server.
In our current transactional model, the abort in Step 3 in the new billing algorithm backs
out not only any changes to the preferred customer database but all work done by the
transaction. We can of course change the algorithm so that the application does not
abort in the case of insufficient funds but instead queries the database and then
decrements it only if sufficient funds exist. However, as we will discuss in Changing the
design of the application server, there are reasons for not doing so.
We need a way to isolate any errors that occur in the interaction with the local database,
preventing such errors from aborting the entire transaction. The solution is to check and
decrement the local database from within a nested transaction. A nested transaction is
a new transaction begun from within the scope of another transaction.
Nested transactions offer several features, including:
Nested transactions enable an application to isolate errors in certain operations.
Nested transactions allow an application to treat several related operations as a single
atomic operation.
Nested transactions can operate concurrently.
Nested transactions, like any other transactions, do incur a performance cost.
Therefore, they should be used only when necessary.
Nested and top-level transactions
As described in the previous section, a nested transaction is begun within the scope of
another transaction. The transaction that starts the nested transaction is called the
parent of the nested transaction. There are two types of nested transactions:
A nested top-level transaction commits or aborts independently of the enclosing
transaction. That is, after it is created, it is completely independent of the transaction
that created it. Tran-C provides a construct for creating nested top-level transactions;
its syntax is identical to that of the transaction construct, but a different keyword is
used in place of the transaction keyword.
A nested subtransaction commits with respect to the parent transaction. That is, even
though the subtransaction commits, the permanence of its effects depends on the
parent transaction committing. If the parent transaction aborts, the results of the nested
transaction are backed out. However, if the nested transaction aborts, the parent
transaction is not aborted. The easiest way to create a nested subtransaction
in Tran-C is to simply use a transaction block within the scope of an
existing transaction. Tran-C automatically makes the new transaction a subtransaction
of the existing transaction.
In this chapter, when we discuss nested transactions, we are generally referring to
nested subtransactions unless we specify otherwise.
A series of nested subtransactions is viewed as a hierarchy of transactions. When
transactions are nested to an arbitrary depth, the transaction that is the parent of the
entire tree (family) of transactions is referred to as the top-level transaction. If the top-
level transaction aborts, all nested transactions are aborted as well.
By default, nested subtransactions of the same parent transaction are executed
sequentially within the scope of the parent. Tran-C provides statements that can be
used to create subtransactions that execute concurrently with each other on behalf of
their parent transaction. For more information, see the Tran-C programming reference.
JDBC Best Practices
http://www.precisejava.com/javaperf/j2ee/JDBC.htm
This topic illustrates the best practices to improve performance in JDBC with the
following sections:
Overview of JDBC
Choosing the right Driver
Optimization with Connection
Set optimal row pre-fetch value
Use Connection pool
Control transaction
Choose optimal isolation level
Close Connection when finished
Optimization with Statement
Choose right Statement interface
Do batch update
Do batch retrieval using Statement
Close Statement when finished
Optimization with ResultSet
Do batch retrieval using ResultSet
Setup proper direction of processing rows
Use proper getxxx() methods
Close ResultSet when finished
Optimization with SQL Query
Cache the read-only and read-mostly data
Fetch small amount of data iteratively instead of fetching whole data at once
Key Points
Overview of JDBC
JDBC defines how a Java program can communicate with a database. This section
focuses mainly on JDBC 2.0 API. JDBC API provides two packages they are java.sql
and javax.sql . By using JDBC API, you can connect virtually any database, send SQL
queries to the database and process the results.
JDBC architecture defines different layers to work with any database and java, they are
JDBC API interfaces and classes which are at top most layer( to work with java ), a
driver which is at middle layer (implements the JDBC API interfaces that maps java to
database specific language) and a database which is at the bottom (to store physical
data). The following figure illustrates the JDBC architecture.
The JDBC API provides interfaces and classes to work with databases. The Connection
interface encapsulates database connection functionality, the Statement interface
encapsulates SQL query representation and execution, and the ResultSet interface
encapsulates retrieval of the data that comes from executing a SQL query with a Statement.
The following are the basic steps to write a JDBC program
1. Import java.sql and javax.sql packages
2. Load JDBC driver
3. Establish connection to the database using Connection interface
4. Create a Statement by passing SQL query
5. Execute the Statement
6. Retrieve results by using ResultSet interface
7. Close Statement and Connection
We will look at these areas one by one: what type of driver you need to load, how to use
the Connection interface in the best manner, how to use the different Statement interfaces,
how to process results using ResultSet, and finally how to optimize SQL queries to improve
JDBC performance.
Note 1: Your JDBC driver should be fully compatible with JDBC 2.0 features in order to
use some of the suggestions mentioned in this section.
Note 2: This section assumes that the reader has some basic knowledge of JDBC.
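Before diving in, here is a hedged end-to-end sketch of the seven basic steps listed
above (the driver class, connection URL, credentials, and table are placeholders,
reusing the Oracle examples that appear later in this section):

import java.sql.Connection;                                          // 1. import java.sql
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcExample {
    public static void main(String[] args) throws Exception {
        Class.forName("oracle.jdbc.driver.OracleDriver");            // 2. load the driver
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@hoststring", "scott", "tiger");   // 3. establish the connection
        Statement stmt = con.createStatement();                      // 4. create a Statement
        ResultSet rs = stmt.executeQuery("SELECT name FROM employee"); // 5. execute the Statement
        while (rs.next()) {                                          // 6. retrieve results
            System.out.println(rs.getString("name"));
        }
        rs.close();
        stmt.close();                                                // 7. close Statement
        con.close();                                                 //    and Connection
    }
}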
Choosing the right Driver
Here we will walk through the types of drivers, the availability of drivers, and the use of
drivers in different situations, and then we will discuss which driver suits your
application best.
The driver is the key player in a JDBC application; it acts as a mediator between the Java
application and the database. It implements the JDBC API interfaces for a database, for
example the Oracle driver for an Oracle database or the Sybase driver for a Sybase
database. It maps the Java language to the database-specific language, including SQL.
JDBC defines four types of drivers to work with. Depending on your requirements, you
can choose one among them.
Here is a brief description of each type of driver:

Type 1 (two tier, JDBC-ODBC Bridge): Converts JDBC calls to ODBC calls, which the
ODBC driver in turn converts to database calls. The client requires ODBC libraries.
Type 2 (two tier, Native API, partly Java driver): Converts JDBC calls to database-specific
native calls. The client requires database-specific libraries.
Type 3 (three tier, JDBC-Net, all Java driver): Passes calls to a proxy server through a
network protocol; the proxy server converts them to database calls and passes them on
through the database-specific protocol. The client doesn't require any driver.
Type 4 (two tier, Native protocol, all Java driver): Calls the database directly. The client
doesn't require any driver.
The choice of driver depends on the availability of drivers and on your requirements.
Generally, all databases support their own drivers or drivers from third-party vendors. If
you don't have a driver for your database, the JDBC-ODBC Bridge driver is the only
choice, because almost all vendors support ODBC. If you have a tiered requirement
(two tier or three tier) for your application, then you can filter down your choices; for
example, if your application is three tiered, then you can go for a Type 3 driver between
the client and the proxy server. If you want to connect to a database from a Java applet,
then you have to use a Type 4 driver, because it is the only driver that supports that
feature.
For both two and three tiered applications, you can filter down easily to a Type 3 driver,
but you can use Type 1, 2 and 4 drivers for either. To be more precise, for Java
applications (non-applet) you can use a Type 1, 2 or 4 driver. Here is exactly where you
may make a mistake by choosing a driver without taking performance into
consideration. Let us look at that perspective in the following section.
Type 3 and 4 drivers are faster than the others: a Type 3 driver can take advantage of
optimization techniques provided by the application server, such as connection pooling,
caching, and load balancing, and a Type 4 driver need not translate database calls to
ODBC or a native connectivity interface. Type 1 drivers are slow because they have to
convert JDBC calls to ODBC through the JDBC-ODBC Bridge driver first, and then the
ODBC driver converts them into database-specific calls. Type 2 drivers give average
performance compared to Type 3 and 4 drivers, because the database calls still have to
be converted into database-specific calls; they give better performance than Type 1
drivers.
Finally, to improve performance:
1. Use a Type 4 driver for applet-to-database communication.
2. Use a Type 2 driver in two tiered applications for communication between the Java
client and the database; it gives better performance than a Type 1 driver.
3. Use a Type 1 driver if your database doesn't have its own driver. This is a rare
situation, because almost all major databases support drivers, or you can get them from
third-party vendors.
4. Use a Type 3 driver to communicate between the client and a proxy server (WebLogic,
WebSphere, etc.) in three tiered applications; it gives better performance than Type 1
and 2 drivers.

Optimization with Connection
The java.sql package in JDBC provides the Connection interface, which encapsulates
database connection functionality. Using the Connection interface, you can fine-tune the
following operations:
1. Set optimal row pre-fetch value
2. Use Connection pool
3. Control transaction
4. Choose optimal isolation level
5. Close Connection when finished
Each of these operations affects performance. We will walk through each operation
one by one.
1. Set optimal row pre-fetch value
We have different approaches to establishing a connection with the database. The first
type of approach is:
1. DriverManager.getConnection(String url)
2. DriverManager.getConnection(String url, Properties props)
3. DriverManager.getConnection(String url, String user, String password)
4. Driver.connect(String url, Properties props)
When you use this approach, you can pass database-specific information to the
database by passing properties using a Properties object, which can improve performance.
For example, when you use an Oracle database, you can pass the default number of rows
that must be pre-fetched from the database server and the default batch value that
triggers an execution request. Oracle's default value is 10 for both properties. By
increasing the value of these properties, you can reduce the number of database calls,
which in turn improves performance. The following code snippet illustrates this approach.
java.util.Properties props = new java.util.Properties();
props.put("user","scott");
props.put("password","tiger");
props.put("defaultRowPrefetch","30");
props.put("defaultBatchValue","5");
Connection con = DriverManager.getConnection("jdbc:oracle:thin:@hoststring", props);
You need to figure out appropriate values for the above properties for better performance,
depending on your application's requirements. Suppose you want to tune these
properties for a search facility: you can increase defaultRowPrefetch to boost
performance significantly.
The second type of approach is to get the connection from a DataSource.
You can get the connection using the javax.sql.DataSource interface. The advantage of
this approach is that the DataSource works with JNDI. The implementation of
DataSource is done by the vendor; for example, you can find this feature in WebLogic,
WebSphere, etc. The vendor simply creates a DataSource implementation class and
binds it to the JNDI tree. The following code shows how a vendor creates the
implementation class and binds it to the JNDI tree.
DataSourceImpl dsi = new DataSourceImpl();
dsi.setServerName("oracle8i");
dsi.setDatabaseName("Demo");
Context ctx = new InitialContext();
ctx.bind("jdbc/demoDB", dsi);
This code registers the DataSourceImpl object to the JNDI tree, then the programmer
can get the DataSource reference from JNDI tree without knowledge of the underlying
technology.
Context ctx = new InitialContext();
DataSource ds = (DataSource)ctx.lookup("jdbc/demoDB");
Connection con = ds.getConnection();
By using this approach we can improve performance. Nearly all major application
servers, such as WebLogic and WebSphere, implement DataSource by taking a
connection from a connection pool rather than opening a new connection every time.
The application server creates the connection pool by default. We will discuss the
advantages of connection pools for performance in the next section.
2. Use Connection pool
Creating a connection to the database server is expensive, and even more so if the
server is located on another machine. A connection pool contains a number of open
database connections, between a minimum and a maximum count that you specify; the
pool expands and shrinks between the minimum and maximum sizes depending on the
incremental capacity. You need to give the minimum, maximum and incremental sizes
as properties to the pool in order to maintain that functionality, and you get connections
from the pool rather than directly. For example, if you give the min, max and incremental
sizes as 3, 10 and 1, the pool is created with size 3 initially; when it reaches its capacity
of 3 and a client requests a connection concurrently, it increments its capacity by 1 until
it reaches 10, after which it puts further clients in a queue.
There are a few choices when using a connection pool:
1. You can depend on the application server if it supports this feature; generally, all
application servers support connection pools. The application server creates the
connection pool on your behalf when it starts. You need to give properties like the min,
max and incremental sizes to the application server.
2. You can use the JDBC 2.0 interfaces ConnectionPoolDataSource and
PooledConnection, if your driver implements these interfaces.
3. Or you can create your own connection pool, if you are not using an application
server or a JDBC 2.0 compatible driver.
By using any of these options, you can increase performance significantly. You need to
take care with properties like the min, max and incremental sizes. The maximum
number of connections depends on your application's requirements, that is, how many
clients can access your database concurrently, and also on your database's capability
to provide that many connections.

3. Control transaction
In general, a transaction represents one unit of work: a bunch of code in the program
that executes in its entirety or not at all. To be precise, it is all-or-no work. In JDBC, a
transaction is a set of one or more Statements that execute as a single unit. The
java.sql.Connection interface provides some methods to control transactions:
public interface Connection {
    boolean getAutoCommit();
    void setAutoCommit(boolean autocommit);
    void commit();
    void rollback();
}
JDBC's default mechanism for transactions:
By default, a JDBC transaction starts and commits after each statement's execution on
a connection; that is, the AutoCommit mode is true. The programmer need not write a
commit() explicitly after each statement.
This default mechanism is convenient when you want to execute a single statement,
but it gives poor performance when multiple statements on a connection are to be
executed, because a commit is issued after each statement by default and those
unnecessary commits reduce performance. The remedy is to set AutoCommit to false
and issue commit() after a set of statements execute; this is called a batch transaction.
Use rollback() in the catch block to roll back the transaction whenever an exception
occurs in your program. The following code illustrates the batch transaction approach.
PreparedStatement ps = null;
PreparedStatement ps1 = null;
try {
    connection.setAutoCommit(false);
    ps = connection.prepareStatement("UPDATE employee SET Address=? WHERE name=?");
    ps.setString(1, "Austin");
    ps.setString(2, "RR");
    ps.executeUpdate();
    ps1 = connection.prepareStatement("UPDATE account SET salary=? WHERE name=?");
    ps1.setDouble(1, 5000.00);
    ps1.setString(2, "RR");
    ps1.executeUpdate();
    connection.commit();
    connection.setAutoCommit(true);
} catch (SQLException e) {
    connection.rollback();
} finally {
    if (ps != null) { ps.close(); }
    if (ps1 != null) { ps1.close(); }
    if (connection != null) { connection.close(); }
}
This batch transaction gives good performance by reducing commit calls after each
statement's execution.
4. Choose optimal isolation level
Isolation levels represent how a database maintains data integrity against problems like
dirty reads, phantom reads and non-repeatable reads, which can occur due to
concurrent transactions. The java.sql.Connection interface provides methods and
constants to avoid the above mentioned problems by setting different isolation levels:
public interface Connection {
    public static final int TRANSACTION_NONE             = 0;
    public static final int TRANSACTION_READ_UNCOMMITTED = 1;
    public static final int TRANSACTION_READ_COMMITTED   = 2;
    public static final int TRANSACTION_REPEATABLE_READ  = 4;
    public static final int TRANSACTION_SERIALIZABLE     = 8;
    int getTransactionIsolation();
    void setTransactionIsolation(int isolationlevelconstant);
}
You can get the existing isolation level with the getTransactionIsolation() method and
set the isolation level with setTransactionIsolation(int isolationlevelconstant), passing
one of the above constants to this method.
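For example, a short sketch (assuming the driver and the database support the
requested level):

Connection con = DriverManager.getConnection(url, user, password);
int current = con.getTransactionIsolation();                        // read the current level
con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);  // request a stricter one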
The following table describes each isolation level, the problems it permits, and its
performance impact:

Transaction Level              Dirty reads  Non-repeatable reads  Phantom reads  Performance impact
TRANSACTION_NONE               N/A          N/A                   N/A            Fastest
TRANSACTION_READ_UNCOMMITTED   Yes          Yes                   Yes            Fastest
TRANSACTION_READ_COMMITTED     No           Yes                   Yes            Fast
TRANSACTION_REPEATABLE_READ    No           No                    Yes            Medium
TRANSACTION_SERIALIZABLE       No           No                    No             Slow

Yes means that the isolation level does not prevent the problem; No means that the
isolation level prevents the problem.
By setting isolation levels, you have an impact on performance, as shown in the above
table. Databases use read and write locks to enforce these isolation levels. Let us have
a look at each of these problems and then look at the impact on performance.
Dirty read problem:
The following steps illustrate the dirty read problem:
Step 1: Database row has PRODUCT = A001 and PRICE = 10
Step 2: Connection1 starts Transaction1 (T1) .
Step 3: Connection2 starts Transaction2 (T2) .
Step 4: T1 updates PRICE =20 for PRODUCT = A001
Step 5: Database has now PRICE = 20 for PRODUCT = A001
Step 6: T2 reads PRICE = 20 for PRODUCT = A001
Step 7: T2 commits transaction
Step 8: T1 rolls back the transaction because of some problem
The problem is that T2 gets the wrong PRICE=20 for PRODUCT = A001 instead of 10
because of the uncommitted read. Obviously it is very dangerous in critical transactions
if you read inconsistent data. If you are sure the data will not be accessed concurrently,
then you can allow this problem by setting TRANSACTION_READ_UNCOMMITTED or
TRANSACTION_NONE, which in turn improves performance; otherwise you have to
use TRANSACTION_READ_COMMITTED to avoid this problem.
Non-repeatable read problem
The following steps illustrate the non-repeatable read problem:
Step 1: Database row has PRODUCT = A001 and PRICE = 10
Step 2: Connection1 starts Transaction1 (T1) .
Step 3: Connection2 starts Transaction2 (T2) .
Step 4: T1 reads PRICE =10 for PRODUCT = A001
Step 5: T2 updates PRICE = 20 for PRODUCT = A001
Step 6: T2 commits transaction
Step 7: Database row has PRODUCT = A001 and PRICE = 20
Step 8: T1 reads PRICE = 20 for PRODUCT = A001
Step 9: T1 commits transaction
Here the problem is that Transaction1 reads 10 the first time and 20 the second time,
but within one transaction it should see the same value every time it reads the record.
You can prevent this problem by setting the isolation level to
TRANSACTION_REPEATABLE_READ.
Phantom read problem
The following steps illustrate the phantom read problem:
Step 1: Database has a row PRODUCT = A001 and COMPANY_ID = 10
Step 2: Connection1 starts Transaction1 (T1) .
Step 3: Connection2 starts Transaction2 (T2) .
Step 4: T1 selects a row with a condition SELECT PRODUCT WHERE
COMPANY_ID = 10
Step 5: T2 inserts a row with a condition INSERT PRODUCT=A002 WHERE
COMPANY_ID= 10
Step 6: T2 commits transaction
Step 7: Database has 2 rows with that condition
Step 8: T1 select again with a condition SELECT PRODUCT WHERE
COMPANY_ID=10
and gets 2 rows instead of 1 row
Step 9: T1 commits transaction
Here the problem is that T1 gets 2 rows instead of 1 when it runs the same query a
second time. You can prevent this problem by setting the isolation level to
TRANSACTION_SERIALIZABLE.
How to choose the right isolation level for your program?
Choosing the right isolation level for your program depends on your application's
requirements, and even within a single application the requirements generally vary.
For example, if you write a program that searches a product catalog, you can safely
choose TRANSACTION_READ_UNCOMMITTED: some other program may insert records at the
same time, but you do not need to worry about the problems mentioned above, and this
improves performance significantly.
If you write a critical program, such as a banking or stock analysis application,
where you want to prevent all of the above problems, choose TRANSACTION_SERIALIZABLE
for maximum safety. Here the tradeoff is between safety and performance, and safety
is what matters.
If your application does not have to deal with concurrent transactions, the best
choice is TRANSACTION_NONE to improve performance.
The other two isolation levels need a good understanding of your requirements. If your
application needs only committed records, TRANSACTION_READ_COMMITTED is a good choice.
If your application needs to read a row exclusively until you finish your work,
TRANSACTION_REPEATABLE_READ is the best choice.
Note: Be aware of your database server's support for these isolation levels; servers
may not support all of them. For example, Oracle supports only two isolation levels,
TRANSACTION_READ_COMMITTED and TRANSACTION_SERIALIZABLE, with
TRANSACTION_READ_COMMITTED as the default.
4. Close connection when finished
Closing a connection explicitly allows the garbage collector to reclaim memory as early
as possible. Remember that when you use a connection pool, closing the connection
returns it to the pool rather than closing the physical connection to the database.
Optimization with Statement
The Statement interfaces represent an SQL query and its execution, and they provide a
number of methods and constants to work with queries, including methods to fine-tune
performance. Programmers often overlook these fine-tuning methods, which results in
poor performance. The following tips improve performance when using the statement
interfaces:
1. Choose the right Statement interface
2. Do batch update
3. Do batch retrieval using Statement
4. Close Statement when finished
1. Choose the right Statement interface
There are three types of Statement interfaces in JDBC to represent an SQL query and
execute it: Statement, PreparedStatement and CallableStatement. Statement is used for
static SQL statements with no input or output parameters; PreparedStatement is used
for dynamic SQL statements with input parameters; and CallableStatement is used for
dynamic SQL statements with both input and output parameters. PreparedStatement and
CallableStatement can be used for static SQL statements as well, and CallableStatement
is mainly meant for stored procedures.
PreparedStatement gives better performance than Statement because it is pre-parsed and
pre-compiled by the database the first time it is used, and from then on the parsed and
compiled statement is reused. This significantly improves performance when a statement
executes repeatedly, because it removes the overhead of parsing and compiling.
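As a sketch of this reuse pattern (the employee table matches the earlier examples; the updates list is an illustrative assumption), the statement is prepared once and executed many times with different parameter values:
PreparedStatement ps = connection.prepareStatement(
        "UPDATE employee SET Address=? WHERE name=?");
// updates is a hypothetical List<String[]> of {address, name} pairs
for (String[] row : updates) {
    ps.setString(1, row[0]);
    ps.setString(2, row[1]);
    ps.executeUpdate(); // parsed and compiled only once, executed many times
}
ps.close();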
CallableStatement gives better performance than PreparedStatement and Statement when a
single request must process multiple complex statements. The stored procedures are
parsed and stored in the database, and all the work is done in the database itself,
which improves performance. The drawback is that you lose Java portability and have to
depend on database-specific stored procedures.
2. Do batch update
You can send multiple queries to the database at a time using the batch update feature
of statement objects; this reduces the number of JDBC calls and improves performance.
Here is an example of how you can do a batch update:
statement.addBatch("sql query1");
statement.addBatch("sql query2");
statement.addBatch("sql query3");
statement.executeBatch();
All three types of statements have these methods to do batch update.
3. Do batch retrieval using Statement
The driver fetches a default number of rows at a time. You can improve performance by
increasing the number of rows fetched per round trip from the database using the
setFetchSize() method of the statement object.
First find the default size, and then set the size as per your requirement:
int defaultSize = statement.getFetchSize();
statement.setFetchSize(30);
Here the driver retrieves 30 rows at a time for all result sets created from this statement.
4. Close Statement when finished
Close the statement object as soon as you finish working with it; this gives the
garbage collector a chance to reclaim memory as early as possible:
statement.close();
Optimization with ResultSet
The ResultSet interface represents the data returned by executing an SQL query and
provides a number of methods and constants to work with that data, including methods
to fine-tune retrieval for performance. The following tips improve performance when
using the ResultSet interface:
1. Do batch retrieval using ResultSet
2. Set up proper direction for processing the rows
3. Use proper getXXX() methods
4. Close ResultSet when finished
1. Do batch retrieval using ResultSet
The ResultSet interface provides the same batch retrieval facility as Statement,
described above, and its setting overrides the Statement setting.
First find the default size, and then set the size as per your requirement:
int defaultSize = resultSet.getFetchSize();
resultSet.setFetchSize(50);
This feature significantly improves performance when retrieving a large number of rows,
as in search functionality.
2. Set up proper direction for processing the rows
ResultSet lets you set the direction in which you want to process the results. It has
three constants for this purpose: FETCH_FORWARD, FETCH_REVERSE and FETCH_UNKNOWN.
First find the current direction, and then set the direction accordingly:
int direction = resultSet.getFetchDirection();
resultSet.setFetchDirection(ResultSet.FETCH_REVERSE);
3. Use proper getXXX() methods
The ResultSet interface provides many getXXX() methods to retrieve database data types
and convert them to Java data types, and it is flexible enough to convert between
mismatched types. For example, getString(String columnName) returns a Java String
object; the column is recommended to be a database VARCHAR or CHAR type, but it can
also be a NUMERIC, DATE, etc.
If you pass a column of a non-recommended type, the driver has to convert it to the
proper Java data type, which is expensive. For example, if a search selects a product's
id from a huge database and returns millions of records, converting every one of those
values is very expensive.
So always use the proper getXXX() method according to the JDBC recommendations.
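For example, assuming the salary column is a database NUMERIC type, fetch it with the matching getter instead of going through a String:
ResultSet rs = statement.executeQuery("SELECT salary FROM employee");
while (rs.next()) {
    double salary = rs.getDouble("salary"); // matches the NUMERIC column type directly
}
rs.close();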
4. Close ResultSet when finished
Close the ResultSet object as soon as you finish working with it. Even though the
Statement object closes the ResultSet implicitly when it is closed, closing the
ResultSet explicitly gives the garbage collector a chance to reclaim memory as early
as possible, because a ResultSet can occupy a lot of memory depending on the query:
resultSet.close();
Optimization with SQL Query
This is one of the areas where programmers commonly make a mistake. Consider a query like:
Statement stmt = connection.createStatement();
ResultSet rs = stmt.executeQuery("SELECT * FROM employee WHERE name='RR'");
The returned result set contains the data of all the columns, but you may not need all
of it and only want the salary for RR. The better query is:
SELECT salary FROM employee WHERE name='RR'
It returns only the required data and avoids unnecessary data retrieval.
Cache the read-only and read-mostly table data
Every database schema generally has read-only and read-mostly tables, often called
lookup tables. Read-only tables contain static data that never changes during its
lifetime; read-mostly tables contain semi-static data that changes only occasionally.
Essentially no write operations happen on these tables.
If an application reads data from these tables for every client request, the work is
redundant, unnecessary and expensive. The solution is to cache the read-only table
data by reading it once, and to cache the read-mostly table data by reading it and
refreshing it on a time limit. This improves performance significantly. See the
following link for the source code of such a caching mechanism:
http://www.javaworld.com/javaworld/jw-07-2001/jw-0720-cache.html
You can tweak this code to suit your application. For read-only data you never need to
refresh; for read-mostly data you refresh on a time limit. It is better to put this
refresh interval in a properties file so that it can be changed at any time.
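As an illustration, here is a minimal sketch of such a cache; the loadFromDatabase() helper and the lookup table are assumptions, and the refresh interval would be read from your properties file:
import java.util.HashMap;
import java.util.Map;

// A time-limited cache for a read-mostly lookup table. For a read-only
// table, pass a very large refresh interval so the data is loaded only once.
public class LookupCache {
    private Map<String, String> data;    // the cached lookup rows
    private long loadedAt;               // time of the last load
    private final long refreshMillis;    // refresh interval, read from a properties file

    public LookupCache(long refreshMillis) {
        this.refreshMillis = refreshMillis;
    }

    public synchronized Map<String, String> get() {
        long now = System.currentTimeMillis();
        if (data == null || now - loadedAt > refreshMillis) {
            data = loadFromDatabase(); // hypothetical helper: one SELECT over the lookup table
            loadedAt = now;
        }
        return data;
    }

    private Map<String, String> loadFromDatabase() {
        // Hypothetical: run "SELECT code, description FROM lookup_table"
        // via JDBC and collect the rows into a map.
        return new HashMap<String, String>();
    }
}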
Fetch small amounts of data iteratively instead of fetching the whole data at once
Applications often need to retrieve huge amounts of data from the database using JDBC,
for example when searching. When a client requests a search, the application might
return the whole result set at once; this takes a lot of time and hurts performance.
There are two solutions to the problem:
1. Cache the search data on the server side and return it to the client iteratively.
For example, if the search returns 1000 records, return the data to the client in 10
iterations of 100 records each.
2. Use stored procedures to return the data iteratively. This does not use server-side
caching; instead, the server-side application uses stored procedures to return small
amounts of data iteratively.
Of these, the second solution gives better performance because it does not need to keep
the data in an in-memory cache. The first approach is useful when the total amount of
data to be returned is not huge.
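Here is a minimal sketch of the first approach (all names are illustrative): the full result is cached once on the server and handed to the client one page at a time:
import java.util.Collections;
import java.util.List;

// Pages through a cached search result instead of sending it all at once.
public class SearchPager {
    private final List<String> results; // the cached search results, loaded once
    private final int pageSize;

    public SearchPager(List<String> results, int pageSize) {
        this.results = results;
        this.pageSize = pageSize;
    }

    // Returns page n (0-based) of the cached results.
    public List<String> getPage(int n) {
        int from = n * pageSize;
        int to = Math.min(from + pageSize, results.size());
        if (from >= to) {
            return Collections.emptyList();
        }
        return results.subList(from, to);
    }
}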
Key Points
Use a Type 2 driver for two-tier applications communicating from a Java client to the database; it gives better performance than a Type 1 driver.
Use a Type 4 driver for applet-to-database communication (two-tier) and for three-tier applications, when compared to the other drivers.
Use a Type 1 driver only if you don't have a driver for your database. This is a rare situation, because all major databases have drivers, or you can get one from a third-party vendor.
Use a Type 3 driver to communicate between the client and a proxy server (WebLogic, WebSphere, etc.) for three-tier applications; it gives better performance than Type 1 and 2 drivers.
Pass database-specific properties, such as defaultPrefetch, if your database supports any of them.
Get database connections from a connection pool rather than opening them directly.
Use batch transactions.
Choose the right isolation level as per your requirement. TRANSACTION_READ_UNCOMMITTED gives the best performance for concurrent-transaction applications; TRANSACTION_NONE gives the best performance for non-concurrent applications.
Your database server may not support all isolation levels; be aware of your server's features.
Use PreparedStatement when you execute the same statement more than once.
Use CallableStatement when you want results from multiple complex statements in a single request.
Use the batch update facility available in Statements.
Use the batch retrieval facility available in Statements and ResultSets.
Set up the proper direction for processing rows.
Use the proper getXXX() methods.
Close ResultSet, Statement and Connection objects whenever you finish working with them.
Write precise SQL queries.
Cache read-only and read-mostly table data.
Fetch small amounts of data iteratively rather than the whole data set at once when retrieving large amounts of data, as when searching the database.

PostgreSQL Tips
PostgreSQL Wiki Article on Performance Optimization
Optimizing PostgreSQL
Monitoring PostgreSQL Deadlocks
You can monitor database locks using the Entity SQL Processor in Webtools with the following query:
select pg_class.relname, pg_locks.mode, pg_locks.relation, pg_locks.transaction,
pg_locks.pid
from pg_class, pg_locks
where pg_class.relfilenode = pg_locks.relation
order by pid
This will show what kinds of locks are active on what entities. If there is an exclusive
lock on a table followed by a bunch of pids that are waiting for it, then you have a
deadlock.
Monitoring Open PostgreSQL Connections
Run this query:
select datname, numbackends from pg_stat_database;
to see the number of open connections to each of your databases. See How to Know
Number of Connections Made with Database - PostgreSQL
If you are running out of connections, edit the file
framework/entity/config/entityengine.xml and increase the number of connections
available.
MySQL Tips
Turning Off Table Name Case Sensitivity
If you use Linux or Unix for your MySQL server, table names may be case sensitive, so
PRODUCT and product are not the same table. You can turn this off by configuring
mysqld on startup to ignore case in table names with the lower-case-table-names flag,
such as in this example from /etc/init.d/mysql:
$bindir/mysqld_safe --datadir=$datadir --lower-case-table-names=1 --pid-
file=$server_pid_file $other_args >/dev/null 2>&1 &
See the MySQL manual on identifier case sensitivity.
UTF-8 Support
By default, MySQL uses the Latin1 character set, which is intended for Western European
languages such as English. If you wish to use MySQL for other languages, you may need
to set up your database with UTF-8 character set encoding. To do this, first create
your database using UTF-8:
mysql> create database opentaps default character set utf8 collate utf8_general_ci;
Then set your framework/entity/config/entityengine.xml file for the MySQL database to
use the UTF-8 character set:
character-set="utf8"
collate="utf8_general_ci"
Note that it is not clear that MySQL supports case-sensitive UTF-8 collation at this
point, although you may be able to use the UTF-8 binary collation.
DB2 Tips
DB2 Basics
You must configure DB2 to have tablespaces of 8K or more. This can be done when
you create the database from the Control Center.
If you get an error message from DB2, you will get a SQLCODE like below:
DB2 SQL Error: SQLCODE=-270, SQLSTATE=42997, SQLERRMC=63,
DRIVER=3.50.152
To figure out what it is, you have to run db2 from the command line:
$ db2 ? sql-270
Some of the more popular codes are:
SQL-204: <name> not recognized. Most likely, you are referencing a table that doesn't
exist.
SQL-270: Function not supported. See the SQLERRMC for the message code. If you
get sql-270 with sqlerrmc=63, it means that you are trying to select a CLOB/BLOB type
with a scroll insensitive cursor.
SQL-286: insufficient page size for CREATE TABLE
SQL-530: foreign key violation
SQL-803: operation violates a unique value constraint
Making DB2 Work with opentaps
There are three issues with using DB2 and opentaps:
You must define a fieldtypedb2.xml file for your framework/entity/fieldtype/ directory.
You can start with the field type XML from another database, such as MySQL. Most of
the valid DB2 field types are similar, but DB2 does not have a "NUMERIC" type; it is
called "DECIMAL" instead and must be used for floating-point and currency field types.
On startup, the ofbiz entity engine checks the database against the entity model
definitions. Part of the check verifies that the primary keys of all the tables are
correctly defined, but the entity engine attempts to obtain the primary key information
for all the tables of the database at once, which DB2 does not support. To make this
feature work, you need to modify DatabaseUtil.java to have the entity engine check the
primary keys one table at a time.
The biggest problem with DB2 is that it does not support SELECT operations which
include CLOB/BLOB fields when the ResultSet is scroll insensitive (see [1]). The
solution is not as simple as just changing the result set type, because DB2 also does
not support (i) SELECT operations on views or with JOINs using a scroll-sensitive
cursor, or (ii) moving around with .absolute(i) or .relative(i) on a ResultSet of
TYPE_FORWARD_ONLY. This means that view entities which include CLOB/BLOB types cannot
be SELECTed (because you cannot use a scroll-insensitive ResultSet), and that the
EntityListIterator.getPartialList method will not work (because you cannot use
.absolute and .relative), so the ofbiz form widget's list form will not paginate
correctly. There is no complete solution for this problem, but the following
workarounds exist:
Since the majority of the large object (LOB) types are CLOBs for long character
strings, you can redefine the field type for your blob and very-long types to be the
longest possible VARCHAR instead of using CLOB.
You can avoid using the getPartialList feature and instead use findAll or findByAnd to
return a Java list, and then use the subList() method on it. These queries are done
with TYPE_FORWARD_ONLY and return the entire list at once, but the drawback is that a
Java list has a limited capacity of about 65,000 records.
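A sketch of the second workaround, using the classic ofbiz delegator API (the entity name and fields are illustrative assumptions, not an actual opentaps query):
// Fetch the whole list with a TYPE_FORWARD_ONLY query, then page it in memory.
List all = delegator.findByAnd("SurveyResponse",
        UtilMisc.toMap("surveyId", surveyId),
        UtilMisc.toList("-responseDate"));
List page = all.subList(start, Math.min(start + pageSize, all.size()));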
If neither of these workarounds is acceptable, you will have to rewrite certain
features (like surveys with long text responses) to conform to DB2's restrictions.
In practice, most ERP-related uses of opentaps do not require CLOBs, so the first
option should suffice. Only the content management features would require such field
types, and those features would need to be rewritten for DB2 compatibility.

Source Code Repository Management:


SVN Tips
Setting up Commit Emails
Before we begin, ensure that mail can be sent from the server on which the subversion
repository lives. If you're on a UNIX OS, this can be tested by running the mail
program from the command line:
prompt:> mail -s "test subject" someone@somewhere.com
Type in the body of the email and press CTRL-D to send.
^D
prompt:>
Next, go to the hooks/ directory of your subversion repository. It should contain
several files ending in .tmpl. These are shell script templates that allow you to hook
actions to subversion events. In particular, we are looking for post-commit.tmpl,
which we will use to send email whenever a commit is made.
Ensure that commit-email.pl exists in the hooks/ directory. If you cannot find it, get
it from here.
Create a copy of post-commit.tmpl named post-commit.
Edit post-commit and add the following before the call to commit-email.pl:
PATH=/path/to/subversion/hooks
Comment out log-commit.py if present; it's not necessary. The script only needs to run
commit-email.pl.
Specify the email, or a space-separated list of emails, to send these commit
notifications to as the last arguments to commit-email.pl.
Make sure that commit-email.pl and post-commit are executable by anyone:
chmod a+x commit-email.pl post-commit
That should be it. Commit something to test things out. If there is a problem, it will
usually show up in the mail server logs. The next sections cover possible issues.
Permission Denied
If you're running svnserve to allow checkout using svn:// URL notation, then you have to
make sure that the user that runs svnserve can also read and execute the scripts in
hooks/. Make sure this user owns that directory and all files in it.
You might also want to make sure this user can send emails to the mailserver. Log in as
the user and use the mail command to test this.

How to use SVK
Contents
1 SVK Setup and Usage Instructions
1.1 Setup
1.2 Migrating an Existing Repository
1.3 Migrating Changes Between the Parent Mirror and the Child Depot
SVK Setup and Usage Instructions
Setup
These are the steps which need to be followed to set up SVK initially, whether there is
an existing SVN repository to migrate or not. This assumes a default install of
Subversion and SVK. More information can be found in the book Version Control with SVK.
First, initialize the default SVK depot map:
svk depotmap --init
Then mirror the parent repository into a path in the local depot:
svk mirror svn://user@svn.parentrepository.com/project //project_parent
Mirror initialized. Run svk sync //project_parent to start mirroring.
Next, synchronize the mirror, pulling all the revisions down from the parent repository
(this can take a while for a large repository):
svk sync //project_parent
Syncing svn://user@svn.parentrepository.com/project
Retrieving log information from 1 to 134
Committed revision 2 from revision 1.
...
Committed revision 135 from revision 134.
Once the mirror is synchronized, create a local copy (branch) of the mirrored project
inside the depot; local work is committed to this copy rather than directly to the mirror:
svk cp //project_parent //project_new
Committed revision 136.
To let ordinary Subversion clients work against the local SVK depot, serve it with
svnserve (the depot lives under the .svk/local directory):
svnserve -d -r /path/to/.svk/local
Finally, check out a working copy of the new project from the locally served depot:
svn co svn://localhost/project_new /path/to/working/copy/of/project
You can now work in this working copy as usual, committing changes to the local depot
even while disconnected from the parent repository.
Migrating an Existing Repository
These are the steps to follow when a previously existing SVN repository needs to be
migrated to SVK. This assumes the setup steps above have been completed. First, mirror
the old repository into the local depot:
svk mirror svn://user@childrepositoryhost/project_old //project_old
Mirror initialized. Run svk sync //project_old to start mirroring.
Synchronize the mirror of the old repository, and note the depot revision numbers its
revisions are committed as (138 through 143 in this example); they are needed for the
merge step:
svk sync //project_old
Syncing svn://user@childrepositoryhost/project_old
Retrieving log information from 1 to 6
Committed revision 138 from revision 1.
...
Committed revision 143 from revision 6.
If svnserve is still running against the local depot, stop it before merging (1234
stands for the actual svnserve process id):
kill 1234
Then merge the old project's revisions into the new project, using the depot revision
range noted above; the -l flag preserves the individual log messages:
svk merge -r 138:143 -l //project_old //project_new
Committed revision 144.
Once the merge is committed, the contents of the old repository are part of
//project_new, and the mirror of the old repository is no longer needed, so detach it:
svk mirror -d //project_old
Mirror path '//project_old' detached.
Finally, update the working copy to pick up the merged changes:
svn up /path/to/working/copy/of/project
The working copy now contains the files and history migrated from the old repository,
and you can restart svnserve if you stopped it earlier.
+ pp( p p p+ pp p p pp
To pull all changes from the parent repository into the mirror, and push all changes from
the mirror into the parent repository:
svk sync //project_parent
As above, but only up to a specific revision:
svk sync -t 133 //project_parent
To merge a specific revision range from the parent depot (mirror) to the child depot
(copy of mirror):
svk merge -r 1000:1050 //project_parent //project_child
(Note that the revision numbers refer to the SVK depot revision numbers, not the
revision numbers of the parent repository.)
To auto-merge all changes from the parent depot to the child depot:
svk smerge //project_parent //project_child or svk pull //project_child
As above, synchronizing the parent mirror first:
svk smerge -s //project_parent //project_child
Trac Tips
Prefixing the Email Subject with [projectname trac]
It's useful to prefix the emails sent from trac with the project or company name to
distinguish them from other trac systems. To do this, edit the trac.ini file and change
the following line:
smtp_subject_prefix = [projectname trac]
As with all trac configuration changes, there is no need to restart the server. The
next trac email subject will be prefixed with [projectname trac].
CSS Display Bugs in IE
Float Drop Bug
This bug occurs when you have a floating box and the content that should be next to it
is pushed underneath. It is a well-known bug and has many solutions. The best solution
is to avoid CSS and use tables for this kind of layout; when that is not possible, we
must hack the CSS until it does what we want.
As an example, suppose you have a fixed-width main content area, say 800 pixels wide.
On the left is a floating sidebar that has float: left and is 200 pixels wide. We want
the main body of content to fill the rest of the space.
<div style="width: 800px;">
<div style="float: left; width: 200px;">
Sidebar content goes here.
</div>
<div class="body">
Rest of content should flow around the sidebar
</div>
</div>
In IE, the contents of the body div can end up below the sidebar rather than to the
right of it. To fix this, make sure all content in the body div fits in the width of
the body area, which is 800 - 200 = 600 pixels wide. Then modify the .body CSS definition:
.body {
display: inline;
width: 600px;
}
Header Background Color Vanishes
Problem: A colored header, such as the blue screenlet-header style, loses its
background color if it touches the top of the screen. This generally occurs when there is
a floating element next to the header.
This problem is most noticeable when creating orders in the OFBiz ordermgr. Some of
the headers have yellow links in them, and when the blue background does not show up,
it looks really ugly.
We fixed the problem with the CSS tag .subSectionHeader in opentaps.css, which is
why we prefer to use that over screenlet-header. To fix it for screenlet-header or other
styles that might have a problem,
.screenlet-header {
_height: 1em;
}
The height is interpreted by IE as the minimum height that the div should span, forcing it
to color at least that much. IE is the only browser that understands the underscore, so
this is a safe hack.

Performance Analysis and Troubleshooting
This is a page to assist with performance analysis and troubleshooting.

Contents
1 Monitoring Deadlocks in PostgreSQL
2 Suspending Runaway Threads
3 Profiling with AspectJ
3.1 Out of the Box Profiling
3.2 Understand AspectJ code
Monitoring Deadlocks in PostgreSQL
See Database_Tips#Monitoring_PostgreSQL_Deadlocks
Suspending Runaway Threads
Suppose you start a process that you realize will take forever and need to stop it.
However, it can't be stopped because it was activated by an HTTP request and killing
the browser session doesn't work. First, check the log to see if you can identify the
thread that is running this process. For instance, suppose you have the following line in
your log that corresponds to what your process is doing,
2008-01-23 18:55:47,585 (TP-Processor10) [ Something.java:1015:WARN ]
Something that identifies your process
This thread is TP-Processor10. You can use the Java Thread API to suspend it by
hand. The easiest way to do this is to use a bsh script or the bsh terminal. First, you will
want to know the number of threads in the system. Load up the Thread List page in
Webtools and count the rough number of threads displayed. Suppose you have about 50
threads.
Once you know the rough size, run the following script, either via the bsh terminal or by
hooking it up to a controller.xml request,
threads = new Thread[50];
size = Thread.enumerate(threads);
for (i = 0; i < size; i++) {
print(i + ": " + threads[i]);
}
This will print out the index and name of each thread. Find the index of
TP-Processor10. Suppose it is index 37. You can then suspend the thread (note that
Thread.suspend() is deprecated, so use this only as a last-resort diagnostic):
t = threads[37];
t.suspend();

Profiling with AspectJ


In opentaps 1.0, profiling is done using Aspect-Oriented Programming (AOP). AOP
provides the ability to specify measurement points around any part of the code without
having to modify the Java code itself. This is a brief tutorial on how to take
advantage of this to profile suspected problem spots in the system.
Performance monitoring is accomplished using the JETM library, which comes with opentaps.
Out of the Box Profiling
Presently, performance profiling can be triggered by running the tests. First, compile
the system and tests as normal. Then compile the profiling support as follows:
$ ant -f hot-deploy/opentaps-common/build.xml profiling
This applies the profiling aspects using bytecode weaving, a process which modifies the
existing jars and inserts new bytecode that represents the profiling code.
Make sure that the run-tests target in the main build.xml file does not have any
dependencies; otherwise, running the tests will recompile the codebase without the
aspects. This is a more efficient setup for testing anyway, since we do not want to
recompile the entire system. When ready, run the tests:
$ ant run-tests
Then when you look at the logs, you will see:
A confirmation that the profiling library is loaded. Example:
* Start JETM monitoring
* JETM 1.2.2 started.
A table with all the profiling details. Example:
TODO: Put an example here
Understand AspectJ code
Some theory:
First you have to create a file containing your aspect. We are no longer talking about
a class, but an aspect, and it is built with a specific compiler. (The compiler error
messages are not always easy to understand; you can activate the verbose and
showWeaveInfo options to get more information.)
AOP is a way to add code at several points of the project, based on method selections,
or pointcuts.
Creation of a pointcut:
pointcut testRunContainerStart() : execution(public boolean
org.ofbiz.testtools.TestRunContainer.start());
You can use wildcards: *, .., +
* replaces a name or part of a name:
public * fr.umlv.*.test.*.start*(int, String, *)
.. stands for any parameters of a method or any intermediate packages:
public void fr..Test.SetParams(..)
This matches all the public SetParams methods, whatever their parameters, returning
void and defined in a class Test anywhere under the package fr.
+ matches any subtype of a class or interface:
void fr.umlv.test.IMouseListener+.set*(..)
This matches all the methods beginning with set in the classes implementing the
IMouseListener interface.
Some practical comments:
In my case I almost always give the full method name with the arguments. Because ofbiz
and opentaps are big projects, side effects can happen if you do not specify the full
name of the method you want to work on.
Let's go back to theory:
You can define the pointcut on various kinds of join points: call, execution, get, set,
handler, initialization, preinitialization, staticinitialization, adviceexecution.
call: the call of the method.
execution: the execution of the method.
What is the difference between call and execution?
Mainly it is about inheritance. In the case of call, we are working with the reference
used in the program (which can be the interface); in the case of execution, we are
working with the object that is instantiated.
There are more differences. In the case of a call, the compiler modifies the invoking
code; in the case of an execution, the compiler modifies the target code. This can make
cross-library use of call problematic, and an around advice on an execution pointcut is
inlined into the target method.
For example:
someMethod() {
    Debug.logInfo("Enter method");
    Debug.logInfo("Leave method");
}

execution(someMethod)
around(someAdvice) {
    Debug.logInfo("Enter advice");
    proceed();
    Debug.logInfo("Leave advice");
}

the compiler will produce:

someMethod() {
    Debug.logInfo("Enter advice");
    Debug.logInfo("Enter method");
    Debug.logInfo("Leave method");
    Debug.logInfo("Leave advice");
}
A bit crazy:)
This can cause problems with debugging and breakpoints in advices; Eclipse has special
options for this case.
When creating a pointcut, you can combine various call and execution designators with
&& or ||. An interesting construct is args, which gives you access to the parameters of
the method.
Once your pointcut is defined, you can attach a treatment (advice) to it:
before: runs the treatment before the pointcut is reached.
after: runs the treatment after the pointcut is reached.
If the program is multithreaded (as opentaps is) and you have to do treatments both
before and after the pointcut, you can use around and proceed: proceed executes the
code corresponding to the pointcut inside the around advice.
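For instance, a minimal timing aspect using around and proceed might look like this (the target method is a hypothetical example, not one of the actual opentaps profiling pointcuts):
public aspect TimingAspect {
    // Matches the execution of a hypothetical service method.
    pointcut storeOrder() : execution(public * org.foo.OrderServices.storeOrder(..));

    Object around() : storeOrder() {
        long start = System.currentTimeMillis();
        try {
            return proceed(); // run the intercepted method
        } finally {
            System.out.println("storeOrder took "
                    + (System.currentTimeMillis() - start) + " ms");
        }
    }
}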
This is a really quick overview; to understand the code added to opentaps there is more
to know, including some traps (infinite loops) etc.
References:
http://www.eclipse.org/aspectj/doc/released/progguide/implementation.html
http://www-igm.univ-mlv.fr/~dr/XPOSE2005/sophie/#

Running Tsung against opentaps server
This is a page to assist with running tsung against opentaps server.
Contents
1 Installing tsung
2 Configuration file
2.1 readcsv.erl
2.2 tsung.xml
2.2.1 Configuration of the client
2.2.2 Configuration of the server
2.2.3 Configuration of the monitoring (cpu, memory, network)
2.2.4 Arrival of the clients on the tested server
2.2.5 The different options
2.3 userlist.csv
3 Logs, Reports and Graphics generated
3.1 Logs
3.2 Reports
3.3 Graphics

Installing tsung
Homepage http://tsung.erlang-projects.org/
Download page http://tsung.erlang-projects.org/dist/
Documentation http://tsung.erlang-projects.org/user_manual.html
Tsung needs the Erlang platform to be installed. We will not expand on this, because
the rpm or deb versions run just fine: http://www.erlang.org/
Tsung is only available for the Linux platform in binary form. It might be possible to
run it on Windows with the Erlang Windows version and the Cygwin platform.
We used Tsung with Erlang 5.6.1. Version 1.2.2 has some problems linked to XML parsing,
so we prefer version 1.2.0.
You may want to modify tsung-1.2.0/src/tsung_controller/ts_os_mon.erl, depending on
which platform the opentaps server runs on. The scripts which do the OS monitoring
(cpu, memory and network graphs) may not be suitable for your platform.
Compilation:
For Debian users, just type fakeroot debian/rules binary.
For the others, ./configure, make, make install will do it.
To run the client with the configuration file tsung.xml:
$ tsung -f tsung.xml start
Configuration file
The configuration file and all the files needed to run a tsung stress test are
available in the directory hot-deploy/opentaps-tests/scripts/tsung/:
readcsv.erl: a small Erlang script used to generate the login string from the user and
password read from the user list
tsung.xml: the configuration file for tsung
userlist.csv: the list of users which will be used successively to log in
readcsv.erl
In this file there is only one function, called user. The steps performed are:
ts_file_server:get_next_line() to read one line from the file
string:tokens(Line,";") to separate the user and the password
"USERNAME=" ++ Username ++ "&PASSWORD=" ++ Passwd to return the login string
You have to compile this file:
$ erlc readcsv.erl
You will get a file called readcsv.beam, which you have to copy into the tsung binary
directory. In our case it is:
/usr/lib/erlang/lib/tsung-1.2.0/ebin/
tsung.xml
The tsung.xml file is the configuration file holding the different scenarios to
execute. There are some comments in it. Here is an overview:
Configuration of the client
By default only one HTTP client is configured, which is localhost. You can configure as
many HTTP clients as you want. The computer which fires the tests must have ssh access
to them without a passphrase.
<clients>
<client host="localhost" use_controller_vm="true"/>
</clients>
Configuration of the server
By default the opentaps http server is configured on localhost:8443. You can change it
too.
<servers>
<server host="localhost" port="8443" type="ssl"/>
</servers>
Configuration of the monitoring (cpu, memory, network)
The monitoring has to be configured to access the opentaps server; in our case it is
localhost. The computer which fires the tests must have ssh access to it without a passphrase.
<monitoring>
<monitor host="localhost" type="erlang"/>
</monitoring>
Arrival of the clients on the tested server
It is configured in the load node with a phase system. For each phase you set the
duration of the phase and the arrival frequency of the clients, and you can have as
many phases as you want. In this case, one phase of one minute is configured, where a
client arrives every 25 seconds. We will have 4 clients.
<load>
<arrivalphase phase="1" duration="1" unit="minute">
<users interarrival="25" unit="second"/>
</arrivalphase>
</load>
The different options
user_agent: the UserAgent string to use
thinktime: the think time of a user between the last response received and the next
request sent
file_server: the file with the logins
<options>
<!-- which type of client are we going to fire -->
<option type="ts_http" name="user_agent">
<user_agent probability="80">Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.8)
Gecko/20050513 Galeon/1.3.21</user_agent>
<user_agent probability="20">Mozilla/5.0 (Windows; U; Windows NT 5.2; fr-FR;
rv:1.7.8) Gecko/20050511 Firefox/1.0.4</user_agent>
</option>
<!-- Each client has a random thinktime (time between two request) around 3 -->
<option name="thinktime" value="3" random="true" override="true"/>
<!-- TOBEMODIFIED Absolute path for the file from which we generate the login and
password -->
<option name="file_server" value="userlist.csv"/>
</options>
userlist.csv
It is a basic csv file which associates, on one line, a user and a password separated
by a semicolon, e.g.
DemoSalesManager;crmsfa
These users will be used successively to log in to the opentaps server.
Logs, Reports and Graphics generated
Logs
In the directory ~/.tsung/log/ you will find the log files:
tsung_controller@FabsLaptop.log: the log where you can find the errors related to tsung
tsung.dump: the dump of all the requests and responses generated
tsung.log: used to generate the reports and graphics
tsung.xml: the configuration file
To generate the reports and graphics, just execute /usr/lib/tsung/bin/tsung_stats.pl in
the log directory.
Reports
Main statistics
connect: the time to make a TCP connect
page: the time to download a whole page
request: the time to send a request
session: the time to get through a scenario node in the configuration file (in our
case, to create and approve an order)

Transactions statistics
tr_approveSalesOrder: the time to get through the transaction node in the configuration
file called approveSalesOrder
tr_createSalesOrder: the time to get through the transaction node in the configuration
file called createSalesOrder
tr_login: the time to get through the transaction node in the configuration file called login

Network throughput
size_rcv: the quantity of bits received
size_sent: the quantity of bits sent

Counters statistics
finish_users_count
match: the quantity of verifications done by the match node which succeed
newphase
nomatch: the quantity of verifications done by the match node which fail
users_count: the quantity of users generated

Server monitoring
cpu: the CPU consumption
freemem: the free memory available
recvpackets: the quantity of bits received
sentpackets: the quantity of bits sent

HTTP return codes
The occurrences of the different HTTP codes. We observe 200, which is the normal HTTP
return code, and 302, which is received sometimes; both appeared while building the
scenario, so this is normal.
Graphics
Response time
Mean:
page: the mean time to load one page
tr_*: the mean time to load one page within the specified transaction
connect: the mean time to do a TCP connect
request: the mean time to do an HTTP request

Throughput
Rate:
page: the rate of pages loaded
tr_*: the rate of pages loaded within the specified transaction
connect: the rate of TCP connects
request: the rate of HTTP requests

Quantity of bits sent and received

Simultaneous users
The quantity of users doing an operation at the same time.

Matching responses or not

Server OS monitoring
cpu mean
freemem mean

HTTP return code status (rate)
The different HTTP codes returned.

Google Web Toolkit (GWT) Tips
General Concepts
The general idea of GWT is that you write a small "applet" which has a variety of user
interface elements, like input boxes, buttons, etc., that communicate asynchronously
with your server. These applets are written in Java, which GWT compiles to
cross-platform JavaScript for you. The applet is then "wired" to your webpage using the
id attribute of your HTML. For example, if your HTML has the following tag:
<tr><td id="newContact"/>
<td>Rest of opentaps goes here</td>
</tr>
Then, in GWT, you can add widgets to that part of your webpage with:
RootPanel.get("newContact").add(vPanel);
The RootPanel of GWT is like the background of your screen. You add panels, buttons,
and other widgets to it to make your screen.
Often, the GWT API is a bit low level, and you can save a lot of code by making simple
extensions or helpful methods like these for your repetitive UI elements:
private TextBox getTextBox(int visibleLength, int maxLength) {
    TextBox textBox = new TextBox();
    textBox.setVisibleLength(visibleLength);
    textBox.setMaxLength(maxLength);
    return textBox;
}

private Label getLabel(String text, String styleName) {
    Label label = new Label(text);
    label.setStyleName(styleName);
    return label;
}

private void addWidgetWithLabelToPanel(VerticalPanel panel, String labelText,
        String labelStyle, Widget widget) {
    panel.add(getLabel(labelText, labelStyle));
    panel.add(widget);
}
To work with the different elements of your GWT "applet", you define them as variables,
instantiate them, pass them along, and then modify them later. For example, you can
define an input field called "firstNameInput" in your class:
private TextBox firstNameInput = null;
Then, you can instantiate it and add it to a panel:
firstNameInput = new TextBox();
addWidgetWithLabelToPanel(vPanel, "First Name", "requiredField", firstNameInput);
Later, when somebody clicks on a button, you can access it by referencing the original
object:
createButton.addClickListener(new ClickListener() {
public void onClick(Widget sender) {
// ...
dialogBox.setText("Create contact " + firstNameInput.getText()); // ...
}
});
API and Technical Design Reference
opentaps JavaDocs
LiveCatalog XML-RPC API
Amazon Integration
Manufacturing Model
Configuring the POS Store
Configuring Authorize.NET
Implementing CVV Security Code Checking

Recommended Reading
Show me your flowcharts and conceal your tables, and I'll continue to be mystified.
Show me your tables, and I won't usually need your flowcharts; they'll be obvious.
-- The Mythical Man Month by Fred Brooks
Even after four decades, and long after the word "flowchart" has been replaced by
"UML", data models still play a central role in software design. The following three
books will give you three different perspectives on data modeling for enterprise
applications and help you understand the heart of opentaps. You should read all three
to get a balanced perspective:
Data Model Resource Book, Volume 1 -- This comprehensive volume approaches data
modeling from a relational perspective for transactional systems.
Domain Driven Design -- This book approaches the design of transactional applications
from an object-oriented perspective.
Data Warehouse Toolkit -- This book gives a comprehensive treatment from the
analytical perspective.
Other helpful references:
Opentaps Source Licensing -- If you want to learn more about open source software
licenses, read this book instead of all the mailing list graffiti.