Opentaps Dev. Guide
Technical Reference
1. Developer Documentation
2. Tips and Tricks
3. API and Technical Design Reference
Recommended Reading
http://www.opensourcestrategies.com/ofbiz/tutorials.php
Data Import
In this technical reference document, we will cover the standard approach to importing
data from external sources. Everything you need for this can be found in the dataimport
module in hot-deploy.
The goal of the Data Import module is not to build a set of data import tools
against a particular "standard," but rather to recognize that each organization has
legacy or external data in its own unique format. Therefore, the Data Import module is a
set of flexible tools which you can use as a reference point for setting up your own
custom import and export. The existing services and entities can be used "as is" or with
little modification if your data happens to be similar, or you can add to and extend them
if you have additional data.
The Data Import module sets up "bridge entities" which are de-normalized and
laid out in a way that is similar to most applications' data definitions. There are no
foreign key relationships to any other opentaps entity, so any data could be imported
into them. You would use your own database's import tools to import records into the
bridge entities. Then, you would run one of the Data Import module's import services to
transform the data in the bridge entities into the opentaps system. The Data Import
services all follow a common standard:
Each row of data in a bridge entity is wrapped in its own transaction when it is imported, and succeeds or fails on its own.
When a row of data in a bridge entity is imported successfully, the importStatusId field will be set to DATAIMP_IMPORTED.
If the import failed for the row, the status will be DATAIMP_FAILED and the importError field will contain any error messages.
To support this pattern, we have created a simple and extensible import
framework. All the difficult details about setting up an import, starting transactions and
handling errors are encapsulated into the OpentapsImporter class. Additionally, we
have an interface called an ImportDecoder which is responsible for processing a single
row from the bridge entity and mapping it onto a set of Opentaps Entities.
When used properly, you will be able to focus the majority of your development on the
problem of mapping the import data into the opentaps model. You will also be able to
take advantage of polymorphism to re-use common mapping patterns or customize
existing ones for the particularities of your data.
A brief outline of the import process is as follows:
1. Break your original data into a set of suitably de-normalized CSV files. For example, put all your customer data in one CSV and all your product data in another. The goal here is to minimize the amount of data manipulation; that will be handled in the import service.
2. For each CSV file, create an Opentaps Import Entity (i.e., the bridge table) that has the same fields as the CSV. Add three more fields for use by the import system: importStatusId, importError, and processedTimestamp.
3. Import your CSV data into this table using the standard SQL procedures for your database.
4. Define an opentaps service that will execute your import (use-transaction="false"). You may wish to implement the opentapsImporterInterface service, which defines parameters to control the way the import runs.
5. Create an implementation of ImportDecoder, which requires a decode() method. In the decode() method, you are passed a row from the bridge entity; use the row data to create the equivalent set of opentaps entities. If there are problems that should cause the row not to be imported, throw any kind of exception: the exception message will be stored in importError, and all operations in decode() will be rolled back. Return a list of opentaps entities to persist; they will be saved in one update operation for efficiency.
6. In the service implementation, create an instance of OpentapsImporter. Specify the name of your Opentaps Import Entity in the constructor, along with the ImportDecoder that you just created.
7. Run the import by calling opentapsImporter.runImport().
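The per-row pattern described above can be sketched in plain Java. This is a simplified, self-contained illustration only: the real OpentapsImporter and ImportDecoder are opentaps framework classes with different signatures and real transaction handling, and the ImportRow class and Decoder interface below are hypothetical stand-ins for a bridge entity.

```java
import java.util.List;

public class ImporterSketch {

    // Hypothetical stand-in for a bridge-entity row.
    static class ImportRow {
        String data;
        String importStatusId;  // DATAIMP_IMPORTED or DATAIMP_FAILED
        String importError;
        ImportRow(String data) { this.data = data; }
    }

    // Simplified analogue of ImportDecoder: map one row onto the
    // target model, throwing an exception on any problem.
    interface Decoder {
        List<String> decode(ImportRow row) throws Exception;
    }

    // Simplified analogue of OpentapsImporter.runImport(): each row
    // succeeds or fails on its own, and the status fields record the
    // outcome, mirroring the common standard described above.
    static int runImport(List<ImportRow> rows, Decoder decoder) {
        int imported = 0;
        for (ImportRow row : rows) {
            try {
                List<String> entities = decoder.decode(row);
                // ... persist entities in one update operation ...
                row.importStatusId = "DATAIMP_IMPORTED";
                imported++;
            } catch (Exception e) {
                // the failure of one row does not abort the others
                row.importStatusId = "DATAIMP_FAILED";
                row.importError = e.getMessage();
            }
        }
        return imported;
    }
}
```

In the real module, each iteration also runs inside its own database transaction, which the framework begins and commits (or rolls back) around each decode() call.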
// exception handling in the POJO Service Engine
... catch (Throwable t) {
    if (t instanceof FoundationException) {
        ((FoundationException) t).setLocale(locale);
        return ServiceUtil.returnError(t.getMessage());
    }
}
Finally, the FoundationException class allows you to set whether your exception
requires a global rollback or not. By default, exceptions do require rollback, but you can
turn it off with setRequiresRollback:
ServiceException ex = new ServiceException(...);
ex.setRequiresRollback(false);
Setting requires rollback to false will cause the POJO Service Engine to use
ServiceUtil.returnFailure instead of ServiceUtil.returnError. The service will still
return its message, but it will not cause the other services in a chain to abort. You
can also use the requires-rollback flag for your own exception management.
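The effect of the flag can be illustrated with a self-contained sketch. This is not the real FoundationException, just a plain-Java illustration of a rollback flag steering the error-versus-failure decision:

```java
public class RollbackFlagSketch {

    // Simplified analogue of FoundationException's rollback flag.
    static class ServiceException extends Exception {
        private boolean requiresRollback = true; // rollback by default
        ServiceException(String message) { super(message); }
        void setRequiresRollback(boolean b) { requiresRollback = b; }
        boolean getRequiresRollback() { return requiresRollback; }
    }

    // Simplified analogue of the dispatch decision: an "error" aborts
    // the surrounding service chain, a "failure" does not.
    static String handle(ServiceException e) {
        return e.getRequiresRollback() ? "error: " + e.getMessage()
                                       : "failure: " + e.getMessage();
    }
}
```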
Specifications
Most developers know that they should not use literal strings in their code. For example,
we all feel that it would be bad to write code like this:
if (order.getStatusId().equals("ORDER_APPROVED")) {
    // ...
}
Our first reaction is always to define a file of string literals, and then reuse the
predefined literals:
public static final String ORDER_APPROVED = "ORDER_APPROVED";
// ...
if (order.getStatusId().equals(ORDER_APPROVED)) {
    // ...
}
This is nice: now, the compiler will be able to check if our status IDs are correct, and if
somebody decides to change the ID code, all we have to do is change it in one place.
But inevitably, we run into other problems with this kind of code. Somebody might
decide that instead of having one state called approved, we want to have several states,
such as approved, in production, and pending shipment, that all have the same meaning as
being approved. Later, somebody else might want more complicated logic: an
order might be considered approved if it is either in the approved state, or does not contain
certain hazardous materials and is in the in production or pending shipment state, for
example.
Now we'll have to change all our code again. A developer's job is never done!
The main problem we all face is one of logic: most ERP software, opentaps included, operates
on a set of data in one logical state (e.g., orders that are approved) and transforms it
into other data in other logical states (e.g., invoices that are created). The problem is
that these logical states are denoted as strings in a database field, but they are
much more subtle and complex in real life. Thus, we developers are lulled
into thinking that logical states can be modeled as literal strings. This usually works,
but in the 10% of cases where it doesn't, our code is usually not well
structured enough to deal with it.
The solution is to stop scattering these literal values and state checks throughout
your code. Instead, separate the logical checking
code of a domain into a separate class, so that it can be modified as needed. This is the
role of Specifications: defining literal values and logical states.
In practice, we recommend having one Specification class for each domain. For
example, for the order domain, there should be an OrderSpecification class and a
corresponding interface. Because in practice specifications are usually closely related to
the way data is modeled in your database, we have kept it linked through the Repository
class of each domain. Thus, to get the OrderSpecification, use its repository:
OrderRepositoryInterface orderRepository = orderDomain.getOrderRepository();
OrderSpecificationInterface orderSpecification =
orderRepository.getOrderSpecification();
We have also found the following best practices to be helpful when implementing
specifications:
If you need to check whether a condition is true or not, implement Boolean methods in
the specifications, which your domain objects can use, rather than using literal strings
directly in the domain objects. For example, instead of:
if (orderSpecification.ORDER_APPROVED.equals(order.getOrderStatusId()))
use:
if (orderSpecification.isApproved(order))
If you need to get certain values from your specification for other purposes, get lists of
values instead of string literals. For example, if you need to get OrderRole objects
related to an order in the role of customer, instead of implementing a literal like
orderSpecification.BILL_TO_CUSTOMER_ROLE, implement a method which returns a
list of potential roles: orderSpecification.billToCustomerRoleTypeIds. You can then use
the SQL IN operator to retrieve parties in all possible bill to customer roles and thus not
be constrained to use only one potential role.
In both cases, by abstracting logical states and by making type codes more general-
purpose, your code will be able to handle changing requirements much more easily.
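Both recommendations can be sketched as a self-contained class. The real OrderSpecification works against opentaps entities and lives in the order domain; the class shape below is illustrative, and while ORDER_APPROVED and BILL_TO_CUSTOMER are standard OFBiz IDs, the second role type is an assumption for the example:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class OrderSpecificationSketch {

    // All statuses that "mean" approved, kept in one place so that
    // adding equivalent states later only requires a change here.
    private static final Set<String> APPROVED_STATUSES =
            new HashSet<>(Arrays.asList("ORDER_APPROVED"));

    // Boolean check, instead of comparing literal strings inside
    // the domain objects themselves.
    public boolean isApproved(String orderStatusId) {
        return APPROVED_STATUSES.contains(orderStatusId);
    }

    // A list of candidate role types instead of a single literal,
    // suitable for an SQL IN clause.
    public List<String> billToCustomerRoleTypeIds() {
        return Arrays.asList("BILL_TO_CUSTOMER", "END_USER_CUSTOMER");
    }
}
```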
An Example Using Domains
Now let's consider an example. Suppose we want to create an invoice for all the order
items which are not physical products and which have been marked as performed (See
Fulfilling Orders for Services.) Using the ofbiz framework, we would first define a
service:
<service name="opentaps.invoiceNonPhysicalOrderItems" engine="java"
location="com.opensourcestrategies.financials.invoice.InvoiceServices"
invoke="invoiceNonPhysicalOrderItems">
<description>Creates an invoice from the non-physical items on the order. It will
invoice from the status in the orderItemStatusId,
or if it is not supplied, default to ITEM_PERFORMED. After the invoice is created,
it will attempt to change the items' status
to ITEM_COMPLETE.</description>
<attribute name="orderId" type="String" mode="IN" optional="false"/>
<attribute name="orderItemStatusId" type="String" mode="IN" optional="true"/>
<attribute name="invoiceId" type="String" mode="OUT" optional="false"/>
</service>
Then, we would create a static Java method for the service:
public static Map invoiceNonPhysicalOrderItems(DispatchContext dctx, Map context) {
    GenericDelegator delegator = dctx.getDelegator();
    LocalDispatcher dispatcher = dctx.getDispatcher();
    GenericValue userLogin = (GenericValue) context.get("userLogin");
    Locale locale = (Locale) context.get("locale");
    String orderId = (String) context.get("orderId");
    try {
        // validate that the order actually exists and get list of non-physical items
        GenericValue order = delegator.findByPrimaryKey("OrderHeader", UtilMisc.toMap("orderId", orderId));
        if (UtilValidate.isEmpty(order)) {
            return ServiceUtil.returnError("Order [" + orderId + "] not found");
        }

        // ... find the non-physical items, create the invoice, and change the item statuses (elided) ...

        Map tmpResult = ServiceUtil.returnSuccess();
        tmpResult.put("invoiceId", invoiceId);
        return tmpResult;
    } catch (GeneralException e) {
        return UtilMessage.createAndLogServiceError(e, module);
    }
}
So what's not to love about this code?
It is closely tied to the database. Even though there's not a single line of SQL here, you
have to know that orders are stored in "OrderHeader", and that it is related to
"OrderItem", and that there are fields like statusId. You also have to use the string
literals for status, like ITEM_COMPLETED, ITEM_PERFORMED, etc.
This method depends on things spread out in different parts of the application, like the
UtilOrder class and the createInvoiceForOrder and changeOrderItemStatus services.
This code is completely dependent on the ofbiz framework's GenericValue, entity
engine delegator, and local dispatcher.
Static Java methods like this, while easier to work with than minilang, do not enjoy the
benefits of real object-oriented programming.
In other words, for somebody to write this code, they have to know a lot about the
framework, the data model, and the application tier.
Here's a rewrite of everything inside the try ... catch block using domain driven
design:
// validate that the order actually exists and get list of non-physical items
OrderRepository orderRepository = new OrderRepository(new Infrastructure(dispatcher), userLogin);
Order order = orderRepository.getOrderById(orderId);
if (UtilValidate.isEmpty(orderItemStatusId)) {
    Debug.logInfo("No status specified when invoicing non-physical items on order ["
        + orderId + "], using [" + OrderSpecification.ITEM_STATUS_PERFORMED + "]", module);
    orderItemStatusId = OrderSpecification.ITEM_STATUS_PERFORMED;
}
List<GenericValue> itemsToInvoice = order.getNonPhysicalItemsForStatus(orderItemStatusId);

// ... create the invoice from itemsToInvoice (elided) ...

Map tmpResult = ServiceUtil.returnSuccess();
tmpResult.put("invoiceId", invoiceId);
return tmpResult;
This code is the programming equivalent of the missing link: it has many features of the
old code, but a few important differences as well. What we have done is push
everything related to orders to the Order Entity object, its OrderRepository, and
OrderSpecification. We don't care where the order came from, how we can get the
items of an order, or even how the status codes of an order are defined any more,
because those are all responsibilities of the Order domain objects. (Even the validation
that an order was obtained is handled by the OrderRepository, which will throw a
RepositoryException if nothing is found from orderId.) We are also no longer tied to the
delegator, although the Order domain may itself require the delegator. (The casting of
itemsToInvoice to GenericValue is vestigial -- remember that our Entity object extends
GenericValue, and a specific Java object may in turn extend Entity.)
We are, however, still tied to the createInvoiceForOrder service and the ofbiz service
engine. That will have to wait until the next evolutionary step (which happened the next
day). Using the Service class from above, we can implement a POJO version of this
service:
public class OrderInvoicingService extends Service {
/**
 * Set the status id of non-physical order items to be invoiced by
 * invoiceNonPhysicalOrderItems; if not set,
 * OrderSpecification.ITEM_STATUS_PERFORMED will be used.
 * @param statusId
 */
public void setStatusIdForNonPhysicalItemsToInvoice(String statusId) {
if (statusId != null) {
statusIdForNonPhysicalItemsToInvoice = statusId;
}
}
}
Then, the original Java static method simply has to pass the parameters to it, execute
the method in the OrderInvoicingService, get its result, and pass it back. Here's the
content of that try ... catch block again:
OrderInvoicingService invoicingService = new OrderInvoicingService(new Infrastructure(dispatcher), new User(userLogin), locale);
invoicingService.setOrderId(orderId);
invoicingService.setStatusIdForNonPhysicalItemsToInvoice(orderItemStatusId);
invoicingService.invoiceNonPhysicalOrderItems();
Congratulations! Now your business logic is a POJO. You can add annotations, use
dependency injection with it, and use it with other Java frameworks now. (Is this how
that missing link felt, seeing all those primordial forests for the first time?)
Your service still uses a legacy ofbiz service, "createInvoiceForOrder", through its
getDispatcher() method, but that's not so bad. If you want to use an ofbiz service, you
should use its dispatcher. In this example, however, you still had to write a static Java
method for your service, because you are using the ofbiz static Java method service
engine. With the POJO Service Engine, however, that is no longer necessary, and you
can remove the code in InvoiceServices.java and call
OrderInvoicingService.invoiceNonPhysicalOrderItems() directly.
A final round of enhancements used the base entities instead of GenericValues and the
domains directory to load the order domain and the order repository, so this order
invoicing service could function independent of the order management system. See
POJO Service Engine for the code sample.
Putting It All Together
Now, let's see how we could put all this together to create applications around the
domain driven architecture. As we discussed before, related data Entities could be
grouped together as an Aggregate, which will have related Factories, Repositories, and
Services. For example, an aggregate of concepts related to invoicing might include the
Invoice, InvoiceItem, InvoiceStatus, InvoiceContactMech, InvoiceAttribute entities as
well as invoice factories, invoice repositories, and several invoicing services:
Several of these Aggregates may then form a Domain of related business knowledge.
For example, the Billing domain may consist of Invoice and Payment aggregates and
their related factories, repositories, and services. This Domain would interact with other
domains, such as Organization, Ledger, Party, and Product:
An application, such as the opentaps Financials application, could be built from several
relatively independent domains:
To keep them relatively independent of each other, an interface should be declared for
each domain, and they should return interfaces to the repositories, factories, and
services. Interfaces are not necessary for the entities, however, since entities represent
a data model, which must be implemented in the same way for all opentaps
applications. For example, Invoice will always have to have an invoice ID field, and the
getInvoiceId() method should always return the value of that field. If different underlying
invoicing systems use different types of invoice IDs, it is the responsibility of the invoice
repository to parse that and store it in the invoice ID field of Invoice. The Invoice entity
does not need to be changed. Here is an example of the interface for the billing domain,
defined in org.opentaps.domain.billing.BillingDomainInterface:
import org.opentaps.domain.billing.invoice.InvoiceRepositoryInterface;

public interface BillingDomainInterface {

    /** Returns the invoice repository for this domain. */
    public InvoiceRepositoryInterface getInvoiceRepository() throws RepositoryException;

    // ... accessors for the other repositories, factories, and services of the billing domain ...
}
There should only be one directory of domains at any one time, so that all the opentaps
applications use the same domains. In opentaps, this domain directory is defined in the
DomainsDirectory class, and the actual domains are defined in hot-deploy/opentaps-
common/config/domains-directory.xml:
<beans>
<bean id="opentapsBillingDomain"
class="org.opentaps.financials.domain.billing.BillingDomain"/>
</beans>
Note that domains are declared explicitly in the DomainsDirectory, rather than as a
Map. To add a new domain, you must modify the DomainsDirectory class to add a new
member plus accessor (set/get) methods. To change your domains, you can just modify
this xml file. For example:
<beans>
    <!-- com.mycompany... is a placeholder for your own implementation class -->
    <bean id="opentapsBillingDomain"
        class="com.mycompany.domain.billing.MyBillingDomain"/>
</beans>
When you restart opentaps, the new domains will be loaded.
To load your domains, use DomainsLoader, which can be instantiated with
Infrastructure and User:
// get the domain
DomainsLoader dl = new DomainsLoader(new Infrastructure(dispatcher), new User(admin));
DomainsDirectory domains = dl.loadDomainsDirectory();
BillingDomainInterface billingDomain = domains.getBillingDomain();
Given a Query q created from a QueryFactory qf (the factory usage is shown further below), you can run the query and work with the results:
// run the query and get it back as a List of Maps or the first value as a Map
List list1 = q.list();
Map map1 = q.firstResult();
// run the query and get an EntityListIterator. Specify the entity name and optionally a list of fields
EntityListIterator eli1 = q.entityListIterator("StatusItem");
EntityListIterator eli2 = q.entityListIterator("StatusItem", UtilMisc.toList("statusId", "statusTypeId", "description"));
// run the query and get a List of GenericValues. Specify the entity name and optionally a list of fields
List list3 = q.entitiesList("StatusItem");
List list4 = q.entitiesList("StatusItem", UtilMisc.toList("statusId", "statusTypeId", "description"));
Because Query implements the standard JDBC PreparedStatement, you can set
parameters to your Query as if it were a PreparedStatement:
Query q2 = qf.createQuery("SELECT * FROM STATUS_ITEM WHERE STATUS_ID LIKE ? AND STATUS_TYPE_ID LIKE ?");
q2.setString(1, "%APPROVE%");
q2.setString(2, "INVOICE%");
List list5 = q2.list();
Technical Notes
When the Query is first instantiated, a PreparedStatement is created. On the
first call to a method which causes the query to be executed, such as .list(), the
PreparedStatement is executed, and a ResultSet is obtained, converted to a List, and then
closed. Subsequent calls to .list() only return the previously stored list and do not
cause another query to be run. If you need to run the query again, call
.clearQueryResults() first.
Converting the query results to GenericValues/GenericEntities requires the use of the
ofbiz entity engine's EntityListIterator. If you use the .entityListIterator(..) method, the
EntityListIterator will be returned to you, and it will handle the closing of the connection
with its own .close() method. If you use the .entitiesList(..) methods, the EntityListIterator
and the ResultSet will be automatically closed.
The ResultSet is automatically closed on finalize().
The Query and QueryFactory throw a QueryException. If GenericValues/GenericEntities
are involved, they also will throw the GenericEntityException.
public OrderInvoicingService() {
    super();
}

/**
 * Set the status id of non-physical order items to be invoiced by
 * invoiceNonPhysicalOrderItems; if not set,
 * OrderSpecification.ITEM_STATUS_PERFORMED will be used.
 * @param statusId
 */
public void setOrderItemStatusId(String statusId) {
    if (statusId != null) {
        statusIdForNonPhysicalItemsToInvoice = statusId;
    }
}
Unit Testing
Contents
1 How to Write Unit Tests
1.1 opentaps 1.0
1.2 opentaps 0.9
2 Where are the Unit Tests?
3 Setting Up For Unit Testing
4 Unit Testing Strategies
5 A Unit Testing Tutorial
6 Creating Reference Data Sets
7 Running a Unit Test from Beanshell
8 Debugging Unit Tests with IntelliJ
9 Dealing with Concurrency
10 Warning about running Unit Tests in MySQL
Building the GWT Widgets
The Google Web Toolkit (GWT) is built independently of opentaps. To build the Google
Web toolkit widgets,
$ ant gwt
To clear the previous build,
$ ant clean-gwt
This will cause ant to look for "gwt" targets in the opentaps components' build.xml files and
build them one at a time. In the component build.xml, the following directories are
specified for building gwt:
<property name="gwt.deploy.dir" value="./webapp/crmsfagwt"/>
<property name="gwt.module.base" value="org.opentaps.gwt.crmsfa"/>
<property name="gwt.src.common" value="../opentaps-common/src/org/opentaps/gwt"/>
<property name="gwt.src.base" value="./src/org/opentaps/gwt/crmsfa"/>
Then, when ant builds gwt, it will look for all the gwt modules specified in the
build.xml. Each module is specified at a path of
${gwt.deploy.dir}/${gwt.module.base}.${module}.${module}. For example, if you specify
contacts as the module to compile, then opentaps will try to compile
org.opentaps.gwt.crmsfa.contacts.contacts.gwt.xml, which should be in your src/ path.
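That naming rule can be restated as a tiny helper. This is just a plain-Java restatement of the convention above for illustration; the real logic lives in the ant build files:

```java
public class GwtModulePathSketch {

    // Restates the build convention: each GWT module descriptor is
    // resolved as ${gwt.module.base}.${module}.${module}.gwt.xml
    static String moduleFile(String moduleBase, String module) {
        return moduleBase + "." + module + "." + module + ".gwt.xml";
    }
}
```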
When you have an additional GWT module to build, add it to the list of modules:
<foreach list="contacts,accounts,leads,partners" target="gwtcompile"
param="module"/>
To speed up the build during development, you can set up GWT to compile for only one
of the supported browsers. This is configured in the common module in
hot-deploy/opentaps-common/src/org/opentaps/gwt/common/common.gwt.xml. For
example, you can enable it for only Mozilla/Firefox by setting the user.agent property to
"gecko1_8":
<set-property name="user.agent" value="gecko1_8"/>
GWT and Server-Side Services
Your GWT widgets will need to interact with server-side services to store and retrieve
data. A "best practices" pattern we have started in opentaps is to create a configuration
Java file for each server-side service available to GWT client-side widgets. For
example, there is an
org.opentaps.gwt.crmsfa.contacts.client.form.configuration.QuickNewContactConfiguration
class which contains the server-side URL and all the form parameters for interacting
with the quick new contact service on the server. This is part of the GWT client package
and is designed to be used by all the client-side widgets. Note that the pattern is to have
one Configuration Java file for each server-side service, to be shared by the many
client-side widgets which may access the same server-side service, not to have a
configuration file for each client-side widget.
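A configuration class in this pattern is mostly a collection of constants shared by the widgets. The sketch below shows the shape only: the URL and parameter names here are hypothetical, not the actual contents of QuickNewContactConfiguration.

```java
public final class QuickNewContactConfigurationSketch {

    // Hypothetical server-side URL of the service this configuration describes.
    public static final String URL = "/crmsfa/control/gwtQuickNewContact";

    // Hypothetical form parameter names, defined in exactly one place and
    // shared by every client-side widget that calls the service.
    public static final String IN_FIRST_NAME = "firstName";
    public static final String IN_LAST_NAME = "lastName";
    public static final String OUT_PARTY_ID = "partyId";

    // not meant to be instantiated; it only carries constants
    private QuickNewContactConfigurationSketch() { }
}
```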
Security
The GWT widgets do not perform security checks, but user permissions are made
available to them in order to adapt the user interface accordingly. The real security
checks that allow a user to perform an operation or retrieve data are performed
server-side.
Client-side permission checking is handled in the following way:
The server side uses the User object to determine what permissions the currently
logged in user has and puts it into the webpage sent to the client as an object using
JavaScript. This is done in the main-decorator.bsh and header.ftl of the server.
On the client-side, the Permission class retrieves the security permissions set into the
browser via JavaScript. Your GWT widget can use its hasPermission method to check if
the user has permissions to access certain sections of your page:
if (!Permission.hasPermission(Permission.CRMSFA_CONTACT_CREATE)) {
return;
}
Important: Do not rely on those checks to hide sensitive data or services from a user.
Specifically, it is possible for the end user to modify the JavaScript and add permissions
for their displayed widgets. Therefore, you should always filter out sensitive data before
sending them to the client-side widgets, and every operation on the server side should
check permissions again. Client-side widget permission checking is only for hiding parts
of the user interface and should not be considered a security feature.
GWT Base Form Panels
Base panels are base classes providing handlers and utility methods to quickly build
forms that integrate with the application.
BaseFormPanel is the base of all forms, and it provides the following:
methods such as addField and addRequiredField that set the correct class for the labels and
set the handler that submits the form when the user hits the Enter key
addStandardSubmitButton, which places a submit button
the three event handlers that perform validation before submitting,
display an activity indicator when the form is submitted, and handle exceptions returned by
the server
a mechanism to notify any registered widgets when the form has been successfully
submitted
ScreenletFormPanel is the base class for forms that should fit on the left column; in all
aspects it behaves the same as BaseFormPanel,
only the CSS classes applied differ.
TabbedFormPanel provides methods to create a multiple-tab form such as the one
used to present the filters available in Find Contacts. The tabs created are
sub-form panels which provide the same add-field methods as BaseFormPanel.
Validation
BaseFormPanel provides simple validation that is automatically called before trying to
submit the form. It works by checking each field in the form against its own internal
validation method; for example, it checks that all required fields are filled, that email
address input fields have valid email addresses, etc.
In order to implement a more complex validation, simply override the validation method
(be sure to call the base implementation first to keep checking the field validation methods).
Widget Notification
In order to notify another widget, that widget must first implement the
FormNotificationInterface. You can register the widget in your form (using the register
method), and its notification method
will get called when your form has been successfully submitted. This is
normally done in the entry class, which is the place where all widgets are loaded.
For example, the contact list view implements it, allowing forms that create new parties, like
the QuickNewContactForm, to notify the list that it should reload in order to display the
newly created entity.
And the notification is setup in the contact entry point:
if (RootPanel.get(QUICK_CREATE_CONTACT_ID) != null) {
loadQuickNewContact();
// for handling refresh of lists on contact creation
if (myContactsForm != null) {
quickNewContactForm.register(myContactsForm.getListView());
}
if (findContactsForm != null) {
quickNewContactForm.register(findContactsForm.getListView());
}
}
So when the QuickNewContactForm has successfully submitted, it notifies each registered
widget by calling the notification method from the FormNotificationInterface. The contact
list view is a widget which implements that interface, and its notification method reloads
the list.
Form macros:
How to use the opentaps Form Macros
opentaps Form Macros Documentation
opentaps Form Macros
Why It Exists
Many of us are constantly creating forms for our users to enter and display data. Most of
those forms share common elements: text input fields, date input fields, drop downs,
etc. Wouldn't it be nice to have a tool which helps you make and manage them? At the
same time, the tool should still give you control over the design of your form, so you
don't end up with an ugly cookie-cutter look for all your forms. You should be able to
add form elements in HTML when they are appropriate, or completely change the layout
and design of your forms by just changing the HTML.
The opentaps Form Macros were created for this reason: to make writing forms easier,
while still giving you control over the final layout. The macros help you design form
elements such as input rows, select boxes, and date fields more efficiently but do not
force you to use them--you can write some form elements with them, write others in
HTML or anything else. The macros are completely written in Freemarker and can be accessed from
any Freemarker page, so you can combine opentaps Form Macros, HTML, and Freemarker
in the same form. They are also easy to extend or re-skin: you edit the form macros file and
make your changes there, without updating XSD definitions or Java code.
How It Works
First, you must make sure that the opentaps form macro importing tool is loaded. This
can be done by including the following code in your beanshell (.bsh) script for your
page. It can be put in main-decorator.bsh so that the form macros work for
your entire webapp:
loader = Thread.currentThread().getContextClassLoader();
globalContext.put("import", loader.loadClass("org.opentaps.common.template.freemarker.transform.ImportTransform").newInstance());
globalContext.put("include", loader.loadClass("org.opentaps.common.template.freemarker.transform.IncludeTransform").newInstance());
The form macros are located in an FTL file in your opentaps-common directory:
hot-deploy/opentaps-common/webapp/common/includes/lib/opentapsFormMacros.ftl
To use it, simply include the form macros in your Freemarker (FTL) page, like this:
<@import location="component://opentaps-common/webapp/common/includes/lib/opentapsFormMacros.ftl"/>
<@import /> is an opentaps Freemarker extension which allows macros to be imported
into the current context from any file in your opentaps applications.
Now you are ready to use the form macros, like this:
<#list inventoryProduced as inventoryItemProduced>
  <#assign inventoryItem = inventoryItemProduced.getRelatedOne("InventoryItem")/>
  <#if inventoryItem.inventoryItemTypeId == "SERIALIZED_INV_ITEM">
    <tr class="${tableRowClass(rowIndex)}">
      <@displayLink href="EditInventoryItem?inventoryItemId=${inventoryItem.inventoryItemId}" text="${inventoryItem.inventoryItemId}"/>
      <@display text="${inventoryItem.productId}"/>
      <@inputText name="serialNumber_o_${rowIndex}" default="${inventoryItem.serialNumber?if_exists}"/>
      <@inputHidden name="_rowSubmit_o_${rowIndex}" value="Y"/>
      <@inputHidden name="inventoryItemId_o_${rowIndex}" value="${inventoryItem.inventoryItemId}"/>
    </tr>
  </#if>
</#list>
In this example, we've mixed Freemarker directives (if, list, assign), HTML and CSS
tags (tr, class), and opentaps form macros (displayLink, display, inputText,
inputHidden). The form macros are just macros for generating the appropriate HTML
around the parametrized fields and values. The list of form macros and how to use them
are given in the API below.
That's all there is to it.
The opentaps Form Macros API
Notation:
@inputHidden name value=""
means that the macro can be used as:
<@inputHidden name="facilityId"/>
which creates a hidden input with a default value of "". Or, it can be used as:
<@inputHidden name="facilityId" value="${facilityId}"/>
which creates a hidden input whose value is whatever ${facilityId} is in the context.
Each attribute (i.e., name or value in this case) after the name of the macro (inputHidden
in this case) is a parameter. If an attribute is followed by an equals sign and a value,
that value is the attribute's default.
By convention, these are standard fields for all macros which may use them:
name: name of the field.
title: descriptive title of this field, used for rows (ie, "Charge Tax?")
form: name of the current form, for javascript such as lookup widgets
list: used for dropdowns. The list of maps or GenericValue entities where the
information for the select option elements will be retrieved
key: used for dropdowns. For each map in "list", a select "option" element is generated,
and its value (the "value" attribute) comes from map or GenericValue's entry under the
key in "key"; if the value of "key" is empty, then the value of "name" is used as the
lookup key instead.
displayField: used for dropdowns. The value of lookup key used to retrieve (from each
map in "list") the display text of the generated option. Note that if "displayField" is empty,
then the macro expects a nested string that contains the FTL string that will be used as
the option display text.
default: the default value for the field. For dropdowns, which is the option that will be
initially selected by the browser (optional).
index: used for multi-row submit forms. By default, index is set to -1, which means
nothing happens. If set to a different value, then "_o_${index}" is appended to the field
name. For example, if you call a macro with name="productId" and index="5", you will
get "productId_o_5" for the name of the field.
required: for dropdowns, whether the user is required to select an option from the select
box or not (if not, then a "default" option element with an empty value is generated, in
addition to the other ones).
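As a quick illustration of the index convention described above, here is a hypothetical sketch of how the multi-row suffix is built (the helper class and method names are our own, not an opentaps API):

```java
// Hypothetical sketch of the "_o_${index}" naming convention used by the
// form macros for multi-row submit forms. fieldName() is an invented helper.
public class MultiRowNames {
    static String fieldName(String name, int index) {
        // index == -1 is the default: the field name is used unchanged
        if (index == -1) return name;
        return name + "_o_" + index;
    }

    public static void main(String[] args) {
        System.out.println(fieldName("productId", -1)); // productId
        System.out.println(fieldName("productId", 5));  // productId_o_5
    }
}
```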
The form macros can be divided into two sub-groups: element and row macros.
Element macros are for creating a single cell or form element. Row macros use the
element macros for creating an entire input row. For example, an element macro might
be used to create a date entry field, which can be used in a multi-row or single submit
form. A date entry row macro might then use the date entry field element macro to
create a row with a title ("Start Date") and the element macro, all wrapped in TR and TD
tags.
Row Macros

@inputRowText name title size=30 maxlength="" default=""
    Creates a text entry row for the field name and displays the title. Optionally
    specify size, maxlength, and default values.

@inputRowLookup name title lookup form size=20 maxlength=20 default=""
    Creates a text entry row with a lookup. The lookup URL is specified in the
    "lookup" parameter.

@inputHidden name value=""
    Creates a hidden input for name with a default value of "".

@inputRowDateTime name title form default=""
    Creates a date and time input row.

@inputRowIndicator name title required=true default=""
    Creates an input row with a Y/N dropdown (select) box. This uses inputIndicator
    (see Element Tags below).

@inputRowSubmit title colspan="1"
    Creates a submit button. Specify the word in the button with title and how many
    columns it spans.
Element Tags

@displayTitle text class="tableheadtext" width=200
    Displays a title tag. Used by inputRowText.

@inputLookup name lookup form default="" size=20 maxlength=20 index=-1
    An input element with a lookup button next to it. lookup is the controller request
    for the lookup (ie, LookupProduct).

@inputIndicator name required=true default=""
    Creates a dropdown (select) box of Y/N for the name. If required=true, the user
    must select one.

@inputSelect name list title="" key="" displayField="" default="" index=-1 required=true defaultOptionText="" display="row|cell|block|inline"
    Creates a select box. If the value of "required" is false, an empty default option
    element is generated, and "defaultOptionText" will contain the display text for that
    option. The display parameter controls the generated markup: pass display="row"
    for a row, display="cell" for a cell, display="block" for a block, or
    display="inline" for inline output.

@inputText name size=30 maxlength="" default="" index=-1
    Creates an input text box. index is not implemented yet.

@inputConfirm title href="" form="" confirmText=uiLabelMap.OpentapsAreYouSure class="buttonDangerous"
    Creates a confirmation button. When the button is pressed, it produces a popup
    confirmation dialogue. If the user cancels, nothing happens. If the user confirms,
    then either the given form name is submitted or the user is sent to the given href
    link. The text in the popup window can be set with confirmText, but the buttons are
    browser specific. (See the javascript function confirm() for reference.)
@inputStateCountry address=null stateInputName="stateProvinceGeoId" countryInputName="countryGeoId"
    Creates two dropdowns for the user to select state and country. If the country is
    changed, the state dropdown will be updated to show the states in that country. The
    default country is defined in opentaps.properties as defaultCountryGeoId. You may
    either pass in a PostalAddress with the address= argument, or you can specify the
    parameter field names with stateInputName and countryInputName. In order for this
    macro to work properly, the following script should be called in the implementing
    screen:
    components://opentaps-common/webapp/common/WEB-INF/actions/includes/stateDropdownData.bsh
    For a row: @inputRowSelect title stateInputName="stateProvinceGeoId" countryInputName="countryGeoId"
    For a cell: @inputCellSelect stateInputName="stateProvinceGeoId" countryInputName="countryGeoId"
Header and Menu Tags

@sectionHeader title headerClass="subSectionHeader" titleClass="subSectionTitle"
    Creates a header for a subsection within an opentaps screen. The "title" parameter
    is the title that will be shown in the header, "headerClass" is the class of the
    section header DIV element, and "titleClass" is the class of the actual title
    (technically, the DIV element that contains the title, which in turn is contained
    within the section header DIV element). Note that additional contents, such as FTL
    code for menu buttons, can be "nested" within this macro.
Other Macros

@pagination
    Generates a pagination block for a list, eg: Previous 21-35 of 35 Next
    Usage example in FTL:
    <#assign exParams = "&doLookup=Y&supplierPartyId=" + parameters.supplierPartyId?if_exists/>
    <@pagination viewIndex=viewIndex viewSize=viewSize currentResultSize=lotList?size
        requestName="manageLots" totalResultSize=lotsTotalSize extraParameters=exParams/>

@flexArea
    Usage example in FTL, for a flexArea which is hidden and closed on page load and
    has its expansion triggered by an external event:
    <@flexArea targetId="..." title="..." controlClassClosed="hidden" state="closed" save=false enabled=false>...</@flexArea>

openOrClosedClass(domId, openClassName, closedClassName, default="")
    Supporting function for the flexArea macro. Not useful separately.
In this guide, we will show you how to replace a static form-widget list form with an Ajax
paginated form using the opentaps Form Macros pagination framework.
The screen is the Financials > Configure > Chart of Accounts screen, which displays all
the general ledger accounts configured for a company. Originally, the list of accounts
was created with the ofbiz form widget, but because a company typically has several
hundred accounts associated with it, such a static form was not very user-friendly. It
always displayed 100 GL accounts per page, and paging through was slow.
Configuring the Screen Widget
The first step is to edit screen widget XML definition and remove the references to the
form widget. Edit the file hot-
deploy/financials/widget/financials/screens/ConfigurationScreens.xml and look for the
screen "listGlAccounts". Since the ajax pagination is done within freemarker (FTL)
templates, you can remove the following lines, which referenced the old form widget:
<container style="screenlet-body">
<include-form name="listGlAccounts"
location="component://financials/widget/financials/forms/configuration/ConfigurationFor
ms.xml"/>
</container>
You can also remove these lines:
<set field="viewIndex" from-field="parameters.VIEW_INDEX" type="Integer" default-
value="0"/>
<set field="viewSize" from-field="parameters.VIEW_SIZE" type="Integer" default-
value="100"/>
These are no longer needed because they were used to control the pagination of the list
of GL accounts from the server side, but the opentaps Ajax pagination form macro
allows the user to set pagination choices.
entityName = "GlAccountOrganizationAndClass";
where = UtilMisc.toList(
new EntityExpr("organizationPartyId", EntityOperator.EQUALS,
organizationPartyId),
EntityUtil.getFilterByDateExpr()
);
orderBy = UtilMisc.toList("accountCode");
return this;
}
This function essentially defines what entity (GlAccountOrganizationAndClass) will be
queried, what the conditions are in the statement, and how the query results will
be ordered.
The fields entityName, where, and orderBy set up here are what the pagination
framework uses to run the query. In the FTL template, the paginated list is then
rendered inside a <@paginate> block containing a <#noparse> section and an HTML table.
tableRowClass(rowIndex) is created for you by the pagination macro to define different
CSS classes for different rows. You can use plain FTL and HTML to display the results,
or use one of the other form macros, such as <@displayLink> or <@displayCell>.
Finally, you would wrap up like this:
</table>
</#noparse>
</@paginate>
And that's it!
Debugging
There are a few things you should know about debugging the paginator:
The ofbiz framework caches the freemarker files, so after changing your .ftl file, make
sure you clear the cache in Webtools > Cache. Otherwise, the changes may not appear.
The paginator's content is retrieved via AJAX after the main page has loaded. Thus, if
you did a "View Page Source" on your browser, it would not show the content inside
paginate. If you are using Firefox, you can highlight the paginated area, right click on
your mouse, and click on "View Selection Source" to view the HTML code of your
paginator.
Notes
If you use an EntityListBuilder and then add additional fields, you will not be able to
sort by the fields which are not part of the database table.
The paginator can accept additional parameters into it. They can be passed in as part of
the @paginate directive, like this:
<@paginate name="pendingInboundEmails" list=inboundEmails
teamMembers=teamMembers>
Then, inside of the paginator, you can access them using the parameters Map, like this:
<#if parameters.teamMembers?has_content>
...
<#list parameters.teamMembers as option>
Creating and Applying Patches
Contents
1 Creating Patches
1.1 Patch of Changes that I Made
1.2 Patch of Specific Revision of Opentaps
2 Applying Patches
2.1 Dealing with Patch Rejects
Creating Patches
Patch of Changes that I Made
To make a patch of the changes you made to opentaps, you can use the svn diff
command from a terminal or command prompt.
First, ensure you are in the root directory of opentaps,
prompt> cd opentaps
To verify that you're in the right directory, ensure that it contains the build.xml and
startofbiz.sh files. Next, execute the svn diff command,
prompt> svn diff
It will print a patch of all the changes you have made to the screen. To save the output to a file
instead, use a redirect,
prompt> svn diff > mychanges.patch
This command will create a mychanges.patch file that contains all changes you made to
opentaps.
If you wish to see changes of only one file or directory, you can specify the file or
directory explicitly,
prompt> svn diff applications/product
This command will make a patch of all your changes to the applications/product/
directory and its children.
Patch of Specific Revision of Opentaps
Let's say you want to create a patch against a specific revision of opentaps, such as the
bugfix revision 9593. In order to do this, you will need either a complete checkout of
opentaps that's fully up to date or internet access to the opentaps subversion repository.
Since it's simpler to use the online opentaps subversion repository, we will go over this
technique here.
To make the patch, use the svn diff command and use the -c argument to specify the
revision. You must also specify the location of the opentaps repository from the trunk
directory. The full command is as follows,
prompt> svn diff -c 9593 svn://svn.opentaps.org/opentaps/versions/1.0/trunk >
bugfix.patch
A file named bugfix.patch is created; it contains the changes made in revision 9593 of opentaps.
Applying Patches
If you get a patch, you can use it to modify your files with the patch command. patch is a
standard UNIX command, and a Windows version is also available. First ensure that
you are in the root directory of opentaps,
prompt> cd opentaps
It should contain the build.xml and startofbiz.sh files.
We recommend copying the patch file to this directory for convenience. For instance, if
you have the bugfix.patch patch file from the above example, copy it into the opentaps
root directory. Also make sure the patch is not compressed (.zip or .gz).
Next, use the patch command with -p0 arguments as follows,
prompt> patch -p0 < bugfix.patch
If you did not copy the patch file to the opentaps root directory, you will have to specify
the full path to your patch file,
prompt> patch -p0 < /path/to/bugfix.patch
Assuming you have made no major changes that would conflict with the patch, it should
be applied without errors. You can check to see if the patch was applied correctly using
svn diff.
Dealing with Patch Rejects
Sometimes the patch might fail to be applied to a certain file. In this case, a rejection file
is created with information about what caused the problem. A rejection file has the
same name and location as the file that was not patched, except that it has a .rej
extension.
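Before applying a patch for real, you can preview whether it will generate rejects with GNU patch's --dry-run option. A minimal self-contained illustration (the directory, file, and patch are made up for the demo):

```shell
# Build a tiny demo tree and a hand-written one-line patch (names are illustrative).
mkdir -p patchdemo
printf 'hello\n' > patchdemo/greeting.txt
cat > patchdemo/fix.patch <<'EOF'
--- greeting.txt
+++ greeting.txt
@@ -1 +1 @@
-hello
+goodbye
EOF
# --dry-run only reports what would happen; greeting.txt is left untouched
(cd patchdemo && patch -p0 --dry-run < fix.patch)
# now apply the hunk for real
(cd patchdemo && patch -p0 < fix.patch)
cat patchdemo/greeting.txt
```

If the dry run reports failed hunks, you can investigate before any files are modified.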
// inside of myService
List<GenericValue> orderHeaders = delegator.findByAnd("OrderHeader",
        UtilMisc.toMap("statusId", "ORDER_APPROVED"));
for (GenericValue orderHeader : orderHeaders) {
    List<GenericValue> orderItems = orderHeader.getRelatedByAnd("OrderItem",
            UtilMisc.toMap("statusId", "ITEM_APPROVED"));
    // ... work with the approved items of each approved order here
}
Database Tips
Contents
1 General
2 PostgreSQL Tips
2.1 Monitoring PostgreSQL Deadlocks
2.2 Checking Open PostgreSQL Connections
3 MySQL Tips
3.1 Table Name Case Sensitivity
3.2 UTF-8 Support
4 DB2 Tips
4.1 DB2 Basics
4.2 Making DB2 Work
General
J2EE Transaction Management =>
http://www.javaworld.com/jw-07-2000/jw-0714-transaction.html
A transaction can be defined as an indivisible unit of work comprised of several
operations, all or none of which must be performed in order to preserve data integrity.
For example, a transfer of $100 from your checking account to your savings account
would consist of two steps: debiting your checking account by $100 and crediting your
savings account with $100. To protect data integrity and consistency -- and the interests of
the bank and the customer -- these two operations must be applied together or not at
all. Thus, they constitute a transaction.
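The all-or-nothing behavior can be sketched in plain Java, with two fields standing in for the accounts and a manual snapshot standing in for rollback (a toy illustration only; no real transaction manager is involved, and all names are invented):

```java
// Toy illustration of atomicity: either both the debit and the credit apply,
// or the state is restored to its snapshot. Not a real transaction manager.
public class TransferDemo {
    static int checking = 100;
    static int savings = 0;

    static void transfer(int amount) {
        int checkingBefore = checking, savingsBefore = savings; // snapshot
        try {
            checking -= amount;
            if (checking < 0) throw new IllegalStateException("insufficient funds");
            savings += amount;
        } catch (RuntimeException e) {
            checking = checkingBefore;  // "rollback": restore the prior state
            savings = savingsBefore;
        }
    }

    public static void main(String[] args) {
        transfer(40);   // succeeds: checking=60, savings=40
        transfer(200);  // fails and rolls back: balances unchanged
        System.out.println(checking + " " + savings); // 60 40
    }
}
```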
All transactions share these properties: atomicity, consistency, isolation, and durability
(represented by the acronym ACID).
Atomicity: This implies indivisibility; any indivisible operation (one which will either
complete fully or not at all) is said to be atomic.
Consistency: A transaction must transition persistent data from one consistent state to
another. If a failure occurs during processing, the data must be restored to the state it
was in prior to the transaction.
Isolation: Transactions should not affect each other. A transaction in progress, not yet
committed or rolled back (these terms are explained at the end of this section), must be
isolated from other transactions. Although several transactions may run concurrently, it
should appear to each that all the others completed before or after it; all such
concurrent transactions must effectively end in sequential order.
Durability: Once a transaction has successfully committed, state changes committed
by that transaction must be durable and persistent, despite any failures that occur
afterwards.
Declarative transaction demarcation
Declarative transaction management refers to a non-programmatic demarcation of
transaction boundaries, achieved by specifying within the deployment descriptor the
transaction attributes for the various methods of the container-managed EJB
component. This is a flexible and preferable approach that facilitates changes in the
application's transactional characteristics without modifying any code. Entity EJB
components must use this container-managed transaction demarcation.
What is a transaction attribute?
A transaction attribute supports declarative transaction demarcation and conveys to the
container the intended transactional behavior of the associated EJB component's
method. Six transactional attributes are possible for container-managed transaction
demarcation:
Required: A method with this transactional attribute must be executed within a JTA
transaction; depending on the circumstances, a new transaction context may or may not
be created. If the calling component is already associated with a JTA transaction, the
container will invoke the method in the context of said transaction. If no transaction is
associated with the calling component, the container will automatically create a new
transaction context and attempt to commit the transaction when the method completes.
RequiresNew: A method with this transactional attribute must be executed in the
context of a new transaction. If the calling component is already associated with a
transaction context, that transaction is suspended, a new transaction context is created,
and the method is executed in the context of the new transaction, after whose
completion the calling component's transaction is resumed.
NotSupported: A method with this transactional attribute is not intended to be part of a
transaction. If the calling component is already associated with a transaction context,
the container suspends that transaction, invokes the method unassociated with a
transaction, and upon completion of the method, resumes the calling component's
transaction.
Supports: A method with this transactional attribute supports the calling component's
transactional situation. If the calling component does not have any transactional context,
the container will execute the method as if its transaction attribute was NotSupported. If
the calling component is already associated with a transactional context, the container
will execute the method as if its transactional attribute was Required.
Mandatory: A method with this transactional attribute must only be called from the
calling component's transaction context. Otherwise, the container will throw a
javax.transaction.TransactionRequiredException.
Never: A method with this transactional attribute should never be called from a calling
component's transaction context. Otherwise, the container will throw a
java.rmi.RemoteException.
Methods within the same EJB component may have different transactional attributes for
optimization reasons, since all methods may not need to be transactional. The isolation
level of entity EJB components with container-managed persistence is constant, as the
DBMS default cannot be changed. The default isolation level for most relational
database systems is usually ReadCommitted.
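Declarative demarcation is expressed in the deployment descriptor; a hypothetical ejb-jar.xml fragment (the bean and method names are invented for illustration) assigning different attributes to different methods of one bean might look like:

```xml
<!-- Hypothetical fragment: "OrderManager", "placeOrder" and "listOrders"
     are invented names for illustration only. -->
<assembly-descriptor>
  <container-transaction>
    <method>
      <ejb-name>OrderManager</ejb-name>
      <method-name>placeOrder</method-name>
    </method>
    <trans-attribute>Required</trans-attribute>
  </container-transaction>
  <container-transaction>
    <method>
      <ejb-name>OrderManager</ejb-name>
      <method-name>listOrders</method-name>
    </method>
    <trans-attribute>Supports</trans-attribute>
  </container-transaction>
</assembly-descriptor>
```

Changing a method's transactional behavior is then a descriptor edit, not a code change, which is exactly the flexibility described above.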
Programmatic transaction demarcation
Programmatic transaction demarcation is the hard coding of transaction management
within the application code. Programmatic transaction demarcation is a viable option for
session EJBs, servlets, and JSP components. A programmatic transaction may be
either a JDBC or JTA transaction. For container-managed session EJBs, it is possible --
though not in the least recommended -- to mix JDBC and JTA transactions.
JDBC transactions
JDBC transactions are controlled by the DBMS's transaction manager. The JDBC
Connection -- the implementation of the java.sql.Connection interface - supports
transaction demarcation. JDBC connections have their auto-commit flag turned on by
default, resulting in the commitment of individual SQL statements immediately upon
execution. However, the auto-commit flag can be programmatically changed by calling
the setAutoCommit() method with the argument false. Afterward, SQL statements may
be serialized to form a transaction, followed by a programmatic commit() or rollback().
Thus, JDBC transactions are delimited with the commit or rollback. A particular DBMS's
transaction manager may not work with heterogeneous databases. JDBC drivers that
support distributed transactions provide implementations for
javax.transaction.xa.XAResource and two new interfaces of JDBC 2.0,
javax.sql.XAConnection and javax.sql.XADataSource.
JTA transactions
JTA transactions are controlled and coordinated by the J2EE transaction manager. JTA
transactions are available to all the J2EE components -- servlets, JSPs, and EJBs -- for
programmatic transaction demarcation. Unlike JDBC transactions, in JTA transactions
the transaction context propagates across the various components without additional
programming effort. In J2EE server products, which support the distributed two-phase
commit protocol, a JTA transaction can span updates to multiple diverse databases with
minimal coding effort. However, JTA supports only flat transactions, which have no
nested (child) transactions.
The javax.transaction.UserTransaction interface defines methods that allow applications
to define transaction boundaries and explicitly manage transactions. The
UserTransaction implementation also provides the application components -- servlets,
JSPs, EJBs (with bean-managed transactions) -- with the ability to control transaction
boundaries programmatically. EJB components can access UserTransaction via
EJBContext using the getUserTransaction() method. The methods specified in the
UserTransaction interface include begin(), commit(), getStatus(), rollback(),
setRollbackOnly(), and setTransactionTimeout(int seconds). The J2EE server provides
the object that implements the javax.transaction.UserTransaction interface and makes it
available via JNDI lookup. The isolation level of session EJB components and entity
EJB components that use bean-managed persistence may be programmatically
changed using the setTransactionIsolation() method; however, changing the isolation
level in mid-transaction is not recommended.
Some aspects of the J2EE platform are optional, which may be due to evolving
standards and introducing new concepts gradually (in terms of Internet time). For
example, in the EJB 1.0 specification, entity beans (and container-managed
persistence) were a relatively new concept and an optional feature. Support for entity
beans became mandatory about a year later in the EJB 1.1 specification because of
high market acceptance and demand. As products mature and support more
sophisticated features, non-trivial features may be made a mandatory part of the
specification. The following are some optional transaction-related aspects:
Sanjay Mahapatra is a Sun Certified Java programmer (JDK 1.1) and architect (Java
Platform 2). He currently works for Cook Systems International, a consulting and
systems integration vendor for the Java 2 Platform.
Multiple JDBC data sources in a transaction: The J2EE 1.2 specification does not require a J2EE server
implementation to support access to multiple JDBC databases within a transaction
context (and support the two-phase commit protocol). The
javax.transaction.xa.XAResource interface is a Java mapping of the industry-standard
XA interface based on X/Open CAE specification. (See Resources.) X/Open is a
consortium of vendors who aim to define a Common Applications Environment that
supports application portability. Support for the multiple JDBC data sources,
javax.transaction.xa.XAResource, two-phase commit, etc., is optional in the current
specification, though the next version will likely mandate such support. Sun
Microsystems's J2EE reference implementation, for instance, supports access to
multiple JDBC databases within the same transaction using the two-phase commit
protocol.
Transactional support for application clients and applets: The J2EE 1.2
specification does not require that transactional support be made available to
application clients and applets. Some J2EE servers may provide such support in their
J2EE server products. As a design practice, transaction management within application
clients should be avoided as much as possible, in keeping with the thin client and three-
tier model. Also, a transaction, being a precious resource, must be distributed sparingly.
Propagation of transaction context between Web components: The J2EE 1.2 specification
does not mandate that the transaction context be propagated between Web
components. Typically, Web components like servlets and JSPs need to make calls on
(session) EJB components, rather than to other Web components.
Here we will first walk through the types of drivers, the availability of drivers, and the
use of drivers in different situations, and then we will discuss which driver suits your
application best.
The driver is the key player in a JDBC application; it acts as a mediator between the Java
application and the database. It implements the JDBC API interfaces for a database, for
example the Oracle driver for an Oracle database, or the Sybase driver for a Sybase
database. It maps the Java language to database-specific language, including SQL.
JDBC defines four types of drivers to work with. Depending on your requirement you
can choose one among them.
Here is a brief description of each type of driver:

Type 1 (two tier, JDBC-ODBC Bridge): Converts JDBC calls to ODBC calls through the
JDBC-ODBC Bridge driver, which in turn converts them to database calls. The client
requires ODBC libraries.

Type 2 (two tier, Native API - Partly Java driver): Converts JDBC calls to
database-specific native calls. The client requires database-specific libraries.

Type 3 (three tier, JDBC-Net - All Java driver): Passes calls to a proxy server through
a network protocol, which in turn converts them to database calls and passes them
through a database-specific protocol. The client doesn't require any driver.

Type 4 (two tier, Native protocol - All Java driver): Calls the database directly. The
client doesn't require any driver.
This classification shows which drivers can be used for two tiered and three tiered
applications: Type 3 drivers suit three tiered applications, while Type 1, 2, and 4
drivers can be used for two tiered applications. To be more precise, for Java
applications (non-applet) you can use a Type 1, 2, or 4 driver. This is exactly where
you may make a mistake by choosing a driver without taking performance into
consideration. Let us look at that perspective in the following section.
Type 3 & 4 drivers are faster than other drivers because Type 3 can use optimization
techniques provided by the application server, such as connection pooling, caching, and
load balancing, and a Type 4 driver need not translate database calls to ODBC or a
native connectivity interface. Type 1 drivers are slow because they have to
convert JDBC calls to ODBC through JDBC-ODBC Bridge driver initially and then
ODBC Driver converts them into database specific calls. Type 2 drivers give average
performance when compared to Type 3 & 4 drivers because the database calls have to
be converted into database specific calls. Type 2 drivers give better performance than
Type 1 drivers.
Finally, to improve performance
1. Use Type 4 driver for applet to database communication.
2. Use Type 2 driver for two tiered applications for communication between java client
and the database that gives better performance when compared to Type1 driver
3. Use a Type 1 driver if no driver is available for your database. This is a rare situation,
because almost all major databases support drivers, or you can get them from third party
vendors.
4. Use a Type 3 driver to communicate between client and proxy server (weblogic,
websphere, etc.) for three tiered applications; this gives better performance when
compared to Type 1 & 2 drivers.
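The four recommendations above can be condensed into a small decision helper; this is just our own sketch of the rules of thumb, with invented class and method names, not any real API:

```java
// Our own sketch condensing the driver-selection rules of thumb into code.
public class DriverAdvisor {
    static int recommendedType(boolean applet, boolean threeTier, boolean driverAvailable) {
        if (!driverAvailable) return 1; // rare: fall back to the JDBC-ODBC bridge
        if (threeTier) return 3;        // client talks to a proxy/application server
        if (applet) return 4;           // pure-Java driver, nothing to install on the client
        return 2;                       // two-tier Java client: native-API driver
    }

    public static void main(String[] args) {
        System.out.println(recommendedType(true, false, true));  // 4
        System.out.println(recommendedType(false, true, true));  // 3
    }
}
```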
Transactions
In general, a transaction represents one unit of work: a bunch of code in the program that
executes in its entirety or not at all. To be precise, it is all or no work. In JDBC, a
transaction is a set of one or more statements that execute as a single unit. The
java.sql.Connection interface provides some methods to control transactions:
public interface Connection {
boolean getAutoCommit();
void setAutoCommit(boolean autocommit);
void commit();
void rollback();
}
JDBC's default mechanism for transactions:
By default, a JDBC transaction starts and commits after each statement's execution on
a connection; that is, the auto-commit mode is true. The programmer need not write a
commit() explicitly after each statement.
Obviously this default mechanism is convenient if you want to execute a single
statement, but it gives poor performance when multiple statements on a connection are
to be executed, because a commit is issued after each statement by default, which in
turn reduces performance by issuing unnecessary commits. The remedy is to set the
auto-commit mode to false and issue a commit() after a set of statements execute; this
is called a batch transaction. Use rollback() in the catch block to roll back the
transaction whenever an exception occurs in your program. The following code
illustrates the batch transaction approach.
PreparedStatement ps = null;
PreparedStatement ps1 = null;
try {
    connection.setAutoCommit(false);
    ps = connection.prepareStatement("UPDATE employee SET Address=? WHERE name=?");
    ps.setString(1, "Austin");
    ps.setString(2, "RR");
    ps.executeUpdate();
    ps1 = connection.prepareStatement("UPDATE account SET salary=? WHERE name=?");
    ps1.setDouble(1, 5000.00);
    ps1.setString(2, "RR");
    ps1.executeUpdate();
    connection.commit();
    connection.setAutoCommit(true);
} catch (SQLException e) {
    connection.rollback();
} finally {
    if (ps != null) { ps.close(); }
    if (ps1 != null) { ps1.close(); }
    if (connection != null) { connection.close(); }
}
This batch transaction gives good performance by reducing commit calls after each
statement's execution.
Transaction Isolation Levels
Isolation levels represent how a database maintains data integrity against problems
like dirty reads, phantom reads, and non-repeatable reads, which can occur due to
concurrent transactions. The java.sql.Connection interface provides methods and constants
to avoid the above mentioned problems by setting different isolation levels.
public interface Connection {
    public static final int TRANSACTION_NONE             = 0;
    public static final int TRANSACTION_READ_UNCOMMITTED = 1;
    public static final int TRANSACTION_READ_COMMITTED   = 2;
    public static final int TRANSACTION_REPEATABLE_READ  = 4;
    public static final int TRANSACTION_SERIALIZABLE     = 8;
    int getTransactionIsolation();
    void setTransactionIsolation(int isolationlevelconstant);
}
You can get the existing isolation level with getTransactionIsolation() method and set
the isolation level with setTransactionIsolation(int isolationlevelconstant) by passing
above constants to this method.
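Since these constants are plain ints, they can be examined without a live database connection; a small sketch (describe() is our own helper, not part of JDBC):

```java
// Sketch: mapping the java.sql.Connection isolation constants back to names.
import java.sql.Connection;

public class IsolationNames {
    static String describe(int level) {
        switch (level) {
            case Connection.TRANSACTION_NONE:             return "NONE";
            case Connection.TRANSACTION_READ_UNCOMMITTED: return "READ_UNCOMMITTED";
            case Connection.TRANSACTION_READ_COMMITTED:   return "READ_COMMITTED";
            case Connection.TRANSACTION_REPEATABLE_READ:  return "REPEATABLE_READ";
            case Connection.TRANSACTION_SERIALIZABLE:     return "SERIALIZABLE";
            default:                                      return "UNKNOWN";
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(Connection.TRANSACTION_READ_COMMITTED)); // READ_COMMITTED
    }
}
```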
The following table describes each isolation level, the problems it permits, and its
performance impact. YES means that the isolation level does not prevent the problem;
NO means that the isolation level prevents the problem.

Transaction Level              Dirty reads  Non-repeatable reads  Phantom reads  Performance impact
TRANSACTION_NONE               N/A          N/A                   N/A            FASTEST
TRANSACTION_READ_UNCOMMITTED   YES          YES                   YES            FASTEST
TRANSACTION_READ_COMMITTED     NO           YES                   YES            FAST
TRANSACTION_REPEATABLE_READ    NO           NO                    YES            MEDIUM
TRANSACTION_SERIALIZABLE       NO           NO                    NO             SLOW
By setting isolation levels, you affect performance as shown in the above table.
Databases use read and write locks to implement these isolation levels. Let us have a
look at each of these problems and then look at the impact on performance.
Dirty reads
The following steps illustrate the dirty read problem:
Step 1: Database row has PRODUCT = A001 and PRICE = 10
Step 2: Connection1 starts Transaction1 (T1).
Step 3: Connection2 starts Transaction2 (T2).
Step 4: T1 updates PRICE = 20 for PRODUCT = A001
Step 5: The database now has PRICE = 20 for PRODUCT = A001
Step 6: T2 reads PRICE = 20 for PRODUCT = A001
Step 7: T2 commits the transaction
Step 8: T1 rolls back the transaction because of some problem
The problem is that T2 gets the wrong PRICE = 20 for PRODUCT = A001, instead of 10,
because of an uncommitted read. Obviously, reading inconsistent data is very dangerous
in critical transactions. If you are sure the data is not accessed concurrently, then
you can allow this problem by setting TRANSACTION_READ_UNCOMMITTED or
TRANSACTION_NONE, which in turn improves performance; otherwise you have to use
TRANSACTION_READ_COMMITTED to avoid this problem.
JDBC Performance Tips
Use a Type 2 driver for two-tier applications, where a Java client communicates
directly with the database; it gives better performance than a Type 1 driver.
Use a Type 4 driver for applet-to-database communication; it performs well in both
two-tier and three-tier applications when compared to the other drivers.
Use a Type 1 driver only if you don't have a driver for your database. This is a rare
situation, because all major databases ship with drivers, or you can get a driver from
a third-party vendor.
Use a Type 3 driver to communicate between the client and a proxy server (WebLogic,
WebSphere, etc.) in three-tier applications; it gives better performance when
compared to Type 1 and 2 drivers.
Pass database-specific properties, such as defaultPrefetch, if your database supports
any of them.
Get database connections from a connection pool rather than creating them directly.
Use batch transactions.
Choose the right isolation level for your requirements.
TRANSACTION_READ_UNCOMMITTED gives the best performance for concurrent,
transaction-based applications; TRANSACTION_NONE gives the best performance for
non-concurrent, transaction-based applications.
Your database server may not support all isolation levels, so be aware of your
database server's features.
Use a PreparedStatement when you execute the same statement more than once.
Use a CallableStatement when you want results from multiple, complex statements
for a single request.
Use the batch update facility available in Statement.
Use the batch retrieval facility available in Statement and ResultSet.
Set the proper fetch direction for processing rows.
Use the proper getXXX() methods.
Close ResultSet, Statement, and Connection objects whenever you finish your work
with them.
Write precise SQL queries.
Cache data from read-only and read-mostly tables.
Fetch small amounts of data iteratively rather than the whole data set at once when
retrieving large amounts of data, such as when searching the database.
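Several of the tips above (reusing a PreparedStatement, batch updates, working in chunks) can be combined in one sketch. The PRODUCT_PRICE table and its columns are hypothetical examples, and the Connection is assumed to come from a pool:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchUpdateSketch {
    // Sketch: insert many rows with one reused PreparedStatement,
    // flushing a batch to the database every batchSize rows.
    public static void insertPrices(Connection conn, List<String> productIds,
                                    List<Double> prices, int batchSize) throws SQLException {
        String sql = "INSERT INTO PRODUCT_PRICE (PRODUCT_ID, PRICE) VALUES (?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 0; i < productIds.size(); i++) {
                ps.setString(1, productIds.get(i));
                ps.setDouble(2, prices.get(i));
                ps.addBatch();
                if ((i + 1) % batchSize == 0) {
                    ps.executeBatch(); // send a full batch to the database
                }
            }
            ps.executeBatch(); // flush any remaining rows
        }
    }

    // Pure helper: how many executeBatch() round trips the loop above makes
    // (full batches inside the loop, plus the final flush).
    public static int roundTrips(int rows, int batchSize) {
        return (rows / batchSize) + 1;
    }

    public static void main(String[] args) {
        System.out.println(roundTrips(250, 100)); // prints 3
    }
}
```

The point of batching is to cut network round trips: 250 single-row INSERTs become 3 calls to the database.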
PostgreSQL Tips
PostgreSQL Wiki Article on Performance Optimization
Optimizing PostgreSQL
Monitoring PostgreSQL Deadlocks
You can monitor database locks using the Entity SQL Processor in the opentaps
Webtools with the following query:
select pg_class.relname, pg_locks.mode, pg_locks.relation, pg_locks.transaction,
pg_locks.pid
from pg_class, pg_locks
where pg_class.relfilenode = pg_locks.relation
order by pid
This will show what kinds of locks are active on what entities. If there is an exclusive
lock on a table followed by a bunch of pids that are waiting for it, then you have a
deadlock.
Monitoring the Number of PostgreSQL Connections
Run this query:
select datname, numbackends from pg_stat_database;
to see the number of open connections to each of your databases. See How to Know
Number of Connections Made with Database - PostgreSQL
If you are running out of connections, edit the file
framework/entity/config/entityengine.xml and increase the number of connections
available.
MySQL Tips
Case-Sensitive Table Names
If you use Linux or Unix for your MySQL server, table names may be case sensitive, so
PRODUCT and product are not the same table. You can turn this off by configuring
mysqld on startup to lower-case table names with the lower-case-table-names flag, as in
this example from /etc/init.d/mysql:
$bindir/mysqld_safe --datadir=$datadir --lower-case-table-names=1 --pid-
file=$server_pid_file $other_args >/dev/null 2>&1 &
DB2 Tips
If you get an error message from DB2, you will get a SQLCODE like the one below:
DB2 SQL Error: SQLCODE=-270, SQLSTATE=42997, SQLERRMC=63,
DRIVER=3.50.152
To figure out what it means, you have to run db2 from the command line:
$ db2 ? sql-270
Some of the more popular codes are:
SQL-204: <name> not recognized. Most likely, you are referencing a table that doesn't
exist.
SQL-270: Function not supported. See the SQLERRMC for the message code. If you
get sql-270 with sqlerrmc=63, it means that you are trying to select a CLOB/BLOB type
with a scroll insensitive cursor.
SQL-286: insufficient page size for CREATE TABLE
SQL-530: foreign key violation
SQL-803: operation violates a unique value constraint
Using DB2 with opentaps
There are three issues with using DB2 and opentaps:
You must define a fieldtypedb2.xml file for your framework/entity/fieldtype/ directory.
You can start with the field type XML from another database, such as MySQL. Most of
the valid DB2 field types are similar, but DB2 does not have a "NUMERIC" type. It is
called "DECIMAL" instead and must be used for floating-point and currency field types.
On startup, the ofbiz entity engine does a check of the database against the entity
model definitions. Part of the check is to verify that the primary keys of all the tables are
correctly defined, but the entity engine attempts to obtain the primary key information for
all the tables of the database at once, which is not supported by DB2. To make this
feature work, you need to modify DatabaseUtil.java to have the entity engine check the
primary keys one table at a time.
The biggest problem with DB2 is that it does not support SELECT operations which
include CLOB/BLOB fields when the ResultSet is scroll insensitive (See [1].) The
solution is not as simple as just changing the result set type, because DB2 also does
not support (i) SELECT operations on views or with JOIN using scroll sensitive cursor or
(ii) moving around with .absolute(i) or .relative(i) operations on a ResultSet of
TYPE_FORWARD_ONLY. This means that the view entities which include CLOB/BLOB
types cannot be SELECTED (because you cannot use a scroll insensitive ResultSet), or
that the EntityListIterator.getPartialList method will not work (because you cannot use
.absolute and .relative), so the ofbiz form widget's list form will not paginate correctly.
There is no solution for this problem, but the following workarounds exist:
Since the majority of the large object (LOB) types are CLOB for long character strings,
you can redefine the field type for your blob and very-long to be the longest possible
VARCHAR instead of using CLOB.
You can avoid using the getPartialList feature and instead use findAll or findByAnd to
return a Java list, and then use the subList() method on it. These queries are done with
TYPE_FORWARD_ONLY and return the entire list at once, but the drawback is that the
whole result must be held in memory.
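The second workaround can be sketched as follows; the getPage helper and the viewIndex/viewSize names are our own illustration of slicing an already-fetched in-memory list with subList():

```java
import java.util.Collections;
import java.util.List;

public class ListPager {
    // Return the page of results starting at viewIndex * viewSize, as the
    // findAll-then-subList workaround described above would do in memory.
    public static <T> List<T> getPage(List<T> all, int viewIndex, int viewSize) {
        int from = viewIndex * viewSize;
        if (from >= all.size()) {
            return Collections.emptyList(); // past the end: empty page
        }
        int to = Math.min(from + viewSize, all.size());
        return all.subList(from, to);
    }

    public static void main(String[] args) {
        List<Integer> rows = List.of(1, 2, 3, 4, 5, 6, 7);
        System.out.println(getPage(rows, 1, 3)); // prints [4, 5, 6]
    }
}
```

Note that subList() returns a view backed by the original list, which is cheap, but the full query result still occupies memory.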
If neither of these workarounds is acceptable, you would have to rewrite certain
features (like surveys with long text responses) to conform to DB2's restrictions.
In practice, most ERP-related uses of opentaps would not require CLOBs, so the first
option should suffice. Only with content management features would such field types be
required, and those features would need to be rewritten for DB2 compatibility.
Trac Tips
Prefixing the Email Subject with [projectname trac]
It's useful to prefix the emails sent from trac with the project or company name to
distinguish them from other trac systems. To do this, edit the trac.ini file and change
the following line:
smtp_subject_prefix = [projectname trac]
As with all trac configuration changes, there is no need to restart the server. The next
trac email subject will be prefixed with [projectname trac].
Contents
1 Monitoring Deadlocks in PostgreSQL
2 Suspending Runaway Threads
3 Profiling with AspectJ
3.1 Out of the Box Profiling
3.2 Understand AspectJ code
Monitoring Deadlocks in PostgreSQL
See Database_Tips#Monitoring_PostgreSQL_Deadlocks
Suspending Runaway Threads
Suppose you start a process that you realize will take forever and need to stop it.
However, it can't be stopped because it was activated by an HTTP request and killing
the browser session doesn't work. First, check the log to see if you can identify the
thread that is running this process. For instance, suppose you have the following line in
your log that corresponds to what your process is doing,
2008-01-23 18:55:47,585 (TP-Processor10) [ Something.java:1015:WARN ]
Something that identifies your process
This thread is TP-Processor10. You can use the Java Thread API to suspend it by
hand. The easiest way to do this is to use a bsh script or the bsh terminal. First, you will
want to know the number of threads in the system. Load up the thread list page in
Webtools and count the rough number of threads displayed. Suppose you have
about 50 threads.
Once you know the rough size, run the following script, either via the bsh terminal or by
hooking it up to a controller.xml request,
threads = new Thread[50];
size = Thread.enumerate(threads);
for (i = 0; i < size; i++) {
print(i + ": " + threads[i]);
}
This will print out the index and name of each thread. Find the index of TP-Processor10.
Suppose it is index 37. You can then suspend the thread by doing this,
t = threads[37];
t.suspend();
Note that Thread.suspend() is deprecated because it can leave locks held; use it only
as a last resort on a server you are prepared to restart.
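The same enumeration can be done in plain Java (bsh simply interprets equivalent calls); this sketch lists the live threads of the current JVM:

```java
public class ThreadLister {
    // Print the index and name of every enumerable thread,
    // mirroring the bsh script above in plain Java.
    public static int listThreads() {
        // activeCount() is an estimate, so leave some headroom in the array.
        Thread[] threads = new Thread[Thread.activeCount() + 10];
        int size = Thread.enumerate(threads);
        for (int i = 0; i < size; i++) {
            System.out.println(i + ": " + threads[i].getName());
        }
        return size;
    }

    public static void main(String[] args) {
        System.out.println("threads: " + listThreads());
    }
}
```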
Installing tsung
Homepage http://tsung.erlang-projects.org/
Download page http://tsung.erlang-projects.org/dist/
Documentation http://tsung.erlang-projects.org/user_manual.html
Tsung needs the Erlang platform to be installed. We will not cover that here, because
the rpm or deb versions just run fine. http://www.erlang.org/
Tsung is a tool only available for the Linux platform in binary form. It might run on
Windows with the Erlang Windows version and the Cygwin platform.
We used Tsung with Erlang 5.6.1. Version 1.2.2 has some problems linked to XML
parsing, so we prefer version 1.2.0.
You may want to modify tsung-1.2.0/src/tsung_controller/ts_os_mon.erl, depending on
which platform you are running the opentaps server. The scripts which do the os
monitoring (cpu, memory and network graphs) may not be convenient for your platform.
Compilation
For Debian users, just type fakeroot debian/rules binary.
For the others, a ./configure, make, make install will do it.
To run the client with the configuration file tsung.xml, just run:
$ tsung -f tsung.xml start
Configuration file
The configuration file and all the files needed to do tsung stress testing are available
in the directory hot-deploy/opentaps-tests/scripts/tsung/
readcsv.erl - a small Erlang script used to generate the login string from the user and
password read
tsung.xml - the configuration file for tsung
userlist.csv - the list of users which will be used successively to log in
readcsv.erl
In this file, there is only one function, called user. The steps are:
ts_file_server:get_next_line() to read one line in the file
string:tokens(Line,";") to separate the user and the password
"USERNAME=" ++ Username ++"&PASSWORD=" ++ Passwd to return the login string
You have to compile this file:
$ erlc readcsv.erl
You will get a file called readcsv.beam, which you have to copy into the tsung binary
directory. In our case it is
/usr/lib/erlang/lib/tsung-1.2.0/ebin/
tsung.xml
The tsung.xml file is the configuration file which defines the different scenarios to
execute. There are some comments in it. Here is an overview:
Clients
By default, only one HTTP client is configured, which is localhost. You can configure
as many HTTP clients as you want. The computer which fires the tests must have
passwordless ssh access to the clients.
<clients>
<client host="localhost" use_controller_vm="true"/>
</clients>
Servers
By default, the opentaps HTTP server is configured on localhost:8443. You can change
it too.
<servers>
<server host="localhost" port="8443" type="ssl"/>
</servers>
Monitoring
The monitoring has to be configured to access the opentaps server; in our case it is
localhost. The computer which fires the tests must have passwordless ssh access to
the monitored host.
<monitoring>
<monitor host="localhost" type="erlang"/>
</monitoring>
Load
The load is configured in the load node with a phase system. For each phase you
specify the duration of the phase and the arrival frequency of the clients. You can
have as many phases as you want. In this case, one phase of one minute is configured,
where clients arrive every 25 seconds. We will have 4 clients.
<load>
<arrivalphase phase="1" duration="1" unit="minute">
<users interarrival="25" unit="second"/>
</arrivalphase>
</load>
Options
user_agent - the UserAgent string to use
thinktime - the think time of a user between the last response received and the next
request he will send
file_server - the file with the login data
<options>
<!-- which type of client are we going to fire -->
<option type="ts_http" name="user_agent">
<user_agent probability="80">Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.8)
Gecko/20050513 Galeon/1.3.21</user_agent>
<user_agent probability="20">Mozilla/5.0 (Windows; U; Windows NT 5.2; fr-FR;
rv:1.7.8) Gecko/20050511 Firefox/1.0.4</user_agent>
</option>
<!-- Each client has a random thinktime (time between two request) around 3 -->
<option name="thinktime" value="3" random="true" override="true"/>
<!-- TOBEMODIFIED Absolute path for the file from which we generate the login and
password -->
<option name="file_server" value="userlist.csv"/>
</options>
userlist.csv
It is a basic CSV file which associates, on each line, a user and a password separated
by a ';', e.g.
DemoSalesManager;crmsfa
These users will be used successively to log in to the opentaps server.
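For reference, the transformation that readcsv.erl performs on each of these lines can be sketched in Java; fromCsvLine is our own illustrative name:

```java
public class LoginString {
    // Build the login POST body from one line of userlist.csv,
    // mirroring what readcsv.erl does in Erlang:
    // tokenize on ';', then concatenate "USERNAME=...&PASSWORD=...".
    public static String fromCsvLine(String line) {
        String[] parts = line.split(";");
        String username = parts[0].trim();
        String password = parts[1].trim();
        return "USERNAME=" + username + "&PASSWORD=" + password;
    }

    public static void main(String[] args) {
        System.out.println(fromCsvLine("DemoSalesManager;crmsfa"));
        // prints USERNAME=DemoSalesManager&PASSWORD=crmsfa
    }
}
```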
Logs, Reports and Graphics generated
Log Files
In the directory ~/.tsung/log/ you will find the log files:
tsung_controller@FabsLaptop.log is the log where you can find the errors related to
tsung
tsung.dump is the dump of all the requests and responses generated
tsung.log is used to generate the reports and graphics
tsung.xml is the configuration file
To generate the reports and graphics, just execute /usr/lib/tsung/bin/tsung_stats.pl in
the directory of logs.
Reports
Main statistics
connect is the time to make a TCP connect
page is the time to download a whole page
request is the time to send a request
session is the time to get through a scenario node in the configuration file (in our
case, to create and approve an order)
Transactions Statistics
size_rcv is the quantity of bits received
size_sent is the quantity of bits sent
Counters Statistics
match is the quantity of verifications done by the match node which succeed
nomatch is the quantity of verifications done by the match node which fail
users_count is the quantity of users generated
finish_users_count and newphase counters are also reported
Server monitoring
cpu is the consumption of CPU
freemem is the free memory available
recvpackets is the quantity of bits received
sentpackets is the quantity of bits sent
HTTP return code
The occurrence of the different HTTP codes. We observe 200, which is the normal HTTP
response code, and 302, which is received sometimes; this is normal, as we observed
both while building the scenario.
Graphics
Response Time
Mean: page corresponds to the mean time to load one page; tr_* corresponds to the
mean time to load one page in the specified transaction
Mean: connect corresponds to the mean time to do a TCP connect; request corresponds
to the mean time to do an HTTP request
Throughput
Rate: page corresponds to the rate at which pages are loaded; tr_* corresponds to the
rate for the specified transaction
Rate: connect corresponds to the rate of TCP connects; request corresponds to the
rate of HTTP requests
cpu mean
freemem mean
HTTP return code Status (rate)
API and Technical Design Reference
opentaps JavaDocs
LiveCatalog XML-RPC API
Amazon Integration
Manufacturing Model
Configuring the POS Store
Configuring Authorize.NET
Implementing CVV Security Code Checking
Recommended Reading
Show me your flowcharts and conceal your tables, and I'll continue to be mystified.
Show me your tables, and I won't usually need your flowcharts; they'll be obvious.
-- The Mythical Man Month by Fred Brooks
Even after four decades, and long after the word "flowchart" has been replaced by
"UML", data models still play a central role in software design. The following three
books will give you three different perspectives on data modeling for enterprise
applications and help you understand the heart of opentaps. You should read all three
to get a balanced perspective:
Data Model Resource Book, Volume 1 -- This comprehensive volume approaches data
modeling from a relational perspective for transactional systems.
Domain Driven Design -- This book approaches the design of transactional applications
from an object-oriented perspective.
Data Warehouse Toolkit -- This book gives a comprehensive treatment from the
analytical perspective.
Other helpful references:
Open Source Licensing -- If you want to learn more about open source software
licenses, read this book instead of all the mailing list graffiti.