Wednesday, July 28, 2010

Jersey and Spring Injection

On the project I am on we use iBatis, Spring, and Jersey. We have been using Spring injection for our DAOs since the beginning, but the version of Jersey we started with did not support Spring injection. To get around this we created factories; beans were injected into the factories, and our REST classes then used a factory to get the beans they were interested in. As the project grew, so did the factory. When we upgraded to Jersey 1.0.3, we were able to take advantage of the jersey-spring jar, which allows us to inject Spring beans directly into our REST classes.

In our Maven pom.xml file we added the following dependency.

<dependency>
    <groupId>com.sun.jersey.contribs</groupId>
    <artifactId>jersey-spring</artifactId>
    <version>1.0.3</version>
</dependency>

Then, in the constructors of our REST classes, we were able to do the following (note: as of Jersey 1.4, @Inject was renamed to @InjectParam; the example below uses the newer name):

public class CompanyWS {
    private ICompanyManager _companyManager;
    private ICompanyTypeManager _companyTypeManager;

    public CompanyWS(@InjectParam("companyManager") ICompanyManager companyManager,
                     @InjectParam("companyTypeManager") ICompanyTypeManager companyTypeManager) {
        _companyManager = companyManager;
        _companyTypeManager = companyTypeManager;
    }
    ...


The web.xml file also needs to be modified to call out the SpringServlet, instead of the ServletContainer:


<servlet>
    <servlet-name>REST Application</servlet-name>
    <servlet-class>com.sun.jersey.spi.spring.container.servlet.SpringServlet</servlet-class>
    <init-param>
        <param-name>javax.ws.rs.Application</param-name>
        <param-value>com.accenture.netcds.api.rest.RegisterResources</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>


One downside of the 1.0.3 jersey-spring library is that automatic discovery of the REST classes does not work in WebSphere 6.1. This has been reported as a bug to Jersey and is supposed to be fixed in more recent versions. As a result, the REST classes need to be registered in an Application class.


import java.util.HashSet;
import java.util.Set;
import javax.ws.rs.core.Application;

public class RegisterResources extends Application {
    @Override
    public Set<Class<?>> getClasses() {
        Set<Class<?>> s = new HashSet<Class<?>>();
        s.add(CompanyWS.class);
        return s;
    }
}


Spring successfully injected the beans into the REST classes, allowing us to delete the factories we had been using, which had grown quite large. This is another simplification that makes the software easier to maintain over time.

Nested iBatis Result Mappings

Although this is not an earth-shattering iBatis discovery, it is a time saver in both development and maintenance, so I thought I would share it.

Let's assume we have a class called CompanyType that contains a type ID and a type name, and another class called Company that, among other properties, contains a CompanyType object named companyType.
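For reference, here is a minimal sketch of what those two classes might look like (the field names are assumptions based on the result maps below; getters, setters, and imports are omitted):

// CompanyType.java
package com.sample;

public class CompanyType {
    private int typeId;
    private String typeName;
    // getters and setters omitted
}

// Company.java
package com.sample;

public class Company {
    // other company properties here
    private CompanyType companyType;
    // getters and setters omitted
}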

In the iBatis XML file, the result map for CompanyType would look something like this:

<sqlMap namespace="CompanyType">
    <resultMap id="companyTypeResultMap" class="com.sample.CompanyType">
        <result column="company_type_id" property="typeId"/>
        <result column="company_type_nm" property="typeName"/>
    </resultMap>
...

In the iBatis XML file for Company, the result map would look like this:

<sqlMap namespace="Company">
    <resultMap id="companyResultMap" class="com.sample.Company">
        <!-- other company properties here -->
        <result column="company_type_id" property="companyType.typeId"/>
        <result column="company_type_nm" property="companyType.typeName"/>
    </resultMap>


The problem with this is that if something changes in CompanyType, you have to know to change it in two locations, which makes it harder to maintain. Instead, you could do the following:

<sqlMap namespace="Company">
    <resultMap id="companyResultMap" class="com.sample.Company">
        <!-- other company properties here -->
        <result property="companyType" resultMap="CompanyType.companyTypeResultMap"/>
    </resultMap>
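With the nested mapping in place, any select that uses companyResultMap fills in the companyType object as part of mapping the row. A minimal, hypothetical usage sketch (the statement name and the SqlMapClientDaoSupport base class are assumptions, not from the original post):

// Hypothetical DAO method in a class extending Spring's SqlMapClientDaoSupport;
// the nested result map populates company.getCompanyType() while mapping the row.
public Company getCompanyById(int companyId) {
    return (Company) getSqlMapClientTemplate()
            .queryForObject("Company.getCompanyById", companyId);
}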


As you can see, now if the result map is updated in CompanyType, the changes are automatically reflected in the Company iBatis file. Hope you find this helpful. I know I did.

Monday, March 29, 2010

ExtJS store.find vs store.findExact

I recently ran into a problem with an ExtJS combo box, specifically using the find method on the store. In the application, between uses, the last selected value was remembered so it could be selected automatically the next time the user logged in to the system. The problem was that the previously selected value was no longer a valid value. In this case, the behavior was supposed to default back to the first value in the list. However, it did not do that.

After some investigation, I found that store.find was doing a partial match, and in this case there were two names very close to each other. Let's say the last value selected was "War", and there was another value called "War Room". If the list was unsorted, it is possible that "War Room" would be earlier in the list than "War", and it would get selected over "War". Another scenario is that, between uses, the value "War" is removed from the system. The next time the user logs in, "War Room" would be selected automatically instead of the first value in the list. While the second is not as bad a situation as the first, if the behavior is unintended, it is unwanted.

During the investigation, I found that the store now has a findExact method. I replaced the store.find call with store.findExact, and that cleared the problem right up. Just wanted to make sure others were aware of this. I don't remember ExtJS 2.x having this problem.

Monday, March 15, 2010

Lost Milliseconds using Date and Timestamp

While troubleshooting some date discrepancies in our application, we discovered we were losing milliseconds when converting from java.sql.Timestamp to java.util.Date. The following test isolated the problem:

DateFormat dfm = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");

try {
    Date datea = dfm.parse("2009-05-13 11:11:03.113");
    Date dateb = dfm.parse("2009-05-13 11:11:03.257");

    assertTrue(dateb.after(datea));
} catch (ParseException e) {
    fail("unexpected parse failure: " + e.getMessage());
}

// deprecated constructor: the year is offset from 1900 and the month is zero-based,
// but it is good enough to demonstrate the comparison problem
Timestamp ts1 = new Timestamp(2010, 3, 15, 9, 9, 9, 111000000);
Timestamp ts2 = new Timestamp(2010, 3, 15, 9, 9, 9, 222000000);
assertTrue(ts2.after(ts1));

Date datea = (Date) ts1;
Date dateb = (Date) ts2;
// NOTE: this asserts false because the milliseconds are truncated in the cast.
assertFalse(dateb.after(datea));
I am surprised that Java did not handle this. They could have provided a Date constructor that takes a java.sql.Timestamp as an argument so the milliseconds could be preserved, or they could have kept both the milliseconds field and the nanoseconds field updated so that the cast would work. Either way, hopefully this saves someone else the time of tracking it down.
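A simple workaround (a minimal sketch, not from the original post) is to build a new java.util.Date from the timestamp's getTime() value instead of relying on the cast; on modern JDKs, Timestamp.getTime() includes the milliseconds:

// Convert explicitly instead of casting: getTime() returns the full millisecond
// value, so nothing is lost when it is used to construct the new Date.
Timestamp ts = new Timestamp(System.currentTimeMillis());
Date date = new Date(ts.getTime());

assertEquals(ts.getTime(), date.getTime());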

Monday, March 1, 2010

iBatis "Cache Miss", Part 3 - Configuring the CacheModel

In this final blog of the series we want to analyze the iBatis CacheModel and how to configure it correctly. The first blog of the series discussed how to use JBoss logging to verify the problem, and the previous blog demonstrated how to write automated tests for caching.

Let's start with the simple example from the other blogs: the location_type table. It has no foreign keys to other tables, so we only have to worry about changes to this table. Let's examine the location_type iBatis CacheModel.

<cacheModel id="locationType_cache"
            type="LRU"
            readOnly="true"
            serialize="false">
    <flushInterval hours="24"/>
    <flushOnExecute statement="LocationType.insertLocationType"/>
    <flushOnExecute statement="LocationType.updateLocationType"/>
    <flushOnExecute statement="LocationType.removeLocationType"/>
    <property name="size" value="10"/>
</cacheModel>

There are four attributes defined for the CacheModel. The first is the id; this is the name you reference in the select definitions in the iBatis XML file. The next is type, which is the type of caching used. Common choices are LRU (Least Recently Used) and FIFO (First-In-First-Out). We chose LRU so the most frequently used objects would be kept in the cache. The last two attributes, readOnly and serialize, are the ones that determine how the cache model works (or doesn't, if misconfigured).

I ran across a blog post that helped me understand what I was doing wrong; in it, Clinton explains how the readOnly and serialize attributes play together. I will replicate the meat of his analysis here, but have included the link to his post.

Setting readOnly to false (meaning the cached objects can be modified) and serialize to false (meaning the cached objects are not serialized) forces iBatis to limit the caching to the current session. This essentially means that a second request from the client will not use the cache. This was the situation we found ourselves in.

There are two possible configurations that will allow caching to work, and you will have to decide which is appropriate for your project. The first is readOnly=true, serialize=false. This caches the objects in one cache accessed by all users. It assumes that the objects are read-only and will not change, or that if they do change you have the appropriate flushOnExecute statements defined.

The second configuration is readOnly=false, serialize=true. This gives each user their own cache, which allows the tables to be updated without negatively affecting other users. The downside of this approach is memory usage: with each user having their own cache, you can run yourself out of memory.

On our project we went with the single cache shared among all users. This placed the burden on us to manage the cache model. How do you do that? There are some other properties that can be set on the CacheModel. One of them is the size of the cache for a given table; it keeps the most frequently used objects in the cache, up to the quantity specified. Another is flushInterval, which lets you say that if the cache hasn't been flushed in N hours, it should be flushed anyway. The finest control is telling the cache model which statements to flush on, using the flushOnExecute elements.

How do you know which statements need to flush the cache? If a statement changes the database, you have to flush the cache: if you add a new record, modify an existing record, or delete a record, you need to flush. By examining the CacheModel definition above, you will see that it does just that.

That was the simple case. What if your table joins to other tables in its select statements? Then cache management becomes more complex. You need to flush not only when your table's contents are modified, but also when the contents of any table you join to change. Consider the following CacheModel:

<cacheModel id="locations_cache"
            type="LRU"
            readOnly="true"
            serialize="false">
    <flushInterval hours="24"/>
    <flushOnExecute statement="Location.insertLocation"/>
    <flushOnExecute statement="Location.updateLocation"/>
    <flushOnExecute statement="Location.deleteLocation"/>
    <flushOnExecute statement="LocationType.insertLocationType"/>
    <flushOnExecute statement="LocationType.updateLocationType"/>
    <flushOnExecute statement="LocationType.removeLocationType"/>
    <property name="size" value="250"/>
</cacheModel>

Location contains a reference to its location type, and consequently all the selects performed on location join to location_type to get the type information. Right after we enabled caching we ran into a problem: when locations were cached and a location_type name was updated, the change would not be reflected in the cache for 24 hours. After investigating, we realized that the location cache was not being cleared out when location types were modified.

This forced us to start using iBatis namespaces so we could reference statements from one iBatis XML file within another. In the CacheModel above, you can see that the cache will be flushed any time the location or location_type table is modified.

Our project uses Spring 2.5.6 and iBatis 2.3.4. While working on this, we also found a problem with iBatis 2.3.4: it does not support transactional caching. This means that even though Spring rolls back the transaction after each test method, if the cache was updated during the test it is not flushed. We discovered this when the automated test suite started failing.

To see this problem, you can write a test class with two tests. In the first test, save a new record and validate that it was saved; at the end of the test the database is rolled back, but the cache still contains the record. In the second test method, do a get-all and record the number of records returned; because it is served from the cache, this count still includes the rolled-back record. Then save a new record, which flushes the cache, and get all the records again. If everything worked as expected, the count would grow by one. However, the second get hits the database, which never contained the rolled-back record, so the count does not grow as expected and the assertion fails.

That's with tests. Can this happen in a live system? The answer is yes. If the saves are complex in nature (i.e., more than one DAO call is required to save the information) and there are selects performed at different points during the save process that update the cache, then an error that occurs before the save has completely succeeded will roll back the transaction. The database will be cleaned up, but the cache will still contain the records it attempted to save.
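Here is a minimal sketch of the kind of save that can leave the cache polluted (the service method is hypothetical; the DAO names follow the examples in this series):

// Hypothetical @Transactional service method: more than one DAO call per save,
// with a select in between that repopulates the cache.
@Transactional
public void saveTypeAndLocation(LocationType type, Location location) {
    _locationTypeDao.save(type);     // insert #1: flushes the location_type cache
    _locationTypeDao.getAll();       // select: repopulates the cache, including the
                                     // not-yet-committed row
    _locationsDao.save(location);    // insert #2: if this throws, the transaction rolls
                                     // back, but the cache still holds the row above
}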

iBatis 3 is supposed to solve that problem; however, there is no Spring support for iBatis 3 yet. It is planned for Spring 3.1. For now, you must be very careful using the cache on complex save operations.

iBatis "Cache Miss", Part 2 - Creating automated tests

In part 1 of this series we discussed how to set up JBoss logging so that you can verify whether or not your iBatis caching is working. Once you get caching working, how do you prevent it from being broken by future modifications? It is important to create some automated integration tests that will fail if the caching is broken.

You may ask whether this makes sense; after all, iBatis is a third-party jar. Why would I want to test their code? If you can't trust third-party jar files, where does the testing end? This is a valid argument; after all, you would never write a test to verify that Java getters and setters work as expected. However, in the case of iBatis caching you are not so much testing third-party software (although the tests will certainly do that) as testing that you have configured the third-party software correctly. As an added benefit, if you upgraded iBatis and the cacheModel had been broken, you would know immediately.

Spring provides an autowire capability for injecting beans into your test classes for integration tests. We wanted to simulate the client behavior in an integration test, so we injected the DAO bean for a simple table in the database and wrote two tests.
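For context, here is a minimal sketch of what such a test class might look like using Spring 2.5's TestContext framework (the class name, context location, and DAO interface names are assumptions; with this style each test method would also carry a @Test annotation):

import javax.sql.DataSource;

import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical integration test skeleton: Spring autowires the beans, and each
// test method runs in a transaction that is rolled back when the method finishes.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath:applicationContext-test.xml")
@Transactional
public class LocationTypeCacheTest {

    @Autowired
    private ILocationTypeDao _locationTypeDao;

    @Autowired
    private ILocationsDao _locationsDao;

    @Autowired
    private DataSource _dataSource;

    // the two test methods described below go here
}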

In the first test, we performed an initial get that should have populated the cache. Next, using the Spring SimpleJdbcInsert class, we bypassed our iBatis bean to do an insert into the table; we did this so that iBatis would not flush its cache on the insert. If the cacheModel is set up correctly, when you get the list of records again it will come from the cache and will not include this new record. Finally, using the iBatis DAO bean, we saved another record to the table. This insert flushes the cache, so when the get-all is executed again the count should grow by 2 (the record inserted through Spring plus the one inserted through iBatis). Here is the test method.


public void testDatabaseCaching() {
    List<LocationType> cachedTypes = _locationTypeDao.getAll();
    int numTypes = cachedTypes.size();

    SimpleJdbcInsert lJdbcInsert = new SimpleJdbcInsert(_dataSource)
            .withTableName("location_type")
            .usingGeneratedKeyColumns("location_type_id");
    Map<String, Object> lParameters = new HashMap<String, Object>();
    lParameters.put("location_type_nm", "cacheTest");
    lParameters.put("location_type_desc", "verify ibatis caching working");
    lParameters.put("message_resource_key", "location.type.cacheTest");
    lParameters.put("active_flag", 1);
    lParameters.put("create_user", "ibatisCacheVerification@daoTest.com");
    lParameters.put("create_ts", Calendar.getInstance().getTime());

    // verify that the straight JDBC insert works
    Number lNewId = lJdbcInsert.executeAndReturnKey(lParameters);
    assertNotNull("autogen key shouldn't be null", lNewId);

    // the iBatis cache should be unaware that we added a new
    // location type since we did it with straight JDBC
    cachedTypes = _locationTypeDao.getAll();
    assertEquals(numTypes, cachedTypes.size());

    LocationType newType = new LocationType("cache2", "verify caching working part 2");
    newType.setCreateUserName("ibatisCacheVerification@daoTest.com");
    _locationTypeDao.save(newType);

    // the insert causes a flush, so the cache should now contain both new types
    cachedTypes = _locationTypeDao.getAll();
    assertEquals(numTypes + 2, cachedTypes.size());
}

Running this first test prior to fixing the cache model resulted in assert failures, which validated that our caching was broken.

The second test was to validate a more complex caching problem. In this example, you have a location table with a foreign key to the location_type table. Performing a select on location will result in returning the location information as well as information about the location type. Suppose that the location select is cached, and then a modification occurs where some of the location type information is updated. If the cacheModel is configured incorrectly, it will not flush the cache and the location select cache will still contain the old values from the location_type table. Here is what that test looked like.

public void testIbatisCacheWithJoins() {
    // get the location types (populates the location type cache)
    List<LocationType> cachedTypes = _locationTypeDao.getAll();

    // find the location type with id 1
    LocationType locType = null;
    for (LocationType type : cachedTypes) {
        if (type.getId() == 1) {
            locType = type;
            break;
        }
    }

    // get locations to set the cache initially
    _locationsDao.getLocationsByTypeId(null, locType.getId(), null);

    // rename the location type; this should flush the locations cache as well
    locType.setUpdateUserName("ibatisCacheVerification@daoTest.com");
    locType.setName("New Type Name");
    _locationTypeDao.save(locType);

    List<Location> cachedLocations =
            _locationsDao.getLocationsByTypeId(null, locType.getId(), null);

    assertEquals("New Type Name", cachedLocations.get(0).getLocType());
}

Running this test case with the cacheModel set up incorrectly resulted in an assertion failure as well. Now that we had the two main tests in place, we needed to fix the CacheModel definition, which is discussed in the final blog, iBatis "Cache Miss", Part 3.

iBatis "Cache Miss", Part 1 - Validating the problem

There is a lot of information to cover on this topic, so this will be a series of three blogs. First, a little background. The project I am on noticed some performance issues as we gained more and more users. As a result, we decided to cache the heavily used, infrequently modified iBatis selects. At the time, a lot of investigation was done to verify that the cache models were set up correctly and working.

Unfortunately, all the testing was done manually. So as fate would have it, subsequent changes ended up breaking the caching. However, due to upgrades and other optimizations in the hardware and software it was not immediately obvious that this had occurred. As a result, the project progressed for nearly a year with caching broken.

In this first blog I want to discuss how we were able to verify, through JBoss logging, that caching was broken. The second blog will discuss how to write an integration test that can be automated using Maven to prevent the caching from being broken in the future. The final blog will cover the iBatis cacheModel and how to configure it.

First, you will need to modify the jboss-log4j.xml file in the $JBOSS_HOME/server/default/conf directory. You will need to add a new appender definition as well as set up the categories you want output to that file. This information was found in this post. Here are the excerpts from that file:

<appender name="SQL_FILE"
          class="org.jboss.logging.appender.DailyRollingFileAppender">
    <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/>
    <param name="File" value="${jboss.server.log.dir}/sql.log"/>
    <param name="Append" value="false"/>
    <param name="DatePattern" value="'.'yyyy-MM-dd"/>
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
    </layout>
</appender>

<category name="java.sql" additivity="false">
    <priority value="DEBUG" />
    <appender-ref ref="SQL_FILE"/>
</category>

<category name="com.ibatis" additivity="false">
    <priority value="DEBUG" />
    <appender-ref ref="SQL_FILE"/>
</category>

When verifying this we started with our simplest example: a single table that did not join to other tables. From the client we brought up our admin interface for that table, which performed the initial select of everything and set up the cache. Refreshing, without having performed an insert, update, or delete, should have returned the same results from the cacheModel. Much to our surprise, what we saw was "Cache Miss":

2010-03-01 13:43:56,515 DEBUG [com.ibatis.sqlmap.engine.cache.CacheModel] Cache
'LocationType.locationType_cache': cache miss
2010-03-01 13:59:41,921 DEBUG [java.sql.Connection] {conn-102346}
Preparing Statement: SELECT ...
2010-03-01 13:59:41,921 DEBUG [java.sql.PreparedStatement] {pstm-102347}
Executing Statement: SELECT ...
2010-03-01 13:59:41,921 DEBUG [java.sql.PreparedStatement] {pstm-102347}
Parameters: []

Now that we have been able to manually verify that the cacheModel is broken, we need to write some automated tests so this doesn't happen again. That is the subject of the next blog, iBatis "Cache Miss", Part 2.

After fixing the cacheModel the result was quite different.

2010-03-01 14:01:28,109 DEBUG [java.sql.Connection] {conn-102351} Connection
2010-03-01 14:01:28,109 DEBUG [com.ibatis.sqlmap.engine.cache.CacheModel] Cache
'LocationType.locationType_cache': retrieved object 'LocationType name: ...'

As you can see from the above log entries, it retrieved the results from the cache. Now when you do an insert, this is the output you will see:

2010-03-01 14:06:20,421 DEBUG [java.sql.Connection] {conn-102352} Connection
2010-03-01 14:06:20,421 DEBUG [java.sql.Connection] {conn-102352}
Preparing Statement: INSERT ...
2010-03-01 14:06:20,421 DEBUG [java.sql.PreparedStatement] {pstm-102353}
Executing Statement: INSERT ...
2010-03-01 14:06:20,421 DEBUG [java.sql.PreparedStatement] {pstm-102353}
Parameters: [Weather, null, location.type.weather, 1, jp1@test.com]
2010-03-01 14:06:20,421 DEBUG [java.sql.PreparedStatement] {pstm-102353}
Types: [java.lang.String, null, java.lang.String, java.lang.Integer, java.lang.String]
2010-03-01 14:06:20,703 DEBUG [com.ibatis.sqlmap.engine.cache.CacheModel]
Cache 'Location.new_locations_cache': flushed
2010-03-01 14:06:20,703 DEBUG [com.ibatis.sqlmap.engine.cache.CacheModel]
Cache 'LocationType.locationType_cache': flushed

Reviewing these log statements, you can see that both the Location and LocationType caches were flushed when the insert was performed. The next select will hit the database again and cache the values returned.

Sunday, February 28, 2010

Bulk saving and updating using iBatis

Recently I needed to save records to an association table. All the object values were the same except one. I was looping through the list saving them one at a time, but kept getting "unable to get database connection" errors. I lamented that I couldn't do a bulk save. A co-worker thought that iBatis supported what I wanted to do, and after some investigation we were able to get both a bulk save and a bulk update to work.

For our example, let us assume we have a table that contains a list of companies and another table that contains a list of locations. From the client, when you edit a company you can multi-select a list of locations to associate with that company. On the back end you could loop through the locations and store each one separately, but iBatis allows you to save them all at once. Let's examine how to do that.

Here are the Company and Location classes, minus getters/setters and imports:

package com.projectA;

public class Company {
    private int companyId;
    private String name;
    private List locations;
}

package com.projectA;

public class Location {
    private int locationId;
    private String address;
    private String city;
    private String state;
    private String zipCode;
    private String phoneNumber;
}

The database contains a table of all the companies and a table of all the locations. There is also a third table that contains the company-location associations. To do bulk operations we need to take advantage of the iBatis dynamic SQL tags. Here is the iBatis XML file that contains the bulk save and update statements for the association table:

<sqlMap namespace="Company">

    <typeAlias alias="company" type="com.projectA.Company"/>
    <typeAlias alias="location" type="com.projectA.Location"/>

    <resultMap id="companyResultMap" class="company">
        <result column="company_id" property="companyId" />
        <result column="company_name" property="name" />
        <result column="{companyId=company_id}" property="locations"
                select="Company.getActiveLocations"/>
    </resultMap>

    <resultMap id="locationResultMap" class="location">
        <result column="location_id" property="locationId" />
        <result column="location_address" property="address" />
        <result column="location_city" property="city" />
        <result column="location_state" property="state" />
        <result column="location_zip_code" property="zipCode" />
        <result column="location_phone_number" property="phoneNumber" />
    </resultMap>

    <sql id="select_company_fragment">
        SELECT company_id,
               company_name
        FROM company
    </sql>

    <sql id="select_location_fragment">
        SELECT cl.location_id,
               loc.location_address,
               loc.location_city,
               loc.location_state,
               loc.location_zip_code,
               loc.location_phone_number
        FROM company_location cl, location loc
        WHERE cl.company_id = #companyId#
        AND cl.location_id = loc.location_id
    </sql>

    <select id="getAllCompanies"
            resultMap="companyResultMap">
        <include refid="select_company_fragment"/>
        ORDER BY company_name
    </select>

    <select id="getActiveLocations"
            resultMap="locationResultMap">
        <include refid="select_location_fragment"/>
        AND active_flag = 1
        ORDER BY location_state, location_city
    </select>

    <insert id="bulkInsertCompanyLocation" parameterClass="map">
        INSERT INTO company_location
            (company_id,
             location_id,
             active_flag)
        SELECT #companyId#,
               loc.location_id,
               #activeFlag#
        FROM location loc LEFT JOIN company_location cl ON cl.company_id = #companyId#
            AND cl.location_id = loc.location_id
        WHERE cl.company_id IS NULL
        <dynamic>
            <isNotNull property="locations">
                <iterate property="locations"
                         prepend="AND " open=" loc.location_id IN ("
                         close=")" conjunction=",">
                    #locations[]#
                </iterate>
            </isNotNull>
        </dynamic>
    </insert>

    <update id="bulkUpdateCompanyLocation" parameterClass="map">
        UPDATE company_location
        SET active_flag = #activeFlag#
        WHERE company_id = #companyId#
        <dynamic>
            <isNotNull property="locations">
                <iterate property="locations" prepend="AND " open=" location_id IN ("
                         close=")" conjunction=",">
                    #locations[]#
                </iterate>
            </isNotNull>
        </dynamic>
    </update>

</sqlMap>

The map that is passed into both the insert and update statements contains the companyId, an activeFlag, and a list of location IDs. These statements save or update multiple records in one shot. That's all there is to it. By switching to the bulk insert and update statements, the database connection errors went away.
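Here is a minimal sketch of how the parameter map might be built and passed to these statements (assuming a DAO that extends Spring's SqlMapClientDaoSupport; the method and variable names are illustrative):

// Hypothetical DAO method in a class extending Spring's SqlMapClientDaoSupport.
public void saveCompanyLocations(int companyId, List<Integer> selectedLocationIds) {
    // Build the parameter map the bulk statements expect.
    Map<String, Object> params = new HashMap<String, Object>();
    params.put("companyId", companyId);
    params.put("activeFlag", 1);
    params.put("locations", selectedLocationIds);

    // Associate any newly selected locations, then update the flag on existing rows.
    getSqlMapClientTemplate().insert("Company.bulkInsertCompanyLocation", params);
    getSqlMapClientTemplate().update("Company.bulkUpdateCompanyLocation", params);
}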

The resulting SQL that iBatis generates looks like this:

INSERT INTO company_location
    (company_id,
     location_id,
     active_flag)
SELECT #companyId#,
       loc.location_id,
       #activeFlag#
FROM location loc LEFT JOIN company_location cl ON cl.company_id = #companyId#
    AND cl.location_id = loc.location_id
WHERE cl.company_id IS NULL
    AND loc.location_id IN (1, 3, 10)

UPDATE company_location
SET active_flag = #activeFlag#
WHERE company_id = #companyId#
    AND location_id IN (4, 6, 12)

You can read more about the dynamic and iterate methods in the iBatis documentation.

Saturday, February 27, 2010

Retrieving complex objects from a database using iBatis

We have been using iBatis for two years now on the project I am on. We had a need to retrieve a complex object from the database. Basically, we have an object obA with some fields (String, int, ...), one of which is a List of obB objects, where obB has its own fields (String, int, Date, ...). Instead of retrieving all of the obA objects and then looping through each of them to get the list of obB objects, I wanted to do it in one shot.

A little digging and I found that iBatis supports this, and it really wasn't that hard. This simple example retrieves a list of Company objects, each of which contains a list of Employee objects.
First, here are the two Java classes for Company and Employee. To simplify, imports, getter/setter methods, etc. have been omitted.


package com.projectA;

public class Company {
    private int id;
    private String name;
    private String address;
    private String city;
    private String state;
    private String zipCode;
    private List employees;
}

package com.projectA;

public class Employee {
    private int companyId;
    private int id;
    private String firstName;
    private String lastName;
    private String phoneNumber;
    private String ssn;
    private Date dob;
}


The following is the iBatis XML file that will retrieve a list of all companies sorted by company name, each containing a list of employees sorted alphabetically by last name, then first name.


<sqlMap namespace="Company">

    <typeAlias alias="company" type="com.projectA.Company"/>
    <typeAlias alias="employee" type="com.projectA.Employee"/>

    <resultMap id="companyResultMap" class="company">
        <result column="company_id" property="id" />
        <result column="company_name" property="name" />
        <result column="company_address" property="address" />
        <result column="company_city" property="city" />
        <result column="company_state" property="state" />
        <result column="company_zip_code" property="zipCode" />
        <result column="{companyId=company_id}" property="employees"
                select="Company.getEmployees"/>
    </resultMap>

    <resultMap id="employeeResultMap" class="employee">
        <result column="employee_company_id" property="companyId" />
        <result column="employee_id" property="id" />
        <result column="employee_first_name" property="firstName" />
        <result column="employee_last_name" property="lastName" />
        <result column="employee_phone_number" property="phoneNumber" />
        <result column="employee_ssn" property="ssn" />
        <result column="employee_dob" property="dob" />
    </resultMap>

    <sql id="select_company_fragment">
        SELECT company_id,
               company_name,
               company_address,
               company_city,
               company_state,
               company_zip_code
        FROM company
    </sql>

    <sql id="select_employee_fragment">
        SELECT employee_company_id,
               employee_id,
               employee_first_name,
               employee_last_name,
               employee_phone_number,
               employee_ssn,
               employee_dob
        FROM employee
    </sql>

    <select id="getAllCompanies"
            resultMap="companyResultMap">
        <include refid="select_company_fragment"/>
        ORDER BY company_name
    </select>

    <select id="getEmployees"
            resultMap="employeeResultMap">
        <include refid="select_employee_fragment"/>
        WHERE employee_company_id = #companyId#
        ORDER BY employee_last_name, employee_first_name
    </select>

</sqlMap>


That's all there is to it. When getAllCompanies is called, it automatically calls the getEmployees statement to fill in the list of employees for each company. You can see this by looking at the companyResultMap: the employees property is mapped to a select statement, and the current company id is passed into that select.
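For completeness, here is a minimal sketch of how a DAO might invoke this (assuming Spring's SqlMapClientDaoSupport; this method is not shown in the original post):

// The nested select (Company.getEmployees) runs automatically for each company row
// returned by Company.getAllCompanies.
@SuppressWarnings("unchecked")
public List<Company> getAllCompanies() {
    return getSqlMapClientTemplate().queryForList("Company.getAllCompanies");
}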

Some benefits are simplicity and performance: one DAO call retrieves everything I want, instead of getting a list of companies and then iterating over each one to get its list of employees.

A nice iBatis feature is the typeAlias. The benefit of using an alias in the iBatis XML file is that instead of needing to put com.projectA.Company in multiple places in the file, you define it once and reference it. Then if the package structure is refactored, you only have to change it in one place.

Another nice iBatis feature is the sql element. It allows you to define SQL fragments that can be used by more than one select. So, in our example, if you wanted to select all the companies for a given city you could add the following (with its own statement id):

<select id="getCompaniesByCity"
        parameterClass="String"
        resultMap="companyResultMap">
    <include refid="select_company_fragment"/>
    WHERE company_city = #city#
    ORDER BY company_name
</select>


This is the same as the getAllCompanies select above, except it takes a city parameter and adds the constraint on the city.

More iBatis posts to follow.