Monday, August 22, 2005

Part 2 - Error Handling (Not Exceptions)

Discussions have led to a few possible resolutions to the problem at hand.

  1. Creating an event on the necessary object that gets raised whenever an error occurs.
  2. Creating an errorMessage field/property that's normally null, but gets set any time an error occurs. It's up to the developer to check the field before using the object.
  3. Similar to 2, except you create an Error object that has properties and such that get set on error.
  4. Also similar to 2: create an enumeration, along the lines of ObjectStatus, that all Business Objects carry to indicate the state of the object at all times.
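As a rough sketch of option 3, here is one way it could look. Everything here is illustrative, not the project's actual API: `BusinessError`, the `LastError` property, and the in-memory duplicate check are all stand-ins. Note this changes the earlier convention of returning null; instead, `Create` always returns an object and records any logical error on it.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of option 3: an Error object exposed on the business object.
public class BusinessError
{
    public string Field;    // which field caused the error, e.g. "Email"
    public string Message;  // human-readable description for the presentation layer
}

public class Person
{
    // Stand-in for the database: e-mail addresses already taken.
    private static readonly HashSet<string> existingEmails =
        new HashSet<string> { "taken@example.com" };

    // Null when the last operation succeeded; populated on a logical error.
    public BusinessError LastError { get; private set; }

    public static Person Create(string lastName, string email)
    {
        Person person = new Person();
        if (existingEmails.Contains(email))
        {
            // Duplicate found: record which field failed instead of throwing.
            person.LastError = new BusinessError
            {
                Field = "Email",
                Message = "A person with this e-mail address already exists."
            };
        }
        return person;
    }
}
```

The presentation layer then checks `person.LastError` after the call and can tell the user exactly which field collided, with no exception overhead.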

Any other suggestions out there?


Friday, August 12, 2005

Error Handling (Not Exceptions)

I work with a good friend of mine on a side project, and we are currently in the development stages of version 2.0. The biggest modification is a complete architecture overhaul. One of the issues I have come across involves propagating logical error messages for non-exceptional circumstances (what?) through the different application layers. Here's what I mean...

An exception occurs when the database you anticipate being available is unavailable. An error occurs when you attempt to insert a new record into the database and a duplicate entry is found. You should not throw an exception in this instance because of the overhead involved in terms of performance (perf).

In a standard Enterprise application you typically have 3 logical layers - Presentation, Business Logic and Data Access. The layer separation allows a developer to keep dependencies to a minimum so that maintenance is easier. We have decided to use objects as our business components instead of DataSets (this is a .NET project) because this allows for more complicated business logic (read Martin Fowler's Patterns of Enterprise Application Architecture for more information). We are then using what's called a Gateway to map data to these objects and then return the initialized objects to the presentation layer.
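The Gateway idea can be sketched like this. The names and shapes below are assumptions for illustration only (in particular, a dictionary stands in for a database result row); the real project's gateway will differ.

```csharp
using System.Collections.Generic;

// Minimal business object; fields are illustrative.
public class Person
{
    public string LastName;
    public string Email;
}

// Sketch of a Gateway: its job is to map raw data rows to initialized
// business objects so the presentation layer never touches data access.
public static class PersonGateway
{
    // In the real application this would read from the database;
    // here an IDictionary stands in for one result row.
    public static Person MapRow(IDictionary<string, string> row)
    {
        return new Person
        {
            LastName = row["LastName"],
            Email = row["Email"]
        };
    }
}
```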

Okay, now you have the background, back to the example.

Say you are trying to insert a new person's information into a database; for simplicity's sake we are just going to insert a last name and an e-mail address. Your business rules tell you that both the last name and the e-mail address have to be unique (not realistic, I know, but it's simple). Your method call ends up looking like the following.

// C#
Person person = Person.Create( lastName, email );

The Create method might look something like this.

public static Person Create( string lastName, string email )
{
    Person person = null;

    if ( Person.FindByLastName( lastName, email ) == null )
    {
        person = PersonGateway.Create( lastName, email );
    }

    return person;
}

Now, if the person object being returned is null, then we can assume a duplication was found in either the last name or the email address, but how do we communicate to the user which one was the problem? Do we just say a duplicate was found and hope he/she guesses correctly or wait for the annoying tech support call?

Making two separate calls to check the last name and then the e-mail address is a logical solution, but that means going to the database twice before we even attempt to create the new record (a third trip to the database). Another suggestion would be to let the SQL handle the business logic (a no-no, because that's supposed to happen in the Business Logic layer), but again, how do we communicate which field was the problem?
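One shape the answer could take, sketched here purely as a strawman: have the create call return a status enum that names the offending field, so a single round trip can still tell the user what went wrong. `CreateStatus`, `PersonFactory`, and the hard-coded duplicate checks below are all hypothetical; in practice the duplicate detection would be one query (or the database's own duplicate-key error) rather than string comparisons.

```csharp
using System;

// Hypothetical: report which field collided instead of returning a bare null.
public enum CreateStatus { Success, DuplicateLastName, DuplicateEmail }

public static class PersonFactory
{
    public static CreateStatus Create(string lastName, string email, out string personId)
    {
        personId = null;

        // Simulated single check that knows which column matched;
        // a real version might be one SELECT testing both columns.
        if (email == "taken@example.com") return CreateStatus.DuplicateEmail;
        if (lastName == "Taken") return CreateStatus.DuplicateLastName;

        personId = Guid.NewGuid().ToString();
        return CreateStatus.Success;
    }
}
```

The presentation layer can then switch on the status and show a field-specific message, and the business rule stays out of the SQL.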

Can anyone out there offer a suggestion?


Thursday, August 04, 2005


I have been improving an application at work for more than a year now with great success. We have quadrupled the number of users in that time and received additional funding for equipment and other resources. Unfortunately, we have run into a new snag - OutOfMemoryException (which turns into a Denial of Service [DoS] response to the user).

Okay, how do you solve that? For starters, we have not been doing a very good job of monitoring the application's performance. If we had, we would have realized that there is too much data being stored in our sessions. When 7 users on a server translates to ~500MB of memory, you naturally encounter problems when your numbers increase to 28 (not actual figures :)).
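The back-of-the-envelope arithmetic, using the illustrative numbers above:

```csharp
using System;

// ~500 MB across 7 users works out to roughly 71 MB of session data each;
// quadrupling to 28 users projects to ~2 GB, well past a 1 GB server.
double mbPerUser = 500.0 / 7;
double projectedMb = mbPerUser * 28;
Console.WriteLine($"{mbPerUser:F0} MB/user -> {projectedMb:F0} MB total at 28 users");
```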

The obvious solution is to stop putting everything into the session. Okay, but then the application suffers from network traffic delays due to round trips from the user's machine to the server, the server to the DB server, back to the server and then finally to the user's machine again. During off-peak hours this is not a problem. However, during the day our network virtually crawls.

The current proposed solution is an upgrade in hardware; the lead architect wishes to go from 1GB of RAM to 4GB. A sizable increase, but IMO, this ignores the real problem of too much session data. What happens when we quadruple our user base again? The server has a limit to the amount of RAM it can utilize; users, however, have no concept of hardware limitations and assume that things just work.

We'll see where this goes, but I may be re-architecting the entire solution at some point for business process changes, so I may tackle this issue then.