Thursday, October 13, 2005

RTFM and GetParserCacheItem Exception

In 2004, I posted to a couple of forums regarding an exception I was receiving from a website that I maintained. Going back and reviewing those posts made me realize that all I needed to do was read the content of the exception message (blush): the exception was being generated because the file was not found. Here's an idea of what the message looked like...

Message: c:\someDirectory\anotherDir\webFolder\SomePage.aspx
TargetSite: System.Web.UI.ParserCacheItem GetParserCacheItem()
Source: System.Web
Stack Trace:
   at System.Web.UI.TemplateParser.GetParserCacheItem()
   at System.Web.UI.TemplateControlParser.CompileAndGetParserCacheItem(String virtualPath, String inputFile, HttpContext context)
   at System.Web.UI.TemplateControlParser.GetCompiledInstance(String virtualPath, String inputFile, HttpContext context)
   at System.Web.UI.PageParser.GetCompiledPageInstanceInternal...

The problem was that the file wasn't found, so the page never loaded. Unfortunately, I expected to receive a 404 instead of this GetParserCacheItem exception. I didn't accept the fact that the file was simply NOT in the directory. It's the biggest example of why one should RTFM, or in this instance RTFEM. Live and learn...
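
In hindsight, a quick existence check would have surfaced the problem immediately. Here's a minimal sketch of what that might look like in a code-behind (the path and page name are made up for illustration):

// C#
// Hypothetical guard: fail with a clear message when the physical file
// is missing, instead of letting the parser throw a puzzling exception.
string physicalPath = Server.MapPath( "webFolder/SomePage.aspx" );

if ( !System.IO.File.Exists( physicalPath ) )
{
    throw new System.IO.FileNotFoundException( "Page not found on disk", physicalPath );
}

Server.Transfer( "webFolder/SomePage.aspx" );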

-Brian

Monday, September 26, 2005

Test Driven Development

I have been reading a lot lately about software development processes. One of the most fascinating ideologies I have come across is Test Driven Development (TDD). The theory, as I understand it, is that you start the process by writing a test case. At this point you have no code, so the test can't even compile, let alone pass. Next you write just enough code for your application to build and run; the automated test still fails, because you only wrote enough code to get the application running, not to satisfy the test. Enough code is then written for the test to complete successfully, and then you move on to the next test case, refactoring at the beginning of each successive iteration.
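
Here's what one iteration might look like with NUnit (the Account class is invented for the example): the test below is written first and fails, and then just enough production code is written to make it pass.

// C#
using NUnit.Framework;

[TestFixture]
public class AccountTests
{
    [Test]
    public void DepositIncreasesBalance()
    {
        Account account = new Account();
        account.Deposit( 100 );
        Assert.AreEqual( 100, account.Balance );
    }
}

// Just enough production code to make the test pass - no more.
public class Account
{
    private int balance = 0;

    public int Balance
    {
        get { return balance; }
    }

    public void Deposit( int amount )
    {
        balance += amount;
    }
}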

What most appealed to me, initially, is the fact that you are writing unit tests from square one. That's absolutely wonderful. Unfortunately, many developers in the real world (I am guilty of this one) wait until the last minute to write and perform tests on an application. I won't go into why this is bad; just know that it is. TDD forces you to write your tests up front, giving you the ability to run your tests with every build. This also allows you to perform regression testing - BRILLIANT! Why haven't I come across this before? The more I research, the more I learn.

Proponents of TDD also boast that writing unit tests up front leads to improved application design. Boy are they right! In writing tests you are interfacing with your application before it is even constructed. If your tests don't make sense, then your application won't make sense. This is a good indication that the design needs to be revisited. I love it!

Some helpful links...
http://www.nunit.org/
http://www.testdriven.com
http://codebetter.com/blogs/jeremy.miller/

Monday, August 22, 2005

Part 2 - Error Handling (Not Exceptions)

Discussions have led to a couple of possible resolutions to the problem at hand.

  1. Creating an event on the necessary object that gets raised whenever an error occurs.
  2. Creating an errorMessage field/property that's normally null, but gets set any time an error occurs. It's up to the developer to check the field before using the object.
  3. Similar to 2, except you create an Error object that has properties and such that get set on error.
  4. Similar to 2 also: create an enumeration, along the lines of ObjectStatus, that all Business Objects carry to indicate the state of the object at all times (options 2 and 4 are sketched together below).
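
For the sake of discussion, here's a minimal sketch of what options 2 and 4 might look like combined on the Person object from the last post (the names and status values are just placeholders):

// C#
public enum ObjectStatus
{
    Valid,
    DuplicateLastName,
    DuplicateEmail
}

public class Person
{
    // Normally Valid/null; whatever does the persistence work (e.g. the
    // Gateway) would set these whenever an error occurs. It's up to the
    // developer to check them before using the object.
    private ObjectStatus status = ObjectStatus.Valid;
    private string errorMessage = null;

    public ObjectStatus Status
    {
        get { return status; }
    }

    public string ErrorMessage
    {
        get { return errorMessage; }
    }
}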

Any other suggestions out there?

-Brian

Friday, August 12, 2005

Error Handling (Not Exceptions)

I work for a good friend of mine on a side project, and we are currently in the development stages of version 2.0. The biggest modification is a complete architecture overhaul. One of the issues I have come across involves propagating logical error messages for non-exceptional circumstances (what?) through the different application layers. Here's what I mean...

An exception occurs when the database you anticipate being available is unavailable. An error occurs when you attempt to insert a new record into the database and a duplicate entry is found. You should not throw an exception in this instance because of the overhead involved in terms of performance (perf).

In a standard Enterprise application you typically have 3 logical layers - Presentation, Business Logic and Data Access. The layer separation allows a developer to keep dependencies to a minimum so that maintenance is easier. We have decided to use objects as our business components instead of DataSets (this is a .NET project) because this allows for more complicated business logic (read Martin Fowler's Patterns of Enterprise Application Architecture for more information). We are then using what's called a Gateway to map data to these objects and return the initialized objects to the presentation layer.
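
For context, here's a minimal sketch of what a Gateway might look like with ADO.NET. The connection string, table name and Person constructor are all placeholders for illustration, not our actual code:

// C#
using System.Data.SqlClient;

public class PersonGateway
{
    // Inserts a new row and maps the data to an initialized Person
    // object for the presentation layer.
    public static Person Create( string lastName, string email )
    {
        using ( SqlConnection connection = new SqlConnection( "..." ) )
        {
            SqlCommand command = new SqlCommand(
                "INSERT INTO Persons ( LastName, Email ) VALUES ( @LastName, @Email )",
                connection );
            command.Parameters.Add( "@LastName", lastName );
            command.Parameters.Add( "@Email", email );

            connection.Open();
            command.ExecuteNonQuery();
        }

        // Assumes Person has a ( lastName, email ) constructor.
        return new Person( lastName, email );
    }
}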

Okay, now you have the background, back to the example.

Say you are trying to insert a new person's information into a database; for simplicity's sake we are just going to insert a last name and an e-mail address. Your business rules tell you that both the last name and e-mail address have to be unique (not reality I know, but it's simple). Your method call ends up looking like the following.

// C#
Person person = Person.Create( lastName, email );

The Create method might look something like this.

public static Person Create( string lastName, string email )
{
    Person person = null;

    // FindByLastName returns an existing Person if either the last name
    // or the email address is already taken, so null means "no duplicate".
    if ( Person.FindByLastName( lastName, email ) == null )
    {
        person = PersonGateway.Create( lastName, email );
    }

    return person;
}


Now, if the person object being returned is null, then we can assume a duplication was found in either the last name or the email address, but how do we communicate to the user which one was the problem? Do we just say a duplicate was found and hope he/she guesses correctly or wait for the annoying tech support call?

Making two separate calls to check the last name and then the email address is a logical solution, but that means going to the database twice before we even attempt to create the new record (a third trip to the database). Another suggestion would be to let the SQL handle the business logic (a no-no, because that's supposed to happen in the Business Logic layer), but again, how do we communicate which field was the problem?

Can anyone out there offer a suggestion?

-Brian

Thursday, August 04, 2005

OutOfPatienceException

I have been improving an application at work for more than a year now with great success. We have quadrupled the number of users in that time and received additional funding for equipment and other resources. Unfortunately, we have run into a new snag - an OutOfMemoryException (which turns into a Denial of Service [DOS] response to the user).

Okay, how do you solve that? For starters, we have not been doing a very good job of monitoring the application's performance. If we had, we would have realized that there was too much data being stored in our sessions. When 7 users on a server translates to ~500MB of memory, you naturally encounter problems when your numbers increase to 28 (not actual figures :)).

The obvious solution is to stop putting everything into the session. Okay, but then the application suffers from network traffic delays due to round trips from the user's machine to the server, the server to the DB server, back to the server and then finally to the user's machine again. During off-peak hours this is not a problem. However, during the day our network virtually crawls.
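
One middle ground might be to keep read-mostly data that's shared across users in the application-wide Cache instead of in each user's Session, so the server holds one copy rather than one per user. A minimal sketch (the names and the 30-minute expiration are invented for illustration):

// C#
using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public class LookupData
{
    // Hypothetical example: one shared copy in the application Cache
    // instead of a copy in every user's Session.
    public static DataSet GetBusinessUnits()
    {
        Cache cache = HttpContext.Current.Cache;
        DataSet businessUnits = (DataSet)cache[ "BusinessUnits" ];

        if ( businessUnits == null )
        {
            businessUnits = LoadBusinessUnitsFromDatabase();
            cache.Insert( "BusinessUnits", businessUnits, null,
                DateTime.Now.AddMinutes( 30 ), Cache.NoSlidingExpiration );
        }

        return businessUnits;
    }

    private static DataSet LoadBusinessUnitsFromDatabase()
    {
        // Placeholder for the actual data access call.
        return new DataSet();
    }
}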

The current proposed solution is an upgrade in hardware; the lead architect wishes to go from 1GB of RAM to 4GB. A sizable increase, but IMO this ignores the real problem of too much session data. What happens when we quadruple our user base again? The server has a limit to the amount of RAM it can utilize; users, however, have no concept of hardware limitations and assume that things just work.

We'll see where this goes, but I may be re-architecting the entire solution at some point for business process changes, so I may tackle this issue then.

Thursday, July 14, 2005

Dynamic Controls Everywhere - Part Two: Control Creation

A couple of months ago I implemented a simple control builder in a test application to generate dynamic controls using the System.Activator class of the .NET Framework. The class added each created control to a ControlCollection (what a coincidence). It worked very well, but it suffered from two flaws: 1.) I had to maintain a disgusting switch statement for every type of control that I wanted to render, and 2.) I had to recompile whenever I needed additional control support.

In order to avoid the switch statement and the recompile, I needed to perform the same operations, but drive them from a cached configuration file instead of the switch statement. I managed to implement this using Reflection rather than the System.Activator - naturally, I ran into some problems that led me to this solution.

The scenario...

In order to use the Activator's "CreateInstance" method you need to specify the object's Type (the easiest overload, IMO). I recently learned that the GetType( string typeName ) method of the System.Type class only works for types in the current working assembly (not sure if this is the right term); you can always call GetType() on an already-initialized object, but that would defeat the purpose of creating dynamic controls. For instance, if you are in namespace MyProject.Web and have a type MyProject.Web.MyCustomPage, then GetType( "MyProject.Web.MyCustomPage" ) will return a Type object. However, if you need to perform GetType on MyOtherProject.WebControls.SomeCustomControl, you're out of luck, because GetType only looks in the current assembly.
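
To make that concrete (the type names are from the example above):

// C#
// Running inside the MyProject.Web assembly...
Type pageType = Type.GetType( "MyProject.Web.MyCustomPage" );
// pageType is a valid Type object.

Type controlType = Type.GetType( "MyOtherProject.WebControls.SomeCustomControl" );
// controlType is null - the type lives in a different assembly.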

Okay, so how can I get the right assembly loaded so that GetType looks in the correct assembly for the correct type name, allowing me to initialize my control? I started poking around the MSDN documentation on Reflection and found that the System.Reflection.Assembly object has the very same CreateInstance method for creating an instance from a type name - except that it requires an instance of the Assembly. Easy enough, right? Well, yes and no!

It's easy to create an object in most OO languages; however, in order to create an instance of the Assembly you need the fully qualified assembly name. So I built a quick console application that wrote my assembly's fully qualified name to a text file (not the most elegant approach, but it was quick and dirty), and I just copied that name into the necessary call, and voila: an Assembly instance, and with it the ability to create an instance of any public, non-abstract type in that Assembly.
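
Putting it together, here's roughly the shape of the finished piece (the assembly and type names below are placeholders for the real ones):

// C#
using System;
using System.Reflection;
using System.Web.UI;

public class ControlFactory
{
    // Loads the assembly by its fully qualified name, then asks it to
    // create an instance of the requested control type.
    public static Control CreateControl( string assemblyName, string typeName )
    {
        Assembly assembly = Assembly.Load( assemblyName );
        return (Control)assembly.CreateInstance( typeName );
    }
}

// The quick-and-dirty console step that produced the assembly name:
// Console.WriteLine( typeof( SomeCustomControl ).Assembly.FullName );
//
// Usage:
// Control control = ControlFactory.CreateControl(
//     "MyOtherProject, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null",
//     "MyOtherProject.WebControls.SomeCustomControl" );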

All that is left to do is build the config file for the dynamic controls and all will be right in the World. Stay tuned!

Thursday, July 07, 2005

Dynamic Controls Everywhere - Part One: Problem/Initial Solution

One of the tasks I have been given requires me to add a “portal” (for lack of a better term) to our existing intranet website for reporting purposes. The current report section suffers from many problems, not the least of which is that it requires a complete application recompile to add a simple link to a new report for our users – terribly inconvenient. The new report section had to be, above all else, configurable (read config files – lots of config files).

A “report” in our application consists of a set of search criteria – e.g., show me sales for business unit A for Q1 of this year (or any year, for that matter); the corresponding parameter names required for the report; and a hyperlink to a separate web server that generates the report. Most of the aforementioned requirements are hard-coded into the code-behind of the ASP.NET page, which makes them difficult to maintain. The current interface lists every possible criterion, as some sort of HTML input control, regardless of the report’s requirements. Once the search options are put into the form fields, the user has to post the page back – we like the client, so let’s try to fix this – which then re-builds all of the report links. Then the user can click on any link and open the corresponding Crystal Report.

Yuck! That’s all I can say. Hard-coding business logic into code-behinds is a big no-no if you ask any software developer. Not only do you have to manage the business logic there, but any time a change is needed we have to recompile and redeploy the entire application. This means booting all user sessions and forcing the server to recompile the app into MSIL. To resolve part of this problem, I created a configuration file with the following XML arrays that get loaded into a cached custom class (cache = performance boost when memory is available).

<criterion>
    <criteria type="" description="" label="" assemblyName="" parameterName="" />
</criterion>
<reports>
    <navigationItem description="" label="" hyperlink="" />
</reports>

When the Report page is called, it loads the configuration file (either from the cache or by deserializing it with the XmlSerializer) and supplies the custom collections to the page through two properties – Reports and Criterion (what a coincidence). The page then loops through each collection, building the criteria and report-link sections from the corresponding collection. Any time we need to add a criteria item or a report, we just modify the configuration file and the necessary items just “show up” on the page. Look Ma: no recompilation (twice if you’re keeping count), no redeployment, and we have achieved ease of maintenance.
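
Here's a minimal sketch of the cached load, assuming a ReportConfiguration class that the XmlSerializer maps the file onto (the class, cache key and path are placeholders):

// C#
using System.IO;
using System.Web;
using System.Web.Caching;
using System.Xml.Serialization;

public class ReportConfiguration
{
    // Criterion and Reports collections omitted for brevity.
}

public class ReportConfigurationLoader
{
    public static ReportConfiguration Load( string configPath )
    {
        // Cache hit = no file I/O and no deserialization.
        ReportConfiguration config =
            (ReportConfiguration)HttpContext.Current.Cache[ "ReportConfig" ];

        if ( config == null )
        {
            XmlSerializer serializer = new XmlSerializer( typeof( ReportConfiguration ) );

            using ( FileStream stream = File.OpenRead( configPath ) )
            {
                config = (ReportConfiguration)serializer.Deserialize( stream );
            }

            // Evict the cached copy automatically whenever the file changes.
            HttpContext.Current.Cache.Insert( "ReportConfig", config,
                new CacheDependency( configPath ) );
        }

        return config;
    }
}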

E-mail me if you would like to see the source code and configuration files.

-Brian

Tuesday, July 05, 2005

Entrance Into the Blogosphere

I have finally decided to enter the world of blogging to share my thoughts on .NET, programming, other blogs and anything else that suits my fancy. My first "real" post will be up in a few days when I have a working sample. The post will involve dynamic web controls, .NET reflection, and a config file.

Stay tuned (or not :})

-Brian