Tuesday, February 28, 2006

Standards in enterprise level intranets

This is more of a best practices blog than a technical one. After seeing several large enterprise-level intranets grind themselves to near uselessness, I figured it was time to shed some light on why standards can be so important.

Defined standards are an often overlooked part of a company's internal computing strategy, yet in my opinion a very important one. Introducing standards into web systems will, in the long run, save user frustration, save time, save money, and ensure that an organization's investment in its information remains accessible.

Keeping a few simple things in mind when laying out your design will inevitably create a better end user experience.

Successful enterprise-level intranets should contain usable, organized information. Feel free to babble on about the history of your company on your extranet, but remember to keep your intranet environment concise and to the point. The key is that the intranet is a tool, and when users' brains are hijacked by lack of organization and extraneous information, its effectiveness is lost. End users should be able to retrieve what they are looking for quickly, and then move on.

Early intranet adopters usually have chaotic web structures. Many larger companies have a disorganized or nonexistent web structure because their strategy was (and is) to piece together all their departments' home-made websites. Every department has a self-proclaimed web aficionado, and that person was typically tapped to "put together" and maintain the department's intranet site. This leads to a host of issues, including lack of central management, unbalanced traffic loads (both physical and political), and my personal pet peeve, departmental branding, which I will get into a bit later. All of these things lend themselves to an inefficient end user experience. It may sound harsh, but taking design liberties away from your rogue developers will foster a user-centric and standard web experience. Corporate intranets should be centrally managed with regard to design and function; actual content should be delegated.

Drop the fancy logos. One thing that I have seen over the years in most if not all of the patched-together intranet systems is custom departmental logos popping up. Some facets of an organization will in fact need self-branding, but keep in mind that most don't, and when they don't, the logos add to the confusion factor. Adopt a rendition of your corporate logo, and give sublevels a clean template area they can fill with a picture explaining what it is that they do. Your company has already spent millions of dollars developing an image for itself; it may hurt, but that image is better than your fancy new logo that you made in Photoshop. Sub-branding also throws off new users. I speak from experience when I say an intranet with a different header image and logo on each site makes a new employee wonder how many different companies are involved. Sure, departmental pride is a good thing, but who do you actually work for? Creating sub-logos projects that you are on a different team altogether, not working toward a common goal.

In closing, it is easy to see why we need standards. Designing the superstructure of your intranet intelligently will make your investment yield a much higher return. So develop your design standards with regard to look and feel and navigation, and keep them user focused! Long story short: all development, including back end systems, graphics, and applications, should be agreed upon at a corporate level by development staff and management. Delegate the content management tasks to the guys in each department whose experience consists of making a website for their local church. Good luck, you're gonna need it.


Wednesday, February 01, 2006

Classic ASP on 2003 Server with disabled Connection Pooling.

I thought that this experience was blog-worthy due to the highly undocumented nature of this problem. I hope that it can shed some light on why your new MS2003 web migration isn't exactly holding its own under a moderate traffic load.

The day starts like this: you take the initiative and migrate your rusty old NT web farm to a happy new Win2003 environment. You're running very expensive, highly trafficked custom ASP code with a remote SQL backend. Expecting huge performance gains from the cutting-edge systems, you brief everyone on the IT staff about the new direction the neglected web systems are headed in.

You migrate all of the systems over in one fell swoop, do a bit of testing, and after deciding that everything checks out, you swap the DNS entries. Now the applications are live on the new hardware.

At 4am the phone rings. It's Tokyo, and they want to know why their business-critical web applications are not serving pages.

Sparing you my ranting soliloquy: this is an issue that I recently ran into. Everything ran smooth as a whistle until there was a mild, and I stress mild, load (under 300 concurrent connections) on the servers. But why did cutting-edge servers running the latest Microsoft web serving technology get outperformed by old NT machines? The answer is in the way that 2003 Server talks to SQL backends with connection pooling disabled.


Basically, the web server was running out of available ports with which to communicate with the back end SQL server. This became evident by running a basic netstat on the test web server during one of the load tests and monitoring the connection traffic. By default, Windows Server 2003 gives a client roughly 4000 ephemeral ports (1025 through 5000) for outbound connections like these, and the netstat output showed me all 4000 of them opened to the SQL server almost simultaneously. Once the server runs over this allotment, it starts denying new connections. Worse, a port stays reserved for a default of four minutes after its connection closes; even though the connection is idle, the port sits in "TIME_WAIT" status, essentially unusable by another request. Every request past the limit gets denied until ports are freed. This is a very obscure "feature" of 2003 Server and classic ASP.
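If you want to watch the ports disappear yourself, tallying connection states out of netstat during a load test makes the pattern obvious. Below is a minimal Python sketch of that diagnostic, with a placeholder address standing in for your SQL backend; eyeballing raw netstat -an output gets you the same answer.

import subprocess
from collections import Counter

SQL_SERVER = "10.0.0.5"  # placeholder: your backend SQL server's address

# "netstat -an" prints one line per socket: Proto, Local Address, Foreign Address, State
output = subprocess.run(["netstat", "-an"], capture_output=True, text=True).stdout

states = Counter()
for line in output.splitlines():
    parts = line.split()
    if len(parts) == 4 and parts[0] == "TCP" and parts[2].startswith(SQL_SERVER + ":"):
        states[parts[3]] += 1  # tallies ESTABLISHED, TIME_WAIT, and friends

for state, count in states.most_common():
    print(state, count)

Run it every few seconds during the test and you can watch TIME_WAIT climb toward the 4000 port ceiling.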

There are essentially two ways to fix the problem. In this case, partly due to the nature of the applications, I chose to increase the port allotment on the web server to allow more simultaneous connections. Here is the quick and dirty fix: in the afflicted web server's registry, add the value

Value Name: MaxUserPort
Data Type: REG_DWORD
Value: 65534

to

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
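If you would rather script the change than click through regedit, a quick Python sketch along these lines does the same thing; note that it must run as an administrator, and the new limit does not take effect until the server is rebooted.

import winreg  # Windows-only standard library module

# Open the TCP/IP parameters key with write access (requires administrator rights)
params = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters",
    0,
    winreg.KEY_SET_VALUE,
)

# Raise the ephemeral port ceiling from the default 5000 to 65534
winreg.SetValueEx(params, "MaxUserPort", 0, winreg.REG_DWORD, 65534)
winreg.CloseKey(params)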

This essentially gives the client (in this case, a web server) roughly 60000 more ports to play with. After applying the changes, the load tests showed that this successfully resolved our issue.

You should, under just about all circumstances, be using connection pooling; however, there are some situations where this is not feasible. In my own opinion, I am not confident that connection pooling works very well, based on some of the load tests I have done on 2003 Server communicating with a SQL backend (in my case, a SQL cluster). This may or may not have been fixed by the time you read this, if it was an issue at all.

After spending hours testing and researching this issue, we happened on this fix. Since then, MS has published this KB article explaining most of what is going on here:

http://support.microsoft.com/kb/328476

Please note that this may or may not be the best solution for your setup. This type of issue can also surface when there are other underlying problems, such as connections that are never properly closed in code, or code that rapidly opens and closes connections, both of which put high stress on your database servers.
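To make that warning concrete, here is a hedged sketch of the two patterns in Python with pyodbc rather than classic ASP's ADO (the table and connection details are hypothetical); the idea translates directly.

import pyodbc  # assumes the pyodbc package and a SQL Server ODBC driver are installed

# Hypothetical connection string; substitute your own server and credentials
CONN_STR = "DRIVER={SQL Server};SERVER=sqlbackend;DATABASE=appdb;UID=webuser;PWD=secret"

# Bad: a brand new connection per query that is never explicitly closed. The
# socket lingers until garbage collection, and with pooling disabled every
# call burns another ephemeral port on the web server.
def order_status_bad(order_id):
    cursor = pyodbc.connect(CONN_STR).cursor()
    return cursor.execute("SELECT status FROM orders WHERE id = ?", order_id).fetchone()

# Better: open once, do the whole batch of work, close deterministically.
def order_statuses_better(order_ids):
    conn = pyodbc.connect(CONN_STR)
    try:
        cursor = conn.cursor()
        return [
            cursor.execute("SELECT status FROM orders WHERE id = ?", oid).fetchone()
            for oid in order_ids
        ]
    finally:
        conn.close()

With pooling disabled, every connect() costs a handshake and an ephemeral port, so holding one connection for a batch of work is far cheaper than a connection per query.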

I hope this is useful info; I know it would have been to me had I run across it.
