Tuesday, December 04, 2007

Making sense of the Global Positioning System

Thanks to the U.S. Department of Defense (and good ole President Ronald Reagan), GPS signals are freely available for civilian use. Today, GPS is basically ubiquitous in most people's lives. Most new cars use it to show you where you are and your proximity to the nearest Starbucks. Raising the "cool factor" bar even higher is what GPS means for the Internet as we know it. For example, people are geocoding the images in their online photo albums, and cool apps like Google Earth, Geocaching, and all sorts of new creative games using real places are sprouting up all over the Internet (people are going outside again!). Basically, with GPS the Internet can break out of its closed, linear state and become part of our real three-dimensional world. Very cool, but more on that later - let's dive into what makes the Global Positioning System tick and how you can take advantage of the technology in your next project.

You will find that most GPS devices out there (USB GPS receivers for your PC, handheld units, etc.) report data from satellites in a neat standard format called NMEA. NMEA uses a serial ASCII protocol to send GPS data to your application for consumption. The resulting comma-delimited data is piped out or logged sentence by sentence from your device. The NMEA format makes it easy to take the data and parse it any way we'd like. Now, we could just use the software that came with Microsoft Streets & Trips or an Earthmate GPS, but that's no fun.

Decoding GPS log files
There are many different types of NMEA sentences. Luckily, they are all in an easy-to-read format. The first field is the sentence type and always starts with $. I am only going to focus on a couple of sentence types that give us our position information. For the purposes of this blog, we'll ignore the other sentences, but there are plenty of references about them online.

My favorites are $GPGGA and $GPRMC. You will find that these two have all the data you should need for tracking.

Example sentence (GPGGA):
$GPGGA,192122,3514.7971,N,07634.7585,W,1,04,01.3,00006.2,M,-035.9,M,,*79

Translation:
$GPGGA,hhmmss.ss,llll.ll,a,yyyyy.yy,a,x,xx,x.x,x.x,M,x.x,M,x.x,xxxx*hh

Here is what each field means:
1 = UTC of Position
2 = Latitude
3 = N or S
4 = Longitude
5 = E or W
6 = GPS quality indicator (0=invalid; 1=GPS fix; 2=Diff. GPS fix)
7 = Number of satellites in use [not those in view]
8 = Horizontal dilution of position
9 = Antenna altitude above/below mean sea level (geoid)
10 = Meters (Antenna height unit)
11 = Geoidal separation (Diff. between WGS-84 earth ellipsoid and
mean sea level. -=geoid is below WGS-84 ellipsoid)
12 = Meters (Units of geoidal separation)
13 = Age in seconds since last update from diff. reference station
14 = Diff. reference station ID#
15 = Checksum
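To make the field list concrete, here's a minimal parsing sketch. I'm using Python purely for illustration (any language with string splitting will do), and `parse_gpgga` is my own invented helper name, not part of any GPS library. Note that latitude arrives as ddmm.mmmm and longitude as dddmm.mmmm, so a little arithmetic is needed to get decimal degrees:

```python
def parse_gpgga(sentence):
    """Extract time, position, fix quality, and satellite count
    from a $GPGGA sentence."""
    fields = sentence.split(",")
    if not fields[0].endswith("GPGGA"):
        raise ValueError("not a GPGGA sentence")
    utc = fields[1]                      # field 1: UTC of position
    # Latitude is ddmm.mmmm, longitude is dddmm.mmmm; convert both
    # to signed decimal degrees (S and W become negative).
    lat = int(fields[2][:2]) + float(fields[2][2:]) / 60.0
    if fields[3] == "S":
        lat = -lat
    lon = int(fields[4][:3]) + float(fields[4][3:]) / 60.0
    if fields[5] == "W":
        lon = -lon
    quality = int(fields[6])             # field 6: 0=invalid, 1=GPS, 2=DGPS
    sats = int(fields[7])                # field 7: satellites in use
    return utc, lat, lon, quality, sats

# Feeding it the example sentence above:
utc, lat, lon, quality, sats = parse_gpgga(
    "$GPGGA,192122,3514.7971,N,07634.7585,W,1,04,01.3,00006.2,M,-035.9,M,,*79")
# lat comes out around 35.2466, lon around -76.5793 (west is negative)
```

The remaining fields (altitude, geoidal separation, and so on) parse the same way if you need them.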

Example sentence (GPRMC):
$GPRMC,192137,A,3514.7966,N,07634.7588,W,000.0,000.0,310707,,,A*66

Translation:
$GPRMC,hhmmss.ss,A,llll.ll,a,yyyyy.yy,a,x.x,x.x,ddmmyy,x.x,a,m*hh

Here is what each field means:

1 = UTC time of fix
2 = Data status (A=Valid position, V=navigation receiver warning)
3 = Latitude of fix
4 = N or S of latitude
5 = Longitude of fix
6 = E or W of longitude
7 = Speed over ground in knots
8 = Track made good in degrees True
9 = UTC date of fix
10 = Magnetic variation degrees (Easterly var. subtracts from true course)
11 = E or W of magnetic variation
12 = Mode indicator, (A=Autonomous, D=Differential, E=Estimated, N=Data not valid)
13 = Checksum
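Whichever sentence type you consume, it's worth validating that trailing checksum before trusting a fix, especially when reading a noisy serial line. The checksum is just the XOR of every character between the $ and the *, compared against the two hex digits at the end. A sketch, again in Python for illustration (`nmea_checksum_ok` is my own helper name):

```python
def nmea_checksum_ok(sentence):
    """Return True if the trailing *hh checksum matches the XOR
    of all characters between '$' and '*'."""
    body, star, received = sentence.strip().lstrip("$").partition("*")
    if not star:
        return False                  # no checksum field present
    calc = 0
    for ch in body:
        calc ^= ord(ch)               # running XOR over the payload
    return calc == int(received, 16)

ok = nmea_checksum_ok(
    "$GPRMC,192137,A,3514.7966,N,07634.7588,W,000.0,000.0,310707,,,A*66")
# ok is True for this sentence; a corrupted byte would flip the XOR
```

Sentences that fail the check should simply be dropped - the receiver will emit a fresh one within a second or so.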

By parsing these sentence types, either in real time by reading the data from a COM port or from an existing log file, you can use the coordinate information in any way you choose. I am currently working on a cool (OK, I think it's cool..) vehicle tracking application that I should be releasing here on this site soon. There are a lot of good examples floating around with source code that should give you a handle on using this data.

In closing, the Global Positioning System is a powerful resource that you can tap into to make your applications spatially aware. This opens plenty of doors for new and innovative apps - so get coding!

If you read this far,  you should follow me on Twitter!

Wednesday, August 29, 2007

XSS vulnerabilities, do they even care?

Is your site at risk? If you knew it was, would you do anything about it? I would hope so, but you'd be surprised. I've found many "very large" companies online with exploitable vulnerabilities in their main websites that could prove very embarrassing and costly.

This article is the first of several in which I will test the philosophy of "responsible disclosure" by contacting five companies and notifying them of security holes I have found in their sites - even offering assistance and resolutions - to see how long it takes them to fix the problems, if they do at all. I'll keep the names of the companies to myself and just describe them as "industry/estimated # of employees". Just a little white-hat test that should get interesting.

By now, most companies and organizations have a little more than a static HTML brochure online. Most sites are actually full-blown online applications, either purchased off the shelf, developed in house, or custom built by some third party. Dynamic sites, although a necessity, can open doors when improper techniques are used during development. Once your web application is online, ill-intentioned site patrons have all the time in the world to pick it apart for potential vulnerabilities. I speak from experience: web applications I have created have been the target of attacks in the past, and I'd be naive to think they won't be targeted again in the future.

Some background on the method of the day, XSS..

For this test I'm going to focus on one facet of web application security: XSS (or, confusingly, "CSS" in some older references - not Cascading Style Sheets). XSS stands for cross-site scripting, and it is a technique attackers use to inject their own script into the pages your site serves to other users. I have identified a diverse range of flawed websites below to see what, if anything, their reaction is to someone telling them they have a problem. Here are the companies and descriptions:

1. Retail/95,000 Employees - notified webmasters 8/30/2007
2. Government/1,000 Employees - notified webmasters 8/29/2007
3. Manufacturing/23,000 Employees - notified webmasters 8/30/2007
4. Transportation/19,000 Employees - notified webmasters 8/30/2007
5. Pharmaceutical/2,000 Employees - notified webmasters 8/30/2007
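For the curious, the core flaw is usually the same everywhere: a page echoes some request parameter back into its HTML without encoding it. The fix is equally unglamorous - encode on output. Here's a hypothetical sketch in Python, purely for illustration (these sites run all sorts of stacks, and `search_results_page` is an invented name, not anyone's real code):

```python
from html import escape

def search_results_page(query):
    """Render a results header. Escaping the echoed query keeps a
    payload like ?q=<script>alert(1)</script> from ever executing."""
    return "<h1>Results for: %s</h1>" % escape(query)

print(search_results_page("<script>alert(1)</script>"))
# The <script> tag is rendered as inert &lt;script&gt;... text
```

The same principle applies whatever the platform: treat every value that arrives from the browser as hostile until it's been encoded for the context you're putting it in.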

If you'd like me to take a quick run through your site, drop me an email with the URL and I'll be glad to send you whatever I find, if anything (time permitting :)

So, there you have it. I'll post updates as responses come in. Let the whirlwind begin.


Tuesday, July 03, 2007

Displaying fiscal year with VBScript

In an effort to drive engineers bananas, at some point a financial weenie decided that the normal calendar we've been using for thousands of years just wasn't up to par. Fiscal dates took root in the government and corporate America, and surely chaos would ensue..

Truthfully, fiscal dating makes sense for companies, because an organization can then make its own rules and target the start and end dates around important production times or downtime.

There are many good ways to generate fiscal date information; I've found that one really quick and dirty way to display just the year is by using VBScript's DateAdd and DatePart functions.

Our example will use the government fiscal year, which starts October 1, so we'll need to add one year to the current year if the month is October, November, or December.

<%
Dim fiscalYear
' The government fiscal year starts October 1, so October,
' November, and December fall in the next fiscal year.
If DatePart("m", Date) >= 10 Then
    fiscalYear = Year(DateAdd("yyyy", 1, Date))
Else
    fiscalYear = Year(Date)
End If
Response.Write fiscalYear
%>

If your fiscal year starts in, say, August (the 8th month), compare the current month against 8 instead and roll the year forward for every month from August through December. Happy scripting, or rather, fiscalling!
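The same logic generalizes to any start month as a one-liner in most languages. A sketch in Python for comparison (`fiscal_year` is my own helper name), using the convention above where the fiscal year is named for the calendar year in which it ends:

```python
from datetime import date

def fiscal_year(d, start_month=10):
    """Fiscal year containing date d, for a fiscal year that begins
    on the first day of start_month (10 = U.S. government, October 1)."""
    return d.year + 1 if d.month >= start_month else d.year

print(fiscal_year(date(2007, 12, 4)))       # December 2007 falls in FY2008
print(fiscal_year(date(2007, 7, 3)))        # July 2007 is still FY2007
print(fiscal_year(date(2007, 8, 15), 8))    # with an August start: FY2008
```

Swapping the default start_month is all it takes to cover a corporate calendar that kicks off in August, July, or anywhere else.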


Wednesday, June 06, 2007

Version 2.0 is here

I've been offline for a while in anticipation of our new son, and we're happy to report that he's finally here. Ryker Douglas Butcher was born on May 11th, 2007, weighing 8 lbs and measuring 20 inches.


Now that he's here, it's time to put him to work - be on the lookout for some upcoming blogs on a couple of projects I am cooking up in the lab utilizing RFID technology.

Sunday, December 03, 2006

Take Control of High-Level User IDs

It's been a while since I published anything, so I figured I would drop a quick tip/suggestion for account administration that works well. One of the most overlooked and dangerous habits of system administrators, development staff, etc. is the lack of a good plan for safe usage of high-access user accounts. Obviously it's bad practice to allow users to perform all of their day-to-day responsibilities (email, web browsing, etc.) while logged in as an administrator. These accounts should be reserved for the duties for which they were created. Otherwise, one of your power users will eventually succumb to a virus or browse some illegitimate website and wreak far more havoc on your infrastructure than they would have while logged in as a normal user.. Not a good plan.

So, how is your staff supposed to do their jobs without an administrator account? One simple solution that works well is to create two separate accounts for these users. The first account should have very minimal access, allowing just the basics; users should use this account as their everyday login. Make the second IDs similar to the first, but prefix them with a standard naming convention like "admin" to make them easier to manage. The second IDs should have all the permissions appropriate to each employee's tasks.

Now, forcing your users to log in and log out all day long will make them go bananas, and truthfully you will not get anyone to abide by this for very long without denying things like email on the administrator accounts. To make everyone's life easier, you can create a batch file for each user that executes the RUNAS command and fires up a command prompt running as the admin user.

Example of the batch file contents:
runas /user:domain\administratorid cmd

Drop a shortcut to the batch file on their desktops via AD to make it really easy for them. There are also a lot of good options in the RUNAS command that you can take advantage of, like using alternate profiles if need be.

Now that our user is logged in with a stripped-down account and running the custom command prompt as the administrator, you should be all set. The user can actually drag and drop any program into this command prompt (Computer Management, SQL manager, etc.) and voila - it fires up under the admin account. The user can keep this prompt running all day and use it over and over whenever they need to launch an admin-level application.

In closing - having a good, organized strategy for account access is paramount to creating a safe, secure, and happy infrastructure.

Tuesday, February 28, 2006

Standards in enterprise level intranets

This is more of a best practices blog than a technical one. After seeing several large enterprise level intranets grind themselves to near uselessness, I figured it was time to shed some light on why standards can be so important.

Defined standards are an often overlooked part of a company's internal computing strategy, yet in my opinion a very important one. Introducing standards into web systems will, in the long run, save user frustration, save time, save money, and ensure that an organization's investment in its information remains accessible.

Keeping a few simple things in mind when laying out your design will inevitably create a better end user experience.

Successful enterprise-level intranets contain usable, organized information. Feel free to babble on about the history of your company on your extranet, but keep your intranet environment concise and to the point. The key is that the intranet is a tool, and when users' brains are hijacked by a lack of organization and extraneous information, its effectiveness is lost. End users should be able to retrieve what they are looking for quickly and then move on.

Early intranet adopters usually have chaotic web structures. Many larger companies have a disorganized or nonexistent web structure because their strategy was (and is) to piece together all of their departments' homemade websites. Every department has a self-proclaimed web aficionado, and that person was typically tapped to "put together" and maintain that department's intranet site. This leads to a host of issues, including lack of central management, unbalanced traffic loads (both physical and "political"), and my personal pet peeve - departmental branding, which I will get into a bit later. All of these things lend themselves to an inefficient end-user experience. It may sound harsh, but taking design liberties away from your rogue developers will foster a user-centric, standard web experience. Corporate intranets should be centrally managed in regards to design and function; actual content should be delegated.

Drop the fancy logos. One thing I have seen over the years in most of these patched-together intranet systems is custom departmental logos popping up. Some facets of an organization will in fact need self-branding, but keep in mind most don't, and when they don't, the logos add to the confusion factor. Adopt a rendition of your corporate logo, and give sublevels a clear template to modify with a picture explaining what it is that they do. Your company has already spent millions of dollars developing an image for itself; it may hurt, but that image is better than the fancy new logo you made in Photoshop. Sub-branding also throws off new users. I speak from experience when I say that an intranet with a different header image and logo on each site makes a new employee wonder how many different companies are involved. Sure, departmental pride is a good thing, but who do you actually work for? Creating sub-logos projects that you are on a different team altogether, not working toward a common goal.

In closing, it is easy to see why we need standards. Designing the superstructure of your intranet smartly will make your investment yield a much higher return. So, develop your design standards in regards to look and feel and navigation, and keep them user-focused! Long story short - all development, including back-end systems, graphics, and applications, should be agreed upon at a corporate level by development staff and management. Delegate the content management tasks to the guys in each department whose web experience consists of making a website for their local church. Good luck, you're gonna need it.


Wednesday, February 01, 2006

Classic ASP on 2003 Server with disabled Connection Pooling.

I thought this experience was blog-worthy due to the highly undocumented nature of the problem. I hope it can shed some light on why your new Windows Server 2003 web migration isn't exactly holding its own under a moderate traffic load.

The day starts like this.. You take the initiative and migrate your rusty old NT web farm to a happy new Win2003 environment. You're running very expensive and highly trafficked custom ASP code with a remote SQL backend. Expecting huge performance gains from the cutting-edge systems, you brief everyone on the IT staff about the new direction the neglected web systems are headed.

You migrate all of the systems over in one fell swoop, do a bit of testing and after deciding that everything checks out - you swap the DNS entries and now the applications are live on the new hardware.

4am the phone rings, it's Tokyo and they want to know why their business critical web applications are not serving pages.

Saving you from my ranting soliloquy: this is an issue that I recently ran into. Everything ran smoothly until there was a mild, I stress mild, load (<300 concurrent connections) on the servers. But why were cutting-edge servers running the latest Microsoft web-serving technology outperformed by old NT machines? The answer is in the way 2003 Server talks to SQL backends when connection pooling is disabled.


Basically, the web server was running out of available ports to communicate with the backend SQL server. This became evident by running a basic netstat on the test web server during one of the load tests and monitoring the connection traffic. By default, Windows Server 2003 makes only about 4,000 ephemeral ports available for outbound connections such as those to a SQL server. The netstat showed me that all of these ports were quickly opened to the SQL server simultaneously. When the server runs over this allotment, it starts denying connections. Each opened port is then reserved for a default of 4 minutes, so even if the connection is idle, it is still in a TIME_WAIT status, essentially unusable by another request. Every request after this limit gets denied until ports are freed. This is a very "obscure" feature of 2003 Server and classic ASP.

There are essentially two ways to fix the problem. In this case, partly due to the nature of the applications, I chose to increase the port allotment on the web server to allow more simultaneous connections. Here is the quick-and-dirty fix:

In the afflicted webservers registry, add the key

Value Name: MaxUserPort
Data Type: REG_DWORD
Value: 65534

to

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
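If you'd rather not click through regedit by hand, the same change can be expressed as a .reg file you can import on each afflicted server (back up the registry first - and note that dword 0xFFFE is 65534 in hex):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"MaxUserPort"=dword:0000fffe
```

A reboot is required before TCP/IP picks up the new value.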

This essentially gives the client (in this case, a web server) about 60,000 more ports to play with. After applying the change, the load tests showed that this successfully resolved our issue.

You should, under just about all circumstances, be using connection pooling; however, there are some situations where this is not feasible. In my own opinion, I am not confident that connection pooling works very well, based on some of the load tests I have done on 2003 Server communicating with a SQL backend (in my case, a SQL cluster). This may or may not have been fixed by the time you read this, if it was an issue at all.

After spending hours testing and researching this issue, we happened upon this fix. Since then, MS has published a KB article explaining most of what is going on here:

http://support.microsoft.com/kb/328476

Please note that this may or may not be the best solution for your setup. This type of issue can also surface when there are other underlying problems, such as code that fails to close connections properly or that opens and closes connections rapidly, which can put high stress on your database servers.

I hope this is useful info; I know it would have been to me had I run across it.


Friday, January 20, 2006

The Second E-Revolution

I look at starting an e-business like a treasure hunt: the idea is the map, and the successful venture is the treasure. You have to follow your treasure map before someone else gets a chance to photocopy it and claim the gold for themselves. You have to protect your ideas at all costs, because in many cases they are your only true asset until you have a large online following.

Your product or service has to be new, original, and witty, and it helps if it is something the media would have a field day with. That's one thing many people don't understand about the internet and starting a business on it. The internet was built to share information, so anything new or original on the net tends to spread like wildfire, whether it makes financial sense or not. Internet businesses have been won and lost over a single article, but the overall longevity of a venture depends on the effort put forth to keep the site on top in terms of technology, ease of use, and services. The media is a powerful force when it comes to ecommerce, but without a concerted effort to stay in first place, you will find that the spotlight burns out very quickly.

Many skeptics say that the dot-com era has financially come and gone, but in my opinion it hasn't even started yet. The rise and fall of the dot-coms of the late '90s was due to over-hyped, overvalued stocks, not because the internet was not a sturdy platform for business. The dot-coms came and went because we had the ideas for using the internet to make money, but the general population didn't yet have the dependency on, or trust in, the net. This left the startups having to sell the public on why using the internet was better than driving to their local Best Buy. The net still has inherent qualities which make it perfect for business in the future:

1. Relative low overhead
2. It's always upgradeable
3. Growing dependency of public on the net

Devices (phones, PDAs, cars, etc.) are becoming more and more 'smart', or connected. This in turn is moving us into an increasingly wired (and wireless) world. As the world grows used to the idea of information at the click of a mouse or the touch of a button, it is also becoming more dependent on these services. Imagine trading your TV in for a radio. Crazy? Of course it is, but with wireless connections on the rise, trading a web-enabled smartphone in for an analog cell phone would be just as ridiculous.

We are just now physically catching up with all the promises we got about the internet and how it was going to change our lives. The general public is using powerful internet tools to book their flights, sell old junk, and to communicate more and more every day. Trust in the internet is finally growing and with that trust people are spending more money for products and services found solely online.

The idea is the same: have a great idea for a website and become a millionaire. But this time around the stakes are much higher, and the reward much greater, for those who can succeed in the second E-revolution.

Thursday, December 15, 2005

Greetings Earthlings

Hello World, and welcome, finally, to my blog. I have been meaning to put all of my rants into a blog for years, and it finally comes to fruition. No more procrastination, no more excuses. Please feel free to post your own comments and questions. I vow to update the site at least twice a month, perhaps more - but I'll attempt to spare you from useless drivel. So sit back, relax - and don't forget to bookmark ryanbutcher.com, a sometimes personal, sometimes professional blog...