" ... and then Google built Chrome, and Chrome used Webkit, and it was like
Safari, and wanted pages built for Safari, and so pretended to be Safari.
And thus Chrome used WebKit, and pretended to be Safari, and WebKit pretended
to be KHTML, and KHTML pretended to be Gecko, and all browsers pretended to
be Mozilla, (...), and the user agent string was a complete mess, and near
useless, and everyone pretended to be everyone else, and confusion abounded."
User agent strings are a complete mess, since there is no standard format for them. They come in various formats and include more or less information depending on the vendor's (or the user's) choice. They are not dependable either: a user agent string is just an arbitrary identification string, and any user agent can impersonate another.

So, why deal with such a useless mess? You may want to see which browsers your visitors use and collect some reasonably reliable data (even if some of it is fake) to generate some nice charts, or you may want to send an HttpOnly cookie only if the user agent seems to support it (and send a normal one otherwise). Browser sniffing for client-side coding, however, is considered a bad habit.
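To see just how messy the impersonation gets, here is a small self-contained sketch (the user agent string is a real-world Chrome one; the token list is my own illustrative choice) showing that a single modern browser claims to be half of browser history at once, which is why naive substring sniffing misidentifies it:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A real-world Chrome user agent string: note that it mentions
# Mozilla, AppleWebKit/KHTML (Safari's lineage), Safari *and* Chrome,
# all at once -- naive substring sniffing will misidentify it.
my $ua = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
       . 'AppleWebKit/537.36 (KHTML, like Gecko) '
       . 'Chrome/96.0.4664.110 Safari/537.36';

# Collect every identity token this one browser "claims".
my @claims = grep { index( $ua, $_ ) >= 0 }
             qw( Mozilla KHTML Gecko Safari Chrome );

print "This single browser claims to be: @claims\n";
```

Running it prints all five names, so a script that checks for the string "Safari" (or "Gecko", or "Mozilla") would happily match Chrome as well.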
Parse::HTTP::UserAgent is a Perl module that implements a rules-based parser. It tries to identify MSIE, Firefox, Opera, Safari and Chrome first, then Mozilla, Netscape and robots; everything else is handled by a generic parser. There is also a structure dumper, which is useful for debugging.
use Parse::HTTP::UserAgent;

my $ua = Parse::HTTP::UserAgent->new( $str );
die "Unable to parse!" if $ua->unknown;
# or just dump the parsed structure for debugging:
print $ua->dumper;
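Putting it together, here is a minimal sketch of a complete script, assuming the module is installed from CPAN and using the accessors described in its documentation (the sample user agent string is my own pick):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Parse::HTTP::UserAgent;

# A real-world Chrome user agent string, chosen here for illustration.
my $str = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
        . 'AppleWebKit/537.36 (KHTML, like Gecko) '
        . 'Chrome/96.0.4664.110 Safari/537.36';

my $ua = Parse::HTTP::UserAgent->new( $str );
die "Unable to parse!" if $ua->unknown;

# Despite all the impersonation in the raw string, the rules-based
# parser identifies the actual browser and its version.
printf "Name: %s, Version: %s\n", $ua->name, $ua->version;
```

Even though the string "claims" to be Mozilla, Safari and Chrome simultaneously, the parser resolves it to a single browser identity instead of whatever substring happens to match first.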