Tech – Sheep Guarding Llama
Scott Alan Miller :: A Life Online
https://sheepguardingllama.com

Disabling Quotas on a ReadyNAS Ultra NAS (Tue, 13 Sep 2011)
https://sheepguardingllama.com/2011/09/disabling-quotas-on-a-readynas-ultra-nas/

If you have a ReadyNAS Ultra you may want to disable quotas in order to eliminate some issues.  I have seen CIFS copies fail because the filesystem reported itself out of space even though quotas were set insanely high and the ReadyNAS interface was reporting plenty of space left on the device.  Often there is no need for the quotas at all and the easiest thing is to simply remove them altogether.

Removing quotas in ReadyNAS (RAIDiator) is actually pretty easy but you have to know where to go and be confident that you are not going to break anything.  The filesystem mount data is kept in the /etc/fstab file.  The /c mount is the one with which we are concerned, as it is the mount that holds our NAS shares.  In /etc/fstab it should look like this:

/dev/c/c  /c   ext4  defaults,acl,user_xattr,usrjquota=aquota.user,
grpjquota=aquota.group,jqfmt=vfsv1,noatime,nodiratime 0 2

All we need to do is edit that line to remove the three mount options that refer to quotas: usrjquota, grpjquota and jqfmt.  For safety, make a copy of the original line before making any edits so that you can easily put it back exactly as it was.  The modified line will look like this:

/dev/c/c  /c   ext4  defaults,acl,user_xattr,noatime,nodiratime 0 2

Simply reboot and, if all is well, you will no longer have to put up with quotas and previously failing file copies will now work smoothly.
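If you are comfortable scripting the edit, the same change can be made and checked from the command line.  This is only a sketch that assumes the stock mount line shown above; keep the backup file so the original can be restored exactly:

```shell
# Back up fstab first so the original line can be restored exactly
cp /etc/fstab /etc/fstab.quota-backup
# Strip the three journaled-quota options from the /c mount line
sed -i 's/,usrjquota=aquota\.user,grpjquota=aquota\.group,jqfmt=vfsv1//' /etc/fstab
# Eyeball the result before rebooting
grep '^/dev/c/c' /etc/fstab
```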

PHP Fatal error: Call to undefined function posix_getpwuid() (Sun, 13 Mar 2011)
https://sheepguardingllama.com/2011/03/php-fatal-error-call-to-undefined-function-posix_getpwuid/

I found that this error appears rather often online but almost no one has any idea why it would come up.  I ran into this error myself today while doing an install of FreePBX on Fedora 14.  My full error was:

Checking user..PHP Fatal error:  Call to undefined function posix_getpwuid() in /usr/src/freepbx-2.8.1/install_amp on line 728

This seems like it must be a permissions error, but more than likely you are simply missing the PHP POSIX extension.  You can resolve this on Fedora with:

yum -y install php-posix

Ta da!

Latency and Software Developers (Fri, 09 Apr 2010)
https://sheepguardingllama.com/2010/04/latency-and-software-developers/

I was recently having a conversation in which someone asked me to compare real time and low latency Linux kernels.  I used the phrase “real time is the enemy of low latency.”  This caught some attention and I was asked to explain what I meant.  In order to accommodate the needs of real time processing we have to add a certain amount of overhead so that we can accurately and reliably predict the amount of time that a procedure will take to complete, and that overhead contributes to latency.  To move as quickly as possible we would have to remove this overhead, decreasing latency but giving up the predictability that the overhead had bought us.

Today I was reading a paper on Agile development and traditional software development methodologies and it occurred to me that we were essentially talking about the same concept.  Programmers are a lot like organic, squishy CPUs chugging along churning out data.  The concept behind traditional development methodologies (or schedulers, if you will) was to make sure that developers turned out a project, or a piece of a project, in a predictable manner – so predictable that projects could be planned years in advance with meeting rooms scheduled and the caterers hired for the release party.  This predictability is provided by a large amount of management overhead that hinders rapid development.  All of those status meetings do not come for free; they incur large amounts of lost productivity in exchange for keeping management up to date on the release schedule.

Agile development takes the opposite approach.  In Agile the idea is not predictability, at least not to the same extreme level.  Agile really focuses on producing software with minimal latency – getting it done and out the door as quickly as possible, even if that ends up surprising the marketing and sales departments before any box art has been approved.  It does this by lowering the management overhead and reducing artifacts that interfere with the actual job of producing a product, allowing the team to move more quickly.

Testing Socket Connections Programmatically (Tue, 12 Jan 2010)
https://sheepguardingllama.com/2010/01/testing-socket-connections-programmatically/

Often we have to use “telnet remotehost.somewhere.com 80” to test if a remote socket connection can be established.  This is fine for one time tests but can be a problem when it comes time to test a number of connections – especially if we want to test them programmatically from a script.  Perl to the rescue:

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

# Attempt a TCP connection to the host and port given on the command line
my $sock = IO::Socket::INET->new(
                   PeerAddr => $ARGV[0],
                   PeerPort => $ARGV[1],
                   Proto    => 'tcp',
);

if ($sock) {
    print "Success!\n";
    close($sock);
} else {
    print "Failure!\n";
}

Just copy this code into a file called “sockettest.pl” and run “chmod 755 sockettest.pl” so that it is executable and you are ready to go.  (This presumes that you are using UNIX but, as the script is Perl, it should work anywhere.)
To use the code to test, for example, a website on port 80 or an SSH connection on port 22 just try these:

./sockettest.pl www.yahoo.com 80
./sockettest.pl myserver 22

You aren’t limited to well known services; you can test any socket that you want.  Very handy.  Now, if you have a bunch of servers, you can test them all with a simple, one line BASH command like so (broken onto multiple lines here for ease of reading):

for i in myserver1 myserver2 yourserver1 yourserver2 someoneelsesserver1
do
  echo $i $(./sockettest.pl "$i" 80)
done
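As an aside, if Perl is not handy, recent versions of bash can make the same kind of check entirely on their own using the shell’s built-in /dev/tcp pseudo-device.  This is a bash-only sketch (it will not work in plain sh), with the host and port as placeholders:

```shell
# Test a TCP connection using bash's /dev/tcp redirection (a bash builtin, not a real file)
host=www.yahoo.com
port=80
if timeout 5 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "Success!"
else
    echo "Failure!"
fi
```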
Windows Server 2008 R2 on Xen (Wed, 11 Nov 2009)
https://sheepguardingllama.com/2009/11/windows-server-2008-r2-on-xen/

Having poked around in the forums I have found that, at least at the time of this writing, no one seems to believe that Windows Server 2008 R2 can be deployed on Xen or, if it can, that it can be done using the Xen package available in Red Hat Enterprise Linux 5.  I am proud to announce that Windows Server 2008 R2 does indeed run just fine on RHEL 5.4.  I believe that the update to the latest 5.4 package is likely required here and, as Server 2008 R2 is 64-bit only, you have no choice but to be running on x64 hardware.  But work it does, and 2K8 R2 on Xen on Red Hat Linux is very much a reality.

Connecting VNC with virt-install in RHEL 5.4 (Wed, 11 Nov 2009)
https://sheepguardingllama.com/2009/11/connecting-vnc-with-virt-install-in-rhel-5-4/

If you have been using virt-install for a while and have updated to RHEL 5.4 (Red Hat Enterprise Linux) or CentOS 5.4 then you will likely have noticed that the virt-install utility has changed its behaviour.  No longer can you simply run virt-install without any other parameters.  To run the utility the way that it was traditionally run you need to use the new --prompt flag.  Easy enough.  But now graphical console information is not prompted for and, if you are installing Windows, you will need that graphical console.  What to do?

What is needed is the --vnc flag to turn on the VNC console.  We will also use the --vncport flag to set the port so that we can easily access our system.  Here is an example command to start our installation using the default VNC port, 5900:

virt-install --prompt --vnc --vncport=5900

Of course, if you have another process (likely another virtual machine on either Xen or KVM) using port 5900 then you will need to choose an alternative port.

Change Windows Password from the Command Line (Mon, 09 Nov 2009)
https://sheepguardingllama.com/2009/11/change-windows-password-from-the-command-line/

Whether you are on Windows XP, Windows Server 2003 or some other recent version of the Windows NT family, you have probably gotten into a situation where you wanted to change your password but did not have a means of issuing a remote Ctrl-Alt-Del to bring up the password change dialogue or, for one reason or another, you needed to make a password change from the command line.  In UNIX this is the normal mode of password changes, but in Windows it is a poorly known skill and an important one.  It is especially important on Windows Server 2003 machines as the normal password management dialogues may not exist.

To change your password simply:

net user username password

So for me that might be “net user scott mysecretpassword” and voila, password changed.  Very, very handy.

Forcing Red Hat Linux to IGMP Version 2 (Thu, 05 Nov 2009)
https://sheepguardingllama.com/2009/11/forcing-red-hat-linux-to-igmp-version-2/

Not a very common task, and one for which information is relatively hard to locate with a quick search online.  In some cases it is necessary to force Linux, in this case Red Hat Enterprise Linux, away from its default of IGMP Version 3 back to Version 2.  This is often done to support older switches for multicasting.

In order to make this change we will edit the /etc/sysctl.conf file, adding the following lines.  (After saving, reboot or run “sysctl -p” for the new settings to take effect.)

net.ipv4.conf.eth0.force_igmp_version = 2
net.ipv4.conf.lo.force_igmp_version = 2
net.ipv4.conf.default.force_igmp_version = 2
net.ipv4.conf.all.force_igmp_version = 2

You can find out more about your current running IGMP configuration by using this command:

cat /proc/net/igmp

The resulting output will show the IGMP version in use for your machine’s interfaces.  In Red Hat Enterprise Linux 5 this will default to version 3.  If you want to only see the interface and IGMP version number you can simplify with this command:

cat /proc/net/igmp | grep V | awk '{print $2 " " $5}'

SQLite3-Ruby Gem Version Issues on Red Hat Linux and CentOS (Sat, 26 Sep 2009)
https://sheepguardingllama.com/2009/09/sqlite3-ruby-gem-version-issues-on-red-hat-linux-and-centos/

So you are attempting to follow earlier instructions on installing SQLite3 to use with Ruby on Red Hat Enterprise Linux (aka RHEL) or CentOS 5 and you get the following error:

gem install sqlite3-ruby
ERROR:  Error installing sqlite3-ruby:
sqlite3-ruby requires Ruby version > 1.8.5


Don’t worry, you are not alone.  What has happened is that the sqlite3-ruby gem has been updated past the point supported by the version of Ruby included with Red Hat and CentOS (this should apply to Oracle Linux as well but has not been tested by me.)  What we need to do is simply specify the last version of the gem that still supports our Ruby.  So to install:

gem install sqlite3-ruby --version '= 1.2.4'

And now it should install without any problem.  This is a followup post to the original how-to on Installing ruby-sqlite3 on Red Hat or CentOS Linux which I posted last year.  The addition was necessary because of the change to the available gems and a complete lack of online resources mentioning this error.

Mounting and Unmounting CD and DVD Images in Xen (Mon, 21 Sep 2009)
https://sheepguardingllama.com/2009/09/mounting-and-unmounting-cd-and-dvd-images-in-xen/

A simple, but often hard to figure out, task when virtualizing with Xen is learning how to dynamically mount and unmount CD and DVD image files, also known as ISOs.  This is actually quite simple but the commands are unintuitive.  The first step is to download the ISO file that you wish to mount and to place it on the Dom0 filesystem.

The commands that we use are the “xm block-attach” and “xm block-detach” commands.  First we will attach a new ISO file to the CD or DVD drive of a virtualized Windows guest, DomU, on Xen:

xm block-attach 2 file:/tmp/myimage.iso hdc r

In this case we are attaching myimage.iso in Dom0’s /tmp directory to the hdc device of our DomU guest number two.  If you don’t know which guest number you want, use the “xm list” command to get a list of all of the domains running on your server.  The trailing “r” is for “read only”.  Use “w” if you want your block device to be writable.

Now to detach that same device we will use the following command:

xm block-detach 2 hdc

In this case we are detaching the hdc device’s ISO image from domain number two.

Building a Home Media Server (Tue, 11 Aug 2009)
https://sheepguardingllama.com/2009/08/building-a-home-media-server/

In this day and age the use of physical media in your home theatre is very passé.  It is so much more convenient to have all of your movies, home videos, television shows and more available on your network at the touch of a button – “on demand”, to borrow the term from the DVR crowd.  You can modify your existing multimedia collection to put it onto your home network, and you can do so at minimal cost.  For many people all of the pieces already exist and all that is left to do is to put them together.

In this article I am going to run through the components necessary to put together a very nice workflow for creating a working home media network.  When you are done you will be able to sit in your living room, or any other room of your house, and browse through your movie, photo and music collections with nothing more than your remote control, and watch them instantly.  No getting up to search for a disc, no trying to remember what movies you own, no trying to figure out who sang what song or what album it is from – your videos and music will start right away.

So what do we need to get started on this?

There are three essential steps.  In the first step we take any existing media like DVDs and we have to “rip” them so that we have a local copy on our computer.  The second step is transcoding – we will be taking the raw DVD file and turning it into a high efficiency h.264 media file which will look nearly as good as our original DVD but at a fraction of the original size so that we can store many more movies.  The final step is to load these movies into a UPnP/DLNA server and make them available on the network.

To do all this we will need a few free pieces of software and one really important piece of hardware – our media viewer.  The media viewer is the piece of the equation that attaches to our television to make all of this magic available there.  For me, the best hardware for this is something many people already have – a Sony PlayStation 3.  The PS3 works amazingly well for this and really cannot be beat.  If you already have a PS3, or are thinking about getting one anyway, then you are in great shape.  If you have an Xbox 360 then we should be able to make that work for us as well.  You can also use a computer hooked to your television, an Apple TV or any of many, many different devices.  My experience is with the PS3 and it is for that device that I will write this guide, but do not feel that you are limited to just this one option.  Also, you can add PS3 units (or a mix of different devices) all over your house so that every television can access this media in the same way.  In addition to all of your televisions, you will also be able to get to the media from the computers in your home.

Step One and Step Two in our workflow are about converting our existing, legacy media (we will address DVD and CD media here) into modern, efficient, network friendly h.264 and MP3 files.  If you are working with home movies that are already in h.264, or music from a service like Amazon MP3 downloads, then neither of these steps is necessary at all!  This is simply a conversion process to take our very old media and prepare it for our new, modern system.  For DVDs, the second step is not strictly necessary but results in far better use of our storage and is almost always well worth the effort.

One of the great things about this process is that even though some quality is lost in transcoding (a necessary side effect of all lossy transcoding, though our process does as much as possible to minimize it), the process also works to “fix” things that were handled badly by the original DVD encoding, like interlacing and framerate changes.  Undoing those can make the transcoded version sometimes appear actually better than the DVD original!

Step One: Preparing the Media (DVD)

In this step we will prep our video media for conversion.  This means removing the video from the DVDs, assuming that is where you are starting, and placing it, raw, onto your computer’s hard drive.  If you want to convert CDs for use with your media system you do not need to perform this step at all.  CDs can be converted straight from the physical disc to MP3, making them extremely easy to do.

The software that we will use to prepare our DVDs for conversion is called DVDFab, whose basic version is a free download – and the basic version is what we want, as the “advanced” features are all about lowering the quality of the DVDs for no reason and doing weird things that we definitely would never want to do.  So download the latest version of the software, find a DVD that you want to try first and let’s give it a go!

We will start by converting a simple movie – nothing fancy like a television show.  Pick out a movie without subtitles that is just a single film on a single DVD.  This describes most movies, but if I don’t say it then we will run into some edge case which will take more work, and we don’t want that to happen on our first time out.

This part of the process can be a bit finicky so we need to adjust our approach depending on what we are doing and how well it works.  We have two basic methods of ripping our DVDs.  The first is to rip the entire disc.  This takes up more temporary working space and requires more work when transcoding, so we only want to do it when necessary – it is the correct method when converting television shows with several episodes on a single disc, however, as we will see later.  For movies we almost always want to allow DVDFab to detect the film on the DVD for us.  Then we can select, in its interface, the audio and optional subtitle channels that we would also like to include.  DVDFab’s interface for determining which audio and subtitle channels we are interested in keeping is far superior to anything that we will have later in the process, so it is best to do this now if possible.

One of the big advantages of this ripping and transcoding process is that it gives us an opportunity to remove all of the extra “fluff” from the DVDs such as extra audio channels (do you really want to store the Spanish dub of that movie that was originally in English?), subtitle channels, ads, warnings, menus, etc.  Let’s face it, the menus of DVDs are a major detraction from the movies.  No one wants to wait for the menu to load up, make noise and make it confusing how to start the movie.  We just want our movies to play right away when we are ready to watch them.  This process removes all of this detriment from the DVDs and takes us back to the pure, simple world of “just our movie”.  Movie night will suddenly be a lot less frustrating.

If you want to include extra audio channels at this time, you can. For example, if you want that audio commentary track feel free to include it in the ripping process.  Personally I remove everything extra because I know that I am never, ever going to watch that movie or show with that extra audio track.  Ever.  But to each their own.  For me it simply is not worth the time, effort and storage space to keep all of that stuff.  With cartoons I will often include French and Spanish language tracks so that my daughter can watch the same movie in different languages but only for cartoons where there is no weird lip syncing and watching it in a foreign language is probably as good as seeing it in English anyway.

Once you have selected the audio that you want to keep you can click “Start” and DVDFab will do the rest.  This process generally takes around twenty minutes or so, depending on your DVD drive speed and other factors on your computer.  If you are going to be doing this a lot you may want to invest in a nice DVD drive, like a high speed external HP unit that connects via USB 2.0, but that is completely unnecessary.

Once the DVDFab process is complete you should have a directory called “MainMovie” and in that directory will be a folder named for the DVD that you just ripped (in about one out of ten cases DVDFab is unable to determine the actual name of the movie and so calls the folder DVD or something like that.)  In that folder you will have a folder called AUDIO_TS and one called VIDEO_TS – you don’t actually need to care about these but they are there in case you are interested.

That is it.  Step one is complete.  You no longer need the physical DVD at this point.  My advice is, after the entire process is done, that you pack up the DVD and put it someplace very safe, since you need to retain the original DVD for legal reasons – if you sell or give away your original DVD, the h.264 copy that we are creating changes from being an archival/backup copy of your original to being stolen, so keep that in mind.  This is a process for improving the usefulness of your existing DVD library, not a means of saving money by selling DVDs that you no longer need.

Step Two (DVD)

Now that we have our rip from DVDFab we are ready to try our hand at transcoding.  This is the complicated step with a lot of options.  The bottom line with this process is that you are going to have to make some decisions yourself and you are just going to have to try some conversions, see how they look for you and tweak settings from there.  No real way to get around that, I am afraid.  I will do my best to give you some starting points, though.

The software that we will use for transcoding is called Handbrake and, like everything else that we are using, it is free.  Download and install it and fire it up.  We will now convert our first rip from MPEG2 VOB into h.264.  (h.264 is a compression algorithm used behind the scenes in technologies like MPEG-4, BluRay and QuickTime HD.  It gives us better compression ratios than old MPEG2, allowing us to store more movies in less space at the same quality.  As with any conversion there is a loss of quality from the original but, depending on what we want, we can tweak that in Handbrake to minimize the quality loss while maximizing the size gains.)

Once Handbrake is open we need to open the “folder” in which we just ripped our DVD.  We do this by clicking “Source” in the top left of the Handbrake window and selecting “DVD/Video_TS Folder”.  This will open a browse dialogue and we just need to navigate into that “MainMovie” folder and select the folder named for the movie that we just ripped located there.  Handbrake will then look at this file – this can take up to a minute – to determine what options we have.

If the Handbrake detection goes well then we will see the Title and Chapters fields filled out automatically.  Normally the Title field will be correct and we will not need to modify it.  The Chapters fields can always be ignored.  For a normal movie Title should populate with the longest option from the dropdown.  So you normally do not need to even check this.  If you did the “select your movie and audio” option in DVDFab then Title will only have one option making this even easier.

Fill in the file field.  This is very important: you can accidentally leave it blank, or leave in the name of a previously converted DVD and overwrite that file, so always be sure to modify this each and every time you use Handbrake or you will be very sorry and have some work to do over again.  When you modify this you want to select “MP4” as your output type – at least for this, your first movie conversion.  Be aware that no matter what you pick in the naming dialogue, Handbrake will change your entry and make it “M4V”.  This is a bug.  After you select the location and name to save the file you must go to the “Format” drop down menu and choose “MP4” again.  Don’t worry if you miss this and end up with an .m4v file.  Unlike most things, this can be changed with a simple file rename after the entire process is complete.  The .mp4 or .m4v extension is just a flag to the program playing the video so that it knows how to interpret the data.  The .mp4 extension works more reliably but generally gives you fewer features.  So for now we want to use it, but after testing you may want to use .m4v or, like me, a mix of the two later on.
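Since the extension is only a label on the same MP4 container, renaming after the fact is completely safe.  Here is a tiny sketch for fixing up a whole folder of stray .m4v files in one go (it assumes you are in the folder that holds the videos):

```shell
# Rename every .m4v in the current directory to .mp4 (same container, different label)
for f in *.m4v; do
    mv "$f" "${f%.m4v}.mp4"
done
```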

Now for the fun bit.  We need to select the correct compression settings for our particular video.  In this section it is a lot less about getting things right or wrong and more about getting the settings to be where you want them.  Likely you will play with these for a while, watch some videos and decide to move in one direction or another.  I am just going to attempt to give you some decent starting points so that you get good results from which to begin your tweaking.  I tend to lean towards pretty good video and audio quality without going over the top.  Almost all of my movie viewing, especially that coming from DVD (rather than from BluRay), is very casual and normally done with the family sitting around eating dinner.  So having the most perfect surround sound is not a top priority for me.  Generally I drop that out and go for nice, regular audio instead (my main viewing area is stereo only, not surround sound, anyway.)

We have three tabs about which we really need to worry for our settings: “Picture Settings”, “Video” and “Audio & Subtitles”.  The other tabs can be ignored, at least until you are really, really comfortable with making Handbrake tweaks.

Picture Settings.  Leave the “crop” section alone; Handbrake is good at auto-detecting this and I have never needed to modify it myself.  Under “Anamorphic” you will want to choose “Loose” if you have a widescreen/letterbox movie or choose “None” if the movie is Full Frame (when in doubt, use Loose.)

If the movie was a cinema movie (as opposed to a “made for TV” movie) then I turn on “detelecine” but check my note below about framerates before you do this.  I always turn on “decomb” and set “deinterlace” to “slower”.  I set “denoise” to medium and, for live action film, set “deblock” to 5.  If it is a cartoon rather than live action I turn off “deblock”.

Video.  The “Video Codec” should always be “h.264”.  This will be the default so you can just leave it as it is.

Framerate is tricky.  Movies made for the cinema are normally 23.976 frames per second, so you will get better quality if you detelecine and take the framerate back to its original.  Many television shows made before the mid-1980s were shot directly on film and are 23.976 fps as well.  Today many good televisions are able to display 23.976 fps (they call it 24 fps or 1080p/24) while older televisions cannot.  Mine does, so I convert all of my film DVDs back to the original framerate to increase the quality and to lower the file sizes.  If you are dealing with television content then do not turn on detelecine; leave the FPS as “same as source”, which will keep it at the 29.97 fps of normal NTSC DVD.  If you do not have a television capable of showing 24 fps then you might want to consider keeping everything “same as source” and avoiding detelecining, unless you are investing in future viewing on the assumption that anything new you buy will be able to show 24 fps.
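The relationship between the two rates is exact: 3:2 pulldown (telecine) turns every four film frames into five video frames, and detelecine undoes it.  A quick check of the arithmetic:

```shell
# 3:2 pulldown: 23.976 film fps * 5/4 = 29.97 NTSC video fps
awk 'BEGIN { printf "%.3f\n", 23.976 * 5 / 4 }'
```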

Under “Advanced Encoding Settings” I always use “2-pass Encoding” and I always turn off “Turbo First Pass”.  My goal with this process is to take more time but to get the best conversion process possible.  You should turn on “Grayscale Encoding” anytime that you are encoding a black and white film.  This keeps colour from accidentally popping into the picture when there should not be any.

Quality.  Now for the most subjective section.  I use non-constant quality because it gets you better storage to quality ratios at the expense of not being able to predict streaming rates.  This is a no-brainer for a home network.  Constant quality is sometimes used in Internet streaming of video to make the experience more consistent even though it greatly increases your storage needs while reducing the overall quality of the video.  I avoid target size as well.  You can play with this if you want later.  I do all of my adjusting via “Avg Bitrate”.

For an average movie I tend to use a bitrate of 2400.  This is a good place to start.  If this looks plenty good when you watch the final movie, try 2200 the next time.  Still good?  Try 2000.  You want to be just slightly above the bitrate at which you start to dislike the image.  The lower the bitrate, the smaller the final files.  If I am dealing with a high-detail cartoon, like a Disney cartoon movie or Studio Ghibli, I will lower the bitrate to around 1000, and if it is a low-detail cartoon like Family Guy or The Simpsons I will go to 850.  At 850 the deinterlacing process actually allowed Family Guy to look better after conversion even with the file size cut to around 10% of the original files!  This really shows what a bad job the DVD format does for modern videos.

Audio & Subtitles. Not too much to worry about in this section.  If you have a subtitle track selected you can choose it here.  Be aware that subtitles in Handbrake are permanently burned into the final product.  They are not a subtitle channel that you can turn on and off.  So pick wisely.  Generally I burn in English subtitles for foreign language films but leave them out for anime, including both English and Japanese audio so that I can watch the original performance while still enjoying the artwork unencumbered by subtitles.

Under Audio Tracks you will choose the audio that you want recorded and what settings you want.  Generally you will have only a single track to include although you can include many if you want.  You can choose to which track you will listen when you are viewing your movies.  This could include the regular English track, a commentary track, the uncompressed AC-3 English track and a French language track.  That’s fine.

For me, I almost always just use the standard English track, encode as AAC, mixdown to Dolby Pro Logic II, Sample Rate set to Auto and Bitrate set to 160.  DRC is left at one; this is an advanced feature that you can play with later.  My system does not provide any digital surround sound, and for me, for most movies, it just isn't important.  But for a lot of people it is.  To support those setups I always go ahead and still do my main track (track 1) just as described, but then I also include a second track from the same source with the Audio Codec set to AC3 rather than to AAC.  This causes Handbrake to simply include the original soundtrack exactly as-is so the digital surround sound folks can get the original audio exactly the same as on the DVD.  I like to include both because there are a lot of playback systems that do not handle AC3 very well but will play the AAC without any problem.  The AAC track is a bit smaller than the AC3 track, so once you are including AC3 you might as well include the AAC as well.  I would definitely record things like commentaries as AAC to save space as their sound quality is never very good nor important.

All Done. That is it.  That is all that you need to know to get a good starting point in Handbrake.  Now that you have all of your settings filled in for your first movie just hit the “Start” button and Handbrake will begin encoding your first movie.  This will take a few hours.  Possibly many hours.  This is highly dependent upon the power of your computer and if it is busy doing anything else at the same time.  You might want to just let it run overnight.  You can check out your creation in the morning.
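
If you end up converting a large library, the same settings can be driven in batch from HandBrake's command-line interface.  This is only a sketch; the exact flag names vary between HandBrake versions and the paths below are made up, so check HandBrakeCLI --help on your version first:

```shell
# Hypothetical batch encode mirroring the GUI settings described above.
#   -e x264         h.264 video codec
#   -b 2400         average bitrate of 2400 kbps
#   -2              2-pass encoding (turbo first pass stays off unless -T is given)
#   -E faac -B 160  AAC audio at 160 kbps
#   -6 dpl2         Dolby Pro Logic II mixdown
HandBrakeCLI -i /dev/dvd -t 1 \
             -o /media/movies/MyMovie.mp4 \
             -e x264 -b 2400 -2 \
             -E faac -B 160 -6 dpl2
```

A command like this is easy to wrap in a loop or a script once you have a queue of discs to work through overnight.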

Step Two (CD)

In addition to compressing DVDs and other video formats for use on your new home network media system you can also compress your CD collection into the handy MP3 format so that you can play it on any device in your home.  You can also buy MP3s directly from Amazon that are not encumbered by DRM, unlike downloads from the Apple iTunes store, so you can use them however you like.  Amazon downloads are both higher quality and less expensive than Apple's encumbered downloads.  There is no upside to buying from Apple.  With Amazon you actually own the files and are not just paying to "borrow" them from Apple.

If you have Amazon MP3 downloads (or any other MP3 downloads) you can use them directly and do not need to do anything with Step One or Step Two.  Go directly to Step Three.  This step is for converting CDs to MP3 format.

The best software that I have found for this process is CDEx.  Using CDEx we can rapidly put in a CD and have it turn out a complete set of MP3s in no time with almost no interaction from us.  It is fast and easy.  Far easier than converting DVDs.

At this time I am going to leave out the complete directions for converting using CDEx.  Suffice it to say: set your encoder to MP3 using LAME, set up your email address in the FreeDB settings so that you can download CD information automatically, pop in your first CD and let CDEx work its magic.  I suggest using the very highest quality variable bitrate settings that you can for MP3 encoding.  We have so much cheap storage space these days and you want your music to sound as good as possible.
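
For reference, the highest quality VBR setting corresponds to LAME's -V 0 level.  If you ever want to encode outside of CDEx, the command-line equivalent looks roughly like this (the filenames are hypothetical):

```shell
# Encode a ripped WAV track to highest-quality variable bitrate MP3.
# -V 0 is LAME's best VBR quality level (larger files, better sound).
lame -V 0 "01 - Track Name.wav" "01 - Track Name.mp3"
```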

Step Three: Serving Up the Media

Now that we have h.264 video files (well, one file at least) and some MP3 music files we can set up our media server and test out our system.  Keep in mind that the "coolness" and usefulness of this system grow rapidly as you introduce more and more media files to it.  Once you have converted your DVD and CD libraries and no longer need to go to the discs themselves to reach your movies and music from around your home, the system suddenly becomes extremely useful.

For my own system I have decided to go with a dedicated media server, an HP Proliant DL185 G5, running OpenFiler and MediaTomb to serve out my video and audio files.  Given the scale at which my system works this is a really good investment for me.  Down the road you may find a similar system to be worthwhile for you as well.  The DL185 can, at the time of this article, scale to 28TB of storage in a single, smallish server.  If you have a lot of movies this can be a great investment, especially when you consider that you can run RAID (redundant drives so that you don't lose your data if one drive fails) which helps to protect your movie collection from drive failures.

For our purposes, we will assume that you are going to start using your system via a Windows desktop.  Likely you will want to invest in a minimum of a 1.5TB USB connected hard drive that you can use to store your movies.  Ideally you will also have some backup mechanism – possibly just a second drive on the same machine to which you can copy the files once per week or so.  But you don’t need that to get started.

The software that is easiest to use on Windows and that works very well is ps3mediaserver.  This software does not work on the large OpenFiler system that I am running, mostly because it attempts to do many things that I do not need.  For beginning users it is ideal.  Like all of our other software packages, it is free.

Install ps3mediaserver on your Windows desktop.  The default configuration should actually work immediately as soon as you turn it on.  It really is that simple.  There is one additional step that we need to perform, however, because of the way that we have done our transcoding.  In ps3mediaserver under the “Transcoding Settings” tab, under “Misc options” we need to enter “mp4, m4v, mp3” on the line that says “Skip transcode for following extensions (coma separated):”.  The reason that we do this is because the ps3mediaserver people assume that the files being served up for viewing on the network are a random assortment of video files that are poorly compressed and not prepared for viewing on the Sony PS3.  For us this is not the case.  We have painstakingly prepared our videos so that we could store them small and get maximum quality out of them.  If you allow ps3mediaserver to transcode itself it will take the long Handbrake process that we have done so carefully and do it “again”  and will do it on the fly while you are watching the video.

By transcoding on the fly we introduce several problems.  The first is that our videos are already in the exact format that we want for maximum quality.  Why would we want to make them worse?  The second is that our desktop will be working extremely hard while we watch movies rather than doing almost nothing.  If we do not transcode on the fly then our single Windows desktop should be able to serve out many movies at the same time.  Not so if it is busy transcoding.  The third is that the good transcode done by Handbrake takes an idle computer easily six to ten hours.  That same work is then done in the ninety minutes of a normal movie's playtime when transcoding on the fly.  That means that only one quarter or less of the "effort" is put into making that movie look good.

The bottom line is that on-the-fly transcoding is sometimes necessary when movies are not prepped ahead of time, but unless it is absolutely required for the movies to play at all, it should be completely avoided.  All it will do is make your entire experience far less than optimal.  It will also introduce new playback problems to your movies, such as issues with fast forwarding.

So, once we have disabled transcoding we are good to go.  Click “Restart HTTP Server” and everything should be working.

Putting It All Together

Now that everything is set up you can go to your Sony PS3 which is, I hope, connected to your home network.  Under the video menu you should see the new ps3mediaserver running on your network.  You can navigate to it and through its menu system you should be able to find the movie that you just transcoded.  Click on it and, if all went well, it will start playing instantly.  Welcome to the world of the networked media server.  As you add movies, television, podcasts, photos and music to your system it will continue to become more and more useful.

In theory, using ps3mediaserver you will also be able to stream, automatically, to the XBOX 360 in addition to the PS3.  I have not yet been able to test this but will report back when I do.

There are many other devices other than the XBOX 360 and the PS3 that can play these videos over the network.  On the computer I use the VLC Media Player to watch the videos directly.

Adding Internet Content

Now that you have your own video and music content on your network you can now consider doing away with the streaming television content that you get from cable or satellite.  Check out the PlayOn software available from The Media Mall.  For just $40 you can get this software that also runs on your Windows desktop – it will run alongside ps3mediaserver – and it will make online streaming video sites like Hulu, CBS, ESPN and YouTube available to both your PS3 and your XBOX 360 so that you can get all of that great video on demand content through the same system as your home video collection!  If you have a NetFlix account this entire system becomes even more awesome, as NetFlix On Demand ("Play Instantly") offerings, which include a ton of content like shows from the Disney Channel, will play through PlayOn as well.

Having both your own media server and PlayOn together is an amazing combination.  No more need for DVD players, CD players or stacks of discs sitting around to become lost or scratched, and no more searching for the CD that you want just to listen to one song.  It's an "on demand" system that is very addictive.

]]>
https://sheepguardingllama.com/2009/08/building-a-home-media-server/feed/ 0
Installing MediaTomb on OpenFiler https://sheepguardingllama.com/2009/08/installing-mediatomb-on-openfiler/ https://sheepguardingllama.com/2009/08/installing-mediatomb-on-openfiler/#comments Wed, 05 Aug 2009 18:44:12 +0000 http://www.sheepguardingllama.com/?p=4374 Continue reading "Installing MediaTomb on OpenFiler"

]]>
If you have researched both FreeNAS and OpenFiler then you will be aware that a key difference between the two is the inclusion of a UPNP media server in FreeNAS.  This is lacking in OpenFiler and is a major piece of functionality that I wish to have in my own installation.  I specifically would like a UPNP / DLNA server that will work easily with a number of devices such as the Sony PlayStation 3 and the XBOX 360.  After much work I decided that MediaTomb was the best product to add this functionality to OpenFiler.

I originally started this article with the intent of installing ps3mediaserver onto my OpenFiler installation but, due to a ridiculous lack of support for dependencies, ps3mediaserver is not a reasonable possibility for this platform.  As it turns out, though, MediaTomb is actually a better, lower resource usage, simpler option that does exactly what I want and does not require careful tuning to force to behave logically.

Installing MediaTomb onto a working OpenFiler system is actually extremely easy as MediaTomb is packaged with all dependencies included in the optional static binary package for 32-bit Linux.  Simply download the i386 static binaries from the MediaTomb Static Binaries Download page.

You can then just unpack the downloaded tarball to the /opt directory using "tar -xzvf" and you have a working system already!  It is actually that simple.  One of the great things about MediaTomb is that it does not attempt to transcode your media files, lowering the quality and eating CPU cycles.  It is simply a UPNP / DLNA server.  If you are like me all of your media has been carefully transcoded ahead of time for maximum quality versus storage.  I certainly don't want low quality, real-time transcoding degrading my video experience.  Many people do, but if you are running a full storage server like OpenFiler you probably do not want it busy transcoding media files every time that they are served out.
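
The unpack step looks something like this; the tarball filename here is hypothetical and depends on the release you download:

```shell
cd /opt
# Unpack the static binary release; the exact filename varies by version.
tar -xzvf mediatomb-0.11.0-linux-i386-static.tar.gz
# Rename to a version-neutral path for convenience.
mv mediatomb-0.11.0 mediatomb
```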

You can start MediaTomb from the command line simply using the command:

nohup /opt/mediatomb/mediatomb.sh &

And away you go.  In my case I renamed the MediaTomb directory to /opt/mediatomb to make it easier to use.  When you fire it up you will get an on-screen message telling you where the web management interface to the software will be.  You can simply go to the web page to add your media directories to MediaTomb so that it can scan them and make them available via UPNP.

Caveat: I have noticed that MediaTomb tends to crash for me about once every twenty-four hours.  Restarting is quick and easy, so this is not a major issue.  I am still investigating and hope to have an answer soon.
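
Until the crashes are diagnosed, a crude workaround is a watchdog script that relaunches MediaTomb whenever it exits.  This is just a sketch, assuming the install was renamed to /opt/mediatomb as described above:

```shell
#!/bin/sh
# Hypothetical watchdog: relaunch MediaTomb every time it dies.
while true; do
    /opt/mediatomb/mediatomb.sh
    echo "mediatomb exited with status $?, restarting in 5 seconds" >&2
    sleep 5
done
```

Launch the script with nohup in place of the direct command above and it will keep the server available without manual intervention.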

Why not ps3mediaserver?

In order to make ps3mediaserver work you need to manually fulfill a large number of dependencies.  Ps3mediaserver comes as a tarball, not as a system package like RPM, DEB, Conary, etc., and so all dependencies are yours to discover and fulfill.  On Red Hat, Ubuntu or Suse systems these dependencies are often fulfilled by default and can be ignored.  On rPath, however, which is a dedicated appliance server OS, not only are they not fulfilled by default but the necessary packages are not even available for the platform!

You will need to install Java for starters.  This will allow ps3mediaserver to run and serve out audio files.  If this is all you want then you can go down this route.  But once you start ps3mediaserver you will discover that it has no normal administrative interface and is designed to only work with an X GUI.  Of course, no one has X installed by default on rPath Linux – this is a server not a desktop.  This is an extremely silly requirement for ps3mediaserver and really shows that they do not intend this to be used in a serious installation like what we are doing here.  This is a desktop solution like iTunes.  Fine for most people but we are on a different scale here.

So to configure your new ps3mediaserver you will need to install Xterm and get remote X to your server.  If you are working from Windows then you will need an X server like Mocha to handle this.  You can install all of the necessary packages for this using “conary update xterm” but this is just the beginning of your problems.

You can set ps3mediaserver to not transcode but on Linux, without the transcoding libraries installed, it won't work.  It will attempt to transcode regardless of the settings.  You can verify this by checking the media type from your PS3 or other video player.  For me, my pristine, low bandwidth h.264 MP4 files were being displayed as MPEG2.  This does not happen with ps3mediaserver on Windows with the statically compiled binaries.  It is rather inconsiderate of the ps3mediaserver project to ship statically compiled binaries for other platforms but to cripple the Linux version without so much as a list of dependencies that we need to fulfill.

You will need ffmpeg and mencoder for starters.  Good luck.  Neither is available for the rPath platform and they do not compile using the included compilation environment.  You will, of course, need to install an entire compilation environment just to get started with these.  More software not exactly appropriate on a server.  You can remove it once you are done but then how do you update your system?

The bottom line is that you should avoid ps3mediaserver on the rPath platform and stick with MediaTomb.  The ps3mediaserver project just is not ready for prime time from what I have seen.  It is okay in carefully controlled environments but it is not yet prepared to really run on a "production" media server.  It has some great potential, to be sure.  I've run the project on Windows and it is very nice.  Over the top complicated, but nice.  However, getting it to run for the highest possible quality, like MediaTomb does as its only real feature, requires a lot of work and a lot of extra libraries and bloat for a relatively simple system.

]]>
https://sheepguardingllama.com/2009/08/installing-mediatomb-on-openfiler/feed/ 7
Third Party Hard Drive for HP Proliant DL185 G5 https://sheepguardingllama.com/2009/05/third-party-hard-drive-for-hp-proliant-dl185-g5/ https://sheepguardingllama.com/2009/05/third-party-hard-drive-for-hp-proliant-dl185-g5/#comments Sun, 31 May 2009 18:06:25 +0000 http://www.sheepguardingllama.com/?p=4088 Continue reading "Third Party Hard Drive for HP Proliant DL185 G5"

]]>
This document applies directly to the Hewlett Packard Proliant DL185 G5 server.  I have tested this with the twelve front bay configuration and will test shortly with the rear-facing drive configuration as well.  [Edit – Tested with fourteen drive configuration and it checked out just fine.]

When buying a hot-swap SAS or SATA 3.5″ hard drive for use in your new HP Proliant DL185 G5 you can acquire them directly from HP with the drive carrier (or sled, caddy) already attached.  This is the easiest method.  If you are like me and prefer to select your own drives from third party makers (in my case, I want to use low power, high capacity Seagate Barracuda LP drives) then you must purchase your hot swap drive sleds separately.  Finding the correct part number from HP can be quite a hassle.  Even calling them for support can be tricky as almost no one buys this part directly.

If you wish to get your drive trays separately and not through HP you may be in tough shape.  HP does not stock this part and, in fact, is unable to even look up this part number for you.  I spent some time working with HP in the US on this issue and they were able to provide a visual confirmation on the part for me but could not verify the quality or the usability of the third party drives that I was able to find.  So I was stuck taking a risk to see if these drives would work.  For some machines HP can provide a part number and sometimes can even sell the caddy themselves, but not in the case of the 185 G5.  I have taken the time both with HP and with the third party vendors and with the server in-hand to verify these parts so you do not have to do so.

The part that you need to purchase is HP Part Number: 373211-001.  This part is generally priced around $35 USD.  You will need as many as fourteen of them to fully populate the DL185 G5 with the two optional large drive bays (twelve in front and two in back) but you can use them individually as well, of course.  I have had good luck and have gotten a good price getting these trays from Discount Technology: DL185 G5 Hot Swap Drive Tray.

Beware of shops attempting to sell you a much lower cost alternative to this part number.  Quite often the lower cost part is actually a drive blank.  A drive blank is simply a plastic air dam that corrects airflow through the server chassis when a drive is not present. Many of these drive blanks should ship with your DL185 G5 when it is new.  They are readily available and very inexpensive but, mostly, useless.

The big advantage of working with third party sleds and drives is that the DL185 G5 can be populated for thousands of dollars less and can house as much as 28TB of storage in a tiny 2U server.  This is possibly the second densest storage unit on the market when used with the Seagate Barracuda LP 2TB drives – the densest is the Sun x4500 "Thumper" 4U storage server at many times the cost of the DL185.

Also, when ordering a DL185 G5 you should be aware that if you get the larger twelve-drive front bay you cannot also have a front loading optical drive and will need to get your optical drive rear facing.  If you get the optional dual hot swap rear facing drive option then you cannot have a rear facing optical drive.  If you choose both of these options you must use a USB-based optical drive in order to boot from optical media.  This is not always obvious when you are attempting to order one of these machines.

]]>
https://sheepguardingllama.com/2009/05/third-party-hard-drive-for-hp-proliant-dl185-g5/feed/ 31
Intel Core 2 Duo Reports as Pentium III in Windows XP https://sheepguardingllama.com/2009/04/intel-core-2-duo-reports-as-pentium-iii-in-windows-xp/ https://sheepguardingllama.com/2009/04/intel-core-2-duo-reports-as-pentium-iii-in-windows-xp/#respond Thu, 02 Apr 2009 22:18:56 +0000 http://www.sheepguardingllama.com/?p=3784 Continue reading "Intel Core 2 Duo Reports as Pentium III in Windows XP"

]]>
I have been running into a lot of people having issues with Windows XP recently who use WMI to poll data from machines running Intel Core 2 Duo processors.  These processors report as Pentium III processors through the WMI Win32_Processor class.

This is caused by an actual bug that appears to be impacting Windows XP Pro SP2 and Windows XP Pro SP3 machines only.  It most definitely does not affect the Windows Vista family.

Under some circumstances the Core 2 processor may report as a Pentium III or as a Pentium III XEON.

This behaviour has no known ill side effects other than causing confusion for people attempting to catalogue their machines via the WMI interface and finding that their machines believe themselves to be quite old even when they are obviously very new.
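
You can see the mislabelling for yourself from the Windows command prompt on an affected machine; wmic reads the same Win32_Processor class that inventory tools poll:

```shell
wmic cpu get Name, Caption
```

On an affected XP SP2/SP3 box the output may identify a Core 2 Duo as a Pentium III, while the same query on Vista reports correctly.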

A hotfix is available from Microsoft for this issue: KB953955

References:

http://news.softpedia.com/news/XP-SP3-Win32-Processor-Class-Labels-Intel-Core-2-Duo-CPUs-Incorectly-90201.shtml

http://wccftech.com/forum/computer-talk/20133-xp-sp3-win32-processor-class-labels-intel-core-2-duo-cpus-incorrectly.html

]]>
https://sheepguardingllama.com/2009/04/intel-core-2-duo-reports-as-pentium-iii-in-windows-xp/feed/ 0
Accessing HP Integrity MP for Newbies ^Ecf https://sheepguardingllama.com/2009/03/accessing-hp-integrity-mp-for-newbies-ecf/ https://sheepguardingllama.com/2009/03/accessing-hp-integrity-mp-for-newbies-ecf/#respond Mon, 30 Mar 2009 02:03:40 +0000 http://www.sheepguardingllama.com/?p=3761 Continue reading "Accessing HP Integrity MP for Newbies ^Ecf"

]]> So you’ve bought/inherited/stolen a hot HP Integrity server… now what do you do?  For those not up on their EPIC systems, the HP Integrity line is Hewlett-Packard’s Intel Itanium 2-based server line running the EPIC IA64 architecture.  These are seriously high-end servers and not to be taken lightly.

So you acquire one.  The first thing that you may realize is that you have absolutely no way to connect to it.  Well, this can be made pretty difficult by the fact that we do not know what state your server is in.  If it is in pristine condition then the easiest thing to do will be to set the IP address via ARP since DHCP is disabled by default.

Look on the info card on the front of your server (on my rx2600 this is a card that has a tiny handle that you can pull forward.)  On this card is the unit's MAC address.  Use this to manually set the IP address of the management system via a computer on the local subnet.  First, make sure that you have plugged in the management console ethernet connection.  In this example we will set the management system address to 192.168.2.28:

arp -s 192.168.2.28 ma-ca-dd-re-ss-00

ping 192.168.2.28

Where the MAC Address is the address of your machine (not my cleverly written MAC address!)  If all goes well this will set your system address and you will be good to go.  If your system has already been set up for DHCP then this technique will not work.  Check your DHCP server logs to see what address was handed out to the MAC Address that you just looked up.
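
One small gotcha when searching DHCP logs: the info card usually prints the MAC with dashes while dhcpd logs it with colons, so normalize it first.  A sketch, with a made-up address:

```shell
#!/bin/sh
# Hypothetical MAC address copied from the info card (dash-separated).
MAC_DASHED="00-0e-7f-aa-bb-cc"
# Convert to the colon-separated form that dhcpd writes to syslog.
MAC_COLON=$(echo "$MAC_DASHED" | tr '-' ':')
echo "$MAC_COLON"    # prints 00:0e:7f:aa:bb:cc
# Then search the DHCP server's log for the lease it handed out:
# grep -i "$MAC_COLON" /var/log/messages
```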

Now, using Windows telnet or, better yet, the amazing PuTTY tool, you can connect to your new server's management console:

telnet 192.168.2.28

Or, of course, you can connect via your web browser if you have Java installed:

http://192.168.2.28/

Now, if you have an older version of MP the username / password that you are looking for are both blank by default.  Just hit the enter key a couple of times and you should be in.  If your MP firmware has been updated then the default username and password are Admin / Admin along with a default operator of Oper / Oper.

If your system is like mine you will now be presented with some warning and a notice that you must press ^Ecf in order to access the system.  This can be a bit confusing.  Here is the long description that helps to solve the mystery of what to press: Hit Control-e, then release completely.  Then press c.  Then press f.  This is a three key “sequence” not a “chord.”  Only the first character in the sequence is “controlled”.

If this wasn’t confusing enough, after hitting this three key sequence you then need to hit Control-b in order to be dropped to the Management Process system.

If all goes well, you will be dropped to the MP> command parser so that you can begin to use your system.  I include this all here because my first experience with an HP Integrity rx2600 was a bit daunting and everyone online seemed to assume a rather extensive amount of access to documentation, cabling, HP resources and mind reading capacity.

Good luck and welcome to the world of the HP Integrity server!

]]>
https://sheepguardingllama.com/2009/03/accessing-hp-integrity-mp-for-newbies-ecf/feed/ 0
Linux Active Directory Integration with LikeWise Open https://sheepguardingllama.com/2009/03/linux-active-directory-integration-with-likewise-open/ https://sheepguardingllama.com/2009/03/linux-active-directory-integration-with-likewise-open/#respond Sun, 01 Mar 2009 23:27:12 +0000 http://www.sheepguardingllama.com/?p=3648 Continue reading "Linux Active Directory Integration with LikeWise Open"

]]>
I downloaded the latest RPM package (for Red Hat, Suse, CentOS and Fedora) from the LikeWise web site (you need to register before starting your download.)  I downloaded the RPM package to the /tmp directory.  The version that I am testing is the Winter 2009 Edition.

Warning: LikeWise modifies many configuration files and its uninstall routine does not replace these.  Installing LikeWise and then uninstalling again will likely cause you to lose the ability to log back in to your machine.  Treat modifying authentication systems with the utmost care.

The RPM download still uses a script so you will need to add execute permissions.

chmod a+x LikewiseIdentityServiceOpen-5.1.0.5220-linux-x86_64-rpm.sh

./LikewiseIdentityServiceOpen-5.1.0.5220-linux-x86_64-rpm.sh

The package steps you through the installation program.  You will need to accept the license as there are actually several packages, covered under various licenses, that need to be installed to support LikeWise.  If you are installing on an AMD64 platform then you will be questioned as to whether or not you want to install 32-bit support libraries.  Unless you really know what you need just select the “auto” option.  After that, the installation will take care of itself.

If you use SELinux, as you should, you will need to set it to Permissive mode during the configuration.

setenforce Permissive

Then we can join the Linux machine to the Active Directory domain.

/opt/likewise/bin/domainjoin-cli join exampledomain.com domainadminuser

At this point basic authentication is already working.  You will need to make some changes to your setup if you have existing accounts as well, but we can address that later.

Test your login:

ssh -l exampledomain\\username linuxhostname
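
Another quick sanity check, assuming the join succeeded and the LikeWise daemons are running, is to ask the local system to resolve the domain account:

```shell
# Should print the UID, GID and group list that LikeWise maps for the AD user.
id exampledomain\\username
```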

Once you are all set do not forget to turn SELinux back on.

setenforce Enforcing

The big caveat with using LikeWise Open for your Unix to AD integration needs is that there is no Windows to UNIX GID/UID mapping so your UNIX (Linux, Solaris, Mac OSX, etc.) machines are stuck using Windows IDs.  This is not necessarily the end of the world depending on your environmental needs but it can be quite a pain if you are introducing AD into a large, established Unix environment.  LikeWise Enterprise does not suffer from this limitation, but it is obviously not free.

]]>
https://sheepguardingllama.com/2009/03/linux-active-directory-integration-with-likewise-open/feed/ 0
Third Party Hard Drive for HP Proliant DL385 G5 https://sheepguardingllama.com/2009/02/third-party-hard-drive-for-hp-proliant-dl385-g5/ https://sheepguardingllama.com/2009/02/third-party-hard-drive-for-hp-proliant-dl385-g5/#respond Sat, 28 Feb 2009 17:41:37 +0000 http://www.sheepguardingllama.com/?p=3642 Continue reading "Third Party Hard Drive for HP Proliant DL385 G5"

]]>
This document applies directly to the Hewlett Packard Proliant DL385 G2 and DL385 G5 servers which share a physical chassis.  To the best of my knowledge, this will also apply to the DL585 G2 and DL585 G5 which should share an eight bay drive cage with their 3xx series cousins.  I also believe that this applies to the Intel based DL380 G5 as well as the DL580 G5.  (The DL380 G4 and the DL580 G4 use different drive configurations as does the DL385 G5p.)

When buying a hot-swap SAS or SATA 2.5″ hard drive for use in your new HP DL385 G5 you can acquire them directly from HP with the drive carrier (or sled, caddy) already attached.  This is the easiest method.  If you are like me and prefer to select your own drives from third party makers (in my case, I want to use high performance Seagate drives) then you must purchase your hot swap drive sleds separately.  Finding the correct part number from HP can be quite a hassle.  Even calling them for support can be tricky as almost no one buys this part directly.

I have already done the legwork to find the correct part number and have purchased and tested this part to be sure that it is correct.  The part that you need to purchase is HP Part Number: 378343-002.  This part is generally priced around $50 USD.  You will need eight of them to fully populate the DL385 G5 drive housing but you can use them individually as well, of course.

Beware of shops attempting to sell you a much lower cost alternative to this part number.  Quite often the lower cost part is actually a drive blank.  A drive blank is simply a plastic air dam that corrects airflow through the server chassis when a drive is not present.  Seven of these drive blanks should ship with your DL385 G5 when it is new.  They are readily available and very inexpensive but, mostly, useless.

If you need to reach HP’s Parts Store directly you can call them at (800) 227-8164 in the US.

]]>
https://sheepguardingllama.com/2009/02/third-party-hard-drive-for-hp-proliant-dl385-g5/feed/ 0
WordPress on Red Hat / CentOS Linux https://sheepguardingllama.com/2009/02/wordpress-on-red-hat-centos-linux/ https://sheepguardingllama.com/2009/02/wordpress-on-red-hat-centos-linux/#respond Thu, 26 Feb 2009 00:17:36 +0000 http://www.sheepguardingllama.com/?p=3614 Continue reading "WordPress on Red Hat / CentOS Linux"

]]>
If you run WordPress on Red Hat Enterprise Linux (RHEL) or its free cousin CentOS then you will likely run into the following error after you have unpacked WordPress, installed it and tried to do your initial setup:

Error establishing a database connection

This either means that the username and password information in your wp-config.php file is incorrect or we can’t contact the database server at databasename. This could mean your host’s database server is down.

  • Are you sure you have the correct username and password?
  • Are you sure that you have typed the correct hostname?
  • Are you sure that the database server is running?

If you’re unsure what these terms mean you should probably contact your host. If you still need help you can always visit the WordPress Support Forums.

You are not alone; this happens to everyone.  If you do some searching on this you will find that pretty much no one has an answer for what is wrong.  People running the MySQL server locally already know the trick necessary to fix this problem, but if you are running MySQL remotely, as I am, then you can be easily misled into thinking that the fix does not apply to you.  It does.

The issue here, surprisingly, is that SELinux is enabled on the web server and is keeping the MySQL library from communicating with the MySQL server, whether that server is local or remote.  Simply set SELinux to Permissive rather than Enforcing and, voila, you should be working again.

The command to set SELinux to Permissive mode is:

setenforce 0

You can verify that the mode has changed correctly with:

getenforce
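One caveat: setenforce 0 only lasts until the next reboot.  To make Permissive mode persist across reboots, edit /etc/selinux/config so that it reads something like this (a minimal sketch; keep whatever SELINUXTYPE value your file already has):

```shell
# /etc/selinux/config
# SELINUX= may be enforcing, permissive or disabled; permissive logs
# policy denials without actually blocking anything.
SELINUX=permissive
SELINUXTYPE=targeted
```

If you would rather leave SELinux enforcing, a narrower fix is to allow just the web server to make outbound network connections with setsebool -P httpd_can_network_connect 1, which covers the PHP-to-MySQL case without relaxing the rest of the policy.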

It is important to note that this SELinux issue (bug, I am told) does NOT affect the MySQL command line client but does affect PHP.  So if you are testing your database connection with “mysql” and it works but WordPress throws the error, then you are a prime candidate for this problem.

Also, be sure that PHP has the MySQL module installed; after installing it, restart Apache (service httpd restart) so that PHP picks up the new module:

yum install php-mysql

I have seen this issue on several versions of all of the software components but specifically just dealt with it in CentOS 5.2 with PHP 5.1.6 and WordPress 2.7.1.

]]>
https://sheepguardingllama.com/2009/02/wordpress-on-red-hat-centos-linux/feed/ 0
How To – Easy NTP on Solaris 10 https://sheepguardingllama.com/2009/02/how-to-easy-ntp-on-solaris-10/ https://sheepguardingllama.com/2009/02/how-to-easy-ntp-on-solaris-10/#respond Fri, 20 Feb 2009 20:44:47 +0000 http://www.sheepguardingllama.com/?p=3595 Continue reading "How To – Easy NTP on Solaris 10"

]]>
Setting up NTP (the Network Time Protocol) on Solaris 10 is very simple but requires a few less-than-obvious steps that can trip up someone looking to set up a basic NTP daemon to sync their local machine.

The first step is to install the NTP packages SUNWntpr and SUNWntpu, both of which are available from the first CD of the Solaris 10 installation CDs.  These packages, along with the others, are located in /mnt/cdrom/Solaris_10/Product/, assuming that you mounted your Solaris 10 CD 1 or its ISO image to /mnt/cdrom, of course.  Personally, I keep an ISO copy of this CD available on the network for easy access to these packages, although they could very easily be copied off into a package directory.  It depends on the number of machines which you need to maintain.

Go ahead and install the two packages.  This can be done easily by moving into the Product directory and using the “pkgadd -d .” command and selecting the two packages from the menu.  There are no options to worry about with these packages so just install and then we are ready to configure.

The “gotcha” with NTP on Solaris is that there is no default configuration to get you up and running automatically and most online information about the installation either leaves out this portion or supplies details unlikely to be used under common scenarios.

Solaris’ NTP comes with two sample configuration files, /etc/inet/ntp.client and /etc/inet/ntp.server.  Confusingly, for the most basic use we are going to want to work from the ntp.server sample file rather than from the ntp.client sample file.  NTP uses /etc/inet/ntp.conf as its actual configuration file and, as you will notice, after a default installation this file does not exist.  So we start by making a copy of ntp.server.

# cp /etc/inet/ntp.server /etc/inet/ntp.conf

Now we can make our changes to the new configuration file that we have just created.  I will ignore any of the commented lines here and only publish those lines actually being used by my configuration.  In this case I have gone with the simplest scenario, which includes using an external clock source and ignoring my local clock.  On a production machine you should set up the local clock as a fallback device.

For my example here, I am syncing NTP on Solaris 10 to the same machine pool to which my CentOS Linux machines get their time, the CentOS pool at ntp.org.  You should replace the NTP server names in this sample configuration with the names of the NTP servers in the pool which you will use.

server 0.centos.pool.ntp.org
server 1.centos.pool.ntp.org
server 2.centos.pool.ntp.org
server 3.centos.pool.ntp.org
broadcast 224.0.1.1 ttl 4
enable auth monitor
driftfile /var/ntp/ntp.drift
statsdir /var/ntp/ntpstats/
filegen peerstats file peerstats type day enable
filegen loopstats file loopstats type day enable
filegen clockstats file clockstats type day enable
keys /etc/inet/ntp.keys
trustedkey 0
requestkey 0
controlkey 0

This very standard and simple setup provides you with four servers from which to obtain NTP data and also rebroadcasts this data on the local network via multicast using the NTP standard multicast address of 224.0.1.1.  Feel free to remove or comment out the broadcast line if you have no desire to have any machines locally getting their NTP data from this machine.  The ease with which you can republish NTP locally via multicast, though, makes it just too good to pass up.
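Before enabling the daemon it is worth a quick sanity check that the configuration contains what you expect.  The sketch below recreates the sample in a scratch file so it is safe to run anywhere; against a real system you would point the grep commands at /etc/inet/ntp.conf instead:

```shell
# Recreate the minimal configuration in a scratch file, then confirm
# it holds the four upstream pool servers and the single multicast
# rebroadcast line.
cat > /tmp/ntp.conf.check <<'EOF'
server 0.centos.pool.ntp.org
server 1.centos.pool.ntp.org
server 2.centos.pool.ntp.org
server 3.centos.pool.ntp.org
broadcast 224.0.1.1 ttl 4
EOF
grep -c '^server ' /tmp/ntp.conf.check      # prints 4
grep -c '^broadcast ' /tmp/ntp.conf.check   # prints 1
```

If either count comes back wrong on a real box, a typo or a stray comment character in ntp.conf is the usual culprit.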

Now that we have a working configuration file, we need to fire up NTP and let it sync up with our chosen servers.  The best practice here is to use the ntpdate command a few times to get the box date and time as close to accurate as is reasonable before turning NTP loose to do its thing.  The NTP daemon is designed to slowly adjust the clock whereas ntpdate will set it correctly immediately, so this gets the initial time correct right away.

# ntpdate pool.ntp.org; ntpdate pool.ntp.org

# svcadm enable ntp

At this point, the NTP daemon should be running and your time should be extremely accurate.  You can verify that NTP is running by looking in the process list for /usr/lib/inet/xntpd, which is the actual name of the NTP daemon running on Solaris 10.

]]>
https://sheepguardingllama.com/2009/02/how-to-easy-ntp-on-solaris-10/feed/ 0
Time Sync on VMWare Based Linux https://sheepguardingllama.com/2009/02/time-sync-on-vmware-based-linux/ https://sheepguardingllama.com/2009/02/time-sync-on-vmware-based-linux/#comments Thu, 19 Feb 2009 02:57:49 +0000 http://www.sheepguardingllama.com/?p=3586 Continue reading "Time Sync on VMWare Based Linux"

]]>
In many cases it can be quite difficult to accurately keep time on a virtualized operating system due to the complex interactions between the hardware, host operating system, virtualization layer and the guest operating system.  In my case I found that running Red Hat Linux 5 (CentOS 5) on VMWare Server 1.0.8 resulted in an unstoppable and rapid slowing of the guest clock.

The obvious steps to take include running NTP to control the local clock.  This, however, only works when the clock skews very slowly.  In my case, as in many, the clock drifts too rapidly for NTP to handle.  So we need another solution.  VMWare recommends installing VMWare Tools on the guest operating system and subsequently adding the following to your VMX configuration file:

tools.syncTime = true

This does not always work either.  You should also try changing your guest system clock type.  Most suggestions include adding clock=pit to the kernel options.  None of this worked for me.  I had to resort to a heavy-handed NTP trick – putting manual ntpdate updates into cron.  In my case, I set it to update every two minutes.  The clock still drifts heavily within the two minute interval but for me it is an acceptable amount.  You should adjust the update interval for your own needs.  Every five minutes may easily be enough but more frequently might be necessary.
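For reference, on a RHEL 5 / CentOS 5 guest the clock=pit option is appended to the kernel line in /boot/grub/grub.conf.  The fragment below is only an illustration; the kernel version and root device shown are hypothetical, so match them to your own existing entry:

```
# /boot/grub/grub.conf (fragment); note clock=pit at the end of the
# kernel line -- everything else should match your existing entry.
title CentOS (2.6.18-92.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-92.el5 ro root=/dev/VolGroup00/LogVol00 clock=pit
        initrd /initrd-2.6.18-92.el5.img
```

The change takes effect on the next boot of the guest.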

Using crontab -e under the root user, add the following to your crontab:

*/2 * * * * /usr/sbin/ntpdate 0.centos.pool.ntp.org

For those unfamiliar with the use of */2 in the first column of this cron entry, it designates running the job every two minutes.  For every five minutes you would use */5.  Remember that it takes a few minutes before cron changes take effect, so don’t look for the time to begin syncing right away.
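As a concrete check, */2 in the minute field is just shorthand for the explicit list of every even minute.  You can expand it yourself to see exactly which minutes will fire:

```shell
# Expand the */2 minute schedule into the equivalent explicit list
# (0,2,4,...,58); cron treats the two forms identically.
seq 0 2 58 | paste -sd, -
```

The same expansion works for any step value: seq 0 5 55 gives you the */5 schedule.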

For me, this worked perfectly.  Ntpdate is not subject to the skew and offset issues that ntpd is.  So we don’t have to worry about the skew becoming too great and the sync process stopping.

If anyone has additional information on syncing Linux in this situation, please comment.  Keep in mind that this is for Red Hat Linux and the kernel with RHEL5 is 2.6.18, which does not include the latest kernel time updates that may be found in some distributions like Ubuntu.  Recent releases of Ubuntu likely do not suffer this issue, and I expect that OpenSuse 11.1 and the latest Fedora would not either.

]]>
https://sheepguardingllama.com/2009/02/time-sync-on-vmware-based-linux/feed/ 7
Installing Windows Server 2003 on Xen on Red Hat Linux 5 https://sheepguardingllama.com/2009/02/installing-windows-server-2003-on-xen-on-red-hat-linux-5/ https://sheepguardingllama.com/2009/02/installing-windows-server-2003-on-xen-on-red-hat-linux-5/#respond Mon, 02 Feb 2009 03:31:21 +0000 http://www.sheepguardingllama.com/?p=3492 Continue reading "Installing Windows Server 2003 on Xen on Red Hat Linux 5"

]]>
After being challenged several times during the process of installing Windows Server 2003 into a fully virtualized Xen environment on Red Hat Enterprise Linux 5 (RHEL 5 or CentOS 5), I decided that a quick tutorial would be helpful for those of you who wish to install in exactly the same way.  There are several potential road blocks that must be addressed, including issues with accessing the graphical console (necessary for a normal Windows installation process) if you are not working from a local Linux workstation with a graphical environment installed.

I like to start a Xen installation using the very handy virt-install command.  Virt-install, available by default, makes creating a new virtual machine very simple.  I will assume that you are familiar with this part of the process and already have Xen installed and working.  If you are not sure if your environment is set up properly, I suggest that you start by paravirtualizing a very simple, bare-bones Red Hat Linux server using the virt-install process to test out your setup before challenging yourself with a much more lengthy Windows install that has many potential pitfalls.

The first potential problem that many users face is a lack of support for full virtualization.  This is becoming a less common problem as time goes on.  Full virtualization must be supported at the hardware level in both the processor and in the BIOS/firmware.  (I personally recommend the AMD Opteron platform for virtualization but be sure to get a processor revision, like Barcelona or later, that supports this.)

Using virt-install to kick off our install process is great but, most likely, you will do this and, if all goes well, begin to format your hard drive, only to find that your Xen machine (xm) simply dies, leaving you with nothing.  Do not be concerned.  This is a known issue that can be fixed with a simple tweak to the Xen configuration file.

CD Drive Configuration Issues

In some cases, you may have problems with your CD / DVD drive not being recognized correctly.  This can be fixed by adding a phy designation in the Xen configuration file to point to the CD-ROM drive.  This is only appropriate for people who are installing directly from CD or DVD.  Most people prefer to install from an ISO image; using an ISO does not have this problem.

In Red Hat, your Xen configuration files should be stored, by default, in /etc/xen.  Look in this directory and open the configuration file for the Windows Server 2003 virtual machine which you just created using virt-install.  There should be a “disk =” configuration line.  This line should contain, at a minimum, configuration details about your virtual hard drive and about the CD ROM device from which you will be installing.

The configuration for the CD ROM device should look something like:

disk = [ "file:/dev/to-w2k3-ww1,hda,w", ",hdc:cdrom,r" ]

You should change this file to add in a phy section for the cdrom device to point the system to the correct hardware device.  On my machine the cdrom device is mapped to /dev/cdrom which makes this very simple.

disk = [ "tap:aio:/xen/to-w2k3-ww1,hda,w", "phy:/dev/cdrom,hdc:cdrom,r" ]

Accessing the Xen Graphical Console Remotely via VNC

If you are like me you do not install anything unnecessary on your virtualization servers.  I find it very inappropriate for there to be any additional libraries, tools, utilities, packages, etc. located on the virtualization platform.  These are unnecessary and each one risks bloat and, worse yet, potential security holes.  Since all of the guest machines running on the host are vulnerable to any security concerns on the host, it is very important that the host be kept as secure and lean as possible.  To this end I have no graphical utilities of any kind available on the host (Dom0) environment.  Windows installations, however, generally require a graphical console in order to proceed.  This can cause any number of issues.

The simplest means of working around this problem is to use SSH forwarding to bring the remote frame buffer protocol (a.k.a. VNC or RFB) to your local workstation which, I will assume, has a graphical environment.  This solution is practical for almost any situation, is very secure, is rather simple and is a good way to reach an emergency graphical console whenever maintenance demands it.  Importantly, this solution works from Linux, Mac OSX, Windows or pretty much any operating system from which you may be working.

Before we begin attempting to open a connection we need to know on which port the VNC server is listening for connections on the Xen host (Dom0).  You can discover this, if you don’t know already from your settings, by running:

netstat -a | grep LISTEN | grep tcp

On Linux, Mac OSX or any UNIX or UNIX-like environment utilizing a command-line SSH client (OpenSSH on Windows, via Cygwin or similar, will also work in this way) we can easily establish a connection with a tunnel bringing the VNC connection to our local machine.  Here is a sample command:

ssh -L 5900:localhost:5900 root@dom0.example.com

If you are a normal Windows desktop user you do not have a command-line integrated SSH option already installed.  I suggest PuTTY.  It is the best SSH client for Windows.  In PuTTY you simply enter the name or IP address of the server which is your Dom0 as usual.  Then, before opening the connection, you can go into the PuTTY configuration menu and under Connection -> SSH -> Tunnels you can specify the Source Port (5900, by default for VNC but check your particular machine) and the Destination (localhost:5900.)  Then, just open your SSH connection, log in as root and we are ready to connect with TightVNC Viewer to our remote, graphical console session.

If you are connecting on a UNIX platform, such as Linux, and have vncviewer installed then you can easily connect to your session using:

vncviewer localhost::5900

Notice that there are two colons between localhost and the port number.  If you only use one colon then vncviewer thinks that you are entering a display number rather than a port number.
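The distinction matters because display numbers and TCP ports are related but not identical: VNC maps display N to port 5900 + N.  A trivial illustration (the display number here is just an example):

```shell
# VNC display N listens on TCP port 5900 + N, so display 0 is port
# 5900, display 1 is 5901, and so on.
display=1
echo $((5900 + display))   # prints 5901
```

So localhost::5901 and localhost:1 name the same endpoint; mixing the two forms up is the most common reason a viewer fails to connect.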

If you are on Windows you can download the viewer from the TightVNC project, for free, without any need to install.  Just unzip the download and run TightVNC Viewer.  You will enter localhost::5900 and voila, you have remote, secure access to the graphical console of your Windows server running on Xen on Linux.

]]>
https://sheepguardingllama.com/2009/02/installing-windows-server-2003-on-xen-on-red-hat-linux-5/feed/ 0
Scope Creep in Software Development https://sheepguardingllama.com/2009/01/scope-creep-in-software-development/ https://sheepguardingllama.com/2009/01/scope-creep-in-software-development/#respond Sat, 10 Jan 2009 15:13:23 +0000 http://www.sheepguardingllama.com/?p=3366 Continue reading "Scope Creep in Software Development"

]]>
Few words strike more fear into the hearts of software development managers and developers than scope creep. Long the bane of the development industry, scope creep has affected almost everyone who works on software – even those who work for themselves, but especially those who report to a client providing software specifications.  The perennial question in software development management has always been “How can we manage or eliminate scope creep?”

First, let us define scope creep for the purposes of this discussion.  Scope creep refers to the tendency within software development projects for the scope of a project to slowly grow over time through changing requirements and specifications.  Scope creep is a disaster for software projects for several reasons.  It makes budgeting nearly impossible.  It makes predictions of completion nearly always incorrect – often by astounding margins.  It makes determination of project feasibility questionable, as the development team may not be aware of technical challenges that will be faced due to expanded requirements.  It can even cause a project to “scope thrash” in a state where “feature complete” is never clearly defined and the project never completes, because the current state is never accepted as finished even if all existing features work correctly and the current scope is far beyond the original requested scope.

While thinking about this article it occurred to me that there are two schools of thought on scope creep: those who accept that scope creep is an inherent artifact of the software development process and those who believe that it can be eliminated through project management practices (rather than simply mitigated.)  But that is not the whole story.  In reality, I realized, these two schools of thought are products of a higher issue: some people believe that big design up front (aka BDUF) is a necessity and others do not.  Even this, though, is not the whole picture.  At a higher level still is the old argument as to whether software is an engineering discipline (engineering efforts require a small amount of design followed by a large amount of manufacturing or construction) or a design effort (where the design requires almost all of the effort and duplication is a trivial afterthought.)

Disclaimer: I am well aware that certain small segments of the software development community work on project types which absolutely require BDUF and locked requirements such as nuclear power station control systems or control systems for the space shuttle.  This represents a niche software market and falls outside of a more general discussion.  However, many projects of this type are only believed to be locked into BDUF processes because of client requirements and not because non-BDUF methodologies would necessarily produce less stable or reliable software.  Statistics indicate that we may, in many cases, be increasing risk and failure rates by requiring these projects to avoid the best practices of the industry in general.

The logic goes like this: if you believe that software development is an engineering task then you assume that you must complete all design before “construction” begins.  This requires that all or almost all design be completed prior to the beginning of construction or manufacturing.  This is the origin of big design up front.  This, in turn, requires that scope be carefully controlled, as BDUF has no means of managing changes in scope and small changes can have catastrophic effects on an engineering project.  Imagine a bridge where, after construction has started, the people who requested the bridge decide that they chose the wrong location or that its load bearing properties need to be changed to carry trains in addition to automotive traffic!

One of the key differences between software development and traditional engineering disciplines is that traditional clients understood that if the product was already made or partially made, the cost of retooling for changes would be staggering.  Even a small change to a car (make it 2 inches longer) would send the entire design team back to the drawing boards, all aspects of the engineering portion would begin again and safety certifications, marketing, etc. would all have to retool almost as if they were starting from scratch.  In software, though, clients do not see the final product as immutable and do not understand the impact of changes.

Scope creep in software development is deeper than a misunderstanding between clients and software engineers, however.  In specifying a traditional engineering project it is generally fairly simple for the person requesting the product to provide meaningful and useful specifications.  Here is an example: I can actually order a bridge from an engineering firm by stating the location at which I need the bridge, the carrying capacity of the bridge (two lane road, four lane road, etc.) and any special features (one walking path on the side, two paths, open sides for a view, etc.)  The rest of the bridge design is carried out without my input because what do I know about designing a bridge?  In some cases there might be several designs submitted, all of which meet the specifications but with wildly different looks, from which I can choose one to build.  Once planning for that build has begun there will be, for obvious reasons, a cost involved in my changing my mind.

Software engineering is not like this, though.  In bridge building (or car manufacturing or whatever traditional engineering discipline you wish to imagine) the people physically putting the bridge or car together make absolutely no decisions about what parts to use or how to assemble them. They are provided a design that tells them every nail, rivet, girder, component, material type, weld type, etc.  In software, the architect provides only a high level design and the software engineers or developers who actually create the software continue the low level design process, choosing the design, components, structure, etc.  So the design process continues until completion, at least at a low level.  In software, everyone is an engineer and no one is a construction worker (or factory worker.)

This means that design is always underway no matter what we do.  If software developers were not doing any design and were simply assembling code as specified by a higher level designer, then that piece of the work would be automated simply and the higher level designer would “become” the developer in question.  That may sound strange until you think about high level languages like Java or C# .NET with large support libraries, application platforms like JBoss or Rails, GUI designers, toolkits like JQuery or Dojo, etc., all of which come together to eliminate many routine design decisions for most projects, allowing developers to switch from writing extremely low level code to focusing on larger design issues and domain specific problems.  The “moving up” of the designer is a constantly occurring process that year by year makes the average developer able to do so much more than they could in the past and makes bigger and more complex software projects possible.

Given that we know that scope creep can and will happen to most projects we have three choices: do nothing and see what happens, manage and mitigate scope creep (the BDUF approach) or “embrace change” and build scope change into the ongoing design process (the Agile approach.)  Obviously, doing nothing is a recipe for disaster and should be discounted.  Only the naive take this approach.

Managing and mitigating scope creep is a challenging and difficult project management endeavor.  This generally requires a large amount of client education and heavy contractual agreements limiting the client’s ability to make changes, requiring a reworking of scheduling and budgeting with each significant request and locking clients to a design at some point so that no further changes may be made.  This approach can work and often does, but its failure rate is one of the most significant contributing factors to the high percentage of failed software development projects throughout the industry.

This approach forces clients to make many decisions at the beginning of a project, the point at which the least is known about it.  It does not address changes external to the project that may force the need for scope change upon the project.  It also risks clients becoming dissatisfied with the project partway along and finding it easier to pull out and start again than to rework what already exists.  Even when a project does complete, it risks being outdated, poorly specified and less relevant than anticipated.

Moving to an Agile approach, where scope creep is accepted as being inherent to software development, things work very differently.  In most Agile practices there is some design up front but the idea is to design only those portions very unlikely to change and only to design enough to provide the basic foundation and to address any hidden problems that could arise but would have been discovered through an up-front design process.  Obviously no project will work well without any design at the beginning.  How would you even know where to begin?

Agile does not dispute the need for design but, in fact, is built around the idea that the design is so important that it cannot be determined at the outset and left as-is.  Agile takes the approach that the design is far more important than that and builds design into every aspect of the development project.

A moderate amount of design is done from the beginning.  Then design, both low level and architectural, continues throughout the project.  This approach moves design decisions from the time when the least is known about the project to the time when the most is known.  It builds in flexibility so that as the client or business requesting the software learns how the software can and will work, they can learn more about how they will use it, expand what they need it to do or even reduce features that they later feel are unnecessary (although we all know how unlikely that is to happen.)

With many Agile processes, such as Extreme Programming, there is a concept of keeping the project in a continual “shippable” state in which everything that has been implemented is currently working and functional.  In this way, at any point, if the project budget for time or money runs out the project is at least usable in some form even if not in the form originally envisioned.  In this manner, the scope starts very small and grows with each project iteration.  This is not creep but a built-in function of the development process and very much intended.  In this way, the client or business can decide to use the software at any time while allowing for growth and advancement to continue as well.  The project can peacefully continue until the client feels that additional features are no longer necessary.  There is still a risk that the client will never call an end to the addition of features but since the project is continuously shippable or usable the risk involved in not reaching the “end” of a project does not exist.

An important aspect of Agile methodologies is that features are prioritized and completed discretely on a priority basis.  By addressing features purely by their priority we can assure that the system always exists in its most usable state for the amount of time that has been given to it.

Both approaches to scope creep management have their merits.  The risk of the traditional BDUF approach is that it is often used to shift responsibility from the software development team to the client.  This is handy when you are the developers and have no long-term interest in the viability of the software but are only concerned about meeting contractual obligations (whether to a third party or to an internal business unit.)  The BDUF approach is about solidifying responsibility with the client even though they are not software designers.

The Agile approach accepts that the client or business requiring software is not a software engineer and needs to work with the team to produce something that will meet the business needs.  This approach is about meeting business needs, not about locking the customer into paying for whatever original requirements were specified.  Locking in customers may sound great to some outsourcing shops, but businesses would be well advised to look instead for companies whose interest lies in making great, working software, not in meeting contract requirements alone.

Businesses looking to generate software internally have no reason to support the use of BDUF, which operates against the interest of the business itself.  This ideology pits the software development team against the business when, in reality, they should be aligned to work towards mutual benefit.

When deciding on a scope creep management strategy you must ask yourself, “Are you going to embrace the unknown and use it to your advantage, or are you going to ignore it and continue to make software designed before all current information was gathered?”  Businesses whose processes are Agile have a distinct advantage in the marketplace, as their products are based on current market needs and pressures rather than on pressures that existed when the project was first proposed.

]]>
https://sheepguardingllama.com/2009/01/scope-creep-in-software-development/feed/ 0
Installing ruby-sqlite3 on Red Hat or CentOS Linux https://sheepguardingllama.com/2008/11/installing-ruby-sqlite3-on-red-hat-or-centos-linux/ https://sheepguardingllama.com/2008/11/installing-ruby-sqlite3-on-red-hat-or-centos-linux/#comments Sun, 23 Nov 2008 22:04:22 +0000 http://www.sheepguardingllama.com/?p=3012 Continue reading "Installing ruby-sqlite3 on Red Hat or CentOS Linux"

]]>
For my development environment, I like to use SQLite3 on Red Hat Enterprise Linux (RHEL / CentOS.)  When working with the gem installer for the sqlite3-ruby package I kept getting an error on my newest machine.  I searched online and found no answers anywhere while finding many people having this same problem.  I have found a solution.  There is no need to compile Ruby again from source.

The command used was:

gem install sqlite3-ruby

What I found was the following error:

gem install sqlite3-ruby
Building native extensions.  This could take a while…
ERROR:  Error installing sqlite3-ruby:
ERROR: Failed to build gem native extension.

/usr/bin/ruby extconf.rb install sqlite3-ruby
checking for fdatasync() in -lrt… no
checking for sqlite3.h… no

make
make: *** No rule to make target `ruby.h', needed by `sqlite3_api_wrap.o'.  Stop.

Gem files will remain installed in /usr/lib64/ruby/gems/1.8/gems/sqlite3-ruby-1.2.4 for inspection.
Results logged to /usr/lib64/ruby/gems/1.8/gems/sqlite3-ruby-1.2.4/ext/sqlite3_api/gem_make.out

There are two main causes of this problem.  The first is that the necessary development packages are not installed.  Be sure that you install the correct packages for Red Hat.  In RHEL 5, which I use, the SQLite3 package is now simply called sqlite.

yum install ruby-devel sqlite sqlite-devel ruby-rdoc

If you are still receiving the error then you most likely do not have a C compiler installed.  The gem system needs make and GCC, so install those as well.  (Obviously you could combine these two steps.)

yum install make gcc
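
If it is not obvious which piece is missing, a quick pre-flight check (a generic sh sketch, not tied to any particular distribution) can confirm whether the compiler toolchain is present before retrying the gem install:

```shell
# Check for the build tools the gem's native extension needs.
# Prints "found", or a hint with the yum command to install the missing tool.
for tool in gcc make; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: MISSING - try: yum install $tool"
    fi
done
```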

Voila, your SQLite / SQLite3 installation on Red Hat (RHEL), Fedora, or CentOS Linux should be working fine.  Now your “rake db:migrate” should work.

Update: If you follow these directions and get the error that sqlite3-ruby requires Ruby version > 1.8.5 then you can go to my follow-up directions on
SQLite3-Ruby Gem Version Issues on Red Hat Linux and CentOS

]]>
https://sheepguardingllama.com/2008/11/installing-ruby-sqlite3-on-red-hat-or-centos-linux/feed/ 16
Resizing VMWare Server Virtual Disk https://sheepguardingllama.com/2008/11/resizing-vmware-server-virtual-disk/ https://sheepguardingllama.com/2008/11/resizing-vmware-server-virtual-disk/#comments Thu, 20 Nov 2008 18:12:30 +0000 http://www.sheepguardingllama.com/?p=2990 Continue reading "Resizing VMWare Server Virtual Disk"

]]>
Today I needed to resize a VMWare Virtual Disk (vmdk) for a Windows Server 2003 image running on a Red Hat Enterprise Linux 5 host using LVM to manage the physical, local disk space.  In my case, my logical volume was too small to accommodate the vmdk expansion and so I had to grow my logical volume before I could begin the VMWare portion of the work.

I must preface all of this, of course, by stating that you must make a complete backup of your virtual machine before doing something as invasive as this.  While this process is reasonably safe there is always the potential for disaster.  Take precautions.
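
One way to take that backup (a hedged sketch — the directory and image names are illustrative, borrowed from later in this post, and the temp paths exist only so the commands can be tried safely) is a simple tar archive of the VM’s directory while the guest is powered off:

```shell
# Temp directories stand in for a real /vmware/ph-w2k3-ad layout.
vmdir=$(mktemp -d)/ph-w2k3-ad
mkdir -p "$vmdir"
printf 'fake vmdk data\n' > "$vmdir/ph-w2k3-ad.vmdk"

# Archive the whole VM directory; -C keeps the paths inside the tarball relative.
backup=$(mktemp -d)/ph-w2k3-ad-backup.tar.gz
tar -czf "$backup" -C "$(dirname "$vmdir")" "$(basename "$vmdir")"
tar -tzf "$backup"   # verify the archive lists the vmdk
```

On a real host you would point tar at something like /vmware/ph-w2k3-ad and store the archive on separate storage before touching the disk.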

The lvextend command is used to increase the size of the logical volume.  You can view your current logical volumes with lvdisplay.  I use the -L+ syntax as a safety measure to be sure that my drive is getting larger and not shrinking accidentally due to a typo.  In this example I am expanding the /dev/VolGroup00/lvvmware logical volume by an additional 80GB.

lvextend -L+80G /dev/VolGroup00/lvvmware

This first step can be completed while the virtual machine is still running.  It will happily extend your available space in the background.  Our next step, however, requires that you power down your virtual machine before continuing.

Now that we have created space on our logical volume we need to expand the Linux local filesystem before we can expand the virtual filesystem running on top of it.  Assuming that we are using the current standard Ext3 this is very simple:

umount /dev/VolGroup00/lvvmware

e2fsck -f /dev/VolGroup00/lvvmware

resize2fs /dev/VolGroup00/lvvmware

mount /vmware/

Obviously for my purposes I use a /vmware directory structure for holding all of my disk images.  You will need to adjust as needed for your own setup.  /var/vmware is another common option.

Now we just enlarge the virtual disk itself.  We will do this through the vmware-vdiskmanager command.  You will need to execute this command on your vmdk and not your flat-vmdk even though this seems counter-intuitive when looking at your directory structure.

vmware-vdiskmanager -x 22GB ph-w2k3-ad.vmdk

This concludes the easy part.  Now you have plenty of logical disk space available for Windows but in order to expand the System drive of Windows you will need to use a third party tool.  Windows Server 2003 is unable to make partition changes that affect the running system.

If you are like me, you will want to fire up your virtual machine just to make sure that everything is okay after the disk change, but you will need to turn it off again before we make changes to the partition table.

There are many tools that can be used for this task but I decided to use GParted, which is available as a live CD which you can download for free.  For the version that I used, I just cd’d into /tmp and used this command to get my copy of GParted’s bootable CD ISO file.

wget 'http://downloads.sourceforge.net/gparted/gparted-live-0.3.9-4.iso?modtime=1222872844&big_mirror=0'
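
A shell gotcha with this kind of URL: an unquoted & tells the shell to background the command and treat the rest of the query string as a separate command, so the download silently loses its parameters.  Quoting the URL protects it; the snippet below just demonstrates the safe form, with echo standing in for wget:

```shell
# Single quotes keep '?' and '&' from being interpreted by the shell.
url='http://downloads.sourceforge.net/gparted/gparted-live-0.3.9-4.iso?modtime=1222872844&big_mirror=0'
echo "$url"      # echo stands in for: wget "$url"
```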

Using your VMWare Server Console (or through the command line) you will need to set your Windows Server image to boot from the GParted ISO which you just downloaded.  Then go ahead and “start this virtual machine.”

You will likely need to hit “Esc” as soon as the virtual machine starts so that you can select to boot from CD.  I keep my Virtual BIOS set to boot directly to the hard drive under normal circumstances because it is faster and I don’t want to accidentally boot to some CD media unless I really, really mean it.

Once GParted starts you will be given a boot menu.  The default option works in most cases and worked fine for me.  You will need to select your keyboard layout and then you will be taken to GParted’s graphical partition manager screen.

Once in the GParted Partition Manager you should see the current partition that you had before we started, in my case called /dev/sda1 and marked as being an NTFS file system.  Mine also shows the “unallocated” partition space into which I will be expanding my /dev/sda1 partition.

Start by selecting the partition which you are seeking to resize (sda1 for me) and then select “Resize/Move”.  This will open the Resize/Move window.  Do not alter the first number – “Free Space Preceding”; this is for “moving” your partition.  You only want to alter the second number – “New Size.”  If you are, like me, expanding into empty space created specifically for this purpose then simply set this number to the “Maximum Size” displayed in the window.  Then select “Resize/Move” to continue.

Once you have completed that step you can visually confirm that the disk now looks the way that you want it to look.  If you look at the bottom of the window you will see that there is “1 operation pending.”  If everything looks alright go ahead and click “Apply” to commit your changes and to resize your partition.

Once the resizing completes you are safe to reboot your virtual machine into Windows again.  Double click the “Exit” button on the GParted desktop.  Reboot should already be selected so just choose OK to continue.

When Windows starts it will detect the drive configuration change and force a disk consistency check.  Allow it to run through this process and when it completes the system will restart automatically.  Once Windows restarts you should see that your drive has been resized.

]]>
https://sheepguardingllama.com/2008/11/resizing-vmware-server-virtual-disk/feed/ 2
Subversion Permission Issues https://sheepguardingllama.com/2008/11/subversion-permission-issues/ https://sheepguardingllama.com/2008/11/subversion-permission-issues/#comments Sun, 16 Nov 2008 19:19:02 +0000 http://www.sheepguardingllama.com/?p=2940 Continue reading "Subversion Permission Issues"

]]>
In my installation of Subversion (SVN) on Red Hat Enterprise Linux 5 (a.k.a. RHEL 5 or CentOS 5), I was attempting to access my working Subversion repository through the web interface using Apache.  I came across a permissions issue giving the following errors:

This one is from the Apache error log (/var/log/httpd/error_log) and is generated whenever an attempt to connect to the resource via the web interface is made:

[error] [client 127.0.0.1] Could not open the requested SVN filesystem  [500, #2]

This is what was visible from the web browser.  This is its rendering of the XML response.

<D:error>
<C:error/>
<m:human-readable errcode="2">
Could not open the requested SVN filesystem
</m:human-readable>
</D:error>

This one arose when attempting to run the svn command as the apache user (sudo -u apache svn list ...):

svn: Can't open file '/root/.subversion/servers': Permission denied

I eventually discovered that this problem was being caused by the Subversion binary looking to the root home directory instead of to the Apache / httpd home directory (~apache, which was /var/www in my configuration.)  This is not the correct behaviour, but until the issue is fixed you can work around the problem with this:

cp -r /root/.subversion/* ~apache/.subversion/

chown -R apache:apache ~apache/.subversion/
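
The same workaround can be sketched with one extra safety step: creating the target directory before the copy, since ~apache/.subversion may not exist yet.  The version below runs against throwaway temp directories so it can be tried harmlessly; on a real server substitute /root/.subversion and ~apache/.subversion and keep the chown step.

```shell
# Temp directories stand in for /root/.subversion (src) and
# ~apache/.subversion (dst).
src=$(mktemp -d)
dst_parent=$(mktemp -d)
dst="$dst_parent/.subversion"

printf 'example servers config\n' > "$src/servers"
mkdir -p "$dst"          # ensure the target exists before copying into it
cp -r "$src/." "$dst/"   # copies servers, config, auth caches, etc.
ls "$dst"                # should list: servers
```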

]]>
https://sheepguardingllama.com/2008/11/subversion-permission-issues/feed/ 3
The Perfect Development Environment Manual https://sheepguardingllama.com/2008/11/the-perfect-development-environment-manual/ https://sheepguardingllama.com/2008/11/the-perfect-development-environment-manual/#respond Sun, 16 Nov 2008 19:08:14 +0000 http://www.sheepguardingllama.com/?p=2933 Continue reading "The Perfect Development Environment Manual"

]]>
This paper was written as my final paper for my graduate Process Management course at RIT.  The paper is based on a semi-fictional software as a service (software on demand) vendor which develops its own software and maintains its own development processes.  This document was to be formatted somewhere between a handbook and a memorandum to the CIO from the Director of Software Engineering, so the document is somewhat technical and not all terms are explained.  Because of the target audience, the document is a mix of theory and everyday practicality.  For example, at times we discuss development methodologies while at others we discuss actual hardware specs that will be out of date almost immediately.

To simplify things, the company in question is simply CompanyX and their main product is ProductY.  This document was originally a Word doc which I had to paste into Notepad and then into WordPress and then manually edit to restore formatting.  So please forgive any formatting mistakes as it was a lengthy manual process.

Overview

As requested, I have assembled a body of knowledge and best practices in process management with which our organization may promote healthy, productive software development and software development management.  Because CompanyX focuses solely on software as a service (i.e. “on demand”) products, many traditional approaches to process and project management do not apply or must be applied with an eye towards modifying their principles to work most aptly within CompanyX’s corporate framework.  This document is an attempt to distill the general body of knowledge available in this area and to modify it so that it is applicable and usable within CompanyX.

As you know, CompanyX’s approach to software development has always centered on two things: forward-thinking, progressive hiring practices which aggressively seek out the absolute best IT professionals, and extremely agile development practices designed to leverage the skill base of our developers most effectively while reducing the overhead created in other organizations by the need to manage mediocrity.  Because of these practices it is necessary for any software development discussion within the CompanyX context to take them into consideration and to alter generally accepted industry norms and practices accordingly.

The purpose of this document is not to enact change unnecessarily but to identify potential areas for improvement, to find practices which are weak, missing or poor, and to codify practices which are working and should not be changed.  This document is very timely as CompanyX is currently undergoing a change from a single-focus medical facilities software firm to a multi-focused firm with a financial software group and an internally managed hedge fund.

Current Development Status

To begin our discussion, I would like to look at the current state of the CompanyX development environment to establish a baseline against which we may compare our proposal of best practices and design.

CompanyX has two key datacenters, Chicago (internal and development) and Houston (production and Internet facing / non-VPN applications.)  Houston is in the process of being relocated to San Francisco.  A third datacenter, dedicated to backup storage and warm recovery needs, is being planned in Philadelphia.

At this time, CompanyX employs only a handful of developers most of whom do not need to directly collaborate as each works in a highly segregated technology niche which creates a situation of relative exclusion simply by its nature.  All CompanyX developers work from home offices provided by CompanyX using CompanyX workstations, software, networking hardware and telephones (when appropriate.)

Collaboration is handled through the secured corporate XMPP and eMail servers as well as over VoIP.  Remote desktops are visible using the central NX remote desktop acceleration system giving all employees access to one another on the central network.  Centralized documentation is handled through the corporate wiki.  Source control is handled via Subversion hosted at Peoria Heights – closest to the development servers.

Each home office has a hardware firewall with IPSec VPN acceleration to make networking to CompanyX’s Peoria Heights office transparent and simple.  Files can be transferred directly via CIFS or NFS from the central filers.

The main development platform at this time is Visual Studio 2008 using C# 3.0 and ASP.NET 3.5.  Visual Basic and VB.NET have been eliminated after years of weaning.  SQL Server 2005 and MySQL 4/5 are the key database platforms.  PHP is in use for some internal applications.

Given CompanyX’s current small scale this environment has proven to be quite effective, but we must realize that as growth and diversification occurs it will become increasingly necessary to add more structure and communication methods in order to keep projects moving fluidly.

Physical Location Issues

The first issue in development environments that we will address is physical space and location issues for the development teams.  Conventional wisdom says that the most practical means of getting maximum performance from developers is to locate them physically near to one another to facilitate rapid and smooth communications.  The need for rapid communications, however, is in direct conflict with the need that developers have for quiet and uninterrupted time for development to allow them to efficiently enter and maintain “the zone”.

“…knowledge workers work best by getting into “flow”, also known as being “in the zone”, where they are fully concentrated on their work and fully tuned out of their environment. They lose track of time and produce great stuff through absolute concentration. This is when they get all of their productive work done.” – Joel Spolsky, Joel on Software

In weighing the benefits of high level communications versus the benefits of isolation, flexible time, flexible location, travel, etc. we have determined that the cons of collocated development teams outweigh the benefits in our case.  Our approach, at this time, is to continue to pursue a completely work from home environment encouraging extremely flexible schedules allowing developers to maintain “the zone” for longer than usual durations while encouraging extended downtime when “the zone” is not obtainable.

Physical Environment

Regardless of the physical juxtaposition of developers, a superior development environment is a necessity.  Following the recommendations of Joel Spolsky, a recognized expert in physical development environments, we intend to make Herman Miller Aeron chairs available for all developers along with two 20.1” LCD monitors as a minimal, standard setup.  Additional monitors and choice of development platforms (Windows, Mac OSX, Linux, Solaris) are available at the request of the developers.  Also, all developers have access to commercial grade Bush desks that can be shipped directly from or picked up from Office Max.  These desks have surfaces that are very conducive to heavy mouse usage.

Communication Tools

Given that intra-team communications are of the utmost importance, but recognizing that we are unable to effectively co-locate most of our staff, we must turn to technology to mitigate this problem as far as possible.  This is a challenge faced by all teams, even those with the luxury of physical proximity.  We believe that we hold an unapparent advantage in this area because we are acknowledging communications as a shortcoming up front and tackling the issue seriously rather than assuming that communication is handled naturally by physical proximity.

We have a number of tools at our disposal that should function well in the role of providing a solid communications infrastructure for CompanyX.  We have many of these tools in place already such as a very successful email system and groupware package that handles basic communications both internally to the company and externally to clients and vendors.  The system is secure, reliable, readily accessible both inside and outside of the company, is tested to work well with BlackBerry and iPhone devices, etc.  We have team aliases set up to encourage team collaboration in this form.

Our email system also includes a simple document management system.  It has not proven useful for team collaboration, but it has shown itself to be very effective for individual note-taking and reference information, keeping that data out of email format proper while still allowing it to be accessed easily through the convenience of the email interface.

Group calendaring is currently proving to be a very useful addition to the eMail and groupware system.  Calendars are easily viewed by interested parties, and teams can select calendar views that show the free and busy times of everyone on the team at once, which is very effective for ad hoc planning and collaboration events.

In addition to asynchronous, archival communications available via eMail, CompanyX focuses heavily on the use of the XMPP/Jabber protocol for instant messaging via the OpenFire Server product.  This tool has worked very effectively for us in the past and our plans for the future involve a continued investment in this technology and an advancement in our client-side architecture with which to support communications. Currently we have a Java-based client, Spark by Ignite RealTime, that serves very well for general use.  We have begun testing an Adobe Flash based client that is available via a web page that may prove to be very useful for staff who need casual instant messaging access from machines where Spark is not installed.  It is our goal to make secure instant messaging access simple and ubiquitous so that staff are available to one another, in real time, whenever possible.

Often staff is required to be live on the instant messaging infrastructure while located in a secured office location that does not allow direct XMPP/S communications to the Internet.  For these users we have implemented an SSL Proxy / VPN product, SSL-Explorer, which allows them to access our applications such as eMail and XMPP even under the harshest network conditions.

Recently, CompanyX has moved from pmWiki to DokuWiki and has begun focusing heavily on using the wiki format to support as much internal development documentation as possible.  The wiki application is ideal for much of development communication because it is easy for any collaborator to create necessary documents, all contributors can edit their appropriate documents and staff requiring information have a central, searchable repository in which to perform searches for necessary information.  Wiki documents are automatically versioned making it trivial to track changes over time and to return to previous versions when necessary.

The wiki format offers many benefits that would be considered non-obvious.  DokuWiki allows staff to subscribe to wiki pages in which they have an interest and receive automatic change notifications, either via eMail or via RSS/Atom feeds.  Through this subscription system the people who are responsible for or dependent upon a particular set of data find out about changes immediately and can react accordingly, even if the person modifying the document is unaware of which parties may be affected.

The wiki format is useful for many types of documentation, with standards documents, best practices, guidelines, and project initiation documents such as the project charter and use cases being among the most effective.  The wiki is most useful as a supporting architecture for use cases, which are the bread and butter of our design documents at CompanyX.  Their text basis and ongoing modification make them ideal candidates for wiki-ization.

Not all document types lend themselves well to the wiki architecture.  Rarely changing documents, end-user printable documentation and UML diagrams, for example, are more appropriately handled through a richer format such as Microsoft Word, Microsoft Visio, Umbrello and OpenOffice.org Writer.  To support these file formats a more traditional content management system is recommended.  The two products leading in that area, at this time, are Microsoft’s SharePoint Server and the Alfresco Content Management System.  Both of these products offer portal functionality so that they may function as a central starting point for staff searching for content as well as document repositories with interfaces to traditional file stores.  Both may additionally be used as supporting frameworks for modular application interfaces.

Voice communications remain very important, especially for detailed technical discussions in real time.  In order to support fluid and cost effective voice communications CompanyX will implement an internally hosted Voice over Internet Protocol (VoIP) system.  VoIP compares favorably to traditional voice technologies, with many advanced features available such as conference calling, call forwarding, follow-me service, voice-to-text support, auto-paging, voice-to-email interfaces, etc.  The cost of VoIP is also very low compared to traditional voice technologies.  Because VoIP systems are so cost effective, CompanyX has the opportunity to extend the internal corporate voice infrastructure into the homes of all staff, providing a seamless environment so that neither internal staff nor external clients can easily tell whether a given member of staff is located at home, on the road or in the office.  This flexibility and transparency is important in supporting the disparate office environment required by CompanyX.

Unified Communications platforms, such as those made popular by Microsoft, Cisco, Nortel and Avaya, have many advantages over our recommended approach of maintaining and moving forward with separate eMail, IM, VoIP and other communication products.  By choosing to work with separate products, however, CompanyX has the benefit of choosing “best of breed” products rather than a single, unified product that does not necessarily compete head to head with industry leading products.  More importantly, Unified Communications (UC) falls under a legal grey area at this time and may subject loosely regulated communications, such as voice, to the long term scrutiny and archival requirements applied to communications such as eMail.  Because of laws within our primary jurisdiction, the United States, it is important that UC be avoided until the legal questions surrounding its implementation are resolved conclusively.

Development Methodologies

Development Methodologies are critical underpinnings of software development.  CompanyX has always focused on highly iterative and extremely agile development practices focusing heavily on those methods and practices that lend themselves first and foremost towards heavy innovation, rapid adaptation to the market and quick feature development.

Because of our needs and strategies, CompanyX has turned to highly modified Agile methods drawing inspiration from popular Agile methodologies, especially eXtreme Programming (XP) and SCRUM.

CompanyX has adopted the XP use of the “user story” as the primary measurement against which development is performed.  Stories are not completely “user” related and can sometimes be “developer” stories to allow simple scheduling for changes not visible to the end user.  This can be misleading to XP adherents as stories are designed to be visible to the client, but CompanyX’s true end client is, in a fashion, the development organization itself and so we treat stories in an unusual way at times.

In order to support the continuous development processes within CompanyX any standard methodology must be adapted heavily as all major methodologies rely upon release cycles, sprints, etc.  CompanyX uses Continuous Sustainable Velocity as its model and does not have scheduled release cycles.

A development team “burn down chart” or backlog system is used to schedule upcoming stories so that priorities are maintained even without a set schedule.  The backlog is continuously managed by the development lead and the analyst lead / project manager to keep the stories organized by priority but with continuous reference to their weight (estimated time for completion.)  The weight may be used to determine that a small feature or minor change might need to be popped to the top of the backlog stack because it represents better story velocity.
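
As a toy illustration of that weight-aware ordering (hypothetical stories and numbers; the real backlog lives in whatever tracker the team uses), priority can be the primary sort key with weight breaking ties so that quick wins surface first:

```shell
# Columns: priority (1 = highest), weight (estimated days), story.
cat > /tmp/backlog.txt <<'EOF'
2 8 reporting-module
1 5 login-audit
1 1 typo-fix
3 13 billing-rewrite
EOF
# Sort by priority, then by weight, so the 1-day story outranks the 5-day one.
sort -k1,1n -k2,2n /tmp/backlog.txt
```

Here the one-day typo-fix story surfaces ahead of the five-day login-audit story at the same priority, matching the idea that a small change may be popped to the top of the stack for better story velocity.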

The goal within CompanyX is to get new features designed, developed, tested and into the hands of our customers as quickly as possible.  Because we release all of our software directly to our own servers and deploy internally making the software available as a service to our customers we focus on releasing as features are made available and do not need to announce most new features or changes.  We try to roll out changes gradually on a regular basis so that customers always have some new fix, feature or performance enhancement.  To our customers our system is very much alive – a system constantly under change and development.

Extreme Programming

CompanyX’s methodology also follows the XP principle of simplicity and “You Aren’t Gonna Need It” or YAGNI.  This principle, which states, “Always implement things when you actually need them, never when you just foresee that you need them,” is important in allowing CompanyX to produce software more rapidly while delaying potentially complicated and unneeded work until we are sure that it is required.

CompanyX loosely follows Mehdi Mirakhorli’s Rules of Engagement and Rules of Play for XP which I will list and explain:

  • Business People and Developers Work Jointly: CompanyX follows this very closely but, because CompanyX does not have direct customers and is itself the primary customer of the software products which it then hosts as a service, CompanyX provides an industry-trained business analyst who represents CompanyX as an entity as well as theoretical business customers.  This analyst is available full time as a “business person” to whom the developers can go for clarification, expansion, etc.
  • The Highest Priority is Customer Satisfaction: This means primarily that CompanyX is able to host and manage the application and second that CompanyX’s SaaS customers are thrilled with the quality, features and innovation of our services.
  • Deliver Working Software Frequently: In XP, short term timeboxing is used to encourage the delivery of new features and updates on a regular basis.  At CompanyX we attempt to roll features and fixes out as quickly as they become available on a continuous basis.  Delays are only introduced to this system when multiple changes become available extremely close to one another or when system changes are considered risky.
  • Working Software: Our goal is to produce working software.  No other deliverable has the priority of working software.
  • Global Awareness: The development team must tune its behavior in order to support the overall needs of the organization and its customers.
  • The Team Must Act as an Effective Social Network: This means honest communication leading to continuous learning; an emphasis on person-to-person interaction rather than documentation; empowerment for developers and analysts to do the work that they need to do and to make the decisions necessary to create and innovate; and a level political playing field so that all team members are available to one another and seen as peers.
  • Continuous Testing: All code is continuously tested through all reasonable means.
  • Clearness and Quality of Code: Code should be clean and elegant resisting duplication and obfuscation.  When possible, code should adhere to the precepts of Software as Literature.
  • Common Vocabulary: The team, as well as the organization, should share a common vernacular to reduce miscommunication and increase the speed of development.
  • Everybody Has the Authority: Everyone on the development team has the authority to do any work and to fix any problem, and two or more people who understand any given piece of the system are always available.

In addition to the “Rules” of XP which CompanyX follows almost exactly, there are twelve XP Practices, as set down by Kent Beck, which we follow less closely.  The practices which we do not believe are beneficial to us include pair programming and test driven development (at all times.)  We feel that these practices are too heavy for CompanyX’s needs.

The XP Practices which CompanyX should be adhering to include: the Planning Game (for CompanyX this is played between the development and analyst leads), Whole Team (business is always available for the developers), continuous integration, refactoring, small releases (CompanyX uses continuous releases), coding standards, collective code ownership, simple design, system metaphor (business and development joint metaphor for system functionality creating a meaningful, shared namespace) and sustainable pace (CompanyX should never approve pushed overtime.)

SCRUM

CompanyX also takes concepts from the SCRUM development methodology, but not as heavily as from XP.  SCRUM’s focus is more upon agile software management than upon software development itself, which is the focus of XP.

SCRUM separates stakeholders into two key groups: pigs, who are completely invested in the project, and chickens, who have an interest in the project but are not necessarily totally committed to it.  CompanyX follows this tradition but without the humorous naming conventions.

Within the pig group of stakeholders, SCRUM recognizes three key roles: product owner, ScrumMaster and the Team.  The product owner is the voice of the customer and makes sure that the team is working on features and in a direction most beneficial to the business.  The ScrumMaster, sometimes called a Facilitator, takes over the traditional role of the Project Manager but, instead of leading the team or “managing” them, interfaces with organizational management and makes sure that outside influences do not get in the way of the team’s ability to work.  Finally, the team itself includes the developers, designers and others who perform the “real work.”

CompanyX organizes its teams similarly.  The Product Owner role is played by the business analyst assigned to the project, who represents the clients or potential clients.  Since SCRUM is generally organized around contract software development, its Product Owner is more aligned with a contract customer, while CompanyX’s customers are potential industry customers; the business analyst’s role is therefore to have a firm grasp on the business of the industry as a whole and to represent customers in a way that no one person within the industry could do themselves.  These roles are extremely similar.

The ScrumMaster role is less clearly defined within CompanyX.  As CompanyX does not actually use SCRUM, we do not require someone to manage the SCRUM process.  A manager is selected to act in the Facilitator role, though we use the term Project Manager, whose job it is to keep the team able to work.  The Project Manager is not above the team and does not manage the team but manages the project: interfacing with the sponsors, providing data, and keeping the team working without the overhead of politics.  Often this role does not require a full time person and is played either by an executive sponsor or by the business analyst.

CompanyX’s concept of a team aligns very well with the SCRUM idea of the team as a self-organizing entity that needs no external management.  Like SCRUM, CompanyX keeps teams small so that intra-team communication is fast and fluid.  The additional overhead of formal meetings or project management processes is not necessary.

Much of CompanyX’s project task organization comes from SCRUM.  CompanyX’s teams utilize the product backlog and burndown chart systems to provide straightforward, simple project planning and scheduling.  As we do not use Sprints themselves, the idea of the Sprint Backlog is missing and, as we see it, unnecessary.  CompanyX does not require schedule detail of the level generally provided by the Sprint Backlog, so this is seen as superfluous overhead that should remain out of the picture.

Unlike both XP and SCRUM, CompanyX does not generally encourage customers to become a part of the development process.  Ideas and suggestions are solicited from customers at times, but these are used to provide vision and guidance, almost never day-to-day project goals or scheduling.  We choose this model because CompanyX delivers a software product that is then purchased by unknown customers.  It is the job of the business analyst to understand the needs of the potential customer pool and to anticipate their needs better than they could themselves.

After eight years of developing our methodologies, based upon our studies of the software management and development field and upon our unique software development models and delivery mechanisms, we feel that our blend of light management, informal processes and extremely agile methodology is quite effective.  It does not impose unnecessary overhead while still allowing team members to communicate and work effectively together.

Core Technology Competencies

Due to CompanyX’s relatively small size it is important that we focus upon a core group of technology and platform competency areas in which we can specialize, develop a mature practice, and cross-train and mentor.  The core areas on which we are currently focused include Microsoft’s C# on the .NET platform, Java with Spring and Hibernate on UNIX (both Red Hat Linux and Sun Solaris) and Ruby on Rails.  C# and .NET have long been our core platform, replacing our previous VBScript/ASP platform based on Microsoft’s venerable DNA model.

C# and Java remain our most critical platforms, providing the basis for much of the code which we currently have in production as well as much of the code in our pipeline.  These two languages and library collections are also, by far, the platforms best known and best understood by our current development teams, who have used them extensively both within CompanyX and without.

In the past year, Ruby and the Rails web application framework have begun to emerge as an important additional platform choice for CompanyX.  Ruby on Rails is targeted at rapid web application development and often aligns well with highly agile methods like those used at CompanyX.  Ruby is well suited to the database-intensive but computationally light applications which we are apt to produce.  Ruby’s high level of abstraction and powerful library system are highly advantageous for building complex business applications very quickly.

For financial applications and other situations with high computational performance needs, the use of C or, less likely, C++ may be critical for small pieces of code requiring extensive tuning.  This may not be a correct assumption, however, as the Java platform is extremely performant, especially in numeric computations, and may prove to be more than adequate.  Having a C compiler tested and approved would, most likely, be important to ensure that necessary tools are ready before they are needed.  Our financial development lead is currently evaluating two compiler packages in order to find an appropriate choice for our platforms.  As we are standardized on AMD Opteron and Sun UltraSPARC processors, only GCC (AMD optimized) and the Sun C Compiler (SPARC optimized) are being considered, since both outperform the rather expensive Intel C Compiler on our chosen hardware platforms.

Future Technologies

In the future, having access to a readily available C platform may be critical as CompanyX investigates the NVidia GPU-based CUDA technologies for massively parallel floating point computations.  The CUDA libraries and drivers necessary to access the NVidia Tesla GPU hardware are currently callable only through C, which may necessitate the use of this language.

Under investigation, potentially as production development platforms but more often as learning and exploration opportunities, are a number of languages and platforms.  We believe that it is important to constantly seek out new languages, approaches, platforms and techniques so that we are prepared when new challenges arrive.  Experimenting in new languages and platforms is beneficial outside of potential production usage as well.  Having an opportunity to work in a diverse environment is more exciting for developers and also contributes to growth.

Currently we are experimenting with the use of Clojure (a LISP dialect that runs on the Java runtime), Groovy (a Ruby-like language that shares syntax with Java and also runs on the Java runtime) and Scheme (a LISP variant).  Other strong candidates for our use include Scala, Boo, J# and Haskell.  By working with new and groundbreaking languages we expand the way in which our developers think, helping them to approach problems from a unique perspective and to discover new and different solutions.

Design Tools

Standardized design tools are important in providing consistent communication and documentation from the design process to the development process.  CompanyX chooses to focus on a limited set of highly effective design tools that deliver maximum communicative power without unnecessary overhead or the creation of design documents that are unlikely to be used and may rapidly become outdated and ineffectual.

We have chosen the wiki format as our preferred mode of design document sharing.  Wikis are easy to implement and easy for all team members to access and modify.  Collaboration happens fluidly and naturally on a wiki, and built-in change control mechanisms protect against data loss even when major changes to documentation are made.  In the past we used pmWiki but found the system to be lacking in many respects.  After some searching we have, in the past year, migrated to DokuWiki, which is much better suited to corporate-style documentation needs and offers authentication options that integrate effectively into our environment: its LDAP support allows it to authenticate either against our UNIX systems or against Active Directory and Windows.

The wiki format has proven to be very effective from its initial usage at CompanyX and we have begun to use it in many areas including new hire documentation, helpdesk information, application and package instructions for desktop applications, scripts for deskside build and migration tasks, general company data, etc.  Our goal is to move as much information as the format will allow, except for financial control data, into the wiki, both because we find this system to be very effective for our small firm and because, by putting more information into a single repository, we find that usage rates increase and more employees are able to find more types of data more quickly.

At CompanyX we have chosen the Use Case as our primary form of design documentation.  Use Cases are simple for all parties to create, consume and comprehend, and they provide the most important piece of documentation needed by the development team: a clear understanding of the customer needs.  Use Cases are divided into business use cases and system use cases and are designated as such within the wiki.  Business use cases come from the business analyst and include business process modeling.  System use cases come from the system analyst / architect and involve technical, system process models.  Comprehensive Use Case documents are a long term project resource since they reflect the way in which the system will be used, or the way in which we believe that it will be used.  As such, this documentation is much less likely to become outdated than more technical documents are wont to do.  In some cases, Use Case information close to a decade old has been beneficial in passing long term project knowledge between project iterations.

Beyond the use of Use Cases, which are often mentioned in conjunction with the Unified Modeling Language (UML) but are not actually a part of the UML specification, CompanyX makes selective use of UML diagrams and models for more technical aspects of the system design process.  The graphical nature of UML diagrams makes them naturally more difficult to store, catalogue, search and retrieve.  Because no system of which we are aware is effective at dealing with a store of UML diagrams, we have chosen to use the wiki system again as a UML diagram repository alongside the Use Cases.  In some rare cases a UML Use Case Diagram might be attached directly to a Use Case, but more often a separate category of pages is created, with carefully organized diagrams arranged as images and accompanied by appropriate text that makes each page as searchable as possible.

While our use of UML is less extensive than our use of Use Cases, we find the more technical diagrams, such as class diagrams and object diagrams, to be the most effective.  Other types of diagrams would be used only rarely, when a special circumstance called for a high degree of clarification.

UML diagrams are created using the Umbrello module of the KDevelop Developer’s Suite.  This package is available on all standard CompanyX Linux desktops.  Umbrello was chosen as the best UML modeling tool for our purposes after auditioning several other tools such as ArgoUML, Kivio, StarUML and Poseidon.  For some special case needs Microsoft Visio may be required; this is available on all CompanyX standard Windows desktop builds.

In addition to traditional design tools, CompanyX, being completely non-colocated, requires special attention to the needs of team communications in a collaborative development environment.  In order to support these forms of communications, which are especially critical during design phases, we have traditionally implemented instant messaging conference rooms and advanced voice telephony functionality that extends the organizational voice system to all offices and employee homes.  These tools are key for rapid communications but address only synchronous or near-synchronous communication needs and do not address long-form discussion, discussion tracking or decision archiving.

To support these longer duration needs, we are suggesting that an online forum application be implemented that will allow teams to pose questions and ideas and invite response and discussion both from the team itself and from the community.  Many open source forum and bulletin board packages exist to provide this functionality.  We have previously auditioned phpBB, which is, most likely, the most popular package available in this space.  A newer product, Vanilla by Lussumo, is also a recommended candidate, as is Beast, a Ruby based forum.

Development Tools

Developers work day in and day out within development environments, and because of the proportion of their time spent within these tools it is very important that the tools be readily available and appropriate for the task.

Hardware

All developers should receive moderately high-performance workstations which are more than adequate for software writing.  Hardware is effectively free in comparison to developer time and effectiveness.  The IT team maintains a standard desktop build for hardware and software, designed to be efficient and powerful whilst giving developers the flexibility that they need to be effective.

Because CompanyX has a highly “at home” workforce and our desktops need to be transported regularly, it is useful to keep our machines as light and small as is reasonable.  The recommendation is to use the Hewlett-Packard Small Form Factor dx5xxx family of 64bit AMD Athlon64 and AMD Phenom based desktops.  These machines are available natively with 64bit Windows Vista to support the latest Microsoft-based development environments and are completely compatible with our Novell OpenSUSE 64bit development environment.  In some special cases, an Apple Mac OS X or Sun SunBlade Solaris desktop may be requested.

CompanyX should make a concerted effort to rapidly replace all aging 32bit systems still in the field with 64bit hardware and software, as all deployment platforms have already migrated exclusively to 64bit.  Development will be simpler if only a single architecture standard is in use.

Many companies choose to use full-fledged multi-processor workstations rather than high end desktops such as those recommended here.  Such workstations are very valuable and should be made available upon request, but most developers, we believe, are unlikely to want large pieces of equipment like these in their home offices, and they are unwieldy to transport, making them a less advantageous choice than the less expensive small form factor (SFF) desktops.

Desktops should be outfitted with very fast, multi-threaded processors and extensive memory allotments to support rapid development, responsive environments and easy, local compiling.  If any deviation from the IT department standard hardware is made, care should be taken to ensure that the hardware selected is able to support multiple monitors, as many desktop products, including units in the same lines as we use, are unable to do so without additional hardware, which adds expense, support problems, delivery time and complexity.

In addition to dedicated hardware, we have traditionally offered a remote desktop option for users working from remote locations or who simply want access to an internal, CompanyX managed Linux desktop sitting inside the corporate firewall, already configured with a large set of development tools.  To enhance future developer flexibility, it is recommended that this system be upgraded to newer hardware and software and designed to handle a number of simultaneous remote users nearly equivalent to the entire development team.  This remote desktop service should be upgraded to mirror our most current Linux standard builds, providing consistency between the desktops and the remote access services.  Also, Windows developers needing remote access to Windows development environments currently have no reasonable option to accomplish this.  The proposed solution is to add datacenter functionality that will support direct, remote access to developers’ personal workstations through a central accessibility portal.

Software

For Windows development, CompanyX has been quick to adopt the latest Microsoft VisualStudio platforms as each has become available, moving from VisualStudio 6 through VS.NET 2002, 2003, 2005 and currently 2008.  VisualStudio provides a complete set of Integrated Development Environment (IDE), compiler collection and database design tools.  For our standard development package we believe that this set of tools is the most appropriate for our chosen C# / .NET platform.

For UNIX development, CompanyX has chosen to use OpenSUSE Linux on the desktop and Red Hat Enterprise Linux on the server providing a nearly seamless set of tools.  Linux is our operating system of choice for most development languages including Java and Ruby.  Unlike on the Windows platform there is not a single, obvious source for all development needs on UNIX.  We have chosen the Eclipse IDE as our corporate standard for Java and Ruby development.  Eclipse is available on both Linux and Windows providing additional opportunities for platform independence.

On occasion, C and C++ development is needed.  For this purpose we have decided to provide two options.  The first is C support in Eclipse, which is useful for most C tasks.  The second is the Linux-specific KDevelop IDE, which is well suited to development targeting the Linux KDE desktop and the Qt Toolkit.

Continuous Sustainable Velocity

CompanyX has long valued the importance of providing an environment where developers are given the best possible opportunity to innovate, design, create and write great code.  We believe that introducing excess overhead, including most deadlines, does not lead to better code or better innovation.  We want to be sure to provide our development teams with the best possible tools and environment in which to create while doing as little as possible to hamper their desire to produce.  Taking a page from the book of academic procedures, we see all of our development as being a part of “research and development” and encourage any flexibility necessary to encourage this type of behavior.

Continuous Sustainable Velocity, or CSV, is the formalization of the aforementioned ideal.  The idea behind CSV is to create an environment and management structure that supports a maintainable maximum development velocity without exceeding fatigue thresholds.  Traditional projects use deadlines as a means of pushing teams to finish code more rapidly.  Often this works, but at the expense of slowed development after the push and a higher rate of bugs and errors.  This type of management behavior sacrifices long term gains in return for short term benefits and overall produces less code, and what code is produced is of lower quality.  This is similar to Wall Street firms forfeiting long term financial growth in return for short term stock price inflation.  CompanyX does not have to answer to investors seeking short term gains, nor does its software delivery model create a need for hard development and release cycles.  We are in a unique and enviable position to exploit these advantages.

CSV does not eliminate the potential use of iterations; that is not the intent.  The idea behind CSV is to eliminate deadlines which are set by management, marketing, sales, customers, etc.  All products are created on a “when it’s ready, it’s ready” basis.  Management works with development to determine “release criteria” which establish when certain milestones have been met.  Time estimates from developers serve as approximate completion dates, giving managers the ability to create meaningful “drop dead” lines for burndown task lists that mark when an iteration, version or milestone is complete.  This provides a scheduling mechanism, but one very different from what most companies use.  In this way some amount of scheduling can be achieved without inflicting deadlines upon the development team.  Instead, deadlines are inflicted upon management.

Because of the Software as a Service model in use within CompanyX there is no need for a formal product release cycle.  Instead software can be released at the end of every iteration.  In many cases these releases are very small and changes could go unnoticed by customers.  If an important feature or fix is ready before an iteration is complete it can be released to production immediately.

CSV is designed not to encourage overtime but to encourage flexible time.  The idea is a sustainable pace, but with the understanding that every day cannot be the same and that without variety there is no way to obtain a sustainable pace.  Employees are encouraged to work long hours when they want and to not work when they do not want.  CompanyX’s goal is to allow each employee to work in the manner in which they feel most productive.  For some this means eight hours per day, five days per week, during normal daylight hours.  For others this might mean working a constantly changing schedule or working in huge bursts of creativity followed by days of relaxing and not coding.  We recognize that people are unique and that everyone works differently.

General Development Best Practices

There are many industry accepted best practices for all types of software development.  Among these practices are code review, source control, unit testing, test driven development (TDD), continuous integration (CI) and bug tracking.  These are the primary practices which we will be targeting as most important to CompanyX at this stage and which, we believe, can be implemented and enforced quickly.

Code Review

Code review is an important part of any organization’s attempt to improve code quality at a line by line level as well as a way to create organizational integration by disseminating knowledge, technique and style between team members.  Having different team members regularly read each other’s code provides a perfect opportunity for learning, instruction and communication.  This is especially true for younger and more junior developers reading through the code of their seniors and mentors.

Code reviews have the obvious benefit of error reduction.  The idea here is not that code review will reduce syntax and compiler errors, as those will generally be caught by automated tools (though in dynamic languages such as Python or Ruby, code paths never exercised in testing may first run in production), but that having a second set of eyes on the code will allow logic mistakes to be caught or give someone an opportunity to suggest an alternate approach that may be cleaner or more efficient.  Perhaps even more importantly, if a developer knows that she is going to undergo a code review she is more likely to make the code easier to read, which naturally will also make it easier to support later.

Source Control

Source control is an absolute necessity for any development environment.  Even a team of a single developer should be using source control, as it is critical for maintaining records of changes and providing the ability to return to previous source versions.  The larger the team, however, the more critical the role played by source control.  Versioning alone is valuable, but the ability to merge changes from several developers and to manage branch and trunk development is even more important.  CompanyX has chosen to use the very standard Subversion source control system.  Subversion is open source, free and very widely used in the industry.  It has become the de facto standard over the previously popular Concurrent Versions System (CVS), which it was designed to replace.

Unit Testing and Test Driven Development

Unit Testing and Test Driven Development often go together as a pair of practices.  Unit testing, often handled through a unit testing framework such as NUnit for C#, JUnit for Java or Test::Unit for Ruby, is a style of code testing done by testing the smallest possible piece of the code infrastructure, often the method or function.  Unit tests are a very important part of any standard refactoring procedure.  By writing the original code and testing it against unit tests we have, in theory, proven that the atomic unit of the code is acting correctly.  There is still a possibility of error in both the mainline code and the unit test, though this is far less likely, and there remains the very real possibility of architectural or design mistakes, but a great number of errors can be caught and eliminated early through the use of unit tests.  Unit tests, at a very minimum, force the developer to express the same piece of behavior in two different ways (a “positive” in the form of mainline code and a “negative” in the form of the expected results), which can have the benefit of forcing the developer to think differently about the problem and may result, in some cases, in faster and more creative problem solving.
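
As an illustration, a minimal Test::Unit example in Ruby might look like the following.  The shipping_cost method and its pricing rules are hypothetical, invented purely for demonstration; they are not taken from any CompanyX project.

```ruby
require "test/unit"

# Hypothetical method under test: a flat rate of 5.00 plus 2.00 per item,
# with free shipping for orders of ten or more items.
def shipping_cost(item_count)
  return 0.0 if item_count >= 10
  5.0 + (2.0 * item_count)
end

# Each test exercises the smallest possible unit -- a single method --
# checking the normal case and the boundary case independently.
class ShippingCostTest < Test::Unit::TestCase
  def test_flat_rate_plus_per_item
    assert_equal 9.0, shipping_cost(2)
  end

  def test_free_shipping_threshold
    assert_equal 0.0, shipping_cost(10)
  end
end
```

Running the file with ruby directly is enough; the framework finds the test_ methods automatically and reports passes and failures.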

The nature of unit tests makes them highly automatable, allowing you to use them regularly, such as with each code check-in, to determine that new code changes do not break previously functioning code or to verify that refactorings of existing methods continue to work as expected.  Unit testing increases the level of confidence in the code and makes integration much simpler.

Test Driven Development uses the principles of unit testing to suggest that, instead of writing a method and then writing a unit test to verify the method’s validity, you can write a test first, which will initially fail, that looks for the expected output of the to-be-written method.  The method can then be written against the test until the test validates as successful.  The key benefit generally touted by proponents of this approach is that writing more tests, which TDD causes by its very nature, makes software development more effective.  Bug levels do not necessarily fall compared to test-behind unit testing, but productivity has been measured to be higher when tests are written first.  It is believed that this is caused by the thought processes and mental approaches to problems taken by developers when they are writing the tests.
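
To make the ordering concrete, a sketch of the test-first cycle using Ruby’s Test::Unit might look like this.  The slugify helper is hypothetical and exists only to illustrate the red-then-green sequence.

```ruby
require "test/unit"

# Step 1 ("red"): the test is written before the method exists and
# describes the expected output of the yet-to-be-written helper.
class SlugifyTest < Test::Unit::TestCase
  def test_lowercases_and_hyphenates
    assert_equal "hello-world", slugify("Hello World")
  end
end

# Step 2 ("green"): the simplest implementation that satisfies the test.
# Additional tests (punctuation, repeated spaces, etc.) would then drive
# further refinement of the method.
def slugify(title)
  title.strip.downcase.gsub(/\s+/, "-")
end
```

The first run, with only step 1 in place, fails; the developer then writes just enough code to turn the test green before moving on.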

For CompanyX, TDD may be, at least at this time, rather too extreme for us to begin to implement, but the use of unit testing across the board, on both new and old projects, is very important.  Once unit testing is implemented and accepted, experimenting with TDD can follow.

Continuous Integration

Continuous integration builds on the benefits of unit testing and source control.  The concept behind continuous integration is that each time code is checked in to the source control system, which should happen at least daily, a full, automated build process is run, creating, in theory, a completely working build of the package, and the complete unit testing suite is then run against it to verify that integrity has been retained.  Should any test or the build itself fail, the package must be corrected immediately to ensure that there is a working codebase for work to continue the next day.
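
The check-in cycle described above can be sketched in a few lines of Ruby.  The step names and the order of operations mirror the description; the actual commands (checkout, build, test run) are stubbed out as placeholders rather than real build scripts.

```ruby
# Each check-in triggers an ordered series of steps.  Any failure halts
# the pipeline immediately so the codebase can be corrected before
# further work continues.
def run_pipeline(steps)
  steps.each do |name, step|
    return "FAILED at #{name}: fix immediately" unless step.call
  end
  "build green"
end

pipeline = [
  ["checkout",   -> { true }],  # e.g. update the working copy from Subversion
  ["build",      -> { true }],  # full automated build of the package
  ["unit tests", -> { true }],  # complete unit testing suite
]

puts run_pipeline(pipeline)  # => build green
```

A real CI server such as Bitten does exactly this orchestration, triggered by the repository rather than run by hand.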

CompanyX has not yet utilized continuous integration in our practices, but this practice should be rolled out in conjunction with comprehensive unit testing frameworks.  Continuous integration most definitely benefits from having unit tests already in place, but we are able to begin before the unit tests are complete, allowing momentum to build.  Popular continuous integration packages include CruiseControl, Cruise and Bitten.  We are assuming the use of Bitten as it is integrated with Trac, which we will discuss below.

Bug Tracking

Bug and feature tracking involves a centralized error management system that provides a single location for recording bug occurrences, status, priority, etc.  Bug reports may be produced internally by developers or testers, or externally by clients.  By having a central location for bug data, managers are given the opportunity to perform triage on the bugs and can provide developers with a clearly prioritized “to do” list.  This is important in keeping projects moving at maximum velocity as well as in minimizing impact to clients when bugs appear in production systems.
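
The triage workflow just described can be sketched as follows.  The fields and priority scale here are illustrative only and are not Trac’s actual ticket schema.

```ruby
# A bug report carries the data triage needs: a status and a priority.
Bug = Struct.new(:id, :summary, :priority, :status)

# The prioritized "to do" list handed to developers: open bugs only,
# most urgent (lowest priority number) first.
def todo_list(bugs)
  bugs.select { |b| b.status == :open }.sort_by(&:priority)
end

bugs = [
  Bug.new(101, "Report totals off by one", 2, :open),
  Bug.new(102, "Typo on login page",       5, :open),
  Bug.new(103, "Crash when saving record", 1, :open),
  Bug.new(104, "Slow dashboard query",     3, :closed),
]

todo_list(bugs).each { |b| puts "##{b.id} (P#{b.priority}) #{b.summary}" }
```

A tracker like Trac stores exactly this kind of record and lets managers adjust the priority field during triage, which reorders the developers’ list automatically.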

Bug tracking systems are important for any number of reasons arising from the need to know where bugs are, when they have been fixed, by whom and how.  CompanyX has, in the past, auditioned products like Bugzilla and FogBugz for this very purpose but was very disappointed in the ability of these products to meet our rather simple needs.  We have decided to use the Trac product from Edgewall Software.  Trac is a popular bug tracking system that includes an integrated continuous integration feature, Bitten, which can be implemented alongside the tracking product.  Trac is already designed to integrate with a Subversion repository, making it a very useful choice for CompanyX.

Project Selection and Prioritization

As CompanyX is a small firm with extremely limited development resources, it is very important that we choose projects carefully and prioritize them as well as possible in order to ensure that projects with the greatest potential for success are chosen over those that are less likely to have large returns, have a very high cost to implement or carry a high risk of failure.

CompanyX’s model of software as a service means that our attention is always drawn, first and foremost, to maintaining the quality and integrity of our running applications.  Unlike traditional software companies, for whom software in the field seldom represents a financial loss when new bugs are found and may actually realize a financial gain through support contracts, CompanyX has a vested interest in keeping running applications running as smoothly, efficiently and effectively as possible.  Unpatched, known bugs are very evident to our customers and may represent potential security or performance flaws that will impact our own infrastructure and support costs.  Running applications always take full precedence over any forward development project.

New, greenfield projects are started infrequently within CompanyX, and choosing an appropriate project is an important decision that affects the company as a whole.  For a small firm like CompanyX, which is profitable and maintains a relatively even level of developer and engineer resources versus workload in the pipeline, possibly the most important selection criterion for a new project is availability of resources.

In most cases at CompanyX, developer and engineering resources are not interchangeable commodities.  Developers, and especially analysts, are likely to be domain experts in a business area, and projects are often selected based on the availability of these skill sets.  If new developer or analytical resources suddenly become available, a new project may be initiated with very little planning in order to utilize those resources as quickly as possible.  In this way we are very opportunistic in our project selection process.  Cost / benefit and other factors are generally subsidiary to resource availability.  This is not only because of the importance of resource availability to us but also because resource availability is an easy factor to determine, while cost, benefit, MOV, ROI, etc. are extremely subjective and prone to very high rates of miscalculation and misjudgment.  Given the availability of a straightforward deciding factor, we believe that we will obtain the best results by leaning heavily toward weighting the known rather than the unknown.

CompanyX is very small and project prioritization is not a significant factor.  Projects are generally started when resources become available.  There are, of course, a number of projects that are internal or infrastructure related and can utilize any number of different resources, and these do require prioritization, but, by and large, project prioritization is handled naturally by the sudden availability of a market opportunity or of resources which are a good match for a particular project.

Prioritization of these projects is handled by executive management and is usually determined by expected ROI.  Internal projects tend to be of a moderately similar size and complexity so valuations are relatively straightforward and manpower reduction and competitive advantage are the primary ROI determinations.

Team and Project Organization

CompanyX, being a small firm, needs very little and relatively informal team and project organization.  Each organizational unit, being relatively autonomous, generally has a single development team assigned to it.  This segregation is intended to create an environment where each team feels empowered to self-organize as needed and as is appropriate for the nature of its project.  This lack of central coordination also provides a more natural freedom for teams to choose architectures and platforms that best fit their needs rather than blindly following trends and directions forged by older teams and successful projects.

Steve Yegge of Google has stated that Google’s wealth of great platforms can be a real boon for internal teams looking for resources but that it also squashes innovation by pressuring teams to implement systems with large initial overhead and to square-peg technologies onto Google’s existing platforms (e.g. BigTable) whether or not they are appropriate or advantageous, just to justify the former investment in the technology.  This justification is clearly one that exists only on paper, because if a platform must be used even when it hurts the project then it has actually introduced a negative ROI rather than a positive one.

Because CompanyX does not size teams based on a need to meet a schedule or deadline, teams are chosen based on team member interaction, appropriateness for the team, technical needs, etc.  This team sizing method is designed around obtaining the maximum performance from each developer rather than picking an arbitrary speed for development and sizing a team until it is able to obtain that speed.  This sometimes costs us the ability to get to market as rapidly as possible in favor of overall per-person productivity.  The impact is not as dramatic as it may seem, however, as innovative, high-performance teams are able to move quickly and keep pace with much larger teams that are built without regard for team dynamic, communications, style and compatibility.

Typically within CompanyX teams range from two to five people.  We have found that smaller, more closely knit teams tend to obtain greater productivity per developer and that ideally teams should be in the two to three person range when possible.

Teams are only lightly managed, with self-organization as a high priority.  Obviously some management and direction is needed beyond pure self-organization.  Direction is generally established by the business analyst assigned to each business area.  The analyst acts much like a manager, being involved in deciding strategy, task priority, direction, etc.  The business analyst is not a personnel manager, however, and no staff report to them.

Management is handled directly by the Director of Information Services.  This is the only dedicated management role in the CompanyX IT organization.  All IT staff report up to the Dir. of IS.  CompanyX is very fortunate to have the ability to avoid the bulk of the politics that occur in most organizations simply because of our size.

Virtual staff is important at CompanyX, as it is in almost any organization with more than a single team.  Because CompanyX is small, though, virtual staff remains a relatively minor consideration.  Technology expertise may be shared across teams, but this tends to be quite casual.  The CompanyX wiki includes information such as the skill sets and areas of interest of employees, so searching for skill areas is possible.

At CompanyX, it is the goal of our managers to make it possible for employees to work.  The idea is not that we have hired employees who need to be managed but simply that they need direction from time to time and, more often, need someone who buffers them from the non-development needs of the organization.  It is the role of the business analyst and manager, in this case the Director of IS, to eliminate distractions, unnecessary paperwork and other busywork and to provide a politics-free environment as conducive to working as possible.  CompanyX’s policy is that the manager is less a controller of the team than a resource for the team.  We believe that if we hire the right staff and provide them the ability to do great work, they will do great work, so it is very important to keep out of their way and not stop them from being productive.

Project Management in the Continuous Sustainable Velocity Organization

In a traditional software development organization, project management is a rather heavyweight process which is heavily involved in the day to day work of the development teams as well as involved in the production of a large number of project artifacts stemming, generally, from a need to report project progress, status, details, etc. to upper management.  CompanyX does not operate with a need to produce project artifacts for upper management.

CompanyX management has the ability, and the responsibility, to pull project status by checking the project backlog and burndown lists.  In this way status can be monitored without needing the team to report details back to management.  The monitoring is passive, not active.  At no point should the team stop doing actual, profitable work in order to produce paperwork for internal consumption.  Not only does this take time away from the development team and slow progress, but it would also require management to spend their time reading status reports rather than being productive themselves.
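As a minimal sketch of the passive monitoring described above (the backlog items, estimates and dates here are entirely hypothetical), a burndown figure can be derived from the backlog alone, with no input from the team:

```python
from datetime import date

# Hypothetical backlog: (item name, estimated hours, completion date or None).
backlog = [
    ("login form", 8, date(2009, 3, 2)),
    ("report export", 16, None),
    ("audit trail", 12, None),
]

def remaining_hours(items):
    """Hours of work still open right now -- read passively from the backlog."""
    return sum(hours for _, hours, done in items if done is None)

def burndown_point(items, as_of):
    """Hours that were still open on a given date -- one point on the burndown chart."""
    return sum(h for _, h, done in items if done is None or done > as_of)

print(remaining_hours(backlog))                    # 28 hours currently open
print(burndown_point(backlog, date(2009, 3, 1)))   # 36: all items were open then
```

Anyone in the management chain can run such a report against the shared backlog at any time, which is what makes the monitoring passive rather than active.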

Because we do not market products before they are released and do not have deadlines to meet in delivering products to our customers, the needs of our organization are much simpler than those of more traditional software firms.  This gives us an additional advantage in that we can spend more time innovating and producing software at lower cost, making it easy for us to outpace our competitors to market.

Some pieces of the traditional project manager role are played by the business analyst.  Important decisions such as when a feature is complete or what level of functionality is acceptable are determined by the business analyst.

The PMBOK specifies many tasks which are a part of the project manager’s role but a large majority of these are simply assumed by the creators of the PMBOK and are not actually necessary.  At CompanyX, our model is continuous development so we do not “time box” development work into a project that has a definite end date.  We cannot budget by release because we do not know when a release will be completed or what it will contain.

Instead of traditional budgetary means, CompanyX establishes an ongoing development budget that is defined in a cost/time manner, such as $65,000/month.  This cost includes the salaries of the project team for that duration and an estimate of the month’s cost of electricity, HVAC, Internet access, software, hardware, networking, telephones, chairs, desks, etc.  Using this number, which is extremely accurate and varies in predictable ways, project budgetary planning can be carried out based upon the need for the project to continuously outperform its cost by way of its revenue.
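The arithmetic behind this cost/time budgeting is deliberately simple; the following sketch uses invented figures that happen to sum to the $65,000/month example:

```python
# Hypothetical monthly figures for one project team.
monthly_salaries = 50_000   # project team salaries for the month
monthly_overhead = 15_000   # electricity, HVAC, Internet, hardware, desks, etc.

# The ongoing run rate -- the $65,000/month style figure from the text.
monthly_burn = monthly_salaries + monthly_overhead

def is_sustainable(monthly_revenue):
    """The project must continuously outperform its cost by way of its revenue."""
    return monthly_revenue > monthly_burn

print(monthly_burn)             # 65000
print(is_sustainable(80_000))   # True
print(is_sustainable(60_000))   # False
```

Because every input is a recurring cost that is hard to fudge, the only real planning question is whether the organization can carry the run rate, not what the project will cost in total.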

Instead of determining a project’s total cost up front we determine the ongoing cost of a project and determine if we are prepared to support the ongoing cost of development and support.  This still requires budgeting and financial planning but the manner in which it is done is far more straightforward and intuitive.  It is also far more honest than planning done in many organizations as the ongoing support cost is both difficult to fudge (if resources are assigned, cost is assigned) and total project scope is not a part of the picture. Scope creep, which is controlled by the business analyst, will affect development duration but will not impact the budgetary process.

There is no cause to make early determinations as to scope before the project has begun.  In order to determine a project budget, a normal firm will need to decide upon scope, features, architecture, etc. before the project budget has been approved and, therefore, before the project itself has begun.  CompanyX avoids the costly and error-prone process of making scope decisions when far too little is known about the project and, instead, determines a desired velocity and/or a budgetary need that can be withstood, allowing the business analyst, along with the development team, to determine feature priority and to decide when the product has enough features completed to allow it to move from pure development into production.

Hiring Practices

One of the most important endeavors, if not the most important, undertaken by the staff at CompanyX, or arguably any company, is the accumulation of staff.  Because CompanyX focuses less on processes, procedures and documentation and more on team effectiveness, innovation and employee satisfaction, we are far more affected by the quality of the staff that we hire.  This greater employee dynamic can be seen as a benefit or a deficit.  If we hire great staff then we empower them to be creative and productive.  If we hire poor staff we empower them to do nothing.

CompanyX has always been and must remain very aggressive in its stance on hiring and retaining the best IT talent in all IT areas including engineering, development, administration, architecture, support, etc.  We have found that by the very nature of our management practices and our use of continuous sustainable velocity we have naturally managed to retain staff once they have been acquired.  Most ambitious and talented IT professionals, especially developers and designers, are thrilled to find an environment so dedicated to respecting their professionalism, one that provides a framework for their own learning, growth and production, peers that are passionate about their field and the projects on which they work, and a lack of red tape separating them from the work that they wish to do.

Far more difficult than retaining great talent is locating and identifying that talent in the first place.  At CompanyX it is very important that we find not only great skill and experience but also great drive and passion.  CompanyX is a company focused on innovation and to achieve this we require staff that will constantly seek out and discover new avenues for exploration.

Our first means of seeking out great candidates is simply not to seek candidates.  CompanyX has always hired exclusively through people seeking out an opportunity to work with CompanyX and through word of mouth.  This drastically reduces the chances of hiring someone who is desperate for work rather than seeking a wonderful development environment.  We do not want staff who are simply happy to have a job which will pay the bills; we want staff who are actually excited about the work that they get to do and who would be very unlikely to leave the company because almost no other company offers the exciting, driven environment that CompanyX offers.

When in the hiring and interviewing process for new candidates, it is very important that we do not stress specific technical skills.  Any candidate may happen to have the select set of highly technical skills for which we test, which proves nothing.  Technical testing also requires us to test a candidate against a skill set for which we assume knowledge will be needed for the job.  This is a short-sighted methodology based around an immediate need and not the long-term growth of the company.  No technology is so complex that we must require that a candidate bring that existing knowledge with her.

Far more important than a candidate possessing an immediate technical skill is that the candidate has a set of technical skills – any set.  What CompanyX actually cares about is a candidate's ability to grow and adapt.  The technical landscape is constantly changing and the skills useful today will not be useful tomorrow.  What we are looking for is a candidate who will embrace tomorrow’s skills and learn them easily.  Otherwise an investment in an employee is an investment in stagnation.  If a candidate earns a job because of a single skill set then that candidate is also more likely to leave later, when we move forward, to look for “comfortable” work at a firm using legacy technologies.

Even within a single technology for which we would choose to interview, Java for example, there are a range of specific technical skills necessary to be useful on day one and expecting any particular candidate to use the exact same set of technologies and approaches which we are currently using would limit the potential candidate pool dramatically while likely eliminating candidates who could pick up the skill in a weekend and outperform most candidates in a week or two.

For example, within Java candidates may be used to platforms such as WebLogic, WebSphere, JRun, JBoss, Spring, Tomcat or WebObjects.  They may work with different databases like Sybase, SQL Server, MySQL, PostgreSQL, Firebird, SQLite, BDB, Oracle, Informix, DB2, Derby, etc., and with ORM packages like Hibernate, iBATIS and others.  They may use EJB 2, EJB 3 or POJO approaches.  They may run on Windows, Mac OS X or UNIX.  Perhaps they use Swing to make desktop applications or only write component services.  The variation is far too wide to expect that the right fit for CompanyX has been working with the same technology stack as us.  If we do find a candidate working with the exact same stack then we risk that that candidate won’t be bringing in fresh approaches and ideas, which is some of the greatest value available from new candidates.

Along with technical prowess we also are very concerned with finding candidates who are extremely passionate about technology and are driven not just from a career perspective but also from a desire to excel, to create, to grow and learn and to make, manage or design amazing software products.  A candidate with drive but little experience is more valuable than one with much experience but that is not concerned with future personal growth.  Working at CompanyX is a lifestyle of technical excellence.

Along with the standard hiring process we must also consider the opportunities that lie in an intern or co-op program.  Interns may come from any background including current technical IT work, academia, high school, etc.  In an intern we look primarily for drive.  Drive is most recognizable in intern candidates of any age who are willing to work on their own projects, at home, in their own time, and who have been doing so long before becoming interested in working with CompanyX.  This same drive is sought after in any candidate, not just one at the intern level, but unlike more senior candidates who have an opportunity to differentiate themselves through career experience, skill set areas and previous projects, interns are viewed almost exclusively through their “at home” experiences.

Recognizing two important facts about academic backgrounds is also important in picking candidates.  The first is that many of the best candidates will not have attended college or university because they were too busy already being in the field.  The second is that if we desire to add academic experience to a candidate we can easily do that after they are hired; there is no reason to require it before the hiring process begins.  A third issue also exists: the majority of collegiate IT and CS programs are both far too easy and often avoid core topics that would be expected, lessening the potential value of judging a candidate by their academic background even when it appears to be pertinent.  Because of these facts it is CompanyX’s policy to disregard academic experience except in cases where the candidate has undertaken the process of obtaining a degree after having entered, and while remaining active in, the field.  Using university studies as an excuse for absenteeism from the field is not counted as positive experience time but as a resume gap.

Outsourcing is an important topic for companies like CompanyX.  Because we have no centralized, physical location we are well suited to both outsourcing and offshoring.  We will not, as a company, consider outsourcing labor to another software firm, both because we believe that other companies are highly unlikely to be as competitive as we are, leading only to higher costs and lower output, and because our value is in creating software and managing IT solutions ourselves.  It is these key tenets on which CompanyX builds its reputation and creates its value.

Using labor external to the United States, however, as employees is another matter.  CompanyX is officially nationality and locale agnostic.  We have no reason not to hire from international locations.  Even timezone issues are of little importance as our current staff regularly works floating hours and often works weekends or at night.

Quality Assurance and Testing

At this time, CompanyX does not utilize a dedicated QA and/or testing team for our software.  All of our testing is done by the developers and analysts using informal methods.  This procedure is far from ideal and leaves a lot of room for critical improvement.  This is an area on which CompanyX should focus heavily, developing a team and process for standardized testing.

The first step in preparing for formal testing is the creation of formal, separate testing and production environments.  At this time CompanyX has development environments running on desktops throughout the company and all testing is done directly at that level.  We are currently in the process of building out a dedicated testing datacenter which will be used to run pre-production code in a live setting for serious integration and testing work.  We currently have a Solaris testing environment built, and a Windows 2008 environment is already in progress.

After we have a dedicated testing environment, our next step is to begin building a dedicated testing team.  We will need professional testers with testing experience and knowledge who can help us build a testing practice in the same manner in which we have built a development practice.  This includes implementing both white box and black box testing, test automation beyond unit testing, interface testing, etc.
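As a minimal sketch of the kind of automated, black-box testing being described (the function under test and its behavior are invented for illustration), tests assert only on observable behavior, never on internals:

```python
def normalize_id(raw):
    """Hypothetical function under test: canonicalize a customer ID."""
    return raw.strip().upper()

# Black-box tests: each checks an externally visible contract of normalize_id.
def test_strips_whitespace():
    assert normalize_id("  ab12 ") == "AB12"

def test_uppercases():
    assert normalize_id("ab12") == "AB12"

# A trivial runner; in practice a test framework would collect and run these.
for test in (test_strips_whitespace, test_uppercases):
    test()
    print(test.__name__, "passed")
```

A dedicated testing team would grow suites like this into interface tests and integration tests run automatically against the dedicated testing environment.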

We have been fortunate that our small, tightly integrated user base has thus far been forgiving of bugs, as almost all of our users clamor to be involved in early testing, but this will change.  Our development methodology tends to introduce a low number of bugs, but we should be striving to lower that number as far as possible.  As our products are used by a larger and larger set of clients this will become more and more important.

Developer Mentoring

Mentoring is an important element in any professional development environment.  Mentoring adds organizational value for junior staff members as it gives them an opportunity to learn and grow with direct access to senior staff members possessing real-world experience, a value unobtainable through other avenues.

Mentoring is also important because it provides an opportunity for knowledge to flow back through the organization.  There are few mechanisms through which experience, both personal and project or technical, gets disseminated in a meaningful way through an organization, and one of the most powerful means of doing so is mentoring.

The process of mentoring also helps to build stronger inter-personal ties within the company.  People on different teams or in different technical areas can use mentoring relationships to build networking ties and to become more acquainted with the organization and its resources.  Mentoring is significant in cementing corporate culture when new staff come on board.

Project Lifecycle Issues under Continuous Sustainable Velocity and Software as a Service Models

In a CSV and SaaS organization we face particular challenges less common in more traditional software companies in determining exactly what constitutes a project lifecycle and how that relates to its team and team members.  When software is delivered as a service it tends to take on a wholly different type of life than does software that is boxed and shipped.

Traditional box and ship software, or other types of software typically delivered in a “versioned” way, has a natural project lifecycle associated with each new version of the software.  Often teams will be known by their version such as the Word 2003 and Word 2007 teams at Microsoft.  This allows for projects to have a definite initiation point and a clear, or mostly clear, finishing point.  Some amount of ongoing support and patching generally continues but is minor in contrast to the project itself.

When working with Software as a Service and especially when doing so without the benefit of regular iterations there is no clear division between software development and its ongoing support.  For example, ProductY, our flagship product, ran as a project for approximately eight years.  During this time it went through initial development, ongoing support, new features, performance enhancements, stability improvements, underlying platform migration and more.

After eight years the ProductY codebase was replaced by the ProductY 2 codebase.  This new codebase we expect to go through a similar lifecycle but lasting approximately twelve years.  We anticipate the retirement of this codebase by 2020.  In this case, the team responsible for both codebases is the same.  Not one person changed between the original ProductY and ProductY 2.  From the perspective of the team, ProductY 2 is simply a natural continuation of the original ProductY and part of the lifecycle involved the migration of clients from one system to the other.

When applications go through lifecycles of this nature, and on a timescale of this magnitude, it is exceptionally difficult to differentiate between different phases of the project.  Is initial development until beta one project and ongoing support another?  Sometimes, though, there is far more code developed after beta than before.  Are the two ProductYs separate projects?  Most likely they are; that is the most appropriate project divider that we have identified.  Still, this is a very long time for the concept of a project to survive.

Because of these issues, CompanyX is faced with almost no ability to identify teams by project at all.  This is difficult, as the thought processes of developers, and humans in general, invariably bring them back to the project.  At this time we have not determined the correct answer to this problem, but we do have an approach that is, thus far, working while still being a work in progress.

Instead of organizing by project, we have been organizing our teams by application or feature that is a long-term, externally identifiable entity.  Therefore, to us, ProductY and ProductY 2 are the same entity as they are a single product to our customers.  A separate hazardous waste product, while similar and sharing the business unit, would be a separate identifiable entity and would be a separate “project” internally to CompanyX.  This approach, at this time, continues to function but also continues to evolve.

The overall lifecycle of one of these “projects” is very fluid.  Project initiation is very similar to the traditional project approach used by any software firm.  Initial requirements gathering and basic design is done and then code development commences.  Once a potentially useful level of development is deemed complete by the business analyst and testing is complete, the application is released into production.

Once an application is in production there is no guarantee that it will be used by customers.  At this point we generate a basic sales and marketing framework such as a web site, support contacts, etc.  The appropriate marketing manager determines if the application is marketable in its current state.  If so, then marketing begins attempting to garner interest in the product.  If not, application development continues as before, attempting to complete more features and increase functionality.  The development process is not driven by marketing except in rare circumstances, nor is marketing driven by development.  Customers seeking our products always have a means to find them, even when marketing does not feel that the product is feature-complete enough to justify marketing expenditures, but new customers are unlikely to be located until marketing sees the product as more mature.

Regardless of the involvement of the marketing team, development continues on in the same manner.  The purpose of the initial lean release is not necessarily to get early adopters but to give customers who are anxious to use our products every opportunity to do so, to get earlier real-world feedback and to put the stake in the ground as to the launch of the codebase.

Throughout the lifecycle of the code, development continues.  In some cases development may slow as the product ages and becomes more mature.  There is definitely a need for more development while ramping up a product than in ongoing support.  However, unlike 37signals, who run a similar model, we are not targeting lean software as our primary market but extremely powerful and robust software with strong business modeling potential, and this gives us more opportunity to mold our products long after their initial release into the market.

In addition to feature growth, old code bases will often see refactoring, bug fixes, performance enhancements, library or platform upgrades, interface updates, etc. which supply a large amount of continuing development work long after initial design and release pieces have been completed.  At some point, as we saw with the ProductY product line, a new codebase will likely be generated and at that point development begins anew with a blend of fresh approaches and new designs along with new technologies while drawing from former knowledge and experience.

In the case of ProductY, the initial system was a web-based Microsoft DNA architecture application running on VBScript/ASP in the Windows NT4 and SQL Server 7 era.  It relied upon Internet Explorer 4 as its client-side platform and used a small VB6 client component to handle hardware integration.  ProductY 2, while providing identical functionality to the customer, does so using C#/ASP.NET on Windows 2003/2008 and MySQL with a single, non-web-based client-side C# application that handles user interface and hardware integration concerns along with local caching in a client-side database.  The two architectures and approaches are very different but the functionality is the same.

Packaging and Deployment

CompanyX has fairly unusual packaging and deployment needs for its software.  As all of CompanyX’s software is deployed internally to our own platforms and our own systems, we have an advantage over most firms in that we have little need for broad platform compatibility testing, nor do we necessarily need to package our applications for easy deployment.

It is tempting, given the simplicity with which we can have the development teams perform their own deployments through a manual process, to completely forego the entire process of performing application packaging.  This makes sense in many ways.  It does, however, present a few problems.

The first problem that a lack of packaging creates is over-dependence upon the development teams.  The developers themselves should not be required to be directly involved in the release of a new package, at least not in the physical deployment of the package.  Having a package created allows the system administrators to perform application installations easily and uniformly.  This is not necessarily advantageous on a day to day basis, but when there is an emergency and a server must be rebuilt quickly, having the applications ready, in different versions, for the admins to apply to the environment without developer intervention is very important.

The second problem that arises without a packaging process is that software tends to become extremely dependent upon a unique build and deployment environment.  Changes to the environment made from a system administration and engineering perspective may inadvertently break deployed applications.

In a Red Hat Linux environment, for example, a custom-deployed application may have a library dependency unknown to the system administrator.  That library may be updated, moved or removed, breaking the software quite accidentally.  If that same software were properly packaged as an RPM (Red Hat Package Manager) package then the RPM system itself would check dependencies system-wide and block the removal of, or changes to, other packages needed by the application software.  When installing the application software, a tool such as YUM (Yellowdog Updater, Modified) would also be able to satisfy dependencies automatically rather than sending the system administrator off to find packages supplying the necessary libraries.
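As a hedged illustration of how such a dependency is declared (the package name, version and library here are invented), a minimal RPM spec fragment might look like this:

```
Name:     companyx-billing
Version:  1.2
Release:  1
Summary:  Hypothetical internal CompanyX service
License:  Proprietary
# RPM will refuse to remove or replace the package providing this library
# while companyx-billing remains installed.
Requires: libexample >= 2.0

%description
Illustrative spec fragment only; yum resolves the Requires line
automatically at install time.
```

Installing with yum would then pull in the library package automatically, while later attempts to remove that library would be blocked by the system-wide dependency check.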

For a company like CompanyX, our packaging needs are very simple.  Packaging formats such as RPM for Linux (Red Hat and openSUSE) and the datastream format for Solaris (aka SVR4 packages) provide important failsafe mechanisms that make installation fast and simple while protecting the software from inadvertent damage.  They also make version control and system management faster and easier.  More advanced packaging, such as graphical installers, would be overkill in this environment and would, in actuality, make the deployment process more cumbersome considering that all potential deployers of the software are skilled IT professionals, not end-users or customers.

Management Reporting

Earlier I spoke about the availability of project backlogs and burndown lists as important tools for management to keep an eye on the status of projects without needing to interfere with the project team itself or to send agents to collect data on the team.  In many cases this is not enough information for management or for the rest of the company.

Currently CompanyX works on the basis of the basic backlog and burndown lists along with word of mouth, wiki posts, email distribution lists and a bidaily status and design call in the late evenings.  These tools, however, are not scalable and do not provide adequate reporting to parties outside of the direct management chain.  More information is needed but it is critically important to keep management from becoming directly involved and interfering with the teams and the projects.

The first status report which should be created for management is the build report from the continuous integration system, mentioned above.  Management should get a report, generated every time the system runs, that details the status of tests passing and failing.  This allows detailed progress information to be noted without requiring any additional input from the team itself.  This is a good utilization of best-practice processes that are already being put into place.
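A hedged sketch of such a report generator, assuming the continuous integration system emits JUnit-style XML result files (the attribute names follow that common convention; the sample data is invented):

```python
import xml.etree.ElementTree as ET

def summarize(junit_xml: str) -> str:
    """Turn one JUnit-style result document into a one-line status for management."""
    suite = ET.fromstring(junit_xml)
    total = int(suite.get("tests", 0))
    failed = int(suite.get("failures", 0)) + int(suite.get("errors", 0))
    return f"{total - failed}/{total} tests passing"

# Invented sample output from one CI run.
sample = '<testsuite tests="42" failures="3" errors="1"></testsuite>'
print(summarize(sample))  # 38/42 tests passing
```

Mailed or posted after every build, a summary like this gives management detailed progress visibility with zero effort from the development team.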

The second important reporting tool, important for more than just management but for the entire organization, is developer, analyst and team blogs.  Having a project/team blog that is updated by the business analyst and/or development lead at least once a week gives insight into the projects for other teams, interested parties and, of course, management.  By providing all employees with their own internal blogs there is a system for personal status reporting and organizational information gathering.  Tools that may work well here include SharePoint, Alfresco, WordPress and Drupal.  Anyone in the organization would have the ability to subscribe to an RSS feed or to visit a website to find the latest information from any person or project.

In addition to traditional blogs, which are great for technical posts and project status updates, the more recent concept of the microblog, made popular by Twitter, may also be beneficial to the organization.  Microblog posts are generally limited to 140 characters per post, just enough for a sentence or two.  This format could be useful in allowing developers to post their thoughts, locations, status, needs, etc. throughout the day.  CompanyX has even written our own client for making these types of posts easily from the command line while working on UNIX systems, which is extremely handy for developers wanting to effortlessly make quick statements to the company.  For internal use a microblogging server platform such as Laconi.ca would be appropriate, rather than a public service like Twitter or Identi.ca.
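A minimal sketch of such a command-line client, assuming a Twitter-compatible statuses/update endpoint of the kind Laconi.ca exposed (the server URL is hypothetical, and the sketch only builds the request rather than sending it):

```python
import urllib.parse
import urllib.request

# Hypothetical internal microblog server with a Twitter-compatible API.
SERVER = "https://status.example.internal/api"

def build_post(status: str) -> urllib.request.Request:
    """Build (but do not send) a status update request for the internal server."""
    if len(status) > 140:
        raise ValueError("status exceeds 140 characters")
    data = urllib.parse.urlencode({"status": status}).encode()
    return urllib.request.Request(f"{SERVER}/statuses/update.xml", data=data)

req = build_post("Stumped on a C# RegExp.")
print(req.full_url)   # https://status.example.internal/api/statuses/update.xml
print(req.data)       # b'status=Stumped+on+a+C%23+RegExp.'
```

A real client would add authentication and call `urllib.request.urlopen(req)` to send the post; wrapping this in a one-line shell command is what makes posting effortless during UNIX work.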

As an example of how microblogging could impact development, a developer might post “Stumped on a C# RegExp.”  Someone with extensive C# regular expression experience might see the post and instant message that developer offering some assistance.

Internal Training and the Development of Practice

The idea of Practice within the IT organization is a critical one.  By Practice we are referring to the idea of approaching a discipline as a true professional or, perhaps even more appropriately, as an amateur in the classical sense.  The idea of Practice is, unfortunately, often a nebulous one, but it is key to the CompanyX approach to software development as a whole.

Practice refers to the holistic approach to the technical discipline whether that discipline is Java programming, Linux system administration, Windows desktop management, project management, business analysis or software architecture.  This means that we must look at the individual technical area as an area in which we can learn or develop techniques, strategies, best practices, etc.

Instead of approaching each technical area as just another tool in the tool belt, we look to encourage serious study of the technical areas which may include reading books, studying in a collegiate setting, joining user groups, attending vendor seminars and events, becoming active in online communities, writing and publishing papers, independent research and more.

Internally we support general educational opportunities through support of collegiate academic programs, unlimited book budgets and the Borders Corporate Discount program, internal training, mentoring programs and organized trips to vendor events, seminars and support groups.   Most importantly, we believe, it is important that we promote a culture of education and not merely offer opportunities but make learning an important part of everyday life within the organization.  We may also want to pursue projects such as having a reading group which meets to discuss readings in important industry publications or discusses seminar talks and panel discussions.

Software Development Best Practices and Standards

Even in an organization such as ours it is important to retain practices and standards throughout the organization.  Standards apply in many areas.  Applying standards, when possible, is an important factor in making code readable and rapidly adaptable when working with multiple developers.

Two areas in which coding standards need to be applied, across the board, are in naming conventions and code style standards. Both of these areas fall outside of technical areas of expertise and contribute to communications.

For a firm the size of CompanyX, developing internal style and convention guides would be extraneous overhead and unlikely to benefit the environment.  Instead, we have decided to standardize upon the widely accepted Cambridge style guides.  These are well-known standards used not only in business but often in academia, and they are widely used in professional publications.

Using these standards assures us that potential candidates are, at the very least, familiar with the standards even if they have not used them directly, will recognize the patterns easily and will write in a common style for the industry when publishing as well.  By using such common styles CompanyX can leverage the benefits of style and standard without unnecessary overhead and expense.

Style standards apply not only to the “beautification” of code, including things like the appropriate use of white space, placement of braces and use of multiple elements within a single line, but also to naming conventions for variables, methods, classes, objects, etc.  Through the proper and standard use of such styles, programs become more readable, portable, extendable and maintainable.  The ultimate purpose is, we believe, to support the concept of Literate Programming as best stated by the legendary Donald Knuth:

“I believe that the time is ripe for significantly better documentation of programs, and that we can best achieve this by considering programs to be works of literature. Hence, my title: ‘Literate Programming.’

Let us change our traditional attitude to the construction of programs: Instead of imagining that our main task is to instruct a computer what to do, let us concentrate rather on explaining to human beings what we want a computer to do.

The practitioner of literate programming can be regarded as an essayist, whose main concern is with exposition and excellence of style. Such an author, with thesaurus in hand, chooses the names of variables carefully and explains what each variable means. He or she strives for a program that is comprehensible because its concepts have been introduced in an order that is best for human understanding, using a mixture of formal and informal methods that reinforce each other.”

Process Management Standards

Many software firms use process management standards such as the Capability Maturity Model Integration and ISO standards to control, improve or monitor processes.  No formal process improvement program or strategy is currently in use within CompanyX.

Capability Maturity Model Integration

The Software Engineering Institute at Carnegie Mellon developed the Capability Maturity Model Integration (CMMI) with the idea that the model would be used by organizations seeking to build a framework of continuous process improvement.  The CMMI is a set of guidelines covering what processes should be implemented but not how, leaving the actual implementation up to the individual organization.  This is good in that it allows a great deal of flexibility in how the system is applied within a particular company.

The ideas behind CMMI are almost exclusively targeted at very large, non-IT organizations.  These ideas are not necessarily bad, but they assume a large amount of process overhead that is not present in many companies.  Introducing process where it is not needed, just for the sake of managing and improving said process, is unlikely to be an effective use of available resources.  Much of CompanyX’s management approach has traditionally focused on reducing management and process overhead in order to make the company more efficient, agile and responsive while reducing costs.  The introduction of a process such as CMMI could easily double operational expenditures even without pursuing official certification.  It is also likely to reduce agility and increase ongoing development cost by reducing development team efficiency.

Lean

Lean Software Development is an application of the principles of Lean Manufacturing to the software development process.  Lean is simply an attempt to eliminate waste from the development (or manufacturing) process and is directly related to concepts such as Just in Time manufacturing: having parts arrive at the factory just as they are needed for assembly, reducing storage costs and leveraging the time value of money.

The concepts of Lean echo (or, in fact, predate) concepts widely accepted in software development such as YAGNI (You Aren’t Gonna Need It), which encourages developers to defer building features until they are actually known to be needed.  This often reduces the total effort needed to make a product, since useless code paths are less likely to be constructed.  Extra code not only costs at the time of creation; it also costs later to maintain and generally reduces system performance.

While Lean is often associated with process models and systems such as CMMI and Six Sigma, Lean Software Development is more appropriately an Agile development methodology and is organized as such by its designers, Mary and Tom Poppendieck.  Lean is a relative newcomer to the methodology space, appearing as recently as 2003, and appears to be mostly a repackaging of core Agile principles without many of the “scarier” practices such as XP’s pair programming.

For an organization already using an agile process, Lean seems less a firm methodology and more a repackaging of obvious, common-sense items.  Lean’s greatest advantage, most likely, is that it lets organizations caught in the “manufacturing” mentality pick up a truly Agile framework while seeing it as “emulating Toyota.”  Lean may also be sellable to managers who otherwise won’t accept processes without management buzzwords attached.

Six Sigma

Six Sigma is a manufacturing process improvement program initially developed by Motorola in 1986.  Six Sigma and Motorola processes are exceptionally well known to businesses in Western New York because many of their practitioners, including then Motorola CEO George Fisher, were brought to Eastman Kodak.  When these manufacturing-oriented processes and management practices were applied to Kodak, a research and development organization, they served to drive out innovation, cripple agility, alienate the workforce and effectively dismantle the company.

Six Sigma itself is a manufacturing improvement process that focuses on a feedback loop of quality improvement, following the tradition of many standard quality improvement procedures.  In fact, Six Sigma has been criticized as nothing more than marketing fluff applied to principles long established in the industry.  Six Sigma is often associated with other initiatives that have been labeled management fads, such as Total Quality Management and Reengineering.  These processes and their ilk were lambasted by Scott Adams in his landmark business book The Dilbert Principle.  Many of these fads have come to be the face of poor management even as they have faded toward obscurity.  Six Sigma remains more entrenched, and may not constitute a fad, precisely because it is simply a repackaging of traditional concepts.

The biggest problem with Six Sigma is that it is a manufacturing methodology that relies upon repeatable processes and measurable error rates to achieve its goals.  As software development has neither of these things, Six Sigma is meaningless in this space.  Even Eastman Kodak, a company whose fortunes lay mostly in research and development, saw Six Sigma and heavy manufacturing processes erode its competitive advantage when its organizational focus shifted to low-profitability, easily off-shored plastics manufacturing rather than pharmaceutical and technology research.  Even Motorola, the poster child of Six Sigma, saw its fortunes dwindle compared to its chip-making competitors and the market as a whole once its focus shifted from new product development to manufacturing.  In first-world markets, any process that can be measured and improved through a system such as Six Sigma should be earmarked for third-world consideration.  In those markets, the principles behind much of Six Sigma should probably be applied to manufacturing processes, as there is much value when the core business is actually manufacturing and not product development and innovation.

The mistake often made with Six Sigma outside of manufacturing is the belief that all processes, even those that are inherently creative, can be meaningfully measured and quantified.  Few people would think of applying the principles of Six Sigma to painting a picture, composing a symphony, writing a novel, acting in a play or dancing in the ballet.  Why?  What about an individual’s expression through dance is less quantifiable than the creation of software?  Software, according to Donald Knuth, is literature and, according to Paul Graham, the process of making it is closer to art than engineering; even engineers would be appalled to find their engineering processes measured and quantified as if they were factory workers, interchangeable cogs in a giant machine.  If companies believe that design processes are interchangeable and replicable, why are they not automating them?  The end products of mechanical engineering and software development are both delivered as computer data files and so would be perfect candidates for total computer automation.  Why has this not happened?  Because the value of both of these processes lies in their creativity, not in their repeatability.  Six Sigma, therefore, has no meaning in this context.

Lean Six Sigma

In theory, Lean Six Sigma is the application of Lean’s Agile development principles to a Six Sigma process improvement environment.  In manufacturing circles this is an obvious marriage.  In software circles, however, Lean does not in any way explain how Six Sigma can be applied to the creative process of software design and writing.  Even Mary and Tom Poppendieck, the creators of Lean Software Development, when speaking about Lean Six Sigma, are only able to explain the application of Lean in a software environment and neglect to explain the use of Six Sigma.

Summary

Two things often result from the application of process controls in an organization.  The bad is the creation of unnecessarily heavyweight or even harmful processes that encumber or disrupt productivity and increase overhead costs, making profitability harder to achieve.  The good is the codification of common-sense principles.  The difficulty is avoiding the first while leveraging the second, without letting the second become the first.

Few process management advantages come from applying unintuitive principles to software development.  Sometimes principles that are obvious once you are aware of them, such as YAGNI, are non-obvious or even counter-intuitive to novice developers, managers and teams.  Where the organization is not aware of these principles at an organizational level and needs to push them down to teams, it is generally effective to codify them.  However, we do not believe that it is valuable to codify principles which are already understood and obvious.

Every organization has a different level of organizational knowledge in software development.  The larger the company, the greater the need for process control and codification.  Smaller companies have more opportunity to transmit good processes through the idea of Practice without the need for formal codification.

By transmitting through Practice we have an opportunity to teach, learn and grow rather than enforce.  Transmitted knowledge gives everyone an opportunity to evaluate, internalize and comprehend procedures and, most importantly, it gives everyone the right to question them.  If a process is codified within the organization then only a select few people are likely to have any real ability to question its applicability, and innovation may be stifled.  Few processes are appropriate all of the time, so teaching everyone when they are useful and when they are cumbersome empowers everyone to be as effective as possible.  Codification takes that decision out of the hands of the people best positioned to judge the appropriateness of a procedure within a particular context.

Organizations blessed with highly skilled staff need very little process control, as the staff will generally regulate themselves and follow meaningful processes on their own.  Some process control is still necessary; on this there is no question.  A minimally skilled organization, on the other hand, requires a great deal of process control to obtain any degree of productivity at all.  This is the fundamental theory behind a great many business and management processes: the concept of “managing to the middle”.  Managing to the middle means that instead of acquiring, training and empowering great staff, the company has decided to acquire budget resources and use heavy process control to obtain some level of production from staff that would be largely dysfunctional without those processes.

At CompanyX our base principles rely on the fact that we acquire great, motivated resources and empower them for innovation and creativity.  This has been said many times in this text and it cannot be stated enough.  Every management decision made at CompanyX must be done in the light of the fact that we trust our teams, and we believe in our teams.  CompanyX is not a code manufactory.  CompanyX creates literature.  It creates art.  It creates innovation.   Therefore it is imperative that we refrain from utilizing process control beyond that which is absolutely necessary and, whenever possible, rely upon the foundations of Practice and professionalism to establish working processes that are themselves flexible and mutable.

Risk Management

In software development, risk is a constant presence that must be addressed.  Risk comes in many forms: choosing to enter an inappropriate marketplace, finding a new competitor after a project has begun, poor design, misidentification of business needs, bug-ridden implementations, failed vendor support for key platform components, regulatory changes, unforeseen technical difficulties, team malfunctions, key employee health issues, lack of internal political support, etc.  Risk is unavoidable, and it is by facing risks that we are presented with an opportunity to move into new, unproven territories.  With great risk come great rewards.  But nothing is without risk; even an absence of action does not truly mitigate risk and, in fact, is likely to exacerbate it.

“If you don’t risk anything, you risk even more.” – Erica Jong

Risk Identification

The first step in dealing with risk is identifying potential points of risk.  For CompanyX we know that our most significant points of risk come from personnel dynamics which include health issues, availability, etc. and post-production market uptake.  There are many other risk areas but to these two we are more susceptible than many companies and need to be acutely aware of the implications of these risks.

Before any project begins, a detailed risk assessment should be performed.  The process should include, at a minimum, the development lead, the design lead and financial management.  Fortunately, our organization is able to avoid some areas of high potential risk because of our minimal political turmoil, abundance of project support and lack of risk aversion.  For example, CompanyX has never been forced to abandon a project because of a withdrawal of internal executive sponsorship, which can plague even the most successful projects in other companies.

Risk Assessment

Once a comprehensive set of foreseeable, reasonable risks has been collected, we must assign each a likelihood of occurrence and a magnitude of impact.  Obviously this measurement is highly subjective.
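One simple way to turn the two ratings into a ranking is a product score.  This is purely illustrative; the 1-to-5 scales and the example risks below are assumptions for the sketch, not a CompanyX standard:

```shell
#!/bin/sh
# Illustrative risk scoring: likelihood (1-5) times impact (1-5).
# Higher scores get attention first; both ratings remain subjective.
risk_score() {
    echo $(( $1 * $2 ))
}

# Hypothetical entries from an assessment session:
echo "key developer unavailable: $(risk_score 4 5)"   # 20
echo "regulatory change:         $(risk_score 2 3)"   # 6
```

However the scores are produced, the point is only to order the risks for discussion, not to pretend the numbers are objective.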

Unlike a larger firm, which may be able to spread risk widely between projects and hedge high-risk projects with low-risk ones, we are stuck putting our few eggs into very few proverbial baskets, and our organizational exposure to a single failed project is quite high.

Risk Mitigation and Handling

Once risks are understood there are four options for dealing with them: avoidance, reduction, retention and transference.  Each approach must be considered and decided upon by the risk team.  In general, however, it is in CompanyX’s interest to practice risk retention where possible within the software development practice, because our ability to withstand failures internally is generally better than the industry average, making other forms of risk mitigation more costly by comparison.  By retaining risk we have a greater opportunity to leverage our internal skills effectively.  Obviously this does not apply when risk can simply be avoided.

Summary

CompanyX has pursued process management very diligently in the past and many of our processes and procedures reflect a dedication to research, customization and independence, but we still have a very long way to go.  There is much that we have to learn from the industry and many best practices that we still need to apply.

Building a strong process supporting strong projects requires constant diligence, reevaluation and reassessment.  There are always areas in which improvement is possible, and it must be pursued.

In this document we have identified areas of improvement that should be addressed, such as our lack of organized testing, our lack of support for a testing Practice and the need to implement more formal risk assessment procedures.  New software systems such as continuous integration and discussion forums need to be installed and made available to the development teams.

In the past year we have made many strides in the right direction and the improvements are very visible.  Perhaps our greatest achievement of the past year is the use of the wiki format for storing procedures, design documents and other forms of organizational knowledge in a repository that is not only highly available but also easily searchable.  It is exciting to see how this resource will be adapted for use in the future.

Whatever process improvements we seek out in the future, it is imperative that we remain focused on the fact that our value lies in our staff, and that any process which stymies their desire to create or hampers their productivity is dangerous and should be avoided.  If we undermine our own value then we have lost the very resource that process management was supposed to help us cultivate.  Like any form of management, process management has more power to damage than to enhance, and care must always be taken.

Process management must be a tool for enabling innovation and creativity because it cannot be a tool to create these things.

References

Wells, Timothy D. Dynamic Software Development. New York: Auerbach, 2003.

Brooks, Frederick P.  The Mythical Man-Month: Essays on Software Engineering.  Boston: Addison-Wesley, 1995.

Graham, Paul. Hackers and Painters: Big Ideas from the Computer Age. Sebastopol: O’Reilly, 2004.

DeMarco, Tom & Lister, Timothy.  Peopleware: Productive Projects and Teams.  New York: Dorset House, 1987.

Rainwater, J. Hank.  Herding Cats: A Primer for Programmers Who Lead Programmers.  New York: Apress, 2002.

Spolsky, Joel.  Joel on Software.  New York: Apress, 2004.

Spolsky, Joel.  More Joel on Software.  New York: Apress, 2008.

Mason, Mike.  Pragmatic Version Control: Using Subversion – The Pragmatic Starter Kit Volume 1.  Raleigh: Pragmatic Bookshelf, 2006.

Hunt, Andrew & Thomas, Dave.  Pragmatic Unit Testing in Java with JUnit – The Pragmatic Starter Kit Volume 2.  Raleigh: Pragmatic Bookshelf, 2003.

Clark, Mike.  Pragmatic Project Automation: How to Build, Deploy, and Monitor Java Applications – The Pragmatic Starter Kit Volume 3.  Raleigh: Pragmatic Bookshelf, 2004.

Hunt, Andrew & Thomas, Dave.  Pragmatic Programmer, The: From Journeyman to Master.  New York: Addison-Wesley Professional, 1999.

Spolsky, Joel & Atwood, Jeff.  Stack Overflow Podcast, Episode 23 – Seven Mistakes Made in the Development of Stack Overflow Segment.

Beck, Kent.  Test-Driven Development by Example.  New York: Addison-Wesley, 2003.

Beck, Kent & Andres, Cynthia.  Extreme Programming Explained: Embrace Change.  New York: Addison-Wesley, 2004.

Hass, Kathleen B.  From Analyst to Leader: Elevating the Role of the Business Analyst – The Business Analyst Essential Library.   Vienna: Management Concepts, 2008.

Cockburn, Alistair.  Writing Effective Use Cases.  New York: Addison-Wesley, 2000.

Ruby Module Test::Unit.  http://ruby-doc.org/stdlib/libdoc/test/unit/rdoc/classes/Test/Unit.html

Vermeulen, Allan; Ambler, Scott W.; et al.  Elements of Java Style, The.  New York: Cambridge University Press, 2000.

Baldwin, Kenneth; Gray, Andrew; Misfeldt, Trevor.  Elements of C# Style, The.  New York: Cambridge University Press, 2006.

Ambler, Scott W.  Elements of UML 2.0 Style, The.  New York: Cambridge University Press, 2005.

Knuth, Donald.  “Literate Programming” in Literate Programming.  Stanford: CSLI, 1992.

Wikipedia Contributors.  “Capability Maturity Model Integration”.  http://en.wikipedia.org/wiki/Capability_Maturity_Model_Integration November, 2008.

Wikipedia Contributors.  “Six Sigma”.  http://en.wikipedia.org/wiki/Six_Sigma  November, 2008.

Wikipedia Contributors.  “Lean Software Development”.  http://en.wikipedia.org/wiki/Lean_software_development  November, 2008.

Wikipedia Contributors.  “Risk Management”.  http://en.wikipedia.org/wiki/Risk_management  November, 2008.

Adams, Scott.  Dilbert Principle, The.  New York: Collins, 1997.

Poppendieck, Mary; Poppendieck, Tom.  “Why the Lean in Lean Six Sigma”. http://www.poppendieck.com/lean-six-sigma.htm  November, 2008.

Product References

OpenFire Instant Messaging Server
Spark Instant Messaging Client
Yahoo! Zimbra Email and Groupware Server
SSL-Explorer VPN Server
DokuWiki
Microsoft SharePoint
Alfresco CMS
Vanilla Forum
phpBB Forum

Installing Subversion on RHEL5
https://sheepguardingllama.com/2008/11/installing-subversion-on-rhel5/
Sun, 16 Nov 2008 00:57:20 +0000

Subversion (SVN) is a popular open source version control package.  Today we are going to install and configure Subversion on Red Hat Enterprise Linux 5.2 (a.k.a. RHEL 5.2).  I will actually be doing my testing on CentOS 5.2, but the process should be identical.

Installing Subversion on Linux

Installation of Subversion is very simple if you are using yum.  In addition to Subversion itself, you will also want to install Apache, as you will most likely want to access Subversion through a WebDAV interface.  You can simply run:

yum -y install subversion httpd mod_dav_svn

Once Subversion is successfully installed, we need to create the initial repository.  This can be done on the local file system, but I prefer to keep high priority, highly volatile data stored directly on the NAS filer, as that is far more appropriate for this type of data.

As an aside, I generally like to keep low-volatility data (say, website HTML) stored on local discs, both for performance reasons and because backups are easy to take using traditional methods (e.g. tar, cpio, Amanda, Bacula, etc.).  High-volatility files I prefer to keep on dedicated network storage units where backups can be taken using more advanced methods like Solaris 10’s ZFS snapshot capability.  It is not always clear when data makes sense to keep locally or to store remotely, but I feel you can gauge a lot of the decision on two factors: the frequency of data changes (that is, changes to existing files, not necessarily the addition of new files) and the degree to which the data is the focus of the application (that is, whether the data is incidental or key to it).  In the case of Subversion the entire application is nothing but a complex filesystem frontend, so we are clearly on the “data focused” side.

I started writing this article on RHEL4 on a system with a small, local file system.  When I returned to the uncompleted article and continued with it I was implementing this on a RHEL5 system with massive local storage and decided to keep my Subversion repository local on a dedicated logical volume for easy Linux based snapshots.

Subversion offers two backend storage options.  The original method of storing Subversion data was the venerable Berkeley DB (BDB), now a product of Oracle.  The newer method, and the default since Subversion 1.2, is FSFS, which stores the repository directly in files on the native filesystem.  In my example here, and for my own use, I choose FSFS, as I think it is more often the better choice.  Most notably, FSFS supports remote filesystems over NFS and CIFS while BDB does not.  FSFS is also easier to deal with when creating backups.  My feeling is that unless you really know why you want BDB, stick with the default FSFS; there is a reason it was selected as the default.

Another note about creating Subversion repositories: some sources recommend putting Subversion repos under /opt.  All I have to say is “No No No!”  The /opt filesystem is not appropriate for regularly changing data.  Any data that is expected to change on a regular basis (e.g. log files, source code repos, etc.) belongs in /var.  That is the entire purpose of the /var filesystem: it stands for “variable” and is purposed for regular filesystem changes.  Data belonging in /var is also another indicator that an external network filesystem may be appropriate.

mkdir -p /var/projects/svn

At this point you can either use /var/projects/svn as a normal local directory or mount it remotely in some manner such as NFS, CIFS or iSCSI.  Regardless of how the repository is set up, the rest of this document will function identically.

We are now in a position to use svnadmin to create our repository directory:

svnadmin create /var/projects/svn/

At this point, Subversion should already be working for you.  If you are new to Subversion, we will do a simple import to test our installation.  To perform this test, create a directory called “testproject” and put it in the /tmp directory.  Now touch a couple of files inside that directory so that we have something with which to work.  Then we will do our first Subversion import.

mkdir /tmp/testproject; cd /tmp/testproject; touch test1 test2 test3

svn import /tmp/testproject/ file:///var/projects/svn/test -m "First Import"

Your Subversion installation is now working, but few people will be happy accessing their Subversion repositories only from the local machine as we have done here.  If you are used to working from the UNIX (Linux, Mac OSX, Cygwin, etc.) command line you may want to try accessing your new Subversion repository using SVN+SSH.  Here is an example taken from an OpenSUSE workstation with the Subversion client installed:

svn list svn+ssh://myserver/var/projects/svn
testproject/

At this point you have access from your external machines and can perform a checkout to get a working copy of your code.  To make the process really simple, be sure to set up your OpenSSH keys so that you are not prompted for a password.  For many users, most notably Windows users, you are going to want access over HTTP, since Windows does not natively support SSH.
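Key setup is a one-time step on the client.  A sketch, assuming OpenSSH (the key file name and “myserver” are placeholders; the final step is shown commented out because it contacts the remote host):

```shell
#!/bin/sh
# Generate a passphrase-less key pair for non-interactive svn+ssh access,
# then install the public key on the Subversion server.
mkdir -p "$HOME/.ssh"
ssh-keygen -q -t rsa -N "" -f "$HOME/.ssh/id_rsa_svn"
# ssh-copy-id -i "$HOME/.ssh/id_rsa_svn.pub" user@myserver
```

A passphrase-less key is a convenience trade-off; if that worries you, use a passphrase with ssh-agent instead.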

The first thing that you need to do, if you are running SELinux and firewall security on your RHEL server like I am, is to open ports 80 and 443 in your firewall so that Apache can be reached.  Normally I shy away from management tools, but this one I like: just use “system-config-securitylevel-tui” and select the appropriate services to allow.

You will also need to allow the Apache web server to read and write the Subversion repository.  Make the apache user the owner of the repository and restore the default SELinux file contexts on it:

chown -R apache:apache /var/projects/svn/
restorecon -R /var/projects/svn/

We have one little trick left to perform.  This trick is necessary because of what appears to be a bug in the way that Subversion sets the user ID when it runs.  It is not necessary for all users, but it can be a tough sticking point for anyone who runs into it and is not aware of what can be done to remedy the situation.

cp -r /root/.subversion/* ~apache/.subversion/

Configuring Apache 2 on Red Hat 5 is a little tricky, so we will walk through it together.  The first thing that needs to be added is the LoadModule line for the WebDAV protocol.  This goes into the LoadModule section of the main /etc/httpd/conf/httpd.conf configuration file.

LoadModule dav_module         modules/mod_dav.so

The rest of our configuration changes for Apache 2 will go into a dedicated configuration file just for our subversion repository: /etc/httpd/conf.d/subversion.conf

I am including here my entire configuration file sans comments.  You will need to modify your SVNPath variable accordingly, of course.

# grep -v \# /etc/httpd/conf.d/subversion.conf

LoadModule dav_svn_module       modules/mod_dav_svn.so
<Location /svn>
  DAV svn
  SVNPath /var/projects/svn/
</Location>

At this stage you should not only have a working Subversion repository but should also be able to access it via the web.  You can test web access from your local box with the svn command.  Here is an example:

svn list http://localhost/svn/

References:

Mason, Mike. “Pragmatic Version Control Using Subversion, 2nd Edition”, The Pragmatic Programmers. 2006.

Installing Subversion on Apache by Marc Grabanski

Subversion Setup on Red Hat by Paul Valentino

Setting Up Subversion and Trac As Virtual Hosts on Ubuntu Server, How To Forge

The SVN Book, RedBean

Additional Material:

Subversion Version Control: Using the Subversion Version Control System in Development Projects

]]>
https://sheepguardingllama.com/2008/11/installing-subversion-on-rhel5/feed/ 5
Simple Ruby Twitter Client – Tweet [Ruby] https://sheepguardingllama.com/2008/10/simple-ruby-twitter-client-tweet-ruby/ https://sheepguardingllama.com/2008/10/simple-ruby-twitter-client-tweet-ruby/#comments Fri, 31 Oct 2008 18:41:58 +0000 http://www.sheepguardingllama.com/?p=2833 Continue reading "Simple Ruby Twitter Client – Tweet [Ruby]"

]]>
This is my simple, Ruby based Twitter client using Curl designed for UNIX systems like Linux, Mac OSX, FreeBSD, Solaris, etc.  The only requirements are Curl and Ruby.

In order to use Tweet, simply copy all of the included code into your favourite text editor (I use vi) and save it as ‘tweet’.  Don’t forget to “chmod a+x tweet” so that it is executable.  I suggest moving tweet into your path (/usr/local/bin is a good choice) to make it easier to use.  I have designed Tweet to be useful to users on a multi-user UNIX system.  It is a command-line utility that simply accepts text input and posts that text, to a maximum of 144 characters, to your Twitter account.  An existing Twitter account is necessary, so sign up if you do not have one already.
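The install steps above, condensed into commands (this assumes you saved the script as tweet in the current directory and have write access to /usr/local/bin):

```shell
# make the script executable
chmod a+x tweet
# put it somewhere in everyone's PATH
mv tweet /usr/local/bin/
```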

There is very little to know in order to use Tweet [Ruby].  (Should I name this RTweet perhaps?)  The one thing that is needed is to set your username and password.  Tweet [Ruby] is designed to read your username and password from the environment variables $tweetuser and $tweetpass.  This design decision was made because it makes it extremely simple for multiple users on the same system to use Tweet [Ruby] independently of one another.  If you desire, you can bypass this by changing the “unset” user and pass settings in the code to your username and password.  This hardcoding is not recommended but is available if needed.

Once you have your username and password set (you can see what your settings currently are by using the -t option) all you need to do is enter the text that you want to publish.  Here is an example:

tweet 'This is my first post from Tweet [Ruby].  Thanks Scott, this is great.'

Here is the code, go crazy.

#!/usr/bin/ruby -w
#Scott Alan Miller's "Tweet" - Twitter Command Line Script

text = (ARGV[0] || "").chomp   # default to "" so that running with no argument does not crash
user = "unset"         #Supplied Username
pass = "unset"         #Supplied Password
url  = "http://twitter.com/statuses/update.xml"
ver  = "1.0"

user = ENV['tweetuser'] if ENV['tweetuser']
pass = ENV['tweetpass'] if ENV['tweetpass']

if    text.length <= 0
  puts "Please enter text to post."
elsif text.length >= 144
  puts "Please limit post to 144 chars."
elsif text == "-v"  # Version Message
  puts "Current Version of Tweet [Ruby] is " + ver
elsif text == "-h"  # Help Message
  puts "Tweet [Ruby] Help: \n"
  puts "To set environment username and password:"
  puts "  export tweetuser=yourusername"
  puts "  export tweetpass=yourpassword\n"
  puts "Usage:"
  puts "  tweet \'This is my message.\'"
elsif text == "-t"  # Variable Test
  puts "Username: " + user
  puts "Password: " + pass
else
  result = %x[curl -s -S -u #{user}:#{pass} -d status="#{text}" #{url}]
  # a successful post comes back as XML containing a <text> element
  puts "Update Failure" unless result.include?("<text>")
end

If you end up using my little Twitter client, please send me a Tweet to let me know!

tweet '@scottalanmiller Using Tweet, best Twitter client ever.  Ruby rulz.'

]]>
https://sheepguardingllama.com/2008/10/simple-ruby-twitter-client-tweet-ruby/feed/ 5
Twitter from the Linux Command Line https://sheepguardingllama.com/2008/10/twitter-from-the-linux-command-line/ https://sheepguardingllama.com/2008/10/twitter-from-the-linux-command-line/#comments Fri, 31 Oct 2008 14:52:47 +0000 http://www.sheepguardingllama.com/?p=2820 Continue reading "Twitter from the Linux Command Line"

]]>
Okay, so you are a crazy BASH or Korn shell nut (DASH, ASH, TCSH, CSH, ZSH, etc., etc. – yes, I mean all of you) and you totally want to be able to Tweet on your Twitter feed without going to one of those crufty GUI utilities.  Such overkill for such a simple task.  I feel your pain.  When I found this little nugget of command line coolness I just had to share it with all of you.  Special thanks to Marco Kotrotsos from Incredicorp who published this on IBM developerWorks.

If you have curl installed, all you need to do is:

curl -u username:pass -d status="text" http://twitter.com/statuses/update.xml

So, to give you a real world example, if you are “bobtheuser” and your password is “pass1234” and you want to say “Hey, my first UNIX Shell Tweet.” then you just need to:

curl -u bobtheuser:pass1234 -d status="Hey, my first UNIX Shell Tweet." \
http://twitter.com/statuses/update.xml

You will get some feedback in the form of a response XML file. Happy Tweeting!

Disclaimer: I realize that using “Linux” in the subject is misleading.  This is not a Linux specific post but will apply to FreeBSD, OpenBSD, NetBSD, Mac OSX, UNIX, Solaris, AIX, Windows with Cygwin or just about any system with a command line and the curl utility installed.

I use this as the basis for my Ruby based Twitter Client for the command line.

]]>
https://sheepguardingllama.com/2008/10/twitter-from-the-linux-command-line/feed/ 1
Ruby/Qt: qtruby4.rb:2144: [BUG] [x86_64-linux] https://sheepguardingllama.com/2008/10/rubyqt-qtruby4rb2144-bug-x86_64-linux/ https://sheepguardingllama.com/2008/10/rubyqt-qtruby4rb2144-bug-x86_64-linux/#respond Sat, 04 Oct 2008 17:01:50 +0000 http://www.sheepguardingllama.com/?p=2661 Continue reading "Ruby/Qt: qtruby4.rb:2144: [BUG] [x86_64-linux]"

]]>
You are working with Ruby and Qt and you get the following error:

/usr/lib64/ruby/site_ruby/1.8/Qt/qtruby4.rb:2144: [BUG] Segmentation fault
ruby 1.8.6 (2008-03-03) [x86_64-linux]

This is usually caused by a library linking problem. Most likely you are using:

require 'Qt'

Personally, I run into this problem when using Ruby/Qt on Novell OpenSUSE 11 64-bit (x86_64 / AMD64).  What needs to happen is that the linking occurs explicitly against the correct library.  If you are using ‘Qt3’ then you can link directly to that; if, as in my case, you want to use the Qt4/KDE4 bindings, then you will want to link to korundum4:

require 'korundum4'

Problem solved!

Thanks to Bemerkenswertes Meinerseits for some guidance in German!

]]>
https://sheepguardingllama.com/2008/10/rubyqt-qtruby4rb2144-bug-x86_64-linux/feed/ 0
HowTo WhiteList Proxy for School Using Squid on OpenSUSE Linux 11 https://sheepguardingllama.com/2008/09/howto-whitelist-proxy-for-school-using-squid-on-opensuse-linux-11/ https://sheepguardingllama.com/2008/09/howto-whitelist-proxy-for-school-using-squid-on-opensuse-linux-11/#comments Sun, 28 Sep 2008 22:26:25 +0000 http://www.sheepguardingllama.com/?p=2610 Continue reading "HowTo WhiteList Proxy for School Using Squid on OpenSUSE Linux 11"

]]>
Overview

I am the technology coordinator for a small, private K12 school in rural Upstate New York.  One of our challenges is filtering Internet access so that the students may have access to the Internet as much as possible while not requiring constant, direct supervision.

To meet these needs we decided that we were limited to whitelisting – maintaining a list of all allowed websites and blocking everything else by default – as opposed to blacklisting, where everything is allowed except for a specific list of banned sites.  Whitelisting means that we have to manually maintain a list of approved websites, but the parents can be confident that the students are only able to access pre-approved web sites.

Our Infrastructure

Before getting into the implementation details, I would like to detail how our network is laid out to put this project into context.  We are a pure 32bit Novell OpenSUSE environment, both desktops and servers, with a single Netgear ProSafe Firewall connecting us to a donated Time-Warner RoadRunner cable connection (Thank You, Time Warner RR!!)

Each desktop is set up without routing so they are limited to communications within the subnet only.  We have no fears of needing to grow beyond our /25 subnet’s limit anytime soon.  We have no DHCP and use static IP assignments throughout the school, including machines connected via wireless.  Those machines used by administrators (not teachers – office machines to which students do not have access) are routable and will not use our filter (for extra security they are allowed external access at the firewall via an IP list.)  All other machines can only get access to the Internet through the proxy server.  This also allows us to improve bandwidth utilization through aggressive caching since the set of allowed sites is so limited and well known.

For our proxy server hardware we are using an HP Proliant DL380 G2 with dual 1.4GHz Pentium III processors, 1.25GB of memory and six hot-swap 36GB 10,000RPM SCSI drives arranged as RAID 0+1.  This machine is far more than adequate for our needs and does an amazing job.  We could easily run on a DL360 G1 with just a single processor, half that memory and two drives in RAID 1 without any problem.  Our previous machine, which we used for years without any performance issues, was a Proliant 3000, dual Pentium II 333MHz, 1GB and five 4.3GB 7,200RPM drives in RAID 5.

The older system ran SUSE 9.2 and ran wonderfully for a long time.  I am writing this HowTo guide as I move us to OpenSUSE 11 and do a fresh installation of our proxy server.

The Software

As we are running on OpenSUSE Linux 11, I want to work with Novell managed packages as much as possible.  For the proxy portion of our system we will use the Linux standard proxy server Squid.  OpenSUSE’s repository offers us both Squid3 and Squid2.  We will go ahead and use the latest Squid Proxy package for OpenSUSE 11, Squid3 3.0.5.  The downside to going with the newer Squid3 package is that OpenSUSE’s YaST tool cannot yet manage it so you are stuck working only from the configuration files.

For advanced filtering we have two primary choices: SquidGuard and DansGuardian.  SquidGuard has the advantage of being included in the OpenSUSE repositories making it easier to manage from a patch perspective.  DansGuardian is what I have used in the past.  It is available as an RPM from the OpenSUSE Build site but is not available through the YaST repositories.  DansGuardian is GPL’d but the author asks that you not exercise your GPL right (GPL in fact but not in spirit.)  So, I like to avoid DansGuardian simply because I can’t figure out if the author even wants me to use his software or not.

For our purposes here, using nothing but whitelisting, we do not need the features of either SquidGuard or DansGuardian and can avoid them completely.  If you are looking to do more than just whitelist filtering, they are your best bets.

Installing the Proxy Server: Squid

Installing Squid3 on OpenSUSE 11 is extremely simple.

zypper install squid3

Of course, if you prefer, you can always use OpenSUSE’s YaST utility, either graphically through the desktop or through an ncurses interface on the command line, to install Squid and any necessary dependencies.  I find working through Zypper (or Yum on a Red Hat, CentOS or Fedora system) to be the most efficient by far.

Configuring Squid

These are the changes that I made to /etc/squid/squid.conf:

acl localnet src 192.168.4.0/25
acl whitelist dstdomain "/etc/squid/whitelist"
http_access allow all whitelist
http_access deny all
http_port 8080

That’s it.  Very, very simple.  The first line is simply to allow my local network.  You will need to add in your own local network and not mine for this to work for you.  If you stick with the Squid3 defaults then all private networks are allowed locally by default so that is a completely viable option.

The two http_access lines first tell Squid to allow anyone (“all”) access to the sites included in the whitelist, and then to deny access to any request that was not allowed by the previous rule.

The last line, http_port, is also completely optional.  The default port for Squid is 3128 but I prefer to run my proxy on the more common 8080 port.  This is just easier to remember when setting up desktops.

With the default install of Squid3, Squid is not configured to start automatically.  So we need to use chkconfig to configure Squid to start on system boot.  You can skip this step if, for some reason, you do not want your proxy system to start automatically when your server restarts.

chkconfig --level 3 squid on

Before we actually start Squid, though, we will want to create our whitelist file which will be the main configuration file that we will be using after Squid is up and running.

Creating the Whitelist

Using your favourite text editor (that’s vi for me) create the file /etc/squid/whitelist.  This file is just a simple list of websites that will be allowed.  The one thing to be aware of is that your entries need to lead with a dot.  If you leave off the dot you will have problems.  Here is an example from my own whitelist:

.gov
.sheepguardingllama.com
.unicef.org
.eff.org
.conversationsnetwork.org

In this example, all United States government web sites will be allowed (those ending in .gov) as well as this blog, UNICEF, the Electronic Frontier Foundation and The Conversations Network.  Anytime that you alter this file you will need to ask Squid to reread its configuration.
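On OpenSUSE that looks roughly like the following – rcsquid is the usual init script shortcut, and squid -k reconfigure signals the running daemon to reread its files:

```shell
# start Squid now that the whitelist exists
rcsquid start

# after any later edit to /etc/squid/whitelist, reread the configuration
squid -k reconfigure
```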

Configuring the Desktop Clients

If you are like me, you will be using OpenSUSE on your desktops as well, which I highly recommend.  OpenSUSE makes a wonderful desktop, especially with KDE4.  With OpenSUSE you have the option of setting your proxy settings using the handy YaST tool.  That is fine, but I prefer to use the command line – mostly because it is easily scriptable, but also because it will work on non-SUSE Linux boxes as well.

To set your proxy temporarily just for the current session to test your proxy server you can simply:

export http_proxy="http://192.168.4.2:8080/"

Notice that you will need to use your own IP address here as well as your own port number if you decided to use one other than 8080.  My proxy server’s IP address is 192.168.4.2 so modify accordingly.
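A quick way to check both the setting and the filtering, assuming .eff.org is on your whitelist as in the example above (again, use your own proxy address):

```shell
# export so that child processes such as wget can see the setting
export http_proxy="http://192.168.4.2:8080/"

# a whitelisted site should come back cleanly...
wget -q -O /dev/null http://www.eff.org/ && echo "whitelisted site OK"

# ...while a site not on the list should be refused by Squid
wget -q -O /dev/null http://www.example.com/ || echo "blocked as expected"
```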

The most common means of setting this variable to survive through a reboot is to use /etc/profile so that it will apply to all users.  Simply add this line to /etc/profile:

export http_proxy=http://192.168.4.2:8080/

In OpenSUSE, there is a better place to set this information.  Let’s look at /etc/sysconfig/proxy.  This file is a central proxy settings file for all of OpenSUSE, which makes it very handy: we don’t have to worry about users failing to pick up changes from other locations.  It is also nice as it will allow us to have some advanced settings if we so desire.

In my case, I am only using the proxy server to handle HTTP and HTTPS requests (we are blocking FTP and GOPHER entirely) so we only need to edit the two lines pertaining to those protocols as well as the “no proxy” setting to list which locations should not be proxied but accessed directly.  Here are my settings:

HTTP_PROXY="http://192.168.4.2:8080/"
HTTPS_PROXY="http://192.168.4.2:8080/"
NO_PROXY="localhost, 127.0.0.1"

With these changes you should now have a functioning, whitelisting proxy server to protect your network.  OpenSUSE’s default installation of Firefox is set to bypass its own proxy settings and to pick up the system settings automatically.  Tools like w3m and wget will use the system proxy settings as well.  If you are using a client that is either unable to, or is not configured to, get its settings from the system then you will need to configure its proxy settings manually on an application-by-application basis.

]]>
https://sheepguardingllama.com/2008/09/howto-whitelist-proxy-for-school-using-squid-on-opensuse-linux-11/feed/ 13
Updating Zimbra on Linux https://sheepguardingllama.com/2008/09/updating-zimbra-on-linux/ https://sheepguardingllama.com/2008/09/updating-zimbra-on-linux/#comments Sat, 13 Sep 2008 04:22:52 +0000 http://www.sheepguardingllama.com/?p=2533 Continue reading "Updating Zimbra on Linux"

]]>
Having been a Zimbra Administrator for some time and having always worked on the Zimbra Open Source platform I have found that documentation on the update process has been very much lacking.  The process is actually quite simple and straightforward under most circumstances but for someone without direct experience with the process it can be rather daunting.

My personal experience with Zimbra, thus far, is running the 4.5.x series on CentOS 4 (RHEL 4).  Using CentOS instead of actual Red Hat Enterprise Linux presents a few extra issues with the installer but have no fear, the process does work.

While this document is based on the Red Hat Enterprise Linux version of Zimbra, I expect that non-RPM based systems will behave similarly.

To upgrade an existing installation of Zimbra, first do a complete backup.  I cannot overstate the importance of having a complete and completely up-to-date backup of your entire system.  Zimbra is a massive package that is highly complex.  You will want to be absolutely sure that you are backed up and prepared for disaster.  If you use the open source version of Zimbra, as I do, that means taking Zimbra offline so that a backup can be performed.  I won’t go into backup details here but LVM snapshots or virtual instances of your server will likely be your best friend for regular backups.  Email systems can get very large very quickly.
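As one sketch of an LVM-based backup – the volume group (vg0), logical volume (opt), snapshot size and paths here are assumptions to substitute with your own; zmcontrol is Zimbra's own service control command:

```shell
# stop Zimbra so the mail store is quiescent
su - zimbra -c "zmcontrol stop"

# snapshot the logical volume that holds /opt/zimbra (names are assumptions)
lvcreate --size 5G --snapshot --name zimbra-snap /dev/vg0/opt

# Zimbra can come right back up; the snapshot stays frozen in time
su - zimbra -c "zmcontrol start"

# archive the snapshot at leisure, then discard it
mount /dev/vg0/zimbra-snap /mnt/snap
tar -czf /backup/zimbra-$(date +%F).tar.gz -C /mnt/snap zimbra
umount /mnt/snap
lvremove -f /dev/vg0/zimbra-snap
```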

Go to the Zimbra website and download the latest package for your platform.  If you use CentOS, get your matching RHEL package.  It will work fine for you.  I find that the easiest way to move the package to your Zimbra server is with wget.  Downloading to /tmp is fine as long as you have enough space.

Unpack your fresh Zimbra package.  Zimbra downloads as a tarball (gzip’ed tar package) but contains little more than a handy installation script that automates RPM deployments.  It is actually a very nice package.

tar -xzvf zimbra-package.tar.gz

You can cd into your newly unpacked directory and inside you will find that there is a script, install.sh.  Yes, the installation process is really that simple.  On most platforms you may simply run the install script.  If you are on CentOS, rather than RHEL, you will need one extra parameter: --platform-override.

./install.sh --platform-override

Be prepared for this process to run for quite some time – easily an hour or more.  The versions that you are upgrading from and to, as well as the size of your mail store, will affect how long the process takes.

The installation script will fire off checking for currently installed instances of Zimbra, checking your platform for compatibility (be sure to check this manually if using the override option but CentOS users can rest assured that RHEL packages work perfectly for them), performing an integrity check on your database and checking prerequisite packages.  Chances are that you will need to do something in order to prepare your system for the upgrade.

In my case, upgrading from 4.5.9 to 5.0.9, I needed to install the libtool-libs package.

yum install libtool-libs

While there are steps here that can certainly go wrong, the Zimbra upgrade process is very simple and straightforward.  As long as you have good backups (make sure not to start Zimbra and receive new mail after having made your last backup) you should not be afraid to upgrade your Zimbra Open Source system.

You can also purchase a support contract from Yahoo/Zimbra so that you can move to the Network version of Zimbra and Zimbra support staff are happy to walk you through the process.  Having someone there to make sure everything is okay is always nice.

References:

Linux Zimbra Upgrade HowTo from GeekZine

]]>
https://sheepguardingllama.com/2008/09/updating-zimbra-on-linux/feed/ 2
The Case Against SAN https://sheepguardingllama.com/2008/08/the-case-against-san/ https://sheepguardingllama.com/2008/08/the-case-against-san/#respond Sat, 16 Aug 2008 14:36:12 +0000 http://www.sheepguardingllama.com/?p=2492 Continue reading "The Case Against SAN"

]]>
Despite an inflammatory post title, I believe that SAN (Storage Area Network) is a great technology, with numerous scenarios where it is exactly the right choice and several scenarios that only exist because of SAN’s availability.  That being said, many enterprises today use SAN without doing any proper strategy, architecture or engineering.  It is being chosen not because of its appropriateness to the task at hand but simply because technology managers see it as easier, or more popular, to use it broadly than to carefully evaluate each system in question based on technical and financial factors.

SAN is an amazing technology that wonderfully complements virtualization, clustering and other advanced use case scenarios.  But not every machine is used in these ways, and SAN has many downsides that need to be carefully considered before implementing it blindly.

SAN is Complex. Simply by choosing to use SAN we introduce another layer of complexity into the server equation.  (I am assuming server use situations here as SAN is nearly unheard of in the desktop space.  That being said, I use SAN on my own desktop.)  Having SAN means that either your system administrators need to wear yet another hat or you need to hire and maintain a dedicated storage administration, and possibly engineering, staff.

It also means that you will probably need to deal with sourcing and managing a fibre channel network along with the associated HBAs, fiber optics, etc.  Servers that would otherwise have just three simple Ethernet connections (I’m generalizing horribly here) are suddenly up to five or more connections making your datacenter folks oh so happy.

SAN is Expensive. Unless you opt to use a shared network SAN technology like iSCSI (or Z-SAN) then SAN introduces an expensive array of proprietary networking hardware, cabling and host bus adapters.  Only after all of those expenses must we consider the cost of the SAN itself.  SAN systems are generally quite expensive and only begin to approach being cost effective when utilization rates are extremely high and the systems are very large.  Heavy up front investments can make SAN difficult to cost justify even if long term utilization rates might be high.

SAN is Not Performant. High speed SAN networks, massive switching fabrics and huge drive arrays all play into an expensive and mostly futile attempt to get SAN technologies to perform at or near traditional direct attached storage technologies.  During the Parallel SCSI and PATA drive era, fibre channel SAN had an advantage over most local drives simply because of the high performance of its networking infrastructure.  Today this is not the case.

Unlike shared bandwidth technologies like Parallel SCSI and Parallel ATA (PATA), SAS and SATA drives have dedicated, full duplex bandwidth per device providing greatly increased transfer rates while lowering latency.  Only the largest, most expensive of high performance SAN systems could hope to overcome this gap in technology.

Typical SAN systems tend to use, in my experience, SATA devices traditionally running at 7,200 RPM.  Local drives are often SAS drives running at 15,000 RPM.  Often, especially in the AMD and Intel server worlds, local drives are handled by high powered RAID controller cards with dedicated processors and their own cache.  These cards move the cache closer to the system memory, making their burstable throughput far greater than can normally be achieved in a SAN situation.

SAN is Not Easily Tunable. In most situations, SAN is managed as a single, giant storage entity.  Tuning is performed to an entire array but little thought is generally given to small segments within an array.

This is made nearly impossible and definitely impractical by the simple fact that physical drive resources are often shared and the concerns of each accessing system must be considered.  The obvious solution is to just tune for “average” use given no special considerations to any particular system.  If drive resources are not dedicated then we must question where the value of the SAN comes into play.

Drives located on a local machine can easily be tuned for cost and performance as needed.  Careful consideration of high speed SAS versus large volume SATA can be made on a volume by volume basis by the system engineer.  Drives can be grouped as needed into carefully chosen RAID levels such as 0 for raw performance, 5 for high speed random access with some additional reliability, 1 for good sequential access with full redundancy, 6 for additional redundancy over 5, etc.

Drive volumes can also be isolated so that drive systems often accessed simultaneously do not share command paths.  Careful filesystem design can greatly reduce drive contention and minimize drive head movement for increased performance and reliability.

SAN is Often Political. Simply by introducing SAN to a large organization we risk introducing new management, new skill sets, new job descriptions and, inevitably, confusion and paperwork.  By separating the storage from the server we create another point of coordination, keeping the system administrator from being a single point of contact and troubleshooting for system issues.

Anytime that we introduce a separation of duties we introduce company politics and a chain of communication.  Instead of troubleshooting a single system when a server goes down we have to, in the case of SAN, now consider the server, the SAN box and the connecting network, plus peripheral pieces like the host bus adapters and the local configuration.  What might otherwise be simple, almost meaningless changes, like the addition of another drive to expand a server’s capacity by a terabyte, can suddenly scale into major enterprise issues requiring much lead time, planning and expenditure.  And, of course, a system outage that used to take minutes to repair could easily become hours as company departments seek shelter rather than simply fixing the issues at hand.

SAN uses Additional Datacenter Footprint. Because almost any server already comes with internal storage capacity, the datacenter space needed by SAN equipment is generally redundant.  Until additional storage capacity is needed beyond that which can fit inside of the existing server chassis the SAN storage is completely additional within what are generally cramped and overutilized data centers.  In many cases when a server needs additional drive capacity SAN is still not necessarily a good option from a footprint perspective as many external drive array systems can be locally attached and use very little datacenter space.

SAN systems require more than simply physical space within the datacenter for their switching and storage pieces, they also require additional power and cooling.  In an era when we are fighting to make our datacenters as green as possible, SAN needs to be considered carefully with respect to its overall power draw.

SAN does not address Solid State Drives. Solid State Drive technology, or SSD, poses yet another obstacle for SAN in the enterprise.  SSD drives are much smaller capacity, currently, than traditional spindle based hard drives but often provide better performance at a fraction of the power consumption.  A traditional hard drive generally draws roughly fifteen watts while a standard SSD generally draws around one watt – a very significant power reduction indeed.

SSDs often have very high burstable transfer rates which swing the performance balance far in favor of the locally attached storage options based on their greatly superior throughput.  For example, a standard Hewlett-Packard DL385 G5 server, a very popular model, has eight 3Gb/s SAS channels available to it for a total aggregate of 24Gb/s – six times that of the most common SAN connections.

SANs which choose to use SSD, which is likely to take quite some time because SANs generally lean towards large capacity over performance, will suffer from a lack of available throughput but will have the benefit of eliminating almost all of the issues mentioned earlier in regards to drive contention from shared drive resources.

SAN is Confusing. While this factor comes into play less often, it still holds true that a majority of server “customers” – those people who utilize servers but are not the server or storage administrators – have a very poor understanding of SAN, NAS, DAS or filesystems in general, and by introducing SAN we can inadvertently introduce forms of complexity that cause communication and support issues.  While not an issue with SAN itself, in some cases technical confusion can impede adoption even when the technology is appropriate.

Bottom Line. SAN suffers from performance, organization, cost and issues of complexity while local storage is well understood, extremely inexpensive, simple to manage and offers extreme performance.  With rare exception, SAN, in my opinion, has little place competing with traditional direct attached storage options until DAS is unable to deliver necessary features such as resource sharing, certain types of replication, distance or capacity.

]]>
https://sheepguardingllama.com/2008/08/the-case-against-san/feed/ 0
Robert Dewar on Java (and College) https://sheepguardingllama.com/2008/08/robert-dewar-on-java-and-college/ https://sheepguardingllama.com/2008/08/robert-dewar-on-java-and-college/#respond Mon, 04 Aug 2008 23:43:10 +0000 http://www.sheepguardingllama.com/?p=2478 Continue reading "Robert Dewar on Java (and College)"

]]>
Two recent interviews with Prof. Robert Dewar of NYU, Who Killed the Software Engineer? and The ‘Anti-Java’ Professor, have been popular on the web and I wanted to add my own commentary to the situation.  These interviews arise from Dewar’s article in the Software Technology Support Center: Computer Science Education: Where are the Software Engineers of Tomorrow?  As someone who takes his role on a university computer science / computer information systems professional review board very seriously, I have spent much time considering these very questions.

Firstly, Prof. Dewar is hardly alone in his opinion that Java, as an indicator of the decline of computer science education in America, is destroying America’s software engineering profession.  The most popular example of someone with similar opinions would, of course, be the ubiquitous Joel Spolsky (of Joel on Software fame) in his Guerrilla Guide to Interviewing or in Stack Overflow Episode 2.

The bottom line in these arguments is not about Java itself but about the way in which colleges and universities teach computer science.  Computer science is an extremely difficult discipline, but universities will often substitute simple classes for core CS classes.  Dewar states that this is largely because enrollment has dropped off in these programs as the field has become less attractive and students choose lower-hanging educational fruit.  Universities put pressure on the departments to increase enrollment, often by lowering standards and eliminating hard requirements.  Difficult programming classes, like deep C or Assembler, also require more highly trained, and therefore expensive, instructors, so this too pushes academia to avoid teaching such courses.  A trained C or C++ developer has much better compensation prospects in the “real world” than in academia.

Java itself is a great language and no one, in this case, is saying that Java is not or should not be popular in real world development.  But Java is a language designed for rapid software creation and includes a staggering amount of built-in libraries.  Almost anything truly difficult has already been addressed by Sun’s own highly skilled developers and does not require reworking by a working developer.  Working with Java requires only a rudimentary knowledge of programming.  This, by its very nature, makes using Java as a learning environment a crutch.  Learning to program in Java is far too easy and many, perhaps most, programming concepts can be easily avoided or perhaps missed accidentally.  (Anything that I say here could apply to C# as well.  Both are great languages but extremely poor for teaching computer science.)

Far too often university computer science programs teach no language but Java.  Computer science students need many things including deeper system knowledge and a more widespread knowledge of different languages.  Computer science programs need to stop focusing on single, limited skill sets and start teaching the field of CS, and students need to stop accepting the easy way out and demand that their schools live up to the needs of the workplace!

While, by and large, I agree with Dewar whole-heartedly, he does have one comment that I find very disturbing – although very unlikely to be wrong.  He mentions, in more than one place, that Java is inappropriate as a “first language” as if computer science students at NYU and other universities are learning their first programming languages in college! This is an incredibly scary thought.

I guarantee that international students looking at careers in software engineering or computer science wouldn’t think of entering university without a substantial background in programming.  I can’t imagine a school like NYU ever considering such a case.  If we are allowing the entrance bar to be set so low, then can we even seriously consider what we teach, when apparently it matters very little?  Would we accept college students who didn’t do algebra in high school?  Didn’t speak English?  Knew no history?  Failed physics?  How then could we possibly consider allowing non-programmers into what should be one of the most difficult collegiate programs available, and how can we expect good, proficient programming students to learn something of value when forced to learn alongside complete beginners?

Dewar’s argument for the necessity of a higher standard of collegiate computer science education is that by dumbing down the curriculum and handing out meaningless degrees to anyone willing to pay for them (hasn’t this been my argument against the university system all along?) we are fooling ourselves into believing that we are training tomorrow’s workforce when, instead, we are simply accelerating the rate of globalization as developing countries see a massive opportunity to invest in core disciplines and outpace the United States at a breakneck pace.  Software development is a field with very little localization barrier inherent to the work, making it a prime candidate for offshoring, thanks also to the advanced communications commonly associated with its practitioners and the higher level of skills generally present in its management.  But by creating a gap in the American education system we are creating a situation that simply begs to be globalized, as our own country is mostly unable to produce qualified candidates.

Missing from many discussions about computer science curricula is the range of related IT programs, such as Information Technology and Computer Information Systems.  Computer Science is a very specific and very intense field of study – or so it is intended.  Only a very small percentage of Information Technology professionals should be considering a degree program in CS.  This is not the program for administrators, managers, network engineers, analysts, database administrators, web designers, etc.  Even a large number of programmers should be seriously considering other educational avenues rather than computer science.

There is a fundamental difference in the type of programming that a comp sci graduate is trained to perform compared to a CIS graduate, for example.  CIS programs, even those targeting programming, are not designed around “system programming” but are generally focused on more business-oriented systems, often including web platforms, client-side applications, etc.  CS is designed to turn out algorithm specialists, operating system programmers, database programmers – the kind of professionals that companies like Microsoft, Oracle and Google need in droves but not the type that the 300 seat firm around the corner needs or has any idea what to do with.  Those firms need CIS grads with a grasp of business essentials, platform knowledge and the ability to make good user interfaces rapidly.  These are very different job descriptions and the best people from either discipline may be pretty useless at the other.

All of this points to the obvious issue that companies need to start thinking about what it means to hire college graduates.  If all but a few collegiate programs are allowing CS programs to be nothing more than a few advanced high school classes in Java – why are we even looking at college degrees in the hiring process?  Hiring practices need to be addressed to stop blindly taking university degrees as having some intrinsic value.  We are in an era where the universities are wearing the emperor’s new clothes.  Everyone knows that the degrees are valueless but no one is willing to say it.  The system depends on it.  Too many people have invested too much time and money to admit now that nothing is being taught and that students leaving many university programs are nothing more than four or five years behind their high school friends who went straight to work and developed a lifelong ability to learn and advance, rather than drinking beer while standing on their heads and spending their parents’ or borrowed money.

Computer science departments need to start by developing a culture of self respect.  Teaching Java is not bad, but a CS grad should have, perhaps, one class in Java and/or C#, not a curriculum based around it.  Knowledge of leading industry languages like Java is important so that students have some feel for real world development, but a CS degree is not preparing many students for work in Java-based application development but for systems programming, which is almost exclusively done in C, C++ or Objective-C.

]]>
https://sheepguardingllama.com/2008/08/robert-dewar-on-java-and-college/feed/ 0
Installing Fedora 9 Linux in VirtualPC https://sheepguardingllama.com/2008/07/installing-fedora-9-linux-in-virtualpc/ https://sheepguardingllama.com/2008/07/installing-fedora-9-linux-in-virtualpc/#comments Fri, 25 Jul 2008 02:46:54 +0000 http://www.sheepguardingllama.com/?p=2462 Continue reading "Installing Fedora 9 Linux in VirtualPC"

]]>
If you are using Microsoft’s VirtualPC 2007 as a host for installing Red Hat’s Fedora 9 Linux (aka Sulphur) distribution you may have run into a few problems.  The first problem that plagues just about anyone attempting to install the latest versions of Linux (not just Fedora) is that of auto-detected virtualization.  To overcome this problem we have to forcibly disable paravirtualization.  This is easier than it sounds.

When the initial Fedora 9 menu comes up after you boot from the install CD ISO image, that is the “Welcome to Fedora 9!” menu, you will need to press [tab] in order to be able to manually edit the boot options.  You only get 60 seconds to press [tab] after the menu comes up so pay attention.

If you pressed [tab] you will get a line that looks roughly like this:

> vmlinuz initrd=initrd.img

This is the boot options line that you can modify.  Simply add the option “noreplace-paravirt” and your installation will go much more smoothly.  The line should look like this when you are done:

> vmlinuz initrd=initrd.img noreplace-paravirt

In my own installation experience I had some problems with the native text mode of Fedora 9 not displaying correctly.  “Normal” X Window operations were not a problem.  Some installations, however, will run only in text mode, which works fine during the initial setup but then drops into the bad screen modes after the installation completes.

If you set your memory level too low (I made the mistake of trying to use only 128MB) then full graphical installation mode will not be possible and the problem will arise.  Increase memory allotment to at least 192MB to allow graphical mode to be used.  256MB is recommended.  The graphical install should work just fine.  [All specs are for the x86 32bit architecture version of Fedora 9 as this is the architecture used for VirtualPC.]

Thanks to Sean of “The Sean Blog” over at TechNet for pointing us in the right direction on this one!

Installation requirements for Fedora 9 can be found at Red Hat’s Fedora 9 Architecture Specification page.

]]>
https://sheepguardingllama.com/2008/07/installing-fedora-9-linux-in-virtualpc/feed/ 1
Dual Head OpenSUSE 11 on the HP dx5150 https://sheepguardingllama.com/2008/07/dual-head-opensuse-11-on-the-hp-dx5150/ https://sheepguardingllama.com/2008/07/dual-head-opensuse-11-on-the-hp-dx5150/#comments Thu, 24 Jul 2008 03:10:08 +0000 http://www.sheepguardingllama.com/?p=2459 Continue reading "Dual Head OpenSUSE 11 on the HP dx5150"

]]>
One of my favourite workhorse platforms is the Hewlett-Packard HP Compaq dx5150 desktop with the AMD Athlon64 processor and ATI Radeon Xpress 200 chipset. I’ve used this model for many years with a variety of operating systems. I recently installed Novell’s OpenSUSE 11 on one of my dx5150 units, to which I have attached two identical Samsung SyncMaster 204B monitors. Getting OpenSUSE to support both monitors at once was a bit problematic and finding the necessary resources took some digging, so I decided to share the solution here to make it easier for other hapless souls to stumble across.

What appears to happen to most people is that they use the Yast and Sax combination of tools to no effect and become discouraged. Many then attempt to load the ATI fglrx drivers and find that after doing so they are unable to get anything but a blank screen. This was my experience as well.

The final solution was actually very simple and painless and was actually described on this site hosted by Novell specifically for OpenSUSE: Multiple Screens Using XRandR.  What is difficult is discovering if this set of information is the correct set for the dx5150.  It is.

First, give up on the fglrx driver and use the “radeonrandr12” driver instead. Then add a Virtual setting to the /etc/X11/xorg.conf file that covers the size of both (or all) of your monitors combined. In my case, with two 1600×1200 LCDs side by side, that was 3200 by 1200. So the following line had to be added to each “Display” subsection:

Virtual 3200 1200

And change the “Driver” line to:

Driver “radeonrandr12”
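For reference, the relevant portion of /etc/X11/xorg.conf might look roughly like this with both changes applied.  The Identifier values and the Depth entry here are illustrative placeholders – keep whatever your generated file already contains and change only the Driver and Virtual entries:

```
Section "Device"
    Identifier "Configured Video Device"
    Driver     "radeonrandr12"
EndSection

Section "Screen"
    Identifier "Default Screen"
    Device     "Configured Video Device"
    SubSection "Display"
        Depth    24
        Virtual  3200 1200
    EndSubSection
EndSection
```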

Then restart the X server – easiest thing to do is to log out and back in again.  Once you are back in you can open up the command line and start playing with the simple xrandr command to change your monitor configuration.

You can learn more about the xrandr options with the --help option.  The correct command for me to have my two monitors appear side by side with one large desktop is:

xrandr --auto --output VGA-0 --mode 1600x1200 --right-of DVI-0

With OpenSUSE 11 installed on the dx5150, the two monitor adapters available to you natively off of the ATI Radeon Xpress 200 integrated chipset are VGA-0 and DVI-0.  This makes them very simple to work with.
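Once the outputs are known, a few other handy variations on the command are possible.  These use standard XRandR 1.2 options and the output names above, so adjust to taste:

```
# Mirror the second monitor instead of extending the desktop:
xrandr --output VGA-0 --mode 1600x1200 --same-as DVI-0

# Turn the second head off entirely:
xrandr --output VGA-0 --off

# List the available outputs and their supported modes:
xrandr -q
```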

Novell maintains another document about working with the ATI Radeon Xpress 200 chipset and OpenSUSE 11 but I found, as did many other people, that this documentation did not work for this particular set of hardware.

]]>
https://sheepguardingllama.com/2008/07/dual-head-opensuse-11-on-the-hp-dx5150/feed/ 4
AppleTV Architecture https://sheepguardingllama.com/2008/07/appletv-architecture/ https://sheepguardingllama.com/2008/07/appletv-architecture/#comments Fri, 11 Jul 2008 20:09:46 +0000 http://www.sheepguardingllama.com/?p=2444 Continue reading "AppleTV Architecture"

]]>
If you spend any time reading Apple’s literature you will discover that they have an intended architecture for their AppleTV devices.  I was surprised to learn that Apple’s idealized concept for their media device was so completely different from how I had envisioned its use.

Apple sees the AppleTV as a centralized media consumption device.  Obviously the AppleTV is targeted at tech-savvy home users and, from what I have seen of Apple’s official advertising, they expect a multi-computer home to have iTunes running on several computers (in bedrooms, home office, etc.) and a single AppleTV unit placed in the home’s “media center” location, attached to a large screen display and surround sound audio system.  Under this design the AppleTV is the media consumer interface to all of the home’s computing resources.

I am sure that for many potential AppleTV customers, especially those already very much entrenched in the ubiquitous use of Apple’s iTunes, this model may make sense.  A family of four could have a Mac or a PC in the parents’ bedroom and in each of the children’s bedrooms, each running iTunes and containing the individual users’ personal audio and video files.  Then a single AppleTV device placed in the living room or den and hooked to a big screen LCD high-definition television display and a surround sound audio system could be used for serious viewing or family time, and the individual computers could be used for personal viewing or listening.

This model makes a lot of sense, especially in a home where all users have computers available to them and each person is likely to want to maintain their own repository of media.  In many cases, however, I believe that this may not be the optimum approach.  This “centralized AppleTV – decentralized media” approach leaves much to be desired, by my reasoning, for the average media consuming family.

My proposed architecture is based on the theory of “decentralized AppleTVs – centralized media.”  I feel that more often it will be a better use of resources to have many AppleTVs located throughout the home wherever media consumption is desired.  For example, having an AppleTV in each bedroom and in the living room and/or den.  Then, to support the AppleTV units, one single Mac or PC computer running iTunes would be used as a centralized “media server” so that all files are managed from a single location.  This gives each AppleTV throughout the home access to the entire family media archives very simply.

Of course you can use Mac desktops running Front Row to replace specific instances of the AppleTV.  This can allow for mixing and matching additional functionality as needed without disrupting the base home media architecture.  This system allows every room to use movies and music through a dedicated “entertainment” machine while the desktop computers, if they exist, can be used solely for computing and will not have to share resources – most notably screen real estate – with video content.

Storage of media under Apple’s proposed architecture requires each computer user to choose, store and protect their own media.  This means that each computer must be treated as a valuable resource and requires dramatically more long term media management.  It also means that there is a likelihood of media duplication throughout the house.  If every family member wants to be able to watch Disney’s The Little Mermaid when they are going to bed at night then each computer has to have its own copy of the movie.  It only takes a handful of movies before this causes significant storage bloat.
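To put a rough number on that bloat, here is a quick back-of-the-envelope calculation.  All of the figures are assumptions chosen purely for illustration, not measurements:

```python
# Back-of-the-envelope comparison of the two storage architectures.
# All numbers are illustrative assumptions, not measurements.
computers = 4          # one machine per family member
shared_movies = 20     # titles everyone wants a local copy of
gb_per_movie = 1.5     # a typical iTunes-quality feature film

decentralized_gb = computers * shared_movies * gb_per_movie  # one copy per machine
centralized_gb = shared_movies * gb_per_movie                # one shared copy

print(f"Decentralized storage used: {decentralized_gb:.0f} GB")
print(f"Centralized storage used:   {centralized_gb:.0f} GB")
print(f"Duplicated (wasted):        {decentralized_gb - centralized_gb:.0f} GB")
```

Even with these modest numbers the decentralized model stores four times the data for the same library.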

Under my proposed architecture you can simply use the “media server’s” internal disk for media storage, or if you grow beyond that point you can install a larger drive or just attach external hard drives.  If you have serious storage needs then you can back the iTunes application with an external storage system such as a NAS device.  Consumer grade NAS devices start under $1,000 and it is not financially unreasonable to move to custom server-based storage solutions which can easily hit 14TB today and will scale far beyond this in the near future.  (For reference, a typical new desktop machine today holds around 0.16TB with the largest drives being just 1TB – so 14TB is a significant amount of storage.)

Possibly the biggest advantage of having centralized media storage is that backups are very, very simple.  There is no bloat as there are not multiple copies of the same files floating around in different locations, and backups are only necessary from the media store (either the local drive, the external drives or the NAS device.)

In a previous article I discussed using the AppleTV as a means of controlling content being made available to children.  Apple’s architecture does not really take this advantage of their own system into account, but under my architecture children can safely have an AppleTV installed into their bedrooms with them having unlimited access to it without any worries that they will be able to access unintended content using it.

]]>
https://sheepguardingllama.com/2008/07/appletv-architecture/feed/ 1
Choosing a Linux Distro in the Enterprise https://sheepguardingllama.com/2008/07/choosing-a-linux-distro-in-the-enterprise/ https://sheepguardingllama.com/2008/07/choosing-a-linux-distro-in-the-enterprise/#respond Thu, 10 Jul 2008 17:27:22 +0000 http://www.sheepguardingllama.com/?p=2337 Continue reading "Choosing a Linux Distro in the Enterprise"

]]>
Linux is popular in big business today. Linux has not been the sole purview of the geek community for a long time now; it is a solid, core piece of today’s mainstream IT infrastructure. That being said, Linux is still plagued by confusion over its plethora of distributions. This being the case, I have decided to weigh in with some guidance for businesses looking to use Linux in their organizations.

For those unfamiliar with the landscape, Linux is a family of operating systems that are generally considered to fall under the Unix umbrella, although Linux is legally not Unix, just highly Unix-like. Individual Linux packages are referred to as distributions, or distros for short. Unlike Windows or Mac OS X, which come from a single vendor, Linux is available from many commercial vendors as well as from non-profit groups and individual distribution makers. Instead of there being just one Linux there are actually hundreds or thousands of different distributions. Each one is different in some way. This creates choice but also confusion. To make matters even worse, some major vendors such as Red Hat and Novell release more than one Linux distribution targeted at different markets, and within a single distribution will often package features separately. This myriad of choices, confronting you before you have even acquired your first installation disc, does not help Linux uptake in companies go any faster.

In reality the choices for business use are few and obvious with a little bit of research. To make things easier for you, I will just tell you what you need to know. Problem solved. Now if only managing your Linux environment could be so easy!

Before we get started I want to stress that this article is about using Linux for enterprise infrastructure – that is, as a server operating system in a business. I am not looking into desktop Linux or high performance computational clusters and grid or specialty applications or home use. This article is about standard, traditional server applications that require stability, up time, reliability, accessibility, manageability, etc. If you are looking for my guide to the “ultimate Linux desktop environment”, this isn’t it. Desktops, even in the enterprise, do not necessarily have the same criteria as servers. They might, but not necessarily so.

When choosing a distribution for servers we must first consider the target purpose of the distro. Only a handful of Linux distros are built with the primary purpose of being used as a server. If your distro maintainer does not have the same principles in mind that you do it is probably best to avoid that distro for this particular purpose. Server distributions target longer time between releases, security over features, stability over features, rapid patching, support, documentation, etc.

In addition to targeting the distribution in harmony with our own goals we also need to work with a company that is reliable, has the resources necessary to support the product and has a track record with a successful product. Choosing a distribution is a vendor selection process. There are three key enterprise players in the Linux space: Red Hat, Novell and Canonical.

For many Red Hat is synonymous with Linux, having been one of the earliest American Linux distributions and having been a driving force behind the enterprise adoption of Linux globally. Red Hat makes “Red Hat Enterprise Linux”, known widely as RHEL, as well as Fedora Linux. Red Hat is the biggest Linux vendor and important in any Linux vendor discussion.

Novell is the second big Linux vendor having purchased German Linux vendor SUSE some years ago. Novell makes two products as well, Suse Linux Enterprise and OpenSUSE.

The third big Linux enterprise vendor is Canonical well known for the Ubuntu family of Linux distributions. While the Ubuntu distro family includes many members we are only interested in discussing Canonical’s own Ubuntu LTS distribution. LTS stands for “Long Term Support” and is effectively Canonical’s server offering. Their approach to versioning and packaging is quite different from Red Hat and Novell and can be rather confusing.

Before we become overwhelmed with choices (we have presented five so far) we have one here that we can eliminate immediately. Red Hat’s Fedora is not an “enterprise targeted” distribution. It is a “testing” and “community” platform designed primarily as a desktop and research vehicle and not as a stable server operating system. To be sure, it is extremely valuable, a great contribution to the Linux community, and has its place, but as a server operating system it does not shine. Nevertheless, without Fedora as a proving ground for new technologies it is unlikely that Red Hat Enterprise Linux would be as robust and capable as it is.

We can also effectively eliminate OpenSUSE.  OpenSUSE is the unsupported, community driven sibling to Novell SUSE Linux Enterprise.  However, unlike Fedora, which is an independent product from RHEL, OpenSUSE is the same code base as SUSE Linux Enterprise but without Novell’s support.  This is a great advantage to the SUSE product line as there is a very large base of home and hobby users in addition to the enterprise users, all using the exact same code and finding bugs for each other.  Going forward we will only consider SUSE Linux Enterprise, as support is a key factor in the enterprise.  But OpenSUSE, for shops not needing commercial support from the vendor, is a great option as the product is the same, stable release as the supported version.

So we are left with three serious competitors for your enterprise Linux platform: Red Hat Linux, Novell Suse Linux and Ubuntu LTS. All three of these competitors are solid, reliable offerings for the enterprise. Red Hat and Novell obviously have the advantage of having been in the server operating system market for a long time and have experience on their side. But Canonical has really made a lot of headway in the last few years and is definitely worth considering.

Red Hat Linux and Suse Linux Enterprise have a few key advantages over Ubuntu. The first is that they both share the standard RPM package management system. Because RPM is the standard in the enterprise it is well tested and understood, and most Linux administrators are well versed in its functionality. Ubuntu uses the Debian-based package format, which is far less common, and finding administrators with existing knowledge of it is far less likely – although this is changing rapidly as Ubuntu has recently become the leading home desktop Linux distribution.
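For administrators weighing the two camps, the difference shows up in day-to-day commands.  Here is a rough equivalence between the low-level package tools (leaving aside the higher-level yum, zypper and apt front ends):

```
# RPM world (RHEL, Suse)        # Debian world (Ubuntu)
rpm -qa                         dpkg -l                 # list installed packages
rpm -ivh package.rpm            dpkg -i package.deb     # install a local package file
rpm -qf /path/to/file           dpkg -S /path/to/file   # find which package owns a file
rpm -ql packagename             dpkg -L packagename     # list the files in a package
```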

In general, Red Hat Linux and Suse Linux Enterprise have more in common with each other making them able to share resources more easily and giving administrators a broader platform to focus skills upon. This is a significant advantage when it comes time to staff up and support your infrastructure.

Ubuntu suffers from having a direct tie to a “non-enterprise” operating system that is particularly popular with the desktop “tweaking” crowd.  Unlike Red Hat and Suse, Ubuntu is coming at the enterprise from the home market and brings a stigma with it.  Administrators trained on RHEL, for example, tend to be taught enterprise type tasks performed in a businesslike manner.  Administrators with Ubuntu experience tend to be home users who have been running Linux for their own desktop and entertainment tasks.  This makes the interview and hiring process that much more difficult.  This is in no way a slight against the Ubuntu LTS product, which is an amazing, enterprise-ready operating system that should seriously be considered, but shops need to be aware that the vast majority of Ubuntu users are not enterprise system administrators and their experience may be mostly from a non-critical, desktop focused role.  It is rare to find anyone running RHEL or Suse Linux in this manner.

In my own experience, having software popular with home users in the enterprise also brings in factors of misguided user expectations.  Users expect the enterprise installations to include any package that the users can install at home and that update cycles be similar.  This can cause additional headaches although the Windows world has been dealing with these issues since the beginning.

At this point you have probably noticed that choosing either Suse or Ubuntu leaves you with the option of both free and fully supported versions, direct from the vendors.  This is a major feature of these distributions because it provides a great cost savings and greater flexibility.  For example, development machines can be run on OpenSUSE and production machines on Suse Enterprise, lowering the overall cost if full support isn’t necessary for development environments.  You can run labs on free versions for learning and testing, or pay for support only for critical infrastructure pieces.  Or, if you are really looking to save money or feel that your internal support is good enough, running completely on the free, unsupported versions is a viable option because you are still using the stable, enterprise-class code base.

Red Hat, as a vendor, does not supply a freely available edition of Red Hat Enterprise Linux.  Instead, they make their code repositories available to the public and expect interested parties to build their own version of RHEL using these repositories.  If you are interested in a freely available version of RHEL, look no further than CentOS.

CentOS, or the Community ENTerprise Operating System, is a code-identical rebuild of RHEL.  It is identical in every way except for branding.  CentOS is completely free – but unsupported.  CentOS is used in organizations of all sizes exactly as a free copy of RHEL would be expected to be used, and many businesses choose to run CentOS exclusively.  As RHEL is the most popular Linux distribution in large businesses and the commercially supported version is rather expensive, CentOS also provides a very important resource to the community by allowing new administrators to experience RHEL at home without the expense of unneeded support.

Choosing between the Red Hat, Suse and Ubuntu families is much more difficult than whittling the list down to these three.  In many cases choosing between these three will be based upon cost, application demands, existing administration experience and features.  It is not uncommon for larger businesses to use two or possibly all three of these distributions as features are needed, but most commonly a single distribution is chosen for ease of management.  All three distributions are solid and capable.

Another potentially deciding factor is whether your enterprise is considering using Linux on the desktop.  While RHEL can be used as a desktop operating system it is generally considered to be substantially weaker than Suse and Ubuntu when it comes to desktop environments.  Because of this, Fedora is generally seen as Red Hat’s desktop option, but Fedora is not supported by Red Hat in that role, nor does it share a code base with RHEL, making support somewhat less than unified even though the two are very similar.

For mixed server and desktop environments, Suse and Ubuntu have a very strong lead.  Both of these distributions focus a great many resources onto their desktop systems and they keep these components very much up to date and pay great attention to the user experience.  For a small company that can manage to use only one single distribution on every machine that they own this can be a major advantage.  Homogeneous environments can be extremely cost effective as a much narrower skill set is needed to manage and support them.

In conclusion: Red Hat Enterprise Linux, Novell Suse Enterprise and Ubuntu LTS, in both their supported versions and their free versions (CentOS in the case of RHEL, OpenSUSE in the case of Novell; Ubuntu uses the same packages for both), all represent great opportunities for the data center.  Do not be lulled into using non-enterprise Linux distributions because they are cool, flashy or popular.  Linux lends itself to being in the news often and to generating excess hype.  None of these things are good indicators of data center stability.  The data center is a serious business component and should not be treated lightly.  Linux is a great choice for the corporate IT department, but you will be very unhappy if you pick your backbone server architecture based on its popularity as a gaming platform rather than on its uptime and management cost.

]]>
https://sheepguardingllama.com/2008/07/choosing-a-linux-distro-in-the-enterprise/feed/ 0
Business Analyst Reading List https://sheepguardingllama.com/2008/07/business-analyst-reading-list/ https://sheepguardingllama.com/2008/07/business-analyst-reading-list/#respond Tue, 08 Jul 2008 22:43:23 +0000 http://www.sheepguardingllama.com/?p=2440 Continue reading "Business Analyst Reading List"

]]>
This page is a work in progress.  I am beginning to compile a rather definitive Business Analyst reading list and will be updating this page with books and short descriptions and reviews.

The Business Analysis Essential Library from Management Concepts is a complete series on Business Analysis and the Business Analyst as a career which should not be missed:

My experience with textbooks is rather minimal but I have used three editions of the Whitten/Bentley text (5th, 6th and 7th) and have been very happy with it.  I have used a few other texts throughout my undergrad and graduate studies in systems analysis and feel that Whitten/Bentley is by far the best that I have used directly.

]]>
https://sheepguardingllama.com/2008/07/business-analyst-reading-list/feed/ 0
Firefox 3.0 is Out! https://sheepguardingllama.com/2008/06/firefox-30-is-out/ https://sheepguardingllama.com/2008/06/firefox-30-is-out/#respond Tue, 17 Jun 2008 19:49:47 +0000 http://www.sheepguardingllama.com/?p=2414 Continue reading "Firefox 3.0 is Out!"

]]>
Download Firefox 3.0 for free right now!

Firefox is the free, open source web browsing alternative.  It is currently the second most popular web browser and is fast, flexible and secure.  Highly recommended.  Go get it and try it out.  If you have used Firefox in the past, be sure to check out version 3.0 as it is much faster than anything you are used to, including FF2.

]]>
https://sheepguardingllama.com/2008/06/firefox-30-is-out/feed/ 0
Singleton Pattern in C# https://sheepguardingllama.com/2008/05/singleton-pattern-in-c/ https://sheepguardingllama.com/2008/05/singleton-pattern-in-c/#comments Fri, 09 May 2008 22:02:16 +0000 http://www.sheepguardingllama.com/?p=2370 Continue reading "Singleton Pattern in C#"

]]>
Implementing the Gang of Four Singleton Pattern in Microsoft’s C# .NET language is nearly identical to its Java implementation. The Singleton Pattern (as defined by the Gang of Four in 1995) is to “ensure a class only has one instance, and provide a global point of access to it.”

The idea behind the Singleton pattern is that there can only be one unique object of the class in existence at any one time. To make this possible we must make the constructor private and instead create a public “get” accessor for the Instance reference that controls access to the constructor, returning a new object if none exists or returning the already instantiated object if one does.

Here we have an example of the simplest possible Singleton class, which I call Singleton. It has only those methods absolutely necessary for the pattern and one console print statement in the constructor so that we can easily see when the constructor is called. I have opted to include Singleton and TestSingleton, its testing harness, in a single file. This allows me to easily demonstrate how to create the Singleton pattern and how to instantiate a Singleton object from outside the class.

TestSingleton.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace SingletonPattern
{
    class TestSingleton
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Calling First Instance: ");
            Singleton mySingleton = Singleton.Instance;
            Console.WriteLine("Calling Second Instance: ");
            Singleton myOtherSingleton = Singleton.Instance;
        }
    }
    public class Singleton
    {
        private static Singleton instance;   // the single shared instance

        // Private constructor prevents instantiation from outside the class.
        private Singleton()
        {
            Console.WriteLine("Singleton Constructor Called");
        }

        // Lazily creates the instance on first access; afterwards the
        // same object is always returned.
        public static Singleton Instance
        {
            get
            {
                if (instance == null)
                {
                    instance = new Singleton();
                }
                return instance;
            }
        }
    }
}

Microsoft has a great article discussing the Singleton Pattern and its common variations as they pertain to the C# language in the MSDN Patterns and Practices Developers Center – Implementing Singleton in C#.

Also see: Sheep Guarding Llama Singleton Pattern in Java

]]>
https://sheepguardingllama.com/2008/05/singleton-pattern-in-c/feed/ 1
Singleton Pattern in Java https://sheepguardingllama.com/2008/05/singleton-pattern-in-java/ https://sheepguardingllama.com/2008/05/singleton-pattern-in-java/#comments Fri, 09 May 2008 18:58:02 +0000 http://www.sheepguardingllama.com/?p=2369 Continue reading "Singleton Pattern in Java"

]]>
Implementing a Singleton Class Pattern in Java is a common and easy task. The Singleton Pattern (as defined by the Gang of Four in 1995) is to “ensure a class only has one instance, and provide a global point of access to it.”

The idea behind the Singleton pattern is that there can only be one unique object of the class in existence at any one time. To make this possible we must make the constructor private and instead create a public getInstance() method that controls access to the constructor, returning a new object if none exists or returning the already instantiated object if one does. We must also override the clone() method from the Object superclass, as this oft-forgotten method would otherwise provide a workaround to our Singleton protection.

Here we have an example of the simplest possible Singleton class, which I call Singleton. It has only those methods absolutely necessary for the pattern and one console print statement in the constructor so that we can easily see when the constructor is called.

Singleton.java

public class Singleton {
     private static Singleton instance;   // the single shared instance

     // Private constructor prevents instantiation from outside the class.
     private Singleton() {
          System.out.println("Singleton Constructor Called");
     }

     // Lazily creates the instance on first call; afterwards the same
     // object is always returned. synchronized keeps this thread-safe.
     public static synchronized Singleton getInstance() {
          if (instance == null)
               instance = new Singleton();
          return instance;
     }

     // Refuse cloning, which would otherwise bypass the Singleton guarantee.
     public Object clone() throws CloneNotSupportedException {
          throw new CloneNotSupportedException();
     }
}

Now that we have a working Singleton class we need to make a simple test harness to see how we can call it and how it behaves. In our test we will simply create two objects, mySingleton and myOtherSingleton and we will see when the constructor method is called.

TestSingleton.java

public class TestSingleton {
        public static void main (String[] args) {
                System.out.print("Calling First Instance: ");
                Singleton mySingleton = Singleton.getInstance();
                System.out.print("Attempting to Call Again: ");
                Singleton myOtherSingleton = Singleton.getInstance();
        }
}

Hopefully this will help you write quick and easy Singleton pattern classes in the future.

See also: Sheep Guarding Llama Singleton Pattern in C#

]]>
https://sheepguardingllama.com/2008/05/singleton-pattern-in-java/feed/ 1
Ignite Realtime Spark IM Can’t Open Web Links in Linux https://sheepguardingllama.com/2008/04/ignite-realtime-spark-im-client-cant-open-web-links-in-linux/ https://sheepguardingllama.com/2008/04/ignite-realtime-spark-im-client-cant-open-web-links-in-linux/#comments Sun, 06 Apr 2008 21:08:19 +0000 http://www.sheepguardingllama.com/?p=2330 Continue reading "Ignite Realtime Spark IM Can’t Open Web Links in Linux"

]]>
In the latest version of the Java based Jabber/XMPP Instant Messaging client Spark by Ignite Realtime / Jive Software (Spark 2.5.8) there is a “bug” that keeps Spark from being able to open hyperlinks properly. According to AGomez in the Spark discussion group, the client is hardcoded to point to Netscape which is, of course, a deprecated and minimally used web browser highly unlikely to be found on any modern computer. While this is annoying and keeps Spark from working out of the box with most Linux desktops, it is not insurmountable. Since Ignite Realtime has stated that they do not expect to release a new version of the Spark client on Java due to their work on a different platform for IM clients, we will need to work around this issue until the community decides to pick up the Spark code and release a new version on their own. At the very least the hard-coded browser should be Firefox if browser detection is not going to be used.

To fix this issue you can simply run one of the following commands as the root user and, in my experience, Spark will be able to open hyperlinks just fine. I have tested this on OpenSUSE 10.3.

For Firefox: ln -s `which firefox` /usr/bin/netscape

For Konqueror: ln -s `which konqueror` /usr/bin/netscape
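
A slightly more defensive variant of the same workaround refuses to clobber anything that already exists at the link location. This is only a sketch: on a real system LINK would be /usr/bin/netscape (requiring root), while a /tmp path is used here so it can be tried unprivileged.

```shell
#!/bin/sh
# Create the "netscape" alias only if nothing is already there.
# LINK is /usr/bin/netscape on a real system; /tmp is used here
# so the sketch can run without root.
LINK=/tmp/netscape.demo
TARGET=$(command -v firefox || echo /usr/bin/firefox)
rm -f "$LINK"                     # demo only: start from a clean slate
if [ ! -e "$LINK" ]; then
    ln -s "$TARGET" "$LINK"
fi
readlink "$LINK"                  # shows where the alias points
```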

]]>
https://sheepguardingllama.com/2008/04/ignite-realtime-spark-im-client-cant-open-web-links-in-linux/feed/ 3
Linux Processor Ignored https://sheepguardingllama.com/2008/04/linux-processor-ignored/ https://sheepguardingllama.com/2008/04/linux-processor-ignored/#comments Fri, 04 Apr 2008 23:42:06 +0000 http://www.sheepguardingllama.com/?p=2328 Continue reading "Linux Processor Ignored"

]]>
WARNING: NR_CPUS limit of 1 reached. Processor ignored.

Not exactly the error message that you were hoping to see when checking your dmesg logs.  Don’t panic, this is easily remedied.  If you are wondering how to check your own Linux system for this error you can look by using this command:

dmesg | grep -i cpu

This error occurs on a multiple logical processor system when a uniprocessor kernel is loaded.  What the error indicates is that one CPU is being used and that more have been found but are being ignored.  The system should come online correctly but with only a single logical CPU.  (For a detailed discussion on logical processors see CPUs, Cores and Threads.)

In today’s market full of multi-core CPU products and hyperthreading this error message has moved from the exclusive realm of multi-socket servers to the home desktop and laptop.  It is now a potentially common sight for many casual Linux users.

To correct this issue on a Red Hat, CentOS or Fedora Linux system all you will need to do is make a simple change to your GRUB configuration to tell it to point to a symmetric multiprocessing (smp) kernel rather than the uniprocessor kernel. The file that you will need to edit is /etc/grub.conf.  After some header comments the beginning of your file should look something like this:

default=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.9-67.0.7.plus.c4smp)
     root (hd0,0)
     kernel /vmlinuz-2.6.9-67.0.7.plus.c4smp ro root=/dev/VG0/LV0
     initrd /initrd-2.6.9-67.0.7.plus.c4smp.img
title CentOS (2.6.9-67.0.7.plus.c4)
     root (hd0,0)
     kernel /vmlinuz-2.6.9-67.0.7.plus.c4 ro root=/dev/VG0/LV0
     initrd /initrd-2.6.9-67.0.7.plus.c4.img

The GRUB configuration file can appear daunting at first but, in reality, it is quite simple to deal with.  The only line we are concerned with modifying is the “default” line value.  In this case it is set to 1.  The grub.conf file contains a list of available kernels for us to use.  We may have just one or possibly several, maybe even dozens.  In this case we see two.  You can see here that we have a CentOS 2.6.9 c4smp and a CentOS 2.6.9 c4 kernel.  You only need to be concerned with the “title” lines.  These are your kernel titles.  Normally the kernels of most interest will be at the top of the file.

You can check the name of the kernel that you are currently running by issuing:

uname -a

The first title line is kernel “0”, the second is kernel “1”, the next “2” and so forth.  Right now our “default” value is pointing to “1” which is the second kernel from the top and, as you will notice, not an smp kernel (therefore it is a uniprocessor kernel.)  In this case all we need to do is change the “default” value from “1” to “0” so that it now points to the first kernel option which for us is the smp kernel.
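
For those more comfortable scripting the change than editing by hand, the edit boils down to a one-line sed substitution. This sketch runs against a scratch copy so it can be tried safely; on a real system the file is /etc/grub.conf and deserves a backup before you touch it.

```shell
#!/bin/sh
# Switch GRUB's default kernel from entry 1 to entry 0, shown against
# a scratch copy of grub.conf (the real file is /etc/grub.conf).
cat > /tmp/grub.conf.demo <<'EOF'
default=1
timeout=5
title CentOS (2.6.9-67.0.7.plus.c4smp)
title CentOS (2.6.9-67.0.7.plus.c4)
EOF
cp /tmp/grub.conf.demo /tmp/grub.conf.demo.bak      # always keep a backup
sed -i 's/^default=1/default=0/' /tmp/grub.conf.demo
grep '^default=' /tmp/grub.conf.demo                # now reads: default=0
```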

After your grub.conf file has been saved you may reboot the Linux system.  If all goes well it will return to you with additional logical processors enabled.  You can verify the name of the loaded kernel with the command given above.

]]>
https://sheepguardingllama.com/2008/04/linux-processor-ignored/feed/ 2
Linux’ kscand https://sheepguardingllama.com/2008/04/linux-kscand/ https://sheepguardingllama.com/2008/04/linux-kscand/#comments Wed, 02 Apr 2008 21:54:40 +0000 http://www.sheepguardingllama.com/?p=2318 Continue reading "Linux’ kscand"

]]>
In Linux the kscand process is a kernel process which is a key component of the virtual memory (vm) system.  According to Unix Tech Tips & Tricks’ excellent Understanding Virtual Memory article “The kscand task periodically sweeps through all the pages in memory, taking note of the amount of time the page has been in memory since it was last accessed. If kscand finds that a page has been accessed since it last visited the page, it increments the page’s age counter; otherwise, it decrements that counter. If kscand finds a page with its age counter at zero, it moves the page to the inactive dirty state.”

For the majority of Linux users and even system administrators on large servers this kernel process requires no intervention.  It is a simple process that works in the background doing its job well.  Nonetheless, under certain circumstances it can become necessary to tune kscand in order to improve system performance in a desirable way.

Issues with kscand are most likely to arise in a situation where a Linux box has an extremely large amount of memory and will be even more noticeable on boxes with slower memory.  The most notable is probably the HP Proliant DL585 G1, which can support 128GB of memory but in doing so drops the memory speed to a paltry 266MHz.  I first came across this particular issue on a server with 32GB of memory with approximately 31.5GB of it in use.  No swap space was being used and most of the memory was being used for cache, so there was no strain on the memory system, but the total amount of memory being scanned by the kscand process is where the issue truly lies.

Even on a busy server with gobs of memory (that’s the technical term) it would be extremely rare that kscand would cause any issues.  It is a very light process that runs quite quickly.  You are most likely to see kscand as a culprit when investigating problems with latency sensitive applications on memory intensive servers.  The first time that I came across the need to tune kscand was while diagnosing a strange latency pattern of network traffic going to a high-performance messaging bus.  The latency was minor but small spikes were causing concern in the very sensitive environment.  kscand was spotted as the only questionable process receiving much system attention during the high latency periods.

Under normal conditions, that is default tuning, kscand will run every thirty seconds and will scan 100% of the system memory looking for memory pages that can be freed.  This sweep is quick but can easily cause measurable system latency if you look carefully.  Through careful tuning we can reduce the latency caused by this process but we do so as a tradeoff with memory utilization efficiency.  If you have a box with significantly extra memory or extremely static memory, such as large cache sizes that change very slowly, you can safely tune away from memory efficiency towards low latency with nominal pain and good results.

kscand is controlled by the proc filesystem with just the single setting of /proc/sys/vm/kscand_work_percent. Like any kernel setting this can be changed on the fly on a live system (be careful) or can be set to persist through reboots by adding it to your /etc/sysctl.conf file.  Before we make any permanent changes we will want to do some testing.  This kernel parameter tells kscand what percentage of the system memory to scan each time that a memory scan is performed.  Since it is normally set to 100, kscand normally scans all in-use memory each time that it is called.  You can verify your current setting quite easily.

cat /proc/sys/vm/kscand_work_percent

A good starting point with kscand_work_percent is to set it to 50.  A very small adjustment may not be noticeable, so comparing 100 and then 50 should provide a good starting point for evaluating the changes in system performance.  It is not recommended to set kscand_work_percent below 10 and I would be quite wary of dropping even below 20 unless you truly have a tremendous amount of unused memory and your usage is quite static.

echo 50 > /proc/sys/vm/kscand_work_percent
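
When experimenting it is worth recording the original value so the change can be rolled back. The round trip looks like the sketch below, with a scratch file standing in for the proc entry since /proc/sys/vm/kscand_work_percent only exists on 2.4-series kernels such as RHEL 3.

```shell
#!/bin/sh
# Tuning round trip. On a real RHEL 3 box, P would be
# /proc/sys/vm/kscand_work_percent; a scratch file stands in here
# so the steps can be run anywhere.
P=/tmp/kscand_work_percent.demo
echo 100 > "$P"      # stand-in for the shipped default
old=$(cat "$P")      # remember the current value before changing it
echo 50 > "$P"       # apply the trial value
# ... observe latency under load here ...
echo "$old" > "$P"   # roll back if the tradeoff is not worthwhile
cat "$P"             # back to 100
```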

Once you have determined the best balance of latency and memory utilization that makes sense for your environment you can make your changes permanent.  Be sure to only use the echo technique if this is the first time that this will be added to the file. You will need to edit it by hand after that.

echo "vm.kscand_work_percent = 50" >> /etc/sysctl.conf

Keep in mind that the need to edit this particular kernel parameter is extremely uncommon and will need to be done only under extraordinary circumstances.  You will not need to do this in normal, everyday Linux life and even a senior Linux administrator could easily never have need to modify this setting.  Only very specific conditions will cause this performance characteristic to be measurable or its modification to be desirable.

All of my testing was done on Red Hat Enterprise Linux 3 Update 6.  This parameter is the same across many versions although the performance characteristics of kscand vary between kernel revisions so do not assume that the need to modify the parameters in one situation will mean that it is needed in another.

RHEL 3 prior to update 6 had a much less efficient kscand process and much greater benefit is likely to be found moving to a later 2.4 family kernel revision.  RHEL 4 and later, on the 2.6 series kernels, is completely different and the latency issues are, I believe, less pronounced.  In my own testing the same application on the same servers moving from RHEL 3 U6 to RHEL 4.5 removed all need for this tweak even under identical load.  [Edit – In RHEL 4 and later (kernel series 2.6) the kscand process has been removed and replaced with kswapd and pdflush.]

Things that are likely to impact the behavior of kscand that you should consider include the following:

  • Total Used Memory Size, regardless of total available memory size.  The more you have the more kscand will impact you.  Determined by: free -m | grep Mem | awk '{print $3}'
  • Memory Latency, check with your memory hardware vendor. Higher latency will cause kscand to have a larger impact.
  • Memory Bandwidth.  Currently in speeds ranging from 266MHz to 1066MHz.  The slower the memory the more likely a scan will impact you and tuning will be useful.
  • Value in kscand_work_percent. The lower the value the lower the latency.  The higher the value the better the memory utilization.
  • Memory Access Hops.  Number of system bus hops necessary to access memory resources.  For example a two socket AMD Opteron server (HP Proliant DL385) never has more than one hop.  But a four socket AMD Opteron server (HP Proliant DL585) can have two hops, increasing effective memory latency. So a DL585 is more likely to be affected than a DL385 with all other factors being equal (as long as all three or four processor sockets are occupied.)
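
A quick way to survey several of these factors at once is sketched below; it assumes a Linux box with the usual procps tools, and the hardware details (memory latency, bus hops) still need to be checked manually.

```shell
#!/bin/sh
# Gather the software-visible factors from the list above in one pass.
echo "Used memory (MB): $(free -m | grep Mem | awk '{print $3}')"
echo "Kernel: $(uname -r)"
echo "Logical CPUs: $(grep -c '^processor' /proc/cpuinfo)"
```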
]]>
https://sheepguardingllama.com/2008/04/linux-kscand/feed/ 5
Handbrake Settings https://sheepguardingllama.com/2008/04/handbrake-settings/ https://sheepguardingllama.com/2008/04/handbrake-settings/#respond Wed, 02 Apr 2008 20:02:10 +0000 http://www.sheepguardingllama.com/?p=2316 Continue reading "Handbrake Settings"

]]>
Now that I am getting comfortable with Handbrake and its settings I have noticed the dearth of information online about people’s preferred settings. I feel that this information is very valuable and so have decided to share some baseline information about my own settings.

Unless otherwise stated I do my conversions to the .mp4 MPEG4 container and extension using h.264 for the video and AAC for the audio. My intent is to have videos that play with excellent quality with consideration to storage requirements on my first generation AppleTV and on VLC. I am not concerned with compression time. I always use two pass compression and do not select “turbo first pass.”  I always use the “H.264 (x264)” codec option considering the target devices that I intend to use.

Update: I have since added a Sony PlayStation 3 (PS3) to my output targets and am now using Handbrake 0.9.3, which is needed to support the PS3 due to codec limitations.  I am using these settings for AppleTV, PS3 and VLC for desktop use.

Anamorphic Cinema Content from 480p Source (such as DVD)

Bitrate: 2,500Kb/s
AAC Stereo: 160Kb/s – 48 & AC3 Pass-Through
Deinterlace: Slowest
Denoise: Weak
Deblock: On
Keep Aspect Ratio: On
Detelecine: VBR
Container: .m4v (Needed for AppleTV support of dual audio.)

Wide Screen HDTV Content from 480p Source

Bitrate: 2,200Kb/s
AAC Stereo: 160Kb/s – 48 & AC3 Pass-Through
Deinterlace: Slowest
Denoise: Weak
Deblock: On
Keep Aspect Ratio: On
Container: .m4v

Newer American Television NTSC (1990s Sitcoms)

Bitrate: 1,800Kb/s
AAC Stereo: 160Kb/s – 48
Deinterlace: Slowest
Denoise: Weak
Deblock: On

Traditional American Television (1980s Sitcoms)

Bitrate: 1,600Kb/s
AAC Stereo: 128Kb/s – 48
Deinterlace: Slowest
Denoise: Weak

Old British Television (1980s Britcoms)

Bitrate: 1,200Kb/s
AAC Stereo: 112Kb/s – 48
Deinterlace: Slow
Denoise: Weak

Cartoons

Bitrate: 400Kb/s
AAC Stereo: 112Kb/s – 48
Deinterlace: Slowest
Denoise: Weak
Deblock: Off
Decomb: On

Hopefully this collection of settings will provide you with a starting point in getting the most out of Handbrake. At this time I am currently using Handbrake 0.9.3 and getting great results from the x264 encoder. For computer based playback I highly recommend the use of VLC over QuickTime as its playback is much smoother.

]]>
https://sheepguardingllama.com/2008/04/handbrake-settings/feed/ 0
Are You Vista Capable? https://sheepguardingllama.com/2008/03/are-you-vista-capable/ https://sheepguardingllama.com/2008/03/are-you-vista-capable/#respond Thu, 20 Mar 2008 16:48:50 +0000 http://www.sheepguardingllama.com/?p=2295 Continue reading "Are You Vista Capable?"

]]>
Following my last article on Microsoft’s Windows Vista operating system and its review from the New York Times I felt that I should provide my own insight into the state of Windows Vista. I have been using Windows Vista for almost a year now. I am an IT professional and an early adopter of most technologies so I start using new operating systems a bit before the general public should consider looking at them. My main operating system is Novell’s OpenSUSE Linux 10.3 which is, in fact, newer than Windows Vista and my secondary machine is Windows XP Pro SP2.

(Warning, what is about to follow is anecdotal evidence as to the state of Vista from my own, limited first hand observations. But it could be worse, it could be second hand and out of context.)

My first attempt to work with Vista was on a dual-core AMD Turion X2 laptop. My hope was that with Vista it would finally make sense to run the operating system in 64-bit mode, as Windows XP Pro 64-bit was a bit lackluster. In Windows XP 64-bit, driver support had been extremely poor and I was unable to get much of anything to work. So all of my Windows XP machines ended up staying as 32-bit while my Linux machines moved back and forth. On Linux almost everything worked great as 64-bit. Only rarely would I get a driver issue or compatibility problem.

For the first week or so Windows Vista was incredibly slow. I decided that trying the 32-bit version of Vista (both had shipped with the laptop, thankfully) might be a good idea. So I performed a clean re-installation of Vista and started again.

Under Vista32 I noticed a significant increase in the overall speed and stability. The whole system seemed to hum right along now without the apparent slowness that I had had in 64-bit mode. Vista32 seems to work exceedingly well and starts and stops more reliably than my Windows XP machines have done in the past. The reliability of the shutdown process has been a major concern of mine from past Windows editions.

Because of the types of applications that I generally use on Windows (e.g. not video games, not entertainment applications, mostly serious business and management applications, only current versions, etc.) there were no compatibility issues in moving to Vista. Not a single application has failed to run and, I am told, the only game that I actually would care about (Age of Empires 2, circa 1998) will run beautifully in Vista. I have a friend who has tested this on three separate Vista machines.

Few applications that are programmed “correctly” using Microsoft’s published standards and industry best practices have any issues moving to the Vista platform, in my experience. All of the complaints that I have heard about applications not working involve either video games – which seldom follow platform guidelines – ancient legacy applications, or small independent vendor applications that routinely fail to work across platforms because there are no updates, standards aren’t followed, etc. It happens. Every new operating system breaks a certain amount of old applications but in many cases, most cases, this is simply a separating of the wheat from the chaff. It is good to shake up the market and point out the weaklings in the herd and thin it out a bit for everyone’s long term health. Think of it as software genetics in action.

For contextual reasons I should point out that I have been using client side “firewalls” – a term that I am loathe to use but has become somewhat of the norm – for a long time, first with Symantec and more recently with Microsoft’s Live OneCare – and am quite familiar and comfortable with the concept of unblocking ports for every new application that is installed or any changes that are made. I am also used to this through the use of AppArmor on SUSE Linux and SELinux on Red Hat Linux.

Already being used to this as a matter of course makes the transition to Vista’s security system almost transparent. I have heard numerous complaints about the barrage of security notifications popping up and asking if “this software should be allowed to install” or if “such and such a port should be allowed to open” but if people were diligent about using past operating systems this would neither be new nor a surprise. This type of checking is wonderful in the computer security nightmare world in which we live. Many people want this “feature” suppressed but these are often the same people asking for continuous help to fix their virus and Trojan horse riddled computers caused, not by malicious external attacks, but by bad computer management habits and behaviours.

Even as a technology professional who is constantly installing and uninstalling applications, doing testing, making changes, fiddling with the network, etc. the number of these security alerts is not quite annoying enough to push me past the point of appreciating the protection which it provides. A normal user, who should not be installing new software or making network changes on a daily basis, should see these messages mostly only during the initial setup of the workstation and then somewhat rarely when new software or updates are applied. If this security feature is becoming annoying due to its regularity one must carefully ask oneself if there isn’t a behavioural issue that should be addressed. It is true, some users need to do “dangerous” things on a regular basis to use their computer the way that they need to use it. But these people are extremely rare and can almost always manage these issues on their own (turning off the feature, for example.)

Some people have had issues with the speed of their Vista machines. All of the complaints that I have heard to date, however, come from people who have moved from Windows XP to Windows Vista on the same hardware. This is not a move that I would suggest. Yes, Vista is slower than XP and noticeably so. Just as XP was somewhat slower than Windows 2000 (although not very dramatically – 2000 was so slow that XP may not actually even be slower than 2000!) Windows 2000 was dramatically slower than Windows NT 4 and required many times more system resources. The jump from the NT4 to the NT5 family was, by far, the biggest loss of performance that I have witnessed on these platforms. The move to Vista is minor.

The fact is that moving to newer, more feature rich, operating systems almost necessitates that the new operating systems will be slower. Each new generation is larger than the generation before. Each new version is more graphics intense (not true with Windows 2008 Core – yay!) and has power-hungry “eye candy” that demands faster processors, more memory and now graphics offload engines. Users clamour for features and then complain when those features cause their operating systems to be larger and more bloated. You can’t have both. If you want a car with one hundred cubic feet of hauling capacity, the car absolutely must enclose more than one hundred cubic feet of space. Period. It’s math. End of discussion. This isn’t Doctor Who – the inside can’t be larger than the outside. And your operating system can’t have less code than the sum of its components.

If I have one major complaint about Windows Vista it is the extreme difficulty with which one must search for standard management tools within the operating system. Under previous editions of Windows one could go to the Control Panel and find commonly used management tools in one convenient place. Now simply modifying a network setting – a fairly common task, and one that is impossible to research online when it is needed most – is nearly impossible to locate even for full time Windows desktop support professionals. The interface for this portion of the system is cryptic at best and nothing is named in such a manner as to denote what task could possibly be performed with it.

Altogether I am very pleased with Vista and the progress that has been made with it and I am looking forward to seeing the improvements that are expected to come with the first Service Pack that should be released very soon. Vista is a solid product and Microsoft should be proud of the work that they have done. The security has been much improved and I hope that Vista proliferates in the wild rapidly as this is likely to have a positive effect on the virus levels that we are currently seeing.

Caveat: Moreso than previous versions of Microsoft Windows, Vista is designed to be managed by a support professional and used by a “user”. Vista is somewhat less friendly, out of necessity, and the average user would be better served to simply allow a knowledgeable professional to handle settings and changes. Vista pushes people towards a “managed home” environment that would be more akin to a business environment.

This change, however, is not necessarily bad. As we have been seeing for many years, the security threats that exist with regular access to the Internet are simply far too complex for the average computer user to understand and with the number of computers in the hands of increasingly less sophisticated computer users the ability for viruses and other forms of malware to propagate has increased many fold. A computer user who does not properly protect his or herself from threats is not only a threat to themselves but to the entire Internet community.

In a business we do not expect non-technology professionals to regularly manage their own desktops and perhaps we should not expect this of home users. Computers are far more complex than a car, for example, and only advanced hobbyists or amateur mechanics would venture to do much more than change their own oil. Why then, when a computer can be managed and maintained completely remotely, would we not use the same model for our most complex of needs?

With some basic remote support to handle the occasional software install or configuration change, automated system updates, and a pre-installed client side “firewall”, all that is truly needed is a good anti-virus package, and a normal home user could use their Windows Vista machine in a non-administrative mode for a long time with little need from the outside while enjoying an extreme level of protection. The loss of some flexibility would be minor compared to the great degree of safety and reliability that would be possible.

]]>
https://sheepguardingllama.com/2008/03/are-you-vista-capable/feed/ 0
Solaris Dstream Package Format (Package Stream) https://sheepguardingllama.com/2008/03/solaris-dstream-package-format-package-stream/ https://sheepguardingllama.com/2008/03/solaris-dstream-package-format-package-stream/#respond Thu, 20 Mar 2008 15:51:41 +0000 http://www.sheepguardingllama.com/?p=2306 Continue reading "Solaris Dstream Package Format (Package Stream)"

]]>
If you have worked on Solaris for a while you have probably stumbled across the package stream or “dstream” package format sometimes used for Solaris packages. Dstreams can come as a surprise to Solaris administrators who have become accustomed to the traditional package format. But Dstreams are very easy to work with if you just know some basics.

First of all there appear to be two naming conventions for these packages. The most common, by far, is to end a package in .pkg while the less common variant is to end the name in .dstream. Some people also leave off the postfix altogether leaving it unclear as to what the file is intended to be.

Installing a dstream is only slightly different from installing a regular package. A dstream is much more similar to a Linux RPM in that it is a single, atomic file. Once it is installed it will act just like any other Solaris package and can be managed and removed in the usual ways (pkginfo, pkgrm, etc.)

Installing is simple. Let’s assume that we are dealing with the package myNewSoftware.dstream which is saved in /tmp. To install simply:

pkgadd -d /tmp/myNewSoftware.dstream

But in some cases you may want to have access to the contents of the Dstream without needing to install it first. If we are on Solaris this is easy. Just use pkgtrans.

pkgtrans myNewSoftware.dstream .

Or, possibly, you need to get access to the contents of the dstream without having access to a Solaris machine or the pkgadd command. Do not fret. The solution is much simpler than you would imagine. The dstream carries its payload in the cpio format, which we can extract using common tools. Unfortunately I have had some issues getting packages to unpack correctly using this trick. If anyone has additional insight into this process, please comment.

So to unpack, but not install, our previous example file on any UNIX box (or even Windows with cpio installed via Cygwin or a similar utility) we can simply:

cpio -idvu < myNewSoftware.dstream

or, I have also seen this option as well – both work for me equally:

dd if=myNewSoftware.dstream skip=1 | cpio -idvu

The “v” option just gives us some verbose output so that we can see what we just unpacked without having to look around for it. You will now have a directory (or a few) as contained in the cpio archive.

]]>
https://sheepguardingllama.com/2008/03/solaris-dstream-package-format-package-stream/feed/ 0
CPUs, Cores and Threads: How Many Processors Do I Have? https://sheepguardingllama.com/2008/03/cpus-cores-and-threads-how-many-processors-do-i-have/ https://sheepguardingllama.com/2008/03/cpus-cores-and-threads-how-many-processors-do-i-have/#comments Fri, 07 Mar 2008 17:25:07 +0000 http://www.sheepguardingllama.com/?p=2291 Continue reading "CPUs, Cores and Threads: How Many Processors Do I Have?"

]]>
In my job role I am very often called upon to determine how many “processors” a machine has or how many we will need for a specific task. Ten years ago this was a simple process but today even the very concept of the “processor” is fuzzy at best and only a very few people have a clear understanding of what it means. I spend much of my time explaining, as best as possible, the terms needed to even discuss processors today as everything processor related must be seen in context.

Before we begin let us look at the terms involved in discussing processors, starting from the bottom of the stack. On the bottom we have a chip carrier: this can be something as simple as a motherboard (a.k.a. mainboard, systemboard, MoBo), a processor daughtercard or a dedicated chip carrier. Any of these will qualify for our use. A chip carrier holds sockets. Chip carriers can have a single socket or many.

On a standard desktop or laptop computer we would expect to find that we have a single motherboard containing a single socket. On a mid-sized server such as the HP DL360 G5 we see a single motherboard with two sockets. On a larger server such as the Sun SunFire x4600 we see several daughtercards each with a single socket but with a total of eight sockets available in the overall system.

The Intel Pentium III Slot 1 chips are a perfect example of a dedicated chip carrier. In the case of the Slot 1 Pentium III processors the processor itself was mounted directly onto a small daughterboard dedicated to carrying the Pentium III processor and its associated voltage management electronics. This small card was enclosed in plastic for protection and attached directly to the motherboard.

Above the layer of the chip carrier is the socket. A socket is a physical connector allowing a chip to be connected to a board. Occasionally a chip as important as the CPU will be connected to a board without a socket. This is more common when dealing with embedded systems and is exceedingly rare in general purpose computing. In that case the connection itself can be considered analogous to the socket. Including the socket in this explanation can be confusing because of its loose interpretation, but it is important to include because potential system capacity and classification are normally identified at this level.

It is in the counting of sockets per computer that we determine that maximum “way” of a server. For example the DL360 mentioned above is classified as a two-way server. And the x4600 is an eight-way server. This is the case when the server is at capacity. A particular server would be classified by the number of sockets in use. For example a DL385 with just one socket occupied could be considered a one-way server but with extra potential capacity. By adding another chip to the second socket we are said to be upgrading from a one-way to a two-way. Many server vendors have started advertising the “way” of their servers based on non-socket based factors but this practice is non-standard and highly misleading. Be sure to compare servers based on socket capacity and not on advertised “way”.

Each socket is capable of holding one physical processor. While sockets are purchased with the board to which they are attached, a chip can be purchased already in a socket or as a standalone product. Processors are often sold in stores in boxes just like any other product and are the most visible form of “processor” that consumers will face. This is the only “processor” that can be seen, held in the hand and bought as an item in a store. It is the physical manifestation of processing power. Just as the socket count determines the maximum “way” of a computer, the processor count determines the current “way” of that computer. Most consumers and desktop administrators think of processors in terms of the physical chip. If the term processor is to have an official usage, this is the level at which it is most appropriate. Common examples of a processor include the AMD Opteron, Intel Core, Intel Pentium II, Sun UltraSPARC IV and IBM Power6.

The most important industry recognition of this “level” being the “processor” is that Microsoft, Oracle and most major software vendors use this definition of processor to determine their per-processor licensing requirements. Because of this stance on the definition of processor, and its long history of use mostly in this context, we are likely to see the word processor remain linked to the physical entity.

Each processor chip can have one or more die carried within it. A die is not visible as it is encased in the protective material of the processor. The die consists of the semi-conductive substrate and is a discrete electrical element within the processor. A die is the most difficult portion of a processor to define, in my opinion, as it is completely invisible to anyone unless they break apart a processor and even so they are extremely difficult to see because of their size and density.

A CPU, or Central Processing Unit, is, and has been, generally tied to a die. One die contains one single CPU. A die and a traditional CPU are, roughly, synonymous. Technically an important difference remains because a die can contain components in addition to the CPU, such as support processing. In a more general sense, a die can contain integrated circuits other than a CPU, so the two words are not the same thing even though they effectively are when we are only discussing general use processors – CPUs. Strangely, it is at this level that we find the term CPU, so commonly used yet so widely misunderstood.

Within a single CPU there can be one or more processing cores. A core is the real workhorse of the processor stack. It is within a core that the actual processing work is done. It is most common, today, for a CPU to contain only one core. There is a common misconception that this is not the case due to marketing efforts to convince people otherwise. Internal processor architecture should not be used as a marketing tool as it is simply confusing and misleading. Only a holistic view of processor performance characteristics can provide adequate comparisons when deciding on a processing platform. No single architectural element will have an impact large enough to be usable as a determining factor in processor selection. But more importantly it is not feasible for anyone who is not a chip architect with a solid grounding in IC design concepts to even remotely grasp the intricacies involved in the design of a microprocessor.

In traditional processors, like the Pentium III, there is one core per CPU. This is very simple. In many modern processors such as the Intel Core or the Intel Core 2 there is still only one core per CPU while there are multiple CPUs per processor (each CPU is on a discrete die within the processor.) So an Intel Core 2 Duo would be a single processor with two die, each holding one CPU with one core. This gives a total of two cores per processor. It is multi-core as well as being multi-CPU. Technically the term multi-core should not apply here as that is only useful in a different and important context. In the AMD Opteron processor we see a single processor with a single die and a single CPU with two cores within that single CPU. In this case we have a multi-core, single-CPU configuration. This is a true multi-core processor. Multi-core within a single die/CPU is an important distinction because it affects the ability of components to communicate amongst themselves. The most confusing thing here is that the Intel product is named “Core” while being based on multi-CPU technology. This has led to a proliferation in the misuse of the term core.

Cores are still an extremely important component to use in normal system discussions, however. Cores are discrete processing elements and therefore represent a very important look at our computers. By looking at cores we can see how many independent parallel actions can be taken by the processors at one time. This is very important for understanding the scaling and capacity abilities of our computers. A computer can only truly parallelize to the extent of its “core” capacity.

The final layer of our stack that we need to examine is that of the multithread (a.k.a. Hyper-Thread, SuperThread, etc.) The most well known example of this is Intel’s implementation used in their Pentium 4 derived XEON processors. In current use the Sun UltraSPARC T family of processors are the poster-children for multithreading. Multithreading does not truly add additional parallelism to the processing structure but it can be used, under certain loads, to make the processing pipeline more efficient and to push multiple threads of execution into the processor roughly simultaneously. Multithreading is complicated but in the absolutely simplest terms (and possibly the most useful to the layman looking to grasp the correct use of this technology) it can be thought of as allowing the processor to manage thread execution and scheduling instead of leaving this solely to the operating system. In reality what is performed is vastly more complicated than this.

Multithreading is useless for single-threaded workloads, where its mere presence can even degrade performance. Multithreading is most useful for highly threaded workloads. It is currently seeing a lot of positive use in the areas of web servers and databases. To transfer decision making from the operating system to the multithreading portion of the processor, an MT processor presents each of its thread processors to the operating system as a separate “logical processor”. It is at this point that we finally see the concept of the processor as viewed by the operating system. This “logical processor” is what we view in Microsoft’s PerfMon or TaskMgr, or in top on Linux. Often this is what we think of as being the processor.
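On a Linux box the logical processor count the operating system sees can be read directly; each logical processor gets its own entry in /proc/cpuinfo. This is a quick sketch; the lscpu line assumes that utility is installed, which is common but not guaranteed:

```shell
# Count the logical processors the kernel is presenting.
grep -c '^processor' /proc/cpuinfo

# Where available, lscpu breaks the total down into the layers
# discussed above: sockets, cores per socket and threads per core.
command -v lscpu >/dev/null && lscpu | grep -E 'Socket|Core|Thread' || true
```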

Now that we have been bombarded with terms, layers and models we will look at a few examples to help determine how we should approach the classification of processors. We will look at the HP DL360 G2, the HP DL585 G2, the HP DL580 G4, the HP Compaq DeskPro d530 and the SunFire T2000.

In our first example we will look at the very traditional and standard Hewlett-Packard/Compaq Proliant DL360 G2. This server has a single motherboard containing two processor sockets. Each socket accepts one Intel Pentium III-S processor (up to 1.4GHz.) At this level we can identify this server as a true two-way server. Each Pentium III-S processor contains a single die / CPU. Each CPU has one core and each core is natively threaded with no multithreading capabilities. So, in total, this server is a two-way server with two processors, two CPUs, two cores and two logical processors to present to the operating system. Very simple, very straightforward. Just as we expect a computer to behave.

Our second example is the Hewlett-Packard Proliant DL585 G2. This server has four processor sockets on its motherboard making it a true four-way server. Each socket can hold an AMD Rev F Opteron Dual-Core processor. Each Opteron, in this scenario, has a single die with a single CPU. Each CPU has two cores and each core has only the native thread handler providing a total of one logical processor per core. So our total is four-way, four die / CPU, eight core and eight logical processors presented to the operating system.

Our third example is the Hewlett-Packard Proliant DL580 G4. The Proliant DL580 G4 has a four socket motherboard capable of holding four Dual-Core Intel XEON 7000 series processors. This, like the DL585 G2, is a true four-way server when fully populated. Each XEON 7000 processor contains dual dies / CPUs and each CPU contains one core for a total of two cores per processor. Each core has a single native thread handler. So our total is four-way, eight die / CPU, eight core and eight logical processors presented to the operating system.

My desktop example is the Hewlett-Packard Compaq DeskPro d530. This desktop unit has the option of using the Intel Pentium 4 HyperThread processor, which is what makes it interesting for our purposes. We will use this processor in our example. The DeskPro d530 has a motherboard that supports a single Pentium 4 (or Celeron 4) processor. Like most desktops this is a one-way machine. Each Pentium 4 processor has a single die / CPU with a single execution core. Each core on a traditional Pentium 4 (or Celeron 4) can execute just a single thread but, in our example, we will use the HyperThread version of the P4 which can handle two simultaneous threads, presenting two logical processors to the operating system. So we have a one-way desktop with a single processor with a single CPU containing a single core with two multithread handlers presenting two logical processors.

To make this analysis more complicated we must also be aware that, because of single thread performance problems on the Pentium 4 HT platform, it was very common for HyperThreading to be disabled on these processors through a BIOS setting. In these cases the threading model returns to native and only a single logical processor is presented to the operating system. This is the only example, of which I am aware, of a processor having a selectable number of presentable logical processors. The efficacy of using the HyperThread features was based upon operating system and load characteristics. For example, Windows 98SE or ME running on the d530 could not even see the second logical processor because those systems only have a uni-processor kernel option, so HyperThreading is not even possible. With Windows 2000 or XP both logical processors were visible and usable, but some workloads, such as most video games at the time, could not take advantage of them while many business workloads would. Each user would have to determine which mode made the most sense for them, adding to the complexity of the situation.

Our final example is the Sun SunFire T2000 server. The SunFire T2000 is a single socket motherboard designed to hold one UltraSparc T processor. This is a true one-way server. Each UltraSparc T processor has a single die / CPU. Each CPU contains either four, six or eight cores depending on the purchased configuration – we will use eight in our example. Each of these eight cores has four thread handlers. In this machine we therefore see a one-way server with a single processor with a single CPU containing eight cores and a total of thirty-two simultaneous multithreads being presented to the operating system as thirty-two logical processors.
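All of the counts in the examples above fall out of one multiplication: sockets, times die/CPUs per processor, times cores per CPU, times threads per core. A throwaway shell function makes the pattern easy to verify; the function itself is just illustrative and the figures are the ones from the examples:

```shell
# logical processors = sockets * die/CPUs per processor
#                      * cores per CPU * threads per core
logical() { echo $(( $1 * $2 * $3 * $4 )); }

logical 2 1 1 1   # DL360 G2: prints 2
logical 4 1 2 1   # DL585 G2: prints 8
logical 4 2 1 1   # DL580 G4: prints 8
logical 1 1 8 4   # T2000:    prints 32
```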

As computer systems continue to increase the number of logical processors being presented to the operating system the importance of efficient process and thread handling by the operating system kernel will continue to become more and more important.  Many traditional systems have not been able to handle multi-processor situations very efficiently, if at all, but today with the number of available logical processors skyrocketing even in desktops the need for good process and thread handling across a potentially large number of logical processors is extremely important.

As you can see the issue of determining the number of processors, cores, CPUs, etc. is extremely difficult.  It is clear why people have become confused and why marketing is playing such a significant role in determining the public’s perceptions of these architectural components.  The most important components to keep clear are the counts for way, processor, core and logical processor (virtual processor, processing thread, execution engine, etc.)  Underlying component issues, while important to be semantically correct and to understand the working of processors, are still underlying components and should not be thought of as being the defining characteristics of our computer systems today.

]]>
https://sheepguardingllama.com/2008/03/cpus-cores-and-threads-how-many-processors-do-i-have/feed/ 5
Simple Website Management with PHP Includes https://sheepguardingllama.com/2008/03/simple-website-management-with-php-includes/ https://sheepguardingllama.com/2008/03/simple-website-management-with-php-includes/#respond Fri, 07 Mar 2008 11:52:55 +0000 http://www.sheepguardingllama.com/?p=2289 Continue reading "Simple Website Management with PHP Includes"

]]>
We all know the importance of the careful separation of websites into their three discrete elements – content in the form of XHTML, style in the form of CSS (Cascading Style Sheets) and behavior in the form of JavaScript. This is the basic foundation of correct, clean and functional web design. Once one has accomplished this basic building block of manageability one must look to external tools in order to achieve greater gains in website management and maintenance. Often we find a solution in heavy Content Management Systems or CMS. But often, especially for smaller web sites, there is a far simpler solution that still allows for growth and maintainability. Enter the PHP Include.

Before we get into discussing the use of the PHP Include for maintaining websites I would like to first mention the oft forgotten Server Side Include or SSI. I have nothing against the old Server Side Include. I used to use it, in fact, and I believe that it is extremely functional. However, PHP has become so ubiquitous on web servers today (especially Apache) and it is so easy to use for this purpose that I feel that its advantages in growth and flexibility outweigh the slight increase in processing necessary to use it. PHP Includes are simple, straightforward and flexible when we want to start doing just that little bit more with our site than we did before.

The first step to working with PHP is to have PHP enabled in your web server. PHP requires nothing in the client browser – it is a web server only technology. Everything that we will be doing here is server side. Most web servers today already have PHP turned on and ready by default. The notable exception to this is Microsoft’s IIS where you will have to manually install PHP. Check the instructions for your particular web server on how to enable PHP for your use.

The process that I prefer to use when building a website using PHP Includes is to first build a “prototype” page using traditional XHTML, CSS and JavaScript. Once I have the basics of the page I start by chopping out everything before the <body> tag and pasting it into a file that I like to call header.inc. I can then replace the entire section that I removed with the following simple PHP line:

<?php include("header.inc"); ?>

If this is your first time working with PHP this is a good time to test your page. I like to rename my pages from *.html to *.php once I have modified them. This makes it more evident to me what I am working on and when. I will assume that we are working on a default page here and that we can call it index.php. You should now test your page to be sure that it is rendering correctly. Any browser should now see the page as identical to the *.html equivalent. What we have done is simply take the top of the page and move it into a separate, reusable file. The PHP include statement takes the header.inc file and concatenates it back into the page at the last minute, when the page is requested. If we were only ever to work on a single page this would be silly and a waste of time and effort. But it is a rare thing to have a web site with only a single page. Our goal here is to make a multi-page site easy to manage, control and standardize.

Now that you have the basics of the PHP Include working we can work on expanding its use. Simply handling the declaration and head portions of our page are not enough to make the PHP Include truly useful. This would reduce on-disk file sizes and remove a few maintenance tasks but the gains would be few. The true value begins when we start including portions of the page body in our includes. Now the fun really begins.

Now we need to identify the portion of the top of our page that will be common to every individual page that we will have in our site. In many cases this will be a banner area and navigational elements – most often a menu of some sort. We want to be sure to find every line that will be duplicated exactly and none that need to be different in each page. Once you have identified this page portion you can cut that bit of text and paste it to the bottom of your header.inc file. Quite often I find that this section will include quite a bit of menu markup as well as many “setup” tags such as common opening <div> tags. The more that you can manage in this manner the more value you get from it.

If you want, save everything and test the site again to be sure that nothing is broken so far. Nothing is? Good. Now we need to do a similar task at the bottom of our index.php page. We need to identify where the custom page-specific material ends and the common “wrap up” material begins. In my experience this is generally a relatively small amount of markup but will often include a “bottom bar”, some <div> closeouts and often some late-running JavaScript such as Google Analytics or other tracking code.

Once you have identified the common bottom code that is shared, or will be shared, between your pages you can cut it out and paste it into a new file that I like to call footer.inc. The .inc extensions are not necessary but I find that it really helps to make these files easy to find. Now in place of the markup that we removed from index.php we can add in:

<?php include("footer.inc"); ?>

Now test your page again. Nothing should have changed in your web browser. But now your page is significantly easier to manage going forward. You can now create many new pages and simply start them with the PHP Include “Header” line and end them with the PHP Include “Footer” line and all you have to do from page to page is fill in the content that goes between. This will speed initial page generation but more importantly will make updates and management far easier. If you need to add a new <base> or <link> tag, for example, you can simply add it into header.inc and instantly it is applied to every page that you have created. When you create a new page and need to add a link into your navigational menu you can simply add it in and instantly every page’s navigation menu is updated.
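With the header and footer split out, a complete page is reduced to a skeleton like this sketch. The .inc file names are the ones chosen above and the headings are placeholder content for illustration:

```php
<?php include("header.inc"); ?>

<!-- Only the page-specific content lives here; everything
     shared across the site sits in header.inc and footer.inc -->
<h1>About Us</h1>
<p>This is the only markup unique to this page.</p>

<?php include("footer.inc"); ?>
```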

Not every page will benefit from this approach but most traditional pages used by small business and most “brochure” style pages will. Often you may need to exclude your default page (a.k.a. index.html) because it might be different from all of your other pages. That is fine. Don’t let that be a show stopper for you. Just manage all of your other pages together.

To get even more value from this approach you can begin looking for common shared elements between pages. If you have a navigational bar that appears in the middle of the pages you could create navigation.inc and move that shared markup out to that file to manage it more easily. Of course you can also do this with markup shared between any two pages – it doesn’t have to be common to every page. Just use the appropriate PHP Include on the right pages and voilà, less work when adding content or making changes to your site.

]]>
https://sheepguardingllama.com/2008/03/simple-website-management-with-php-includes/feed/ 0