Sheep Guarding Llama – Scott Alan Miller :: A Life Online
https://sheepguardingllama.com

PHP Fatal error: Call to undefined function posix_getpwuid()
https://sheepguardingllama.com/2011/03/php-fatal-error-call-to-undefined-function-posix_getpwuid/
Sun, 13 Mar 2011
I found that this error appears rather often online but almost no one has any idea why it would come up.  I found this error myself today while doing an install of FreePBX on Fedora 14.  My full error was:

Checking user..PHP Fatal error:  Call to undefined function posix_getpwuid() in /usr/src/freepbx-2.8.1/install_amp on line 728

This seems like it must be a permissions error.  But more than likely you are simply missing the PHP Posix library.  You can resolve this on Fedora with

yum -y install php-posix

Ta da!
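To confirm that the extension is actually present after the install, you can ask PHP directly (a quick sanity check, assuming the php CLI is on your path):

```shell
# List loaded PHP modules and look for posix
php -m | grep -i posix

# Or test for the exact function that was reported missing
php -r 'var_dump(function_exists("posix_getpwuid"));'
```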

Testing Socket Connections Programmatically
https://sheepguardingllama.com/2010/01/testing-socket-connections-programmatically/
Tue, 12 Jan 2010
Often we have to use “telnet remotehost.somewhere.com 80” to test if a remote socket connection can be established.  This is fine for one time tests but can be a problem when it comes time to test a number of connections – especially if we want to test them programmatically from a script.  Perl to the rescue:

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket;

# Attempt a TCP connection to the host and port given on the command line
my $sock = IO::Socket::INET->new(
                   PeerAddr => $ARGV[0],
                   PeerPort => $ARGV[1],
                   Proto    => 'tcp',
);

if ($sock) { print "Success!\n"; close($sock); } else { print "Failure!\n"; }

Just copy this code into a file called “sockettest.pl” and “chmod 755 sockettest.pl” so that it is executable and you are ready to go.  (This presumes that you are using UNIX.  As the script is Perl, it should work anywhere.)
To use the code to test, for example, a website on port 80 or an SSH connection on port 22 just try these:

./sockettest.pl www.yahoo.com 80
./sockettest.pl myserver 22

You aren’t limited to known services; you can test any socket that you want.  Very handy.  Now, if you have a bunch of servers, you could test them all from a simple, one-line BASH command like so (broken across several lines here for ease of reading):

for i in myserver1 myserver2 yourserver1 yourserver2 someoneelsesserver1
do
  echo $i $(./sockettest.pl "$i" 80)
done
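If you do not want to depend on Perl at all, recent versions of bash can do the same probe by themselves using the /dev/tcp pseudo-device.  This is a sketch, not from the original post; it assumes bash and the coreutils timeout command are available:

```shell
# socktest: report whether a TCP connection to host:port can be opened,
# using bash's built-in /dev/tcp pseudo-device instead of Perl.
socktest() {
  local host="$1" port="$2"
  # The inner bash opens fd 3 to host:port; timeout caps the wait at 2s
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "Success!"
  else
    echo "Failure!"
  fi
}

# Example: probe a port that is almost certainly closed
socktest 127.0.0.1 1
```

The function can be dropped into the same loop shown above in place of the Perl script.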

Forcing Red Hat Linux to IGMP Version 2
https://sheepguardingllama.com/2009/11/forcing-red-hat-linux-to-igmp-version-2/
Thu, 05 Nov 2009
Not a very common task and one that is relatively hard to locate for administrators doing a quick search online.  In some cases it is necessary to force Linux, in this case, Red Hat Enterprise Linux, away from the default of using IGMP Version 3 to Version 2.  This is often done to support older switches for multicasting.

In order to make this change we will be editing the /etc/sysctl.conf file by adding the following lines:

net.ipv4.conf.eth0.force_igmp_version = 2
net.ipv4.conf.lo.force_igmp_version = 2
net.ipv4.conf.default.force_igmp_version = 2
net.ipv4.conf.all.force_igmp_version = 2
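Settings added to /etc/sysctl.conf do not take effect until the next boot unless you load them by hand.  To apply them immediately (run as root):

```shell
# Re-read /etc/sysctl.conf and apply every setting in it
sysctl -p

# Or set a single value on the fly without touching the file
sysctl -w net.ipv4.conf.eth0.force_igmp_version=2
```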

You can find out more about your current running IGMP configuration by using this command:

cat /proc/net/igmp

The resulting output will show the IGMP version in use for your machine’s interfaces.  In Red Hat Enterprise Linux 5 this will default to version 3.  If you want to only see the interface and IGMP version number you can simplify with this command:

cat /proc/net/igmp | grep V | awk '{print $2 " " $5}'

Installing MediaTomb on OpenFiler
https://sheepguardingllama.com/2009/08/installing-mediatomb-on-openfiler/
Wed, 05 Aug 2009

If you have researched both FreeNAS and OpenFiler then you will be aware that a key difference between the two is the inclusion of a UPNP media server in FreeNAS.  This is lacking in OpenFiler and is a major piece of functionality that I wish to have in my own installation.  I specifically would like a UPNP / DLNA server that will work easily with a number of devices such as the Sony PlayStation 3 and the Xbox 360.  After much work I decided that MediaTomb would be the best product to add this functionality to OpenFiler.

I originally started this article with the intent of installing ps3mediaserver onto my OpenFiler installation but, due to a ridiculous lack of support for dependencies, ps3mediaserver is not a reasonable possibility for this platform.  As it turns out, though, MediaTomb is actually a better, lower resource usage, simpler option that does exactly what I want and does not require careful tuning to force to behave logically.

Installing MediaTomb onto a working OpenFiler system is actually extremely easy as MediaTomb is packaged with all dependencies included in the optional binary package for 32-bit Linux.  Simply download the i386 static binaries from the MediaTomb Static Binaries Download page.

You can then just unpack the downloaded tarball into the /opt directory using “tar -xzvf” and you have a working system already!  It is actually that simple.  One of the great things about MediaTomb is that it does not attempt to transcode your media files, lowering the quality and eating CPU cycles.  It is simply a UPNP / DLNA server.  If you are like me, all of your media has been carefully transcoded ahead of time for maximum quality versus storage.  I certainly don’t want low quality, real-time transcoding degrading my video experience.  Many people do, but if you are running a full storage server like OpenFiler you probably do not want it busy transcoding media files every time that they are served out.

You can start MediaTomb from the command line simply using the command:

nohup /opt/mediatomb/mediatomb.sh &

And away you go.  In my case I renamed the MediaTomb directory to /opt/mediatomb to make it easier to use.  When you fire it up you will get an on-screen message telling you where the web management interface to the software will be.  You can simply go to the web page to add your media directories to MediaTomb so that it can scan them and make them available via UPNP.

Caveat: I have noticed that MediaTomb tends to crash for me about once every twenty-four hours.  Not a major issue, restarting is quick and easy.  I am still investigating this and hope to have an answer soon but it is not a major issue.

Why not ps3mediaserver?

In order to make ps3mediaserver work you need to manually fulfill a large number of dependencies.  Ps3mediaserver comes as a tarball, not as a system package like RPM, DEB, Conary, etc., so all dependencies are yours to discover and fulfill.  On Red Hat, Ubuntu or Suse systems these dependencies are often fulfilled by default and can be ignored.  On rPath, however, which is a dedicated appliance server OS, not only are they not filled by default but the necessary packages are not even available for the platform!

You will need to install Java for starters.  This will allow ps3mediaserver to run and serve out audio files.  If this is all you want then you can go down this route.  But once you start ps3mediaserver you will discover that it has no normal administrative interface and is designed to only work with an X GUI.  Of course, no one has X installed by default on rPath Linux – this is a server not a desktop.  This is an extremely silly requirement for ps3mediaserver and really shows that they do not intend this to be used in a serious installation like what we are doing here.  This is a desktop solution like iTunes.  Fine for most people but we are on a different scale here.

So to configure your new ps3mediaserver you will need to install Xterm and get remote X to your server.  If you are working from Windows then you will need an X server like Mocha to handle this.  You can install all of the necessary packages for this using “conary update xterm” but this is just the beginning of your problems.

You can set ps3mediaserver to not transcode but on Linux, without the transcoding libraries installed, it won’t work: it will attempt to transcode regardless of the settings.  You can verify this by checking the media type from your PS3 or other video player.  For me, my pristine, low bandwidth h.264 MP4 files were being displayed as MPEG2.  This does not happen with ps3mediaserver on Windows with the statically compiled binaries.  It is rather inconsiderate of the ps3mediaserver project to provide such binaries for other platforms but to cripple the Linux version without so much as a list of the dependencies that we need to fulfill.

You will need ffmpeg and mencoder for starters.  Good luck.  Neither is available for the rPath platform and they do not compile using the included compilation environment.  You will, of course, need to install an entire compilation environment just to get started with these.  More software not exactly appropriate on a server.  You can remove it once you are done, but then how do you update your system?

The bottom line is that you should avoid ps3mediaserver on the rPath platform and stick with MediaTomb.  The ps3mediaserver project just is not ready for prime time from what I have seen.  It is okay in carefully controlled environments but it is not yet prepared to run on a “production” media server.  It has some great potential, to be sure.  I’ve run the project on Windows and it is very nice.  Over-the-top complicated, but nice.  However, getting it to run at the highest possible quality, as MediaTomb does as its only real feature, requires a lot of work and a lot of extra libraries and bloat for a relatively simple system.

Linux Active Directory Integration with LikeWise Open
https://sheepguardingllama.com/2009/03/linux-active-directory-integration-with-likewise-open/
Sun, 01 Mar 2009
I downloaded the latest RPM package (for Red Hat, Suse, CentOS and Fedora) from the LikeWise web site (you need to register before starting your download.)  I downloaded the RPM package to the /tmp directory.  The version that I am testing is the Winter 2009 Edition.

Warning: LikeWise modifies many configuration files and its uninstall routine does not replace these.  Installing LikeWise and then uninstalling again will likely cause you to lose the ability to log back in to your machine.  Treat modifying authentication systems with the utmost care.

The RPM download still uses a script so you will need to add execute permissions.

chmod a+x LikewiseIdentityServiceOpen-5.1.0.5220-linux-x86_64-rpm.sh

./LikewiseIdentityServiceOpen-5.1.0.5220-linux-x86_64-rpm.sh

The package steps you through the installation program.  You will need to accept the license as there are actually several packages, covered under various licenses, that need to be installed to support LikeWise.  If you are installing on an AMD64 platform then you will be questioned as to whether or not you want to install 32-bit support libraries.  Unless you really know what you need just select the “auto” option.  After that, the installation will take care of itself.

If you use SELinux like you should, you will need to turn this off during the configuration.

setenforce Permissive

Then we can join the Linux machine to the Active Directory domain.

/opt/likewise/bin/domainjoin-cli join exampledomain.com domainadminuser

At this point basic authentication is already working.  You will need to make some changes to your setup if you have existing accounts as well, but we can address that later.

Test your login:

ssh -l exampledomain\\username linuxhostname

Once you are all set do not forget to turn SELinux back on.

setenforce Enforcing

The big caveat with using LikeWise Open for your Unix to AD integration needs is that there is no Windows to UNIX GID/UID mapping so your UNIX (Linux, Solaris, Mac OSX, etc.) machines are stuck using Windows IDs.  This is not necessarily the end of the world depending on your environmental needs but it can be quite a pain if you are introducing AD into a large, established Unix environment.  LikeWise Enterprise does not suffer from this limitation, but it is obviously not free.

WordPress on Red Hat / CentOS Linux
https://sheepguardingllama.com/2009/02/wordpress-on-red-hat-centos-linux/
Thu, 26 Feb 2009
If you run WordPress on Red Hat Enterprise Linux (RHEL) or its free cousin CentOS then you will likely run into the following error after you have unpacked WordPress, installed it and tried to do your initial setup:

Error establishing a database connection

This either means that the username and password information in your wp-config.php file is incorrect or we can’t contact the database server at databasename. This could mean your host’s database server is down.

  • Are you sure you have the correct username and password?
  • Are you sure that you have typed the correct hostname?
  • Are you sure that the database server is running?

If you’re unsure what these terms mean you should probably contact your host. If you still need help you can always visit the WordPress Support Forums.

You are not alone, this happens to everyone.  If you do some searching on this you will find that pretty much no one has an answer for what is wrong.  People running MySQL server locally already know the trick necessary to fix this problem, but if you are running MySQL remotely, as I am, then you can easily be misled into thinking that the fix does not apply to you, but it does.

The issue here, surprisingly, is that SELinux is enabled on the web server and is keeping the MySQL library from communicating with the MySQL server whether local or remote.  Simply set SELinux to Permissive rather than Enforcing and voila, you should be working well.

The command to set SELinux to Permissive mode is:

setenforce 0

You can verify that the mode has changed correctly with:

getenforce
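Note that setting SELinux to Permissive disables enforcement for the entire system.  If you would rather keep SELinux enforcing, the targeted policy on Red Hat systems exposes a boolean governing outbound network connections from Apache, which addresses this case more narrowly (verify the boolean name on your particular release):

```shell
# Allow httpd (and PHP running under it) to open network connections;
# -P makes the change persist across reboots
setsebool -P httpd_can_network_connect on

# Confirm the current value
getsebool httpd_can_network_connect
```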

It is important to note that this SELinux issue (bug, I am told) does NOT affect the MySQL client but does affect PHP.  So if you are testing your database connection with “mysql” and it works but WordPress throws the error, then you are a prime candidate for this problem.

Also, be sure that PHP has the MySQL module installed:

yum install php-mysql

I have seen this issue on several versions of all of the software components but specifically just dealt with it in CentOS 5.2 with PHP 5.1.6 and WordPress 2.7.1.

Time Sync on VMWare Based Linux
https://sheepguardingllama.com/2009/02/time-sync-on-vmware-based-linux/
Thu, 19 Feb 2009
In many cases it can be quite difficult to keep accurate time on a virtualized operating system due to the complex interactions between the hardware, host operating system, virtualization layer and the guest operating system.  In my case I found that running Red Hat Linux 5 (CentOS 5) on VMWare Server 1.0.8 resulted in an unstoppable and rapid slowing of the guest clock.

The obvious steps to take include running NTP to control the local clock.  This, however, only works when the clock skews very slowly.  In my case, as in many, the clock drifts too rapidly for NTP to handle.  So we need another solution.  VMWare recommends installing VMWare Tools on the guest operating system and subsequently adding the following to your VMX configuration file:

tools.syncTime = true

This does not always work either.  You should also try changing your guest system’s clock type.  Most suggestions include adding clock=pit to the kernel options.  None of this worked for me.  I had to resort to a heavy-handed NTP trick: putting manual ntpdate updates into cron.  In my case, I set it to update every two minutes.  The clock still drifts heavily within the two minute interval but for me it is an acceptable amount.  You should adjust the update interval for your own needs.  Every five minutes may easily be enough but more frequently might be necessary.

Using crontab -e under the root user, add the following to your crontab:

*/2 * * * * /usr/sbin/ntpdate 0.centos.pool.ntp.org

For those unfamiliar with the use of */2 in the first column of this cron entry, that designates to run every two minutes.  For every five minutes you would use */5.  Remember that it takes a few minutes before cron changes take effect.  So don’t look for the time to begin syncing for a few minutes.

For me, this worked perfectly.  Ntpdate is not subject to the skew and offset issues that ntpd is.  So we don’t have to worry about the skew becoming too great and the sync process stopping.
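If you want to watch how quickly the guest clock drifts between the cron runs, ntpdate can query and report the offset without actually stepping the clock (this requires network access to the pool servers):

```shell
# -q queries the server and prints the current offset, but does not
# change the system clock
/usr/sbin/ntpdate -q 0.centos.pool.ntp.org
```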

If anyone has additional information on syncing Linux in this situation, please comment.  Keep in mind that this is for Red Hat Linux and the kernel with RHEL5 is 2.6.18 which does not include the latest kernel time updates that may be found in some distributions like Ubuntu.  Recent releases of Ubuntu likely do not suffer this issue as I expect OpenSuse 11.1 or the latest Fedora would not either.

Installing Windows Server 2003 on Xen on Red Hat Linux 5
https://sheepguardingllama.com/2009/02/installing-windows-server-2003-on-xen-on-red-hat-linux-5/
Mon, 02 Feb 2009
After being challenged several times during the process of installing Windows Server 2003 into a fully virtualized Xen environment on Red Hat Enterprise Linux 5 (RHEL 5 or CentOS 5) I decided that a quick tutorial for those of you who wish to install in exactly the same way would be helpful.  There are several potential road blocks that must be addressed including issues with accessing the graphical console (necessary for a normal Windows installation process)  if you are not working from a local Linux workstation with a graphical environment installed.

I like to start a Xen installation using the very handy virt-install command.  Virt-install, available by default, makes creating a new virtual machine very simple.  I will assume that you are familiar with this part of the process and already have Xen installed and working.  If you are not sure if your environment is set up properly, I suggest that you start by paravirtualizing a very simple, bare-bones Red Hat Linux server using the virt-install process to test out your setup before challenging yourself with a much more lengthy Windows install that has many potential pitfalls.

The first potential problem that many users face is a lack of support for full virtualization.  This is becoming less common of a problem as time goes on.  Full virtualization must be supported at the hardware level in both the processor and in the BIOS/firmware.  (I personally recommend the AMD Opteron platform for virtualization but be sure to get a processor revision, like Barcelona or later, that supports this.)

Using virt-install to kick off our install process is great but, most likely, you will do this and, if all goes well, you will begin to format your hard drive and then you will find that your Xen machine (xm) simply dies leaving you with nothing.  Do not be concerned.  This is a known issue that can be fixed with a simple tweak to the Xen configuration file.

CD Drive Configuration Issues

In some cases, you may have problems with your CD / DVD drive not being recognized correctly.  This can be fixed by adding a phy designation in the Xen configuration file to point to the CD-Rom drive.  This is only appropriate for people who are installing directly from CD or DVD.  Most people prefer to install from an ISO image.  Using an ISO does not have this problem.

In Red Hat, your Xen configuration files should be stored, by default, in /etc/xen.  Look in this directory and open the configuration file for the Windows Server 2003 virtual machine which you just created using virt-install.  There should be a “disk =” configuration line.  This line should contain, at a minimum, configuration details about your virtual hard drive and about the CD ROM device from which you will be installing.

The configuration for the CD ROM device should look something like:

disk = [ "file:/dev/to-w2k3-ww1,hda,w", ",hdc:cdrom,r" ]

You should change this file to add in a phy section for the cdrom device to point the system to the correct hardware device.  On my machine the cdrom device is mapped to /dev/cdrom which makes this very simple.

disk = [ "tap:aio:/xen/to-w2k3-ww1,hda,w", "phy:/dev/cdrom,hdc:cdrom,r" ]

Accessing the Xen Graphical Console Remotely via VNC

If you are like me you do not install anything unnecessary on your virtualization servers.  I find it very inappropriate for there to be any additional libraries, tools, utilities, packages, etc. located on the virtualization platform.  These are unnecessary and each one risks bloat and, worse yet, potential security holes.  Since all of the guest machines running on the host are vulnerable to any security concerns on the host, it is very important that the host be kept as secure and lean as possible.  To this end I have no graphical utilities of any kind available on the host (Dom0) environment.  Windows installations, however, generally require a graphical console in order to proceed.  This can cause any number of issues.

The simplest means of working around this problem is to use SSH forwarding to bring the remote frame buffer protocol (a.k.a. VNC or RFB) to your local workstation which, I will assume, has a graphical environment.  This solution is practical for almost any situation, is very secure, rather simple and is a good way to access emergency graphical consoles for any maintenance emergency.  Importantly, this solution works on Linux, Mac OSX, Windows or pretty much any operating system from which you may be working.

Before we begin attempting to open a connection we need to know on which port the VNC server is listening for connections on the Xen host (Dom0).  You can discover this, if you don’t know already from your settings, by running:

netstat -a | grep LISTEN | grep tcp

On Linux, Mac OSX or any UNIX or UNIX-like environment utilizing a command-line SSH client (OpenSSH under Cygwin and the like will also work on Windows in this way) we can easily establish a connection with a tunnel bringing the VNC connection to our local machine.  Here is a sample command:

ssh -L 5900:localhost:5900 root@dom0.example.com

If you are a normal Windows desktop user you do not have a command-line integrated SSH option already installed.  I suggest PuTTY.  It is the best SSH client for Windows.  In PuTTY you simply enter the name or IP address of the server which is your Dom0 as usual.  Then, before opening the connection, you can go into the PuTTY configuration menu and under Connection -> SSH -> Tunnels you can specify the Source Port (5900, by default for VNC but check your particular machine) and the Destination (localhost:5900.)  Then, just open your SSH connection, log in as root and we are ready to connect with TightVNC Viewer to our remote, graphical console session.

If you are connecting on a UNIX platform, such as Linux, and have vncviewer installed then you can easily connect to your session using:

vncviewer localhost::5900

Notice that there are two colons between localhost and the port number.  If you only use one colon then vncviewer thinks that you are entering a display number rather than a port number.

If you are on Windows you can download the viewer from the TightVNC project, for free, without any need to install.  Just unzip the download and run TightVNC Viewer.  You will enter localhost::5900 and voila, you have remote, secure access to the graphical console of your Windows server running on Xen on Linux.

Managing Apache and Subversion Through Active Directory (Part 1 – Authentication)
https://sheepguardingllama.com/2009/01/managing-apache-and-subversion-through-active-directory-part-1-authentication/
Thu, 15 Jan 2009
In my previous article, Installing Subversion on RHEL5, we went over how to install the Subversion server and how to make it accessible through the Apache web server.  This solution is great but leaves us without any user authorization and authentication.  For most Subversion instances these are features that we will want to have.  We have many choices for our A/A solution and I have decided to integrate my example repository with a Microsoft Active Directory (AD) system running on Windows 2003.  This, I feel, is probably the most commonly desired scenario for enterprise shops although a non-AD based LDAP and Kerberos system may also be very popular.  We will start by addressing authentication via Kerberos in this article.

In addition to using Kerberos for secure authentication, we are also switching from plain HTTP as our transport to HTTP over SSL, so be aware that after applying the Apache configuration file here you will need to access your Subversion directory with HTTPS rather than HTTP and that, unless otherwise configured, you will need to open your firewall both locally and remotely to allow port 443 traffic out instead of (or in addition to) port 80 traffic.

Installing Necessary Components

As with anything else in the Red Hat world, most of the heavy lifting is done by our friends at Red Hat engineering and we just need to leverage what they have already done for us.  We need to install the module for SSL transport and Kerberos authentication in Apache:

yum -y install mod_auth_kerb

This will automatically install the file /etc/httpd/conf.d/auth_kerb.conf which will take care of loading the Kerberos module into Apache and will provide a sample configuration if you want to learn more about Kerberos authentication in Apache.

Setting Up the Apache KeyTab File

Now we need to set up our Apache to Kerberos authentication table.  The Red Hat standard for this file is to be located at /etc/httpd/conf/keytab although you control its location through your Apache configuration.  We will not deviate from the standard here.

This file needs to contain the Kerberos service principal for the web server; replace servername.example.com with your web server's FQDN and EXAMPLE.COM with your realm:

echo HTTP/servername.example.com@EXAMPLE.COM >> /etc/httpd/conf/keytab
chown apache.apache /etc/httpd/conf/keytab
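Note that a keytab carrying actual key material is normally a binary file generated on the Active Directory side and copied over to the web server.  As a sketch only (the account name and output file below are hypothetical, and you should verify the exact flags against your Windows version), the common approach uses ktpass on the domain controller:

```shell
# Run on the Windows domain controller; svc-apache and httpd.keytab
# are placeholder names
ktpass -princ HTTP/servername.example.com@EXAMPLE.COM ^
       -mapuser svc-apache@example.com ^
       -pass * -out httpd.keytab
```

The resulting file would then be copied to /etc/httpd/conf/keytab on the Linux host.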

Setting Access Control

The traditional examples will generally tell you to use the .htaccess file to manage your authentication mechanisms.  For most cases it is better to avoid the use of the .htaccess file and to switch to configuring these details within your <Location> section in your Apache configuration files.  This is better for performance reasons as well as for ease of security management.  Now you only need to worry about specifying your security information in a single location and Apache need not traverse the entire directory structure seeking out .htaccess files for each access attempt.

I use the file /etc/httpd/conf.d/subversion.conf for the configuration of my Subversion repository.  Here are its contents:

   <Location /svn>
     DAV svn
     SVNPath /var/projects/svn/
     AuthName "Active Directory Login"
     AuthType Kerberos
     Krb5Keytab /etc/httpd/conf/keytab
     KrbAuthRealm EXAMPLE.COM
     KrbMethodNegotiate Off
     KrbSaveCredentials off
     KrbVerifyKDC off
     Require valid-user
     SSLRequireSSL
   </Location>

Configuration of Kerberos

Kerberos is configured in Red Hat Linux in the /etc/krb5.conf file.  Obviously replace EXAMPLE.COM and ad.example.com with the name of your Domain and your KDC.  This file should have been created for you using almost exactly these settings by the RPM installer so there is very little here that needs to be changed.

[libdefaults]
 default_realm = EXAMPLE.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 forwardable = yes

[realms]
 EXAMPLE.COM = {
  kdc = ad.example.com:88
 }

[domain_realm]
 example.com = EXAMPLE.COM
 .example.com = EXAMPLE.COM

[appdefaults]
 pam = {
   debug = false
   ticket_lifetime = 36000
   renew_lifetime = 36000
   forwardable = true
   krb4_convert = false
 }

Enable HTTPS Access Through Firewall

Use the Red Hat management tool to enable HTTPS connection through your host firewall.

system-config-securitylevel-tui

Restart Apache

Now, all that we need to do is to restart the web server to have it pick up the changes that we have made and voila, Kerberos authentication to Active Directory should be working.

/etc/init.d/httpd restart

Testing Your Connection

In order to test your connection you can use a web browser or use the Subversion command line client as below:

svn list https://localhost/svn/
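You can also exercise the authentication path from the shell with curl, which is handy on a host without a Subversion client installed.  This is a sketch; with KrbMethodNegotiate turned off as in the configuration above, mod_auth_kerb validates a basic-auth password against the KDC:

```shell
# Prompt for the AD user's password and list the repository root over
# HTTPS; -k skips certificate verification for a self-signed test cert
curl -k -u username https://localhost/svn/
```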

Error Notes:

If you set KrbMethodNegotiate On then, in my experience, you will see Firefox work just fine but Internet Explorer (IE) and Chrome will fail with a 500 error.  In the logs I discovered the following entry:

gss_acquire_cred() failed: Unspecified GSS failure.  Minor code may provide more information (Unknown code krb5 213)

References:

Providing Active Directory Authentication via Kerberos Protocol in Apache by Alex Yu, MVP, Microsoft Support

Installing ruby-sqlite3 on Red Hat or CentOS Linux
https://sheepguardingllama.com/2008/11/installing-ruby-sqlite3-on-red-hat-or-centos-linux/
Sun, 23 Nov 2008
For my development environment, I like to use SQLite3 on Red Hat Enterprise Linux (RHEL / CentOS).  When working with the gem installer for the sqlite3-ruby package I kept getting an error on my newest machine.  I searched online and found no answers anywhere while finding many people having this same problem.  I have found a solution; there is no need to compile Ruby again from source.

The command used was:

gem install sqlite3-ruby

What I found was the following error:

gem install sqlite3-ruby
Building native extensions.  This could take a while…
ERROR:  Error installing sqlite3-ruby:
ERROR: Failed to build gem native extension.

/usr/bin/ruby extconf.rb install sqlite3-ruby
checking for fdatasync() in -lrt… no
checking for sqlite3.h… no

make
make: *** No rule to make target `ruby.h', needed by `sqlite3_api_wrap.o'.  Stop.

Gem files will remain installed in /usr/lib64/ruby/gems/1.8/gems/sqlite3-ruby-1.2.4 for inspection.
Results logged to /usr/lib64/ruby/gems/1.8/gems/sqlite3-ruby-1.2.4/ext/sqlite3_api/gem_make.out

There are two main causes of this problem.  The first is that the correct dev packages are not installed.  Be sure that you install the correct packages for Red Hat.  In RHEL 5, which I use, SQLite3 is now simply SQLite.

yum install ruby-devel sqlite sqlite-devel ruby-rdoc

If you are still receiving the error then you most likely do not have a C compiler installed.  The Gem system needs make and the GCC.  So install those as well.  (Obviously you could combine these two steps.)

yum install make gcc
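Before rerunning the gem install, you can sanity-check that the build tools actually landed on the PATH.  This is just a convenience sketch; check_tools is my own helper name, not part of any package:

```shell
# Minimal pre-flight check (a sketch): report any build tools missing
# from PATH before attempting to compile a native gem extension.
check_tools() {
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
  done
}

# Typical usage before `gem install sqlite3-ruby`:
check_tools gcc make
```

If it prints nothing, the toolchain is in place and the native extension build should get past the compiler stage.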

Voila, your SQLite / SQLite3 installation on Red Hat (RHEL), Fedora, or CentOS Linux should be working fine.  Now your "rake db:migrate" should be working.

Update: If you follow these directions and get the error that sqlite3-ruby requires Ruby version > 1.8.5 then you can go to my follow-up directions on
SQLite3-Ruby Gem Version Issues on Red Hat Linux and CentOS

]]>
https://sheepguardingllama.com/2008/11/installing-ruby-sqlite3-on-red-hat-or-centos-linux/feed/ 16
High IOWait on VMWare Server on Linux https://sheepguardingllama.com/2008/11/high-iowait-on-vmware-server-on-linux/ https://sheepguardingllama.com/2008/11/high-iowait-on-vmware-server-on-linux/#comments Fri, 21 Nov 2008 04:21:00 +0000 http://www.sheepguardingllama.com/?p=2995 Continue reading "High IOWait on VMWare Server on Linux"

]]>
In using VMWare Server running on Red Hat Enterprise Linux 5 (CentOS 5) I discovered a rather difficult problem.  My setup includes Red Hat Linux 5.2, Solaris 10 and Windows Server 2003 guests (all 64bit except for Windows) running on a Red Hat 5.2 host server with multicore AMD Opteron processors in an HP Proliant DL145 G3.

The issue that I found was that the Windows guest was exhibiting serious performance issues.  The box would freeze regularly and networking would stop: pings continued, but remote desktop (RDP) sessions would be interrupted.  I consistently found symmpi errors in the System Event Log:

The device, \Device\Scsi\symmpi1, did not respond within the timeout period.

Because the issues were only exhibiting on Windows and not on Linux or Solaris guests I was convinced that the issue was Windows related.  I could see that the Linux host operating system was showing massive IOWait states (you can see this in top or with the iostat command from the sysstat package.)  I assumed that this was being caused by the Windows guest; it was not.

I turned off all three guest operating systems and noticed almost no drop in the IOWait levels; however, if I turned off the VMWare Server process (/etc/init.d/vmware stop) the IOWait would drop almost instantly and return again as soon as I restarted the process, even without starting any virtual machine images.  Clearly the issue was with VMWare Server itself.

My first thought was to make sure that VMWare Server was up to date.  I have been running VMWare Server 1.0.7 and so downloaded and updated the very recent 1.0.8 update just to be sure that this issue was not addressed in that package.  It was not.  I am aware that the 2.0 series is available now but as this box is used a bit I am not interested yet in moving to the new series unless absolutely necessary.

Once I narrowed down that the issue was a problem with VMWare Server on Linux I was able to track down a solution.  Special thanks to Mr. Pointy for publishing the solution to this for Gutsy Gibbon.  Red Hat and Ubuntu are sharing a problem in this case.

The issue is with memory configuration defaults with VMWare Server on this platform.  Very likely this will apply to Novell SUSE Linux, OpenSUSE, Fedora and others, but I have not tested it.  In the main VMWare Server configuration file (/etc/vmware/config) the following changes should be added:

prefvmx.useRecommendedLockedMemSize = "TRUE"
prefvmx.minVmMemPct = "100"

Then, in each of the individual virtual machine configuration files (*.vmx) you need to add:

sched.mem.pshare.enable = "FALSE"
mainMem.useNamedFile = "FALSE"
MemTrimRate = "0"
MemAllowAutoScaleDown = "FALSE"
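If you have more than a couple of guests, a small loop can append these lines for you.  This is just a sketch: the /vmware base directory and the one-directory-per-VM layout are assumptions, so adjust the path and glob to your own setup:

```shell
# Append the memory-tuning lines to each .vmx file found one level
# below the given directory (layout assumed: <dir>/<vmname>/<vm>.vmx).
add_vmx_settings() {
  dir=$1
  for vmx in "$dir"/*/*.vmx; do
    [ -f "$vmx" ] || continue
    cat >> "$vmx" <<'EOF'
sched.mem.pshare.enable = "FALSE"
mainMem.useNamedFile = "FALSE"
MemTrimRate = "0"
MemAllowAutoScaleDown = "FALSE"
EOF
  done
}

# e.g. add_vmx_settings /vmware
```

Note that this blindly appends; if you run it twice you will get duplicate lines, so check the files afterward.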

These changes are taken directly from Mr. Pointy’s blog.  Once the changes are made you can restart VMWare Server (/etc/init.d/vmware restart) and the difference should be immediately visible.  Mr. Pointy posted his own sar results and here are mine.  You can clearly see the change in the %iowait column at 10:10pm when I restarted VMWare with the new configuration.  The numbers are low around 7:00pm because I had VMWare off much of that hour.

06:40:01 PM       CPU     %user     %nice   %system   %iowait    %steal     %idle
06:50:01 PM       all      0.16      0.00      1.77     43.15      0.00     54.92
07:00:01 PM       all      2.83      0.00      6.83      9.51      0.00     80.82
07:10:01 PM       all      0.10      0.00      1.38      4.93      0.00     93.59
07:20:01 PM       all      0.11      0.20      1.84     14.78      0.00     83.07
07:30:01 PM       all      0.10      0.00      2.08      8.84      0.00     88.98
07:40:02 PM       all      0.11      0.00      2.36     26.84      0.00     70.70
07:50:01 PM       all      0.11      0.00      2.32     28.54      0.00     69.04
08:00:01 PM       all      0.10      0.00      2.13     30.63      0.00     67.14
08:10:01 PM       all      0.10      0.00      2.06     22.74      0.00     75.10
08:20:01 PM       all      0.09      0.20      2.02     22.75      0.00     74.94
08:30:04 PM       all      0.09      0.00      2.21     23.22      0.00     74.48
08:40:01 PM       all      0.09      0.00      3.03     25.06      0.00     71.81
08:50:01 PM       all      0.09      0.00      3.09     27.21      0.00     69.61
09:00:01 PM       all      0.10      0.00      3.13     29.40      0.00     67.37
09:10:01 PM       all      0.09      0.00      3.11     25.56      0.00     71.23
09:20:01 PM       all      0.09      0.19      3.07     23.79      0.00     72.86
09:30:01 PM       all      0.09      0.00      2.98     21.50      0.00     75.43
09:40:01 PM       all      0.10      0.00      2.97     25.94      0.00     70.99
09:50:01 PM       all      0.10      0.00      3.28     32.70      0.00     63.93
10:00:01 PM       all      0.20      0.00      4.96     40.73      0.00     54.11
10:10:01 PM       all      0.69      0.00      8.57      1.23      0.00     89.50
10:20:01 PM       all      0.88      0.21      6.34      0.67      0.00     91.90
10:30:01 PM       all      0.81      0.00      6.04      0.26      0.00     92.89
10:40:01 PM       all      0.78      0.00      5.55      0.20      0.00     93.47
10:50:01 PM       all      0.77      0.00      5.47      0.07      0.00     93.69
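To watch just the %iowait column from output like the table above, a tiny awk filter helps.  This is a sketch that assumes the stock 12-hour `sar -u` layout shown here (timestamp, AM/PM, then the CPU columns); a 24-hour locale would shift the fields:

```shell
# Print timestamp and %iowait from `sar -u` style output, e.g.:
#   sar -u | extract_iowait
extract_iowait() {
  # $1=time, $2=AM/PM, $3="all", $7=%iowait in the layout above
  awk '$3 == "all" { print $1, $2, $7 }'
}
```

Piping the table above through it makes the drop at 10:10 PM jump right out.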

After the change was complete I had no problem running I/O-intensive operations like disk compression, defragmentation, etc.

Original solution from: Mr. Pointy – Gutsy and VMWare Server – You’re In for Some Pain.

]]>
https://sheepguardingllama.com/2008/11/high-iowait-on-vmware-server-on-linux/feed/ 1
Resizing VMWare Server Virtual Disk https://sheepguardingllama.com/2008/11/resizing-vmware-server-virtual-disk/ https://sheepguardingllama.com/2008/11/resizing-vmware-server-virtual-disk/#comments Thu, 20 Nov 2008 18:12:30 +0000 http://www.sheepguardingllama.com/?p=2990 Continue reading "Resizing VMWare Server Virtual Disk"

]]>
Today I needed to resize a VMWare Virtual Disk (vmdk) for a Windows Server 2003 image running on a Red Hat Enterprise Linux 5 host using LVM to manage the physical, local disk space.  In my case, my logical volume was too small to accommodate the vmdk expansion and so I had to grow my logical volume before I could begin the VMWare portion of the work.

I must preface all of this, of course, by stating that you must make a complete backup of your virtual machine before doing something as invasive as this.  While this process is reasonably safe there is always the potential for disaster.  Take precautions.

The lvextend command is used to increase the size of the logical volume.  You can view your current logical volumes with lvdisplay.  I use the -L+ syntax as a safety measure to be sure that my drive is getting larger and not shrinking accidentally due to a typo.  In this example I am expanding the /dev/VolGroup00/lvvmware logical volume by an additional 80GB.

lvextend -L+80G /dev/VolGroup00/lvvmware
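Before extending, it is worth confirming that the volume group actually has enough free space.  Here is a small helper, just a sketch: it parses the stock vgdisplay output format, which could vary between LVM versions:

```shell
# Extract the free-space figure from `vgdisplay` output, e.g.:
#   vgdisplay VolGroup00 | vg_free
vg_free() {
  # Match the "Free  PE / Size" line and print the size after the
  # last "/" with leading spaces stripped.
  awk -F/ '/Free *PE \/ Size/ { sub(/^ */, "", $NF); print $NF }'
}
```

If the reported free space is smaller than the amount you plan to add, lvextend will refuse and you will need to grow the volume group first.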

This first step can be completed while the virtual machine is still running.  It will happily extend your available space in the background.  Our next step, however, requires that you power down your virtual machine before continuing.

Now that we have created space on our logical volume we need to expand the Linux local filesystem before we can expand the virtual filesystem running on top of it.  Assuming that we are using the current standard Ext3 this is very simple:

umount /dev/VolGroup00/lvvmware

e2fsck -f /dev/VolGroup00/lvvmware

resize2fs /dev/VolGroup00/lvvmware

mount /vmware/
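The four steps above can be wrapped into a single helper.  This is only a sketch of the sequence, not a hardened script; it assumes an ext3 filesystem that must be resized offline, as described above, and the function name is my own:

```shell
# Grow an offline ext3 filesystem to fill its (already extended)
# logical volume: unmount, force a check, resize, remount.
grow_ext3() {
  lv=$1
  mnt=$2
  umount "$lv"     || return 1
  e2fsck -f "$lv"  || return 1   # filesystem must check clean first
  resize2fs "$lv"  || return 1   # grows to fill the logical volume
  mount "$lv" "$mnt"
}

# e.g. grow_ext3 /dev/VolGroup00/lvvmware /vmware
```

Because each step short-circuits on failure, a failed fsck leaves the filesystem unmounted rather than resizing a dirty volume.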

Obviously for my purposes I use a /vmware directory structure for holding all of my disk images.  You will need to adjust as needed for your own setup.  /var/vmware is another common option.

Now we just enlarge the virtual disk itself.  We will do this through the vmware-vdiskmanager command.  You will need to execute this command on your vmdk and not your flat-vmdk even though this seems counter-intuitive when looking at your directory structure.

vmware-vdiskmanager -x 22GB ph-w2k3-ad.vmdk

This concludes the easy part.  Now you have plenty of logical disk space available for Windows but in order to expand the System drive of Windows you will need to use a third party tool.  Windows Server 2003 is unable to make partition changes that affect the running system.

If you are like me, you will want to fire up your virtual machine just to make sure that everything is okay after the disk change, but you will need to turn it off again before we make changes to the partition table.

There are many tools that can be used for this task but I decided to use GParted, which is available as a live CD which you can download for free.  For the version that I used, I just cd’d into /tmp and used this command to get my copy of GParted’s bootable CD ISO file.

wget "http://downloads.sourceforge.net/gparted/gparted-live-0.3.9-4.iso?modtime=1222872844&big_mirror=0"

Using your VMWare Server Console (or through the command line) you will need to set your Windows Server image to boot from the GParted ISO which you just downloaded.  Then go ahead and “start this virtual machine.”

You will likely need to hit “Esc” as soon as the virtual machine starts so that you can select to boot from CD.  I keep my Virtual BIOS set to boot directly to the hard drive under normal circumstances because it is faster and I don’t want to accidentally boot to some CD media unless I really, really mean it.

Once GParted starts you will be given a boot menu.  The default option works fine in most cases and worked fine for me.  You will need to select your keyboard layout and then you will be taken to GParted’s graphical partition manager screen.

Once in the GParted Partition Manager you should see the current partition that you had before we started, in my case called /dev/sda1 and marked as being an NTFS file system.  Mine also shows the “unallocated” partition space into which I will be expanding my /dev/sda1 partition.

Start by selecting the partition which you are seeking to resize (sda1 for me) and then select “Resize/Move”.  This will open the Resize/Move window.  Do not alter the first number, “Free Space Preceding”; this is for “moving” your partition.  You only want to alter the second number, “New Size.”  If you are doing as I did and have created some empty space specifically for this purpose then you will simply set this number to the “Maximum Size” as displayed in the window.  Then select “Resize/Move” to continue.

Once you have completed that step you can visually confirm that the disk now looks the way that you want it to look.  If you look at the bottom of the window you will see that there is “1 operation pending.”  If everything looks alright go ahead and click “Apply” to commit your changes and to resize your partition.

Once the resizing completes you are safe to reboot your virtual machine into Windows again.  Double click the “Exit” button on the GParted desktop.  Reboot should already be selected so just choose OK to continue.

When Windows starts it will detect the drive configuration change and force a disk consistency check.  Allow it to run through this process and when it completes the system will restart automatically.  Once Windows restarts you should see that your drive has been resized.

]]>
https://sheepguardingllama.com/2008/11/resizing-vmware-server-virtual-disk/feed/ 2
Subversion Permission Issues https://sheepguardingllama.com/2008/11/subversion-permission-issues/ https://sheepguardingllama.com/2008/11/subversion-permission-issues/#comments Sun, 16 Nov 2008 19:19:02 +0000 http://www.sheepguardingllama.com/?p=2940 Continue reading "Subversion Permission Issues"

]]>
In my installation of Subversion (SVN) on Red Hat Enterprise Linux 5 (a.k.a. RHEL 5 or CentOS 5), I was attempting to access my working Subversion repository through the web interface using Apache.  I came across a permissions issue giving the following errors:

This one is from the Apache error log (/var/log/httpd/error_log) and is generated whenever an attempt to connect to the resource via the web interface is made:

[error] [client 127.0.0.1] Could not open the requested SVN filesystem  [500, #2]

This is what was visible from the web browser.  This is its rendering of the XML response.

<D:error>
<C:error/>
<m:human-readable errcode="2">
Could not open the requested SVN filesystem
</m:human-readable>
</D:error>

This one arose when attempting to run the svn command as the apache user (sudo -u apache svn list….)

svn: Can't open file '/root/.subversion/servers': Permission denied

I eventually discovered that this problem was being caused by the Subversion binary looking to the root home directory, instead of to the Apache / httpd home directory (~apache, which was /var/www in my configuration).  This is not the correct behaviour, but until the issue is fixed you can work around the problem yourself with this:

mkdir -p ~apache/.subversion

cp -r /root/.subversion/* ~apache/.subversion/

chown -R apache:apache ~apache/.subversion/
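To confirm that the chown took effect, one quick check (a hypothetical helper of my own, not part of Subversion) is to list anything under the directory that is not owned by the expected user:

```shell
# List anything under a directory NOT owned by the given user;
# an empty result means the ownership change took everywhere.
find_not_owned() {
  find "$1" ! -user "$2"
}

# e.g. this should print nothing after the chown above:
# find_not_owned ~apache/.subversion apache
```

If any paths are printed, rerun the chown with -R on the directory.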

]]>
https://sheepguardingllama.com/2008/11/subversion-permission-issues/feed/ 3
Installing Subversion on RHEL5 https://sheepguardingllama.com/2008/11/installing-subversion-on-rhel5/ https://sheepguardingllama.com/2008/11/installing-subversion-on-rhel5/#comments Sun, 16 Nov 2008 00:57:20 +0000 http://www.sheepguardingllama.com/?p=2298 Continue reading "Installing Subversion on RHEL5"

]]>
Subversion (SVN) is a popular open source, source code change control package. Today we are going to install and configure Subversion on Red Hat Enterprise Linux 5.2 (a.k.a. RHEL 5.2).  I will actually be doing my testing on CentOS 5.2 but the process should be completely identical.

Installing Subversion on Linux

Installation of Subversion is very simple if you are using yum. In addition to Subversion itself, you will also want to install Apache as you will most likely want to access Subversion through a WebDAV interface.  You can simply run:

yum -y install subversion httpd mod_dav_svn

Once Subversion is successfully installed we need to create the initial repository. This can be done on the local file system but I prefer to keep high priority and highly volatile data stored directly on the NAS filer as this is far more appropriate for this type of data.

As an aside, I like to keep low volatility data (say, website HTML) stored on local discs in general for performance reasons and since backups are not as difficult to take using traditional backup methods (e.g. tar, cpio, Amanda, Bacula, etc.)  High volatility files I prefer to keep on dedicated network storage units where backups can be easily taken using more advanced methods like Solaris 10’s ZFS snapshot capability. It is not always clear when data makes sense to keep locally or to store remotely but I feel that you can gauge a lot of the decision on two factors: the frequency of data changes – that is, changes to existing files, not necessarily the addition of new files – and the degree to which the data is the focus of the application – that is, whether the data is incidental or key to the application. In the case of Subversion the entire application is nothing but a complex filesystem frontend so we are clearly on the side of a “data focused” application.

I started writing this article on RHEL4 on a system with a small, local file system.  When I returned to the uncompleted article and continued with it I was implementing this on a RHEL5 system with massive local storage and decided to keep my Subversion repository local on a dedicated logical volume for easy Linux based snapshots.

Subversion has two backend storage options.  The original method of storing Subversion data was with the venerable Berkeley Database, known as BDB, which is now a product of Oracle.  The newer method, and the default choice since Subversion 1.2, is FSFS (I don’t know exactly for what its initials stand) which uses native filesystem mechanisms for storage.  In my example here and for my own use I choose FSFS as I think it is more often the better choice.  Of most important note, FSFS supports remote filesystems over NFS and CIFS while BDB does not.  FSFS is also easier to deal with when it comes to creating backups.  My feeling is that unless you really know why you want to use BDB, stick with the default FSFS; there is a reason that it was selected as the default.

Another note about creating Subversion repositories: some sources recommend putting Subversion repos under /opt. All I have to say is “No No No!” The /opt filesystem is not appropriate for regularly changing data. Any data that is expected to change on a regular basis (e.g. log files, source code repos, etc.) belongs in /var. This is the entire purpose of the /var filesystem: it stands for “variable” and is purposed for regular filesystem changes. Data belonging in /var is another indicator that an external network filesystem may be appropriate as well.

mkdir -p /var/projects/svn

At this point you can either use /var/projects/svn as a normal local directory or mount it remotely in some manner such as NFS, CIFS or iSCSI.  Regardless of how the repository is set up, the rest of this document will function identically.

We are now in a position to use svnadmin to create our repository directory:

svnadmin create /var/projects/svn/

At this point, Subversion should already be working for you.  If you are new to Subversion, we will do a simple import to test our installation.  To perform this test, create a directory called “testproject” and put it in the /tmp directory.  Now touch a couple of files inside that directory so that we have something with which to work.  Then we will do our first Subversion import.

mkdir /tmp/testproject; cd /tmp/testproject; touch test1 test2 test3

svn import /tmp/testproject/ file:///var/projects/svn/test -m "First Import"

Your Subversion installation is now working, but few people will be happy accessing their Subversion repositories only from the local machine as we have done here.  If you are used to working from the UNIX (Linux, Mac OSX, Cygwin, etc.) command line you may want to try accessing your new Subversion repository using SVN+SSH.  Here is an example taken from an OpenSUSE workstation with the Subversion client installed:

svn list svn+ssh://myserver/var/projects/svn
testproject/

At this point you now have access from your external machines and can perform a checkout to get a working copy of your code.  To make the process really simple be sure to set up your OpenSSH keys so that you are not prompted for a password.  For many users, most notably Windows users, you are going to want access over the HTTP protocol since Windows does not natively support the SSH protocol.

The first thing that you are going to need to do, if you are running SELinux and Firewall security on your RHEL server like I am, is to open ports 80 and 443 in your firewall so that Apache is enabled.  Normally I shy away from management tools but this one I like.  Just use “system-config-securitylevel-tui” and select the appropriate services to allow.

You will also need to allow the Apache web server to write to the Subversion repository location within SELinux.  To do so we can use the command:

restorecon -R /var/projects/svn/

We have one little trick that we need to perform.  This trick is necessary because of what appears to be a bug in the way that Subversion sets the user ID when it runs.  This is not necessary for all users but it can be a pretty tough sticking point for anyone who runs into it and is not aware of what can be done to remedy the situation.

mkdir -p ~apache/.subversion

cp -r /root/.subversion/* ~apache/.subversion/

Configuring Apache 2 on Red Hat 5 is a little tricky so we will walk through it together.  The first thing that needs to be added is the LoadModule line for the WebDAV protocol.  This goes into the LoadModule section of the main /etc/httpd/conf/httpd.conf configuration file.

LoadModule dav_module         modules/mod_dav.so

The rest of our configuration changes for Apache 2 will go into a dedicated configuration file just for our subversion repository: /etc/httpd/conf.d/subversion.conf

I am including here my entire configuration file sans comments.  You will need to modify your SVNPath variable accordingly, of course.

# grep -v \# /etc/httpd/conf.d/subversion.conf

LoadModule dav_svn_module       modules/mod_dav_svn.so
<Location /svn>
  DAV svn
  SVNPath /var/projects/svn/
</Location>

At this stage you should now not only have a working Subversion repository but should be able to access it via the web.  You can test web access from your local box with the svn command.  Here is an example:

svn list http://localhost/svn/

References:

Mason, Mike. “Pragmatic Version Control Using Subversion, 2nd Edition”, The Pragmatic Programmers, 2006.

Installing Subversion on Apache by Marc Grabanski

Subversion Setup on Red Hat by Paul Valentino

Setting Up Subversion and Trac As Virtual Hosts on Ubuntu Server, How To Forge

The SVN Book, RedBean

Additional Material:

Subversion Version Control: Using the Subversion Version Control System in Development Projects

]]>
https://sheepguardingllama.com/2008/11/installing-subversion-on-rhel5/feed/ 5
Twitter from the Linux Command Line https://sheepguardingllama.com/2008/10/twitter-from-the-linux-command-line/ https://sheepguardingllama.com/2008/10/twitter-from-the-linux-command-line/#comments Fri, 31 Oct 2008 14:52:47 +0000 http://www.sheepguardingllama.com/?p=2820 Continue reading "Twitter from the Linux Command Line"

]]>
Okay, so you are a crazy BASH or Korn shell nut (DASH, ASH, TCSH, CSH, ZSH, etc., etc. yes, I mean all of you) and you totally want to be able to Tweet on your Twitter feed without going to one of those crufty GUI utilities.  Such overkill for such a simple task.  I feel your pain.  When I found this little nugget of command line coolness I just had to share it with all of you.  Special thanks to Marco Kotrotsos from Incredicorp who published this on IBM Developer Works.

If you have curl installed, all you need to do is:

curl -u username:pass -d status="text" http://twitter.com/statuses/update.xml

So, to give you a real world example, if you are “bobtheuser” and your password is “pass1234” and you want to say “Hey, my first UNIX Shell Tweet.” then you just need to:

curl -u bobtheuser:pass1234 -d status="Hey, my first UNIX Shell Tweet." \
http://twitter.com/statuses/update.xml

You will get some feedback in the form of a response XML file. Happy Tweeting!
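If you tweet often, you might wrap the call in a small shell function.  This is purely a convenience sketch; the function name and the DRY_RUN switch are my own inventions, handy for checking the command line without actually posting:

```shell
# Post a status update via the Twitter API endpoint used above.
# With DRY_RUN=1, print the curl command instead of running it.
tweet() {
  user=$1
  msg=$2
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo curl -u "$user" -d "status=$msg" http://twitter.com/statuses/update.xml
  else
    curl -u "$user" -d "status=$msg" http://twitter.com/statuses/update.xml
  fi
}

# e.g. tweet bobtheuser:pass1234 "Hey, my first UNIX Shell Tweet."
```

Keeping the credentials in an argument rather than hard-coding them also makes the function easy to reuse across accounts.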

Disclaimer: I realize that using “Linux” in the subject is misleading.  This is not a Linux specific post but will apply to FreeBSD, OpenBSD, NetBSD, Mac OSX, UNIX, Solaris, AIX, Windows with Cygwin or just about any system with a command line and the curl utility installed.

I use this as the basis for my Ruby based Twitter Client for the command line.

]]>
https://sheepguardingllama.com/2008/10/twitter-from-the-linux-command-line/feed/ 1
Ruby/Qt: qtruby4.rb:2144: [BUG] [x86_64-linux] https://sheepguardingllama.com/2008/10/rubyqt-qtruby4rb2144-bug-x86_64-linux/ https://sheepguardingllama.com/2008/10/rubyqt-qtruby4rb2144-bug-x86_64-linux/#respond Sat, 04 Oct 2008 17:01:50 +0000 http://www.sheepguardingllama.com/?p=2661 Continue reading "Ruby/Qt: qtruby4.rb:2144: [BUG] [x86_64-linux]"

]]>
You are working with Ruby and Qt and you get the following error:

/usr/lib64/ruby/site_ruby/1.8/Qt/qtruby4.rb:2144: [BUG] Segmentation fault
ruby 1.8.6 (2008-03-03) [x86_64-linux]

This is usually caused by a library linking problem. Most likely you are using:

require 'Qt'

Personally, I run into this problem when using Ruby/Qt on Novell OpenSUSE 11 64bit (x86_64 / AMD64).  What needs to be done is that linking needs to occur explicitly to the correct library.  If you are using ‘Qt3’ then you can link directly to that or, in my case, you want to use Qt4/KDE4 bindings then you will want to link to korundum4:

require 'korundum4'

Problem solved!

Thanks to Bemerkenswertes Meinerseits for some guidance in German!

]]>
https://sheepguardingllama.com/2008/10/rubyqt-qtruby4rb2144-bug-x86_64-linux/feed/ 0
HowTo WhiteList Proxy for School Using Squid on OpenSUSE Linux 11 https://sheepguardingllama.com/2008/09/howto-whitelist-proxy-for-school-using-squid-on-opensuse-linux-11/ https://sheepguardingllama.com/2008/09/howto-whitelist-proxy-for-school-using-squid-on-opensuse-linux-11/#comments Sun, 28 Sep 2008 22:26:25 +0000 http://www.sheepguardingllama.com/?p=2610 Continue reading "HowTo WhiteList Proxy for School Using Squid on OpenSUSE Linux 11"

]]>
Overview

I am the technology coordinator for a small, private K12 school in rural Upstate New York.  One of our challenges is filtering Internet access so that the students may have access to the Internet as much as possible while not requiring constant, direct supervision.

To meet these needs we decided that we were limited to WhiteListing – managing a list of all allowed websites and blocking everything by default as opposed to blacklisting where everything is allowed except for a specific list of banned sites.  Whitelisting means that we have to manually maintain a list of approved websites, but the parents are confident that the students are only able to access pre-approved web sites.

Our Infrastructure

Before getting into the implementation details, I would like to detail how our network is laid out to put this project into context.  We are a pure 32bit Novell OpenSUSE environment, both desktops and servers, with a single Netgear ProSafe Firewall connecting us to a donated Time-Warner RoadRunner cable connection (Thank You, Time Warner RR!!)

Each desktop is set up without routing so they are limited to communications within the subnet only.  We have no fears of needing to grow beyond our /25 subnet’s limit anytime soon.  We have no DHCP and use static IP assignments throughout the school including machines connected via wireless.  Those machines used for administrators (not teachers – but office use where students do not have access) are routable and will not use our filter (for extra security they are allowed external access at the firewall via an IP list.)  All other machines can only get access to the Internet through the use of the proxy server.  This also allows us to improve bandwidth utilization through aggressive caching since the set of allowed sites is so limited and well known.

For our proxy server hardware we are using an HP Proliant DL380 G2 with dual Pentium III 1.4GHz processors, 1.25GB of RAM and six hot-swap 36GB 10,000RPM SCSI drives arranged as RAID 0+1.  This machine is far more than adequate for our needs and does an amazing job.  We could easily run on a DL360 G1 with just a single processor, half that memory and two drives in RAID 1 without any problem.  Our previous machine, which we used for years without any issues in performance, was a Proliant 3000, dual Pentium II 333MHz, 1GB and five 4.3GB 7,200RPM drives in RAID 5.

The older system ran SUSE 9.2 and ran wonderfully for a long time.  I am writing this HowTo guide as I move us to OpenSUSE 11 and do a fresh installation of our proxy server.

The Software

As we are running on OpenSUSE Linux 11, I want to work with Novell managed packages as much as possible.  For the proxy portion of our system we will use the Linux standard proxy server Squid.  OpenSUSE’s repository offers us both Squid3 and Squid2.  We will go ahead and use the latest Squid Proxy package for OpenSUSE 11, Squid3 3.0.5.  The downside to going with the newer Squid3 package is that OpenSUSE’s YaST tool cannot yet manage it so you are stuck working only from the configuration files.

For advanced filtering we have two primary choices: SquidGuard and DansGuardian.  SquidGuard has the advantage of being included in the OpenSUSE repositories making it easier to manage from a patch perspective.  DansGuardian is what I have used in the past.  It is available as an RPM from the OpenSUSE Build site but is not available through the YaST repositories.  DansGuardian is GPL’d but the author asks that you not exercise your GPL rights (GPL in fact but not in spirit.)  So, I like to avoid DansGuardian simply because I can’t figure out if the author even wants me to use his software or not.

For our purposes here, using nothing but whitelisting, we do not need the features of either SquidGuard or DansGuardian and can avoid them completely.  If you are looking to do more than just whitelisting they are your best bets.

Installing the Proxy Server: Squid

Installing Squid3 on OpenSUSE 11 is extremely simple.

zypper install squid3

Of course, if you prefer, you can always use OpenSUSE’s YaST utility, either graphically through the desktop or through an ncurses interface on the command line, to install Squid and any necessary dependencies.  I find working through Zypper (or Yum on a Red Hat, CentOS or Fedora system) to be the most efficient by far.

Configuring Squid

These are the changes that I made to /etc/squid/squid.conf:

acl localnet src 192.168.4.0/25
acl whitelist dstdomain "/etc/squid/whitelist"
http_access allow all whitelist
http_access deny all
http_port 8080

That’s it.  Very, very simple.  The first line simply defines my local network.  You will need to use your own local network and not mine for this to work for you.  If you stick with the Squid3 defaults then all private networks are allowed locally by default so that is a completely viable option.  The second line defines the whitelist itself as a dstdomain acl, pointing at the /etc/squid/whitelist file that we will create below.

The two http_access lines first tell the system to allow access to anyone (“all”) to sites included in the whitelist.  The next line says to deny access to anyone who did not get allowed from the previous rule.

The last line, http_port, is completely optional.  The default port for Squid is 3128 but I prefer to run my proxy on the more common 8080 port.  This is just easier to remember when setting up desktops.

With the default install of Squid3, Squid is not configured to start automatically.  So we need to use chkconfig to configure Squid to start on system boot.  You can skip this step if, for some reason, you do not want your proxy system to start automatically when your server restarts.

chkconfig --level 3 squid on

Before we actually start Squid, though, we will want to create our whitelist file which will be the main configuration file that we will be using after Squid is up and running.

Creating the Whitelist

Using your favourite text editor (that’s vi for me) create the file /etc/squid/whitelist.  This file is just a simple list of websites that will be allowed.  The one thing of which to be aware is the fact that your entries need to lead with a dot.  If you leave off the dot you will have problems.  Here is an example from my own whitelist:

.gov
.sheepguardingllama.com
.unicef.org
.eff.org
.conversationsnetwork.org

In this example, all United States government web sites will be allowed (those ending in .gov) as well as this blog, UNICEF, the Electronic Frontier Foundation and The Conversations Network.  Anytime that you alter this file you will need to ask Squid to reread its configuration.
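Because an entry missing its leading dot fails quietly, a quick sanity check of the file before reloading is cheap insurance.  A minimal sketch; it writes a sample file to /tmp rather than touching the real /etc/squid/whitelist:

```shell
# Sample whitelist in the same format as /etc/squid/whitelist:
# every entry must begin with a dot.
printf '.gov\n.sheepguardingllama.com\neff.org\n' > /tmp/whitelist.sample

# Print any entry that is missing the required leading dot:
grep -v '^\.' /tmp/whitelist.sample
# prints: eff.org
```

After fixing any flagged entries, have Squid reread its configuration with "squid -k reconfigure" (or restart the service).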

Configuring the Desktop Clients

If you are like me, you will be using OpenSUSE on your desktops as well, which I highly recommend.  OpenSUSE makes a wonderful desktop, especially with KDE4.  With OpenSUSE you have the option of setting your proxy settings using the handy YaST tool, and that is fine.  If you are like me, though, you will prefer the command line: mostly because it is easily scriptable, but also because it will work for non-SUSE Linux boxes as well.

To set your proxy temporarily, just for the current session, export the variable so that child processes such as wget will pick it up:

export http_proxy="http://192.168.4.2:8080/"

Notice that you will need to use your own IP address here as well as your own port number if you decided to use one other than 8080.  My proxy server’s IP address is 192.168.4.2 so modify accordingly.

The most common means of setting this variable to survive through a reboot is to use /etc/profile so that it will apply to all users.  Simply add this line to /etc/profile:

export http_proxy=http://192.168.4.2:8080/

In OpenSUSE, there is a better place to set this information: /etc/sysconfig/proxy.  This file is a central proxy settings file for the whole OpenSUSE system, which makes it very handy because we don't have to worry about users not picking up changes from other locations.  It is also nice as it allows us some advanced settings if we so desire.

In my case, I am only using the proxy server to handle HTTP and HTTPS requests (we are blocking FTP and GOPHER entirely) so we only need to edit the two lines pertaining to those protocols as well as the “no proxy” setting to list which locations should not be proxied but accessed directly.  Here are my settings:

HTTP_PROXY="http://192.168.4.2:8080/"
HTTPS_PROXY="http://192.168.4.2:8080/"
NO_PROXY="localhost, 127.0.0.1"
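On the OpenSUSE systems I have worked with, /etc/sysconfig/proxy also has a master switch, PROXY_ENABLED, which must be set for the other values to take effect.  A sketch of the complete fragment, using the example addresses from above:

```
PROXY_ENABLED="yes"
HTTP_PROXY="http://192.168.4.2:8080/"
HTTPS_PROXY="http://192.168.4.2:8080/"
NO_PROXY="localhost, 127.0.0.1"
```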

With these changes you should now have a functioning, whitelisting proxy server to protect your network.  OpenSUSE's default installation of Firefox is set to bypass its own proxy settings and to pick up the system settings automatically.  Tools like w3m and wget will use the system proxy settings as well.  If you are using a client that is either unable to, or is not configured to, get its settings from the system then you will need to configure its proxy settings manually on an application-by-application basis.

]]>
https://sheepguardingllama.com/2008/09/howto-whitelist-proxy-for-school-using-squid-on-opensuse-linux-11/feed/ 13
Updating Zimbra on Linux https://sheepguardingllama.com/2008/09/updating-zimbra-on-linux/ https://sheepguardingllama.com/2008/09/updating-zimbra-on-linux/#comments Sat, 13 Sep 2008 04:22:52 +0000 http://www.sheepguardingllama.com/?p=2533 Continue reading "Updating Zimbra on Linux"

]]>
Having been a Zimbra Administrator for some time and having always worked on the Zimbra Open Source platform I have found that documentation on the update process has been very much lacking.  The process is actually quite simple and straightforward under most circumstances but for someone without direct experience with the process it can be rather daunting.

My personal experience with Zimbra, thus far, is running the 4.5.x series on CentOS 4 (RHEL 4).  Using CentOS instead of actual Red Hat Enterprise Linux presents a few extra issues with the installer but, have no fear, the process does work.

While this document is based on the Red Hat Enterprise Linux version of Zimbra, I expect that non-RPM based systems will behave similarly.

To upgrade an existing installation of Zimbra, first do a complete backup. I cannot overstate the importance of having a complete and completely up-to-date backup of your entire system.  Zimbra is a massive package that is highly complex.  You will want to be absolutely sure that you are backed up and prepared for disaster.  If you use the open source version of Zimbra, as I do, that means taking Zimbra offline so that a backup can be performed.  I won't go into backup details here, but LVM snapshots or virtual instances of your server will likely be your best friend for regular backups.  Email systems can get very large very quickly.
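Taking Zimbra offline for the backup is done with Zimbra's own control script, run as the zimbra user.  A minimal sketch of the sequence:

```shell
# Stop all Zimbra services so the mail store is quiescent during backup
su - zimbra -c "zmcontrol stop"

# ... take the backup here ...

# Do not start Zimbra again until after the upgrade completes, or new
# mail will arrive that is not captured in the backup.
```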

Go to the Zimbra website and download the latest package for your platform.  If you use CentOS, get your matching RHEL package.  It will work fine for you.  I find that the easiest way to move the package to your Zimbra server is with wget.  Downloading to /tmp is fine as long as you have enough space.

Unpack your fresh Zimbra package.  Zimbra downloads as a tarball (gzip’ed tar package) but contains little more than a handy installation script that automates RPM deployments.  It is actually a very nice package.

tar -xzvf zimbra-package.tar.gz

You can cd into your newly unpacked directory where you will find an install script, install.sh.  Yes, the installation process is really that simple.  On most platforms you may simply run the install script.  If you are on CentOS, rather than RHEL, you will need one extra parameter: --platform-override.

./install.sh --platform-override

Be prepared for this process to run for quite some time; by which I mean easily an hour or more.  The versions of the platform that you are upgrading from and to, as well as the size of your mail store, will affect how long the process takes.

The installation script will fire off checking for currently installed instances of Zimbra, checking your platform for compatibility (be sure to check this manually if using the override option but CentOS users can rest assured that RHEL packages work perfectly for them), performing an integrity check on your database and checking prerequisite packages.  Chances are that you will need to do something in order to prepare your system for the upgrade.

In my case, upgrading from 4.5.9 to 5.0.9, I needed to install the libtool-libs package.

yum install libtool-libs

While there are processes here that can certainly go wrong, the Zimbra upgrade process is very simple and straightforward.  As long as you have good backups (make sure not to start Zimbra and receive new mail after having made your last backup) you should not be afraid to upgrade your Zimbra Open Source system.

You can also purchase a support contract from Yahoo/Zimbra so that you can move to the Network version of Zimbra and Zimbra support staff are happy to walk you through the process.  Having someone there to make sure everything is okay is always nice.

References:

Linux Zimbra Upgrade HowTo from GeekZine

]]>
https://sheepguardingllama.com/2008/09/updating-zimbra-on-linux/feed/ 2
Installing Fedora 9 Linux in VirtualPC https://sheepguardingllama.com/2008/07/installing-fedora-9-linux-in-virtualpc/ https://sheepguardingllama.com/2008/07/installing-fedora-9-linux-in-virtualpc/#comments Fri, 25 Jul 2008 02:46:54 +0000 http://www.sheepguardingllama.com/?p=2462 Continue reading "Installing Fedora 9 Linux in VirtualPC"

]]>
If you are using Microsoft’s VirtualPC 2007 as a host for installing Red Hat’s Fedora 9 Linux (aka Sulphur) distribution you may have run into a few problems.  The first problem that plagues just about anyone attempting to install the latest versions of Linux (not just Fedora) is that of auto-detected virtualization.  To overcome this problem we have to forcibly disable paravirtualization.  This is easier than it sounds.

When the initial Fedora 9 menu comes up after you boot from the install CD ISO image, that is the “Welcome to Fedora 9!” menu, you will need to press [tab] in order to be able to manually edit the boot options.  You only get 60 seconds to press [tab] after the menu comes up so pay attention.

If you pressed [tab] you will get a line that looks roughly like this:

> vmlinuz initrd=initrd.img

This is the boot options line that you can modify.  Simply add the option “noreplace-paravirt” and your installation will go much smoother.  The line should look like this when you are done.

> vmlinuz initrd=initrd.img noreplace-paravirt

In my own installation experience I had some problems with the native text mode of Fedora 9 not displaying correctly.  "Normal" X Window operations were not a problem.  Some installations, however, will run only in text mode, which should work fine during initial setup but will then drop to the bad screen modes after the installation completes.

If you set your memory level too low (I made the mistake of trying to use only 128MB) then full graphical installation mode will not be possible and the problem will arise.  Increase memory allotment to at least 192MB to allow graphical mode to be used.  256MB is recommended.  The graphical install should work just fine.  [All specs are for the x86 32bit architecture version of Fedora 9 as this is the architecture used for VirtualPC.]

Thanks to Sean of “The Sean Blog” over at TechNet for pointing us in the right direction on this one!

Installation requirements for Fedora 9 can be found at Red Hat’s Fedora 9 Architecture Specification page.

]]>
https://sheepguardingllama.com/2008/07/installing-fedora-9-linux-in-virtualpc/feed/ 1
Dual Head OpenSUSE 11 on the HP dx5150 https://sheepguardingllama.com/2008/07/dual-head-opensuse-11-on-the-hp-dx5150/ https://sheepguardingllama.com/2008/07/dual-head-opensuse-11-on-the-hp-dx5150/#comments Thu, 24 Jul 2008 03:10:08 +0000 http://www.sheepguardingllama.com/?p=2459 Continue reading "Dual Head OpenSUSE 11 on the HP dx5150"

]]>
One of my favourite workhorse platforms is the Hewlett-Packard HP Compaq dx5150 desktop with the AMD Athlon64 processor and ATI Radeon Xpress 200 chipset. I've used this model for many years with a variety of operating systems. I recently installed Novell's OpenSUSE 11 to one of my dx5150 units, to which I have attached two identical Samsung SyncMaster 204B monitors. Getting OpenSUSE to support both monitors at once was a bit problematic and the necessary resources were hard to find, so I decided to share the solution here to make it easier for other hapless souls to stumble across.

What appears to happen to most people is that they either use the Yast and Sax combination of tools to no effect and become discouraged. Many attempt to load the ATI fglrx drivers and find that after doing so they are unable to get anything but a blank screen. This was my experience as well.

The final solution was actually very simple and painless and was actually described on this site hosted by Novell specifically for OpenSUSE: Multiple Screens Using XRandR.  What is difficult is discovering if this set of information is the correct set for the dx5150.  It is.

The solution was quite easy. First, give up on the fglrx driver; use the radeonrandr12 driver instead. Then add Virtual settings to the /etc/X11/xorg.conf file giving the size of both (or all) of your monitors combined. In my case, with two 1600x1200 LCDs side by side, that was 3200 by 1200. So the following line had to be added to each "Display" subsection:

Virtual 3200 1200

And change the “Driver” line to:

Driver "radeonrandr12"
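Putting the two edits together, the relevant pieces of /etc/X11/xorg.conf end up looking roughly like this (a sketch only; your Identifier names, Depth values and other options will differ):

```
Section "Device"
  Identifier "Card0"
  Driver     "radeonrandr12"
EndSection

Section "Screen"
  Identifier "Screen0"
  Device     "Card0"
  SubSection "Display"
    Depth   24
    Virtual 3200 1200
  EndSubSection
EndSection
```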

Then restart the X server – easiest thing to do is to log out and back in again.  Once you are back in you can open up the command line and start playing with the simple xrandr command to change your monitor configuration.

You can learn more about the xrandr options with the --help option.  The correct command for me to have my two monitors appear side by side with one large desktop is:

xrandr --auto --output VGA-0 --mode 1600x1200 --right-of DVI-0

With OpenSUSE 11 installed on the dx5150, the two monitor adapters available to you natively off of the ATI Radeon Xpress 200 integrated chipset are VGA-0 and DVI-0.  This makes them very simple to work with.

Novell maintains another document about working with the ATI Radeon Xpress 200 chipset and OpenSUSE 11 but I found, as did many other people, that this documentation did not work for this particular set of hardware.

]]>
https://sheepguardingllama.com/2008/07/dual-head-opensuse-11-on-the-hp-dx5150/feed/ 4
Choosing a Linux Distro in the Enterprise https://sheepguardingllama.com/2008/07/choosing-a-linux-distro-in-the-enterprise/ https://sheepguardingllama.com/2008/07/choosing-a-linux-distro-in-the-enterprise/#respond Thu, 10 Jul 2008 17:27:22 +0000 http://www.sheepguardingllama.com/?p=2337 Continue reading "Choosing a Linux Distro in the Enterprise"

]]>
Linux is popular in big business today. Linux has not been the sole purview of the geek community for a long time now; it is a solid, core piece of today's mainstream IT infrastructure. That being said, Linux is still plagued by confusion over its plethora of distributions. This being the case, I have decided to weigh in with some guidance for businesses looking to use Linux in their organizations.

For those unfamiliar with the landscape, Linux is a family of operating systems that are generally considered to fall under the Unix umbrella, although Linux is legally not Unix, just highly Unix-like. Individual Linux packages are referred to as distributions, or distros for short. Unlike Windows or Mac OS X, which come from a single vendor, Linux is available from many commercial vendors as well as from non-profit groups and individual distribution makers. Instead of there being just one Linux there are actually hundreds or thousands of different distributions. Each one is different in some way. This creates choice but also confusion. To make matters even worse, some major vendors such as Red Hat and Novell release more than one Linux distribution targeted at different markets, and within a single distribution will often package features separately. This myriad of choices, before even acquiring your first installation disc, does not help make Linux uptake in companies go any faster.

In reality the choices for business use are few and obvious with a little bit of research. To make things easier for you, I will just tell you what you need to know. Problem solved. Now if only managing your Linux environment could be so easy!

Before we get started I want to stress that this article is about using Linux for enterprise infrastructure – that is, as a server operating system in a business. I am not looking into desktop Linux or high performance computational clusters and grid or specialty applications or home use. This article is about standard, traditional server applications that require stability, up time, reliability, accessibility, manageability, etc. If you are looking for my guide to the “ultimate Linux desktop environment”, this isn’t it. Desktops, even in the enterprise, do not necessarily have the same criteria as servers. They might, but not necessarily so.

When choosing a distribution for servers we must first consider the target purpose of the distro. Only a handful of Linux distros are built with the primary purpose of being used as a server. If your distro maintainer does not have the same principles in mind that you do it is probably best to avoid that distro for this particular purpose. Server distributions target longer time between releases, security over features, stability over features, rapid patching, support, documentation, etc.

In addition to targeting the distribution in harmony with our own goals we also need to work with a company that is reliable, has the resources necessary to support the product and has a track record with a successful product. Choosing a distribution is a vendor selection process. There are three key enterprise players in the Linux space: Red Hat, Novell and Canonical.

For many Red Hat is synonymous with Linux, having been one of the earliest American Linux distributions and having been a driving force behind the enterprise adoption of Linux globally. Red Hat makes “Red Hat Enterprise Linux”, known widely as RHEL, as well as Fedora Linux. Red Hat is the biggest Linux vendor and important in any Linux vendor discussion.

Novell is the second big Linux vendor having purchased German Linux vendor SUSE some years ago. Novell makes two products as well, Suse Linux Enterprise and OpenSUSE.

The third big Linux enterprise vendor is Canonical well known for the Ubuntu family of Linux distributions. While the Ubuntu distro family includes many members we are only interested in discussing Canonical’s own Ubuntu LTS distribution. LTS stands for “Long Term Support” and is effectively Canonical’s server offering. Their approach to versioning and packaging is quite different from Red Hat and Novell and can be rather confusing.

Before we become overwhelmed with choices (we have presented five so far) there is one that we can eliminate right away. Red Hat's Fedora is not an "enterprise targeted" distribution. It is a "testing" and "community" platform designed primarily as a desktop and research vehicle, not as a stable server operating system. To be sure, it is extremely valuable, a great contribution to the Linux community, and has its place, but as a server operating system it does not shine. Nevertheless, without Fedora as a proving ground for new technologies it is unlikely that Red Hat Enterprise Linux would be as robust and capable as it is.

We can also effectively eliminate OpenSUSE.  OpenSUSE is the unsupported, community driven sibling to Novell SUSE Linux Enterprise.  However, unlike Fedora, which is an independent product from RHEL, OpenSUSE shares its code base with SUSE Linux Enterprise but without Novell's support.  This is a great advantage to the SUSE product line as there is a very large base of home and hobby users, in addition to the enterprise users, all using the exact same code and finding bugs for each other.  Going forward we will only consider SUSE Linux Enterprise, as support is a key factor in the enterprise.  But OpenSUSE, for shops not needing commercial support from the vendor, is a great option as the product is the same, stable release as the supported version.

So we are left with three serious competitors for your enterprise Linux platform: Red Hat Linux, Novell Suse Linux and Ubuntu LTS. All three of these competitors are solid, reliable offerings for the enterprise. Red Hat and Novell obviously have the advantage of having been in the server operating system market for a long time and have experience on their side. But Canonical has really made a lot of headway in the last few years and is definitely worth considering.

Red Hat Linux and Suse Linux Enterprise have a few key advantages over Ubuntu. The first is that they both share the standard RPM package management system. Because RPM is the standard in the enterprise it is well tested and understood and most Linux administrators are well versed in its functionality. Ubuntu uses the Debian based package format which is far less common and finding administrators with existing knowledge of it is far less likely – although this is changing rapidly as Ubuntu has become the leading home desktop Linux distribution recently.

In general, Red Hat Linux and Suse Linux Enterprise have more in common with each other making them able to share resources more easily and giving administrators a broader platform to focus skills upon. This is a significant advantage when it comes time to staff up and support your infrastructure.

Ubuntu suffers from having a direct tie to a "non-enterprise" operating system that is particularly popular with the desktop "tweaking" crowd.  Unlike Red Hat and Suse, Ubuntu is coming at the enterprise from the home market and brings a stigma with it.  Administrators trained on RHEL, for example, tend to be taught enterprise-type tasks performed in a businesslike manner.  Administrators with Ubuntu experience tend to be home users who have been running Linux for their own desktop and entertainment tasks.  This makes the interview and hiring process that much more difficult.  This is in no way a slight against the Ubuntu LTS product, which is an amazing, enterprise-ready operating system that should seriously be considered, but shops need to be aware that the vast majority of Ubuntu users are not enterprise system administrators and their experience may be mostly from a non-critical, desktop-focused role.  It is rare to find anyone running RHEL or Suse Linux in this manner.

In my own experience, having software popular with home users in the enterprise also brings in factors of misguided user expectations.  Users expect the enterprise installations to include any package that the users can install at home and that update cycles be similar.  This can cause additional headaches although the Windows world has been dealing with these issues since the beginning.

At this point you have probably noticed that choosing either Suse or Ubuntu leaves you with the option of both free and fully supported versions, direct from the vendors.  This is a major feature of these distributions because it provides great cost savings and greater flexibility.  For example, development machines can be run on OpenSUSE and production machines on Suse Enterprise, lowering the overall cost if full support isn't necessary for development environments.  You can run labs on the free versions for learning and testing, or pay for support only for critical infrastructure pieces.  Or, if you are really looking to save money or feel that your internal support is good enough, running completely on the free, unsupported versions is a viable option because you are still using the stable, enterprise-class code base.

Red Hat, as a vendor, does not supply a freely available edition of Red Hat Enterprise Linux.  Instead, they make their code repositories available to the public and expect interested parties to build their own version of RHEL using these repositories.  If you are interested in a freely available version of RHEL, look no further than CentOS.

CentOS, or the Community ENTerprise Operating System, is a code-identical rebuild of RHEL.  It is identical in every way except for branding.  CentOS is completely free but unsupported.  CentOS is used in organizations of all sizes exactly as a free copy of RHEL would be expected to be used, and many businesses choose to run CentOS exclusively.  As RHEL is the most popular Linux distribution in large businesses, and as the commercially supported version is rather expensive, CentOS also provides a very important resource to the community by allowing new administrators to experience RHEL at home without the expense of unneeded support.

Choosing between the Red Hat, Suse and Ubuntu families is much more difficult than whittling the list down to these three.  In many cases choosing between these three will be based upon cost, application demands, existing administration experience and features.  It is not uncommon for larger businesses to use two or possibly all three of these distributions as features are needed, but most commonly a single distribution is chosen for ease of management.  All three distributions are solid and capable.

Another potentially deciding factor is whether your enterprise is considering using Linux on the desktop.  While RHEL can be used as a desktop operating system, it is generally considered to be substantially weaker than Suse and Ubuntu when it comes to desktop environments.  Because of this, Fedora is generally seen as Red Hat's desktop option, but it is not supported by Red Hat nor does it share a code base with RHEL, causing support to be somewhat less than unified even though the two are very similar.

For mixed server and desktop environments, Suse and Ubuntu have a very strong lead.  Both of these distributions focus a great many resources onto their desktop systems and they keep these components very much up to date and pay great attention to the user experience.  For a small company that can manage to use only one single distribution on every machine that they own this can be a major advantage.  Homogeneous environments can be extremely cost effective as a much narrower skill set is needed to manage and support them.

In conclusion: Red Hat Enterprise Linux, Novell Suse Enterprise and Ubuntu LTS, in both their supported versions and their free versions (CentOS in the case of RHEL and OpenSUSE in the case of Novell; Ubuntu uses the same package for both), all represent great opportunities for the data center.  Do not be lulled into using non-enterprise Linux distributions because they are cool, flashy or popular.  Linux lends itself to being in the news often and to generating excess hype.  None of these things are good indicators of data center stability.  The data center is a serious business component and should not be treated lightly.  Linux is a great choice for the corporate IT department, but you will be very unhappy if you pick your backbone server architecture based on its popularity as a gaming platform rather than on its uptime and management cost.

]]>
https://sheepguardingllama.com/2008/07/choosing-a-linux-distro-in-the-enterprise/feed/ 0
June 29, 2008: Us, Camping? https://sheepguardingllama.com/2008/06/june-29-2008-us-camping/ https://sheepguardingllama.com/2008/06/june-29-2008-us-camping/#respond Mon, 30 Jun 2008 01:08:46 +0000 http://www.sheepguardingllama.com/?p=2429 Continue reading "June 29, 2008: Us, Camping?"

]]>
Vote for Andy West’s OrbTrak satellite tracking web application in the Dice Tech Challenge.

Dominica and I both slept in a bit this morning before getting up and doing a bunch of apartment cleaning that was very much overdue.  No one has been visiting us recently and we have been letting the apartment go quite a bit, as our schedules have been pretty busy and I have not been working from home as much as usual, which always leads to a lack of time for housework.

We found out last night that a new and as yet unnamed Boston Terrier puppy, just nine weeks old, has moved onto our floor at Eleven80.  We haven’t met the new puppy yet but expect to do so soon.  There are now three full time Boston Terrier residents in the building but Oreo is the only mature one.

Around noon Ramona and Winni came by to visit with the intent of playing a small Dungeons and Dragons game.  We haven’t seen either of them in at least two months.  Winni is living in New Hampshire now, but is moving to Frederick, Maryland in a few weeks, and Ramona recently moved from the Ironbound, here in Newark, to Flushing in Queens.

The idea today was to play a quick little D&D Fourth Edition adventure which Winni had put together just so that we could test out the fourth edition rules, but we didn't have a lot of time in which to play, as they needed to get over to Ramona's old apartment to deal with some final packing and stuff.  None of us had seen each other in quite a while, so we just spent the afternoon visiting.

We ordered in lunch from Nino's as pretty much everyone was in the mood for some Italian.  We ate and opened a bottle of Miles Wine Cellars 2005 Cabernet Franc which everyone really enjoyed.

Winni and Ramona left around six and I went to work on a few things.  Almost right away while attempting to get my dual monitors working on my new OpenSUSE 11 workstation I did something that caused the root directory to just vanish.  So there was little that I could do but to install again.  So I kicked off another install.  Crappy.

Dominica and I have our nieces coming to visit us sometime in the next couple weeks and we are attempting to figure out what we are going to do.  The original plan was to go see “The Little Mermaid” on Broadway but the cost was going to be astronomic – like close to $600 or more – and we have heard that the show isn’t very good.  We thought about doing the Bronx Zoo and the science center but Dominica doesn’t think that she could spend a day walking at the Bronx Zoo.  We considered spending the weekend at a resort in the Poconos but that was really expensive as well.

Our final idea was to take the girls camping.  Dominica has never been camping (not actual camping) and I have not gone since going to Allegheny National Forest near Warren, Pennsylvania in June, 1994.  I have never owned a tent of my own and things have improved a lot since Eric and I went all of those years ago.  We thought that camping would be pretty expensive, but we compared it to the price of going to see a show on Broadway or to a night or two in the Poconos and it turns out that buying all new, top-end camping gear would be significantly cheaper – plus it is all reusable.

We hunted around and the best Coleman tent is only $175 from Amazon ($250 MSRP) and the types of sleeping bags that we would need are very cheap.  Some nice LED lights and tent fans (to keep cool) all came up to being very inexpensive.  After looking at all of the cool gadgets to get, Dominica got really excited about camping and has been going crazy shopping for cool camping stuff for the last two days.

We were hoping to be able to camp at a New York State state park but have not been able to find any camp sites that will work for us.  We absolutely need an electrical hookup (cheesy for real camping, I know) because I cannot sleep without my CPAP, so there is no way around that requirement.  We almost didn't think of that and might have booked a camp site without power, which would have been disastrous.  Since we know now that we have to have it, it means that we know ahead of time that we can take the laptop and watch movies at night and charge our phones and stuff.  It's not exactly "roughing it" camping, but that really isn't an option at this point in our lives anymore and that wasn't quite what Dominica wanted to do anyway.  It does mean, though, that we can never camp at Handsome Lake again, which I would have liked to do again.

So since the state parks aren't available to us in July when we need to go, we decided to look at KOA (Kampgrounds of America) and many of their sites come with power and wireless Internet access, which is extremely cool.  That means that we can keep in full contact with the outside world and that I don't even have to go off of being "on call" for the weekend.  I will be able to work as usual while camping, giving us a lot more flexibility to do this more often.  Plus, I think, camping with this level of amenities makes camping a lot more attractive to us in general.  We just don't live lives that allow us to completely break contact with the outside world.  We won't have the camera problems that I have had in the past either, since we will always have the car nearby in which to store valuables where they can be locked up safe.

So the plan is that sometime in the next two to three weeks that the four of us will head out into the “wilderness” to go camping.  Dominica is even excited about camping recipes and cooking over an open fire!

Tomorrow we have a doctor's appointment in the morning.  I will be working from home before the appointment and Dominica will be off until after lunch, so she just has a half day.  I will just be "out" for about two hours.

]]>
https://sheepguardingllama.com/2008/06/june-29-2008-us-camping/feed/ 0
June 28, 2008: House Hunting, Day 2 https://sheepguardingllama.com/2008/06/june-28-2008-house-hunting-day-2/ https://sheepguardingllama.com/2008/06/june-28-2008-house-hunting-day-2/#respond Mon, 30 Jun 2008 00:20:24 +0000 http://www.sheepguardingllama.com/?p=2428 Continue reading "June 28, 2008: House Hunting, Day 2"

]]>
Vote for OrbTrak!

My morning started nice and early.  I was online and working starting at a quarter until eight and had to keep working for the office until one thirty in the afternoon.  Almost six hours of work first thing in the morning.  Not the way that I pictured that I would be spending my Saturday mornings when I was a child – I probably thought that I would be busy watching cartoons and eating cereal – but it does help to pay the bills and with the house payments coming up soon it all helps.

At one thirty I quickly jumped into the shower and got ready to go.  Dominica and I were out the door at ten till two.  We met our real estate agent in Peekskill, New York at three.  We had several townhouses which Dominica had picked out to look at today.

The first townhouse wasn’t in the best part of town and had no basement.  For us the basement is very important because we need the storage space, the utility space and a place for me to have computer equipment that isn’t sitting in the “middle of the house.”  While we could make do in a house without a basement, it is extremely unlikely that we will find a house that is cost effective for us without one.  Basements are cheap compared to a lot of other types of space, like extra bedrooms, and serve our purposes just as well or better, in many cases.  This first place was nothing special and the current owner hadn’t vacated like he was supposed to have done, so we had a really awkward situation of being shown around by both the real estate agent and the owner and not being able to discuss anything.  There was even someone sleeping in one of the rooms that we were there to see.  Luckily we weren’t impressed with the place at all, so it wasn’t like that swayed us in any way.  It just wasn’t the right place for us.  Not a bad place, just not a good fit for us.

Our next stop was to a townhouse down in Croton-on-Hudson.  That was its address but in reality it was more like Cortlandt – which might be even better.  This place, a three bedroom with a great deck and a tiny basement, was very nice and we were quite impressed.  The area was gorgeous and the neighbourhood seemed really nice too.  We were impressed with everything.  The price is a little higher than we were hoping for but technically within our range.  It even had a garage.  This one is the first definite consideration that we have seen thus far in all of our house hunting.  Our selection process has definitely improved.

Next we saw two, nearly identical, townhouses in Peekskill on the east side of town.  The first was amazing but expensive and really set up for adults without children.  Lots of living space but very little bedroom / bathroom space.  It would be great if we weren’t planning on having kids but it isn’t what we need now.

The next place was much better.  More modern than the townhouse in Croton-on-Hudson and with more space (1850 sq. ft. vs 1800 sq. ft.) in a more sensible design.  It is only a two bedroom but with three and a half baths, which is awesome.  It had a giant basement.  The yard, street and neighbourhood weren’t as nice as in Croton but the structure is much nicer.  It is also slightly cheaper although the Croton townhouse is a better market value making it a safer investment, but we are thrilled to have two places that we are really interested in on just our second day.

We looked at one final place in a different complex. It wasn’t a great place but the price was really good and we wouldn’t be unhappy if it was all that we could get.  Not a top contender but a decent fall back option.  Now that we know what we are looking for almost everything that we are seeing is a consideration and we know even more for next time.  This last place was smaller and not as nice.  The lack of space would be tough but workable.  If possible we will shoot for at least 1800 square feet.

It was almost seven when we got back home to Newark.  Oreo did pretty well with us being gone.  We would have taken him with us today but it was horribly hot – well over ninety degrees with a heat index several degrees warmer.  He would have been cooking in the car the whole time.  It was really uncomfortable being in the houses that we were looking at because almost none of them were air conditioned in any way, which didn’t seem like a good sales tactic to me.   I realize that keeping them cool would be expensive when no one was living there but it seems to me that making potential buyers feel more comfortable in the house would trigger a happier memory of it as well as make them less likely to want to leave as quickly as possible.  It definitely shortened the amount of time that we spent in almost all of the houses.

I ended up doing a bit more work this evening.  Another two hours throughout the night.  In between bits of work Dominica and I watched some more Third Rock from the Sun. Then she went to bed a bit after eleven and I worked for a while until almost one in the morning.

Today is my first day really working with OpenSUSE 11.0 which I finally got installed on my HP dx5150 desktop (AMD Athlon64 3200+ 64Bit processor with 2.5GB of memory) late last night.  I have it installed at home now and at the office in Microsoft VirtualPC 2007 SP1.  The install went pretty well and so far I am liking what I see.  It is not a major upgrade over OpenSUSE 10.3 which I have been using for the past six months but all of the packages are slightly updated and now FireFox 3.0 is included which is a very big deal.  The big, new “toy” in the system is KDE 4.0.4 which I am excited to try.

]]>
https://sheepguardingllama.com/2008/06/june-28-2008-house-hunting-day-2/feed/ 0
Linux Processor Ignored https://sheepguardingllama.com/2008/04/linux-processor-ignored/ https://sheepguardingllama.com/2008/04/linux-processor-ignored/#comments Fri, 04 Apr 2008 23:42:06 +0000 http://www.sheepguardingllama.com/?p=2328 Continue reading "Linux Processor Ignored"

]]>
WARNING: NR_CPUS limit of 1 reached. Processor ignored.

Not exactly the error message that you were hoping to see when checking your dmesg logs.  Don’t panic, this is easily remedied.  If you are wondering how to check your own Linux system for this error, you can look by using this command:

dmesg | grep -i cpu

This error occurs on a multiple logical processor system when a uniprocessor kernel is loaded.  What the error indicates is that one CPU is being used and that more have been found but are being ignored.  The system should come online correctly but with only a single logical CPU.  (For a detailed discussion on logical processors see CPUs, Cores and Threads.)

In today’s market full of multi-core CPU products and hyperthreading, this error message has moved from the exclusive realm of multi-socket servers to the home desktop and laptop.  It is now a potentially common sight for many casual Linux users.

To correct this issue on a Red Hat, CentOS or Fedora Linux system all you will need to do is make a simple change to your GRUB configuration to tell it to point to a symmetrical multiprocessor (smp) kernel rather than the uniprocessor kernel. The file that you will need to edit is /etc/grub.conf.  After some header comments the beginning of your file should look something like this:

default=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.9-67.0.7.plus.c4smp)
     root (hd0,0)
     kernel /vmlinuz-2.6.9-67.0.7.plus.c4smp ro root=/dev/VG0/LV0
     initrd /initrd-2.6.9-67.0.7.plus.c4smp.img
title CentOS (2.6.9-67.0.7.plus.c4)
     root (hd0,0)
     kernel /vmlinuz-2.6.9-67.0.7.plus.c4 ro root=/dev/VG0/LV0
     initrd /initrd-2.6.9-67.0.7.plus.c4.img

The GRUB configuration file can appear daunting at first but, in reality, it is quite simple to deal with.  The only line that we are concerned with modifying is the “default” value.  In this case it is set to 1.  The grub.conf file contains a list of available kernels for us to use.  We may have just one or possibly several, maybe even dozens.  In this case we see two.  You can see here that we have a CentOS 2.6.9 c4smp and a CentOS 2.6.9 c4 kernel.  You only need to be concerned with the “title” lines.  These are your kernel titles.  Normally the kernels of most interest will be at the top of the file.

You can check the name of the kernel that you are currently running by issuing:

uname -a

The first title line is kernel “0”, the second is kernel “1”, the next “2” and so forth.  Right now our “default” value is pointing to “1” which is the second kernel from the top and, as you will notice, not an smp kernel (therefore it is a uniprocessor kernel.)  In this case all we need to do is change the “default” value from “1” to “0” so that it now points to the first kernel option which for us is the smp kernel.
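The index counting described above can also be done mechanically rather than by eye. A small sketch (the list_kernels helper name is my own, and it assumes title lines begin at the start of a line, as in the sample above):

```shell
# Print each kernel title together with the index number that the
# "default" line refers to (0 for the first title, 1 for the next, ...).
# Usage: list_kernels /etc/grub.conf
list_kernels() {
    awk '/^title/ { print i++ " : " substr($0, 7) }' "$1"
}
```

On the sample configuration above this would print "0 : CentOS (2.6.9-67.0.7.plus.c4smp)" followed by "1 : CentOS (2.6.9-67.0.7.plus.c4)".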

After your grub.conf file has been saved you may reboot the Linux system.  If all goes well it will return to you with additional logical processors enabled.  You can verify the name of the loaded kernel with the command given above.

]]>
https://sheepguardingllama.com/2008/04/linux-processor-ignored/feed/ 2
Linux’ kscand https://sheepguardingllama.com/2008/04/linux-kscand/ https://sheepguardingllama.com/2008/04/linux-kscand/#comments Wed, 02 Apr 2008 21:54:40 +0000 http://www.sheepguardingllama.com/?p=2318 Continue reading "Linux’ kscand"

]]>
In Linux the kscand process is a kernel process which is a key component of the virtual memory (vm) system.  According to Unix Tech Tips & Tricks’ excellent Understanding Virtual Memory article “The kscand task periodically sweeps through all the pages in memory, taking note of the amount of time the page has been in memory since it was last accessed. If kscand finds that a page has been accessed since it last visited the page, it increments the page’s age counter; otherwise, it decrements that counter. If kscand finds a page with its age counter at zero, it moves the page to the inactive dirty state.”

For the majority of Linux users and even system administrators on large servers this kernel process requires no intervention.  It is a simple process that works in the background doing its job well.  Nonetheless, under certain circumstances it can become necessary to tune kscand in order to improve system performance in a desirable way.

Issues with kscand are most likely to arise in a situation where a Linux box has an extremely large amount of memory and will be even more noticeable on boxes with slower memory.  The most notable is probably the HP Proliant DL585 G1, which can support 128GB of memory but in doing so drops memory speed to a paltry 266MHz.  I first came across this particular issue on a server with 32GB of memory with approximately 31.5GB of it in use.  No swap space was being used and most of the memory was being used for cache, so there was no strain on the memory system, but the total amount of memory being scanned by the kscand process is where the issue truly lies.

Even on a busy server with gobs of memory (that’s the technical term) it would be extremely rare that kscand would cause any issues.  It is a very light process that runs quite quickly.  You are most likely to see kscand as a culprit when investigating problems with latency sensitive applications on memory intensive servers.  The first time that I came across the need to tune kscand was while diagnosing a strange latency pattern of network traffic going to a high-performance messaging bus.  The latency was minor but small spikes were causing concern in the very sensitive environment.  kscand was spotted as the only questionable process receiving much system attention during the high latency periods.

Under normal conditions, that is default tuning, kscand will run every thirty seconds and will scan 100% of the system memory looking for memory pages that can be freed.  This sweep is quick but can easily cause measurable system latency if you look carefully.  Through careful tuning we can reduce the latency caused by this process, but we do so as a tradeoff with memory utilization efficiency.  If you have a box with significantly extra memory or extremely static memory, such as large cache sizes that change very slowly, you can safely tune away from memory efficiency towards low latency with nominal pain and good results.

kscand is controlled through the proc filesystem with just a single setting, /proc/sys/vm/kscand_work_percent. Like any kernel setting this can be changed on the fly on a live system (be careful) or can be set to persist through reboots by adding it to your /etc/sysctl.conf file.  Before we make any permanent changes we will want to do some testing.  This kernel parameter tells kscand what percentage of the system memory to scan each time that a memory scan is performed.  Since it is normally set to 100, kscand normally scans all in-use memory each time that it is called.  You can verify your current setting quite easily.

cat /proc/sys/vm/kscand_work_percent

A good starting point with kscand_work_percent is to set it to 50.  A very small adjustment may not be noticeable, so comparing 100 and then 50 should provide a good starting point for evaluating the changes in system performance.  It is not recommended to set kscand_work_percent below 10 and I would be quite wary of dropping even below 20 unless you truly have a tremendous amount of unused memory and your usage is quite static.

echo 50 > /proc/sys/vm/kscand_work_percent

Once you have determined the best balance of latency and memory utilization that makes sense for your environment you can make your changes permanent.  Be sure to only use the echo technique if this is the first time that this setting will be added to the file. You will need to edit it by hand after that.

echo "vm.kscand_work_percent = 50" >> /etc/sysctl.conf
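The “first time only” caveat can be avoided with a helper that rewrites an existing line in place and appends only when the key is absent. A sketch (the set_sysctl name is my own; sysctl.conf keys use the dotted path under /proc/sys, hence the vm. prefix):

```shell
# Set a sysctl.conf entry idempotently: replace the line if the key is
# already present, append it otherwise.
# Usage: set_sysctl vm.kscand_work_percent 50 /etc/sysctl.conf
set_sysctl() {
    if grep -q "^$1" "$3"; then
        sed -i "s|^$1.*|$1 = $2|" "$3"
    else
        echo "$1 = $2" >> "$3"
    fi
}
```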

Keep in mind that the need to edit this particular kernel parameter is extremely uncommon and will arise only under extraordinary circumstances.  You will not need to do this in normal, everyday Linux life and even a senior Linux administrator could easily never have need to modify this setting.  Only very specific conditions will cause this performance characteristic to be measurable or its modification to be desirable.

All of my testing was done on Red Hat Enterprise Linux 3 Update 6.  This parameter is the same across many versions although the performance characteristics of kscand vary between kernel revisions so do not assume that the need to modify the parameters in one situation will mean that it is needed in another.

RHEL 3 prior to update 6 had a much less efficient kscand process and much greater benefit is likely to be found moving to a later 2.4 family kernel revision.  RHEL 4 and later, on the 2.6 series kernels, is completely different and the latency issues are, I believe, less pronounced.  In my own testing the same application on the same servers moving from RHEL 3 U6 to RHEL 4.5 removed all need for this tweak even under identical load.  [Edit – In RHEL 4 and later (kernel series 2.6) the kscand process has been removed and replaced with kswapd and pdflush.]

Things that are likely to impact the behavior of kscand that you should consider include the following:

  • Total Used Memory Size, regardless of total available memory size.  The more you have the more kscand will impact you.  Determined by: free -m | grep Mem | awk '{print $3}'
  • Memory Latency, check with your memory hardware vendor. Higher latency will cause kscand to have a larger impact.
  • Memory Bandwidth.  Currently in speeds ranging from 266MHz to 1066MHz.  The slower the memory the more likely a scan will impact you and tuning will be useful.
  • Value in kscand_work_percent. The lower the value the lower the latency.  The higher the value the better the memory utilization.
  • Memory Access Hops.  Number of system bus hops necessary to access memory resources.  For example a two socket AMD Opteron server (HP Proliant DL385) never has more than one hop.  But a four socket AMD Opteron server (HP Proliant DL585) can have two hops, increasing effective memory latency. So a DL585 is more likely to be affected than a DL385, all other factors being equal (as long as three or four of the processor sockets are occupied.)
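Most of the checklist above can be pulled together in one place. A sketch (the helper names are mine; kscand_work_percent only exists on 2.4-era kernels, so the second helper falls back to “n/a” elsewhere):

```shell
# "Total Used Memory": pipe `free -m` into this to get the used column.
used_mb() { awk '/^Mem:/ { print $3 }'; }

# Current kscand_work_percent, or "n/a" on kernels without kscand.
kscand_pct() { cat /proc/sys/vm/kscand_work_percent 2>/dev/null || echo "n/a"; }
```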
]]>
https://sheepguardingllama.com/2008/04/linux-kscand/feed/ 5
Solaris Dstream Package Format (Package Stream) https://sheepguardingllama.com/2008/03/solaris-dstream-package-format-package-stream/ https://sheepguardingllama.com/2008/03/solaris-dstream-package-format-package-stream/#respond Thu, 20 Mar 2008 15:51:41 +0000 http://www.sheepguardingllama.com/?p=2306 Continue reading "Solaris Dstream Package Format (Package Stream)"

]]>
If you have worked on Solaris for a while you have probably stumbled across the package stream or “dstream” package format sometimes used for Solaris packages. Dstreams can come as a surprise to Solaris administrators who have become accustomed to the traditional package format. But Dstreams are very easy to work with if you just know some basics.

First of all there appear to be two naming conventions for these packages. The most common, by far, is to end a package in .pkg while the less common variant is to end the name in .dstream. Some people also leave off the postfix altogether leaving it unclear as to what the file is intended to be.
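When the suffix has been left off entirely, the file can usually still be identified: SVR4 datastream packages begin with the header line “# PaCkAgE DaTaStReAm”. A quick sketch (the is_dstream helper name is my own):

```shell
# Return success if a file looks like an SVR4 package datastream.
# Usage: is_dstream /tmp/myNewSoftware.pkg
is_dstream() {
    head -n 1 "$1" | grep -q '^# PaCkAgE DaTaStReAm'
}
```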

Installing a dstream is only slightly different than a regular package. The dstream is much more similar to a Linux RPM as it is a single, atomic file. Once it is installed it will act just like any other Solaris package and can be managed and removed in the usual ways (e.g. pkginfo, pkgrm, etc.)

Installing is simple. Let’s assume that we are dealing with the package myNewSoftware.dstream which is saved in /tmp. To install simply:

pkgadd -d /tmp/myNewSoftware.dstream

But in some cases you may want to have access to the contents of the Dstream without needing to install it first. If we are on Solaris this is easy. Just use pkgtrans.

pkgtrans myNewSoftware.dstream .

Or, possibly, you need to get access to the contents of the Dstream without having access to a Solaris machine or the pkgadd command. Do not fret. The solution is much simpler than you would imagine. The Dstream is created in the cpio format which we can extract using common tools.  Unfortunately I have had some issues getting the packages to unpack correctly using this trick.  If anyone has additional insight into this process, please comment.

So to unpack, but not install, our previous example file on any UNIX box (or even Windows with cpio installed via Cygwin or a similar utility) we can simply:

cpio -idvu < myNewSoftware.dstream

or, I have also seen this option as well – both work for me equally:

dd if=myNewSoftware.dstream skip=1 | cpio -idvu

The “v” option just gives us some verbose output so that we can see what we just unpacked without having to look around for it. You will now have a directory (or a few) as contained in the cpio archive.

]]>
https://sheepguardingllama.com/2008/03/solaris-dstream-package-format-package-stream/feed/ 0
Linux CPU Speed Reporting https://sheepguardingllama.com/2008/03/linux-cpu-speed-reporting/ https://sheepguardingllama.com/2008/03/linux-cpu-speed-reporting/#comments Wed, 05 Mar 2008 23:50:17 +0000 http://www.sheepguardingllama.com/?p=2287 Continue reading "Linux CPU Speed Reporting"

]]>
Linux has multiple means of reporting CPU speed. This can make hardware discovery difficult depending on what source you attempt to use as your reference point. You may discover that two sources, such as cpuinfo and dmidecode produce entirely different results. Your first inclination will probably be that one of these is inaccurate but there is generally a reason for this disparity.

In my examples I will be working from Red Hat Enterprise Linux 4 running on an HP DL585 G2 using dual-core CPUs. This model uses AMD Opteron processors which are natively 64bit and based on the AMD64 architecture (a.k.a. x86-64 or x64.)

We will begin by looking at the output of dmidecode. Dmidecode is the most standard method for retrieving this type of information. We know that all four of our processors will be identical due to SMP limitations so we will not bother to look at all four of them – one will be sufficient. Using dmidecode we can retrieve both the Max Speed of the board as well as the Current Speed of the installed processors.

dmidecode | grep "Current Speed" | head -n 1
Current Speed: 2400 MHz

dmidecode | grep "Max Speed" | head -n 1
Max Speed: 2400 MHz

In this case we can see that we are dealing with processors of 2.4GHz which, according to dmidecode, is the fastest processor available for this revision of the motherboard. Often maximum speeds can be changed by the BIOS so do not be surprised if a firmware update changes this number. The “head -n 1” is simply for clarity to remove the extra processors – they all show identical information. Dmidecode pulls its information directly from the system firmware.

Cpuinfo in the proc filesystem is full of very useful CPU information and should be a primary source of discovery. It should be noted, however, that /proc/cpuinfo can be misleading if used incorrectly but that it contains some of the most important information about our system. Again we will only look at a single CPU instance in this example.

grep MHz /proc/cpuinfo | head -n 1
cpu MHz : 1004.694

In this case we see that cpuinfo is reporting to us that the processors are running at just 1GHz. This is a very different number than what we saw with dmidecode. The reason for the difference in the reported speed is that dmidecode is showing the processor’s rated speed – more or less its model number. This is why the speed is an even number. Cpuinfo, on the other hand, is reading the current speed directly from the CPU clock. Normally we would expect these numbers to be very close to each other with an acceptable degree of rounding. This dramatic difference in speeds is because the server is mostly idle and the CPU has been stepped down by the kernel to save power while it is not needed. This is an important power saving characteristic of newer processors and needs to be taken into account when using CPU speed reporting tools.

We can find additional information, as detected during the boot process, from dmesg. Dmesg will give us the maximum detected speed of our processor as well as some very useful information if we just know where to look.

dmesg | grep "MHz processor"
time.c: Detected 2411.266 MHz processor.

dmesg | grep "powernow" | head -n 6
powernow-k8: Found 8 Dual-Core AMD Opteron(tm) Processor 8216 processors (version 2.00.00-rhel4)
powernow-k8: 0 : fid 0x10 (2400 MHz), vid 0xa
powernow-k8: 1 : fid 0xe (2200 MHz), vid 0xc
powernow-k8: 2 : fid 0xc (2000 MHz), vid 0xe
powernow-k8: 3 : fid 0xa (1800 MHz), vid 0x10
powernow-k8: 4 : fid 0x2 (1000 MHz), vid 0x12

As you can see from the first command output, dmesg tested the processors and determined that they are approximately 2.4GHz which corresponds to the rated speed as shown by dmidecode. We can see here that dmesg believes that this server has eight CPUs which is technically correct as each Opteron in this configuration is truly two processors. This is confusing, though, because it reports eight dual-core processors which is not correct. There are four dual-core processors here. Each dual-core is two discrete processors for eight total with just four sockets (semantic difference between the die configuration of Intel and AMD chips.)

What we are really interested in, in the above output, is the five PowerNow K8 levels – zero through four. These are the five speed steps available to our processors. When viewing the output of /proc/cpuinfo we should see the processor to be at one of the speeds indicated here as being available to us. This can be extremely useful information.

We can also discover some simple CPU count information through our first two tools. Each tool views the system differently so we need to be aware of what we are looking at. Dmidecode views the system from a socket perspective and cpuinfo views the system from a core perspective (which can be either an independent CPU or cores within a single CPU – AMD versus Intel ideology.)

dmidecode | grep "Central Processor" | wc -l
4

grep processor /proc/cpuinfo | wc -l
8

As you can see, it is important to know which view of the processor you are speaking about. This will become ever more important as CPUs continue to increase in cores per CPU, CPUs per socket and threads per core!
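When scripting around these two views it helps to count each explicitly. A sketch of the core view (the logical_cpus helper name is mine; the socket view would use the dmidecode command shown above, which requires root):

```shell
# Core view: count logical processors as /proc/cpuinfo reports them.
# Usage: logical_cpus [cpuinfo-file]   (defaults to /proc/cpuinfo)
logical_cpus() {
    grep -c '^processor' "${1:-/proc/cpuinfo}"
}
```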

To gain more insight into the CPU discovery process I will now run through the same examples but on an HP DL380 G5 server using two quad-core Intel XEON processors based on its Core 2 architecture.

dmidecode | grep "Current Speed" | head -n 1
Current Speed: 2333 MHz

dmidecode | grep "Max Speed" | head -n 1
Max Speed: 4800 MHz

grep MHz /proc/cpuinfo | head -n 1
cpu MHz : 2333.337

dmesg | grep "MHz processor"
time.c: Detected 2333.337 MHz processor.

dmidecode | grep "Central Processor" | wc -l
2

grep processor /proc/cpuinfo | wc -l
8

Unfortunately the trick used on the AMD Opteron to display the steppings of the PowerNow architecture does not work with the Intel processors. But we can see here how the discrepancy between processor reporting methods varies even more dramatically as we move towards more multi-core processors.

]]>
https://sheepguardingllama.com/2008/03/linux-cpu-speed-reporting/feed/ 3
Linux Memory Monitoring https://sheepguardingllama.com/2008/02/linux-memory-monitoring/ https://sheepguardingllama.com/2008/02/linux-memory-monitoring/#comments Tue, 12 Feb 2008 22:52:50 +0000 http://www.sheepguardingllama.com/?p=2252 Continue reading "Linux Memory Monitoring"

]]>
As a Linux System Administrator, one request that I get quite often is to look into memory issues. The Linux memory model is rather more complex than many other systems and looking into memory utilization is not always as straightforward as one would hope, but this is for a good reason that will be apparent once we discuss it.

The Linux Virtual Memory System (or VM colloquially) is a complex beast and my objective here is to provide a practitioner’s overview and not to look at it from the standpoint of an architect.

The Linux VM handles all memory within Linux. Traditionally we think of memory as being either “in use” or “free” but this view is far too simplistic with modern virtual memory management systems (modern means since Digital Equipment Corporation produced VMS around 1975.) In a system like Linux memory is not simply either “in use” or “free” but can also be being used as buffer or cache space.

Memory buffers and memory cache are advanced virtual memory techniques used by Linux, and many other operating systems, to make the overall system perform better by making more efficient use of the memory subsystem. Memory space used as cache, for example, is not strictly “in use” in the traditional sense of the term as no userspace process is holding it open and that space, should it be requested by an application, would become available. It is used as cache because the VM believes that this memory will be used again before the space is needed and that it would be more efficient to keep that memory cached rather than to flush to disk and need to reload which is a very slow process.

On the other hand there are times when there is not enough true memory available in the system for everything that we want to have loaded to fit into at the same time. When this occurs the VM looks for the portions of memory that it believes are the least likely to be used or those that can be moved onto and off of disk most effectively and transfers these out to the swap space. Anytime that we have to use swap instead of real memory we are taking a performance hit but this is far more effective than having the system simply “run out of memory” and either crash or stop loading new processes. By using swap space the system is degrading gracefully under excessive load instead of failing completely. This is very important when we consider that heavy memory utilization might only last a few seconds or minutes when a spike of usage occurs.

In the recent past memory was traditionally an extreme bottleneck for most systems. Memory was expensive. Today most companies as well as most home users are able to afford memory far in excess of what their systems need on a daily basis. In the 1990s we would commonly install as little memory as we could get away with and expect the machine to swap constantly because disk space was cheap in comparison. But over time more and more people at home and all but the most backwards businesses have come to recognize the immense performance gains made by supplying the computer with ample memory resources. Additionally, having plenty of memory means that your disks are not working as hard or as often, leading to lower power consumption, higher reliability and lower costs for parts replacement.

Because systems today often have such ample memory resources, instead of seeing the system working to move less-needed memory areas out to disk we instead see it looking for likely-to-be-used sections of disk and moving them into memory. This reversal has proved to be incredibly effective at speeding up our computing experiences but it has also been very confusing to many users who are not prepared to look so deeply into their memory subsystems.

Now that we have an overview of the basics behind the Linux VM (this has been a very simplistic overview hitting just the highlights of how the system works) let’s look at what tools we have to discover how our VM is behaving at any given point in time. Our first and most basic tool, but probably the most useful, is “free”. The “free” command provides us with basic usage information about true memory as well as swap space, also known as “virtual memory”. The most common flag to use with “free” is “-m” which displays memory in megabytes. You can also use “-b” for bytes, “-k” for kilobytes or “-g” for gigabytes.

# free -m
             total  used  free  shared  buffers  cached
Mem:          3945   967  2977       0       54     725
-/+ buffers/cache:   187  3757
Swap:         4094     0  4094

From this output we can gain a lot of insight into our Linux system. On the first line we can see that our server has 4GB of memory (Mem/Total). These numbers are not always absolutely accurate so rounding may be in order. The mem/used amount here is 967MB and mem/free is 2977MB or ~3GB. So, according to the top line of output, our system is using ~1GB out of a total of 4GB. But also on the first line we see that mem/buffers is 54MB and that mem/cached is 725MB. Those are significant amounts compared to our mem/used number.

In the second line we see the same output but without buffers and cache being added in to the used and free metrics. This is the line that we really want to pay attention to. Here we see the truth: the total “used” memory – in the traditional use of the term – is only a mere 187MB and we actually have ~3.76GB free! Quite a different picture of our memory utilization.

According to these measurements, approximately 81% of all memory in use in our system is for performance enhancing buffer/cache and not for traditional uses and less than 25% of our total memory is even in use for that.
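That share can be recomputed directly from the first line of the “free -m” output. A sketch (the cache_share helper is mine and relies on the pre-3.3 procps column layout shown above):

```shell
# Pipe `free -m` into this: (buffers + cached) as a share of "used".
cache_share() {
    awk '/^Mem:/ { printf "%.0f%%\n", 100 * ($6 + $7) / $3 }'
}
```

On the sample output above this computes (54 + 725) / 967, roughly 81%.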

In the third line we see basic information about our swap space. In this particular case we can see that we have 4GB total and that none of it is in use at all. Very simple indeed.

When reading information about the swap space, keep in mind that if memory utilization spikes and some memory has to be “swapped out,” some of that data may not be needed for a very long time, even if memory utilization drops and swap is no longer needed. This means that some amount of your swap space may be “in use” even though the VM is not actively reading or writing to the swap space. So seeing that swap is “in use” is only an indication that further investigation may be warranted and does not provide any concrete information on its own. If you see that some of your swap is used but there is plenty of free memory in line two then your current situation is fine, but you need to be aware that your memory utilization likely spiked in the recent past.

Often all you want to know is what the current “available” memory is on your Linux system. Here is a quick one-liner that will provide you with this information. You can save it in a script called “avail”, place it in /usr/sbin, and use it anytime you need to see how much headroom your memory system has without touching the swap space. Just add “#!/bin/bash” at the top of your script and away you go.

echo $(free -m | grep 'buffers/' | awk '{print $4}') MB

This one liner is especially effective for non-administrator users to have at their disposal as it provides the information that they are usually looking for quickly and easily. This is generally preferred over giving extra information and having to explain how to decipher what is needed.
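If you prefer not to depend on the exact column layout of “free”, the same “available” figure can be computed straight from /proc/meminfo. The sketch below sums MemFree, Buffers and Cached, which is the figure reported on free’s “-/+ buffers/cache” line; the “avail” script name is just the one suggested above.

```shell
#!/bin/bash
# avail - print memory available to applications, in MB.
# Sums MemFree + Buffers + Cached from /proc/meminfo, the same figure
# that "free" reports as free on its "-/+ buffers/cache" line.
awk '/^(MemFree|Buffers|Cached):/ {kb += $2}
     END {printf "%d MB\n", kb / 1024}' /proc/meminfo
```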

Moving on from “free” – now that we know the instantaneous state of our memory system we can look at more detailed information about the subsystem’s ongoing activities. The tool that we use for this is, very appropriately, named “vmstat”.

“vmstat” is most useful for investigating VM utilization when swap space is in use to some degree. When running “vmstat” we generally want to pass it a few basic parameters to get the most useful output from it. If you run the command without any options you will get some basic statistics collected since the time that the system was last rebooted. Interesting, but not what we are looking for in particular. The real power of “vmstat” lies in its ability to output several instances (count) of activity over a set interval (delay). We will see this in our example.

I prefer to use “-n” which suppresses header reprints. Do a few large “vmstat”s and you will know what I mean. I also like to translate everything to megabytes which we do with “-S M” on some systems and “-m” on others. To make “vmstat” output more than a single line we need to feed it a delay and a count. A good starting point is a delay of 10 and a count of 5. This will take 40 seconds to complete ( (count - 1) * delay ). I always like to run “free” just before my “vmstat” so that I have all of the information on my screen at one time. Let’s take a look at a memory intensive server.

free -m
             total     used      free    shared   buffers    cached
Mem:         64207    64151        56         0        73     24313
-/+ buffers/cache:    39763     24444
Swap:        32767        2     32765
vmstat -n -S M 10 5
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 7  1      2     53     75  24313    0    0  8058    62   19     1 10  7 71 12
 2  2      2     44     74  24325    0    0 103251   209 2512 20476 15 13 54 18
 3  1      2     43     74  24325    0    0 105376   216 2155 16790 11 12 59 17
 2  1      2     50     74  24320    0    0 107762   553 2205 16819 12 10 60 17
 2  2      2     49     73  24322    0    0 105104   302 2323 18009 23  9 51 17

That is a lot of data. From the “free” command we see that we are dealing with a machine with 64GB of memory that has almost 40GB of that in use and 24GB of it being used for cache. A busy machine with a lot of memory but definitely not out of memory.

The “vmstat” output is rather complex and we will break it down into sections. Notice that the stats in the very first row are significantly different from the stats in the subsequent rows. This is because the first row is average stats since the box last rebooted. And each subsequent row is stats over the last delay period which, in this case, is ten seconds.

The first section is “procs”. In this section we have “r” and “b”. “R” is for runnable processes – that is, processes waiting for their turn to execute in the CPU. “B” is for blocked processes – those that are in uninterruptible sleep. This is process, not memory, data.

The next section is “memory” and this provides the same information that we can get from the free command. “Swpd” is the same as swap/used in “free”. “Free” is the same as mem/free. “Buff” is the same as mem/buffers and “cache” is the same as mem/cached. This view will help you see how the different levels are constantly in motion on a busy system.

If your system is being worked very hard then the really interesting stuff starts to come in the “swap” section. Under “swap” we have “si” for swap in and “so” for swap out. Swapping in means to move virtual memory off of disk and into real memory. Swapping out means to take real memory and store it on disk. Seeing an occasional swap in or swap out event is not significant. What you should be interested in here is whether there is a large amount of constant si/so activity. If si/so is constantly busy then you have a system that is actively swapping. This does not mean that the box is overloaded but it does indicate that you need to be paying attention to your memory subsystem because it definitely could be overloaded. At the very least performance is beginning to degrade, even if still gracefully and possibly imperceptibly.

Under “io” we see “bi” and “bo” which are blocks in and blocks out. This refers to total transfers through the system’s block devices. This is to help you get a feel for how “busy” the i/o subsystem is in relation to memory, CPU and other statistics. This helps to paint an overall picture of system health and performance. But it is not directly memory related so be careful not to read the bi/bo numbers and attempt to make direct memory usage correlations.

The next section is “system” and has “in” for interrupts per second. The “in” number includes clock interrupts so even the most truly idle system will still have some interrupts. “Cs” is context switches per second. I will not go into details about interrupts and context switching at this time.

The final section is “cpu” which contains fairly standard user “us”, system “sy”, idle “id” and waiting on i/o “wa” numbers. These are percentage numbers that should add up to roughly 100 but will not always do so due to rounding. I will refrain from going into details about CPU utilization monitoring here. That is a separate topic entirely but it is important to be able to study memory, processor and i/o loads all in a single view which “vmstat” provides.

If you will be using a script to automate the collection of “vmstat” output you should use “tail” to trim off the header information and the first line of numerical output since that is cumulative since system reboot. It is the second numerical line that you are interested in if you will be collecting data. In this example one-liner we take a single snapshot of a fifteen second interval that could be used easily in a larger script.

vmstat 15 2 | tail -n1

This will provide just a single line of useful output for sending to a log, attaching to a bug report or for filing into a database.
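If you want the sample collected on a schedule rather than by hand, a crontab entry along these lines would do it; the five minute interval and the log path are purely illustrative:

```
# collect one fifteen second vmstat sample every five minutes
*/5 * * * * vmstat 15 2 | tail -n1 >> /var/log/vmstat.log
```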

Additional, deeper, memory information can be found from a few important sources. Firstly let’s look at “vmstat -s”. This is a very different use of this command.

vmstat -s
  32752428  total memory
  32108524  used memory
  13610016  active memory
  17805412  inactive memory
    643904  free memory
    152168  buffer memory
  23799404  swap cache
  25165812  total swap
       224  used swap
  25165588  free swap
   4464930 non-nice user cpu ticks
         4 nice user cpu ticks
   1967484 system cpu ticks
 193031159 idle cpu ticks
    308124 IO-wait cpu ticks
     37364 IRQ cpu ticks
    262095 softirq cpu ticks
  69340924 pages paged in
  86966611 pages paged out
        15 pages swapped in
        62 pages swapped out
 488064743 interrupts
  9370356 CPU context switches
1202605359 boot time
   1304028 forks

As you can see we get a lot of information from this “-s” summary option. Many of these numbers, such as “total memory”, are static and will not change on a running system. Others, such as “interrupts”, are counters that are used to generate the “since reboot” statistics for the regular “vmstat” options.

The “free” and “vmstat” commands draw their data from the /proc file system as you would expect. You can see their underlying information through:

cat /proc/meminfo
cat /proc/vmstat
cat /proc/slabinfo
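For example, the raw counters behind “free” can be pulled straight out of /proc/meminfo (values are reported in kB):

```shell
# the raw fields that "free" summarizes: total, free, buffers and cache
grep -E '^(MemTotal|MemFree|Buffers|Cached):' /proc/meminfo
```

Reading these files directly is handy inside scripts where you want the numbers without any of the formatting.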

Of course, no talk of Linux performance monitoring can be complete without mentioning the king of monitoring, the “top” command. However, the memory information available from “top” is little more than what we can see more concisely in “vmstat” and “free”. At the top of “top” we get the “free” summary in a slightly modified form but containing the same data. If this does not appear you can toggle it on and off with the “m” command. You can then sort top’s listing with “M” to force it to sort processes by memory utilization instead of processor utilization. Very handy for finding memory hogging applications.

Armed with this information you should now be able to diagnose a running Linux system to determine how its memory is being used, when it is being used, how much is being used, how much headroom is available and when more memory should be added if necessary.

References:

Monitoring Virtual Memory with vmstat from Linux Journal
vmstat Man Page on Die.net
free Man Page on Die.net
/proc/meminfo from Red Hat

]]>
https://sheepguardingllama.com/2008/02/linux-memory-monitoring/feed/ 6
Issues Sharing Automount Home Directories from Solaris to Linux https://sheepguardingllama.com/2008/02/issues-sharing-automount-home-directories-from-solaris-to-linux/ https://sheepguardingllama.com/2008/02/issues-sharing-automount-home-directories-from-solaris-to-linux/#respond Sat, 09 Feb 2008 12:04:00 +0000 http://www.sheepguardingllama.com/?p=2235 Continue reading "Issues Sharing Automount Home Directories from Solaris to Linux"

]]>
I discovered this problem while attempting to share our automounted home directories from my Solaris 10 NFS file server to my SUSE and Red Hat Linux NFS clients.

automount[10581]: >> mount: block device 192.168.0.2:/data/home/samiller is write-protected, mounting read-only
kernel: call_verify: server 192.168.0.2 requires stronger authentication.

It turns out that the solution is quite simple. The issue is with a mismatch of anonymous credentials. Let’s take a look at the erroneous entry in /etc/dfs/dfstab on the Solaris NFS server:

share -F nfs -o public,nosuid,rw,anon=-1 -d "backup" /data

The piece of this configuration that is the issue here is “anon=-1”. In theory this is designed to block users who do not have accounts on the local system. However, this causes issues with Linux clients. You can solve the problem by simply removing the anon setting from the configuration file. Not an ideal fix, but it does solve the problem.

share -F nfs -o public,nosuid,rw -d "backup" /data

Simply run the “shareall” command and you should be back in business.

]]>
https://sheepguardingllama.com/2008/02/issues-sharing-automount-home-directories-from-solaris-to-linux/feed/ 0
February 3, 2008: Oreo is Famous, Again https://sheepguardingllama.com/2008/02/february-3-2008-oreo-is-famous-again/ https://sheepguardingllama.com/2008/02/february-3-2008-oreo-is-famous-again/#respond Mon, 04 Feb 2008 04:50:39 +0000 http://www.sheepguardingllama.com/?p=2245 Continue reading "February 3, 2008: Oreo is Famous, Again"

]]>
While doing some other research today I came across a really nice Introduction to Cron for you UNIX users out there. And this page, which I have found before, on easy ways to do remote file copies via SSH.

I wanted to get up decently early this morning, probably around nine, but when I started to get up Oreo snuggled close and said in his puppy way “don’t get up yet, I want more snuggles” and I just couldn’t resist so I stayed in bed, awake, until almost eleven. Oreo finally, at that point, discovered the sunlight and decided that he would be happy moving out to the living room and laying in a sunspot on the loveseat.

I couldn’t decide what I wanted to work on this morning so I logged into my workstation and got to work on some Brainbench stuff since so much of that is now out of date. Dominica got up shortly after me and decided that she needed to do her homework so it worked out well for both of us. My first project this morning is taking the Red Hat Enterprise Linux (RHEL) 5 Beta exam from Brainbench. I don’t get any credit for this exam because it is only in beta but because I am a senior admin specifically on RHEL 3, RHEL 4 and RHEL 5 running it both at the office and at home I felt that I really should put in the effort to do the beta because I can provide important feedback to improve these tests for everyone.

The one nice thing about doing the beta exams is that you do get feedback even if it doesn’t officially go on your transcript. You have to take the exam twice so that they get a better idea of how the questions stack up in different configurations. On my first pass through I scored a high masters and ranked in the top percentile of all test takers. The test doesn’t have a means to report any higher than that. So that was encouraging.

One of the best things about doing lots of certifications like the Brainbench is that it really forces you to spend a lot of time researching things that you do not use every day or possibly ever. It basically forces you into a one to two hour crammed study session.

The beta tests took a little over two hours but I feel that it was time well spent. From there I continued on and renewed my Linux (General) certification even though the test is horribly out of date. Even with the test being terribly old and out of date and covering nothing that I do I still pulled off eighth in New Jersey.

I took a bit of a break and hung out with Dominica and Oreo for a little while before going on to the next test. I find that once I get into the testing groove I really like to stick with it. I am the same way about homework, believe it or not.

When we took Oreo outside for his afternoon walk we managed to time our reentry into the building just perfectly to coincide with a fifteen week old Boston Terrier puppy named Barney coming into Eleven80 to visit some people. He was black and white just like our Oreo and so adorable. We took Oreo over to meet him and they were pretty friendly. Oreo is generally good with puppies. He just doesn’t like Bull Terriers that are his size or larger for some reason. Then, while the two Bostons were saying hello, two black and white French Bulldogs came down to the lobby with the exact same markings as the two Bostons. It was like a weird Boston Terrier Convention but with French Bulldogs masquerading as Bostons.

I did a quick image search on Yahoo! today for: “boston terrier” oreo. And would you believe that our Oreo is not only the first hit but is the only dog who shows up on the first seven plus pages and is almost exclusively the only dog for the first nine pages! Our Oreo is the most famous Boston Terrier named Oreo ever.

I took the Linux (SUSE) exam after that and without even bothering to try, as the test was ridiculously outdated and worthless, I managed to tie for ninth place in the world. What a bad exam. I decided to go on with the Server Administration exam which is a general exam covering the basics of server administration without going into an operating system specific details. I rushed through the test as the day was getting shorter by the minute and Oreo is more and more likely to need lots of attention as the day wears on. But I still ranked fifteenth in the United States and pulled off a Masters so it was fine.

We ordered in dinner from Mi Pequeño Mexico.  I did some more reading in my Prototype book.  We watched two episodes of The Cosby Show while we ate our dinner.  Then Dominica had to go back and work on her homework since she has a paper plus numerous other homework assignments due by midnight tonight.

Later in the evening Dominica sent me down to the Market City Deli to find her some cookies.  I went down and ran into Pam on a mission to find a candy bar.  She was out during halftime of the American football match that is going on today.  When we went back to Eleven80 we ran into Ryan who had been watching the game but was relatively bored as American football is not exactly an exciting sport to watch.  So he decided that he would grab some beer or something and stop up to our apartment sometime soon to hang out.

I put in some time looking at rsync and other backup options tonight as I am trying to determine a solid backup strategy for myself.  Ryan came up and we talked about backups for a little while (Ryan is a UNIX system administrator.)  He likes Bacula and I will be looking into it a bit more thoroughly.  I am no backup expert so it is a hard area to make good decisions in for me.

Ryan and I hung out and enjoyed some New Orleans rum and Coke while he regaled us with tales of his week down in Louisiana helping to rebuild the city.  He took off home on the early side so that he could get some sleep and get to work tomorrow.  I decided to stay up with Dominica to keep her company while she worked on her homework.  We should be in bed at approximately midnight.

]]>
https://sheepguardingllama.com/2008/02/february-3-2008-oreo-is-famous-again/feed/ 0
Linux Bonding Modes https://sheepguardingllama.com/2008/01/linux-bonding-modes/ https://sheepguardingllama.com/2008/01/linux-bonding-modes/#comments Tue, 22 Jan 2008 21:22:41 +0000 http://www.sheepguardingllama.com/?p=2231 Continue reading "Linux Bonding Modes"

]]>
When bonding Ethernet channels in Linux there are several modes that can be chosen that affect the way in which the bonding will occur. These modes are numbered from zero to six. Let’s look briefly at each and see how they differ. Remember when looking at these modes that bonding can include two or more Ethernet channels; it is not limited to just two.

The mode is set via the modprobe command or, more commonly, is simply inserted into the /etc/modprobe.conf (or /etc/modules.conf) configuration file so that it is configured every time that the Linux bonding driver is initialized.
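For example, a minimal /etc/modprobe.conf entry selecting active-backup (mode 1) might look like the following; the miimon=100 link monitoring interval is a commonly used value, not a requirement:

```
alias bond0 bonding
options bonding mode=1 miimon=100
```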

Mode 0: Round Robin.  Transmissions are load balanced by sending from the available interfaces sequentially, packet by packet.  Only transmissions are load balanced.  Provides load balancing and fault tolerance.

Mode 1: Active-Backup. This is the simplest mode of operation for bonding. Only one Ethernet slave is active at any one time. When the active connection fails another slave is chosen to take over as the active slave and the MAC address is transferred to that connection. The switch will effectively view this the same as if the host was disconnected from one port and then connected to another port. This mode provides fault tolerance but does not provide any increase in performance.

Mode 2: Balanced XOR. This is a simple form of load balancing using the XOR of the MAC addresses of the host and the destination. It works fairly well in general but always sends packets to a given destination through the same channel. This means that it is relatively effective when communicating with a large number of different remote hosts but loses effectiveness as that number decreases, becoming worthless when there is only a single remote host. This mode provides fault tolerance and some load balancing.

Mode 3: Broadcast. This mode simply uses all channels to mirror all transmissions. It does not provide any load balancing but is for fault tolerance only.

Mode 4: IEEE 802.3ad Dynamic Link Aggregation. This mode provides fault tolerance as well as load balancing. It is highly effective but requires configuration changes on the switch and the switch must support 802.3ad Link Aggregation.

Mode 5: Adaptive Transmit Load Balancing. This mode provides fault tolerance and transmit (outgoing) load balancing. It provides no receive load balancing.  This mode does not require any configuration on the switch.  Ethtool support is required in the network adapter (NIC) driver.

Mode 6: Adaptive Load Balancing.  Like mode five but provides fault tolerance and bidirectional load balancing.  The transmit load balancing is identical, but receive load balancing is accomplished by ARP trickery.

]]>
https://sheepguardingllama.com/2008/01/linux-bonding-modes/feed/ 3