centos – Sheep Guarding Llama
https://sheepguardingllama.com
Scott Alan Miller :: A Life Online
Sat, 26 Sep 2009 14:01:44 +0000

Linux Active Directory Integration with LikeWise Open
https://sheepguardingllama.com/2009/03/linux-active-directory-integration-with-likewise-open/
Sun, 01 Mar 2009 23:27:12 +0000

I downloaded the latest RPM package (for Red Hat, Suse, CentOS and Fedora) from the LikeWise web site (you need to register before starting your download) and saved it to the /tmp directory.  The version that I am testing is the Winter 2009 Edition.

Warning: LikeWise modifies many configuration files and its uninstall routine does not replace these.  Installing LikeWise and then uninstalling again will likely cause you to lose the ability to log back in to your machine.  Treat modifying authentication systems with the utmost care.

The RPM package is delivered as a self-extracting script, so you will need to add execute permissions before running it.

chmod a+x LikewiseIdentityServiceOpen-5.1.0.5220-linux-x86_64-rpm.sh

./LikewiseIdentityServiceOpen-5.1.0.5220-linux-x86_64-rpm.sh

The installer steps you through the process.  You will need to accept the license, as there are actually several packages, covered under various licenses, that need to be installed to support LikeWise.  If you are installing on an AMD64 platform you will be asked whether or not you want to install the 32-bit support libraries.  Unless you really know what you need, just select the "auto" option.  After that, the installation will take care of itself.

If you use SELinux, as you should, you will need to set it to permissive mode during the configuration.

setenforce Permissive

Then we can join the Linux machine to the Active Directory domain.

/opt/likewise/bin/domainjoin-cli join exampledomain.com domainadminuser

At this point basic authentication is already working.  You will need to make some changes to your setup if you have existing accounts as well, but we can address that later.

Test your login:

ssh -l exampledomain\\username linuxhostname
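Before testing an interactive login, it can be worth confirming that the AD account resolves through NSS at all; a hedged check (the domain and user names here are placeholders):

```shell
# Confirm the AD account is visible through NSS (names are placeholders).
# If these return nothing, check /etc/nsswitch.conf for the entries the
# LikeWise installer should have added.
getent passwd 'EXAMPLEDOMAIN\someuser'
id 'EXAMPLEDOMAIN\someuser'
```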

Once you are all set do not forget to turn SELinux back on.

setenforce Enforcing

The big caveat with using LikeWise Open for your Unix to AD integration needs is that there is no Windows to UNIX GID/UID mapping so your UNIX (Linux, Solaris, Mac OSX, etc.) machines are stuck using Windows IDs.  This is not necessarily the end of the world depending on your environmental needs but it can be quite a pain if you are introducing AD into a large, established Unix environment.  LikeWise Enterprise does not suffer from this limitation, but it is obviously not free.

Time Sync on VMWare Based Linux
https://sheepguardingllama.com/2009/02/time-sync-on-vmware-based-linux/
Thu, 19 Feb 2009 02:57:49 +0000

In many cases it can be quite difficult to accurately keep time on a virtualized operating system due to the complex interactions between the hardware, host operating system, virtualization layer and guest operating system.  In my case I found that running Red Hat Linux 5 (CentOS 5) on VMWare Server 1.0.8 resulted in an unstoppable and rapid slowing of the guest clock.

The obvious steps to take include running NTP to control the local clock.  This, however, only works when the clock skews very slowly.  In my case, as in many, the clock drifts too rapidly for NTP to handle.  So we need another solution.  VMWare recommends installing VMWare Tools on the guest operating system and subsequently adding the following to your VMX configuration file:

tools.syncTime = true

This does not always work either.  You should also try changing your guest system's clock source.  Most suggestions include adding clock=pit to the kernel options.  None of this worked for me.  I had to resort to a heavy-handed NTP trick: putting manual ntpdate updates into cron.  In my case, I set it to update every two minutes.  The clock still drifts heavily within the two minute interval, but for me it is an acceptable amount.  You should adjust the update interval for your own needs.  Every five minutes may easily be enough, but more frequent updates might be necessary.
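For reference, the clock=pit suggestion means appending the option to the kernel line in the guest's GRUB configuration; a hedged sketch for a stock RHEL 5 guest (back up the file first, and note that the change takes effect on the next reboot):

```shell
# Keep a backup, then append clock=pit to every kernel line in the
# guest's GRUB config. Path assumes a RHEL 5-style /boot/grub/grub.conf.
cp /boot/grub/grub.conf /boot/grub/grub.conf.bak
sed -i '/^[[:space:]]*kernel /s/$/ clock=pit/' /boot/grub/grub.conf
```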

Using crontab -e under the root user, add the following to your crontab:

*/2 * * * * /usr/sbin/ntpdate 0.centos.pool.ntp.org

For those unfamiliar with the use of */2 in the first column of this cron entry: it tells cron to run the job every two minutes.  For every five minutes you would use */5.  Remember that it takes a few minutes before cron changes take effect, so don’t look for the time to begin syncing right away.

For me, this worked perfectly.  Ntpdate is not subject to the skew and offset issues that ntpd is.  So we don’t have to worry about the skew becoming too great and the sync process stopping.
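If you would like a record of how far the clock has drifted between runs, the same cron entry can log ntpdate's output (the log file path here is my own choice):

```shell
*/2 * * * * /usr/sbin/ntpdate 0.centos.pool.ntp.org >> /var/log/ntpdate-cron.log 2>&1
```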

If anyone has additional information on syncing Linux in this situation, please comment.  Keep in mind that this is for Red Hat Linux: the kernel with RHEL5 is 2.6.18, which does not include the latest kernel timekeeping updates found in some distributions.  Recent releases of Ubuntu likely do not suffer this issue, and I expect that OpenSuse 11.1 and the latest Fedora do not either.

Installing Windows Server 2003 on Xen on Red Hat Linux 5
https://sheepguardingllama.com/2009/02/installing-windows-server-2003-on-xen-on-red-hat-linux-5/
Mon, 02 Feb 2009 03:31:21 +0000

After being challenged several times during the process of installing Windows Server 2003 into a fully virtualized Xen environment on Red Hat Enterprise Linux 5 (RHEL 5 or CentOS 5), I decided that a quick tutorial would be helpful for those of you who wish to install in exactly the same way.  There are several potential roadblocks that must be addressed, including issues with accessing the graphical console (necessary for a normal Windows installation process) if you are not working from a local Linux workstation with a graphical environment installed.

I like to start a Xen installation using the very handy virt-install command.  Virt-install, available by default, makes creating a new virtual machine very simple.  I will assume that you are familiar with this part of the process and already have Xen installed and working.  If you are not sure if your environment is set up properly, I suggest that you start by paravirtualizing a very simple, bare-bones Red Hat Linux server using the virt-install process to test out your setup before challenging yourself with a much more lengthy Windows install that has many potential pitfalls.

The first potential problem that many users face is a lack of support for full virtualization.  This is becoming a less common problem as time goes on.  Full virtualization must be supported at the hardware level in both the processor and in the BIOS/firmware.  (I personally recommend the AMD Opteron platform for virtualization but be sure to get a processor revision, like Barcelona or later, that supports this.)

Using virt-install to kick off our install process is great, but most likely you will do this and, just when all seems to be going well during the formatting of your hard drive, you will find that your Xen machine (xm) simply dies, leaving you with nothing.  Do not be concerned.  This is a known issue that can be fixed with a simple tweak to the Xen configuration file.

CD Drive Configuration Issues

In some cases, you may have problems with your CD / DVD drive not being recognized correctly.  This can be fixed by adding a phy designation in the Xen configuration file to point to the CD-ROM drive.  This is only appropriate for people who are installing directly from CD or DVD; most people prefer to install from an ISO image, which does not have this problem.

In Red Hat, your Xen configuration files should be stored, by default, in /etc/xen.  Look in this directory and open the configuration file for the Windows Server 2003 virtual machine which you just created using virt-install.  There should be a “disk =” configuration line.  This line should contain, at a minimum, configuration details about your virtual hard drive and about the CD ROM device from which you will be installing.

The configuration for the CD ROM device should look something like:

disk = [ "file:/dev/to-w2k3-ww1,hda,w", ",hdc:cdrom,r" ]

You should change this file to add in a phy section for the cdrom device to point the system to the correct hardware device.  On my machine the cdrom device is mapped to /dev/cdrom which makes this very simple.

disk = [ "tap:aio:/xen/to-w2k3-ww1,hda,w", "phy:/dev/cdrom,hdc:cdrom,r" ]

Accessing the Xen Graphical Console Remotely via VNC

If you are like me you do not install anything unnecessary on your virtualization servers.  I find it very inappropriate for there to be any additional libraries, tools, utilities, packages, etc. located on the virtualization platform.  These are unnecessary and each one risks bloat and, worse yet, potential security holes.  Since all of the guest machines running on the host are vulnerable to any security concern on the host, it is very important that the host be kept as secure and lean as possible.  To this end I have no graphical utilities of any kind available on the host (Dom0) environment.  Windows installations, however, generally require a graphical console in order to proceed.  This can cause any number of issues.

The simplest means of working around this problem is to use SSH forwarding to bring the remote frame buffer protocol (a.k.a. VNC or RFB) to your local workstation which, I will assume, has a graphical environment.  This solution is practical for almost any situation, is very secure, rather simple and is a good way to access emergency graphical consoles for any maintenance emergency.  Importantly, this solution works on Linux, Mac OSX, Windows or pretty much any operating system from which you may be working.

Before we begin attempting to open a connection we need to know on which port the VNC server is listening for connections on the Xen host (Dom0).  You can discover this, if you don’t know already from your settings, by running:

netstat -a | grep LISTEN | grep tcp
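Xen's VNC consoles normally listen on TCP ports from 5900 upward (one per running guest), so you can narrow the netstat output down; a small sketch:

```shell
# List listening TCP sockets in the usual VNC port range (5900-5999).
netstat -tln | awk '$4 ~ /:59[0-9][0-9]$/'
```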

On Linux, Mac OSX or any UNIX-like environment with a command-line SSH client (OpenSSH under Cygwin and similar tools will also work on Windows this way) we can easily establish a connection with a tunnel bringing the VNC connection to our local machine.  Here is a sample command:

ssh -L 5900:localhost:5900 [email protected]

If you are a normal Windows desktop user you do not have a command-line integrated SSH option already installed.  I suggest PuTTY.  It is the best SSH client for Windows.  In PuTTY you simply enter the name or IP address of the server which is your Dom0 as usual.  Then, before opening the connection, you can go into the PuTTY configuration menu and under Connection -> SSH -> Tunnels you can specify the Source Port (5900, by default for VNC but check your particular machine) and the Destination (localhost:5900.)  Then, just open your SSH connection, log in as root and we are ready to connect with TightVNC Viewer to our remote, graphical console session.

If you are connecting on a UNIX platform, such as Linux, and have vncviewer installed then you can easily connect to your session using:

vncviewer localhost::5900

Notice that there are two colons between localhost and the port number.  If you only use one colon then vncviewer thinks that you are entering a display number rather than a port number.

If you are on Windows you can download the viewer from the TightVNC project, for free, without any need to install.  Just unzip the download and run TightVNC Viewer.  You will enter localhost::5900 and voila, you have remote, secure access to the graphical console of your Windows server running on Xen on Linux.

Installing ruby-sqlite3 on Red Hat or CentOS Linux
https://sheepguardingllama.com/2008/11/installing-ruby-sqlite3-on-red-hat-or-centos-linux/
Sun, 23 Nov 2008 22:04:22 +0000

For my development environment, I like to use SQLite3 on Red Hat Enterprise Linux (RHEL / CentOS.)  When working with the gem installer for the sqlite3-ruby package I kept getting an error on my newest machine.  I searched online and found no answers anywhere, while finding many people having this same problem.  I have found a solution.  There is no need to compile Ruby again from source.

The command used was:

gem install sqlite3-ruby

What I found was the following error:

gem install sqlite3-ruby
Building native extensions.  This could take a while...
ERROR:  Error installing sqlite3-ruby:
ERROR: Failed to build gem native extension.

/usr/bin/ruby extconf.rb install sqlite3-ruby
checking for fdatasync() in -lrt... no
checking for sqlite3.h... no

make
make: *** No rule to make target `ruby.h', needed by `sqlite3_api_wrap.o'.  Stop.

Gem files will remain installed in /usr/lib64/ruby/gems/1.8/gems/sqlite3-ruby-1.2.4 for inspection.
Results logged to /usr/lib64/ruby/gems/1.8/gems/sqlite3-ruby-1.2.4/ext/sqlite3_api/gem_make.out

There are two main causes of this problem.  The first is that the correct dev packages are not installed.  Be sure that you install the correct packages for Red Hat.  In RHEL 5, which I use, SQLite3 is now simply SQLite.

yum install ruby-devel sqlite sqlite-devel ruby-rdoc

If you are still receiving the error then you most likely do not have a C compiler installed.  The Gem system needs make and the GCC.  So install those as well.  (Obviously you could combine these two steps.)

yum install make gcc

Voila, your SQLite / SQLite3 installation on Red Hat (RHEL), Fedora, or CentOS Linux should be working fine.  Now your "rake db:migrate" should be working.

Update: If you follow these directions and get the error that sqlite3-ruby requires Ruby version > 1.8.5, then you can go to my follow-up directions on
SQLite3-Ruby Gem Version Issues on Red Hat Linux and CentOS

High IOWait on VMWare Server on Linux
https://sheepguardingllama.com/2008/11/high-iowait-on-vmware-server-on-linux/
Fri, 21 Nov 2008 04:21:00 +0000

In using VMWare Server running on Red Hat Enterprise Linux 5 (CentOS 5) I discovered a rather difficult problem.  My setup includes Red Hat Linux 5.2, Solaris 10 and Windows Server 2003 guests running on a Red Hat 5.2 host server all 64bit except for Windows running on AMD Opteron multicore processors on an HP Proliant DL145 G3.

The issue that I found was that the Windows guest was exhibiting serious performance issues.  The box would freeze regularly and networking would partially stop: pings continued, but remote desktop (RDP) sessions would be interrupted.  In the logs I consistently found symmpi errors in the System Event Log:

The device, \Device\Scsi\symmpi1, did not respond within the timeout period.

Because the issues were only exhibiting on Windows and not on Linux or Solaris guests I was convinced that the issue was Windows related.  I could see that the Linux host operating system was showing massive IOWait states (you can see this in top or with the iostat command from the sysstat package.)  I assumed that this was being caused by the Windows guest; it was not.

I turned off all three guest operating systems and noticed almost no drop in the IOWait levels; however, if I turned off the VMWare Server process (/etc/init.d/vmware stop) the IOWait would drop almost instantly and return again as soon as I restarted the process, even without starting any virtual machine images.  Clearly the issue was with VMWare Server itself.

My first thought was to make sure that VMWare Server was up to date.  I have been running VMWare Server 1.0.7 and so downloaded and applied the very recent 1.0.8 update just to be sure that this issue was not addressed in that package.  It was not.  I am aware that the 2.0 series is available now, but as this box is in active use I am not yet interested in moving to the new series unless absolutely necessary.

Once I narrowed down that the issue was a problem with VMWare Server on Linux I was able to track down a solution.  Special thanks to Mr. Pointy for publishing the solution to this for Gutsy Gibbon.  Red Hat and Ubuntu share a problem in this case.

The issue is with memory configuration defaults with VMWare Server on this platform.  Very likely this will apply to Novell SUSE Linux, OpenSUSE, Fedora and others, but I have not tested it.  In the main VMWare Server configuration file (/etc/vmware/config) the following changes should be added:

prefvmx.useRecommendedLockedMemSize = "TRUE"
prefvmx.minVmMemPct = "100"

Then, in each of the individual virtual machine configuration files (*.vmx) you need to add:

sched.mem.pshare.enable = "FALSE"
mainMem.useNamedFile = "FALSE"
MemTrimRate = "0"
MemAllowAutoScaleDown = "FALSE"

These changes are taken directly from Mr. Pointy’s blog.  Once the changes are made you can restart VMWare Server (/etc/init.d/vmware restart) and the difference should be immediately visible.  Mr. Pointy posted his own sar results and here are mine.  You can clearly see the change in the %iowait column at 10:10pm when I restarted VMWare with the new configuration.  The numbers are low around 7:00pm because I had VMWare off much of that hour.
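If you have more than a couple of guests, appending the per-VM settings by hand gets tedious; here is a hedged sketch that loops over the .vmx files (the /var/lib/vmware path is an assumption; adjust the glob to wherever your virtual machines actually live, and run it while VMware Server is stopped):

```shell
# Append the recommended memory settings to every .vmx file.
# Assumes VMs live under /var/lib/vmware; adjust the glob as needed.
for vmx in /var/lib/vmware/*/*.vmx; do
  cat >> "$vmx" <<'EOF'
sched.mem.pshare.enable = "FALSE"
mainMem.useNamedFile = "FALSE"
MemTrimRate = "0"
MemAllowAutoScaleDown = "FALSE"
EOF
done
```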

06:40:01 PM       CPU     %user     %nice   %system   %iowait    %steal     %idle
06:50:01 PM       all      0.16      0.00      1.77     43.15      0.00     54.92
07:00:01 PM       all      2.83      0.00      6.83      9.51      0.00     80.82
07:10:01 PM       all      0.10      0.00      1.38      4.93      0.00     93.59
07:20:01 PM       all      0.11      0.20      1.84     14.78      0.00     83.07
07:30:01 PM       all      0.10      0.00      2.08      8.84      0.00     88.98
07:40:02 PM       all      0.11      0.00      2.36     26.84      0.00     70.70
07:50:01 PM       all      0.11      0.00      2.32     28.54      0.00     69.04
08:00:01 PM       all      0.10      0.00      2.13     30.63      0.00     67.14
08:10:01 PM       all      0.10      0.00      2.06     22.74      0.00     75.10
08:20:01 PM       all      0.09      0.20      2.02     22.75      0.00     74.94
08:30:04 PM       all      0.09      0.00      2.21     23.22      0.00     74.48
08:40:01 PM       all      0.09      0.00      3.03     25.06      0.00     71.81
08:50:01 PM       all      0.09      0.00      3.09     27.21      0.00     69.61
09:00:01 PM       all      0.10      0.00      3.13     29.40      0.00     67.37
09:10:01 PM       all      0.09      0.00      3.11     25.56      0.00     71.23
09:20:01 PM       all      0.09      0.19      3.07     23.79      0.00     72.86
09:30:01 PM       all      0.09      0.00      2.98     21.50      0.00     75.43
09:40:01 PM       all      0.10      0.00      2.97     25.94      0.00     70.99
09:50:01 PM       all      0.10      0.00      3.28     32.70      0.00     63.93
10:00:01 PM       all      0.20      0.00      4.96     40.73      0.00     54.11
10:10:01 PM       all      0.69      0.00      8.57      1.23      0.00     89.50
10:20:01 PM       all      0.88      0.21      6.34      0.67      0.00     91.90
10:30:01 PM       all      0.81      0.00      6.04      0.26      0.00     92.89
10:40:01 PM       all      0.78      0.00      5.55      0.20      0.00     93.47
10:50:01 PM       all      0.77      0.00      5.47      0.07      0.00     93.69

After the change was complete I had no problem running I/O-intensive operations like disk compression, defragmentation, etc.

Original solution from: Mr. Pointy – Gutsy and VMWare Server – You’re In for Some Pain.

Subversion Permission Issues
https://sheepguardingllama.com/2008/11/subversion-permission-issues/
Sun, 16 Nov 2008 19:19:02 +0000

In my installation of Subversion (SVN) on Red Hat Enterprise Linux 5 (a.k.a. RHEL 5 or CentOS 5), I was attempting to access my working Subversion repository through the web interface using Apache.  I came across a permissions issue giving the following errors:

This one is from the Apache error log (/var/log/httpd/error_log) and is generated whenever an attempt to connect to the resource via the web interface is made:

[error] [client 127.0.0.1] Could not open the requested SVN filesystem  [500, #2]

This is what was visible from the web browser.  This is its rendering of the XML response.

<D:error>
<C:error/>
<m:human-readable errcode="2">
Could not open the requested SVN filesystem
</m:human-readable>
</D:error>

This one arose when attempting to run the svn command as the apache user (sudo -u apache svn list….)

svn: Can't open file '/root/.subversion/servers': Permission denied

I eventually discovered that this problem was being caused by the Subversion binary looking to the root home directory, instead of to the Apache / httpd home directory (~apache, which was /var/www in my configuration).  This is not the correct behaviour, but until the issue is fixed you can work around the problem yourself:

mkdir -p ~apache/.subversion

cp -r /root/.subversion/* ~apache/.subversion/

chown -R apache:apache ~apache/.subversion/

Installing Subversion on RHEL5
https://sheepguardingllama.com/2008/11/installing-subversion-on-rhel5/
Sun, 16 Nov 2008 00:57:20 +0000

Subversion (SVN) is a popular open source, source code change control package. Today we are going to install and configure Subversion on Red Hat Enterprise Linux 5.2 (a.k.a. RHEL 5.2).  I will actually be doing my testing on CentOS 5.2, but the process should be completely identical.

Installing Subversion on Linux

Installation of Subversion is very simple if you are using yum. In addition to Subversion itself, you will also want to install Apache, as you will most likely want to access Subversion through a WebDAV interface.  You can simply run:

yum -y install subversion httpd mod_dav_svn

Once Subversion is successfully installed we need to create the initial repository. This can be done on the local file system but I prefer to keep high priority and highly volatile data stored directly on the NAS filer as this is far more appropriate for this type of data.

As an aside, I like to keep low volatility data (say, website HTML) stored on local discs in general, for performance reasons and because backups are not difficult to take using traditional backup methods (e.g. tar, cpio, Amanda, Bacula, etc.). High volatility files I prefer to keep on dedicated network storage units where backups can be easily taken using more advanced methods like Solaris 10’s ZFS snapshot capability. It is not always clear when it makes sense to keep data locally or to store it remotely, but I feel that you can gauge much of the decision on two factors: the frequency of data changes (that is, changes to existing files, not necessarily the addition of new files) and the degree to which the data is the focus of the application (that is, whether the data is incidental or key to the application). In the case of Subversion the entire application is nothing but a complex filesystem frontend, so we are clearly on the side of a “data focused” application.

I started writing this article on RHEL4 on a system with a small, local file system.  When I returned to the uncompleted article and continued with it I was implementing this on a RHEL5 system with massive local storage and decided to keep my Subversion repository local on a dedicated logical volume for easy Linux based snapshots.

Subversion has two optional backend storage solutions.  The original method of storing Subversion data was with the venerable Berkeley DB (BDB), which is now a product of Oracle.  The newer method, and the default choice since Subversion 1.2, is FSFS (I don’t know exactly what its initials stand for), which uses the native filesystem mechanisms for storage.  In my example here, and for my own use, I choose FSFS, as I think it is more often the better choice.  Most notably, FSFS supports remote filesystems over NFS and CIFS while BDB does not.  FSFS is also easier to deal with when it comes to creating backups.  My feeling is that unless you really know why you want to use BDB, stick with the default FSFS; there is a reason that it was selected as the default.
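svnadmin, which we use shortly to create the repository, can also be told explicitly which backend to use via its --fs-type option; a hedged one-liner (the path matches the repository used in this article):

```shell
# Create the repository with the FSFS backend requested explicitly
# rather than relying on the default.
svnadmin create --fs-type fsfs /var/projects/svn/
```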

Another note about creating Subversion repositories: some sources recommend putting Subversion repos under /opt. All I have to say is “No No No!” The /opt filesystem is not appropriate for regularly changing data. Any data that is expected to change on a regular basis (e.g. log files, source code repos, etc.) belongs in /var. This is the entire purpose of the /var filesystem: it stands for “variable” and is purposed for regular filesystem changes. Data destined for /var is another indicator that an external network filesystem may be appropriate as well.

mkdir -p /var/projects/svn

At this point you can either use /var/projects/svn as a normal local directory or mount it remotely in some manner such as NFS, CIFS or iSCSI.  Regardless of how the repository is set up, the rest of this document will function identically.

We are now in a position to use svnadmin to create our repository directory:

svnadmin create /var/projects/svn/

At this point, Subversion should already be working for you.  If you are new to Subversion, we will do a simple import to test our installation.  To perform this test, create a directory called “testproject” and put it in the /tmp directory.  Now touch a couple of files inside that directory so that we have something with which to work.  Then we will do our first Subversion import.

mkdir /tmp/testproject; cd /tmp/testproject; touch test1 test2 test3

svn import /tmp/testproject/ file:///var/projects/svn/test -m "First Import"

Your Subversion installation is now working, but few people will be happy accessing their Subversion repositories only from the local machine as we have done here.  If you are used to working from the UNIX (Linux, Mac OSX, Cygwin, etc.) command line you may want to try accessing your new Subversion repository using SVN+SSH.  Here is an example taken from an OpenSUSE workstation with the Subversion client installed:

svn list svn+ssh://myserver/var/projects/svn
testproject/

At this point you now have access from your external machines and can perform a checkout to get a working copy of your code.  To make the process really simple, be sure to set up your OpenSSH keys so that you are not prompted for a password.  Many users, most notably Windows users, are going to want access over HTTP, since Windows does not natively include an SSH client.

The first thing that you are going to need to do, if you are running SELinux and Firewall security on your RHEL server like I am, is to open ports 80 and 443 in your firewall so that Apache is enabled.  Normally I shy away from management tools but this one I like.  Just use “system-config-securitylevel-tui” and select the appropriate services to allow.

You will also need to make sure that SELinux allows the Apache web server to access the Subversion repository location.  A good first step is to reset the repository files to their default SELinux context:

restorecon -R /var/projects/svn/

We have one little trick that we need to perform.  This trick is necessary because of what appears to be a bug in the way that Subversion sets the user ID when it runs.  This is not necessary for all users, but it can be a pretty tough sticking point for anyone who runs into it and is not aware of what can be done to remedy the situation.

mkdir -p ~apache/.subversion

cp -r /root/.subversion/* ~apache/.subversion/

Configuring Apache 2 on Red Hat 5 is a little tricky, so we will walk through it together.  The first thing that needs to be added is the LoadModule line for the WebDAV protocol.  This goes into the LoadModule section of the main /etc/httpd/conf/httpd.conf configuration file.

LoadModule dav_module         modules/mod_dav.so

The rest of our configuration changes for Apache 2 will go into a dedicated configuration file just for our subversion repository: /etc/httpd/conf.d/subversion.conf

I am including here my entire configuration file sans comments.  You will need to modify your SVNPath variable accordingly, of course.

# grep -v \# /etc/httpd/conf.d/subversion.conf

LoadModule dav_svn_module       modules/mod_dav_svn.so
<Location /svn>
  DAV svn
  SVNPath /var/projects/svn/
</Location>
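One step that is easy to forget: after editing the configuration, check the syntax and restart Apache so the changes take effect (standard commands on RHEL 5):

```shell
# Verify the merged Apache configuration parses, then restart.
httpd -t
service httpd restart
```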

At this stage you should not only have a working Subversion repository but should also be able to access it via the web.  You can test web access from your local box with the svn command.  Here is an example:

svn list http://localhost/svn/

References:

Mason, Mike. "Pragmatic Version Control Using Subversion, 2nd Edition", The Pragmatic Programmers, 2006.

Installing Subversion on Apache by Marc Grabanski

Subversion Setup on Red Hat by Paul Valentino

Setting Up Subversion and Trac As Virtual Hosts on Ubuntu Server, How To Forge

The SVN Book, RedBean

Additional Material:

Subversion Version Control: Using the Subversion Version Control System in Development Projects

Updating Zimbra on Linux
https://sheepguardingllama.com/2008/09/updating-zimbra-on-linux/
Sat, 13 Sep 2008 04:22:52 +0000

Having been a Zimbra Administrator for some time and having always worked on the Zimbra Open Source platform I have found that documentation on the update process has been very much lacking.  The process is actually quite simple and straightforward under most circumstances but for someone without direct experience with the process it can be rather daunting.

My personal experience with Zimbra, thus far, is running the 4.5.x series on CentOS 4 (RHEL 4).  Using CentOS instead of actual Red Hat Enterprise Linux presents a few extra issues with the installer, but have no fear: the process does work.

While this document is based on the Red Hat Enterprise Linux version of Zimbra, I expect that non-RPM based systems will behave similarly.

To upgrade an existing installation of Zimbra, first do a complete backup. I cannot overstate the importance of having a complete and completely up-to-date backup of your entire system.  Zimbra is a massive package that is highly complex.  You will want to be absolutely sure that you are backed up and prepared for disaster.  If you use the open source version of Zimbra, as I do, that means taking Zimbra offline so that a backup can be performed.  I won’t go into backup details here, but LVM or virtual instances of your server will likely be your best friend for regular backups.  Email systems can get very large very quickly.
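Since LVM is mentioned above as a backup friend, here is a hedged sketch of what an LVM-snapshot backup of a Zimbra volume might look like; the volume group, logical volume names, snapshot size, and paths are all assumptions to substitute with your own:

```shell
# Snapshot-based backup sketch: Zimbra is down only while the snapshot
# is taken, not for the whole copy. All device names are assumptions.
/etc/init.d/zimbra stop
lvcreate -s -L 5G -n zimbra-snap /dev/vg0/opt
/etc/init.d/zimbra start

mkdir -p /mnt/zimbra-snap
mount -o ro /dev/vg0/zimbra-snap /mnt/zimbra-snap
tar -czf /backup/zimbra-$(date +%Y%m%d).tar.gz -C /mnt/zimbra-snap zimbra
umount /mnt/zimbra-snap
lvremove -f /dev/vg0/zimbra-snap
```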

Go to the Zimbra website and download the latest package for your platform.  If you use CentOS, get your matching RHEL package.  It will work fine for you.  I find that the easiest way to move the package to your Zimbra server is with wget.  Downloading to /tmp is fine as long as you have enough space.

Unpack your fresh Zimbra package.  Zimbra downloads as a tarball (gzip’ed tar package) but contains little more than a handy installation script that automates RPM deployments.  It is actually a very nice package.

tar -xzvf zimbra-package.tar.gz

You can cd into your newly unpacked directory and inside you will find the installation script, install.sh.  Yes, the installation process is really that simple.  On most platforms you may simply run the install script.  If you are on CentOS, rather than RHEL, you will need one extra parameter: --platform-override.

./install.sh --platform-override

Be prepared for this process to run for quite some time, by which I mean easily an hour or more.  The versions that you are upgrading from and to, as well as the size of your mail store, will affect how long the process takes.

The installation script will fire off, checking for currently installed instances of Zimbra, checking your platform for compatibility (be sure to check this manually if using the override option, though CentOS users can rest assured that RHEL packages work perfectly for them), performing an integrity check on your database and checking prerequisite packages.  Chances are that you will need to do something to prepare your system for the upgrade.

In my case, upgrading from 4.5.9 to 5.0.9, I needed to install the libtool-libs package.

yum install libtool-libs

While there are processes here that can certainly go wrong, the Zimbra upgrade process is very simple and straightforward.  As long as you have good backups (make sure not to start Zimbra and receive new mail after having made your last backup) you should not be afraid to upgrade your Zimbra Open Source system.

You can also purchase a support contract from Yahoo/Zimbra so that you can move to the Network version of Zimbra and Zimbra support staff are happy to walk you through the process.  Having someone there to make sure everything is okay is always nice.

References:

Linux Zimbra Upgrade HowTo from GeekZine

]]>
https://sheepguardingllama.com/2008/09/updating-zimbra-on-linux/feed/ 2
Choosing a Linux Distro in the Enterprise https://sheepguardingllama.com/2008/07/choosing-a-linux-distro-in-the-enterprise/ https://sheepguardingllama.com/2008/07/choosing-a-linux-distro-in-the-enterprise/#respond Thu, 10 Jul 2008 17:27:22 +0000 http://www.sheepguardingllama.com/?p=2337 Continue reading "Choosing a Linux Distro in the Enterprise"

]]>
Linux is popular in big business today. It has not been the exclusive purview of the geek community for a long time now; it is a solid, core piece of today’s mainstream IT infrastructure. That said, Linux is still plagued by confusion over its plethora of distributions, so I have decided to weigh in with some guidance for businesses looking to use Linux in their organizations.

For those unfamiliar with the landscape, Linux is a family of operating systems generally considered to fall under the Unix umbrella, although Linux is legally not Unix, just highly Unix-like. Individual Linux packages are referred to as distributions, or distros for short. Unlike Windows or Mac OS X, which come from a single vendor, Linux is available from many commercial vendors as well as from non-profit groups and individual distribution makers. Instead of there being just one Linux there are actually hundreds or thousands of different distributions. Each one is different in some way. This creates choice but also confusion. To make matters worse, some major vendors such as Red Hat and Novell release more than one Linux distribution targeted at different markets, and within a single distribution will often package features separately. This myriad of choices, before you have even acquired your first installation disc, does not help Linux uptake in companies go any faster.

In reality the choices for business use are few and obvious with a little bit of research. To make things easier for you, I will just tell you what you need to know. Problem solved. Now if only managing your Linux environment were so easy!

Before we get started I want to stress that this article is about using Linux for enterprise infrastructure – that is, as a server operating system in a business. I am not looking into desktop Linux or high performance computational clusters and grid or specialty applications or home use. This article is about standard, traditional server applications that require stability, up time, reliability, accessibility, manageability, etc. If you are looking for my guide to the “ultimate Linux desktop environment”, this isn’t it. Desktops, even in the enterprise, do not necessarily have the same criteria as servers. They might, but not necessarily so.

When choosing a distribution for servers we must first consider the target purpose of the distro. Only a handful of Linux distros are built with the primary purpose of being used as a server. If your distro maintainer does not have the same principles in mind that you do it is probably best to avoid that distro for this particular purpose. Server distributions target longer time between releases, security over features, stability over features, rapid patching, support, documentation, etc.

In addition to targeting the distribution in harmony with our own goals we also need to work with a company that is reliable, has the resources necessary to support the product and has a track record with a successful product. Choosing a distribution is a vendor selection process. There are three key enterprise players in the Linux space: Red Hat, Novell and Canonical.

For many Red Hat is synonymous with Linux, having been one of the earliest American Linux distributions and having been a driving force behind the enterprise adoption of Linux globally. Red Hat makes “Red Hat Enterprise Linux”, known widely as RHEL, as well as Fedora Linux. Red Hat is the biggest Linux vendor and important in any Linux vendor discussion.

Novell is the second big Linux vendor having purchased German Linux vendor SUSE some years ago. Novell makes two products as well, Suse Linux Enterprise and OpenSUSE.

The third big Linux enterprise vendor is Canonical well known for the Ubuntu family of Linux distributions. While the Ubuntu distro family includes many members we are only interested in discussing Canonical’s own Ubuntu LTS distribution. LTS stands for “Long Term Support” and is effectively Canonical’s server offering. Their approach to versioning and packaging is quite different from Red Hat and Novell and can be rather confusing.

Before we become overwhelmed with choices (we have presented five so far) we have one that we can eliminate right away. Red Hat’s Fedora is not an “enterprise targeted” distribution. It is a “testing” and “community” platform designed primarily as a desktop and research vehicle and not as a stable server operating system. To be sure, it is extremely valuable, a great contribution to the Linux community and has its place, but as a server operating system it does not shine. Nevertheless, without Fedora as a proving ground for new technologies it is unlikely that Red Hat Enterprise Linux would be as robust and capable as it is.

We can also effectively eliminate OpenSUSE.  OpenSUSE is the unsupported, community driven sibling to Novell SUSE Linux Enterprise.  However, unlike Fedora, which is an independent product from RHEL, OpenSUSE is the same code base as SUSE Linux Enterprise but without Novell’s support.  This is a great advantage to the SUSE product line as there is a very large base of home and hobby users, in addition to the enterprise users, all using the exact same code and finding bugs for each other.  Going forward we will only consider SUSE Linux Enterprise as support is a key factor in the enterprise.  But OpenSUSE, for shops not needing commercial support from the vendor, is a great option as the product is the same, stable release as the supported version.

So we are left with three serious competitors for your enterprise Linux platform: Red Hat Linux, Novell Suse Linux and Ubuntu LTS. All three of these competitors are solid, reliable offerings for the enterprise. Red Hat and Novell obviously have the advantage of having been in the server operating system market for a long time and have experience on their side. But Canonical has really made a lot of headway in the last few years and is definitely worth considering.

Red Hat Linux and Suse Linux Enterprise have a few key advantages over Ubuntu. The first is that they both share the standard RPM package management system. Because RPM is the standard in the enterprise it is well tested and understood and most Linux administrators are well versed in its functionality. Ubuntu uses the Debian based package format which is far less common and finding administrators with existing knowledge of it is far less likely – although this is changing rapidly as Ubuntu has become the leading home desktop Linux distribution recently.

In general, Red Hat Linux and Suse Linux Enterprise have more in common with each other making them able to share resources more easily and giving administrators a broader platform to focus skills upon. This is a significant advantage when it comes time to staff up and support your infrastructure.

Ubuntu suffers from a direct tie to a “non-enterprise” operating system that is particularly popular with the desktop “tweaking” crowd.  Unlike Red Hat and Suse, Ubuntu is coming at the enterprise from the home market and brings a stigma with it.  Administrators trained on RHEL, for example, tend to be taught enterprise-type tasks performed in a businesslike manner.  Administrators with Ubuntu experience tend to be home users who have been running Linux for their own desktop and entertainment tasks.  This makes the interview and hiring process that much more difficult.  This is in no way a slight against the Ubuntu LTS product, which is an amazing, enterprise-ready operating system that should seriously be considered, but shops need to be aware that the vast majority of Ubuntu users are not enterprise system administrators and their experience may be mostly from a non-critical, desktop-focused role.  It is rare to find anyone running RHEL or Suse Linux in this manner.

In my own experience, having software popular with home users in the enterprise also brings in factors of misguided user expectations.  Users expect the enterprise installations to include any package that the users can install at home and that update cycles be similar.  This can cause additional headaches although the Windows world has been dealing with these issues since the beginning.

At this point you have probably noticed that choosing either Suse or Ubuntu leaves you with the option of both free and fully supported versions, direct from the vendors.  This is a major feature of these distributions because it provides a great cost savings and greater flexibility.  For example, development machines can be run on OpenSUSE and production machines on Suse Enterprise, lowering the overall cost if full support isn’t necessary for development environments.  You can run labs from free versions for learning and testing or only pay for support for critical infrastructure pieces.  Or, if you are really looking to save money or feel that your internal support is good enough, running completely on the free, unsupported versions is a viable option because you are still using the stable, enterprise-class code base.

Red Hat, as a vendor, does not supply a freely available edition of Red Hat Enterprise Linux.  Instead, they make their code repositories available to the public and expect interested parties to build their own version of RHEL using these repositories.  If you are interested in a freely available version of RHEL, look no further than CentOS.

CentOS, or the Community ENTerprise Operating System, is a code-identical rebuild of RHEL.  It is identical in every way except for branding.  CentOS is completely free – but unsupported.  CentOS is used in organizations of all sizes exactly like a free copy of RHEL would be expected to be used and many businesses choose to run CentOS exclusively.  As RHEL is the most popular Linux distribution in large businesses and as the commercially supported version is rather expensive, CentOS also provides a very important resource to the community by allowing new administrators to experience RHEL at home without the expense of unneeded support.

Choosing between the Red Hat, Suse and Ubuntu families is much more difficult than whittling the list down to these three.  In many cases choosing between these three will be based upon cost, application demands, existing administration experience and features.  It is not uncommon for larger businesses to use two or possibly all three of these distributions as features are needed, but most commonly a single distribution is chosen for ease of management.  All three distributions are solid and capable.

Another potentially deciding factor is whether your enterprise is considering using Linux on the desktop.  While RHEL can be used as a desktop operating system it is generally considered to be substantially weaker than Suse and Ubuntu when it comes to desktop environments.  Because of this, Fedora is generally seen as Red Hat’s desktop option, but Fedora is not supported by Red Hat nor does it share a code base with RHEL, causing support to be somewhat less than unified even though the two are very similar.

For mixed server and desktop environments, Suse and Ubuntu have a very strong lead.  Both of these distributions focus a great many resources onto their desktop systems and they keep these components very much up to date and pay great attention to the user experience.  For a small company that can manage to use only one single distribution on every machine that they own this can be a major advantage.  Homogeneous environments can be extremely cost effective as a much narrower skill set is needed to manage and support them.

In conclusion: Red Hat Enterprise Linux, Novell Suse Enterprise and Ubuntu LTS, in both their supported versions as well as in their free versions (CentOS in the case of RHEL, OpenSUSE in the case of Novell; Ubuntu uses the same package for both), all represent great opportunities for the data center.  Do not be lulled into using non-enterprise Linux distributions because they are cool, flashy or popular.  Linux lends itself to being in the news often and to generating excess hype.  None of these things are good indicators of data center stability.  The data center is a serious business component and should not be treated lightly.  Linux is a great choice for the corporate IT department but you will be very unhappy if you pick your backbone server architecture based on its popularity as a gaming platform rather than on its uptime and management cost.

]]>
https://sheepguardingllama.com/2008/07/choosing-a-linux-distro-in-the-enterprise/feed/ 0
Linux Processor Ignored https://sheepguardingllama.com/2008/04/linux-processor-ignored/ https://sheepguardingllama.com/2008/04/linux-processor-ignored/#comments Fri, 04 Apr 2008 23:42:06 +0000 http://www.sheepguardingllama.com/?p=2328 Continue reading "Linux Processor Ignored"

]]>
WARNING: NR_CPUS limit of 1 reached. Processor ignored.

Not exactly the error message that you were hoping to see when checking your dmesg logs.  Don’t panic, this is easily remedied.  If you are wondering how to check your own Linux system for this error you can look by using this command:

dmesg | grep -i cpu

This error occurs on a multiple logical processor system when a uniprocessor kernel is loaded.  What the error indicates is that one CPU is being used and that more have been found but are being ignored.  The system should come online correctly but with only a single logical CPU.  (For a detailed discussion on logical processors see CPUs, Cores and Threads.)
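You can confirm the symptom directly by counting the logical processors the running kernel has actually brought online.  This is a quick sketch using the standard /proc interface and dmesg; nothing here is Zimbra- or distro-specific:

```shell
# Count the logical processors the running kernel has brought online.
# On a uniprocessor kernel hitting the NR_CPUS limit this stays at 1
# even on multi-core or hyperthreaded hardware.
online=$(grep -c '^processor' /proc/cpuinfo)
echo "Logical CPUs online: $online"

# Check the kernel ring buffer for the warning itself (reading dmesg
# may require root on some systems, hence the fallback message).
dmesg 2>/dev/null | grep -i 'NR_CPUS' || echo "No NR_CPUS warning found"
```

If the online count is lower than the number of cores and threads your hardware provides, a uniprocessor kernel is the likely culprit.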

In today’s market, full of multi-core CPUs and hyperthreading, this error message has moved from the exclusive realm of multi-socket servers to the home desktop and laptop.  It is now a potentially common sight for many casual Linux users.

To correct this issue on a Red Hat, CentOS or Fedora Linux system, all you need to do is make a simple change to your GRUB configuration to point it at a symmetric multiprocessor (smp) kernel rather than the uniprocessor kernel.  The file that you will need to edit is /etc/grub.conf.  After some header comments, the beginning of your file should look something like this:

default=1
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.9-67.0.7.plus.c4smp)
     root (hd0,0)
     kernel /vmlinuz-2.6.9-67.0.7.plus.c4smp ro root=/dev/VG0/LV0
     initrd /initrd-2.6.9-67.0.7.plus.c4smp.img
title CentOS (2.6.9-67.0.7.plus.c4)
     root (hd0,0)
     kernel /vmlinuz-2.6.9-67.0.7.plus.c4 ro root=/dev/VG0/LV0
     initrd /initrd-2.6.9-67.0.7.plus.c4.img

The GRUB configuration file can appear daunting at first but, in reality, it is quite simple to deal with.  The only line we are concerned with modifying is the “default” value.  In this case it is set to 1.  The grub.conf file contains a list of available kernels for us to use.  We may have just one or possibly several, maybe even dozens.  In this case we see two.  You can see here that we have a CentOS 2.6.9 c4smp and a CentOS 2.6.9 c4 kernel.  You only need to be concerned with the “title” lines.  These are your kernel titles.  Normally the kernels of most interest will be at the top of the file.

You can check the name of the kernel that you are currently running by issuing:

uname -a

The first title line is kernel “0”, the second is kernel “1”, the next “2” and so forth.  Right now our “default” value is pointing to “1”, which is the second kernel from the top and, as you will notice, not an smp kernel (therefore a uniprocessor kernel).  In this case all we need to do is change the “default” value from “1” to “0” so that it points to the first kernel option, which for us is the smp kernel.
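The zero-based counting above is easy to get wrong by eye in a long file, so here is a small sketch that prints each “title” entry alongside the index that the “default” directive uses.  A sample file stands in for the real config here; on a live system you would point the awk command at /etc/grub.conf instead.

```shell
# Build a small sample grub.conf (stand-in for /etc/grub.conf).
cat > /tmp/grub.conf.sample <<'EOF'
default=1
timeout=5
title CentOS (2.6.9-67.0.7.plus.c4smp)
title CentOS (2.6.9-67.0.7.plus.c4)
EOF

# Print each "title" line with its index: the first title is 0,
# which is exactly how the "default" directive counts entries.
awk '/^title/ { print i++ ": " $0 }' /tmp/grub.conf.sample
# prints:
# 0: title CentOS (2.6.9-67.0.7.plus.c4smp)
# 1: title CentOS (2.6.9-67.0.7.plus.c4)
```

With this numbering in front of you, picking the index of the smp kernel for the “default” line is unambiguous.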

After your grub.conf file has been saved you may reboot the Linux system.  If all goes well it will return to you with additional logical processors enabled.  You can verify the name of the loaded kernel with the command given above.

]]>
https://sheepguardingllama.com/2008/04/linux-processor-ignored/feed/ 2