I like to start a Xen installation with the very handy virt-install command. Virt-install, available by default, makes creating a new virtual machine very simple. I will assume that you are familiar with this part of the process and already have Xen installed and working. If you are not sure that your environment is set up properly, I suggest paravirtualizing a simple, bare-bones Red Hat Linux server with virt-install to test your setup before challenging yourself with a much lengthier Windows install that has many potential pitfalls.
The first potential problem that many users face is a lack of support for full virtualization, though this is becoming less common as time goes on. Full virtualization must be supported at the hardware level in both the processor and the BIOS/firmware. (I personally recommend the AMD Opteron platform for virtualization, but be sure to get a processor revision, like Barcelona or later, that supports it.)
Using virt-install to kick off our install process is great but, most likely, you will do this and, if all goes well, the installer will begin to format your hard drive, at which point your Xen domain will simply die, leaving you with nothing. Do not be concerned. This is a known issue that can be fixed with a simple tweak to the Xen configuration file.
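The tweak in question is usually the domain's reboot behavior: configs generated for an HVM install commonly destroy the domain when the Windows installer triggers its first reboot, right after formatting. A minimal sketch of the fix, assuming a virt-install-generated config; your file name and existing values may differ, so verify against your own file:

```
# In /etc/xen/<your-guest> -- let the domain survive the installer's
# first reboot instead of being torn down:
on_reboot = 'restart'
on_crash  = 'restart'
```

With that in place, restart the install and the domain should come back up on its own when the installer reboots.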
CD Drive Configuration Issues
In some cases you may find that your CD/DVD drive is not recognized correctly. This can be fixed by adding a phy: designation in the Xen configuration file pointing to the CD-ROM drive. This is only necessary for people installing directly from CD or DVD; most people prefer to install from an ISO image, which does not have this problem.
In Red Hat, your Xen configuration files are stored, by default, in /etc/xen. Look in this directory and open the configuration file for the Windows Server 2003 virtual machine which you just created using virt-install. There should be a "disk =" configuration line. This line should contain, at a minimum, configuration details about your virtual hard drive and about the CD-ROM device from which you will be installing.
The configuration for the CD-ROM device should look something like:
disk = [ "file:/dev/to-w2k3-ww1,hda,w", ",hdc:cdrom,r" ]
You should change this file to add a phy: section for the CD-ROM device, pointing the system to the correct hardware device. On my machine the CD-ROM device is mapped to /dev/cdrom, which makes this very simple.
disk = [ "tap:aio:/xen/to-w2k3-ww1,hda,w", "phy:/dev/cdrom,hdc:cdrom,r" ]
Accessing the Xen Graphical Console Remotely via VNC
If you are like me, you do not install anything unnecessary on your virtualization servers. I find it very inappropriate for there to be any additional libraries, tools, utilities, packages, etc. on the virtualization platform. These are unnecessary, and each one risks bloat and, worse yet, potential security holes. Since all of the guest machines running on the host are vulnerable to any security concerns on the host, it is very important that the host be kept as secure and lean as possible. To this end I have no graphical utilities of any kind available in the host (Dom0) environment. Windows installations, however, generally require a graphical console in order to proceed. This can cause any number of issues.
The simplest means of working around this problem is to use SSH forwarding to bring the remote frame buffer protocol (a.k.a. VNC or RFB) to your local workstation, which, I will assume, has a graphical environment. This solution is practical for almost any situation, very secure, rather simple, and a good way to reach an emergency graphical console during any maintenance crisis. Importantly, it works from Linux, Mac OS X, Windows or pretty much any operating system from which you may be working.
Before we begin attempting to open a connection, we need to know on which port the VNC server is listening on the Xen host (Dom0). You can discover this, if you don't know it already from your settings, by running:
netstat -a | grep LISTEN | grep tcp
On Linux, Mac OS X or any UNIX or UNIX-like environment with a command-line SSH client (OpenSSH under Cygwin will also work this way on Windows), we can easily establish a connection with a tunnel bringing the VNC connection to our local machine. Here is a sample command:
ssh -L 5900:localhost:5900 [email protected]
If you are a normal Windows desktop user, you do not have a command-line SSH option installed by default. I suggest PuTTY; it is the best SSH client for Windows. In PuTTY you simply enter the name or IP address of your Dom0 server as usual. Then, before opening the connection, go into the PuTTY configuration menu and, under Connection -> SSH -> Tunnels, specify the Source Port (5900 by default for VNC, but check your particular machine) and the Destination (localhost:5900). Then just open your SSH connection, log in as root, and we are ready to connect with TightVNC Viewer to our remote graphical console session.
If you are connecting on a UNIX platform, such as Linux, and have vncviewer installed then you can easily connect to your session using:
vncviewer localhost::5900
Notice that there are two colons between localhost and the port number. If you use only one colon, vncviewer treats what follows as a display number rather than a port number.
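The mapping between the two forms is just arithmetic: VNC serves display N on TCP port 5900 + N. A quick sketch:

```shell
# VNC display N listens on TCP port 5900 + N, so "localhost:1" (display 1)
# and "localhost::5901" (explicit port) refer to the same server.
display=1
echo $((5900 + display))
```

This is also why a Xen host running several graphical consoles will typically hand out 5900, 5901, 5902 and so on, and why checking netstat first matters.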
If you are on Windows you can download the viewer from the TightVNC project, for free, with no installation needed. Just unzip the download and run TightVNC Viewer. Enter localhost::5900 and voilà: you have remote, secure access to the graphical console of your Windows server running on Xen on Linux.
Installing Subversion on Linux
Installation of Subversion is very simple if you are using yum. In addition to Subversion itself, you will also want to install Apache, as you will most likely want to access Subversion through a WebDAV interface. You can simply run:
yum -y install subversion httpd mod_dav_svn
Once Subversion is successfully installed, we need to create the initial repository. This can be done on the local file system, but I prefer to keep high-priority and highly volatile data stored directly on the NAS filer, as this is far more appropriate for this type of data.
As an aside, I generally like to keep low-volatility data (say, website HTML) stored on local discs for performance reasons, and because backups are not difficult to take using traditional methods (e.g. tar, cpio, Amanda, Bacula, etc.). High-volatility files I prefer to keep on dedicated network storage units where backups can easily be taken using more advanced methods like Solaris 10's ZFS snapshot capability. It is not always clear when data makes sense to keep locally or to store remotely, but I feel that you can gauge much of the decision on two factors: the frequency of data changes (that is, changes to existing files, not necessarily the addition of new files) and the degree to which the data is the focus of the application (that is, whether the data is incidental or key to it). In the case of Subversion, the entire application is essentially a complex filesystem frontend, so we are clearly on the "data focused" side.
I started writing this article on RHEL4 on a system with a small, local file system. When I returned to the uncompleted article and continued with it I was implementing this on a RHEL5 system with massive local storage and decided to keep my Subversion repository local on a dedicated logical volume for easy Linux based snapshots.
Subversion offers two backend storage options. The original method of storing Subversion data was the venerable Berkeley DB (BDB), which is now a product of Oracle. The newer method, and the default choice since Subversion 1.2, is FSFS, which uses native filesystem mechanisms for storage. In my example here, and for my own use, I choose FSFS, as I think it is more often the better choice. Most notably, FSFS supports remote filesystems over NFS and CIFS while BDB does not. FSFS is also easier to deal with when it comes to creating backups. My feeling is that unless you really know why you want BDB, stick with the default FSFS; there is a reason it was selected as the default.
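If you want to be explicit rather than rely on the default, svnadmin accepts a --fs-type flag. A minimal sketch, assuming Subversion 1.2 or later; the svnadmin call is shown in a comment since it requires Subversion to be installed, and the /tmp path below merely simulates the db/fs-type marker file in which a real repository records its backend:

```shell
# Explicitly request the FSFS backend at creation time:
#   svnadmin create --fs-type fsfs /var/projects/svn
#
# A created repository records its backend in db/fs-type; simulate that
# layout here so the check below is self-contained:
mkdir -p /tmp/demo-repo/db
echo "fsfs" > /tmp/demo-repo/db/fs-type
cat /tmp/demo-repo/db/fs-type
```

Reading db/fs-type is also a handy way to find out which backend an existing repository uses before you plan backups.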
Another note about creating Subversion repositories: some sources recommend putting Subversion repos under /opt. All I have to say is "No no no!" The /opt filesystem is not appropriate for regularly changing data. Any data that is expected to change on a regular basis (e.g. log files, source code repos, etc.) belongs in /var; this is the entire purpose of the /var filesystem. It stands for "variable" and is intended for data that changes regularly. Data going to /var is another indicator that an external network filesystem may be appropriate as well.
mkdir -p /var/projects/svn
At this point you can either use /var/projects/svn as a normal local directory or mount it remotely in some manner such as NFS, CIFS or iSCSI. Regardless of how the repository is set up, the rest of this document will function identically.
We are now in a position to use svnadmin to create our repository directory:
svnadmin create /var/projects/svn/
At this point, Subversion should already be working for you. If you are new to Subversion, we will do a simple import to test our installation. To perform this test, create a directory called “testproject” and put it in the /tmp directory. Now touch a couple of files inside that directory so that we have something with which to work. Then we will do our first Subversion import.
mkdir /tmp/testproject; cd /tmp/testproject; touch test1 test2 test3
svn import /tmp/testproject/ file:///var/projects/svn/test -m "First Import"
Your Subversion installation is now working, but few people will be happy accessing their Subversion repositories only from the local machine as we have done here. If you are used to working from the UNIX (Linux, Mac OSX, Cygwin, etc.) command line you may want to try accessing your new Subversion repository using SVN+SSH. Here is an example taken from an OpenSUSE workstation with the Subversion client installed:
svn list svn+ssh://myserver/var/projects/svn
testproject/
At this point you have access from your external machines and can perform a checkout to get a working copy of your code. To make the process really simple, be sure to set up your OpenSSH keys so that you are not prompted for a password. Many users, most notably Windows users, will want access over the HTTP protocol instead, since Windows does not natively support SSH.
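Setting up the keys can be scripted. A hedged sketch: the key path is a throwaway example, and the commented ssh-copy-id target stands in for your own Subversion host.

```shell
# Generate a passwordless key pair non-interactively (example path only):
rm -f /tmp/demo_svn_key /tmp/demo_svn_key.pub
ssh-keygen -t rsa -N '' -f /tmp/demo_svn_key -q
# Then install the public key on the server so svn+ssh stops prompting, e.g.:
#   ssh-copy-id -i /tmp/demo_svn_key.pub myserver
ls /tmp/demo_svn_key.pub
```

With the public key in the server's authorized_keys, the svn+ssh:// URLs above work without a password prompt.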
The first thing that you are going to need to do, if you are running SELinux and a firewall on your RHEL server like I am, is to open ports 80 and 443 in your firewall so that Apache is reachable. Normally I shy away from management tools, but this one I like: just use "system-config-securitylevel-tui" and select the appropriate services to allow.
You will also need to allow the Apache web server to write to the Subversion repository location within SELinux. To do so we can use the command:
restorecon -R /var/projects/svn/
We have one little trick that we need to perform. This trick is necessary because of what appears to be a bug in the way that Subversion sets the user ID when it runs. It is not necessary for all users, but it can be a pretty tough sticking point for anyone who runs into it and is not aware of what can be done to remedy the situation.
mkdir -p ~apache/.subversion && cp -r /root/.subversion/* ~apache/.subversion/
Configuring Apache 2 on Red Hat 5 is a little tricky, so we will walk through it together. The first thing that needs to be added is the LoadModule line for the WebDAV protocol. This goes into the LoadModule section of the main /etc/httpd/conf/httpd.conf configuration file.
LoadModule dav_module modules/mod_dav.so
The rest of our configuration changes for Apache 2 will go into a dedicated configuration file just for our subversion repository: /etc/httpd/conf.d/subversion.conf
I am including here my entire configuration file sans comments. You will need to modify your SVNPath variable accordingly, of course.
# grep -v \# /etc/httpd/conf.d/subversion.conf
LoadModule dav_svn_module modules/mod_dav_svn.so
<Location /svn>
DAV svn
SVNPath /var/projects/svn/
</Location>
At this stage you should not only have a working Subversion repository but should also be able to access it via the web. You can test web access from your local box with the svn command. Here is an example:
svn list http://localhost/svn/
References:
Mason, Mike. "Pragmatic Version Control Using Subversion, 2nd Edition". The Pragmatic Programmers, 2006.
Installing Subversion on Apache by Marc Grabanski
Subversion Setup on Red Hat by Paul Valentino
Setting Up Subversion and Trac As Virtual Hosts on Ubuntu Server, How To Forge
The SVN Book, RedBean
Additional Material:
Subversion Version Control: Using the Subversion Version Control System in Development Projects
After much searching I discovered that the obvious SUNWrsync package does, in fact, exist, but not, at this time, for Solaris 10. Rather, it is only available in Solaris Express (aka Solaris 11). This means that it is not available on the standard Solaris 10 Update 4 or earlier installation CDs. To my knowledge, Rsync has not been available on any previous version of Solaris either. The version of Rsync available from Solaris Express is currently 2.6.9, which is current as of November 6, 2006, according to the official Rsync project page.
Fortunately SUN has made Solaris Express available as a free download. Unfortunately it is a single DVD image that must be downloaded in three parts and then combined into a single, huge image. This is not nearly as convenient as having an online package repository from which a single package could be downloaded (hint, hint SUN!)
You will need to download all three files from SUN, unzip them and then concatenate them into a single 3.7GB ISO file from which you can extract the necessary package.
# unzip sol-nv-b64a-sparc-dvd-iso-a.zip
# unzip sol-nv-b64a-sparc-dvd-iso-b.zip
# unzip sol-nv-b64a-sparc-dvd-iso-c.zip
# cat sol*a sol*b sol*c > sol-nv-sparc.iso
# mkdir /mnt/iso
# lofiadm -a sol-nv-sparc.iso /dev/lofi/1
# mount -F hsfs -o ro /dev/lofi/1 /mnt/iso
# cd /mnt/iso/Solaris_11/Product/
# ls -l | grep rsync
You will now have the list of the two available Rsync packages: SUNWrsync and SUNWrsyncS. It is SUNWrsync that we are really interested in here. I like to move all of my packages that I am installing to my own personal repository so that I can keep track of what I am installing and to make it easier to build a matching machine or to rebuild this one. If you are going to use a repository in this way be sure to back it up or it won’t be very useful during a rebuild.
# cp -r SUNWrsync/ /data/PKG/
# pkgadd -d /data/PKG/
You will now be able to pick from the available packages in your repository and choose which to install. [Helpful hint: if you have a large number of packages in your personal repository, consider placing each package into its own directory. For example, make a directory called "rsync" or "rsync_client" and symlink (ln -s) back to the installation directory. This makes it easier and quicker to install a single package: you can simply "cd /data/PKG/rsync" and "pkgadd -d ." Much quicker and easier for large repos. By using the symlink method you maintain a single directory of the files while also having handy individual directories.]
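A sketch of that layout with throwaway paths (/tmp stands in for the real /data/PKG repository):

```shell
# Flat repository plus a per-package directory of symlinks:
mkdir -p /tmp/PKG/SUNWrsync /tmp/PKG/rsync
ln -sf ../SUNWrsync /tmp/PKG/rsync/SUNWrsync
# Installing is then just: cd /tmp/PKG/rsync && pkgadd -d .
ls /tmp/PKG/rsync
```

The relative link keeps the per-package directories valid even if the whole repository tree is moved or restored to a different mount point.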
Once you have installed the Rsync client package, it is ready to be used. Because we are not using Rsync in a server-based configuration, we have no configuration to worry about. Rsync is most popularly used as a client over SSH. Unlike most network-aware tools, which require an SSH tunnel to be created for them, Rsync has SSH support built in, making it extremely easy to use.
Let's start with an example of moving files from one server to another without running an Rsync daemon.
/usr/bin/rsync -av remote-host:/data/home/ /data/home
In this example we are synchronizing the remote /data/home directory with the local one, pulling files from the remote site to the local. This is a one-way sync: files missing locally are brought over, and files that exist locally but not remotely are left intact. This is a relatively safe process. Files of the same name will be overwritten, however, so work with test directories until you are used to how it behaves. You can run a test using the -n option; with it you get the output of the Rsync command without any files actually being moved, so you have a chance to see what would have happened. Here is the same command run in "test" (dry run) mode.
/usr/bin/rsync -avn remote-host:/data/home/ /data/home
With this particular Rsync package, the default mode of operation is to use SSH as the transport. This can be set explicitly, but that is not necessary. By using SSH you get the security of the encrypted SSH tunnel protecting your traffic, plus the security and ease of use that come with not needing to run a daemon process on any server that you want to sync to or from. The final option that you will normally want is "-z", which turns on compression during transfer. This will normally decrease the time that it takes to transfer files. The gzip algorithm used for the compression is very effective on text documents and general files and is quite fast, but on already-compressed files such as tgz, Z, zip, jpeg, png, gif, mp3, etc. it can, in the worst case, actually expand the data and will burn a lot of CPU without increasing the transfer speed. So it is best to be aware of the file types that you will be transferring; for most users, though, compression is the right choice. So our final transfer command is:
/usr/bin/rsync -avz remote-host:/data/home/ /data/home
I wanted to get up decently early this morning, probably around nine, but when I started to get up Oreo snuggled close and said in his puppy way “don’t get up yet, I want more snuggles” and I just couldn’t resist so I stayed in bed, awake, until almost eleven. Oreo finally, at that point, discovered the sunlight and decided that he would be happy moving out to the living room and laying in a sunspot on the loveseat.
I couldn’t decide what I wanted to work on this morning so I logged into my workstation and got to work on some Brainbench stuff since so much of that is now out of date. Dominica got up shortly after me and decided that she needed to do her homework so it worked out well for both of us. My first project this morning is taking the Red Hat Enterprise Linux (RHEL) 5 Beta exam from Brainbench. I don’t get any credit for this exam because it is only in beta but because I am a senior admin specifically on RHEL 3, RHEL 4 and RHEL 5 running it both at the office and at home I felt that I really should put in the effort to do the beta because I can provide important feedback to improve these tests for everyone.
The one nice thing about doing the beta exams is that you do get feedback even if it doesn’t officially go on your transcript. You have to take the exam twice so that they get a better idea of how the questions stack up in different configurations. On my first pass through I scored a high masters and ranked in the top percentile of all test takers. The test doesn’t have a means to report any higher than that. So that was encouraging.
One of the best things about doing lots of certifications like the Brainbench exams is that it really forces you to spend a lot of time researching things that you do not use every day, or possibly ever. It basically forces you into a one-to-two-hour crammed study session.
The beta tests took a little over two hours but I feel that it was time well spent. From there I continued on and renewed my Linux (General) certification even though the test is horribly out of date. Even with the test being terribly old and out of date and covering nothing that I do I still pulled off eighth in New Jersey.
I took a bit of a break and hung out with Dominica and Oreo for a little while before going on to the next test. I find that once I get into the testing groove I really like to stick with it. I am the same way about homework, believe it or not.
When we took Oreo outside for his afternoon walk we managed to time our reentry into the building perfectly to coincide with a fifteen-week-old Boston Terrier puppy named Barney coming into Eleven80 to visit some people. He was black and white just like our Oreo and so adorable. We took Oreo over to meet him and they were pretty friendly. Oreo is generally good with puppies; he just doesn't like Bull Terriers that are his size or larger for some reason. Then, while the two Bostons were saying hello, two black and white French Bulldogs came down to the lobby with the exact same markings as the two Bostons. It was like a weird Boston Terrier convention but with French Bulldogs masquerading as Bostons.
I did a quick image search on Yahoo! today for: "boston terrier" oreo. And would you believe that our Oreo is not only the first hit but is the only dog who shows up on the first seven-plus pages and is almost exclusively the only dog for the first nine pages! Our Oreo is the most famous Boston Terrier named Oreo ever.
I took the Linux (SUSE) exam after that and, without even bothering to try, as the test was ridiculously outdated and worthless, I managed to tie for ninth place in the world. What a bad exam. I decided to go on with the Server Administration exam, which is a general exam covering the basics of server administration without going into operating-system-specific details. I rushed through the test as the day was getting shorter by the minute and Oreo is more and more likely to need lots of attention as the day wears on. But I still ranked fifteenth in the United States and pulled off a Masters, so it was fine.
We ordered in dinner from Mi Pequeño Mexico. I did some more reading in my Prototype book. We watched two episodes of The Cosby Show while we ate our dinner. Then Dominica had to go back and work on her homework since she has a paper plus numerous other homework assignments due by midnight tonight.
Later in the evening Dominica sent me down to the Market City Deli to find her some cookies. I went down and ran into Pam on a mission to find a candy bar. She was out during halftime of the American football match that is going on today. When we went back to Eleven80 we ran into Ryan who had been watching the game but was relatively bored as American football is not exactly an exciting sport to watch. So he decided that he would grab some beer or something and stop up to our apartment sometime soon to hang out.
I put in some time looking at rsync and other backup options tonight as I am trying to determine a solid backup strategy for myself. Ryan came up and we talked about backups for a little while (Ryan is a UNIX system administrator.) He likes Bacula and I will be looking into it a bit more thoroughly. I am no backup expert so it is a hard area to make good decisions in for me.
Ryan and I hung out and enjoyed some New Orleans rum and Coke while he regaled us with tales of his week down in Louisiana helping to rebuild the city. He took off home on the early side so that he could get some sleep and get to work tomorrow. I decided to stay up with Dominica to keep her company while she worked on her homework. We should be in bed at approximately midnight.