Solaris Volume Manager, or SVM, is a logical disk management suite (formerly known as Solstice DiskSuite) that is included for free with the Solaris 9 and higher operating systems. SVM is an important piece of Sun's strategy for Solaris, as it provides the basic logical volume management functionality that UNIX users have come to expect not only in the enterprise but on the desktop as well, much as Linux provides through LVM.
As a quick and dirty "how to" introduction to SVM, I will walk through the process of setting up SVM for the root volume of a Sun SunFire V100 server. This server is equipped with two matched hard drives. In my example the drives are both 160GB Seagate Barracuda 7200.10 units. (Keep in mind that the V100's IDE controller can only address 137GB of each drive.)
I have already done the base install of Solaris 10, having set up a 20GB root partition on slice 0 and a 1GB swap partition on slice 1, leaving the rest of the drive space free for future changes. This much will be assumed before beginning the exercise. The most important pieces here are that we have a working Solaris 10 installation with a working root partition and a swap partition, and that there is excess free space available, as we will need to create a partition for the metadb.
SunFire V100 Notes:
Because this example is being done with the PATA-based V100 server, our drive naming is consistent between builds when using two drives. My primary drive is
c0t0d0s0 and my secondary drive is c0t2d0s0.
Step One: Create MetaDB Slice
The MetaDB is used to store data about the volumes. It does not need to be a large partition. I have seen recommendations of 50MB – 75MB. I am going to go for the lower number as I have only a single SVM volume to manage. The data will be very small. So let’s use the format command to add a 50MB partition at slice six. If you select 50MB and then do a “print” it comes to 51.80MB on my system. Perfect.
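As a sketch, the interactive format session looks roughly like this (a transcript, not a script; the starting cylinder depends on your existing layout, and the prompts may vary slightly by Solaris release):

```shell
# format                                  # select disk c0t0d0 from the menu
# format> partition                       # enter the partition menu
# partition> 6                            # edit slice 6
#   Enter partition id tag:    unassigned
#   Enter partition permission flags: wm
#   Enter new starting cyl:    <first free cylinder on your disk>
#   Enter partition size:      50mb
# partition> print                        # verify -- shows ~51.80MB here
# partition> label                        # write the new label to disk
```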
Step Two: Replicate Drive Partitions
Now that we have our primary drive partitioned as we would like it, it is time to set up the secondary drive to be exactly the same. Being identical is important because we are planning to mirror our root volume, and performance will be impacted if we do not have our partitions matching exactly.
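The usual way to do this (assuming the c0t0d0/c0t2d0 device names from above) is to copy the disk label from the primary drive to the secondary with prtvtoc and fmthard; slice 2 conventionally represents the entire disk:

```shell
# Copy the VTOC (partition table) from the primary drive to the secondary.
# prtvtoc reads the disk label; fmthard -s - writes that label to the target.
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t2d0s2
```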
Step Three: Create First MetaDB
Now that we have our partitions prepared for our root volume and our MetaDB we can create our MetaDB. In this example I have chosen to create one MetaDB space on each drive so that we have a backup should anything go wrong. Having only one is a bad idea. We can create both with a single command:
# metadb -a -f c0t0d0s6 c0t2d0s6
This command, if successful, will finish quietly. To verify that the change has taken place as expected you can simply run metadb with no arguments:
# metadb
        flags           first blk       block count
     a        u         16              8192            /dev/dsk/c0t0d0s6
     a        u         16              8192            /dev/dsk/c0t2d0s6
Step Four: Creating the RAID 0 Concat/Stripe
This is probably the most confusing part of the process. Although our goal is to make a RAID 1 set (a mirror), we first have to create a concat/stripe set. But don't worry, a mirror is still our goal. We will use the metainit command to create our d11 submirror on the first drive and our d12 submirror on the second drive. The naming isn't actually required to be done in this way, but it follows the Sun standard naming convention, which makes working with the volumes more straightforward in the future. In this case our entire mirror is d1x, with d10 being the top-level mirror and d11 and d12 being the two submirrors.
# metainit -f d11 1 1 c0t0d0s0
# metainit -f d12 1 1 c0t2d0s0
If you would like to verify the results of these commands you can do so by running the metastat command with no arguments.
Step Five: Creating the Mirror
Now that we have two submirrors created all we have to do is use the metainit command again to combine them together as a single mirror called d10.
# metainit d10 -m d11
Again, you can verify the current state of your mirror with the metastat command.
Step Six: Modifying the VFSTAB and SYSTEM files
Now that we have created our working mirror (don't get excited, we haven't replicated any data yet, so you aren't done at this point) we need to modify the virtual file system table – /etc/vfstab – to reflect that we will be booting from the d10 mirror device rather than from the disk device
/dev/dsk/c0t0d0s0. We can, of course, do this manually, but Sun has provided a very simple command that takes care of this for us. It is always wise to make a backup copy of your old vfstab file just in case you are forced to revert to get the server running again.
In addition to modifying the vfstab we also need to make a small modification to the /etc/system file. Again, it is wise to back up any configuration file before modifying it, although this change is very minimal. We simply need
rootdev:/pseudo/md@0:0,10,blk added to the end of the file. This too is handled for us by the same command.
# metaroot d10
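After running metaroot, the relevant entries in the two files should look roughly like this sketch (the exact comment markers may vary by Solaris release):

```
# /etc/vfstab -- root now mounts from the d10 metadevice
/dev/md/dsk/d10   /dev/md/rdsk/d10   /   ufs   1   no   -

# /etc/system -- appended by metaroot
* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,10,blk
* End MDD root info (do not edit)
```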
Step Seven: Reboot
With the vfstab and system files updated, reboot the machine so that it mounts root from the d10 metadevice rather than from the physical slice.
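Sun's root-mirroring procedure flushes the file systems before the reboot; assuming nothing else needs the box, something like:

```shell
# Flush and lock all file systems so the on-disk state is clean, then reboot.
lockfs -fa
init 6
```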
Step Eight: Attach Second SubMirror and Begin Replication
Now that the machine has successfully booted from the d10 mirror (your system did come back up, didn't it?) we can tell it to begin the process of replicating the data from the d11 submirror to the d12 submirror.
# metattach d10 d12
You can follow along as the system syncs the two submirrors with the metastat command you used earlier. On my V100 it took several minutes to complete on an idle system; there is a lot of disk I/O involved in this operation.
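While the resync runs, metastat reports progress on the mirror. The output looks roughly like this sketch (states and percentages are illustrative; the sizes will reflect your own slices):

```
# metastat d10
d10: Mirror
    Submirror 0: d11
      State: Okay
    Submirror 1: d12
      State: Resyncing
    Resync in progress: 42 % done
    ...
```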
Once d12 finishes replicating, your new mirror is complete and your server is RAID 1 protected and ready to go. Your new drive subsystem should be somewhat faster than your single drive of yore, and you can sleep well at night knowing that your data is being mirrored. If you have been following along and building your V100 the same as mine, then you will also have a second swap partition on the second drive that you have not yet done anything with. To use it, simply add the following line to /etc/vfstab (please only copy/paste this if you are using an identical setup to mine):
/dev/dsk/c0t2d0s1 - - swap - no -
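To start using the new swap slice without waiting for a reboot, you can also add it by hand (again assuming my device naming); the vfstab entry makes it permanent:

```shell
# Activate the second swap slice immediately.
swap -a /dev/dsk/c0t2d0s1
# Verify that both swap areas are now in use.
swap -l
```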
Thanks to Sandra Henry-Stocker at open.itworld.com whose article “Unix Tip: Mirroring your root partition with Solaris Volume Manager” supplied my introductory training in practical SVM.