The Sun SunFire V100 uses legacy Parallel ATA (PATA) – UltraATA/100 – hard drive controllers. A base configuration V100 includes one hard drive (generally 40GB or 80GB) in a removable “drive cage”. A second drive can be installed, but it requires a second drive cage; otherwise there is nowhere to mount the second drive.
Confusingly, the single-drive configuration puts the only hard drive on the same cable as the CD-ROM drive. This trips people up, especially when installing a new hard drive to replace the original. Most hard drives today ship with their jumpers set to “Cable Select”, which works 99% of the time; on the V100, however, you will need to manually jumper the hard drive to “Master” because the CD-ROM is already set to “Slave”. The two devices sit backwards on the cable.
If a second hard drive is installed, it can be jumpered either “Cable Select” or “Master”, as it will be the only device on its cable. “Master” is the recommended setting: explicit jumpering avoids cable-detection quirks and is more reliable.
Also confusing: on the server itself, the primary controller (IDE 0) is the one with no hard drive attached natively, while the native hard drive and the CD-ROM attach to the secondary controller (IDE 1). This isn’t a problem, but it can be disorienting when working from the console.
The biggest surprise to many people when adding hard drives to the SunFire V100 is that its IDE controller is limited to 28-bit logical block addressing (LBA), which physically caps each device at 137GB. (Technically this makes the controller not a true ATA-6 or UltraATA 100 device but effectively an ATA-2 device!) Supporting larger devices requires 48-bit LBA.
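The 137GB figure falls straight out of the addressing math: 28-bit LBA can address 2^28 sectors of 512 bytes each. A quick sketch (the 160GB drive size is just the example used later in this article):

```python
# 28-bit vs 48-bit LBA capacity limits, assuming 512-byte sectors.
SECTOR_BYTES = 512

lba28_bytes = (2 ** 28) * SECTOR_BYTES   # max addressable with 28-bit LBA
lba48_bytes = (2 ** 48) * SECTOR_BYTES   # max addressable with 48-bit LBA

print(f"28-bit LBA limit: {lba28_bytes / 1000**3:.1f} GB")  # ~137.4 GB
print(f"48-bit LBA limit: {lba48_bytes / 1000**5:.1f} PB")  # ~144.1 PB

# On a 160GB (decimal) drive behind a 28-bit controller,
# the portion beyond the limit is simply wasted:
drive_bytes = 160 * 1000**3
unusable_gb = (drive_bytes - lba28_bytes) / 1000**3
print(f"Unusable on a 160GB drive: {unusable_gb:.1f} GB")   # ~22.6 GB
```

That ~23GB of stranded space is why 160GB drives are about the largest size worth buying for this machine.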
I have put serious effort into finding a workaround for the 28-bit LBA issue but have not managed to find one. The issue is limited to a very small number of Sun UltraSPARC machines and therefore does not appear to have been addressed in Solaris. Perhaps now, with the advent of OpenSolaris, someone will decide to tackle this problem and write a reliable 48-bit LBA overlay, but it appears unlikely. If anyone knows of a workaround, please comment and let us know.
Possibly the best option is to use 160GB drives: these are inexpensive and only barely overkill, as just 23GB will be unusable. Might I recommend the Seagate Barracuda 7200.10 UltraATA/100 7200rpm 160GB with 8MB cache? You can get it quite inexpensively from Newegg. The 7200.10 is the final generation of Barracuda drives to include support for PATA connections, and it improves performance and reliability over the 7200.9 series by moving to Seagate’s perpendicular recording technology – very appropriate when installing it into a server of this class.
Check out my SunFire V100 page for everything you ever wanted to know about the SunFire V100 but were afraid to ask.
Somebody seems to have solved the problem.
First of all, purchase an IDE-to-CF adapter.
Fit the adapter, jumpered as master, to the DVD-ROM cable. Make sure the DVD drive is set as the slave device via the jumper on the back of the drive.
Fit a CompactFlash card of 2, 4, or even 8GB to the adapter. Anything larger is not needed.
Power on the platform and adjust the OBP boot-device variable so that the default boot device is the CF module (/[email protected],0/[email protected]/[email protected],0). Issue the command “setenv boot-device disk2” to do this.
Then adjust the device alias of the DVD drive so that it now points to the slave device (/[email protected],0/[email protected]/[email protected],0:f). Issue the command “nvalias cdrom /[email protected],0/[email protected]/[email protected],0:f” to do this.
Issue the command “setenv use-nvramrc? true” so that the nvalias above takes effect on the next reboot.
Issue the command “reset-all” to reset the system and apply these settings.
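Taken together, the OBP steps above look like this at the ok prompt (device path copied exactly as given above – confirm it against your own machine’s output from the OBP “show-disks” command before committing):

```
ok setenv boot-device disk2
ok nvalias cdrom /[email protected],0/[email protected]/[email protected],0:f
ok setenv use-nvramrc? true
ok reset-all
```

Because use-nvramrc? is true, the nvalias line is replayed from NVRAM on every subsequent reset, so the cdrom alias survives reboots.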
Now place your Nevada or OpenSolaris (MilaX) CD/DVD into the DVD-ROM drive and boot it.
Install Nevada or OpenSolaris to the CF module.
Issue the command:
“zpool create array c0t0d0 c0t1d0”
ZFS “should” then relabel the disks with GPT (EFI) labels, use 148GB per disk, and create a stripe of 296GB already mounted at /array. (Valid for 160GB drives and probably higher capacities.)
Worked for me but with slightly different hardware.
Interesting – I would not have expected this to work. Which hardware did you try this on?
V100 using 300GB IDE Drive (Maxtor MAX 10 7.2K)
When booting a FreeBSD 8.x disk on the V100 server using a disk larger than 137GB, you will get this message:
atapci0: using PIO transfers above 137GB as workaround for 48bit DMA access bug, expect reduced performance
So it is possible to install and use larger disks, but as the message says, you fall back to PIO mode, which means slow performance. For small storage needs it may still help.
There’s a note in the release docs on the FreeBSD site about this:
“The ata(4) driver now supports a workaround for some controllers whose DMA does not work properly in 48bit mode. For affected controllers, PIO mode will be used for access to areas beyond 137GB. [MERGED]”
Hope this helps.
That must be handled specifically by FreeBSD, as there is no such message with Solaris 10 (the latest that I have tested; 11 Express is likely the same, and 11 is not supported on this CPU) – it just hard-limits to 137GB. I have 160GB drives in all of mine and they work fine, just limited to 137GB.
I just formatted one of the V100 units with FreeBSD 9 using gpart, and with ZFS it did indeed make a 4GB and a 145GB partition – above the 137GB limit on my 160GB drives. Excellent.
What is the largest space that anyone has gotten to work? I’ve got some 750GB drives laying around that might be worth experimenting with.
Finding large ATA drives can be problematic these days.
Thanks for the great info. I am a noob to this, but am learning. Got four Netra X1s, a V100, and a V20z for cheap, now learning the path…