Sheep Guarding Llama :: Scott Alan Miller :: A Life Online

Third Party Hard Drive for HP Proliant DL185 G5
https://sheepguardingllama.com/2009/05/third-party-hard-drive-for-hp-proliant-dl185-g5/
Sun, 31 May 2009

This document applies directly to the Hewlett Packard Proliant DL185 G5 server.  I have tested this with the twelve front bay configuration and will test shortly with the rear-facing drive configuration as well.  [Edit – Tested with fourteen drive configuration and it checked out just fine.]

When buying a hot-swap SAS or SATA 3.5″ hard drive for use in your new HP Proliant DL185 G5 you can acquire them directly from HP with the drive carrier (or sled, caddy) already attached.  This is the easiest method.  If you are like me and prefer to select your own drives from third party makers (in my case, I want to use low power, high capacity Seagate Barracuda LP drives) then you must purchase your hot swap drive sleds separately.  Finding the correct part number from HP can be quite a hassle.  Even calling them for support can be tricky as almost no one buys this part directly.

If you wish to get your drive trays separately and not through HP you may be in tough shape.  HP does not stock this part and, in fact, cannot even look up the part number for you.  I spent some time working with HP in the US on this issue and they were able to provide a visual confirmation of the part for me but could not verify the quality or the usability of the third party drives that I was able to find.  So I was stuck taking a risk to see if these drives would work.  For some machines HP can provide a part number and sometimes can even sell the caddy themselves, but not in the case of the DL185 G5.  I have taken the time, working with HP, with the third party vendors and with the server in hand, to verify these parts so that you do not have to.

The part that you need to purchase is HP Part Number: 373211-001.  This part is generally priced around $35 USD.  You will need as many as fourteen of them to fully populate the DL185 G5 with the two optional large drive bays (twelve in front and two in back) but you can use them individually as well, of course.  I have had good luck and have gotten a good price getting these trays from Discount Technology: DL185 G5 Hot Swap Drive Tray.

Beware of shops attempting to sell you a much lower cost alternative to this part number.  Quite often the lower cost part is actually a drive blank.  A drive blank is simply a plastic air dam that corrects airflow through the server chassis when a drive is not present. Many of these drive blanks should ship with your DL185 G5 when it is new.  They are readily available and very inexpensive but, mostly, useless.

The big advantage of working with third party sleds and drives is that the DL185 G5 can be populated for thousands of dollars less and can house as much as 28TB of storage in a tiny 2U server.  This is possibly the second densest storage unit on the market when used with the Seagate Barracuda LP 2TB drives – the densest is the Sun x4500 “Thumper” 4U storage server at many times the cost of the DL185.
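As a back-of-the-envelope check on the density claim above, here is a minimal sketch; the bay counts, drive size and chassis height all come from the text, and the rest is plain arithmetic:

```python
# Storage density check for the DL185 G5 configuration described above:
# 12 front bays + 2 optional rear bays, Seagate Barracuda LP 2 TB drives,
# in a 2U chassis.
drives = 12 + 2          # fully populated, including the rear drive option
drive_tb = 2             # TB per Barracuda LP drive
rack_units = 2           # DL185 G5 chassis height

total_tb = drives * drive_tb
density = total_tb / rack_units  # TB per rack unit

print(total_tb)   # 28 TB raw capacity
print(density)    # 14.0 TB per U
```

At 14TB per rack unit of raw capacity, the per-U math makes clear why a third-party-populated 2U box compares so well against much larger storage chassis.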

Also, when ordering a DL185 G5 you should be aware that if you get the larger twelve hard drive front drive bay that you cannot also have a front loading optical drive and will need to get your optical drive rear facing.  If you get the optional dual hot swap rear facing drive option then you cannot have a rear facing optical drive.  If you choose both of these options you must use a USB-based optical drive in order to boot from optical media.  This is not always obvious when you are attempting to order one of these machines.

Third Party Hard Drive for HP Proliant DL385 G5
https://sheepguardingllama.com/2009/02/third-party-hard-drive-for-hp-proliant-dl385-g5/
Sat, 28 Feb 2009

This document applies directly to the Hewlett Packard Proliant DL385 G2 and DL385 G5 servers which share a physical chassis.  To the best of my knowledge, this will also apply to the DL585 G2 and DL585 G5 which should share an eight bay drive cage with their 3xx series cousins.  I also believe that this applies to the Intel based DL380 G5 as well as the DL580 G5.  (The DL380 G4 and the DL580 G4 use different drive configurations as does the DL385 G5p.)

When buying a hot-swap SAS or SATA 2.5″ hard drive for use in your new HP DL385 G5 you can acquire them directly from HP with the drive carrier (or sled, caddy) already attached.  This is the easiest method.  If you are like me and prefer to select your own drives from third party makers (in my case, I want to use high performance Seagate drives) then you must purchase your hot swap drive sleds separately.  Finding the correct part number from HP can be quite a hassle.  Even calling them for support can be tricky as almost no one buys this part directly.

I have already done the legwork to find the correct part number and have purchased and tested this part to be sure that it is correct.  The part that you need to purchase is HP Part Number: 378343-002.  This part is generally priced around $50 USD.  You will need eight of them to fully populate the DL385 G5 drive housing but you can use them individually as well, of course.

Beware of shops attempting to sell you a much lower cost alternative to this part number.  Quite often the lower cost part is actually a drive blank.  A drive blank is simply a plastic air dam that corrects airflow through the server chassis when a drive is not present.  Seven of these drive blanks should ship with your DL385 G5 when it is new.  They are readily available and very inexpensive but, mostly, useless.

If you need to reach HP’s Parts Store directly you can call them at (800) 227-8164 in the US.

The Case Against SAN
https://sheepguardingllama.com/2008/08/the-case-against-san/
Sat, 16 Aug 2008

Despite an inflammatory post title, I believe that SAN (Storage Area Network) is a great technology with numerous scenarios where it is exactly the right technology and several scenarios that only exist because of SAN’s availability.  That being said, many enterprises today use SAN without doing any proper strategy, architecture or engineering.  It is being chosen as a technology not because of its appropriateness to the task at hand but simply because technology managers see it as easier, or more popular, to use it broadly than to carefully evaluate each system in question based on technical and financial factors.

SAN is an amazing technology that wonderfully complements virtualization, clustering and other advanced use case scenarios.  Not every machine is using these types of scenarios, however, and SAN has many downsides that need to be carefully considered before implementing it blindly.

SAN is Complex. Simply by choosing to use SAN we introduce another layer of complexity into the server equation.  (I am assuming server use situations here as SAN is nearly unheard of in the desktop space.  That being said, I use SAN on my own desktop.)  Having SAN means that either your system administrators need to wear yet another hat or you need to hire and maintain a dedicated storage administration, and possibly engineering, staff.

It also means that you will probably need to deal with sourcing and managing a fibre channel network along with the associated HBAs, fiber optics, etc.  Servers that would otherwise have just three simple Ethernet connections (I’m generalizing horribly here) are suddenly up to five or more connections making your datacenter folks oh so happy.

SAN is Expensive. Unless you opt to use a shared network SAN technology like iSCSI (or Z-SAN) then SAN introduces an expensive array of proprietary networking hardware, cabling and host bus adapters.  Only after all of those expenses must we consider the cost of the SAN itself.  SAN systems are generally quite expensive and only begin to approach being cost effective when utilization rates are extremely high and the systems are very large.  Heavy up front investments can make SAN difficult to cost justify even if long term utilization rates might be high.

SAN is Not Performant. High speed SAN networks, massive switching fabrics and huge drive arrays all play into an expensive and mostly futile attempt to get SAN technologies to perform at or near traditional direct attached storage technologies.  During the Parallel SCSI and PATA drive era, fibre channel SAN had an advantage over most local drives simply because of the high performance of its networking infrastructure.  Today this is not the case.

Unlike shared bandwidth technologies like Parallel SCSI and Parallel ATA (PATA), SAS and SATA drives have dedicated, full duplex bandwidth per device providing greatly increased transfer rates while lowering latency.  Only the largest, most expensive of high performance SAN systems could hope to overcome this gap in technology.

Typical SAN systems tend to use, in my experience, SATA devices traditionally running at 7,200 RPM.  Local drives are often SAS drives running at 15,000 RPM.  Often, especially in the AMD and Intel server worlds, local drives are handled via high powered RAID controller cards with dedicated processors and their own cache.  These cards move the cache closer to the system memory making their burstable throughput far greater than can normally be achieved in a SAN situation.

SAN is Not Easily Tunable. In most situations, SAN is managed as a single, giant storage entity.  Tuning is performed to an entire array but little thought is generally given to small segments within an array.

This is made nearly impossible and definitely impractical by the simple fact that physical drive resources are often shared and the concerns of each accessing system must be considered.  The obvious solution is to just tune for “average” use given no special considerations to any particular system.  If drive resources are not dedicated then we must question where the value of the SAN comes into play.

Drives located on a local machine can easily be tuned for cost and performance as needed.  Careful consideration of high speed SAS versus large volume SATA can be made on a volume by volume basis by the system engineer.  Drives can be grouped as needed into carefully chosen RAID levels such as 0 for raw performance, 5 for high speed random access with some additional reliability, 1 for good sequential access with full redundancy, 6 for additional redundancy over 5, etc.
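The per-volume RAID trade-offs described above come down to simple usable-capacity arithmetic.  As a minimal sketch, using the standard capacity formulas for each level (the eight-drive count in the example is hypothetical, chosen to match a DL385-class drive cage):

```python
# Usable capacity, in drives' worth of space, for the RAID levels discussed,
# given n identical drives.  These are the standard formulas for each level.
def usable_drives(level, n):
    if level == 0:
        return n          # pure striping, no redundancy, raw performance
    if level == 1:
        return n // 2     # mirrored pairs, full redundancy
    if level == 5:
        return n - 1      # one drive's worth of parity
    if level == 6:
        return n - 2      # two drives' worth of parity
    raise ValueError("unsupported RAID level")

# Hypothetical example: a cage of eight identical drives
for level in (0, 1, 5, 6):
    print("RAID", level, "->", usable_drives(level, 8), "drives usable")
```

The point of doing this per volume locally, rather than once for a whole shared array, is that the engineer can pick the level that matches each workload instead of tuning for the average.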

Drive volumes can also be isolated so that drive systems often accessed simultaneously do not share command paths.  Careful filesystem design can greatly reduce drive contention and minimize drive head movement for increased performance and reliability.

SAN is Often Political. Simply by introducing SAN to a large organization we risk introducing new management, new skill sets, new job descriptions and, inevitably, confusion and paperwork.  By separating the storage from the server we create another point of coordination, keeping the system administrator from being a single point of contact and troubleshooting for system issues.

Anytime that we introduce a separation of duties we introduce company politics and a chain of communication.  Instead of troubleshooting a single system when a server goes down we have to, in the case of SAN, consider the server, the SAN box and the connecting network, plus peripheral pieces like the host bus adapters and the local configuration.  What might otherwise be a simple, almost meaningless change, like the addition of another drive to expand a server’s capacity by a terabyte, can suddenly scale into a major enterprise issue requiring much lead time, planning and expenditure.  And, of course, a system outage that used to take minutes to repair could easily become hours as company departments seek shelter rather than simply fixing the issue at hand.

SAN uses Additional Datacenter Footprint. Because almost any server already comes with internal storage capacity, the datacenter space needed by SAN equipment is generally redundant.  Until additional storage capacity is needed beyond that which can fit inside of the existing server chassis the SAN storage is completely additional within what are generally cramped and overutilized data centers.  In many cases when a server needs additional drive capacity SAN is still not necessarily a good option from a footprint perspective as many external drive array systems can be locally attached and use very little datacenter space.

SAN systems require more than simply physical space within the datacenter for their switching and storage pieces, they also require additional power and cooling.  In an era when we are fighting to make our datacenters as green as possible, SAN needs to be considered carefully with respect to its overall power draw.

SAN does not address Solid State Drives. Solid State Drive technology, or SSD, poses yet another obstacle for SAN in the enterprise.  SSD drives are much smaller capacity, currently, than traditional spindle based hard drives but often provide better performance at a fraction of the power consumption.  A traditional hard drive generally draws roughly fifteen watts while a standard SSD generally draws around one watt – a very significant power reduction indeed.
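The power gap above compounds across an array.  As a minimal sketch using the rough per-drive figures from the text (the eight-drive array size is a hypothetical example):

```python
# Power comparison using the rough figures cited above: ~15 W for a
# traditional spinning drive versus ~1 W for a standard SSD of the era.
hdd_watts, ssd_watts = 15, 1
drives = 8                      # hypothetical array size

watts_saved = drives * (hdd_watts - ssd_watts)    # instantaneous saving
kwh_per_year = watts_saved * 24 * 365 / 1000      # energy saved per year

print(watts_saved)              # 112 W across the array
print(round(kwh_per_year, 1))   # roughly 981 kWh per year
```

Multiplied across a datacenter, savings of this magnitude per chassis are exactly why SSDs matter to the green-datacenter argument, independent of any SAN-versus-DAS debate.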

SSDs often have very high burstable transfer rates which swing the performance balance far in favor of the locally attached storage options based on their greatly superior throughput.  For example, a standard Hewlett-Packard DL385 G5 server, a very popular model, has eight 3Gb/s SAS channels available to it for a total aggregate of 24Gb/s, six times that of the most common SAN connections.

SANs which choose to use SSD, which is likely to take quite some time because SANs generally lean towards large capacity over performance, will suffer from a lack of available throughput but will have the benefit of eliminating almost all of the issues mentioned earlier regarding drive contention from shared drive resources.

SAN is Confusing. While this factor comes into play less often, it still holds true that a majority of server “customers”, those people who utilize servers but are not the server or storage administrators, have a very poor understanding of SAN, NAS, DAS or filesystems in general, and by introducing SAN we can inadvertently introduce forms of complexity that cause communications and support issues.  While not an issue with SAN itself, in some cases technical confusion can impede adoption even when the technology is appropriate.

Bottom Line. SAN suffers from performance, organization, cost and issues of complexity while local storage is well understood, extremely inexpensive, simple to manage and offers extreme performance.  With rare exception, SAN, in my opinion, has little place competing with traditional direct attached storage options until DAS is unable to deliver necessary features such as resource sharing, certain types of replication, distance or capacity.
