
Main Area and Open Discussion > Living Room

Building a home server. Please help, DC!


Stoic Joker:
It's not so much the actual read/write as it is the fact that every drive in the array spins up for every read/write - so there's more wear and tear on the drive mechanics rather than the disk platters' surface.-40hz (August 05, 2011, 10:53 AM)
--- End quote ---

Okay, now we're getting somewhere (i think).


If you saved a file to a single drive, only it would spin up and be written to (along with the housekeeping of finding sufficient free clusters). On a three element RAID-5, three drives would be spun up to accomplish the same thing, plus need to write additional information (i.e. parity) above and beyond that contained in the actual file itself. That's three times the disk activity plus "parity tax" plus three times the heat generated over a single drive save operation.-40hz (August 05, 2011, 10:53 AM)
--- End quote ---

Hm... three times the disk activity regarding spinning up three drives, ok. But three times the i/o I ain't buying (I'm thinking closer to 1.5). The scenario also assumes the drives weren't already spun up for some reason (do SCSI drives ever spin down?).
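To put rough numbers on the write cost being debated here: for a small write, a RAID-5 controller typically does a read-modify-write of the data block plus the parity block (the classic "small-write penalty": 2 reads + 2 writes). A minimal sketch, assuming simple XOR parity on a three-drive stripe (the block values are made up for illustration):

```python
# Sketch of the RAID-5 small-write path, assuming XOR parity.
# Updating one data block costs 4 I/Os: read old data, read old
# parity, write new data, write new parity.

def update_parity(old_parity: int, old_data: int, new_data: int) -> int:
    # New parity can be computed without reading the other drives:
    # P' = P xor D_old xor D_new
    return old_parity ^ old_data ^ new_data

# Three-drive stripe: two data blocks and one parity block.
d0, d1 = 0b1010, 0b0110
parity = d0 ^ d1                     # 0b1100

new_d0 = 0b1111
parity = update_parity(parity, d0, new_d0)
d0 = new_d0

assert parity == d0 ^ d1             # parity still consistent
```

So for small random writes the multiplier is closer to 4x I/Os than either 1.5x or 3x, though sequential full-stripe writes avoid the extra reads.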

Energy consumption/heat issues I can see (kinda), but it makes me wonder how much extra another HDD is actually gonna cost in a year (filed under why I hate accountants ;)) - $5?


So when you add in the MTBF for each of the three drives, you have a higher probability of a drive failing all other factors being equal.-40hz (August 05, 2011, 10:53 AM)
--- End quote ---

Granted, statistics isn't my thing ... But if the MTBF for a given drive is 3,000 hours, then for 3 drives it should still be 3,000 hrs. Or is this just the Murphy's Law "more moving parts..." argument? <-I'll buy that - as it appeals to my cynical side (hehe).


And most arrays have more than three drives since that's the least cost effective RAID-5 configuration since you always sacrifice one drive to parity even if that drive doesn't exclusively hold the parity data.-40hz (August 05, 2011, 10:53 AM)
--- End quote ---

Funny, I would consider 3 (or a multiple thereof) to be the best choice for RAID-5. With 3 drives you sacrifice 33% of the total storage for parity info; with N drives it's one drive out of N, so the fraction shrinks as the array grows - but that goes back to the fewer-moving-parts-is-better argument. Use a smaller number of larger drives. *Shrug* Having only 3 drives just makes the parity "overhead" more obvious up front.
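For the record, the arithmetic on RAID-5 capacity is simple: one drive's worth of space goes to parity regardless of array size, so the overhead fraction is 1/N. A quick sketch (drive size is an arbitrary example):

```python
# Usable capacity of an N-drive RAID-5: one drive's worth of space
# goes to parity, so the overhead fraction is 1/N, not a fixed 33%.

def raid5_usable(drives: int, size_gb: float) -> float:
    assert drives >= 3, "RAID-5 needs at least 3 drives"
    return (drives - 1) * size_gb

for n in (3, 4, 6):
    total = n * 1000.0
    usable = raid5_usable(n, 1000.0)
    overhead = 1 - usable / total
    print(f"{n} drives: {usable:.0f} GB usable, {overhead:.0%} parity overhead")
```

With 1 TB drives that's 33% overhead at 3 drives, 25% at 4, and about 17% at 6.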


Most times, the drives chosen for arrays are built to a higher quality standard than those normally deployed in PCs - so that may even up the failure occurrence rate up between server and non-server drives despite a higher utilization rate.-40hz (August 05, 2011, 10:53 AM)
--- End quote ---

I understand where you're going here, but I can't help but think that the design of a Server/Enterprise class drive would sort of have to be predicated on the fact that it would not be getting very much sleep (e.g. spinning down) ... Know what I mean?

f0dder:
Keep in mind that when you RAID, you're not addressing at sector or filesystem cluster sizes anymore - you're addressing RAID block sizes. So a 1-byte change to a file on RAID-5 can end up pretttty expensive - multiple drives as well as large blocks per drive.

But I guess you'd have a smart administrator that tries to match FS cluster size, RAID block size and, in the case of SSDs, erase-block sizes to something reasonable.
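A rough sketch of how a byte offset maps down to a RAID block and drive - the block size, drive count, and simple rotating-parity layout here are all illustrative assumptions, not any particular controller's scheme:

```python
# Map a byte offset to its RAID-5 location, assuming a simplified
# layout where parity rotates across drives one stripe at a time.
# Block (stripe-unit) size and drive count are illustrative.

BLOCK = 64 * 1024      # 64 KiB stripe unit (assumed)
DRIVES = 3             # per stripe: 2 data blocks + 1 parity block

def locate(offset: int):
    data_block = offset // BLOCK            # nth data block overall
    stripe = data_block // (DRIVES - 1)     # each stripe holds DRIVES-1 data blocks
    parity_drive = stripe % DRIVES          # rotate parity across drives
    slot = data_block % (DRIVES - 1)        # which data slot within the stripe
    # data slots skip over whichever drive holds this stripe's parity
    drive = slot if slot < parity_drive else slot + 1
    return stripe, drive, parity_drive

# A 1-byte change at offset 200,000 still dirties a whole 64 KiB block
# on one drive, plus that stripe's parity block on another drive.
print(locate(200_000))
```

This is why a badly chosen block size hurts: every sub-block write drags a full block (and its parity) through the read-modify-write cycle.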

Stoic Joker:
Keep in mind that when you RAID, you're not addressing at sector or filesystem cluster sizes anymore - you're addressing RAID block sizes. So a 1-byte change to a file on RAID-5 can end up pretttty expensive - multiple drives as well as large blocks per drive.
-f0dder (August 05, 2011, 05:29 PM)
--- End quote ---


...sssSo, a 1-byte (file change) write automatically requires/results in (assuming 3 drives) a complete rewrite of both of the corresponding blocks? That does sound a bit pricey. But it does explain why the block size selection is so critically dependent on intended usage during setup.


But I guess you'd have a smart administrator that tries to match FS cluster size, RAID block size and, in the case of SSDs, erase-block sizes to something reasonable.-f0dder (August 05, 2011, 05:29 PM)
--- End quote ---

Hm... Any chance you could give an example on the first part before I commit to a yes or no on that??  :D

The price, performance, reliability trinity will be keeping SSDs out of my range for a while yet. I just can't justify paying top dollar for cutting edge performance that might grenade if ya look at it funny. Pretty much the same reason I never got into overclocking heavily.

f0dder:
...sssSo, a 1-byte (file change) write automatically requires/results in (assuming 3 drives) a complete rewrite of both of the corresponding blocks? That does sound a bit pricey. But it does explain why the block size selection is so critically dependent on intended usage during setup.-Stoic Joker (August 05, 2011, 06:10 PM)
--- End quote ---
Yep - read + modify (in memory) + write. Just like you've gotta do when dealing with a plain IDE drive - though there you're only dealing with a single drive and a single sector.
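That single-drive read-modify-write cycle can be sketched in a few lines - sector size and the in-memory "device" here are just illustrative:

```python
# Read-modify-write: even a 1-byte change means reading the whole
# sector into memory, patching it there, and writing it all back.

SECTOR = 512

def patch_byte(device: bytearray, offset: int, value: int) -> None:
    start = (offset // SECTOR) * SECTOR
    sector = bytes(device[start:start + SECTOR])   # read the whole sector
    buf = bytearray(sector)
    buf[offset - start] = value                    # modify in memory
    device[start:start + SECTOR] = buf             # write the whole sector back

disk = bytearray(4 * SECTOR)                       # toy 2 KiB "disk"
patch_byte(disk, 1000, 0xFF)
assert disk[1000] == 0xFF
```

RAID-5 does the same dance, but per stripe unit and across multiple drives, with the parity block along for the ride.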

steeladept:
Okay, to add my 2 bits.  SJ brought up a good point - if the main penalty is due to spinup/spindown, then an SSD shouldn't be affected WRT MTBF issues.  There is still that nasty penalty that f0dder alluded to - a change as small as one bit requires a write to every affected drive in the stripe (read data -> change bit -> recalculate parity -> write data+new parity) - which will take its toll on the write-life of an SSD, but that shouldn't change its MTBF, just its lifespan, if you will.

As to SJ's question on why Mean Time Between Failure -

Granted, statistics isn't my thing ... But if the MTBF for a given drive is 3,000 hours, then for 3 drives it should still be 3,000 hrs. Or is this just the Murphy's Law "more moving parts..." argument? <-I'll buy that - as it appeals to my cynical side (hehe).

--- End quote ---
You have the essence of it.  MTBF measures any failure within the system.  The more parts, the more pieces there are to fail - and, eventually, the more failures there will be.  It does NOT measure the severity of a failure, or even provide a directly useful measure of lifespan (most failures occur as the device ages), but it does give a good idea of expected quality.  A lower MTBF doesn't mean any one device (or system, in this case) will fail before any other particular one - it just means the one with the lower MTBF is statistically more likely to fail first.

Example cancelled - can't find the formulas, other than calculus that I don't want to get into....
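No calculus needed for the simple case: under the standard exponential-failure model, failure rates of independent parts add, so the any-part-fails MTBF of a system is the reciprocal of the summed reciprocals. A minimal sketch using SJ's 3,000-hour figure:

```python
# System MTBF for independent parts (exponential-failure model):
# failure rates add, so MTBF_sys = 1 / sum(1 / MTBF_i).

def system_mtbf(part_mtbfs):
    return 1.0 / sum(1.0 / m for m in part_mtbfs)

# Three identical 3,000-hour drives: the time to the FIRST drive
# failure drops to 1,000 hours, even though each drive individually
# is still a 3,000-hour drive.
print(system_mtbf([3000.0, 3000.0, 3000.0]))   # 1000.0
```

So SJ's intuition that each drive is still a 3,000-hour drive is right; it's the "any of the three fails" event that gets three times as likely.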

One failing of MTBF marketing is that redundant systems automatically show a lower MTBF even though they are actually more reliable - being redundant, they can be repaired without losing system operability.  This is not a knock on the measure, but on the usurped use of the measurement for marketing purposes: MTBF looks at the system holistically, so adding redundancy lowers the MTBF (more parts, more failures) even while it raises the system's availability.
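That MTBF-vs-reliability distinction can be made concrete with the textbook approximation for RAID-5 mean time to data loss (losing data takes a second drive failure during the repair window of the first). This assumes independent exponential failures and a repair time MTTR; the 24-hour rebuild window is an arbitrary example:

```python
# Array MTBF goes DOWN with more drives, but mean time to data loss
# (MTTDL) goes way UP, because redundancy absorbs the first failure.

def mttdl_raid5(n: int, mtbf: float, mttr: float) -> float:
    # Classic approximation: MTTDL ~= MTBF^2 / (N * (N-1) * MTTR)
    return mtbf ** 2 / (n * (n - 1) * mttr)

n, mtbf, mttr = 3, 3000.0, 24.0
print(1.0 / (n / mtbf))            # any-drive-fails MTBF: 1000.0 h
print(mttdl_raid5(n, mtbf, mttr))  # time to actual data loss: 62500.0 h
```

Same array, two very different numbers - which is exactly why quoting the raw system MTBF of a redundant array is misleading.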
