
Building a home server. Please help, DC!


Stoic Joker:
Spanning & Striping are both incredibly dangerous for the same reason: if one disk fails (or just has a real bad day), everything on the array is gone. Now you have to rebuild the array and restore all of the data that isn't on it anymore, from somewhere. So if it's a 4TB array and you have good backups, you can get everything back up and running. Sure. In a day or so...

Or you can just not be down (RAID5). Blow a drive, you're still running (instead of scrambling for backups while trying to keep your heart rate under 150). Replace the drive, let it rebuild itself, go on with your day. Sure, if something else fails during the rebuild you get a catastrophic failure anyway. But that's what backups are for.

The point is that a single disk failure shouldn't automatically cut you off from all your data until such time as the primary system can be brought back on-line. Because if you're not physically there (due to being at work/out of town) when the box goes poof, you're stuffed for the duration if you don't have a little on-the-fly redundancy cushion.
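
On that note: if the box runs Linux software RAID (mdadm), even a dirt-simple watchdog script can make sure the server tells you it lost a drive while you're out of town. A rough sketch in Python - the mail host and address below are placeholders, not anything from this build:

--- Code: ---#!/usr/bin/env python3
# Sketch: mail an alert when a software-RAID member drops out, so a
# failed drive doesn't sit unnoticed until you're back in the office.
# Assumes Linux mdadm RAID (state is exposed in /proc/mdstat); the
# SMTP host and address are placeholders.
import re
import smtplib
from email.message import EmailMessage
from pathlib import Path

MDSTAT = Path("/proc/mdstat")
SMTP_HOST = "localhost"            # placeholder mail relay
ALERT_ADDR = "admin@example.com"   # placeholder address

def main():
    text = MDSTAT.read_text()
    # Each array stanza ends with a member map like [UU_]:
    # U = member up, _ = member failed or missing.
    degraded = [name for name, status in
                re.findall(r"^(md\d+) :.*?\[([U_]+)\]", text, re.M | re.S)
                if "_" in status]
    if not degraded:
        return
    msg = EmailMessage()
    msg["Subject"] = "RAID degraded: " + ", ".join(degraded)
    msg["From"] = ALERT_ADDR
    msg["To"] = ALERT_ADDR
    msg.set_content(text)          # full mdstat dump in the body
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    main()
--- End code ---

Drop that in cron every few minutes and the "box goes poof while you're at work" scenario at least comes with a heads-up.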

40hz:
re: Spanning

I don't go much for spanning or multi-drive striping for most personal or business uses.

About the only time RAID-5 makes sense (to me at least) is when you have something like an accounting or web application that can't experience unscheduled downtime for any reason. Usually because it interrupts "the flow of commerce" (i.e. sales) or some other key business function like issuing support tickets or software licenses to your customers. Having RAID allows you to stay up long enough to announce a maintenance period and get a good backup before you rebuild your array.

And don't be fooled by the "live rebuild" argument that says you can hot-swap and rebuild without taking your array offline. Yes, it can be done. But it's slow, and frustrating, and it drags server performance down so much that it's not practical for general purposes. About the only time it is viable is if you implement load balancing with automatic 'fail-over' to a secondary server that takes on the burden until the primary array gets rebuilt. Once again we're talking heavy-duty data center setups here. If you're something like a bank - go for it. Otherwise put it out of your head. <EDIT/UPDATE: see StoicJoker's comment below before taking the above as gospel. :mrgreen:>

Spanning is something I really don't understand except for very specialized circumstances - like streaming data collection, or media rendering. Basically where you don't know how big a file will be, other than it's gonna be humongous! Nothing can ruin your morning more than to discover your CGI project (which had been rendering for over 28 hours) aborted at the "94% completed" mark because it was a few hundred megabytes shy of the drive space it needed to finish. I've seen it happen. (There were tears...)

Pooling may be useful for a home media server. Especially where the owner is generally clueless about technology and keeps loading DVD rip after DVD rip onto their box. For people like this, pooling is probably the easiest and most practical approach. Run out of space? Just slap in another drive and add it to the pool.
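
Stripped to its core, that kind of pooling is just "drop the next file onto whichever member disk has the most free space." A toy Python sketch of the rule - the drive paths are made up, and real pooling software adds the merged view, duplication, and so on:

--- Code: ---import shutil
from pathlib import Path

# Hypothetical pool members: one folder per physical drive.
POOL = [Path(r"D:\media"), Path(r"E:\media"), Path(r"F:\media")]

def pick_member(pool):
    """The whole trick: target whichever member has the most free space."""
    return max(pool, key=lambda p: shutil.disk_usage(p).free)

def add_to_pool(src, pool=POOL):
    dest = pick_member(pool) / src.name
    shutil.copy2(src, dest)    # copy, preserving timestamps
    return dest

# Usage: add_to_pool(Path(r"C:\rips\Some.Movie.mkv"))
# Run out of space? Append another drive's folder to POOL and carry on.
--- End code ---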

But as Stoic pointed out above, it's still a dicey chance to take if it's data that's hard to replace or genuinely important to its owner.

Over time, I've begun to see the space limitations on physical drives as a blessing in disguise. The bigger the drive, the more disorganized it seems to become. And thanks to disk index/search utilities like Everything :-*, most people can get away with it. Fling your stuff in folders - and put folders within folders out the kazoo - and screw organization! Just use a utility to root out where you put something when you need it.

(see attachment)
It works. But it's sloppy. And it's not generally a good way to handle file organization.

FWIW, I tend to assign specific drives specific types of data. That allows me to more easily set up backup and sync routines on a case-by-case basis. Critical files and directories may get mirrored in real time. Other directories may require version control. Others may get simple backups. Some don't get copied or backed up at all since they're kept for convenience and easily replaced with newer versions should they ever be lost. (Linux distro ISOs or Microsoft's WSUS files are a good example of that.)
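
In script form, that one-drive-one-policy idea might look like the sketch below. The drive letters and the policy table are purely illustrative, and the "mirror" case leans on robocopy (which ships with Windows) rather than a true real-time mirroring tool:

--- Code: ---import shutil
import subprocess
from pathlib import Path

# Hypothetical layout: each drive holds one kind of data, so each
# drive gets exactly one backup policy.
POLICIES = {
    Path(r"C:\CriticalDocs"): "mirror",   # keep an exact copy
    Path(r"D:\Projects"):     "backup",   # plain periodic copy
    Path(r"E:\ISOs"):         "skip",     # replaceable, don't bother
}
BACKUP_ROOT = Path(r"Z:\backups")         # hypothetical backup target

def run_backups():
    for src, policy in POLICIES.items():
        dest = BACKUP_ROOT / src.name
        if policy == "skip":
            continue
        if policy == "mirror":
            # /MIR makes dest an exact mirror (deletes what's gone from src)
            subprocess.run(["robocopy", str(src), str(dest), "/MIR"])
        elif policy == "backup":
            # additive copy; needs Python 3.8+ for dirs_exist_ok
            shutil.copytree(src, dest, dirs_exist_ok=True)

if __name__ == "__main__":
    run_backups()
--- End code ---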

Simpler is better when it comes to drive and directory setups. Especially on servers. And extra especially when you're as simpleminded as I am about these things. ;D



Stoic Joker:
And don't be fooled by the "live rebuild" argument that says you can hot-swap and rebuild without taking your array offline. Yes, it can be done. But it's slow, and frustrating, and it drags server performance down so much that it's not practical for general purposes.-40hz (August 02, 2011, 01:48 PM)
--- End quote ---

I've actually never had a problem with it. If the server is under high (steady 50+% capacity) load, I can definitely see that as an issue. But for an SMB it just keeps them running, instead of being down for the duration of a full restore or (eek) brick-level rebuild. I've actually hot-swapped a dead drive (out of a 136GB RAID5 SCSI array) on our Exchange server, and let it rebuild during business hours ... without anyone noticing. (Dell PowerEdge 1800)


Over time, I've begun to see the space limitations on physical drives as a blessing in disguise. The bigger the drive, the more disorganized it seems to become. And thanks to disk index/search utilities like Everything :-*, most people can get away with it. Fling your stuff in folders - and put folders within folders out the kazoo - and screw organization! Just use a utility to root out where you put something when you need it.
 (see attachment in previous post)
It works. But it's sloppy. And it's not generally a good way to handle file organization.

FWIW, I tend to assign specific drives specific types of data. That allows me to more easily set up backup and sync routines on a case-by-case basis. Critical files and directories may get mirrored in real time. Other directories may require version control. Others may get simple backups. Some don't get copied or backed up at all since they're kept for convenience and easily replaced with newer versions should they ever be lost. (Linux distro ISOs or Microsoft's WSUS files are a good example of that.)

Simpler is better when it comes to drive and directory setups. Especially on servers. And extra especially when you're as simpleminded as I am about these things. ;D-40hz (August 02, 2011, 01:48 PM)
--- End quote ---


Now this I totally agree with! I also like to keep things that fragment quickly (temporary files, logs, user folders) segregated from things that almost never fragment (long-term archives, install images, reference materials; we have 17GB of service manuals). And both of them away from databases that grow slowly and are best kept in one piece.

40hz:
I've actually never had a problem with it. If the server is under high (steady 50+% capacity) load, I can definitely see that as an issue. But for an SMB it just keeps them running, instead of being down for the duration of a full restore or (eek) brick-level rebuild. I've actually hot-swapped a dead drive (out of a 136GB RAID5 SCSI array) on our Exchange server, and let it rebuild during business hours ... without anyone noticing. (Dell PowerEdge 1800)
-Stoic Joker (August 02, 2011, 02:39 PM)
--- End quote ---

You're doing better than me on that score with the couple of Dells I've tried it on. Neither was near capacity. But they both had remote users coming in via VPN to heavy-duty client-server database apps, so that may have had something to do with it.

Hmm...Gonna have to look into that a little more closely... :)

@SJ - Thx for sharing your experiences btw. 8)

superboyac:
40hz, Stoic:
Thanks so much for the discussion.  I'm really following along better than I expected, and I'm learning a lot.  I think I'm getting a clearer picture of what I want.  As far as taking "sides" goes, I think very much like 40hz on the simplicity approach, and the restrictions being a blessing in disguise.  As you can see, I struggle with this concept when you also add in my desire to overkill and overengineer everything.  It's a maturity thing right now...

Anyway, here's what I like:  I'm not going to RAID.  I'm pretty sure of that.  I'll have different disks for different stuff.  It's the videos that are the killer.  I think everything else can fit on one drive, and the videos will have to span multiple drives.  I'd like to pool them; I like that a lot.  I'll start with Windows 7 libraries, and if that proves insufficient for my desires somehow, I'll see what kind of third-party solutions can handle merging several directories on different hard drives so the client computers see them as one drive (please offer ideas if you know of any).
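
For what it's worth, the merge I'm after seems to boil down to a union listing across folders - something like this Python sketch, with hypothetical folder names (as I understand it, Windows 7 libraries do roughly this merge inside Explorer):

--- Code: ---from pathlib import Path

# Hypothetical video folders spread across several physical drives.
VIDEO_DIRS = [Path(r"D:\Videos"), Path(r"E:\Videos"), Path(r"F:\Videos")]

def merged_listing(dirs):
    """Union view: one name -> path map across all member folders."""
    merged = {}
    for d in dirs:
        for entry in d.iterdir():
            merged.setdefault(entry.name, entry)  # first drive wins on clashes
    return dict(sorted(merged.items()))

for name, path in merged_listing(VIDEO_DIRS).items():
    print(f"{name}  ->  {path}")
--- End code ---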

To me, "rebuilding" is as simple as copying all my files over to a new drive.  That's all I need, especially considering that multiple locations will have these copies readily available.  Anything fancier than that just doesn't seem to hit home to me.

I think this discussion is very good.  I've had several discussions in the past couple years about RAID, storage, backing up, etc., with a lot of people, and it seems to be a very divisive, confusing subject.  A lot of people are saying things that don't make a whole lot of sense to me.  I think if I could wrangle this discussion into a short presentation, it would prove very useful to people.  The question people have about all of this is "What should I do?  What is the right balance?"
