

Please help superboyac build a server (2013 edition).


superboyac:
But it's kinda moot now. Unless I missed something, WHS has been officially discontinued. Microsoft is suggesting its very stripped-down "Windows Server Essentials" as the replacement.
-40hz (August 06, 2013, 08:26 AM)
--- End quote ---

It's not moot to those people (i.e., me) hit by it.  If they paid me, I wouldn't use it in any iteration.  It's one thing to be hit by something like this in beta software, but that was supposedly production-ready.  Nope.  Nuh uh.  Not even if they swore on their children's lives would I use it again.
-wraith808 (August 06, 2013, 09:06 AM)
--- End quote ---
Wow, this is really surprising to me.  I had heard such rave reviews about WHS and its drive pooling over the past couple of years.  OK, this will convince me to go with FreeNAS and ZFS.

superboyac:
(intermission)
http://arstechnica.com/information-technology/2013/05/fios-customer-discovers-the-limits-of-unlimited-data-77-tb-in-month/

Yes, Virginia, there is a limit to what Verizon will let you do with FiOS' "unlimited" data plan. And a California man discovered that limit when he got a phone call from a Verizon representative wanting to know what, exactly, he was doing to create more than 50 terabytes of traffic on average per month—hitting a peak of 77TB in March alone.

"I have never heard of this happening to anyone," the 27-year-old Californian—who uses the screen name houkouonchi and would prefer not to be identified by name—wrote in a post on DSLreports.com entitled "LOL VZ called me about my bandwidth usage Gotta go Biz." "But I probably use more bandwidth than any FiOS customer in California, so I am not super surprised about this."

Curious about how one person could generate that kind of traffic, Ars reached out to houkouonchi and spoke with him via instant message. As it turns out, he's the ultimate outlier. His problem is more that he's violated Verizon's terms of service than his excessive bandwidth usage. An IT professional who manages a test lab for an Internet storage company, houkouonchi has been providing friends and family a personal VPN, video streaming, and peer-to-peer file service—running a rack of seven servers with 209TB of raw storage in his house.
--- End quote ---
DC, please make sure this doesn't happen to me!

wraith808:
You have to WARN people before you do stuff like that!  EYEBLEACH!  NSFW!  NSFAnyone!!! :stars:

superboyac:
Very nice article by an experienced ZFS user:
http://nex7.blogspot.com/2013/03/readme1st.html
There are a couple of things about ZFS itself that are often skipped over or missed by users/administrators. Many deploy home or business production systems without even being aware of these gotchas and architectural issues. Don't be one of those people!

I do not want you to read this and think "ugh, forget ZFS". Every other filesystem I'm aware of has as many issues as ZFS, and more. Going another route because of perceived or actual issues with ZFS is like jumping into the hungry shark tank with a bleeding leg wound, instead of the goldfish tank, because the goldfish tank smelled a little fishy! Not a smart move.

ZFS is one of the most powerful, flexible, and robust filesystems (and I use that word loosely, as ZFS is much more than just a filesystem, incorporating many elements of what is traditionally called a volume manager as well) available today. On top of that it's open source and free (as in beer) in some cases, so there's a lot there to love.

However, like every other man-made creation ever dreamed up, it has its own share of caveats, gotchas, hidden "features" and so on. These are the sorts of things an administrator should be aware of before they lead to a 3 AM phone call! Due to its relative newness (as compared to venerable filesystems like NTFS, ext2/3/4, and so on) and its very different architecture paired with very similar nomenclature, potential adopters of ZFS can ignore or assume certain things that lead to costly issues and lots of stress later.

I make various statements in here that might be difficult to understand or that you disagree with, often without wholly explaining why I've directed things the way I have. I will endeavor to produce articles explaining them and update this blog with links to them, as time allows. In the interim, please understand that I've been on literally thousands of large ZFS deployments in the last 2+ years, often called in when they were broken, and much of what I say is backed up by quite a bit of experience. This article is also often used, cited, and reviewed by many of my fellow ZFS support personnel, so it gets around and mistakes in it get back to me eventually. I can be wrong - but especially if you're new to ZFS, you're going to be better served not assuming I am. :)
--- End quote ---

I like this part, very helpful!
9. Pool Design Rules
I've got a variety of simple rules I tell people to follow when building zpools:

    Do not use raidz1 for disks 1TB or greater in size.
    For raidz1, do not use less than 3 disks, nor more than 7 disks in each vdev.
    For raidz2, do not use less than 5 disks, nor more than 10 disks in each vdev.
    For raidz3, do not use less than 7 disks, nor more than 15 disks in each vdev.
    Mirrors trump raidz almost every time. Far higher IOPS potential from a mirror pool than any raidz pool, given an equal number of drives.
    For 3TB+ size disks, 3-way mirrors begin to become more and more compelling.
    Never mix disk sizes (within a few %, of course) or speeds (RPM) within a single vdev.
    Never mix disk sizes (within a few %, of course) or speeds (RPM) within a zpool, except for l2arc & zil devices.
    Never mix redundancy types for data vdevs in a zpool.
    Never mix disk counts on data vdevs within a zpool (if the first data vdev is 6 disks, all data vdevs should be 6 disks).
    If you have multiple JBODs, try to spread each vdev out so that the minimum number of disks is in each JBOD. If you do this with enough JBODs for your chosen redundancy level, you can even end up with no SPOF (Single Point of Failure) in the form of a JBOD, and if the JBODs themselves are spread out amongst sufficient HBAs, you can even remove HBAs as a SPOF.

If you keep these in mind when building your pool, you shouldn't end up with something tragic.

--- End quote ---
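
To make that list concrete, here's a rough sketch of what a pool built to those rules might look like, assuming twelve identical 2TB disks (the device names below are placeholders, not my actual hardware):

    # Two raidz2 vdevs of six disks each: same redundancy type and same
    # disk count in every data vdev, and no raidz1 since the disks are >= 1TB.
    zpool create tank \
        raidz2 disk0 disk1 disk2 disk3 disk4 disk5 \
        raidz2 disk6 disk7 disk8 disk9 disk10 disk11

    # Sanity-check the resulting layout:
    zpool status tank

A pool of mirrors built from the same twelve disks would give up capacity in exchange for the higher IOPS he mentions.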

He also says it's better to use SAS disks than SATA.  But...I already have like 15 SATA disks!  Speaking of which...AHHHH NO!  Don't say that!
17. Crap In, Crap Out
ZFS is only as good as the hardware it is put on. Even ZFS can corrupt your data or lose it, if placed on inferior components. Examples of things you don't want to do if you want to keep your data intact include using non-ECC RAM, using non-enterprise disks, using SATA disks behind SAS expanders, using non-enterprise class motherboards, using a RAID card (especially one without a battery), putting the server in a poor environment for a server to be in, etc.
--- End quote ---
I have 40hz talking about ECC RAM somewhere in my notes.  Looks like this person agrees.
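
Side note: a quick way to check whether a box is actually running ECC RAM is dmidecode, assuming it's installed (it's available on most Linux distros and in FreeBSD ports):

    # Needs root. Look for "Error Correction Type" in the output;
    # "None" means the board/RAM combo is not doing ECC.
    dmidecode --type memory | grep -i "error correction"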

superboyac:
Yet another useful tidbit about SAS vs SATA and HBAs vs expanders:
This I doubt very much. As far as I know, all Sun/Oracle storage servers that use ZFS use HBAs only. No one uses expanders. Unless you can show links that confirm your statement, I will not accept it. I don't think your statement is correct, unfortunately.

For instance, the Sun X4500 "Thumper" ZFS storage server, with 48 disks in 4U, used 6 HBAs. Each HBA connected 8 disks like this:
HBA1: disk0 d1 d2 d3 d4 d5 d6 d7
HBA2: disk8 d9 d10 d11 d12 d13 d14 d15
...
HBA6: disk40 d41 d42 d43 d44 d45 d46 d47

Then you created a zpool with several vdevs. For instance a raidz2 vdev with
disk0
disk8
...
disk40

And another vdev with d1, d9, ..., d41. And another vdev with d2, d10, ..., d42. etc.

Then you collected all the vdevs into one zpool. If one HBA broke, for instance HBA1, it didn't matter, because every vdev lost just one disk and the zpool could still function.



Regarding the first post with the link about SAS being toxic, from Garret Damore: this Garret is a former Sun kernel engineer and the man behind Illumos. He has lots of credibility. If he says something, it is a confirmed fact. He now works at Nexenta, the storage company.


Basically, the link says that SAS uses a different protocol than SATA. In the expander there will be a conversion between SAS and SATA, and you lose information in the conversion. In the worst case, there might be problems. Thus, if you really want to be sure, use SAS disks with SAS expanders so there is no loss of data because of conversion.

Also, because ZFS will detect all problems immediately, ZFS will expose problems with expanders that other filesystems do not notice. ZFS having problems with SAS expanders is not a sign of fragility, but a sign of ZFS's superior error detection. With other filesystems the errors are still there, but you will not notice them.

I believe (I need to confirm this) that if you use ZFS with SAS expanders and you get problems, ZFS will detect all errors as normal, but it might not be able to repair them. The same goes for hardware RAID + ZFS: ZFS will detect errors but cannot repair them. Thus you will get an error report, but ZFS cannot repair all the errors.

I am trying to go for many HBAs instead. Much safer. Expanders introduce another source of problems. KISS: keep it simple.
--- End quote ---
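
If I'm reading that right, the Thumper-style layout he describes would translate into something like the following (hypothetical device names; one disk from each HBA in every vdev):

    # Six HBAs with eight disks each: disk0-7 on HBA1, disk8-15 on HBA2,
    # ... disk40-47 on HBA6. Each raidz2 vdev takes exactly one disk from
    # each HBA, so a dead HBA costs every vdev only a single disk.
    zpool create tank \
        raidz2 disk0 disk8 disk16 disk24 disk32 disk40 \
        raidz2 disk1 disk9 disk17 disk25 disk33 disk41

    # The remaining six "columns" of disks get added as vdevs the same way:
    zpool add tank raidz2 disk2 disk10 disk18 disk26 disk34 disk42
    # ...and so on through disk7/disk15/disk23/disk31/disk39/disk47.

That works out to eight 6-wide raidz2 vdevs, and any single HBA failure only degrades each vdev by one disk instead of taking the pool down.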
