Recent Posts

1051
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 14, 2013, 12:49 PM »
??? Mirroring for a VHD library? Library meaning storing them only, or running them from that location? Mirroring cuts spindle count in half, which will severely limit the number of VMs you can comfortably run simultaneously.
This is not for the VMs and all that. This is just for the file storage. I need to use more than 3 disks, and I'm not doing RAID. I can't tell if you guys are using the term RAID and drive pooling interchangeably. I have no desire for RAID of any kind, but I'm interested in drive pooling for the purposes of having storage capacities larger than the biggest disks available (4TB). So I want 8TB drive pools or larger.
1052
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 14, 2013, 09:25 AM »
If I go with Windows Server 2012 and their Storage Spaces, here are my notes from that:
http://technet.micro...ibrary/jj822938.aspx
Mirror
Stores two or three copies of the data across the set of physical disks.
Increases reliability, but reduces capacity. Duplication occurs with every write. A mirror space also stripes the data across multiple physical drives.
Greater data throughput than parity, and lower access latency.
Uses dirty region tracking (DRT) to track modifications to the disks in the pool. When the system resumes from an unplanned shutdown and the spaces are brought back online, DRT makes disks in the pool consistent with each other.

Requires at least two physical disks to protect from single disk failure.
Requires at least five physical disks to protect from two simultaneous disk failures.

Use for most deployments. For example, mirror spaces are suited for a general-purpose file share or a virtual hard disk (VHD) library.
There are three options: mirror, parity, simple.  So they are recommending mirror for my setup.
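Just to get a feel for what each option costs in capacity, here's a rough back-of-the-envelope sketch in Python. These are the usual rule-of-thumb overhead numbers, not anything from the TechNet page, and the four 4TB drives are just a hypothetical stand-in for the small test build:

# Rough usable-capacity estimate for the three resiliency options above.
# Rule-of-thumb numbers only; real figures depend on column count, slab
# allocation, and reserved space.

def usable_tb(disk_sizes_tb, layout, copies=2):
    raw = sum(disk_sizes_tb)
    if layout == "simple":      # striped across the pool, no redundancy
        return raw
    if layout == "mirror":      # two-way (copies=2) or three-way (copies=3)
        return raw / copies
    if layout == "parity":      # single parity, roughly one disk's worth lost
        return raw - max(disk_sizes_tb)
    raise ValueError(layout)

disks = [4, 4, 4, 4]            # hypothetical four 4TB drives (small test build)
for layout in ("simple", "mirror", "parity"):
    print(layout, usable_tb(disks, layout), "TB usable")

With four 4TB drives that works out to roughly 16TB simple, 8TB mirrored, and 12TB with single parity, so a two-way mirror only just reaches the 8TB pool size I'm after.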
1053
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 14, 2013, 08:58 AM »
I do want to try the Windows Server 2012 pooling features. Maybe I'll do that first, and if it's a problem I'll go to FreeNAS next. Here's an article talking about the new R2 drive pooling features:
[attached image: spaces-verus-arrays.png]
1054
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 13, 2013, 05:34 PM »
Rosewill does some nice affordable bare-bone external units. So does SANS Digital.

I would definitely test what you're planning with a smaller incarnation before you committed whole hog to a big installation.

Might also want to check some of the forums (FreeNAS etc.) to see if anybody else has done something like what you're planning. Or tried and failed. Reason I'm saying is that there is a good chance there's a breaking point somewhere once you go over a certain storage capacity and start running into reliability issues with off the shelf open software.

Maybe somebody knows for sure one way or the other? 8)

From everything I've read so far, the thing that sticks out the most for me is the hard drives. I have a bunch of consumer-grade drives that I basically got on sale over the past year or two. They are not ideal for a FreeNAS server, although most say they work fine. Really, the best way to go is SAS drives. So based on this, I'm going to try a 4-bay smaller implementation of what I originally planned and test it a little before going huge and deciding whether to get SAS drives or something.

There are a lot of hobby builds out there in the HTPC community, like the XBMC folks. Many of them use the Norco 24-drive racks, so they are dealing with tons of storage. It sounds like if I'm careful and follow the guy's advice in that FreeNAS guide above, I should be OK, but it will take some studying.

Another easier option I'm considering toying with is the original "soft" solution: using a normal Windows 7 OS or Windows Server, and handling the drive pooling with software tools like SnapRAID. The advantage there is supposedly an easier and more flexible setup; I can pull drives out with less headache, add drives, etc. The disadvantage is that there will likely be some performance hit, and maybe a reliability hit too. I mean, there must be a reason why enterprise users don't do that kind of stuff.
1055
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 13, 2013, 04:55 PM »
Call me crazy, but just for backup, I'm considering getting a standard plug-and-play NAS box. Maybe from QNAP... a 4- or 5-bay one. If it becomes redundant later after I have my custom boxes working, I can put it in a relative's home for off-site backup purposes, etc.

Nope... I call you coming to your senses :)  And I'd recommend taking a look at a Synology Diskstation- I'm very satisfied with mine, and in the end, it made me realize I didn't really need a server anymore.
I'm going to have to intentionally ignore this post for now :D (but duly noted).
FYI, I was just checking out the DiskStation and QNAP versions also.
1056
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 13, 2013, 01:02 PM »
OK, yes.  I believe I am first going to do a small 4-drive test build.  I'm going to build a small-case NAS for 4 drives and that will be my first experiment.
1057
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 13, 2013, 12:20 PM »
Call me crazy, but just for backup, I'm considering getting a standard plug-and-play NAS box. Maybe from QNAP... a 4- or 5-bay one. If it becomes redundant later after I have my custom boxes working, I can put it in a relative's home for off-site backup purposes, etc.
1058
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 12, 2013, 05:34 PM »
A guide for newbies to FreeNAS, updated with v9.1 info.  Nice!

Some of my personal highlights:
Please recognize that your Windows hardware knowledge may provide some small insight for selecting hardware but is not equivalent to expertly choosing hardware for a FreeBSD based system.  For example, ECC RAM in a desktop isn’t too useful.  But for ZFS it can be the difference between saving your data and a complete loss of the zpool with no chance for recovery.  Realtek NICs are common in the Windows world, but perform extremely poorly in FreeBSD.
FreeBSD has a very steep learning curve.  It is not for those looking to learn it in a weekend.  I was operating a nuclear reactor before I was old enough to drink alcohol, but I still spent a solid month getting familiar with FreeNAS.  Everyone’s mileage will vary and your level of “comfort” with FreeNAS will be different than mine.  But I haven’t lost any data and I have helped many people recover an unmountable zpool.
VDevs with single disks are known as “striped” disks. They have no redundancy.
VDevs can provide redundancy from individual hard disk failure inside the same VDev.
VDevs cannot operate outside of a zpool.
You cannot add more hard drives to a VDev once it is created.*
When a VDev can no longer provide 100% of its data using checksums or mirrors, the VDev will fail.
If any VDev in a zpool is failed, you will lose the entire zpool with no chance of partial recovery.
You can think of it simply as:
Hard drive(s) go inside VDevs.
VDevs go inside zpools.
Zpools store your data.
Disk failure isn’t the concern with ZFS.  Vdev failure is!  Keep the VDevs healthy and your data is safe.
ZIL drive performance will need to exceed the zpool performance of the expected workload to be useful.  Typically an SSD is used for this application.  An Enterprise class SSD or SSD based on SLC memory is recommended.
For maximum performance and reliability, you should never try to use ZFS with less than 8GB of RAM and a 64-bit system.
Intel network cards are the NIC of choice. The drivers are well maintained and provide excellent performance (and the cards are inexpensive). Other NICs have been known to perform intermittently, poorly, or not at all. Realtek NICs can perform decently as long as you have a CPU with enough power to process all of the network traffic. (This is one thing that is VERY different between Windows and FreeBSD.) "Low power" CPUs such as Intel Atoms and AMD C-70s are NOT powerful enough to be used with Realtek and get good performance.
ZFS has very few “recovery tools” unlike many other file systems.  For this reason, backups are very important.  If the zpool becomes unmountable and cannot be repaired there are no easy software tools or reasonably priced recovery specialists you can use to recover your data.  This is because ZFS is enterprise-class software, and no enterprise would waste their time with recovery tools or data recovery specialists.  They would simply recover from a known good backup or mirror server.
OK...this is the freaking tutorial I've been dying for!  :up: :up: Great stuff.  I'm only about 75% of the way through, but I'm tired now.
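To make sure I actually understand the vdev/zpool rules quoted above, I sketched them as a little toy model in Python (my own sketch, not anything from the guide): redundancy exists only inside a vdev, and the pool dies with any single vdev.

# Toy model of the rules above: redundancy exists only inside a vdev, and
# losing any single vdev loses the whole zpool.

VDEV_TOLERANCE = {"stripe": 0, "mirror2": 1, "mirror3": 2,
                  "raidz1": 1, "raidz2": 2, "raidz3": 3}

def vdev_ok(kind, failed_disks):
    # A vdev survives as long as failures stay within its redundancy level.
    return failed_disks <= VDEV_TOLERANCE[kind]

def zpool_ok(vdevs):
    # vdevs: list of (kind, failed_disk_count); one dead vdev = dead pool.
    return all(vdev_ok(kind, failed) for kind, failed in vdevs)

# Two raidz2 vdevs: two dead disks in one vdev is survivable...
print(zpool_ok([("raidz2", 2), ("raidz2", 0)]))   # True
# ...but a third failure in that same vdev takes out the entire pool.
print(zpool_ok([("raidz2", 3), ("raidz2", 0)]))   # False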
1060
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 12, 2013, 04:13 PM »
Regarding the cables that connect the m1015 SAS ports to the actual hard drives, I came across this helpful post:
Hi electricd7,

I'm actually going to be bolting pretty much that exact motherboard-processor-memory combination into my existing filer tonight!

What you need are SFF-8087 cables. They come in 2 different "flavors":

Forward Breakout = crossover = SFF-8087 (Controller card) to 4x SAS/SATA ports on DRIVES
Reverse Breakout = 1:1 = SATA/SAS Ports (on Motherboard or controller) to SFF-8087 backplane

You will need "forward" cables available here:

http://www.monoprice...54&cs_id=1025406

I use a Fractal Designs R3 case and bought the .75M ones which turned out to be just about the perfect length to go from the card, down to the case bottom & back up to the 8 drive sleds.

While you are ordering the 8087 cables you might as well get some Molex-to-SATA converters as well:

http://www.monoprice...26&cs_id=1022604

Use these so you only need to get 2 Molex power connectors to the drives instead of 6 proper SATA power connectors; it makes for a real clean install.

-Will
1061
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 12, 2013, 11:59 AM »
Is there a difference between a SAS expander and a breakout cable?
1062
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 12, 2013, 11:57 AM »
Regarding using SATA drives with SAS, here's some more input that is making me go round and round:
Most of what you've read about using SAS expanders with SATA drives is just plain old fashioned FUD - stuff posted by people repeating things they didn't really understand or stuff promulgated by one particular system vendor with an agenda.

There is no significant issue. There was a bug in Opensolaris-derived systems that got (incorrectly) blamed on the combination of SAS expander with SATA drives because systems in that configuration could be shown to exercise the bug - but it was clearly and unambiguously a bug in the OpenSolaris code and NOT a side-effect of using SAS expanders with SATA drives. The bug existed in general and all ZFS-based systems were actually at some risk from it whether they used SAS expanders or not. It was fixed in Solaris 11 and separately on other related branches (like OpenIndiana). All of the posted FUD derives from this. It was propagated in some self-serving posts by one systems vendor who either couldn't or didn't want to bother fixing the (well published) bug in their OpenSolaris-based branch.

But if it makes you uncomfortable to use them, then don't use them.

Besides, as apnar posted above, serving the typical SOHO max of 24 drives using 3 8-port HBAs is almost always cheaper and faster than 1 HBA + expander. But if you need to support more drives or if you don't have the PCIe lanes to spare for the extra HBAs then expanders work great too.
I really am trying not to have to go to SAS drives, because that would just cost me quite a bit (>$3000).  I want to use the SATA drives I already have.  If using SATA drives with SAS is not as big a deal as I'm reading about, maybe the best way to do it is:
--server motherboard (SuperMicro)
--IBM m1015 HBA card
--SAS expanders to SATA drives
1063
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 12, 2013, 10:56 AM »
So if I go the SATA route (no SAS expanders), I need an HBA and some port multipliers.  I don't know what HBA to get, but Addonics has some port multipliers:
[attached image: AD5HPMSXA_diagramB.jpg]

So maybe the IBM m1015 plus these multipliers?
1064
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 12, 2013, 09:54 AM »
Of course, the other option is to just get real deal SAS drives.  It will cost me, though.
1065
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 12, 2013, 09:50 AM »
I'm trying to figure out which HBA card I'm going to get to connect all these drives. From what I can understand, people are saying it is not a good idea to use SAS expanders with SATA drives under ZFS. So that means I have to use non-SAS cards. The one that is recommended here is the IBM m1015. This was also the same card that someone here recommended to me, so maybe that's the way to go.

But what I don't get is that the m1015 seems to use SAS expanders also. No? I don't understand.
1066
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 07, 2013, 07:37 PM »
On the good side though, when you're sitting alone at 3:00 in the morning after 20 straight hours tearing your hair out, staying awake only by frustration, your eleventieth cup of coffee and pure animal hatred for the server that refuses to cooperate, when the 3,837th thing that shouldn't work fixes everything you feel like a GOD!

Like I said, psychosis.  ;D
Oh no...what am I getting myself into??
1067
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 07, 2013, 03:54 PM »
I haven't been able to find much information about FreeNAS 9 or 9.1, especially regarding the drive pooling features.  Got anything to read or watch for that?

Sorry I missed your question earlier.

Downloadable docs for all current versions of FreeNAS can be found on this page. Manual in PDF format for 9.1.0 can be directly downloaded here. The ZFS pooling info starts on page 111.

Can't comment at this point because it's a brand new version of FreeNAS and I haven't had any hands-on time with this version. And I'm not sure I really would be able to say much, since this 20-drive Leviathan server you're building will have more drives under unified management than anything I've ever worked with. The biggest I've ever done is 4 drives with ZFS. It's worked really well for me so far.

But remember, with servers - especially storage servers - you're in it for the long game. So it isn't until you hit the 5-year mark and successfully recover from your first few hardware crises that you can realistically say: "This is good. It works."

Look at what happened with Backblaze once a few of their mondo storage servers started experiencing real loads and time frames. Intermittent 'bad surprises' cropped up with some people's data even though Backblaze's techs do know what they're doing.

Like the man says, "It ain't over till it's over."

And with this stuff, it's never really over. 8)
:( OK, I'll brace myself! As long as I have a plan for not losing data, I'll be fine fixing whatever needs to be fixed.
1068
I'll take one, my friend.  Congratulations.
1069
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 06, 2013, 05:47 PM »
Good choice on the Cooler Master case. Their cases have as good a combination of component accessibility and airflow as you're likely to find.
Thanks.  I'm a big fan of their designs.  Right up my alley.  Love the handle!  But more importantly, the 9 unbiased bays.
1070
Found Deals and Discounts / Re: WinPatrol Plus Sale
« Last post by superboyac on August 06, 2013, 05:25 PM »
  WinPatrol is awesome, I've been using the pro version for years now.
Me too!  :up: :up:
1072
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 06, 2013, 04:54 PM »
I think at this point, I'm more than willing to give ZFS a try over Server 2012. I've already looked at 2012 R2, and it's nice. But there isn't enough written about it right now for me to understand how good or bad it is. ZFS seems to be reliable, and your go-ahead is also something I value.

I haven't been able to find much information about FreeNAS 9 or 9.1, especially regarding the drive pooling features.  Got anything to read or watch for that?
1073
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 06, 2013, 03:09 PM »
So I'm trying to compare the drive pooling features of ZFS+FreeNAS vs. Storage Spaces in MS Server 2012 R2.  Here are some notes:

http://www.zdnet.com...rst-look-7000017675/
Storage and BYOD
Storage Spaces, Microsoft's storage virtualisation technology, also gets an overhaul in Windows Server 2012 R2. Microsoft has added support for storage tiering, letting you mix traditional hard drives and solid-state disks. With storage tiers, you can identify slow and fast disks in a Storage Space, and Windows will move data between them automatically to give you the best performance — putting data that's accessed regularly on SSD, and data that's not needed so often on slower, cheaper hard drives.

Considering that most other operating systems make do with single root mount points, having 26 can be seen as quite sufficient.

You know, you *can* mount any volume anywhere you so desire.

Server 2012 R2 sports storage spaces. As a single volume. You can mount anywhere.

How does that buggy and not-quite-sufficient ZFS work for you on Linux? Or BtrFS - is that ready for production yet? No - I didn't think so, they are still work-in-progress. Server 2012 is production ready, robust, resilient and secure.

Do you run a Linux server? Has it been compromised by Darkleech yet? It soon will be, since nobody can figure out how the servers are being compromised, but they *are* being infected at a steady rate. By someone with root privileges, mind you. You know what they call a server where somebody else has root? Total *pwnage*!
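The tiering part is the piece I find most interesting, so here's a rough Python sketch just to make the idea concrete for myself. This is purely my own illustration of the "hot data on SSD, cold data on HDD" idea with hypothetical file names; the real Storage Spaces feature works on sub-file slabs and moves data on a schedule, not per file like this:

# Toy illustration of storage tiering: the most frequently accessed data goes
# to the SSD tier until it fills up, everything else lands on the HDD tier.
# (Hypothetical file names; the real feature works per-slab, not per-file.)

def assign_tiers(files, ssd_capacity_gb):
    # files: list of (name, size_gb, accesses_per_day)
    hottest_first = sorted(files, key=lambda f: f[2], reverse=True)
    used, placement = 0.0, {}
    for name, size_gb, _ in hottest_first:
        if used + size_gb <= ssd_capacity_gb:
            placement[name] = "SSD tier"
            used += size_gb
        else:
            placement[name] = "HDD tier"
    return placement

files = [("vm-library.vhdx", 300, 50), ("photos", 800, 2), ("backups", 1500, 0.1)]
print(assign_tiers(files, ssd_capacity_gb=400))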
1074
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 06, 2013, 11:53 AM »
Yet another useful tidbit about SAS vs. SATA and HBA vs. expanders:
This I doubt very much. As far as I know, all Sun/Oracle storage servers that use ZFS are using HBAs only. No one uses expanders. Unless you can show links that confirm your statement, I will not accept it. I don't think your statement is correct, unfortunately.

For instance, the ZFS storage server Sun X4500 Thumper, with 48 disks in 4U, used 6 HBAs. Each HBA connected 8 disks like this:
HBA1: disk0 d1 d2 d3 d4 d5 d6 d7
HBA2: disk8 d9 d10 d11 d12 d13 d14 d15
...
HBA6: disk40 d41 d42 d43 d44 d45 d46 d47

Then you created a zpool with several vdevs. For instance a raidz2 vdev with
disk0
disk8
...
disk40

And another vdev with d1, d9, ..., d41. And another vdev with d2, d10, ..., d42. etc.

Then you collected all vdevs into one zpool. If one HBA broke, for instance HBA1, it wouldn't matter, because every vdev just lost one disk and the zpool could still function.



Regarding the first post with the link about SAS being toxic, from Garret Damore: this Garret is a former Sun kernel engineer and the man behind Illumos. He has lots of credibility. If he says something, it is a confirmed fact. He now works at Nexenta, the storage company.


Basically, the link says that SAS uses a different protocol than SATA. In the expander there is a conversion from SAS to SATA, and you lose information in that conversion. In the worst case, there might be problems. Thus, if you really want to be sure, use SAS disks with SAS expanders so there is no loss of data from the conversion.

Also, because ZFS will detect all problems immediately, ZFS will expose problems with expanders that other filesystems do not notice. ZFS having problems with SAS expanders is not a sign of fragility, but a sign of ZFS's superior error detection. With other filesystems the errors are still there, but you will not notice them.

I believe (I need to confirm this) that if you use ZFS with SAS expanders and you get problems, ZFS will detect all errors as normal, but it might not be able to repair them. The same goes for hardware RAID + ZFS: ZFS will detect errors but cannot repair them. Thus you will get an error report, but ZFS cannot repair all errors.

I am trying to go for many HBAs instead. Much safer. Expanders introduce another source of problems. KISS - keep it simple
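To convince myself the math in that Thumper example works, here's a quick Python sketch of the layout (again my own sketch, not from the quote): each vdev takes one disk from each HBA, so a dead HBA costs every vdev only a single disk.

# Thumper-style layout from the quote above: 6 HBAs x 8 disks, and each vdev
# takes exactly one disk per HBA, so a failed HBA costs every vdev one disk
# (well inside raidz2's two-disk tolerance).

HBAS, DISKS_PER_HBA = 6, 8

def disk(hba, slot):
    return "disk%d" % (hba * DISKS_PER_HBA + slot)

# vdev j = slot j on every HBA -> 8 vdevs of 6 disks each
vdevs = [[disk(h, slot) for h in range(HBAS)] for slot in range(DISKS_PER_HBA)]
print(vdevs[0])   # ['disk0', 'disk8', 'disk16', 'disk24', 'disk32', 'disk40']

# If HBA1 (disks 0-7) dies, each vdev loses exactly one disk:
lost = {disk(0, s) for s in range(DISKS_PER_HBA)}
print([len(lost & set(v)) for v in vdevs])   # [1, 1, 1, 1, 1, 1, 1, 1]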
1075
Living Room / Re: Please help superboyac build a server (2013 edition).
« Last post by superboyac on August 06, 2013, 11:29 AM »
Very nice article by an experienced ZFS user:
http://nex7.blogspot...13/03/readme1st.html
There are a couple of things about ZFS itself that are often skipped over or missed by users/administrators. Many deploy home or business production systems without even being aware of these gotchya's and architectural issues. Don't be one of those people!

I do not want you to read this and think "ugh, forget ZFS". Every other filesystem I'm aware of has many and more issues than ZFS - going another route than ZFS because of perceived or actual issues with ZFS is like jumping into the hungry shark tank with a bleeding leg wound, instead of the goldfish tank, because the goldfish tank smelled a little fishy! Not a smart move.

ZFS is one of the most powerful, flexible, and robust filesystems (and I use that word loosely, as ZFS is much more than just a filesystem, incorporating many elements of what is traditionally called a volume manager as well) available today. On top of that it's open source and free (as in beer) in some cases, so there's a lot there to love.

However, like every other man-made creation ever dreamed up, it has its own share of caveats, gotchya's, hidden "features" and so on. The sorts of things that an administrator should be aware of before they lead to a 3 AM phone call! Due to its relative newness in the world (as compared to venerable filesystems like NTFS, ext2/3/4, and so on), and its very different architecture, yet very similar nomenclature, certain things can be ignored or assumed by potential adopters of ZFS that can lead to costly issues and lots of stress later.

I make various statements in here that might be difficult to understand or that you disagree with - and often without wholly explaining why I've directed the way I have. I will endeavor to produce articles explaining them and update this blog with links to them, as time allows. In the interim, please understand that I've been on literally 1000's of large ZFS deployments in the last 2+ years, often called in when they were broken, and much of what I say is backed up by quite a bit of experience. This article is also often used, cited, reviewed, and so on by many of my fellow ZFS support personnel, so it gets around and mistakes in it get back to me eventually. I can be wrong - but especially if you're new to ZFS, you're going to be better served not assuming I am. :)

I like this part, very helpful!
9. Pool Design Rules
I've got a variety of simple rules I tell people to follow when building zpools:

    Do not use raidz1 for disks 1TB or greater in size.
    For raidz1, do not use less than 3 disks, nor more than 7 disks in each vdev.
    For raidz2, do not use less than 5 disks, nor more than 10 disks in each vdev.
    For raidz3, do not use less than 7 disks, nor more than 15 disks in each vdev.
    Mirrors trump raidz almost every time. Far higher IOPS potential from a mirror pool than any raidz pool, given equal number of drives.
    For 3TB+ size disks, 3-way mirrors begin to become more and more compelling.
    Never mix disk sizes (within a few %, of course) or speeds (RPM) within a single vdev.
    Never mix disk sizes (within a few %, of course) or speeds (RPM) within a zpool, except for l2arc & zil devices.
    Never mix redundancy types for data vdevs in a zpool.
    Never mix disk counts on data vdevs within a zpool (if the first data vdev is 6 disks, all data vdevs should be 6 disks).
    If you have multiple JBOD's, try to spread each vdev out so that the minimum number of disks are in each JBOD. If you do this with enough JBOD's for your chosen redundancy level, you can even end up with no SPOF (Single Point of Failure) in the form of JBOD, and if the JBOD's themselves are spread out amongst sufficient HBA's, you can even remove HBA's as a SPOF.

If you keep these in mind when building your pool, you shouldn't end up with something tragic.
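Since I'll forget half of these rules by the time I actually build a pool, I jotted them down as a quick Python sanity check. This is just my own rough encoding of the list above, not an official tool, and the check_pool helper is hypothetical; it only covers the vdev-count and disk-size rules:

# Quick sanity check of a proposed data-vdev layout against the rules above.
# My own rough encoding of them; nothing official.

LIMITS = {"raidz1": (3, 7), "raidz2": (5, 10), "raidz3": (7, 15)}

def check_pool(vdevs, disk_size_tb):
    # vdevs: list of (redundancy_type, disk_count) for the data vdevs
    problems = []
    if len({kind for kind, _ in vdevs}) > 1:
        problems.append("mixed redundancy types in one zpool")
    if len({count for _, count in vdevs}) > 1:
        problems.append("data vdevs with different disk counts")
    for kind, count in vdevs:
        if kind == "raidz1" and disk_size_tb >= 1:
            problems.append("raidz1 with disks 1TB or larger")
        lo, hi = LIMITS.get(kind, (1, None))
        if hi is not None and not lo <= count <= hi:
            problems.append("%s vdev of %d disks (want %d-%d)" % (kind, count, lo, hi))
    return problems or ["looks OK by these rules"]

print(check_pool([("raidz2", 6), ("raidz2", 6)], disk_size_tb=4))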

He also says it's better to use SAS disks vs. SATA. But...I already have like 15 SATA disks! Speaking of which...AHHHH NO! Don't say that!
17. Crap In, Crap Out
ZFS is only as good as the hardware it is put on. Even ZFS can corrupt your data or lose it, if placed on inferior components. Examples of things you don't want to do if you want to keep your data intact include using non-ECC RAM, using non-enterprise disks, using SATA disks behind SAS expanders, using non-enterprise class motherboards, using a RAID card (especially one without a battery), putting the server in a poor environment for a server to be in, etc.
I have 40hz talking about ECC RAM somewhere in my notes. Looks like this person agrees.