Topic: Please help superboyac build a server (2013 edition).  (Read 35031 times)

superboyac

  • Charter Member
  • Joined in 2005
  • ***
  • Posts: 6,070
  • Is your software in my list?
Re: Please help superboyac build a server (2013 edition).
« Reply #75 on: August 06, 2013, 09:22:53 AM »
But it's kinda moot now. Unless I missed something WHS has been officially discontinued. Microsoft is suggesting its very stripped down "Windows Server Essentials" server as the replacement.

It's not moot to those people (i.e. me) hit by it.  If they paid me, I wouldn't use it in any iteration.  It's one thing to be hit by something like this in beta software- but that was supposedly production-ready.  Nope.  Nuh uh.  Not even if they swore on their children's lives would I use it again.
Wow this is really surprising to me.  I had heard such rave reviews about WHS and the drive pooling the past couple of years.  OK, this will convince me to go with FreeNAS and ZFS

superboyac

  • Charter Member
  • Joined in 2005
  • ***
  • Posts: 6,070
  • Is your software in my list?
Re: Please help superboyac build a server (2013 edition).
« Reply #76 on: August 06, 2013, 10:52:35 AM »
(intermission)
http://arstechnica.c...data-77-tb-in-month/

Quote
Yes, Virginia, there is a limit to what Verizon will let you do with FiOS' "unlimited" data plan. And a California man discovered that limit when he got a phone call from a Verizon representative wanting to know what, exactly, he was doing to create more than 50 terabytes of traffic on average per month—hitting a peak of 77TB in March alone.

"I have never heard of this happening to anyone," the 27-year-old Californian—who uses the screen name houkouonchi and would prefer not to be identified by name—wrote in a post on DSLreports.com entitled "LOL VZ called me about my bandwidth usage Gotta go Biz." "But I probably use more bandwidth than any FiOS customer in California, so I am not super surprised about this."

Curious about how one person could generate that kind of traffic, Ars reached out to houkouonchi and spoke with him via instant message. As it turns out, he's the ultimate outlier. His problem is more that he's violated Verizon's terms of service than his excessive bandwidth usage. An IT professional who manages a test lab for an Internet storage company, houkouonchi has been providing friends and family a personal VPN, video streaming, and peer-to-peer file service—running a rack of seven servers with 209TB of raw storage in his house.
Spoiler (image: Typical-Geek_o_22537.jpg)
DC, please make sure this doesn't happen to me!
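For a sense of scale, the traffic figures in that article work out to a surprisingly steady sustained bitrate. A quick back-of-the-envelope check (my own arithmetic, not from the article; the helper name is made up):

```python
# Rough sanity check of the bandwidth figures quoted above:
# 77 TB transferred in March (31 days), ~50 TB in a typical month.
def sustained_mbps(terabytes: float, days: int) -> float:
    """Average sustained bitrate needed to move `terabytes` in `days`."""
    bits = terabytes * 1e12 * 8          # decimal TB -> bits
    seconds = days * 86400
    return bits / seconds / 1e6          # -> megabits per second

print(f"77 TB in March  ~ {sustained_mbps(77, 31):.0f} Mbps sustained")
print(f"50 TB per month ~ {sustained_mbps(50, 30):.0f} Mbps sustained")
```

So the "ultimate outlier" was pushing roughly a quarter of a gigabit per second around the clock, which is why it tripped Verizon's terms-of-service radar rather than any hard cap.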


wraith808

  • Supporting Member
  • Joined in 2006
  • **
  • Posts: 8,408
  • "In my dreams, I always do it right."
Re: Please help superboyac build a server (2013 edition).
« Reply #77 on: August 06, 2013, 11:02:48 AM »
You have to WARN people before you do stuff like that!  EYEBLEACH!  NSFW!  NSFAnyone!!! :stars:

superboyac

  • Charter Member
  • Joined in 2005
  • ***
  • Posts: 6,070
  • Is your software in my list?
Re: Please help superboyac build a server (2013 edition).
« Reply #78 on: August 06, 2013, 11:29:06 AM »
Very nice article by an experienced ZFS user:
http://nex7.blogspot...13/03/readme1st.html
Quote
There are a couple of things about ZFS itself that are often skipped over or missed by users/administrators. Many deploy home or business production systems without even being aware of these gotchya's and architectural issues. Don't be one of those people!

I do not want you to read this and think "ugh, forget ZFS". Every other filesystem I'm aware of has many and more issues than ZFS - going another route than ZFS because of perceived or actual issues with ZFS is like jumping into the hungry shark tank with a bleeding leg wound, instead of the goldfish tank, because the goldfish tank smelled a little fishy! Not a smart move.

ZFS is one of the most powerful, flexible, and robust filesystems (and I use that word loosely, as ZFS is much more than just a filesystem, incorporating many elements of what is traditionally called a volume manager as well) available today. On top of that it's open source and free (as in beer) in some cases, so there's a lot there to love.

However, like every other man-made creation ever dreamed up, it has its own share of caveats, gotchya's, hidden "features" and so on. The sorts of things that an administrator should be aware of before they lead to a 3 AM phone call! Due to its relative newness in the world (as compared to venerable filesystems like NTFS, ext2/3/4, and so on), and its very different architecture, yet very similar nomenclature, certain things can be ignored or assumed by potential adopters of ZFS that can lead to costly issues and lots of stress later.

I make various statements in here that might be difficult to understand or that you disagree with - and often without wholly explaining why I've directed the way I have. I will endeavor to produce articles explaining them and update this blog with links to them, as time allows. In the interim, please understand that I've been on literally 1000's of large ZFS deployments in the last 2+ years, often called in when they were broken, and much of what I say is backed up by quite a bit of experience. This article is also often used, cited, reviewed, and so on by many of my fellow ZFS support personnel, so it gets around and mistakes in it get back to me eventually. I can be wrong - but especially if you're new to ZFS, you're going to be better served not assuming I am. :)

I like this part, very helpful!
Quote
9. Pool Design Rules
I've got a variety of simple rules I tell people to follow when building zpools:

    Do not use raidz1 for disks 1TB or greater in size.
    For raidz1, do not use less than 3 disks, nor more than 7 disks in each vdev.
    For raidz2, do not use less than 5 disks, nor more than 10 disks in each vdev.
    For raidz3, do not use less than 7 disks, nor more than 15 disks in each vdev.
    Mirrors trump raidz almost every time. Far higher IOPS potential from a mirror pool than any raidz pool, given equal number of drives.
    For 3TB+ size disks, 3-way mirrors begin to become more and more compelling.
    Never mix disk sizes (within a few %, of course) or speeds (RPM) within a single vdev.
    Never mix disk sizes (within a few %, of course) or speeds (RPM) within a zpool, except for l2arc & zil devices.
    Never mix redundancy types for data vdevs in a zpool.
    Never mix disk counts on data vdevs within a zpool (if the first data vdev is 6 disks, all data vdevs should be 6 disks).
    If you have multiple JBOD's, try to spread each vdev out so that the minimum number of disks are in each JBOD. If you do this with enough JBOD's for your chosen redundancy level, you can even end up with no SPOF (Single Point of Failure) in the form of JBOD, and if the JBOD's themselves are spread out amongst sufficient HBA's, you can even remove HBA's as a SPOF.

If you keep these in mind when building your pool, you shouldn't end up with something tragic.
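The width limits in those rules are mechanical enough to encode as a quick sanity check. A minimal sketch (the function and its names are my own, purely to illustrate the quoted limits, not any real ZFS tooling):

```python
# Min/max disks per vdev for each raidz level, per the rules quoted above.
LIMITS = {"raidz1": (3, 7), "raidz2": (5, 10), "raidz3": (7, 15)}

def check_vdev(kind: str, disks: int, disk_tb: float) -> list:
    """Return a list of rule violations (an empty list means it looks OK)."""
    problems = []
    if kind == "raidz1" and disk_tb >= 1:
        problems.append("raidz1 not advised for disks >= 1 TB")
    if kind in LIMITS:
        lo, hi = LIMITS[kind]
        if not (lo <= disks <= hi):
            problems.append(f"{kind} wants {lo}-{hi} disks, got {disks}")
    return problems

print(check_vdev("raidz2", 6, 3.0))   # within the 5-10 disk window
print(check_vdev("raidz1", 8, 2.0))   # too wide AND disks too large
```

Per the rules, whatever width you settle on for the first data vdev should then be repeated for every data vdev in the pool.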

He also says it's better to use SAS disks vs. SATA.  But...I already have like 15 SATA disks!  Speaking of which...AHHHH NO!  Don't say that!
Quote
17. Crap In, Crap Out
ZFS is only as good as the hardware it is put on. Even ZFS can corrupt your data or lose it, if placed on inferior components. Examples of things you don't want to do if you want to keep your data intact include using non-ECC RAM, using non-enterprise disks, using SATA disks behind SAS expanders, using non-enterprise class motherboards, using a RAID card (especially one without a battery), putting the server in a poor environment for a server to be in, etc.
I have in my notes somewhere 40hz talking about ECC-RAM.  Looks like this person agrees.
« Last Edit: August 06, 2013, 11:40:50 AM by superboyac »

superboyac

  • Charter Member
  • Joined in 2005
  • ***
  • Posts: 6,070
  • Is your software in my list?
Re: Please help superboyac build a server (2013 edition).
« Reply #79 on: August 06, 2013, 11:53:31 AM »
yet another useful tidbit about SAS vs SATA and HBA vs expanders:
Quote
This I doubt very much. As far as I know, all Sun/Oracle storage servers that use ZFS are using HBAs only. No one uses expanders. Unless you can show links that confirm your statement, I will not accept it. I don't think your statement is correct, unfortunately.

For instance, the ZFS storage server Sun X4500 Thumper with 48 disks in 4U, used 6 HBAs. Each HBA connected 8 disks like this:
HBA1: disk0 d1 d2 d3 d4 d5 d6 d7
HBA2: disk8 d9 d10 d11 d12 d13 d14
...
HBA6: disk40 d41 d42 d43 d44 d45 d46 d47

Then you created a zpool with several vdevs. For instance a raidz2 vdev with
disk0
disk8
...
disk40

And another vdev with d1, d9, ..., d41. And another vdev with d2, d10, ..., d42. etc.

Then you collected all vdevs into one zpool. If one HBA broke, for instance HBA1, it doesn't matter, because every vdev just lost one disk each and the zpool could still function.



Regarding the first post with the link about SAS being toxic from Garret Damore. This Garret is a former Sun kernel engineer and the man behind Illumos. He has lots of credibility. If he says something, it is a confirmed fact. He works now at Nexenta, the storage company.


Basically, the link says that SAS uses a different protocol than SATA. In the expander there will be a conversion from SAS to SATA, and you lose information in the conversion. In the worst case, there might be problems. Thus, if you really want to be sure, use SAS disks with SAS expanders so there is no loss of data from the conversion.

Also, because ZFS will detect all problems immediately, ZFS will expose problems with expanders that other filesystems do not notice. ZFS having problems with SAS expanders is not a sign of fragility, but a sign of ZFS's superior error detection. With other filesystems the errors are still there, but you will not notice them.

I believe (need to confirm this) that if you use ZFS with SAS expanders and you get problems, ZFS will detect all errors as normal, but might not be able to repair them. The same goes for hardware RAID + ZFS: ZFS will detect errors but cannot repair them. Thus you will get an error report, but ZFS cannot repair all errors.

I am trying to go for many HBAs instead. Much safer. Expanders introduce another source of problems. KISS - keep it simple
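The Thumper layout described in that quote (48 disks on 6 HBAs, each vdev taking one disk per HBA) is easy to sketch and verify. A quick illustration of why a single HBA failure is survivable (the layout below follows the post; the code itself is just my own model):

```python
# Thumper-style layout: 48 disks on 6 HBAs, 8 disks per HBA.
HBAS, DISKS_PER_HBA = 6, 8

# hba[i] holds disk numbers i*8 .. i*8+7, matching the post
# (HBA1: disk0-d7, HBA2: disk8-d15, ..., HBA6: disk40-d47).
hba = [list(range(i * DISKS_PER_HBA, (i + 1) * DISKS_PER_HBA))
       for i in range(HBAS)]

# vdev j takes the j-th disk from every HBA -> 8 raidz2 vdevs of
# 6 disks each (vdev 0 is disk0, disk8, ..., disk40).
vdevs = [[hba[i][j] for i in range(HBAS)] for j in range(DISKS_PER_HBA)]

# If HBA1 (index 0) dies, count how many disks each vdev loses.
dead = set(hba[0])
losses = [len(dead & set(v)) for v in vdevs]
print(losses)  # each vdev loses exactly one disk: survivable for raidz2
```

Since raidz2 tolerates two failed disks per vdev, losing a whole HBA here costs each vdev only one disk and the pool keeps running, which is the "no SPOF" point from the pool-design rules quoted earlier.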

Stoic Joker

  • Honorary Member
  • Joined in 2008
  • **
  • Posts: 6,296
Re: Please help superboyac build a server (2013 edition).
« Reply #80 on: August 06, 2013, 12:04:01 PM »
Is that for the WHS version, the new version, or both? I've never had occasion to play with it ... But I've got all my VMs on an 8 disk hardware RAID5 array.



It was for the WHS version.

Ah! Okay. I remember that debacle. But supposedly they got that all fixed...supposedly... They were pitching it as the best thing since sliced bread at the last MS show I went to. Guess we'll have to wait for 40 to chime in on the clarification then. Thanks.

The first iterations worked ok as long as you didn't push it too much. Then something went terribly wrong a few updates later. Microsoft 'fixed' it by the simple expedient of removing the drive pooling feature from WHS.

But it's kinda moot now. Unless I missed something WHS has been officially discontinued. Microsoft is suggesting its very stripped down "Windows Server Essentials" server as the replacement.

There's a variant of disk pooling available with Server 2012. I never did have a chance to play with it, and can't now (production system). But the MS guy at the last show was demoing add/remove/etc. disks to/from a MS Hyper-V server storage pool of some kind.

This was the same show where I found out about the new Hyper-V servers' ability to toss a running VM from server to server without a shutdown. I tried it a few times (just had to), and it really is just as slick, easy, and smooth as they said it was at the show.

40hz

  • Supporting Member
  • Joined in 2007
  • **
  • Posts: 11,768
Re: Please help superboyac build a server (2013 edition).
« Reply #81 on: August 06, 2013, 12:59:27 PM »
There's a variant of disk pooling available with Server 2012.

Yes indeed. But that puppy is an entirely different breed of tech from what they had in WHS.

I've never done anything with pooling in a Windows server production environment. But I did get some (brief) hands-on with it in a lab setting. Looked impressive. But that's one of those things you need to have up long-term before  you can say for real how well it works in contrast to something like tossing VMs back and forth. With that, you have immediate feedback if something isn't what they say it is.

I'd have no problem implementing Microsoft's pooling capabilities however, if I were anyplace that would benefit from it. Microsoft's mainline server technology is rock solid. As good - or better - than anything else that's out there in the situations it's intended for.

I've never encountered any real systemic faults with MS Server. Truth is, with Windows Server, most problems I've run into were caused by either a bad initial setup, or by somebody messing with things they were warned were best left alone.

If you follow the directions (RTFM is particularly appropriate advice when doing up a server)  and observe what Microsoft considers to be 'best practices' whenever deploying one of their server products, they really are extremely capable and (mostly) worry free. Windows servers that are correctly provisioned (hardware-wise) and which get set up "by the book," offer years of virtually flawless performance. And with minimal management or maintenance.

One of the reasons I tend to be so hard on Microsoft is because I know what levels of technical excellence they're capable of achieving. So if I'm more critical of what they do than some other companies, it's because I do respect them. And I also expect more of them because of it.
« Last Edit: August 06, 2013, 01:06:24 PM by 40hz »

Stoic Joker

  • Honorary Member
  • Joined in 2008
  • **
  • Posts: 6,296
Re: Please help superboyac build a server (2013 edition).
« Reply #82 on: August 06, 2013, 03:02:14 PM »
There's a variant of disk pooling available with Server 2012.

Yes indeed. But that puppy is an entirely different breed of tech from what they had in WHS.

I've never done anything with pooling in a Windows server production environment. But I did get some (brief) hands-on with it in a lab setting. Looked impressive. But that's one of those things you need to have up long-term before  you can say for real how well it works in contrast to something like tossing VMs back and forth. With that, you have immediate feedback if something isn't what they say it is.

Indeed, time will tell as they say. But with SB's desire for flexible access to...stuff. It sounds right down his alley ... Assuming he doesn't decide to go the Linux route. I just thought it best to clarify before preclusively eliminating an option.


I've never encountered any real systemic faults with MS Server. Truth is, with Windows Server, most problems I've run into were caused by either a bad initial setup, or by somebody messing with things they were warned were best left alone.

But, I love stupid people ... They pay for my house! ;)

superboyac

  • Charter Member
  • Joined in 2005
  • ***
  • Posts: 6,070
  • Is your software in my list?
Re: Please help superboyac build a server (2013 edition).
« Reply #83 on: August 06, 2013, 03:09:41 PM »
So I'm trying to compare the drive pooling features of ZFS+FreeNAS vs. Storage Spaces in MS Server 2012 R2.  Here are some notes:

http://www.zdnet.com...rst-look-7000017675/
Quote
Storage and BYOD
Storage Spaces, Microsoft's storage virtualisation technology, also gets an overhaul in Windows Server 2012 R2. Microsoft has added support for storage tiering, letting you mix traditional hard drives and solid-state disks. With storage tiers, you can identify slow and fast disks in a Storage Space, and Windows will move data between them automatically to give you the best performance — putting data that's accessed regularly on SSD, and data that's not needed so often on slower, cheaper hard drives.

Quote
Considering that most other operating systems make do with single root mount points, having 26 can be seen as quite sufficient.

You know, you *can* mount any volume anywhere you so desire.

Server 2012 R2 sports storage spaces. As a single volume. You can mount anywhere.

How does that buggy and not-quite-sufficient ZFS work for you on Linux? Or BtrFS - is that ready for production yet? No - I didn't think so, they are still work-in-progress. Server 2012 is production ready, robust, resilient and secure.

Do you run a Linux server? Has it been compromised by Darkleech yet? It soon will be, since nobody can figure out how the servers are being compromised, but they *are* being infected at a steady rate. By someone with root privileges, mind you. You know what they call a server where somebody else has root? Total *pwnage*!

40hz

  • Supporting Member
  • Joined in 2007
  • **
  • Posts: 11,768
Re: Please help superboyac build a server (2013 edition).
« Reply #84 on: August 06, 2013, 03:21:47 PM »
There's a variant of disk pooling available with Server 2012.

Yes indeed. But that puppy is an entirely different breed of tech from what they had in WHS.

I've never done anything with pooling in a Windows server production environment. But I did get some (brief) hands-on with it in a lab setting. Looked impressive. But that's one of those things you need to have up long-term before  you can say for real how well it works in contrast to something like tossing VMs back and forth. With that, you have immediate feedback if something isn't what they say it is.

Indeed, time will tell as they say. But with SB's desire for flexible access to...stuff. It sounds right down his alley ... Assuming he doesn't decide to go the Linux route. I just thought it best to clarify before preclusively eliminating an option.

Actually, if you're most comfortable with Windows and don't really feel like putting the time into getting over the learning curve on an entirely new OS environment, there's no intrinsic need to look at Linux except maybe to save some significant money and more fully own your server.

I'd definitely be inclined to download a trial copy of a Windows server and give it a try with a project like this one.

One catch would be the hardware compatibility issue. It's important to remember that if your chosen devices aren't on Microsoft's list, they'll make no representations about them working correctly (or remaining stable) when used with their server software.

I've never encountered any real systemic faults with MS Server. Truth is, with Windows Server, most problems I've run into were caused by either a bad initial setup, or by somebody messing with things they were warned were best left alone.

But, I love stupid people ... They pay for my house! ;)

Don't know about love - but yeah...I'm somewhat dependent on their mistakes and lack of knowledge for my own income too. ;D

40hz

  • Supporting Member
  • Joined in 2007
  • **
  • Posts: 11,768
Re: Please help superboyac build a server (2013 edition).
« Reply #85 on: August 06, 2013, 03:45:54 PM »
@SB - that second quote sounds like either somebody shilling for Microsoft, or somebody who doesn't know much about Linux other than what they've seen in non-specialist web articles.

The mount point argument being made is incorrect, misinformed, and outdated. Red herring. It can be ignored.

ZFS is very "production ready" although there are poor implementations of it out there. Best to find a distro that officially supports it as one of their filesystems if you want to use it. Btrfs is still in heavy enough development that I wouldn't consider it for anything really big and/or critical at this point...

Regarding Darkleech - I suppose that could be a danger in some areas. But if you're not running an Apache server on the box in question it can't affect you. Smart money also seems to indicate the attack vector is through outdated or insecure copies of Plesk or cPanel, since the exploit needs to gain root access to the server in order to do its dirty deed. Unless you're planning to run a public-facing webserver, I wouldn't worry about Darkleech too much. It's more a hosting-provider problem, revolving around sloppy configurations and failure to keep up with security patches, since that's mostly who has been targeted.
 8)

superboyac

  • Charter Member
  • Joined in 2005
  • ***
  • Posts: 6,070
  • Is your software in my list?
Re: Please help superboyac build a server (2013 edition).
« Reply #86 on: August 06, 2013, 04:54:12 PM »
I think at this point, I'm more than willing to give ZFS a try over server 2012.  I already have looked at 2012 R2, and it's nice.  But there isn't enough written about it right now for me to understand how good or bad it is.  ZFS seems to be reliable, and your go ahead also is something I value for it.

I haven't been able to find much information about FreeNAS 9 or 9.1, especially regarding the drive pooling features.  Got anything to read or watch for that?

Vurbal

  • Supporting Member
  • Joined in 2012
  • **
  • Posts: 635
  • Mostly harmless
Re: Please help superboyac build a server (2013 edition).
« Reply #87 on: August 06, 2013, 04:56:24 PM »
When I built my server (nothing fancy - just an old dual P3 white box) I decided to go with a separate XFS partition for video files. I'm no Linux expert but I did a lot of research and determined that was probably the best bet since it's well optimized for large file sizes and consistent high throughput. I have no idea how it works with drive pools since it's not something I care about. I definitely wouldn't recommend it for a lot of smaller files.

As to stability issues with various Linux file systems, there's more to consider than the file system - don't use Btrfs though. Sometimes it's related to power issues you shouldn't have as long as you use a quality power supply and UPS. You should always have a UPS for an important machine. If you're really paranoid you could build a dual power supply monster but I wouldn't bother. Quality is more important than quantity.

In any case I agree completely with 40hz that ZFS is production ready as long as it's a proper implementation.
I learned to say the pledge of allegiance
Before they beat me bloody down at the station
They haven't got a word out of me since
I got a billion years probation
- The MC5

Follow the path of the unsafe, independent thinker. Expose your ideas to the danger of controversy. Speak your mind and fear less the label of ''crackpot'' than the stigma of conformity.
- Thomas J. Watson, Sr

It's not rocket surgery.
- Me


I recommend reading through my Bio before responding to any of my posts. It could save both of us a lot of time and frustration.

Vurbal

  • Supporting Member
  • Joined in 2012
  • **
  • Posts: 635
  • Mostly harmless
Re: Please help superboyac build a server (2013 edition).
« Reply #89 on: August 06, 2013, 05:46:11 PM »
Good choice on the Cooler Master case. Their cases have as good a combination of component accessibility and airflow as you're likely to find.

superboyac

  • Charter Member
  • Joined in 2005
  • ***
  • Posts: 6,070
  • Is your software in my list?
Re: Please help superboyac build a server (2013 edition).
« Reply #90 on: August 06, 2013, 05:47:12 PM »
Good choice on the Cooler Master case. Their cases have as good a combination of component accessibility and airflow as you're likely to find.
Thanks.  I'm a big fan of their designs.  Right up my alley.  Love the handle!  But more importantly, the 9 unbiased bays.

40hz

  • Supporting Member
  • Joined in 2007
  • **
  • Posts: 11,768
Re: Please help superboyac build a server (2013 edition).
« Reply #91 on: August 07, 2013, 03:14:50 PM »
I haven't been able to find much information about FreeNAS 9 or 9.1, especially regarding the drive pooling features.  Got anything to read or watch for that?

Sorry I missed your question earlier.

Downloadable docs for all current versions of FreeNAS can be found on this page. Manual in PDF format for 9.1.0 can be directly downloaded here. The ZFS pooling info starts on page 111.

Can't comment at this point because it's a brand new version of FreeNAS and I haven't got any hands-on with this version. And I'm not sure I really would be able to say much since this 20 drive Leviathan server you're building will have more drives under unified management than anything I've ever worked with. Biggest I've ever done is 4 drives with ZFS. It's worked really well for me so far.

But remember, with servers - especially storage servers - you're in it for the long game. So it isn't until you hit the 5-year mark and successfully recover from your first few hardware crises that you can realistically say: "This is good. It works."

Look at what happened with Backblaze once a few of their mondo storage servers started experiencing real loads and time frames. Intermittent 'bad surprises' cropped up with some people's data even though Backblaze's techs do know what they're doing.

Like the man says, "It ain't over till it's over."

And with this stuff, it's never really over. 8)


wraith808

  • Supporting Member
  • Joined in 2006
  • **
  • Posts: 8,408
  • "In my dreams, I always do it right."
Re: Please help superboyac build a server (2013 edition).
« Reply #92 on: August 07, 2013, 03:34:47 PM »
^ That's a good point.  Perhaps you should start with the capability to do more, but not start at your max storage?  It would be a lot easier to keep track of and maintain with fewer drives.  Might be something you want to think about.

superboyac

  • Charter Member
  • Joined in 2005
  • ***
  • Posts: 6,070
  • Is your software in my list?
Re: Please help superboyac build a server (2013 edition).
« Reply #93 on: August 07, 2013, 03:54:19 PM »
I haven't been able to find much information about FreeNAS 9 or 9.1, especially regarding the drive pooling features.  Got anything to read or watch for that?

Sorry I missed your question earlier.

Downloadable docs for all current versions of FreeNAS can be found on this page. Manual in PDF format for 9.1.0 can be directly downloaded here. The ZFS pooling info starts on page 111.

Can't comment at this point because it's a brand new version of FreeNAS and I haven't got any hands-on with this version. And I'm not sure I really would be able to say much since this 20 drive Leviathan server you're building will have more drives under unified management than anything I've ever worked with. Biggest I've ever done is 4 drives with ZFS. It's worked really well for me so far.

But remember, with servers - especially storage servers - you're in it for the long game. So it isn't until you hit the 5-year mark and successfully recover from your first few hardware crises that you can realistically say: "This is good. It works."

Look at what happened with Backblaze once a few of their mondo storage servers started experiencing real loads and time frames. Intermittent 'bad surprises' cropped up with some people's data even though Backblaze's techs do know what they're doing.

Like the man says, "It ain't over till it's over."

And with this stuff, it's never really over. 8)
:( OK, I'll brace myself!  As long as i have a plan for not losing data, I'll be fine fixing whatever needs to be fixed.
(attached image: images.jpg)

40hz

  • Supporting Member
  • Joined in 2007
  • **
  • Posts: 11,768
Re: Please help superboyac build a server (2013 edition).
« Reply #94 on: August 07, 2013, 04:05:40 PM »
^ Welcome to the world of system administration!

It's not just a job...it's a...well...it's...ah screw it! It's a job. ;D

Vurbal

  • Supporting Member
  • Joined in 2012
  • **
  • Posts: 635
  • Mostly harmless
Re: Please help superboyac build a server (2013 edition).
« Reply #95 on: August 07, 2013, 05:03:47 PM »
^ Welcome to the world of system administration!

It's not just a job...it's a...well...it's...ah screw it! It's a job. ;D
It's not just a job, it's a psychosis.

Stoic Joker

  • Honorary Member
  • Joined in 2008
  • **
  • Posts: 6,296
Re: Please help superboyac build a server (2013 edition).
« Reply #96 on: August 07, 2013, 05:55:17 PM »
^ Welcome to the world of system administration!

It's not just a job...it's a...well...it's...ah screw it! It's a job. ;D
It's not just a job, it's a psychosis.
+1 - Point goes to Vurbal.

Vurbal

  • Supporting Member
  • Joined in 2012
  • **
  • Posts: 635
  • Mostly harmless
Re: Please help superboyac build a server (2013 edition).
« Reply #97 on: August 07, 2013, 07:14:07 PM »
On the good side though: when you're sitting alone at 3:00 in the morning, after 20 straight hours of tearing your hair out, staying awake only on frustration, your eleventieth cup of coffee, and pure animal hatred for the server that refuses to cooperate, and the 3,837th thing that shouldn't work fixes everything... you feel like a GOD!

Like I said, psychosis.  ;D

superboyac

  • Charter Member
  • Joined in 2005
  • ***
  • Posts: 6,070
  • Is your software in my list?
Re: Please help superboyac build a server (2013 edition).
« Reply #98 on: August 07, 2013, 07:37:27 PM »
On the good side though: when you're sitting alone at 3:00 in the morning, after 20 straight hours of tearing your hair out, staying awake only on frustration, your eleventieth cup of coffee, and pure animal hatred for the server that refuses to cooperate, and the 3,837th thing that shouldn't work fixes everything... you feel like a GOD!

Like I said, psychosis.  ;D
Oh no...what am I getting myself into??

wraith808

  • Supporting Member
  • Joined in 2006
  • **
  • Posts: 8,408
  • "In my dreams, I always do it right."
Re: Please help superboyac build a server (2013 edition).
« Reply #99 on: August 07, 2013, 07:46:15 PM »
And now they have plastic cases without the sharp edges... so what are you going to do?  The reason I liked metal cases is that every time I cut myself, things would start working.  It just likes blood, you know?  Like voodoo.

Or... is that a sign of psychosis?  ;D