
Barebone server: what else do I need to complete it?


mouser:
Note: rackmount servers are heavy, noisy (as in 747 takeoff noisy) and run hot. So think about where you're gonna put it.
--- End quote ---


ditto. 

40hz:
Would one Supermicro box really be all that loud and hot?
-superboyac (September 27, 2012, 04:32 PM)
--- End quote ---

Yup.

Any rack mount server is going to be noisy since they "turbo" the airflow. Small high-volume fans pushing air through a relatively small passage. Whooosh! They're designed to accommodate server room density requirements - not ergonomics or aesthetics. And the heatsink fans on the CPUs are geared for performance and cooling - and "who cares!" about the noise levels.

Besides, anything with a bunch of disk drives in a relatively small case along with a couple of serious CPUs is going to be a very efficient space heater. The cases are designed to be heat radiators to cool the innards. Think of a rack mount server case as a giant heat sink for the system.
 8)

Thanks 40, I was hoping to get your input. 
-superboyac (September 27, 2012, 04:32 PM)
--- End quote ---

You're welcome. But let's get Stoic Joker, JavaJones, Wraith808,  SteelAdept, skwire, 4wd, Edvard, f0dder, and some of the other people who are involved with servers (sorry to the other DoCo members whose names I can't recall off the top of my head - you guys know who you are! :mrgreen:) in on this too. ;D

Stoic Joker:
Note: rackmount servers are heavy, noisy (as in 747 takeoff noisy) and run hot. So think about where you're gonna put it.
-40hz (September 27, 2012, 01:34 PM)
--- End quote ---

+1 x 10 :) I just got a great deal on an off-lease Dell (1U rack mount) data center server. On start up that thing will flat-out wake the freaking dead! - (Think 19 vacuum cleaners locked in a closet) - Hell, even with the fans at 'idle' it gave me a headache after sitting next to it for a few hours.

@SB Supermicro does make some great stuff. My home office's main server is a Supermicro box that has been running rock steady 24/7 for about 6 or 7 years now. But unless you have a basement you're not using ... (seriously, it really is that loud) ... go for a tower server with an external array.

40hz:
Go for a tower server with an external array.
-Stoic Joker (September 28, 2012, 07:23 AM)
--- End quote ---

Agree completely.

Also, housing the disk array in a separate enclosure will go a long way toward distributing heat and keeping as much of it as possible away from your $$$ Xeons. (Looks sexier too, IMHO. Racks look meh. Two nice towers with all those blinkin' lights look ever so much cooler. Run cooler too!)

Additional note: if your board supports it (and virtually all "server" mobos do) I would spend the extra to get ECC RAM. Especially if I were going to be provisioning for virtual machines, software RAID, or drive pooling. With ECC, that's one less opportunity for unexpected surprises to crop up.
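What ECC actually buys you can be illustrated with a toy Hamming(7,4) code in Python. To be clear, this is only a sketch of the single-error-correction principle - real ECC DIMMs implement wider SECDED codes in the memory controller hardware, not anything like this:

```python
# Toy Hamming(7,4): 4 data bits protected by 3 parity bits.
# Any single flipped bit in the 7-bit codeword can be corrected.

def encode(d):                      # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # Codeword layout (1-based positions): p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):                     # c: 7-bit codeword, possibly corrupted
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # parity over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3 # 0 = clean, else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1        # flip the bad bit back
    return [c[2], c[4], c[5], c[6]] # recovered data bits

word = [1, 0, 1, 1]
code = encode(word)
code[4] ^= 1                        # a stray single-bit error
assert correct(code) == word        # corrected transparently
```

Non-ECC memory has no parity bits at all, so the same flip would just silently hand back the wrong word - which is the whole argument for paying the ECC premium on a server.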

Many memory suppliers discourage people from using ECC since it will reduce performance marginally (1-2%) compared to non-ECC memory modules. But they're primarily addressing the desktop/workstation environment where (with current RAM products) it's not considered needed or desirable.

Servers, however, are a "whole 'nuther smoke." They're not just a tower PC loaded with drives and maxed out with RAM. With real servers, bullet-proof reliability, redundancy, and low-latency data bus engineering are more important than wringing the last fly speck of performance out of each of your individual components. You have to see them as 'one thing' rather than a collection of components. And they really are designed to be "set & forget" devices. You power them up - and "that's that" if you did it right.

I've had servers run continuously for years on end. The only time they were ever powered down was to add or replace a drive if they didn't support hot-swapping. Or (if they were Windows servers) to perform a required reboot following a software upgrade. I've routinely had BSD/Linux servers go for well over three years without requiring a single reboot. I have one that's been rebooted only twice in seven years and is still going strong.

FWIW there's some disagreement in the "pro" community about whether or not ECC is "worth it" any more. It's split about 60-40 against, last I paid attention to it. And it seems to be largely an age dependent thing. The young 'uns say we older guys aren't up enough on advances in memory engineering. We codgers (i.e. anyone over 40) say these youngsters are far too trusting when it comes to reliability claims, and haven't been around long enough to see all the weirdness that goes down in a server environment.

The real truth probably falls somewhere in the middle.

So...I'm still inclined to spec ECC for a server. But I'm an "old guy." So maybe some other people might want to put their tuppence in on this one?

About ECC
8)

f0dder:
So...I'm still inclined to spec ECC for a server. But I'm an "old guy." So maybe some other people might want to put their tuppence in on this one?
-40hz (September 28, 2012, 09:13 AM)
--- End quote ---
Even with "advances in memory engineering", what about stuff like... cosmic radiation? Even if you only get a single-bit corruption every few years, a single-bit corruption inside a compressed datastream can ruin quite a lot.
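f0dder's point about a single flipped bit wrecking a compressed stream is easy to demonstrate. A quick Python sketch using zlib (the exact failure mode varies with where the flip lands, but a mid-stream flip will trip either the DEFLATE format checks or the trailing Adler-32 checksum):

```python
import zlib

# Compress some repetitive text, then flip one bit in the middle of
# the compressed stream to mimic a one-off memory error.
original = b"The quick brown fox jumps over the lazy dog. " * 100
compressed = bytearray(zlib.compress(original))
compressed[len(compressed) // 2] ^= 0x01  # a single flipped bit

try:
    zlib.decompress(bytes(compressed))
    print("stream survived the bit flip")
except zlib.error as exc:
    print("decompression failed:", exc)
```

One bit out of a few kilobytes, and the whole payload is unrecoverable - which scales up badly when the "datastream" is a backup archive or a VM disk image sitting in RAM.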

I'm about to shop for a new home server, probably in October, and I'm considering whether to go for ECC memory... but that also means going for a server motherboard and a Xeon CPU, and that ends up in a somewhat different price class than a commodity i3 or i5.
