

Building a home server. Please help, DC!


lotusrootstarch:
Well summarized Steel.  :Thmbsup:

A caveat: one problem with running ESXi is that you won't be able to take advantage of GPU-accelerated transcoding via CUDA/DXVA, that is, if you MUST "re-encode" videos.

superboyac:
OK...this is all very good advice.  I really am not sure what to do again.

My basic need in all of this is this:
What is the easiest way to add 10+ hard drives to my current setup?  And I want access to those hard drives in exactly the same way I use my regular hard drives now.  That is, no kind of restrictions like 40hz brought up above.

I don't know if I need a NAS, or a server, or what.  But I don't want something that is not meant to easily deal with a lot of hard drives.  I don't want 4 or 6 or even 10 hard drives to be the maximum; I want something that can easily take in 20 drives if necessary, once everything is set up.  And I definitely don't want any difficulty with access to any of them.  I don't want the drives to need to be "mounted" to a client PC in some special way, where the connection can be unstable and disconnect occasionally.  I don't want anything like that.  I don't want transfer speeds any different from the regular SATA drives I use right now.  I don't want speeds like a dorky USB stick.  So this is kind of what I want.

lotusrootstarch:
What is the easiest way to add 10+ hard drives to my current setup?
-superboyac (September 04, 2011, 01:04 AM)
--- End quote ---

According to my research, there's nothing on the market yet. The closest solution for you would be one or more mega USB 3.0/eSATA enclosures with a shitload of bays on each, directly connected to the desktop/server... but last time I checked, no such thing exists. Even if it does come into existence some time down the road, I'd imagine heat dissipation and cost would be significant obstacles to adoption.

The dilemma is that, at some stage, within that single device, whether it's a beefy server or a desktop, constraints like physical room/capacity, connectors, heat, and other performance factors will force you to take the networked, distributed approach.

And the biggest problem with a networked solution is obviously the network itself... Let's look at your requirements:

I don't want transfer speeds any different from the regular SATA drives I use right now.
--- End quote ---
connection can be unstable and disconnect occasionally
--- End quote ---

You can actually achieve these using (quick back-of-the-envelope sketch below):
1. Gigabit Ethernet aggregation (minimum 2Gb/s aggregated in a single direction)
2. Powerful switching backbone capacity (plenty of switches with gigabit ports lack the corresponding switching fabric to support that performance)
3. SMB / NFS / iSCSI as transfer protocols
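
A quick back-of-the-envelope sketch of why ~2Gb/s of aggregation is about the floor if you want it to feel like local SATA. The drive-speed and overhead figures here are assumptions, not measurements:

--- Code: ---
# How many gigabit links need to be aggregated to keep up with a local SATA drive?
SATA_DRIVE_MBPS = 110        # assumed sequential throughput of a typical 7200rpm SATA drive
GIGABIT_RAW_MBPS = 125       # 1Gb/s on the wire = 125 MB/s
PROTOCOL_EFFICIENCY = 0.85   # assumed fraction left after Ethernet/IP/TCP/SMB overhead

usable_per_link = GIGABIT_RAW_MBPS * PROTOCOL_EFFICIENCY   # ~106 MB/s
links_needed = -(-SATA_DRIVE_MBPS // usable_per_link)      # ceiling division
print(f"Usable per gigabit link: ~{usable_per_link:.0f} MB/s")
print(f"Links to aggregate to match one SATA drive: {int(links_needed)}")
--- End code ---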

At this level of requirement, it is the networking components that demand the biggest budget. Switches that properly support Ethernet aggregation, with a beefy backplane that delivers all the data transfer at line rate, do not come cheap. Earlier this year I deployed a home theater setup for a long-time friend; the wired networking plus storage part of the bill of materials boiled down as follows:

1. Backbone: Cisco 3750G x 2,  @ $3500 each
Running two 4Gb/s Ethernet aggregation links over CAT6 cable to two distribution-layer switches located in the major entertainment hubs of the residence.

2. Distribution/Access: Cisco 3560G x 2 (model with 4 uplinks),  @ $2800 each
Running 4Gb/s Ethernet aggregation back to the backbone switches.

3. Miscellaneous consumer-grade Gigabit switches,  @ $2000 total
Uplinking to backbone/distribution via single gigabit link.

4. ReadyNAS 1500 x 6, each with 4 x 2TB drives,  @ $2400 each bundled
Connected directly to the backbone switches via dual Gigabit Ethernet aggregation (maximum uni-directional transfer speed of 2Gb/s).

5. One PowerEdge server, 64GB memory, dual quad-core Xeon with 12Gb/s ethernet aggregation, @ $3500
Connected directly to backbone switches, and thus to all the NAS appliances.


YET that yields less than 40TB of usable storage, and there's still noticeable performance degradation when the load is concentrated on a few NAS boxes. There's been no solution; you just bear with it.
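
Rough math behind that figure, assuming single-drive redundancy in each box and a few percent of filesystem overhead (those are my assumptions, not the exact deployment settings):

--- Code: ---
NAS_BOXES = 6
DRIVES_PER_BOX = 4
DRIVE_TB = 2.0
PARITY_PER_BOX = 1        # assumption: single-drive redundancy (X-RAID2/RAID-5) per box
FS_OVERHEAD = 0.05        # assumption: ~5% lost to formatting/filesystem

raw_tb = NAS_BOXES * DRIVES_PER_BOX * DRIVE_TB
usable_tb = NAS_BOXES * (DRIVES_PER_BOX - PARITY_PER_BOX) * DRIVE_TB * (1 - FS_OVERHEAD)
print(f"Raw:    {raw_tb:.0f} TB")      # 48 TB
print(f"Usable: {usable_tb:.1f} TB")   # ~34 TB
--- End code ---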

So... focus your budget on the stuff you value most and be prepared to make compromises. Everybody has to, even the craziest of the crazy. :)

In your case, a few NAS boxes plus a consumer-grade gigabit switch is really the best solution.

40hz:
I was just thinking...

Since this will be a personal server with most likely only a few people accessing it at any given time, a single 1Gb network link on the LAN side should be sufficient for anything being streamed to the users. That's more bandwidth per user than most people get already - and some have multiple family members streaming (via wireless no less!) simultaneously.
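
A rough sanity check on how many streams one 1Gb port can carry, using assumed per-stream bit rates and leaving some headroom for overhead:

--- Code: ---
LINK_MBPS = 1000
USABLE_FRACTION = 0.7     # assumption: headroom for protocol overhead and bursts

for label, stream_mbps in [("1080p H.264 rip", 8), ("Blu-ray quality rip", 40)]:
    streams = int(LINK_MBPS * USABLE_FRACTION // stream_mbps)
    print(f"{label} (~{stream_mbps} Mb/s): room for roughly {streams} simultaneous streams")
--- End code ---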

Most playback software is aware of this, so it's gotten very good at buffering and caching to avoid any stutters or freezes.

If there are problems after that, then it becomes a QoS issue - and that's a whole 'nother tweak&tune discussion we'll leave for another day.

But if the actual scenario is one (or three) people mostly pulling from the server (even HD), I doubt you'll ever see a problem there.

If a problem does show up, I'd first try "multihoming" the server by enabling a second NIC LAN port and pointing some users to that as their IP gateway address. Put yourself on your own port and let everybody else share the second. Because you paid for the damn thing so "screw them" right? (kidding...just kidding...)


On the WAN side, even a 100Mb port is usually sufficient - unless some ISP is finally allowing faster backbone connections for its customers.  Most ISPs throttle or lock your link throughput somewhere in a range any 100Mb NIC can easily handle. If you actually can benefit from having 1Gb on the WAN side, then use a 1Gb NIC for that too. No big deal.

So if you're letting your server handle most of the heavy-lifting, and basically only using your LAN side to pull files down, a single (or dual) switched 1Gb network on the LAN side should be plenty.

If you take a look at many preconfigured servers, you'll see one 100Mb and two 1Gb NIC ports built in.

Now you know why.  :)

lotusrootstarch:
My observation is that media traffic can easily be "crowded out" by traffic such as file transfers, network backups, etc. And due to a whole bunch of factors (everyone's got his/her own opinion on this), I see the actual maximum aggregated speed over a 1Gbps link seldom go above 40MB/s for a single session (such as one SMB file-sharing session), and it tends to drop below 20MB/s when you have multiple sessions (such as file transfer, heavy Internet downloading, streaming) going on concurrently.

Don't forget 1Gbps is just a theoretical max, and a bunch of factors slow it down to a disappointing real-world speed -- host CPU power, switching/routing infrastructure, host NIC quality, cable quality/length/distance along the path, disk IO speed, TCP congestion avoidance, etc.
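
A back-of-the-envelope illustration of how those factors stack up; every multiplier here is an assumed estimate, not a measurement:

--- Code: ---
wire_rate = 1000 / 8                      # 125 MB/s raw for a 1Gbps link
after_headers = wire_rate * 0.94          # Ethernet/IP/TCP framing overhead
after_host = after_headers * 0.6          # assumed host CPU, NIC quality, SMB chattiness
after_disk_contention = after_host * 0.6  # assumed disk IO limits and competing sessions

print(f"Wire rate:              {wire_rate:.0f} MB/s")
print(f"After protocol headers: {after_headers:.0f} MB/s")
print(f"After host/SMB losses:  {after_host:.0f} MB/s")
print(f"With disk + contention: {after_disk_contention:.0f} MB/s")
--- End code ---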

100Mbps is unusable, but don't put too much trust in 1Gbps either; it ain't that good.
