

Can we compare file transfer protocols?


superboyac:
Thanks steel!  I really appreciate that information.  That's going into my database.  I'm willing to give VPN a try.  It sounds like it may be a pain to set up, which I'm OK with, as long as it's easy to use after that.  I guess my next step is to find a good, well-written setup procedure that won't require me spending hours and hours googling and reading a bunch of forums for answers.  If you come across any, please send them my way.

4wd:
Does this count?

Or do you specifically not want to use third party software?

If nothing else it will give you an idea of a rather easy OpenVPN setup.

f0dder:
You should give a few more details on your scenario, Superboy.

Is this intended as more of a backup scenario, where individual users have access to their own "backup repository" and will usually be pushing changes, only pulling sometimes? (A pull would be either restoring from backup, which is hopefully seldom :), or synchronizing between machines.) Or are you looking at having several users share access to a "file repository"?

FTP sucks bigtime, it's really a retarded protocol; excusable because it's so ancient and people didn't know better back then, but completely and utterly unusable for a lot of common workflows. It sucks for a large number of small files because you open/close a data connection for each transfer, which will get your performance killed because of latency and the slow-start property of TCP. It's OK when you need full transfers of a few big files, but sucks if you need to update changed parts of a big file (can't be done with FTP itself; it needs separate server and client stuff to locate changed parts).
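To make that per-file overhead concrete, here's a rough Python sketch using the standard ftplib - the host, login and file names are all made up, so point it at a test server of your own:

    import io
    import time
    from ftplib import FTP

    # Placeholder server and credentials - substitute your own.
    HOST, USER, PASSWORD = "ftp.example.com", "user", "secret"

    ftp = FTP(HOST)
    ftp.login(USER, PASSWORD)

    payload = b"x" * 1024  # one small 1 KiB file

    start = time.time()
    for i in range(100):
        # Each STOR negotiates a brand-new data connection (PASV, TCP
        # handshake, slow start), so latency dominates for small files.
        ftp.storbinary(f"STOR small_{i:03d}.bin", io.BytesIO(payload))
    elapsed = time.time() - start

    print(f"100 x 1 KiB files took {elapsed:.1f}s "
          f"(~{elapsed * 10:.0f} ms each, mostly round trips)")
    ftp.quit()

Run that against anything with real latency and you'll see the per-file cost dwarf the actual payload.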

SFTP is often confused with FTPS, so let's untangle the names. FTPS is FTP with an SSL/TLS layer bolted on, so it still sucks just as much as FTP - and you'll have to be a bit careful with the settings, since explicit-mode FTPS can switch between encrypted and plaintext... you risk setting up encrypted login, but having the rest of the connection go in plaintext. SFTP, despite the name, is a different protocol entirely: it runs inside an SSH session, so it gets SSH's security and doesn't have FTP's connection-per-transfer problem.
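For contrast with the FTP sketch above, here's a minimal SFTP version using the third-party paramiko library (pip install paramiko); host and credentials are placeholders again. Note that all 100 uploads ride the one encrypted SSH session:

    import io
    import paramiko

    # Placeholder host and credentials - substitute your own.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("example.com", username="user", password="secret")

    sftp = client.open_sftp()
    for i in range(100):
        # No per-file data connection like FTP; everything travels
        # over the single SSH session already established above.
        sftp.putfo(io.BytesIO(b"x" * 1024), f"small_{i:03d}.bin")
    sftp.close()
    client.close()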

SCP is basically normal "cp" (ie. the unix "copy" command) running in an SSH tunnel. SSH == good, bigtime - while it's not 100% perfect, it's stood the test of time and has had its flaws worked out. I haven't looked at how the "cp" part is done, so I assume it's pretty much the copy command run through SSH, which means "dumb" full-content transfer of files... but since it runs through an SSH tunnel, you don't get FTP's retarded new-connection-per-file behaviour. A GUI client will probably send one copy command per file it's receiving, which will still have *some* latency overhead for smaller files, but not nearly as much as FTP.
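If you'd rather script it than click through a GUI, here's a small sketch that just shells out to the scp binary (host and paths are placeholders; assumes scp is installed and key-based login is set up):

    import subprocess

    # Copy a whole directory tree over a single SSH session.
    subprocess.run(
        ["scp",
         "-r",   # recursive: grab the directory and everything in it
         "-C",   # compress in transit, helps on slow links
         "./local_dir",
         "user@example.com:/srv/backup/"],
        check=True,
    )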

HTTP is better, since you get keep-alive connections... but there's some protocol overhead, and the issue of server setup and rights management. I think you'd want to look at WebDAV stuff for that, but it's not something I have experience with; I'd expect it to be better performing than FTP, though.
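I haven't used WebDAV myself, but since WebDAV uploads are just HTTP PUTs, a rough sketch of what scripted access would look like (using the third-party requests library; the server URL and credentials below are invented):

    import requests

    # Placeholder WebDAV endpoint - substitute your own server and auth.
    BASE = "https://dav.example.com/files"

    with requests.Session() as s:   # one keep-alive TCP connection
        s.auth = ("user", "secret")
        for i in range(100):
            # The session reuses the connection across all 100 PUTs,
            # which is exactly the advantage over FTP's reconnects.
            r = s.put(f"{BASE}/small_{i:03d}.bin", data=b"x" * 1024)
            r.raise_for_status()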

For VPN, I assume you mean a VPN connection combined with regular Windows Explorer style file access. Not something I'd personally like - it's secure enough if you use a decent VPN, but the CIFS/SMB protocol Windows uses for remote file transfers was made for LANs and isn't too hot over internet connections; there are too many roundtrips. It works, but since the user interface offered is the standard Explorer file manager, you kinda expect local speeds (even when you know you're accessing the net, your subconscious mind associates the standard interface with local speeds), and that's definitely not what you'll be getting.
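Some back-of-the-envelope numbers on why the roundtrips hurt - the per-file roundtrip count here is a rough guess for illustration, not a measured SMB figure:

    # Illustrative latency arithmetic, not a benchmark.
    RTT_LAN = 0.0005          # 0.5 ms round trip on a LAN
    RTT_WAN = 0.050           # 50 ms round trip over the internet
    ROUNDTRIPS_PER_FILE = 6   # open, a few reads/writes, close - a guess
    N_FILES = 1000

    for name, rtt in (("LAN", RTT_LAN), ("WAN", RTT_WAN)):
        total = N_FILES * ROUNDTRIPS_PER_FILE * rtt
        print(f"{name}: ~{total:.0f}s of pure latency for {N_FILES} small files")

    # Prints roughly: LAN ~3s, WAN ~300s. Same protocol, 100x slower,
    # purely from latency - bandwidth never even enters the picture.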

So, back to usage. I assume you don't want clients viewing/editing data directly on the remote location - VPN+CIFS would allow them to do this, but it's something you probably really, really don't want... performance is awful, especially because most programs are designed for local file access patterns - they might be reading and writing small blocks and seeking all over the place, which is pretty awful over a remote link... and things often get really nasty if the connection is lost.

If big files are involved and two-way synchronization is needed, you should probably be looking for a solution that can handle partial updates (and then you still have to realize that some big binary blob formats change so thoroughly on every save that partial updates can't even help - shame on those formats). On top of that, you need to carefully consider the problems involved if multiple clients have access to the same repositories.
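To illustrate the partial-update idea, here's a toy sketch of fixed-block comparison. Real rsync uses a rolling checksum on top of this so it also survives insertions that shift everything, but the core principle is the same: hash blocks, only ship the ones that changed.

    import hashlib

    def block_digests(data: bytes, block: int = 4096):
        """Hash each fixed-size block; matching hashes = blocks to skip."""
        return [hashlib.md5(data[i:i + block]).hexdigest()
                for i in range(0, len(data), block)]

    old = b"A" * 4096 * 100  # pretend 400 KiB file, 100 blocks
    # Same file with exactly one block overwritten in the middle:
    new = old[:4096 * 50] + b"B" * 4096 + old[4096 * 51:]

    changed = sum(a != b
                  for a, b in zip(block_digests(old), block_digests(new)))
    print(f"{changed} of {len(block_digests(old))} blocks need resending")
    # -> "1 of 100 blocks need resending": a 4 KiB transfer, not 400 KiB.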

A solution that's pretty good efficiency-wise would be SSH+rsync... yum for the performance benefits. It's definitely not user-friendly with a vanilla setup, though :)
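For the record, a minimal sketch of what an rsync-over-SSH invocation looks like, wrapped in Python (host and paths are placeholders; assumes rsync and ssh are installed on both ends):

    import subprocess

    subprocess.run(
        ["rsync",
         "-az",         # -a: preserve metadata, -z: compress in transit
         "--partial",   # keep half-sent files so transfers can resume
         "-e", "ssh",   # tunnel everything through SSH
         "./local_dir/",
         "user@example.com:/srv/backup/"],
        check=True,
    )

Only changed parts of changed files cross the wire, which is the whole point for a multi-terabyte server.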

JavaJones:
+15 for f0dder's comments. VPN is not a "file transfer protocol" per se, so you need to consider what you'd use along with it for file transfer, e.g. Explorer, which as he pointed out is highly problematic for non-local data. rsync may not be the easiest thing to set up, but it's the most directly applicable to what *seems* to be your scenario (based on the few details provided thus far).

- Oshyan

superboyac:
Thanks f0dder!  That is very, very useful.  OK, I believe you that VPN is not the way to go.  But what is?  I'm not really sure what SSH+rsync is, but even if it's good for performance, I tend to hesitate when something isn't easy to set up.  I like buttons and dropdowns and such.  I hate when configuration is done through text files and programming-language jargon.

I can clarify how I want to use this:
I have a server at home.  Many, many gigabytes (terabytes) of stuff is on it, and it's not rare for the contents to change and shift around a lot (several gigabytes' worth) every week or month.  I mention that because most of the "easy" solutions available online, like Dropbox, are normally limited to a few gigabytes' worth of stuff, and what I'm talking about is way more than that.
Now, I have some business partners and friends and family that I want to be able to share some of my server contents with.  Transfers both ways (uploading and downloading).  Ideally, I'd like my server to appear as an integrated folder or drive in their Explorer.  That's the goal.  That way, all other programs on their computers could access the files as they would any other local file.  And when I need to perform backups using SFFS or whatever I use, I can just say back up the "N:" drive, which would be the server.  No messy configuration files, no crazy programming-language jargon settings.  So how do I do that?

FTP just doesn't work well for that because of the connection issue you and several others have mentioned.  I accept that VPN is not really the solution for this either.  But what is?  I've tried things that use a web interface to make it easy to transfer files... like HFS, which is really great for that.  I'm using that now, and it's OK, but it's limited because you can only access it through the web.  I want to have a network folder on other people's computers the same way I can add network folders through my work's intranet as mapped drives.  Something like that.

Maybe that's the question.  How do I map a network drive that is NOT on the intranet, but on the outer-net (WAN, I suppose)?
