You should give a few more details on your scenario, Superboy.
Is this intended as more of a backup scenario, where individual users have access to their own "backup repository" and will usually be pushing changes and only pulling occasionally? (A pull would be either restoring from backup, which is hopefully rare, or synchronizing between machines.) Or are you looking at having several users share access to a common "file repository"?
FTP sucks bigtime; it's a really braindead protocol. That's excusable because it's so ancient and people didn't know better back then, but it's completely and utterly unusable for a lot of common workflows. It sucks for a large number of small files because it opens and closes a data connection for each transfer, which kills your performance through latency and TCP's slow-start behaviour. It's OK when you need full transfers of a few big files, but it's useless if you need to update only the changed parts of a big file (that can't be done with FTP itself; it needs separate server and client software to locate the changed parts).
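To make that per-file overhead concrete, here's a minimal sketch using Python's stdlib ftplib (the host, credentials, and file names are made up): every retrbinary() call runs over a fresh data connection, so a hundred small files means a hundred TCP setups, each paying the handshake and slow-start tax.

```python
# Minimal sketch of fetching many small files over FTP.
# Host, credentials, and file list are placeholders.
from ftplib import FTP

HOST = "ftp.example.com"                       # hypothetical server
FILES = ["file%03d.txt" % i for i in range(100)]

ftp = FTP(HOST)
ftp.login("user", "password")                  # placeholder credentials

for name in FILES:
    with open(name, "wb") as out:
        # Each retrbinary() opens a brand-new data connection --
        # this is where the latency overhead comes from.
        ftp.retrbinary("RETR " + name, out.write)

ftp.quit()
```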
FTPS is just FTP with an SSL layer, so it still sucks just as much as FTP. Also, you'll have to be a bit careful with the settings, since explicit-mode FTPS can switch between encrypted and plaintext... you risk setting up an encrypted login but having the data connections go over plaintext. SFTP is a different beast despite the similar name: it's a file transfer protocol that runs inside a single SSH connection, so it sidesteps FTP's connection-per-file mess.
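For contrast, a quick sketch with the third-party paramiko library (host, credentials, and file names are placeholders): the SSH/SFTP session is set up once, and every file rides the same connection.

```python
# Sketch: many small uploads over a single SFTP session.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only; verify host keys for real
client.connect("host.example.com", username="user", password="secret")  # placeholders

sftp = client.open_sftp()
for name in ["a.txt", "b.txt", "c.txt"]:        # hypothetical file list
    sftp.put(name, "/remote/backup/" + name)    # all transfers share one TCP/SSH connection
sftp.close()
client.close()
```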
SCP is basically normal "cp" (i.e. the Unix "copy" command) running in an SSH tunnel. SSH == good, bigtime - while it's not 100% perfect, it has stood the test of time and had its flaws worked out. I haven't looked closely at how the "cp" part is done, so I assume it's pretty much the copy command run through SSH, which means "dumb" full-content transfer of files... but since it runs through an SSH tunnel, you don't get FTP's braindead new-connection-per-file behaviour. A GUI client will probably send one copy command per file it receives, which will still have *some* latency overhead for smaller files, but nowhere near as much as FTP.
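If you just want to script it, calling scp from Python is about as simple as it gets; a sketch, with the host and paths made up:

```python
# Sketch: recursive copy of a whole directory over one SSH connection,
# unlike FTP's one-data-connection-per-file dance.
import subprocess

subprocess.run(
    ["scp", "-r", "-C",                       # -r: recursive, -C: compress in transit
     "user@host.example.com:/srv/backup/",    # hypothetical remote source
     "./restored/"],                          # local destination
    check=True,
)
```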
HTTP is better, since you get keep-alive connections... but there's some protocol overhead, plus the issue of server setup and rights management. I think you'd want to look at WebDAV for that, but it's not something I have experience with; I'd expect it to perform better than FTP, though.
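Here's what the keep-alive benefit looks like with nothing but the stdlib (host and paths are placeholders): one TCP and TLS setup, several requests over the same socket.

```python
# Sketch: several GETs over a single persistent HTTP/1.1 connection.
import http.client

conn = http.client.HTTPSConnection("files.example.com")  # hypothetical server
for path in ["/repo/a.txt", "/repo/b.txt", "/repo/c.txt"]:
    conn.request("GET", path)
    resp = conn.getresponse()
    data = resp.read()    # must read the full body before reusing the connection
    print(path, resp.status, len(data))
conn.close()
```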
For VPN, I assume you mean a VPN connection combined with regular Windows Explorer style file access. Not something I'd personally like - it's secure enough if you use a decent VPN, but the CIFS/SMB protocol Windows uses for remote file access was made for LANs and isn't too hot over internet connections; there are too many round trips. It works, but since the user interface is the standard Explorer file manager, you kind of expect local speeds (even when you know you're accessing the net, your subconscious associates the standard interface with local speeds), and that's definitely not what you'll be getting.
So, back to usage. I assume you don't want clients viewing/editing data directly on the remote location - VPN+CIFS would let them do this, but it's something you probably really, really, really don't want... performance is awful, especially because most programs are designed for local file access patterns - they might be reading and writing small blocks and seeking all over the place, which is terrible over a remote link... and things often get really nasty if the connection is lost.
If big files are involved and two-way synchronization is needed, you should probably be looking for a solution that can handle partial updates (and even then, you have to realize that some big binary blob formats are rewritten wholesale on every save, so partial updates can't even be done - shame on those formats). On top of that, you need to carefully consider the problems involved if multiple clients have write access to the same repository.
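To illustrate what "partial updates" means in practice, a toy sketch: checksum fixed-size blocks of the old and new copies and see which blocks actually changed. This is a simplification - real rsync uses rolling checksums so it can cope with insertions that shift everything, which a fixed grid can't. Paths are made up.

```python
# Toy fixed-block comparison: which blocks of a big file would need re-sending?
import hashlib

BLOCK = 64 * 1024  # 64 KiB blocks (arbitrary choice)

def block_sums(path):
    sums = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(BLOCK)
            if not chunk:
                break
            sums.append(hashlib.md5(chunk).hexdigest())
    return sums

old = block_sums("backup/big.blob")   # placeholder paths
new = block_sums("local/big.blob")
changed = [i for i, (a, b) in enumerate(zip(old, new)) if a != b]
changed += list(range(len(old), len(new)))  # blocks appended to the new copy, if any
print("blocks to transfer:", changed, "out of", len(new))
```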
A solution that's pretty good efficiency-wise would be SSH+rsync... yum for the performance benefits. It's definitely not user-friendly with a vanilla setup, though.
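A sketch of scripting rsync over SSH (host and paths are placeholders); the flags shown are standard rsync options:

```python
# Sketch: rsync over SSH. rsync's delta algorithm sends only the changed
# parts of big files, and --partial lets an interrupted sync resume.
import subprocess

subprocess.run(
    ["rsync", "-az", "--partial",   # -a: preserve metadata, -z: compress
     "-e", "ssh",                   # tunnel everything through SSH
     "./project/",                  # trailing slash: copy directory contents
     "user@host.example.com:/srv/backup/project/"],
    check=True,
)
```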