How about using a DNS name that round-robins across the 3 different physical sites/servers, and having each server at each site respond to that virtualhost name? That way the client-side software doesn't really need to care, you get redundancy of sorts, and the load would (in theory) be spread between the sites.
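Just to illustrate the round-robin part: the shared name simply has one A record per site, so resolving it returns all three addresses and a plain HTTP client ends up on whichever one it picks first. A quick Python check of that (downloads.example.com is a made-up name, not anything from this thread):

import socket

# Hypothetical shared name with one A record per site.
SHARED_NAME = "downloads.example.com"

# getaddrinfo returns every address the round-robin name resolves to;
# most clients just use the first entry, so successive lookups spread
# clients across the three servers.
for family, type_, proto, canonname, sockaddr in socket.getaddrinfo(
        SHARED_NAME, 80, type=socket.SOCK_STREAM):
    print(sockaddr[0])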
You could do that and also give DownloadAndRun 4 URLs to fetch the file from: the first being the shared name (to get load sharing between the servers), and then the 3 individual names to go direct to each server, to cater for one of them being down or unavailable.
I would imagine the desired extension to DownloadAndRun would simply be the ability to specify a list of URLs, where the secondary URLs are tried in succession if the connection fails or a non-2xx/3xx response is received from the webserver.
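Roughly this sort of fallback logic, I'd guess (Python here just to show the shape, not whatever DownloadAndRun actually uses, and all the URLs are placeholders):

import urllib.request
import urllib.error

# Hypothetical list: the shared round-robin name first, then the three
# site-specific names as fallbacks.
URLS = [
    "http://downloads.example.com/update.exe",
    "http://site1.example.com/update.exe",
    "http://site2.example.com/update.exe",
    "http://site3.example.com/update.exe",
]

def fetch_first_available(urls, timeout=10):
    """Try each URL in order; return the body of the first successful reply."""
    last_error = None
    for url in urls:
        try:
            # urlopen follows 3xx redirects itself, raises HTTPError for
            # 4xx/5xx and URLError when the connection fails, so falling
            # through to the next URL covers both "server down" and
            # "server replied with an error".
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc
    raise last_error

data = fetch_first_available(URLS)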
Given this application it would probably be quite useful to send information about the existing file with the GET request (If-Modified-Since and If-None-Match headers carrying the saved Last-Modified and ETag values) so that the remote server can reply with 304 Not Modified if the file hasn't changed since the user last checked. That could save a lot of bandwidth for the people hosting the files; a rough sketch follows the link below (see
http://www.w3.org/Pr...sec10.html#sec10.3.5)
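Something like this, say, again in Python just to show the idea (the last_modified/etag values would be whatever was saved from the previous download):

import urllib.request
import urllib.error

def conditional_get(url, last_modified=None, etag=None):
    """Fetch url only if it has changed since the stored validators."""
    req = urllib.request.Request(url)
    # The validators the server sent last time go back in the request
    # as If-Modified-Since / If-None-Match headers.
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)
    if etag:
        req.add_header("If-None-Match", etag)
    try:
        with urllib.request.urlopen(req) as resp:
            # 200: the file changed; keep the new validators for next time.
            return (resp.read(),
                    resp.headers.get("Last-Modified"),
                    resp.headers.get("ETag"))
    except urllib.error.HTTPError as err:
        if err.code == 304:
            # 304 Not Modified: the copy on disk is still current.
            return None, last_modified, etag
        raise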
Another potentially useful thing is to send an 'Accept-Encoding' header with the GET request (gzip, deflate, or both) and accept a compressed stream in response, again to cut down on bandwidth usage. This only matters if the remote file isn't already compressed before being loaded onto the server, and pre-compressing it would make more sense of course, because then the servers wouldn't need to do on-the-fly compression before sending it to the client.
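For completeness, a minimal sketch of the request side of that, assuming gzip only (the server is free to ignore the header, so the client has to check Content-Encoding before decompressing):

import gzip
import urllib.request

def fetch_maybe_compressed(url):
    """Ask for a gzip-compressed response and decompress it if we got one."""
    req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
        # Only decompress when the server actually says the body is gzipped.
        if resp.headers.get("Content-Encoding") == "gzip":
            body = gzip.decompress(body)
        return body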
Those suggestions might get the ball rolling....