I've tried learning to use some download managers. The problem I face is an issue that I believe must already be solved, but I can't find a solution anywhere I look.
A good download manager can import a list of downloads, or grab everything from a web page that is already open.
There seems to be plenty of software for downloading whole web sites, but many users would not wish to download WHOLE web sites (especially if the job could take 5-15 times longer and fill your disk, maybe before it even finishes). The spidering I tried was too automatic.
If you can work out what the next links in a big collection of content should be (e.g.
example.net/page1/File_Download_PDF_Part1.pdf (.doc, .jpg, .mp3, .mov, .flv)
example.net/page1/File_Download_PDF_Part2.pdf
example.net/page1/File_Download_PDF_Part3.pdf
example.net/page1/File_Download_PDF_Part4.pdf (web sites of course can have hundreds of these, and I've found the referring page to be just the homepage, or even none at all)
) then you could make a link list, add it to a download manager, see how many are available (by querying their sizes), and choose how many to queue (something like the sketch below).
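For that predictable-pattern case, a rough Python sketch could build the list and probe it; the URL pattern, the part count (1-20), and the output file name links.txt are all assumptions here, not anything tied to a real site:

```python
import urllib.request

BASE = "http://example.net/page1/File_Download_PDF_Part{}.pdf"   # assumed pattern

available = []
for n in range(1, 21):                       # assume we probe parts 1-20
    url = BASE.format(n)
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            size = resp.headers.get("Content-Length", "unknown")
            print(url, "->", size, "bytes")
            available.append(url)
    except Exception:
        print(url, "->", "not found")

# One URL per line, so a download manager can import it (File > Import list).
with open("links.txt", "w") as f:
    f.write("\n".join(available))
```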
But for
example.net/pageName/download1.ext
example.net/pageSomeOtherPageName/download1.ext
example.net/pageYetSomeOtherPageName/download1.ext
you need to find the URLs first. If they are all on one page, it's easy.
If there are hundreds of links and they are not all listed on one page (but spread across page1, page2, ... page89), then users need something to mine/harvest/spider the URLs and write them to a text file. It would be far better to harvest links without having to browse dozens of pages by hand. The user can then take the partial or full list, import it into their installed download manager, choose how much / which parts to download, choose the order, even throttle the speed down (FreeDownLoadManager.org has the option File > Import list from clipboard or text file). A rough sketch of this harvesting step follows.
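A bare-bones sketch of what that step could look like; the page{N} listing pattern, the 89-page count, the file extensions, and the links.txt output name are all assumptions, and a real tool would use a proper HTML parser rather than a regex:

```python
import re
import urllib.request
from urllib.parse import urljoin

WANTED = (".pdf", ".doc", ".jpg", ".mp3", ".mov", ".flv")
links = []

for page in range(1, 90):                      # assumed pagination: page1 .. page89
    listing = f"http://example.net/page{page}"
    try:
        with urllib.request.urlopen(listing, timeout=10) as resp:
            html = resp.read().decode("utf-8", "replace")
    except Exception:
        continue                               # skip listing pages that fail to load
    for href in re.findall(r'href="([^"]+)"', html):
        url = urljoin(listing, href)           # resolve relative links
        if url.lower().endswith(WANTED):
            links.append(url)

# One URL per line, ready for "import list from clipboard or text file".
with open("links.txt", "w") as f:
    f.write("\n".join(dict.fromkeys(links)))   # de-duplicate, keep order
```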
Many download managers are out there and are built to handle queueing dozens (or hundreds) of files, but they appear to be missing this link mining/harvesting function to build their queue from large freeware-MP3 and image/video-promo collections.
I've tried http://tools.seobook.com/link-harvester/ ( http://tools.seobook...klinks/backlinks.php ), but for some reason I can't get any URL to work with it.
The Firefox add-on Link Gopher would be good, but it only works on the page you are on; it goes zero pages deep (and you can't feed it a list).
It would be much better if a plugin (or app) could be given the starting URL(s), let the user choose how many pages deep to go, choose whether to follow both example.net and hosted.example.net in the same query, and leave downloading all the bulky content to the download managers. Something along the lines of the rough crawl sketch below.
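A very rough sketch of what such a plugin/app might do internally, where the start URLs, depth limit, and file extensions are all made-up assumptions; it only collects the file links into links.txt and leaves the actual downloading to the manager:

```python
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

START_URLS = ["http://example.net/"]   # assumed starting point(s)
MAX_DEPTH = 2                          # "number of pages deep"
FOLLOW_SUBDOMAINS = True               # also follow hosted.example.net etc.
WANTED = (".pdf", ".mp3", ".jpg", ".flv")

class Links(HTMLParser):
    """Collect absolute href targets from one page."""
    def __init__(self, base):
        super().__init__()
        self.base, self.found = base, []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for k, v in attrs:
                if k == "href" and v:
                    self.found.append(urljoin(self.base, v))

def same_site(url, root="example.net"):
    host = urlparse(url).netloc
    return host == root or (FOLLOW_SUBDOMAINS and host.endswith("." + root))

seen, to_visit, files = set(), [(u, 0) for u in START_URLS], []
while to_visit:
    url, depth = to_visit.pop()
    if url in seen or depth > MAX_DEPTH or not same_site(url):
        continue
    seen.add(url)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", "replace")
    except Exception:
        continue
    page = Links(url)
    page.feed(html)
    for link in page.found:
        if link.lower().endswith(WANTED):
            files.append(link)              # harvest the file link, don't fetch it
        else:
            to_visit.append((link, depth + 1))

# Hand the bulky downloading off to the download manager via a plain text list.
with open("links.txt", "w") as f:
    f.write("\n".join(dict.fromkeys(files)))
```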
OutWit Hub (Light) is a much bigger app/plugin; I have not yet seen it compile a URL list from multiple pages.
If a developer wants to make it complete, I'd say have the above selections in one box, adding check/uncheck boxes for how to spider/harvest links (a toy example of this logic follows the list):
[] follow to other domains
[] down
    example.net/SomeCategory/ may return
    example.net/SomeCategory/somesubcategory/FILE.PDF and
    example.net/SomeCategory/someothersubcategory/OtherFileName.PDF
[] sideways
    example.net/SomeCategory/ may return
    example.net/SomeOtherCategory/FILE.PDF
[] up and sideways
    example.net/SomeCategory/ may return
    example.net/FILE.PDF
(if the user just types example.net/ and checks spider downward, then [] sideways and [] up and sideways would be unnecessary)
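And a toy sketch of how those down / sideways / up-and-sideways checkboxes could be decided for each harvested link; the function names and URLs are purely illustrative:

```python
from urllib.parse import urlparse

def segments(path):
    """Split a URL path into non-empty segments."""
    return [s for s in path.split("/") if s]

def direction(start_url, link_url):
    start, link = urlparse(start_url), urlparse(link_url)
    if link.netloc != start.netloc:
        return "other domain"            # only kept if "[] follow to other domains" is checked
    base_dir = segments(start.path)              # e.g. ["SomeCategory"]
    link_dir = segments(link.path)[:-1]          # directory part, file name dropped
    if link_dir[:len(base_dir)] == base_dir:
        return "down"                    # deeper inside the starting category
    if len(link_dir) >= len(base_dir):
        return "sideways"                # a different category at the same level or deeper
    return "up"                          # closer to the site root ("up and sideways")

print(direction("http://example.net/SomeCategory/",
                "http://example.net/SomeCategory/somesubcategory/FILE.PDF"))   # down
print(direction("http://example.net/SomeCategory/",
                "http://example.net/SomeOtherCategory/FILE.PDF"))              # sideways
print(direction("http://example.net/SomeCategory/",
                "http://example.net/FILE.PDF"))                                # up
```

With a check like this, typing just example.net/ as the start URL makes every same-domain link come out as "down", which matches the note above that the other two boxes become unnecessary in that case.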
So many users have downloaded and installed download managers. I would think something a step up from Link Gopher (one that goes at least one page deep, or accepts a start list) would be popular.
If anyone has something more capable than Link Gopher, please post it here. I also think this would be a much easier program to create than the spiders that attempt to download whole sites.