Wow. Talk about switches! How about I tell you that the link is www.example.com/website. It is hosted on localhost only, so it isn't an actual web site per se.
But I open it by typing www.example.com/website into any browser. Can you narrow down the switches a bit with that info? I feel "whipped".
This is just a single page, like "the home page" that the site opens to. All I need is that one page. As a manual download with Ctrl+S it is an approx. 2MB MHTML file. If I could find that old Mouse Keys programmer, I would just program the mouse strokes into a script to do it.
-questorfla
I can tell you what they do... and you can take out what you need. Since you're going via the HTTP protocol, it doesn't really matter whether you're running locally or not.
--recursive: get the page recursively, i.e. don't just stop with the queried page
--domains [domain]: only follow links within the listed domain(s)
--no-parent: don't go upward, no matter what the links say
--page-requisites: get extra needed files, like css and js files
--html-extension: save pages that are HTML with an .html extension, even when the URL doesn't end in one
--convert-links: rewrite the links in the downloaded pages so they point at the local copies instead of back at the server
--restrict-file-names=windows: use windows compatible file names
--no-clobber: if you have to re-run the download, it won't re-fetch files that already exist.
I'm not sure how to limit it to only one level; my use for it was to download the whole website for offline extraction. You might be able to add -l 1 (--level=1), but I've never tried it.
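Putting it together, something along these lines should grab just that one page plus the files it needs (a rough sketch I haven't tested against your setup; adjust the URL and depth to taste):

wget --recursive --level=1 --no-parent --domains example.com \
     --page-requisites --html-extension --convert-links \
     --restrict-file-names=windows --no-clobber \
     http://www.example.com/website

One thing to watch for: recent wget builds complain if --no-clobber and --convert-links are combined and will quietly drop --no-clobber, so for a one-page grab you can safely leave it out.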
All of these switches are detailed on the wget page (https://www.gnu.org/software/wget/), which might explain them better than I can.