My father always wants to save complete copies of webpages he comes across. Sometimes he has okay luck with the "Webpage, complete" option in the browser's Save As dialog. But, some caveats from my own experience: for relatively small webpages -- say pages with only a handful of supporting files (.js, .gif, .png, etc.) -- it works pretty well, although dad is technically challenged enough that he's often not sure where he saved the files, and he forgets that the subfolder of supporting files has to stay in the same directory as the HTML page (dad is 78). For progressively more "involved" pages, it doesn't work as well.
Over the years, I've found HTTrack to work well. The catch is that it has lots of options (for instance: how many levels deep do you want to recurse when pulling files, to extract links that may be buried way down in a website's hierarchy?). HTTrack works pretty well if you're patient (it can take hours to download a complete website) and if you configure things reasonably before you press "Go".
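For the command-line inclined, the depth settings described above can be passed directly. A rough sketch -- example.com and the output paths here are placeholders, and this assumes HTTrack (and, for the alternative, wget) is installed and can reach the network:

```shell
# Illustrative only: example.com and ./mirror are placeholders.

# HTTrack: mirror a site, following links at most 3 levels deep (-r3),
# saving everything under ./mirror (-O sets the output directory).
httrack "https://example.com/" -O ./mirror -r3

# wget can do something similar: --page-requisites grabs the images,
# CSS, and JS each page needs; --convert-links rewrites links so the
# saved copy works offline; -l 2 limits recursion to 2 levels;
# --no-parent keeps it from wandering up out of the starting section.
wget --recursive -l 2 --page-requisites --convert-links \
     --no-parent "https://example.com/"
```

Either way, the depth flag is the main knob: too shallow and you miss buried pages, too deep and the download can run for hours.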
There have been products that try to take some of the complexity out by letting you capture a web page in its entirety (images, links, text), such as Surfulator and Evernote Web Clipper, but these, I think, use database backends rather than saving a page's elements out as separate constituent files.
So, what sort of webpages are you saving -- personal websites, business websites, or perhaps hobbyist sites? (Dad wants all these gun-enthusiast pages/sites saved in their entirety, then buries them three subfolders deep on his ... Desktop --> C:\Users\Paul\Desktop\Saved Pages\Guns\GunDigest\Sept08\... you get the idea -- and forgets them for all eternity.)
So give us your short- and long-term objectives in saving these, and also whether they're always just pages, or sometimes entire sites or sections thereof.