Messages - SomebodySmart

UrlSnooper / Cannot find network adapter
« on: November 26, 2015, 07:49 AM »
I installed the latest update and tried to run URL Snooper 2, but it says it cannot find a network adapter.

General Software Discussion / Re: Is there software for this?
« on: June 15, 2015, 06:17 PM »
So... what are you doing, anyway?

I'm building a genealogy website that will help people trace their family trees. There's a lot of genealogical information in obituaries. I cannot copy the obituaries onto my new website for obvious copyright reasons, but I can help people find those obituaries on the funeral home and newspaper websites for free.

General Software Discussion / Re: Is there software for this?
« on: June 14, 2015, 08:49 PM »
Excellent! It looks like now I'll be able to build hyperlinks to every obituary on every website built by ! Thanks.

As for GreaseMonkey, I don't know anything about that, but I do use curl and my home-made Python 3.2 programs.

Go to (and increment the final number)

The last page number is also stored within the source:

Code: HTML

<input type="hidden" id="totPages" value="83" />
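
As a minimal sketch of that approach (the listing URL was stripped from the posts above, so PAGE_BASE below is just a placeholder), something like this could be pasted into the browser console to read that hidden field and print one URL per page, ready to feed to a curl or wget loop:

Code: Javascript

// Sketch only: enumerate every listing page from the hidden "totPages" input.
// PAGE_BASE is a placeholder -- substitute the real listing URL.
var PAGE_BASE = "http://example.com/obituary-listing/";

// The last page number comes from the hidden input shown above.
var totPages = parseInt(document.getElementById("totPages").value, 10);

// Build one URL per page and print them, one per line.
var urls = [];
for (var i = 1; i <= totPages; i++) {
  urls.push(PAGE_BASE + i);
}
console.log(urls.join("\n"));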

This could probably be done with a GreaseMonkey script that cycles through each page, grabs the links, and at the end displays a page with all of them, which could then be saved using Save Page As ...

Just messing around, this is a heavily modified site scraper from

Currently it will start at the URL @ayryq mentioned above and load every page up to the last one (requires GreaseMonkey, naturally) at a rate of about one every 3 seconds. It also grabs all the URLs from each page, but as I haven't worked out how to store them yet, they get overwritten on each page load.
Code: Javascript

// ==UserScript==
// @name Get The Deadites
// @namespace
// @include *
// ==/UserScript==

/*
 * Much modified from the original script for a specific site
 */

function loadNextPage(){
  var url = "";   // base listing URL (stripped from this post)
  var num = parseInt(document.location.href.substring(document.location.href.lastIndexOf("/") + 1), 10);
  if (isNaN(num)) {
    num = 1;
  }
// If the counter exceeds the max number of pages we need to stop loading pages, otherwise we go Energizer Bunny.
  if (num < maxPage) {
    document.location = url + (num + 1);
//  } else {
// Reached last page, need to read LocalStore using JSON.parse
// Create document with URLs retrieved from LocalStore and open in browser, user can then use Save Page As ...
  }
}

function start(newlyDeads){
// Need to get previous entries from LocalStore (if they exist)
//  var oldDeads = localStorage.getItem('obits');
//  if (oldDeads === null) {   // getItem returns null when there is no previous data, so just store the new stuff
//    localStorage.setItem('obits', JSON.stringify(newlyDeads));
//  } else {
// Convert to object using JSON.parse
//    var tmpDeads = JSON.parse(oldDeads);
// Merge oldDeads and newlyDeads - the merged result is stored in the first object argument passed
//    m(tmpDeads, newlyDeads);
// Save back to LocalStore using JSON.stringify
//    localStorage.setItem('obits', JSON.stringify(tmpDeads));
//  }

/*
 * Don't run a loop; better to run a timeout sort of function.
 * Will not put load on the server.
 */
  var timerHandler = window.setInterval(function(){
    window.clearInterval(timerHandler);            // fire once, then hand off to the timeout
    window.setTimeout(loadNextPage, 2000);
  }, 1000); // this is the time allowed for your next page to load
}

//
// function m(a,b,c){for(c in b)b.hasOwnProperty(c)&&((typeof a[c])[0]=='o'?m(a[c],b[c]):a[c]=b[c])}

var maxPage;
var records = document.getElementsByTagName("A");      // Grab all Anchors within the page
//delete records[12];                                  // Need to delete "Next" anchor from the object (property 13)
var inputs = document.getElementsByTagName("INPUT");   // Grab all the INPUT elements
maxPage = parseInt(inputs[2].value, 10);               // Maximum pages is the value of the third INPUT tag (the hidden "totPages" field)
start(records);

The comments within the code are what I think should happen, but I haven't tested it yet (mainly because I can't code in Javascript ... but I'm perfectly capable of hitting it with a sledgehammer until it does what I want ... or I give up  :P ).

Someone who actually does know Javascript could probably fill in the big blank areas in record time.
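
For what it's worth, here is an untested sketch of how those blank areas might be filled in: accumulate hrefs in localStorage on each page load, then dump them all on the last page. The 'obits' key comes from the comments above; the function names and everything else are assumptions:

Code: Javascript

// Untested sketch of the commented-out storage logic above.
// Assumes the same 'obits' localStorage key named in the comments.

// Called on every page load with that page's anchor collection.
function storeLinks(newlyDeads) {
  var old = localStorage.getItem('obits');
  var obits = (old === null) ? {} : JSON.parse(old);  // getItem returns null on first run
  for (var i = 0; i < newlyDeads.length; i++) {
    var href = newlyDeads[i].href;
    if (href) {
      obits[href] = true;   // object keys de-duplicate repeats for free
    }
  }
  localStorage.setItem('obits', JSON.stringify(obits));
}

// Called once the last page is reached: replace the page body with one
// link per stored URL so the result can be kept with Save Page As ...
function dumpLinks() {
  var obits = JSON.parse(localStorage.getItem('obits') || '{}');
  var html = '';
  for (var url in obits) {
    html += '<a href="' + url + '">' + url + '</a><br>';
  }
  document.body.innerHTML = html;
}

In the script as posted, storeLinks(records) would stand in for the storage comments at the top of start(), and dumpLinks() would go in the commented-out else branch of loadNextPage().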

General Software Discussion / Re: Is there software for this?
« on: June 13, 2015, 07:00 PM »
I looked at Teleport Pro but it doesn't look like it will be able to scan and download the output of scripts, just static pages.

Well, there are a few programs designed to "spider" a page and download all linked pages, images, etc.

One well-known one is "Teleport Pro", but there are others.

General Software Discussion / Is there software for this?
« on: June 13, 2015, 01:37 PM »
I go to and there's a list of twelve obituaries.

Each has a URL that is in the HTML code and is easy to capture, and the target file is easy to curl or wget, but I want ALL the hundreds of URLs to individual obituaries and I don't want to do the work by hand. The NEXT button actually produces a list of the next twelve, but the VIEW SOURCE function still lists the first twelve in the source code. Now, is there a product that will download and capture everything, one page at a time, so I can leave the machine on auto-pilot?
