My idea is a search engine scraper.
It works like this:
You come to a web page with a search box (like Google's). You type in a URL and click the "Scrape SERPs" button.
The web app would then visit the SERP page and scrape all the result links. It would follow the links to the next SERP pages and do likewise until it reaches the depth you set.
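The "follow the next SERP pages until depth" step above could be sketched like this. Note this is only a sketch under assumptions: the `&start=` paging parameter and the 10-results-per-page count are placeholders, since every search engine pages its results differently.

```php
<?php
// Build the list of SERP URLs to visit, up to the depth the user set.
// ASSUMPTION: the engine pages with an "&start=N" offset parameter
// and shows $perPage results per page - adjust for the real engine.
function serp_urls(string $baseUrl, int $depth, int $perPage = 10): array {
    $urls = [];
    for ($page = 0; $page < $depth; $page++) {
        $urls[] = $baseUrl . '&start=' . ($page * $perPage);
    }
    return $urls;
}
```

For example, `serp_urls('https://example.com/search?q=php', 3)` would give the first three pages of a (hypothetical) results URL; the spider would fetch and scrape each one in turn.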
So: a spider that visits SERP pages and scrapes all the result links, then saves them in the website's database under your member username. Other members can see what you scraped by searching for your username, and you can do the same with theirs.
The scraper would capture not only the links but also their anchor texts, page titles, meta keywords, and meta descriptions.
In other words: a search engine scraper, as a web app, built with PHP.
Anybody who builds this could do the community a favour by releasing the source code here under the GPL so we can all learn from it. I am a PHP student, and I reckon cURL is good for the job.
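To give a feel for the cURL approach, here is a minimal sketch of the fetch-and-scrape part. It is not a finished scraper: the user agent string is made up, the parsing uses PHP's built-in DOMDocument on whatever HTML comes back, and real search engines vary their markup and often block automated requests, so treat the selectors as placeholders.

```php
<?php
// Fetch a page over HTTP with cURL.
function fetch_page(string $url): string {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // return body as a string
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);   // follow redirects
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (SERP scraper sketch)');
    $html = curl_exec($ch);
    curl_close($ch);
    return $html === false ? '' : $html;
}

// Pull out the fields the idea calls for: links with their anchor
// texts, the page title, meta keywords, and meta description.
function parse_page(string $html): array {
    $doc = new DOMDocument();
    @$doc->loadHTML($html);             // '@' suppresses warnings on messy HTML
    $xpath = new DOMXPath($doc);

    $links = [];
    foreach ($xpath->query('//a[@href]') as $a) {
        $links[] = [
            'href'   => $a->getAttribute('href'),
            'anchor' => trim($a->textContent),
        ];
    }

    $title = $xpath->query('//title')->item(0);
    $kw    = $xpath->query('//meta[@name="keywords"]/@content')->item(0);
    $desc  = $xpath->query('//meta[@name="description"]/@content')->item(0);

    return [
        'links'       => $links,
        'title'       => $title ? trim($title->textContent) : '',
        'keywords'    => $kw   ? $kw->value   : '',
        'description' => $desc ? $desc->value : '',
    ];
}
```

From there the app would loop `parse_page(fetch_page($url))` over each SERP URL and insert the results into the database under the member's username.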
If you like this idea, give it a thumbs up!
Just imagine: you could scrape any search engine with this.
I have built a .exe version. If anyone builds a .php version, I am willing to trade: I'll give you a copy of mine if you give me the .php copy, along with comments so I can learn from your code.