

Has SEO ruined the web?


JavaJones:
I think the definitions are getting kind of muddied up here. Scraping sites (as opposed to "web scraping" in a generic sense, which is just a technique that *scraping sites* use) are pretty much universally bad because they get their content in whole or in large part from other sites, almost exclusively without permission, thus being not only a duplication of other content, but also a copyright issue:
http://en.wikipedia.org/wiki/Scraper_site
This is different than a content aggregator like Google News which links back to its sources, or a directory system like Open Directory, or a search engine like Bing. Scrapers are pretty much virtueless. ;)

- Oshyan
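[Editorial aside: "web scraping" in the generic sense is just programmatic extraction of another page's content, which a scraper site then republishes as its own. A minimal sketch in Python, using only the standard library and an invented, hardcoded HTML string for illustration:]

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the visible text chunks from an HTML document."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Keep only non-whitespace text between tags.
        if data.strip():
            self.chunks.append(data.strip())

# Hypothetical page content; a real scraper would fetch this over HTTP.
page = "<html><body><h1>My Review</h1><p>Original article text.</p></body></html>"

extractor = TextExtractor()
extractor.feed(page)
print(extractor.chunks)  # → ['My Review', 'Original article text.']
```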

superboyac:
--- Quote from: JavaJones on June 11, 2010, 03:35 PM ---
I think the definitions are getting kind of muddied up here. Scraping sites are pretty much universally bad because they get their content in whole or in large part from other sites, almost exclusively without permission [...]
--- End quote ---
Thanks JJ, that's a great description.

superboyac:
Man, Paul, you always have like twenty different thoughts going on at the same time.  It's hard to follow what you're trying to say.

Anyway, let's put aside all the technicalities.  I think it's pretty clear what I'm trying to say, whether or not I'm using the exact correct words.

So here is what I'm talking about:  Fine, I will accept that a computer algorithm like Google's has a hard time distinguishing my software reviews from the hundreds of meaningless software scraping sites (I don't care if I'm using the term incorrectly).  But when you go to my site (and if you are a somewhat experienced internet user), you can tell almost immediately that my articles are my own and unique to my site.  Now, if you go to download.com or one of the more worthless sites like freedownloadscenter.com, you can tell immediately that it's a piece-of-crap software site that is not going to be of much help or use.

But here is what I say to that: baloney.  Google has shown that it can do some mind-blowing stuff, to the point of virtually reading the user's mind.  So if they really wanted to, they could easily fix the problem of distinguishing sites like mine from the stupid scraper sites.  So I don't buy the excuse that Google can't tell the sites apart.  I don't buy it at all.
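[Editorial aside: the claim that duplicated content is mechanically detectable can be illustrated with a classic technique, word-shingle comparison with Jaccard similarity. This is purely an illustrative sketch with made-up sample strings, not Google's actual ranking algorithm:]

```python
def shingles(text, k=3):
    """Return the set of k-word shingles (overlapping word windows) in text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two sets: |intersection| / |union|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Invented sample texts: an "original" review, a scraped copy, and an unrelated page.
original  = "my unique hand written review of this software"
scraped   = "my unique hand written review of this software copied here"
unrelated = "a totally different page about cooking recipes"

print(jaccard(shingles(original), shingles(scraped)))    # → 0.75 (near-duplicate)
print(jaccard(shingles(original), shingles(unrelated)))  # → 0.0 (distinct content)
```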

I mean, think about it.  In Photoshop CS5, they've added the unbelievably amazing "Content-Aware Fill" feature, where the computer will seamlessly fill in parts of a photo by analyzing the rest of the photo.  It can automatically remove trees, remove background clouds, etc.  It's quite shocking...it floored me.  If we can do that with computer technology, I'm certain Google can fix all these SEO things I'm talking about.

But they won't, because they don't want to.  Why don't they want to?  I don't know for sure, but it definitely has to do with money.  SEO is about search results.  There is cutthroat competition to be at the top of these search results.  How do you get there?  Ads, clicking, all that stuff we hate.  But all that stuff is what makes Google 99% of their billions of dollars.  So there is absolutely no motivation for Google to do something that would drastically reduce their profits.  Now, it's not Google's fault per se.  It's the fault of all the scraper-site authors who are taking advantage of the rules Google has set in place.  But if Google really wanted to, they could fix it.  Just like antivirus companies constantly and instantly update their databases as new threats emerge.

So it's not that Google can't fix it...it's that they won't fix it.

Paul Keith:
The thing is, though, the scraper sites that actually fit the article JavaJones linked to are pretty rare nowadays, relative to everything else that can be said to be ruining the web through SEO.

Even the article admits these kinds of sites are being eliminated by search engines.

The bigger fish are the "aggregators", and whatever sits on the fine line between aggregators and scraping sites.

I hate to say this, but your experience with the internet and your technical knowledge may be blinding you a bit. To a casual surfer, especially one who doesn't know the de facto popular web sites, the only difference is that your site seems to have fewer programs than Download.com.

That holds even if you include experienced internet users.

The only difference between experienced internet users and inexperienced ones is that the experienced ones have found out, through experience, which sites to avoid. They don't know which sites are good; they know which sites have a bad reputation.

If experienced internet users truly knew the distinction, search engine ranking would itself be unnecessary, since all anyone would need to do is check the top sites in an aggregator. But it's not that simple.

This is how sites like Download.com and freedownloadscenter.com stay afloat. They toe the line, but they aren't scraper sites in the sense JavaJones is referring to, because they link back to the publisher's site.

It's a huge gray area. As far as ruining the web goes, it still comes down to the same thing: experience teaches you which sites to avoid, and inexperienced users don't really know the difference, so whichever link appears first still wins. Scraper sites nowadays mostly only show up for extremely niche search terms, and that's largely because people don't know that aggregators are much better for finding reliable rare software, and that beyond that point everything should be checked on several forums rather than through a search engine, unless it's a document search.

As for having twenty different thoughts, I apologize. Maybe it's just my desire to clarify, or to verify my own lack of knowledge, but I feel that the only way a better alternative can come about is if people discuss these issues at this level of clarification...especially from an over-simplified philosophical stance that even inexperienced internet surfers can easily pick up if they stumble upon the topic.



superboyac:
I am not able to follow what is happening here.  I don't even remember what my original question was.
