

Has SEO ruined the web?


higherstate:
Hehe, this is a can of worms.

Personally I don't know what I would do without search engines. It wasn't that long ago that if you wanted to do any kind of research you had to make a trip to a library & hope they had the relevant books, and then hope those had a decent index (or read the whole book) to find what you wanted.

Now we have the luxury of just going to Google and doing some searches & in an instant finding what we want. I would say that it is extremely rare that I can't find exactly what I want when doing research (this of course depends on knowing exactly what you want).

If you are not getting the results you want in Google then use the advanced search options & be very specific. Google is constantly changing its algorithms and recently completed a major update called Caffeine, which it says indexes much faster and returns more relevant results than the old version. Who knows if that is true, but you certainly have more options.
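To give a concrete idea of "being specific" (the search terms below are just made-up placeholders; the operators themselves are standard Google syntax):

    "portable text editor"            quotes force an exact-phrase match
    video converter -torrent          a minus sign excludes a term
    site:en.wikipedia.org caffeine    site: restricts results to one domain
    filetype:pdf seo basics           filetype: limits results to a file type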

I would also like to make the point that Wikipedia is one of the most heavily SEO'd sites out there. If you want to know how to set up your website to be loved by the search engines, just take a look at how they do it: descriptive page titles, clean human-readable URLs, and dense internal linking.

Also, a forum like this is, by its very nature, heavily SEO'd. Google loves large sites with ever-changing, always-updated content, i.e. a forum, and the software this forum runs on will have SEO built into it.

SEO'ing is just like advertising in the offline world: if you don't advertise then no-one will know your product exists. That is not to say that the world is better or worse for it; it is just business. The difference is that you don't need a large amount of money, so David can take on Goliath. It has levelled the playing field to a large extent.

I think that at the end of the day, the best sites will always come out on top: think Facebook, Wikipedia, newspaper sites, etc. I would also say that running these kinds of heavy content sites takes a huge amount of time, effort, and staff, i.e. they need to be making money in some way. If the content is free then that money usually comes via advertising. YouTube lost something like 400 million last year; this year I suspect it will make 400 million. The difference is that they have implemented advertising. One is sustainable, one is not.

superboyac:
I guess I learned a term here: scraping.  It's not SEO that ruined the web, it's the scraping sites: the ones that copy other sites' content into a new website.  That's the worst.  That's the problem I'm referring to.  I'm fine with SEO, in theory.

Paul Keith:
Wikipedia is a poor example because it's automatically given precedence over anything else.

To understand how badly Wikipedia is over-ranked, test it on a pop culture entry with a more detailed Wikia entry. Wikipedia still wins out.

Imagine if these were more scholarly entries. Most of the time you'd have to split your searching between Google Scholar and the main search just to narrow the content down to a simplified but educated link and get even a casual understanding of a topic you know nothing about; hence Wikipedia is the easy cop-out.

Any other personal page that copies Wikipedia's model is bound to fail for the simple reason that it doesn't have as much leverage on the cop-out factor. Examples: HubPages and Squidoo pages, etc.

In the end, it goes back to authority + fame. In that sense, it's much easier to work around the Twitter, Facebook, LinkedIn, or YouTube model than it is to copy Wikipedia's model, because any new player in the SEO arena isn't going to usurp the tried-and-true reputation of an encyclopedia the way Wikipedia did, no matter the quality of their content. Even highly respected and well-written websites can't match the Wikipedia model unless they are already linked to someone or some concept with prestige, like a web service, dictionary, professional magazine, online newspaper, etc.

@superboyac,

I don't really follow your conclusion.

The Superior Software List (for Windows) is in itself a scraping site. The only questionable area is how much is being copied.

Still, you're copying the content of a software title or the theme of a general review site. How then can you conclude that scraping is the worst thing ruining the web, when that is only slightly different from what your site is doing?

I can understand if you say blatant plagiarism is bad, but scraping?

Not only does that get penalized by Google if it's a blatant copy, but scraping also helps people better gauge the notability and quality of anything on the web, as you yourself tried to demonstrate with your site.

superboyac:
I don't know what you are saying, PK.  Maybe I'm not using the word scraping well.  My site is definitely not scraping anything.  Everything there is my own content that I have put a lot of thought into.

Scraping is when I search for something on Google, like the top ten movies of 2010, and I get 5 pages of different websites with pretty much the same content.  The same movies in the same order with the same paragraphs, just at different website addresses.  That's what I'm talking about.

Or the hundreds of software review sites that list tons and tons of software, with generic descriptions that are generated automatically somehow.  And they don't help the user at all in finding what he is looking for.  The categories are not consistent.  Oftentimes you are looking for a particular kind of video software, for example, but the site just lists all the software that has anything to do with video, and no matter what, VLC will be at the top.  Stuff like that is what I'm talking about.  It's ruining the web because it's impossible to find anything good.

Paul Keith:
No, that's much closer to plagiarism or backlinking. (And Google penalizes both, although the latter gets fixed so slowly that anyone can keep SEO'ing it for much longer.)

Scraping, from my understanding, is just that: taking content and posting it to another website as a collection, a set, or a link collection. (The amount, the quantity, the content... it's really up to the person's taste.)
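To make the mechanics concrete, here's a minimal sketch in Python of what a scraper actually does; the URL and the pattern are hypothetical, purely for illustration:

    # Minimal illustration of scraping: fetch a page, extract its list
    # items, and reprint them as if they were your own content.
    import re
    import urllib.request

    url = "http://example.com/top-ten-movies-2010"  # hypothetical source page
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")

    # Grab the text of every <li>; a scrape site would republish these
    # under its own domain, usually with ads wrapped around them.
    for item in re.findall(r"<li>(.*?)</li>", html, re.DOTALL):
        print(re.sub(r"<[^>]+>", "", item).strip())  # drop any inner tags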

--- Quote from: superboyac ---
Or the hundreds of software review sites that list tons and tons of software, with generic descriptions that are generated automatically somehow.  And they don't help the user at all in finding what he is looking for.  The categories are not consistent.  Oftentimes you are looking for a particular kind of video software, for example, but the site just lists all the software that has anything to do with video, and no matter what, VLC will be at the top.  Stuff like that is what I'm talking about.  It's ruining the web because it's impossible to find anything good.
--- End quote ---

See, the problem with that definition is that it relies on your opinion of what counts as generic.

Granted, the low-quality sites are pretty obvious, but how do you differentiate The Superior Software List from Download.com from Fileforum at the generic, categorical level that search engine spiders operate at?

From a personal user level, or even a review level, it's very easy. However, at the grand macro level of search value, they're almost identical.

You could, for example, take two different pieces of content talking about the same program, but at the generic level the end result is just to convince the searcher that you are among the hundreds of software reviewers who praise this specific program.

Sure, the content can be unique in the sense that you wrote it, but if the general theme is the same as every other positive reviewer's on the internet, it's really no less generic than having someone's spidered description praising the very same program.

...And it gets more generic as the program becomes more popular.

Let's use simplespark.com as a more concrete example.

Is Simple Spark useful or useless?

Well, before that: does it fit your definition of a scraping site with generic descriptions? The answer would be yes.

But is it useful?

The answer is also yes, especially earlier on.

Why?

Because there aren't that many copies of Web 2.0 search-engine services, even today.

It's easy to spot the popular services, but the rarer ones like ProtoPage you often find much earlier, before the blogs start reviewing them and scraping them into "Alternative services to Netvibes" lists.

It is only ruining the web via SEO in the sense that there have been lots of copycats.

...but among those copycats, there are still sites that aim for a much more useful goal, like your site does.

The question is, how do you separate the fluff from the value when the value of a search engine ranking is itself so vague?

Granted, Google could do a better manual job of fixing things, but generally it's not just Google. It's DuckDuckGo. It's Bing. It's even pseudo-human-powered search engines like Mahalo.

At the end of the day, though, if there isn't at least one semi-credible scrape site, the web is worse off, because it becomes much harder to discover these cool, quality, lesser-known apps.

By that very same token, these scrape sites are ruining the web no more than malware sites are, maybe even less, because they're the easiest to filter out. Sure, they still sucker and annoy casual surfers, but... it's also much easier now to see that sites like Digg, Reddit, Mixx, Propeller, Delicious, Diigo, Twitter, Facebook... even Wikipedia... are just slightly less bogged-down scraping sites for discovering things, because the idea of a scraping site evolved. It picked up up/down vote buttons, wiki-style pages anyone can improve, or individually filled public bookmarks.

In the grand scheme of things, they don't go so far as to ruin the web, precisely because they are scraping sites where all the crap is confined to one URL, unlike all those marketing- and SEO-dominated crap sites with tons of backlinks to their own self-made little Twitter/Facebook/Squidoo/HubPages/YouTube/linkbait sets of channels.
