Full Version: Your bio = Your problem
> Wikimedia Discussion > General Discussion
KamrynMatika
QUOTE(Thomas Dalton)
Google's rankings are Google's responsibility. Wikipedia has never
worried about search engine rankings, we just do what we think best
and let the search engines do what they think best. Fortunately, those
generally coincide, so Wikipedia has very high rankings, but we don't
make any special effort to achieve those rankings. We also don't make
any special effort to get rid of them.1

QUOTE(Ray Santoinge)
This appears to be sound reasoning. Basing our activity on "What does
Google think?" is a cyber-equivalent to letting your home decisions be
guided by what the neighbours think. It relinquishes control to outside
elements who have no vested interest in your efforts.2


What a great way for them to relinquish responsibility!
Unrepentant Vandal
In a sense I agree with them. As I tried to get across in my recent PageRank thread, Google's PageRank algorithm will fail because it creates positive feedback once Google is popular.

But while Google is the definitive search engine, they have some responsibility to consider it.
Kato
QUOTE(KamrynMatika @ Sat 8th September 2007, 7:05pm) *

QUOTE(Ray Santoinge)
This appears to be sound reasoning. Basing our activity on "What does
Google think?" is a cyber-equivalent to letting your home decisions be
guided by what the neighbours think. It relinquishes control to outside
elements who have no vested interest in your efforts.2


What a great way for them to relinquish responsibility!

That's Ray Santoinge getting all Randian on us all of a sudden. By the way, my home decisions are guided by what the neighbours think. It's a process of mutual respect in order to live in a viable society. If I didn't care what the neighbours thought, and vice versa, then we'd be at constant loggerheads.
Somey
We've seen this line of reasoning before, of course... To their credit, there are also some people who are much more realistic about it, and who realize that it's actually in their interests to make certain pages less visible via Google (so as to reduce the amount of edit-warring, spamming and so on that takes place on them). I mean, it's not like they're trying to justify an increase in their advertising rates, is it?

I've still got it on my own to-do list to write a MediaWiki extension that would add "rel=noindex" to specific pages on an ad hoc basis, but things have been a little hectic in Someyland lately... By the time I work out how to do it, someone will probably have beaten me to the punch!

And yes, if I didn't mow my lawn for three straight months, I'd say the neighbors would have every right to get a little pissed off at me.

Also, can we change the thread title? People might get the wrong idea, if you know what I'm sayin'. smiling.gif
Morton_devonshire
QUOTE(Somey @ Sat 8th September 2007, 6:21pm) *

I've still got it on my own to-do list to write a MediaWiki extension that would add "rel=noindex" to specific pages on an ad hoc basis...

If you show me how to do it, I would be happy to insert it manually onto choice pages, including Murphy's. ~~~~The Mort
Unrepentant Vandal
QUOTE(Somey @ Sat 8th September 2007, 7:21pm) *

And yes, if I didn't mow my lawn for three straight months, I'd say the neighbors would have every right to get a little pissed off at me.


That's taking things a bit far you know...
Somey
QUOTE(Morton_devonshire @ Sat 8th September 2007, 1:26pm) *
QUOTE(Somey @ Sat 8th September 2007, 6:21pm) *
I've still got it on my own to-do list to write a MediaWiki extension that would add "rel=noindex" to specific pages on an ad hoc basis...
If you show me how to do it, I would be happy to insert it manually onto choice pages, including Murphy's.

Good, thanks! Though presumably it would have to require admin rights...

I mean, one really easy way to do it would simply be to disable the code in the WikiML text parser that removes certain types of "raw" HTML tags from each post (incl. <rel="noindex">) while allowing formatting tags to remain in place. Or even just make a substitution tag available that could be added via a protected template or parser function, if they want to get all jiggy with it... But that would mean the "power" would be available to everyone, and they wouldn't like that, now would they?

Still, it's like such a simple thing, and it would help solve (or at least alleviate) so many problems - it's no wonder they won't do it!
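[For illustration, a rough sketch of the kind of extension being described here. This is not code from the thread: the hook name and `setRobotPolicy()` call assume a reasonably modern MediaWiki API, and the page list and function name are made up.]

```php
<?php
// NoIndexPages.php -- hypothetical sketch of an extension that marks
// specific pages "noindex" on an ad hoc basis.

// Illustrative list of pages to keep out of search indexes.
$wgNoIndexPages = array( 'Some Article', 'Another Article' );

$wgHooks['BeforePageDisplay'][] = 'wfNoIndexBeforePageDisplay';

function wfNoIndexBeforePageDisplay( $out, $skin ) {
	global $wgNoIndexPages;
	// If this page is on the list, MediaWiki emits
	// <meta name="robots" content="noindex,follow"> in the HTML head.
	if ( in_array( $out->getTitle()->getPrefixedText(), $wgNoIndexPages ) ) {
		$out->setRobotPolicy( 'noindex,follow' );
	}
	return true;
}
```

Gating the list behind a server-side config variable (rather than wikitext) would keep the "power" away from ordinary editors, which is the objection raised above.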
Daniel Brandt
I thought the "rel=???" got inserted in particular links in a page, and the "rel=nofollow" (which is how it's generally used) tells the search engine to avoid juicing up the linked target page on the basis of this particular link. The "noindex" that is intended to prevent a bot from indexing an entire page is something that goes into the header of that page: <META NAME="ROBOTS" CONTENT="NOINDEX">

You can also use NOFOLLOW or NOARCHIVE (i.e., no cache link) in that META command, but I believe it has to be in the header of the target page.

Even if the "noindex" worked inside of a link, you'd have to find every last link on the web and insert it in order to keep the target page from getting indexed. That's impossible to do, which is why I don't think that's how it works.

It seems to me that you'd have to be a developer with root access to be able to stick a META command in a page header.
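[The distinction drawn above can be shown side by side; the URL is a placeholder:]

```html
<html>
<head>
  <!-- Page-level: goes in the <head> of the page you want kept out of
       the index (and, with NOARCHIVE, out of the cache) -->
  <meta name="robots" content="noindex,noarchive">
  <title>Example page</title>
</head>
<body>
  <!-- Link-level: rel="nofollow" only tells the crawler not to pass
       ranking credit through this one link; it does not de-index the
       target page -->
  <a href="http://www.example.com/target" rel="nofollow">a link</a>
</body>
</html>
```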