Full Version: Wisdom of crowds withers
> Wikimedia Discussion > General Discussion
Peter Damian


QUOTE
European scientists asked volunteers to estimate statistics like the population density of Switzerland. Each person got five guesses. Some were shown their peers’ answers and others weren’t. Turns out that seeing others’ estimates led to a lot of second guessing. Which narrowed the range of the group’s responses and pointed them in the wrong direction. Even worse, knowing that others said the same thing made everyone more confident they were right. So there is wisdom in numbers—as long as those numbers keep quiet ’til they’re counted.
http://www.scientificamerican.com/podcast/...=SA_DD_20110517


Full article is here. http://www.pnas.org/content/early/2011/05/...3b-e4139d4d54f1
radek
QUOTE(Peter Damian @ Wed 18th May 2011, 12:14pm) *

QUOTE
European scientists asked volunteers to estimate statistics like the population density of Switzerland. Each person got five guesses. Some were shown their peers’ answers and others weren’t. Turns out that seeing others’ estimates led to a lot of second guessing. Which narrowed the range of the group’s responses and pointed them in the wrong direction. Even worse, knowing that others said the same thing made everyone more confident they were right. So there is wisdom in numbers—as long as those numbers keep quiet ’til they’re counted.
http://www.scientificamerican.com/podcast/...=SA_DD_20110517


Full article is here. http://www.pnas.org/content/early/2011/05/...3b-e4139d4d54f1


IIRC this point is actually made several times in Surowiecki's book, though most reviews of the book that I have seen focus on the "oh wow aren't crowds smart" portion, while omitting all the caveats and limitations he discusses. Again, IIRC, he actually devotes quite a bit of attention to the situations where crowds are pretty dumb.

This is also one of the reasons why in a lot of cases committees (*cough*arbitration*cough*) don't work very well - the discussion among the committee members tends to just accentuate whatever biases there are present to begin with.

And this kind of result makes perfect sense from a statistical point of view. Recall that in Galton's original experiment, guessing the weight of the ox at the fair, all the guesses were made independently. And that's the key - for the average guess to be close to the true value (for the estimator to be unbiased) the guesses (random variables) have to be independent (I don't think the guesses need to be identically distributed, though). It's exactly the correlation between guesses (whether positive or negative), induced by discussion and the extra information of what others are guessing, and filtered through some basic cognitive/psychological biases, that makes the thing go awry.
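radek's independence point can be sketched with a quick Monte Carlo (a hypothetical illustration, not the study's actual setup): a shared error term stands in for the correlation that discussion induces, and the group average stops concentrating on the truth.

```python
import random

def average_guess(truth, n, shared_bias_sd, noise_sd=20.0, rng=None):
    """Mean of n guesses around `truth`. A shared_bias_sd > 0 adds one
    common error to every guess, mimicking correlation from discussion."""
    rng = rng or random.Random(0)
    shared = rng.gauss(0, shared_bias_sd) if shared_bias_sd else 0.0
    return sum(truth + shared + rng.gauss(0, noise_sd) for _ in range(n)) / n

def rms_error(truth, shared_bias_sd, trials=200, n=100, seed=1):
    """Root-mean-square error of the group average over many repeats."""
    rng = random.Random(seed)
    errs = [average_guess(truth, n, shared_bias_sd, rng=rng) - truth
            for _ in range(trials)]
    return (sum(e * e for e in errs) / trials) ** 0.5

truth = 1200  # say, the ox's weight in pounds
rms_independent = rms_error(truth, shared_bias_sd=0.0)   # ~ noise_sd / sqrt(n)
rms_correlated = rms_error(truth, shared_bias_sd=100.0)  # ~ shared_bias_sd
print(rms_independent, rms_correlated)
```

With independent errors the crowd's error shrinks like noise_sd/&#8730;n; the shared component puts a floor under it that no amount of averaging removes.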
It's the blimp, Frank
The problem is complicated by the fact that at Wikipedia, people offer opinions as part of a game where success depends on sucking up to the right people.
Abd
Wikipedia process grew out of a kind of "wisdom of crowds" thinking, but with no understanding of how that could go awry. The research cited here shows one problem, and there are others.

The basic neutrality policy requires true consensus, which requires that diversity be protected, or else the social effects described will tend to eliminate diversity, because diversity may be considered "disruptive." Obviously, some kinds of "diversity" disrupt process, and must be handled, but mature societies have learned how to contain this without demolishing the diversity that is necessary for true consensus.

False consensus is based on the exclusion of what does not conform to majority opinion.

Anyone game to work on wiki studies at Wikiversity? I intend to do it, following, carefully, guidelines to avoid what came down before: if "wiki studies" becomes an excuse to settle scores, to personally attack users, it will attract disruptive response. Even legitimate studies may attract some negative response, but I think Wikiversity can handle that.

How do wikis work? How do they fail? What can be designed to improve performance?

How about we revivify Topic:Wiki_science?
Peter Damian
QUOTE(radek @ Wed 18th May 2011, 7:18pm) *

And this kind of result makes perfect sense from a statistical point of view. Recall that in Galton's original experiment, guessing the weight of the ox at the fair, all the guesses were made independently. And that's the key - for the average guess to be close to the true value (for the estimator to be unbiased) the guesses (random variables) have to be independent (I don't think the guesses need to be identically distributed, though). It's exactly the correlation between guesses (whether positive or negative), induced by discussion and the extra information of what others are guessing, and filtered through some basic cognitive/psychological biases, that makes the thing go awry.


And as I argued here http://dl.dropbox.com/u/5532250/newsletter%202011-1.pdf there is a much stronger form of selection bias going on. If you deliberately select people from the crowd who consistently overestimate the weight of pigs, the result is obviously going to be skewed. Wikipedia is a crank magnet, therefore the result will be skewed towards what cranks think. It also attracts sceptics, of course, but many of these are simply another variety of crank.
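The selection-bias argument is easy to see numerically (a toy sketch with made-up numbers, not data from the linked newsletter): sample from an unbiased crowd, then keep only the self-selected overestimators, and the average shifts.

```python
import random

rng = random.Random(0)
truth = 100.0

# An unbiased crowd: guesses scattered symmetrically around the truth.
population = [truth + rng.gauss(0, 15) for _ in range(10_000)]
pop_mean = sum(population) / len(population)

# A self-selecting crowd: only the people who overestimate show up.
overestimators = [g for g in population if g > truth]
skewed_mean = sum(overestimators) / len(overestimators)
print(pop_mean, skewed_mean)
```

The full population's mean lands near the truth, while the self-selected subset's mean is pulled well above it; no amount of averaging within the skewed pool recovers the true value.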
lilburne
Wasn't this always known? That in most interactions a group dynamic takes over, which few are willing to challenge. I think I first came across it in either The Psychology of Computer Programming, or the Mythical Man Month. Turns out that it only takes one or two dissenting voices to break the cycle and for others to express their reservations about a conclusion, or policy. A number of organisations eventually arranged for meetings to have a designated Devil's Advocate.
Kelly Martin
QUOTE(lilburne @ Thu 19th May 2011, 3:56am) *

Wasn't this always known? That in most interactions a group dynamic takes over, which few are willing to challenge. I think I first came across it in either The Psychology of Computer Programming, or the Mythical Man Month. Turns out that it only takes one or two dissenting voices to break the cycle and for others to express their reservations about a conclusion, or policy. A number of organisations eventually arranged for meetings to have a designated Devil's Advocate.
The effect revealed by the study the OP mentioned is different from the group-pressure dynamic, because that dynamic requires active interaction between the members. In this study all that was provided to the experimental group was the average of the estimates made by all members in the preceding round; there was no direct interaction between the participants and the information flow was one way only. I suspect it has more to do with the psychology of uncertainty and the fact that most people are readily willing to subordinate their judgment to someone else (no matter how unreliable) whenever they are uncertain of their own reliability.
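That one-way dynamic can be sketched in a few lines (an assumed deference weight `w`, not a parameter from the paper): each round, everyone sees only the previous round's mean and moves partway toward it. The spread of guesses collapses, while the group's error does not improve at all.

```python
import random
import statistics

rng = random.Random(7)
truth = 1000.0

# Round 0: fifty independent guesses.
guesses = [truth + rng.gauss(0, 200) for _ in range(50)]
spread0 = max(guesses) - min(guesses)
error0 = statistics.mean(guesses) - truth

# Five more rounds: each person sees only the previous round's mean
# (one-way information flow, no discussion) and defers partway to it.
w = 0.6  # fraction of deference to the group mean (assumed)
for _ in range(5):
    m = statistics.mean(guesses)
    guesses = [(1 - w) * g + w * m for g in guesses]

spread5 = max(guesses) - min(guesses)
error5 = statistics.mean(guesses) - truth
print(spread0, spread5, error0, error5)
```

Because everyone shifts toward the same mean, the mean itself never moves: the range narrows by a factor of (1-w) per round, manufacturing confidence without adding accuracy, which is exactly the pattern the podcast describes.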
Abd
QUOTE(Kelly Martin @ Thu 19th May 2011, 9:13am) *
The effect revealed by the study the OP mentioned is different from the group-pressure dynamic, because that dynamic requires active interaction between the members. In this study all that was provided to the experimental group was the average of the estimates made by all members in the preceding round; there was no direct interaction between the participants and the information flow was one way only. I suspect it has more to do with the psychology of uncertainty and the fact that most people are readily willing to subordinate their judgment to someone else (no matter how unreliable) whenever they are uncertain of their own reliability.
What the study shows is that there are negative group effects that are not about "pressure," as such.

Jurors in Massachusetts are prohibited from discussing the case at all until all the evidence has been presented and deliberations start, and they are discouraged from taking "straw polls" on the innocent/guilty outcome at first. Rather, it is suggested that they start by going through the evidence and deciding on the credibility of each piece of the case.