Full Version: Modelling Wikipedia Promotion Decisions
Peter Damian
Following Eric's excellent post here http://wikipediareview.com/index.php?s=&sh...ndpost&p=254746 and a good link given by Jon, http://en.wikipedia.org/wiki/User:Reagle/B...n_Reading_Group, can I suggest a discussion of the Burke and Kraut paper here:

http://www.cs.cmu.edu/afs/cs.cmu.edu/user/...iaPromotion.pdf

It's a description, from the digerati POV, of the RfA process. As many of us have a 'real world' view of this, it may be of interest.

I've scanned through it - the main problem I'm seeing so far is that they omit the concept of 'butt snorkelling', a.k.a. a moral compass covered in smelly brown bits. Is this a valid academic concept? Could it be computer modelled?
Peter Damian
They give the list of requirements from the 'Guide to RfA'. The problem, as others have observed with regard to Reagle's work on the subject, is that what Wikipedia explicitly lists as the policy isn't the real, implicit policy.

QUOTE
• Varied experience. RfAs where an editor has mainly contributed in one way (little editing of articles, or little or no participation in [Articles for Deletion], or little or no participation in discussions about Wikipedia policies and processes, for example) have tended to be more controversial than those where the editor's contributions have been wider.

• User interaction. Evidence of you talking to other users, on article talk or user talk pages. These interactions need to be helpful and polite.

• Trustworthiness. General reliability as evidence that you would use administrator rights carefully to avoid irreversible damage, especially in the stressful situations that can arise more frequently for administrators.

• Helping with chores. Evidence that you are already engaging in administrator-like work and debates such as RC Patrol and articles for deletion.

• High quality of articles. A good way to demonstrate this is contributing to getting articles featured, although good articles are also well-regarded.

• Observing consensus. A track record of working within policy, showing an understanding of consensus.

• Edit summaries. Constructive and frequent use of edit summaries is a quality some RfA contributors want to see. Some expect use of edit summaries to approach 100% of the time.

Kelly Martin
QUOTE(Peter Damian @ Sat 2nd October 2010, 7:35am) *

They give the list of requirements from the 'Guide to RfA'. The problem, as others have observed with regard to Reagle's work on the subject, is that what Wikipedia explicitly lists as the policy isn't the real, implicit policy.
Note that the goal of this research was to identify which, if any, of the "official" criteria are actually used in promotion decisions. The proxies they identified that correspond to the official criteria for promotion are only mildly predictive of promotion; a significant portion of the variability is not explained by them. They correctly noted that involvement in "conflict resolution" actually lowers likelihood of passing.

Overall, one of the better studies I've seen about Wikipedia behavior, although I won't vouch for the statistical validity of their methods.
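
For anyone curious what a model like theirs boils down to, here is a minimal sketch in Python. The feature names and all the numbers are invented, and the paper's actual model is richer than this, but the 'policy capture' idea is essentially a regression of RfA outcomes on per-candidate activity counts:

CODE
# Toy sketch of a 'policy capture' model: regress RfA outcomes on
# per-candidate activity counts. All names and numbers here are
# invented; the paper mines real proxies from edit histories.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: article edits, AfD comments, policy-page edits,
# user-talk edits, dispute-resolution edits (log-scaled counts).
X = np.log1p(np.array([
    [4000, 120, 300,  800,  40],   # promoted
    [9000, 300, 150, 1200,   5],   # promoted
    [1500,  10,  20,  100,  90],   # not promoted
    [ 700,   5,   5,   60,  50],   # not promoted
    [5000, 200, 400,  900,  10],   # promoted
    [2000,  15,  30,  150, 120],   # not promoted
]))
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = RfA succeeded

model = LogisticRegression().fit(X, y)

# The sign of each coefficient says whether a criterion actually
# predicts promotion; on real data, a negative weight on
# dispute-resolution activity would mirror their finding that
# 'conflict resolution' hurts.
for name, coef in zip(
        ["articles", "afd", "policy", "user_talk", "disputes"],
        model.coef_[0]):
    print(f"{name:10s} {coef:+.3f}")

On the real data, of course, the interesting number is how much of the variability a model like this leaves unexplained - which, as I said, is a lot.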
Peter Damian
QUOTE(Kelly Martin @ Sat 2nd October 2010, 3:45pm) *

QUOTE(Peter Damian @ Sat 2nd October 2010, 7:35am) *

They give the list of requirements from the 'Guide to RfA'. The problem, as others have observed with regard to Reagle's work on the subject, is that what Wikipedia explicitly lists as the policy isn't the real, implicit policy.
Note that the goal of this research was to identify which, if any, of the "official" criteria are actually used in promotion decisions. The proxies they identified that correspond to the official criteria for promotion are only mildly predictive of promotion; a significant portion of the variability is not explained by them. They correctly noted that involvement in "conflict resolution" actually lowers likelihood of passing.

Overall, one of the better studies I've seen about Wikipedia behavior, although I won't vouch for the statistical validity of their methods.


Are we reading the same paper? The goal of the paper, as stated in the abstract, was to build a model that predicts who will succeed at RfA, and to use 'policy capture' to determine how well the stated policy matches actual promotion decisions. As I read the paper, there are 'similarities and differences' - some aspects of policy are captured, others aren't (I may be wrong, I didn't read it that carefully).

The silliest aspect of the paper was the idea of using the model to build an 'adminfinderbot'.
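
To spell out what they're proposing: the 'bot' would be nothing more exotic than running a fitted model of that sort over every active editor and ranking them by predicted pass probability. A toy sketch, with editor names and counts invented and the same kind of logistic model as above:

CODE
# Toy 'adminfinderbot': rank editors by predicted probability of
# passing RfA. Everything here is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Refit the same kind of toy model as in the earlier sketch.
X = np.log1p(np.array([
    [4000, 120, 300,  800,  40], [9000, 300, 150, 1200,   5],
    [1500,  10,  20,  100,  90], [ 700,   5,   5,   60,  50],
    [5000, 200, 400,  900,  10], [2000,  15,  30,  150, 120],
]))
y = np.array([1, 1, 0, 0, 1, 0])
model = LogisticRegression().fit(X, y)

# Hypothetical active editors and their activity counts.
candidates = {
    "EditorA": [6000, 250, 350, 1000,   8],
    "EditorB": [1200,  12,  25,  130,  70],
    "EditorC": [8000, 180, 500, 1500,   3],
}
scores = {name: model.predict_proba(np.log1p(np.array([feats])))[0, 1]
          for name, feats in candidates.items()}
for name, p in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: predicted pass probability {p:.2f}")

Which tells you who resembles past successful candidates, not who would make a decent admin. Hence the silliness.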
Kelly Martin
QUOTE(Peter Damian @ Sat 2nd October 2010, 10:02am) *

Are we reading the same paper? The goal of the paper, as stated in the abstract, was to build a model that predicts who will succeed at RfA, and to use 'policy capture' to determine how well the stated policy matches actual promotion decisions. As I read the paper, there are 'similarities and differences' - some aspects of policy are captured, others aren't (I may be wrong, I didn't read it that carefully).

The silliest aspect of the paper was the idea of using the model to build an 'adminfinderbot'.
That's because it comes from CMU. Surely you're familiar with the sort of work that goes on there; much of what they do focuses on robotics, automation, artificial intelligence, and statistical decision engines. This is right up that alley.

I did read the paper, and at several points they noted that factors that Wikipedia claims increase one's chance of passing are either neutral or negative predictors of passing. In addition, the authors are clearly skeptical of Wikipedia's claims about its own processes; they're not suffering from Reagle's unwillingness to see past the pretty wrapper. They even used the term "level up" to refer to seeking adminship, and they consistently refer to voters as "voters", with a footnote commenting on how Wikipedians refuse to call them that, but they're going to use that term anyway for "simplicity".
Peter Damian
QUOTE(Kelly Martin @ Sat 2nd October 2010, 4:13pm) *

QUOTE(Peter Damian @ Sat 2nd October 2010, 10:02am) *

Are we reading the same paper? The goal of the paper, as stated in the abstract, was to build a model that predicts who will succeed at RfA, and to use 'policy capture' to determine how well the stated policy matches actual promotion decisions. As I read the paper, there are 'similarities and differences' - some aspects of policy are captured, others aren't (I may be wrong, I didn't read it that carefully).

The silliest aspect of the paper was the idea of using the model to build an 'adminfinderbot'.

much of what they do focuses on robotics, automation, artificial intelligence, and statistical decision engines. This is right up that alley.


That explains the silliness then. They should be made to get a proper job.
Kelly Martin
QUOTE(Peter Damian @ Sat 2nd October 2010, 10:21am) *
That explains the silliness then. They should be made to get a proper job.
You do realize that most of the technology you use every day relies in many ways on stuff these people develop, don't you?
Jon Awbrey
That paper looks typical of a whole vein of literature that we've been seeing lately, but it does make a good weather vane for the many splintered crosswinds of Artificial Intelligence Research (AIR), most of which wind their way back as far as the 1940s.

That larger topic is a compelling one, and there are many threads here that touch on it, but I won't have more time until I don't know when, maybe tonight, maybe Monday …

Jon Awbrey
Peter Damian
QUOTE(Kelly Martin @ Sat 2nd October 2010, 4:36pm) *

QUOTE(Peter Damian @ Sat 2nd October 2010, 10:21am) *
That explains the silliness then. They should be made to get a proper job.
You do realize that most of the technology you use every day relies in many ways on stuff these people develop, don't you?


artificial intelligence
Peter Damian
Here's an even stupider one:

http://people.csail.mit.edu/csauper/pubs/s...r-sm-thesis.pdf

An algorithm for writing Wikipedia articles. Perhaps that explains the peculiar style of writing we find there.

Seriously though

(a) Computers can't even read printed text very well, and ordinary handwriting not at all. Even the algorithms that partly work use the crudest mechanical methods.

(b) They can't translate from one language to another. There's a story, probably apocryphal, that the mathematical geniuses who worked on code-breaking in WWII thought the translation problem was pretty easy. Ordinary language is like a code, isn't it? So shouldn't translation be as simple as cracking the difficult codes we used to crack? The fact that they even imagined this was possible shows how stupid geniuses are.

(c) As for writing encyclopedias ... I think 90% of the problem is that so many people believe that computers can solve all sorts of problems that they can't.
Cock-up-over-conspiracy
QUOTE(Kelly Martin @ Sat 2nd October 2010, 3:36pm) *
You do realize that most of the technology you use every day relies in many ways on stuff these people develop, don't you?

Perhaps ... and the Wikipedia is an example of what happens when they do not have managers and employment contracts to do it efficiently.

I get what Peter is saying about how people think computers can solve all sorts of problems that they can't. They can't. They just move the problem areas around to somewhere else.
thekohser
What I find amazing is that scholarly papers are even being written about the ins and outs of Wikipedia. Are similar papers being written about the economics of eBay, or the artistry of Flickr, or the communications efficiency of Twitter? If so, then I'm simply astounded. I don't think anyone back in my grad school days had any notion that academia would one day explore statistically the inner workings of websites.

I'm kind of jealous, actually. If I ever do go back to get my PhD, it's comforting to know that my dissertation can be a sociological examination of YouTube comment fields.