It wouldn't happen unless there were some form of consensus to make the Fair Use policy a bit more user-friendly, and by that I don't just mean the guideline, but some sort of committee. I'm not sure how such a thing would get created, since the Fair Use policy is one of those 800 lb gorillas in the room: everyone kind of knows about it, but most people don't like to touch it unless it slaps them in the face.
Unfortunately it wouldn't really happen, because most users will take the path of least resistance: funnel straight to the bot owner who gave them the message and badger them. Giving them some sort of alternative path would be nice, but as I said, I'm not quite sure how it could be implemented.
QUOTE(Pumpkin Muffins @ Fri 23rd May 2008, 1:13am)
QUOTE(CrazyGameOfPoker @ Thu 22nd May 2008, 10:04pm)
I found this laughable: Duk wants ST47's bot to have no type II errors, but later goes on to block the bot twice for type I errors. I wonder if he realizes that what he's asking for is impossible unless the bot were 100% accurate, which is pretty much unachievable given the diversity of the sample.
Duk can't spell worth a shit (it's surly, not surley) and he doesn't communicate very well, but I think he finally managed to spit it out:
What ST47 refused to state clearly, because he knew that it was wrong but he wanted anyway, was for fair use images to use this particular template. As our conversation continued it became clear that this particular template is not a requirement for fair use, but that ST47 is just too lazy to fix his bot. So then he moved on to criticize everything else he could think of, except his own laziness. --Duk 03:47, 23 May 2008 (UTC)

Yeah, I saw that too, but I was referring more or less to the Catch-22: if you set the threshold for a test too low, you'll invariably get false positives (type I errors), but if you raise it too high, you'll invariably get false negatives (type II errors). Essentially, ST would be damned if he let any slip through, and damned if he didn't.
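That threshold trade-off can be shown with a toy sketch. This is not the bot's actual code; the scores and the classifier are purely illustrative, assuming a hypothetical detector that rates each image 0..1 for "looks like a policy violation":

```python
# Hypothetical scores from an imaginary violation detector.
violations = [0.9, 0.8, 0.6]   # genuinely non-compliant images
compliant  = [0.5, 0.3, 0.1]   # genuinely compliant images

def error_counts(threshold):
    """Count (type I, type II) errors when flagging at a given threshold."""
    false_positives = sum(1 for s in compliant if s >= threshold)   # type I
    false_negatives = sum(1 for s in violations if s < threshold)   # type II
    return false_positives, false_negatives

print(error_counts(0.2))  # low bar: flags compliant images too -> (2, 0)
print(error_counts(0.7))  # high bar: misses a real violation  -> (0, 1)
```

Lowering the threshold drives type II errors to zero at the cost of type I errors, and raising it does the reverse; with overlapping score distributions there is no setting that zeroes both, which is exactly the bind ST was in.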
Regarding the template, I was wondering about the metric ST was using, and I really don't think it was the correct one. What did ST47 promise to check when he got approval, anyway?
Edit: Found it
http://en.wikipedia.org/wiki/Wikipedia:Bot...proval/STBotI_2
http://en.wikipedia.org/wiki/User:ST47/NFCC10c

Quite frankly, I'm appalled that this was the rule that got used if there's no check that the images actually use the template before flagging. That's a horrid design choice, and I'd wonder what the actual false positive rate is.
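The missing guard is a one-liner in principle. A minimal sketch, assuming a hypothetical `should_flag` helper operating on already-fetched wikitext; the template name and function are illustrative, not STBotI's real code:

```python
# Illustrative guard (not STBotI's actual logic): only flag an image
# description page if the tracked template is genuinely absent from it.
REQUIRED_TEMPLATE = "{{non-free use rationale"  # example template name

def should_flag(page_wikitext: str) -> bool:
    # Case-insensitive check, since template names on Wikipedia
    # are case-insensitive after the first letter.
    return REQUIRED_TEMPLATE not in page_wikitext.lower()

print(should_flag("== Licensing ==\n{{Non-free use rationale|...}}"))  # False
print(should_flag("== Licensing ==\n{{logo fur|...}}"))                # True
```

Even a check this crude would keep the bot from tagging images that already carry the template, which is presumably where most of the false positives come from.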