We're talking apples and oranges here. I used Google's "Webmaster Tools" interface to remove both wikipedia-watch.org and google-watch.org. All you need is a Gmail account and access to your site's root directory. You edit your robots.txt to exclude Googlebot if it's the whole site you want deleted. Then, while you're in Webmaster Tools after logging into your Google account, you are given a dummy filename containing a long verification string. You stick that file in your root directory, and Google instantly marks you as "verified" by looking to see if the file is there.
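For the whole-site case, the robots.txt change is about as simple as it gets. Something along these lines is what I mean (a minimal sketch, not my exact file):

    # robots.txt at the site root
    # "Disallow: /" tells Googlebot to stay out of the entire site
    User-agent: Googlebot
    Disallow: /

The verification file itself is separate from robots.txt; it's just a file with the exact name Google hands you, dropped into that same root directory.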
Once verified, you can remove individual pages or the entire site. It takes only a few hours before the individual pages or the entire site is out of Google.
You can use the same tool to reinclude that content. That page says "3-5 business days" for reinclusion, which is why I was surprised that it happened in about 5 hours.
This is completely different from crawling pages that haven't been indexed by Google. After the few hours it took for Google to completely remove those two sites, I restored the robots.txt to allow crawling. Google continued to crawl, even though both sites were out of the index.
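Restoring crawling was just a matter of putting robots.txt back to something permissive, along these lines (again a sketch, not my exact file):

    # robots.txt after the experiment
    # an empty Disallow value means nothing is blocked, so everything may be crawled
    User-agent: *
    Disallow: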
When I requested reinclusion, everything came back exactly as it was before the removal. The PageRanks popped back up from zero to their previous values, and the rankings were exactly the same for every search term I tried, on any and all pages of those two sites.
What I just described is a back-end filter operation. It's not a case of "let's look for new pages, index them, make them available, and see where they rank from day to day in competition with the rest of the web."
It was fun and educational.
I have noticed that Google crawls the User_talk pages in Wikipedia rather infrequently. While they all seem to have "noindex,follow" in them, including the archived User_talk pages, a huge number are still in Google. If you look at Google's cached copy of the ones still available, you will see that some were last crawled as long as two or three months ago. My guess is that the User_talk pages are on an entirely different crawl schedule from the article mainspace pages. It's certainly a lot different from the crawling of wikipediareview.com, which is very fast. Most blogs and forums are crawled almost constantly, it seems, while static sites are crawled less frequently.
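For anyone who wants to check this, the "noindex,follow" I'm talking about is the standard robots meta tag in the page's head. On those User_talk pages it looks roughly like this:

    <!-- in the <head> of a User_talk page: don't index the page itself, but do follow its links -->
    <meta name="robots" content="noindex,follow" />

Google only honors the noindex when it actually recrawls the page, which is why the slow crawl schedule matters: until a page is refetched, the old copy stays in the index.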
What I think the Wikimedia Foundation should do is get one of the Google fanboys on staff to study the Webmaster Tools situation and become something of an expert. Maybe even ask Matt Cutts if there's a way to submit a bulk file to get various URLs that fit some pattern instantly deleted. It might be asking too much of Google to expect them to provide this capability, but it wouldn't hurt to ask.
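To make the idea concrete: the kind of bulk file I have in mind would just be a flat list of URLs fitting some pattern, such as User_talk pages and their archives. Here's a rough sketch of how such a list could be generated; the usernames and the output filename are made up, and whether Google would accept a file like this at all is exactly the question someone would have to ask:

    # Hypothetical sketch only. Builds a plain-text list of User_talk URLs that
    # could be handed to Google, *if* Google ever accepts bulk removal files.
    # In practice the usernames would come from a database dump, not a hard-coded list.
    usernames = ["Example_user_1", "Example_user_2"]  # placeholder names

    with open("urls_to_remove.txt", "w") as f:
        for name in usernames:
            f.write("http://en.wikipedia.org/wiki/User_talk:%s\n" % name)
            # archived talk pages follow the same naming pattern, with a subpage suffix
            f.write("http://en.wikipedia.org/wiki/User_talk:%s/Archive_1\n" % name)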