Research talk:Wikimedia Summer of Research 2011

If you have questions or would like to participate in research such as what we have blogged about, please feel free to drop a note here and we'll get in contact with you. Thanks! Steven Walling at work 19:02, 2 May 2011 (UTC)

I'm very interested in this aspect and am looking to initiate a dialogue with new users. I rarely get responses to my welcome messages and am wondering how to crack the barrier. AshLin 02:56, 3 May 2011 (UTC)
Hi! I am very interested in the methodology you use, down to the smallest decisions (software used, sampling method, sample size, etc.), as it would be very useful if anyone wanted to replicate or follow up on your research on other language editions (I expect you would be focusing on the English Wikipedia?). Best regards, --Dami 13:27, 3 May 2011 (UTC)
Hi Bence, I was going to write a lengthy response explaining our methods, but I realized Felipe Ortega made much the same commentary on the blog, and before the summer gets here we need to start documenting more. Will you let me know if you have more questions after I write that today? :) Thanks, Steven Walling at work 20:37, 3 May 2011 (UTC)
Thanks, Steven! As I say above, please don't spare us even the very boring details (like "we used OpenOffice Calc to tag the edits", "we used the NUD*IST software for tagging", or "we used the query SELECT from ..."). These are all very helpful for replicating or improving on your research - and if you employed any paid resources, I can imagine that chapters could step up and provide those resources for people who come to them based on your lessons.
P.S. I know you're still editing the page; this is not criticism, just a preliminary plea that you be thorough. Keep up the good work, :) --Dami 21:05, 3 May 2011 (UTC)
Hey, I think I've added everything, except I'll just note here that we simply used Google Docs spreadsheets to edit the samples. That let us collaborate more easily than NUD*IST would, plus we're on Ubuntu or Mac. :) Let me know if you have any other process questions about replicating the studies. Steven Walling at work 22:00, 3 May 2011 (UTC)
Thanks, this is really useful info! Google Docs is probably already more advanced than NUD*IST was at the time my textbook on social science research was written. :) Unfortunately, I am not good at SQL but I want to understand: do I see it correctly that it basically selects the chronologically first 500 revisions it finds, and that the "u.user_id BETWEEN 135000 AND 235000" part served to make the sampling random? --Dami 22:10, 3 May 2011 (UTC)
The "u.user_id BETWEEN 135000 AND 235000" is necessary to make the query run faster. All of the users that meet the criteria fall between those two user_ids -- because user_id is an indexed field on the toolserver, that makes MySQL's job a lot easier because it doesn't have to search all the users, but only the users between those two numbers. I went through and found the user_ids for the approximate start and end of the time ranges I was trying to capture and then picked user_ids significantly lower and higher to make sure I didn't miss anyone. That user_id restriction isn't actually what defines the sample -- the other parts of the WHERE statement do that. And yes, you're right that it selects the first 500 chronologically in order of user_id. It would have been better probably to do it randomly, but in this case I don't think it makes much difference. Like Steven and Maryana have said -- these are just experiments to get ready for the summer and work out the bugs in advance. Zackexley 17:46, 6 May 2011 (UTC)Reply
Thanks Zack for the explanation! --Dami 11:20, 7 May 2011 (UTC)
I have tried to do what you have done on a smaller sample, and I appreciate the difficulty and hard work in tagging the edits; it might be a good idea for the summer research project to publish example edits for the various categories you define (it can be interesting in itself, even if the focus of the research is not the negativity of the warnings; you might have to worry about anonymity, but the fact that every edit is made publicly should give you the licence to quote your research sample). --Dami 19:32, 7 May 2011 (UTC)
That's awesome work, Bence! Do you want to maybe cross-post that to the Wikimedia blog this week? I can help you out setting that up. Steven Walling at work 01:06, 9 May 2011 (UTC)
Examples would be good, but the full codebook would be better. And you should also publish details of how you trained your coders and the level of intercoder reliability you experienced when you used the codebook. (Apologies if these details are already out there or if you're already working on them!) ElKevbo 03:53, 10 May 2011 (UTC)
We're not really comfortable publishing the complete codebook, considering the research subjects did not agree to being publicly evaluated as part of the study. Making Wikipedians feel singled out in our analysis is not really something we're interested in. As for the other aspects... the two coders were myself and Maryana Pinchuk. We did do a comparison of our work on the last sample, and there was a 3.5% difference in our coding. We did not recruit a larger team, since this was mostly a test of our codes and methods. In the future we'll add a few PhD candidates whom we will train. Steven Walling at work 21:11, 10 May 2011 (UTC)
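For anyone replicating the coder comparison, the disagreement rate is easy to compute once both coders' judgments are in one table; here is a sketch assuming a hypothetical coding table with one row per (rev_id, coder) pair (none of these names come from the study itself):
    -- Hypothetical table: coding(rev_id, coder, code).
    -- The self-join pairs up each revision's two codes; averaging the
    -- mismatches gives the disagreement rate (0.035 = the 3.5% above).
    SELECT SUM(a.code <> b.code) / COUNT(*) AS disagreement_rate
    FROM coding AS a
    JOIN coding AS b ON a.rev_id = b.rev_id
    WHERE a.coder = 'coder_1'
      AND b.coder = 'coder_2';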

Corrections

Did your analysis pick up any examples where people realised they'd made a mistake and struck or removed the warning they'd issued? WereSpielChequers 22:44, 19 May 2011 (UTC)

For all the user talk-specific analysis (so all but the last post, about the preliminary research), we only analyzed one diff at a time, not the complete talk page history. Steven Walling at work 22:04, 26 May 2011 (UTC)

RfA statistics

One of the things I would like to know is which English Wikipedians have nominated the most administrators. Welles 22:04, 4 June 2011 (UTC)
