Writing Contests evaluation report (2015): Key findings
Peer Review in Process now through July 3
The Learning & Evaluation team at the WMF will be actively reading and responding to comments and questions until July 3.
Please submit comments, questions, or suggestions on the talk page!
What are the key findings to take away from the report?
What did we learn in terms of impact?
Building and Engaging Community
The 39 contests included in this report engaged a total of 1,027 participants; 909 (89%) were existing users. The proportion of existing editors who were active rose from 66% in the 30 days before the contest start date to 89% in the 30 days after it, and much of this increase (about 60%) can be attributed to WikiCup participants. Last year's report on on-wiki writing contests suggested that writing contests "aim at engaging existing editors"; this report suggests otherwise. Thirteen of the 39 programs involved new users, ranging from 2.8% to 64.5% of participants. Across these thirteen contests, 57 (50.0%) of the 114 new users made at least five edits in the first month following the start of the event, and 20 (17.5%) made at least one edit between the second and third month following the event. While these data seem promising, more data are needed to make stronger statements and to learn how effective writing contests are at engaging new users.
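As a rough arithmetic check, the shares quoted above can be recomputed from the aggregate counts in this paragraph. The sketch below is illustrative only; it uses the report's aggregate figures, not per-contest data.

```python
# Illustrative arithmetic for the participation and new-user activity shares
# quoted above (aggregate counts from this report, not per-contest data).

total_participants = 1027
existing_users = 909
new_users = 114  # new users across the thirteen contests that involved them

print(f"Existing users: {existing_users / total_participants:.0%}")  # ~89%

new_with_5_edits_month_1 = 57
new_with_1_edit_months_2_3 = 20

print(f"New users with >=5 edits in month 1: "
      f"{new_with_5_edits_month_1 / new_users:.1%}")   # 50.0%
print(f"New users with >=1 edit in months 2-3: "
      f"{new_with_1_edit_months_2_3 / new_users:.1%}")  # 17.5%
```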
"In one competition, university departments donated prize money and space for an awards ceremony, and judges were able to volunteer as little as 30 minutes of their time rating articles in their area of expertise and providing comments on talk pages so that articles can be improved in the future, and students were able to compete for a sizeable prize, but also a participation certificate that they could reference in their CVs."
Kacie Harold, WMF
Increasing Awareness of Wikimedia Projects
None of the metrics in this report specifically measures awareness of Wikimedia projects. It may be useful to explore with program leaders how they think their contests increase awareness, and to include those measures in the data collection phase of reporting. Some metrics may measure awareness indirectly; for example, the number of new editors says something about how new users are engaging with the projects. We found several contests that recruited new editors: in the 30 contests with data on numbers of new users, 114 (18%) of 627 participants were newly registered users. In the future, it may prove useful to explore what strategies program leaders use to engage new users in writing contests.
Increasing Diversity of Information Coverage
From the 39 programs reporting, at least eleven Wikipedia language projects were represented. This figure does not include the 4 contests that were "interwiki" contests, or contests that spanned more than one language within a Wikimedia project. While contests are conducted in multiple languages, more measures could be used to examine other aspects of diversity, such as diversity of content or diversity of contributors.
Recruiting and/or Converting New Users
The 39 contests included in the report had a total of 1,027 participants, and 114 (11%) were known to be newly registered users two weeks before the events. Four contests had more than 40% new users. In examining retention over time of all new users, we found that while new users participated in the contests, they may not have continued editing for much longer: of the new users from programs lasting three months or less, only 3 (3%) remained active six months after the start date of the event. Previously,[2] writing contests were believed to engage only existing users, but in this round of data collection two programs emerged with over 50% new users, and four with over 40%. While it is great to see contests being designed to engage new users, more program implementation data would be needed to understand how, and to what extent, contests are able to recruit and retain new users.
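A minimal sketch of how a retention check like "remained active six months after the start date" could be computed from edit histories. The data here are hypothetical, and the sketch assumes "active at N months" means at least one edit in the 30 days after the N-month mark, which is one plausible reading rather than the exact definition used in this report.

```python
# Hypothetical retention check for new users after an event.
from datetime import datetime, timedelta

event_start = datetime(2014, 3, 1)

# Hypothetical edit histories (username -> edit timestamps).
edits_by_user = {
    "NewUserA": [datetime(2014, 3, 2), datetime(2014, 9, 5)],
    "NewUserB": [datetime(2014, 3, 3)],
    "NewUserC": [datetime(2014, 3, 4), datetime(2014, 4, 1)],
}

def active_at(months, edits, start=event_start, window_days=30):
    """True if the user made at least one edit within `window_days`
    after the `months`-month mark (months approximated as 30 days)."""
    mark = start + timedelta(days=30 * months)
    return any(mark <= ts < mark + timedelta(days=window_days) for ts in edits)

retained = [u for u, e in edits_by_user.items() if active_at(6, e)]
print(f"Active at 6 months: {len(retained)}/{len(edits_by_user)}: {retained}")
```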
"We try to make sure that judges in the first stage have just one page to review, because that limits the amount of time that they must commit to working. By the way, this judging system is another thing that gets a lot of people involved in WP. We have 40 scientists throughout Israel that participate by reading and commenting on just one article. It is not a lot of work because they are reading something from their field that may only take 30 minutes of time. Usually these judges haven't contributed to Wikipedia before. This is very helpful because afterwards, if people want to improve a page they have specific feedback from a subject area expert."
Physiwiki, Hebrew Wikipedia
How this information can apply to program planning
Planning for Program Inputs & Outputs
Use this information to help you plan for program inputs and outputs.
Contests leverage donated resources, which shows there are approaches to resourcing beyond monetary funds. Of the 14 program leaders who reported on donated resources, 13 (93%) reported receiving at least one donation, and 13 (93%) reported receiving donated contest prizes. These numbers suggest that program leaders running contests may do well in obtaining donations in general, and contest prizes in particular.
The boxplots illustrating cost per participant and cost per text page or article created/improved can also be helpful references for comparing the cost of your event with how much content it produces. As with overall budget information, the boxplots should be read in the context of each event. If you are planning a new program, you might expect your costs to fall within the middle 50% of costs per output reported (i.e., within the green bar on the boxplot). Programs lower on the boxplot produce their outputs at lower cost, that is, more output for fewer inputs. We hope that, as we continue to evaluate programs and feed the results back into program design, we can learn more from the programs achieving the most impact with the fewest resources.
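To make the "middle 50%" comparison concrete, the sketch below computes a median and interquartile range of cost per page from a handful of hypothetical per-contest figures. These numbers are placeholders, not the report's dataset; only the calculation pattern is the point.

```python
# Hypothetical cost-per-output comparison: median and middle 50%
# (the "green bar" region on a boxplot) of cost per page of text.
import statistics

contests = [
    # (budget in USD, pages of text produced) -- placeholder values
    (300.0, 410),
    (1200.0, 950),
    (0.0, 220),     # volunteer-run, no budget: excluded from cost comparisons
    (75.0, 130),
    (2500.0, 1800),
]

cost_per_page = [budget / pages for budget, pages in contests if budget > 0 and pages > 0]

median = statistics.median(cost_per_page)
q1, _, q3 = statistics.quantiles(cost_per_page, n=4)  # quartile cut points

print(f"Median cost per page: ${median:.2f}")
print(f"Middle 50% of contests fall between ${q1:.2f} and ${q3:.2f} per page")
```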
How easily can the program be replicated?
Writing contests differ in goals, length, subject area, and scope, yet they are organized successfully within and across many Wikipedia language communities, and in many contexts, to meet diverse goals. In some ways, this speaks to their ability to be replicated. In planning and implementing a writing contest, recognition is typically offered through awards or prizes based on the quality or quantity of content produced. Judging who receives recognition or prizes may require tracking or evaluating content contributions, and program leaders use many different methods to track contributions, such as event pages, bots, and wiki-based tools. Use the data tables in the report to find program leaders who are tracking submissions successfully. We are currently developing a Program Toolkit for writing contests: we will reach out to writing contest program leaders, learn about their best practices and challenges, and collect that information into a resource with stories, tools, and advice on how to plan, run, and evaluate writing contests. You can also reach out to the program leaders who help create the toolkit.
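As one illustration of contribution tracking, edits made by a contestant during the contest window can be retrieved from the MediaWiki API's usercontribs list. This is only a sketch of the general idea, not one of the specific bots or wiki-based tools mentioned above; the username and dates are placeholders.

```python
# Sketch: list a contestant's edits during a contest window via the
# MediaWiki API (action=query, list=usercontribs). Illustrative only.
import requests

API = "https://en.wikipedia.org/w/api.php"

def contest_contribs(username, start_iso, end_iso):
    """Yield the user's edits between start_iso and end_iso (ISO 8601)."""
    params = {
        "action": "query",
        "list": "usercontribs",
        "ucuser": username,
        "ucstart": end_iso,   # usercontribs enumerates from newest to oldest
        "ucend": start_iso,
        "uclimit": "max",
        "ucprop": "title|timestamp|sizediff",
        "format": "json",
    }
    while True:
        data = requests.get(API, params=params).json()
        yield from data["query"]["usercontribs"]
        if "continue" not in data:
            break
        params.update(data["continue"])  # follow API pagination

# Placeholder username and contest window:
for edit in contest_contribs("ExampleContestant",
                             "2015-03-01T00:00:00Z", "2015-03-31T23:59:59Z"):
    print(edit["timestamp"], edit["title"], edit.get("sizediff"))
```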
How does the cost of the program compare to its outcomes?
Our very rough cost-benefit analysis includes data from 30 implementations with non-zero budgets and outputs data, for contests that ended between May 2013 and September 2014. First we examine the total contributions these contests made to Wikimedia projects, in terms of content and participation; we then divide cost by the amount of content produced.
Budget: $10,601
Participants: 745
Pages of text: 15,144 (about 22.7 million characters)
Articles created/improved: 15,156
Projects: at least 11 different language Wikipedias
Retention, existing editors (1 or more edits at 3 months): 520 of 627 (82.9%)
Retention, new editors (1 or more edits at 3 months): 20 of 114 (17.5%)
Median cost per page of text: $1.14 USD
Median cost per article created or improved: $0.59 USD
These numbers show that contests produce a substantial amount of content and engage many participants. We cannot make strong comparisons or judgements about writing contests, however, because we need more quantitative and qualitative data: more information about the context in which the contests take place, as well as outcomes that are not captured in this analysis. In addition, many writing contests are run entirely by volunteers without a budget and are not included in these cost comparisons. It would be great to one day learn how much content writing contests produce across all the Wikimedia projects, and to learn more about how writing contests motivate and build our lively online communities.
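A quick worked check of the aggregate retention figures above. Note that the median cost-per-output figures are medians across contests, not the aggregate budget divided by aggregate output, so they cannot be reproduced from the totals alone.

```python
# Worked check of the aggregate retention percentages quoted above.
existing_retained, existing_total = 520, 627
new_retained, new_total = 20, 114

print(f"Existing editors active at 3 months: "
      f"{existing_retained / existing_total:.1%}")  # 82.9%
print(f"New editors active at 3 months: "
      f"{new_retained / new_total:.1%}")            # 17.5%
```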
Next steps
Join the conversation! Visit the report talk page to share and discuss: