Future IdeaLab Campaigns
Summary
From 4 December 2015 to 5 January 2016, participants were asked to suggest topics and voice preferences for upcoming IdeaLab campaigns. Using AllOurIdeas to collect preferences and suggestions for these topics, approximately 90-100 participants[1] from over 25 countries[2] cast a combined 1700 votes across 45 topics (33 submitted by participants, 12 seeded in the initial survey). The outcomes are as follows:
- A number of compelling topics for future IdeaLab campaigns were identified. Some campaign topics that participants preferred included:
- volunteer engagement and motivation,
- accessibility & use of multimedia content in projects,
- content curation,
- engaging partnerships / experts,
- improvements in contributing to or use of Wikidata content,
- addressing abuse / harassment, and
- API improvements.
- Content curation will be the focus of the next IdeaLab campaign, starting at the end of February. This decision was made because five (5) campaign topics related to improving content review and content maintenance processes received moderate to strong preference. The Community Resources team is eager to support volunteer efforts aimed at ensuring and raising the quality of content across Wikimedia projects.
- IdeaLab campaigns on these topics and others will be held twice a year. Campaigns will last approximately one month each, and will generally be scheduled to precede open calls for the upcoming Project Grants program, designed as part of the recent Reimagining WMF grants consultation.[3]
Background
The Future IdeaLab Campaigns Survey was launched in early December 2015 and ran for one month. It was promoted across many projects, including Wikipedia, Commons, Wikisource, Wikidata, and Wiktionary, in several languages including French, Italian, Russian, Spanish, and English. Contributors on mailing lists for Wikimedia India and CEE were also contacted.
The survey asked community members "Which IdeaLab campaign topic do you prefer?"
AllOurIdeas provides a score for each option based on preferences from survey participants. The score answers the following question: if this campaign idea were randomly paired with another idea from this survey, what percentage of the time would it be preferred? In addition to the score, clicking on an idea shows how many completed contests it has gone through, i.e., how many times it was compared to another idea. Ideas with fewer completed contests tend to have more extreme scores.
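For illustration, the sketch below computes a naive version of such a score from recorded pairwise votes: each idea's share of wins across its completed contests. This is only a rough stand-in, not AllOurIdeas' actual scoring model, and the vote data and topic names are hypothetical.

```python
from collections import defaultdict

def naive_scores(votes):
    """Estimate each idea's score from pairwise contests.

    `votes` is a list of (winner, loser) pairs, one per completed contest.
    The score here is simply wins / contests as a percentage -- a naive
    stand-in for AllOurIdeas' model, used only to illustrate the
    definition above.
    """
    wins = defaultdict(int)
    contests = defaultdict(int)
    for winner, loser in votes:
        wins[winner] += 1
        contests[winner] += 1
        contests[loser] += 1
    return {idea: round(100 * wins[idea] / contests[idea]) for idea in contests}

# Hypothetical contests between three topics:
votes = [
    ("Content curation", "API improvements"),
    ("Content curation", "Improving maps"),
    ("API improvements", "Content curation"),
]
print(naive_scores(votes))
# {'Content curation': 67, 'API improvements': 50, 'Improving maps': 0}
```

Note how "Improving maps", with only one completed contest, lands at an extreme score of 0: with so few comparisons, a single vote swings the estimate substantially, which is the behavior described above for ideas with low contest counts.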
Results
Topic | Broader campaign idea | Score[4] | # of contests[5]
---|---|---|---
Accessibility to and use of multimedia content in projects | N/A | 45 | 168
Developing bots for routine maintenance tasks | Content curation | 48 | 172
Strategies for engaging with and motivating project volunteers | N/A | 59 | 162
Improvements in contributing to or use of Wikidata content | N/A | 53 | 161
Developing or improving content review or curation processes | Content curation | 59 | 157
Workflows or tools for editing and maintenance tasks | Content curation | 60 | 150
Strategies and tools to handle cases of long-term abuse | Abuse / Harassment | 45 | 142
Addressing harassment of Wikimedia project contributors | Abuse / Harassment | 36 | 136
Tools to help experts on a subject matter advise on that subject matter, so that editors with less expertise can make better decisions | Engaging partnerships / experts | 61 | 117
Engaging outside knowledge networks (libraries, educators, etc.), in novel participation strategies | Engaging partnerships / experts | 64 | 116
Establishing Wikimedia groups in universities (e.g. through student organizations) | Engaging partnerships / experts | 72 | 27
Building the next generation of tools using Wikimedia APIs (Application Programming Interface) | API improvements | 75 | 22
Improving maps and other location-based multimedia | API improvements | 71 | 32
Raw idea list with score and # of contests (collapsed table; the full list can be viewed on AllOurIdeas)
Additional notes and observations
- Several submissions to the AllOurIdeas survey were better suited as specific proposals or ideas rather than as themes for an IdeaLab campaign. In these cases, we identified a larger theme under which the idea would sensibly fit. For instance, "Improve Abuse Filter" was considered too specific, but would fit in a larger theme of how to better address and prevent disruptive editing behavior.
- Conversely, some submissions were too broad in scope, and would benefit from narrowing so that a strategic problem or need within Wikimedia projects could be addressed. For instance, "Improvements in enabling content creation" (which, admittedly, I, User:I JethroBT (WMF), initially seeded into the survey) could be interpreted to apply to most efforts that contributors to Wikimedia projects engage in.
- Consequently, the table above is sorted roughly by # of contests and lists only the ideas consistent with the themes we identified from all submissions. The raw list of submissions and scores can be viewed on AllOurIdeas, and is also in the collapsed table above.
- Some ideas were added later in the survey, and did not get the benefit of being voted for (or against) very often. Their scores tended to be more extreme, a consequence of the way AllOurIdeas calculates scores: with few completed contests, each individual vote moves the estimate substantially.
- AllOurIdeas provides anonymized information on participants and their behavior in the survey. Review of this data showed behavior consistent with attempts to game the system, for instance by selectively voting for or against specific topics while skipping most others (a sketch of this kind of screening appears after this list). These votes were excluded from the analysis. The topics most frequently subjected to this behavior were:
- Workflows or tools for editing and maintenance tasks,
- Correcting systematic bias in article content,
- Developing bots for routine maintenance tasks,
- Accessibility to and use of multimedia content in projects,
- Improvements in enabling content creation,
- Developing or improving content review or curation processes,
- Strategies and tools to handle cases of long-term abuse, and
- Tools to help experts on a subject matter advise on that subject matter
- In total, these invalid votes represented about 7% of the 1880 votes cast (roughly 130 votes), leaving the approximately 1700 valid votes counted in the summary above.
- One technical limitation in AllOurIdeas was that descriptions were limited to 140 characters and could not contain any external links. As a result, users were sometimes presented with choices that they did not recognize and could not easily get more information about.
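As a rough illustration of the screening mentioned in the bullet on gaming above, the sketch below flags sessions that skip most contests while casting votes on only one or two target topics. The session format, event fields, and thresholds are all assumptions for this example; they are not AllOurIdeas' actual export format or the exact heuristic used in this analysis.

```python
def flag_suspect_sessions(sessions, skip_threshold=0.8, max_targets=2):
    """Flag sessions whose voting pattern suggests gaming.

    `sessions` maps a session id to a list of events, where each event is
    ("vote", winner, loser) or ("skip", idea_a, idea_b). Both the event
    format and the thresholds are hypothetical, for illustration only.
    """
    suspects = {}
    for session_id, events in sessions.items():
        votes = [e for e in events if e[0] == "vote"]
        if not votes:
            continue
        skip_rate = 1 - len(votes) / len(events)
        # Sessions that skip nearly every contest but repeatedly vote
        # for the same one or two topics match the pattern described
        # above: boosting specific topics while ignoring the rest.
        favored = {winner for _, winner, _ in votes}
        if skip_rate >= skip_threshold and len(favored) <= max_targets:
            suspects[session_id] = sorted(favored)
    return suspects
```

Voting *against* a specific topic would show up symmetrically as a small set of repeated losers, so a fuller check would apply the same test to both sides of each vote.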
Notes
- ↑ This is an estimate; AllOurIdeas provides information on unique sessions per day (of which there were 111). Participants were allowed (and encouraged) to take the survey multiple times, in part because new ideas were submitted throughout the course of the consultation.
- ↑ AllOurIdeas voting map
- ↑ Grants:IdeaLab/Reimagining_WMF_grants/Outcomes.
- ↑ Score is a percentage reflecting how often the idea would be preferred if it were randomly paired with another idea from this list.
- ↑ # of contests refers to the total number of times the idea was actually compared with another idea across participants.