
Community Engagement Insights 2018 Report: Survey design

The purpose of Community Engagement Insights is to improve the Foundation's ability to hear from communities in order to make decisions that align with community needs. The survey also helps us learn about the long-term social changes happening in Wikimedia communities. This page is a short summary of the survey design for 2018 and provides a breakdown of response and completion rates for this year's survey.

Methods

Question design

The design for CE Insights questions is a collective effort. Eleven Foundation teams participated in writing questions related to their work and goals. These questions were organized into one large survey that reaches each of the community audiences described below. In total, there were 170 questions in the survey, but each audience only saw a fraction of these questions, and sections of the survey were randomized in order to reduce respondent burden. Teams are encouraged to ask the kinds of questions that CE Insights is best suited to answer: questions about attitudes, awareness, and in some cases behaviors are encouraged, in order to both gather feedback from our communities and measure the impact of each team's work.
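As a rough illustration of how a single instrument can branch by audience and randomize section order, here is a minimal sketch in Python. The audience and section names are hypothetical placeholders, not the actual sections of the 2018 instrument, and the real survey implemented this logic in its survey platform rather than in code like this.

```python
import random

# Hypothetical mapping of survey sections to the audiences that see them.
# These names are illustrative only; they are not the 2018 instrument's sections.
SECTIONS_BY_AUDIENCE = {
    "editors": ["demographics", "community_health", "product_feedback"],
    "affiliates": ["demographics", "organizational_support", "grants"],
    "program_leaders": ["demographics", "programs", "grants"],
    "technical_contributors": ["demographics", "developer_tools"],
}

def build_questionnaire(audience, rng=random):
    """Return the section order one respondent would see.

    Only the sections relevant to the respondent's audience are included,
    and their order is shuffled so no single section always appears last
    and carries most of the drop-off.
    """
    sections = list(SECTIONS_BY_AUDIENCE[audience])
    rng.shuffle(sections)
    return sections

print(build_questionnaire("editors"))
```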

Once the questions were designed, they were tested with 40 community members, who gave us feedback about jargon and errors in the survey. After testing, the survey was translated from English into 11 languages by both Wikimedia community members and paid contractors. The languages were Dutch, French, Spanish, Arabic, Ukrainian, German, Italian, Japanese, Chinese, Portuguese, and Russian.

Reaching Community Audiences

Community audiences have been defined based on interviews with Foundation staff in 2016.[1] Audiences for this survey include:

  • Editors, or contributors, are people who add content or help to curate information on the Wikimedia projects. For editors, we sent the survey using mass message to 12 different language Wikipedias, Wikidata, and Wikimedia Commons, as well as to a sampling of all other Wikimedia projects.
  • Wikimedia Affiliates are officially recognized organizations that represent Wikimedia around the world. They include Wikimedia Chapters, Wikimedia Thematic Organizations, and Wikimedia user groups. We sent the survey to a leader from each organization, such as a board member, senior staff member, or lead volunteer.
  • Program leaders are people who conduct outreach activities to help share Wikimedia with the world and to engage people in contributing. Popular programs organized by program leaders include editathons, editing workshops, editing contests, conferences, Wikipedia Education programs, GLAM partnerships, Wiki Loves Monuments, other photo events, and several other programs. We sent the survey to program leaders who have been in contact with the Foundation in various ways, such as at conferences and events or as grantees.
  • Technical contributors are sometimes also called "volunteer developers". They include anyone who contributes to the process of improving Wikimedia technology, from those who submit bug reports to those who create bots and tools and those who write software for MediaWiki. We heard from volunteer developers in this survey through mailing lists.

Results

Response rates

Changes to response rates in 2018 were mixed. For editors and volunteer developers, response rates increased by 11% and 37%, respectively. Response rates for affiliate and program organizers decreased by 38% and 15%, respectively. Among editors, response rates ranged from 18% (Portuguese Wikipedia) to 40% (German Wikipedia).

Among high activity editors, the lowest response rate was 25% from Portuguese Wikipedia while the highest was 25% from German Wikipedia. Among low activity editors, the lowest response rate was 11% from Asian Language Wikipedias (excluding Chinese and Japanese) while the highest was 32% from German Wikipedia.

Completion rates

Since each audience received a different number of questions, we first created a rough estimate of how many questions each audience received in order to tabulate each audience's rate of completion against approximate 25%, 50%, 75%, and 100% survey benchmarks. For those who finished the survey, we calculated how many questions those individuals responded to. This information can be found in Figure 1.

We use these benchmarks to calculate the survival rate (Figure 2) of those taking the survey. We also use them to calculate an approximation of the completion rate (Figure 3), based on the mean benchmark reached within each audience.
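The following is a minimal sketch of these two calculations, assuming a simplified data shape: a list of "questions answered" counts for one audience plus the estimated number of questions that audience received. The example numbers are hypothetical and do not come from the survey data.

```python
from statistics import mean

# Benchmarks used to bucket how far respondents made it through the survey.
BENCHMARKS = (0.25, 0.50, 0.75, 1.00)

def survival_rates(answered_counts, questions_received):
    """Share of respondents who answered at least each benchmark's worth of questions."""
    n = len(answered_counts)
    return {
        b: sum(1 for a in answered_counts if a >= b * questions_received) / n
        for b in BENCHMARKS
    }

def approximate_completion_rate(answered_counts, questions_received):
    """Mean share of the questionnaire answered, as a rough completion estimate."""
    return mean(min(a / questions_received, 1.0) for a in answered_counts)

# Hypothetical example: an audience estimated to have received 60 questions.
answered = [60, 58, 45, 30, 12, 60, 51, 5]
print(survival_rates(answered, 60))
print(approximate_completion_rate(answered, 60))
```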

The median completion rate across the audiences is 76%. The audiences with the highest burden in terms of number of questions were program organizers and affiliates. Across the audiences, high activity editors and affiliate organizers had the highest completion rates, at 81% and 80%, respectively. Affiliates had the longest survey and high activity contributors had the third-longest, suggesting that these audiences were more willing to complete the full survey. Completion rates were lowest among volunteer developers, at 59%.

In terms of survival rates, low activity editors dropped off earlier than the other audiences: 27% did not make it through the first set of questions, while the median drop-off rate across the audience groups was 19%. Developers dropped off faster after the first set of questions; while at least 50% of respondents in every other audience answered up to the 25% benchmark, only 43% of developers did. Affiliate organizers maintained a higher response rate through 50% of the survey, but many dropped off before reaching the final 25%.


Conclusions & Next steps

  • While response rates increased among contributors, they decreased for affiliates and program organizers. Even though our total count of program and affiliate organizers is higher than last year, the decrease in response rates is concerning. We have begun to brainstorm possible causes of the decrease.
  • Further investigation into response rates is needed for contributors. The sampling strategy for contributors was complex: we used stratified sampling, in which we intentionally sampled various projects. We are able to use project and editor activity levels to learn how effective our sampling strategies and communications were for this survey; a rough sketch of this kind of stratification appears after this list.
  • We plan to continue improving our response rates each year. Our focus will be on improving response rates not only for contributors but also for affiliates and program organizers. We hope to improve our sampling strategy as well as our communications about the survey next year.
  • We plan to improve completion rates. We understand that the survey is quite long for all of the audiences, and we will be working to reduce its length. Since teams at the Foundation have similar long-term goals, we will work to find ways to consolidate certain types of questions to improve the overall experience. We also plan to perform user experience testing for the survey.
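As a rough illustration of stratified sampling by project and activity level, here is a minimal sketch in Python. The strata, sample sizes, and editor records are hypothetical placeholders and do not reflect the survey's actual sampling frame or invitation process.

```python
import random
from collections import defaultdict

def stratified_sample(editors, sample_sizes, rng=random):
    """Draw a fixed number of editors from each (project, activity_level) stratum.

    `editors` is an iterable of dicts with "username", "project", and
    "activity_level" keys; `sample_sizes` maps (project, activity_level)
    pairs to the number of editors to invite from that stratum.
    """
    strata = defaultdict(list)
    for editor in editors:
        strata[(editor["project"], editor["activity_level"])].append(editor)

    sample = []
    for stratum, size in sample_sizes.items():
        pool = strata.get(stratum, [])
        sample.extend(rng.sample(pool, min(size, len(pool))))
    return sample

# Hypothetical usage: invite 2 high-activity and 3 low-activity editors
# from German Wikipedia ("dewiki").
editors = [
    {"username": f"Editor{i}", "project": "dewiki",
     "activity_level": "high" if i % 2 else "low"}
    for i in range(20)
]
sizes = {("dewiki", "high"): 2, ("dewiki", "low"): 3}
print([e["username"] for e in stratified_sample(editors, sizes)])
```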

Notes
