
Voluntary Data Collection: Frequently Asked Questions


I have never heard about this data collection. What kind of data are you collecting?



Programs data
These data are measures of the inputs, outputs, and general impact of 10 popular Wikimedia programs: editathons, editing workshops, on-wiki writing contests, GLAM content donations, Wiki Loves Monuments, other photo events (e.g. Wiki Loves Earth), the Wikipedia Education Program, hackathons, conferences, and Wikimedians in Residence. This data collection is part of an ongoing effort to learn more about the great work that organizations and individual volunteers do to increase participation in, and content on, the Wikimedia projects. The more data submitted for this project, the better the analysis we can provide to the movement.

View the latest reports in our portal.

What will the data be used for?



Evaluation Reports
In 2013, the WMF published the Evaluation Reports (beta). These reports were a first step toward understanding the impact of the most popular programs being replicated across the movement. For these reports, the data you contribute will be examined more deeply: we will run additional metrics on impacts to content quality and participation, analyze the data, and share the reports as the final product. The results could be useful to your organization as you evaluate past work and plan future work. The reports can be used as tools to inform program planning, especially around potential inputs and impact, and as a model for a consistent approach to learning from, and reporting on, programs.


Program Toolkits
Program toolkits are a collection of best practices for planning, implementing, and evaluating a program. As the Program Evaluation reports are completed, certain patterns may emerge from examining the collective data. Through high-level analysis, we will surface success cases and investigate their practices in order to develop program toolkits. For example, last year we found a possible pattern: one-time editathons or workshops might not be effective at retaining editors in the long run, while conducting a series of editathons or workshops may be more effective at doing so. Several program leaders have tried that approach in 2014. We now hope to gather enough data to make comparisons across program implementations and find the most promising practices to share.

What are the goals of the Evaluation Reports and Program Toolkits? How do they benefit me or the movement?



Shared language of programs
Together, the evaluation reports and toolkits begin to establish a shared language about what these programs are and what they aim to accomplish. With the data you share and through the published reports, you and other program leaders would be able to compare, contrast, and connect with one another across program contexts.
The published reports could help your organization:
  • prioritize programs that work toward impact along shared movement goals;
  • demonstrate impact, find and share experiences among different Wikimedia communities;
  • identify promising practices and potential new partnerships.
The evaluation report findings will feed directly into program toolkits. Toolkits are focused guides on best practices and program design. Their aim is that anyone planning a program has community experience and data to help them choose a design that will reach the most impact. We are working to release our first toolkit next month (February 2014) for the Wikipedia Education Program. Toolkits for on-wiki writing contests and photo events will follow by the end of the fiscal year (September 30, 2015).


Shared language of evaluation
Wikimedians love to share their work, but sometimes it's unclear which practices are best. The program toolkits and evaluation reports would create a pool of knowledge offering program leaders and movement organizations insights on how to evaluate programs, the practical applications of evaluation, and how to find the stories in the data that can be shared with the general public about Wikimedia programs. The evaluation reports offer a basic method for evaluating programs across the movement, using metrics that are commonly shared within programs. The program toolkits will include information about important metrics and strategies for evaluating programs.


Shared metrics
Metrics are used to make sense of which programs work best for which goals. Metrics are the specific methods we use to measure inputs, outputs, and outcomes (see examples in Measures for evaluation). The challenge with metrics is finding common measures, within and across programs, that can indicate a program's success. This requires conversation. With the collective data from programs for the evaluation reports, we are able to closely examine the measures we use and start talking about the key metrics that matter for programs. We are able to use metrics to compare and contrast programs and better understand how they work.


Evidence-based decision-making support around program choice
Programs aim to solve a problem or to reach a goal. With the reports to inform knowledge and the toolkits as guides, program leaders and organizations would be empowered to make evidence-based decisions about which programs to use for reaching a particular goal or solving a particular problem. For example, on-wiki writing contests are a great way to engage editors, but they are most useful for engaging existing editors; using a contest to engage new editors can be challenging, since it takes time for new editors to be able to compete with experienced ones.

It sounds like this takes a lot of time. How do we know movement partners have these data readily available?



Evaluation Pulse survey data
We believe much of these data may be on hand but not reported: we have seen increased reporting in grantee reports, and program leaders' self-reports in Evaluation Pulse 2014 indicate improved evaluation capacity this year. While reportability (the ability to report information) must be triangulated against these self-reported improvements, we expect that it has also increased, based on the responses of the 90 participants in this year's survey. Importantly, we saw a significant increase in self-reported tracking and monitoring of these data between the original capacity survey in 2013 and this year's Evaluation Pulse, collected in July. At this point, a large majority of program leaders report tracking these data in most cases, and this is more often true of Annual Plan Grantees than of non-grantee program leaders. Specifically, we heard from 20 program leaders who have operated under an FDC APG grant. Of these program leaders:

75% to 100%
are reporting they "mostly" or "completely" track their program's:

  • Date/Time of program
  • Program Budget/Costs
  • Number of New Articles
  • Number of media uploads
  • Participant User Names

60% to 70%
are reporting they "mostly" or "completely" track their program's:

  • Number of Articles Edited
  • Program Length (Days, Hours, etc.)
  • Number of New Accounts Created
  • Donated Resources

40% to 50%
are reporting they "mostly" or "completely" track their program's:

  • Gender of Participants
  • Staff/Contractor hours
  • Lessons Learned (i.e., what worked as planned and what adjustments, if any, were made)

10% to 25%
are reporting they "mostly" or "completely" track their program's:

  • Volunteer Hours
  • Other Demographics of Participants
  • Content Areas Improved/Increased (e.g., articles about women scientists or animals)

We do not find as much of these data in their reporting, however. Still, 85% of those program leaders reported that they have measured against their target outcomes and reported on it. Part of the problem is that program leaders report through many different channels and do not always ensure that the same information reaches all reports. This, too, is more true of APG program leaders than of others. Reporting spaces shared:

Reporting Space                      | All % | APG % | Non-APG %
Grant Reports                        | 69%   | 85%   | 63%
Chapter Reports                      | 56%   | 85%   | 47%
Blogs                                | 59%   | 90%   | 48%
On-wiki news channel or project page | 54%   | 55%   | 53%
Social Media                         | 59%   | 90%   | 48%
Other                                | 23%   | 40%   | 17%

Through evaluation workshops and dialogues with many of you about metrics, tracking, and reporting, in conference workshops and virtual meet-up sessions, we have presented a tracking and reporting toolkit (February 2014) following the initial beta reports, and now an online learning module, to support program leaders in capturing these data. We have searched through monthly chapter reports, grantee progress reports, event pages, linked blogs, and other data you submitted to Grantmaking about programs, but we know there are more places to search and we cannot look everywhere. Based on recent self-reports and past exchanges, we anticipate program leaders will be able to fill in a good number of blanks with only a small amount of effort, or direct us to additional documentation. We now invite you all to do so: share your data!


I have a program and data I'd like to report on. How can I participate?

We are still collecting more data! If you have run a program implementation that started and ended between September 2013 and September 2014, we would really like to hear from you. We are looking for data about the following programs; please note the deadlines. If you received an Annual Plan Grant between September 2013 and September 2014, we will be in contact with you. Please feel free to email egalvez at wikimedia.org if you have any questions.
To make things easier, we are happy to go through any documentation or links you have that contain the data, and we will mine (search and organize) the data for you!

Due by February 1

Wiki Loves Monuments
Other Photo Events (e.g. Wiki Loves Earth, Wiki Takes)
GLAM Content Donations
On-wiki Editing Contests

Due by March 1

Wikipedia Education Program
Hackathons
Conferences
Wikimedians in Residence




TO THOSE OF YOU WHO HAVE ALREADY REPORTED:
THANK YOU VERY MUCH!!!