Which volunteers execute this protocol routinely?

Hello. Are there any volunteers who calculate global metrics with this protocol routinely? What about occasionally?

I presume that all moderately funded Wikimedia chapters receiving funds from the WMF run these metrics. Are any paid staff who follow this page available to say hello here on this talk page? I am more interested in which volunteers do this, but I would also like to confirm that paid chapter staff do it. Blue Rasberry (talk) 20:14, 25 November 2015 (UTC)

Aggregated reports and research

This protocol describes how to generate data about small Wikipedia events. For example, these instructions could be applied, and are supposed to be applied, to all WMF grant-funded outreach events that include participants' online contributions to Wikimedia projects.

A presumption behind requesting these reports is that the data in them will be used by others. The most natural way to use reports from this system is to combine them into an aggregate, analyze the aggregate, and then publish insights about collective categories of events. An example of this is the Art + Feminism project, in which dozens of small local events were each tracked as groups of perhaps 20 people, and then all the events were combined to report on how hundreds of people engaged collectively.
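As a rough illustration of that aggregation step, here is a minimal sketch in Python. The field names and numbers are hypothetical stand-ins for a few of the seven global metrics, not an official WMF schema.

```python
from dataclasses import dataclass

# A hypothetical per-event report. Field names are illustrative,
# not an official WMF schema; they stand in for a few global metrics.
@dataclass
class EventReport:
    event_name: str
    active_editors: int     # existing editors who participated
    new_editors: int        # newly registered accounts
    articles_improved: int  # articles created or improved
    absolute_bytes: int     # sum of absolute byte changes

def aggregate(reports):
    """Roll many small event reports up into one campaign-level total."""
    return {
        "events": len(reports),
        "active_editors": sum(r.active_editors for r in reports),
        "new_editors": sum(r.new_editors for r in reports),
        "articles_improved": sum(r.articles_improved for r in reports),
        "absolute_bytes": sum(r.absolute_bytes for r in reports),
    }

reports = [
    EventReport("Edit-a-thon A", 12, 8, 30, 45_000),
    EventReport("Edit-a-thon B", 15, 11, 42, 61_000),
]
print(aggregate(reports))
```

Note that naively summing participant counts would double-count editors who attend more than one event, which is one reason campaign-level aggregation is not entirely trivial.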

Where are the aggregated reports derived from the metrics this system produces? I am curious how much reuse the reports from this system get. Blue Rasberry (talk) 20:19, 25 November 2015 (UTC)

This page seems like the heart of the relationship between the WMF and the Wikimedia community

This seems like one of the most important single pages in any Wikimedia project. The page currently says, "This learning pattern describes how to successfully gather the 7 global metrics. Grant-funded projects and programs will need to include these metrics in their project reports beginning in late 2014". Since these are requirements for community members administering WMF grants, this page guides or defines the way that grant-seeking affiliates of the WMF will position themselves when requesting grants. I think this page documents the minimal reporting requirements to which grant recipients must commit when seeking funding, and it is part of the minimal compliance expected of WMF affiliates seeking to remain in good standing with the WMF.

Here are some strange aspects of this page:

  • From August 2014 to November 2015, there was no community input into the development of this page.
  • There is no note on this talk page pointing to discussion of these guidelines elsewhere.
  • I am not aware of community discussion about these metrics.
  • The metrics required here are arbitrary.
    • They probably are not the ones of most use to the community.
    • I am not sure of the extent to which these metrics are of use to the Wikimedia Foundation, either.
    • Other metrics could have been chosen and I am not sure how a decision was made to increase the attention to these and not increase attention to others.
  • It is not obvious what is done with these metrics when they are reported.
  • There is no discussion here about time commitment. Suppose that there is a 3-hour Wikipedia meetup; executing these instructions might also take three hours. I am curious about the relative value which the WMF assigns to this system as compared to the value it assigns to activities with impact. Administration and impact complement each other, but this is administration, and it is common in the nonprofit sector to differentiate the two and to be mindful that administration should be capped as a percentage of nonprofit resource allocation. There is at least one scheme for rating nonprofit organization management that compares investment in administration to investment in impact. I am curious whether this metrics system can be performed in a way that does not consume a huge amount of volunteer resources compared to the volunteer resources invested in impact.
  • I am not even sure which Wikimedia community groups or contributors have adopted this system. I can see at Special:WhatLinksHere/Grants:Learning_patterns/Calculating_global_metrics that, through templates, it appears on many pages, but I am not sure who is following these steps.
  • Surely this process will be automated someday, because other online communities manage this with automation. Does anyone have an idea of when these instructions are likely to be deprecated due to automation?
  • Is there any metrics guideline more significant than this one? It seems to get so little attention. Is this really the center of mandatory metrics reporting?

These are just thoughts. I do not necessarily need a response. I am still thinking about metrics. Blue Rasberry (talk) 20:40, 25 November 2015 (UTC)

Hello Blue Rasberry, Jaime knows more of the history of Global Metrics, so she is going to answer in more detail, but I wanted to respond here to say that we are working with WMF Engineering on a tool we hope will make collecting global metrics easier. We will have an update on our progress by March. Abittaker (WMF) (talk) 23:49, 1 December 2015 (UTC)
Hello Blue Rasberry, in answer to your questions above about Global Metrics history, application, and use, I share a brief outline of the process:
  • 1. The Beginning
TL;DR Let’s Start Talking Program Evaluation blog, Budapest logic model sessions, and pilot program metrics and reporting
The Program Evaluation and Design team at WMF initially published reports with systematic measures across popular program types in 2013-2014. This initial set of program reports was a pilot of the metrics identified in Budapest which were most attainable. A much larger set of metrics was used in this report, based on the work of the community program leaders who convened in Budapest, where we completed a first round of logic model mapping for programs and began discussion of standardizing metrics (see the session documentation).
  • 2. The Metrics Dialogue and launch
TL;DR There were a number of opportunities to discuss core metrics for content programs: in the reporting round, at the Wikimedia Conference, and through blogs. While the direct “Global Metrics” discussion was somewhat brief, it influenced some changes to the bytes added metric before implementation began.
Following the pilot reports, a request for comments went out and we engaged a number of movement leaders in discussions held at the Wikimedia Conference as well as on Meta as follow-up (see the Metrics Dialogue). The Learning & Evaluation team met to develop a proposed set of core metrics, smaller than that of the pilot program reports: priority metrics which could be triangulated across the metrics input sessions and reports and which were found useful for telling the basic story of a program. The proposal of the current global metrics then came about in late July 2014; the Global Metrics were adjusted somewhat based on additional comments and concerns received at that time before they were launched in September 2014. Most notably, the “bytes added” metric was changed from “positive bytes” to “absolute bytes.” There was, however, limited time for review of the metrics between when they were announced to grantees in August and their launch in September.
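To make the “positive bytes” versus “absolute bytes” distinction concrete, here is a brief sketch; it is an illustrative reading of the two definitions, not the official WMF computation:

```python
# Hypothetical byte-size changes from four edits at one event.
edit_deltas = [+1200, -300, +450, -2000]

# "Positive bytes": only byte increases count toward the total.
positive_bytes = sum(d for d in edit_deltas if d > 0)  # 1650

# "Absolute bytes": every change counts by its magnitude, so a large
# removal registers as activity just as an addition does.
absolute_bytes = sum(abs(d) for d in edit_deltas)      # 3950

print(positive_bytes, absolute_bytes)
```

One apparent effect of the switch is that cleanup work which removes text still registers in the metric rather than counting for nothing.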
  • 3. Where the metrics had been reported and used prior to the implementation of “Global Metrics”
TL;DR Similar metrics were already being reported (to varying degrees) in grantee reports before Global Metrics became required.
In addition to the sessions and online work engaging program leaders in mapping metrics, the metrics reported by all grantees previous to the definition of the Global Metrics have also been analyzed in the Grants Impact reports, where you can see the alignment of most of the Global Metrics with core metrics previously reported by grantees. Most Global Metrics are simply those core reporting points that kept popping up across grantee discussions and reporting. Still, we are experimenting with one metric and added a learning question regarding motivation. The absolute bytes metric was included as a proxy for the amount of text-based content affected (and it is available through Wikimetrics along with other vital signs metrics). Motivation was among the most frequently referenced intended outcomes of programs, but it was not reported on, so it was designed into the Global Metrics as the learning question. We wanted to put it out to grantees to see how they might capture and report on it. I know WMUK included a number of global metrics in their reporting before they were established; many other grantees did so less consistently, and many more have since.
Perhaps others may have additional case studies and reports beyond these that they can link to in response to your question. However, this talk page is meant for the mechanics of calculating global metrics rather than for discussing their purpose and use; this topic seems to belong on the main Global Metrics talk page, so I will link this conversation there.
  • 4. Reporting since implementation
TL;DR Grantees have been reporting Global Metrics for most of 2015, and the Community Resources team rolled them up in their reporting for their first quarter review. An overall metrics review is pending.
As a way to standardize core metrics for grantees, Global Metrics became required reporting beginning September 2014. Grantees' reporting of these metrics then began earlier this year. This reporting can be observed in some IEG and all PEG grant reports for projects funded after that time, and in all FDC grant reports beginning with the Round I progress reports in July 2015.
The Community Resources team was able to roll up these metrics for the first time in their first quarter review this past October (see the Key Performance Indicators reported in slide 15). We are discussing issues related to their review and standardization and will be developing a plan for those next steps over the next six months.
Again, perhaps others may have additional case studies and reports beyond these that they can link to in response.
  • and the story goes on… additional metrics mapping work and documentation
TL;DR “Measures of Evaluation”
In addition to the metrics which are currently being implemented, and the documentation of the many discussions and activities from before their implementation, many more metrics were surfaced along a few core measurement areas (Quality Content, Participation, and Reach/Readership) and are outlined on the “Measures of Evaluation” page. There the mapping is expanded for those who want to consider additional potential metrics for telling the impact stories of program work.
I realize this is quite a lot of information; I hope the summaries help. Thanks for your interest, and let us know if you have further comments or questions. JAnstee (WMF) (talk) 04:17, 4 December 2015 (UTC)
Members of Wikimedia NYC continue to discuss this intensely. Thanks for your feedback. I shared it a month ago and we continue to reflect on it. Blue Rasberry (talk) 16:42, 20 January 2016 (UTC)