Wikimedia Foundation Annual Plan/Quarterly check-ins/Editing Oct–Dec 2016

Notes from the quarterly check-in meeting with the Wikimedia Foundation's Editing department, January 31, 2017, 9:00 am - 10:00 am PST.

Please keep in mind that these minutes are mostly a rough paraphrase of what was said at the meeting, rather than a source of authoritative information. Consider referring to the presentation slides, blog posts, press releases and other official material.

Present (in the office): Neil Patel Quinn, Volker Eckl, Katherine Maher, Joe Matazzoni, Toby Negrin, Michelle Paulson, Lisa Seitz-Gruwell, Heather Walls

Participating remotely: Trevor Parscal, Amir Aharoni, Maggie Dennis, Niklas Laxström, Pau Giner, Runa Bhattacharjee, Sherry Snyder, Subbu Sastry, Wes Moran, Ed Sanders, Kartik Mistry, Kristen Lans, Mark Holmquist, Nick Wilson, Daisy Chen

Slide deck

Minutes

Slide 1: product

Slide 2: mission

Slide 3: mission

TREVOR: There are a lot of people who build internet tools for posting content, but our focus on mass collaboration and free knowledge sets us apart.

Slide 4: strategy

Slide 5: timeline

TREVOR: Since the Editing team started, we've been in a three-year process of shoring up and reevaluating existing commitments so that we have space to build new things. We're currently developing our new annual plan; it will focus on those new things (while still acknowledging our responsibility to support existing products).

Slide 6: annual plan

TREVOR: Unifying our technology stack so that we don't have to support multiple instances of the same concept will put us in a much better position.

We already have some experience with these new technologies; in the coming year, we're ready to start using them at scale.

Slide 7: tactics

Slide 8: teams

Slide 9: products

TREVOR: We're responsible for many projects—literally hundreds—but we're not working on all of them; each team has a couple they're actively working on, with others they're supporting in maintenance mode. Maintenance is about half of our workload.

Slide 10: metrics

TREVOR: We also pay attention to top-line metrics to see general trends, even though they don't give us a lot of feedback on our particular interventions.

There are two high-level trends we're seeing right now. First, existing editors are steady. That's exactly what we'd expect, given that we haven't rolled out anything particularly aimed at them recently (this excludes things in beta). Second, mobile editing is growing quickly. That's a trend we're really excited about. We're working with Audience Research right now on a research project over the next several months that will give us a lot of insight into emerging mobile-heavy regions and editors from them.

We believe there are still shifts in these metrics that we don't understand, so we're looking to move from high-level metrics to product-focused ones. Beginning in Q3, 100% of our goals will have metrics attached, so we can see what effect our interventions have; we can't get this information from high-level metrics.

TOBY: Second-month active editors are up 12.8%.

TREVOR: But at 0.7% year to year, I'm not ready to call that anything other than seasonality. Even if that were the effect of something we did, the only way we're going to prove that is to measure much more closely so that we can see local metrics move.

TOBY: Still, if that were to hold that would be significant.

TREVOR: Well, it was up higher last month [chuckle].

TOBY: In Reading, we see changes like that which we can't trace to any specific intervention, but which we believe are the result of many small changes.

Slide 11: improve existing tools

Slide 12: interfaces

TREVOR: The VE team has been focusing on this goal and moving the 2017 wikitext editor closer and closer to being a production-ready tool. We have a roughly six-month release schedule for this editor, which the team will be doing on the side, since it doesn't take a full team but does take a lot of time for community buy-in and discussion.

In addition, right now, you can use VE with little exposure to wikitext, but as soon as you want to review what you've changed, you're confronted with a pure wikitext diff. So in the coming quarter, the team will be working on visual diffs.

TOBY: Is this diff engine going to be available for both VE and the new wikitext editor?

TREVOR: Yes, there's a new algorithm that one of the VE engineers has developed. It's based on Parsoid info, so it should port successfully to the older wikitext editors as well. We're particularly focused on making this work on mobile; side-by-side wikitext diffs don't work well there.

ED: Initially, it's designed for the visual mode, where you're not expected to know wikitext, so it can give you a really intuitive representation of what you've done. In theory, we can port that to the wikitext mode or the history page, but initially, it's just the VE mode.

TREVOR: It could be moved, but the initial intervention here is just to keep people in the mode they've been editing in, for a more consistent user experience.

Slide 13: mobile edits

TREVOR: Here's a zoomed-in view of the really exciting chart of mobile edits. Eventually, we're going to move into being really serious about mobile contribution; this is a really good time for that. We're excited to see a metric moving this quickly.

TOBY: So, you're saying "clearly, mobile is part of our audience and going forward we're going to design with them in mind"?

TREVOR: Yes, I think that Reading saw a larger portion of their users moving to mobile before editing did. We needed to be sure of what we had and get our house in order. Now this metric is showing us this is absolutely the time, so we're excited to move into this space.

TOBY: What's the metric you're shooting for with the visual diffs?

TREVOR: We'll have a metric that shows specifically how people interact with that, but that's work we're doing in the next quarter; it's still being refined. We also often have the luxury of being able to look at metrics historically.

Slide 14: reduce tech debt

TREVOR: A majority of the teams have been focused on technical debt this last quarter.

Slide 15: parsers

TREVOR: Our trajectory is toward a single parser rather than two parsers.

SUBBU: Tidy is a library used to clean up the output of the PHP parser. Tidy is actually from the 1990s and based on the HTML4 standard. We're now on HTML5, and browsers are much better at standards compliance. Tidy introduces problems for editors, and we've been getting requests for its removal since 2013. We're working with Community Liaisons to move off Tidy, since that change can have a lot of user-facing effects, especially on pages with broken markup.

The replacement for Tidy we wrote was in Java, but that doesn't work for third-party users and adds complexity to the Wikimedia cluster, so we're working on one in PHP as well. Now it looks like we can go ahead with the PHP version rather than work on two separate things. That's why we didn't deploy this quarter. Hopefully, we can deploy the PHP version this quarter.

Parsoid knows a lot about broken markup; we want to expose that info to editors with a Linter extension so they can fix that markup. This is also related to the Tidy replacement: if we can fix pages with broken markup, a lot of the sources of rendering differences between HTML4 and HTML5 will go away. We hope to deploy that this quarter as well.

TREVOR: This is exciting, because we've never before leveraged communities to fix errors in wikitext and drive improvements in the format.
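
[NOTE: For illustration only, here is a rough sketch of how a script or bot might ask a wiki's API for pages the Linter extension has flagged, once it is deployed. This is not the team's code; the "linterrors" query module and the "lnt*" parameter names are assumptions about the extension's API and may differ in practice.]

 import requests
 
 # Hypothetical sketch: list pages flagged by the Linter extension so editors
 # or bots can work through the broken markup. The module and parameter names
 # ("linterrors", "lntcategories", "lntlimit") are assumptions, not confirmed
 # against the deployed extension.
 API_URL = "https://en.wikipedia.org/w/api.php"
 
 def pages_with_lint_errors(category="obsolete-tag", limit=50):
     params = {
         "action": "query",
         "list": "linterrors",
         "lntcategories": category,
         "lntlimit": limit,
         "format": "json",
     }
     data = requests.get(API_URL, params=params, timeout=10).json()
     return [error["title"] for error in data["query"]["linterrors"]]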

Slide 16: structured data

TREVOR: Another team that's been working on technical debt, or actually investing in new technology, is the Multimedia team. They're paving the way for an initial feature that will make use of the multi-content revision system being spearheaded by WMDE. We said we needed to work on a feature that would use this, so we're going to productize a community-built gadget used for image annotations on Commons. As soon as that system is in place, we'll have one of our first examples of how structured data can be used. This also reduces technical debt; this is something the community wanted so much they built it themselves, and we're going to productize it and clean it up. So, in concert with the Commons structured data grant, we're going to be working on multi-content revisions and metadata editing, and this will dovetail into that.

Next, we're picking up some more community work and taking it further to make it possible to view 3D models from Commons; a rendering system, 3d2png, will be integrated with Media Viewer.

WES: This came from the community wishlist?

TREVOR: Yes, it was the 11th item [NOTE: actually 33rd, with 41 votes], so it didn't quite make the Community Tech team's cut. We're very excited to be working on something with a lot of community support behind it.

Slide 17: UI standardization

TREVOR: Future work in this area is building a base set of CSS rules used by MediaWiki, in this toolkit but also available standalone, which will allow people making prototypes and mockups to match the style guide even as it changes. That should streamline the design process and increase the number of people conforming to the style guide.

VOLKER: Yes, it should lower the barrier to entry to conforming with the style guide.

Slide 18: UI style guidelines

TREVOR: If we have guidelines, we want them to be easy to follow. These are some of the guidelines as they exist on the web, where we publish them for developers to follow, so even if they are adapting old technology that doesn't use our OOjs UI, they can follow the guidelines.

WES: Was this a shared effort across all the designers?

VOLKER: Yes, thanks to the other designers and to Wes for supporting it. Many groups were involved; for example, Legal gave feedback on licensing. It's based on work from the Design offsite and on principles from the Design Statement of Purpose. It speaks to our vision of having the style guide be as widely used as possible. We've had great communication across vertical teams and beyond.

WES: Ultimately, the goal is to unify a lot of existing resources into a single comprehensive one.

VOLKER: Right. In making this, we've abandoned several older resources; there are a few open tasks left, but the majority have been abandoned or had their abandoned status clearly stated.

KATHERINE: I'm really impressed by the focus on accessibility and the nod to ink and heritage. Can I ask what the rocketship is?

VOLKER: Just an icon Discovery has been using. Not used in the interface.

WES: Note that this is just a snapshot. There's a comprehensive web interface which gives everything in a lot more detail.

TREVOR: For example, we have guidance on how to make a new icon that matches (rather than trying to maintain an infinite icon library).

WES: Shout-out to Volker for pulling this work together.

Slide 19: invest in new tech

Slide 20: Content Translation

TREVOR: This is part of our focus on new kinds of content creation and new tools. The overall arc is that the team achieved a real landmark. Next quarter we'll be working on using the visual editor for the Content Translation (CX) tool, so that we have only one editor that we're all working on together. This will significantly reduce the amount of maintenance work and technical debt.

AMIR: The main goal of the last quarter was developing the new template support. When CX started in 2014, we built it with very simplistic template support, and there were a lot of requests from users to improve it. Pretty much all articles use templates, so they need to be supported. In addition, the HTML CX created was often dirty and hard to maintain. This was one of the main blockers for taking CX out of its beta status.

We now have support for complicated templates, like infoboxes with lots of parameters. There's an example in the coming slides.

Slide 21: Content Translation milestones

AMIR: This is a very quick timeline of CX's development. Developed in 2014, deployed first in January 2015. The rate of article creation was 1900 per week a year ago, and in January 2017 the rate is at 3000 a week. We now support a third machine translation system: Youdao, mostly for translations into Chinese.

Slide 22: translated article

AMIR: This is an example of what became possible with the template support work from the last quarter. This is the first revision of an article translated from English into French. Before, it was impossible to add the infobox template using CX; it had to be added manually with the normal editors once the article had been published, but here you can see it's added automatically, with all of the parameters and the image. This was frequently requested and is a big step toward taking CX out of beta.

We plan to replace the editing component of CX with VE in the coming quarters; it will be more maintainable and give a more unified interface.

TREVOR: The VE team is committed to this transition and is contributing resources to make sure it's achieved. One interesting thing: at the beginning of this quarter, the team was planning to start on a new feature called translation lists, but had trouble due to tight resourcing. However, we were able to shift early on, adapt, and focus on the work we just outlined and make it a success.

Slide 23: edit review improvements

TREVOR: The collaboration team has continued work on the Edit Review Improvements (ERI) project. The overall arc is that we're trying to make it easier for people to handle a large number of incoming edits by classifying them.

JOE: ERI is a project to improve edit review for reviewers and also to improve the experience for new editors, who can be discouraged or chased away by harsh edit review. It has something for everybody. The first release for this project is a slate of tools for the Recent Changes page. This is the first project to productize ORES and find a way to present its predictions so laypeople can understand and use them.

Designing and testing the new tools took longer than anticipated; the old tools were a patchwork of stuff that had grown up over 10 years. We determined that slapping another layer on top would just make a bigger mess, and that redesigning the page could make everything work more effectively together. It turned out to be a really knotty problem; for example, standardizing and presenting the ORES findings in a way people can understand was very challenging, and took multiple rounds of design and testing. Having done that, however, we now have an interface and a standardized way of presenting ORES scores that we feel very confident about. These are already making it much easier to design new products (like the improved Huggle tools we're working on).

We see no reason this project won't launch this quarter.

Our goal for next quarter is based on the fact that many Recent Changes patrollers don't use the website; they use tools like Huggle or Real-Time Recent Changes. We're working on a "ReviewStream" feed they can use. It should be ready this quarter, but there is a significant dependency on Research and Data and Analytics. I believe they're on track to meet their goals, but it is a consideration.

TREVOR: Just want to add that this year, the overall strategy was based on a concept of supply and demand. When we get a lot of edits coming in, that creates a demand for review that needs to be met. When we work on tools that increase the number of edits coming in, we also have to work on tools that increase the capacity for review. We have to be careful not to just add more and more stuff for the community to deal with. It's also an example of us investing in new technologies (ORES); thanks to the team for that.

JOE: That's absolutely right; we'll be tracking the beta release. If it works the way it should, it should really help reviewers review the work that is doubtful and ignore the stuff that with 99% confidence is probably fine.
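
[NOTE: For illustration only, here is a rough sketch of the kind of filtering Joe describes, written against the public ORES scores API. This is not the team's code; the v3 endpoint shape, the response structure, and the 0.99 threshold are assumptions for the example.]

 import requests
 
 # Sketch: fetch ORES "damaging" scores for a batch of revisions and keep only
 # those ORES is not highly confident are fine, so a reviewer can focus there.
 ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki"
 
 def revisions_needing_review(rev_ids, threshold=0.99):
     params = {"models": "damaging",
               "revids": "|".join(str(r) for r in rev_ids)}
     response = requests.get(ORES_URL, params=params, timeout=10).json()
     needs_review = []
     for rev_id, models in response["enwiki"]["scores"].items():
         prob_fine = models["damaging"]["score"]["probability"]["false"]
         if prob_fine < threshold:  # i.e. not "99% confident it's fine"
             needs_review.append(rev_id)
     return needs_review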

KATHERINE: It's exciting to see ORES in production.

TREVOR: I think our collaboration with Research and Data has been very fruitful. That's the kind of thing we need to learn by doing.

Slide 25: lessons learned

TREVOR: I think we learned a lot more than this, but here are some highlights.

Q2 has lots of vacation and time off. We know this, but it's hard to adjust for, and I think we overcommitted this year.

We decided to focus on product-level metrics shortly after the Q2 goals were set. But we didn't set to work on that quickly enough; those metrics will be ready for the coming quarter, but if we had moved more quickly, we could've had them ready for this quarter.

With the new wikitext editor rollout plan: you [Katherine] asked for it, but we didn't expedite it enough and presented it at the 11th hour, which wasn't fair to you. We apologize for that.

JOEL: Compared to the VE rollout in 2015, which had a lot of bad-faith criticism, the new wikitext editor rollout had a huge amount of good-faith engagement. We got lots of bug reports instead of criticism. I'm not sure what the lesson and underlying causes are yet, but it's definitely worth noting.

WES: Thank you. I'm happy to see the focus on audience research, the focus on community engagement, and the plans for myself and Katherine. I like the increased granularity and the closer focus on the products we're developing.

KATHERINE: Thanks very much for making this time work.