Wikimedia Foundation Annual Plan/2024-2025/Product & Technology OKRs


This document represents the first part of the 2024-25 Annual Planning process for the Wikimedia Foundation's Product & Technology department. It describes the department's "objectives and key results" (OKRs). This is a continuation of the structure of work portfolios (called "buckets") that began last year.

[Portrait of Selena Deckelmann]

I spoke with you all back in November about what I believe is the most pressing question facing the Wikimedia movement: how do we ensure that Wikipedia and all Wikimedia projects are multigenerational? I’d like to thank everyone who took the time to really consider that question and to respond to me directly, and now that I’ve had the chance to spend some time reflecting on your responses, I’ll share what I’ve learned.

First, there is no single reason volunteers contribute. In order to nurture multiple generations of volunteers, we need to better understand the many reasons people contribute their time to our projects. Next, we need to focus on what sets us apart: our ability to provide trustworthy content as disinformation and misinformation proliferate around the internet and on platforms competing for the attention of new generations. This includes ensuring we achieve the mission to assemble and deliver the sum of all human knowledge to the world by expanding our coverage of missing information, which can be caused by inequity, discrimination or bias. Our content needs to also serve and remain vital in a changing internet driven by artificial intelligence and rich experiences. Lastly we need to find ways to sustainably fund our movement by building a shared strategy for our products and revenue so that we can fund this work for the long term.

These ideas will be reflected in the Wikimedia Foundation’s 2024–2025 annual plan, the first portion of which I’m sharing with you today in the form of draft objectives for our product & technology work. Like with last year, our entire annual plan will be centered around the technology needs of our audiences and platforms, and we’d like your feedback to know if we’re focusing on the right problems. These objectives build off ideas we’ve been hearing from community members over the past several months through Talking:2024, on mailing lists and talk pages, and at community events about our product and technology strategy for the year ahead. You can view the full list of draft objectives below.

An “objective” is a high level direction that will shape the product and technology projects we take on for the next fiscal year. They’re intentionally broad, represent the direction of our strategy and, importantly, what challenges we’re proposing to prioritize among the many possible focus areas for the upcoming year. We’re sharing this now so community members can help shape our early-stage thinking and before budgets and measurable targets are committed for the year.

Feedback

One area in which we’d particularly like feedback is our work grouped under the name “Wiki Experiences.” “Wiki Experiences” is about how we efficiently deliver, improve, and innovate on the ways people directly use the wikis, whether as contributors, consumers, or donors. This involves work to support our core technology and capabilities, and to make sure we can improve the experience of volunteer editors — in particular, editors with extended rights — through better features and tooling, translation services, and platform upgrades.

Here are some reflections from our recent planning discussions, and some questions for all of you to help us refine our ideas:

  1. Volunteering on the Wikimedia projects should feel rewarding. We also think that the experience of online collaboration should be a major part of what keeps volunteers coming back. What does it take for volunteers to find editing rewarding, and to work better together to build trustworthy content?
  2. The trustworthiness of our content is part of Wikimedia’s unique contribution to the world, and what keeps people coming to our platform and using our content. What can we build that will help grow trustworthy content more quickly, but still within the quality guardrails set by communities on each project?
  3. To stay relevant and compete with other large online platforms, Wikimedia needs a new generation of consumers to feel connected to our content. How can we make our content easier to discover and interact with for readers and donors?
  4. In an age where online abuse thrives, we need to make sure our communities, platform, and serving system are protected. We also face evolving compliance obligations, where global policymakers look to shape privacy, identity, and information sharing online. What improvements to our abuse fighting capabilities will help us address these challenges?
  5. MediaWiki, the software platform and interfaces that allow Wikipedia to function, needs ongoing support for the next decade in order to provide creation, moderation, storage, discovery, and consumption of open, multilingual content at scale. What decisions and platform improvements can we make this year to ensure that MediaWiki is sustainable?

–– Selena Deckelmann

Currently published is the highest planning level: the "Objectives".

The next level, the "Key Results" (KRs) for each finalised objective, is provided below.

The underlying "Hypotheses" for each KR are also published below and will be updated on the relevant project/team's wiki pages throughout the year as lessons are learned.

Wiki Experiences (WE) objectives
Objective | Objective area | Objective text | Objective context | Owner
WE1 | Contributor experience | Both experienced and new contributors rally together online to build a trustworthy encyclopedia, with more ease and less frustration. | In order for Wikipedia to be vibrant in the years to come, we must do work that nurtures multiple generations of volunteers and makes contributing something people want to do. Different generations of volunteers need different investments -- more experienced contributors need their powerful workflows streamlined and repaired, while newer contributors need new ways to edit that make sense to them. And across these generations, all contributors need to be able to connect and collaborate with each other to do their most impactful work. With this objective, we will make improvements to critical workflows for experienced contributors, we will lower barriers to constructive contributions for newcomers, and we will invest in ways that volunteers can find and communicate with each other around common interests. | Marshall Miller
WE2 | Encyclopedic content | Communities are supported to effectively close knowledge gaps through tools and support systems that are easier to access, adapt, and improve, ensuring increased growth in trustworthy encyclopedic content.

Encyclopedic content, primarily on Wikipedia, can be increased and improved through continuous engagement and innovation. Tools and resources (both technical and non-technical) that are available for contributors can be made more discoverable and reliable. These tools should be better supported by WMF through feature improvements achievable in short cycles. In view of recent trends around AI-assisted content generation and changing user behaviour, we will also explore groundwork for substantial changes (e.g. Wikifunctions) that can assist scaled growth in content creation and reuse. Mechanisms to identify content gaps should be easier to discover and plan with. Resources that support growth of encyclopedic content, including content on sister projects, projects such as the Wikipedia Library, and campaigns, can be better integrated with contribution workflows. At the same time, methods used for growth should have guardrails against growing threats, so that there is continued trust in the process while staying true to the basic tenets of encyclopedic content as recognised across Wikimedia projects.

Audience: Editors, Translators

Runa Bhattacharjee
WE3 | Consumer experience (Reading & Media) | A new generation of consumers arrives at Wikipedia and finds a preferred destination for discovering, engaging with, and building a lasting connection to encyclopedic content.

Goals:

Retain existing and new generations of consumers and donors.

Increase relevance to existing and new generations of consumers by making our content easier to discover and interact with.

Work across platforms to adapt our experiences and existing content, so that encyclopedic content can be explored and curated by, and delivered to, a new generation of consumers and donors.

Olga Vasileva
WE4 | Trust & Safety | Improve our infrastructure, tools, and processes so that we are well-equipped to protect the communities, the platform, and our serving systems from different kinds of scaled and directed abuse, while maintaining compliance with an evolving regulatory environment. | Some facets of our abuse-fighting capabilities are in need of an upgrade. IP-based abuse mitigation is becoming less effective, several admin tools need efficiency improvements, and we need to put together a unified strategy that helps us combat scaled abuse by using the various signals and mitigation mechanisms (captchas, blocks, etc.) in concert. Over this year, we will begin making progress on the largest problems in this space. Furthermore, this investment in abuse protection has to be balanced by an investment in the understanding and improvement of community health, several aspects of which are included in various regulatory requirements. | Suman Cherukuwada
WE5 | Knowledge platform I (Platform evolution) | Evolve the MediaWiki platform and its interfaces to better meet Wikipedia's core needs. | MediaWiki has been built to enable the creation, moderation, storage, discovery and consumption of open, multilingual content at scale. In this second year of Knowledge Platform, we will take a curating look at the system and begin working towards platform improvements to effectively support the Wikimedia projects' core needs through the next decade, starting with Wikipedia. This includes continuing work to define our knowledge production platform, strengthening the sustainability of the platform, a focus on the extensions/hooks system to clarify and streamline feature development, and continuing to invest in knowledge sharing and enabling people to contribute to MediaWiki. | Birgit Müller
WE6 | Knowledge platform II (Developer Services) | Technical staff and volunteer developers have the tools they need to effectively support the Wikimedia projects. | We will continue work started to improve (and scale) development, testing and deployment workflows in Wikimedia production, and expand the definition to include services for tool developers. We also aim to improve our ability to answer frequently asked questions about developer/engineering workflows and audiences, and to make relevant data accessible to enable informed decision-making. Part of this work is to look at practices (or the lack of them) that currently present a challenge for our ecosystem. | Birgit Müller

Signals and Data Services (SDS) objectives
Objective | Objective area | Objective text | Objective context | Owner
SDS1 | Shared insights | Our decisions about how to support the Wikimedia mission and movement are informed by high-level metrics and insights. | In order for us to effectively and efficiently build technology, support volunteers, and advocate for policies that protect and advance access to knowledge, we need to understand the Wikimedia ecosystem and align on what success looks like. This means tracking a common set of metrics that are reliable, understandable, and available in a timely manner. It also means surfacing research and insights that help us understand the whys and hows behind our measurements. | Kate Zimmerman
SDS2 | Experimentation platform | Product managers can quickly, easily, and confidently evaluate the impacts of product features. | To enable and accelerate data-informed decision-making about product feature development, product managers need an experimentation platform in which they can define features, select treatment audiences of users, and see measurements of impact. Speeding the time from launch to analysis is critical, as shortening the timeline for learning will accelerate experimentation and, ultimately, innovation. Manual tasks and bespoke approaches to measurement have been identified as barriers to speed. The ideal scenario is that product managers can get from experiment launch to discovery with little or no manual intervention from engineers and analysts. | Tajh Taylor

Future Audiences (FA) objective
Objective | Objective area | Objective text | Objective context | Owner
FA1 | Test hypotheses | Provide recommendations on strategic investments for the Wikimedia Foundation to pursue – based on insights from experiments that sharpen our understanding of how knowledge is shared and consumed online – that help our movement serve new audiences in a changing internet. | Due to ongoing changes in technology and online user behavior (e.g., increasing preference for getting information via social apps, the popularity of short video edu-tainment, the rise of generative AI), the Wikimedia movement faces challenges in attracting and retaining readers and contributors. These changes also bring opportunities to serve new audiences by creating and delivering information in new ways. However, we as a movement do not have a clear, data-informed picture of the benefits and tradeoffs of different potential strategies we could pursue to overcome the challenges or seize new opportunities. For example, should we...
  • Invest in large new features like chatbots or social video on our platform?
  • Bring Wikimedia's knowledge and pathways to contribution to popular third-party platforms?
  • Something else?

To ensure that Wikimedia becomes a multi-generational project, we will test hypotheses to better understand and recommend promising strategies – for the Wikimedia Foundation and the Wikimedia movement – to pursue to attract and retain future audiences.

Maryana Pinchuk

Product and Engineering Support (PES) objective
Objective | Objective area | Objective text | Objective context | Owner
PES1 | Efficiency of operations | Make the Foundation's work faster, cheaper, and more impactful. | Staff already do a lot in their regular work to make our operations faster, cheaper, and more impactful. This objective highlights specific initiatives that will both a) make substantial gains toward being faster, cheaper, or more impactful, and b) take coordinated effort and changes to formal and informal practices at the Foundation. Essentially, the KRs included in this objective are the hardest and best improvements we can make this year to the operational efficiency of work touching our products and technology. | Amanda Bittaker


Key Results

The "Key Results" (KR) for each finalised objective are here. They correspond to each of the objectives, above.

The underlying "Hypotheses" for each KR are published below on this page and will be updated on the relevant project or team's wiki pages throughout the year as lessons are learned.

Wiki Experiences (WE) Key Results

[ Objectives ]

Key Result shortname | Key Result text | Key Result context | Owner
WE1.1 | Develop or improve one workflow that helps contributors with common interests to connect with each other and contribute together. | We think community spaces and interactions on the wikis make people happier and more productive as contributors. Additionally, community spaces help onboard and mentor newcomers, model best practices of contributing, and help address knowledge gaps. However, the existing resources, tools, and spaces that support human connection on the wikis are subpar and do not meet the challenges and needs of the majority of editors today. Meanwhile, the work of the Campaigns team has demonstrated that many organizers are eager to adopt and experiment with new tools with structured workflows that help them in their community work. For these reasons, we want to focus on encouraging and promoting a sense of belonging among contributors on the wikis. | Ilana Fried
WE1.2 | Constructive Activation: Widespread deployment of interventions shown to collectively cause a 10% relative increase (y-o-y) on mobile web and a 25% relative increase (y-o-y) on iOS in the number of newcomers who publish ≥1 constructive edit in the main namespace on a mobile device, as measured by controlled experiments.

Note: this KR will be measured on a per platform basis.

Current full-page editing experiences require too much context, patience, and trial and error for many newcomers to contribute constructively. To support a new generation of volunteers, we will increase the number and availability of smaller, structured, and more task-specific editing workflows (e.g. Edit Check and Structured Tasks).

Note: Baselines will only be established towards the end of Q4 of the current FY, after which our KR target metric percentage will also be established.

Peter Pelberg
WE1.3 | Increase user satisfaction with 4 moderation products by 5 percentage points each.

Editors with extended rights make use of a wide range of existing features, extensions, tools, and scripts to perform moderation tasks on Wikimedia projects. This year we want to focus on making improvements to this tooling, rather than undertaking projects to build new functionality in this space. We're aiming to touch a number of products over the course of the year, and want to make impactful improvements to each. In doing so we hope to improve the experience of moderating content overall.

We will define baselines for the common moderator tools that we may target with this workstream, to determine the increase in satisfaction per tool. The Community Wishlist will be a substantial contributor to deciding on the priorities for this KR.

Sam Walton
WE2.1 | By the end of Q2, support organizers, contributors, and institutions to increase the coverage of quality content in key topic areas, i.e. Gender (women's health, women's biographies) and Geography (biodiversity), by 138 articles through experiments.

This KR is about improving topic coverage towards reducing existing knowledge gaps. We’ve established that communities benefit from effective tools paired with campaigns targeted at increasing the quality of content in our projects. This year we want to focus on improving existing tools and experimenting with new ways of prioritizing key topic areas that address knowledge gaps.

Our target of 138 articles will be determined by looking at existing baselines of quality content creation.

Purity Waigi & Fiona Romeo
WE2.2 | By the end of Q2, implement and test two recommendations, both social and technical, to support language onboarding for small language communities, with an evaluation to analyze community feedback.

There are editions of Wikipedia in about 300 languages, and yet there are many more languages, spoken by millions of people, in which there is no Wikipedia and no wiki at all. This is a blocker to fulfilling our vision: that every single human being can freely share in the sum of all knowledge. The Wikimedia Incubator is where potential Wikimedia project wikis in new language versions can be arranged, written, tested and proven worthy of being hosted by the Wikimedia Foundation. The Incubator was launched in 2006 with the assumption that its users would have prior wiki editing knowledge. This problem is exacerbated by the fact that the process is supposed to be mostly performed by people who are the newest and least experienced in our movement. While editing on Wikimedia wikis has significantly improved since then, the Incubator hasn't received these updates due to technical limitations. Currently, it takes several weeks for a wiki to graduate from the Incubator, and only around 12 wikis are created each year, showing a significant bottleneck.

Existing research and materials reveal technical challenges in every phase of language onboarding: adding new languages to the Incubator, complexities in developing and reviewing content, and a slow process for creating a wiki site when a language graduates from the Incubator.

Each phase is slow, manual, and complex, indicating the need for improvement. Addressing this problem will allow wikis in new languages to be created more quickly and easily, and allow more humans to share knowledge. Various stakeholders, existing research and resources have highlighted proposed recommendations, both social and technical. This key result proposes testing two such recommendations and evaluating the community feedback.

Satdeep Gill & Mary Munyoki
WE2.3 | By the end of Q2, 2 new features guide contributors to add source materials that comply with project guidelines, and 3-5 partners have contributed source material that addresses language and geography gaps. | To grow access to the quality source material that’s needed to close strategic content gaps, we will:
  • Partner with the Biodiversity Heritage Library, AfLIA, and the Wikisource Loves Manuscripts learning network.
  • Support the acquisition and retention of content partners through more accessible reuse metrics.
  • Guide contributors to add images and references that comply with project guidelines and increase trust in content, for example, by flagging potential issues during their upload/addition.
Fiona Romeo & Alexandra Ugolnikova
WE2.4 | By the end of Q2, enable Wikifunctions calls on at least one smaller language Wikipedia to provide a more scalable way to seed new content. | To reduce our knowledge gaps effectively, we need to improve workflows that support scalable growth in quality content, especially in smaller language communities. | Amy Tsay
WE3.1 | Release two curated, accessible, and community-driven browsing and learning experiences to representative wikis, with the goal of increasing the logged-out reader retention of experience users by 5%.

This KR focuses on increasing the retention of a new generation of readers on our website, allowing a new generation to build a lasting connection with Wikipedia, by exploring opportunities for readers to more easily discover and learn from content they are interested in. This will include explorations and the development of new curated, personalized, and community-driven browsing and learning experiences (for example, feeds of relevant content, topical content recommendations and suggestions, community-curated content exploration opportunities, etc).

We plan to begin the fiscal year by running a series of experiments on browsing experiences to determine which we would like to scale for production use, and on which platform (web, apps, or both). We will then focus on scaling these experiments and testing their efficacy at increasing retention in production environments. Our goal by the end of the year is to launch at least two experiences on representative wikis and to accurately measure a 5% increase in reader retention for readers engaged with these experiences.

To be optimally effective at achieving this KR, we will require the ability to A/B test with logged-out users, as well as instrumentation capable of measuring reader retention. We might also need new APIs or services to present recommendations and other curation mechanisms.

Olga Vasileva
WE3.2 | 50% increase in the number of donations via touchpoints outside of the annual banner and email appeals, per platform. | Our goal is to provide a diversity of revenue sources while recognizing our existing donors. Based on feedback and data, our focus is on increasing the number of donations beyond the methods the Foundation has relied upon in the past, specifically the annual banner appeals. We want to show that by investing in more integrated donor experiences, we can sustain our work and expand our impact by providing an alternative for donors and potential donors who are unresponsive to banner appeals. 50% is an initial estimate based on the decreased visibility of the donate button on the web as a result of Vector 2022, and on the increase in the number of donations from FY 2023-2024's pilot project on the Wikipedia apps to enhance donor experiences (a 50.1% increase in donations). Evaluating this metric by platform will help us understand trends per platform and whether different tactics should be deployed in the future based on differences in platform audience behavior. | Jazmin Tanner
WE3.3 | By the end of Q2 2024-25, volunteers will start converting legacy graphs to the new graph extension on production Wikipedia articles.

The Graph extension has been disabled for security reasons since April 2023, leaving readers unable to view many graphs that community members have invested time and energy into over the last 10 years.

Data visualization plays a role in creating engaging encyclopedic content, so in FY 2024-25, we will build a new secure service to replace the Graph extension that will handle the majority of simple data visualization use cases on Wikipedia article pages. This new service will be built in an extensible way to support more sophisticated use cases if WMF or community developers choose to do so in the future.

We will know we’ve achieved success when community members are successfully converting legacy graphs and publishing new graphs using the new service. We will determine which underlying data visualization library to use and which graph types to support during the initial phase of the project.

Christopher Ciufo
WE3.4 | Develop the capability model to improve website performance through smaller-scale cache site deployments that take one month to implement, while maintaining technical capabilities, security and privacy.

The Traffic team is responsible for maintaining the Content Delivery Network (CDN). This layer caches frequently accessed content, pages, etc., in memory and on disk, which reduces the time it takes to process requests for users. The second benefit is storing content physically closer to the user, which reduces the time it takes for data to reach the user (latency). Last year, we enabled one site in Brazil meant to reduce latency in the South American region. Setting up new data centers would be great, but it is expensive, time consuming, and requires a lot of work – for example, last year's work spanned the full year. We would love to have centers in Africa and Southeast Asia, and we would love to have them all around the world.

Our hypothesis is that we can spin up smaller sites in other places around the world where traffic is lower. These would require fewer servers – no more than four or five – which reduces our cost. They would still help us reduce latency for users in these regions, while being more lightweight in terms of the time and effort needed to maintain them (see the sketch below).

Kwaku Ofori
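
To make the latency reasoning above concrete, here is a minimal, illustrative model of why a small edge cache site helps even when cache misses still travel to a core data center. All distances, the hit rate, and the fiber-speed approximation are assumptions for illustration, not measurements of Wikimedia's network.

```python
# Illustrative only: a rough round-trip-time (RTT) model for serving a cached page.
# All distances, speeds, and hit rates below are hypothetical.

def rtt_ms(distance_km: float) -> float:
    """Light in fiber travels roughly 200,000 km/s, so each km of one-way
    distance costs about 0.01 ms of round-trip time."""
    return 2 * distance_km / 200_000 * 1000

def expected_latency_ms(km_to_edge: float, km_edge_to_core: float, hit_rate: float) -> float:
    """Expected network latency: cache hits are served by the edge site,
    misses pay an extra round trip from the edge to the core data center."""
    hit = rtt_ms(km_to_edge)
    miss = rtt_ms(km_to_edge) + rtt_ms(km_edge_to_core)
    return hit_rate * hit + (1 - hit_rate) * miss

# A reader 8,000 km from the nearest core data center, assuming a 90% hit rate:
print(expected_latency_ms(8000, 0, 0.9))     # no edge site: ~80 ms
print(expected_latency_ms(500, 7500, 0.9))   # edge site 500 km away: ~12.5 ms
```

Under these assumptions the edge site cuts average network latency by an order of magnitude even though one request in ten still pays the full trip, which is the intuition behind many small sites rather than a few large ones.
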
WE4.1 | Provide a proposal of 3 countermeasures to harassment and harmful content, informed by data and in accordance with the evolving regulatory environment, by the end of Q3.

Ensuring user safety and well-being is a fundamental responsibility of online platforms. Many jurisdictions have laws and regulations that require online platforms to take action against harassment, cyberbullying and other harmful content. Failing to address these may expose platforms to legal liability and regulatory sanctions.

Right now, we do not have a very good idea of how big these problems are or of the reasons behind them. We rely heavily on anecdotal evidence and manual processes, which leaves us exposed both to legal risks and to other far-reaching consequences: underestimation of the problem, escalation of harm, reputational damage and erosion of user trust.

We need to build a strong culture of measuring the incidence of harassment & harmful content and proactively implement countermeasures.

Madalina Ana
WE4.2 | Develop at least two signals for use in anti-abuse workflows to improve the precision of actions on bad actors by the end of Q3.

The wikis rely heavily on IP blocking as a mechanism for blocking vandalism, spam and abuse. But IP addresses are increasingly less useful as stable identifiers of an individual actor, and blocking IP addresses has unintended negative effects on good-faith users who happen to share an IP address with bad actors. The combination of the decreasing stability of IP addresses and our heavy reliance on IP blocking results in less precision and effectiveness in targeting bad actors, combined with increasing levels of collateral damage for good-faith users. We want to see the opposite: decreased levels of collateral damage and increased precision in mitigations targeting bad actors.

To better support the anti-abuse work of functionaries and to provide building blocks for reuse in existing (e.g. CheckUser, Special:Block) and new tools, in this KR we propose to explore ways to reliably associate an individual with their actions (sockpuppetting mitigation), and to combine existing signals (e.g. IP addresses, account history, request attributes) to allow for more precise targeting of actions on bad actors (see the sketch below).

Kosta Harlan
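
A purely illustrative sketch of the signal-combination idea mentioned above: several weak signals are aggregated into a single risk score that supports more precise action than an IP block alone. The signal names, weights, and threshold are invented for illustration and are not the team's actual design.

```python
# Hypothetical sketch: combine weak abuse signals into one risk score.
# Signal names, weights, and the review threshold are invented.

SIGNAL_WEIGHTS = {
    "ip_recently_blocked": 0.4,    # IP range overlaps a recent block
    "new_account": 0.2,            # account is only a few days old
    "edit_burst": 0.25,            # unusually rapid edits
    "automation_user_agent": 0.15, # request attributes typical of automation
}

def risk_score(signals: dict) -> float:
    """Weighted sum of boolean signals, in [0, 1]."""
    return sum(weight for name, weight in SIGNAL_WEIGHTS.items() if signals.get(name))

actor = {"ip_recently_blocked": True, "new_account": True, "edit_burst": False}
score = risk_score(actor)  # 0.4 + 0.2 = 0.6
if score >= 0.6:
    print("flag this actor for targeted review instead of a blanket IP block")
```
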
WE4.3 | Reduce the effectiveness of a large-scale distributed attack by 50%, as measured by the time it takes us to adapt our measures and by the traffic volume we can sustain in a simulation.

The evolution of the internet landscape, including the rise of large-scale botnets and more frequent attacks, has made our traditional methods of limiting large-scale abuse obsolete. Such attacks can make our sites unavailable by flooding our infrastructure with requests, or overwhelm the ability of our community to combat large-scale vandalism. This also puts an unreasonable strain on our high-privilege editors and our technical community.

We urgently need to improve our ability to automatically detect, withstand, and mitigate or stop such attacks. In order to measure our improvements, we can't rely solely on the frequency and intensity of actual attacks, as we would be dependent on external actions and it would be hard to get a clear quantitative picture of our progress.

By setting up multiple simulated attacks of varying nature/complexity/duration to be run safely against our infrastructure, and running them every quarter, we will be able both to test our new countermeasures while not under attack and to report objectively on our improvements (see the sketch below).

Giuseppe Lavagetto
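
A minimal sketch of how the two quantities this KR names (time to adapt our measures, and the traffic volume we can sustain) could be compared across quarterly simulation runs. The data structure and numbers are hypothetical.

```python
# Hypothetical sketch: score one simulated attack run against the two
# quantities named in the KR, relative to a baseline run.
from dataclasses import dataclass

@dataclass
class AttackRun:
    started_at_s: float   # when the simulated attack began
    mitigated_at_s: float # when countermeasures fully engaged
    sustained_rps: float  # request rate served without user-visible errors

def improvement(baseline: AttackRun, current: AttackRun) -> dict:
    """Relative improvement between two quarterly simulation runs."""
    t_base = baseline.mitigated_at_s - baseline.started_at_s
    t_now = current.mitigated_at_s - current.started_at_s
    return {
        "adaptation_time_reduction": 1 - t_now / t_base,
        "sustained_volume_gain": current.sustained_rps / baseline.sustained_rps - 1,
    }

print(improvement(AttackRun(0, 1800, 50_000), AttackRun(0, 600, 90_000)))
# ≈ {'adaptation_time_reduction': 0.67, 'sustained_volume_gain': 0.8}
```
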
WE4.4 | Launch temp accounts to 100% of all wikis. | Temporary accounts are a solution for complying with various regulatory requirements around the exposure of IPs on various surfaces of our platform. This work involves updating many products, data pipelines, functionary tools, and various volunteer workflows to cope with the existence of an additional type of account. | Madalina Ana
WE5.1 | By the end of Q3, complete at least 5 interventions that are intended to increase the sustainability of the platform. | MediaWiki platform sustainability is an evergreen effort, important for our ability to scale, to increase (or avoid degradation of) developer satisfaction, and to grow our technical community. This is hard to measure and depends on technical and social factors. However, we carry tacit knowledge about specific areas of improvement that are strategic for sustainability. The planned interventions may help increase the sustainability and maintainability of the platform or avoid its degradation. We plan to evaluate the impact of this work in Q4, with recommendations for sustainability goals moving forward. Examples of sustainability interventions are: simplifying complex code domains that are core to MediaWiki but understood by only a handful of people; increasing the usage of code analysis tooling to inform the quality of our codebase; and streamlining processes like packaging and releases. | Mateus Santos
WE5.2 | Identify by the end of Q2, and complete by the end of Q4, one or more interventions to evolve the MediaWiki ecosystem's programming interfaces to empower decoupled, simpler and more sustainable feature development. | The main goal of KR 5.2 is to improve and clarify the interaction between MediaWiki's core platform and its extensions, skins, and other parts. Our intent is to provide functional improvements to MediaWiki's architecture that enable practical modularity and maintainability, make it easier to develop extensions, and empower the requirements of the wider MediaWiki product vision. This work also aims to inform what should exist (or not) within core, extensions, or the interfaces between them. The year will be divided into two phases: a 5-month research and experimentation phase that will inform a second phase in which specific interventions are implemented. | Jonathan Tweed
WE5.3 | By the end of Q2, complete one data gathering initiative and one performance improvement experiment to inform follow-up product and platform interventions that leverage capabilities unlocked by MediaWiki's modeling of a page as a composition of structured fragments.

The primary goal here is to empower developers and product managers to leverage new MediaWiki platform capabilities to meet current and future needs of encyclopedic content, by making possible new product offerings that are currently difficult to implement and by improving the performance and resiliency of the platform.

Specifically, at a MediaWiki platform level, we want to shift the processing model of MediaWiki from treating a page as a monolithic unit to treating a page as a composition of structured content units. Parsoid-based read views, Wikidata integration, and Wikifunctions integration into wikis are all implicit moves towards that. As part of this KR, we want to more intentionally experiment with and gather data to inform future interventions based on these new capabilities, to ensure we can achieve the intended infrastructure and product impacts (see the toy model below).

Subramanya Sastry
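
The following toy model (not actual MediaWiki or Parsoid code) illustrates the processing-model shift described above: when a page is a composition of fragments, an edit invalidates and re-renders only the affected fragment instead of the whole page.

```python
# Toy model: a page as a composition of independently cacheable fragments.
# In a monolithic model, any edit forces a full re-parse of the page;
# here, only the edited fragment loses its cached rendering.

class Fragment:
    def __init__(self, source: str):
        self.source = source
        self._rendered = None      # cached render, if any

    def render(self) -> str:
        if self._rendered is None: # re-parse only when invalidated
            self._rendered = f"<section>{self.source}</section>"  # stand-in for parsing
        return self._rendered

    def edit(self, new_source: str):
        self.source = new_source
        self._rendered = None      # invalidate just this fragment

class Page:
    def __init__(self, fragments):
        self.fragments = fragments

    def render(self) -> str:
        # Unchanged fragments come from cache; only edited ones re-parse.
        return "".join(fragment.render() for fragment in self.fragments)

page = Page([Fragment("Intro"), Fragment("{{Infobox}}"), Fragment("History")])
page.render()                                   # parses all three fragments once
page.fragments[1].edit("{{Infobox|updated=y}}")
page.render()                                   # re-parses only the infobox
```
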
WE5.4 | By the end of Q2, execute the 1.43 LTS release with a new MediaWiki release process that synchronizes with PHP upgrades.

The MediaWiki software platform relies on regular updates to the next PHP version to remain secure and sustainable; this is a pain point in our process and important for the modernization of our infrastructure. At the same time, we regularly release new versions of the MediaWiki software, on which translatewiki.net – the platform used to translate software messages for the Wikimedia projects and many other open source projects – depends.

There’s an opportunity to improve the MediaWiki release process, including technical documentation and synchronization with PHP upgrades, in alignment with the MediaWiki product strategy before the next release, which will be a long-term support (LTS) version. This work is part of our strategic investment in the sustainability of the MediaWiki platform (see also: 5.1) and aims to improve developer experience and infrastructure management.

Mateus Santos
WE6.1 | Resolve 5 questions to enable efficiency and informed decisions on developer and engineering workflows and services, and make relevant data accessible, by the end of Q4. | "It's complicated" is a frequent response to questions like "which repositories are deployed to Wikimedia production?". In this KR we will explore some of our "evergreens" in the field of engineering productivity and experience: recurring questions that seem easy but are hard to answer; questions that we can answer, but where the data is not accessible and requires custom queries by subject matter experts; or questions that are cumbersome to get a response to because of process gaps or other reasons. We will define what "resolve" means for each of the questions: for some, this may just mean making existing, accurate data accessible; other questions will require more research and engineering time to address. The overarching goal of this work is to reduce the time, workarounds and effort it takes to gain insights into key aspects of the developer experience, and to enable us to make improvements to engineering and developer workflows and services. | [TBD]
WE6.2 | By the end of Q4, enhance an existing project and perform at least two experiments aimed at providing maintainable, targeted environments, moving us towards safe, semi-continuous delivery. | Developers and users depend on the Wikimedia Beta Cluster (beta) to catch bugs before they affect users in production. Over time, the uses of beta have grown and come into conflict – they are too diverse to fit in a single environment. We will enhance one existing alternative environment and perform experiments aimed at replacing a single high-priority testing need currently fulfilled by beta with a maintainable alternative environment that better serves each use case's needs. | Tyler Cipriani
WE6.3 | Develop a Toolforge sustainability scoring framework by Q3; apply it to improve at least one critical platform aspect by Q4 and inform longer-term strategy. | Toolforge, the key platform for Wikimedia's volunteer-built tools, plays a crucial role in everything from editing to anti-vandalism. Our goal is to enhance Toolforge usability, lower the barriers to contribution, improve community practices, and promote adherence to established policies. To this effect, we will introduce a scoring system by the end of Q2 to evaluate the sustainability of the Toolforge platform, focusing on technical and social aspects (see the sketch below). Using this system as a guide, we aim to improve one of the key technical factors by 50%. | Slavina Stefanova
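
A minimal sketch of what a sustainability scoring framework could look like. The criteria, weights, and example scores are invented placeholders; defining the real ones is part of the work of this KR.

```python
# Hypothetical sketch of a Toolforge sustainability score. Criteria and
# weights are invented; each per-criterion score is normalized to 0..1.

CRITERIA_WEIGHTS = {
    "maintainer_redundancy": 0.3,  # social: more than one active maintainer
    "policy_compliance": 0.2,      # social: follows Toolforge policies
    "supported_runtime": 0.3,      # technical: runs on a supported runtime/image
    "ci_and_tests": 0.2,           # technical: automated checks exist
}

def sustainability_score(scores: dict) -> float:
    """Weighted average of per-criterion scores, in [0, 1]."""
    return sum(w * scores.get(name, 0.0) for name, w in CRITERIA_WEIGHTS.items())

example_tool = {"maintainer_redundancy": 0.5, "policy_compliance": 1.0,
                "supported_runtime": 0.0, "ci_and_tests": 0.5}
print(sustainability_score(example_tool))  # 0.45 -> runtime support is the weak spot
```
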

Signals & Data Services (SDS) Key Results

[ Objectives ]

Key Result shortname | Key Result text | Key Result context | Owner
SDS1.1 | By the end of Q3, 2 programs or KR-driven initiatives have evaluated the direct impact of their work on one or more core metrics.

Our core organizational metrics serve as key tools to assess the Foundation's progress toward its goals. As we allocate resources to programs and design key result (KR) oriented workstreams, these high-level metrics should guide how we link these investments to the Foundation's overarching goals as defined in the annual plan.

The work in this key result acknowledges that the Foundation as a whole is at an early stage in its ability to quantitatively link the impacts of all planned interventions to high-level, or core, metrics. In pursuit of that eventual goal, this KR aims to develop the process by which we share the logical and theoretical links between our initiatives and our high-level metrics. In practice, this means partnering with initiative owners throughout the Foundation to understand how the output of their work at a project level is linked to and impacts our core metrics at a Foundation level.

Currently, the Foundation is in an early stage in its goal of being able to execute program or product-driven initiatives and attribute the impact of those activities on Core Foundation level metrics. In pursuit of this goal, this KR aims to do the following: identify at least two candidate program or product driven initiatives, design an evaluation strategy to assess core metric impacts, and execute this evaluation strategy. Starting with two initiatives will help us quickly understand the challenges of performing analyses that allow us to attribute the impact of our work to observable changes in our core metrics. Learnings from this KR will inform a broader strategy to apply this measurement strategy to a wider range and quantity of Foundation initiatives.

Omari Sefu
SDS1.2 | Answer 3 strategic open research questions by December 2024 in order to provide recommendations or inform FY26 annual planning.

There are many open research questions in the Wikimedia ecosystem, and answering some of them is strategic for WMF or the affiliates. The answers can inform future product or technology development, or support decision-making and advocacy in the policy space. While some of these questions can be answered using purely research or research engineering expertise, given the socio-technical nature of the Wikimedia projects, arriving at trustworthy insights often requires cross-team collaboration for data collection, context building, user interaction, careful design of experiments, and more. Through this KR we aim to prioritize some of our resources towards answering one or more of such questions.

The work in this KR includes prioritizing a list of strategic open questions, as well as doing experimental work to find an answer for X number (currently estimated 2) of them. The ideal questions to tackle in this KR are ones that, once answered, can have an unlocking effect, enabling multiple other teams or groups to do better-informed product, technology, or policy work. We intend the work in this KR to be complementary to the following KRs:

  • PES1.3, where the focus is on experimenting with on-platform product or feature ideas based on existing products.
  • FA1.1, where the focus is on experimentation about future audiences by utilizing AI/ML technologies.
Leila Zia
SDS1.3 | Achieve at least a 50% reduction in the average time required for data stakeholders to trace data flows for 3 core and essential metrics.

Required for Data Governance standards.

Tracing the transformations and sources of datasets is difficult and requires knowledge of different repos and systems. We should make it easy to understand how data flows through our systems, so that data stakeholders can work in a more self-service way.

This work will support workflows where data is transformed and used for analytics, features, APIs and data quality jobs. There will be a follow-up KR around documenting metrics (see the sketch below).

Luke Bowmaker
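
A minimal sketch of the lineage-tracing idea: if each dataset records its upstream sources, tracing a metric's data flow becomes a simple graph traversal instead of a search through repos and systems. The dataset names here are invented.

```python
# Minimal sketch: data lineage as upstream traversal of a dependency graph.
# Dataset names are invented; real lineage would come from pipeline metadata.

LINEAGE = {  # dataset -> datasets it is derived from
    "core_metric_monthly_editors": ["editors_daily_agg"],
    "editors_daily_agg": ["mediawiki_history"],
    "mediawiki_history": ["raw_event_stream"],
}

def trace_upstream(dataset: str) -> list:
    """Return every upstream source feeding the given dataset."""
    sources, stack = [], [dataset]
    while stack:
        for parent in LINEAGE.get(stack.pop(), []):
            sources.append(parent)
            stack.append(parent)
    return sources

print(trace_upstream("core_metric_monthly_editors"))
# ['editors_daily_agg', 'mediawiki_history', 'raw_event_stream']
```
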
SDS2.1 | By the end of Q2, we can support 1 product team to evaluate a feature or product via basic A/B testing that reduces their time to user interaction data by 50%.

We think using shared tools will increase product teams' data-driven decision making, improve efficiency and productivity, and enhance product strategy and innovation.

Establishing the UX and technical systems for logged-in users allows us to advance towards the long-term goal of supporting A/B tests on logged-out users while the feasibility work of SDS 2.3 is underway. We will look at the adopting team's individual time-to-user-interaction-data baseline and improve it by 50%. We will also investigate how we can contextualize these gains in the fuller context of all product teams.

We expect to learn how we can improve the experience and identify and prioritize capability enhancements based on feedback from the adopting team and results of SDS 2.2.

Virginia Poundstone
SDS2.2 | By the end of Q2, we will have 3 essential metrics for analyzing experiments (A/B tests) to support testing product/feature hypotheses related to FY24-25 KRs.

When a product manager (or designer) has a hypothesis that a product/feature will address a problem/need for the users or the organization, an experiment is how they test that hypothesis and learn about the potential impact of their idea on a metric. The results of the experiment inform the product manager and help them make a decision about what action to take next (abandon this idea and try a different hypothesis, continue development if the experiment was performed early in the development lifecycle, or release the product/feature to more users). Product managers must be able to make such a decision with confidence, supported by evidence they trust and understand.

A major hurdle to this is that product teams currently formulate their hypotheses with custom project-specific metrics which require dedicated analyst support to define, measure, analyze, and report on them. Switching to a set of essential metrics for formulating all testable product/feature hypothesis statements would make it:

  • easier and faster to design, deploy, and analyze experiments to test those hypotheses
  • easier to communicate results and learnings from experiments to decision makers (product managers) and other audiences (e.g. senior leadership, others in the organization, communities)

We think that a set of essential metrics which are widely understood and consistently used – and informed/influenced by industry standard metrics – would also improve organizational data literacy and promote a culture of review, experimentation, and learning. We are focusing on essential metrics that (1) are needed for best measurement and evaluation of success/impact of products/features related to 2 Wiki Experiences KRs – WE3.1 and WE1.2 – and (2) reflect or map to industry-standard metrics used in web analytics (see the worked example below).

Mikhail Popov
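
A generic worked example of what shared essential metrics buy us (this is standard statistics, not the Foundation's actual analysis pipeline): once a metric such as a clickthrough rate is consistently defined and instrumented, analyzing an A/B test reduces to a routine two-proportion z-test rather than bespoke analyst work.

```python
# Generic A/B test analysis with a two-proportion z-test (standard statistics;
# the traffic numbers below are made up for illustration).
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(success_a: int, n_a: int, success_b: int, n_b: int):
    """Return (rate difference, two-sided p-value) for groups A and B."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# Control: 1,200 of 40,000 readers clicked; treatment: 1,380 of 40,000.
diff, p = two_proportion_ztest(1200, 40_000, 1380, 40_000)
print(f"lift={diff:.4%}, p={p:.4f}")  # ~0.45pp lift, p < 0.001
```
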
SDS2.3 | Deploy a unique agent tracking mechanism to our CDN which enables the A/B testing of product features with anonymous readers. | Without such a tracking mechanism, it is not feasible to implement A/B testing of product features with anonymous readers.

This is essentially a milestone-based result to create a new technical capability that others can build measurable things on top of. The key priority use case will be A/B testing of features with anonymous readers, but this work also enables other important future work, which may create follow-on hypotheses later in WE4.x (for request risk ratings and mitigating large-scale attacks) and for metrics/research about unique device counts, as their resourcing and priorities allow (see the sketch below).

Brandon Black
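
An illustrative sketch of how a unique-agent identifier enables A/B bucketing of anonymous readers: hashing the identifier together with an experiment name yields a stable, per-device variant assignment without knowing who the reader is. The identifier format, cookie mechanics, and experiment name are assumptions, not the planned implementation.

```python
# Illustrative sketch: deterministic experiment bucketing from an anonymous
# per-device identifier (e.g. a random value the CDN sets in a cookie).
import hashlib

def assign_bucket(agent_id: str, experiment: str,
                  variants=("control", "treatment")) -> str:
    """Hash the agent id with the experiment name so the same device always
    gets the same variant, and different experiments are independent."""
    digest = hashlib.sha256(f"{experiment}:{agent_id}".encode()).digest()
    return variants[digest[0] % len(variants)]

# The same device always lands in the same bucket for a given experiment:
print(assign_bucket("anon-device-1234", "logged-out-retention-test"))
```
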

Future Audiences (FA) Key Result

[ Objectives ]

Key Result shortname | Key Result text | Key Result context | Owner
FA1.1 | As a result of Future Audiences experimental insights and recommendations, by the end of Q3 at least one objective or key result owned by a non-Future Audiences team is present in the draft of the following year's annual plan. | Since 2020, the Wikimedia Foundation has been tracking external trends that may impact our ability to serve future generations of knowledge consumers and knowledge contributors and to remain a thriving free knowledge movement for generations to come. Future Audiences, a small R&D team, will:
  • Perform rapid, time-bound experiments (aiming for at least 3 experiments per fiscal year) to explore ways to address these trends
  • Based on insights from experimentation, make recommendations for new non-experimental investments that WMF should pursue – i.e. new products or programs that need to be taken on by a full team or teams – during our regular annual planning period. This key result will be met if at least one objective or key result that is owned by a team outside of Future Audiences and is driven by a Future Audiences recommendation appears in the draft annual plan for the following fiscal year.
Maryana Pinchuk

Product and Engineering Support (PES) Key Results

[ Objectives ]

Key Result shortname | Key Result text | Key Result context | Owner
PES1.1 | Culture of Review: Incrementally improve scores for P+T staff sentiment related to our delivery, alignment, direction, and team health in a quarterly survey. | A culture of review is a product development culture based on shorter cycles of iteration, learning, and adaptation. This means that our organization may set yearly goals, but what we do to achieve these goals will change and adapt over the course of the year as we learn. There are two components to building a culture of review: processes and behaviors. This KR focuses on the latter. Behavior changes can grow and strengthen our culture of review; this involves changes in individual habits and routines as we move towards more iterative product development. This KR will be based on self-reported changes in individual behaviors, and on measuring resulting changes, if any, in staff sentiment. | Amy Tsay
PES1.2 | By the end of Q2, the new Wishlist better connects movement ideas and requests to Foundation P+T activities: items from the Wishlist backlog are addressed via a 2024-25 KR, the Foundation has completed 10 smaller wishes, and the Foundation has partnered with volunteers to identify 3+ areas of opportunity for the 2025-26 FY.

The Community Wishlist represents a narrow slice of the movement: approximately 1k people participate, most of whom are contributors or admins. People often bypass the Wishlist by writing feature requests and bug reports via Phabricator, where it's hard to discern whether requests come from WMF or the community. For participants, the Wishlist is a costly time investment with minimal payoff. They still engage with it because they feel it is the only vehicle to call attention to impactful bugs and feature improvements, or to signal a need for broader, strategic opportunities. Wishes are often written as solutions rather than problems; the solutions may seem sensible on paper, but don't necessarily consider the technical complexity or movement strategy implications.

The scope and breadth of wishes sometimes exceed the scope and capacity of Community Tech or a single team, perpetuating the frustration and leading to RFCs and calls to dismantle the Wishlist. Whereas community members prefer to use the Wishlist for project ideas, teams at the Foundation look to the Wishlist and other intake processes for prioritization, in part because wishes are ill-timed for annual planning and are hard to incorporate into roadmaps/OKRs.

The future Wishlist should be a bridge between the community and the Foundation, where communities provide input in a structured way so that we are able to take action and, in turn, make volunteers happy. We're creating a new intake process that lets any logged-in volunteer submit a wish, 365 days a year. Wishes can report or highlight a bug, request an improvement, or ideate on a new feature. Anyone can comment on, workshop, or support a wish to influence prioritization. The Foundation won't categorize wishes as "too big" or "too small."

Wishes that thematically map to a larger problem area can influence annual planning and team roadmaps, offering strategic directions and opportunities. Wishes will be visible to the movement in a dashboard that categorizes them by project, product/problem area, and wish type. The Foundation will respond to wishes in a timely manner and partner with the community to categorize and prioritize them. We will partner with Wikimedians to identify and prioritize three areas of improvement, to be incorporated in the Foundation's 2025-26 annual plan, which should improve the adoption rate and fulfillment of impactful wishes. We will flag well-scoped wishes for the volunteer developer community and Foundation teams, leading to more team and developer engagement, more wishes fulfilled, and in turn greater community satisfaction. Addressing more wishes improves contributor happiness, efficacy, and retention, which should generate more quality edits, higher-quality content, and more readers.

Jack Wheeler
PES1.3 | Run and conclude two experiments from existing exploratory products/features that provide us with data/insights into how we can grow Wikipedia as a knowledge destination for our current consumer and volunteer audiences in Q1 and Q2. Complete and share learnings and recommendations for potential adoption in future OKR work in the Wiki Experiences bucket by the end of Q3.

This work is a counterpart to the Future Audiences objective, but focuses instead on uncovering opportunities to increase and deepen engagement of our existing audiences (of Wikipedia consumers and contributors) through more nimbly testing more on-platform product ideas.

It lives in PES1 as it is an energiser and multiplier, channelling the time individuals and teams have already devoted to hacking/experimenting on side projects to bring more promising features into focus. Instead of these side projects languishing (not a good use of our limited resources), this KR provides a path for some of these ideas to potentially make it into the larger annual plan through proven experiments, thus using staff time more efficiently and motivating their creativity and productivity.

By shepherding more of these smaller, shorter projects into play, we also diversify our spread of 'bets' for more learnings and trials of ideas that may transform Wikipedia in line with the changing needs and expectations of our current audiences. This will make our work more impactful and faster, as it helps the Foundation align on the correct goal in less time.

Rita Ho
PES1.4 | Learn how to set, monitor, and make decisions on SLOs: pick at least one new thing to define SLOs for as we release it; collaborate with the respective team(s) (typically product, development teams, and SRE) to define that SLO; reflect on and document guidelines for what releases should have SLOs in the future and how to set them.

FUTURE KR: Set up processes and rudimentary tools for setting and monitoring SLOs for new releases. Report on a quarterly basis, and use the reports to make decisions on when to (and when not to) prioritize work to fix something. Share the report with the community.

WHY:

We don’t know when we need to prioritize work to fix something, and we have a lot of code. As this footprint continues to grow, there are more situations where we may need to decide between addressing issues or focusing on innovation, and more uncertainty around when we should. It is also not clear to staff and community what our level of support and commitment on reliability and performance is for all the different features and functionality they interact with. If we define an expected level of service, we can know when we should allocate resources to it (see the sketch below).

Mark Bergsma
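
A small sketch of the decision rule an SLO enables: with a target and a measurement, "should we prioritize reliability work now?" becomes an error-budget check rather than a judgment call. The SLO target and traffic numbers are examples only.

```python
# Example only: error-budget arithmetic for an availability-style SLO.

def error_budget_remaining(slo_target: float, good_events: int, total_events: int) -> float:
    """Fraction of the error budget left in the current window:
    1.0 = untouched, 0.0 = exhausted, negative = SLO already violated."""
    allowed_failures = (1 - slo_target) * total_events
    actual_failures = total_events - good_events
    return 1 - actual_failures / allowed_failures

# A 99.9% SLO allows 10,000 failures per 10,000,000 requests.
budget = error_budget_remaining(0.999, 9_994_000, 10_000_000)
print(f"{budget:.0%} of error budget left")  # 40% left: keep shipping features;
                                             # near 0%, prioritize reliability work
```
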
PES1.5 | Define ownership and commitments (including SLOs) on services, and learn how to track, report and make decisions as a standard and scalable practice, by trialing it in 3 teams across senior leaders in the department. | After collaboratively defining an SLO for the EditCheck feature as part of PES1.5, we will now trial and learn from using the SLO in practice to help prioritisation of reliability work. We will also document roles and responsibilities for ownership of code/services, allowing us to make clear shared commitments on the level of ongoing support. We will try to use these as practices in 3 teams across the department. | Mark Bergsma

Hypotheses

The hypotheses below are the specific things we are doing each quarter to address the associated key results above.

Each hypothesis is an experiment or a stage in an experiment that we believe will help achieve the key result. Teams make a hypothesis, test it, then iterate on their findings or develop an entirely new hypothesis. You can think of the hypotheses as bets of the teams' time: teams make a small bet of a few weeks or a big bet of several months, but the risk-adjusted reward should be commensurate with the time the team puts in. Our hypotheses are meant to be agile and to adapt quickly; we may retire, adjust, or start a hypothesis at any point in the quarter.

To see the most up-to-date status of a hypothesis, or to discuss it with the team, please follow the link to its project page below.

Q1

The first quarter (Q1) of the WMF annual plan covers July-September.

Wiki Experiences (WE) Hypotheses

[ WE Key Results ]


Hypothesis shortname | Q1 text | Details & Discussion
WE1.1.1 If we expand the Event List to become a Community List that includes WikiProjects, then we will be able to gather some early learnings in how to engage with WikiProjects for product development.
WE1.1.2 If we identify at least 15 WikiProjects in 3 separate Wikipedias to be featured in the Community List, then we will be able to advise Campaigns Product in the key characteristics needed to build an MVP of the Community List that includes WikiProjects.
WE1.1.3 If we consult 20 event organizers and 20 WikiProject organizers on the best use of topics available via LiftWing, then we can prioritize revisions to the topic model that will improve topical connections between events and WikiProjects.
WE1.2.1 If we build a first version of the Edit Check API and use it to introduce a new Check, we can evaluate the speed and ease with which other teams and volunteers could use the API to create new Checks and Suggested Edits.
WE1.2.2 If we build a library of UI components and visual artefacts, Edit Check’s user experience can be extended to accommodate Structured Tasks patterns.
WE1.2.3 If we conduct user tests on two or more design prototypes introducing structured tasks to newcomers within/proximate to the Visual Editor, then we can quickly learn which designs will work best for new editors, while also enabling engineers to assess technical feasibility and estimate effort for each approach. mw:Growth/Constructive activation experimentation
WE1.2.4 If we train an LLM to detect "peacock" behavior, then we can learn whether it can detect this policy violation with at least 70% precision and 50% recall and, ultimately, decide if said LLM is effective enough to power a new Edit Check and/or Suggested Edit (a sketch of this kind of evaluation appears after this table).
WE1.2.5 If we conduct an A/B/C test with the alt-text suggested edits prototype in the production version of the iOS app we can learn if adding alt-text to images is a task newcomers are successful with and ultimately, decide if it's impactful enough to implement as a suggested edit on the Web and/or in the Apps. mw:Wikimedia Apps/iOS Suggested edits project/Alt Text Experiment
WE1.3.1 If we enable additional customisation of Automoderator's behaviour and make changes based on pilot project feedback in Q1, more moderators will be satisfied with its feature set and reliability, and will opt to use it on their Wikimedia project, thereby increasing adoption of the product. mw:Automoderator
WE1.3.2 If we are able to interpret subsets of wishes as moderator-related focus areas and share these focus areas for community input in Q1-Q2, then we will have a high degree of confidence that our selected focus area will improve moderator satisfaction when it is released in Q3.
WE2.1.1 If we build a country-level inference model for Wikipedia articles, we will be able to filter lists of articles to those about a specific region with >70% precision and >50% recall. m:Research:Language-Agnostic Topic Classification/Countries
WE2.1.2 If we build a proof-of-concept providing translation suggestions that are based on user-selected topic areas, we will be set up to successfully test whether translators will find more opportunities to translate in their areas of interest and contribute more compared to the generic suggestions currently available. mw: Translation suggestions: Topic-based & Community-defined lists
WE2.1.3 If we offer list-making as a service, we’ll enable at least 5 communities to make more targeted contributions in their topic areas as measured by (1) change in standard quality coverage of relevant topics on the relevant wiki and (2) a brief survey of organizer satisfaction with topic area coverage on-wiki.
WE2.1.4 If we develop a proof of concept that adds translation tasks sourced from WikiProjects and other list-building initiatives, and presents them as suggestions within the CX mobile workflow, then more editors will discover and translate articles focused on topical gaps. By introducing an option that allows editors to select translation suggestions based on topical lists, we will test whether this approach increases the content coverage in our projects. mw: Translation suggestions: Topic-based & Community-defined lists
WE2.2.1 If we expand Wikimedia's State of Languages data by securing data sharing agreements with UNESCO and Ethnologue, at least one partner will decide to represent Wikimedia’s language inclusion progress in their own data products and communications. On top of being useful to our partner institutions, our expanded dataset will provide important contextual information for decision-making and provide communities with information needed to identify areas for intervention.
WE2.2.2 If we map the language documentation activities that Wikimedians have conducted in the last 2 years, we will develop a data-informed baseline for community experiences in onboarding new languages.
WE2.2.3 If we provide production wiki access to 5 new languages, with or without Incubator, we will learn whether access to a full-fledged wiki with modern features such as those available on English Wikipedia (including ContentTranslation and Wikidata support, advanced editing and search results) aids in faster editing. Ultimately, this will inform us if this approach can be a viable direction for language onboarding for new or existing languages, justifying further investigation. mw:Future of Language Incubation
WE2.3.1 If we make two further improvements to the media upload flow on Commons and share them with the community, the feedback will be positive and it will help uploaders make fewer problematic uploads (with the focus on copyright) as measured by the ratio of deletion requests within 30 days of upload. This will include defining designs for further UX improvements to the release rights step in the Upload Wizard on Commons and rolling out an MVP of logo detection in the upload flow. phab:T347298 phab:T349641

WE2.4.1 If we build a prototype of Wikifunctions calls embedded within MediaWiki content, we will be ready to use MediaWiki’s async content processing pipeline and test its performance feasibility in Q2. phab:T261472
WE2.4.2 If we create a design prototype of an initial Wikifunctions use case in a Wikipedia wiki, we will be ready to build and test our integration when performance feasibility is validated in Q2 (see hypothesis 1). phab:T363391
WE2.4.3 If we make it possible for Wikifunctions users to access Wikidata lexicographical data, they will begin to create natural language functions that generate sentence phrases, including those that can handle irregular forms. If we see an average monthly creation rate of 31 for these functions, after the feature becomes available, we will know that our experiment is successful. phab:T282926
WE3.1.1 Designing and qualitatively evaluating three proofs of concept focused on building curated, personalized, and community-driven browsing and learning experiences will allow us to estimate the potential for increased reader retention (experiment 1: providing recommended content in search and article contexts; experiment 2: summarizing and simplifying article content; experiment 3: making multitasking easier on wikis).
WE3.1.3 If we develop models for remixing content, such as content simplification or summarization, that can be hosted and served via our infrastructure (e.g. LiftWing), we will establish the technical direction for work focused on increasing reader retention through new content discovery features.
WE3.1.4 If we analyze the projected performance impact of hypothesis WE3.1.1 and WE3.1.2 on the Search API, we can scope and address performance and scalability issues before they negatively affect our users.
WE3.1.5 If we enhance the search field in the Android app to recommend personalized content based on a user's interest and display better results, we will learn if this improves user engagement by observing whether it increases the impression and click-through rate (CTR) of search results by 5% in the experimental group compared to the control group over a 30-day A/B test. This improvement could potentially lead to a 1% increase in the retention of logged out users.
WE3.2.1 If we create a clickable design prototype that demonstrates the concept of a badge representing donors championing article(s) of interest, we can learn if there would be community acceptance for a production version of this method for fundraising in the Apps. Fundraising Experiment in the iOS App
WE3.2.2 Increasing the prominence of entry points to donations on the logged-out web experience, on mobile and desktop, will increase the clickthrough rate of the donate link by 30% year over year. phab:T368765
WE3.2.3 If we make the “Donate” button in the iOS App more prominent by making it one click or less away from the main navigation screen, we will learn if discoverability was a barrier to non-banner donations.
WE3.3.1 If we select a data visualization library and get an initial version of a new server-rendered graph service available by the end of July, we can learn from volunteers at Wikimania whether we’re working towards a solution that they would use to replace legacy graphs.
WE4.1.1 If we implement a way in which users can report potential instances of harassment and harmful content present in discussions through an incident reporting system, we will be able to gather data around the number and type of incidents being reported and therefore have a better understanding of the landscape and the actions we need to take.
WE4.2.1 If we explore and define Wikimedia-specific methods for a unique device identification model, we will be able to define the collection and storage mechanisms that we can later implement in our anti-abuse workflows to enable more targeted blocking of bad actors. phab:T368388
WE4.2.9 If we provide contextual information about reputation associated with an IP that is about to be blocked, we will see fewer collateral damage IP and IP range blocks, because administrators will have more insight into potential collateral damage effects of a block. We can measure this by instrumenting Special:Block and observing how behavior changes when additional information is present, vs when it is not. WE4.2.9 Talk page
WE4.2.2 If we define an algorithm for calculating a user account reputation score for use in anti-abuse workflows, we will prepare the groundwork for engineering efforts that use this score as an additional signal for administrators targeting bad actors on our platform. We will know the hypothesis is successful if the algorithm for calculating a score maps with X% precision to categories of existing accounts, e.g. a "low" score should apply to X% of permanently blocked accounts WE4.2.2 Talk page
WE4.2.3 If we build an evaluation framework using publicly available technologies similar to the ones used in previous attacks, we will learn more about the efficacy of our current CAPTCHA at blocking attacks, and could recommend a CAPTCHA replacement that brings a measurable improvement in terms of the attack rate achievable for a given time and financial cost.
WE4.3.1 If we apply machine learning and data analysis tools to webrequest logs during known attacks, we'll be able to identify, with at least 80% precision, abusive IP addresses sending largely malicious traffic that we can then ratelimit at the edge, improving reliability for our users. phab:T368389
WE4.3.2 If we limit the load that known IP addresses of persistent attackers can place on our infrastructure, we'll reduce the number of impactful cachebusting attacks by 20%, improving reliability for our users.
WE4.3.3 If we deploy a proof of concept of the 'Liberica' load balancer, we will measure a 33% improvement in our capacity to handle TCP SYN floods.
WE4.3.4 If we make usability improvements and also perform some training exercises on our 'requestctl' tool, then SREs will report higher confidence in using the tool. phab:T369480
WE4.4.1 If we run at least 2 deployment cycles of Temporary Accounts, we will be able to verify that the functionality works as intended.
WE5.1.1 If we successfully roll out Parsoid Read Views to all Wikivoyages by Q1, this will boost our confidence in extending Parsoid Read Views to all Wikipedias. We will measure the success of this rollout through detailed evaluations using the Confidence Framework reports, with a particular focus on Visual Diff reports and the metrics related to performance and usability. Additionally, we will assess the reduction in the list of potential blockers, ensuring that critical issues are addressed prior to wider deployment.
WE5.1.2 If we disable unused Graphite metrics, target migrating metrics using the db-prefixed data factory, and increase our outreach efforts to other teams and the community in Q1, then we will be on track to achieve our goal of making Graphite read-only by Q3 FY24/25, as measured by a 30% increase in migration progress.
WE5.1.3 If we implement a canonical url structure with versioning for our REST API then we can enable service migration and testing for Parsoid endpoints and similar services by Q1. phab:T344944
WE5.1.4 If we complete the remaining work to mitigate the impact of browsers' anti-tracking measures on CentralAuth autologin and move to a more resilient authentication infrastructure (SUL3), we will be ready to roll out to production wikis in Q2.
WE5.1.5 If we increase the coverage of SonarCloud to include key MediaWiki Core repos, we will be able to improve the maintainability of the MediaWiki codebase. This hypothesis will be measured by splitting the selected repos into test and control groups. These groups will then be compared over the course of a quarter to measure the impact of commit-level feedback to developers.
WE5.2.1 If we make a classification of the types of hooks and extension registry properties used to influence the behavior of MediaWiki core, we will be able to focus further research and interventions on the most impactful. [1]
WE5.2.2 If we explore a new architecture for notifications in MW core and Echo, we will discover new ways to provide modularity and new ways for extensions to interact with core. [2]
WE5.3.1 If we instrument parser and cache code to collect template structure and fine-grained timing data, we can quantify the expected performance improvement which could be realized by future evolution of the wikitext parsing platform. T371713
WE5.3.2 On template edits, if we can implement an algorithm in Parsoid to reuse HTML of a page that depends on the edited template without processing the page from scratch and demonstrate 1.5x or higher processing speedup, we will have a potential incremental parsing solution for efficient page updates on template edits. T363421
WE5.4.1 If the MediaWiki engineering group is successful with release process accountability and enhances its communication process by the end of Q2 in alignment with the product strategy, we will eliminate the current process that relies on unplanned or volunteer work and improve community satisfaction with the release process, as measured by community feedback on the 1.43 LTS release coupled with a significant reduction in unplanned staff and volunteer hours needed for release processes.
WE5.4.2 If we research and build a process to more regularly upgrade PHP in conjunction with our MediaWiki release process, we will increase speed and security while reducing the complexity and runtime of our CI systems, as demonstrated by a successful PHP 8.1 upgrade before the 1.43 release.
WE6.1.1 If we design and complete the initial implementation of an authorization framework, we’ll establish a system to effectively manage the approval of all LDAP access requests.
WE6.1.2 If we research available documentation metrics, we can establish metrics that measure the health of Wikimedia technical documentation, using MediaWiki Core documentation as a test case. mw:Wikimedia Technical Documentation Team/Doc metrics
WE6.1.3 If we collect insights on how different teams are making technical decisions, we will be able to identify good practices that can be enabled and scaled across the organization.
WE6.2.1 If we publish a versioned build of MediaWiki, extensions, skins, and Wikimedia configuration at least once per day we will uncover new constraints and establish a baseline of wallclock time needed to perform a build. mw:Wikimedia Release Engineering Team/Group -1
WE6.2.2 If we replace the backend infrastructure of our existing shared MediaWiki development and testing environments (from apache virtual servers to kubernetes), it will enable us to extend its uses by enabling MediaWiki services in addition to the existing ability to develop MediaWiki core, extensions, and skins in an isolated environment. We will develop one environment that includes MediaWiki, one or more Extensions, and one or more Services. wikitech:Catalyst
WE6.2.3 If we create a new deployment UI that provides more information to the deployer and reduces the amount of privilege needed to deploy, it will make deployment easier and open deployments to more users, as measured by the number of unique deployers and the number of patches backported as a percentage of our overall deployments. Wikimedia Release Engineering Team/SpiderPig
WE6.2.4 If we migrate votewiki, wikitech and commons to MediaWiki on Kubernetes, we reap the benefits of consistency and no longer need to maintain 2 different infrastructure platforms in parallel, allowing us to reduce the amount of custom-written tooling and making deployments easier and less toilsome for deployers. This will be measured by a decrease in total deployment times and a reduction in deployment blockers. task T292707
WE6.2.5 If we move MultiVersion routing out of MediaWiki, we'll be able to ship single-version MediaWiki containers, largely cutting down the size of containers and allowing for faster deployments, as measured by the deployment tool. SingleVersion MW: Routing options
WE6.3.1 By consulting Toolforge maintainers about the least sustainable aspects of the platform, we will be able to gather a list of potential categories to measure.
WE6.3.2 By creating a "standard" tool to measure the number of steps for a deployment we will be able to assess the maximal improvement in the deployment process.
WE6.3.3 If we conduct usability tests, user interviews, and competitive analysis to explore the existing workflows and use cases of Toolforge, we can identify key areas for improvement. This research will enable us to prioritize enhancements that have the most significant impact on user satisfaction and efficiency, laying the groundwork for a future design of the user interface.
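
Several hypotheses in this table (WE1.2.4, WE2.1.1, WE4.3.1) gate go/no-go decisions on precision and recall thresholds. The sketch below shows, under the assumption of a hand-labelled evaluation sample, how such a check could be computed; the function names and the 70%/50% defaults echo WE1.2.4, but this is not any team's actual evaluation harness.

```python
def precision_recall(predictions, labels):
    """Precision and recall for a binary classifier.

    predictions, labels: parallel sequences of booleans, e.g. True meaning
    "this revision violates the peacock policy" (WE1.2.4) or "this article
    is about the target country" (WE2.1.1).
    """
    tp = sum(1 for p, l in zip(predictions, labels) if p and l)
    fp = sum(1 for p, l in zip(predictions, labels) if p and not l)
    fn = sum(1 for p, l in zip(predictions, labels) if not p and l)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def meets_targets(precision, recall, min_precision=0.70, min_recall=0.50):
    """Decision rule as phrased in WE1.2.4: proceed only if both thresholds hold."""
    return precision >= min_precision and recall >= min_recall

# Toy example with made-up model output against a labelled sample.
preds  = [True, True, False, True, False, False, True, False]
labels = [True, False, False, True, True, False, True, False]
p, r = precision_recall(preds, labels)
print(p, r, meets_targets(p, r))  # 0.75 0.75 True
```
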
Signals & Data Services (SDS) Hypotheses

[ SDS Key Results ]

Discussion

Hypothesis shortname Q1 text Details & Discussion
SDS 1.1.1 If we partner with an initiative owner and evaluate the impact of their work on Core Foundation metrics, we can identify and socialize a repeatable mechanism by which teams at the Foundation can reliably impact Core Foundation metrics.
SDS1.2.2 If we study the recruitment, retention, and attrition patterns among long-tenure community members in official moderation and administration roles, and understand the factors affecting these phenomena (the ‘why’ behind the trends), we will better understand the extent, nature, and variability of the phenomenon across projects. This will in turn enable us to identify opportunities for better interventions and support aimed at producing a robust multi-generational framework for editors. phab:T368791
SDS1.2.1 If we gather use cases from product and feature engineering managers around the use of AI in Wikimedia services for readers and contributors, we can determine if we should test and evaluate existing AI models for integration into product features, and if yes, generate a list of candidate models to test. phab:T369281

Meta Page

SDS1.3.1 If we define the process to transfer all data sets and pipeline configurations from the Data Platform to DataHub we can build tooling to get lineage documentation automatically.
SDS 1.3.2 If we implement a well documented and understood process to produce an intermediary table representing MediaWiki Wikitext History, populated using the event platform, and monitor the reliability and quality of the data, we will learn what additional parts of the process are needed to make this table production-ready and widely supported by the Data Platform Engineering team.
SDS2.1.2 If we investigate the current SDLC of our data products, we will be able to determine inflection points where QTE knowledge can be applied in order to have a positive impact on product delivery.
SDS2.1.3 If the Growth team learns about the Metrics Platform by instrumenting a Homepage Module on the Metrics Platform, then we will be prepared to outline a measurement plan in Q1 and complete an A/B test on the new Metrics platform by the end of Q2.
SDS2.1.4 If we conduct usability testing on our prototype among pilot users of our experimentation process, we can identify and prioritize the primary pain points faced by product managers and other stakeholders in setting up and analyzing experiments independently. This understanding will lead to the refinement of our tools, enhancing their efficiency and impact.
SDS2.1.5 If we design a documentation system that guides the experience of users building instrumentation using the Metrics Platform, we will enable those users to independently create instrumentation without direct support from Data Products teams, except in edge cases. phab:T329506
SDS2.2.1 If we define a metric for logged-out mobile app reader retention, which is applicable for analyzing experiments (A/B test), we can provide guidance for planning instrumentation to measure retention rate of logged out readers in the mobile apps and enable the engineering team to develop an experiment strategy targeting logged out readers.
SDS2.2.2 If we define a standard approach for measuring and analyzing conversion rates, it will help us establish a collection of well-defined metrics to be used for experimentation and baselines, and start enabling comparisons between experiments/projects to increase learning from these.
SDS2.2.3 If we define a standard way of measuring and analyzing clickthrough rate (CTR) in our products/features, it will help us design experiments that target CTR for improvement, standardize click-tracking instrumentation, and enable us to make CTR available as a target metric to users of the experimentation platform (a sketch of a standard CTR comparison appears after this table).
SDS2.3.1 If we conduct a legal review of proposed unique cookies for logged out users, we can determine whether there are any privacy policy or other legal issues which inform the community consultation and/or affect the technical implementation itself.
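
For SDS2.2.2 and SDS2.2.3, one standard way to make CTR comparable across experiments is to fix both the definition (clicks per impression) and the significance test used to compare arms. The sketch below, with invented numbers, uses a two-proportion z-test; it is one reasonable convention, not the Metrics Platform's actual methodology.

```python
import math

def ctr(clicks: int, impressions: int) -> float:
    """Clickthrough rate: clicks per impression."""
    return clicks / impressions if impressions else 0.0

def two_proportion_z(clicks_a, imps_a, clicks_b, imps_b):
    """z-statistic for the difference between two CTRs (normal approximation)."""
    p_a, p_b = ctr(clicks_a, imps_a), ctr(clicks_b, imps_b)
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    return (p_b - p_a) / se if se else 0.0

# Invented control-vs-treatment counts for a hypothetical search A/B test.
z = two_proportion_z(clicks_a=4_100, imps_a=100_000, clicks_b=4_450, imps_b=100_000)
print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the 5% level (two-sided)
```
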
Future Audiences (FA) Hypotheses

[ FA Key Results ]

Discussion

Hypothesis shortname Q1 text Details & Discussion
FA1.1.1 If we make off-site contribution very low effort with an AI-powered “Add a Fact” experiment, we can learn whether off-platform users could help grow/sustain the knowledge store in a possible future where Wikipedia content is mainly consumed off-platform. m:Future Audiences/Experiment:Add a Fact
Product and Engineering Support (PES) Hypotheses

[ PES Key Results ]

Discussion

Hypothesis shortname Q1 text Details & Discussion
PES1.1.1 If the P&T leadership team syncs regularly on how they’re guiding their teams towards a more iterative software development culture, and we collect baseline measurements of current development practices and staff sentiment on how we work together to ship products, we will discover opportunity areas for change management. The themes that emerge will enable us to build targeted guidance or programs for our teams in coming quarters.
PES1.2.2 If the Moderator Tools team researches the Community Wishlist and develops 2+ focus areas in Q1, then we can solicit feedback from the Community and identify a problem that the Community and WMF are excited about tackling.
PES1.2.3 If we bundle 3-5 wishes that relate to selecting and inserting templates, and ship an improved feature in Q1, then CommTech can take the learnings to develop a Case Study for the foundation to incorporate more "focus areas" in the 2025-26 annual plan.
PES1.3.1 If we provide insights to audiences about their community and their use of Wikipedia over a year, it will stimulate greater connection with Wikipedia – encouraging greater engagement in the form of social sharing, time spent interacting on Wikipedia, or donation. Success will be measured by completing an experimental project that provides at least one recommendation about “Wikipedia insights” as an opportunity to increase onwiki engagement. mw: New Engagement Experiments#PES1.3.1_Wikipedia_user_insights
PES1.3.2 If we create a Wikipedia-based game for daily use that highlights the connections across vast areas of knowledge, it will encourage consumers to visit Wikipedia regularly and facilitate active learning, leading to longer and more frequent interaction with content on Wikipedia. Success will be measured by completing an experimental project that provides at least one recommendation about gamification of learning as an opportunity to increase onwiki engagement. mw: New Engagement Experiments#PES_1.3.2:_Wikipedia_games
PES1.3.3 If we develop a new process/track at a Wikimedia hack event to incubate future experiments, it will increase the impact and value of such events in becoming a pipeline for future annual plan projects, whilst fostering greater connection between volunteers and engineering/design staff to become more involved with strategic initiatives. Success will be measured by at least one PES1.3 project being initiated and/or advanced to an OKR from a foundation-supported event. mw: New Engagement Experiments#PES_1.3.3:_Incubator_space
PES1.4.1 If we draft an SLO with the Editing team releasing Edit Check functionality, we will begin to learn and understand how to define and track user-facing SLOs together, and iterate on the process in the future.
PES1.4.2 If we define and publish SLAs for putting OOUI into “maintenance mode”, growth of new code using OOUI across Wikimedia projects will stay within X% in Q1.
PES1.4.3 If we map ownership using the proposed service catalog for known owned services in Q1, we will be able to identify significant gaps in the service catalog, which will help in building the SLO culture by the end of the year (an illustrative catalog entry appears after this table).
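
For PES1.4.3, here is an illustration of what a service catalog entry and an ownership gap check might look like. Every field name, the owning team, and the runbook URL are assumptions invented for the sketch; the proposed service catalog schema itself is not specified in this plan.

```python
# Hypothetical catalog entry; the schema is an assumption, not PES1.4.3's.
service_entry = {
    "name": "citoid",
    "owner_team": "Editing",          # accountable team (illustrative)
    "tier": "user-facing",
    "slos": [
        {"indicator": "availability", "target": 0.999, "window": "30d"},
        {"indicator": "p99_latency_ms", "target": 1500, "window": "30d"},
    ],
    "escalation": "#team-editing",    # where alerts route (illustrative)
    "runbook": "https://example.org/runbooks/citoid",  # placeholder URL
}

def find_ownership_gaps(catalog):
    """Flag entries missing an owner or any SLO: the gap analysis PES1.4.3 describes."""
    return [s["name"] for s in catalog
            if not s.get("owner_team") or not s.get("slos")]

print(find_ownership_gaps([service_entry]))  # [] because this entry is fully specified
```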

Q2

The second quarter (Q2) of the WMF annual plan covers October-December.

Wiki Experiences (WE) Hypotheses

[ WE Key Results ]

Discussion

Hypothesis shortname Q2 text Details & Discussion
WE1.1.1 If we expand the Event list to become a Community List that includes WikiProjects, then we will be able to gather some early learnings in how to engage with WikiProjects for product development. Campaigns/Foundation Product Team/Event list
WE1.1.2 If we launch at least 1 consultation focused on on-wiki collaborations, and if we collect feedback from at least 20 people involved in such collaborations, then we will be able to advise Campaigns Product on the key characteristics needed to develop a new or improved way of connecting. Campaigns/WikiProjects
WE1.1.3 If we consult 20 event organizers and 20 WikiProject organizers on the best use of topics available via LiftWing, then we can prioritize revisions to the topic model that will improve topical connections between events and WikiProjects.
WE1.1.4 If we integrate CampaignEvents into Community Configuration in Q2, then we will set the stage for at least 5 more wikis opting to enable extension features in Q3, thereby increasing tool usage.
WE1.2.2 If we build a library of UI components and visual artifacts, Edit Check’s user experience can extend to accommodate Structured Tasks patterns.
WE1.2.5 If we conduct an A/B/C test with the alt-text suggested edits prototype in the production version of the iOS app we can learn if adding alt-text to images is a task newcomers are successful with and ultimately, decide if it's impactful enough to implement as a suggested edit on the Web and/or in the Apps.
WE1.2.6 If we introduce new account holders to the “Add a Link” Structured Task in Wikipedia articles, we expect to increase the percentage of new account holders who constructively activate on mobile by 10% compared to the baseline.
WE1.3.1 If we enable additional customisation of Automoderator's behaviour and make changes based on pilot project feedback in Q1, more moderators will be satisfied with its feature set and reliability, and will opt to use it on their Wikimedia project, thereby increasing adoption of the product. mw:Moderator_Tools/Automoderator
WE1.3.3 If we improve the user experience and features of the Nuke extension during Q2, we will increase administrator satisfaction with the product by 5pp by the end of the quarter. mw:Extension:Nuke/2024_Moderator_Tools_project
WE2.1.3 If we offer list-making as a service, we’ll enable at least 5 communities to make more targeted contributions in their topic areas as measured by (1) change in standard quality coverage of relevant topics on the relevant wiki and (2) a brief survey of organizer satisfaction with topic area coverage on-wiki.
WE2.1.4 If we develop a proof of concept that adds translation tasks sourced from WikiProjects and other list-building initiatives, and presents them as suggestions within the CX mobile workflow, then more editors will discover and translate articles focused on topical gaps. By introducing an option that allows editors to select translation suggestions based on topical lists, we will test whether this approach increases the content coverage in our projects.
WE2.1.5 If we expose topic-based translation suggestions more broadly and analyze its initial impact, we will learn which aspects of the translation funnel to act on in order to obtain more quality translations.
WE2.2.4 If we provide production wiki access to 5 new languages, with or without Incubator, we will learn whether access to a full-fledged wiki with modern features such as those available on English Wikipedia (including ContentTranslation and Wikidata support, advanced editing and search results) aids in faster editing. Ultimately, this will inform us if this approach can be a viable direction for language onboarding for new or existing languages, justifying further investigation.
WE2.2.5 If we move addwiki.php to core and customize it for Wikimedia, we will improve code quality in our wiki creation system, making it testable and robust, and we will make things easier for creators of new wikis, thereby taking significant steps towards simplifying the wiki creation process. phab:T352113
WE2.3.2 If we make two further improvements to the media upload flow on Commons and share them with the community, the feedback will be positive and it will help uploaders make fewer problematic uploads (with the focus on copyright) as measured by the ratio of deletion requests within 30 days of upload. This will include release of further UX improvements to the release rights step in the Upload Wizard on Commons and automated detection of external sources.
WE2.3.3 If the BHL-Wikimedia Working Group creates Commons categories and descriptive guidelines for the South American and/or African species depicted in publications, they will make 3,000 images more accessible to biodiversity communities. (BHL = Biodiversity Heritage Library)
WE2.4.1 If we build a prototype of Wikifunctions calls embedded within MediaWiki content and test it locally for stability, we will be ready to use MediaWiki’s async content processing pipeline and test its performance feasibility in Q2. phab:T261472
WE2.4.2 If we create a design prototype of an initial Wikifunctions use case in a Wikipedia wiki, we will be ready to build and test our integration when performance feasibility is validated in Q2, as stated in Hypothesis 1. phab:T363391
WE2.4.3 If we make it possible for Wikifunctions users to access Wikidata lexicographical data, they will begin to create natural language functions that generate sentence phrases, including those that can handle irregular forms. If we see an average monthly creation rate of 31 for these functions, after the feature becomes available, we will know that our experiment is successful. phab:T282926
WE3.1.3 If we develop models for remixing content such as a content simplification or summarization that can be hosted and served via our infrastructure (e.g. LiftWing), we will establish the technical direction for work focused on increasing reader retention through new content discovery features. Research
WE3.1.6 If we introduce a personalized rabbit hole feature in the Android app and recommend condensed versions of articles based on the types of topics and sections a user is interested in, we will learn if the feature is sticky enough to result in multi-day usage by 10% of users exposed to the experiment over a 30-day period, and a higher pageview rate than users not exposed to the feature.
WE3.1.7 If we run a qualitative experiment focused on presenting article summaries to web readers, we will determine whether article summaries have the potential to increase reader retention, as proxied by clickthrough rate and usage patterns.
WE3.1.8 If we build one feature which provides additional article-level recommendations, we will see an increase in clickthrough rate of 10% over existing recommendation options and a significant increase in external referrals for users who actively interact with the new feature.
WE3.2.2 Increasing the prominence of entry points to donations on logged-out Vector web experiences, mobile and desktop, will increase the clickthrough rate of the donate link by 30% YoY. mw:Readers/2024_Reader_and_Donor_Experiences
WE3.2.3 If we make the “Donate” button in the iOS App more prominent by making it one click or less away from the main navigation screen, we will learn if discoverability was a barrier to non-banner donations. Navigation Refresh
WE3.2.4 If we update the contributions page for logged-in users in the app to include an active badge for someone that is an app donor and display an inactive state with a prompt to donate for someone that decided not to donate in app, we will learn if this recognition is of value to current donors and encourages behavior of donating for prospective donors, informing if it is worth expanding on the concept of donor badges or abandoning it. Private Donor Recognition Experiment
WE3.2.5 If we create a Wikipedia in Review experiment in the Wikipedia app, allowing users to see and share personalized data about their reading, editing, and donation habits, we will see 2% of viewers donate on iOS as a result of this feature, 5% click share, and 65% of users rate the feature neutral or satisfactory. Personalized Wikipedia Year in Review
WE3.2.7 Increasing the prominence of entry points to donations on logged-out Minerva web experiences, mobile and desktop, will increase the clickthrough rate of the donate link by 30% YoY.
WE3.3.2 If we develop the Charts MVP and get it working end-to-end in production test wikis, at least two Wikipedias + Commons agree to pilot it before the code freeze in December.
WE3.4.1 If we run an experiment to explore the feasibility of setting up smaller PoPs in cloud providers like Amazon, we can expand our data center map and reach more users around the world, at reduced cost and with faster turnaround time.
WE4.1.2 If we deploy at least one iteration of the Incident Reporting System MVP on pilot wikis, we will be able to gather valuable data around the frequency and type of incidents being reported. https://meta.wikimedia.org/wiki/Incident_Reporting_System#
WE4.2.1 If we explore and define Wikimedia-specific methods for a unique device identification model, we will be able to define the collection and storage mechanisms that we can later implement in our anti-abuse workflows to enable more targeted blocking of bad actors.
WE4.2.9 If we provide contextual information about reputation associated with an IP that is about to be blocked, we will see fewer collateral damage IP and IP range blocks, because administrators will have more insight into potential collateral damage effects of a block. We can measure this by instrumenting Special:Block and observing how behavior changes when additional information is present, vs when it is not.
WE4.2.2 If we define an algorithm for calculating a user account reputation score for use in anti-abuse workflows, we will prepare the groundwork for engineering efforts that use this score as an additional signal for administrators targeting bad actors on our platform. We will know the hypothesis is successful if the algorithm for calculating a score maps with X% precision to categories of existing accounts, e.g. a "low" score should apply to X% of permanently blocked accounts.
WE4.2.3 If we build an evaluation framework using publicly available technologies similar to the ones used in previous attacks, we will learn more about the efficacy of our current CAPTCHA at blocking attacks, and could recommend a CAPTCHA replacement that brings a measurable improvement in terms of the attack rate achievable for a given time and financial cost.
WE4.3.1 If we apply machine learning and data analysis tools to webrequest logs during known attacks, we'll be able to identify, with at least 80% precision, abusive IP addresses sending largely malicious traffic that we can then ratelimit at the edge, improving reliability for our users.
WE4.3.3 If we deploy a proof of concept of the 'Liberica' load balancer, we will measure a 33% improvement in our capacity to handle TCP SYN floods.
WE4.3.5 By creating a system that spawns and controls thousands of virtual workers in a cloud environment, we will be able to simulate Distributed Denial of Service (DDoS) attacks and effectively measure the system's ability to withstand, mitigate, and respond to such attacks.
WE4.3.6 If we integrate the output of the models we built in WE4.3.1 with the dynamic thresholds of per-IP concurrency limits we've built for our TLS terminators in WE4.3.2, we should be able to automatically neutralize attacks with 20% more volume, as measured with the simulation framework we're building (a token-bucket sketch of per-IP limiting appears after this table).
WE4.3.7 If we roll out a user-friendly web application that enables assisted editing and creation of requestctl rules, SREs will be able to mitigate cachebusting attacks in 50% less time than our established baseline.
WE4.4.2 If we deploy Temporary Accounts to a set of small-to-medium sized projects, we will be able to verify the functionality works as intended and will be able to gather data to inform necessary future work. mw:/wiki/Trust_and_Safety_Product/Temporary_Accounts
WE5.1.1 If we successfully roll out Parsoid Read Views to all Wikivoyages by Q1, this will boost our confidence in extending Parsoid Read Views to all Wikipedias. We will measure the success of this rollout through detailed evaluations using the Confidence Framework reports, with a particular focus on Visual Diff reports and the metrics related to performance and usability. Additionally, we will assess the reduction in the list of potential blockers, ensuring that critical issues are addressed prior to wider deployment.
WE5.1.3 If we reroute the endpoints currently exposed under rest_v1/page/html and rest_v1/page/title paths to comparable MW content endpoints, then we can unblock RESTbase sunsetting without disrupting clients in Q1.
WE5.1.4 If we complete the remaining work to mitigate the impact of browsers' anti-tracking measures on CentralAuth autologin and move to a more resilient authentication infrastructure (SUL3), we will be ready to roll out to production wikis in Q2.
WE5.1.5 If we increase the number of relevant SonarCloud rules enabled for key MediaWiki Core repositories and refine the quality of feedback provided to developers, we will optimize the developer experience and enable them to improve the maintainability of the MediaWiki codebase in the future. This will be measured by tracking developer satisfaction levels and whether test group developers feel the tool is becoming more useful and effective in their workflow. Feedback will be gathered through surveys and direct input from developers to evaluate the perceived impact on their confidence in the tool and the overall development experience.
WE5.1.7 If we represent all content module endpoint responses (10 in total) in our MediaWiki REST API OpenAPI spec definitions, we will be able to implement programmatic validation to guarantee that our generated documentation matches the actual responses returned in code.
WE5.1.8 If we introduce support for endpoint description translation (i.e., this does not include actual object definitions or payloads) into our generated MediaWiki REST API OpenAPI specs, we can lay the foundation to support Wikimedia’s expected internationalization standards.
WE5.2.3 If we conduct an experiment to reimplement at least [1-3] existing Core and Extension features using a new Domain Event and Listener platform component pattern as an alternative to traditional hooks, we will be able to confirm our assumption that this intervention enables simpler implementations with more consistent feature behavior.
WE5.3.3 If we instrument both parsers to collect availability of prior parses and timing of template expansions, and to classify updates and dependencies, we can prioritize work on selective updates (Hypothesis 5.3.2) informed by the quantification of the expected performance benefits.
WE5.3.4 If we can increase the capability of our prototype selective update implementation in Parsoid using the learnings from the 5.3.1 hypothesis, we can leverage more opportunities to increase the performance benefit from selective update.
WE5.4.1 If the MediaWiki engineering group is successful with release process accountability and enhances its communication process by the end of Q2 in alignment with the product strategy, we will eliminate the current process that relies on unplanned or volunteer work and improve community satisfaction with the release process, as measured by community feedback on the 1.43 LTS release coupled with a significant reduction in unplanned staff and volunteer hours needed for release processes.
WE5.4.2 If we research and build a process to more regularly upgrade PHP in conjunction with our MediaWiki release process, we will increase speed and security while reducing the complexity and runtime of our CI systems, as demonstrated by a successful PHP 8.1 upgrade before the 1.43 release.
WE6.1.3 If we collect insights on how different teams are making technical decisions, we will be able to identify good practices that can be enabled and scaled across the organization.
WE6.1.4 If we research solutions for indexing the code of all projects hosted in WMF’s code repositories, we will be able to pick a solution that allows our users to quickly discover where the code is located whenever dealing with incident response or troubleshooting.
WE6.1.5 If we test a subset of draft metrics on an experimental group of technical documentation collections, we will be able to make an informed decision about which metrics to implement for MediaWiki documentation. Wikimedia_Technical_Documentation_Team/Doc_metrics
WE6.2.1 If we publish a versioned build of MediaWiki, extensions, skins, and Wikimedia configuration at least once per day we will uncover new constraints and establish a baseline of wallclock time needed to perform a build. mw:Wikimedia Release Engineering Team/Group -1
WE6.2.2 If we replace the backend infrastructure of our existing shared MediaWiki development and testing environments (from apache virtual servers to kubernetes), it will enable us to extend its uses by enabling MediaWiki services in addition to the existing ability to develop MediaWiki core, extensions, and skins in an isolated environment. We will develop one environment that includes MediaWiki, one or more Extensions, and one or more Services. wikitech:Catalyst
WE6.2.3 If we create a new deployment UI that provides more information to the deployer and reduces the amount of privilege needed to deploy, it will make deployment easier and open deployments to more users, as measured by the number of unique deployers and the number of patches backported as a percentage of our overall deployments. mw:SpiderPig
WE6.2.5 If we move MultiVersion routing out of MediaWiki, we'll be able to ship single-version MediaWiki containers, largely cutting down the size of containers and allowing for faster deployments, as measured by the deployment tool. https://docs.google.com/document/d/1_AChNfiRFL3VdNzf6QFSCL9pM2gZbgLoMyAys9KKmKc/edit
WE6.2.6 If we gather feedback from QTE, SRE, and individuals with domain specific knowledge and use their feedback to write a design document for deploying and using the wmf/next OCI container, then we will reduce friction when we start deploying that container. T379683
WE6.3.4 If we enable the automatic deployment of a minimal tool, we will be able to evaluate the end-to-end flow and lay the groundwork for adding support for more complex tools and deployment flows. phab:T375199
WE6.3.5 By assessing the relative importance of each sustainability category and its associated metrics, we can create a normalized scoring system. This system, when implemented and recorded, will provide a baseline for measuring and comparing Toolforge’s sustainability progress over time. phab:T376896
WE6.3.6 If we conduct discovery, such as target user interviews and competitive analysis, to identify existing Toolforge pain points and improvement opportunities, we will be able to recommend a prioritized list of features for the future Toolforge UI. phab:T375914
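
WE4.3.2 and WE4.3.6 above refer to per-IP concurrency and rate limits at the edge. The sketch below shows the general token-bucket mechanism that such limits are commonly built on; it is not the production implementation at the TLS terminators, and the rate and capacity values are invented.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-IP token bucket: each IP may burst up to `capacity` requests,
    then is limited to `rate` requests per second on average."""

    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = defaultdict(lambda: capacity)  # tokens remaining per IP
        self.last = defaultdict(time.monotonic)      # last refill time per IP

    def allow(self, ip: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[ip]
        self.last[ip] = now
        # Refill proportionally to elapsed time, never above capacity.
        self.tokens[ip] = min(self.capacity, self.tokens[ip] + elapsed * self.rate)
        if self.tokens[ip] >= 1:
            self.tokens[ip] -= 1
            return True
        return False  # over the limit: throttle or challenge this request

limiter = TokenBucket(rate=10, capacity=50)  # invented thresholds
print(limiter.allow("198.51.100.7"))         # True until the bucket drains
```

Dynamic thresholds, as WE4.3.6 describes, would then adjust `rate` and `capacity` from model output rather than leaving them static.
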
Signals & Data Services (SDS) Hypotheses

[ SDS Key Results ]

Discussion

Hypothesis shortname Q2 text Details & Discussion
SDS 1.1.1 If we partner with an initiative owner and evaluate the impact of their work on Core Foundation metrics, we can identify and socialize a repeatable mechanism by which teams at the Foundation can reliably impact Core Foundation metrics.
SDS1.2.1.B If we test the accuracy and infrastructure constraints of 4 existing AI language models for 2 or more high-priority product use-cases, we will be able to write a report recommending at least one AI model that we can use for further tuning towards strategic product investments. phab:T377159 Learn more.
SDS1.2.2 If we study the recruitment, retention, and attrition patterns among long-tenure community members in official moderation and administration roles, and understand the factors affecting these phenomena (the ‘why’ behind the trends), we will better understand the extent, nature, and variability of the phenomenon across projects. This will in turn enable us to identify opportunities for better interventions and support aimed at producing a robust multi-generational framework for editors. Learn more.
SDS1.2.3 If we combine existing knowledge about moderators with quantitative methods for detecting moderation activity, we can systematically define and identify Wikipedia moderators. T376684
SDS1.3.1.B If we integrate the Spark / DataHub connector for all production Spark jobs, we will get column-level lineage for all Spark-based data platform jobs in DataHub.
SDS1.3.2.B If we implement a frequently run Spark-based MariaDB MW history data querying job, reconcile missing events, and enrich them, we will provide a daily updated MW history wikitext content data lake table.
SDS2.1.1 If we create an integration test environment for the proposed 3rd party experimentation solution, we can collaborate practically with Data SRE, SRE, QTE, and Product Analytics to evaluate the solution’s viability within WMF infrastructure in order to make a confident build/install/buy recommendation. mw:Data_Platform_Engineering/Data_Products/work_focus
SDS2.1.3 If the Growth team learns about the Metrics Platform by instrumenting a Homepage Module on the Metrics Platform, then we will be prepared to outline a measurement plan in Q1 and complete an A/B test on the new Metrics platform by the end of Q2.
SDS2.1.4 If we conduct usability testing on our prototype among pilot users of our experimentation process, we can identify and prioritize the primary pain points faced by product managers and other stakeholders in setting up and analyzing experiments independently. This understanding will lead to the refinement of our tools, enhancing their efficiency and impact.
SDS2.1.5 If we design a documentation system that guides the experience of users building instrumentation using the Metrics Platform, we will enable those users to independently create instrumentation without direct support from Data Products teams, except in edge cases. task T329506
SDS2.1.7 If we provide a function for user enrollment and a mechanism to capture and store CTR events to a monotable in a pre-declared event stream, we can ship MPIC Alpha in order to launch a basic split A/B test on logged-in users (a sketch of deterministic enrollment appears after this table).
SDS2.2.2 If we define a standard approach for measuring and analyzing conversion rates, it will help us establish a collection of well-defined metrics to be used for experimentation and baselines, and start enabling comparisons between experiments/projects to increase learning from these.
SDS2.3.1 If we conduct a legal review of proposed unique cookies for logged out users, we can determine whether there are any privacy policy or other legal issues which inform the community consultation and/or affect the technical implementation itself.
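
For SDS2.1.7, a common way to implement user enrollment is to hash the experiment and user identifiers together, so that assignment is deterministic for a user within an experiment and independent across experiments. The sketch below is an assumption about one workable approach, not MPIC's actual enrollment function.

```python
import hashlib

def assign_variant(user_id: int, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a logged-in user into an experiment arm."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# "mpic-demo" is a made-up experiment slug for illustration.
print(assign_variant(12345, "mpic-demo"))
# The same (user, experiment) pair always yields the same arm, which is what
# lets CTR events captured later be attributed to a stable assignment.
```
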
Future Audiences (FA) Hypotheses

[ FA Key Results ]

Discussion

Hypothesis shortname Q2 text Details & Discussion
FA1.1.1 If we make off-site contribution very low effort with an AI-powered “Add a Fact” experiment, we can learn whether off-platform users could help grow/sustain the knowledge store in a possible future where Wikipedia content is mainly consumed off-platform. Experiment:Add a Fact
Product and Engineering Support (PES) Hypotheses

[ PES Key Results ]

Discussion

Hypothesis shortname Q2 text Details & Discussion
PES1.2.4 If we research the Task Prioritization focus area in the Community Wishlist in early Q2, we will be able to identify and prioritize work that will improve moderator satisfaction, which we can begin implementing in Q3.
PES1.2.5 If we are able to publish and receive community feedback on 6+ focus areas in Q2, then we will have confidence in presenting at least 3+ focus areas for incorporation in the 2025-26 annual plan.
PES1.2.6 By introducing the favouriting of templates, we will increase the number of templates added via the template dialog by 10%.
PES1.3.4 If we create an experience that provides insights to Wikipedia Audiences about their community over the year, it will stimulate greater connection with Wikipedia – encouraging engagement in the form of social sharing, time spent interacting on Wikipedia, or donation.
PES1.4.1 If we draft an SLO with the Editing team releasing Edit Check functionality, we will begin to learn and understand how to define and track user-facing SLOs together, and iterate on the process in the future.
PES1.4.2 If we define and publish SLAs for putting OOUI into “maintenance mode”, growth of new code using OOUI across Wikimedia projects will stay within X% in Q1.
PES1.4.3 If we map ownership using the proposed service catalog for known owned services in Q1, we will be able to identify significant gaps in the service catalog, which will help in building the SLO culture by the end of the year.
PES1.5.1 If we finalize and publish the Edit Check SLO draft, practice incorporating it in regular workflows and decisions, and draft a Citoid SLO, we’ll continue learning how to define and track user-facing and cross-team SLOs together.
PES1.5.2 If we clarify and define in writing a set of roles and responsibilities for stakeholders throughout the service lifecycle, this will enable teams to make informed commitments in the Service Catalog, including SLOs.


Explanation of buckets

Wiki Experiences

Diversity (40786) – The Noun Project

The purpose of this bucket is to efficiently deliver, improve and innovate on wiki experiences that enable the distribution of free knowledge worldwide. This bucket aligns with movement strategy recommendations #2 (Improve User Experience) and #3 (Provide for Safety and Inclusion). Our audiences include all collaborators on our websites, as well as the readers and other consumers of free knowledge. We support a top-10 global website, and many other important free culture resources. These systems have performance and uptime requirements on par with the biggest tech companies in the world. We provide user interfaces to wikis, translation, developer APIs (and more!) and supporting applications and infrastructure that all form a robust platform for volunteers to collaborate to produce free knowledge worldwide. Our objectives for this bucket should enable us to improve our core technology and capabilities, ensure we continuously improve the experience of volunteer editors and moderators of our projects, improve the experience of all technical contributors working to improve or enhance the wiki experiences, and ensure a great experience for readers and consumers of free knowledge worldwide. We will do this through product and technology work, as well as through research and marketing. We expect to have at most five objectives for this bucket.

Knowledge is constructed by people! And as a result our annual plan will focus on the content as well as the people who contribute to the content and those who access and read it.

Our aim is to produce an operating plan based on existing strategy, mainly our hypotheses about the contributor, consumer and content "flywheel". The primary shift I’m asking for is an emphasis on the content portion of the flywheel, and exploration of what our moderators and functionaries might need from us now, with the aim of identifying community health metrics in the future.

Signals and Data Services

Arrythmia noun 246518

In order to meet the Movement Strategy Recommendations for Ensuring Equity in Decision Making (Recommendation #4), Improving User Experience (Recommendation #2), and Evaluating, Iterating and Adapting (Recommendation #10), decision makers from across the Wikimedia Movement must have access to reliable, relevant, and timely data, models, insights, and tools that can help them assess the impact (both realized and potential) of their work and the work of their communities, enabling them to make better strategic decisions.

In the Signals & Data Services bucket, we have identified four primary audiences: Wikimedia Foundation staff, Wikimedia affiliates and user groups, developers who reuse our content, and Wikimedia researchers, and we prioritize and address the data and insights needs of these audiences. Our work will span a range of activities: defining gaps, developing metrics, building pipelines for computing metrics, and developing data and signals exploration experiences and pathways that help decision makers interact more effectively and joyfully with the data and insights.

Future Audiences

The purpose of this bucket is to explore strategies for expanding beyond our existing audiences of consumers and contributors, in an effort to truly reach everyone in the world as the essential infrastructure of the ecosystem of free knowledge. This bucket aligns with Movement Strategy Recommendation #9 (Innovate in Free Knowledge). More and more, people are consuming information in experiences and forms that diverge from our traditional offering of a website with articles – people are using voice assistants, spending time with video, engaging with AI, and more. In this bucket, we will propose and test hypotheses around potential long-term futures for the free knowledge ecosystem and how we will be its essential infrastructure. We will do this through product and technology work, as well as through research, partnerships, and marketing. As we identify promising future states, learnings from this bucket will influence and be expanded through Buckets #1 and #2 in successive annual plans, nudging our product and technology offerings toward where they need to be to serve knowledge-seekers of the future. Our objectives for this bucket should drive us to experiment and explore as we bring a vision for the future of free knowledge into focus.

Sub-buckets

Noun project 3067

We also have two other “sub-buckets”, which consist of critical functions that must exist at the Foundation to support our basic operations, some of which we have in common with any software organization. These “sub-buckets” won’t have top-level objectives of their own, but will have input on and will support the top-level objectives of the other groups. They are:

  1. Infrastructure Foundations. This bucket covers the teams which sustain and evolve our datacenters, our compute and storage platforms, the services to operate them, and the tools and processes that enable the operation of our public-facing sites and services.
  2. Product and Engineering Support. This bucket includes teams which operate “at scale”, providing services that improve the productivity and operations of other teams.