Research:Newcomer desirability modeling

This page documents a completed research project.


In order to better support Snuggle users, I'd like to allow them to view the newcomers who are most likely to be editing in good faith first, to save them time. To do this, I'll need to build a model of newcomer activity that allows me to assign a likelihood of good faith. This model will have to be able to make useful judgments about newcomers who have made few edits and would, preferably, refine these judgments based on new information. In other words, I need to be able to produce a useful rating for users I know little about and refine that rating as more information arrives.

Project Summary


One simple and naive approach would be to sort editors by the proportion of their edits that have been reverted. The problem with this approach is that it, in some ways, defeats the intended use of Snuggle. If only the newcomers who are least reverted are judged to be working in good faith, Snuggle won't be a very useful tool for newcomers who run into the dark side of Wikipedia's quality control system. Certainly, there's potential for a massive difference in quality between newcomers who submit racial slurs and those who simply ran afoul of some of Wikipedia's more complicated policies and guidelines.

To differentiate between these two types of reverts, I'll take advantage of models already used to assess newcomer behavior in Wikipedia: anti-vandal bots. Many of these bots publish an API that allows an external service (like Snuggle) to request the scores and other metadata that the bots' machine classifiers use to determine when an edit is vandalism. See [STiki's API readme] for an example.
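To make the idea concrete, the sketch below shows roughly how a service like Snuggle might request such a score. The endpoint URL, parameter name, and response field are placeholders of mine, not STiki's actual interface; the API readme linked above documents the real details.

<syntaxhighlight lang="python">
import requests

# Hypothetical endpoint and field names -- nothing here should be read as
# STiki's actual API; consult its API readme for the real interface.
STIKI_API_URL = "https://example.org/stiki/api"

def fetch_vandalism_score(rev_id):
    """Request the vandalism score that the classifier assigned to one revision."""
    response = requests.get(STIKI_API_URL, params={"revid": rev_id}, timeout=10)
    response.raise_for_status()
    # Assume the service returns JSON with a probability-like score in [0, 1].
    return float(response.json()["score"])
</syntaxhighlight>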

Methods

Figure: A histogram of STiki scores for main namespace edits by newcomers, plotted for undesirable (0) and desirable (1) newcomers.

Figure: Two beta distributions fit to the empirical STiki scores for undesirable (0) and desirable (1) newcomers.

To train a model of desirable newcomers, I'll need a training set containing pre-labeled newcomers. Luckily, I built such a dataset in my previous work with the Wikimedia Foundation [1].

As a signal, I'll be using the scores assigned to individual edits by STiki, a statistical vandalism detection system running on the English Wikipedia. Sadly, the hand-coded dataset only overlaps with the vandalism scores for the last year or so, which means I only have 124 observations to work with. This may be close to enough to move forward.

Although there is a substantial amount of overlap between the two distributions, the center of density for desirable newcomers (1) is much lower than that for undesirable newcomers (0). So, if these values are randomly distributed but bounded by 0 and 1 (which seems evident), we should be able to fit a pair of en:beta distributions.
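A minimal sketch of that fitting step, assuming SciPy is available and that the labeled STiki scores have already been split into the two classes (the function and variable names here are illustrative, not Snuggle's):

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

def fit_class_distributions(desirable_scores, undesirable_scores):
    """Fit a beta distribution to the STiki scores of each newcomer class.

    Scores are bounded by 0 and 1, so location and scale are pinned to the
    unit interval and only the two shape parameters are estimated.
    """
    eps = 1e-6  # keep exact 0/1 scores inside the open interval
    fits = {}
    for label, scores in ((1, desirable_scores), (0, undesirable_scores)):
        scores = np.clip(np.asarray(scores, dtype=float), eps, 1 - eps)
        a, b, _, _ = stats.beta.fit(scores, floc=0, fscale=1)
        fits[label] = (a, b)
    return fits  # {1: (alpha_1, beta_1), 0: (alpha_0, beta_0)}
</syntaxhighlight>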

Using these theoretical distributions as a prior for all new editors in Wikipedia, I can use the STiki scores of a newcomer's new edits to assign a confidence for which model best fits that newcomer's scores. Since the two potential models represent a dichotomy (desirable/undesirable), I can represent the probability that a newcomer's edits fit into the desirable distribution as a desirability ratio:
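One natural form for this ratio, writing <math>f(x;\alpha,\beta)</math> for the beta density and using the fitted parameters for the desirable (1) and undesirable (0) classes, is the likelihood ratio of a newcomer's observed scores <math>x_1, \ldots, x_n</math> (equal prior weight on the two classes is assumed here):

<math>
R = \frac{\prod_{i=1}^{n} f(x_i; \alpha_1, \beta_1)}{\prod_{i=1}^{n} f(x_i; \alpha_0, \beta_0)},
\qquad
P(\text{desirable}) = \frac{R}{R + 1}
</math>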


Results


To ensure that this model was doing its job, I re-applied it to the original training set to see how well it could re-classify my labeled newcomers. The figure below captures the desirability ratio scores (converted back to a probability) for each quality class.

Figure: Histograms of the modeled probability of desirability for undesirable (0) and desirable (1) newcomers.

The figure above suggests that this model will be effective at assigning a high desirability ratio to desirable newcomers.
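For completeness, converting the ratio back into a probability for one newcomer's set of scores could look roughly like the following, reusing the fitted shape parameters from the sketch above (again, the names are illustrative):

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

def desirability_probability(scores, fits):
    """P(desirable) for one newcomer's STiki scores, given beta fits per class.

    `fits` maps class label -> (alpha, beta), as returned by
    fit_class_distributions() above. Equal class priors are assumed.
    """
    scores = np.clip(np.asarray(scores, dtype=float), 1e-6, 1 - 1e-6)
    # Work in log space so the product of many densities does not underflow.
    log_like_1 = stats.beta.logpdf(scores, *fits[1]).sum()
    log_like_0 = stats.beta.logpdf(scores, *fits[0]).sum()
    log_ratio = log_like_1 - log_like_0
    return 1.0 / (1.0 + np.exp(-log_ratio))  # equals R / (R + 1)
</syntaxhighlight>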

Summary


This document describes an approach for predicting the desirability of new editors that has the following characteristics:

  • Useful predictions can be made with very little data
  • Predictions will gain accuracy with more data
  • The prediction output value is trivially sortable (for presentation in en:WP:Snuggle)

References
