Machine learning models/Production/Turkish Wikipedia goodfaith edit


Model card
This page is an on-wiki machine learning model card.
A model card is a document about a machine learning model that seeks to answer basic questions about the model.
Model Information Hub
Model creator(s): Aaron Halfaker (User:EpochFail) and Amir Sarabadani
Model owner(s): WMF Machine Learning Team (ml@wikimediafoundation.org)
Model interface: ORES homepage
Code: ORES GitHub, ORES training data, and ORES model binaries
Uses PII: No
In production? Yes
Which projects? Turkish Wikipedia
This model uses data about a revision to predict the likelihood that the revision is in good faith.


Motivation


Not all damaging edits are vandalism. This model is intended to differentiate between edits that are intentionally harmful (badfaith/vandalism) and edits that are not intended to be harmful (good edits or goodfaith damage). The model predicts whether or not a given revision was made in good faith and reports probabilities that serve as a measure of its confidence. This model was inspired by research into Wikipedia's quality control systems and the potential for vandalism detection models to also be used as "goodfaith newcomer" detection systems.[1]

Users and uses

Use this model for
  • This model should be used for prioritizing the review and potential reversion of vandalism on Turkish Wikipedia.
  • This model should be used for detecting goodfaith contributions by editors on Turkish Wikipedia.
Don't use this model for
  • This model should not be used as an ultimate arbiter of whether or not an edit ought to be considered good faith.
  • The model should not be used outside of Turkish Wikipedia.
Current uses
  • Turkish Wikipedia uses the model as a service to facilitate efficient edit review and newcomer support.
  • On an individual basis, anyone can submit a properly-formatted API call to ORES for a given revision and get back the result of this model.
Example API call:
https://ores.wikimedia.org/v3/scores/trwiki/1234/goodfaith
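
For illustration, a minimal Python sketch of making such a call with the requests library (the revision ID 1234 is just the placeholder from the URL above) could look like:

import requests

# Request a goodfaith score for a single Turkish Wikipedia revision.
# The revision ID below is only a placeholder.
rev_id = 1234
url = f"https://ores.wikimedia.org/v3/scores/trwiki/{rev_id}/goodfaith"

response = requests.get(url, timeout=10)
response.raise_for_status()
print(response.json())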

Ethical considerations, caveats, and recommendations


The Turkish Wikipedia community decided to use this model, and over time the model has been validated through use in the community.

This model is known to give newer editors a lower probability of editing in good faith.

Internal or external changes that could make this model deprecated or no longer usable include:

  • Data drift, meaning the training data for the model is no longer representative and the model is no longer usable.
  • The model no longer meeting desired performance metrics in production.
  • The Turkish Wikipedia community deciding not to use this model anymore.

Model


Performance


Test data confusion matrix:

Label n Predicted true Predicted false
True 37555 34932 2623
False 1178 355 823

Test data sample rates:

Rate True False
sample 0.97 0.03
population 0.954 0.046

Test data performance:

Statistic True False
match_rate 0.901 0.099
filter_rate 0.099 0.901
recall 0.93 0.699
precision 0.985 0.326
f1 0.957 0.444
accuracy 0.919 0.919
fpr 0.301 0.07
roc_auc 0.935 0.935
pr_auc 0.996 0.404
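
As a worked check (not the ORES evaluation code), the recall and false-positive-rate figures above follow directly from the confusion matrix. The precision, accuracy, and AUC figures are additionally rescaled to the estimated population rates, so recomputing them from the raw sample counts gives slightly different values.

# Confusion matrix counts from the table above.
tp, fn = 34932, 2623   # label True:  predicted true / predicted false
fp, tn = 355, 823      # label False: predicted true / predicted false

recall_true = tp / (tp + fn)    # 34932 / 37555 ≈ 0.930
recall_false = tn / (fp + tn)   # 823 / 1178   ≈ 0.699
fpr_true = fp / (fp + tn)       # 355 / 1178   ≈ 0.301
fpr_false = fn / (tp + fn)      # 2623 / 37555 ≈ 0.070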

Implementation

Model architecture
{
    "type": "GradientBoosting",
    "params": {
        "scale": true,
        "center": true,
        "labels": [
            true,
            false
        ],
        "multilabel": false,
        "population_rates": null,
        "ccp_alpha": 0.0,
        "criterion": "friedman_mse",
        "init": null,
        "learning_rate": 0.01,
        "loss": "deviance",
        "max_depth": 7,
        "max_features": "log2",
        "max_leaf_nodes": null,
        "min_impurity_decrease": 0.0,
        "min_impurity_split": null,
        "min_samples_leaf": 1,
        "min_samples_split": 2,
        "min_weight_fraction_leaf": 0.0,
        "n_estimators": 700,
        "n_iter_no_change": null,
        "presort": "deprecated",
        "random_state": null,
        "subsample": 1.0,
        "tol": 0.0001,
        "validation_fraction": 0.1,
        "verbose": 0,
        "warm_start": false
    }
}
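
The "type" and "params" entries above describe a gradient-boosted decision tree classifier. As a rough sketch only (the scale, center, labels, multilabel, and population_rates entries belong to the revscoring wrapper rather than to scikit-learn, and "deviance"/"presort" reflect an older scikit-learn release), the underlying estimator corresponds approximately to:

from sklearn.ensemble import GradientBoostingClassifier

# Approximate reconstruction of the configuration above. "loss": "deviance"
# is the pre-1.1 name for what newer scikit-learn releases call "log_loss";
# "presort" has been removed entirely, so neither is passed here.
model = GradientBoostingClassifier(
    learning_rate=0.01,
    n_estimators=700,
    max_depth=7,
    max_features="log2",
    min_samples_leaf=1,
    min_samples_split=2,
    subsample=1.0,
    validation_fraction=0.1,
    tol=1e-4,
)
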
Output schema
{
    "title": "Scikit learn-based classifier score with probability",
    "type": "object",
    "properties": {
        "prediction": {
            "description": "The most likely label predicted by the estimator",
            "type": "boolean"
        },
        "probability": {
            "description": "A mapping of probabilities onto each of the potential output labels",
            "type": "object",
            "properties": {
                "true": {
                    "type": "number"
                },
                "false": {
                    "type": "number"
                }
            }
        }
    }
}
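
As an illustration only (not part of ORES itself), a returned score object can be checked against this schema with the Python jsonschema package:

from jsonschema import validate

# Abbreviated copy of the schema above; in practice it could be loaded
# from the model card or from the ORES response metadata.
schema = {
    "type": "object",
    "properties": {
        "prediction": {"type": "boolean"},
        "probability": {
            "type": "object",
            "properties": {
                "true": {"type": "number"},
                "false": {"type": "number"},
            },
        },
    },
}

# Example score taken from the output below.
score = {"prediction": True, "probability": {"true": 0.742, "false": 0.258}}

validate(instance=score, schema=schema)  # raises ValidationError on mismatch
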
Example input and output
Input:
https://ores.wikimedia.org/v3/scores/trwiki/1234/goodfaith

Output:

{
    "trwiki": {
        "models": {
            "goodfaith": {
                "version": "0.5.1"
            }
        },
        "scores": {
            "1234": {
                "goodfaith": {
                    "score": {
                        "prediction": true,
                        "probability": {
                            "false": 0.25792436772637684,
                            "true": 0.7420756322736232
                        }
                    }
                }
            }
        }
    }
}
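
A minimal sketch of unpacking that response in Python (again using the placeholder revision ID 1234) might be:

import requests

# Fetch and unpack the goodfaith score for one revision; the nesting
# follows the example output above.
rev_id = "1234"
url = f"https://ores.wikimedia.org/v3/scores/trwiki/{rev_id}/goodfaith"
data = requests.get(url, timeout=10).json()

score = data["trwiki"]["scores"][rev_id]["goodfaith"]["score"]
prediction = score["prediction"]             # e.g. True
p_goodfaith = score["probability"]["true"]   # e.g. 0.742
print(f"goodfaith={prediction} (p={p_goodfaith:.3f})")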

Data

Data pipeline
Tabular data about edits is collected from the MediaWiki API, preprocessed (for example, via log-transformations and joins with public editor data), and joined with user-generated goodfaith/damaging labels.
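
For intuition only, a toy sketch of that kind of preprocessing (the feature and column names here are hypothetical, not the actual revscoring features) might look like:

import numpy as np
import pandas as pd

# Hypothetical per-revision features pulled from the MediaWiki API.
edits = pd.DataFrame({
    "rev_id": [101, 102],
    "bytes_changed": [2048, 15],
    "editor_edit_count": [3, 5200],
})

# Log-transform heavy-tailed count features.
for col in ["bytes_changed", "editor_edit_count"]:
    edits[f"log_{col}"] = np.log1p(edits[col])

# Join with human-generated goodfaith labels keyed by revision ID.
labels = pd.DataFrame({"rev_id": [101, 102], "goodfaith": [True, False]})
dataset = edits.merge(labels, on="rev_id")
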
Training data
This model was trained using hand-labeled training data that is several years old.
Test data
The statistics reported here were calculated by holding out a randomly selected partition of the labeled data from the training process. The model then makes predictions on that held-out data, which are compared to the underlying ground-truth labels.
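
In scikit-learn terms, that hold-out procedure corresponds roughly to the following sketch (the synthetic data and the 20% test fraction are illustrative, not the actual ORES setup):

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Stand-in for the labeled feature table; in ORES this comes from the
# data pipeline described above.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = rng.random(1000) > 0.05          # imbalanced goodfaith/badfaith labels

# Hold out a random partition of the labeled data for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = GradientBoostingClassifier(learning_rate=0.01, n_estimators=700,
                                   max_depth=7, max_features="log2")
model.fit(X_train, y_train)
print(model.score(X_test, y_test))   # accuracy on the held-out partition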

Licenses


Citation


Cite this model card as:

@misc{
  Triedman_Bazira_2023_Turkish_Wikipedia_goodfaith,
  title={ Turkish Wikipedia goodfaith model card },
  author={ Triedman, Harold and Bazira, Kevin },
  year={ 2023 },
  url={ https://meta.wikimedia.org/wiki/Machine_learning_models/Production/Turkish_Wikipedia_goodfaith_edit }
}
  1. Halfaker, A., Geiger, R. S., & Terveen, L. G. (2014, April). Snuggle: Designing for efficient socialization and ideological critique. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 311-320).