Welcome to Wikilytics, a free and open-source software toolkit for analyzing editing trends on Wikipedia and other Wikimedia projects.

Background


This package offers a set of tools for creating datasets to analyze editing trends. It was first created expressly for the Editor Trends Study, but it is well suited to a variety of research into editing trends. It is free to use (as in beer and freedom), whether you are interested in expanding on the results of the Editor Trends Study or would like to use it in other research projects.


The Python scripts that create the dataset to answer the question “Which editors are leaving -- are they the new editors or the more tenured ones?” consist of three separate phases (a rough Python sketch of the parsing phase follows this list):

  • Chunk the XML dump file into smaller parts
    • and discard all non-zero namespace revisions.
  • Parse XML chunks by taking the following steps:
    • read XML chunk
    • construct XML DOM
    • iterate over each article in XML DOM
    • iterate over each revision in each article
    • extract from each revision
      • username ID
      • edit date
      • article ID
    • determine whether the username belongs to a bot and, if so, discard the revision
    • store data in MongoDB
  • Create dataset from MongoDB database
    • Create list with unique username IDs
    • Loop over each ID
      • determine year of first edit
      • determine year of last edit
      • count total number of edits by year
      • sort edits by date and keep first 10 edits
    • Write to CSV file.
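
As a rough illustration of the parsing phase, here is a minimal, hypothetical Python sketch. It is not the toolkit's actual code: it assumes a chunk that follows the standard Wikimedia XML export schema, a local MongoDB instance, the older pymongo Connection API, and placeholder names for the chunk file, database, collection and bot list.

from xml.etree import cElementTree as ElementTree
import pymongo

# The namespace URI depends on the dump version; 0.4 is only an example.
NS = '{http://www.mediawiki.org/xml/export-0.4/}'
BOTS = set(['ExampleBot'])  # placeholder list of bot usernames

db = pymongo.Connection()['wikilytics']  # older pymongo API


def parse_chunk(filename):
    for event, page in ElementTree.iterparse(filename):
        if not page.tag.endswith('page'):
            continue
        article_id = page.findtext(NS + 'id')
        for revision in page.findall(NS + 'revision'):
            contributor = revision.find(NS + 'contributor')
            if contributor is None:
                continue
            username = contributor.findtext(NS + 'username')
            editor_id = contributor.findtext(NS + 'id')
            date = revision.findtext(NS + 'timestamp')
            # Discard anonymous edits and edits made by bots.
            if editor_id is None or username in BOTS:
                continue
            db.editors.update({'editor': editor_id},
                              {'$push': {'edits': {'date': date,
                                                   'article': article_id}}},
                              upsert=True)
        page.clear()  # free memory before moving on to the next article


parse_chunk('enwiki_chunk_0.xml')  # placeholder chunk filename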

Each person who has contributed to Wikipedia has their own document in the MongoDB database. A document is somewhat similar to a row in a SQL database, but there are important differences. The document has the following structure:

{'editor': id,
 'year_joined': year,
 'new_wikipedian': True,
 'total_edits': n,
 'edits': {
           'date': date,
           'article': article_id,
          }
}

The edits variable is a sub-document containing all the edits made by that person, sorted by date: the first observation is the first edit made by that person and the last observation is the final edit. This structure allows for quickly querying the database:

use wikilytics
db.editors_dataset.find_one({'editor': '35252'}, {'edits': 1})


Because we know that each editor has their own document, we do not need to scan an entire collection to find all relevant matches. Hence, we can use the find_one() function, which results in considerable speed improvements.
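
The same query can be issued from Python, and the document structure also makes the dataset step straightforward. The following is a minimal, hypothetical pymongo sketch rather than the toolkit's actual code: it assumes a local MongoDB instance, the wikilytics database and editors_dataset collection shown above, that edits is stored as a date-sorted list of {'date': ..., 'article': ...} sub-documents with ISO 8601 timestamp strings, and the older pymongo Connection API; the output filename and column names are illustrative only.

import csv
import pymongo

db = pymongo.Connection()['wikilytics']  # older pymongo API

# Fetch a single editor's edit history; no collection scan is needed
# because every editor has exactly one document. The field list mirrors
# the {'edits': 1} projection used in the mongo shell example above.
doc = db.editors_dataset.find_one({'editor': '35252'}, ['edits'])
print doc

# Write a small dataset: one row per editor with the year of the first
# edit, the year of the last edit, and the total number of edits.
fh = open('editors.csv', 'wb')
writer = csv.writer(fh)
writer.writerow(['editor', 'year_first_edit', 'year_last_edit', 'total_edits'])
for editor in db.editors_dataset.find():
    edits = editor.get('edits', [])
    if not edits:
        continue
    writer.writerow([editor['editor'],
                     edits[0]['date'][:4],   # year of the first (oldest) edit
                     edits[-1]['date'][:4],  # year of the last (newest) edit
                     len(edits)])
fh.close()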

Installation


Step-by-Step Movie Tutorial


There is an online tutorial available on Vimeo. At the moment you cannot install the Editor Trends toolkit on Mac OS X; I will try to work around some Mac OS X restrictions regarding multiprocessing.

Dependencies


Follow these steps if you would like to replicate the analysis on a Wikipedia of your choice.

  1. Download and install MongoDB, preferably the 64-bit version.
  2. Download and install Python 2.6 or 2.7 (The code is not Python 3 compliant, and it has not been tested using Python < 2.6)
    Linux users may need to install the packages python-argparse, python-progressbar and pymongo if they are not installed by default with your Python distribution.
  3. Download and install a Subversion client
  4. Depending on your platform, make sure you have one of the following extraction utilities installed:
  • Windows: 7-Zip
  • Linux: tar (should be installed by default)

To verify that you have installed the required dependencies, do the following:

<prompt>:: mongo
MongoDB shell version: 1.6.3
connecting to: test
<prompt> (in mongo shell) exit

<prompt>:: python
Python 2.6.2 (r262:71605, Apr 14 2009, 22:40:02) [MSC v.1500 32 bit (Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
<prompt> (in python) exit()

<prompt>:: 7z or tar (depending on your platform)
7-Zip [64] 4.65  Copyright (c) 1999-2009 Igor Pavlov  2009-02-03

<prompt>:: svn

Output on the console might look different depending on your OS and installed version.

For Windows users, add the following directories to the PATH:

c:\python26;c:\python26\scripts;c:\mongodb\bin;

To finish the MongoDB configuration, do the following:

cd \
mkdir data
mkdir data\db
cd \mongodb\bin
mongod --install --logpath c:\mongodb\logs
net start mongodb

Prepare your Python environment by taking the following steps:

1. Check whether easy_install is installed by issuing the command:

easy_install

If easy_install is not installed, then enter the following command:

sudo apt-get install python-setuptools

2. Check whether virtualenv is installed by issuing the following command:

virtualenv

If virtualenv is not installed, enter this command:

sudo easy_install virtualenv

Go to the directory where you want to install your virtual Python; it's okay to go to the parent directory of editor_trends. Then, issue this command:

virtualenv editor_trends

This will copy the Python executable and libraries to editor_trends/bin and editor_trends/libs. Now, we have to activate our virtual Python:

source bin/activate

On Windows, please use the following:

\path\to\env\Scripts\activate

You will see that your command prompt has changed to indicate that you are working with the virtual Python installation instead of the system's default installation. Any dependencies you install now will go into your virtual Python installation rather than the system Python installation. This will keep everybody happy. Finally, enter the following commands:

easy_install progressbar
easy_install pymongo
easy_install argparse
easy_install python-dateutil
easy_install texttable

Your virtual Python environment is now set up; if everything is running, you are ready to go.

Important MongoDB Notes


If you decide to use MongoDB to store the results, then you have to install the 64-bit version. 32-bit versions of MongoDB are limited to 2 GB of data, and the databases created by this package will definitely be larger than that. For more background information on this limitation, please read MongoDB 32-bit limitations.

Install Editor Trend Analytics


First, download Editor Trend Analytics

Getting started


By now, you should have Editor Trend Analytics up and running. The first thing you need to do is to download a Wikipedia dump file.

From now on, I'll assume that you are located in the directory where you installed Editor Trend Analytics.

Download Wikipedia dump file


To download a dump file, enter the following command:

python manage.py download

You can also specify the language (either using the English name or the local name) of the Wikipedia project that you would like to analyze:

python manage.py -l Spanish download 
python manage.py -l Español download 

Or, if you want to download a non-Wikipedia dump file, enter the following command:

python manage.py -l Spanish download {commons|wikibooks|wikinews|wikiquote|wikisource|wikiversity|wiktionary}

To obtain a list of all supported languages, enter:

python manage.py show_languages

or to obtain all languages starting with 'x', enter:

python manage.py show_languages --first x


Extract Wikipedia dump file


WARNING: This process might take hours to days, depending on the configuration of your system. The Wikipedia dump file is extracted and split into smaller chunks to speed up the processing. Enter the following command:

python manage.py extract (for extracting data from the Wikipedia dump file and storing it in smaller chunks)

or, for one of the other Wikimedia projects, enter

python manage.py -l Spanish -p commons extract

Valid project choices are: {commons|wikibooks|wikinews|wikiquote|wikisource|wikiversity|wiktionary}

Note: The extract process may need to be run twice: once to unzip the dump file, and once more to extract the data from it.


Sort Wikipedia dump file


WARNING: This process might take a few hours. The chunks must be sorted before being added to the MongoDB. Enter the following command:

python manage.py sort (for sorting the chunks as generated by the 'manage extract' step)

or, for one of the other Wikimedia projects, enter

python manage.py -l Spanish sort {commons|wikibooks|wikinews|wikiquote|wikisource|wikiversity|wiktionary}


Store Wikipedia dump file


WARNING: This process might take hours to days, depending on the configuration of your system. Now, we are ready to extract the required information from the Wikipedia dump file chunks and store it in the MongoDB. Enter the following command:

python manage.py store
python manage.py -l Spanish store

or, for one of the other Wikimedia projects, enter

python manage.py -l Spanish store {commons|wikibooks|wikinews|wikiquote|wikisource|wikiversity|wiktionary}

Transform dataset


WARNING: This process might take a couple of hours. Finally, the raw data needs to be transformed into useful variables. Issue the following command:

python manage.py transform
python manage.py -l Spanish transform

Create dataset


WARNING: This process might take a couple of hours to days, depending on the configuration of your computer. We are almost there: the data is in the database, and now we need to export it to a CSV file so we can import it into a statistical program such as R, Stata or SPSS.

Enter the following command:

python manage.py dataset 
python manage.py -l Spanish dataset

or, for one of the other Wikimedia projects, enter

python manage.py -l Spanish dataset {commons|wikibooks|wikinews|wikiquote|wikisource|wikiversity|wiktionary}

Everything in one shot


WARNING: This process might take a couple of days or even more than a week, depending on the configuration of your computer. If you don't feel like monitoring your computer and you just want to create a dataset from scratch, enter the following command:

python manage.py all language
python manage.py -l Spanish all
python manage.py -p {commons|wikibooks|wikinews|wikiquote|wikisource|wikiversity|wikitionary} all


Benchmarks

Benchmark German Wiki

Task          Configuration 1        Configuration 2
Download      1 minute 14 seconds
Extract       4-6 hours
Sort          ~30 minutes
Store         4-5 hours
Transform     2-3 hours
Total time    10-14 hours


Benchmark English Wiki

Task          Configuration 1        Configuration 2
Download      15 minutes
Extract       ~36 hours
Sort          10.5 hours
Store         21 hours
Transform     14.3 hours
Total time    3.4 days


Benchmark Hungarian Wiki

Task          Configuration 3
Download      1-2 minutes
Extract       24.5 minutes
Sort          1.5 minutes
Store         7-8 minutes
Transform     11 minutes
Total time    ~45 minutes


Configuration 2

Amazon Web Services Large EC2 Instance

  • Ubuntu 64-bit
  • 4 EC2 Compute Units (2 virtual cores)
  • 7.5 GB memory
  • 850 GB storage

Configuration 3

  • Windows 7 64-bit
  • Intel i7 CPU (8 virtual cores)
  • 6 GB memory
  • 1 TB storage
  • 100/100 Mbit/s Internet connection

See also
