Talk:Community health initiative/Archive/2019
Please do not post any new comments on this page. This is a discussion archive first created in 2019, although the comments contained were likely posted before and after this date. See current discussion.
How often do blocked users attempt to edit? We measured!
A few months back our team revamped the design of the mobile web "you are blocked" note (screenshot to the right). Despite it being a tiny little pop-up on a tiny little screen, it was more complicated to implement than we thought. We had to abandon formatting the block reason because so many reasons are templates (e.g. {{schoolblock}}), so we decided to measure how often those notices actually display to see if it mattered at all.
Short answer: enough people see the "you are blocked" message on mobile to warrant fixing block reasons, but it's not urgent.
I've compiled some data about how often the "you are blocked" messages appear to our users on desktop and mobile on Community health initiative/Blocking tools and improvements/Block notices. There are some graphs and a table of raw data as well as some synthesized findings. (Take a look at the Persian Wikipedia chart, it's mesmerizing! 🌈) Here are the main takeaways:
- Block notices appear very often on the largest Wikimedia projects, sometimes outnumbering actual edits. (6.2 million blocked edit attempts occurred on English Wikipedia over a 30-day period.)
- The desktop wikitext editor accounts for the vast majority of impressions by a wide margin. (98% on English Wikipedia, 89.5% on Spanish, and 98.7% on Russian.)
- The VisualEditor and mobile block notices occur less frequently, but still display to thousands of people every month.
We scratched this curiosity itch (and as usual have more questions than before!) but this lets us know that yes — blocks are stopping people. LOTS of people. — Trevor Bolliger, WMF Product Manager 🗨 00:17, 19 January 2019 (UTC)
Detox deprecated
The Detox tool, cited by the initiative, has been found to produce racist and homophobic results, and has been deleted. Conclusions based on its use are likely to be flawed. DuncanHill (talk) 01:43, 26 June 2019 (UTC)
- @DuncanHill: I can clarify -- nobody at the WMF is using the Detox tool. Our Research team collaborated with Jigsaw on training Detox in 2016-2017 and found some promising-looking initial results. In 2017, the Anti-Harassment Tools team tried using Detox to detect harassment on Wikipedia, and we found the same kinds of flaws that you have. The tool is inaccurate and doesn't take context into account, leading to false positives (flagging the word "gay" as aggressive even in a neutral or positive context) and false negatives (missing more nuanced uses of language, like sarcasm). As far as we know, the model hasn't really improved. I believe there's a team at Jigsaw that is still investigating how to use Detox to study conversations on Wikipedia, but nobody at the WMF is using Detox to identify harassers on Wikimedia projects. I'm glad that you brought it up; I just edited that page to remove the outdated passages that mention Detox. Other surveys, studies and reports are sufficient to establish that harassment is a serious problem on our platform. -- DannyH (WMF) (talk) 19:10, 29 June 2019 (UTC)