Talk:Wikimedia Foundation Human Rights Impact Assessment


Status of priority recommendations

Could you do a run-through of the priority recommendations and outline to what extent they are implemented and/or to what extent you are planning to implement them?

For reference, the priority recommendations were:

Strategies for the Foundation
  1. Develop a standalone Human Rights Policy that commits to respecting all internationally recognized human rights by referencing the International Bill of Human Rights.
  2. Conduct ongoing human rights due diligence to continually assess risks to rightsholders. A Foundation-level HRIA should be conducted every three years or whenever significant changes could have an effect on human rights.
  3. Develop rights-compatible channels to address human rights concerns, including private channels, and ensure alignment with the UNGPs’ effectiveness criteria.
Harmful Content
  1. Develop an audit protocol to assess projects that are at high risk of capture or government-sponsored disinformation.
  2. Develop a Content Oversight Committee (COC) to review content with a focus on bias and have the ability to make binding editorial decisions in line with ICCPR 19.
  3. Continue efforts outlined in the Knowledge Integrity white paper to develop: a) a machine-readable representation of knowledge that exists within Wikimedia projects along with its provenance; b) models to assess the quality of information provenance; and c) models to assess content neutrality and bias. Ensure that all AI/ML tools are designed to detect content and action that would be considered illegal under international human rights law, and that the response aligns with the three-part ICCPR test requiring that any restriction on the right to free expression be legal, proportional, and necessary.
  4. Provide access to a geotargeted suicide prevention hotline at the top of the articles on Suicide Methods.
Harassment
  1. Develop and deploy training programs for admins and volunteers with advanced rights on detecting and responding to harassment claims.
  2. Commission a “social norms marketing” research project to assess what type of messaging is likely to reduce and prevent harassing comments and actions.
  3. Explore opportunities to rate the toxicity of users, helping to identify repeat offenders and patterns of harassment. Consider awards for projects with the lowest toxicity levels.
  4. Consider developing admin metrics focused on enforcing civility and applying the forthcoming Universal Code of Conduct (UCoC).
  5. Ensure that the (UCoC) and its accompanying governance mechanism is reviewed by human rights experts, including experts on free expression and incitement to violence.
Government surveillance and censorship
  1. Continue efforts underway as part of the IP-masking project to further protect users from public identification.
  2. Develop awareness-raising tools and programs for all volunteers to understand and mitigate risks of engagement. Tools should be made publicly available and should be translated into languages spoken by volunteers in higher risk regions.[1]
Risks to child rights
  1. Conduct a child rights impact assessment of Wikimedia projects, including conducting interviews and focus groups with child contributors across the globe.
  2. Create child safeguarding tools, including child-friendly guidance on privacy settings, data collection, reporting of grooming attempts, and the forthcoming UCoC, as well as a “Child’s Guide to Editing Wikimedia Project” to help advance the right of children to be civically engaged.
Limitations on knowledge equity
  1. Support retention by developing peer support and mentoring for under-represented contributors.
  2. Engage stakeholders on how the “notability” requirement may be shifted to be more inclusive of oral histories, and to identify what definitions resonate with under-represented communities.
  3. Adapt Wikimedia projects to be more accessible via mobile phones.
Andreas JN466 15:31, 13 July 2022 (UTC)
Hello @Jayen466,
Thank you so much for your message. Please see below for a quick rundown of the priority recommendations and their status. We look forward to your thoughts!
Strategies for the Foundation
  1. Develop a standalone Human Rights Policy that commits to respecting all internationally recognized human rights by referencing the International Bill of Human Rights. Status: Complete
  2. Conduct ongoing human rights due diligence to continually assess risks to rightsholders. A Foundation-level HRIA should be conducted every three years or whenever significant changes could have an effect on human rights. Status: Ongoing
  3. Develop rights-compatible channels to address human rights concerns, including private channels, and ensure alignment with the UNGPs’ effectiveness criteria. Status: Complete
Harmful Content
  1. Develop an audit protocol to assess projects that are at high risk of capture or government-sponsored disinformation. Status: Ongoing
  2. Develop a Content Oversight Committee (COC) to review content with a focus on bias and have the ability to make binding editorial decisions in line with ICCPR 19. Status: No action, community input needed
  3. Continue efforts outlined in the Knowledge Integrity white paper to develop: a) a machine-readable representation of knowledge that exists within Wikimedia projects along with its provenance; b) models to assess the quality of information provenance; and c) models to assess content neutrality and bias. Ensure that all AI/ML tools are designed to detect content and action that would be considered illegal under international human rights law, and that the response aligns with the three-part ICCPR test requiring that any restriction on the right to free expression be legal, proportional, and necessary. Status: Ongoing
  4. Develop rights-compatible channels to address human rights concerns, including private channels, and ensure alignment with the UNGPs’ effectiveness criteria. Status: Complete
Harassment
  1. Develop and deploy training programs for admins and volunteers with advanced rights on detecting and responding to harassment claims. Status: Ongoing
  2. Commission a “social norms marketing” research project to assess what type of messaging is likely to reduce and prevent harassing comments and actions. Status: No action, community input needed
  3. Explore opportunities to rate the toxicity of users, helping to identify repeat offenders and patterns of harassment. Consider awards for projects with the lowest toxicity levels. Status: No action, community input needed
  4. Consider developing admin metrics focused on enforcing civility and applying the forthcoming Universal Code of Conduct (UCoC). Status: No action, community input needed
  5. Ensure that the (UCoC) and its accompanying governance mechanism is reviewed by human rights experts, including experts on free expression and incitement to violence. Status: Ongoing
Government surveillance and censorship
  1. Continue efforts underway as part of the IP-masking project to further protect users from public identification. Status: Ongoing
  2. Develop awareness-raising tools and programs for all volunteers to understand and mitigate risks of engagement. Tools should be made publicly available and should be translated into languages spoken by volunteers in higher risk regions. Status: Ongoing
Risks to child rights
  1. Conduct a child rights impact assessment of Wikimedia projects, including conducting interviews and focus groups with child contributors across the globe. Status: Ongoing
  2. Create child safeguarding tools, including child-friendly guidance on privacy settings, data collection, reporting of grooming attempts, and the forthcoming UCoC, as well as a “Child’s Guide to Editing Wikimedia Project” to help advance the right of children to be civically engaged. Status: No action, pending full child rights impact assessment
Limitations on knowledge equity
  1. Support retention by developing peer support and mentoring for under-represented contributors. Status: Ongoing
  2. Engage stakeholders on how the “notability” requirement may be shifted to be more inclusive of oral histories, and to identify what definitions resonate with under-represented communities. Status: No action, community input needed
  3. Adapt Wikimedia projects to be more accessible via mobile phones. Status: Ongoing
RGaines (WMF) (talk) 18:37, 14 July 2022 (UTC)
@RGaines (WMF): Re: several items with "Status: No action, community input needed" - where have the community been informed of this need, and has input been solicited? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 18:58, 14 July 2022 (UTC)
Hi @Pigsonthewing - Thanks for this question. Publishing this human rights impact assessment is the first time the community is learning about these recommendations - precisely so we can hear your thoughts. Some recommendations may not be feasible or right for our projects and communities, so this will require long-term conversations among all the stakeholders to help us understand which ones are right to move forward with. I hope this is helpful! RGaines (WMF) (talk) 19:10, 14 July 2022 (UTC)
@RGaines (WMF): Then it would appear that the correct status is not "No action, community input needed", but "No action, WMF action needed". Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 19:22, 14 July 2022 (UTC)
@RGaines (WMF): Thank you very much, that's great! There was one discrepancy between your list and mine, item 4 under Harmful Content (two document versions?). On page 11 of the HRIA, this reads,
  • "Provide access to a geotargeted suicide prevention hotline at the top of the articles on Suicide Methods."
In your reply above, item 4 under Harmful Content reads,
  • "Develop rights-compatible channels to address human rights concerns, including private channels, and ensure alignment with the UNGPs’ effectiveness criteria."
Could you comment on the status of the geotargeted suicide prevention hotline proposal? Best, Andreas JN466 19:49, 14 July 2022 (UTC)
Thanks for flagging this - that recommendation must have been lost in my copy/pasting while working on my response. The status of this work is "Ongoing"; however, we don't have any public information available at the moment. RGaines (WMF) (talk) 20:37, 14 July 2022 (UTC)
I feel like the geotargeted suicide numbers are something the local Wikipedia communities could just do (admittedly the geotargeting aspect might need some dev work, albeit you could also just make a gadget. Maybe even a targeted CentralNotice. Regardless, dev work seems minimal). In many ways I thought it was the most interesting suggestion, as it seems reasonable and easy to implement, with no big counter-argument or downside. Like, what's the worst someone can say about it? We're violating neutrality by implying suicide is bad? It's a slippery slope, as lots of content could be dangerous, and we don't want to get into the business of warning people about things on Wikipedia? Neither sounds particularly compelling, but I guess I'm not really a Wikipedian, so I don't know. Anyway, I hope this is a recommendation that local communities put into action. Bawolff (talk) 09:39, 15 July 2022 (UTC)
@Bawolff Thanks for your thoughts on this! Indeed, it could be a very useful step in the right direction. Our team that works on harmful content and privacy is working on this from the Foundation's perspective, so we may have some updates on this later in the year. RGaines (WMF) (talk) 13:44, 15 July 2022 (UTC)
Geotargeted information comes with collecting readers' location data. Ailura (talk) 09:45, 17 July 2022 (UTC)
It is a little late for that. The geo-information is already present to support CentralNotice targeting. Bawolff (talk) 21:52, 17 July 2022 (UTC)
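As a rough illustration of how little dev work this might involve, here is a minimal, hypothetical sketch in TypeScript (not an existing gadget and not a description of any actual Wikimedia code): it assumes a client-side Geo object exposing a country code, of the kind CentralNotice geotargeting already relies on, a small hotline lookup table, and the usual mw-content-text article container. All of these names and numbers are illustrative assumptions.

  // Hypothetical sketch only, not an existing gadget. Assumes a client-side
  // "Geo" object exposing the reader's country code (as CentralNotice
  // geotargeting already relies on), so no new location data would be collected.
  declare const Geo: { country?: string } | undefined;

  // Example mapping of country codes to hotline text; a real deployment would
  // need a vetted, community-maintained list.
  const HOTLINES: Record<string, string> = {
    US: "988 Suicide & Crisis Lifeline: call or text 988",
    GB: "Samaritans: 116 123",
    DE: "Telefonseelsorge: 0800 111 0 111",
  };

  function showHotlineNotice(): void {
    const country = typeof Geo !== "undefined" ? Geo?.country : undefined;
    const hotline = country ? HOTLINES[country] : undefined;
    if (!hotline) {
      return; // no geotargeted number known for this reader's region
    }
    const notice = document.createElement("div");
    notice.className = "suicide-hotline-notice"; // styling left to the gadget
    notice.textContent =
      "If you are having thoughts of suicide, help is available. " + hotline;
    // Assumes the article body container used on Wikimedia skins.
    document.getElementById("mw-content-text")?.prepend(notice);
  }

  showHotlineNotice();

In a sketch like this the hard part would be editorial rather than technical: deciding which pages show the notice and keeping the per-region numbers accurate and vetted.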

Reflections

I was a bit disappointed with this report. I have two major concerns.

First of all, the report failed to talk about any of the human rights risks of the proposed interventions. As much as it would be nice if there were always a clear answer, sometimes there isn't, and different solutions can be in tension with each other. I would expect a report on risks to talk about the risks of adverse effects of potential interventions. To give an example, the report suggests "Consider developing admin metrics focused on enforcing civility and applying the forthcoming Universal Code of Conduct (UCoC)" - there's a very obvious risk if this becomes quota-based policing, which might unfairly affect unpopular groups. Another example, "Explore opportunities to rate the toxicity of users...", has a very real risk of dehumanizing people. Boiling people down to internet popularity points has such obvious risks that it is a common subject of dystopian TV shows (e.g. Black Mirror: "Nosedive", Community: "App Development and Condiments", The Orville: "Majority Rule"). These may very well be things we should do, but we should go into them with eyes open, with an honest appraisal of how they could go wrong.

Second - I can't help but feel there is a very pro-WMF spin to this report. The report was based on interviewing only a few WMF staff members, and perhaps it shows. If we truly want to take the principles of human rights to heart, we need to look at the places that are uncomfortable and talk to people who aren't entrenched in the power structure of the status quo. In particular, the recommendations in this report can be split into three groups: stuff that aligns very well with existing WMF efforts and political goals (e.g. continue IP masking, develop peer support for underrepresented groups), stuff that is generally neutral (e.g. geotargeted suicide hotlines), and stuff that is neutral to WMF political goals but unlikely to happen due to movement politics (e.g. parental filters). Nothing in the report is really about issues that go against WMF's political status quo. Of course, maybe there simply aren't any such issues. However, I can't help but feel that the type of intra-movement drama that WMF finds itself embroiled in often involves a principle of human rights, or at least something that can be seen as clearly in the spirit of human rights. For example, recently there was a dispute over rebranding. A key part of the dispute was a general feeling by many community members that the consultation was done in bad faith, hence hindering users' rights to effectively engage with their "government" (UDHR 21). Since this report was written in 2020, a more timely example might be fram-gate and arguments about lack of due process that are analogous to UDHR 11 and ICCPR 14. Now there is a reasonable argument that this reading is BS - after all, WMF is not really a government, and these rights are meant to be applied as a protection against the state, not an internet company. However, it is telling that so many disputes seem to involve arguments rooted in the principles and spirit of human rights, even if technically you could argue they are not within the "letter" of human rights. Given this is a report on risks, I would expect some discussion of these thorny issues, which, even if in the final analysis they are determined not to be human rights "issues" per se, are clearly in the neighborhood and potentially a "risk". Bawolff (talk) 07:42, 14 July 2022 (UTC)

Great comments. If you're not a Wikipedian then... we're all at sea ;) I don't get the impression of "WMF spin" so much as "focus on an arbitrary subset of topics" (likely drawn, as you say, from discussions with current staff).
I might break down 'human rights' into:
How we protect the rights of editors and community members;
How we protect the rights of readers/reusers of the projects (in part by preventing coercive manipulation or restriction of content on the projects);
Where the projects are well-situated to advance human rights for the world
For the latter, this certainly misses the growing risk of having one's culture bombed into oblivion, or having one's internet access destroyed, not just censored. Or the obvious (c)-related harms of the lack of freedom of panorama, the growing enclosure of the public domain, &c. These are also things other interviewees might have turned up more prominently. –SJ talk  19:38, 15 July 2022 (UTC)
Hi @Bawolff - Thanks for your thoughts on this report. We appreciate you sharing these concerns. To your first point, some recommendations may not be feasible or right for our projects and communities, so this will require long-term conversations among all the stakeholders to help us understand which ones are right to move forward with. This includes discussions on possible unintended consequences of new solutions. On major issues, like developing new tools, the Foundation is committed to carrying out human rights due diligence to better understand the pros, cons, and possible unintended consequences. We’re currently working on two additional human rights impact assessments to this end.
On your second point, the report does note some shortcomings of the status quo, and we don’t intend to sugarcoat anything. By publishing this report and inviting comment, we are seeking to have those uncomfortable conversations you refer to and to engage in genuine, open dialogue about the challenges our movement faces and how to overcome them. We'll be hosting a session on human rights issues, including this report, at Wikimania (more to come on that soon!). We'll also be discussing this report at our upcoming Community Conversation Hours on 28 July at 12:00 and 17:00 UTC, and have started a thread on the Movement Strategy Forum. RGaines (WMF) (talk) 08:23, 19 July 2022 (UTC)

Content Oversight Committee

I am pretty sure the second of these two recommendations

  1. Develop an audit protocol to assess projects that are at high risk of capture or government-sponsored disinformation.
  2. Develop a Content Oversight Committee (COC) to review content with a focus on bias and have the ability to make binding editorial decisions in line with ICCPR 19.

would go down like a lead balloon with the community; moreover, this hands-on, "boots on the ground" approach would be likely to cement the perception abroad of Wikipedia as a mouthpiece of the US State Department.

ICCPR 19 also presents the difficulty that it allows exceptions based on "respect of the rights or reputations of others" as well as "protection of national security or of public order"; many autocratic regimes give precisely these reasons for classifying government criticism as an offence.

What I did suggest a few years ago (2015) is a Wikipedia Freedom Index. What I envisaged then was something similar to a press freedom index, but for Wikipedias rather than countries. This would involve reports (much like the one done on the Croatian Wikipedia last year) by international (UN?) human rights experts to assess the degree of political freedom and (self-)censorship in various Wikipedia language versions. These reports could then be published in bulk at regular intervals, with publication announced via a banner notice so readers can view and compare the status of "their" Wikipedia against others.

This would lead to media reporting and public debate and exercise "soft" pressure on the projects/regimes doing badly in this respect. Andreas JN466 08:46, 15 July 2022 (UTC)

Hi @Jayen466 - thank you for your thoughts here! Indeed, some recommendations in this report may not be feasible or right for our projects and communities, so this will require long-term conversations among all the stakeholders to figure out which ones could work for various projects. We welcome any additional feedback from Wikimedians out there, as well! Also, thanks for sharing the Wikipedia Freedom Index proposal! That is not something I had seen before, and it is certainly an interesting idea. As we continue our internal discussions over the next several months, I'll be sure to share this concept with others. RGaines (WMF) (talk) 14:10, 15 July 2022 (UTC)

General concerns and specific concerns

Hi @RGaines (WMF), thank you in advance for your updating work and for reading the below. I've read both the updated recommendations and the full report (which was, of course, done by an outside firm prior to bringing the skills in-house permanently).

The primary general concern is that, while I'm particularly glad for you to note that "some recommendations in this report may not be feasible or right for our projects and communities, so this will require long-term conversations among all the stakeholders to figure out which ones could work for various projects", this seems to have a logical failing. Rather than going to talk to Community stakeholders now, after proposing the recommendations, it would have been more accepting of the Community to discuss them before the recommendations were created.

Ultimately, in human terms, it's always easier not to form a non-viable or negative recommendation than to have to remove one. Additionally, in most areas the conduct specialists that underlie these recommendations are various specific community editors. For example, while CSE is, thankfully, handled by T&S, child editing and its consequences are heavily handled by Oversighters (suppressors).

Regarding specifics:

Harmful Content (2): even the suggestion of this is going to cause huge blowback. Content is viewed as a purely Community activity, with WMF involvement limited to specific court orders and the like. Additionally, as an inherent part of "anyone can edit", we don't have editorial boards, because that is counter to the consensus methodology of Wikimedia projects. I am personally inclined to think that it was included because the outside firm did a pure desk review and only interviewed staff members, without any community engagement.

Harassment (3): "toxicity" grading is an immensely problematic aspect, and immensely concerning anywhere with the nuance we have. It reduces a reputation to a numerical value, and it raises any number of issues: who defines toxicity? Is every judgement of toxicity going to be appealable? Is every mark up or down of toxicity going to have to be sourced/backed by reasoning? Additionally, the concern that the Foundation would start grading projects is, to me, taking on an authority it does not possess.

Limitations on knowledge equity (2): the first thought that jumps out here is that this is verifiability, not notability. Oral history isn't accepted as a suitable source at the lowest level (verifiability), rather than being accepted as a source and then just excluded when it comes to being the basis of a whole article. That aside, notability is always being discussed, so it's not an "even bringing it to discussion is unwise" topic. However, I would stress that notability is a highly local area - it will need a lot of specific discussion, community by community, if you opt to go ahead with it.

For the sake of clarity, there are others that I particularly like and others on which I can't form an opinion until I see them raised with the communities, but I didn't want to review every recommendation right now for the sake of our mutual sanity. My thanks for reading, Nosebagbear (talk) 02:37, 1 August 2022 (UTC)

Hi @Nosebagbear - thank you for your comments! We do appreciate you reading through this report and sharing your thoughts here on some specific recommendations. Ultimately, mitigating human rights risks on our projects is a long-term endeavor that will require a lot of work with volunteers like yourself, so we see this as a starting point. By providing your thoughts now, you're giving us great insight as to how we can chart a path forward in a thoughtful way. A consistent theme we're hearing is that involving volunteers early on in a human rights impact assessment is key, which is something we're currently working to do in other human rights impact assessments we have ongoing. While we cannot get every volunteer involved in every assessment, we've been working to identify volunteers with the right backgrounds and experience to make sure we get high-impact input early on.
Again, we appreciate your feedback on this HRIA. We hope you and other volunteers will join us for more brainstorming on a path forward during our upcoming session at Wikimania at 17:45 UTC on 14 August! RGaines (WMF) (talk) 21:55, 4 August 2022 (UTC)