Controversial content/Brainstorming/personal private filters

Just brainstorming. More on the talk page.

Category-free filtering

This is based on some discussions on the Foundation mailing list, and is a slightly refined version of something I proposed there in 2011. As someone who supports both the idea of enabling people to choose not to see things they'd rather not see and the idea that the community should be self-governing, I had pulled back from this subject in 2010, as I wouldn't want to be associated with a filter imposed against the will of the community. However, there were postings on the mailing list in early 2011 calling for an image filter design that might be acceptable to the community, hence this proposal.


Objections

Aside from the big assumption that there are people who want to filter out certain images when they look at our sites, and that the sorts of images they wish to filter out will vary widely, my understanding is that the most common objections to an image filter include:

  1. Whilst almost no-one objects to individuals making decisions as to what they want to see, as soon as one person decides what others on "their" network or IP can see, you have crossed the line into enabling censorship. However, as Wikimedia accounts are free, a logged-in-only solution would still be a free solution available to all.
  2. A category-based system that used our existing categories would be problematic because our categories are based around the content of images, not their potential offensiveness.
  3. Classifying our existing and future images in terms of their potential offensiveness is a very big job. In the long term we may get artificial intelligence that can do it for us, but computerised screening is not yet up to speed. That leaves four potential groups. The uploaders: many of whom are no longer with us or don't understand what images may be offensive to people from other cultures. The wider volunteer community: many of whom don't want this in principle or don't understand what images may be offensive to people from other cultures. Paid staff: an expensive option, and one that some would consider an inappropriate use of funds raised for our educational purposes. Lastly, the people who use the filters themselves: this is the least contentious group to the editing community, and I suspect many could live with an opt-in system in which participants could choose to share their filter lists, provided that those filter lists had no effect on editors and readers who hadn't opted in.
  4. We care about quality, and we don't want anyone putting a low-quality, unworkable bolt-on onto our systems that tarnishes our work. Some proposals are to filter with overkill by putting anything in certain categories into the filter and accepting that a large proportion of the matches will be incorrect. Such proposals undermine the quality ethos of the project and would be unacceptable to a large proportion of editors. So if we do introduce a system, it needs to avoid both underkill and overkill.
  5. Censorship isn't part of our remit and it would be a misuse of funds to finance it. This is a difficult one; a key variable is the extent to which a filter diverges from our mission. Clearly, imposing a compulsory or opt-out filter based on one particular culture's prejudices would be at variance with our mission. Offering people a personal opt-in preference system would be different: it would be hard to argue that it conflicted with our mission, and if it succeeded in getting our information accepted in parts of the world that currently shun us, then arguably it would actually promote our mission.
  6. If we introduce a filter system where particular images are identified as contentious, it will start to influence editorial decisions, with people substituting less contentious images for ones that often get filtered. We can avoid this by keeping people's personal filters personal.
  7. We currently have a large backlog of uncategorised images on Commons. Dealing with that categorisation backlog, or even stabilising it, would be a major task. Reviewing those already catalogued to identify the potentially contentious ones would be a huge task, especially as we can't assume that all the images that others would be offended by are already in obvious categories to review. Some people have pointed out that we are almost there and that reviewing a tiny minority of images would identify the vast majority, perhaps >90%, of the contentious ones. This rather misses the point that if we release a 90% accurate system based on categorisation, then our categorisers will subsequently be held responsible for the remaining 10%. I'd be uncomfortable taking that on as part of a paid job with adequate resources to get it done; I'm certainly not doing so as an unpaid volunteer, knowing that we don't have the volunteers to succeed. We can avoid this problem by using a system that isn't based on our categories.
  8. A system that only worked if you created an account would be a system for our editors, not the rest of our readers. Currently people mostly create accounts if they are editors; on the English language Wikipedia they have to create an account if they want to start an article. Roughly half of our logged-in accounts have never edited, and we have no idea what proportion of them were created by people who tried to edit or who were attracted by non-editing features such as watchlists or the ability to email other editors. So it's true that most of our 35 million accounts were created by editors and that most of our more than 500 million readers don't have accounts. However, creating an account is free, and it is reasonable to assume that if you add more features for non-editors then more people will create accounts; in particular, if you offer people a free image filter with their free account, then people who really want an image filter will sign up for a free account.

I think that this proposal meets all these objections, except arguably the issue of whether it is appropriate to have WMF money spent on it. But if this succeeds, there would always be the possibility that more money would be raised from filter users than the filter costs to implement.

Design

A database running on a server separate from any existing Wikimedia wiki, storing image URLs from Commons and other wikis such as the various language versions of Wikinews, Wikipedia, Wikivoyage and Wikisource. For the sake of efficiency it would probably only hold records for images that at least one person had chosen to filter, or to see despite the filter.

The database would use various comparison and ranking techniques to identify which editors had similar filter preferences and which were dissimilar.
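
As a rough illustration of the sort of storage this implies, here is a minimal sketch assuming a simple relational layout with two hypothetical tables: one row per (filterer, image) decision, and a cached similarity score per pair of filterers. None of the names here are a settled design.

```python
# Minimal sketch of the filter store; table and column names are illustrative only.
import sqlite3

conn = sqlite3.connect("image_filter.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS decisions (
    filterer_id INTEGER NOT NULL,   -- opted-in account (pseudonymous id)
    image_url   TEXT    NOT NULL,   -- e.g. a Commons file URL
    list_name   TEXT    NOT NULL DEFAULT 'default',  -- named personal filter list
    hidden      INTEGER NOT NULL,   -- 1 = "don't show me that again", 0 = "that image is OK"
    PRIMARY KEY (filterer_id, image_url, list_name)
);

-- Only images that at least one filterer has acted on ever appear in decisions,
-- which keeps the store far smaller than Commons itself.

CREATE TABLE IF NOT EXISTS similarity (
    filterer_a INTEGER NOT NULL,
    filterer_b INTEGER NOT NULL,
    score      REAL    NOT NULL,    -- recomputed periodically in the background
    PRIMARY KEY (filterer_a, filterer_b)
);
""")
conn.commit()
```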

Setting filters

Any logged-in Wikimedian would be able to opt in to the filter, which would have the following options:

[ ] Hide all images (recommended for people with slow connections to the Internet)
[ ] Only hide images that I've chosen not to see again
[ ] Hide images that I've chosen to hide and also hide images that fellow filterers with similar preferences have chosen to hide.
[ ] Hide images that I've chosen to hide and also any image that any of my fellow filterers has chosen to hide.

If they've chosen option 3 or 4, they would then get these further options:

We have new images coming in all of the time, most of which are inoffensive to almost everyone. Do you want to:

[ ] Hide all unfiltered new images
[ ] Hide all new images unless they have been OK'd by a fellow filterer
[ ] Hide all new images unless they have been OK'd by a fellow filterer with similar preferences to me
[ ] Show me new images that the filter system can't yet make a judgment call on
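
The four modes and the follow-up question amount to two per-account settings. A minimal sketch, with purely hypothetical names, might look like this:

```python
# Hypothetical representation of the filter options above; names are illustrative.
from enum import Enum

class FilterMode(Enum):
    HIDE_ALL = 1           # hide all images (slow connections)
    OWN_LIST_ONLY = 2      # only hide images I've chosen not to see again
    SIMILAR_FILTERERS = 3  # also hide what filterers with similar preferences hide
    ANY_FILTERER = 4       # also hide anything any fellow filterer hides

class NewImagePolicy(Enum):
    HIDE_ALL_NEW = 1       # hide all unfiltered new images
    NEED_ANY_OK = 2        # hide new images until some fellow filterer has OK'd them
    NEED_SIMILAR_OK = 3    # hide new images until a similar filterer has OK'd them
    SHOW_UNJUDGED = 4      # show new images the system can't yet judge

def needs_new_image_policy(mode: FilterMode) -> bool:
    """The follow-up question only applies to the two collaborative modes (options 3 and 4)."""
    return mode in (FilterMode.SIMILAR_FILTERERS, FilterMode.ANY_FILTERER)
```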

Applying filters

When an image is hidden the filterer will only see the caption, the alt text and an unhide button.

The unhide button will say one of the following:

  1. Click to view image that you have previously chosen to hide
  2. Click to view image that no fellow filterer has previously checked
  3. Click to view image that other filterers have previously chosen to hide (we estimate from your filter choices there is an x% chance that you would find this image offensive)
Alongside the unhide button there will also be a button: No thanks, this is the sort of image I don't want to see.

If the filterer clicks on the unhide button the picture will be displayed along with little radio buttons that allow the filterer to decide:

Don't show me that image again. or That image is OK!

Clicking No thanks, Don't show me that image again or That image is OK! will all result in updates to that Wikimedian's filter preferences.
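
A sketch of how those three buttons might be recorded, reusing the hypothetical decisions table from the Design sketch above; the function name is an assumption, not part of the proposal.

```python
# "No thanks" and "Don't show me that image again" both record a hide;
# "That image is OK!" records an explicit show.
import sqlite3

def record_choice(conn: sqlite3.Connection, filterer_id: int, image_url: str,
                  hidden: bool, list_name: str = "default") -> None:
    """Store the filterer's latest decision for one image, overwriting any earlier one."""
    conn.execute(
        "INSERT OR REPLACE INTO decisions (filterer_id, image_url, list_name, hidden) "
        "VALUES (?, ?, ?, ?)",
        (filterer_id, image_url, list_name, int(hidden)),
    )
    conn.commit()
```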

Advanced options

While the default is for each account that opts in to the filter to have one set of filter preferences, the advanced options need to allow the filterer to have multiple preference lists, which they can name and individually enable or disable. This would enable someone to have different preferences at work and at home. It would also give someone a much better chance of matching with similar editors if, say, they had both arachnophobia and vertigo and kept separate preference lists for each. These lists would have private, user-generated names visible only to that user.

If someone chooses to have multiple lists, then when they tag an image as offensive they will also get a drop-down list of their filters. So if they have filter lists called yuk!, spiders and vertigo, they can choose which list to add further images to.
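
One way to model this, sketched below with hypothetical names, is a profile holding several named lists that can be enabled or disabled independently; an image stays hidden if any enabled list hides it.

```python
# Hypothetical model of per-account named lists such as "yuk!", "spiders" and "vertigo".
from dataclasses import dataclass, field

@dataclass
class FilterList:
    name: str                  # private, user-chosen label, visible only to the account
    enabled: bool = True       # e.g. switch the "work" list off at home
    hidden_images: set = field(default_factory=set)
    approved_images: set = field(default_factory=set)

@dataclass
class FiltererProfile:
    lists: dict = field(default_factory=dict)   # name -> FilterList

    def is_hidden(self, image_url: str) -> bool:
        """An image is hidden if any *enabled* list hides it."""
        return any(image_url in fl.hidden_images
                   for fl in self.lists.values() if fl.enabled)

    def hide(self, image_url: str, list_name: str = "default") -> None:
        """Add an image to the chosen list, creating it if needed (the drop-down choice)."""
        self.lists.setdefault(list_name, FilterList(name=list_name)).hidden_images.add(image_url)
```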

How would it work?

The complicated option is the one which also hides images "that other people with similar preferences have chosen to hide". This will require a bit of processing in the background, but in essence it would score your support for other people's filter lists by looking at the images you've chosen to hide and the images you've chosen to see, and comparing that pattern with your fellow filterers'. So a filterer who chose to filter images of spiders would have a very low overlap with a filterer who chose to filter out Manchester United players. But a filterer who chose to filter out Leeds United players would have a small overlap with the Manchester United anti-fan, as some players change clubs and some images may depict players of more than one team, unless of course they both had an aversion to grey skies and used the filter to screen out such shots. If practical, you could add a sensitivity bar to the user options that would allow filterers to alter the range of similarity.
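
A minimal sketch of one way the background scoring could work, assuming each filterer's history is reduced to a set of images they hid and a set they OK'd, and that similarity is simply agreement over the images both have rated. Both the measure and the names are assumptions, not a settled design.

```python
# Score how closely two filterers agree, from -1 (always disagree) to +1 (always agree).
def similarity(hidden_a: set, shown_a: set, hidden_b: set, shown_b: set) -> float:
    common = (hidden_a | shown_a) & (hidden_b | shown_b)   # images both have rated
    if not common:
        return 0.0                 # no overlap yet, so no basis for a judgement
    agree = len(hidden_a & hidden_b) + len(shown_a & shown_b)
    return (2 * agree - len(common)) / len(common)

# The spider-filterer and the Manchester United anti-fan rate almost no images in
# common, so their score stays near zero; two football anti-fans overlap on photos
# of transferred players and score a little higher.
print(similarity({"spider1.jpg", "spider2.jpg"}, {"pig.jpg"},
                 {"player1.jpg"}, {"pig.jpg"}))   # 1.0 on a single shared image
```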

Complaints

Provided the software works, most of the feedback is likely to be along the lines of:

  1. Why would anyone find image x offensive?
    A: Often we don't know. You might consider using the setting that only filters images filtered by fellow filterers with similar preferences, as some people may have very different filter choices from yours.
  2. Why did no-one find image y offensive?
    A: Often we don't know. You might consider using the setting that hides new images until they have been OK'd by a fellow filterer with similar preferences, as some people may have very different filter choices from yours.
  3. How is the filter going to learn my preferences without showing me stuff that I find offensive?
    A: It looks both at what you choose to see and what you choose not to see. So if you don't find pigs offensive and you click to see an image of one, that will influence your preferences. External organisations can also create lists of files which they suggest that those who share their views import into their filters; if someone else has published a filter list that reflects your views, you can import it into your personal filter (a sketch of such an import follows below).
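
A sketch of what importing a published filter list could look like, assuming (purely for illustration) that the published list is a plain text file with one image URL per line:

```python
# Hypothetical importer: add every URL in a published list to one of your own hide lists.
def import_published_list(path: str, personal_hide_list: set) -> int:
    """Return how many URLs were newly added to the personal list."""
    before = len(personal_hide_list)
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            url = line.strip()
            if url and not url.startswith("#"):   # skip blank lines and comments
                personal_hide_list.add(url)
    return len(personal_hide_list) - before
```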

Privacy

Whilst the data on which images are filtered or not will be pooled and anonymously shared with other filterers, it will not be shared beyond the sites participating in the filter pool, or made available for other purposes, unless an editor opts in to that by publishing a list. Each individual's choices, including their choice of whether or not to opt in to the feature, are confidential to that individual. Filterers are free to disclose their use of the feature, discuss their experiences and even publish a filter list, but there is no obligation to do so. Statistical results, including numbers of filterers, will be made available, but lists of images with an offensiveness quotient or frequency of being blocked will not be. The system would calculate which filterers had similar or dissimilar filtering preferences, but that information would not be divulged other than on an anonymised basis.

Hardware capacity

One problem in scoping the hardware is that it is difficult to estimate how many people would opt in to this. At one extreme it might just be a small percentage of our currently active editors; at another it could be a significant proportion of our hundreds of millions of readers. At that extreme it could prompt tens of millions of readers to create accounts, and creating an account probably makes you more likely to donate time or money to the project.

Costs

Aside from the development costs, the hardware cost would depend on the amount of use, and that is something of an unknown. But if it did get more than a few tens of thousands of users, then it might be worthwhile for the fundraising team to test a slightly different appeal to filterers.

Alternatively, if the community isn't prepared to accept this as a legitimate use of donors' money, then it would be possible to set this up as a separately funded organisation. That would be more complex and could only happen if there were donors out there willing to pay for it. But it is possible.

Artificial Intelligence

In the long run it should be possible to augment this with AI technology: some sort of pattern-recognising software that can tell the difference between humans eating bananas and a certain sex act. We aren't there yet, or at least not with freely licensed software, but we will be one day, and when we are, this type of filter will work much better and will work on the images that no human has yet seen.