Talk:Article validation


Headline text

I have a similar concept at http://en.wikipedia.org/wiki/User:Sam_Spade/Policy_Proposals#editable_page_ratings

Philip Marlowe 23:29, 30 Aug 2004 (UTC)

Validated Page

If this is implemented, I still hope it is not the default, so as to encourage edits. There should, however, be a green button with "view validated" or another indicator that could serve as a link to a stable version.

Regardless, I think we should develop the idea further and perhaps start doing this when the English Wikipedia has 500,000 articles. --Exigentsky

Let ordinary users approve

Let ordinary users approve articles first. Some people might abuse the system, but we can also have approved approvers and unapproved approvers.

-Hemanshu 20:06, 14 Dec 2003 (UTC)

And should we not define "ordinary" users as users who have created an account, rather than ones that are only logged by their IP address? Most legitimate users have accounts, and most perps use their anonymous status to their advantage.

Genuine experts may not be interested

Basically, I am not sure that we can generate enough interest, yet, on the part of "genuine experts" to act as reviewers for Wikipedia. That is my one big misgiving about this whole project. What do you think? --LMS

Dumb question: why do we need reviewers? So far, quality control seems to be a potential problem that as of yet shows no sign of turning into an actual problem.


See "advantages," above. :-) --LMS


Approvals and revisions

This is not supposed to freeze the article; rather, what is approved is a particular revision of the article. Will the approval apply to revision n of the article, which the viewer of version n+m can check? --AN


Yes, and yes, or that's my notion of the thing. --LMS


Will any experts be interested?

I think this could be useful, but I have some vague misgivings about how well it will work in practice. Will we be able to get enough reviewers who are actively involved? Who in the world is going to come up with the reviewer validation criteria? The problem here is that some articles are about SF authors, others are cookbook-type recipes, and what makes an expert cook does not make an expert on Jamaican Jerk Chicken...

Beyond the logistical questions, I'm a bit worried that this may have some effects on wikipedia productivity. I'm sure one of the reasons that wikipedia thrives is just because it is easy to use. But I also think that there are delicate aspects to the way the community works and is structured which are just as important. If good people with lots of real knowledge feel like they are second-class citizens, they will feel less motivated to work on the project. I'm not entirely certain that creating an official hierarchy will have no adverse effects. On the other hand, I'm not certain that it will have adverse effects either... MRC


Related to this is the idea that if I write an article on Jamaican Jerk Chicken, and do a thorough web search which supports what I write, why can't I be considered an expert for the purposes of review? After all, I may have a more open mind than a lot of cooks out there.


To quote Lee Daniel Crocker from another page on Wikipedia, "Authority is nothing but a useful shortcut used in the world of humans because we haven't had the luxury of technology like this that makes expertise less relevant. What matters is the argument, not the arguers, and this technology supports--indeed enforces--that. Facts are facts, no matter who writes about them."

I know too much intellectual history to trust the experts much. Expertise quite often has more to do with trendiness than with knowledge and understanding. There are examples in almost any field - this generation's experts scoff at the silly ideas of a previous generation, while destined to be scoffed at themselves by a future generation.

- Tim


Replies to the above:

Will we be able to get enough reviewers who are actively involved?

That's an excellent question. I just don't know.

Who in the world is going to come up with the reviewer validation criteria?

That obviously would be a matter of some deliberation.

If good people with lots of real knowledge feel like they are second-class citizens, they will feel less motivated to work on the project. I'm not entirely certain that creating an official hierarchy will have no adverse effects.

I agree that this is a very, very legitimate concern, and I think we probably shouldn't take any steps until we're pretty sure that the project wouldn't suffer in this way.

Authority is nothing but a useful shortcut used in the world of humans because we haven't had the luxury of technology like this that makes expertise less relevant. What matters is the argument, not the arguers, and this technology supports--indeed enforces--that. ... I know too much intellectual history to trust the experts much. Expertise quite often has more to do with trendiness than with knowledge and understanding.

I think there is some merit to these claims. But I'm wondering what this has to do with the proposal. Is the idea that we cannot trust authorities, or experts, to reliably state what constitutes a clear, accurate statement of human knowledge on the subjects within their expertise? Or is it, perhaps, that fine articles would be given a thumbs-down because they do not toe the party line sufficiently, whatever it is? Well, I don't know about that. Anyway, I'm not sure what the point is here.

Gotta go, the dog is taking me for a walk. --LMS


Another possible problem is that even the experts can be wrong. How can we verify accuracy, then? Even if the Wikipedia were internally consistent, it could still be wrong. Not only that, but an article can change 30 seconds after it has been reviewed. For starters, we'll have to introduce another concept from programming, the code freeze. What we can do is analyze the change frequency on articles (via a program of some kind), and when the changes in an article stabilize so that changes are minor and infrequent, we copy the article, add all the named authors as authors, date it, and make it a static page on the wikipedia. Then you'll have two articles: one is the latest "stable" revision, and the other is open to flux (the rest are archived, available by request or some other equivalent).

The tough part is determining "accuracy". We could go democratically and add a voting system, but that has problems, since the majority can be wrong about as easily as an individual. Any verification system either requires money to hire people to check on authors' claims of expertise, or the creation of an elite class of authors. The alternative is to foster the sense of community and work on the trust a wikipedian can earn from fellow wikipedians, but that opens the door to any mistakes made by a trusted wikipedian being tougher to correct. So I would tend to think that the best argument for the validity of an article is stability in the face of hits. Perhaps do something like this:

A = (nr / (nh · %Δ)) · √na / T

where A is the accuracy factor, nr is the number of revisions (since some time), %Δ is the median (or mean, if you must) percent change in the article per revision, nh is the total number of hits, T is the technical factor (which could be determined by the number of authors involved, etc.; it is essentially an attempt at estimating the odds that a visitor will know enough to revise the article), and na is the number of authors involved (under a radical so that it won't increase linearly). This equation is frightfully arbitrary except in the factors it considers, and a statistician should come up with a better form.
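As a rough illustration only, the arbitrary formula above might be computed like this (all sample numbers below are hypothetical, and the variable names simply mirror the symbols in the formula):

```python
import math

def accuracy_factor(n_rev, n_hits, pct_change, n_authors, tech_factor):
    """A = (nr / (nh * %change)) * sqrt(na) / T, per the sketch above."""
    return (n_rev / (n_hits * pct_change)) * math.sqrt(n_authors) / tech_factor

# Hypothetical article: 12 revisions, 5000 hits, 2% median change per
# revision, 9 authors, technical factor 1.5
a = accuracy_factor(n_rev=12, n_hits=5000, pct_change=0.02, n_authors=9,
                    tech_factor=1.5)
```

As the author says, the exact form is arbitrary; the sketch just shows that all the inputs are cheaply computable from article history.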


Magnus' implementation of my proposal looks good, but it doesn't quite implement my proposal. The role of a moderator is to approve that a reviewer has the billed (and necessary) qualifications. I don't want anyone standing over the reviewers in the sense of saying, "Yes, you were right to approve this article." In fact, a moderator could very well know nothing about the subject the reviewer addresses, but the moderator can check to see whether the reviewer does have the necessary qualifications (by visiting homepages and matching up e-mail addresses, etc.). In other words, the role of reviewers is quality assurance, whereas the role of moderators would be anti-reviewer-fraud assurance.

Otherwise, the implementation looks pretty good. This advantage is important: "By having reviewers and moderators not chosen for a single category (e.g., biology), but by someone on a "higher level" trusting the individual not to make strange decisions, we can avoid problems such as having to choose a category for each article and each person prior to approval, checking reviewers for special references etc." That's exactly why I wanted it designed this way. Someone could be an ad hoc expert about his pet subject, and a moderator might be able to spot this.

I think ...'s (got to change that nickname, guy!  :-) ) proposal really pales beside Bryce's. If we are going to have a "community approval" process, Bryce's is far superior, because it allows us to "approve the approvers." Frankly, I couldn't give a rat's patoot whether lots of people would approve of a given article. I want to know whether people who know about the subject (i.e., by definition, the people I'm calling experts) approve of it.

If we did go my route, as opposed to Bryce's, I think we should have an in-depth discussion of criteria for reviewers. Basically, I think we should use criteria similar to those used by Nupedia, but modified to allow for specific expertise on specific subjects--where such expertise might not be codified in degrees, certificates, etc. Nevertheless, I think that the expertise even in those cases must be genuine. If you've read a half-dozen books on a subject of the breadth of, say, World War II, then you know a heck of a lot about WWII, and you can contribute mightily, but you ain't an expert on WWII (probably). Essentially, if we want to adopt a review mechanism in order to achieve the goals of attracting more, well, experts, and in order to have Wikipedia's content used by various reputable online sources, then we must work with the concept of expertise that they use. One rough-and-ready conception of expertise goes like this: you identify people who are experts on a subject on anybody's conception; then you determine who those people consider colleagues worth speaking to seriously and professionally. Those are the experts on that subject.

Frankly, this whole thing is starting to give me a bit of a headache, and I'm not sure we should do anything at all anytime soon.  :-) --User:Larry Sanger


On Bryce's proposal: this is very interesting and I think we should think more about it. Maybe it would end up being a roaring success. In the context of Wikipedia, I can pretty easily imagine how it could be. There is one main problem with it, though, and that is that it isn't going to make the project any more attractive for high-powered academic types to join the club. They would very likely look on such a system as a reflection of a sort of hopeless amateurism that will doom Wikipedia to mediocrity. We know better, of course--but we would like to have the participation of such people. Or I would, anyway. They know stuff. Stuff that we don't know, because they're smart and well-educated. If we can do something that's otherwise non-intrusive to the community to attract them, we should. Another problem, related to this, is that the world isn't going to be as excited about this cool system as we are. They'll want to see stuff that is approved, period--presented by Wikipedia as approved by genuine experts on the subjects. If they can see this, they'll be a lot more apt to use and distribute Wikipedia's content, which in the end is what we really, really want--because it means world domination.  :-)

After some more thought, I'm now thinking, "Why can't we just adapt Bryce's proposal for these (elitist) purposes?" It would go something like this. We all have our own locked pages on which we can list articles of which we approve. There is a general rule that we should not approve of articles in areas on which we aren't experts. Then, as Bryce says, people can choose who to listen to and who not to listen to when it comes to approvals. But as for presenting the Wikipedia-approved articles, it would be pretty straightforward: some advisory board of some sort chooses which people are to be "listened to" as regards approvals. This would be determined based on some criteria of expertise and whether the approvals the person renders are in that person's areas of expertise. Then we could present one set of articles as the "Wikipedia-approved" articles. Other people could choose a different set of reviewers and get a different set of approved articles.

Moreover, we could conceivably make Bryce's system attractive to "experts." We could say: "Hey, you join us and start approving articles, and your approval list will definitely be one to help define the canonical set of Wikipedia-approved articles."

This looks very promising to me. Right now, I'd have to say I like it better than Sanger's proposal! --Sanger


First, let me say that there's no technical problem in implementing both the Bryce and Sanger/Manske proposal. Just as different category schemes can coexist peacefully at wikipedia, these could too. We could even use the "expert verification" (no matter how this will work in the end) for both approaches.

For the difference between the Sanger and the Manske proposal about what moderators do, you should think about this: Say I get to become a reviewer because I know biology and a little about computers ;) So, a moderator made me a reviewer. What's going to stop me from approving a two-line article about "Fiddle Traditions in General"? If I were restricted to biology and computers, we'd have to put all articles into categories (which we don't want); otherwise the moderators would have to check every approved article, which is what I suggested in the first place. Maybe I wasn't clear on this point: I don't want the moderators to check articles that were approved by reviewers for scientific correctness; they should just act as another filter, basically approving every article they get from the reviewers, except for those with obvious errors, or with "unfitting" topics, such as foobar... --Magnus Manske


Actually, Magnus, I later came around to your thinking and neglected to mention it. I.e., I think it would be better to have the moderators always be working to check adequate qualifications.

The other possibility is to have some way to "undo" illicit approvals. This would be an enormous headache, though--anyone whose approval was undone would probably quit. --User:LMS


I lean toward something like Bryce's suggestion as well where "approved-ness" is just another piece of metadata about an article that can be used to select it, but I'd simplify it even further with a little software support. Let's not forget Wikipedia's strength: it's easy to create and edit stuff. Because of that, we have a lot of stuff that's been created and edited. We need to make it absolutely trivially easy to provide metadata about an article using the same simple Web interface. For example, have an "Approvals" or "Moderation" link which takes the user to a fill-in form where he checks boxes to answer questions like "Is this article factually accurate in your opinion?", "Is this article clear and well-written in your opinion?", "Does this article cover all major aspects of its topic in your opinion?", etc. That information can be stored in the database associated with the appropriate revisions (perhaps the software could even retain article versions for longer if they have a certain level of approval). Storing that info under the page of the Wikipedian who filled in the form as suggested by Bryce (except under program control) is a good way to do it. Then, users can judge for themselves whose opinions they value and whose they don't. This option is available to anyone who is logged in as a specific user (and not to anonymous users), so the software would know to update the "Lee Daniel Crocker/Well written" page when I checked that box.


This software could be very simple--just present the form, and add lines to the appropriate page, which is just an ordinary Wiki page. The "Lee Daniel Crocker/Well written" page, the "Larry Sanger/Copyright status verified" page, the "Magnus Manske/Factually accurate" page, and the "Bryce Harrington/Interesting subject" page are themselves subject to approval by anyone, and their value can be judged by that.

--User:Lee Daniel Crocker
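A minimal sketch of the per-user approval metadata Crocker describes (the storage shape and helper names here are my assumptions, not an actual MediaWiki design): each (user, criterion) pair maps to the revisions that user has vouched for, mirroring pages like "Lee Daniel Crocker/Well written".

```python
from collections import defaultdict

# (user, criterion) -> set of (article, revision) pairs the user approved
approvals = defaultdict(set)

def approve(user, criterion, article, revision):
    approvals[(user, criterion)].add((article, revision))

def count_approvals(users, criterion, article, revision):
    """How many of the users I personally trust vouched for this revision?"""
    return sum((article, revision) in approvals[(u, criterion)] for u in users)

approve("Lee Daniel Crocker", "Well written", "Jamaican Jerk Chicken", 7)
approve("Magnus Manske", "Factually accurate", "Jamaican Jerk Chicken", 7)
trusted = ["Lee Daniel Crocker", "Magnus Manske"]
count_approvals(trusted, "Well written", "Jamaican Jerk Chicken", 7)  # 1
```

The point of the design is visible here: readers pick their own `trusted` list, so different readers get different "approved" sets from the same stored metadata.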


Implementing an internal equivalent of something like Google's PageRank might be a useful alternative to the approval system. This would effectively rate an article based on how well linked it is and how well ranked the items that link to it are. Not only does this scale well (as Google has shown) and work automatically, it is effectively the equivalent of the mental heuristic humans use to establish authority (that is, an expert is one because other people say he is).
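As a sketch of the idea, here is a toy power-iteration PageRank over a hypothetical article link graph (not anything Wikipedia actually runs):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: links maps each article to the articles it links to."""
    articles = set(links) | {t for targets in links.values() for t in targets}
    n = len(articles)
    rank = {a: 1.0 / n for a in articles}
    for _ in range(iterations):
        new = {a: (1 - damping) / n for a in articles}
        for src in articles:
            targets = links.get(src, [])
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:  # dangling article: spread its rank evenly
                for a in articles:
                    new[a] += damping * rank[src] / n
        rank = new
    return rank

ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
# "C" is linked from both A and B, so it ends up ranked highest
```

The appeal, as the comment notes, is that the inputs (the internal link graph) already exist, so the rating needs no reviewers at all.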


In my opinion there should be a separate project that is quality controlled and has approved articles. By having a separate domain and project name, people on wikipedia automatically see the latest revision, while the people at the new site always see the approved revision. With my idea, to make it simpler for newbies, there would be just one affiliated approving project. The staff would consist of administrators, qualifiers, and experts.

To be an expert, you would send in your qualifications such as your degree, job experience, articles written, etc. The necessary qualifications would be fairly small, but your qualifications are assigned a number from 1-5 (which is always displayed in front of your name), and people may filter out articles approved only by barely qualified experts.

The qualifiers contact the university where the expert got their degree, check the archives to see if the expert really wrote the article they claimed to write, etc. The administrator's sole duty is to check the work of the qualifiers and to approve new qualifiers.

Under my idea, any articles that are "stable" (as defined by the formula above) are considered community approved, and as such, are sent to a page on the project (much like the recent changes page) which does not show up in search results and is out of the way, but accessible to anyone.

To actually get approved, the article must be checked for plagiarism by a program, grammar checked by an expert in the language the article is in, and finally, get checked by experts in the subject matter. When the article finally gets approved, all of the registered users who edited the article are invited to add their real or screen name to the list of authors, and the experts get their names and qualifications added in the experts' section at the top of the article.

Even when the article gets approved, people can still interact with it. Any user can submit a complaint about an article if they feel it is plagiarizing or is biased, or they can submit a request to the grammar checkers to fix a typo, and experts may and are encouraged to put their stamp of approval on an already approved article.

An article is rated the same as the highest expert that approves the article. For example, one searching for Brownies might see an article that is titled like this:

4:Recipe for Brownies

This means that at least one expert rated four approved of the article. A person searching for an article specifies how low they are willing to let the ratings go.

-- Techieman
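Techieman's rating filter could be sketched like this (the titles, ratings, and function names below are hypothetical examples of the scheme described above):

```python
# title -> ratings (1-5) of the experts who approved it
approved = {
    "Recipe for Brownies": [4, 2],
    "Photosynthesis": [5],
    "Some Stub": [1],
}

def display_rating(ratings):
    # an article is rated the same as the highest-rated expert approving it
    return max(ratings)

def search(min_rating):
    """Titles whose display rating meets the searcher's threshold."""
    return sorted(t for t, r in approved.items()
                  if display_rating(r) >= min_rating)

search(4)  # -> ["Photosynthesis", "Recipe for Brownies"]
```

A searcher asking for a minimum of 4 would see the brownie recipe (its best approver is rated 4) but not the stub.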


First, I clicked on the edit link for "Why wikipedia doesn't need an additional approval mechanism" (to fix a typo), and got what appeared to be the TOP of the article, not the section that I was trying to link. I had to edit the full article to make the change.

Second, someone may have suggested this already (since I only read Talk halfway down), but we might want to test multiple approval techniques simultaneously, although I like the heuristics idea. You might then have a user option or button for showing the heuristically approved revision of an article. Or, perhaps you could specify the approval level that you want, and the system would show you the most recent revision that meets or exceeds that level (or the best revision, if the level isn't met). Scott McNay 07:18, 2004 Feb 11 (UTC)
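Scott McNay's "most recent revision meeting my level, else the best available" rule might look like this (the revision ids and approval levels are made up for illustration):

```python
history = [  # (revision id, approval level), oldest first
    (1, 3), (2, 5), (3, 2), (4, 4),
]

def best_revision(history, wanted_level):
    """Most recent revision at or above wanted_level,
    else the best-approved revision available."""
    qualifying = [rev for rev, level in history if level >= wanted_level]
    if qualifying:
        return qualifying[-1]
    return max(history, key=lambda rl: rl[1])[0]

best_revision(history, 4)  # -> 4 (most recent meeting the level)
best_revision(history, 6)  # -> 2 (level unmet; best available)
```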

Scoring articles

I have a proposal to address. It seems that the mess in VfD and cleanup is due to the disparities among wikipedians about the quality of articles in wikipedia. Those called inclusionists, including me, tend to defend keeping articles that are even less than stubs, while deletionists are inclined to maintain the quality of wikipedia as a whole even if it takes getting rid of articles that are adequate stubs but contribute to making wikipedia look cheesy. It seems to me that the problem is rather that every single article is treated equally. Some articles are brilliant prose while some are crap or bot-generated.

Anyway, my proposal is to evaluate all articles on a scale of 1-5: 5 means brilliant prose, 4 a peer-reviewed, copyedited article, 3 a draft, 2 a stub, and 1 less than a stub or nonsense. This puts a lot more burden on wikipedians, but we really need some kind of approval system. The growth in the number of articles does not correspond with growth in their quality. I am afraid that the vast number of nonsense and bot-generated articles makes wikipedia look like trash. It is important to remember that readers might make a quick guess about the quality of wikipedia only by seeing stubs or less than stubs. -- Taku 23:04, Oct 11, 2003 (UTC)

See Wikipedia:Wikipedia approval mechanism for prior discussion of this point. See wikipedia:bug reports to submit a feature request. See wikitech-l to volunteer to help develop MediaWiki and code your request yourself. Please don't submit feature requests to the village pump. Martin 00:48, 12 Oct 2003 (UTC)

This is not a feature request and I have already read Wikipedia approval mechanism. -- Taku

The argument isn't usually about the quality of the text, it's usually about the appropriateness of the topic. These kinds of debates will go on until we formally decide whether or not Wikipedia is the appropriate place for every postal code in the world, or any random elementary school, or any professor who has written a paper, or any subway station in any town, or anyone who gets 20 hits on Google, etc. I'd try to organize some sort of formal decisions on these topics but I'm not sure I have the energy... Axlrosen 14:49, 12 Oct 2003 (UTC)

We have had this debate on the mailing list several times (this username is just a pseudonym for another username). Most people don't want an approval mechanism. That would ruin the wiki-ness of it. ++Liberal 16:26, 12 Oct 2003 (UTC)

The proposal is not yet another approval mechanism, but simple editorial information. Scoring is intended only to improve poorly written articles.

I also favor listing the primary author and reviewers of the article. I have often checked the page history to find out who is primarily responsible for the content. It is often convenient to contact such a person to discuss facts or POV issues. The article would look like this:

Takuya Murata is bahaba
....

The author: Taku, reviewed by Taku. The article is scored 4.

However, I guess people just don't like things that sound like approval at first glance, without looking at the details. -- Taku

Do I understand correctly, that the primary purpose of the scoring would be to alert other users that an article needs help? And that the score would be visible on Recent Changes and on Watchlist? Or would you want it to only be visible once you click the link to the article?
Hmm. If one were to have a range of scores on different aspects of the article, it would in fact amount to a software fix which would merge Wikipedia:Cleanup into Recent Changes, thus making it (Cleanup) obsolete. I could definitely get behind that, if the developers have enough time and think it worth their while.
Double-hmm. While we are waiting for a software feature to allow this, why don't we try to implement this on a trial basis at Cleanup, add a score element to the comment tags, eh? I know it isn't quite what you originally suggested, but it could provide some guidance as to how such a feature would be used by editors, eh? -- Jussi-Ville Heiskanen 07:16, Oct 15, 2003 (UTC)

I think you are on the same line as my idea. It seems the problems seen particularly in VfD originate from the situation where every article is treated equally. The truth is, some articles are very poorly written and some are completely ready to be read by general readers. I would love to see features like low-scored articles not popping up in the google results. Such features would allow us to keep articles of low quality without making wikipedia look like a heap of crap.

I also think it is important to store some editorial information in an article itself to avoid duplicated information. Many articles are listed in VfD over and over again, and the reason is quite often the low quality of the article rather than the question of its existence. Unfortunately, tons of articles remain stubs for months, which however cannot be used as justification for deleting such articles.

Scoring is very similar to the stub caveat, with more extended and extensive use.

-- Taku

Keep it simple...

To me, the issue we are trying to prevent is abuse of trust (e.g., [Vandalism]). It would seem to me that to do this, it would be better to improve the means by which improper changes are detected.

Rather than a complex approval process, simply make it possible for one or more users to "sign off" on any given version, and allow filtering in the Recent Changes for articles that haven't been signed off by at least a given number of registered users. Then, users on recent changes patrol can "sign off" on what appear to be valid changes, allowing reviewers who are primarily concerned about combating malicious changes to more readily identify which articles still need to be looked at, and which have already been found to be ok.

While malicious edits do occur from logged-in users as well, this does seem to be much less frequent, and if you show who has signed off on a change in Recent Changes, such edits are very easy to spot.

There's no need for a "disapprove" on a revision. If you disapprove, you should fix the article or, in case of a dispute, start a discussion.

- Triona 23:08, 8 Sep 2004 (UTC)
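Triona's sign-off filter is simple to sketch (the data shapes and names below are assumptions, not MediaWiki's actual schema):

```python
# revision id -> registered users who have signed off on it
signoffs = {
    101: {"Triona", "Brianjd"},
    102: set(),
    103: {"Taku"},
}

def needs_patrol(recent_changes, min_signoffs=2):
    """Revisions on Recent Changes still lacking enough sign-offs."""
    return [rev for rev in recent_changes
            if len(signoffs.get(rev, set())) < min_signoffs]

needs_patrol([101, 102, 103])  # -> [102, 103]
```

Patrollers would then concentrate on the returned revisions, since the rest have already been found to be OK by enough registered users.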

There are too many pages on this topic. I've started a list at m:Edit approval. I've moved that to the introduction. Brianjd 10:25, 2005 Jan 29 (UTC)

Let ordinary users approve

edit

Let ordinary users approve articles first. some people might abuse the system. but we can also have approved approvers and unapproved approvers.

-Hemanshu 20:06, 14 Dec 2003 (UTC)

Genuine experts may not be interested

edit

Basically, I am not sure that we can generate enough interest, yet, on the part of "genuine experts" to act as reviewers for Wikipedia. That is my one big misgiving about this whole project. What do you think? --LMS

Dumb question: why do we need reviewers? So far, quality control seems to be a potential problem that as of yet shows no sign of turning into an actual problem.


See "advantages," above. :-) --LMS


Approvals and revisions

edit

This is not supposed to freeze the article, but, what is approved is a particular revision of the article. Will the approval be to revision n of the article, that the viewer of version n+m can check? --AN


Yes, and yes, or that's my notion of the thing. --LMS


Will any experts be interested?

edit

I think this could be useful, but I have some vague misgivings about how well it will work in practice. Will we be able to get enough reviewers, who are actively involved? Who in the world is going to come up with the reviewer validation criteria? The problem here is that some articles are about SF authors, others are cookbook type recipes, and what makes an expert cook does not make an expert on Jamaican Jerk Chicken...

Beyond the logistical questions, I'm a bit worried that this may have some effects on wikipedia productivity. I'm sure one of the reasons that wikipedia thrives is just because it is easy to use. But I also think that there are delicate aspects to the way the community works and is structured which are just as important. If good people with lots of real knowledge feel like they are second class citizens, they will fell less motivated to work on the project. I'm not entirely certain that creating an official hierarchy will have no adverse effects. On the other hand I'm not certain that it will have adverse effects either...MRC


Related to this, is the idea that if I write an article on Jamaican Jerk Chicken, and do a thorough web-search which supports what I write, why can't I be considered an expert for the purposes of review. After all, I may have a more open mind than a lot of cooks out there.


To quote Lee Daniel Crocker from another page on Wikipedia, "Authority is nothing but a useful shortcut used in the world of humans because we haven't had the luxury of technology like this that makes expertise less relevant. What matters is the argument, not the arguers, and this technology supports--indeed enforces--that. Facts are facts, no matter who writes about them"

I know too much intellectual history to trust the experts much. Expertise quite often has more to do with trendiness than with knowledge and understanding. There are examples in almost any field - this generation's experts scoffs at the silly ideas of a previous generation, while destined to be scoffed at themselves by a future generation.

- Tim


Replies to the above:

Will we be able to get enough reviewers, who are actively involved?

That's an excellent question. I just don't know.

Who in the world is going to come up with the reviewer validation criteria?

That obviously would be a matter of some deliberation.

If good people with lots of real knowledge feel like they are second class citizens, they will fell less motivated to work on the project. I'm not entirely certain that creating an official hierarchy will have no adverse effects.

I agree that this is a very, very legitimate concern, and I think we probably shouldn't take any steps until we're pretty sure that the project wouldn't suffer in this way.

Authority is nothing but a useful shortcut used in the world of humans because we haven't had the luxury of technology like this that makes expertise less relevant. What matters is the argument, not the arguers, and this technology supports--indeed enforces--that. ... I know too much intellectual history to trust the experts much. Expertise quite often has more to do with trendiness than with knowledge and understanding.

I think there is some merit to these claims. But I'm wondering what this has to do with the proposal. Is the idea that we cannot trust authorities, or experts, to reliably state what constitutes a clear, accurate statement of human knowledge on the subjects within their expertise? Or is it, perhaps, that fine articles would be given a thumbs-down because they do not toe the party line sufficiently, whatever it is? Well, I don't know about that. Anyway, I'm not sure what the point is here.

Gotta go, the dog is taking me for a walk. --LMS


Another possible problem is that even the experts can be wrong. How can we verify accuracy, then? Even if the Wikipedia was internally consistent, it can still be wrong. Not only that, but an article can change 30 seconds after it has been reviewed. For starters, we'll have to introduce another concept from programming, the code freeze. What we can do is analyze the change frequency on articles (via a program of some kind), and when the changes in an article stabilize so that changes are minor and infrequent, we copy the article, add all the named authors as authors, date it, and make it a static page on the wikipedia. Then you'll have two articles, one is the latest "stable" revision, and the other is open to flux (the rest are archived, available by request or some other equivalent). The tough part is determining "accuracy". We could go democratically and add a voting system, but that has problems since the majority can be wrong about as easily as an individual. Any verification system either requires money to hire people to check on authors' claims of expertise, or the creation of an elite class of authors. The alternative is to foster the sense of community, and work on the trust a wikipedian can earn from fellow wikipedians, but that opens up the door to any mistakes made by a trusted wikipedian being tougher to correct. So I would tend to think that the best argument for the validity of an article is stability in the face of hits. Perhaps do something like this:

A = (nr/(nh %Δ)) * (√na)/T

where A is the accuracy factor, nr is the number of revisions (since some time), %Δ is the median (or mean if you must) % change in the article per revision, nh is the total number of hits, T is the technical factor (could be determined by the number of authors involved, etc.; it is essentially an attempt at determining the odds that a hit will know enough to revise the article), and na is the number of authors involved (under a radical so that it won't increase linearly). This equation is frightfully arbitrary except in the factors it considers, and a statistician should come up with a better form.
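For concreteness, the formula could be prototyped like this. This is only a rough sketch: the function and parameter names are my own, the grouping assumes nh and %Δ are multiplied in the denominator, and the weighting is as arbitrary as the comment itself admits.

```python
import math

def accuracy_factor(num_revisions, total_hits, median_pct_change,
                    num_authors, technical_factor):
    """Estimate the stability-based 'accuracy factor':

        A = (nr / (nh * %change)) * sqrt(na) / T
    """
    if total_hits == 0 or median_pct_change == 0 or technical_factor == 0:
        raise ValueError("hits, median change, and technical factor must be non-zero")
    return (num_revisions / (total_hits * median_pct_change)) \
        * math.sqrt(num_authors) / technical_factor
```

With, say, 10 revisions, 1000 hits, a 5% median change, 16 authors, and a technical factor of 2, this yields A = 0.4; as the comment notes, the absolute value means little without calibration.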


Magnus' implementation of my proposal looks good, but it doesn't quite implement my proposal. The role of a moderator is to approve that a reviewer has the billed (and necessary) qualifications. I don't want anyone standing over the reviewers in the sense of saying, "Yes, you were right to approve this article." In fact, a moderator could very well know nothing about the subject the reviewer addresses, but the moderator can check to see whether the reviewer does have the necessary qualifications (by visiting homepages and matching up e-mail addresses, etc.). In other words, the role of reviewers is quality assurance, whereas the role of moderators would be anti-reviewer-fraud assurance.

Otherwise, the implementation looks pretty good. This advantage is important: "By having reviewers and moderators not chosen for a single category (e.g., biology), but by someone on a "higher level" trusting the individual not to make strange decisions, we can avoid problems such as having to choose a category for each article and each person prior to approval, checking reviewers for special references etc." That's exactly why I wanted it designed this way. Someone could be an ad hoc expert about his pet subject, and a moderator might be able to spot this.

I think ...'s (got to change that nickname, guy!  :-) ) proposal really pales beside Bryce's. If we are going to have a "community approval" process, Bryce's is far superior, because it allows us to "approve the approvers." Frankly, I couldn't give a rat's patoot whether lots of people would approve of a given article. I want to know whether people who know about the subject (i.e., by definition, the people I'm calling experts) approve of it.

If we did go my route, as opposed to Bryce's, I think we should have an in-depth discussion of criteria for reviewers. Basically, I think we should use criteria similar to those used by Nupedia, but modified to allow for specific expertise on specific subjects--where such expertise might not be codified in degrees, certificates, etc. Nevertheless, I think that the expertise even in those cases must be genuine. If you've read a half-dozen books on a subject of the breadth of, say, World War II, then you know a heck of a lot about WWII, and you can contribute mightily, but you ain't an expert on WWII (probably). Essentially, if we want to adopt a review mechanism in order to achieve the goals of attracting more, well, experts, and in order to have Wikipedia's content used by various reputable online sources, then we must work with the concept of expertise that they use. One rough-and-ready conception of expertise goes like this: you identify people who are experts on a subject on anybody's conception; then you determine who those people consider colleagues worth speaking to seriously and professionally. Those are the experts on that subject.

Frankly, this whole thing is starting to give me a bit of a headache, and I'm not sure we should do anything at all anytime soon.  :-) --User:Larry Sanger


On Bryce's proposal: this is very interesting and I think we should think more about it. Maybe it would end up being a roaring success. In the context of Wikipedia, I can pretty easily imagine how it could be. There is one main problem with it, though, and that is that it isn't going to make the project any more attractive for high-powered academic types to join the club. They would very likely look on such a system as a reflection of a sort of hopeless amateurism that will doom Wikipedia to mediocrity. We know better, of course--but we would like to have the participation of such people. Or I would, anyway. They know stuff. Stuff that we don't know, because they're smart and well-educated. If we can do something that's otherwise non-intrusive to the community to attract them, we should. Another problem, related to this, is that the world isn't going to be as excited about this cool system as we are. They'll want to see stuff that is approved, period--presented by Wikipedia as approved by genuine experts on the subjects. If they can see this, they'll be a lot more apt to use and distribute Wikipedia's content, which in the end is what we really, really want--because it means world domination.  :-)

After some more thought, I'm now thinking, "Why can't we just adapt Bryce's proposal for these (elitist) purposes?" It would go something like this. We all have our own locked pages on which we can list articles of which we approve. There is a general rule that we should not approve of articles in areas on which we aren't experts. Then, as Bryce says, people can choose who to listen to and who not to listen to when it comes to approvals. But as for presenting the Wikipedia-approved articles, it would be pretty straightforward: some advisory board of some sort chooses which people are to be "listened to" as regards approvals. This would be determined based on some criteria of expertise and whether the approvals the person renders are in that person's areas of expertise. Then we could present one set of articles as the "Wikipedia-approved" articles. Other people could choose a different set of reviewers and get a different set of approved articles.

Moreover, we could conceivably make Bryce's system attractive to "experts." We could say: "Hey, you join us and start approving articles, and your approval list will definitely be one to help define the canonical set of Wikipedia-approved articles."

This looks very promising to me. Right now, I'd have to say I like it better than Sanger's proposal! --Sanger


First, let me say that there's no technical problem in implementing both the Bryce and Sanger/Manske proposal. Just as different category schemes can coexist peacefully at wikipedia, these could too. We could even use the "expert verification" (no matter how this will work in the end) for both approaches.

For the difference between the Sanger and the Manske proposal about what moderators do, you should think about this: Say I get to become a reviewer because I know biology and a little about computers ;) So, a moderator made me a reviewer. What's going to stop me from approving a two-line article about "Fiddle Traditions in General"? If I were restricted to biology and computers, we'd have to put all articles into categories (which we don't want); otherwise the moderators would have to check every approved article, which is what I suggested in the first place. Maybe I wasn't clear on this point: I don't want the moderators to check articles that were approved by reviewers for scientific correctness; they should just act as another filter, basically approving every article they get from the reviewers, except for those with obvious errors, or with "unfitting" topics, such as foobar... --Magnus Manske


Actually, Magnus, I later came around to your thinking and neglected to mention it. I.e., I think it would be better to have the moderators always be working to check adequate qualifications.

The other possibility is to have some way to "undo" illicit approvals. This would be an enormous headache, though--anyone whose approval was undone would probably quit. --User:LMS


I lean toward something like Bryce's suggestion as well where "approved-ness" is just another piece of metadata about an article that can be used to select it, but I'd simplify it even further with a little software support. Let's not forget Wikipedia's strength: it's easy to create and edit stuff. Because of that, we have a lot of stuff that's been created and edited. We need to make it absolutely trivially easy to provide metadata about an article using the same simple Web interface. For example, have an "Approvals" or "Moderation" link which takes the user to a fill-in form where he checks boxes to answer questions like "Is this article factually accurate in your opinion?", "Is this article clear and well-written in your opinion?", "Does this article cover all major aspects of its topic in your opinion?", etc. That information can be stored in the database associated with the appropriate revisions (perhaps the software could even retain article versions for longer if they have a certain level of approval). Storing that info under the page of the Wikipedian who filled in the form as suggested by Bryce (except under program control) is a good way to do it. Then, users can judge for themselves whose opinions they value and whose they don't. This option is available to anyone who is logged in as a specific user (and not to anonymous users), so the software would know to update the "Lee Daniel Crocker/Well written" page when I checked that box.


This software could be very simple--just present the form, and add lines to the appropriate page, which is just an ordinary Wiki page. The "Lee Daniel Crocker/Well written" page, the "Larry Sanger/Copyright status verified" page, the "Magnus Manske/Factually accurate" page, and the "Bryce Harrington/Interesting subject" page are themselves subject to approval by anyone, and their value can be judged by that.

--User:Lee Daniel Crocker
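The per-user criterion pages described above could be modelled as simple keyed records. The following is only a hypothetical in-memory sketch, with invented names, of how the checkbox form data might be stored under pages like "Lee Daniel Crocker/Well written":

```python
from collections import defaultdict

# Each (user, criterion) pair maps to a list of (article, revision) entries,
# standing in for a wiki page such as "Lee Daniel Crocker/Well written".
approval_pages = defaultdict(list)

CRITERIA = [
    "Factually accurate",
    "Well written",
    "Covers all major aspects",
]

def record_approval(user, article, revision, checked_criteria):
    """Store each checked box on the approver's own criterion page."""
    for criterion in checked_criteria:
        if criterion not in CRITERIA:
            raise ValueError(f"unknown criterion: {criterion}")
        approval_pages[(user, criterion)].append((article, revision))

def approvals_for(article, revision, criterion):
    """List users who checked `criterion` for this exact revision."""
    return [user for (user, crit), entries in approval_pages.items()
            if crit == criterion and (article, revision) in entries]
```

Because approvals attach to a specific revision, a later edit simply has no approvals yet, which matches the revision-based approval discussed earlier on this page.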


Implementing an internal equivalent of something like Google's PageRank might be a useful alternative to the approval system. This would effectively rate an article based on how well linked it is and how well ranked the items that link to it are. Not only does this scale well (as Google has shown), and work automatically, it is effectively the equivalent of the mental heuristic humans use to establish authority (that is, an expert is one because other people say he is).
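As a hedged illustration of the idea (this is not an actual MediaWiki feature), a simplified PageRank over the article link graph might look like the following power-iteration sketch:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Rank articles by incoming links, weighted by the rank of the linker.

    `links` maps each article title to the list of articles it links to.
    """
    articles = list(links)
    n = len(articles)
    rank = {a: 1.0 / n for a in articles}
    for _ in range(iterations):
        new_rank = {a: (1.0 - damping) / n for a in articles}
        for a, targets in links.items():
            if not targets:          # dangling page: spread its rank evenly
                for b in articles:
                    new_rank[b] += damping * rank[a] / n
            else:
                for b in targets:
                    new_rank[b] += damping * rank[a] / len(targets)
        rank = new_rank
    return rank
```

A heavily linked article (one many others point to) ends up with a higher score than a peripheral one, mirroring the "authority by reputation" heuristic described above.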


In my opinion there should be a separate project that is quality controlled and has approved articles. By having a separate domain and project name, people on wikipedia automatically see the latest revision, while the people at the new site always see the approved revision. With my idea, to make it simpler for newbies, there would be just one affiliated approving project. The staff would consist of administrators, qualifiers, and experts.

To be an expert, you would send in your qualifications such as your degree, job experience, articles written, etc. The necessary qualifications would be fairly small, but your qualifications are assigned a number from 1-5 (which is always displayed in front of your name), and people may filter out articles approved only by barely qualified experts.

The qualifiers contact the university where the expert got their degree, check the archives to see if the expert really wrote the article they claimed to write, etc. The administrator's sole duty is to check the work of the qualifiers and to approve new qualifiers.

Under my idea, any articles that are "stable" (as defined by the formula above) are considered community approved, and as such, are sent to a page on the project (much like the recent changes page) which does not show up in search results and is out of the way, but accessible to anyone.

To actually get approved, the article must be checked for plagiarism by a program, grammar checked by an expert in the language the article is in, and finally, get checked by experts in the subject matter. When the article finally gets approved, all of the registered users who edited the article are invited to add their real or screen name to the list of authors, and the experts get their names and qualifications added in the experts' section at the top of the article.

Even when the article gets approved, people can still interact with it. Any user can submit a complaint about an article if they feel it is plagiarizing or is biased, or they can submit a request to the grammar checkers to fix a typo, and experts may and are encouraged to put their stamp of approval on an already approved article.

An article is rated the same as the highest expert that approves the article. For example, one searching for Brownies might see an article that is titled like this:

4:Recipe for Brownies

This means that at least one expert rated a four approved of the article. A person searching for an article specifies how low they are willing to let the ratings go.

-- Techieman
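Techieman's rating scheme could be sketched as follows. The function names are hypothetical; the sketch assumes, as described above, that an article's rating equals the highest qualification number (1-5) among the experts who approved it, and that search results below the searcher's cutoff are filtered out.

```python
def article_rating(expert_levels):
    """An article is rated the same as the highest-rated expert who approved it."""
    return max(expert_levels, default=0)

def search(articles, min_rating):
    """Return '<rating>:<title>' lines, skipping articles below the cutoff.

    `articles` maps each title to the qualification numbers (1-5) of the
    experts who approved it.
    """
    results = []
    for title, expert_levels in articles.items():
        rating = article_rating(expert_levels)
        if rating >= min_rating:
            results.append(f"{rating}:{title}")
    return results
```

So an article approved by experts rated 2 and 4 displays as "4:Recipe for Brownies" and survives a search with a minimum rating of 3.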


First, I clicked on the edit link for "Why wikipedia doesn't need an additional approval mechanism" (to fix a typo), and got what appeared to be the TOP of the article, not the section that I was trying to link. I had to edit the full article to make the change.

Second, someone may have suggested this already (since I only read Talk halfway down), but we might want to test multiple approval techniques simultaneously, although I like the heuristics idea. You might then have a user option or button for showing the heuristically-approved revision of an article. Or, perhaps you could specify the approval level that you want, and the system would show you the most recent revision that meets or exceeds that level (or best revision, if level isn't met). Scott McNay 07:18, 2004 Feb 11 (UTC)
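Scott McNay's level-based selection could be sketched like so (hypothetical names; it assumes each revision carries a single approval level and revisions are listed oldest to newest):

```python
def revision_to_show(revisions, min_level):
    """Pick the most recent revision whose approval level meets `min_level`;
    fall back to the best-approved revision if none qualifies.

    `revisions` is a list of (revision_id, approval_level) pairs,
    ordered oldest to newest.
    """
    qualifying = [(rid, lvl) for rid, lvl in revisions if lvl >= min_level]
    if qualifying:
        return qualifying[-1][0]                  # newest that qualifies
    return max(revisions, key=lambda r: r[1])[0]  # else best available
```

For example, with revisions approved at levels 5, 3, 4, 1 (oldest first), a reader asking for level 4 sees the third revision, and a reader asking for level 6 falls back to the first.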

Scoring articles

edit

I have a proposal to address this. It seems that the mess in VfD and cleanup is due to the disparities among wikipedians about the quality of articles in wikipedia. Those called inclusionists, including me, tend to defend keeping articles that are even less than stubs, while deletionists are inclined to maintain the quality of wikipedia as a whole even if it takes getting rid of articles that are adequate stubs but contribute to making wikipedia look cheesy. It seems to me that the problem is rather that every single article is treated equally. Some articles are brilliant prose while some are crap or bot-generated.

Anyway, my proposal is to evaluate all articles on a range of 1-5 scores: 5 means brilliant prose, 4 a peer-reviewed copyedited article, 3 a draft, 2 a stub and 1 less than a stub or nonsense. This puts a lot more burden on wikipedians but we really need some kind of approval system. The growth in numbers does not coincide with growth in quality. I am afraid that the vast number of nonsense and bot-generated articles makes wikipedia look like trash. It is important to remember that readers might make a quick guess about the quality of wikipedia only by seeing stubs or less than stubs. -- Taku 23:04, Oct 11, 2003 (UTC)

See Wikipedia:Wikipedia approval mechanism for prior discussion of this point. See wikipedia:bug reports to submit a feature request. See wikitech-l to volunteer to help develop MediaWiki and code your request yourself. Please don't submit feature requests to the village pump. Martin 00:48, 12 Oct 2003 (UTC)

This is not a feature request and I have already read Wikipedia approval mechanism. -- Taku

The argument isn't usually about the quality of the text, it's usually about the appropriateness of the topic. These kinds of debates will go on until we formally decide whether or not Wikipedia is the appropriate place for every postal code in the world, or any random elementary school, or any professor who has written a paper, or any subway station in any town, or anyone who gets 20 hits on Google, etc. I'd try to organize some sort of formal decisions on these topics but I'm not sure I have the energy... Axlrosen 14:49, 12 Oct 2003 (UTC)

We have had this debate on the mailing list several times (This username is just a pseudonym for another username). Most people don't want an approval mechanism. That would ruin the wiki-ness of it. ++Liberal 16:26, 12 Oct 2003 (UTC)

The proposal is not yet another approval mechanism, but simple editorial information. Scoring is intended only to improve poorly written articles.

I also favor listing the primary author and reviewers of the article. I often checked the page history to find out who is primarily responsible for the content. It is often convenient to contact such a person to discuss facts or POV issues. The article would look like this.

Takuya Murata is bahaba
....

The author:Taku, reviewed by Taku. The article is scored 4.

However, I guess people just don't like things that sound like approval at first glance without looking at the details. -- Taku

Do I understand correctly, that the primary purpose of the scoring would be to alert other users that an article needs help? And that the score would be visible on Recent Changes and on Watchlist? Or would you want it to only be visible once you click the link to the article?
Hmm. If one were to have a range of scores on different aspects of the article, it would in fact amount to a software fix which would merge Wikipedia:Cleanup into Recent Changes, thus making it (Cleanup) obsolete. I could definitely get behind that, if the developers have enough time, and they think it worth their while.
Double-hmm. While we are waiting for a software feature to allow this, why don't we try to implement this on a trial basis at Cleanup, add a score element to the comment tags, eh? I know it isn't quite what you originally suggested, but it could provide some guidance as to how such a feature would be used by editors, eh? -- Jussi-Ville Heiskanen 07:16, Oct 15, 2003 (UTC)

I think you are along the same lines as my idea. It seems the problems seen particularly in VfD originate from the situation where every article is treated equally. The truth is some articles are very poorly written and some are completely ready to be read by general readers. I would love to see features like low-scored articles not popping up in the google results. Such features allow us to keep articles of low quality without them making wikipedia look like a heap of trash.

I also think it is important to store some editorial information in an article itself to avoid duplicated information. Many articles are listed in VfD over and over again, and the reason is quite often the low quality of the article, rather than a question of existence. Unfortunately, tons of articles remain stubs for months, which however cannot be used as justification for deleting such articles.

Scoring is very similar to the stub caveat, with more extended and extensive use.

-- Taku

Keep it simple...

edit

To me the issue that we are trying to prevent is abuse of trust (e.g., vandalism). It would seem to me that to do that, it would be better to improve the means by which improper changes are detected.

Rather than a complex approval process, simply make it possible for one or more users to "sign off" on any given version, and allow filtering in the Recent Changes for articles that haven't been signed off by at least a given number of registered users. Then, users on recent changes patrol can "sign off" on what appear to be valid changes, allowing reviewers who are primarily concerned about combating malicious changes to more readily identify which articles still need to be looked at, and which have already been found to be ok.

While malicious edits do occur from logged in users as well, this does seem to be much less frequent, and if you show who's signed off on a change in the recent changes, very easy to spot.

There's no need for a "disapprove" on a revision. If you disapprove, you should fix the article, or in case of a dispute, start a discussion.

- Triona 23:08, 8 Sep 2004 (UTC)
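Triona's sign-off filter might be sketched as follows (the names are my own invention; the idea is simply to hide changes that enough registered users have already vouched for):

```python
def needs_review(changes, signoffs, min_signoffs=1):
    """Filter Recent Changes down to revisions too few users have signed off.

    `changes` is a list of (article, revision_id) pairs; `signoffs` maps
    (article, revision_id) to the set of registered users who vouched for it.
    """
    return [(article, rev) for article, rev in changes
            if len(signoffs.get((article, rev), set())) < min_signoffs]
```

A patroller's view would then show only the second change below, since the first already has a sign-off; raising `min_signoffs` demands more independent eyes per change.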

There are too many pages on this topic. I've started a list at m:Edit approval. I've moved that to the introduction. Brianjd 10:25, 2005 Jan 29 (UTC)


Will any experts be interested?

edit

I think this could be useful, but I have some vague misgivings about how well it will work in practice. Will we be able to get enough reviewers, who are actively involved? Who in the world is going to come up with the reviewer validation criteria? The problem here is that some articles are about SF authors, others are cookbook type recipes, and what makes an expert cook does not make an expert on Jamaican Jerk Chicken...

Beyond the logistical questions, I'm a bit worried that this may have some effects on wikipedia productivity. I'm sure one of the reasons that wikipedia thrives is just because it is easy to use. But I also think that there are delicate aspects to the way the community works and is structured which are just as important. If good people with lots of real knowledge feel like they are second-class citizens, they will feel less motivated to work on the project. I'm not entirely certain that creating an official hierarchy will have no adverse effects. On the other hand I'm not certain that it will have adverse effects either...MRC


Related to this is the idea that if I write an article on Jamaican Jerk Chicken, and do a thorough web search which supports what I write, why can't I be considered an expert for the purposes of review? After all, I may have a more open mind than a lot of cooks out there.


To quote Lee Daniel Crocker from another page on Wikipedia, "Authority is nothing but a useful shortcut used in the world of humans because we haven't had the luxury of technology like this that makes expertise less relevant. What matters is the argument, not the arguers, and this technology supports--indeed enforces--that. Facts are facts, no matter who writes about them"

I know too much intellectual history to trust the experts much. Expertise quite often has more to do with trendiness than with knowledge and understanding. There are examples in almost any field - this generation's experts scoff at the silly ideas of a previous generation, while destined to be scoffed at themselves by a future generation.

- Tim


Replies to the above:

Will we be able to get enough reviewers, who are actively involved?

That's an excellent question. I just don't know.

Who in the world is going to come up with the reviewer validation criteria?

That obviously would be a matter of some deliberation.

If good people with lots of real knowledge feel like they are second-class citizens, they will feel less motivated to work on the project. I'm not entirely certain that creating an official hierarchy will have no adverse effects.

I agree that this is a very, very legitimate concern, and I think we probably shouldn't take any steps until we're pretty sure that the project wouldn't suffer in this way.

Authority is nothing but a useful shortcut used in the world of humans because we haven't had the luxury of technology like this that makes expertise less relevant. What matters is the argument, not the arguers, and this technology supports--indeed enforces--that. ... I know too much intellectual history to trust the experts much. Expertise quite often has more to do with trendiness than with knowledge and understanding.

I think there is some merit to these claims. But I'm wondering what this has to do with the proposal. Is the idea that we cannot trust authorities, or experts, to reliably state what constitutes a clear, accurate statement of human knowledge on the subjects within their expertise? Or is it, perhaps, that fine articles would be given a thumbs-down because they do not toe the party line sufficiently, whatever it is? Well, I don't know about that. Anyway, I'm not sure what the point is here.

Gotta go, the dog is taking me for a walk. --LMS


Another possible problem is that even the experts can be wrong. How can we verify accuracy, then? Even if the Wikipedia was internally consistent, it can still be wrong. Not only that, but an article can change 30 seconds after it has been reviewed. For starters, we'll have to introduce another concept from programming, the code freeze. What we can do is analyze the change frequency on articles (via a program of some kind), and when the changes in an article stabilize so that changes are minor and infrequent, we copy the article, add all the named authors as authors, date it, and make it a static page on the wikipedia. Then you'll have two articles, one is the latest "stable" revision, and the other is open to flux (the rest are archived, available by request or some other equivalent). The tough part is determining "accuracy". We could go democratically and add a voting system, but that has problems since the majority can be wrong about as easily as an individual. Any verification system either requires money to hire people to check on authors' claims of expertise, or the creation of an elite class of authors. The alternative is to foster the sense of community, and work on the trust a wikipedian can earn from fellow wikipedians, but that opens up the door to any mistakes made by a trusted wikipedian being tougher to correct. So I would tend to think that the best argument for the validity of an article is stability in the face of hits. Perhaps do something like this:

A = (nr/(nh %Δ)) * (√na)/T

where A is the accuracy factor, nr is the number of revisions (since some time), %f¢ is the median (or mean if you must) % change in the article per revision, nh is the total number of hits, T is the technical factor (could be determined by the number of authors involved, etc. it is essentially an attempt at determining the odds that a hit will know enough to revise the article), and na is the number of authors involved (under a radical so that it won't increase linearly). This equation is frightfully arbitrary except in the factors it considers, and a statistician should come up with a better form.


Magnus' implementation of my proposal looks good, but it doesn't quite implement my proposal. The role of a moderator is to approve that a reviewer has the billed (and necessary) qualifications. I don't want anyone standing over the reviewers in the sense of saying, "Yes, you were right to approve this article." In fact, a moderator could very well know nothing about the subject the reviewer addresses, but the moderator can check to see whether the reviewer does have the necessary qualifications (by visiting homepages and matching up e-mail addresses, etc.). In other words, the role of reviewers is quality assurance, whereas the role of moderators would be anti-reviewer-fraud assurance.

Otherwise, the implementation looks pretty good. This advantage is important: "By having reviewers and moderators not chosen for a single category (e.g., biology), but by someone on a "higher level" trusting the individual not to make strange decisions, we can avoid problems such as having to choose a category for each article and each person prior to approval, checking reviewers for special references etc." That's exactly why I wanted it designed this way. Someone could be an ad hoc expert about his pet subject, and a moderator might be able to spot this.

I think ...'s (got to change that nickname, guy!  :-) ) proposal really pales beside Bryce's. If we are going to have a "community approval" process, Bryce's is far superior, because it allows us to "approve the approvers." Frankly, I couldn't give rat's patoot whether lots of people would approve of a given article. I want to know whether people who know about the subject (i.e., by definition, the people I'm calling experts) approve of it.

If we did go my route, as opposed to Bryce's, I think we should have an in-depth discussion of criteria for reviewers. Basically, I think we should use criteria similar to those used by Nupedia, but modified to allow for specific expertise on specific subjects--where such expertise might not be codified in degrees, certificates, etc. Nevertheless, I think that the expertise even in those cases must be genuine. If you've read a half-dozen books on a subject of the breadth of, say, World War II, then you know a heck of a lot about WWII, and you can contribute mightily, but you ain't an expert on WWII (probably). Essentially, if we want to adopt a review mechanism in order to achieve the goals of attracting more, well, experts, and in order to have Wikipedia's content used by various reputable online sources, then we must work with the concept of expertise that they use. One rough-and-ready conception of expertise goes like this: you identify people who are experts on a subject on anybody's conception; then you determine who those people consider colleagues worth speaking to seriously and professionally. Those are the experts on that subject.

Frankly, this whole thing is starting to give me a bit of a headache, and I'm not sure we should do anything at all anytime soon.  :-) --User:Larry Sanger


On Bryce's proposal: this is very interesting and I think we should think more about it. Maybe it would end up being a roaring success. In the context of Wikipedia, I can pretty easily imagine how it could be. There is one main problem with it, though, and that is that it isn't going to make the project any more attractive for high-powered academic types to join the club. They would very likely look on such a system as a reflection of a sort of hopeless amateurism that will doom Wikipedia to mediocrity. We know better, of course--but we would like to have the participation of such people. Or I would, anyway. They know stuff. Stuff that we don't know, because they're smart and well-educated. If we can do something that's otherwise non-intrusive to the community to attract them, we should. Another problem, related to this, is that the world isn't going to be as excited about this cool system as we are. They'll want to see stuff that is approved, period--presented by Wikipedia as approved by genuine experts on the subjects. If they can see this, they'll be a lot more apt to use and distribute Wikipedia's content, which in the end is what we really, really want--because it means world domination.  :-)

After some more thought, I'm now thinking, "Why can't we just adapt Bryce's proposal for these (elitist) purposes?" It would go something like this. We all have our own locked pages on which we can list articles of which we approve. There is a general rule that we should not approve of articles in areas on which we aren't experts. Then, as Bryce says, people can choose who to listen to and who not to listen to when it comes to approvals. But as for presenting the Wikipedia-approved articles, it would be pretty straightforward: some advisory board of some sort chooses which people are to be "listened to" as regards approvals. This would be determined based on some criteria of expertise and whether the approvals the person renders are in that person's areas of expertise. Then we could present one set of articles as the "Wikipedia-approved" articles. Other people could choose a different set of reviewers and get a different set of approved articles.

Moreover, we could conceivably make Bryce's system attractive to "experts." We could say: "Hey, you join us and start approving articles, and definitely your approval list will definitely be one to help define the canonical set of Wikipedia-approved articles.

This looks very promising to me. Right now, I'd have to say I like it better than Sanger's proposal! --Sanger


First, let me say that there's no technical problem in implementing both the Bryce and Sanger/Manske proposal. Just as different category schemes can coexist peacefully at wikipedia, these could too. We could even use the "expert verification" (no matter how this will work in the end) for both approaches.

For the difference between the Sanger and the Manske proposal about what moderators do, you should think about this: say I get to become a reviewer because I know biology and a little about computers ;) So, a moderator made me a reviewer. What's going to stop me from approving a two-line article about "Fiddle Traditions in General"? If I were restricted to biology and computers, we'd have to put all articles into categories (which we don't want); otherwise the moderators would have to check every approved article, which is what I suggested in the first place. Maybe I wasn't clear on this point: I don't want the moderators to check articles that were approved by reviewers for scientific correctness; they should just act as another filter, basically approving every article they get from the reviewers, except for those with obvious errors, or with "unfitting" topics, such as foobar... --Magnus Manske


Actually, Magnus, I later came around to your thinking and neglected to mention it. I.e., I think it would be better to have the moderators always be working to check adequate qualifications.

The other possibility is to have some way to "undo" illicit approvals. This would be an enormous headache, though--anyone whose approval was undone would probably quit. --User:LMS


I lean toward something like Bryce's suggestion as well where "approved-ness" is just another piece of metadata about an article that can be used to select it, but I'd simplify it even further with a little software support. Let's not forget Wikipedia's strength: it's easy to create and edit stuff. Because of that, we have a lot of stuff that's been created and edited. We need to make it absolutely trivially easy to provide metadata about an article using the same simple Web interface. For example, have an "Approvals" or "Moderation" link which takes the user to a fill-in form where he checks boxes to answer questions like "Is this article factually accurate in your opinion?", "Is this article clear and well-written in your opinion?", "Does this article cover all major aspects of its topic in your opinion?", etc. That information can be stored in the database associated with the appropriate revisions (perhaps the software could even retain article versions for longer if they have a certain level of approval). Storing that info under the page of the Wikipedian who filled in the form as suggested by Bryce (except under program control) is a good way to do it. Then, users can judge for themselves whose opinions they value and whose they don't. This option is available to anyone who is logged in as a specific user (and not to anonymous users), so the software would know to update the "Lee Daniel Crocker/Well written" page when I checked that box.


This software could be very simple--just present the form, and add lines to the appropriate page, which is just an ordinary Wiki page. The "Lee Daniel Crocker/Well written" page, the "Larry Sanger/Copyright status verified" page, the "Magnus Manske/Factually accurate" page, and the "Bryce Harrington/Interesting subject" page are themselves subject to approval by anyone, and their value can be judged by that.

--User:Lee Daniel Crocker


Implementing an internal equivalent of something like Google's PageRank might be a useful alternative to the approval system. This would effectively rate an article based on how well linked it is and how well ranked the items that link it are. Not only does this scale well (as Google has shown), and work automatically, it is effectively the equivalent to the mental heuristic humans use to establish authority (that is, an expert is one because other people say he is).
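A minimal sketch of the link-based ranking suggested above, in the spirit of PageRank: each article's score is fed by the scores of the articles that link to it. All names here (the function, the example articles, the damping constant) are illustrative assumptions, not anything from MediaWiki.

```python
def link_rank(links, damping=0.85, iterations=50):
    """Iteratively rank pages by how well-linked they are.
    `links` maps each page title to the titles it links to."""
    pages = set(links)
    for targets in links.values():
        pages.update(targets)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page in pages:
            targets = links.get(page, [])
            r = damping * rank[page]
            if targets:
                for t in targets:        # share rank among outgoing links
                    new_rank[t] += r / len(targets)
            else:
                for p in pages:          # dangling page: spread rank evenly
                    new_rank[p] += r / len(pages)
        rank = new_rank
    return rank

# A well-linked article ends up ranked above an orphan stub.
ranks = link_rank({
    "Physics": ["Isaac Newton"],
    "Calculus": ["Isaac Newton"],
    "Isaac Newton": ["Physics"],
    "Orphan stub": [],
})
```

As the commenter notes, this automates the heuristic that authority flows from being cited by other authorities, and it scales without any human reviewers at all.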


In my opinion there should be a separate project that is quality controlled and has approved articles. By having a separate domain and project name, people on wikipedia automatically see the latest revision, while the people at the new site always see the approved revision. With my idea, to make it simpler for newbies, there would be just one affiliated approving project. The staff would consist of administrators, qualifiers, and experts.

To be an expert, you would send in your qualifications such as your degree, job experience, articles written, etc. The necessary qualifications would be fairly small, but your qualifications are assigned a number from 1-5 (which is always displayed in front of your name), and people may filter out articles approved only by barely qualified experts.

The qualifiers contact the university where the expert got their degree, check the archives to see if the expert really wrote the article they claimed to write, etc. The administrator's sole duty is to check the work of the qualifiers and to approve new qualifiers.

Under my idea, any articles that are "stable" (as defined by the formula above) are considered community approved, and as such, are sent to a page on the project (much like the recent changes page) which does not show up in search results and is out of the way, but accessible to anyone.

To actually get approved, the article must be checked for plagiarism by a program, grammar checked by an expert in the language the article is in, and finally, get checked by experts in the subject matter. When the article finally gets approved, all of the registered users who edited the article are invited to add their real or screen name to the list of authors, and the experts get their names and qualifications added in the experts' section at the top of the article.

Even when the article gets approved, people can still interact with it. Any user can submit a complaint about an article if they feel it is plagiarizing or is biased, or they can submit a request to the grammar checkers to fix a typo, and experts may and are encouraged to put their stamp of approval on an already approved article.

An article is rated the same as the highest-rated expert who approves it. For example, someone searching for Brownies might see an article that is titled like this:

4:Recipe for Brownies

This means that at least one expert rated four approved of the article. A person searching for an article specifies how low they are willing to let the ratings go.

-- Techieman


First, I clicked on the edit link for "Why wikipedia doesn't need an additional approval mechanism" (to fix a typo), and got what appeared to be the TOP of the article, not the section that I was trying to link. I had to edit the full article to make the change.

Second, someone may have suggested this already (since I only read Talk halfway down), but we might want to test multiple approval techniques simultaneously, although I like the heuristics idea. You might then have a user option or button for showing the heuristically-approved revision of an article. Or, perhaps you could specify the approval level that you want, and the system would show you the most recent revision that meets or exceeds that level (or the best revision, if the level isn't met). Scott McNay 07:18, 2004 Feb 11 (UTC)

Scoring articles


I have a proposal to address this. It seems that the mess in VfD and cleanup is due to the disparities among Wikipedians about the quality of articles in Wikipedia. Those called inclusionists, including me, tend to defend keeping articles that are even less than stubs, while deletionists are inclined to maintain the quality of Wikipedia as a whole, even if it takes getting rid of articles that are adequate stubs but contribute to making Wikipedia look cheesy. It seems to me that the problem is rather that every single article is treated equally. Some articles are brilliant prose while some are crap or bot-generated.

Anyway, my proposal is to evaluate all articles on a scale of 1-5: 5 means brilliant prose, 4 a peer-reviewed, copyedited article, 3 a draft, 2 a stub, and 1 less than a stub or nonsense. This puts a lot more burden on Wikipedians, but we really need some kind of approval system. The growth in numbers is not matched by growth in quality. I am afraid that the vast number of nonsense and bot-generated articles makes Wikipedia look like trash. It is important to remember that readers might make a quick guess about the quality of Wikipedia only by seeing stubs or less than stubs. -- Taku 23:04, Oct 11, 2003 (UTC)

See Wikipedia:Wikipedia approval mechanism for prior discussion of this point. See wikipedia:bug reports to submit a feature request. See wikitech-l to volunteer to help develop MediaWiki and code your request yourself. Please don't submit feature requests to the village pump. Martin 00:48, 12 Oct 2003 (UTC)

This is not a feature request and I have already read Wikipedia approval mechanism. -- Taku

The argument isn't usually about the quality of the text, it's usually about the appropriateness of the topic. These kinds of debates will go on until we formally decide whether or not Wikipedia is the appropriate place for every postal code in the world, or any random elementary school, or any professor who has written a paper, or any subway station in any town, or anyone who gets 20 hits on Google, etc. I'd try to organize some sort of formal decisions on these topics but I'm not sure I have the energy... Axlrosen 14:49, 12 Oct 2003 (UTC)

We have had this debate on the mailing list several times (this username is just a pseudonym for another username). Most people don't want an approval mechanism. That would ruin the wiki-ness of it. ++Liberal 16:26, 12 Oct 2003 (UTC)

The proposal is not yet another approval mechanism, but simple editorial information. Scoring is intended only to help improve poorly written articles.

I also favor listing the primary author and reviewers of the article. I often check the page history to find out who is primarily responsible for the content. It is often convenient to contact such a person to discuss facts or POV issues. The article would look like this:

Takuya Murata is bahaba
....

The author:Taku, reviewed by Taku. The article is scored 4.

However, I guess people just don't like things that sound like approval at first glance, without looking at the details. -- Taku

Do I understand correctly, that the primary purpose of the scoring would be to alert other users that an article needs help? And that the score would be visible on Recent Changes and on Watchlist? Or would you want it to only be visible once you click the link to the article?
Hmm. If one were to have a range of scores on different aspects of the article, it would in fact amount to a software fix that would merge Wikipedia:Cleanup into Recent Changes, thus making it (Cleanup) obsolete. I could definitely get behind that, if the developers have enough time, and they think it worth their while.
Double-hmm. While we are waiting for a software feature to allow this, why don't we try to implement this on a trial basis at Cleanup, add a score element to the comment tags, eh? I know it isn't quite what you originally suggested, but it could provide some guidance as to how such a feature would be used by editors, eh? -- Jussi-Ville Heiskanen 07:16, Oct 15, 2003 (UTC)

I think you are along the same lines as my idea. It seems the problems seen particularly in VfD originate from the situation where every article is treated equally. The truth is that some articles are very poorly written, and some are completely ready to be read by general readers. I would love to see features like low-scored articles not popping up in Google results. Such features would allow us to keep articles of low quality without making Wikipedia look like a heap of trash.

I also think it is important to store some editorial information in an article itself, to avoid duplicated information. Many articles are listed on VfD over and over again, and the reason is quite often the low quality of the article rather than the question of its existence. Unfortunately, tons of articles remain as stubs for months, which however cannot be used as a justification for deleting such articles.

Scoring is very similar to the stub caveat, with more extended and extensive use.

-- Taku

Keep it simple...


To me the issue that we are trying to prevent is abuse of trust (e.g., vandalism). It would seem to me that to do that, it would be better to improve the means by which improper changes are detected.

Rather than a complex approval process, simply make it possible for one or more users to "sign off" on any given version, and allow filtering in the Recent Changes for articles that haven't been signed off by at least a given number of registered users. Then, users on recent changes patrol can "sign off" on what appear to be valid changes, allowing reviewers who are primarily concerned about combating malicious changes to more readily identify which articles still need to be looked at, and which have already been found to be ok.

While malicious edits do occur from logged-in users as well, this does seem to be much less frequent, and they are very easy to spot if you show who has signed off on a change in Recent Changes.

There's no need for a "disapprove" on a revision. If you disapprove, you should fix the article, or in the case of a dispute, start a discussion.

- Triona 23:08, 8 Sep 2004 (UTC)
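A minimal sketch of the sign-off filter Triona describes, assuming revisions and sign-offs are available as simple in-memory structures (the function name, record shapes, and threshold are all hypothetical):

```python
def needs_review(changes, signoffs, min_signoffs=1):
    """Return the revision IDs from a Recent Changes list that fewer
    than `min_signoffs` registered users have signed off on, so
    patrollers can focus on changes nobody has vetted yet.
    `signoffs` maps a revision ID to the set of users who signed off."""
    return [rev for rev in changes
            if len(signoffs.get(rev, set())) < min_signoffs]

recent = ["rev101", "rev102", "rev103"]
signed = {"rev101": {"Triona", "LMS"}, "rev103": {"Triona"}}

# With a threshold of two sign-offs, rev102 and rev103 still need eyes.
print(needs_review(recent, signed, min_signoffs=2))
```

Raising `min_signoffs` trades patrol effort for confidence; with a threshold of one, a single patroller clearing a change removes it from everyone else's queue.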

Approval


We already have an approval mechanism. It's called a wiki. This measure will, I hope, never pass.Dr Zen 04:54, 14 Dec 2004 (UTC)

Agreed. Awful idea as is. Maybe if certain pages had changes on an hour delay, where they would automatically be "approved" without human intervention. Even then it's a pretty bad idea that seems just to appease a few critics rather than deal with a genuine issue. Just IMHO. I'd rather see no approval system. --Sketchee 02:24, Dec 30, 2004 (UTC)

A thought, some want to defer for "expert" approval. If that's what we wanted, wouldn't we get an expert to write each article in the first place? --Sketchee 01:46, Jan 30, 2005 (UTC)

If you want approval, go to some other encyclopedia. As far as I can see, all encyclopedias are comparable to each other - the wiki is a good enough approval mechanism. Brianjd 05:33, 2005 Feb 1 (UTC)

Adding tools to help evaluate an article's reliability?


it seems that any proposal is either too authoritarian or not authoritarian enough (and therefore of no effect). there's no need to declare some people 'experts' and others mere 'users'; this is just insulting. and the validation issue is all tied up with the version-1.0 deal, so the purpose is somewhat confounded. flagging versions for inclusion in a permanent archive is a very different task from indicating the reliability of an article.

Well, this may not be the place to say it, but one area of the software that could be improved (and that might obviate the need for validation) is the 'review history' function: enabling ALL users (expert or not) to better evaluate the reliability of an article's contents. it would be great to be able to query an article's edit history, to know for example which parts of the article are most actively debated and which are not. A few ideas -

  1. something like a thermometer function - a gauge of the activity level. one could highlight a block of text and query - what's the temp here? - with a right-mouse-button-click-enabled option. the activity would mean different things in different fields - of course. so that, in a discussion of history or literature it may signal debate or uncertainty while in computer science or business news it may indicate that the article is up to date. but in either case the temperature would be useful.
  2. graphs of article activity over time - showing numbers of characters edited per hour or edits saved or users active or page views, etc.
  3. option to include the 'most recent X article edits' in the main page view. so that, when reading an article, you can see in-line text that indicates what the most recent X edits were - (this view could equivalently be a mirror page analogous to the talk page) - .. at least this way (a) any controversy or questionable content is immediately evident to the user (b) the user cannot ignore the fact that the information is not permanent truth and is subject to change (c) the user knows at what stage of editing an article is - organization, content, proofreading, etc.

there must be something missing since so many people are complaining about the lack of validation - i think the response should be, decide for yourself how reliable it is!! and, here is some information that might help.


... (a sidenote) ...

i administrate a medical reference wiki - a medical encyclopedia - and we are having a lot of discussion over this issue re our site. validation is perhaps more important because doctors don't want to treat patients based on information that may or may not be 100% correct. they don't want to mis-diagnose because of a typo. so here, the validation seems *very* important, particularly for the articles with less activity. Insofar as wikipedia is more or less recreational (and given that it's already gained so much respect), I wonder what the purpose of the validation is?

i bring this up because we're considering a system whereby an author takes ownership of an article - .. signs up as a volunteer steward or moderator of the discussion and content of the article. the person would describe his or her credentials and, unless there's an objection, agrees to be responsible for the article's contents. Any user can interact with any page as in wikipedia, except that (1) a moderator can block a particular user from editing a particular page (in the case of edit abuse) (2) the moderator can indicate how mature the article is so that readers can know the strength of recommendations (3) user knows whether there is a moderator and how qualified the moderator is. (4) there's a moderator-only-view of the page that helps the moderator keep track of changes, etc. (5) the moderator can comment on the article text in a way that's separate from the article's other contents (show/hide moderator comments). but we would hope to have the moderator's role be otherwise invisible. And actually, because of the way our articles are structured, we want to be able to do this at either the section or the article level (depending on the moderator's preference) - but that's more of an implementation issue.. i share this with the hope that the example will help to think through the issues as they relate to wikimedia ..

one issue we haven't solved (and we would welcome comments) is whether to allow several moderators .. or just one.

the idea was to take the course of successful article development as it occurs on wikipedia and encourage it at our site by formalizing the process and adding tools to make it easier. the fact that formalizing the process gives the impression of authority is not surprising since many wikipedia articles evolved this way and many of them are authoritative.

"Better"?


"Generally, Wikipedia will become comparable to nearly any encyclopedia, once enough articles are approved. It need not be perfect, just better than Britannica and Encarta and ODP and other CD or Web resources."

Huh? Since when are we in competition with others? Brianjd 11:06, 2005 Jan 27 (UTC)

Static URLs

  • Large, reputable websites and the web in general may be more likely to use and/or link to our content if it has been approved by experts and, especially, if the current version of an article has a persistent URL that they can link to. Currently, only a past version of an article actually has such a URL!
This is not true. The URL "http://en.wikipedia.org/wiki/American_Revolution" will always point to the current version of the article. --Sean Kelly 18:04, 17 Apr 2005 (UTC)

On the development of Linux


I'm jumping in blind as a newbie to this discussion (and to wikidom) but I found the introduction of Linux into the discussion very interesting. To my understanding, Linux did in fact have the figure of the "benevolent dictator", Linus Torvalds, to decide which contributions were or weren't up to standards. Of course open-source code can fork, just as GFDL prose can fork. But in the end Linus still owns the brand, and once that brand acquired value through Linux's reputation, he would have been courageous indeed to adopt a wiki model of edits to code going out with the Linux name.

Would it have been legitimate for me to edit the main page and make this point? At any rate, just wanted to share that comment. -- PhilipR 20:48, 8 Jun 2005 (UTC)

Wikijunior


If Wikijunior goes ahead as planned, it will need a validation system. There will (probably) be a wiki for editing the content and a non-wiki site with validated articles for the kids. The validation system that's being discussed/developed for Wikipedia is perfect for this purpose. Since this kind of system is a requirement for Wikijunior, and just a feature for Wikipedia, it might be a good idea to use Wikijunior as a 'testing ground', to see how well the system holds up on a real wiki. Wikijunior could sure use the attention (especially from some techies), and it would be nice to test the reliability and behavior of the system, and to tweak the numbers, on a small wiki that's starting pretty much from the ground up, before deploying it on Wikipedia in all its massiveness. Discussion is going on at Talk:Wikijunior#Wikijunior dot org appearance.

(This used to be on the proposals page, but I moved it here, because this makes more sense and gets more attention)

Risk 15:02, 28 September 2005 (UTC)

Trust Metrics


http://www.advogato.org/person/raph/diary.html?start=403

i also understand trust metrics: they are, as raph says, a perfect fit for the requirements.

my opinion of the way that trust metrics have been _used_ on advogato.org is that they are too simple - too coarse a granularity for people to understand.

also, there is a disadvantage of the _implementation_ - and it is purely of the implementation - that raph has made, and it is this: the ford-fulkerson maximum flow algorithm is depth-first, and what is needed, in order to prune the number of degrees _quickly_ and also reduce CPU usage in what could turn out to be a rather large web of trust, is a breadth-first max-flow algorithm.

it's worth expanding on the over-simplification bit: advogato's trust metric controls are 3-levels deep (4 if you include no certs as a level). yet, the only _use_ made of the certifications is "do you have one at all, if so, yes, posting articles is fine: we don't care what level". _and_ - also - there is only _one_ type of certification.

wikipedia would, i believe, need several types of certification: "i trust this person to certify articles as requiring content-control". "i trust this person's opinion that this specific article requires content control".

those two certs are TOTALLY different things.

the combination is very powerful.

lkcl18dec2005.
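The breadth-first max-flow variant lkcl asks for is essentially the Edmonds-Karp algorithm: Ford-Fulkerson where each augmenting path is found by BFS, which both finds shortest paths first and bounds the number of augmentations. A minimal sketch on a small capacity matrix follows; the graph is illustrative, not an actual trust network.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly find the *shortest* augmenting path
    by BFS in the residual graph, then push flow along it."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path with remaining residual capacity
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:
            return total  # no augmenting path left: flow is maximal
        # Find the bottleneck along the path, then push that much flow
        v, bottleneck = sink, float("inf")
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck  # residual edge for later undo
            v = u
        total += bottleneck

# Two disjoint unit-capacity paths 0->1->3 and 0->2->3: max flow is 2.
cap = [[0, 1, 1, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
```

The BFS queue is what makes this breadth-first; it also visits nodes level by level from the source, which matches lkcl's goal of pruning by degrees of separation quickly.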

Subjective/individualized validation


From what I've read, it seems that everyone is focusing on producing an objective/universal or "official" validation system. This seems like a big mess to me, and I think validation should be done by individual users, for their own use.

My proposal may be difficult to implement on the software end, but I just want to put it out there for consideration. It is inspired by two existing systems:

  1. Wikipedia watch-lists
  2. Public key encryption: Wikipedia:web of trust, Web of trust

The idea is that if an individual reads an article, and he KNOWS that the information is accurate (he is an expert in the field), then he adds it to his "validation list". Anyone may then read the articles that any particular user has validated (perhaps by going to that user's webpage and choosing to use his validated pages). For example, a teacher may read a number of articles in his field, validate the ones that are good, and then refer his students to his user page where they can access the articles validated by that teacher.

This becomes particularly powerful if users can choose to trust another user's validations (perhaps limited by category). In this scenario, the entire faculty of a school could validate articles in their specialties, and then choose to trust each other's evaluations, so that anyone can use any of their accounts as an entry point to the collection of all articles validated by any of the faculty members.

This can encourage experts to contribute, because they can make edits, and then validate that particular version—knowing that they can refer others to particular pages and those pages won't change in between the time that the expert read them and the students read them.

AdamRetchless 17:32, 8 March 2006 (UTC)
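A sketch of the aggregation step this proposal would need, assuming each user's validated revisions and trust list are stored as simple sets (every name and structure here is a hypothetical illustration):

```python
def validated_via(user, validations, trusts):
    """Collect every (article, revision) pair validated by `user` or by
    anyone reachable through the trust graph, so one account can serve
    as an entry point to a whole faculty's validations."""
    seen, result = set(), set()
    stack = [user]
    while stack:
        u = stack.pop()
        if u in seen:
            continue  # trust cycles are fine; visit each user once
        seen.add(u)
        result |= validations.get(u, set())
        stack.extend(trusts.get(u, set()))
    return result

# Each validation pins a specific revision, as the proposal requires.
validations = {
    "ProfA": {("Mitosis", 1041)},
    "ProfB": {("Meiosis", 987)},
}
trusts = {"ProfA": {"ProfB"}}  # ProfA trusts ProfB's validations
```

Because the result records revision numbers rather than titles, students following a teacher's list see exactly the versions the teacher read, which is the guarantee the comment above asks for.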

Is there any way to aggregate ALL information in an article??


I'm looking for a way to have Wikipedia generate an entry for a particular article that is all-inclusive; in other words, the idea would be to display content from all the revisions, without displaying duplicate information but while still displaying all unique information from a compilation of all the revisions. It would probably also be useful to color-code specific sections based upon their level of validity -- is anyone working on this?

Thanks Karzy911

I am looking for an approval process that allows for the following:

1. A page version can have a tag/marker to indicate whether it's approved.
2. Only certain people can approve a page.
3. A reader can choose to view the newest version of the page, or the last approved version.
4. Optionally, a user may decide to only view approved versions (such as for internal audits).
5. A notification system such that the approver of the page is notified of a change, so he can go and approve it as soon as he can.
6. A report to indicate what are all the still-yet-to-be-approved pages in the system, with the ability to filter by approver.
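A minimal data-model sketch covering the tagging, permission, dual-view, and reporting requirements above (the class names, fields, and approver set are all hypothetical, not an existing MediaWiki feature):

```python
from dataclasses import dataclass, field

@dataclass
class Page:
    title: str
    versions: list = field(default_factory=list)   # version IDs, newest last
    approved: dict = field(default_factory=dict)   # version ID -> approver name

    def approve(self, version, user, approvers):
        """Tag a version as approved; only listed approvers may do so."""
        if user not in approvers:
            raise PermissionError(f"{user} may not approve pages")
        self.approved[version] = user

    def view(self, approved_only=False):
        """Newest version, or the newest *approved* version."""
        for v in reversed(self.versions):
            if not approved_only or v in self.approved:
                return v
        return None

def pending(pages):
    """Report pages whose newest version is not yet approved."""
    return [p.title for p in pages if p.view() not in p.approved]

approvers = {"Alice"}
page = Page("Audit policy", versions=[1, 2])
page.approve(1, "Alice", approvers)  # version 2 is newer but unapproved
```

The notification requirement (item 5) would sit on top of this: whenever a new version is appended, any page whose `view()` result is missing from `approved` is exactly the set to notify approvers about.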
