Talk:Community Tech/OCR Improvements


Launch of project: First round of feedback (January 2021)

Hello, everyone! We have just launched the project for OCR Improvements! With this project, we aim to improve the experience of using OCR tools on Wikisource. Please refer to our project page, which provides a full summary of the project and the main problem areas that we have identified. We would love it if you could answer the questions below. Your feedback is incredibly important to us and it will directly impact the choices we make. Thank you in advance, and we look forward to reading your feedback!

Have we covered all of the main OCR tools used by Wikisource editors?

  • So far as I can tell you've covered all the interactive OCR tools that the different language Wikisourcen recommend to their users on-site (the ones I've heard of, but I haven't researched it by any means).
    But note that this does not cover all use cases for OCR in connection with Wikisource. For example, there are some old shell scripts provided on enWS for adding an OCR text layer to a DjVu before upload, and at least Inductiveload and I have developed custom tools for processing a set of scan images and producing a DjVu with an OCR layer. On-wiki interactive tools represent one major category of users and uses, but the related/complementary category of users and use cases that relies on the text layer in the DjVu/PDF is not insignificant either. For this use case we're not talking about improving one tool, but rather a toolchain and infrastructure. My tool is intended to eventually become a web-based (WMCS) interactive tool to manipulate DjVu files (OCR being one part), letting power users prepare such files for other less technical users.
    Preserving fidelity of existing OCR text when extracting it from the file (on upload) and the database (when editing the page) is another pain point (text layers extracted from PDFs are notably poorer quality in MediaWiki than the same from DjVu files). For DjVu files with a structured text layer, the fidelity is also lost when stored as a blob in the metadata (imginfo, iirc) leading to needlessly deteriorated quality when extracted. And the structure provided by the text layers is not leveraged to provide advanced proofreading features (OCR text overlay on the scan image, offering mobile users single word or single line snippets to proofread, etc.).
    When you squint just right the pre-generated OCR files at IA and the existing text layer in a PDF or DjVu are just another source of OCR data (just like the Google Vision API, or a web-wrapped Tesseract service), and should fit into the overall puzzle too.
    All this stuff falls roughly within the "OCR" umbrella term, but is outside the scope of this Wishlist task as currently construed. My suggestion is therefore to keep these use cases in mind while working on it in order to 1) not waste effort developing functionality in this tool that is really just a workaround for something that should be fixed elsewhere, and 2) to create tasks that leverage the research and experience you accumulate on other components where the real solution lies. Personally I would love to see some attention paid to the path from an upload with a text layer, extraction and storage in the database (multiple forks / MCR, or other improved storage), and fetching and presentation (a non-imginfo based API for ProofreadPage or even a Gadget or user script to get at the structured text layer?). --Xover (talk) 08:21, 22 January 2021 (UTC)Reply
    @Xover: Apologies for the late response, and thank you for explaining this! First, just so we understand correctly, why do you use these shell scripts as opposed to on-wiki OCR tools? Is it for bulk OCR? Or for better support for certain languages? We are asking because we want to understand what the current on-wiki tools are not providing, which you may get with certain off-wiki OCR tools. Also, thank you for providing detailed information on the benefits of DjVU files. Like you wrote, improving the workflow of storing OCR text in a DjVu file (which is then brought over to Commons) is probably outside the scope of the wish proposal. However, it is very useful for the team to be aware of the fact that Wikisource users also depend upon off-wiki OCR tools in some cases, as this can give us a more holistic understanding of the range of tools available. Furthermore, we encourage you to share any insights regarding how we can improve the on-wiki tools over the course of the project, since that will be our focus. Thank you so much! --IFried (WMF) (talk) 22:07, 3 March 2021 (UTC)Reply
    @IFried (WMF): The shell scripts (s:Help:DjVu files/OCR with Tesseract) are old guidance to give contributors a way to add an OCR text layer for a page. It is still occasionally used by some contributors, but is by no means a primary solution. My custom tool is primarily designed to do bulk OCR, but its use case spans a bit wider. It generates a DjVu file with an OCR text layer from a directory of scanned page images. It has three main goals: 1) to create a new OCR text layer when one is missing, of poor quality, or corrupt (cf. T219376 and T240562); 2) to improve the image quality of a DjVu file because (e.g.) IA's DjVus are excessively compressed or scaled down (and we need the highest fidelity page images we can get in many cases); and 3) to generate specifically a DjVu file rather than other possible formats (both due to MediaWiki's PDF handler doing a really bad job extracting the text layer from PDFs, and because DjVu files can more easily and reliably be manipulated when we need to insert, remove, or shift pages, redact an image or other part that is copyrighted, etc.). In addition to this the custom tool lets me control aspects of the DjVu generation (bitonal vs. DjVuPhoto) and Tesseract. For example, since I can control page segmentation mode and language settings I can deal with things like s:Page:Konx Om Pax.pdf/16.
    I also have my own online OCR gadget, backed by a WMCS webservice that uses my own code (wrapping Tesseract) where I am experimenting with various features that, if they work out, may be useful. For example a switch to automatically unwrap lines within a paragraph, including combining hyphenated words; detecting and removing the first line if it represents the page header; educating or straightening quotation marks; etc.. I am also investigating the possibility of interactively selecting a portion of the page image (rectangular marquee) to OCR. This is useful for multi-column or other constellations of text where default OCR may guess incorrectly and combine lines awkwardly, or when a page contains multiple languages (for example a primarily English work that embeds passages of Indic, Hangul, Arabic, etc.). I'm also planning on offering multiple output formats from the backend service, so that those who want to make a specialized tool can ask for hOCR output. Since hOCR contains information on page geometry down to the character box level, that would enable things like overlaying the OCR text on the page image for direct comparison, or showing just a small portion of the page (a single sentence, or word by word) on a mobile phone (where full-page proofreading is effectively impossible today). I also plan to investigate automatically adding wikimarkup to the output, but that's on hold due to Tesseract lacking support for font variants (bold, italic, etc.; which are probably the most useful things to automate, since spotting italics in particular is often hard when proofreading). --Xover (talk) 08:47, 4 March 2021 (UTC)Reply
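For illustration, here is a minimal Python sketch of the kind of post-processing described above (unwrapping lines within a paragraph, joining hyphenated words, straightening quotation marks, optionally dropping a running header). This is not the actual gadget or backend code; the function and its behaviour are illustrative only, and naive dehyphenation like this will also join genuinely hyphenated compounds.
<syntaxhighlight lang="python">
def clean_ocr_text(text, drop_first_line=False):
    """Toy post-processing of raw OCR output (illustrative only)."""
    lines = [line.rstrip() for line in text.splitlines()]
    if drop_first_line and lines:
        lines = lines[1:]                     # e.g. when line 1 is a running header
    out = []
    for line in lines:
        if not out or not line or not out[-1]:
            out.append(line)                  # keep blank lines as paragraph breaks
        elif out[-1].endswith('-'):
            out[-1] = out[-1][:-1] + line     # join a word hyphenated across lines
        else:
            out[-1] = out[-1] + ' ' + line    # unwrap lines within a paragraph
    text = '\n'.join(out)
    for curly, straight in (('“', '"'), ('”', '"'), ('‘', "'"), ('’', "'")):
        text = text.replace(curly, straight)  # straighten quotation marks
    return text

print(clean_ocr_text("A long para-\ngraph wrapped\nacross lines.\n\nNext paragraph."))
</syntaxhighlight>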
  • @Xover and IFried (WMF): This conversation gives me an idea. What if we were to write our own version of Google Recaptcha using the precise positions from hOCR? The New York Times used Google Recaptcha to digitize its own back issues [1]. We could enable the Wiki Recaptcha for all non-user edits and make the vandals help us proofread books. We could also allow users to toggle Wiki Recaptcha if they want to help us proofread one word at a time. Certainly, we'd still have to add formatting and do a proofread, but it could really help, especially with older texts. Languageseeker (talk) 05:29, 12 March 2021 (UTC)Reply
  • For me, I see these different tools: 1) Google OCR (User:Alex brollo/GoogleOCR.js), 2) Tesseract OCR (User:Putnik/TesseractOCR.js), 3) OCR (User:Ineuw/OCR.js), 4) the native OCR — Koreller (talk) 17:55, 18 February 2021 (UTC)Reply
@Koreller: Thank you for providing these examples! --IFried (WMF) (talk) 22:08, 3 March 2021 (UTC)Reply
  • on frwikisource, because of the long-term unavailability of the (Tesseract) OCR, we got into the habit of asking a contributor, who has ABBYY as a personal OCR, to OCR books before importing them to the project: the OCR is improved, but it means relying on the goodwill of a single contributor to get good OCR. - Personally, I also have the habit of uploading (PD) books to Archive.org, to get them OCR-ed by ABBYY, then importing them to Commons through IAupload, which converts files to DjVu on the fly, but it's too complex a process to rely on for new contributors, who simply want to upload a book and correct it. If ABBYY could be made available (maybe with conditions) as a subsidiary tool in the import process of a file, it could be a real improvement. --Hsarrazin (talk) 08:36, 9 March 2021 (UTC)Reply
    @Hsarrazin: Thank you so much for this feedback! We appreciate you explaining in detail why some users have turned to ABBYY, as well as the complications in such a workflow. We don’t know if we can add ABBYY (since it is normally a paid service), but we will investigate if it is possible. Meanwhile, we can aim to improve accessibility of Tesseract. In fact, we are now working to add Tesseract to Wikimedia/Google OCR. So, we have one follow-up question: Can you let us know if Tesseract is still unavailable for you? If so, can you provide more details on how it is not working for you? Thanks! --IFried (WMF) (talk) 18:10, 15 April 2021 (UTC)Reply
    in fact, Tesseract came back online a few months ago (see the Phabricator ticket), and is now running, since Phetool has been debugged - it seems there was a problem with non-purged work files... so yes, Tesseract is now working fine - However, the possibility to use ABBYY for some complex files would be welcome :) --Hsarrazin (talk) 18:51, 16 April 2021 (UTC)Reply

Have we covered the major problems experienced when using OCR tools?

RTL text

I'd like to add an issue unique to RTL languages (such as Hebrew & Arabic). On the Hebrew Wikisource, the OCR gadget often fails to render punctuation marks properly, treating them as LTR text within the general RTL text flow. This causes problems when proofreading the text, even though initially no issue is apparent to the viewer. The OCR gadget inserts erroneous BIDI markup characters around the punctuation marks.

Recommended solution: Allow the user to select the language of the document being OCRed, and make it RTL by default on RTL-language Wikisources.

--Thank you, Naḥum (talk) 09:48, 18 January 2021 (UTC)Reply
@Nahum: Thank you so much for this explanation! From our understanding, the issue is that punctuation (which should be RTL) is being expressed incorrectly as LTR in some cases. This is wrong and it makes it very difficult to proofread. We agree that this is a big problem and would like to investigate it more deeply. In that case, can you provide us some specific examples that we can look into? Thank you in advance! --IFried (WMF) (talk) 22:10, 3 March 2021 (UTC)Reply
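For reference, the symptom described above (invisible directional marks wrapped around punctuation) can in principle be cleaned up in post-processing by stripping Unicode bidi control characters. This is a hedged sketch; exactly which marks the gadget emits is an assumption, so the character set below may need adjusting.
<syntaxhighlight lang="python">
import re

# Unicode bidi control characters (LRM, RLM, LRE/RLE/PDF/LRO/RLO, LRI/RLI/FSI/PDI)
# that sometimes end up wrapped around punctuation in OCR output of RTL text.
BIDI_CONTROLS = re.compile('[\u200e\u200f\u202a-\u202e\u2066-\u2069]')

def strip_bidi_controls(text):
    """Remove invisible directional marks; the page's own RTL direction then applies."""
    return BIDI_CONTROLS.sub('', text)

sample = 'שלום\u200e,\u200e עולם'             # LRM marks hugging the comma
print(strip_bidi_controls(sample))            # -> 'שלום, עולם'
</syntaxhighlight>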

Automatic/batch OCR for Indic OCR

  • Latin-language Wikisources have automatic/batch OCR: a bot runs the phe OCR tool and creates a new text layer for every PDF/DjVu file. But there is nothing like this on the Indic-language Wikisources and other non-Latin Wikisources. You may look at the phe OCR tool status page and find the Indic languages shown as running, but no text layer is created by this job. We always depend on the OCR4wikisource Python script, which breaks the standard Wikisource workflow. We want this kind of automatic/batch OCR by GoogleOCR/Tesseract for the Indic-language Wikisources, triggered when we create an Index: page.

Since last month, December 2020, the Internet Archive has started batch OCR for Indic languages with Tesseract OCR; for example, https://archive.org/details/beng-1-1872, where they have created FULL TEXT and PDF WITH TEXT. We want this kind of batch process. --Jayantanth (talk) 15:30, 20 January 2021 (UTC)Reply

@Jayantanth: Could you please get in touch with me on my user talk page at English Wikisource with 1) a link to a specific file on Commons that Phe's OCR fails on, 2) the specifics of the language and script it is in (Bengali in Indic script?), 3) a detailed description of how you invoke Phe's OCR on that file (what actions the user takes, what buttons are clicked on, etc.), and 4) as detailed a description as possible of the result you expected and the result you actually got. I have access to the Phetools Toolforge project and would like to try to debug this problem (but I have zero familiarity with any language usually represented in Indic scripts so I will need help navigating there). --Xover (talk) 08:34, 22 January 2021 (UTC)Reply
@Jayantanth: Thank you so much for this comment! From what we gather, you are saying that Latin language Wikisources have an automatic batch OCR. However, Indic language (and other non-Latin language) Wikisources do not have a functioning automatic batch OCR tool. We think this is a really important issue to look into, and we would like to fix this. As a team, we will be investigating if we can provide bulk OCR via the Google/Wikimedia OCR tool. If we did this, would this be a good solution for you? Also, your idea about making automatic bulk OCR available upon index creation is interesting and we’ll discuss it as a team. We look forward to your feedback and thank you in advance! --IFried (WMF) (talk) 22:12, 3 March 2021 (UTC)Reply
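As a rough illustration of what per-index bulk OCR amounts to on the backend (running Tesseract with the right language model over every page image), here is a small sketch. It assumes pytesseract and the Bengali ('ben') traineddata are installed, and the directory name is hypothetical; it is not the phetools or OCR4wikisource code.
<syntaxhighlight lang="python">
import pathlib

import pytesseract                  # assumes Tesseract and the 'ben' traineddata are installed
from PIL import Image

def batch_ocr(page_dir, lang='ben', out_dir='ocr_text'):
    """OCR every page image in a directory, writing one .txt file per page."""
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    for page in sorted(pathlib.Path(page_dir).glob('*.jpg')):
        text = pytesseract.image_to_string(Image.open(page), lang=lang)
        (out / (page.stem + '.txt')).write_text(text, encoding='utf-8')

batch_ocr('beng-1-1872_pages', lang='ben')   # hypothetical directory of page scans
</syntaxhighlight>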

Not so visionary

You have indeed described "the major problems experienced when using OCR tools", but this is not enough. Some problems that need to be addressed, now or later, have not yet been fully experienced.

  • When OCR is mostly correct, but columns are misinterpreted, e.g. lines are interleaved across columns, the OCR tool user interface should show where the columns are, so that the user can redraw the columns and then ask for a new OCR within the new column definitions. (When you run ABBYY Finereader Professional on a stand-alone PC, this is part of its user interface.) For this to be implemented, the OCR software needs to output where its columns are.
    Agree, adding that not only lines interleaved across columns but also pictures may be a problem. With interleaved lines like titles (see e.g. here) the columns usually follow thus: left column above the title–right column above the title–the title–left column below–right column below. With pictures (see e.g. here) the succession of columns is usually different: left column above the picture–left column below–right above–right below. Enabling the user to redefine the columns would really help when proofreading newspapers and magazines. --Jan Kameníček (talk) 09:28, 9 March 2021 (UTC)Reply
@Jan.Kamenicek: Thank you so much for this feedback! We also agree that the multiple-column issue is a pain point for users. For this reason, we have launched an investigation to see how the issue can be improved (T277192), which the team engineers are looking into. Meanwhile, we have heard from other users about the benefits of ABBYY. We don’t know if we can add it, since it is normally a paid service, but we will investigate. If we have any updates on this issue, we’ll be sure to share them on the project page. Thank you again! --IFried (WMF) (talk) 18:12, 15 April 2021 (UTC)Reply
  • OCR needs to work well on very large pages with many columns, e.g. newspapers. The Internet Archive announced in August 2020 that they are exploring this.
  • When pages are proofread, the words that are corrected need to be fed back into the OCR process. If the OCR text contains bam because barn was missing from its dictionary, my correction of bam to barn should feed barn into the OCR dictionary. Other pages with OCR text that contains bam, and that have been OCRed with the same old dictionary, also need to be updated. This requires a whole new level of bookkeeping. The problem is that we regard the OCR process as an unknown black box. We don't fully control which dictionary it uses. For this to work, we need to know a lot more about how the OCR process works.

--LA2 (talk) 18:41, 4 March 2021 (UTC)Reply

@LA2: Thank you for this comment! First, regarding looking into multiple column support, we have already begun investigating this issue (T277192), and we’ll try to look into what the Internet Archive is doing as well. Second, regarding your suggestion about updating the dictionary, this is probably out of the scope of the project, unfortunately. However, if this interests you, we encourage you to submit it as a separate wish in the 2022 survey later this year. --IFried (WMF) (talk) 18:14, 15 April 2021 (UTC)Reply

Which OCR tools do you use the most, and why?

@Koreller: Thank you for this feedback! This is great to hear, especially since we are considering doing work to specifically improve Google OCR. In that case, what do you think are the top things that need to be improved about Google OCR? Thank you in advance! --IFried (WMF) (talk) 22:13, 3 March 2021 (UTC)Reply
@IFried (WMF): I think it would take:
  • (important imo) make the tool native (i.e. accessible by default)
  • (important imo) select only an area to run OCR on (for example, to OCR a page with two columns, or only part of a page)
  • remove the hyphen "-" from hyphenated words when OCR is used
  • transform the straight apostrophe into a curved (typographic) apostrophe
  • (important imo) maybe it is possible to expose settings for the OCR? (I don't know if it is possible, but if it is, it could be interesting) — Koreller (talk) 20:43, 4 March 2021 (UTC)Reply
  • I sometimes use Google OCR, but I mostly work on archive.org books, where I find an excellent structured OCR (_djvu.xml, and recently hOCR). When dealing with texts that don't come from archive.org, I use a personal ABBYY FineReader application.--Alex brollo (talk) 09:12, 3 March 2021 (UTC)Reply
@Alex brollo: Thank you for sharing this! Can you let us know more about when you choose to use Google OCR vs. when you choose to use another tool, such as on archive.org? Perhaps you can give us some specific examples? We are asking because we hope to improve Google OCR during the project, so we would like to identify its greatest weaknesses and pain points for users at this time. Thank you in advance! --IFried (WMF) (talk) 22:14, 3 March 2021 (UTC)Reply
@IFried (WMF): I always use the IA OCR (wrapped into archive.org DjVu files or, by now, into DjVu files built by the IA Upload tool), except in rare cases where the column segmentation of the text is wrong (when the archive.org OCR engine guesses nonexistent columns; not an unusual problem in play texts in verse) or when I work on pages that I didn't upload personally. My present "loading style" is effective but a little difficult to explain; in brief, I download the archive.org _djvu.xml file and work on it offline, then upload the resulting text into the Page namespace by bot or with the MediaWiki Split tool.
Using Google OCR, I noted an excellent character recognition, but sometimes some words/some groups of words are moved away from their right place - a very annoying thing. --Alex brollo (talk) 16:30, 4 March 2021 (UTC)Reply
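For readers unfamiliar with the workflow Alex brollo describes, the IA _djvu.xml file can be mined for per-page text with standard XML tooling. This is only a sketch; the element names (OBJECT/LINE/WORD) follow the usual DjVuXML structure but may need adjusting for a particular file, and the file name is hypothetical.
<syntaxhighlight lang="python">
import xml.etree.ElementTree as ET

def pages_from_djvu_xml(path):
    """Yield one plain-text string per page (<OBJECT>) of an IA *_djvu.xml file."""
    root = ET.parse(path).getroot()
    for page in root.iter('OBJECT'):             # one OBJECT element per scanned page
        lines = []
        for line in page.iter('LINE'):
            words = [w.text or '' for w in line.iter('WORD')]
            lines.append(' '.join(words))
        yield '\n'.join(lines)

for number, text in enumerate(pages_from_djvu_xml('example_djvu.xml'), start=1):
    print(f'== page {number} ==')
    print(text[:200])                            # preview the first characters of each page
</syntaxhighlight>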
  • on frwikisource, we only have the Phetool OCR (Tesseract), which, after having been unavailable for months (almost a year I think), has finally been fixed, a few months ago... thanks to the nice devs who finally found what the problem was (something about not emptying the memory, if I remember well)... - which is why many of us got into the habit of asking a contributor who has ABBYY to OCR books before we correct them, but this means relying heavily on the goodwill of a single contributor ;)

Generally, I find the Tesseract tool very reliable on recent books (19th or 20th century), generally improving on the recognition in older Gallica or Google scans - but never better than ABBYY... -- On old texts (18th century and before), no OCR is really reliable, but contributors have developed scripts that allow automatic correction of fairly frequent errors, and that makes proofreading easier... -- I would be glad to test the Google OCR, if I could activate it on frwikisource - but it is not available in gadgets. -- If it were possible to have the ABBYY FineReader tool for difficult texts (some online version), I think it could really be very interesting for difficult books --Hsarrazin (talk) 08:19, 9 March 2021 (UTC)Reply

  • I often use Google OCR which has improved very much in the last year or two. It often produces better results than original OCRs of files from HathiTrust or Archive.org. Mediawiki also extracts original OCR layers of PDF documents very badly and so I use Google OCR to replace it. There are still some serious problems with it which I will describe in the sections below. --Jan Kameníček (talk) 10:16, 9 March 2021 (UTC)Reply
  • On Bangla Wikisource, I just use GoogleOCR, but it sometimes fails to render old characters. Like a weird full-stop, or some page ends. --Greatder (talk) 05:22, 17 March 2021 (UTC)Reply

For Hebrew

  • I use a non-free (and pretty expensive) OCR software package called ABBYY FineReader. I also use it professionally (I work for an Israeli publishing house). The reason is that it is far superior to free OCR tools for Hebrew. I encounter far fewer scanning errors with ABBYY FineReader than with any free OCR tool I have tried. However, when proofreading books uploaded by others to Commons, I use the OCR gadget, because it is already there, unless the quality of the OCR is too poor to be usable.--Naḥum (talk) 09:51, 18 January 2021 (UTC)Reply
@Nahum: Thank you so much for this feedback! We are also curious to know your opinion on how Google OCR handles RTL text? Would you say that OCR Gadget does a better job than Google OCR -- and, if so, why? Also, is it possible to provide us some examples of the superior OCR quality that you find with ABBYY Finereader over the free tools? This would help us identify problem areas and see what solutions may be possible. Thank you in advance! --IFried (WMF) (talk) 22:15, 3 March 2021 (UTC)Reply

For Indic wikisource

@Jayantanth: Thank you for this feedback! Between Indic OCR and Google OCR, which tool do you think is the best, in your experience? When do you choose to use one tool over the other? Thank you! --IFried (WMF) (talk) 22:17, 3 March 2021 (UTC)Reply

For Neapolitan wikisource

Book to Test Support for Non-English

Stumbled across this book that has over 500 languages (higher quality scans, lower quality scans). It's probably a great way to test multilingual support. Languageseeker (talk) 05:42, 13 March 2021 (UTC)Reply

What are the most common and frustrating issues you encounter when using OCR tools?

  • the default OCR works well, but google is superior for marginal scans, or non-Latin characters. texts before around 1870 are harder for the OCR, introducing more errors. OCR errors tend to be systematic per work, leading some to use find-and-replace for repeated errors. two-column texts are a problem, requiring much hand zipping Slowking4 (talk) 02:39, 18 January 2021 (UTC)Reply
@Slowking4: Thank you for this information! Overall, do you tend to use OCR Gadget (“basic OCR”) or Google OCR more often, and why? Also, thank you for providing information on how Google OCR tends to work better for older scans and non-Latin languages. However, we also understand that support for older books and multiple column texts is currently lacking. For this reason, we want to analyze if we can improve multi-column books. We don’t know yet, but we’ll see! Thank you and we look forward to hearing your response to our question on whether you prefer OCR Gadget or Google OCR. Thank you! --IFried (WMF) (talk) 22:19, 3 March 2021 (UTC)Reply
thanks for the effort. i will use basic OCR first as it handles 2 columns better. but will test if google OCR is better for a work's scan, and then use it to get an un-proofread version, (red) to improve later. hard to determine the quality of the text layer, except by trial and error. sometimes, after saving an un-proofread version, then we will paste in a scrape from gutenberg. for works with a lot of greek and latin characters, (natural history survey books) google is better. for works with a lot of French accents, google is better also. the basic OCR seems to like modern fonts, so for older editions, (around 1870) google is better. for really bad scans, (around 1840) then neither work well, and text is from zero, like handwriting. (i.e. [2]) for tables, and math equations, neither work well, so we have to do by hand. google OCR is slower as basic loads on opening the new page, and for google you have to press the button. (this is English Wikisource) Slowking4 (talk) 23:35, 3 March 2021 (UTC)Reply
@Jayantanth: Thank you for sharing this! Can you provide more information on what is not good about the OCR? If you have specific examples of the errors or issues you are seeing, that would be very helpful. Thank you! --IFried (WMF) (talk) 17:20, 9 March 2021 (UTC)Reply
  • The OCR output for 2-column or multi-column pages is not as expected. The output is expected column-wise, but the actual output of the OCR is line-wise: first line from the first column, first line from the second column, then second line from the first column, second line from the second column. The desired result is the first, second, third, etc. lines from the first column, followed by the second column. In the case of dictionaries this is very important; without column recognition the OCR result is practically useless, with so many rearrangements needed to correct it. Currently we type it directly. Previously the tool OCR4wikisource (by @Tshrinivasan:) had an option to specify the number of columns in the image, and the tool would split up the image vertically, send the pieces to the OCR, collect the data sequentially and then give us the desired output. Using this approach on Tamil Wikisource we have OCRed many dictionaries with 2 columns. But, for a reason I don't know, the tool no longer supports multiple columns. Is there any way to specify the columns in other OCRs like Google or Indic OCR? -- Balajijagadesh (talk) 02:34, 27 January 2021 (UTC)Reply
    • @Balajijagadesh: Thank you so much for this feedback! We have been able to reproduce this problem in our own tests, and we understand that this is very frustrating for Wikisource editors. We have created a ticket to analyze this problem and see if there is anything we can do to improve the situation (T277192). Once we have more details on this analysis, we will share it on the project page. Thank you so much for bringing this to our attention! --IFried (WMF) (talk) 17:45, 12 March 2021 (UTC)Reply
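A minimal sketch of the approach described above, as OCR4wikisource did it: slice the page image into N vertical strips, OCR each strip separately, and concatenate column by column. It assumes pytesseract and the Tamil ('tam') traineddata are installed; real pages usually need smarter column detection than equal-width slices.
<syntaxhighlight lang="python">
import pytesseract                  # assumes Tesseract and the 'tam' traineddata are installed
from PIL import Image

def ocr_columns(image_path, n_columns=2, lang='tam'):
    """OCR an n-column page by slicing it into equal-width vertical strips."""
    page = Image.open(image_path)
    width, height = page.size
    strip = width // n_columns
    texts = []
    for i in range(n_columns):
        right = width if i == n_columns - 1 else (i + 1) * strip
        box = (i * strip, 0, right, height)      # (left, top, right, bottom) in pixels
        texts.append(pytesseract.image_to_string(page.crop(box), lang=lang))
    return '\n'.join(texts)                      # column 1 in full, then column 2, ...

print(ocr_columns('dictionary_page.png', n_columns=2, lang='tam'))
</syntaxhighlight>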
  • I often have issues with OCR on PDF files. E.g. this file was uploaded in 2019 and it is still not possible to use the OCR button on it. Google OCR works here.

And one more problem is with the dictionary - OCR services probably use some dictionary and try to correct words according to it, but some words in older texts are always replaced with different modern ones (in old Czech text v pravo (on the right), in the modern language it is vpravo, so this word is not in the dictionary, and the OCR almost always (99%) writes v právo (to the law)). JAn Dudík (talk) 15:24, 28 January 2021 (UTC)Reply

    • @JAn Dudík: Thank you so much for this feedback! During the course of this project, we will probably be focusing on improving Google/Wikimedia OCR more extensively than other OCR tools (since it is what we worked on in the past). However, if you are experiencing issues with Basic OCR, perhaps you can report it in Phabricator so we can have it documented. In that case, is there a ticket for this issue? As for the issue with correcting words, we understand that this can be very frustrating. Unfortunately, it is out of scope for this project, since we won’t be focusing on improving the actual rendering of text itself, but rather we will focus on improving the efficiency and reliability of the OCR tools. Again, thank you! --IFried (WMF) (talk) 17:47, 12 March 2021 (UTC)Reply
  • The most frustrating thing was when I didn't know, for 1-2 months, whether the existing OCR button (the native button) was really the basic one, because it didn't work, which forced me to find an alternative → Google OCR. One of the other problems is the use of the OCR tool on columns, which really doesn't work well — Koreller (talk) 17:55, 18 February 2021 (UTC)Reply
    • @Koreller: Thank you so much for sharing this! Both of the issues you described -- i.e., not knowing which OCR tool to use, and experiencing issues with multiple-columned texts -- are also issues that we have identified and will be exploring as a team. We have written about some of these issues in our first project update, but we will provide more information on them as we dig deeper into the project. In that case, please stay tuned and thank you again for your feedback! --IFried (WMF) (talk) 17:43, 19 March 2021 (UTC)Reply
  • I would like the option to get hOCR instead of plain text, since its text structure can be fixed using the text fragments' absolute coordinates; hOCR could also be used to get some formatting suggestions (font size, indents, centered text...) via local jQuery scripts. --Alex brollo (talk) 09:18, 3 March 2021 (UTC)Reply
  • It would be nice to have OCR just working normally on languages for which it doesn't have a dictionary to refer to. On Neapolitan Wikisource it is a disaster, which becomes worse when dealing with texts from the 18th century or before. --Ruthven (msg) 19:42, 3 March 2021 (UTC)Reply
    • @Ruthven: Thank you for providing this feedback! If we understand correctly, you are requesting the ability to have OCR tools not automatically determine the language, since the automatic choice is sometimes incorrect. Is that what you are saying? Once we have more information, we can look into what may be appropriate next steps. Thank you! --IFried (WMF) (talk) 16:29, 30 March 2021 (UTC)Reply
    @IFried (WMF): I don't know the technical details behind OCR software, but yes, it would be useful to teach the OCR the dictionary of a specific language. If it selects Italian for Neapolitan texts, 70% of the words will be erroneous. This means that only 30% of the words will be correct. But isn't it more precise to just recognise single characters instead of complete expressions of a given language in this case? --Ruthven (msg) 19:40, 2 April 2021 (UTC)Reply
  • Google OCR does not join lines together as required on Wikisource but leaves them separate, which causes problems with further formatting; the lines have to be joined manually or using some other tool added to the local commons.js, which is not very friendly to inexperienced newcomers. --Jan Kameníček (talk) 10:16, 9 March 2021 (UTC)Reply

Which problems, overall, do you find the most critical to fix, and why?

  • I have read above that the tech team aims at improving Google OCR tool in this project. The biggest problem I can see with this tool is that it has to be specially switched on in preferences and is not available by default. That is not a problem for experienced users, but newcomers usually do not learn about its existence in the beginning at all. I was told that the privacy policy doesn't currently allow us to turn it on by default. However, we do need an excellent OCR tool that can be turned on by default because only such a tool can help to keep the newcomers in the project. --Jan Kameníček (talk) 10:16, 9 March 2021 (UTC)Reply
    • @Jan.Kamenicek: Thank you for sharing this! We completely agree that, ideally, there should be an OCR tool turned on by default. This would be much better behavior for both newcomers and all participants in the project. We will investigate if this is possible in the project. Also, thank you for bringing up that there may be privacy policy issues. We will investigate this question as well. Thank you again, and we will provide more information on this topic in future project updates! --IFried (WMF) (talk) 16:33, 30 March 2021 (UTC)Reply

Anything else you would like to add?

  • I think it would be useful if the new OCR tool continued to expose an API endpoint (like phetools or google-ocr do) to fetch the OCR for a page, so users could build tools around it to automate tasks.
  • Providing an optional way to indicate the OCR's certainty level would be great. For example, allowing editors to switch on highlighting for the parts of the text that the OCR is less sure about. This would be a great way to allow editors to concentrate on the parts of the text that the OCR had problems with, quickly checking these parts and making appropriate fixes as necessary. --YodinT 13:05, 31 January 2021 (UTC)Reply
    • @Yodin: Thank you so much for providing this feedback! Regarding your first point, this is a great idea, and we’ll discuss with the team if it is possible. We have created a ticket for this work (T278444). Please feel free to add any relevant details to that ticket. Regarding your second point, this is a very exciting idea, but it may be too large in scope for the team to take on. For both of these ideas, we will discuss them as a team, and if there are any updates, we will share them on the project page. Thank you! --IFried (WMF) (talk) 16:46, 30 March 2021 (UTC)Reply
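On the second point, Tesseract already reports a per-word confidence, so highlighting uncertain words is largely a presentation problem. Here is a sketch using pytesseract's data output; the threshold and file name are illustrative, and a production gadget would do the equivalent server-side.
<syntaxhighlight lang="python">
import pytesseract
from PIL import Image
from pytesseract import Output

def low_confidence_words(image_path, threshold=70, lang='eng'):
    """Return (word, confidence) pairs Tesseract is least sure about."""
    data = pytesseract.image_to_data(Image.open(image_path), lang=lang,
                                     output_type=Output.DICT)
    flagged = []
    for word, conf in zip(data['text'], data['conf']):
        conf = float(conf)
        if word.strip() and 0 <= conf < threshold:
            flagged.append((word, conf))
    return flagged

for word, conf in low_confidence_words('page_0042.png'):
    print(f'{conf:5.1f}  {word}')                # candidates to highlight for proofreaders
</syntaxhighlight>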
  • Recognition of texts with columns is rarely good; maybe there should be a system to manually choose several areas of the image to be recognized, which would facilitate this recognition. — Koreller (talk) 17:55, 18 February 2021 (UTC)Reply
  • The existing tool (and also the google-ocr) fails when the text is set in columns. Even if it is not frequent, it is quite annoying, since this kind of text is also quite tough for many external programs. Here are a few examples of pages sharing this layout: [example page images]. Please notice the different number of columns, the rulers or empty space between columns, and a header. So it would be nice if the coming tool could cope with these problematic pages. Draco flavus (talk) 19:28, 2 March 2021 (UTC)Reply
  • Hello, the first thing I would like to add is a big THANKS. This document helped me greatly. Actually, even better than that, it helped me to find a solution for someone who asked me for help, when I wasn't sure anything with a low enough technical entry ticket existed that I could provide as an answer. The problem I was asked about was how to improve the transcription of pages from Catalogue de l'histoire de l'Amérique, V like this one. The issue, as you can see, is that the default OCR doesn't provide anything useful. The document was hosted on the French Wikisource. After a quick search for "OCR" on this same wiki and going through a few links, I landed here. I started to read it, and arrived at the Google OCR section, which made me rush to the Gadget section of my preferences on the French Wikisource. But it was nowhere to be found. I read the documentation more carefully and saw that it seemed to be only available on the multilingual Wikisource. Luckily, this book is actually multilingual. So I re-indexed the first volume there, enabled the gadget, and here is the result of recognition of the same page: now that's really something that is going to help my friend. So, my recommendation I guess is obvious: provide the Google OCR gadget in all languages where it can give better results than the default one. If that is most of the time everywhere, just switch the default and leave the other one as a gadget option (you never know…). Note that I didn't try the last option: I have no doubt I will be able to deploy it, but that is not an option for my friend to work autonomously. So a second suggestion: provide OCR4Wikisource as a Toolforge service, where people just give the URL of the index page, and the tool does all the work using some bot account. Not everyone will be at ease with a command line, but anyone can copy-paste a URL and click "Go!". I admit that should be done with a bit more reflection on "what could go wrong?", but basically that's it. As for Indic OCR, I didn't try it either, but if it works better in some cases, just make it the default where relevant, and keep it as an optional gadget everywhere else. Here is what I would find fine:
    • all Wikisource page editing comes by default with a single OCR button, which is set to whatever works best for the current language according to experience
    • all Wikisource allow to optionally enable the same set of OCR
    • the index page has a "bulk OCR" drop-down button that opens a wizard which allows previewing samples, depending on which OCR tool option you select, before validating to launch the job. --Psychoslave (talk) 21:29, 5 March 2021 (UTC)Reply
@Psychoslave: Wow, thank you for such lovely and thorough feedback! We are so happy that the project page provided helpful context for you as well. Regarding your specific recommendations, here are some thoughts:
  • Provide Google OCR gadget on all languages: Yes, we also agree that there should be some default OCR that is accessible to all users, without an installation process required. We have a ticket to look into this (T275547), and we may also write different or more tickets on this in the future. The ultimate goal will be to make a default OCR tool available.
  • OCR4Wikisource as a toolforge service: Thank you for this wonderful idea! We have one question for you: Would you still want this feature if we created a default way to access bulk OCR support within Wikisource? We are asking because we currently have an investigation lined up to try to make this possible (T277768).
  • All Wikisource allow to optionally enable the same set of OCR: Can you clarify what you mean by this?
  • The index page have a "bulk OCR”: As written above, this is a big priority for us, so we plan to investigate this very soon (T277768).
Thank you, and we look forward to your response! --IFried (WMF) (talk) 16:56, 30 March 2021 (UTC)Reply
Would you still want this feature if we created a default way to access bulk OCR support within Wikisource?
I think that would be even far better in terms of integration. So if you manage to make that happen successfully, all congrats in advance. On the other hand, good integration is a far tougher challenge than a compatible external tool. The latter is also more aligned with a more distributed platform, as opposed to a single monolithic application. My impression is that WMF is trying to lead the platform toward something composed of more autonomous software components, but you will certainly be better informed than me on that point. So, it's all a question of trade-offs. Sure, for end users, better integration in a single interface seems more appealing, as it creates a less complex environment to grasp in order to contribute. On the other hand, that might bring the platform to a more tightly coupled state of affairs on the software development side. Sure, that last point is not necessarily inevitable: software architecture, good practices and tools can prevent a lot of wild interference. Moreover, if you have the resources to do both, why choose? Of course, in our finite world, most of the time we don't have infinite resources to keep all options maintained in parallel indefinitely. You can however evaluate how easy it will be to implement the independent service, and if it is far more likely to be a far easier task to achieve, implement that first. And then, as you already have your minimal viable product, you can consider throwing more resources at the more desirable, well-integrated solution. Well, that's all high-level strategy on the implementation process, and probably all things you are perfectly aware of, but sometimes when we are in real action conditions it's harder to step back.
All Wikisource allow to optionally enable the same set of OCR
Can you clarify what you mean by this?
I think it was simply a misconception on my part that this was not the case. I should check that again to see if I can indeed achieve the same kind of various OCR calls, whatever the language version I try.
Thank you for your feedback, and it's always a pleasure to help with a few ideas if it can help to improve the environment for all contributors. Cheers, Psychoslave (talk) 14:26, 7 April 2021 (UTC)Reply
  • Besides better OCR, we need better quality images. No matter how good the OCR is, the ultimate correction will depend on having texts that are easy to read. The current approach of extracting images from either PDF or DjVu files on Wikisource is fundamentally flawed because both rely on heavily downgraded images. Such images are also not suitable for use with crop tools.
    • Support full-quality JP2, or PNG converted from the JP2, on IA.
    • Automatically redo OCR periodically if the page has not been proofread or the text does not come from a merge-and-split
    • Support hOCR or ALTO to provide some formatting
    • Discuss the way of reintegrating corrected text back into the original text. This is a future request.

--Languageseeker (talk) 03:44, 9 March 2021 (UTC)Reply

@Languageseeker: Thank you so much for this feedback! Can you provide some specific examples of low quality images impacting the OCR, which we can analyze? As for the comment on automatically updating the OCR-ed pages that have not been proofread, this is an interesting idea and we understand that it could be valuable. We will discuss this as a team. As for the hOCR/ALTO comment, we have also heard that feedback from other Wikisource community members. We have written a ticket for this (T278839) and will discuss it as a team. Finally, regarding the reintegration of corrected text comment: This is a great idea! We don’t know if we can do this, since it is out of the specific focus area of this project. But thank you for bringing this up, and we hope someone can take up this feature request after we improve the OCR tools. --IFried (WMF) (talk) 17:01, 30 March 2021 (UTC)Reply
  • Rather than a single language, it’s better to use a range of characters because there are many books that mix languages. A book can contain Hebrew, French, and English all at once. Just proofreading for English will leave considerable work for volunteers. Languageseeker (talk) 19:08, 10 March 2021 (UTC)Reply
@Languageseeker: Thank you for providing this feedback! Can you provide some specific examples of books with multiple languages that are difficult to properly OCR? Once we have these examples, we can analyze them. Thank you! --IFried (WMF) (talk) 17:06, 30 March 2021 (UTC)Reply
@IFried (WMF): I think that this book is the ultimate test because it contains text in 500 languages. [3]
  • For columns, it's probably better to develop a tool that can split pages, run the OCR on the individual columns, allow for proofreading the individual columns, and then automatically reassemble the transcribed text. Not only will this make OCRing easier, it will also make proofreading easier. Languageseeker (talk) 02:00, 12 March 2021 (UTC)Reply
@Languageseeker: Thank you for providing this feedback! We are investigating the multiple column issue (T277192) right now, since we agree this is a major issue to address. We’ll provide an update on the project page when we know more. Thanks! --IFried (WMF) (talk) 17:07, 30 March 2021 (UTC)Reply
  • It occurs to me that effective OCR has been hampered by a series of conscious and unconscious biases that have crept into the software.
    1. Conscious: A text has a single, specific language. In fact, printers did not think of texts in terms of specific languages but rather of specific pieces of type. Any text can have multiple languages, but these are just collections of type pieces. When OCR was developed, it ignored this historical reality because it focused on digitizing documents for professionals.
    2. Unconscious: There is a universal algorithm to recognize a particular character. If this algorithm can be perfected, then OCR will have 100% accuracy. In fact, letters can look extremely different. Take the letter a and look at 15th century texts, 17th century texts, black face, and Fraktur. They all look remarkably different. No algorithm can actually capture them all. The current algorithms are biased towards how letters appeared at the end of the twentieth century.
    3. Conscious: OCR should not include human intervention. In fact, through proofreading, we're already introducing human intervention after the fact.
    4. Conscious: Confidence level should be hidden from the human. They're statistical curiosities at best. In fact, highlighting low-confidence characters in a visual form can help guide proofreading.
  • To develop a better proofreading, I think it's important to review two basic facts from how a text was created.
    1. Typesetters set a book one character at a time. Therefore, the character and not the word is the basic unit.
    2. Typesetters had to have a consistent visual look. Therefore, they had bins filled with characters that were visually identical. Furthermore, these characters were grouped into typefaces that attempted to provide visual consistency across a work. Following from this, there is a set number of possible representations of any character.
  • To think about achieving OCR: what if, instead of trying to guess what a particular character is, we try to reverse the typesetting process? That is, we take the typeset text, break it down into individual characters, group them into bins, and ask a human to label the bins. I imagine that it would go something like this:
    1. Take a book.
    2. Separate it into individual characters. Mark each character with an individual tag.
    3. Run an algorithm to group similar characters. The computer won't know what the characters are, just that there is an Image_Group_1 with 5,000 characters that have the same appearance.
    4. Make a grid where one image from each character group is displayed. A person can then label the image with both the machine character and any formatting. For example, this image represents a lower-case italic i; therefore, Image_Group_1 represents an italic i. Defective characters would need to be labelled as variants.
    5. The OCR would then replace all instances of Image_Group_1 with ''i''
    6. For the characters that are on the margins of the character group, the OCR should also tag them to make them visually easy to spot during proofreading.

This first identification would become the basis for a raster font that can then be used on other books. Over time, we would build a library of raster font files to use for comparison. For instance, we would have a raster Fraktur font, a raster Garamond font, a raster Petrus Caesaris and Johannes Stol Type 1:109R [4]. Over time, the need for human intervention will decrease as we teach the OCR program which images correspond to which character. This would be a variant of matrix matching, relying on the fact that no new font faces from the past will be added and that comparing images is much faster now than in the 1980s/1990s. After we group all the characters in a book, we'll probably have around 200-400 groups to compare to a library of several thousand identified characters. We also won't have to worry about language because we'll just be comparing images. To get more information about a particular character, we can initially feed the OCR software books with sample type. Lastly, this can also help to add in other type features, such as horizontal lines, which the program would recognize as just another character. Languageseeker (talk) 04:28, 13 March 2021 (UTC)Reply

@Languageseeker: Thank you so much for this thorough analysis and recommendation! It’s very exciting to see people think through how OCR tools can be fixed and improved from the ground up. Unfortunately, we will not have the capacity or resources to build a new OCR tool in this project, and we will not have the resources to write out a new OCR algorithm either. However, there is still a lot of other work we can do to improve the existing tools, including some of the work you have suggested in previous comments. Thank you again for this feedback and we hope to hear more from you after we share our next update! --IFried (WMF) (talk) 17:08, 30 March 2021 (UTC)Reply
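The "bin identical-looking glyphs and label one representative" idea above can be prototyped with something as crude as a perceptual hash. This is a toy sketch under strong assumptions (glyph images already segmented into individual files, exact hash matches only); real matrix matching would need proper glyph segmentation and a tolerant distance measure.
<syntaxhighlight lang="python">
import glob
from collections import defaultdict

from PIL import Image

def average_hash(path, size=8):
    """Tiny perceptual hash: downscale to an 8x8 greyscale image, threshold on the mean."""
    img = Image.open(path).convert('L').resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return ''.join('1' if p > mean else '0' for p in pixels)

def bin_glyphs(glyph_paths):
    """Group glyph images whose hashes match exactly; each bin gets labelled by hand once."""
    bins = defaultdict(list)
    for path in glyph_paths:
        bins[average_hash(path)].append(path)
    return bins

bins = bin_glyphs(glob.glob('glyphs/*.png'))     # hypothetical pre-segmented character images
for _, members in sorted(bins.items(), key=lambda kv: -len(kv[1]))[:10]:
    print(len(members), 'glyphs, e.g.', members[0])
</syntaxhighlight>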

Current version is very good but some tweaks are needed.

The current version of the OCR in en.wikisource works well. It recognizes columns and separates paragraphs. But, it cannot recognize double quotes properly and displays them as two single quotes, or just a single quote, most of the time.

I have done extensive tests comparing Google OCR and our own, which is better overall. Speed of OCR reproduction is not relevant on Wikisource when proofreading a page. Ineuw (talk) 20:17, 15 April 2021 (UTC)Reply

Number of simultaneous OCR users?

In the past few days, the Wikisource OCR has taken so long to clean up text that I thought it was broken. Then, I saw the cleaned text after some five minutes. How can I improve the OCR processing time at my end? Is the speed affected by the number of simultaneous users, or by the browser cache? I tested this in the Firefox and Vivaldi browsers on Windows and Linux, and increased the cache memory and space, but so far, it hasn't helped. Ineuw (talk) 06:54, 20 April 2021 (UTC)Reply

April 2021: Request for feedback on 1st status update

@Xover, Languageseeker, Koreller, Hsarrazin, Nahum, Jayantanth, Jan.Kamenicek, LA2, Alex brollo, Greatder, Ruthven, Slowking4, Balajijagadesh, JAn Dudík, Draco flavus, Psychoslave, Ineuw, Peter Alberti, and Yodin:

Hello, everyone! We have just posted a status update for the OCR Improvements project! This is our first major update since the project was launched, and we would love to hear your feedback. Your suggestions and ideas have already been so helpful to us. So, we now invite you to answer the questions below, which will help us determine the next stages of the project. Thank you in advance! --IFried (WMF) (talk) 18:17, 23 April 2021 (UTC)Reply

Thanks for keeping us updated and all the amazing work already accomplished.
A few points of feedback:
  • https://ocr-test.wmcloud.org/ currently returns The server returned a "500 Internal Server Error".
  • the explanation of why developing a new OCR engine is not on the agenda is clear and rational, and it's good to see that rather than dismissing the whole demand, it was requalified into more manageable tasks. That is a really great way to deal with this kind of large demand, KUDOS.
  • within the French wikisource, I can't find the Google OCR gadget; could you check and confirm or deny whether it's just me missing something? In the English version I was able to enable it without issue following the instructions in the documentation you provided.
I started to work on a text that contains French (the main language of the book, including for second-hand translations) along with the ancient Armenian it translates (first-hand translation) and, where available, the ancient Greek that the Armenian translates (the original text – or at least as close to the original as you can expect to get for such an ancient text). As I didn't find the text by myself, I asked on the French community portal and someone not only found it but also uploaded it to Commons and started to make the scans available on the French Wikisource. But currently, no OCR option I was able to enable on this Wikisource provides me with useful results for a page like https://fr.wikisource.org/wiki/Page:Cirbied_-_Grammaire_de_Denys_de_Thrace,_1830.djvu/37 I was especially interested to see whether the Google OCR would provide better results. As is, the only option I see to try that would be to move the transcription efforts to the multilingual Wikisource. Do you have suggestions on an alternative/better way to handle this? I read the point Accept Google Options on the API in this news with interest regarding this topic, but since I have yet to find a way to enable it on the French Wikisource, it wouldn't help much even if it were already completed work. Note that the text additionally has a multiple-column layout, so it is also a good candidate to challenge the Investigate how to improve multiple column issues topic.
I'll answer your open questions later. Thanks again for all your great work, Psychoslave (talk) 12:50, 24 April 2021 (UTC)Reply

@Psychoslave: Thank you so much for your feedback! First, we just tested https://ocr-test.wmcloud.org/ and it seems to be working fine again. Is it okay for you? Second, we really appreciate your kind words regarding our choice to improve OCR tools (since we can’t develop a totally new one). It’s great to hear! Third, regarding your point that begins (“...within the French wikisource, I can't find…”), are you saying that the OCR tool is already enabled by default? Or are you saying that you cannot see the OCR tool at all? Regarding your description of difficulties regarding the Grammaire de Denys de Thrace, we are currently working to make the tool available to all users (T280848). In the meantime, you can personally enable it for yourself by 1) going to your common.js page on French Wikisource, and 2) pasting in the following script: mw.loader.load('//wikisource.org/w/index.php?title=MediaWiki:GoogleOCR.js&action=raw&ctype=text/javascript'); Finally, regarding multiple column support, we are currently working to try to improve this situation, and we’ll update the project page on our progress. Thank you so much, and we look forward to your response! - NRodriguez (WMF) (talk)


I tried it for some pages. Sometimes there was a 500 error for the whole service, sometimes only for Google.
Sometimes I got excellent output, sometimes there are problems with special characters - and when I try the same page a few minutes later I get better or worse output.
There are many pages where automatic OCR works well. But sometimes there are pages where it would be useful to define the text area, columns or language(s). And sometimes it would be useful to redefine some characters: I worked on a book where almost all occurrences of "jsem" were recognized as "isem", and now I work on a book with dirty type and many false "|" characters in the text. JAn Dudík (talk) 11:45, 27 April 2021 (UTC)Reply

@JAn Dudík: Thank you for sharing this! First, the 500 error is fixed now, but please let us know if you are still experiencing issues! Second, can you provide specific examples (such as links to pages and/or screenshots) of OCR outputs being different for the same page? We have not encountered this issue ourselves yet, so additional details would be very helpful. Third, you wrote about how it would be helpful for users to be able to specify languages, columns, or text areas. The team is currently looking to see how we can improve support for texts with multiple languages or columns (T280213), so you can expect more updates on this soon! Finally, you wrote about how sometimes the OCR tools have incorrect output (such as giving “isem” instead of “jsem”). It is unfortunately out of scope of the project to actually change the core functionality of the OCR tools and the quality of their output. However, you may consider using the cleanup.js script, which can help with these sorts of issues (example). Thank you! NRodriguez (WMF) (talk)
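The kind of per-work cleanup mentioned above (systematic "isem" → "jsem", stray "|" from dirty type) is what scripts like cleanup.js do client-side. Here is a sketch of the same idea in Python; the substitution pairs are examples only and would differ for each work.
<syntaxhighlight lang="python">
import re

# Per-work list of systematic OCR errors, built up while proofreading.
# These pairs are illustrative only; every scan needs its own list.
SUBSTITUTIONS = [
    (r'\bisem\b', 'jsem'),    # this scan's 'j' is consistently read as 'i'
    (r'\s*\|\s*', ' '),       # stray vertical bars from dirty type
    (r'  +', ' '),            # collapse the double spaces left behind
]

def apply_substitutions(text, pairs=SUBSTITUTIONS):
    for pattern, replacement in pairs:
        text = re.sub(pattern, replacement, text)
    return text

print(apply_substitutions('Byl | isem tam.'))    # -> 'Byl jsem tam.'
</syntaxhighlight>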

I tested the tools and my observations are below.

@Jayanta (CIS-A2K): We have just upgraded to 5.0.0. Could you try again and let us know if the output is better? Here is the link. DWalden (WMF) (talk) 16:00, 13 July 2021 (UTC)Reply
What about the mass-OCR tool? As I have explained earlier. Jayanta (CIS-A2K) (talk) 06:56, 29 April 2021 (UTC)Reply
Hi,
The status update seems to me very good,
  • you have correctly identified the problem of access to OCR tools (which should in my opinion be enabled by default, once it is in place) and of their simple and quick configuration.
  • On OCR improvements → the most urgent thing seems to me to make it work with columnar texts (mainly 2 columns), and you want to work on it as a priority: that sounds good to me! — Koreller (talk) 10:01, 2 May 2021 (UTC)Reply

What are your general thoughts about the project principles?

I appreciate the effort to fix column text recognition. Can I suggest a possible trick here, or do you prefer another page? --Alex brollo (talk) 09:32, 24 April 2021 (UTC)Reply

Please suggest away! I forgot to cc you in my reply below. NRodriguez (WMF) (talk) 19:18, 30 April 2021 (UTC)Reply

Please suggest away! The CommTech Engineers and Designers have been researching, but we always love to hear suggestions on our solutioning process from the community. NRodriguez (WMF) (talk)

  The idea is to get OCR of only part of a page, instead of the whole page; this could be done simply by selecting a box on the side image of the page and then sending its relative coordinates to the OCR engine. The resulting OCR text should be added to the page text, or returned in a separate box, allowing a copy-and-paste of the text into the right position in the page text. The trick could give a result similar to selecting a box of text in a DjVu page (I do this sometimes, when I need to fix severe segmentation mistakes in the OCR layer). --Alex brollo (talk) 05:34, 23 May 2021 (UTC)Reply
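To make the suggestion concrete, here is a minimal sketch of what sending such a box selection to an OCR service could look like. The endpoint URL and the crop parameter names are hypothetical (the real Wikimedia OCR API may expose this differently); the coordinates are expressed as percentages of the page image, as suggested above.

    // Minimal sketch with a hypothetical endpoint and parameter names:
    // send the relative coordinates of a user-drawn box together with the
    // page image URL, and show the returned text in a separate output box.
    async function ocrSelection( imageUrl, box ) {
        // box = { x: 10, y: 20, w: 45, h: 15 } – percentages of the page image
        const params = new URLSearchParams( {
            image: imageUrl,
            crop_x: box.x, crop_y: box.y, crop_w: box.w, crop_h: box.h
        } );
        const response = await fetch( 'https://ocr.example.org/api?' + params );
        const data = await response.json();
        document.getElementById( 'ocr-selection-output' ).value = data.text;
    }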

How do you feel about our work to make Wikimedia OCR automatically available, with no installation required?

It would be very good if it were available in the ProofreadPage extension by default, so that we don't need to install it from a gadget or common.js. Jayanta (CIS-A2K) (talk) 07:02, 29 April 2021 (UTC)Reply
Thanks for the feedback! We look forward to integrating this soon. Check out the details of an initial first step; a more elegant UI is coming. NRodriguez (WMF) (talk) 19:10, 30 April 2021 (UTC)Reply

How do you feel about our work to add Tesseract to Wikimedia OCR?


This is very good, but please update to or use the latest version of Tesseract for better output for specific Indic languages. And it is open source. Jayanta (CIS-A2K) (talk) 07:04, 29 April 2021 (UTC)Reply

@Jayanta (CIS-A2K) Thanks for the thorough note above, and for this second response as well! We were planning to include the latest stable release of Tesseract version 4.0.0 but since you linked to the examples above showing us the power of 5.0.0 alpha for Indic languages + multi-column support, we will be upgrading to that version. Feel free to check out the task in Phabricator-- as always, thanks for your feedback! It helps the team re-define what to include in the scope of improvements. Stay safe! NRodriguez (WMF) (talk) 17:36, 7 May 2021 (UTC)Reply
Thanks for this task. T282150. Jayanta (CIS-A2K) (talk) 14:13, 19 May 2021 (UTC)Reply

Ideally, what user experience do you recommend for choosing an OCR engine when using Wikimedia OCR?


Baby steps towards AI


Hi, here are some comments about what would be my ideal OCR for Wikimedia projects. I am aware that the current tool is far from this, but I hope that they can be taken into account for future development.

The current OCR software we have does not make use of AI, if I have understood correctly. We rely on optical character recognition and text layers. However, the current trend in research is to train OCR software with machine-learning techniques on large corpora of texts, both to learn to recognise fonts (with geometric distortions, noise, etc.) [5] and to learn corrections for previously OCR-generated text [6]. Recognising fonts can actually be done without having a lexicon [7].

The ideal Wikimedia OCR should in fact be several different OCR engines, each one specific to a language and to a kind of font, because our projects are in different languages (duh) and the documents span several centuries, meaning that they are printed with a large variety of typefaces. As a matter of fact, we currently have difficulties with the automatic recognition of some languages (e.g. non-Latin languages, or minority languages) and of certain historical fonts (e.g. Gothic typefaces like Alte Schwabacher).

My point is that training different OCR engines can be done for our projects, because we have a large corpus of texts from different centuries and in different languages. The most interesting point is that many of these texts have already been annotated by humans, namely the volunteers who have proofread them on Wikisource. Having annotated data is central to being able to recognise historical documents [8][9][10], and we have already solved that part. It would be a shame not to take advantage of this resource. --Ruthven (msg) 15:58, 28 April 2021 (UTC)Reply

@Ruthven Thanks for pointing this out! I agree that leveraging human-annotated texts from Wikisource to improve the accuracy of the OCR ecosystem at large would be ideal. We have so many human-annotated works that would be very valuable for training the different engines. Thanks for your thoughtful list of resources for all of the initiatives you cite. While this is not in scope for the wish, the Foundation's mission of making knowledge accessible would support any engine trying to use the annotated sources on Wikisource to improve the OCR experience. Do you know of any initiatives needing human-annotated texts to make the engines more powerful? Let us know and we will be happy to talk! NRodriguez (WMF) (talk) 19:13, 7 May 2021 (UTC)Reply

What do you think of our work to improve the speed of Tesseract?


Good, even if the main trouble is the accuracy of text recognition. Searching for and fixing scannos can be very time-consuming. I think that post-OCR, regex-based, shared, customisable (edition-specific) fixing of the text could speed up the user experience a lot. --Alex brollo (talk) 09:50, 24 April 2021 (UTC)Reply
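As an illustration of what such shared, edition-specific post-OCR fixes could look like, here is a minimal sketch of a replacement list applied to the raw OCR text. The example entries reuse the scannos mentioned earlier in this discussion ("isem" for "jsem", stray | characters from dirty type); the names used here are made up for the sketch.

    // Minimal sketch: an edition-specific list of [pattern, replacement] pairs
    // applied to the raw OCR text before it is inserted into the edit box.
    const editionFixes = [
        [ /\bisem\b/g, 'jsem' ],  // common scanno in one particular edition
        [ /\s*\|\s*/g, ' ' ]      // stray vertical bars from dirty type
    ];

    function cleanOcr( text ) {
        return editionFixes.reduce(
            ( result, [ pattern, replacement ] ) => result.replace( pattern, replacement ),
            text
        );
    }

    // Example: cleanOcr( 'Byl | isem doma' ) returns 'Byl jsem doma'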

Great to hear, and thanks for the feedback. We're also considering adding a clean-up tool to our UI, although we are still figuring out whether it's in scope. Feel free to follow the convo here. NRodriguez (WMF) (talk) 19:37, 30 April 2021 (UTC)Reply

Anything else you would like to add?


Why not just use GoogleOCR but fix the typography? --Greatder (talk)

  • @Greatder: Improving the underlying OCR was not in scope for the wish improvements because of the high technical complexity. However, by adding support for the Tesseract engine, some of the typography cases will be mitigated! Hope this answers your question. NRodriguez (WMF) (talk)

Changed perspective


Since I proposed this task I have learned a few things that have changed my perception of it. I found out that Windows users can train Tesseract using the training part of VietOCR (it is at https://sourceforge.net/projects/vietocr/files/jTessBoxEditor/), so that process is not Linux-only, as I originally thought. The idea behind my part of the request was always to make the OCR become better as time goes by. Right now, I think the users themselves can do that more so than when I submitted the proposal. I still do believe, though, that removing HTML tags from finished books in order to train Tesseract is complicated, and it would still be appreciated if a tool could help with that; the user could then use the result to train Tesseract. As for new languages, nowadays I only ask that a user could add support for a new language (by submitting a traineddata file) without the hassle of trying to get WMF devs to update Tesseract just to add a language file. I do not know whether that is the current practice; I did not manage to find any tasks on Phabricator asking for added language support for OCR, other than for entire tools.--Snævar (talk) 22:21, 4 May 2021 (UTC)Reply
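A rough sketch of the kind of clean-up such a helper tool would have to do: take the proofread wikitext of a finished page and strip templates, HTML tags, and wiki formatting so that only the plain text remains as ground truth for Tesseract training. The regexes below are illustrative only and would certainly need refinement for real wikitext.

    // Illustrative sketch only: reduce proofread wikitext to plain text
    // suitable as ground-truth material for Tesseract training.
    function stripMarkup( wikitext ) {
        return wikitext
            .replace( /<noinclude>[\s\S]*?<\/noinclude>/g, '' )  // page headers/footers
            .replace( /\{\{[^{}]*\}\}/g, '' )                    // simple templates
            .replace( /<[^>]+>/g, '' )                           // HTML tags
            .replace( /'{2,}/g, '' )                             // bold/italic markup
            .replace( /\[\[(?:[^\]|]*\|)?([^\]]*)\]\]/g, '$1' )  // wikilinks → their label
            .replace( /[ \t]+/g, ' ' )
            .trim();
    }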

@Snævar Hey Snaevar, thank you for this reflection on how your hopes for the wish have changed. If I understand you correctly, you now hypothesize that:
  • focusing the scope of improvements on making OCR better over time would be a better investment – i.e. trained data
  • providing a way for volunteers to upload traineddata files could help folks get support when new language needs arise in the future
These are great points! Long-term sustainability of these improvements is something we care about. While our updates will improve the reliability and languages supported overall, improving underlying OCR engines is not in scope given our resourcing for the wish. Thanks for your feedback! NRodriguez (WMF) (talk) 20:14, 7 May 2021 (UTC)Reply
That is good. How would you organize individual editors to train Tesseract, and then report back to a tool that is convenient for other editors to use? Is it possible to use a JavaScript script, which could be migrated to a gadget? I am looking for action steps / a road map. Slowking4 (talk) 21:21, 21 May 2021 (UTC)Reply
I second @Slowking4:'s question. A road map is what I am interested in as well. We know the problems, and now is the time to document what needs to be done, and in what order of priority. My contribution, if of any value, is that of a user, but I am willing to test and report the results as the project progresses. I recommend that this be placed in a new 'under development' section; grouping it as 'Experimental' would be misleading. — Ineuw (talk) 22:07, 21 May 2021 (UTC)Reply
To be fair, we are still brainstorming, and this is a management task. I don't know whether it would be alpha or beta. We need some input from coders to outline the task flow and the possible steps or paths to get to an OCR improvement cycle. Slowking4 (talk) 22:33, 21 May 2021 (UTC)Reply
@Slowking4: The explanation is much appreciated. I won't post here unless I have something relevant to add. I can only contribute by testing, if asked. I have no knowledge, just experience. Ineuw (talk) 18:26, 8 June 2021 (UTC)Reply

May 2021: Request for feedback on 2nd status update


@Xover, Languageseeker, Koreller, Hsarrazin, Nahum, Jayantanth, Jan.Kamenicek, LA2, Alex brollo, Greatder, Ruthven, Slowking4, Balajijagadesh, JAn Dudík, Draco flavus, Psychoslave, Ineuw, Peter Alberti, and Yodin:

Hello, everyone, and nice to meet you! I am the new product manager for Community Tech🐣. We have just posted a status update for the OCR Improvements project! We are looking for feedback regarding performance and transcription preferences for the different Wikisource projects, in addition to any other feedback you have. Looking forward to hearing from you. Thank you in advance! NRodriguez (WMF) (talk) 19:20, 21 May 2021 (UTC)Reply

Thank you, Natalia (@NRodriguez (WMF)). Congratulations on your new position; I hope it will lead to fulfilling collaborations.
I think the next book I will have a look at is Plena Vortaro, a classic public-domain Esperanto dictionary, in the context of Requests for comment/Resolve massive copyright infrigment on Wikitionary in Esperanto. Psychoslave (talk) 14:52, 22 May 2021 (UTC)Reply
Please improve this test page by clarifying, with an actual example, how one can test a document. It's too cryptic and is not written with users in mind. Please don't disregard us; we also contribute.
For example, what does "upload.wikimedia.org and upload.wikimedia.beta.wmflabs.org" mean in the context of selecting a file? This is the file I attempted to scan to help this project, but because of the lack of clear instructions I got nowhere, and that is without mentioning the time wasted trying to figure it out. Ineuw (talk) 17:28, 22 May 2021 (UTC)Reply
Seconded. A beta that can be used on an Index ns would be a far better test than a single page. Languageseeker (talk) 02:21, 1 June 2021 (UTC)Reply
As the Indic Wikisource community, we need Google OCR with support for 2-, 3-, or 4-column layouts, and automated OCR to add a text layer to PDF/DjVu files. I cannot find this at https://ocr-test.wmcloud.org/, or I may not be able to apply it due to lack of knowledge. Jayantanth (talk) 15:12, 31 May 2021 (UTC)Reply
@Ineuw @Jayantanth Hello there, thanks for your feedback! We've been busy finalizing improvements, but we haven't forgotten about this ask! You're right to point out that the page could use more intuitive copy and more details about how best to use it. The CommTech designer @NAyoub (WMF) has made some example flows of that Advanced Tools page here that will be incorporated in this phabricator ticket. Feel free to share your thoughts on what else could be improved about that page. NRodriguez (WMF) (talk) 17:57, 8 June 2021 (UTC)Reply
@NRodriguez (WMF): Thanks for the eye opener, much appreciated. Ineuw (talk) 18:52, 8 June 2021 (UTC)Reply


June 2021: OCR enabled in a handful of Wikisources


@Xover, Languageseeker, Koreller, Hsarrazin, Nahum, Jayantanth, Jan.Kamenicek, LA2, Alex brollo, Greatder, Ruthven, Slowking4, Balajijagadesh, JAn Dudík, Draco flavus, Psychoslave, Ineuw, Peter Alberti, and Yodin: Hello all, we have enabled a "pre-release" of the OCR by rolling it out to these initial Wikisources that volunteered to be in our first round of release:

  • hi
  • bn
  • mul
  • ta

We will be enabling it for idws over the next couple of days. Please note: the release isn't likely to break anything, and the existing gadgets aren't impacted (it'll just add a separate button, which could be confusing, but it doesn't interfere with the existing ones). If you contribute to those Wikisources, we'd love to hear your feedback! — The preceding unsigned comment was added by NRodriguez (WMF) (talk) 01:49, 9 June 2021 (UTC)Reply

  • Users on plwikisource report a problem with this gadget: it overlaps the scan area and cannot be turned off / disabled. Note that for already existing pages that already have text entered, it is totally useless. During page creation it is also useless in most cases, as about 90% of the book scans we proofread on plwikisource already have a good-quality built-in OCR layer provided by the digital libraries that are the main source of our books. It is useful only for the remaining 5–10% of newly created pages. Efficient utilization of the whole available screen area is very important for Wikisource users, so the possibility to remove currently unneeded tools (or to move them outside the currently used area) is important. OCR gadgets can easily be turned on/off by users. Ankry (talk) 10:00, 27 June 2021 (UTC)Reply
@NRodriguez (WMF): Please note that in order for mw:Extension:Echo to actually send a notification when you link to a user's user page, the edit with the links must also contain a signature. This edit did not and so nobody was notified about your message. The users in your list are also a self-selected group that, while they try their best, cannot speak for their respective communities: they can provide individual feedback, but for community feedback you need to talk to the actual communities at their respective Scriptoriums (what Wikipedia calls the "Village Pump"). Xover (talk) 17:00, 30 June 2021 (UTC)Reply
@NRodriguez (WMF) Sometimes the OCR layer of a DjVu/PDF file is very poor and the Google OCR tool gives a much better result. But there's an itwikisource issue: a very useful editing gadget, "eis", opens an AJAX-based editing session in the Page namespace, so that the user can jump to other pages (next, previous, ...) and work on them while staying in edit mode. The old OCR gadget reads the current working page/image, while the new gadget reads the image that was first opened when the AJAX session was activated. Can this issue be fixed? Where is the script for the new tool? --Alex brollo (talk) 04:48, 1 July 2021 (UTC)Reply
@NRodriguez (WMF): And this awesome tool by the itWS community is something that the other Wikisourcen are likely to steal or ape eventually so it would be extremely good if the new OCR tool could be made to work with that kind of tweak. Would it be possible to document the new OCR tool's expectations (a defined HTML ID or class, for example) for what image to work on and where it puts the output? @Alex brollo: If CommTech can find the resources for a little research on this, could itWS make available whoever is familiar with the "eis" tool on a technical level so that they together could try to figure out how such gadgets and the new OCR tool can work well together? If we're lucky it could be really simple to solve. Xover (talk) 06:31, 1 July 2021 (UTC)Reply
Hey Xover, thanks for this note. Posting in the Scriptoriums will be a goal for us in the future. Thanks for all your activity on the talk pages and Phabricator. We are wrapping up work on this wish by the end of July, given that we must begin tackling wishes from the 2021 wishlist, but we will take all of this input into account when defining the conclusion of this wish. NRodriguez (WMF) (talk) 19:58, 8 July 2021 (UTC)Reply

Your new OCR is brilliant, a 99% improvement for Welsh OCR; it doesn't, however, like the Welsh accented ŵ and ŷ! Apart from that, it makes transcribing texts a lot, lot easier! AlwynapHuw (talk)

@NRodriguez (WMF): Unfortunately, I did not find the new tool as good as I expected it to be. It is great that we finally have a working OCR tool, and I have to say that its results for simple OCR are very satisfactory. However, I cannot say the same about the other features that were asked for in the wish. The wish asked for an integrated tool, while we received another external tool instead, which is the reason for several problems. Only basic OCR recognition is offered directly in the Wikisource Page namespace; for all other features the user has to leave the page and open the external tool. When I proofread an English text containing some foreign expressions, I cannot add the other language directly. I have to open the tool at ocr.wmcloud.org/ and only there can I add the other language, run the OCR, copy the result to the clipboard, and manually insert it into Wikisource. And this has to be done for every individual page again and again, which is absolutely frustrating. I stopped using this feature altogether after some time. The same applies to working with columns: the need to open the external tool, transcribe every column individually, and individually copy each one to the clipboard and manually insert it into Wikisource slows the work down incredibly. To sum it up: it is much better than the broken tool we had before, but I really did hope that we would get a fully integrated tool, as so many people wished. --Jan Kameníček (talk) 16:56, 4 October 2021 (UTC)Reply
Thank you, @NRodriguez (WMF): I would just like to say thank you to the technicians and devs for this new tool (as well as to the designers for their effort on the user experience of the functionality)!
It allowed me to do some good work (which is not finished) on the French transcription of s:fr:Auteur:Maurice Courant's works.
This tool was especially useful for this author as there are sinograms in every other work.
And this function of choosing a text zone to transcribe, combined with a language to choose from, is great! Good job! — Koreller (talk) 14:20, 27 October 2021 (UTC)Reply

A tool to create ocr raw pages/text layer

  • We have been using the ocr4wikisource tool developed by tarini for the creation of raw pages of OCRed texts.
  • It has major issues: it is not hosted on Wikimedia infrastructure, and it is only for advanced users.
  • We need a tool where normal trusted users can input the file names and page ranges and create raw OCR pages.
  • We have a great inflow of scanned books and a huge lack of skilled users to process them, so such a tool would be really helpful.
  • I have seen that enwikisource and frwikisource folks are using phetools and other bots, but we don't have any such thing.
  • Even technical information about how to use it would be helpful for us. QueerEcofeminist [they/them/their] 01:42, 29 April 2022 (UTC)Reply
    @QueerEcofeminist: It should be possible to extend the OCR tool to include some sort of batch system where users can give it a list of index pages or images or whatever, and it'd process them all (a rough sketch of what such a batch loop could look like is given at the end of this thread). In addition, adding Google Drive as an option for OCR shouldn't be too hard (phab:T295842 is the task for that). I do wonder about the use case for the bulk OCRing though. We looked into it last year (phab:T281502) and at the time figured that if the page-by-page OCR was fast enough then that'd be enough. Is it not fast enough? Or are there other reasons why having the OCR text pre-filled in the wiki pages is desirable? I know that some people have said that it's not a good idea to have a quick way of dumping OCR text into the wikis, but I'm sure there are different opinions about that. — Sam Wilson 06:06, 29 April 2022 (UTC)Reply
    @Samwilson, thanks for your quick reply.
    • It should be possible to extend the OCR tool to include some sort of batch system where users can give it a list of index pages or images or whatever, and it'd process them all. Yes, that's what we need desperately!!!
    • Yeah, I understand there is opposition to such automatic raw OCR page creation. But at least on our project, mrwikisource, we all believe that there is nothing wrong with it, on the grounds that we would be proofreading those pages anyway. So whether pages are created automatically or by hand, it doesn't really affect the quality of the content.
    • As mentioned earlier, the imbalance between the inflow of scanned books and the skilled users working on those books makes it difficult to go for OCR by hand.
    • Page-by-page OCR is fast enough, but it is repetitive and tedious and becomes boring for many newcomers, who tend to drop editing on Wikisource after a few edits. In fact, a few dedicated contributors to Wikipedia/Commons/Wikidata stopped editing Wikisource because of this. In contrast, raw OCRed pages were curated and validated much faster. I can provide a few diffs if needed.
    • Yes, the batch system idea for OCR would be a real transformation for us; I hope there will be something like that soon. Thanks and regards.
    QueerEcofeminist [they/them/their] 07:03, 29 April 2022 (UTC)Reply
    @QueerEcofeminist: When editing a page in the Page: namespace, the text box should already be filled in with raw OCR from the underlying file's text layer if it has one. That is, manually running OCR for each page should not be necessary in most cases. Is that not the case on mrWS for some reason? Xover (talk) 13:36, 8 May 2022 (UTC)Reply
    Yes, @Xover, the preload text gadget is somehow not working on mrws. Thanks for noticing that. QueerEcofeminist [they/them/their] 14:10, 8 May 2022 (UTC)Reply
    @QueerEcofeminist: Do the source PDF or DjVu files contain a text layer? Could you provide a link to an Index: page for a work where the text box is not automatically filled in? Xover (talk) 14:14, 8 May 2022 (UTC)Reply
    Let me try and get back to you in some time. QueerEcofeminist [they/them/their] 14:31, 8 May 2022 (UTC)Reply
    @Xover, I looked into it and found that our sources of PDFs do not provide text layers. Most of the scans are of old books and were done by libraries. Additionally, if we have the text layer at all then the o QueerEcofeminist [they/them/their] 01:21, 12 June 2022 (UTC)Reply
    @Samwilson, how can we actually get such a mass OCR tool deployed on the mrwikisource project? As you said, it is possible. The ticket you mentioned, phab:T281502, was declined. Is there any way we could get it? QueerEcofeminist🌈 05:13, 10 October 2022 (UTC)Reply
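To make the batch idea discussed in this thread more concrete, here is a minimal sketch of a loop that sends a list of page-image URLs to an OCR service and collects the raw text. The API URL and parameter names are assumptions made for the sketch (the real Wikimedia OCR tool may expose them differently), and the step of saving the text into wiki pages is deliberately left out, since that would need a bot account and community agreement.

    // Minimal sketch with a hypothetical API URL and parameter names:
    // batch-OCR a list of page-image URLs and collect the raw text per page.
    async function batchOcr( imageUrls, lang ) {
        const results = [];
        for ( const image of imageUrls ) {
            const params = new URLSearchParams( { engine: 'tesseract', langs: lang, image: image } );
            const response = await fetch( 'https://ocr.example.org/api?' + params );
            const data = await response.json();
            results.push( { image: image, text: data.text } );
        }
        return results;
    }

    // Usage sketch: batchOcr( [ url1, url2, url3 ], 'mr' ).then( pages => console.log( pages ) );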