Problem: Projects to create spoken versions of Wikimedia articles have stalled, and the number of recorded audios is negligible compared to the number of articles. There are two reasons for this: few editors know how to record these audios, and the recordings become outdated over time as articles are edited.
Proposed solution: One way to expand and improve this idea is to build software that automatically reads aloud (following pre-established rules) the text of articles in all Wikimedia projects and all languages. This would let people choose between listening and reading, make misspellings easier to notice and correct, help readers learn the pronunciation of words, and benefit visually impaired users.
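The "pre-established rules" mentioned above could take the form of text-normalization rules applied before synthesis. A minimal sketch in Python, where the rule list and the function name are illustrative assumptions, not part of the proposal:

```python
import re

# Hypothetical pre-established rules: each maps a pattern in article
# text to the form a speech synthesizer should actually pronounce.
PRONUNCIATION_RULES = [
    (re.compile(r"\be\.g\.", re.IGNORECASE), "for example"),
    (re.compile(r"\bi\.e\.", re.IGNORECASE), "that is"),
    (re.compile(r"(\d+)\s*km\b"), r"\1 kilometres"),
    (re.compile(r"%"), " percent"),
]

def prepare_for_speech(text: str) -> str:
    """Apply the rules in order, returning text ready for a TTS engine."""
    for pattern, replacement in PRONUNCIATION_RULES:
        text = pattern.sub(replacement, text)
    # Collapse any extra whitespace introduced by the substitutions.
    return re.sub(r"\s+", " ", text).strip()
```

For example, `prepare_for_speech("The route is 5 km long, i.e. 60% of the trail.")` returns "The route is 5 kilometres long, that is 60 percent of the trail." A real deployment would maintain per-language rule sets, since abbreviations and units differ across the wikis.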
Who would benefit: Readers who prefer listening over reading, editors proofreading articles for misspellings, language learners studying pronunciation, and visually impaired users.
More comments: Pronunciation lexicons for XHTML and SSML attributes in XHTML might be of interest here. There could be wiki syntax, possibly templates, for listing pronunciations, which could then be aggregated into pronunciation lexicon resources referenced in document metadata. There could also be one or more wiki templates for rendering XHTML spans with SSML attributes. As for selecting which document content to synthesize, this could be done with cascading stylesheets using the CSS Speech Module, specifically the speak property. (AdamSobieski idea)
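To make the pronunciation-lexicon idea concrete: the W3C Pronunciation Lexicon Specification (PLS) 1.0 defines an XML format that speech synthesizers can consume, and entries aggregated from wiki templates could be emitted in it. A minimal sketch, where the entry and its IPA transcription are illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- A one-entry PLS 1.0 lexicon; a per-wiki resource aggregated from
     pronunciation templates could look like this, with one lexeme per
     listed word. -->
<lexicon version="1.0"
         xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"
         alphabet="ipa" xml:lang="en">
  <lexeme>
    <grapheme>Wikipedia</grapheme>
    <phoneme>ˌwɪkɪˈpiːdiə</phoneme>
  </lexeme>
</lexicon>
```

For selecting which content to synthesize, the CSS Speech Module's speak property (values auto, never, always) can mark elements a synthesizer should voice or skip, so navigation boxes and edit links could be styled `speak: never` while article body text remains audible.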