Abstract Wikipedia/Related and previous work/Natural language generation

Abstract Wikipedia will generate natural language text from an abstract representation. This is not a novel idea, and it has been tried a number of times before.

This page aims to collect existing approaches: it summarizes their core ideas, their advantages and disadvantages, and points to existing implementations. This page (by and for the community) will help to choose which approach to focus on first.

Implementations

Arria NLG
ASTROGEN
Chimera
Elvex
FUF/SURGE
Genl
GoPhi
Grammar Explorer
Grammatical Framework
  • Wikipedia: Grammatical Framework
  • Website: https://www.grammaticalframework.org/
  • License: GNU General Public License (see the project's license text)
  • Supported languages: Afrikaans, Amharic (partial), Arabic (partial), Basque (partial), Bulgarian, Catalan, Chinese, Czech (partial), Danish, Dutch, English, Estonian, Finnish, French, German, Greek ancient (partial), Greek modern, Hebrew (fragments), Hindi, Hungarian (partial), Interlingua, Italian, Japanese, Korean (partial), Latin (partial), Latvian, Maltese, Mongolian, Nepali, Norwegian bokmål, Norwegian nynorsk, Persian, Polish, Punjabi, Romanian, Russian, Sindhi, Slovak (partial), Slovene (partial), Somali (partial), Spanish, Swahili (fragments), Swedish, Thai, Turkish (fragments), and Urdu.
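Grammatical Framework's central idea is a single abstract syntax shared by all languages, with one concrete "linearization" per language. The following is a minimal illustrative sketch of that idea in Python, not GF itself; the tree type, lexicon entries, and function names are all invented for illustration.

```python
# Illustrative sketch of GF's interlingua idea (hypothetical names throughout):
# one abstract syntax tree, one linearization rule per language.
from dataclasses import dataclass

@dataclass
class Pred:
    """Abstract syntax: a subject predicated with an adjective."""
    subj: str
    adj: str

# One tiny lexicon per language, keyed by abstract constants.
LEXICON = {
    "eng": {"Dog": "the dog", "Small": "small", "copula": "is"},
    "fre": {"Dog": "le chien", "Small": "petit", "copula": "est"},
}

def linearize(tree: Pred, lang: str) -> str:
    """Render the language-neutral tree as a sentence in one language."""
    lex = LEXICON[lang]
    return f"{lex[tree.subj]} {lex['copula']} {lex[tree.adj]}"

print(linearize(Pred("Dog", "Small"), "eng"))  # the dog is small
print(linearize(Pred("Dog", "Small"), "fre"))  # le chien est petit
```

Real GF grammars handle agreement, word order, and morphology per language; this sketch only shows how a single abstract tree can feed multiple concrete renderings.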
jsRealB
KPML
Linguistic Knowledge Builder
Multimodal Unification Grammar
NaturalOWL
NLGen and NLGen2
OpenCCG
rLDCP
RoseaNLG
Semantic Web Authoring Tool (SWAT)
SimpleNLG
SPUD
Suregen-2
Syntax Maker
TGen
Universal Networking Language
UralicNLP
  • Website: https://uralicnlp.com/
  • Source: https://github.com/mikahama/uralicNLP
  • Supported languages: Finnish, Russian, German, English, Norwegian, Swedish, Arabic, Ingrian, Meadow & Eastern Mari, Votic, Olonets-Karelian, Erzya, Moksha, Hill Mari, Udmurt, Tundra Nenets, Komi-Permyak, North Sami, South Sami and Skolt Sami[1]

Theoretical background

Natural language generation is a sub-field of natural language processing. See the broader topic on Scholia.[2]

Pipeline model

In their 2018 survey,[3] Gatt[4] and Krahmer[5] describe natural language generation as the "task of generating text or speech from non-linguistic input." Following Reiter & Dale (1997, 2000[6]), they identify six sub-problems (§2, "NLG Tasks", pp. 70–82):[3]

  1. Content determination (content determination (Q5165077))
  2. Text structuring (document structuring (Q5287648))
  3. Sentence aggregation (aggregation (Q4692263))
  4. Lexicalisation (lexical choice (Q6537688))
  5. Referring expression generation (referring expression generation (Q7307185))
  6. Linguistic realisation (realization (Q7301282))

Note that, as of 24 July 2020, the six topics listed above had articles only in the English Wikipedia.

These six sub-problems can be seen as stages of a pipeline: the "early" tasks are aligned with the purpose of the linguistic output, while the "late" tasks are aligned with its final linguistic form. In summary: what to say (1), ordered (2) and segmented (3) how, with which words (4 & 5), in which forms (6). In this summary, lexicalisation (4) is not clearly distinguished from referring expression generation (REG) (5). The key idea in REG is avoiding both repetition and ambiguity, or rather managing the tension between these conflicting aims. This corresponds to the Gricean maxim (Grice, 1975[7]) that "speakers should make sure that their contributions are sufficiently informative for the purposes of the exchange, but not more so" (or, as Roger Sessions said (1950) after Albert Einstein (1933): "everything should be as simple as it can be but not simpler!").
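The six stages can be sketched as a chain of functions, each consuming the previous stage's output. This is a toy illustration of the pipeline's shape only; every function below is a hypothetical stand-in for a full NLG component, and the knowledge-base fields are invented.

```python
# Hypothetical sketch of the six-stage Reiter & Dale pipeline.
# Each function is a stand-in for a real NLG component.

def determine_content(kb):            # 1. content determination: what to say
    return [("entity", kb["name"]), ("rank", kb["mass_rank"]), ("class", kb["class"])]

def structure(messages):              # 2. text structuring: order the messages
    return messages                   # trivial: a single-sentence document plan

def aggregate(doc_plan):              # 3. aggregation: merge into one sentence plan
    return dict(doc_plan)

def lexicalise(sent):                 # 4. lexical choice: pick words for concepts
    rank_word = {1: "most massive"}[sent["rank"]]
    return {"subj": sent["entity"], "adj": rank_word, "noun": sent["class"]}

def refer(sent, mentioned):           # 5. referring expression generation
    if sent["subj"] in mentioned:     # use a pronoun for an already-mentioned entity
        sent["subj"] = "It"
    return sent

def realise(sent):                    # 6. linguistic realisation: surface string
    return f"{sent['subj']} is the {sent['adj']} {sent['noun']}."

kb = {"name": "Jupiter", "mass_rank": 1, "class": "planet"}
text = realise(refer(lexicalise(aggregate(structure(determine_content(kb)))), set()))
print(text)  # Jupiter is the most massive planet.
```

In a real system each stage is far richer (discourse planning, syntactic choice, morphology), but the data-flow shape, from non-linguistic input to surface text, is the same.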

Content determination

Document structuring

Aggregation

Lexical choice

Referring expression generation

Realization

“In linguistics, realization is the process by which some kind of surface representation is derived from its underlying representation; that is, the way in which some abstract object of linguistic analysis comes to be produced in actual language. Phonemes are often said to be realized by speech sounds. The different sounds that can realize a particular phoneme are called its allophones.”
“Realization is also a subtask of natural language generation, which involves creating an actual text in a human language (English, French, etc.) from a syntactic representation.”
English Wikipedia
(Wikipedia contributors, “Realization”, Wikipedia, The Free Encyclopedia, 26 May 2020, 02:46 UTC, <https://en.wikipedia.org/w/index.php?title=Realization&oldid=958866516> [accessed 31 August 2020].)
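As a concrete illustration of realisation as the last pipeline stage, the sketch below maps an abstract lemma-plus-features representation to an inflected surface form. The tag format imitates FST-style morphological analyses (as used by tools like UralicNLP), but the tiny paradigm table and function are invented for illustration; real realisers use full morphological grammars or transducers.

```python
# Toy surface realisation: abstract analysis string -> inflected form.
# The paradigm table is a hand-made illustration (Finnish examples).

PARADIGM = {  # (lemma, number, case) -> surface form
    ("koira", "Sg", "Nom"): "koira",      # "dog"
    ("koira", "Sg", "Ine"): "koirassa",   # "in the dog"
    ("koira", "Pl", "Ine"): "koirissa",   # "in the dogs"
}

def realise(analysis: str) -> str:
    """Split an FST-style tag string and look up the surface form."""
    lemma, _pos, number, case = analysis.split("+")
    return PARADIGM[(lemma, number, case)]

print(realise("koira+N+Sg+Ine"))  # koirassa
```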


Black-box approach

In a later survey, Gârbacea and Mei[8] identified "neural language generation" as an emerging sub-field of NLG. Eleven of the papers cited in their survey have "neural language" in their titles, the earliest from 2016 (Édouard Grave, Armand Joulin, and Nicolas Usunier)[9]. The earliest cited work in which "neural language generation" appears is from 2017 (Jessica Ficler and Yoav Goldberg)[10].

As of mid-2020, neural language generation was not mature enough to generate natural-language renditions of language-neutral content.

References

  • Jessica Ficler and Yoav Goldberg, 2017[10]
  • Édouard Grave, Armand Joulin, and Nicolas Usunier, 2016[9]
  • Gârbacea and Mei, 2020[8]
  • Gardent et al., 2017[11]
  • Gatt & Krahmer, 2018[3]
  • Grice, 1975[7]
  • Reiter & Dale, 2000[6] (The linked PDF contains only the first section.)

Notes

  1. https://models.uralicnlp.com/nightly/
  2. The Scholia view on Natural-language generation lacked the standard sources and leading authors on 27 July 2020. Instead, see Google Scholar.
  3. a b c Gatt, Albert; Krahmer, Emiel (January 2018), "Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation", Journal of Artificial Intelligence Research 61: 65–170, archived from the original on 2020-06-23, retrieved 2020-07-24 
  4. Gatt's publications
  5. Emiel Krahmer (Q51689943) selected publications
  6. a b Reiter, EB; Dale, R (2000), Building Natural-Language Generation Systems (PDF), Cambridge University Press, archived from the original (PDF) on 2019-07-11, retrieved 2020-07-27
  7. a b Grice, H. Paul (1975), Logic and conversation (PDF), retrieved 2020-08-10 
  8. a b Gârbacea, Cristina; Mei, Qiaozhu, Neural Language Generation: Formulation, Methods, and Evaluation (PDF), pp. 1–70, retrieved 2020-08-08, Compared to the survey of (Gatt and Krahmer, 2018), our overview is a more comprehensive and updated coverage of neural network methods and evaluation centered around the novel problem definitions and task formulations. 
  9. a b Grave, Édouard; Joulin, Armand; Usunier, Nicolas (2016), Improving neural language models with a continuous cache (PDF) 
  10. a b Ficler, Jessica; Goldberg, Yoav (2017), "Controlling linguistic style aspects in neural language generation" (PDF), Proceedings of the Workshop on Stylistic Variation: 94–104. Published slightly earlier that year:
    Tran, Van-Khanh; Nguyen, Le-Minh (2017), Semantic Refinement GRU-based Neural Language Generation for Spoken Dialogue Systems (PDF)
  11. Gardent, Claire; Shimorina, Anastasia; Narayan, Shashi; Perez-Beltrachini, Laura (2017), "The WebNLG Challenge: Generating Text from RDF data." (PDF), Proceedings of the 10th International Conference on Natural Language Generation: 124–133