Research:Quantifying the role of images for navigation on Wikipedia
This page documents a research project in progress.
Information may be incomplete and change as the project progresses.
Please contact the project lead before formally citing or reusing results from this page.
The goal of this project is to study the importance of images in readers' navigation and information-seeking behaviour on Wikipedia. Following previous research [1] showing that images elicit high levels of engagement, in this project we investigate in more depth the impact of images when readers navigate Wikipedia and whether images support their information needs. Some of the research questions that we aim to answer are:
- Do images affect the way readers search for information on Wikipedia?
- Is it faster/easier to search for information if images are combined with the text?
- Does engagement with illustrated parts of the articles change with respect to non-illustrated ones?
- Some studies have found that engagement decreases with article length. Do images help sustain readers' attention and carry them to the lower parts of articles?
- Do images help fulfil readers' information needs?
- Are readers satisfied during/after their navigation? Do images play a role in that?
Methodology
To answer our questions, we design a set of large-scale studies using both Wikipedia server logs and crowdsourced navigation data.
Do readers engage more with illustrated sections?
Images are a powerful means of conveying concepts and information. Educational psychology research shows that illustrations serve many cognitive functions, among them an attentional one: images draw attention to the textual information, evoke emotions, and make information easier to understand [2].
Here, we investigate whether and to what degree this is reflected in the way readers browse Wikipedia articles. Namely, we address the following research question: are illustrated parts of articles more engaging?
Data & Methods
To answer our question, we first need to divide articles into parts that can be labeled as illustrated or not. We identify these parts with sections, as sections constitute self-contained units of an article. We extract sections from a snapshot of the articles as of March 2021. From the same source, we extract the images they contain and label each section as illustrated or not.
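As a rough illustration of the labeling step, the sketch below walks the rendered HTML of an article in document order and marks a section as illustrated if it contains at least one image. The HTML-based approach and the function name are our own assumptions, not necessarily the study's actual pipeline.

```python
# Minimal sketch: label each section of an article as illustrated or not,
# assuming we operate on the article's rendered HTML (hypothetical setup).
from bs4 import BeautifulSoup

def label_sections(article_html: str) -> dict[str, bool]:
    """Return {section title: True if the section contains an image}."""
    soup = BeautifulSoup(article_html, "html.parser")
    labels = {}
    title, has_image = "Lead", False           # the lead section has no heading
    for node in soup.find_all(["h2", "img"]):  # returned in document order
        if node.name == "h2":                  # a new top-level section starts
            labels[title] = has_image
            title, has_image = node.get_text(strip=True), False
        else:                                  # an image in the current section
            has_image = True
    labels[title] = has_image                  # flush the last section
    return labels
```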
To quantify engagement with sections, we base our analysis on the links clicked. In other words, a section receives an interaction if the reader clicks on any link within that section. We collect these data from the Wikimedia server logs available in the Webrequest table. We analyze the server logs collected over four consecutive weeks from March 1st to 28th, 2021, for the English Wikipedia. In addition:
- we limit the analysis to the articles in the main namespace, and drop the Main Page
- we consider only anonymous users (not logged in) and exclude requests identified as bots
Finally, we compute the section click-through rate (CTR) as the share of readers who clicked on at least one link in the section over the total number of readers who viewed the article.
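As a minimal sketch of this computation, assume two hypothetical tables derived from the Webrequest logs: `pageviews` with one row per (article, reader) view, and `clicks` with one row per link click, annotated with the section containing the clicked link. All names below are illustrative.

```python
# Compute the per-section click-through rate from hypothetical log extracts.
import pandas as pd

pageviews = pd.read_parquet("pageviews.parquet")  # columns: article, reader_id
clicks = pd.read_parquet("clicks.parquet")        # columns: article, section, reader_id

# distinct readers who clicked at least one link in each section
clickers = (clicks.groupby(["article", "section"])["reader_id"]
                  .nunique().rename("clickers").reset_index())

# distinct readers who viewed each article
viewers = (pageviews.groupby("article")["reader_id"]
                    .nunique().rename("viewers").reset_index())

ctr = clickers.merge(viewers, on="article")
ctr["section_ctr"] = ctr["clickers"] / ctr["viewers"]
```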
Key findings
We perform a matched observational study comparing illustrated vs. non-illustrated sections (a sketch of the matching step follows the list below) and find the following:
- there is a clear positive effect of images on engagement: on average, links receive more clicks when they are accompanied by images within the same section. The median CTR of illustrated sections is twice that of non-illustrated ones;
- we assign each section the ORES topics of the article it belongs to and repeat the same analysis. We find that illustrations drive more attention for almost all topics, with the exception of Culture.Linguistics.
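The exact matching criteria are not detailed here. As one plausible sketch, illustrated sections can be paired with non-illustrated sections of the same article, so that article-level confounders such as topic and popularity are held fixed; the pairing rule and column names below are our own assumptions.

```python
# Sketch of a within-article matched comparison of section CTRs.
import pandas as pd
from scipy.stats import wilcoxon

sections = pd.read_parquet("section_ctr.parquet")
# columns: article, section, illustrated (bool), section_ctr

# pair each illustrated section with a non-illustrated one from the same article
pairs = sections.merge(sections, on="article", suffixes=("_ill", "_non"))
pairs = pairs[pairs["illustrated_ill"] & ~pairs["illustrated_non"]]
pairs = pairs.groupby("article").first().reset_index()  # one pair per article

stat, p = wilcoxon(pairs["section_ctr_ill"], pairs["section_ctr_non"])
ratio = pairs["section_ctr_ill"].median() / pairs["section_ctr_non"].median()
print(f"median CTR ratio (illustrated / non-illustrated): {ratio:.2f}, p = {p:.3g}")
```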
Do images support navigation on information networks?
In the first part, we find that illustrations increase engagement with the surrounding text. We then ask whether images help readers find information on a page, and design a crowdsourcing experiment to test this.
Data & Methods
To answer our second question, we design an experiment where we ask readers to navigate between two random Wikipedia articles using only internal links (i.e., wikilinks).
We analyze navigation traces obtained from an online crowdsourced experiment where people play the Wikispeedia game. We ask participants to find a path between two given articles by following links. The experiment has two setups: one in which all the articles are illustrated, and one in which all the images are removed (a sketch of this step follows below). We recruit participants through Prolific and by asking volunteers via social media and personal contacts. Prolific is a crowdsourcing platform specifically designed for research, and has been shown to provide higher-quality data than similar platforms such as Amazon MTurk [3].
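As a minimal sketch of how the image-free condition could be produced, the snippet below strips image elements from the article HTML before it is served in the game; this is an assumption about the implementation, not the experiment's actual code.

```python
# Strip images (and their captions) from an article's HTML to build the
# "no images" condition of the experiment (hypothetical implementation).
from bs4 import BeautifulSoup

def remove_images(article_html: str) -> str:
    soup = BeautifulSoup(article_html, "html.parser")
    for tag in soup.find_all("figure"):  # images wrapped with captions
        tag.decompose()
    for tag in soup.find_all("img"):     # any remaining bare images
        tag.decompose()
    return str(soup)
```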
Results
We observe that information seeking is more efficient in the presence of images. Participants who complete tasks with illustrations take 19% less time to find the solution and follow shorter paths. This is particularly evident when images are relevant to the surrounding text: when we shuffle the images away from their original positions on the page, participants mostly ignore the content of the images and perform similarly to the unillustrated setup.
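A minimal sketch of this comparison, assuming a hypothetical per-task table with the experimental condition, completion time, and path length:

```python
# Compare completion times between the two experimental conditions.
import pandas as pd
from scipy.stats import mannwhitneyu

runs = pd.read_csv("wikispeedia_runs.csv")  # columns: condition, time_s, path_len

ill = runs.loc[runs["condition"] == "illustrated", "time_s"]
non = runs.loc[runs["condition"] == "no_images", "time_s"]

# one-sided test: are illustrated runs faster?
stat, p = mannwhitneyu(ill, non, alternative="less")
reduction = 1 - ill.median() / non.median()
print(f"median time reduction with images: {reduction:.0%} (p = {p:.3g})")
```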
Future work
We still do not know whether, and for how long, readers actually look at images while browsing. To obtain this information, we plan to design an information-seeking experiment and collect eye-tracking data from volunteer participants.
Are illustrations useful to find information?
During the experiment, we will simulate an information-seeking scenario in which participants are given a Wikipedia article and asked to find some information in it. The answer could be located in any textual or visual part of the article, such as paragraphs, images, or captions. While participants search for the answer, an eye-tracking device will track their gaze coordinates on the screen.
Remote eye-tracking
To collect data from a large and heterogeneous set of participants, we can use WebGazer, an open-source webcam-based eye-tracking tool. WebGazer is a JavaScript library that can be integrated into any website and runs entirely in the participant's browser. We developed a web app that integrates WebGazer into our experiment (the source code is maintained here). During the experiment, the eye-tracker will collect only the participant's gaze coordinates on the screen; no video data needs to be processed. We will also not collect any personal data from the participants. The experiment will be run online using a crowdsourcing platform such as Prolific.
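Once collected, the gaze samples need to be mapped to the parts of the article the participant was looking at. The sketch below shows one way to do this downstream of WebGazer, assuming a hypothetical log of (timestamp, x, y) samples and known bounding boxes for each on-screen region; all file and column names are illustrative.

```python
# Map gaze samples to article regions and estimate per-region dwell time.
import pandas as pd

gaze = pd.read_csv("gaze_samples.csv")     # columns: t_ms, x, y
regions = pd.read_csv("region_boxes.csv")  # columns: region, x0, y0, x1, y1

def region_of(x: float, y: float) -> str:
    hit = regions[(regions.x0 <= x) & (x < regions.x1) &
                  (regions.y0 <= y) & (y < regions.y1)]
    return hit["region"].iloc[0] if len(hit) else "outside"

gaze["region"] = [region_of(x, y) for x, y in zip(gaze["x"], gaze["y"])]

# approximate dwell time: samples per region times the median sampling interval
dwell_ms = gaze.groupby("region")["t_ms"].count() * gaze["t_ms"].diff().median()
print(dwell_ms.sort_values(ascending=False))
```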
References
edit- ↑ Research:Understanding Engagement with Images in Wikipedia
- ↑ Levie, W. Howard; Lentz, Richard (1982). "Effects of text illustrations: A review of research". Educational Communication and Technology Journal 30 (4): 195–232. doi:10.1007/BF02765184.
- ↑ Behavior Research Methods.