About this database

Language Elements is a living systematic review database of neurostimulation studies examining the causal role of brain regions in language processing. It compiles findings from transcranial magnetic stimulation (TMS), transcranial electrical stimulation (tES), and direct electrical stimulation (DES) studies — the only techniques capable of establishing causal brain–language relationships rather than correlational ones.

The database is built around elementalism: a theoretical framework that characterises brain regions by the minimal computational operations they causally support, inferred bottom-up across heterogeneous tasks (following Genon et al., 2018). This high-specificity approach distinguishes Language Elements from existing resources, which typically catalogue findings at the level of broad linguistic domains.

The database is designed to serve both basic research — supporting experimental planning and meta-analytic synthesis — and intraoperative language mapping, providing a searchable evidence base of tasks documented in the neurostimulation literature for each brain region.

The database

The systematic review (PROSPERO: CRD42024602006) searched PubMed, Scopus, Embase, and PsycINFO between October 2024 and January 2025, returning 12,763 records. After deduplication and screening, the current database includes:

221 included papers
606 outcomes
38 languages covered
~6,300 participants
44 data points per paper

Updated 28 April 2026

The database is live and updated as screening and extraction continue.

How to cite

A paper describing the database and the elementalism framework is currently in submission. In the meantime, please cite the PROSPERO registration:

Williamson, T. R., et al. (2024). Elements of the neurobiology of language: A neurostimulation model. PROSPERO CRD42024602006. https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=602006

Key references: De Witte et al. (2015), DuLIP protocol (doi:10.1080/02687038.2015.1071993) · Wager et al. (2017), operating environment (doi:10.1016/j.neuchi.2016.10.002)

Contributors

Language Elements is an international collaboration between research teams in the UK and Germany.

United Kingdom
T. R. Williamson
Project lead · PhD researcher
UWE Bristol & Southmead Hospital, North Bristol NHS Trust
Anna E. Piasecki
Co-investigator · Psycholinguistics
UWE Bristol
Neil U. Barua
Clinical validator · Neurosurgeon
Southmead Hospital, North Bristol NHS Trust
Eimear McKnight
Research assistant · Syntax
UWE Bristol & Queen Mary University of London
Antonia Vogt
Collaborator · Neurosurgery
University of Cambridge
Kristofer Kinsey
Collaborator · Neuropsychology
UWE Bristol
Naomi Heffer
Collaborator · Neurostimulation
UWE Bristol
Jemma Sedgmond
Collaborator · Neurostimulation
UWE Bristol
Sonia Mariotti
Collaborator · Bilingualism
UWE Bristol & Southmead Hospital
Germany
Gesa Hartwigsen
Professor of Cognitive Neuroscience
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig & Leipzig University
Philipp Kuhnke
Postdoctoral researcher · Neurobiology of language
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig & Leipzig University
Lydia Wiernik
Collaborator · Sign language & multilingualism
University of Göttingen & MPI Leipzig

Funding

This work was supported by the British Council Going Global Partnerships Springboard Programme (UK–Germany).

Contact

For queries about the database, the systematic review, or potential collaborations, please contact T. R. Williamson at t.williamson@uwe.ac.uk.

How it works
Database tab

The Database tab searches the literature directly. Every result is drawn verbatim from the systematic review dataset — no inference, no generation. Free-text search and filters (stimulation type, linguistic area, hemisphere, inhibition/facilitation) operate on the raw data. The brain visualiser plots MNI coordinates extracted directly from the included papers.

Elements tab

Type a query to begin. Elements mode accepts searches across six categories: brain region, task name, stimulus type, language, linguistic domain, or reported effect of stimulation. The tool interprets the query automatically and returns a functional profile of the relevant studies alongside several context summaries.

Search interpretation

The search bar classifies the query into one of six categories and shows the resolved category below the bar, with an override dropdown if the interpretation is wrong. Region, language, and linguistic domain queries are resolved via synonym tables. Task, stimulus, and reported-effect queries are resolved via an AI inference pass that reasons about the construct behind the query — so a search for “picture naming” returns papers whose paradigms instantiate picture naming, not only those that use the string verbatim.

1. Category classification (No AI · instant, or AI · ~2–3 s)

The query is routed to a region, language, or linguistic domain synonym table first; if a match is found, results are returned instantly with no network request. If no match is found, an AI call infers whether the query is a task, stimulus, or reported effect and caches the result for subsequent searches.

2. Construct-aware matching for open-vocabulary searches (AI · ~30–50 s first time, then cached)

For task, stimulus, and reported-effect searches, the tool reasons about the neuropsychological construct behind the query, groups filtered papers into construct clusters, and returns per-paper rationales for inclusion. Results are cached per query.

Functional characterisation

The tool identifies the elements — minimal computational operations — that the filtered papers collectively support. Elements are inferred bottom-up from the data, not assigned top-down from a fixed ontology. Each element card shows the studies grouping under it, their combined sample size, and a confidence indicator. On each card, “What is this element?” returns a plain-language definition; “How was this element derived?” returns the reasoning chain from the grouped studies.

1. AI-assisted characterisation (AI · ~15–25 s)

Elements mode uses a two-stage AI pipeline to infer element labels bottom-up from the filtered studies. Operation-function labels are generated following the functional characterisation approach of Genon et al. (2018). Label specificity is calibrated to the neuroanatomical hierarchy level of the search. Results may vary slightly between sessions due to the probabilistic nature of language models.

2. Flat process list (No AI · always available)

If the AI call fails — for example, because of a network issue or an API outage — Elements mode falls back to a flat, ranked list of all identified processes ordered by study count. All data remains drawn directly from the systematic review.
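The fallback itself is simple aggregation over the review data. A minimal sketch, assuming each paper record carries a hypothetical `"processes"` field (the field name is illustrative):

```python
from collections import Counter


def flat_process_list(papers: list[dict]) -> list[tuple[str, int]]:
    """Fallback view: every identified process, ranked by study count."""
    counts = Counter(p for paper in papers for p in paper["processes"])
    return counts.most_common()  # (process, study count), highest first
```

No model call is involved, which is why this view is always available.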

Context summaries and drill-downs

Below the elements, five summary cards describe what the filtered papers report for the five data points not searched. Each summary is a short prose synthesis; each has a "Breakdown" button that opens a detailed view in the right panel. The right panel stacks: multiple breakdowns and element explanations can be open at once, each with its own close button. Region breakdowns render as a three-level neuroanatomical hierarchy (lobe → gyrus → Brodmann area) with paper counts rolled up at each level. Task, stimulus, and effect breakdowns render as construct clusters with member papers grouped under canonical construct labels.
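The region roll-up described above can be sketched as a count at the leaf level followed by sums at each parent level. The record fields (`"lobe"`, `"gyrus"`, `"ba"`) are illustrative stand-ins for the database schema:

```python
from collections import defaultdict


def rollup_region_counts(papers: list[dict]) -> dict:
    """Build a lobe -> gyrus -> Brodmann-area tree with counts rolled up."""
    counts: dict = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
    for paper in papers:
        counts[paper["lobe"]][paper["gyrus"]][paper["ba"]] += 1
    # Each lobe and gyrus carries the sum of its children's paper counts.
    return {
        lobe: {
            "n": sum(sum(bas.values()) for bas in gyri.values()),
            "gyri": {
                gyrus: {"n": sum(bas.values()), "bas": dict(bas)}
                for gyrus, bas in gyri.items()
            },
        }
        for lobe, gyri in counts.items()
    }
```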

Tasks tab

Type a brain region name to begin. The tool runs in two stages: first summarising what the neurostimulation literature reports for the region, then surfacing tasks from the database that studies have used at or near that site.

Region summary

When you search a region, the tool identifies all neurostimulation studies in the database targeting that region and synthesises the linguistic processes the studies implicate. These processes are grouped into summary element labels using a two-stage pipeline with a defined fallback:

1. AI-assisted region characterisation (AI · ~15–25 s)

When you search a region, Tasks mode uses a two-stage AI pipeline to summarise the linguistic processes studied at that site. Summary element labels are generated using bottom-up functional characterisation guided by Genon et al. (2018). Label specificity is calibrated to the neuroanatomical hierarchy level of the searched region. Results may vary slightly between sessions due to the probabilistic nature of language models.

2. Flat process list (No AI · always available)

If the AI call fails, Tasks mode falls back to a flat, ranked list of all identified processes ordered by study count. All data remains drawn directly from the systematic review.

Database-derived tasks

Once a region has been summarised, the tool surfaces tasks from the database that studies have used at or near that region, in three steps:

1. Evidence retrieval (No AI · instant)

All tasks used in neurostimulation studies of this region are retrieved from the database and organised by the processes they target. Tasks are deduplicated and prepared for the AI, which receives the full task evidence for this region alongside the region summary from the previous stage.
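The retrieval step amounts to grouping and deduplicating task rows before they are handed to the model. A minimal sketch, assuming hypothetical `"process"` and `"task"` fields on each database row:

```python
def prepare_task_evidence(rows: list[dict]) -> dict[str, list[str]]:
    """Group task rows by targeted process, deduplicating task names in order."""
    by_process: dict[str, list[str]] = {}
    for row in rows:
        tasks = by_process.setdefault(row["process"], [])
        if row["task"] not in tasks:  # keep first occurrence only
            tasks.append(row["task"])
    return by_process
```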

2. AI-assisted task selection with literature guardrails (AI · ~30–40 s)

An AI model selects and ranks tasks from the database evidence, applying clinical constraints drawn from two peer-reviewed frameworks: De Witte et al. (2015) (DuLIP) and Wager et al. (2017) (operating environment). All tasks shown cite their source papers. Where the database lacks a suitable task for a specific process, the model may surface a DuLIP fallback task (De Witte et al., 2015) — clearly labelled as such — so that every process identified in the region summary has at least one candidate task, drawn either from the systematic review literature or the DuLIP protocol.
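The coverage guarantee described above can be sketched as a post-processing pass over the model's selections. The data shapes and the `dulip_tasks` lookup are illustrative assumptions, not the tool's actual structures:

```python
def ensure_task_coverage(
    selected: dict[str, list[dict]],
    dulip_tasks: dict[str, dict],
) -> dict[str, list[dict]]:
    """Guarantee every process has at least one candidate task.

    Processes the model left empty receive a clearly labelled DuLIP
    fallback task (De Witte et al., 2015) when one is available.
    """
    out: dict[str, list[dict]] = {}
    for process, tasks in selected.items():
        if tasks:
            out[process] = tasks
        elif process in dulip_tasks:
            fallback = dict(dulip_tasks[process])
            fallback["source"] = "DuLIP fallback (De Witte et al., 2015)"
            out[process] = [fallback]
        else:
            out[process] = []  # no suitable task found anywhere
    return out
```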

3. Fallback if AI is unavailable (No AI · instant)

If the AI call fails, no tasks are shown. The region summary from the previous stage remains visible and can still be used to support the clinician's own literature review.

Clinical disclaimer
The database-derived task lists presented by this tool are intended to support the clinician's own literature review, not replace clinical judgement. They are drawn from the neurostimulation research literature and have not been prospectively validated as clinical decision-support outputs. All task selection for awake craniotomy procedures must be approved by the responsible surgical and neuropsychology team. Confidence scores are heuristic indicators based on study count, stimulation modality, and sample size — they are not validated clinical risk scores.
Brain Visualiser
Left drag: rotate · Right drag: pan · Scroll: zoom