Technology research & development

Contact: James Thomas

There are several areas of research and development activity concerning the use of technology in our research processes; five of our main projects in this area are listed below.

EPPI Reviewer: software for systematic reviews

Our EPPI Reviewer software has been a long-standing area of technology research and development. It aims to be an end-to-end solution for all types of systematic review, managing data from citation screening through to synthesis. Recent developments include the incorporation of machine learning to support citation screening, study classification and data extraction, and the integration of the statistical software 'R' at the 'back end' to facilitate advanced statistical analyses. Further information about the software is available on the EPPI-Reviewer Gateway.

Digital Evidence Synthesis Tools (DEST) in Climate and Health

Digital evidence synthesis tools (DESTs) are tools developed to automate one or more tasks of an evidence synthesis project, such as a systematic review or systematic map. We understand automation in a broad sense that includes complex machine learning tasks but also more trivial ones, such as automatically populating database fields with manually extracted data (Tsafnat et al. 2014).

This project, funded by the Wellcome Trust and led by Pauline Scheelbeek from LSHTM, took a co-creative approach to research on DESTs, in which we:

  1. assessed user needs and practices for climate & health evidence synthesis
  2. compiled and evaluated DESTs
  3. drafted recommendations on priority areas for applications of DESTs in climate and health

Project team: Dr Pauline Scheelbeek, Prof James Thomas, Prof Jan Minx, Dr Max Callaghan, Dr Ashrita Saran, Prof Julian Elliott, Dr Christopher Trisos, Dr Patrick O'Driscoll, Dr Melissa Bond, Dr Alison O'Mara-Eves and Ms Genevieve Hadida

The database of DESTs compiled as part of this work can be viewed here.

Finding Accessible Inequalities Research In Public Health (The FAIR database)

This work has been developing methods that apply machine learning and natural language processing to support the review, assessment, evaluation and summarisation of large volumes of public health research for decision making. We have developed and applied automatic methods for identifying information about inequalities, study types and common themes mentioned within this research. The outputs of these techniques are available through an online tool containing a continuously updated repository of public health research.
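
As a rough illustration of how such tagging can work, the sketch below (in Python) pairs a small keyword lexicon for inequality dimensions with a simple supervised classifier for study type. The lexicon, labels and example records are invented for illustration; they are not the FAIR database's actual coding scheme or models.

    # Illustrative sketch only: the lexicon, labels and records below are invented
    # examples, not the FAIR database's actual coding scheme or models.
    import re
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy lexicon of inequality dimensions (an assumption, not the real taxonomy).
    DIMENSIONS = {
        "socioeconomic": ["income", "poverty", "deprivation", "socioeconomic"],
        "ethnicity": ["ethnic", "ethnicity", "minority"],
        "gender": ["gender", "women", "men"],
    }

    def tag_dimensions(abstract):
        """Return the inequality dimensions whose keywords appear in the abstract."""
        text = abstract.lower()
        return [dim for dim, words in DIMENSIONS.items()
                if any(re.search(rf"\b{w}\b", text) for w in words)]

    # Toy training data for a study-type classifier (labels invented for illustration).
    abstracts = [
        "We conducted a randomised controlled trial of a smoking cessation programme.",
        "A cross-sectional survey of dietary habits among adolescents.",
        "Participants were randomised to intervention or control arms.",
        "We surveyed households to estimate the prevalence of food insecurity.",
    ]
    study_types = ["trial", "survey", "trial", "survey"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(abstracts, study_types)

    new = "A randomised trial of income support for families in deprived areas."
    print(tag_dimensions(new))    # e.g. ['socioeconomic']
    print(clf.predict([new])[0])  # e.g. 'trial'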

AI and equity: what are the benefits and harms associated with this new generation of decision-making tools?

A partnership of the Campbell Collaboration and the EPPI Centre has mapped the evidence on AI and equity in consultation with the American Institutes for Research (AIR).

The map was constructed by searching 19 databases and downloading the results into EPPI Reviewer. 34,541 records were identified, of which 8,485 were found to be duplicates. The remainder were screened automatically by GPT-4, with a human-validated sensitivity of 95% and specificity of 100%. The 6,628 records that remained were then 'mapped' using a pre-defined coding tool, again by GPT-4. Human validation of the mapping found that 86% of records contained no errors and a further 12% contained only minor errors, so automation was deemed sufficiently reliable for the map to be placed online.
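
The sensitivity and specificity quoted above compare GPT-4's include/exclude decisions with human judgments on a validation sample. A minimal sketch of that calculation is shown below; the sample counts are invented, chosen only so that the toy output matches the reported 95% and 100% figures.

    # Hedged sketch: comparing automated screening decisions with human-validated decisions.
    # The sample counts below are invented for illustration; they are not the project's data.

    def sensitivity_specificity(pairs):
        """pairs: list of (human_include, model_include) booleans."""
        tp = sum(1 for h, m in pairs if h and m)          # relevant, kept by the model
        fn = sum(1 for h, m in pairs if h and not m)      # relevant, wrongly excluded
        tn = sum(1 for h, m in pairs if not h and not m)  # irrelevant, correctly excluded
        fp = sum(1 for h, m in pairs if not h and m)      # irrelevant, wrongly kept
        sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
        specificity = tn / (tn + fp) if (tn + fp) else float("nan")
        return sensitivity, specificity

    # Toy validation sample: (human decision, GPT-4 decision) for each record.
    sample = [(True, True)] * 19 + [(True, False)] * 1 + [(False, False)] * 80
    sens, spec = sensitivity_specificity(sample)
    print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # sensitivity=95%, specificity=100%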

Identifying relevant studies for systematic reviews and health technology assessments using text mining

The specificity of sensitive electronic searches of bibliographic databases to find studies for inclusion in systematic reviews is typically low. Reviewers often need to look manually through many thousands of irrelevant titles and abstracts to identify the much smaller number of relevant ones, a process known as 'screening'. Given that an experienced reviewer can take between 30 seconds and several minutes to evaluate a citation, the work involved in screening 10,000 citations is considerable: at 30 seconds per record that is already more than 80 hours of work, and the screening burden is sometimes considerably higher than this. This MRC-funded project is developing and evaluating methods for using text mining to automate some of this laborious work.
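
One common approach, sketched below as a general illustration rather than the specific models developed in this project, is priority screening: train a classifier on the citations that have already been screened and use it to rank the remaining citations by predicted relevance, so that likely includes are seen first. The titles and labels in the sketch are invented.

    # Minimal priority-screening sketch; titles, labels and the model choice are
    # illustrative assumptions, not this project's actual pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    screened_texts = [
        "Randomised trial of exercise therapy for chronic back pain.",
        "Cohort study of statin use and cardiovascular outcomes.",
        "Editorial: the future of hospital management.",
        "Letter to the editor on conference attendance.",
    ]
    screened_labels = [1, 1, 0, 0]  # 1 = included, 0 = excluded (toy decisions)

    unscreened = [
        "Trial of cognitive behavioural therapy for insomnia.",
        "Opinion piece on research funding policy.",
    ]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(screened_texts, screened_labels)

    # Rank unscreened citations so the most promising are screened first;
    # in practice the model would be retrained as new human decisions accumulate.
    scores = model.predict_proba(unscreened)[:, 1]
    for text, score in sorted(zip(unscreened, scores), key=lambda pair: -pair[1]):
        print(f"{score:.2f}  {text}")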

Publications resulting from this work

Miwa M, Thomas J, O’Mara-Eves A, Ananiadou S (2014) Reducing systematic review workload through certainty-based screening. Journal of Biomedical Informatics. 51: 242–253

O’Mara-Eves A, Thomas J, McNaught J, Miwa M, Ananiadou S (2015) Using text mining for study identification in systematic reviews: a systematic review of current approaches. Systematic Reviews 4:5. doi:10.1186/2046-4053-4-5

Thomas J (2013) Diffusion of innovation in systematic review methodology: why is study selection not yet assisted by automation? OA Evidence-Based Medicine. Oct 21;1(2):12

We encourage you to contact us if you have data that we could use, would like to participate in our study with a 'live' review, or would like to hear more about this work.
