A systematic review and meta-analysis of the effectiveness of ICT on literacy learning in English, 5-16. Policy-maker perspective

Summary of results

The systematic review of post-1990 research studies on Information and Communication Technology (ICT) and literacy learning in English that used a randomised controlled trial (RCT) methodology found little evidence that ICT has a positive effect on literacy outcomes.

Research question

What is the evidence for the effectiveness of ICT on literacy learning in English amongst 5-16 year-olds?


Background

In 2002, the EPPI Centre English Review Group completed a systematic review of the impact of ICT on literacy learning in English. This included the production of a 'map' describing all research in the field which met the inclusion criteria for the review. The present review is one of a number of in-depth sub-reviews addressing aspects of the overarching research question: 'what is the impact of ICT on literacy learning in English?'.

The policy background of the review is that since the mid-1980s, the use of ICT in schools to support literacy learning has become increasingly pervasive, with considerable government investment in schools' ICT resources. However, this use of ICT for teaching literacy has not been underpinned by robust evidence from effectiveness research.

While there are existing systematic reviews in the field of ICT effectiveness and literacy, these are insufficiently rigorous in that they include non-randomised or poor-quality trials, or are narrowly focused on particular aspects of literacy or particular populations (e.g. pupils with learning disabilities).


Methods

The earlier systematic review mapped research on the impact of ICT on literacy learning in English, for 1990-2001. The researchers updated searches for 2001-2002, screened potentially relevant studies for inclusion, and re-keyworded studies in the original map according to study type.

Criteria for inclusion/exclusion were developed. Given the focus on effectiveness, the requirement was for studies which used rigorous methods to assess effectiveness. Studies were restricted to those where pupils were randomly allocated to an ICT or no ICT treatment for the teaching of literacy, within either individual or cluster randomised trials. It was also required that studies presented effect sizes or that the researchers could calculate these from the data provided; this was necessary in order to establish the magnitude of the effect.
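The effect sizes mentioned here are typically standardised mean differences. A minimal sketch of that calculation, using entirely invented figures (the review's own data are not reproduced here), might look like this:

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Standardised mean difference (Cohen's d) using the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical literacy test scores: ICT group vs. no-ICT group.
d = cohens_d(m1=52.0, sd1=10.0, n1=30, m2=48.0, sd2=12.0, n2=30)
print(f"d = {d:.2f}")
```

This is why the reviewers required either a reported effect size or the raw means, standard deviations, and sample sizes from which one could be computed.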

Data extraction and quality appraisal were carried out on the included RCTs. Each was assessed for its methodological quality and the weight of evidence it provided.

Narrative synthesis of included trials assessed the effectiveness of different types of ICT interventions on a range of literacy outcomes, and of different types of ICT on specific literacy outcomes. Statistical synthesis of outcomes averaged results from the studies and weighted them, giving the greatest weight to those with the smallest standard errors (usually the largest studies).
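The weighting described here is standard inverse-variance (fixed-effect) pooling: each study's weight is the reciprocal of its squared standard error. A sketch with wholly hypothetical numbers:

```python
import math

# Hypothetical (effect size, standard error) pairs for five trials.
studies = [(0.30, 0.20), (0.10, 0.10), (-0.05, 0.15), (0.25, 0.30), (0.00, 0.08)]

# Inverse-variance weights: the smallest standard errors (usually the
# largest studies) receive the greatest weight.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled d = {pooled:.3f}, 95% CI [{low:.3f}, {high:.3f}]")
```

With these invented inputs the confidence interval straddles zero, which is the pattern behind a "no evidence of benefit or harm" finding.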

Researchers also assessed the likelihood of publication bias (the tendency for studies showing beneficial effects to be more likely to be published).
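The mechanism at stake can be illustrated numerically: if small trials with null or negative results go unpublished, the pooled estimate over the surviving trials drifts upward. A sketch using the same inverse-variance pooling and invented data:

```python
def pooled_effect(studies):
    """Fixed-effect pooled effect size, weighting each study by 1/SE^2."""
    weights = [1 / se**2 for _, se in studies]
    return sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)

# Invented (effect size, standard error) pairs for five trials.
all_trials = [(0.40, 0.25), (0.20, 0.12), (0.05, 0.10), (-0.10, 0.22), (-0.15, 0.30)]

# Suppose the null and negative trials never reach publication.
published = [(d, se) for d, se in all_trials if d > 0]

print(f"pooled over all trials:     {pooled_effect(all_trials):.3f}")
print(f"pooled over published only: {pooled_effect(published):.3f}")
```

The published-only estimate is noticeably larger, which is why reviewers inspect funnel plots of effect size against sample size, as reported below.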


Results

From a total of 2,319 potentially relevant reports identified, 42 RCTs were included in the effectiveness map; 30 were excluded on the basis of various criteria, leaving 12 RCTs in the in-depth review.

Plotting the effect sizes of the included trials against their sample sizes suggested some publication bias, with effect estimates skewed towards the positive.

Five different kinds of ICT intervention were identified within the 12 RCTs included - computer-assisted instruction; networked computer systems; word-processing packages; electronic texts; and speech synthesis systems. Three literacy outcomes were identified: reading (including reading comprehension and phonological awareness); writing; and spelling.

The synthesis carried out for the five different ICT interventions included 13 positive and seven negative comparisons from the 12 RCTs selected. Of these 20 comparisons, only three positive and one negative were statistically significant, suggesting that there is little evidence to support the widespread use of ICT in literacy learning in English.

A second synthesis involved meta-analysis for each of the three literacy outcome measures. There was no evidence of benefit or harm in relation to spelling and reading, and weak evidence of a positive effect for writing.

From this, the authors conclude that further investment in ICT and literacy should be halted until one large RCT shows that it is effective in improving literacy outcomes; that more RCTs are required to evaluate ICT and literacy learning across all age ranges; and that teachers should be aware that there is no evidence that non-ICT approaches to teaching literacy are inferior to those which use the technology.


Commentary

The review adopts an admirably rigorous, systematic and tightly focused approach to addressing the research question. Its main weakness is that the majority of the included studies date from the early 1990s. Since then there have been major developments in the hardware and software used in classrooms, in approaches to teaching with technology, and in the ICT familiarity and competence of users. Restricting the review to studies involving RCTs may have excluded more recent research which reflects the current ICT landscape more closely but employs other research methodologies.

The studies in the review sample are exclusively North American, which may be problematic, as the findings are not necessarily fully transferable to the UK curriculum and delivery context. The absence of UK studies may be due to the fact that the large-scale ICT impact studies commissioned by the Department for Education and Skills (DfES), such as ImpaCT2, adopted non-experimental or quasi-experimental approaches for reasons of practicality.

In addition, the ethos in the UK is that policy decisions on the use of ICT in education tend not to be made on the basis of evidence from this kind of experimental research, but rather are informed by findings from studies using a broad range of other methodological approaches (quasi-experimental techniques, qualitative approaches, etc.). Other types of evidence are also drawn on, such as school assessments made by the Office for Standards in Education (which recently reported that ICT is often used effectively to support pupils' writing development at primary level).
The reviewers conclude that further investment in ICT for literacy should be halted until there is more substantial evidence of impact on attainment, but it should be emphasised that the DfES does not invest directly in ICT to support literacy; rather, the focus of investment is to support teachers in the use of ICT for teaching literacy (as well as other curriculum subjects).

The ImpaCT2 study (not included in the review because it was not an RCT and therefore not able to establish causality) found that, where used effectively, ICT can increase attainment in English at Key Stage 2 by 0.16 of a National Curriculum level. Yet the reasons for encouraging use of ICT in the classroom do not relate exclusively to attainment. While the review is rightly focused on literacy attainment outcomes in relation to ICT, account does need to be taken of other positive outcomes - for example, increased learner motivation and the development of learners' ICT skills - which other research shows can result from ICT use in teaching and learning.

The writer is involved with central Government policy-making and has no connection with the Review Group. This 'perspective' is written in a personal capacity.
