What do we want to know?
Meta-syntheses (i.e. reviews of existing research reviews) have reported positive impacts of feedback on student achievement at different stages of education and have been influential in establishing feedback as an effective strategy to support student learning. However, these syntheses combine studies of a variety of feedback approaches (e.g. written comments and oral feedback), include studies where feedback is only one of several intervention components, and have differing methodological limitations, such as the inclusion of different types of study design.
More precise estimates of the impact of different types of feedback in different contexts for different learners aged between 5 and 18 will help schools and teachers to make more informed choices about appropriate feedback practices.
Research Question
The research question for the in-depth review was:
What is the difference in attainment of learners, aged 5–18, receiving a feedback-only intervention/approach from a teacher/researcher/digital/automated source in comparison to learners receiving ‘the usual treatment’ (with regard to feedback practices in the setting)/no feedback or an alternative approach?
Who wants to know?
This systematic review was conducted at the request of the Education Endowment Foundation (EEF) to provide evidence that can be used to support the development of guidance for teachers and schools about feedback practices. Findings were further interpreted by a panel of expert practitioners and academics to produce the EEF’s Teacher feedback to improve pupil learning guidance report.
What did we find?
We identified 171 studies in which the intervention consisted solely of feedback provided in school settings by a teacher, a researcher, or through technology. After applying the final selection criteria, 43 papers reporting 51 studies published in or after the year 2000 were included. The 51 studies involved approximately 14,400 students. Forty studies were experiments with random allocation to groups; 11 used prospective quantitative experimental designs. The overall risk of bias was assessed as low to moderate in 44 studies.
The interventions took place in curriculum subjects including literacy, mathematics, science, social studies and languages, and in tests of other cognitive outcomes. Feedback was provided by teachers, researchers, or digital/automated means. Forty-eight studies reported feedback given to individual students and four reported feedback given to a group or class. Feedback was delivered in spoken verbal, non-verbal, written verbal, and written non-verbal forms. Different studies investigated feedback given during the task, immediately after the task, or up to one week after the task (delayed feedback). Most of the feedback interventions gave the learner feedback about both the outcome of their work and the process/strategy; some provided feedback on outcome only, and two provided feedback about the task/strategy only.
On the main research question, the pooled estimate from the synthesis of all studies with a low or moderate risk of bias indicated that students who received feedback performed better than students who did not receive feedback or who experienced usual practice (g = 0.17, 95% CI 0.09 to 0.25). However, there is statistically significant heterogeneity between these studies (I² = 44%; test for heterogeneity: Q(df = 37) = 65.92, p = 0.002), which suggests that this pooled estimate may not be a useful indicator of the general impact of feedback on attainment when compared to no feedback or usual practice.
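For readers wishing to connect the reported statistics, I² is a standard transformation of Cochran's Q and its degrees of freedom (the Higgins and Thompson formula), and the figures above are mutually consistent:

```latex
I^2 = \max\!\left(0,\; \frac{Q - df}{Q}\right) \times 100\%
    = \frac{65.92 - 37}{65.92} \times 100\% \approx 44\%
```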
The results of the subgroup analyses do, in some cases, suggest that there may be systematic variation in the impact of single-component feedback by age of students, curriculum subject, and mode and type of feedback.
What are the conclusions?
The results of the review may be considered broadly consistent with claims made on the basis of previous syntheses and meta-syntheses, suggesting that feedback interventions, on average, have a positive impact on attainment when compared to no feedback or usual practice. The limitations in the study reports and the comparatively small number of studies within each subgroup synthesis meant that the review could not provide much more certainty about the factors that drive variation in the impact of single-component feedback interventions across different contexts and different students. More research is needed to establish what moderates the impact of feedback.
However, the findings further support the conclusion of previous studies that feedback, on average, has a positive impact on attainment; moreover, this conclusion is based on a more precise and robust analysis than those of previous syntheses. This suggests that feedback may have a role to play in raising attainment alongside other effective interventions.
How did we get these results?
A systematic review was undertaken in two stages. First, a systematic map identified and characterised a subset of studies that investigated the attainment impacts of feedback. Second, an in-depth review comprising a meta-analysis was performed to answer the review question about the impact of interventions that consisted of feedback only and to explore the characteristics that may influence the impact of feedback.
We used the Microsoft Academic Graph (MAG) dataset hosted in EPPI-Reviewer to conduct a semantic network analysis to identify records related to a set of pre-identified study references. The MAG search identified 23,725 potential studies for screening.
Studies were selected using a set of pre-specified selection criteria. Semi-automated priority screening in the bespoke systematic review software EPPI-Reviewer was used to screen studies on title and abstract. By the time the stopping rules were applied, 3,028 studies had been screened on title and abstract, and 745 were identified for full-text screening. Reviewers first carried out a moderation exercise, all screening the same selection of titles to develop consistency of screening. Thereafter, single-reviewer screening was used, with referral to a second reviewer in cases of uncertainty.
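Priority screening ranks unscreened records by predicted relevance, retraining a classifier as reviewer decisions accumulate, so relevant records surface early and a stopping rule can end screening before the whole pool is read. The sketch below illustrates this general active-learning idea only; it is not EPPI-Reviewer's actual implementation, and the record pool, seed labels, decision function, and stopping rule are hypothetical placeholders.

```python
# Minimal sketch of semi-automated priority screening as an active-learning
# loop. Generic illustration only, NOT EPPI-Reviewer's algorithm; the data
# and the human_decision oracle are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

pool = [  # stand-in for the 23,725 title+abstract records
    "effects of written feedback on mathematics attainment",
    "oral feedback and reading comprehension in primary school",
    "hospital staffing and patient outcomes",
    "teacher praise and pupil motivation in science classes",
    "supply chain logistics optimisation",
    "digital feedback systems for secondary school writing",
]
X = TfidfVectorizer().fit_transform(pool)

# Labels from an initial hand-screened batch (1 = include, 0 = exclude).
screened = {0: 1, 2: 0, 4: 0}

def human_decision(i):
    # Placeholder for a reviewer reading record i; here we pretend any
    # record mentioning "feedback" would be included.
    return int("feedback" in pool[i])

# Screen the highest-ranked record, retrain, repeat. A real review stops
# early once a pre-agreed rule fires (e.g. N consecutive excludes).
while len(screened) < len(pool):
    idx = sorted(screened)
    clf = LogisticRegression().fit(X[idx], [screened[i] for i in idx])
    rest = [i for i in range(len(pool)) if i not in screened]
    best = max(rest, key=lambda i: clf.predict_proba(X[i])[0, 1])
    screened[best] = human_decision(best)

print(screened)
```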
Studies were coded using a bespoke data extraction tool developed by the EEF Database Project. Study quality was assessed using a bespoke risk of bias assessment adapted from the ROBINS-I tool. The review team undertook a moderation exercise coding the same set of studies to develop consistency. Thereafter, single reviewer coding was used, based on the full text with referral for a second opinion in cases of uncertainty.
Data from the studies were used to calculate standardised effect sizes (standardised mean difference, Hedges' g). Effect sizes from each study were combined to produce a pooled estimate of effect using random-effects meta-analysis. Statistical heterogeneity tests were carried out for each synthesis. Sensitivity analysis was carried out for assessed study quality. Subgroup analyses were completed using meta-analysis to explore outcomes according to the different characteristics of feedback, context and subjects.
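As a concrete illustration of this pipeline, the sketch below computes Hedges' g (with its small-sample correction) for three hypothetical studies and pools them using a DerSimonian-Laird random-effects model. DerSimonian-Laird is a common default, but the review does not name its estimator, and all numbers here are invented for illustration.

```python
# Effect-size pipeline sketch: Hedges' g per study, Cochran's Q and I^2,
# then a DerSimonian-Laird random-effects pooled estimate.
# Illustrative numbers only, not data from the review.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardised mean difference with Hedges' small-sample correction."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # correction factor
    g = j * d
    v = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return g, v

studies = [hedges_g(52.0, 10.0, 60, 50.0, 10.0, 60),
           hedges_g(31.0, 8.0, 45, 29.5, 8.5, 45),
           hedges_g(75.0, 12.0, 80, 74.0, 11.0, 80)]
g = [s[0] for s in studies]
v = [s[1] for s in studies]
w = [1 / vi for vi in v]              # fixed-effect (inverse-variance) weights

# Cochran's Q and the I^2 heterogeneity statistic.
g_fe = sum(wi * gi for wi, gi in zip(w, g)) / sum(w)
Q = sum(wi * (gi - g_fe)**2 for wi, gi in zip(w, g))
df = len(g) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# DerSimonian-Laird between-study variance, then random-effects pooling.
C = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)
w_re = [1 / (vi + tau2) for vi in v]
g_re = sum(wi * gi for wi, gi in zip(w_re, g)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
print(f"g = {g_re:.2f}, 95% CI {g_re - 1.96 * se:.2f} to "
      f"{g_re + 1.96 * se:.2f}, I2 = {I2:.0f}%")
```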
This report should be cited as:
Newman, M., Kwan, I., Schucan Bird, K. and Hoo, H. T. (2021) The impact of feedback on student attainment: a systematic review. London: Education Endowment Foundation.