
AI and equity: what are the benefits and harms associated with this new generation of decision-making tools?

The need for policy to regulate the development and deployment of AI applications is regularly discussed, and there is ongoing surveillance work that describes and catalogues AI policies and strategies. However, despite the need for continued scrutiny of the ethical approaches taken in creating AI tools, there is little cohesion or shared understanding: a multitude of tools exists across many sources and disciplines, but no gold standard. It has been suggested that three core principles can aid understanding of the ethical implications of AI tools: impact (to avoid harm and benefit society), justice (to promote fairness and equity) and autonomy (to allow access to the use and modification of AI).

Thus, while there is much excitement at the potential for AI to benefit many areas of decision-making, such tools are frequently deployed in ways that lack transparency despite documented risks that they may increase systemic inequities. There is an urgent need to understand more about when AI is (or can be) used for decision-making, what the potential benefits and harms are, and how we can equip people with the tools and knowledge they need to understand these issues more effectively.

A partnership of the Campbell Collaboration and the EPPI Centre has mapped the evidence on AI and equity in consultation with the American Institutes for Research (AIR). The online map can be browsed by clicking the image below.

The map was constructed by searching 19 databases and downloading the results into EPPI Reviewer. 34,541 records were identified, of which 8,485 were found to be duplicates. The remainder were screened automatically by GPT-4, with human-validated sensitivity of 95% and specificity of 100%. 6,628 records remained, which were then ‘mapped’ using a pre-defined coding tool, again by GPT-4. Human validation of the mapping found that 86% of records contained no errors and a further 12% contained only minor errors, so automation was deemed sufficiently reliable for the map to be placed online.
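To illustrate how the human-validation figures above are derived, the sketch below computes sensitivity (the share of human-included records the model also included) and specificity (the share of human-excluded records the model also excluded) from a validation sample. The function name and the toy data are hypothetical, not part of the project's actual pipeline:

```python
# Hypothetical sketch: scoring automated screening decisions against a
# human-validated sample. True = "include the record", False = "exclude".
def screening_metrics(human_labels, model_labels):
    pairs = list(zip(human_labels, model_labels))
    tp = sum(1 for h, m in pairs if h and m)          # both include
    fn = sum(1 for h, m in pairs if h and not m)      # model missed an include
    tn = sum(1 for h, m in pairs if not h and not m)  # both exclude
    fp = sum(1 for h, m in pairs if not h and m)      # model over-included
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Toy validation sample: of 20 records humans included, the model caught 19;
# all 10 human-excluded records were also excluded by the model.
human = [True] * 20 + [False] * 10
model = [True] * 19 + [False] * 1 + [False] * 10
sens, spec = screening_metrics(human, model)
print(sens, spec)  # 0.95 1.0
```

High sensitivity matters most in screening, since a record wrongly excluded by automation is lost from the map, whereas over-inclusion can still be corrected at the coding stage.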

Two versions of the map are available: first, the entire map; and second, a more granular version of a subset that better visualises the ‘smaller’ topics. (When the entire map is viewed, there is little variation in appearance between cells, because a few areas contain so much content.) The more granular map contains fewer records, having removed the categories computer science, mitigating bias (topic domain) and the general equity domain of ‘bias’ without further information.

Full map Filtered map