The need for policy to regulate the development and deployment of AI applications is regularly discussed, and there is ongoing surveillance work that describes and catalogues AI policies and strategies. However, despite the need for continued scrutiny of the ethical approaches underpinning the creation of AI tools, little cohesion or shared understanding exists: the result is a multitude of tools from many sources and disciplines but no gold standard. Indeed, it has been suggested that three core principles can aid understanding of the ethical implications of AI tools: impact (to avoid harm and benefit society), justice (to promote fairness and equity) and autonomy (to allow access to the use and modification of AI).
Thus, while there is much excitement at the potential for AI to benefit many areas of decision-making, such tools are frequently deployed in ways that lack transparency despite documented risks that they may increase systemic inequities. There is an urgent need to understand more about when AI is (or can be) used for decision-making, what the potential benefits and harms are, and how we can equip people with the tools and knowledge they need to understand these issues more effectively.
Our aim is to understand the nature and extent of the literature and frameworks that support equity in areas where AI tools exist, and how those frameworks are applied to mitigate bias.
Below we present an 'indicative map' of research identified through a systematic search and then automatically classified using GPT-4. Its content is for information only and should not be taken as a representative reflection of the current distribution of research activity.
PLEASE NOTE THAT THIS MAP IS IN DRAFT FORM AND WILL BE UPDATED SOON.