Predictions, Mocks or Models? Learning from cancelled predictive analytics in public services

Carnegie UK

by Anna Grant, Senior Policy and Development Officer

Like many other forgotten tokens of youth alongside ticket stubs and postcards, I still have the unmistakable white envelope containing my A Level results tucked away in a desk drawer. On the rare occasion I happen upon it (though these occasions have become more frequent since working from home), I can vividly remember the feelings of anxiety, significance and hope its contents created. While the publication of these results often garners media headlines highlighting record grades or relative gender performance, they are generally out of the national news cycle within 48 hours, retaining significance for only a relatively small share of the population.

However, as with most things, 2020 was very different.

From its inception, Ofqual’s statistical model for determining A Level results came under intense scrutiny, challenged from technical, social and even moral perspectives. Few stories, even in these most ‘unprecedented’ of times, have united so many voices. There were outcries from students and the broader public, across mainstream media, through multiple legal challenges, from sector bodies within and outside education, and as political pressure from within and outside the governing party to better explain, or preferably retract, this ‘black box’ approach to decision making. Ultimately, this collective effort did enact change, leading to confirmation that total reliance on the technology would be scrapped and that other forms of assessment would be more transparently recognised.

But this outrage was not novel. Similar challenges had arisen two weeks earlier with the publication of Highers results, and saw the Scottish Government reverse its decision to use a predictive model as part of the process for assigning Highers, instead basing awards solely on teacher assessment where results had been downgraded.

While educational results have been the most high-profile case in the UK for some time, challenges around the use of data in the allocation of public services are becoming increasingly significant. We are seeing a rise in national and international interest in the use of predictive analytics, machine learning and automated systems. These algorithms can determine a range of outcomes across public services: from supporting decisions on immigration policy to deciding when to make a children’s services intervention. Better use of data can undeniably help these services run more efficiently and effectively, delivering a range of public benefits to individuals and communities. However, this increasing use of data also raises concerns about privacy, security, transparency and accountability, as well as the potential for discriminatory sorting, exclusion and exploitation, over-surveillance, the reinforcing of stereotypes and broader unintended consequences. The A Level results example has been particularly acute in highlighting these issues because it exemplifies many of our deepest concerns about predictive systems: chiefly, that they can opaquely and unjustly have the greatest negative impact on those in already disadvantaged or marginalised communities.

So what makes governments and institutions change their mind about the use of these technologies or even cancel them entirely?

Earlier this year we partnered with researchers from the Data Justice Lab, based at Cardiff University’s School of Journalism, Media and Culture (JOMEC), to examine this very question in a new project: Automating Public Services: Learning from Cancelled Systems.

Over the past six months, the project team has sought to identify, compile and analyse a range of case studies of cancelled or paused predictive analytics systems through desk-based scoping research, document analysis and interviews with key personnel. Examples have been selected from the UK and comparable international contexts, including New Zealand, the USA, Germany and the Netherlands. The research focuses primarily on three thematic areas: predictive policing, child welfare and fraud detection. Among other questions, it investigates:

  • What is the range of automated or predictive systems that have been proposed, piloted and cancelled?
  • Why have government agencies cancelled plans for, or the use of, different automated systems?
  • What rationales and decision-making processes are leading to cancellation?
  • What kind of individual and social processes lead to cancellation?
  • Can any comparative factors be identified across countries?

Investigating cancelled programmes provides a means to learn from those with direct experience of trialling predictive systems, and to gain greater insight into the contextual forces and rationales that may be influencing decisions to stop using these systems, be that public pressure, legal challenge, cost, political interest or any number of other compelling factors. More broadly, we intend this project to support the wider policy call for greater government transparency and accountability, particularly around the use of data and automated technologies.

The final full report will be published later this year and will present key themes that have emerged from the analysis, along with lessons learned that can be applied in a UK context. We will also be publishing an accompanying scoping document outlining all of the cancelled systems identified during the project, not all of which could be investigated in this research but which we encourage others to explore further.

So today, as thousands more young people receive confirmation of their GCSE results, in what is hoped to be a smoother, less anxious, more transparent process, we must reflect on what lessons this and future governments and service providers will take forward in the design, implementation and evaluation of automated systems.

Automating Public Services: Learning from Cancelled Systems is a partnership project between the Carnegie UK Trust and the Data Justice Lab. The project team includes Data Justice Lab co-director Joanna Redden and Lab researchers Jessica Brand, Ina Sander and Harry Warne. If you would like any further information on the project, please get in touch; future updates can be found on the project page.

Originally published at https://www.carnegieuktrust.org.uk on August 20, 2020.
