AI Ethics guide foresaw “mutant” A Level algorithm
The CIPR and CPRS have published a best practice guide to the ethical application of artificial intelligence. It’s important work.
Imagine an algorithm that marked down the most deprived students and improved the grades of students attending fee-paying private schools.
You don’t have to imagine, of course. This is exactly what an algorithm devised by Ofqual, the UK exams regulator, did to the class of A Level students of 2020.
Students were unable to sit their exams because of the COVID-19 lockdown.
280,000 children had their grades lowered from their teachers' estimates. Top performing students in disadvantaged areas of England and Wales were hit hardest.
A U-turn a week after the exam results were published led to results being based on teacher assessment.
Prime Minister Boris Johnson blamed the situation on Ofqual’s “mutant algorithm.”
A new artificial intelligence (AI) ethics guide published by the CIPR and CPRS foresaw this situation. The two organisations have been working on the guide since January.
The lead researchers and authors are Professor Anne Gregory for the CIPR in the UK and Jean Valin for the CPRS in Canada.
The AI ethics guide sets out a process based on the CIPR's ethical decision-making tree for testing algorithms and other applications of AI. The Ofqual algorithm fails the test on several grounds, including diversity and transparency.
“The guidelines are timely. We are cognisant of living in transformational times and the rise of AI is just one of those factors that is exacerbating the experiential impact on communicators’ working lives,” said Professor Ralph Tench, Director of Research, Leeds Business School.
The 18-page document advocates for public relations teams to be part of the teams building algorithms. It is designed to support communicators in their own work and in their role as management advisors.
“Understanding ethics is hard enough, understanding the potential pitfalls and ethical challenges of AI makes it even harder. We wanted to do two things: first, take public relations professionals through a decision-making framework that will educate them on AI itself and the bigger issues it generates,” said Anne Gregory, Professor of Corporate Communication, University of Huddersfield.
The guide outlines key principles for ethical decision-making and provides practical advice on using the CIPR’s ethical decision-making tree and the Open Data Institute’s data ethics canvas, illustrated with real-life examples.
New laws or regulations are unlikely to keep pace with modern technologies, so knowledgeable PR professionals are needed to ensure that algorithms don’t discriminate or harm individuals.
“The ethics guide offers answers to questions that few other ethical guides in PR even care (or dare) to ask. They go far beyond general protestations and instead identify dilemmas, double binds, as well as ways for practitioners to resolve them,” said Professor Gregor Halff, Dean of Copenhagen Business School.
The ethics guide has been written by Professor Anne Gregory and Jean Valin with input from members of the AI panel including myself, Kerry Sheehan, Andrew Smith and Emma Thwaites.