
Is there such a thing as a “mutant algorithm”? Of course not, but biased ones are everywhere

Algorithms have been a hot topic in the news recently, and not for a good reason. But should they really be taking the blame?

A level results day always brings tears aplenty, but this year the sobbing teenagers had more cause than most. Nearly 40% of grades were lowered from teachers’ predictions, and it immediately became clear that the downgrading hit state schools much harder than the private sector. Despite the immediate backlash, Boris Johnson continued to describe the grades as “robust” in his most sympathetic voice, safe in the knowledge that had he been collecting his results today, his Eton education would have bagged him a trifecta of A*s.

A government U-turn four days later found it remarkably easy to scrap the so-called robust system, returning to teacher assessments and dumping the consequences on the doorstep of already struggling universities. Eager to shift the blame, the Prime Minister then decided to label Ofqual’s chosen method of grading a “mutant algorithm”. This is misleading, blatant scapegoating. Unfortunately for Johnson, as much as he would have loved the algorithm to have evolved all by itself in a deliberate attempt to penalise poorer communities, that’s not quite how the science works.

So, what is an algorithm?

For Boris’s sake (it’s been a long time since those maths classes at Eton), let’s break it down to the basics in order to see where the true blame lies. According to the Cambridge dictionary, an algorithm is “a set of mathematical instructions or rules that, especially if given to a computer, will help to calculate an answer to a problem”.

Essentially, an algorithm is just like a recipe that uses data instead of eggs and flour. Algorithms are incredibly clever things that, when designed properly and effectively, can be very useful to society. Artificial Intelligence is built on algorithms, using machine learning to predict behaviour and to recognise speech and even faces. It does this by processing vast amounts of data, learning and optimising its operations to improve performance, and so developing ‘intelligence’ over time.
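As a toy illustration of the recipe idea (my own sketch, nothing to do with any real grading system), here it is written out in a few lines of Python: a fixed set of steps that turns input data into an answer.

# A toy "recipe": a fixed sequence of steps that turns data into an answer.
# Here the data is a list of exam marks and the answer is a pass/fail result.
def grade_from_marks(marks, pass_mark=50):
    average = sum(marks) / len(marks)                    # step 1: combine the data
    return "pass" if average >= pass_mark else "fail"    # step 2: apply a rule

print(grade_from_marks([62, 48, 55]))  # -> pass (the average is 55)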

Algorithms are everywhere

Few people realise just how much of our lives is built upon algorithms. To get through a day without interacting with a single algorithm, you couldn’t check the weather or use any form of transport, and phones and laptops would of course be completely out of bounds. If I were to write an article without the use of any algorithms at all, it would be a very boring read.

Output relies on input

The data an algorithm is fed is the root of the bias problem. In order to teach an algorithm to recognise a cat, you have to show it thousands and thousands of pictures of cats, until it can recognise one better than a human can. If you feed it biased data instead, i.e. you show it lots of pictures of dogs that you have labelled as cats, then the algorithm will recognise dogs as cats and you will have a biased algorithm. This is a very lighthearted example of a very serious problem.
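A minimal sketch of that idea in Python, using scikit-learn and entirely made-up feature values and labels (the ‘pointy ears’ and ‘long snout’ features are invented purely for illustration): if the training labels are wrong, the model faithfully learns the wrong rule.

# A classifier trained on mislabelled data learns the bias in the labels.
# The features and labels below are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Feature vectors: [has_pointy_ears, has_long_snout]
cats = [[1, 0]] * 4
dogs = [[0, 1]] * 4

X = cats + dogs
y = ["cat"] * 4 + ["cat"] * 4   # the dogs have been mislabelled as cats

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[0, 1]]))  # a dog -> predicted 'cat': the bias is baked in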

Algorithms are neither good nor bad – they depend on human context

The bad press that algorithms have been receiving lately is not their fault. The blame lies in the hands of the programmers, and of those who commission these algorithms without ever thinking through their full effects. I’m not saying that people deliberately set out to disadvantage certain subgroups of those affected by an algorithm. But that doesn’t mean they’re not guilty of ignorance, and of forgetting perspectives that don’t directly align with their own.

This isn’t to say that we shouldn’t continue to use such advantageous tools. Instead, we need to be aware of how they’re being used, and of what information is being used to teach them.

An obvious example of a biased algorithm is the recent Ofqual scandal

Ofqual’s grading algorithm is a surprisingly short string of characters (technically it is an equation). According to the Guardian, students’ grades, and with them their university places, were determined by just four factors. For the life-determining decision of where a student goes to university, that immediately seems like far too little data.

The algorithm includes the historical grade distribution at the school over the last three years (2017-2019), meaning that your grades are decided by the academic achievements of former pupils you may never have met. It also uses the predicted grade distribution of the class based on their GCSE results, and how well the school has predicted grades in the past. The final input, a number between 0 and 1, describes how much historical data is available for the pupils in a class: if every GCSE result is available it is 1, and if none can be found at all it is 0.

According to the Guardian, the equation is as follows: P_kj = (1 − r_j)C_kj + r_j(C_kj + q_kj − p_kj).

The formula has two parts: the first term says that if no GCSE results are available, the school’s previous A-level results should be relied upon. If GCSE results can be found, the second term instructs you to add the predictions from the GCSE grades to the historical A-level results, and then to adjust them based on how good the school has previously been at predicting results.
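Written out as code (my own sketch based only on the Guardian’s description above; the variable names follow that notation, the numbers are hypothetical, and the real Ofqual process involved further steps), the calculation looks something like this:

# A sketch of the grading formula as described above (not Ofqual's actual code):
#   P_kj = (1 - r_j) * C_kj + r_j * (C_kj + q_kj - p_kj)
def predicted_distribution(C_kj, q_kj, p_kj, r_j):
    # C_kj: historical A-level grade distribution at the school (2017-2019)
    # q_kj: predicted grade distribution based on the class's GCSE results
    # p_kj: how the school's GCSE-based predictions have compared with results in the past
    # r_j:  how much GCSE data is available for the class (0 = none, 1 = all)
    if r_j == 0:
        return C_kj  # no GCSE data: rely entirely on the school's past results
    return (1 - r_j) * C_kj + r_j * (C_kj + q_kj - p_kj)

# Hypothetical numbers: 80% of the class have GCSE data, and the GCSE-based
# prediction (0.35) sits below the school's past predictions (0.40), so the
# share of top grades is nudged below the historical 0.30.
print(predicted_distribution(C_kj=0.30, q_kj=0.35, p_kj=0.40, r_j=0.8))  # -> roughly 0.26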

The issues are obvious. Historical grades only stretch back three years, and don’t account for a change in teacher. Basing A-level grades on GCSE grades does not reflect current achievement, and the grouping of pupils into classes leaves no room for individual success. Finally, the decision to exclude classes of fewer than fifteen pupils and allow teacher grading instead benefits private schools, as they are much likelier to have smaller classes than state schools (independent schools enter 9.4 students per A-level subject on average, while sixth-form colleges enter 33).

All of this led to private schools in England seeing a greater increase in the proportion of students getting top A-level grades than any other type of school.

Algorithms do help people

Artificial Intelligence (AI) has recently been shown to outperform the average radiologist at diagnosing breast cancer from mammograms. The machine learning algorithm, which was trained on X-ray images from almost 29,000 women, is superior to a single radiologist and as effective as two doctors working together. Unlike humans, algorithms don’t tire, and as there is currently a shortage of more than 1,000 radiologists across the UK, algorithms like this one could be key to relieving the burden on the NHS. There are many more ways that AI is currently being used in medicine, as it is ideal for spotting patterns that could lead to a diagnosis.

There are two reasons that algorithms are usually brought in. The first is budget cuts: a lack of funding results in people being replaced by algorithms, which are much cheaper to run over a long period. The second is the widespread belief that they are more objective than humans. But until humans can avoid producing biased data themselves, how can the algorithms trained on that data be expected to be unbiased? Judging one individual based on what other people have historically done doesn’t work when the prejudices are entrenched in the system itself.

It is unsurprising that Boris has decided to blame the algorithm instead of addressing the root of the problem. Tom Haines, lecturer in machine learning at the University of Bath, said: “We need to realise that these algorithms are man-made artefacts, and if we don’t look for problems there will be consequences.” As much as Boris can cross his fingers and wish otherwise, mutant algorithms don’t exist. But biased ones certainly do.
