There is an interesting debate now widening its scope on the ethics around AI. I reproduce below a survey conducted by the RSA in which many respondents expressed hostility to the use of AI for decision-making - except in the two areas in which most people have actually experienced it, financial services and ad content on social media. For every 'computer says no' there are at least as many 'computer says yes' decisions. And AI is colourblind; it doesn't care about your accent, your creed, your beauty or the size of your, erm, intangible assets. You can't game an AI system, or rely on a human co-religionist or wearer of the same club tie to distort the outcomes. I've never heard anyone complain they were refused an overdraft because they were black or gay - AI has credentials of utter impartiality that few human agents have.
There are, I think, two reasons for opposing the use of AI. The first is that it doesn't make decisions of the same quality as humans do, that it's somehow second-best. This is the easiest criticism to answer. In most cases AI is deployed because it makes better decisions than people - and the gap is getting wider each day. Simply, AI should not be deployed unless it's demonstrably better than human decision-making.
The second and more apposite reason to oppose AI is that it is utterly impartial. The sharp-elbowed middle classes have no advantage over the modest or inarticulate in securing better access to services; neither will favoured ethnic, faith or sexual-preference groups go to the front of the queue. The old school tie will cut no ice; a golf-partner MP or Chief Constable dinner guest will confer no special treatment. Decision-making by AI, in other words, offers the potential for the ultimate meritocracy, making decisions without fear or favour on strict clinical, equitable or judicial grounds, unmoved by all those factors economists class as 'taste' discrimination. But 'potential' is the key word.
The reason we are having a debate right now is that we need to set the rules, and to set them in law, as to how AI makes decisions. Healthcare AI, for example, must make decisions on strict clinical grounds. Obesity rates among ethnic minority women are substantially higher than among either ethnic minority men or the general population. If treatments or surgery are withheld from the obese on clinical grounds, black women will be disproportionately affected. Will an ultra-liberal NHS stand for this, or will the AI be programmed to refuse surgery to the obese unless the patient is a black woman? That's why we need a legal framework. And a debate.
I guess the BBC and the public sector will fight tooth and claw to resist a hiring-and-promotion AI system based only on merit - no more internships open only to Somali transsexuals, no more preference for Korean pederasts who are 'under-represented' as leisure-centre instructors.