There is an interesting debate now widening its scope on the ethics around AI. I reproduce below a survey conducted by the RSA in which many respondents expressed hostility to the use of AI for decision-making - except for the two areas in which most people have experienced it, financial services and ad content on social media. For every 'computer says no' there are at least as many 'computer says yes' decisions. And AI is colourblind; it doesn't care about your accent, your creed, your beauty or the size of your, erm, intangible assets. You can't game an AI system, or rely on a human co-religionist or wearer of the same club tie to distort the outcomes. I've never heard anyone complain they were refused an overdraft because they were black or gay - AI has credentials of utter impartiality that few human agents have.
There are, I think, two reasons for opposing the use of AI. The first is that it doesn't make decisions of the quality that humans do, that it's somehow second-best. This is the easiest criticism to answer. In most cases AI is deployed because it makes better decisions than people - and the gap is getting wider each day. Simply, AI should not be deployed unless it's demonstrably better than human decision-making.
The second and most apposite reason to oppose AI is that it is utterly impartial. The sharp-elbowed middle classes have no advantage over the modest or inarticulate in securing better access to services; neither will favoured ethnic, faith or sexual-preference groups go to the front of the queue. The old school tie will cut no ice; a golf-partner MP or Chief Constable dinner guest will endow no special treatment. Decision-making by AI, in other words, offers the potential for the ultimate meritocracy, making decisions without fear or favour on strict clinical, equitable or judicial grounds, unmoved by all those factors economists class as 'taste' discrimination. But 'potential' is the key word.
The reason we are having a debate right now is that we need to set the rules, and to set them in law, as to how AI makes decisions. Healthcare AI, for example, must make decisions on strict clinical grounds. Ethnic minority women in the population are substantially more obese than either ethnic minority men (.pdf) or the general population. If treatments or surgery are withheld from the obese on clinical grounds, black women will be disproportionately affected. Will an ultra-liberal NHS stand for this, or will the AI be programmed to refuse surgery to the obese unless it's a black woman? That's why we need a legal framework. And a debate.
I guess the BBC and the public sector will fight tooth and claw to resist a hiring-and-promotion AI system based only on merit - no more internships open only to Somali transsexuals, no more preference for Korean pederasts who are 'under-represented' as leisure-centre instructors.
9 comments:
Presumably you are describing automated systems that are programmed by a human being to make decisions more quickly and without certain prejudices.
The thing is that you can program a computer to do anything; the reason current systems are the way they are is that they have been programmed that way.
They say that if you put a bunch of monkeys into a room and gave them each a typewriter, they would eventually come up with the sort of thing that Shakespeare wrote... Who "they" are I don't know, because nothing intelligible has ever come from such a scenario, just as nothing that makes any sense has ever come from the Houses of Parliament, and those chambers host the crème de la crème...
... Apparently.
I suppose that we put up with them, because they are always wrong, rather than because they are always right.
We're already ruled by artificial (i.e. not as good as real) intelligence, Theresa M-AI. Today's programme was brought to you by the numbers 22 and 14 and 5 and the letters A, B and C. Both remainers and leavers are waking this morning with that Johnny Cash feeling, with a burning ring of fire... No guys, it's not trapped wind, it's semen from the arse fucking you got last night.
I offer comments on three parts of the issue.
(i) The title is "No one complains they were a refused an overdraft for being gay", which (in my view) contains a typographic error: syntax, not spelling. See (iii) below.
(ii) Raedwald writes: "[AI] offers the potential for the ultimate meritocracy". An important issue with so-called AI (as at least hinted at by right-writes in his/her "programmed that way") is that such merit must be decided by an objective function which embodies all that is known WRT merit in the appropriate circumstances. Seemingly now (or certainly if we had Artificial General Intelligence (AGI)) such an objective function would surely favour artificial rather than human intelligence. Eventually the question would be asked: why keep on with humans? Are they like beef cattle: good food? [As in biomass electricity generation!] Are they like sheep and alpaca: a source of higher-class clothing material? Are they like cats and dogs: pets? Are they like pandas and tigers: the concern of environmentalists, preserved in the not quite wild and in zoos?
We need to practise defining objective functions for merit. That would/should be for our own use as well as use by machines. But thought leaders and politicians do not yet understand that sort of thing.
(iii) AI (as in Machine Learning based on Bayesian statistics and/or artificial neural networks) is not really intelligence. It is actually yet another merely useful tool: better than its predecessor tool (as we know by demonstration); worse than its future successor (as we strongly suspect from experience and induction).
--
One of the things I find interesting in the blogosphere is those arguments about what is best in some societal or political circumstances (a good one was the AV Referendum - Alternative Vote). Being brought up on optimal decision theory, I have enquired of others: "What is your objective function?" [Sometimes also utility function.] The silence is deafening - of incomprehension. Nearly all people do not realise that one needs a single objective function. If one has a multiplicity, they must be combined before deciding. And multiplicative weighting is not the only (and is usually the wrong) approach.
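The point about combining a multiplicity of criteria into a single objective function can be sketched in a few lines. This is purely illustrative - the candidates, criteria and weights below are invented for the example, and an additive weighted sum is only one common way to scalarise several objectives (the comment's point stands: the choice of combination rule matters as much as the weights):

```python
# Illustrative sketch: scalarising several decision criteria into one
# objective function before ranking. All names and numbers are invented.

def combined_score(scores, weights):
    """Combine per-criterion scores into a single number via a weighted sum.

    A weighted sum is just one scalarisation rule; others (e.g. lexicographic
    ordering, or taking the worst-case criterion) embody different trade-offs.
    """
    assert len(scores) == len(weights)
    return sum(w * s for w, s in zip(weights, scores))

# Hypothetical candidates scored against three criteria,
# e.g. clinical need, expected benefit, cost-effectiveness.
candidates = {
    "A": [0.9, 0.4, 0.7],
    "B": [0.6, 0.8, 0.5],
}
weights = [0.5, 0.3, 0.2]  # must be chosen and justified before deciding

ranked = sorted(candidates,
                key=lambda c: combined_score(candidates[c], weights),
                reverse=True)
```

Once the criteria are collapsed into one number, the decision itself is trivial; all the contestable value judgements live in the choice of weights and of the combination rule.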
There is no reason why such concepts are beyond most people's grasp. The subject could easily be taught to everyone in school, as part of the compulsory pre-16 curriculum.
Until our 'leaders' in thought and other things all understand this and associated issues, how can they lead us in what to do about AI?
Back to (i), Raedwald probably checks his posts with a spell-checker: a tool to increase the merit of his writing. His error in the title here would most likely be detected by a grammar-checker: a more sophisticated tool, slightly better than a mere spell-checker (though current tools of that sort tend to have a high false alarm rate).
Best regards
"No one complains they were a refused an overdraft for being gay "
I sincerely doubt that.
Whether or not their complaint has the slightest merit is another matter altogether.
It does not seem unfair to insist on higher healthcare premiums for those who pose a higher health risk or who make undue demands on medical care.
20-year olds pay more for their car insurance, no-brainer.
Yet reckless and self-indulgent non-exercising over-eaters pay the same - or less - than careful dieters who curb the cals and exercise whether they feel like it or not, and are allowed to make limitlessly expensive claims on the NHS.
So logically, either we should nationalize car insurance? Or instead we should....
Maybe AI can improve the analytical tools of medical aetiology and the assessment of the degree of culpability (self-infliction).
AI would also be a lot sterner than the current crop of medics.
Just who is Al? He seems to have a lot of influence in our lives. Is he a friend of Soros? Does he have a surname?
He's as enigmatic as all those London girls working in 'It'. Never got that concept either......
Dave G @ 20:25 --- presumably you're being ironic about Artificial Intelligence (aka "HAL" in 2001, methinks!).
Couldn't agree with you more, though, about the difficulty of knowing who exactly pretends to be in control of the weapon . . .
We are seeing a ridiculous amount of overblown nonsense being spouted in the press with respect to AI, much with a good measure of implied trans-humanist bullshit sprinkled on top.
As hinted at by right-writes and Nigel Sedgewick - the term AI is nebulous, and any "intelligence" is usually nothing of the sort. Lots of brute force and backward-looking "big data" pattern matching, as well as algorithms betraying the implicit prejudices of their creators. That is of course before we even get to what a definition of intelligence and consciousness is. Jaron Lanier has a good point when he says we often make ourselves "stupid" before machines as we want them to do better than us.