Uncovering biases, really?
I have used the above screenshot countless times in talks and in teaching. If I pitch it well on the day, I get quite a few giggles.
This screenshot was taken from the website of the MIT Technology Review in 2016. The page features a research article that purportedly shows that machine representations of meaning (‘vectors’ or ‘embeddings’) are sexist, with strong tendencies to associate women with certain conventional roles. The teaser enthusiastically advertises the research with the line ‘As neural networks tease apart the structure of language, they are finding a hidden gender bias that nobody knew was there’ (my emphasis).
As much respect as I have for this work (and I’ll come back to that once I have ranted enough), I’m afraid to say it isn’t actually the first to make this important discovery. My colleagues have been scooped. And not by another research group a few months earlier, but by thousands of people for thousands of years.
Anybody who’s ever been discriminated against – and that’s quite a lot of people over the years – knows that they’re being talked about in very specific ways, in ways that are not particularly nice. But of course when those people talk about their experiences, if they do, they don’t usually say ‘Gosh, there’s a lot of data against me’, or ‘That use of the woman embedding really upset me today’. So using words like ‘data’ and ‘vector’ or ‘embedding’ makes us feel like we have discovered something.
The real problem is that AI representations of meaning are built from naturally-occurring language data. They just learn what’s in the data. So if the data is sexist, racist, ableist, or if the data thinks kittens are really cute, then the representations will similarly be prejudiced and kitten-loving. It doesn’t mean this effect shouldn’t be studied: it can be useful to look at vector representations to verify, understand, and visualise prejudicial trends in specific types of discourse (see e.g. one of our attempts here). But we are never uncovering anything. Real people have said it before. The humanities have said it before. My 6-year-old neighbour knows it all.
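To make ‘looking at vector representations’ a bit more concrete, here is a minimal sketch of one way such trends are probed: by comparing the cosine similarity of word vectors. The vectors below are tiny made-up toys, not real embeddings; in practice you would load pretrained vectors (word2vec, GloVe, etc.), and the associations you measure would reflect whatever discourse they were trained on.

```python
# Toy illustration: probing word associations with cosine similarity.
# These vectors are invented for the example; real embeddings would be
# loaded from a pretrained model instead.
import numpy as np

toy_vectors = {
    "woman":     np.array([0.9, 0.1, 0.3]),
    "man":       np.array([0.1, 0.9, 0.3]),
    "homemaker": np.array([0.8, 0.2, 0.4]),
    "engineer":  np.array([0.2, 0.8, 0.5]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means 'pointing the same way', 0 means unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

for word in ["homemaker", "engineer"]:
    print(word,
          "~ woman:", round(cosine(toy_vectors[word], toy_vectors["woman"]), 3),
          "~ man:",   round(cosine(toy_vectors[word], toy_vectors["man"]), 3))
```

With real embeddings, asymmetries of exactly this kind are what get reported as ‘hidden bias’ – which is to say, the vectors faithfully echo the text they were trained on.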
Now, this kind of ‘discovery’ keeps popping up everywhere and is especially loved by social media and pop science. One recent example is Google’s sentiment analyser revealing its homophobic and anti-semitic tendencies:
Another AI bias issue uncovered this week - this time it's Google's sentiment analyzer in Cloud Natural Language API https://t.co/9H6qRndTbZ pic.twitter.com/nSmEsu4ZAA
— Kate Crawford (@katecrawford) October 28, 2017
I am absolutely sure that every time such an article gets posted somewhere, people who are actual subjects of discrimination go to their special head-banging wall and pop a few additional cracks in the already crumbling plaster. Meanwhile, we AI researchers can feel really good about ourselves for acknowledging the deficiencies of our technology and saying we’re ‘working on fixing it’.
So okay, there is perhaps a slight misuse of the concept of ‘news’ there, and we knew it all along, and we shouldn’t be surprised, but it’s still cool that people are working on those biased vectors. We don’t want biased AIs, right? Right.
The trouble is, the cure follows the same path as the discovery. Some researchers somewhere, who somehow hadn’t noticed before that the world was a jungle and have now realised it by looking at a string of numbers on a screen, will think that if the vector is biased, the vector should be fixed. What does that mean exactly? Well, as we’ve seen, a vector is a representation of the data. The obvious way to change the vector is to change the data… but changing the data, which real people have produced out there, would basically mean changing the world. And for sure, that sounds rather complicated. So what to do instead? Forget about the data and forget about the world (again). We’ll change the vector!
What does it mean to change the vector? Let’s use a metaphor. Suppose we’ve drawn a map of the night sky, with the positions of all the stars and planets we could observe. Let’s call each star or planet a vector on the map. Now, we don’t like the position of one particular star. Gosh, it looks like it might die any minute, and it’s not so terribly far from Earth. So what could we do? Well, we’ll move the star on the map. We’ll move it really far away from our planet so that the drawing looks right, and then we can relax. You relaxed? I’m relaxed…
This is what changing a vector means.
Before you think it is completely absurd, let me qualify what I’ve just said. It actually makes a lot of sense to try and fix AI meaning representations. Why? Because they are latent in everything we do with technology. And if they’re biased because society is biased, they will just reinforce existing prejudices. The article referenced by the MIT Technology Review gives an excellent illustration of what this means for Web search engines: the authors show that if the vector for ‘computer scientist’ is biased towards the male gender, female computer scientists will be returned at a lower rank in search results, reinforcing the idea that the prototypical computer scientist is male. So actually, changing the vectors in the technologies we use every day might indeed have an effect on the world.
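To see why a biased vector can shift rankings, here is a purely illustrative toy setup (not the authors’ actual experiment): candidate pages are ordered by cosine similarity to the query’s embedding, so any gender lean baked into the ‘computer scientist’ vector is inherited directly by the ranking. The candidate names and the ‘gender dimension’ are of course invented for the example.

```python
# Hypothetical ranking example: a biased query vector propagates its bias
# into the ordering of results. All vectors and names are made up.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings: the last dimension stands in for a 'gender' direction.
query = np.array([0.9, 0.8, 0.6])   # 'computer scientist', leaning male
candidates = {
    "male_cs_homepage":   np.array([0.9, 0.8,  0.7]),
    "female_cs_homepage": np.array([0.9, 0.8, -0.7]),
}

ranking = sorted(candidates,
                 key=lambda name: cosine(query, candidates[name]),
                 reverse=True)
print(ranking)  # the female page drops below the male one on the gender lean alone
```

The two candidate pages are identical on every ‘topical’ dimension; only the stand-in gender dimension differs, and that alone decides the order.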
But wait, how should we change them? How do we know where to put that dying star? Here is how the MIT article above does it. You go and find some humans and you ask them questions about your vectors, trying to elicit bias. Then, using the human responses, you ‘de-bias’ the vectors using some mathematical operation.
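For the curious, the ‘mathematical operation’ is, roughly, a projection: you estimate a bias direction in the vector space (for gender, something like the difference between ‘she’ and ‘he’, with the relevant word pairs and judgements coming from the human responses) and subtract each word’s component along that direction. The sketch below is a simplified illustration of that general idea, with toy vectors and made-up names; it is not the paper’s exact procedure.

```python
# A minimal sketch of projection-based de-biasing. Toy vectors only; in
# practice the vectors come from pretrained embeddings and the bias
# direction is estimated from human-selected word pairs.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def debias(word_vec, bias_direction):
    """Remove the component of word_vec that lies along bias_direction."""
    b = normalize(bias_direction)
    return word_vec - (word_vec @ b) * b

she = np.array([0.9, 0.1, 0.2])
he  = np.array([0.1, 0.9, 0.2])
computer_scientist = np.array([0.2, 0.7, 0.6])

gender_direction = she - he
fixed = debias(computer_scientist, gender_direction)
print(fixed @ normalize(gender_direction))  # ~0: the gender component is gone
```

In other words, the star is moved on the map: the vector no longer leans along the chosen direction, whatever the data underneath it still says.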
I can see at least two problems with this. The first issue has to do with the choice of participants. Who is to say what the stereotypes of the day actually are? If you randomly sample the population to help you with your de-biasing effort, you will by definition sample more of the majority. If you crowdsource your responses, you are bound to whichever distribution of social groups is embodied by the crowdsourcing service’s workers. Are those people the right people to tell you about social biases? And what does it mean to be the right person anyway? The second issue has to do with the kind of bias you are eliciting. It seems pretty uncontroversial (at least, if you buy the notion of vector bias) that sexism and racism are bad and should be avoided. But what about political opinions? I wrote here that the UK Web before Brexit seems to have shown fairly strong anti-EU sentiments, which may or may not have been reflected in search results in the years prior to the now-famous referendum. Should the EU vector have been de-biased to show an appropriate balance of what the EU does and does not do? How? What kind of biases can be identified in the massive linguistic resources underlying our everyday Web services? When is it justified to fix them?
With respect to the first issue, I’d argue that the people who are discriminated against are best placed to say what they want and don’t want in the technologies we produce. Different groups have different issues and will need different algorithmic solutions to make their voices heard, or simply to meet their needs in a way that is as safe as possible for them, in their current context. I don’t believe in a one-size-fits-all solution developed independently of the people concerned. And in some cases, solutions that are tailored to specific groups will still be insufficient. In an ideal world, each individual might be able to shape the technology they use in the way they choose, to find the results that are really important to them.
At this point, some may say that this dream is already reality. It is called ‘personalisation’, and every self-respecting company does it. For instance, Google re-ranks your search results according to what it thinks your interests are. But personalisation has its own problems. First, it requires a level of centralised data-gathering that can be highly dangerous for minorities (i.e. Google needs to know about you in order to give you results that fit you). Second, it doesn’t work. In her book Algorithms of Oppression, Safiya Noble reports how, despite her own identity as a female black scholar and the substantial amount of data that Google holds about her, her searches often return offensive and biased results.
The second issue – what should we de-bias? – is just as thorny. Should we take particular care over politically charged concepts? But what is a political concept? Safiya Noble again provides an excellent example of the ubiquity of the issue, mentioning visibility problems experienced by people of colour on the Web. In particular, she mentions an interview with a hairdressing salon owner, who rightly points out the bias in treating black hair as a colour rather than a texture in search results. So hair can be a biased concept too, if its representation ‘encourages’ the disambiguation of black as a colour – in effect, the algorithm then returns white people’s black hair.
So the world is not perfect. But who’s going to fix it? I’m arguing here that a centralised fix is a bad idea. It is a bad idea politically: there are no structures in place to democratically influence the essential technologies that are currently served by huge private companies. It is also a bad idea technologically: the type of algorithms we create for the big data of big companies is not suited to modelling the small data of small people. Why not, instead, provide technological solutions that support the individual in their life and political choices?
We can’t fix the world in a piece of code, just as we can’t change the sky by redrawing the map of the stars. But we also have to acknowledge that our interaction with algorithms is part of the world, so we have to make sure that this interaction happens the way we want, for the politics that we want. It seems misguided to fight for fair, bottom-up political structures while at the same time accepting top-down oversight of information by centralised structures that are outside our sphere of influence. But if we are to match our digital lives to our politics, we need the right algorithms to support us. And those will be small, distributed, intelligent, open and transparent.
I don’t think that crowdsourced good conscience will cut it.