
The sociologist on how discrimination is embedded in technology and how we go about building a fairer world

Ruha Benjamin is an associate professor of African American studies at Princeton University, and lectures on the intersection of race, justice and technology. She founded the Just Data Lab, which aims to bring together activists, technologists and artists to reassess how data can be used for justice. Her latest book, Race After Technology, looks at how the design of technology can be discriminatory.

Where did the motivation to write this book come from?
It seems like we're looking to outsource decisions to technology, on the assumption that it's going to make better decisions than us. We're seeing this in almost every arena (healthcare, education, employment, finance), and it's hard to find a context that it hasn't penetrated.

Something that really sparked my interest was a series of headlines and articles I saw, all about a phenomenon dubbed "racist robots". Then, as time went on, these articles and headlines became less surprised, and they started to say: of course the robots are racist, because they're designed in a society with these biases.

The idea that software can have prejudice embedded in it is known as "algorithmic bias". How does it amplify prejudice?
Many of these automated systems are trying to identify and predict risk. So we have to look at how risk was assessed historically: whether a bank would extend a loan to someone, or whether a judge would give someone a certain sentence. The decisions of the past are the input for how we teach software to make those decisions in the future. If we live in a society where police profile black and Latinx people, that affects the police data on who is likely to be a criminal. So you'll have these communities overrepresented in the data sets, which are then used to train algorithms to look for future crimes, or to predict who's seen to be higher or lower risk.
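To make that feedback loop concrete, here is a minimal sketch using synthetic data and scikit-learn. Nothing in it comes from the book: the groups, weights and features are invented purely for illustration. Two groups behave identically, but one was historically flagged more often, and a model trained on those historical flags scores that group as riskier.

```python
# Illustrative sketch only (synthetic data, invented numbers): how biased
# historical labels turn into a "predictive" model that reproduces profiling.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)          # two groups, 0 and 1
behaviour = rng.normal(0, 1, n)        # underlying risk signal, identical for both groups

# Historical flags: group 1 was profiled, so it was flagged more often
# at the same behaviour level. These biased flags become the training labels.
flagged = behaviour + 1.5 * group + rng.normal(0, 1, n) > 1.0

X = np.column_stack([behaviour, group])
model = LogisticRegression().fit(X, flagged)

# Identical behaviour, different group: the model scores group 1 as riskier.
print(model.predict_proba(np.array([[0.0, 0], [0.0, 1]]))[:, 1])
```

Nothing about the model is malicious; it simply learns the profiling baked into its training labels.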

Are there other areas of society such as housing or finance where the use of automated systems has resulted in biased outcomes?
Policing and the courts are getting a lot of attention, as they should. But there are other areas too, such as Amazon's own hiring algorithm, which discriminated against women applicants even though gender wasn't listed on those résumés. The training set used data about who already worked at Amazon. Sometimes, the more intelligent machine learning becomes, the more discriminatory it can be: in that case, it was able to pick up gender cues from other aspects of those résumés, like their previous education or their experience.
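That proxy effect is easy to reproduce. In the sketch below (synthetic data again; the "proxy" feature is a stand-in for CV details such as club names or schools, not anything from Amazon's actual system), the protected attribute is withheld from training, yet the model recovers the historical penalty through a correlated feature.

```python
# Illustrative sketch only (synthetic data; "proxy" is a hypothetical CV
# feature correlated with gender, not anything from Amazon's real system).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

gender = rng.integers(0, 2, n)             # protected attribute, NOT given to the model
proxy = gender + rng.normal(0, 0.3, n)     # CV feature strongly correlated with gender
skill = rng.normal(0, 1, n)                # genuine qualification signal

# Historical hiring decisions penalised gender directly.
hired = skill - 1.0 * gender + rng.normal(0, 0.5, n) > 0

# Train without the gender column; the model recovers the penalty via the proxy.
model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)
print(model.coef_)  # strong negative weight lands on the proxy feature
```

Dropping the protected attribute from the inputs does not remove the bias; it only hides where the model is getting it from.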

In your book, you assert that the treatment of black communities is an indication of what's to come for other communities more generally. How would you say this extends to technology?
Thinking about how risk is racialised is one way into understanding how those systems can eventually be deployed against many more people, not just the initial target. This is one of the things we can see with these new digital scoring systems: companies that don't just look at your personal riskiness, but also at your social media and the people you're connected with. If someone you know has defaulted on a loan, that can affect you. So actually, incorporating and gathering more data can be even more harmful to people's lives.
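A toy version of that kind of network scoring, in which the names, weights and scoring rule are all hypothetical and chosen only to make the mechanism visible: a person with a clean record is penalised because a contact defaulted.

```python
# Toy network scoring (names, weights and rule are hypothetical): a clean
# record still scores as risky when a contact has defaulted.
defaults = {"ana": 0, "ben": 1, "chi": 0}                          # 1 = defaulted on a loan
contacts = {"ana": ["ben"], "ben": ["ana", "chi"], "chi": ["ben"]}

def risk_score(person: str, own_weight: float = 0.5, network_weight: float = 0.5) -> float:
    """Blend a person's own record with the average of their contacts' records."""
    network = sum(defaults[c] for c in contacts[person]) / len(contacts[person])
    return own_weight * defaults[person] + network_weight * network

print(risk_score("ana"))  # 0.5: penalised purely for knowing ben
```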

What role can legislation or regulation play in changing this direction?
I'm personally a little sceptical, because the passing of a law can be a placeholder for much more significant progress: people prematurely celebrate, even if not much changes. But I do increasingly think that legislation has a role to play. Even if a particular law is just a regulation in one state, or one country in Europe, it can be very effective: if these companies want to roll out technologies universally and then find they have to change something for a certain jurisdiction, that becomes an obstacle. Other elements, such as state-level protections for whistleblowers, are vital, because there has been retaliation against workers at these tech companies.

Home DNA testing kits are increasingly popular, and genomic screening is more commonplace too. Are you concerned about how these technologies are being used and weaponised?
We did an informal audit of three DNA testing companies when I was a postdoc at UCLA. The results we got back were completely different across the three companies, because each has its own reference data. These companies have access to our data, which they can buy and sell to other companies, and there are really very few regulatory safeguards on how this is going to be used. The similarity between those technologies (DNA testing kits, artificial intelligence, machine learning) is that the reference data shapes so much of the prediction. We have to question how it's put together, and which individuals are being used as reference points.

You call these systems the "new Jim Code", because of how they perpetuate inequality. How do they build on the legacy of Jim Crow?
So the original Jim Crow was about designing racial segregation, but it was really about maintaining status hierarchies. Many people look at high levels of segregation (which do still exist) and rarely question how it was designed; instead they put it down to stereotypes of lazy people who don't value education, and all kinds of narratives along those lines. When we think about the new Jim Code, I'm referring to things like the investigation from ProPublica, which found that companies advertising real estate on Facebook could tick boxes saying, in effect, that you don't want black people to see your ads. [Earlier this year US courts ruled against this and other discriminatory ad targeting.]

How aware do you think people are of issues around algorithmic bias, or automated systems?
Since I started working on this book, a little over two years ago, I have seen a dramatic shift in the tone of the conversation around technology. Part of that has been spurred on by Facebook, Cambridge Analytica and the US election. More and more people are realising that this idea of big tech coming to save us has really been dismantled. Part of it is shifting from a kind of paranoia around technology to what my activist colleagues like to say: from paranoia to power.

Could a more diverse workforce in Silicon Valley potentially provide a solution to these problems?
More diversity in Silicon Valley is important, but won't automatically address algorithmic bias. Unless all those diverse people are empowered to challenge discriminatory design processes, diversity is a ruse. We need a complete overhaul of the larger accountability structures that shape tech development, and we definitely can't wait for Silicon Valley to become more diverse before implementing much stronger regulation and accountability.

Race After Technology by Ruha Benjamin is published by Polity Press (£14.99). To order a copy go to guardianbookshop.com. Free UK p&p on all online orders over £15.

Read more: https://www.theguardian.com/technology/2019/jun/29/ruha-benjamin-we-cant-wait-silicon-valley-become-more-diverse-prejudice-algorithms-data-new-jim-code
