News

4 scandalous cases of discriminatory algorithms and how to avoid them

Author: 
Colectic
Two people showing each other photographs with their mobile phones. Source: Cottonbro (Pexels).

We talk to DataForGoodBCN about the risks of algorithms deciding for us.

Algorithms are a reflection and an amplifier of the discrimination in our society: they are racist, sexist, aporophobic and they perpetuate beauty standards.

"One of the main problems is that algorithms use historical data on past behaviours to detect patterns and, in this way, cast predictions. Unfortunately, our society shows structural inequalities that entail bias towards vulnerable groups, and these bias are also present in the data used to create algorithms”, they tell us from DataforgoodBCN.

Another reason is that, generally speaking, "they are designed to act on a specific group but are then applied to other segments of the population", when "the reality of those different groups isn't necessarily the same".

How can a machine decide for us? Here are four of the most scandalous cases of discriminatory algorithms that have come to light:

Amazon discards women

Some years ago, the e-commerce giant created an algorithmic system to streamline recruitment that favoured men over women: at the time, the bulk of its workforce was made up of men, and the algorithm had been trained on personnel data from the previous ten years. The multinational eventually withdrew the system.

TikTok censors dissidence

The German publication Netzpolitik gained access to internal TikTok documents showing that the people hired to moderate content on the application had been instructed to suppress posts by users considered vulnerable to cyberbullying, a category that included people with functional diversity, non-normative bodies or LGBTIQ+ identities. When this information was disclosed, the social network changed the policy.

An algorithm for racist justice

COMPAS is a predictive system that has been used by some US courts since 1998 to inform the sentences handed to defendants based on their risk of reoffending, which the software estimates from 137 factors relating to the person. By screening the data, ProPublica showed that the algorithm had a racist bias and rated people of colour as more likely to reoffend.
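ProPublica's full methodology is more involved, but the core of this kind of audit can be sketched in a few lines of Python: compare, for each group, how often people who did not reoffend were nevertheless flagged as high risk. The figures below are invented, purely for illustration.

```python
# Toy bias audit (hypothetical data, not the COMPAS dataset): compare how often
# people who did NOT reoffend were nevertheless labelled "high risk", per group.
records = [
    # (group, labelled_high_risk, actually_reoffended)
    ("black", True, False), ("black", True, False), ("black", False, False),
    ("black", True, True),
    ("white", False, False), ("white", False, False), ("white", True, False),
    ("white", True, True),
]

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in the group who were still flagged high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, round(false_positive_rate(group), 2))
# A large gap between the two rates is the kind of disparity ProPublica reported.
```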

Have we chosen our partners?

In her book 'The Algorithm of Love', journalist Judith Duportail confirmed through her research into Tinder that chance plays no part. The truth is that this dating app, and probably others too, keeps a secret ranking of its users based on parameters such as beauty and intelligence quotient, and offers matches according to our level of desirability. As a result, we only see people at our own level because, as acknowledged by the AI system it uses, designed by Amazon, "people with the same level of attractiveness are more likely to get along".

As Duportail explains, you move up the ranking when you receive a match and move down when you are rejected. Moreover, the system follows a patriarchal logic: it rewards men with higher education and penalises women with higher education, just as it favours relationships between older men and younger women.
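Tinder's actual formula is secret, so the following Python sketch is only a guess at the mechanics Duportail describes, using a classic Elo-style update (the scores and the constant K are arbitrary): you gain points when someone likes you, lose points when they don't, and gains are larger when the other person ranks above you.

```python
# Elo-style sketch of a desirability ranking (hypothetical; Tinder's real
# formula is secret). A user gains points when liked, loses points when
# rejected, so users drift toward partners with similar scores.
K = 32  # how strongly one interaction moves the score

def expected(score_a: float, score_b: float) -> float:
    """Probability-like expectation that A 'wins' the interaction against B."""
    return 1.0 / (1.0 + 10 ** ((score_b - score_a) / 400))

def update(score_a: float, score_b: float, a_was_liked: bool) -> float:
    """Return A's new score after being liked (1) or rejected (0) by B."""
    outcome = 1.0 if a_was_liked else 0.0
    return score_a + K * (outcome - expected(score_a, score_b))

alice = 1200.0
bob = 1400.0
alice = update(alice, bob, a_was_liked=True)   # liked by a higher-ranked user: big gain
print(round(alice))  # ~1224
alice = update(alice, bob, a_was_liked=False)  # rejected by the same user: smaller loss
print(round(alice))  # ~1216
```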

Fair algorithms?

DataForGoodBCN advocates best practices such as ensuring that the data used adequately represent the group being studied, or collecting only the minimum data needed to reach the goal.
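As an illustration of the first of these practices, a representativeness check can be as simple as comparing the share of each group in the training data with its share in the population the algorithm will be applied to. The figures below are hypothetical.

```python
# Sketch of one suggested best practice: check whether the groups in a
# training dataset appear in roughly the same proportions as in the
# population the algorithm will be applied to (all figures hypothetical).
reference_population = {"men": 0.49, "women": 0.51}   # e.g. census shares
training_data = {"men": 870, "women": 130}            # records per group

total = sum(training_data.values())
for group, expected_share in reference_population.items():
    actual_share = training_data[group] / total
    gap = actual_share - expected_share
    flag = "  <-- representation gap" if abs(gap) > 0.10 else ""
    print(f"{group}: {actual_share:.0%} in data vs {expected_share:.0%} expected{flag}")
```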

They are also in favour of algorithmic impact assessments, a governance mechanism that has been compulsory since the General Data Protection Regulation (GDPR) was adopted in 2016, although few organisations have actually implemented it.
