OASI, a search engine for the algorithms that governments and large corporations use on us

  • Machine learning algorithms make predictions by identifying patterns in massive amounts of data. Source: Pexels.

Eticas Foundation launches a directory for understanding the risks and challenges of the machine learning systems that directly affect us.

The Observatory of Algorithms with Social Impact (OASI) is a search engine for the algorithms that governments and large corporations use on us. Entries are organized by name, area and country of development, and include information such as whether the algorithm has a discriminatory bias.
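To make the directory's organization concrete, here is a minimal sketch of what one entry and a country filter might look like. The field names and the example entry are illustrative assumptions based on the fields the article mentions, not OASI's actual data model.

```python
# Hypothetical OASI-style directory entry; every value here is a placeholder,
# not a real record from the observatory.
entry = {
    "name": "Example risk-scoring system",
    "area": "Justice",
    "country": "Spain",
    "discriminatory_bias": True,
}

directory = [entry]

# Filtering by country, as the search engine's organization suggests:
spanish_entries = [e for e in directory if e["country"] == "Spain"]
```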

This unprecedented tool, created by Eticas Foundation, shows how these automated learning systems are far more present in our lives than we may believe, how they make decisions, and how they affect society.

As with any technological advance, the organization notes that the tool will be permanently updated in a collaborative way, and it has opened a channel for adding new algorithms or amending existing entries.

Algorithmic transparency is a demand of the social technology sector, and this search engine takes a small step forward by also explaining what machine learning algorithms are and what impact they have. Even so, we largely remain ignorant of how most of these algorithms work: their internal functioning is opaque, which is why they are known as "black boxes".

The goal is to better understand the risks and challenges they pose. These automated systems, which use artificial intelligence, can predict natural disasters or detect diseases early by analyzing data, but they are also known to discriminate by gender, ethnicity or age, which makes them both unfair and inefficient.

State algorithms

If we look at the registered algorithms deployed in the Spanish State, which therefore have a direct impact on us, we find several questionable examples.

The first is a system from the Spanish Public Employment Service (SEPE) for matching people with job opportunities, which has been called into question because of its shortcomings: when it went into service, job offers dropped by 50%.

The second is used by the Spanish National Police to identify false reports, and it may introduce a socio-economic bias because it weighs the grammar and morphology of the reports. The Catalan Government's Regional Ministry of Justice also uses a system to assess inmates' risk of recidivism.

Finally, there is an algorithm to detect hate speech on Twitter that has not yet been deployed and could discriminate against certain social demographic groups. Created by a data analyst and developed jointly by the Autonomous University of Barcelona and the Ministry of Home Affairs, it can filter some 500 words relating to insults, sensitive issues and groups likely to face violence. Every day the algorithm detects 3,000 to 4,000 tweets with these features.
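The filtering mechanism described above can be illustrated with a toy keyword filter. This is a simplified sketch of the general technique only: the word list and tweets below are invented placeholders, not the real lexicon of roughly 500 terms the actual system uses.

```python
# Toy keyword-based tweet filter; FLAGGED_TERMS is a hypothetical stand-in
# for the real ~500-word lexicon mentioned in the article.
FLAGGED_TERMS = {"insult1", "insult2"}

def flag_tweet(text: str) -> bool:
    """Return True if the tweet contains any flagged term (case-insensitive)."""
    tokens = text.lower().split()
    return any(token.strip(".,!?") in FLAGGED_TERMS for token in tokens)

tweets = [
    "have a nice day",
    "you are an INSULT1!",
]
flagged = [t for t in tweets if flag_tweet(t)]
```

A real system would go beyond exact keyword matching (handling misspellings, context and word variants), which is precisely where the risk of over- or under-flagging certain groups' speech comes in.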
