Bias in Algorithms

Artificial Intelligence and Discrimination

What types of bias can arise with the use of algorithmic decision support systems, and when could this amount to discrimination?

As part of a research consortium, two use cases – predictive policing systems and automated offensive speech detection – were investigated for the European Union Agency for Fundamental Rights (FRA).

AI systems: useful tool or overhyped gadget with bad consequences?

Imagine a sexist machine with xenophobic tendencies influencing decisions about whether you get a job or financial aid.

This machine would also determine whether you are a potential criminal based on where you live, or consider you offensive because of your gender.

What might sound like a dystopian future is already happening.

Artificial Intelligence (AI)-based algorithms seem to be everywhere now. Hardly a day goes by without a company or a new start-up proudly presenting its “revolutionary AI solution” that will make companies or the public sector faster, more accurate, and basically better at everything.

For instance, algorithmic decision systems (ADS) are already used to assist the healthcare sector in diagnosing illnesses like cancer, or by companies to analyse consumer behaviour and market trends, enabling more precise forecasts of product sales.

These systems, like many AI systems, “often rely on the analysis of large amounts of personal data to infer correlations or, more generally, to derive information deemed useful to make decisions. Human intervention in the decision-making may vary, and may even be completely out of the loop in entirely automated systems”, according to a report by the European Parliament.

Basically, ADS are supposed to help us make more “educated” decisions faster, or even take decisions for us, thereby promising to speed up laborious processes and increase accuracy.

However, large amounts of data come with large amounts of unforeseen consequences. This is especially true when AI systems are ingrained with ideas about the world and its people that are discriminatory against certain groups.

This article will show how algorithmic decision systems are ingrained with bias, shedding light on the challenges, implications, and potential solutions to bias in ADS.

The article will look at a study of two use cases on predictive policing and automated hate speech detection, in which members of leiwand.ai were involved as part of a research consortium.

Biased AI?

Discriminatory bias is pervasive and found throughout societies’ cultural ecosystems.

Artificial intelligence systems work and learn with large sets of data – usually from the internet – which are substantially filled with unfiltered, human-made content. Algorithms trained on this data therefore reflect discriminatory bias based on characteristics such as gender, religion, ethnicity, or disability.

AI Systems are not neutral

But bias in AI is not just a matter of what the training data contains. The quality and representativeness of that data, or the lack thereof, also play a role, as do interactions between humans and machines: how people interpret a system’s outputs, which actions they take as a result, and how those actions feed back into the system are further sources of bias. Decisions made by developers during development, as well as the context in which an AI system is deployed, can likewise lead to biased results.

Delegating decision-making or influence to ADS therefore raises a plethora of ethical, political, legal, and technical concerns, demanding thorough analysis and resolution.

With AI still very much in its infancy, such systems come with risks of their own if left unrestrained. In practice, many companies and public-sector bodies use algorithmic decision systems without truly knowing what data they run on or what their actual impact will be.

The Project: Detecting Biases within Algorithmic Decision Systems

We investigated bias for the European Union Agency for Fundamental Rights (FRA) by creating algorithms ourselves and joining forces with social science and legal researchers to explore the legal and fundamental rights implications of the use of algorithms.

We developed bias tests in which our algorithms were used to analyse cases on predictive policing and hate speech, applying a wide range of machine learning techniques to determine what types of ethnic and/or gender bias can arise with the use of algorithmic decision support systems.

Hate Speech meets Overpolicing

Case 1

What if the police are repeatedly sent to certain neighbourhoods based on algorithms, despite faulty crime predictions?

A variety of law enforcement agencies in the European Union utilise predictive policing systems – algorithmic systems that aim to forecast and prevent potential future criminal activity.

While this technology-supported approach to crime fighting promises to prevent crime more effectively and make policing fairer, it has been shown to be privacy-invasive, inaccurate, and even to amplify biases, leading to more discrimination against certain areas and groups.

Feedback Loops

Largely, biases and unfair policing within predictive policing systems are created through feedback loops.

A feedback loop occurs when predictions made by a system influence the data that are then used to update the system. So-called runaway feedback loops not only perpetuate biases in the data but might actually increase them.

For example, if police forces are advised to monitor one area based on predictions that arise from biased crime records, then police will patrol that area more heavily, and accordingly detect more crime in that area. They will thus generate new crime records that are even more biased.

For the project, we simulated a simplified version of a fully automated predictive policing algorithm, focusing on feedback loops in relation to occurrence of crime in neighbourhoods.
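
To make the mechanism concrete, the following is a minimal sketch of such a loop in Python. It is not the simulation built for the project: the allocation rule, the detection model, and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two neighbourhoods with identical "true" daily crime rates.
true_crime_rate = np.array([10.0, 10.0])

# Historical crime records are slightly biased against neighbourhood 0.
recorded_crime = np.array([105.0, 100.0])

for day in range(365):
    # The "prediction": send today's patrol to the neighbourhood with
    # the most recorded crime so far.
    target = np.argmax(recorded_crime)

    # Patrols only detect crime where they are present, so only the
    # targeted neighbourhood generates new crime records today.
    detected = rng.poisson(true_crime_rate[target])
    recorded_crime[target] += detected

print(recorded_crime)
# Neighbourhood 0's records grow by roughly 10 per day, while
# neighbourhood 1's never grow at all: the small initial gap attracts
# every patrol, and the loop never self-corrects.
```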

Consequences – Feedback loops are prone to occur in automated environments

Some neighbourhoods will receive more police patrols than warranted by their “true” crime rate, which leads to “overpolicing”. Assuming that police have limited resources, this means other neighbourhoods will receive fewer patrols than necessary, resulting in “underpolicing”.

The report identifies some negative aspects of overpolicing, such as increased police stops and searches, and intrusion in homes. These actions can affect various fundamental rights, including the right to physical integrity, the right to respect for private and family life and the right to data protection.

Underpolicing, on the other hand, could put people at a higher risk of becoming victims of crime, impacting fundamental rights such as the rights to life and physical integrity and the right to property.

Mitigation – Thorough exploration of bias sources

The report suggests a mix of strategies to deal with the issue of runaway feedback loops in predictive policing:

  • Technical mitigation measures, such as regularization and down-sampling (see the sketch after this list)

  • Non-technical approaches, such as raising awareness among police of the problems of profiling and sensitizing them to the limitations of the system's predictions

  • Measures to improve crime reporting rates by victims and witnesses of crime
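
As a hedged illustration of how two of these measures interact, the toy model above can be extended so that patrol-driven discoveries are down-sampled before they are fed back, while victim and witness reports enter the records regardless of where patrols go. The weights are illustrative assumptions, not the report's prescription.

```python
import numpy as np

rng = np.random.default_rng(0)

# Same toy set-up as in the sketch above.
true_crime_rate = np.array([10.0, 10.0])
recorded_crime = np.array([105.0, 100.0])

report_rate = 0.5        # fraction of true crime reported by victims/witnesses
discovery_weight = 0.1   # down-sampling: keep ~10% of patrol-driven discoveries

for day in range(365):
    target = np.argmax(recorded_crime)

    # Victim/witness reports arrive everywhere, independently of patrols.
    recorded_crime += rng.poisson(true_crime_rate * report_rate)

    # Patrol discoveries still only happen where the patrol is sent,
    # but they are down-sampled before entering the records.
    discovered = rng.poisson(true_crime_rate[target])
    recorded_crime[target] += rng.binomial(discovered, discovery_weight)

print(recorded_crime / recorded_crime.sum())
# The shares now stay roughly stable instead of one neighbourhood
# absorbing almost all new records as in the unmitigated loop above.
```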

Case 2

What if legitimate content posted online by or about certain groups gets deleted more often than others?

There is a lot of content online. It has become impossible to manually filter out online hatred, given the sheer number of platforms and apps and the massive amount of content their users create every day.

To detect or predict online hatred more efficiently, offensive and hate speech detection systems are already employed on social media and other platforms. These algorithmic systems categorise an entered text as offensive or non-offensive.

Even though their aim is to combat hatred, these tools can produce outcomes that are biased with respect to certain characteristics, like ethnicity or gender. Bias occurs when the system is more prone to error for certain vulnerable groups.

Many algorithms in active use are based on off-the-shelf, general-purpose AI models that are already ingrained with bias. Such models are customizable for specific tasks and are used in areas like text classification, machine translation, or, in this case, offensive speech detection.

Problems arise when comments are falsely identified as offensive by the system (a form of censorship) and when genuine insults (hate speech and personal attacks) go unrecognised and are allowed to remain.

To understand why certain people and statements are falsely flagged as hate speech, we created various algorithms that used real datasets containing offensive speech, employing diverse methods such as pre-trained AI models. We worked with datasets in English, German, and Italian.
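
The sketch below shows the kind of term-substitution probe this involves: the same otherwise neutral template sentence is scored with different identity terms swapped in. It assumes the Hugging Face transformers text-classification pipeline; the model identifier and the template are placeholders, not the models or test sentences used in the project.

```python
# A sketch of a term-substitution probe for identity-term bias.
# "some-org/offensive-speech-model" is a placeholder identifier, not a
# model used in the project; any text-classification model that returns
# an offensiveness label and score could be plugged in here.
from transformers import pipeline

classifier = pipeline("text-classification", model="some-org/offensive-speech-model")

TEMPLATE = "I had lunch with a {} colleague today."
TERMS = ["Muslim", "Christian", "Jewish", "gay", "straight", "tall"]

for term in TERMS:
    result = classifier(TEMPLATE.format(term))[0]
    print(f"{term:10s} -> {result['label']}: {result['score']:.3f}")

# If an otherwise neutral sentence is scored as more offensive merely
# because it mentions "Muslim", "Jewish" or "gay", the model has learned
# a spurious association between identity terms and offensiveness.
```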

Consequences – Bias across languages

While testing our algorithms, we found that certain terms cause text to be predicted as offensive more frequently, and that this differs across languages:

  • Terms like 'Muslim', 'gay', and 'Jew' elicit significantly higher predictions of offensiveness compared to other terms.

  • Across languages, there were differences in when and how strongly certain terms were detected as hate speech by the simulation.

  • The quality of the available data and algorithmic tools was significantly worse in German and Italian than in English.

  • Bias is not only about "what" was written (the content), but also about "how" it was written (style of expression, dialect).

  • The training data showed a slight tendency towards more hatred directed at women.

Mitigation – Extensive evaluation is required prior to and during AI deployment

Researchers are currently exploring the creation of "neutral" training data concerning attributes like gender and ethnicity. However, this raises concerns about whether such models might overlook offensive speech directed at certain groups that are disproportionately impacted by discrimination.

Before using a speech detection algorithm for any task, one must consider the extent to which individuals with certain characteristics might face disadvantages. Evaluations should therefore consider the training data and the prediction outcomes across various potentially affected groups.
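
In code, such an evaluation can be as simple as breaking error rates down by the group a post mentions or targets. The sketch below computes false positive rates (harmless posts wrongly flagged as offensive) per group; the column names and the tiny annotated test set are illustrative assumptions, not project data.

```python
# A hedged sketch of a group-wise evaluation: false positive rates
# (harmless posts wrongly flagged as offensive), broken down by the
# group a post mentions. The columns and data are illustrative.
import pandas as pd

df = pd.DataFrame({
    "text_mentions": ["muslim", "none", "women", "none",
                      "jewish", "muslim", "women", "none"],
    "label":         [0, 0, 1, 0, 1, 0, 0, 1],   # 1 = actually offensive
    "prediction":    [1, 0, 1, 0, 0, 1, 0, 1],   # 1 = flagged as offensive
})

# Keep only posts that are not actually offensive, then see how often
# the classifier flags them anyway, per mentioned group.
harmless = df[df["label"] == 0]
fpr_by_group = (harmless["prediction"] == 1).groupby(harmless["text_mentions"]).mean()

print(fpr_by_group)
# Large gaps between groups (e.g. harmless posts mentioning Muslims being
# flagged far more often than others) indicate exactly the kind of bias
# described above.
```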

So, what about ADS? Is it any good or does it just cause trouble for everybody?

From what we’ve seen, algorithmic decision systems, at least in their current state, are not suitable to perform completely automated tasks or give unbiased suggestions.

Identifying and mitigating biases remains challenging

For ADS to work properly, every system would need a thorough evaluation of its impact on fundamental rights before it can be deployed safely. This should involve as many stakeholders and affected groups as possible, so that bias can be mitigated through their cooperation in the development and implementation of ADS.

In the case of predictive policing, use of algorithms can affect certain neighbourhoods and groups negatively due to over- and underpolicing.

Reporting rates of victims and of witnesses to crime need to increase, while law enforcement needs to reduce its reliance on algorithmic predictions.

Hate speech detection categorizes content about certain ethnic groups and about women more frequently as offensive, showing the bias inherent in AI systems and their training data.

Before using algorithmic hate speech detection systems, evaluations should consider the training data and the prediction outcomes across various potentially affected groups.

Moreover, this study revealed a significant disparity in the resources and expertise available for Natural Language Processing (NLP) technologies between English and other languages. The performance of the models in German and Italian falls notably short compared to those in English.

What we can take from the study is that there is no “quick fix” to AI-based discrimination.

While AI appears to be capable of making us more efficient in many ways, it is still far from the stage of development where we humans can simply leave such systems unsupervised.

The project was carried out by a high-ranking consortium consisting of Rania Wazir (consortium lead), Lena Müller-Kress (winnovation), Christiane Wendehorst (University of Vienna Law, with support from Daniel Gritsch), Claudia Schnugg, Thomas Treml, Sarah Cepeda and Isabella Hinterleitner (TechMeetsLegal).

FRA published a report based on the consortium’s research findings in December 2022: https://fra.europa.eu/en/publication/2022/bias-algorithm

Link to FRA:

Negotiated procedure for a middle value contract: Providing evidence on bias when using algorithms – simulation and testing of selected cases | European Union Agency for Fundamental Rights (europa.eu)

© All photos taken from Unsplash
