Algorithmic Bias in the Real World

Practical Examples of Bias

Abhishek Dabas
8 min read · Jul 27, 2020

***The intent of this blog is simply to show the importance of understanding bias in Artificial Intelligence***

While there are many real and potential benefits of using AI, flawed decision-making caused by human bias embedded in AI outputs is a major concern for its real-world deployment. The growth of Artificial Intelligence in sensitive areas such as hiring, criminal justice, and healthcare has sparked debates on bias and fairness.

Consider the following examples:

  1. Predictive Policing:

Predictive policing uses algorithms to analyze data in order to predict, and help prevent, potential future crimes. It is used to identify potential crime hot spots where police patrols can be directed in the future, with the aim of decreasing crime rates. Models are built on past crime history to predict where the hot spots will be. But the system is really based on where police officers have been in the past and made arrests, and the machines learn these patterns. It is important to understand here that an arrest is not the same as a crime. As a result of such models, some areas get excessive patrolling while others get almost none, and this is exactly where a feedback loop amplifies the bias: more patrols produce more recorded arrests, which in turn attract more patrols.

The stated goal here is “stopping crime before it starts.” But is that what we are actually doing? Read More
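The dynamic is easy to see in a toy simulation (all numbers here are invented for illustration): two neighborhoods with identical true crime rates, where patrols are allocated in proportion to past arrest counts.

```python
# Toy simulation of the predictive-policing feedback loop.
true_crime_rate = {"A": 0.10, "B": 0.10}   # identical by construction
arrests = {"A": 60, "B": 40}               # A just happened to be patrolled more

for year in range(20):
    total = sum(arrests.values())
    # 100 patrol units, split according to where arrests were made before.
    patrols = {n: 100 * arrests[n] / total for n in arrests}
    # Arrests can only be recorded where police are present, so the
    # arrest data mirrors patrol allocation, not the underlying crime.
    for n in arrests:
        arrests[n] += round(patrols[n] * true_crime_rate[n])

print(arrests)  # A's lead in recorded arrests keeps growing despite equal crime
```

Even though both neighborhoods have exactly the same crime rate, neighborhood A's historical head start in arrests locks in more patrols, and the absolute gap in recorded arrests widens every year.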

2. Predictive Sentencing:

It is a method where data about a person is used to produce a criminality score that can be consulted when sentencing that person for a crime. Judges can use a score calculated by a tool called COMPAS, which is based on a risk-assessment questionnaire given to people accused of a crime. Answers to questions about things like socioeconomic status, neighborhood, and family background are fed into the system. The data is then used to predict an individual’s tendency to commit a crime in the future. Read More, ProPublica
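COMPAS itself is proprietary, so the sketch below is only a hypothetical illustration of the general idea: a weighted sum over questionnaire answers. Note how a feature like "high-crime neighborhood" can act as a proxy for race and class even though neither is an explicit input.

```python
# Hypothetical risk score (NOT the actual COMPAS formula): weights and
# questions are invented to show how questionnaire answers become a score.
weights = {"prior_arrests": 2.0, "age_under_25": 1.5,
           "high_crime_neighborhood": 1.0, "unemployed": 1.0}

def risk_score(answers):
    """Sum the weights of every question answered 'yes'."""
    return sum(weights[q] for q, yes in answers.items() if yes)

print(risk_score({"prior_arrests": True, "age_under_25": False,
                  "high_crime_neighborhood": True, "unemployed": True}))
```

Two defendants with identical criminal histories can receive different scores purely because of where they live or whether they have a job, which is exactly the concern ProPublica raised.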

3. Face Analysis Technology:

It is AI technology for detecting human faces and analyzing attributes such as gender, race, etc. These facial analysis systems appeared to be very accurate, but turned out to be biased. Researchers have found that facial analysis technologies have higher error rates for darker-skinned faces, mainly due to unrepresentative training data. Let's look at some examples:

  • Predicting criminality

Faception: Faception is a facial personality analytics company. Its marketing claims that its breakthrough computer vision and machine learning technology analyzes facial images and automatically reveals personality in real time, promising predictive screening solutions and preventive actions in public safety and smart cities.

Companies are using algorithms to detect criminals from facial image data, and research papers published in the field have shed some light on the topic. Using very small test datasets, these papers have claimed high accuracy in detecting criminality. Some have even suggested that the angle from nose tip to mouth corners affects your chances of being classified as a criminal, i.e., if you look sad or angry you have a higher chance of being predicted a criminal than when you are smiling. Read More

  • Predicting Homosexuality

Algorithms have been used to predict a person's sexuality from a facial image. These models are trained on data from dating websites and other platforms where people have explicitly disclosed their sexuality. One problem here is that dating profiles reflect a curated social self, where people present the best version of themselves. The researchers nevertheless went on to make bolder claims, suggesting such systems would soon be able to measure intelligence, political orientation, and criminal inclinations from facial images alone. These models have been reported to show around 90% accuracy. Read More, BBC News, The Guardian, The Guardian

  • Webcams can't recognize people of color: Face-tracking systems are used by companies like Nikon and HP to better frame their users. HP uses face tracking to keep the lens focused on the user's face as they move, for video calling and similar features, whereas Nikon uses face detection to capture better pictures, and the latter was found to have trouble with Asian faces. Cases have been found where these systems failed to detect people of color at all. Read More
  • Amazon's facial recognition system works well much of the time, but when asked to compare the faces of all 535 members of Congress against 25,000 public arrest photos, it found 28 "matches" when in reality there were none. Separately, a computer program designed to vet job applicants for Amazon was found to systematically discriminate against women.

Note: Gender Shades: A study by MIT Media Lab researchers found that commercial gender classifiers from IBM, Microsoft, and Face++ had error rates up to 34.4 percentage points higher for darker-skinned faces than for lighter-skinned ones.

  • Microsoft, IBM, and Face++ have since reduced this error to around 2%. The real concern is what happens when companies deploy such systems while they still show bias across human faces.
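The kind of audit Gender Shades performed boils down to comparing a classifier's error rate per subgroup. A minimal sketch, with entirely made-up predictions and labels:

```python
# Toy subgroup audit in the spirit of Gender Shades (all data invented).
def error_rate_by_group(y_true, y_pred, groups):
    """Return {group: fraction of misclassified examples in that group}."""
    stats = {}
    for yt, yp, g in zip(y_true, y_pred, groups):
        err, n = stats.get(g, (0, 0))
        stats[g] = (err + (yt != yp), n + 1)  # bool counts as 0/1
    return {g: err / n for g, (err, n) in stats.items()}

y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 0, 0, 1]
groups = ["light", "light", "light", "dark", "dark", "dark", "light", "dark"]
rates = error_rate_by_group(y_true, y_pred, groups)
print(rates)  # a large gap between subgroups signals a biased model
```

An overall accuracy number can look excellent while hiding exactly this kind of per-group disparity, which is why disaggregated evaluation matters.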

4. Hiring programs:

Many hiring programs at big companies use AI to review job applications and select top talent, especially at companies that receive large volumes of applications. Amazon had been using such an AI system, which was later found to be biased against women: the system taught itself that male candidates were preferable. Amazon has stopped using the biased model, but it is essential to understand the implications of deploying such models in the real world. Read More
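How can a screener become biased when gender is never an explicit input? A hypothetical sketch (the resumes and scoring rule below are invented; the "women's" token penalty mirrors what was reported in the Amazon case): any word that correlates with historically rejected applications becomes a negative signal.

```python
from collections import Counter

# Invented historical screening data: (resume text, 1 = hired, 0 = rejected).
past = [
    ("captain of chess club", 1),
    ("led robotics team", 1),
    ("captain of women's chess club", 0),   # historically rejected
    ("women's coding society president", 0),
]

hired, rejected = Counter(), Counter()
for resume, label in past:
    for tok in resume.split():
        (hired if label else rejected)[tok] += 1

def score(resume):
    # Naive score: tokens seen mostly in rejections drag the score down.
    return sum(hired[t] - rejected[t] for t in resume.split())

print(score("captain of chess club"))
print(score("captain of women's chess club"))  # lower, purely from the proxy token
```

The two resumes are identical except for one gendered word, yet the second scores lower, because the model learned the historical pattern rather than anything about job performance.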

5. ImageNet Dataset:

In 2009, a project named ImageNet was launched to provide researchers around the world with an easily accessible image dataset, now containing some 14 million images. Images for each concept are quality-controlled and human-annotated. It was a tremendous help to researchers and a motivation for people in the field to teach and do research with a large image dataset, and it has been an important part of unlocking the potential of AI, from deep learning to facial recognition. In recent years, however, problems have been found within the dataset: people have shown that the data is biased. One reported issue is that the people creating the labels were predominantly white men, and the labels reflect their perspective. A good read on this topic is the project Excavating AI. It is a very good example of bias in labelling.

Since then, the ImageNet team has analyzed its dataset and tried to identify the sources of bias. They have taken measures to reduce this bias, such as removing derogatory labels and adding more demographically and geographically diverse photos. Read More, Effort for fairness in ImageNet, Labeling is not what you think

6. Spewing out biased comments by Twitter bots:

Microsoft created an automated bot, Tay, to learn more about conversation: it learned from users and mimicked their language. Very soon it started reflecting users' biases in its comments and tweets. Read More & more.

7. Google Autocomplete Still Makes Vile Suggestions:


Autocomplete is a reflection of what people search for, and hence a very good example of a biased feedback loop. The more people search for particular terms, the more prominently the related suggestions and links appear, and the more likely people are to click on them. The more traffic such pages receive, the higher they rank, and the loop reinforces itself. Google has fixed many of the results that show bias, but this highlights the importance of setting policies for such use cases. Read More, YouTube’s Search Autofill
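The rich-get-richer dynamic behind this loop can be sketched in a few lines (the click numbers are invented): suggestions are ranked by click count, and the top-ranked slot attracts most of the clicks.

```python
# Toy model of the autocomplete feedback loop: rank by clicks,
# then let the top slot absorb most of the new clicks.
clicks = {"suggestion_a": 11, "suggestion_b": 10}  # near-identical start

for step in range(1000):
    ranked = sorted(clicks, key=clicks.get, reverse=True)
    clicks[ranked[0]] += 3   # top slot gets most of the attention
    clicks[ranked[1]] += 1

print(clicks)  # a one-click head start becomes a huge, self-reinforcing gap
```

Neither suggestion is "better"; suggestion_a simply started one click ahead, and the ranking mechanism turned that accident into a durable lead.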

8. Ad-serving algorithms are biased:

Ad-serving algorithms are core components of almost every web platform today, and complex algorithms have been built to select the most relevant ads for each user. Researchers have found that slight tweaks to a user's demographics can significantly change the advertisements shown. Advertising tools used by big platforms like Facebook optimize their decisions based on people's historical preferences: the machine learning model picks up patterns shown by people in the past and reapplies them in future predictions (sometimes called "bias laundering"). For example, white users are shown more ads about houses, while women are shown more ads for nursing and secretarial jobs. Read More here
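A hypothetical sketch of the mechanism (ad names, groups, and click-through rates are all invented): an optimizer that serves whichever ad historically clicked best for a group will keep re-applying the old pattern.

```python
# Invented historical click-through rates per (ad, group).
history = {
    ("engineering_ad", "men"): 0.05, ("engineering_ad", "women"): 0.02,
    ("nursing_ad", "men"): 0.01,     ("nursing_ad", "women"): 0.04,
}

def pick_ad(group, ads):
    # Maximize expected clicks based on history alone, which locks
    # each group into whatever pattern the past data contains.
    return max(ads, key=lambda ad: history.get((ad, group), 0.0))

ads = ["engineering_ad", "nursing_ad"]
print(pick_ad("men", ads), pick_ad("women", ads))
```

Nothing in the optimizer is explicitly about gender; it is simply maximizing a metric over biased historical data, which is exactly why the skewed ad delivery persists.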

9. Bias affects Healthcare Decision Making:

There is a growing use of Artificial Intelligence in the healthcare industry. Consider, for example, ML models used to diagnose skin cancer and make the detection process more efficient, so that doctors and staff can screen more patients in less time. In one study, researchers reviewed more than 50,000 records and found a significant flaw in a widely used model: it gave higher risk scores to white patients and lower scores to patients of color. This skews which patients get extra help, even when all of them are equally sick; such algorithms underestimate the sickness of people in certain demographics. One reason is that the models are trained on past medical histories and use features like past healthcare spending to determine their output, so patients who historically had less access to care (and therefore spent less) look healthier to the model. Machine learning algorithms pick up these misleading correlations from health data. Another reason is that a model trained on Americans will perform worse on Asian or African populations, and vice versa, since some medical conditions are more common in certain groups than in others. Read More
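The spending-as-proxy problem described above can be shown in miniature (all numbers invented): two patients who are equally sick, one of whom faces barriers to accessing care.

```python
# Sketch of the proxy-label flaw: the model predicts *spending*,
# but spending understates *need* for the group with less access.
patients = [
    {"group": "A", "true_need": 8, "access": 1.0},
    {"group": "B", "true_need": 8, "access": 0.5},  # equally sick, less access
]

for p in patients:
    # Historical spending = need x access; the model's "risk" is spending.
    p["risk_score"] = p["true_need"] * p["access"]

for p in patients:
    print(p["group"], "need:", p["true_need"], "risk score:", p["risk_score"])
```

Both patients need the same amount of care, yet the second receives half the risk score, so a program that allocates extra help by score will systematically under-serve the group with historically worse access.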


  1. These AI systems have repeatedly been shown to work better for some demographic groups than others. Left unchecked, these biases perpetuate themselves (so-called "bias laundering") and lead to consequences that keep worsening the situation.
  2. Unchecked, unregulated AI can amplify bias. Awareness of bias and accountability in AI therefore need to be developed to prevent harmful uses of AI systems.
  3. One must understand why and how decisions are being made by the AI algorithm in order to identify the biases.

Good Read on the Topic:

  • How to Make a Racist AI Without Really Trying





Abhishek Dabas

Masters Student | Machine Learning | Artificial Intelligence | Causal Inference | Data Bias | Twitter: @adabhishekdabas