What type of algorithmic biases are there?

Organizations across the private and public sectors are increasingly using artificial intelligence systems and machine learning algorithms to automate both simple and complex decision-making tasks. Before the world of algorithms, people and organizations made decisions on matters such as hiring and criminal sentencing without the use of any algorithmic systems.

Machine learning algorithms rely on multiple datasets or patterns that they have learned in the past, which indicate the correct output for some people or objects. From those learned patterns, the system then creates a model that can be applied to other people or objects and can make predictions about what the correct output should be for them.
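The learning loop described above can be sketched in a few lines. This is a deliberately minimal, hypothetical example (the data and function names are made up for illustration): the "model" simply memorizes the most common past outcome for each combination of features and predicts it for new cases.

```python
from collections import Counter, defaultdict

def fit(examples):
    """Learn, for each feature tuple, the most common outcome seen in past data."""
    outcomes = defaultdict(Counter)
    for features, label in examples:
        outcomes[features][label] += 1
    # Keep only the majority outcome per feature combination.
    return {f: c.most_common(1)[0][0] for f, c in outcomes.items()}

# Hypothetical past hiring decisions the system trains on.
history = [
    (("degree", "5yr_experience"), "hire"),
    (("degree", "5yr_experience"), "hire"),
    (("degree", "1yr_experience"), "reject"),
]

model = fit(history)
print(model[("degree", "5yr_experience")])  # predicts the learned pattern: hire
```

Note that the model can only reproduce whatever patterns, good or bad, appear in its training history, which is exactly where bias enters.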

However, because machines can treat similarly situated people and objects differently, research is starting to reveal some troubling examples in which the reality of algorithmic decision-making falls short of our expectations. Some algorithms run the risk of replicating and even amplifying human biases, particularly those affecting protected groups. For example, automated risk assessments used by U.S. judges to determine bail and sentencing limits can generate incorrect conclusions, resulting in large cumulative effects on certain groups, such as longer prison sentences or higher bail amounts imposed on people of color (Lee, Resnick, & Barton, 2019).
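The risk-assessment concern above can be made concrete with a toy calculation. All of the data here is fabricated purely for illustration; it shows how a scoring system can look reasonable in aggregate while producing unequal error rates: the share of people wrongly flagged as high risk differs sharply between two groups.

```python
# Hypothetical records: (group, predicted_high_risk, actually_reoffended)
records = [
    ("a", True,  True), ("a", False, False), ("a", False, False), ("a", True, False),
    ("b", True,  True), ("b", True,  False), ("b", True,  False), ("b", False, False),
]

def false_positive_rate(group):
    """Among people in the group who did NOT reoffend, how many were flagged high risk?"""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    return sum(1 for r in non_reoffenders if r[1]) / len(non_reoffenders)

print(round(false_positive_rate("a"), 2))  # 0.33 — one in three wrongly flagged
print(round(false_positive_rate("b"), 2))  # 0.67 — two in three wrongly flagged
```

In this made-up example, group "b" members who never reoffend are wrongly flagged twice as often as group "a" members, the kind of cumulative disparity the passage describes.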

Some examples of algorithmic biases:

  • Bias in online recruitment tools

  • Bias in word associations

  • Bias in online advertisements

  • Bias in facial recognition technology

  • Bias in criminal justice algorithms

