Today, decisions that other humans used to make are being thrust into the hands of mathematical models.

Currently, algorithms make life-altering financial and legal decisions: who gets a job, what medical treatment people receive, and who is granted parole. In theory, this should lead to fairer decision making. In reality, AI tech can be just as biased as the humans who create it.

We are living in the age of the algorithm. Mathematical models now determine whether or not we qualify for a loan, where we are educated, and even how much we should pay for health insurance.

Algorithms apply a fixed set of rules and constraints, so everyone is judged by the same standard. In theory, bias is removed from the equation and algorithms should enhance equality. In reality, this isn’t the case.

The mathematical models that are reshaping our lives are opaque, and that opacity can conceal bias in vital decisions. Very real problems like racism and sexism are seeping into the data sets used to train algorithms. If left unchecked, these existing social problems could wreak even more havoc while disguised as an algorithm that is deemed ‘fair’ without question.

As the technology spreads to areas like medicine and law, bias in machine learning is becoming a critical problem. The trouble is that most people don’t fully understand the technicalities behind artificial intelligence (AI). And no one is going to argue with maths, right?


In today’s world, AI tech and algorithms make many life-altering decisions for us. | Shutterstock

How Does AI Tech Work?

“Artificial Intelligence” is a broad term that encompasses all types of machine learning. Although there are many different ways computers can learn, generally speaking, they are all based on algorithms.

On the most basic level, computers are given enough input so that they can identify patterns in the data. The identified patterns are then used by the machine to make decisions about a new but similar input. Although the input has never been seen before, the machine is capable of making what we could call an ‘educated guess’ on the basis of the patterns it has learned.

The more data the AI system processes, the more patterns it identifies. In other words, the links it draws between inputs and outputs improve.

Humans decide which data the AI system gets to process and program the way it responds. This means that the output an AI provides is shaped by human choices.
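To make this concrete, here is a minimal sketch of that loop in Python using scikit-learn. The loan-style features, numbers, and labels are invented purely for illustration:

```python
# A minimal sketch of supervised learning: the model finds patterns in
# labeled examples, then makes an "educated guess" about unseen input.
# All data here is made up for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row is an applicant: [income, years_employed, existing_debt]
training_inputs = [
    [45000, 5, 2000],
    [28000, 1, 9000],
    [62000, 8, 500],
    [31000, 2, 7500],
]
# Labels chosen by humans: 1 = repaid the loan, 0 = defaulted
training_labels = [1, 0, 1, 0]

model = DecisionTreeClassifier()
model.fit(training_inputs, training_labels)  # identify patterns

# A new applicant the model has never seen before
print(model.predict([[40000, 4, 3000]]))  # educated guess: [1] or [0]
```

Notice that every ingredient the model learns from, including the features, the labels, and the examples themselves, was chosen by a human.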

Bias can occur in algorithms due to the humans who create the AI tech. | Shutterstock

Do We Place Too Much Trust in AI Tech?

Maybe it’s because, for the average layperson, AI tech is something futuristic and complicated, but our approach towards these systems seems to have been “innocent until proven guilty”: we deem AI systems unbiased until proven otherwise.

An example of an algorithm most of us come across every day is the one that selects what appears on our Facebook news feeds. We don’t know why certain posts or articles are pushed to the top, and we have no way of finding out.

Read More: Attention Span is the New Currency

Now, it may seem trivial to want to know why one meme is being favored over another, but when it comes to the news, this selection process becomes more consequential.

The algorithm has the potential to distort your perception of social interaction online and to keep major events off your radar entirely. Yet we continue to use social media, trusting that the algorithms are selecting the most important and relevant stories for us to see.

We trust machine systems to do these jobs even more than we would trust other humans. Machines categorize and rationalize without emotions getting in the way. They can evaluate information in a matter of seconds without getting distracted or filtering it through a lens of personal experience. The output is deemed neutral: the decisions that artificially intelligent systems make come down to algorithms, not emotions.

People are all too willing to put their faith in mathematical models. When algorithms replace human processes, we assume they also remove human bias.

Read More: Why Manners Matter More to Robotic Automation Than You Think

However, as we have seen, the intelligence of AI systems is learned from humans. By nature, humans are biased. We will usually want our national team to win against a rival. We will always be rooting for our own family members to succeed. Even though we may not realize it, deep in our subconscious lies bias.  

Algorithmic bias occurs when the AI system acts in a way that reflects the implicit values of the humans who were involved in the data collection, selection, and programming. Despite the presumed neutrality of the data, algorithms are open to bias.

Algorithmic bias often goes undetected. Because it hides deep in the mathematical workings of AI tech, important decisions go unchecked, and this can have serious negative consequences for poorer communities and minority groups.
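How quickly this happens is easy to demonstrate. In the toy sketch below (not any real system, with invented data and groups), a model trained on historically biased decisions learns to reproduce them:

```python
# A toy illustration: a model trained on biased historical decisions
# learns to reproduce that bias. Data and groups are invented.
from sklearn.linear_model import LogisticRegression

# Features: [years_experience, group], where group is 0 or 1.
# The made-up historical labels systematically reject group 1
# regardless of experience -- that is the hidden human bias.
X = [[5, 0], [6, 0], [2, 0], [5, 1], [6, 1], [2, 1]]
y = [1, 1, 0, 0, 0, 0]  # 1 = hired, 0 = rejected

model = LogisticRegression().fit(X, y)

# Two equally experienced candidates, differing only by group:
print(model.predict_proba([[6, 0]])[0][1])  # high probability of "hired"
print(model.predict_proba([[6, 1]])[0][1])  # markedly lower probability
```

Nothing in the code mentions bias; the model simply treats the discriminatory pattern in the training data as one more regularity worth learning.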

AI Tech in a Biased Criminal Justice System

AI tech has the potential to revolutionize criminal justice systems. That’s if it is used in a fair and transparent way.

The United States imprisons more people than any other country, and a disproportionate number of these prisoners are black. Until now, these legal decisions have been in the hands of human decision makers guided by their own instincts and, unavoidably, their personal biases.

If algorithms could accurately predict which defendants are likely to re-offend, the system could be made more selective about sentencing and more just. However, this would mean that the algorithms would have to be devoid of any type of bias to avoid exacerbating unwarranted and unjust disparities that are already far too common in the criminal justice system.

In the US, algorithms and AI tech are used to decide who gets parole and who stays in prison. | Shutterstock

In the US, algorithms are used to calculate risk assessment scores in courtrooms. The assessments rate a defendant’s risk of committing future crime and their rehabilitation needs. The resulting scores inform decisions ranging from assigning bond amounts to which defendants go free.

In states including Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington and Wisconsin, these scores are given to judges during criminal sentencing.

An AI tech system called COMPAS, created by a company called Northpointe, is widely used to carry out these assessments. The system produces a score from responses to 137 questions that are answered directly by defendants or taken from criminal records. An investigation carried out by ProPublica found evidence that this mathematical model could be biased against minorities.
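Northpointe has not published COMPAS’s questions, weights, or formula, so any concrete example can only be hypothetical. The sketch below shows the general shape of a questionnaire-based risk score, with every question and weight invented:

```python
# A purely hypothetical questionnaire-based risk score. COMPAS is
# proprietary; its real questions, weights, and formula are not public,
# so everything below is invented to show the general idea only.

# Hypothetical answers (1 = yes, 0 = no)
answers = {
    "more_than_3_prior_arrests": 1,
    "currently_unemployed": 0,
    "first_arrest_under_18": 1,
}

# Invented weights standing in for whatever the real model uses
weights = {
    "more_than_3_prior_arrests": 3.0,
    "currently_unemployed": 1.5,
    "first_arrest_under_18": 2.5,
}

raw = sum(weights[q] * a for q, a in answers.items())

# Scale to a 1-10 decile-style score, the format COMPAS reports
max_raw = sum(weights.values())
score = round(1 + 9 * raw / max_raw)
print(score)  # e.g. 8, which a court might read as "high risk"
```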

ProPublica obtained the risk scores of more than 7,000 people arrested in Broward County, Florida in 2013 and 2014, then checked how many of them were charged with new crimes over the next two years (the same benchmark used by the creators of the algorithm).

AI tech and algorithmic bias are a threat to the equal treatment of defendants and prisoners. | Shutterstock

The investigators found that the score was remarkably unreliable in forecasting violent crime: only 20 percent of the people predicted to commit violent crimes actually went on to do so.

Significant racial disparities were also revealed during the investigation. In forecasting who would re-offend, the algorithm proved more likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate of white defendants. Conversely, white defendants were mislabeled as low risk more often than black defendants.
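The headline numbers in findings like these are differences in group-wise error rates. Here is a minimal sketch of how a false positive rate can be computed per group; the records are made up, and this is not ProPublica’s data or analysis code:

```python
# Computing false positive rates per group. Made-up records;
# not ProPublica's data or analysis code.

# Each record: (group, flagged_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, False), ("A", False, True),
    ("B", True, False), ("B", False, False), ("B", False, False),
]

def false_positive_rate(group):
    # Among people who did NOT re-offend, how many were flagged high risk?
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("A", "B"):
    print(g, false_positive_rate(g))  # here: A 1.0 vs B ~0.33
```

A gap like the one between the invented groups A and B here is exactly the kind of disparity ProPublica reported.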

This disparity could not be explained by the defendant’s previous crimes and did not boil down to the type of crime that they were arrested for. This was made clear when ProPublica ran a statistical test that isolated the effect of race from criminal history, age, and gender.

The results showed that black defendants remained 77 percent more likely to be labeled “high risk” for committing a future violent crime.

Additionally, black defendants were 45 percent more likely to be predicted to commit a future crime of any kind according to this particular AI system.
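The kind of statistical test described above, isolating the effect of race from criminal history, age, and gender, is typically run as a logistic regression, where the exponential of a coefficient is an odds ratio (an odds ratio of about 1.77 corresponds to “77 percent more likely”). Below is a sketch using simulated data, not ProPublica’s dataset or code:

```python
# Isolating one variable's effect with logistic regression.
# The data is simulated; this is not ProPublica's dataset or code.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
race = rng.integers(0, 2, n)      # toy binary encoding of the group
age = rng.integers(18, 60, n)
priors = rng.poisson(2, n)

# Simulate labels with a built-in group effect so the test can find it;
# exp(0.57) is roughly 1.77, i.e. "77 percent higher odds"
logit = -1.0 + 0.57 * race + 0.2 * priors - 0.01 * age
high_risk = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([race, age, priors]))
result = sm.Logit(high_risk, X).fit(disp=False)

# exp(coef) = odds ratio for the group variable, holding the rest fixed
print(np.exp(result.params[1]))  # close to 1.77
```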

Unsurprisingly, Northpointe (the AI tech company) disputed ProPublica’s findings.

Read More: China Uses AI, Facial Recognition, and Shame for Law Enforcement

AI Bias Causes Discrimination

In some cases, AI tech is used in recruitment. Algorithmic bias can lead to discrimination. | Shutterstock

The bias within the US criminal justice system is not an isolated example; the negative consequences of algorithmic bias can be traced back as early as 1982. St. George’s Hospital Medical School was found guilty of racial and sexual discrimination in its admissions policy by the Commission for Racial Equality.

From 1982 until 1986, the school denied entry to women and to men with “foreign-sounding names”. The culprit was a biased assessment system: its algorithm had been trained on past admissions trends and reproduced the discrimination embedded in them.

Read More: 3 Ways Health AI is Changing the Medical Field

Facebook’s anti-hate speech algorithm had good intentions but backfired because the AI tech was biased. | Shutterstock

Algorithmic bias has also perpetuated discrimination based on sexual orientation. In 2011, the Android store’s recommendation algorithm linked the gay online dating application Grindr to apps designed to locate sex offenders, wrongly drawing a connection between homosexuality and pedophilia.

In 2017, Facebook designed an algorithm to remove online hate speech. However, this algorithm also proved to be biased: internal documents from Facebook revealed that it favored elites and governments over grassroots activists and racial minorities.

Read More: 5 Ways Technology Helps Protect Human Rights

How Do We Fix a Problem Written into AI Tech?

ProPublica’s investigation shed light on a crucial challenge that must be overcome if we want to stamp out bias in algorithms: machine learning systems are usually so complex that their inner workings are difficult to break down. Companies like Northpointe that develop these systems need to be open and transparent about their technology and how it functions. Users and engineers need at least some idea of a system’s inner workings and of the data its algorithms use to operate.

Although AI is an incredible development with a lot of potential, biased decision making is a major ethical risk. Europe has begun to recognize these pitfalls of artificial intelligence, and the EU has tried to remedy them with the General Data Protection Regulation (GDPR), which came into force this past May.

The EU is taking steps towards holding AI tech companies accountable for bias. | Shutterstock

The regulation requires companies using algorithms for automated decision making to explain the logic behind each choice. To reduce the risk of errors and bias, the GDPR even requires algorithm developers to use “appropriate mathematical or statistical procedures” (Recital 71). It also means that companies can be held accountable for bias and discrimination in automated decisions.
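What “explaining the logic” behind a decision can look like in practice: for a simple linear model, each feature’s contribution is just its weight times its value. A sketch with invented feature names, weights, and applicant values (real systems, and GDPR-compliant explanations, can be far more involved):

```python
# One simple way to explain an automated decision: for a linear model,
# each feature's contribution is weight * value. Feature names, weights,
# and the applicant's values are all invented for illustration.
feature_names = ["income", "existing_debt", "years_employed"]
weights = [0.4, -0.9, 0.3]   # hypothetical learned coefficients
bias = -0.2

applicant = [0.6, 0.8, 0.2]  # hypothetical normalized feature values

contributions = {
    name: w * x for name, w, x in zip(feature_names, weights, applicant)
}
score = bias + sum(contributions.values())

print("decision:", "approve" if score > 0 else "reject")
# List the factors that drove the decision, largest effect first
for name, c in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {name}: {c:+.2f}")
```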

Similarly, French President Emmanuel Macron has promised that all algorithms developed for government use will be made available to the public.

Algorithmic accountability and the auditing of AI tech are the way forward in eliminating bias. However, most of the general public cannot read code, and many businesses and governments are still reluctant to publish the data used to train their algorithms.

Education in AI tech, coding, and algorithms is key to overcoming bias. | Shutterstock

Until widespread education has caught up and algorithms are completely transparent to the public, should a line be drawn in the decisions that AI tech can make? Should an algorithm decide who goes to prison? Should medical decisions like who gets access to certain treatments be left up to a mathematical model?

Algorithms and artificial intelligence are powerful tools that can be used to make decision making more equal and just. However, as it stands, the decisions that this technology makes are only as fair as those who create it.

Read More: Gastrograph AI: The Company That’s Digitizing Your Taste Buds

Do you think life-altering decisions should be left up to algorithms?
