The media update team explores machine bias and why you should guard against it.
You’ve probably heard the term “machine learning” being used more and more often. That’s because artificial intelligence (AI) has already made an impact on many industries. Machine learning is a component of AI, and it too is being applied to various computer-based tasks and technological systems.
As machine learning becomes more prevalent in our everyday lives, it’s important that we understand its capabilities and limitations, as well as the opportunities and threats it poses.
What is bias in machine learning?
Bias creeps into machine learning algorithms when human preference or emotion has affected the training data that machines are given to learn from.
Wait, back up – what’s machine learning? It’s when humans train machines to create their own algorithms, based on the training data that the machines process.
Basically, you show a machine some data, and based on that data it either:
- creates a model that allows it to make predictions about similar data, or
- uncovers patterns and trends within the data.
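The first of those two modes can be pictured with a deliberately tiny sketch. The code below is an invented one-nearest-neighbour predictor, not any real lending system; every figure in it is made up purely for illustration:

```python
# A minimal sketch of "learning a model from examples": a toy
# one-nearest-neighbour predictor in plain Python. All data is invented.

def train(examples):
    """'Training' here simply stores labelled (feature, label) examples."""
    return list(examples)

def predict(model, x):
    """Return the label of the stored example whose feature is closest to x."""
    nearest = min(model, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Invented training data: loan amounts (in thousands) and their outcomes.
model = train([(5, "approved"), (8, "approved"), (40, "declined")])

print(predict(model, 7))   # nearest example is (8, "approved")
print(predict(model, 35))  # nearest example is (40, "declined")
```

Real systems use far richer models, but the principle is the same: the machine’s answers are entirely a product of the examples it was shown.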
Machine learning can be helpful for many things, from spotting health risks based on patient data, to predicting weather and detecting fraud. People, teams, companies and governments have started to make more of their decisions based on the information that machine learning-powered systems provide.
But training data that is skewed or incomplete can seriously affect the information that machine learning systems deliver. These algorithms become ‘biased’ in the way they process new data they receive.
The dangers of machine bias
When algorithmic bias creeps into machine learning systems, they provide answers or make predictions that can put people at risk.
Bias in machine learning can be very harmful to those who rely on it, as Tobias Baer and Vishnu Kamalnath write in an article for McKinsey Insights: “Algorithmic bias is one of the biggest risks because it compromises the very purpose of machine learning. This often-overlooked defect can trigger costly errors and, left unchecked, can pull projects and organizations in entirely wrong directions.”
A machine learning algorithm that learns from unrepresentative data can, for instance, turn down loan applications unfairly. And when the training data the machine learns from isn’t complete enough, an algorithm could fail to predict a financial crash or natural disaster.
As we rely more and more on AI systems to guide our decision making, it’s vital that we do everything in our power to make sure these systems provide us with accurate answers.
What causes algorithmic bias?
One of the main causes of bias in machine learning is training data that is not representative of the information the machine will be expected to process in future.
If a database of loan applicants, for example, is mostly made up of data from one cultural or income group, then the algorithm resulting from this data won’t be able to accurately process new loan applications of people in other groups.
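A toy sketch makes the problem concrete. Suppose a trivial “approval rule” judges a new applicant by the most similar past applicants. The incomes (in thousands), outcomes, and the rule itself are all invented for illustration:

```python
# Hypothetical illustration of how skewed training data distorts decisions.
# A toy approval rule judges a new applicant by the k most similar past
# applicants. All figures are invented.

def neighbours(records, income, k=3):
    """Return the k training records with the closest incomes."""
    return sorted(records, key=lambda r: abs(r[0] - income))[:k]

def approve(records, income, k=3):
    """Approve if the majority of the k nearest past applicants were approved."""
    nearest = neighbours(records, income, k)
    return sum(1 for _, ok in nearest if ok) > k // 2

# Skewed training set: almost every record comes from one income bracket.
training = [(85, True), (90, True), (95, True), (100, True), (30, False)]

print(approve(training, 92))     # applicant from the well-represented group
print(neighbours(training, 32))  # [(30, False), (85, True), (90, True)]
```

For the low-income applicant, two of the three “similar” records actually come from a very different income bracket, so the decision is driven by data that says little about people like them.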
Sometimes it’s not that the training data isn’t representative or balanced, but that historical data doesn’t include all potential situations that may arise. For instance, machine learning can’t predict a disaster that is about to happen if no data points about disasters are included in the training data.
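The gap shows up in even the simplest sketch: a model can only ever reproduce patterns it has seen. Assuming a toy frequency-based forecaster (everything here is invented):

```python
# Sketch: a toy predictor that forecasts tomorrow's conditions as the most
# common condition in its (invented) historical data. If the history
# contains no disasters, no disaster can ever be forecast.
from collections import Counter

history = ["calm", "calm", "storm", "calm", "calm"]  # no "earthquake" recorded

def forecast(history):
    """Predict the most frequent condition seen in the historical data."""
    return Counter(history).most_common(1)[0][0]

print(forecast(history))             # "calm"
print("earthquake" in set(history))  # False: the model has no basis for it
```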
The specialists who create the databases and systems that machine learning applications use don’t consciously introduce bias into the process. But like all people, they are subjective and might subconsciously transfer their assumptions and preferences into their work.
The good news is that bias in machine learning is both preventable and repairable. It requires those who use machine learning systems to understand how these systems work, to continually monitor their accuracy, and to be willing to employ experts who can make this technology work optimally.
Machine learning has so much to offer the human race, from better health care, to faster loan approvals and a safer world in general. But it’s up to everyone – those building machine learning systems and those using them – to make sure this technology benefits all of us, without bias or favour.
Machine learning can make even the little things in life easier, like picking music you like. Read more in our article, How machine learning and algorithms are changing music.