NCC Conversations: How can we reduce bias in artificial intelligence?

Last month, as part of our NCC Conversations series, we explored the issue of systemic racism, and how it impacts our industry, wider society, and the technology that we use every day.

Artificial intelligence systems – most commonly implemented using machine learning, a subset of AI – make up a part of this, and their potential impact on our everyday lives is hugely significant. AI is an entirely new way of programming systems and computers, and the possibilities are endless.

However, with this growing influence come concerns about making artificial intelligence systems more inclusive and accessible.

To build a safer and more secure future for all, minimising bias in artificial intelligence is crucial. Machine learning algorithms are underpinned by data and design choices, which are in turn shaped by the teams that build these systems and decide how they should be trained.

As part of this month’s focus on systemic racism, our Race and Ethnicity Steering Committee set up a panel to discuss the future of these systems, and what the tech industry can do to reduce bias.

How big is the problem of bias in machine learning?

Felicity Hanley, commercial account manager and vice chair of the Race and Ethnicity Steering Committee, gave us some personal examples of how she’s experienced bias in machine learning systems. “Artificial intelligence is supposed to make life easier for us all. While it can do this, it can also amplify sexist and racist biases from the real world. Some of my personal experiences of AI not going to plan include social media filters making my skin appear whiter. Another example was with an old phone that wouldn’t unlock on the biometric facial recognition setting if the room was dark, whereas it did unlock for a friend who had lighter skin under the same conditions.”

As artificial intelligence becomes increasingly ubiquitous across our lives, the potential for bias becomes far greater. Matt Lewis, NCC Group’s commercial research director, said: “There are quite a lot of instances where AI is being used, and we’re probably not aware of it. The use of facial biometrics is well known and it’s happening in a number of scenarios – not just for authentication to our mobile devices, but also in surveillance applications.”

The UK government’s review into bias in algorithmic decision-making highlights the scale of the issue, with the report stating ‘it has become clear that we cannot separate the question of algorithmic bias from the question of biased decision-making more broadly’. Kat Sommer, NCC Group’s head of public affairs, said: “The report looked at financial services, and the example they mentioned is credit scoring. The unfairness comes in when people who don’t adhere to standard financial backgrounds are treated unfairly because the availability of data is not there to train the models.”
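To make the credit-scoring example concrete, one simple first check is to compare a model's decisions across demographic groups. The sketch below is purely illustrative – the data, group labels and `approved` column are assumptions for demonstration, not anything taken from the report – but it shows the kind of basic demographic-parity measurement a reviewer might start with:

```python
# A minimal sketch (not NCC Group's or the report's methodology):
# measuring demographic parity for a hypothetical credit-scoring
# model's approval decisions. All data here is invented.
import pandas as pd

# Hypothetical model outputs: one row per applicant
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

# Approval rate per group: large gaps suggest the model treats
# groups differently and warrant further investigation.
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Demographic-parity gap: difference between the highest and
# lowest group approval rates (0 means parity on this metric).
gap = rates.max() - rates.min()
print(f"Demographic parity gap: {gap:.2f}")
```

A check like this is deliberately crude – it flags disparities without explaining them – but it illustrates why reviewing outcomes, and not just code, matters.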

How can we reduce bias in these systems?

A first, important step is to ensure AI is trained on representative datasets. Creating synthetic, representative data could be one future solution. And when it comes to developing these systems, as Matt Lewis says, it's important to examine the datasets used to train algorithms and “see if there is diversity in the [development] team itself.”
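As a rough illustration of what auditing a training set for representativeness might look like, the sketch below uses toy data and an invented `skin_tone` attribute; it counts how well each group is represented and then naively oversamples the under-represented one. Real systems would rely on proper synthetic-data generation and far more rigorous validation – this is only a minimal sketch of the idea:

```python
# A minimal sketch, under illustrative assumptions: audit group
# representation in a training set, then naively rebalance by
# resampling each group up to the size of the largest group
# (a crude stand-in for generating synthetic, representative data).
import pandas as pd

train = pd.DataFrame({
    "skin_tone": ["light"] * 8 + ["dark"] * 2,  # hypothetical attribute
    "label":     [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
})

# Step 1: audit representation before training.
counts = train["skin_tone"].value_counts()
print(counts)

# Step 2: naive oversampling so every group matches the largest.
target = counts.max()
balanced = pd.concat([
    grp.sample(target, replace=True, random_state=0)
    for _, grp in train.groupby("skin_tone")
]).reset_index(drop=True)

print(balanced["skin_tone"].value_counts())
```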

Kat told us that the above report recommends taking a multidisciplinary approach to reviewing systems or algorithms to reduce bias wherever possible. “Not just looking at it from a technical perspective, but looking at it from a policy, operational and legal perspective”, she adds.

However, responsibility for issues shouldn’t simply end when products or systems have been released. “Having been on the disclosure side of reporting vulnerabilities to third parties, it’s always been a challenge to try to connect to a human on the inside to raise an issue”, says Matt Braun, regional vice president. “From a proactive standpoint, businesses need to recognise that algorithmic issues are a class of bugs, and there needs to be a way to receive information about those issues from researchers or members of the public.”

The issue of bias in machine learning and artificial intelligence systems is pervasive and impossible to resolve quickly. However, as NCC Group’s data protection and governance officer, Nadia Batool, said: “The panel agreed that having such a broad group of people to discuss AI is invaluable. We recognised that there are no ideal solutions as of yet, so we're now looking at ways to keep this conversation going, so we can keep improving the impact we have as a business, whilst also meeting the Committee's goal of influencing wider societal change.”

Topics

  • Technology, general

Categories

  • insights & viewpoints
  • ncc conversations
  • inclusion and diversity

Related content

  • NCC Conversations: Race and Ethnicity

    This October, as part of NCC Conversations we’ve focused on Race and Ethnicity, with a range of content and discussions exploring key topics to understand why they’re important, what research and evidence is available and why it matters to us here at NCC Group.

  • Introducing NCC Conversations: A safe space to talk

    As part of our Inclusion and Diversity programme here at NCC Group, we introduced our NCC Conversations series to help us to drive dialogue around a number of important topics that our colleagues care about. In September, we turned the spotlight to dyslexia as part of our focus on neurodiversity.

  • News spotlight: Oil and Gas pipelines a target for hackers – part two

    Following our last oil and gas industry news spotlight on the advisory issued by the CISA and FBI on a spear phishing and intrusion campaign carried out between 2011 and 2013, Damon Small, technical director at NCC Group, reacts to a new directive that requires owners and operators of critical pipelines to implement specific measures to protect against ransomware and other prevailing threats.

  • NCC Conversations: Understanding ADD and ADHD

    As part of our ongoing NCC Conversations series, we explore the topic of neurodiversity – from coping strategies for different conditions to understanding ADHD in the workplace.

  • NCC Conversations: Taking action to protect the environment

    Throughout September, as part of our ongoing NCC Conversations series, we’ve been exploring the topic of climate change and the role we as NCC Group and individuals must play in making a change to protect the environment.

  • Are dash cam users en-route to security risks?

    We rely on dash cams to continuously record events that happen on the road, and to provide evidence in the event of road traffic incidents or accidents. But can we trust them to keep our data safe and secure? In our latest research with Which?, we put nine devices to the test and uncovered a number of issues.

  • Menopause: Creating a platform for conversation

    Back in October 2020 we hosted the first ever open conversation about the menopause, with nutritionist, menopause coach and founder of Take a Pause, Simone Burgon. Continuing our work with Simone, we’ve built a menopause library to support colleagues at whatever stage of the journey they are on.