How C-level can drive fairness in AI: by making (hard) choices

Laurent Sorber

C-level executives can promote fairness in AI by making difficult decisions, balancing accuracy with fairness, and addressing biases proactively.

Fairness in artificial intelligence is a major concern in today’s ever-connected world: unless properly supervised, AI risks duplicating our society’s biases.

As AI becomes more embedded in life and business, we realize that our artificial intelligence is “all too human”: it reflects biases that it finds in the data it is trained on. Is AI bound to endlessly replicate our own shortcomings? Maybe not. Bias is more easily detected in AI than in humans. This is an advantage we can exploit.

Why fairness matters

Automated decision-making systems are being deployed on an unprecedented scale. With AI set to take or influence more and more decisions that affect social and economic opportunities, the idea of fairness in AI is hotly debated. Will AI hinder your (re-)entry to the labor market because you are a woman or because you have a migration background? There is widespread anxiety about this, and it may not be wholly unjustified.

How should we deal with that? How should we “regulate” AI? In part, AI is already regulated in Europe. When AI makes decisions that affect data subjects, the owner of the software should be able to explain how the software made those decisions. Art. 22 of the GDPR stipulates that data subjects have the right to “opt out” of any decision based “solely on automated processing” if it “produces legal effects concerning him or her or similarly significantly affects him or her”. And this is only the beginning.

Cutting-edge technology, age-old debate

What is important to understand about fairness in AI is that it will require making difficult choices. And those choices will define us as machine learning engineers, as data scientists, but most importantly as humans. Today, when implementing AI solutions, most of an AI engineer’s attention goes to the utility of the model. When reporting to stakeholders, AI engineers tend to ask themselves: is the model I am delivering as accurate as possible?

But accuracy is no longer the only major goal. Accuracy is a technical challenge that is very well understood. Once you add the requirement that the system also needs to be fair, you have to accept a trade-off: your system will necessarily become less “accurate”.

“A new, powerful, data-driven tool”

This is logical. In fact, you might find that the more accurate your model is, the more biased it becomes, because the AI learns from a real-world data set. It might steer women to nursing jobs, for example, and men to engineering jobs. A famous example of bias in AI is Amazon’s AI recruiting tool, which turned out to penalize women. COMPAS, an algorithm used in the US to assess the likelihood that a defendant will reoffend, was and still is criticized by many for being biased against African Americans.

While these examples remind us that there is still a lot of work to be done, the debate around fairness in AI is also a great opportunity: AI allows us to implement a company’s chosen notion of fairness. Fairness policies in AI should be considered a new, powerful, data-driven tool to sharpen and refine age-old questions about fairness, discrimination, and justice. It is precisely because machine learning models are unavoidably explicit that we are forced to define our values precisely and make choices.

How can you make AI “fair”?

An organization's preferences around fairness in AI should be discussed by its management team, maybe even at board level for important matters. It is not something that can be “decided” at the level of the developers. If you do not address fairness questions before starting an AI project, some developers might fall back on the “tech bro” solution: optimize for “accuracy” and make sure it works technically, with little regard for the consequences or for overarching fairness considerations.

But fairness questions can be prepared and answered. And then they can be implemented, explained and defended. Fairness in AI is too important to leave to a technology partner—it is a matter that needs to be handled by the organization itself.

“Fairness in machine learning starts with what you want to achieve”

Consider the attributes along which bias commonly occurs and the groups of people affected: gender, race, socioeconomic and migratory background, age, familial status, disability, medical history, and religion. Before starting an AI project, you need to decide as a company how you want to deal with them. There are multiple ways to approach the question. Fairness in machine learning starts with stating what you want to achieve: what should you define as the standard? Do you want to reflect the (implicit) societal biases in your system? Do you want to make sure that every group is treated representatively? Or do you want to create equality in the outcome?
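
Two of these notions return in the scenario below as demographic parity (every group is selected at the same rate) and equal opportunity (among the truly competent candidates, every group is selected at the same rate). Here is a minimal sketch of how they could be measured; the candidate records are purely illustrative examples of our own.

```python
# Minimal sketch: measuring two common fairness criteria on hiring decisions.
# The records below are illustrative (gender, is_competent, model_says_hire).
candidates = [
    ("man",   True,  True),
    ("man",   False, True),
    ("man",   True,  False),
    ("woman", True,  True),
    ("woman", False, False),
    ("woman", True,  False),
]

def selection_rate(records):
    """Fraction of candidates that the model selects."""
    return sum(hired for _, _, hired in records) / len(records)

men = [c for c in candidates if c[0] == "man"]
women = [c for c in candidates if c[0] == "woman"]

# Demographic parity: are men and women selected at the same rate?
dp_gap = selection_rate(men) - selection_rate(women)

# Equal opportunity: among *competent* candidates only, are the rates equal?
eo_gap = selection_rate([c for c in men if c[1]]) - selection_rate(
    [c for c in women if c[1]]
)

print(f"Demographic parity gap: {dp_gap:+.2f}")
print(f"Equal opportunity gap:  {eo_gap:+.2f}")
```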

As a thought experiment, you could imagine an AI system that offers a list of all the possible biases it can find in the data—and then proceeds to ask to what extent the owner is willing to accept this bias, or how they want to correct the bias.
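
Such an audit does not require exotic technology. The sketch below, again with hypothetical attribute names and data of our own, simply loops over the sensitive attributes, reports the selection-rate gap for each, and leaves the decision about acceptable gaps to the system's owner.

```python
# Hypothetical bias audit: for each sensitive attribute, list the model's
# selection rate per group and the gap between the best- and worst-treated group.
from collections import defaultdict

# Illustrative records: sensitive attributes plus the model's hiring decision.
candidates = [
    {"gender": "man",   "age_band": "<40",  "hired": True},
    {"gender": "man",   "age_band": ">=40", "hired": True},
    {"gender": "woman", "age_band": "<40",  "hired": False},
    {"gender": "woman", "age_band": ">=40", "hired": True},
]

def selection_rates(records, attribute):
    """Selection rate of the model per value of one sensitive attribute."""
    hired, total = defaultdict(int), defaultdict(int)
    for record in records:
        total[record[attribute]] += 1
        hired[record[attribute]] += record["hired"]
    return {value: hired[value] / total[value] for value in total}

for attribute in ("gender", "age_band"):
    rates = selection_rates(candidates, attribute)
    gap = max(rates.values()) - min(rates.values())
    # In the thought experiment, the owner is now asked whether this gap is
    # acceptable or how it should be corrected.
    print(f"{attribute}: rates {rates}, gap {gap:.2f}")
```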

Dealing with AI biases in a business environment

To illustrate how to deal with bias in AI projects, we created an interactive walk-through of a worked scenario: as the manager of a construction company, you would like to use an AI recruiting tool to help you select candidates in a fair way. The scenario includes three concrete strategies for dealing with bias in AI (and hints at a fourth, which we’ll describe in a follow-up article). Should you strive for equality of outcome, demographic parity, or equal opportunity? Have a look at our interactive tool and discover the business outcomes and consequences of your strategies.

Fairness Dashboard

As manager, you are tasked with hiring 100 construction workers for an upcoming construction project. You would like to use an AI recruiting tool to assist you in selecting candidates in a fair way with respect to gender.

The visual shows the impact of your chosen fairness measure on the hiring recommendations made by your AI model. You can see the consequences of each fairness measure by clicking on the arrows.

The numbers above the line represent the candidates your model suggests you hire. The numbers below the line represent the remainder of your talent pool.

Pay close attention to how your choice of fairness measure influences the composition of your team in terms of gender, (true) competencies, and profits. Where do you draw the line?

<div class="container-fluid py-4" id="fairness-dashboard">
    <h1 class="headings-500" id="fairnessHeading">Fairness Dashboard</h1>
    <div class="text" id="introtext">
        <p>As manager, you are tasked with hiring 100 construction workers for an upcoming construction project. You would like to use an AI recruiting tool to assist you in selecting candidates in a fair way with respect to gender.</p>
        <p>The visual shows the impact of your chosen fairness measure on the hiring recommendations made by your AI model. You can see the consequences of each fairness measure by clicking on the arrows.</p>
        <p>The numbers above the line represent the candidates your model suggests you to hire. The numbers below the line represent the remainder of your talent pool.</p>
        <p>Pay close attention to how your choice of fairness measure influences the composition of your team in terms of gender, (true) competencies, and profits. Where do you draw the line?</p>
    </div>
    <div class="row d-flex justify-content-between align-items-center m-3"><button class="btn-ghost col-2"><i class="fa fa-chevron-left indicators"></i></button><h3 class="col-8 headings-300 text-center" id="graphTitle">Maximum Profit</h3><button class="btn-ghost col-2 text-right"><i class="fa fa-chevron-right indicators"></i></button></div><div class="row hideGraph"><div class="card p-3 introductioncards"><div id="graphDescription" class="text">Your model is trained to be as accurate as possible, resulting in the highest possible profits.</div><canvas role="img" height="610" width="1220" style="display: block; box-sizing: border-box; height: 305px; width: 610px;"></canvas><div class="align-self-end legend"><p class="text"><svg height="10" width="10"><circle cx="5" cy="5" r="5" fill="#9BCBEA"></circle></svg> Incompetent men</p><p class="text"><svg height="10" width="10"><circle cx="5" cy="5" r="5" fill="#0067FF"></circle></svg> Competent men</p><p class="text"><svg height="10" width="10"><circle cx="5" cy="5" r="5" fill="#FFC3BA"></circle></svg> Incompetent women</p><p class="text"><svg height="10" width="10"><circle cx="5" cy="5" r="5" fill="#FF8877"></circle></svg> Competent women</p></div><h4 class="headings-200 mt-2">Conclusion</h4><p id="graphConclusion" class="text">Without taking fairness into account, your AI model suggests as many competent candidates as possible to optimize for profit. Since there are more competent male candidates, it will learn to prefer male candidates.</p><h4 class="headings-200 mt-2">The numbers</h4><div class="row"><div class="col-6"><p id="hiredCandiates0" class="firstNr">100</p><p id="hiredCandiates1" class="text secondNr">/100</p><p class="text textNumbers">Number of hired candidates</p></div><div class="col-6"><p id="femaleCandiates0" class="firstNr">3</p><p id="femaleCandiates1" class="text secondNr">/ 30</p><p class="text textNumbers">Number of hired female candidates</p></div></div><div class="row"><div class="col"><p id="competentCandidates0" class="firstNr">81</p><p id="competentCandidates1" class="text secondNr">/144</p><p class="text textNumbers">Number of hired competent candidates</p></div><div class="col"><p id="profit0" class="firstNr">8.1</p><p id="profit1" class="text secondNr">/10</p><p class="text textNumbers">Profit in million €</p></div></div></div></div>
    <h2 class="headings-400 mt-3">Assumptions</h2>
    <div class="row">
        <div class="col-2 justify-content-center align-self-center">
            <p class="text assumptionNr">1</p>
        </div>
        <div class="col-10 justify-content-center align-self-center">
            <p class="text">We assume you are prepared to interview up to 3x the number of people you
                want to hire, in this case 300 people.</p>
        </div>
    </div>
    <div class="row">
        <div class="col-2 justify-content-center align-self-center">
            <p class="text assumptionNr">2</p>

        </div>
        <div class="col-10 justify-content-center align-self-center">
            <p class="text">We assume your talent pool consists of 270 men (90%) and 30 women (10%).*</p>
        </div>
    </div>
    <div class="row">
        <div class="col-2 justify-content-center align-self-center">
            <p class="text assumptionNr">3</p>
        </div>
        <div class="col-10 justify-content-center align-self-center">
            <p class="text">
                We assume that your talent pool contains 135 competent men and 9 competent women. The proportions of men and women who are competent are slightly skewed on purpose to help illustrate the effect of your fairness choice.
            </p>
        </div>
    </div>
    <div class="row">
        <div class="col-2 justify-content-center align-self-center">
            <p class="text assumptionNr">4</p>

        </div>
        <div class="col-10 justify-content-center align-self-center">
            <p class="text">We assume that every competent construction worker you hire brings in
                $100k of profit, regardless of gender.</p>
        </div>
    </div>
    <div class="row">
        <div class="col-2 justify-content-center align-self-center">
            <p class="text assumptionNr">5</p>
        </div>
        <div class="col-10 justify-content-center align-self-center">
            <p class="text">Your AI hiring model tries to predict if someone is competent or not,
                and, like any real-world decision making model, sometimes fails to do so.</p>
        </div>
    </div>
    <div class="row">
        <div class="col-2 justify-content-center align-self-center">
            <p class="text assumptionNr"></p>
        </div>
        <div class="col-10 justify-content-center align-self-center">
            <a id="sidenote" href="https://ec.europa.eu/eurostat/web/products-eurostat-news/-/EDN-20180307-1" target="_blank" rel="noopener noreferrer">*Gender
                inequality in the European construction sector is even more severe: 97% men to 3%
                women.
            </a>
        </div>
    </div>
</div>

Equal Outcome

Equal outcome means using quotas to ensure positive outcomes are distributed equally among men and women.

[Chart: hiring recommendations, split into incompetent men, competent men, incompetent women, and competent women.]

Conclusion

Your AI model enforces equal quotas to achieve a balanced suggestion between women and men. Because of the gender imbalance in the talent pool, there are simply not enough women available for hire to meet your quota, resulting in 40 open positions.

The numbers

Number of hired candidates: 60/100
Number of hired female candidates: 30/30
Number of hired competent candidates: 31/144
Profit in million €: 3.1/10

Assumptions

  1. We assume you are prepared to interview up to 3x the number of people you want to hire, in this case 300 people.

  2. We assume your talent pool consists of 270 men (90%) and 30 women (10%).*

  3. We assume that your talent pool contains 135 competent men and 9 competent women. The proportions of men and women who are competent are slightly skewed on purpose to help illustrate the effect of your fairness choice.

  4. We assume that every competent construction worker you hire brings in €100k of profit, regardless of gender.

  5. Your AI hiring model tries to predict if someone is competent or not, and, like any real-world decision-making model, sometimes fails to do so.

*Gender inequality in the European construction sector is even more severe: 97% men to 3% women (https://ec.europa.eu/eurostat/web/products-eurostat-news/-/EDN-20180307-1).
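
To make the trade-offs tangible, here is a small, self-contained Python simulation in the spirit of the dashboard: it builds a talent pool matching the assumptions above, scores candidates with a deliberately noisy model, and applies the four selection strategies. The scoring model, noise level and selection rules are simplifications of our own, so the outcomes approximate rather than reproduce the dashboard’s exact figures.

```python
# Toy simulation of the hiring scenario under different fairness strategies.
# Pool sizes follow the assumptions above; the scoring model, noise level and
# selection rules are simplifying assumptions of our own.
import random

random.seed(0)
HIRES = 100
PROFIT_PER_COMPETENT = 0.1  # million euros per competent hire (assumption 4)

def make_pool():
    """270 men (135 competent) and 30 women (9 competent), with noisy scores."""
    pool = []
    for gender, size, n_competent in (("man", 270, 135), ("woman", 30, 9)):
        for i in range(size):
            competent = i < n_competent
            # The model only sees a noisy signal of true competence.
            score = (1.0 if competent else 0.0) + random.gauss(0, 0.75)
            pool.append({"gender": gender, "competent": competent, "score": score})
    return pool

def top(candidates, k):
    """The k candidates the model ranks highest."""
    return sorted(candidates, key=lambda c: c["score"], reverse=True)[:k]

def report(name, hired):
    competent = sum(c["competent"] for c in hired)
    women_hired = sum(c["gender"] == "woman" for c in hired)
    print(f"{name:>20}: hired {len(hired):3d}, women {women_hired:2d}, "
          f"competent {competent:3d}, profit €{competent * PROFIT_PER_COMPETENT:.1f}M")

pool = make_pool()
men = [c for c in pool if c["gender"] == "man"]
women = [c for c in pool if c["gender"] == "woman"]

# 1. Maximum profit: simply take the 100 highest-scoring candidates overall.
report("maximum profit", top(pool, HIRES))

# 2. Equal outcome: hire the same number of men and women; with only 30 women
#    in the pool, 40 positions remain open.
per_group = min(HIRES // 2, len(women))
report("equal outcome", top(men, per_group) + top(women, per_group))

# 3. Demographic parity: hire each group in proportion to its share of the pool.
k_women = round(HIRES * len(women) / len(pool))
report("demographic parity", top(men, HIRES - k_women) + top(women, k_women))

# 4. Equal opportunity: choose the split of 100 hires that makes the selection
#    rate among *competent* men and women as equal as possible.
def competent_selection_rate(hired, group, group_pool):
    chosen = sum(c["competent"] for c in hired if c["gender"] == group)
    return chosen / sum(c["competent"] for c in group_pool)

best_split = min(
    range(len(women) + 1),
    key=lambda k: abs(
        competent_selection_rate(top(men, HIRES - k) + top(women, k), "man", men)
        - competent_selection_rate(top(men, HIRES - k) + top(women, k), "woman", women)
    ),
)
report("equal opportunity", top(men, HIRES - best_split) + top(women, best_split))
```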

At this point, you’re probably wondering about the fourth strategy mentioned in our tool (equalized odds). It is a specialized approach to fairness that embodies the idea that being fair also means being fair to the group of incompetent candidates. Sounds counterintuitive? Put yourself in the shoes of an incompetent woman. All things considered, wouldn’t you want the same chance to be selected as an incompetent man? A very interesting question for another time: we’ll take a deeper look at this specific strategy and its outcomes in an upcoming blog post.
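
For the impatient, the check behind equalized odds can already be sketched: it asks for equal selection rates among competent candidates (true positive rates) and among incompetent candidates (false positive rates) alike. The toy records below are our own illustration.

```python
# Sketch of the equalized-odds check: equal true positive rates AND equal false
# positive rates across groups. Records are illustrative (is_competent, hired).
def hire_rate(records, competent):
    """Fraction of candidates with the given true competence that get hired."""
    subgroup = [hired for is_comp, hired in records if is_comp == competent]
    return sum(subgroup) / len(subgroup)

men   = [(True, True), (True, False), (False, True), (False, False)]
women = [(True, True), (True, False), (False, False), (False, False)]

tpr_gap = hire_rate(men, True) - hire_rate(women, True)    # competent candidates
fpr_gap = hire_rate(men, False) - hire_rate(women, False)  # incompetent candidates

# Equalized odds asks that *both* gaps be (close to) zero: an incompetent woman
# should have the same chance of being selected as an incompetent man.
print(f"TPR gap: {tpr_gap:+.2f}, FPR gap: {fpr_gap:+.2f}")
```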

How to live a “good” life

If you are not ready to answer these fairness questions, you should at least strive for transparency (explain how the system works) and privacy compliance (explain to the individuals affected by the AI how it makes its decisions). In the end, the holy grail of fairness in AI and machine learning is not finding the best AI, the one that is 100% accurate while also perfectly unbiased, because in general that cannot be done. It’s about asking yourself “How can we be a ‘good’ member of society? How can we use AI to bring ‘good’ to society by making conscious, moral choices that will be embedded in the software we create?”

Those aspects need to be addressed before developing AI. One essential facet is coming to terms with the fact that there is no standard definition of fairness. Such a definition would require, well, bias. This is true for human decisions, and this is true for machine decisions. Without a perfect fairness measure, a trade-off has to be made. Promoting fairness and reducing inequality transcends machine learning and AI and requires a multi-disciplinary approach, ultimately involving society as a whole. In the end, it’s all about the choices we make.
