Table of contents

Bias in HR: Tracking and mitigating job-matching engine disparities
Bias definition
Bias sources
Bias tracking through fairness metrics
Demographic Parity
Pros of DP:
Cons of DP:
Equal opportunity
Pros of EO
Cons of EO
TalentAPI architecture as an enabler for transparency and bias mitigation
Pros of a decoupled architecture:
Cons of a decoupled architecture:
Summary & conclusion
Q&A
What is the standard way to track social bias in HR?
How can one track social bias in a job-matching engine?
How to mitigate social bias in a job-matching engine?
Annexe
Additional resources

Bias in HR: Tracking and mitigating job-matching engine disparities

17 Jan 2025

Exploring bias in HR: Identifying sources of bias in AI-driven job-matching, tracking with fairness metrics, and leveraging architecture for transparency and mitigation.

It will not come as a surprise to anyone that we live in a world containing biases and prejudices. The strength of AI algorithms lies in their ability to identify and learn patterns. However, this strength can also become a weakness, as discriminatory patterns are no exception to this learning process. Without proactive measures, these biases will be integrated and potentially amplified by an AI model.

Algorithmic processing in HR has become a common way to cope with ever-increasing amounts of available data. AI models enhance this processing, allowing for, among other things, automated matching between jobseekers and vacancies. Even though the rise of AI-driven matching solutions has revolutionized hiring processes by making them faster and more scalable, this reliance on algorithms has also sparked significant social and legal concerns. Movements advocating for fairness, alongside regulations like the EU AI Act and the GDPR, are driving a push for transparency and systematic bias tracking in HR. Despite this interest, very little guidance is currently provided to practitioners on how to approach the question of bias tracking and mitigation, which this blog post attempts to partially address.

Superlinear brings valuable experience to the algorithmic recruitment domain through the development of TalentAPI, our matching solution designed to provide personalized job recommendations to users. Thus, in this blog post, we will share some insights into how one can approach the question of bias tracking and mitigation. The main points of the discussion are: (1) bias definition, (2) different sources of bias, (3) a baseline setup for bias tracking through fairness metrics, and (4) TalentAPI's architecture as an enabler of bias mitigation.

Bias definition

A distinction needs to be made between two bias categories: social and technical biases. Social bias is the common notion of bias that people think of when talking about the recruitment domain. It consists of differential, unfavorable treatment based on personal characteristics, such as ethnic origin, gender, skin color, or age, which undermines the principle of equal treatment and fairness in the labor market. Technical bias, on the other hand, arises from the architecture, development, or implementation of the system itself. It stems from technical limitations, design choices, or algorithmic decisions that inadvertently introduce or amplify biases.

A recommendation system operates on a feedback loop mechanism. It generates suggestions for users, gathers their feedback, and leverages that input to refine future recommendations. However, this feedback loop inherently introduces a form of technical bias, as it tends to reinforce patterns already present in the data and user behavior. This self-reinforcing nature of recommendation algorithms has made technical bias a focal point of multiple studies.

The topic of social bias has been extensively studied by social scientists but has received little attention from engineers. This gap leaves the question of tracking social bias unresolved, demanding the creation of a clear framework. This blog post tries to provide some guidance on how this tracking challenge can be approached.

Thus, this blog post focuses on the less technically studied social bias, which is why it will be referred to as simply “bias” throughout the remainder of this post.

Bias sources

A job-matching engine provides personalized job suggestions to users, i.e. it matches profiles (a list of user features) with jobs (vacancy descriptions). Modern matching engines leverage explicit feedback, such as likes or dislikes, and implicit feedback, such as clicks or saved jobs, to improve their suggestions. This feedback creates a continuous loop where user interactions influence future matches. It enables the engine to adapt over time, but also reinforces any bias present in the system.

Let's analyze the above diagram representing the feedback loop and enumerate the different potential sources of bias, i.e. stages where bias can be introduced in the system:

  1. Biases and prejudices present in society are of course also reflected in the raw data. Thus, without proactive actions to debias the datasets, the resulting models will also integrate these biases.

  2. Most end-user AI solutions, such as matching engines, are complex models that use already-existing models as components (sub-models). These components are themselves potentially sources of bias.

  3. The implementation of a matching engine involves numerous human decisions, some driven by technical considerations and others shaped by business logic. Regardless of the intent, human involvement inevitably introduces bias, whether through design choices, prioritization of features, or interpretation of requirements.

  4. User interactions, such as clicks and saved items, are some of the first and most evident bias sources. These interactions often reflect existing biases in society, which the system then learns and reinforces in future recommendations, perpetuating the cycle of bias. Users typically favor jobs traditionally associated with their gender or background, like women clicking more often on teaching roles or men on technical positions, reinforcing occupational stereotypes.

Bias tracking through fairness metrics

Fairness metrics are most commonly used as proxies for bias measurement. However, fairness is a social concept that evolves through time and space. Thus, similarly to bias, there is no commonly adopted definition of fairness that would be precise enough to allow the emergence of a universal fairness metric. The challenge that a practitioner faces is to go from this abstract, evolving notion of fairness to a concrete, measurable definition adapted to their specific use case.

We will showcase two complementary fairness metrics, which we deem a good baseline for starting to track bias: Demographic Parity and Equal Opportunity.

First, let's go over the data that should be available when executing job recommendations:

  • Labeled profiles: A profile is a set of user personal data and preferences, which is used for the matching. Labels describing the profile’s sensitive variables, such as gender, age, ethnic origin or nationality, are required for bias tracking.

  • Labeled jobs: A job is a vacancy description used for the matching. Labels describing the job’s sensitive variables, such as salary, contract type or working hours, are required for bias tracking.

  • Suggestion logs: We need to know what jobs were suggested to which user.

  • Performance data: We need some way to assess whether a match is good or not. This can be done using one of the following data sources:

    • User interactions from search: Often, job suggestions are made on platforms that also provide a job search engine. Thus, the user interactions, such as views and saves, coming from the search can be used as independent data sources to compute the performance. We check how many of the jobs we suggest are being interacted with through the search. The assumption is that search engine interactions are closer to a ground truth of user intent. This assumption is based on the fact that a search engine operates in an open-ended environment where users are free to query any term or phrase, providing an unrestricted space of possibilities.

    • User interactions from matching: We can also directly use the interactions coming from the matching engine’s suggestions. The downside of using your own suggestions compared to an independent source is that you’re much more exposed to the self-reinforcing feedback loop discussed in the previous section.

    • External datasets: These can also be used for independent bias measurement. However, it is problematic to use a static dataset, knowing that employment preferences change through time. It might also be difficult to find a dataset specifying all the labels you're interested in.
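
To make this concrete, below is a minimal sketch of how these four data sources could be organized for bias tracking. The tables, column names and values are hypothetical and only illustrate the shape of the data; a real setup would obviously contain many more fields and rows.

```python
# Hypothetical flat tables holding the data needed for bias tracking.
import pandas as pd

# Labeled profiles: one row per user, with the profile-side sensitive variables.
profiles = pd.DataFrame({
    "profile_id": [1, 2, 3, 4],
    "gender": ["female", "male", "female", "male"],  # sensitive variable
})

# Labeled jobs: one row per vacancy, with the job-side sensitive variables.
jobs = pd.DataFrame({
    "job_id": [10, 11, 12, 13],
    "work_hours": ["part-time", "full-time", "part-time", "full-time"],
})

# Suggestion logs: which job was suggested to which user.
suggestions = pd.DataFrame({
    "profile_id": [1, 1, 2, 2, 3, 4],
    "job_id":     [10, 11, 11, 13, 12, 13],
})

# Performance data: here, interactions (e.g. views or saves) coming from the
# search engine, used as an independent signal that a suggested job was relevant.
interactions = pd.DataFrame({
    "profile_id": [1, 2, 4],
    "job_id":     [10, 13, 13],
})
```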

Demographic Parity

Demographic parity (DP): A binary predictor Ŷ satisfies demographic parity with respect to an attribute A that can take l values {a1, ..., al} if Ŷ is independent of A:

P(Ŷ = 1 | A = a1) = ... = P(Ŷ = 1 | A = al)

DP is one of the simplest and most widely used fairness metrics. DP is achieved when the likelihood of a positive outcome is the same for all groups, regardless of whether the person is in the protected group or not.

As an example, we will illustrate the bias measurement for only one combination of sensitive variables:

  • Profile gender: Female or Male

  • Job work hours: Full-time or Part-time

Thus, in our case, DP is used to ensure that the probability of receiving a full-time or part-time job suggestion is the same for both genders.

In the tables below, we can see a very simple bias metric based on DP:

First, a totally unbiased situation where DP is achieved, i.e. the proportion of part-time and full-time suggestions is the same for females and males. For example, the bias of the first row is DP = S(Female; Part-Time) - S(Male; Part-Time) = 0.35 - 0.35 = 0%.

Second, a slightly biased situation, where 5% more part-time jobs are suggested to females compared to males. It can be safely argued that no actions need to be taken yet in this case.

Third, a clearly biased situation, where 25% more part-time jobs are suggested to females compared to males. Actions should most certainly be taken to decrease this bias.
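
As an illustration, here is a minimal sketch of how such a DP-based measure could be computed from the suggestion logs, reusing the hypothetical profiles, jobs and suggestions tables introduced earlier; the grouping columns are assumptions tied to that toy data.

```python
# Minimal demographic parity check: compare the share of each job type
# suggested to each gender, reusing the hypothetical tables defined above.
def demographic_parity_gaps(suggestions, profiles, jobs):
    """Return S(gender; work_hours) and the female-minus-male gap per job type."""
    df = suggestions.merge(profiles, on="profile_id").merge(jobs, on="job_id")
    # S(gender; work_hours): share of suggestions of each job type within a gender.
    shares = (
        df.groupby(["gender", "work_hours"]).size()
          .div(df.groupby("gender").size(), level="gender")
          .unstack("gender", fill_value=0.0)
    )
    # DP gap per job type, e.g. S(Female; Part-Time) - S(Male; Part-Time).
    shares["dp_gap"] = shares["female"] - shares["male"]
    return shares

print(demographic_parity_gaps(suggestions, profiles, jobs))
```

In the unbiased situation above, the gap would be close to zero for every job type; a gap around the 25% mark would be flagged for action.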

Pros of DP:

  • Ensures that members of different groups receive similar job suggestions.

  • Let’s assume that females do tend to interact more with part-time jobs than males. Because DP does not take into account any user interactions, it considers this behavior to be biased, allowing the metric to highlight and fight against biases already established in society.

  • Simple to implement and interpret.

Cons of DP:

Does not allow identification of any performance bias. For example, we can be in a situation where DP is achieved, while having a matching engine that provides significantly worse suggestions to females than to males. So even though we suggest similar job types to both, women will have a lower chance of being interested in or hired for the suggested jobs. This is why at least one additional bias metric based on user interactions is required.

Equal opportunity

Equal opportunity (EO): A binary predictor Ŷ satisfies equal opportunity with respect to an attribute A that can take l values {a1, ..., al} and ground truth Y if Ŷ and A are independent conditional on the ground truth outcome being favorable:

P(Ŷ = 1 | A = a1, Y = 1) = ... = P(Ŷ = 1 | A = al, Y = 1)

EO is one of the most commonly used fairness metrics that take performance into account by ensuring an equal true positive rate (TPR), a true positive being a job that was both suggested and interacted with. In our case, we want to ensure that the performance (TPR) of the part-time or full-time suggestions is the same for both genders.

In the tables above we can see a very simple bias metric based on EO.

For example, the bias of the first row is EO = (TPR(Female; Part-Time) - TPR(Male; Part-Time)) * S(Female; Part-Time) = (0.3 - 0.55) * 0.5 = -0.13. The TPR difference is multiplied by the proportion of suggestions (S) to account for the number of suggestions.

For example, we see that a 25% TPR difference is problematic for the (Female; Part-Time) suggestions, reaching an alarming bias level of -0.13. However, the same 25% TPR difference for (Male; Part-Time) is considered a reasonable bias of 0.03, because these suggestions concern only 10% (S) of all suggestions made to males.
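
Below is a similarly minimal sketch of the EO-based measure, again using the hypothetical tables from the earlier sketches; a suggestion counts as a true positive when the user also interacted with that job.

```python
# Minimal equal opportunity check: compare TPR per (gender, job type) and weight
# the difference by the suggestion share, as in the worked example above.
def equal_opportunity_bias(suggestions, interactions, profiles, jobs):
    df = suggestions.merge(profiles, on="profile_id").merge(jobs, on="job_id")
    # Mark suggestions that were also interacted with (true positives).
    df = df.merge(interactions.assign(hit=1.0), on=["profile_id", "job_id"], how="left")
    df["hit"] = df["hit"].fillna(0.0)
    # TPR per (gender, work_hours): interacted suggestions / all suggestions.
    tpr = (
        df.groupby(["gender", "work_hours"])["hit"].mean()
          .unstack("gender", fill_value=0.0)
    )
    # S(Female; work_hours): share of each job type among suggestions to females.
    share_female = df.loc[df["gender"] == "female", "work_hours"].value_counts(normalize=True)
    # EO bias per job type: TPR difference weighted by the suggestion share.
    tpr["eo_bias"] = (tpr["female"] - tpr["male"]) * share_female
    return tpr

print(equal_opportunity_bias(suggestions, interactions, profiles, jobs))
```

Because DP only looks at the suggestion shares and EO only at interaction-based performance, running both checks side by side gives the complementary view discussed above.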

Pros of EO

Ensures a similar job suggestion quality across the different groups by identifying performance bias.

Cons of EO

It is based on user interactions, which directly reflect the existing biases in society.

TalentAPI architecture as an enabler for transparency and bias mitigation

Due to the rise in popularity of machine learning algorithms, there are numerous techniques that try to address the bias within these algorithms. This is generally done during one of the following stages:

  • Pre-processing: Adjusts training data to eliminate biases before model training.

  • In-processing: Modifies learning algorithms to reduce bias during model training.

  • Post-processing: Corrects bias in model outputs after training using a separate holdout set.
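
As a small illustration of the pre-processing stage, here is a sketch of the classic reweighing idea, one common pre-processing approach (not specific to TalentAPI): each training example receives a weight that makes the sensitive attribute statistically independent of the label before the model is trained. The column names and toy data are hypothetical.

```python
# Hypothetical reweighing sketch: w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y)
# upweights under-represented (attribute, label) combinations before training.
import pandas as pd

def reweighing_weights(train: pd.DataFrame, sensitive: str, label: str) -> pd.Series:
    p_a = train[sensitive].value_counts(normalize=True)
    p_y = train[label].value_counts(normalize=True)
    p_ay = train.groupby([sensitive, label]).size() / len(train)
    return train.apply(
        lambda row: p_a[row[sensitive]] * p_y[row[label]] / p_ay[(row[sensitive], row[label])],
        axis=1,
    )

# Example: interactions labeled by gender; the resulting weights can be passed
# as sample weights to most model training APIs.
train = pd.DataFrame({"gender": ["female", "female", "male", "male", "male"],
                      "interacted": [1, 0, 1, 1, 0]})
print(reweighing_weights(train, sensitive="gender", label="interacted"))
```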

An overview of the existing works can be found in the table below, from [https://arxiv.org/pdf/1908.09635]. It contains references to debiasing methods applied to different machine learning algorithms.


The challenge with the debiasing techniques commonly discussed in the literature is that they often involve complex algorithms, requiring significant customization to fit specific use cases. Rather than presenting one of these methods, we aim to address a frequently overlooked aspect: the role of an algorithm's architecture in mitigating bias. In this context, we will introduce TalentAPI’s structure to show how a decoupled architecture can reduce data-related bias and enable better control of the matching algorithm.

The figure below describes TalentAPI’s architecture, which is decoupled through logical components that we call minimodels. Each minimodel is responsible for one specific matching dimension, such as the profile’s desired jobs, location, skills, etc. TalentAPI’s recommendations are made through two distinct steps:

  • The Prefetch is responsible for the selection of a first set of candidate jobs (from a couple of hundred to a couple of thousand jobs). This step is necessary, as most databases contain hundreds of thousands or even millions of jobs, and applying the complex logic contained within the minimodels to all jobs is not possible at inference time. Thus, a special minimodel called “master” fetches this first set of relevant jobs. The master minimodel applies a big query against a search engine, within which all jobs are indexed, allowing a fast prefetch. A simplified version of the logic of all the other minimodels is integrated within this master query as distinct sub-queries.

  • The Matching itself consists of applying the more complex minimodel logic to all the fetched jobs. Each minimodel returns its own score. All minimodel scores are then combined into a final job matching score, which determines the position of the job within the final ranking.
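
To make the two-step flow more tangible, here is a highly simplified, hypothetical sketch of how minimodel scores could be combined into a final matching score. The minimodel names, scoring functions and weights are purely illustrative and do not reflect TalentAPI's actual internals.

```python
# Hypothetical minimodels: each scores one matching dimension between 0 and 1
# and, in practice, would only receive the subset of profile data it needs.
MINIMODELS = {
    # name: (scoring function, weight in the final score)
    "desired_jobs": (lambda p, j: float(j["title"] in p["desired_titles"]), 0.5),
    "location":     (lambda p, j: 1.0 / (1.0 + j["distance_km"]), 0.3),
    "skills":       (lambda p, j: len(set(p["skills"]) & set(j["skills"]))
                                  / max(len(j["skills"]), 1), 0.2),
}

def match_score(profile: dict, job: dict) -> float:
    """Combine all minimodel scores into a single job matching score."""
    return sum(weight * score(profile, job) for score, weight in MINIMODELS.values())

def match(profile: dict, prefetched_jobs: list[dict], top_k: int = 10) -> list[dict]:
    """Step 2: rank the jobs returned by the prefetch step with the full minimodel logic."""
    return sorted(prefetched_jobs, key=lambda j: match_score(profile, j), reverse=True)[:top_k]

profile = {"desired_titles": ["data engineer"], "skills": ["python", "sql"]}
job = {"title": "data engineer", "distance_km": 5.0, "skills": ["python", "spark"]}
print(match_score(profile, job))  # 0.65 for this toy profile/job pair
```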

Pros of a decoupled architecture:

  • Bias mitigation through decoupled data: Of course, no matching engine should use any personal data that is directly related to one of the sensitive variables. For example, a person's name cannot be used during matching because gender and ethnic origin can easily be inferred from it. However, the strength of machine learning algorithms resides in the recognition of patterns, allowing them to discover indirect links between sensitive variables and the remaining data relevant for the matching. For example, a deep learning algorithm can identify links between geographic areas and ethnic groups. Each minimodel within TalentAPI only has access to a subset of the relevant profile data. For example, the Location minimodel only has access to the profile's geo-location. This minimizes the risk of learning unwanted data links, avoiding a potential bias source by design.

  • Full control for debiasing: Once bias is found through the fairness metrics, the minimodel architecture enables easy identification of the parts of the algorithm that require logic changes. We simply need to compute a correlation matrix between the sensitive variables and the profile's matching data, which allows us to identify the minimodel and the sub-query of the master minimodel that need to be modified (a sketch of this check is shown after this list). For example, let's assume that the matching engine suggests significantly more low-paying jobs to non-EU citizens. Our correlation matrix might show us that the spoken language is the variable most highly correlated with non-EU citizenship. Since the Skills minimodel is the only one that uses a profile's spoken language, we know that this part of the algorithm is most probably responsible for the bias.

  • Other non-bias related advantages:

    • Explainability and transparency through minimodels.

    • Modularity through the modification, deletion or addition of minimodels.
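
Below is a minimal sketch of the correlation check mentioned in the "full control for debiasing" point: the sensitive variables (kept aside for auditing only) are correlated with the features that the minimodels actually consume, and the strongest correlations point to the minimodel and master sub-query to review first. The columns, values and feature-to-minimodel mapping are hypothetical.

```python
# Hypothetical audit table: a sensitive variable (never used for matching)
# alongside the profile features consumed by the minimodels.
import pandas as pd

audit = pd.DataFrame({
    "non_eu_citizen":  [1, 1, 0, 0, 1, 0],                    # sensitive variable
    "spoken_language": ["fr", "ar", "nl", "nl", "ar", "fr"],   # used by the Skills minimodel
    "distance_km":     [4, 25, 3, 8, 30, 5],                   # used by the Location minimodel
})

# One-hot encode the categorical matching features and correlate every feature
# with the sensitive variable; the highest absolute correlation identifies the
# minimodel whose logic (and master sub-query) should be reviewed first.
encoded = pd.get_dummies(audit, columns=["spoken_language"], dtype=float)
correlations = encoded.corr()["non_eu_citizen"].drop("non_eu_citizen")
print(correlations.abs().sort_values(ascending=False))
```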

Cons of a decoupled architecture:

  • Missing contextual information: Of course, there is no such thing as a free lunch, so the above advantages come at a price: a matching engine with a decoupled architecture requires much more effort to reach the performance of standard deep learning black-box solutions. Black-box algorithms automatically identify data links useful to the matching, which, in the case of a decoupled architecture, must be defined through explicit human intervention.

To summarize, in this section we briefly mentioned the existence of multiple debiasing methods, which are use-case-specific debiasing algorithms built on top of your model. We then emphasized the model's architecture as an often overlooked way to mitigate bias, showcasing the advantages of a decoupled architecture through the lens of TalentAPI.

Summary & conclusion

In the context of job-matching engines, bias is an inherent challenge with far-reaching consequences, both socially and technically. We began by defining bias, emphasizing its social and technical dimensions. We enumerated multiple potential bias sources, including societal inequalities, flawed data, and feedback loops within recommendation systems, making it a complex issue to address. We introduced fairness metrics as proxies for tracking bias. These metrics, while diverse and sometimes even contradictory, help evaluate the system’s impact on different groups, highlighting areas of unfair treatment. A starting bias tracking setup was proposed via two complementary fairness metrics: Demographic Parity and Equal Opportunity. Finally, we discussed how the model's architecture enables bias mitigation through the lens of TalentAPI.

The above discussion shows that there are quite a few challenges ahead before achieving a somewhat standard bias tracking and mitigation framework. Some people might see these challenges as arguments against algorithmic processing of data in recruitment, but we would like to argue the exact opposite by highlighting the following advantages of matching engines:

  • Coverage: A matching engine analyzes all available data, ensuring every job opportunity has a chance to be considered. In contrast, human processing can only handle a fraction of the data, inevitably leaving some options neglected.

  • Tracking: Matching engines inherently collect feedback data to monitor performance and improve over time. This same data can be leveraged to track and measure bias. Human decision-making lacks such built-in tracking, requiring additional effort and cost to gather similar data, making bias tracking inconsistent and non-standardizable.

  • Mitigation: When bias is detected in algorithms, adjustments can be made to align with updated requirements. However, changing the behavior of people working in the recruitment domain is a much more complex and effort-demanding task.

In essence, algorithmic processing not only broadens opportunities, but also facilitates more systematic bias tracking and mitigation compared to human processes.

Q&A

What is the standard way to track social bias in HR?

To track bias, we need to be able to assess whether the treatment of users is fair or not. However, fairness is a social concept that changes through time and space. Currently, we do not have any commonly accepted and sufficiently precise legal definition of fairness, which is why there is no standard framework for bias tracking.

How can one track social bias in a job-matching engine?

Bias can be tracked using fairness metrics, of which multiple exist, each addressing different dimensions of bias. We propose combining two complementary fairness metrics as a basic bias tracking setup: (1) Demographic Parity and (2) Equal Opportunity.

How to mitigate social bias in a job-matching engine?

Debiasing techniques commonly discussed in the literature often involve complex algorithms requiring significant customization to fit specific use cases. Thus, instead of discussing such an algorithm, we prefer to emphasize a decoupled model architecture as an enabler of bias mitigation. A clear separation of the model's data and logic makes it possible to both prevent and address bias.

Annexe

Additional resources

There are already some attempts to provide an out-of-the-box code solution for bias tracking:

  • Aequitas [https://arxiv.org/pdf/1811.05577]: an open source bias and fairness audit toolkit, containing multiple bias metrics and a bias report.

  • IBM’s AI Fairness 360 [https://aif360.res.ibm.com/]: also an open source toolkit that can help users examine, report, and mitigate discrimination and bias in machine learning models.

Note that both of these toolkits require some additional data transformations in order to be used in our recruitment matching setting.

Author(s):

Dumitru Negru

ML Software Engineer
