Why Does a Matching System That Is Supposed to Be Fair Create Inequality? Differences in Understanding Lead to Differences in Outcomes

When "Fair Algorithms" Create Unfairness—Disparities Occur Outside the Code

"If the algorithm is fair, the results will be fair too."

As AI and automated decision-making systems spread through society, this is what we tend to expect: that by removing human subjectivity and bias and deciding according to rules, high-stakes processes such as hiring, admissions, assignments, and promotions will become more just. At the very least, it seems more transparent and less biased than arbitrary decisions made by humans behind closed doors.

However, new research throws cold water on this assumption.

Even if the matching system itself is designed fairly, inequality can arise if users do not fully understand the mechanism. The problem is not solely within the algorithm. Rather, it lies with the humans using the algorithm and the information environment surrounding them.

The focus here is on the "residency match" system used by medical students in the United States to determine their training placements. Graduating medical students and residency programs submit their preferences, and a computerized matching system determines the combinations. For future doctors, where they train can significantly impact their careers. This is not just a matter of career planning; it is akin to a life-defining moment.

The system is designed so that it is best for students to list programs in their true order of preference. There is no need for strategic maneuvering such as "this popular program seems out of reach, so I should rank it lower" or "I should move a safer option up to improve my chances." In fact, manipulating the rankings in this way can leave the student worse off. Technically, the match is built on an applicant-proposing "deferred acceptance" procedure, under which an applicant can do no better than to rank programs honestly.

In principle, the system is very rational. Just honestly state your preferences. There's no need to play games. Therefore, it should be fair for everyone.

However, reality is not that simple.

Researchers analyzed incentive-based simulation data from over 1,700 medical students and conducted detailed interviews with 66 students who experienced the residency match. They found that even if the system is designed fairly, differences in students' understanding and information-gathering behaviors can lead to suboptimal choices.

For example, some students think, "I really want to go to Hospital A, but it seems highly competitive, so maybe I should rank Hospital B higher to improve my chances." This may look like a prudent decision. But in this type of matching system, misreporting one's true order of preference can mean missing out on the best outcome one could have had.

In other words, the system is designed to reward honest choices, but users who do not understand this cannot use it correctly. Even with a fair mechanism, if the people approaching it differ in knowledge, confidence, and the quality of the advice around them, the results will not be equal.
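To make this concrete, here is a minimal sketch of applicant-proposing deferred acceptance, the procedure at the core of the residency match (the real match uses the Roth-Peranson refinement, which also handles couples and other constraints). The students, programs, preference lists, and capacities below are invented for illustration; the point is only that, under identical rules, an honest list and a hedged list can produce different outcomes for the same person.

```python
# Minimal applicant-proposing deferred acceptance (Gale-Shapley).
# All names and preference lists below are hypothetical examples.

def deferred_acceptance(student_prefs, program_prefs, capacity):
    """student_prefs: {student: [programs, most preferred first]}
    program_prefs: {program: [students, most preferred first]}
    capacity: {program: number of slots}
    Returns {student: matched program or None}."""
    rank = {p: {s: i for i, s in enumerate(prefs)}
            for p, prefs in program_prefs.items()}
    next_choice = {s: 0 for s in student_prefs}  # index of next application
    held = {p: [] for p in program_prefs}        # tentative acceptances
    free = list(student_prefs)
    while free:
        s = free.pop()
        prefs = student_prefs[s]
        if next_choice[s] >= len(prefs):
            continue                      # list exhausted: s stays unmatched
        p = prefs[next_choice[s]]
        next_choice[s] += 1
        if s not in rank[p]:
            free.append(s)                # program did not rank s: try next
            continue
        held[p].append(s)
        held[p].sort(key=rank[p].get)     # program keeps its favorites
        if len(held[p]) > capacity[p]:
            free.append(held[p].pop())    # lowest-ranked applicant is bumped
    match = {s: None for s in student_prefs}
    for p, students in held.items():
        for s in students:
            match[s] = p
    return match


programs = {"A": ["taylor", "jordan", "casey"],   # A's favorite is taylor
            "B": ["jordan", "taylor", "casey"],
            "C": ["casey", "jordan", "taylor"]}
capacity = {"A": 1, "B": 1, "C": 1}

# Everyone truly prefers A, then B, then C.
truthful = {"taylor": ["A", "B", "C"],
            "jordan": ["A", "B", "C"],
            "casey":  ["A", "B", "C"]}
print(deferred_acceptance(truthful, programs, capacity))
# {'taylor': 'A', 'jordan': 'B', 'casey': 'C'}
# taylor gets their true first choice.

# taylor fears A is "too competitive" and hedges by ranking B first.
hedged = dict(truthful, taylor=["B", "A", "C"])
print(deferred_acceptance(hedged, programs, capacity))
# {'taylor': 'B', 'jordan': 'A', 'casey': 'C'}
# The hedge costs taylor the spot at A that the honest list would have won.
```

In both runs the algorithm and the programs' preferences are identical; the only thing that changes is taylor's reported list. This is the study's point in miniature: the rules are equally fair for everyone, but only the applicant who understands and trusts them collects the benefit.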

The study particularly highlighted differences between genders. Male medical students were more likely than female students to seek additional information about the system. They checked official websites, watched explanatory videos, compared multiple sources, and conducted independent searches. Students who engaged in these behaviors had a deeper understanding of the matching algorithm and were more likely to adopt optimal strategies.

On the other hand, students who relied solely on the standard system explanations and general advice from their universities were more prone to misunderstand the system. The advice from medical schools tended to be "rank according to your true preferences," "follow your intuition," and "don't overthink it." None of this is wrong; as a statement of the system's principles, it is exactly right.

But correct advice and sufficient advice are different.

"Why is it best to rank according to true preferences?"
"Isn't it disadvantageous to list popular programs at the top?"
"Does the concept of a 'safety school' apply in this system?"
"How does changing my ranking affect my match rate?"

If these questions cannot be answered, students act on anxiety. Even if the system is designed to be "strategy-proof," users who cannot bring themselves to believe it will end up strategizing anyway.

Here lies a major blind spot in the discussion of algorithmic fairness.

So far, discussions of "fair algorithms" have focused mainly on the system's internals. Does it discriminate against specific attributes? Was it trained on biased historical data? Are its decision criteria opaque? Are its outputs skewed? These questions matter, of course.

However, what this study shows is that inequality can arise even when the algorithm itself is not biased. Skewed results can emerge without a single line of discriminatory code, because users do not all begin from the same starting line.

Some people have the time and confidence to seek additional information. Some receive support to understand the system. Some receive specific advice from seniors or mentors. Some hesitate to voice questions about the system. Some do not even realize that "this is something I should research myself."

These differences are not merely individual differences. They are shaped by gender, class, alma mater, family environment, cultural background, and surrounding networks. Even if algorithms apply the same rules to everyone, they do not automatically bridge the information and confidence gaps on the human side.

In fact, the more complex the system, the more advantageous it is for those who understand it.

This structure is not limited to the matching of medical students to training placements. Matching and automated decision-making are spreading through our society: school admissions, corporate hiring, internal assignments, promotion reviews, talent marketplaces, public-sector job allocations, scholarship selections, housing lotteries, daycare admissions, military placements. And often the explanation offered is, "the system processes everyone fairly, so it's fine."

However, how users understand the system is surprisingly overlooked.

Consider a company's internal recruitment system. Employees rank the departments they want, and the system assigns them based on aptitude and preference. Even if the system is fair, employees who mistakenly believe that listing a popular department first will hurt them may demote their true preferences from the start. Meanwhile, employees who understand the system will state their true preferences with confidence. Under the same rules, those who understand come out ahead.

The same can happen in university admissions or school choice. Parents and students who do not correctly understand the selection system may make disadvantageous choices in the belief that they are playing it safe. Families with more information can work the system to their advantage, while those without it may misread its intent. The outcomes of an algorithmically fair process can end up mirroring differences in families' information capital.

In this sense, a fair system is not just a "system that calculates without bias." It must be a system that users can understand correctly, use with confidence, and receive support when needed.

Researchers suggest practical measures such as clearer explanations, repeatable learning materials, simulations, interactive practice, and encouragement to access multiple sources. The important thing is not to simply say, "Write in your true order of preference," but to ensure users understand "why that is optimal."

This is both a user education issue and a system design issue.

If users are prone to misunderstanding, the fault is not theirs alone. If explanations are vague, if counterintuitive aspects of the system go unaddressed, if the atmosphere discourages questions, and if the standard advice stays abstract, then the problem lies in how the system is implemented. If we are serious about achieving fairness, we must design not only the algorithm but also every point where it meets its users.

Judging by reactions on social media, attention to this research is still limited, though it is starting to draw interest within the expert community. A LinkedIn post by one of the paper's authors introducing the research had 69 reactions and 3 comments at the time of checking, mostly positive: congratulations and remarks like "an interesting paper." Rather than sparking a flashy controversy or a large-scale discussion among general users, it is currently being quietly shared among people interested in management, medical education, and algorithmic fairness.

On the other hand, on the Phys.org article page, the number of shares was low, and there were hardly any comments at the time of checking. This is not because the research is unimportant, but because the theme is specialized and not immediately relatable to the general reader. However, the issues highlighted by this research are actually directly connected to the lives of many people.

Because we already live in a society where "those who can understand the system benefit."

Tax systems, scholarships, insurance, mortgages, point systems, job hunting, job change sites, school selection, administrative procedures. All of these have the same rules on the surface, but the results vary depending on the level of understanding. When algorithms are added, the problem becomes even less visible. With human representatives, it is easier to say "there is not enough explanation," but with systems, people tend to accept it as "that's just how it is."

However, fairness is not about imposing self-responsibility on users.

"It's the fault of those who didn't research properly."
"It's the fault of those who can't understand the system."
"It's the fault of those who don't ask questions."

It's easy to dismiss it that way. However, the more socially important the system is, the more responsibility the operators have to support users' understanding. Especially if the system involves life paths, careers, income, or educational opportunities, a lack of explanation can be more than just unkindness; it can be a breeding ground for inequality.

The question posed by this research is very significant for system design in the AI era.

Is it enough to be satisfied with just creating a fair algorithm?
Is it truly fair if users make disadvantageous choices due to misunderstandings about the system?
Can we say we have fulfilled our responsibility to explain just by saying "the correct information is published"?
When there are differences in the ability or confidence to understand the system, how far should support go?

In the future society, not only the transparency of algorithms but also "understandability" will become more important. Even if there is a transparent explanation, it is meaningless if only experts can understand it. Only when users can understand how it affects their decisions does transparency lead to fairness.

Creating a fair system requires not only those who write the code but also those who explain, educate, operate the system, and listen to users' concerns. Algorithmic fairness is not just the job of the technical department. It is the design of communication, education, and trust across the entire organization.

The residency match for medical students is just one example. However, the lessons learned from it are broad.

Fair rules only function fairly when supported by fair understanding.
And inequality does not necessarily arise from malicious discrimination.
Even a well-intentioned system can produce unfair results if explanations are lacking, support is uneven, and there are differences in users' understanding.

In an era where AI and algorithms are introduced into society, what is needed is not a naive trust that "machines are fair." Rather, because machines appear fair, we must carefully examine the human understanding, behavior, and information environment surrounding them.

Fairness does not exist solely within the algorithm.
It exists among the people who use the algorithm.


Sources

A Phys.org article introducing the research. It reports that even a fairly designed matching system can produce unequal outcomes due to differences in user understanding.
https://phys.org/news/2026-05-fair-unequal-outcomes.html

The publisher's page for the paper, with the abstract, authors, publication information, and DOI of "Gendered Navigation of Advice and Suboptimal Behavior in Matching Algorithms: Evidence from the Residency Match" by Samuel E. Skowronek and Joyce C. He.
https://pubsonline.informs.org/doi/10.1287/orsc.2024.19652

INFORMS news release summarizing the key points of the research: the simulation data from over 1,700 medical students, the interviews with 66 students, and the differences in system understanding and information-gathering behavior.
https://www.informs.org/News-Room/INFORMS-Releases/News-Releases/Fair-Matching-Systems-Can-Still-Produce-Unequal-Outcomes-New-Research-Finds

News release published on EurekAlert!, covering the background of the research, the mechanism of the residency match, its practical implications, the publication date, and the DOI.
https://www.eurekalert.org/news-releases/1128471

LinkedIn post by one of the authors introducing the research; the reaction and comment counts cited above were observed here.
https://www.linkedin.com/posts/sam-skowronek-1775896a_gendered-navigation-of-advice-and-suboptimal-activity-7442932708063821824-SJds