Late last year, the Justice Department joined the growing list of agencies to discover that algorithms don't heed good intentions. An algorithm known as PATTERN has sorted tens of thousands of federal prisoners into risk categories that can affect their eligibility for early release. The rest is sadly predictable: like so many other computerized gatekeepers making life-altering decisions (parole decisions, resume screenings, health care needs), PATTERN appears to be unfair, in this case to Black, Asian, and Latino inmates.

A common explanation for these misfires is that humans, not equations, are causing the problem. Algorithms mimic the data given to them. If this data reflects sexism, racism and oppressive tendencies in humanity, these biases will be incorporated into the algorithm’s predictions.

But there is more to it than that. Even if all of humanity's flaws were eliminated, fairness would remain an elusive goal for algorithms, for reasons that have more to do with mathematical impossibility than with retrograde ideologies. In recent years, a burgeoning field of algorithmic fairness research has revealed fundamental, and insurmountable, limits to fairness. The research has profound implications for any decision-maker, human or machine.

::

Imagine two doctors. Dr. A is a graduate of a prestigious medical school, is up to date with all the latest research, and carefully tailors her approach to the needs of each patient. Dr. B takes a quick look at each patient, says “you are fine” and sends them an invoice.

If you had to choose a doctor, the decision might seem obvious. But Dr. B has one redeeming attribute: in a sense, this doctor is fairer, because everyone is treated exactly the same.

This trade-off is not merely hypothetical. In an influential 2017 paper titled “Algorithmic Decision Making and the Cost of Fairness,” researchers showed that algorithms can achieve greater accuracy when they are not also required to behave fairly. The core of their argument is simple to grasp: in general, everything gets harder when constraints are added. The best cake in the world is probably more delicious than the best vegan cake in the world. The most accurate algorithm is probably more accurate than the most accurate fair algorithm.
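The logic can be put in symbols (the notation here is ours, for illustration, not the paper's): write F for the set of all possible models and F_fair for the subset that also satisfies some fairness constraint. A maximum taken over a smaller set can never exceed a maximum taken over a larger one:

```latex
\max_{f \in \mathcal{F}_{\text{fair}}} \operatorname{accuracy}(f)
\;\le\;
\max_{f \in \mathcal{F}} \operatorname{accuracy}(f)
\quad\text{whenever}\quad
\mathcal{F}_{\text{fair}} \subseteq \mathcal{F}.
```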

In designing an algorithm, a choice must therefore be made. It may not be as stark as choosing between Dr. A and Dr. B, but it has the same flavor. Are we willing to sacrifice quality in the name of equality? Do we want a fairer system or a more effective one? How best to walk this line between performance and fairness is an active area of academic research.

This tension also arises in human decisions. Universities might be able to admit classes with stronger academic credentials if they didn't also value the diversity of the student body; fairness takes precedence over performance. Police departments, on the other hand, often concentrate patrols in high-crime areas at the cost of over-policing communities of color; performance takes precedence over fairness.

Deciding whether fairness or performance should take priority is not straightforward. But what the study of algorithms reveals is that the decision is unavoidable and comes with real trade-offs. And those trade-offs often lead to conflict.

::

But what do “fairness” and “equity” really mean? Algorithms require precision, but language can be ambiguous, and that creates another hurdle. Before you can be fair, you have to define fairness. And although there are many reasonable ways to define it, the definitions compete with one another in a strict mathematical contest that not all of them can win.

To assess whether an algorithm is biased, scientists cannot peer into its soul and understand its intentions. Some algorithms are more transparent than others, but many used today (especially machine learning algorithms) are essentially black boxes that ingest data and spit out predictions according to mysterious and complex rules.

Imagine a data scientist trying to figure out whether a new cancer screening algorithm is biased against Black patients. The new technology produces a binary prediction: positive or negative for cancer. Armed with three pieces of information about each patient (their race, the algorithm's positive or negative prediction, and whether the patient really has cancer), how can the data scientist determine whether the algorithm is behaving fairly?

A reasonable way to dig in is to check whether error rates differ for Black patients compared with white patients. Mistakes are costly in both directions. Failing to diagnose cancer (a false negative) in Black patients at a higher rate than in white patients could be considered unacceptable discrimination. A differing rate of false positives, which send healthy patients down an unnecessary and costly rabbit hole, is also problematic. If an algorithm has equal false positive and false negative rates for Black and white patients, it is said to satisfy equalized odds. It is one form of fairness.
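Here is a minimal sketch of that audit in Python, using made-up records rather than data from any real screening tool. Each record is a (race, prediction, truth) triple, and the check simply compares the two error rates across groups.

```python
# Hypothetical audit records: (race, prediction, truth),
# where 1 = positive for cancer and 0 = negative.
records = [
    ("Black", 1, 1), ("Black", 0, 1), ("Black", 0, 0), ("Black", 1, 0),
    ("white", 1, 1), ("white", 0, 0), ("white", 1, 0), ("white", 0, 0),
]

def error_rates(group):
    """False negative and false positive rates for one racial group."""
    sick = [pred for race, pred, truth in records
            if race == group and truth == 1]
    healthy = [pred for race, pred, truth in records
               if race == group and truth == 0]
    fnr = sum(1 for pred in sick if pred == 0) / len(sick)        # missed cancers
    fpr = sum(1 for pred in healthy if pred == 1) / len(healthy)  # false alarms
    return fnr, fpr

# Equalized odds holds only if both rates match across the groups.
print("Black:", error_rates("Black"))  # (0.5, 0.5)
print("white:", error_rates("white"))  # (0.0, 0.333...)
```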

Another way to measure fairness is to ask whether the algorithm's predictions mean the same thing for Black and white patients. If a negative prediction means a 90 percent chance that a white patient is cancer-free, but only a 50 percent chance that a Black patient is, the algorithm can reasonably be considered discriminatory. Conversely, an algorithm whose predictions carry the same implications regardless of race could be considered fair. Fairness of this flavor is called calibration.
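A matching sketch for the calibration check asks whether a negative result is equally reassuring in both groups. The records below are again hypothetical.

```python
# Hypothetical records: (race, prediction, truth);
# 0 = negative, 1 = positive for cancer.
records = [
    ("Black", 0, 1), ("Black", 0, 0), ("Black", 1, 1),
    ("white", 0, 0), ("white", 0, 0), ("white", 1, 1),
]

def cancer_free_given_negative(group):
    """Among patients in `group` with a negative result, the share who are
    actually cancer-free. Calibration asks this to match across groups."""
    negatives = [truth for race, pred, truth in records
                 if race == group and pred == 0]
    return sum(1 for truth in negatives if truth == 0) / len(negatives)

print("Black:", cancer_free_given_negative("Black"))  # 0.5: a coin flip
print("white:", cancer_free_given_negative("white"))  # 1.0: genuinely reassuring
```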

Here is the problem: researchers have shown that no algorithm can achieve both kinds of fairness at once. Satisfying one fairness goal requires violating the other. It's as hopeless as trying to hold both ends of a seesaw down at the same time.

These different measures of fairness are intimately linked. Suppose, for example, the algorithm were modified to raise the bar for a cancer diagnosis. There would be fewer false alarms, but patients receiving a negative result could no longer rest as easy. Combine these competing effects with the fact that predispositions to certain cancers may, in fact, differ between racial groups, and an unsolvable puzzle emerges.
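The arithmetic below makes the collision concrete. The numbers are hypothetical, chosen only for illustration: a test calibrated identically in two groups (a positive result always means an 80 percent chance of cancer; a negative result, a 5 percent chance) is forced into different error rates as soon as the groups' underlying cancer rates differ.

```python
# Hypothetical calibration targets, identical for every group.
PPV = 0.80         # P(cancer | positive prediction)
P_NEG_SICK = 0.05  # P(cancer | negative prediction)

def error_rates(prevalence):
    """Calibration plus a group's base rate pins down its error rates."""
    # Law of total probability: prevalence = q*PPV + (1 - q)*P_NEG_SICK,
    # where q is the share of the group flagged positive. Solve for q:
    q = (prevalence - P_NEG_SICK) / (PPV - P_NEG_SICK)
    fnr = (1 - q) * P_NEG_SICK / prevalence  # sick patients sent home
    fpr = q * (1 - PPV) / (1 - prevalence)   # healthy patients flagged
    return fnr, fpr

for group, prevalence in [("Group 1", 0.10), ("Group 2", 0.20)]:
    fnr, fpr = error_rates(prevalence)
    print(f"{group}: base rate {prevalence:.0%} -> FNR {fnr:.1%}, FPR {fpr:.1%}")
# Group 1: base rate 10% -> FNR 46.7%, FPR 1.5%
# Group 2: base rate 20% -> FNR 20.0%, FPR 5.0%
```

Notice that neither group gets the better end of both rates: in this toy example, the lower-prevalence group suffers far more missed cancers, while the higher-prevalence group suffers more false alarms.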

These kinds of results, called impossibility theorems, abound in algorithmic fairness research. Though there are dozens of reasonable ways to define fairness (equalized odds and calibration being only two), it is unlikely that more than a handful of them can be satisfied simultaneously. Every algorithm is unfair according to some definition of fairness. Bias hunters are therefore guaranteed success: seek and you will find.

These impossibilities do not apply only to algorithms. The incompatibility of fairness definitions holds whether the predictions are made by a sophisticated cancer screening algorithm or by a dermatologist examining skin tags with the naked eye. But the simple structure of algorithms (data in, decision out) is what helped make the study of fairness possible. It's easy to ask too much of algorithms, but that shouldn't stop us from asking anything of them. We must be intentional and specific about the kind of fairness we seek.

Even though attaining fairness is inherently difficult, pursuing it is not futile. Forcing an optimized algorithm to behave fairly might pull it down from its performance peak, but that may be a trade-off worth making. And for systems that are both less than optimal and far from fair, an alternative could exist that is better in both respects.

We are accustomed to considering the social forces that undermine fairness: violent history, implicit biases, systemic oppression. Yet even if we were to eliminate these human contributors to inequity, we would still hit impenetrable bedrock, the same fundamental limits that algorithms are running up against today. In seeking to make algorithms fair, we have learned that we can't have it all.

Irineo Cabreros is an associate statistician at Rand Corp., where he studies algorithmic fairness among other public policy topics.