In some ways, artificial intelligence acts like a mirror. Machine learning tools are designed to detect patterns, and they often reflect back the same biases we already know exist in our culture. Algorithms can be sexist or racist, and they can perpetuate other structural inequalities found in society. But unlike humans, algorithms aren’t under any obligation to explain themselves. In fact, even the people who build them aren’t always capable of describing how they work.
That means people are sometimes left unable to grasp why they lost their health care benefits, were declined a loan, rejected from a job, or denied bail—all decisions increasingly made in part by automated systems. Worse, they have no way to determine whether bias played a role.
[…] In an article published in the Harvard Journal of Law & Technology earlier this year, Wachter, along with Brent Mittelstadt and Chris Russell, argues that algorithms should offer people “counterfactual explanations,” or disclose how they came to their decision and provide the smallest change “that can be made to obtain a desirable outcome.”
For example, an algorithm that calculates loan approvals should explain not only why you were denied credit, but also what you can do to reverse the decision. It should say that you were denied the loan for having too little in savings, and provide the minimum additional amount you would need to save to be approved. Offering counterfactual explanations doesn’t require that the researchers who designed an algorithm release the code that runs it. That’s because you don’t necessarily need to understand how a machine learning system works to know why it reached a certain decision.
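To make the idea concrete, here is a minimal sketch in Python of how a counterfactual explanation might be produced for a toy loan model. The model, its weights, the feature names, and the search step are all invented for illustration; they are not from Wachter, Mittelstadt, and Russell’s paper. The point is that the model is treated as a black box: the search only queries its decisions, never inspects its code.

```python
# Minimal sketch of a counterfactual explanation for a hypothetical loan model.
# The decision rule, weights, and thresholds below are invented for illustration.

def loan_model(savings: float, income: float) -> bool:
    """Toy decision rule standing in for an opaque machine learning model."""
    return 0.5 * savings + 0.3 * income >= 10_000


def counterfactual_savings(savings: float, income: float, step: float = 100.0):
    """Find the smallest increase in savings that flips a denial into an approval.

    Treats the model as a black box: we only query its output, never its internals.
    Returns None if the applicant is already approved.
    """
    if loan_model(savings, income):
        return None
    extra = 0.0
    while not loan_model(savings + extra, income):
        extra += step
    return extra


if __name__ == "__main__":
    extra = counterfactual_savings(savings=4_000, income=20_000)
    if extra is not None:
        print(f"Denied. Smallest change to be approved: save ${extra:,.0f} more.")
```

In this toy case the explanation is simply “save $4,000 more,” which is exactly the kind of actionable, code-free answer a counterfactual explanation is meant to give the person affected.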
“The industry fear is that [companies] will have to disclose their code,” says Wachter. “But if you think about the person who is actually affected by [the algorithm’s decision], they probably don’t think about the code. They’re more interested in the particular reasons for the decision.”
https://www.wired.com/story/what-does-a-fair-algorithm-look-like/