Who wants to go to more meetings?
I sure as hell don't. Neither does anyone else I know. (We might want to be with other people more, but not like that.)
Over the next few weeks, I'm wrapping up an academic book about the ethical choices that hide inside high-stakes software, and how those choices can be un-hidden. (This is a different project from my earlier posts.)
My main example is kidney transplantation. About 100,000 Americans are waiting for kidney transplants, and whenever a donated organ becomes available, a national matching system – an algorithm on a computer – decides which patient will get the chance to use that particular organ. No matter how it gets made, that software has to navigate some choppy ethical waters: Should we favor the young? Or those who've waited the longest, even if they might be old or sick? Or maybe we should give each organ to whoever can benefit most, even if they just signed up yesterday?
Algorithms make lots of other ethically tricky decisions – who deserves government benefits, who's dangerous, who's qualified for a job, whether you are a pain patient or a secret opiate addict. In those cases, the ethics often seem to disappear into the software's details. The key decisions get buried in technical jargon that only experts can understand, or the system is a commercial secret and nobody but its makers can be quite sure how it works.
The kidney transplant story is very different, and that's why I wanted to study it. In transplant medicine there's a long tradition of bringing in laypeople to help decide the ethical tradeoffs, rather than forcing experts to figure everything out. Here, that means patients and organ donors and members of the public get a say, not just doctors and surgeons and programmers. And between about 2004 and 2014 there was a careful public process to revise the ethical balance in how we give out kidneys.
It was beautiful, people. It had all or most of the things reformers tend to want: There was transparency and public input, published proposals and comment periods. There were simulations of the different policies, and reports, and once the system finally did roll out, there were audits. Complex technical things got rewritten into plain English so that patients and journalists could decipher what was going on.
Not only that, but all this process made a practical difference: At first the idea was to give each organ to whoever would benefit the most, but when the relevant committee pitched the idea publicly, people pointed out that this would basically lock out the old, the poor, and folks who had had other bad medical luck. So the community ended up compromising on something more moderate, which still valued waiting time and also made it easier for people to join the list.
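One way to picture the compromise is as a weighted points formula. Here's a minimal sketch in Python; the fields, weights, and numbers are hypothetical illustrations of the idea, not the real kidney allocation formula:

```python
# Toy sketch of a points-based allocation rule, loosely in the spirit of
# the compromise described above. All weights and fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    waiting_years: float      # time already spent on the list
    expected_benefit: float   # e.g. projected extra life-years from this organ

def score(c: Candidate, w_wait: float = 1.0, w_benefit: float = 1.0) -> float:
    # A pure "maximize benefit" rule would set w_wait = 0; the compromise
    # keeps waiting time in the formula so long-waiting patients aren't
    # locked out by younger or healthier newcomers.
    return w_wait * c.waiting_years + w_benefit * c.expected_benefit

candidates = [
    Candidate("A", waiting_years=6.0, expected_benefit=4.0),
    Candidate("B", waiting_years=0.5, expected_benefit=9.0),
]

best = max(candidates, key=score)
print(best.name)  # prints "A"
```

Setting `w_wait` to zero recovers the original "maximize benefit" proposal, under which B would win instead. The public debate was, in effect, a fight over what these weights should be.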
So that's all good, right? People working together to open the black box, and take control over the ethical choices that matter most, rather than leaving them up to technical experts or machines?
An easy way to end my book would be to say that yes, it was imperfect but basically good. In conclusion (I could say) let's extend the same principle to public benefits systems, or to criminal justice algorithms. (Advocates have sought things like this for a while: In 2018, a group of more than 100 advocacy organizations urged that criminal justice algorithms be constrained by "community advisory boards.") Let's have more transparency, and more meetings – hooray!
But hang on. This new transplant algorithm took ten years to get done. Endless hours of salaried time and volunteer time, travel and analysis and writing and reading – millions of dollars' worth, surely. I'm not saying this wasn't worth it. But was it? How would we know? We can't do this for everything.
I always thought it was Oscar Wilde who said, "socialism will never work because there aren't enough evenings in the week." Turns out Wilde never said this (as far as anyone can now tell), but Michael Walzer did, in a zany 1968 essay in Dissent. As Walzer puts it:
Self-government is a very demanding and time-consuming business … and when the organs of government are decentralized so as to maximize participation … it may well require almost continuous activity, and life will become a succession of meetings.
We can assume that a great many citizens, in the best of societies, will do all they can to avoid [all this]. While the necessary meetings go on and on, they will take long walks, play with their children, paint pictures, make love, and watch television. They will attend sometimes, when their interests are directly at stake or when they feel like it. But they will not make the full-scale commitment necessary for socialism or participatory democracy. How are these people to be represented at the meetings? What are their rights?
In the real world, participation is costly. It's not just money – people are busy. The whole thing can easily go sideways, and be "captured" by the interests with the most money at stake. For instance, in the 1970s and 80s, new land use laws were introduced, requiring additional approvals before construction projects could start. The idea was to protect the environment and prevent developers from riding roughshod over poor neighborhoods. But in practice these laws have also become tools for wealthy homeowners to resist new housing and mass transit that the country urgently needs.
Bruce Cain argues that these kinds of mechanisms – all the new committees and meetings – reflect a "populist distrust [of] representative government" that reaches back to the American founding. It's the same impulse that gave us the separation of powers, to keep our representatives honest. But now, reformers are pushing for "greater citizen control over public officials by maximizing opportunities for transparency, participation, observation, and control." The fly in the ointment is that most people just don't have time to do that stuff: "Populism's hold on the modern political reform community rests on denying cognitive reality and promising unmediated citizen empowerment."
Cain argues instead for what he calls "pluralism," in which the system relies on expert advocates from different groups to push the buttons of power on behalf of competing interests. If we plan for too much popular input, we're kidding ourselves. As Walzer said, it's reasonable to fear that "Participatory democracy means the sharing of power among the activists. Socialism means the rule of the men with the most evenings to spare."
So is the transplant story "populist" in this sense? Not quite, I'd say. There's lots of rhetoric about "public" input. And a single transplant patient – flying to a meeting full of doctors at his own expense – did play a key role in the story. But most of the work was done by intermediaries.
Beyond the practicalities, there may also be a deeper argument against asking lots of people to weigh in on "tragic choices" like picking which dying person to save: Our values might not survive the confrontation.
Moral obfuscation, of the kind that happens now via algorithms, existed before software, and it at least arguably serves important goals and is an unavoidable component of modern life. We can’t re-open all the hard choices, all at once, all the time — even if we wanted to. Some people focus their limited supply of ethical attention on factory farming, or on what's happening in Afghanistan, or on child and maternal health.
In the transplant world we'd like to value every life equally – but in some sense we can't, because ultimately, only one patient can receive each organ. In their 1978 book Tragic Choices, Calabresi and Bobbitt explore situations like these, when every possible way forward requires us to violate our society's most basic principles. Looking at such a choice straight on can be a really painful experience for everyone, and so (given that the choice is, in a case like transplant, unavoidable), society has found ways to make it seem decent or tolerable, rather than like a horrifying daily travesty. If we stare a choice like that straight in the face, Calabresi and Bobbitt seem to fear, we'll go mad, or pay "a price in ideals," being forced to acknowledge that our system doesn't really work. To make the situation tolerable, we need to find a way through the moral conflict. And in that goal, numbers and algorithms can help. Take an algorithm's decision that Alice and not Bob will get a heart transplant that each one urgently needs:
By making the result seem necessary, unavoidable, rather than chosen, it attempts to convert what is tragically chosen into what is merely a fatal misfortune. But usually this will be no more than a subterfuge, for, although scarcity is a fact, a particular … decision [for instance about who gets something] is seldom necessary in any strict sense.
In other words, the moral anesthesia that comes with quantifying ethical choices, and putting them through software, is sometimes a mercy.
We learn not to explain these impossible choices in moral terms, so that we can make it through the day. But then, the terrible moral tradeoffs are happening anyway. When does it make sense to pry the lid off and peer inside? When should the "subterfuge" of neutrality be dropped, and the impossible problem honestly confronted?
C&B say that whether the process is worth it “depends on whether, now that we are aware of what we are doing, we can do sufficiently better [than before the process started] to make up for the costs of clearly choosing. But whether we can or not, we cannot turn back: we now know that either way, we are choosing to take some people’s lives.”
I'm not persuaded by the Tragic Choices argument that it's important to tiptoe around the inevitable contradictions in our society's core ideals. But, their idea about when to bother having an ethical debate does make sense to me.
For instance, in the transplant case, the reason it was worth trying to collaboratively and publicly rewrite the kidney algorithm was not just that the algorithm was ethically significant but also (in part) that the old algorithm was no longer working efficiently. The redesign not only let everyone fight for their preferences, but also ultimately allowed more lives to be saved. We should bother with an ethical debate about an algorithm when, by bothering with the debate, we can actually make things better.