Effective Altruism Is Neither Effective Nor Altruistic
Rich people want to subjugate you, not uplift you.
Image: Sam Bankman-Fried by Cointelegraph, CC
I first published this on John Stoehr’s Editorial Board site. It’s reprinted with his permission.
If a school of philosophy can be considered hot or hip, Effective Altruism (EA), an intellectual movement arguing for rational philanthropy, is hot and hip. But after the dramatic collapse of billionaire EA proponent Sam Bankman-Fried’s cryptocurrency empire, the movement faces a PR disaster.
How could a philosophy designed to promote generous giving have instead led to federal charges of fraud, conspiracy, money-laundering and campaign finance violations?
Leading EA philosopher William MacAskill condemned Bankman-Fried and argued that the philosophy opposed “ends justify the means” reasoning. That is to say, MacAskill does not condone fraud as a means to raise money for worthy causes.
MacAskill and others are still committed to EA. But there’s a strong case that the philosophy lends itself to the uncritical elevation of supposed tech-finance innovator geniuses like Sam Bankman-Fried. Lack of accountability is baked into EA. In many ways, the philosophy is an algorithm not for helping the poor, but for hoarding virtue and power in the hands of those who already possess it.
The most efficient
According to the Center for Effective Altruism, EA “is about using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.”
Even in that short definition, it’s clear that EA centers the decisions and the viewpoint of those with money. Those with funds are tasked with using reason to benefit others. This is not a philosophy of self-advocacy. Nor does it suggest asking people what they need. Instead, the emphasis on reason, efficiency and the role of technocratic arbiters links EA to what sociologist Elizabeth Popp Berman calls the “economic style of reasoning.”
Berman argues that before the 1960s and 1970s, progressives often made arguments on the basis of a universal right to health, equity and security. Arguments like these, founded on claims of human dignity and empowerment, helped pass universal programs like Social Security and Medicare.
However, during the 1970s and later, as part of a conservative backlash to the civil rights movement, progressives began to move away from universal arguments. Instead they started to center “efficiency.” Efficiency meant trying to do the most possible good with the least resources. That led to a focus on means-testing poverty programs, as in Bill Clinton’s welfare reform package.
Efficiency arguments are as focused on making sure that no one gets too much as they are on trying to make sure everyone has enough. From this perspective, if too many tax dollars go to relieve the student debt of the affluent, the debt relief policy is a failure, even if it benefits many. Helping people in itself isn’t enough; you must help the most people in the most efficient way.
Not even a tweet
EA takes this government turn to efficiency and personalizes it. One of the founding philosophers of the movement, Peter Singer, argues in a famous 1972 article that we have a moral imperative to use our resources in the most efficient manner possible to help others.
“People do not feel in any way ashamed or guilty about spending money on new clothes or a new car instead of giving it to famine relief. (Indeed, the alternative does not occur to them.)” Singer writes. “This way of looking at the matter cannot be justified. When we buy new clothes not to keep ourselves warm but to look ‘well-dressed’ we are not providing for any important need.”
According to Singer, we should all be constantly monitoring our expenditures and actions to make sure we perform maximum good. EA imagines the moral life as one of continual ethical self-regulation. We are all the Uber drivers of our own monetized virtue.
The logic is persuasive. After all, isn’t fighting hunger more important than a new shirt? Shouldn’t we eschew consumption to help those in need? The problem is, as Berman points out, that the rage for ethical quantification tends to nickel-and-dime broader moral demands to death.
For example, Anthony Kalulu, a farmer working to end poverty in the Busoga region of Uganda, says he reached out to a hundred effective altruists. He didn’t ask for money. He simply wanted them to post on social media to draw attention to his cause.
None of them would even post a tweet. They all said they only helped the supposedly best charities, such as those vetted by organizations like GiveWell.
The refusal, Kalulu says, “was already preset by EA’s creed of only supporting the world’s ‘most effective’ charities, even when the only help needed is a tweet.”
There’s waste and then there’s waste
EA encourages people to carefully regulate their generosity so they don’t provide aid to anyone who isn’t among the absolutely most deserving poor. And as Kalulu explains, the most deserving are determined by experts and technocrats at Western organizations like GiveWell. These organizations prioritize Western solutions like mosquito nets — which, Kalulu says, have done little to improve his region for generations.
One problem with having experts choose is that they sometimes choose wrong, and when giving is centralized and regimented, a wrong choice creates massive waste. Philosopher Kate Manne, for example, points out that GiveWell has for years advocated deworming as a simple, cheap remedy that vastly improves outcomes for the very poor.
Unfortunately, Manne explains, GiveWell’s recommendation was based on a single paper that had both methodological and arithmetical errors. GiveWell has funneled millions of dollars into what is probably a useless remedy. Nor has it fully admitted its error: the organization continues to advocate for probably useless deworming. EA acolytes then use GiveWell’s false recommendations as an excuse not to tweet in support of solutions proposed by people from affected communities, like Kalulu.
Fake ethics in the name of fake people
Even worse, EA has also advocated for “longtermism” — the idea that helping theoretical people in the future is as important as helping people in the present.
Many longtermists believe that in the future there may be billions and billions and billions of digital people living in computer simulations. Because these theoretical people are so numerous, we have a moral obligation to them that transcends our obligation to the poor now. Therefore, spending money on tech development or on enhancing human intelligence is more important than spending money on … well, anything else.
Thus, as philosophers Olúfẹ́mi O. Táíwò and Joshua Stein point out, in 2021 the EA organization Open Philanthropy spent $80 million to study risks from AI and only $30 million to support the Against Malaria Foundation.
The right to power
In framing virtue as a technique of self-regulation, EA has elevated technocracy to a kind of busted transhumanist theology, and technocrats to gods of the coming simulation. EA insists that centralized credentialed thinkers should make decisions about what giving is most efficient. Input from the marginalized themselves, like Kalulu, is seen as not just superfluous, but as an actual moral error or failing.
Poor people, African people, colonized people, have no status or say by virtue of being poor and colonized. It is only those with the wherewithal to spend, and the vision to regulate their spending, who can even be said to have virtue. Therefore, it is only they who have the right to power.
In that context, Bankman-Fried does not seem like an aberration, but rather like a fulfillment of important currents within EA. As an expert with a great deal of money, he saw himself as better than, and unaccountable to, others with less money and supposedly less expertise.
Many effective altruists are sincere and want to do good. But worshiping the elevated rational choices of the wealthy is not a way to a better world.
This is so similar to how hearing people have treated Deaf people since the Milan “conference” of 1880, which banned signed languages and excluded Deaf people from the proceedings, ultimately admitting only one Deaf ASL speaker, from the American/Gallaudet University delegation. The Gallaudet delegates found themselves the only ones arguing passionately for signed languages; all the other, hearing, attendees decided that what Deaf people really needed was for all signed languages to be banned and spoken speech emphasized.
From that time on, language deprivation has been rampant. When hearing aids exploded in usage in the 1980s, hearing people decided that Deaf people, regardless of their audiogram results, really needed hearing aids, and doubled down on speech therapy to reverse the pervasive language deprivation and inability to speak of most Deaf people up to that point. You do understand what came next? In the early 2000s, hearing people expanded cochlear implants so that very young Deaf babies could be implanted, since they would benefit the most from the noise machines! Alas, same results: exactly as useless as hearing aids.
If only they had listened to Deaf people at any point since 1880, they would have heard the same consistent desire: that all Deaf children and their families learn ASL, and that Deaf children all attend school together, because it is life-altering not to struggle for crumbs of communication or community.
Effective altruism, but for no one. And Bill Clinton’s agreement with right-wingers to make sure that no one got ahead on federal dollars is a pernicious scourge, especially for those on SSI, even now.
This is a good takedown of the latest, Tech-centered version of EA. But I remember being turned off of it 10+ years ago when it was "regular" people going "I'm more virtuous than you because I work at a hedge fund and give away 400k per year". Like sure, the people who are helped by that 400k definitely appreciate it, but the whole philosophy of EA rests on the fact that some people will not be helped because others need it more (and, like you pointed out, he with the money chooses). But where did that 400k come from??
There's no possible way that working at a financial institution is doing less than 400k worth of extraction when it provides 400k worth of salary to an I-banker. And that's my problem with the EA-ers: they're just not thinking big enough (or, to the extent that they are, they're sidetracked, as you've pointed out, by the imagined billions of descendants rather than the real billions of people right here).
Effective Altruism is just one way (of several that I have witnessed -- former FAANG eng here) for people to justify high-status-yet-hopelessly-immoral jobs.