By Calum Carmichael.

(The full series is downloadable as a PDF: What Can the Philanthropic Sector Take from the Downfall of Samuel Bankman-Fried and His Ties to Effective Altruism, a five-part series by Calum Carmichael, 2023.)

Setting the stage for parts 3, 4 and 5 of this series

In September 2022, the prescient but pseudonymous Sven Rone anticipated the fallout that Effective Altruism (EA) would experience before year’s end:

“By relying heavily on ultra-wealthy individuals like Sam Bankman-Fried for funding, … the Effective Altruism community does not appear to recognize that this creates potential conflicts with its stated mission of doing the most good by adhering to high standards of rationality and critical thought…. [A]ttacks on the image of SBF, FTX and even crypto as a whole carry the risk of tarnishing EA’s reputation. Were SBF to be involved in an ethical or legal scandal (whether in his personal or professional life), the EA ecosystem would inevitably be damaged as well.”

In November 2022, following the bankruptcy of FTX International and with its founder Samuel Bankman-Fried (SBF) under investigation, The Economist referred to that fallout:

“The downfall of Mr Bankman-Fried, who has been apparently dedicated to the [EA] cause since his time at university, has led to a reckoning. Not only has effective altruism lost its wealthiest backer; its reputation has been tarnished by association. Many inside and outside the community are questioning its values, as well as the movement’s failure to scrutinise its biggest funder—something particularly painful for a group that prides itself on logically assessing risk.”

In the same month, Kate Aronoff described EA’s reckoning as the one bright spot in the downfall of SBF:

“This rethinking of effective altruism may be the one bright spot in an otherwise depressing crash …. It’s good that FTX’s collapse is finally making people rethink Bankman-Fried and effective altruism.”

Also in the same month, Erik Hoel speculated on whether such rethinking would lead to the demise of EA:

“Sam Bankman-Fried, affectionately known as SBF, was until recently effective altruism’s biggest funder…. If in a decade barely anyone uses the term ‘effective altruism’ anymore, it will be because of him….”

Part 3: Questioning the philosophical foundations of Effective Altruism

Introduction

Late in 2022, the bankruptcy of FTX International and the criminal charges brought against the crypto entrepreneur SBF re-focused and intensified existing criticisms and suspicions of EA – the approach to philanthropy with which he was closely associated. Part 1 of this series summarized those criticisms under seven points: two each for the philosophical foundations and ultimate effects of EA, and three for its analytical methods. Part 2 described EA: its origins, ethos, analytical methods, priorities and evolution. Parts 4 and 5 will take up the criticisms and rejoinders that apply to EA’s analytical methods and ultimate effects. Here in part 3, I focus on the two criticisms and their rejoinders that apply to its philosophical foundations. Before discussing each criticism, I provide several references to it that were made in reaction to the downfall of SBF.

Throughout, my goal isn’t simply to present contending views on the foundations, methods and effects of EA, but to derive from them implications and questions for the philanthropic sector as a whole – so that, regardless of our different connections to the sector, we can each take and possibly apply something from the downfall of SBF and his association with EA.

Criticism #1: The ethical bases of EA rely on a narrow version of utilitarianism to the exclusion of other ethical theories or considerations, such that it encourages its adherents – through their philanthropy – to pursue purportedly good ends using potentially harmful or corrupting means.

“The question is: was the FTX implosion a consequence of the moral philosophy of EA brought to its logical conclusion?” Erik Hoel, November 2022

“The problem for effective altruists is not just that one of their own behaved unethically. There is reason to believe that the ethos of effective altruism … enabled and even encouraged the disaster at every step along the way…. [I]t is little more than a fancy way of saying ‘the ends justify the means’.” David Z. Morris, November 2022

“One key feature of utilitarianism is that it doesn’t rule out any kinds of actions unilaterally. Lying, stealing and even murder could, in certain situations, yield the overall best consequences…. That doesn’t mean that an effective altruist has to say that stealing is okay if it leads to the best consequences. But it does mean that the effective altruist is engaged in the same style of argument.” Jeff Dunn, November 2022

“If there’s a lesson to be learned from the collapse of FTX, it’s this: ethics is not the result of calculated consequences. If there’s any good to emerge from the rubble, it’s this: the demise of utilitarianism as a spiritual guide.” Michael Cook, November 2022

Holden Karnofsky

Holden Karnofsky, a thought leader in the EA community, voiced mild concerns that utilitarianism could weaken the trustworthiness of effective altruists.

This first line of criticism against the philosophical foundations of EA focuses on their connections with utilitarianism and its premise that actions are moral to the extent their consequences promote total well-being. Sure enough, utilitarianism informs EA, whether through the writings of thought leaders such as Peter Singer or William MacAskill, the outlooks of the majority of effective altruists as surveyed in 2017, or the analytical methods used to identify the philanthropic causes or interventions capable of doing “the most good.” And sure enough, SBF aligned himself with utilitarianism early on. At the age of 20 – perhaps influenced by his parents, both of whom are professors at Stanford Law School – he described himself as “a total, act, hedonistic/one level (as opposed to high and low pleasure), classical (as opposed to negative) utilitarian; in short, I’m a Benthamite.” Both parentheses are original.

According to some critics, the presence of utilitarianism has “poisoned” or “corrupted” EA, in part by inviting “‘ends justify the means’ reasoning, … [and a] maniacal fetishization of ‘expected value’ calculations, which can then be used to justify virtually anything”, ranging from business fraud all the way to such things as bestiality and murder. Even within the EA community there are some thought leaders – Holden Karnofsky being one – who have voiced milder concerns that utilitarianism could weaken the trustworthiness of effective altruists: “Does utilitarianism recommend that we communicate honestly … [or] say whatever it takes … stick to promises we made … [or] go ahead and break them when this would free us up to pursue our current best-guess actions? …. My view is that – for the most part – people who identify as EAs tend to have unusually high integrity. But my guess is that this is more despite utilitarianism than because of it.”

William MacAskill, 2018

“The Reluctant Prophet of Effective Altruism,” a New Yorker article, describes how William MacAskill’s movement set out to help the global poor and how his followers now fret about runaway A.I. The article asks: have they seen our threats clearly, or lost their way?

Among external critics of EA, the unease around utilitarianism often focuses on the “earning to give” strategy – the idea promoted by 80,000 Hours that for some effective altruists a career with social impact might involve working not in positions that tackle major problems directly, but in high-paying jobs that allow them to donate more to organizations tackling those problems effectively. As noted in parts 1 and 2, it was this strategy that MacAskill proposed to the undergraduate SBF, and of which SBF came to be the most prominent and praised exemplar. Some argue, however, that “earning to give adds a darker possibility of rationalizing unethical means in service of virtuous ends.” This could take several forms.

First, the strategy could place well-intentioned people in work environments likely to erode those intentions. For example: “the idea that getting rich is good (or even obligatory) so long as you’re giving enough of it away, can become a justification for embracing a soul-corroding competitiveness while telling yourself you’re just doing it for the greater good.” Alternatively, “the Spartan tastes and glittering ideals of do-gooder college students rarely survive a long marinade in the values and pressures and possibilities of expansive wealth.”

Second, it could encourage people to accept careers that are high-paying but socially harmful, or to undertake business practices that are profitable but shady: “[i]t’s easy to see how this could translate to: Go work in crypto, which is bad for the planet, because with all that crypto money you can do so much good.”

Samuel Bankman-Fried in 2022

‘The experience of SBF is a warning that if you are the type to try and make billions, you should worry that your ethics are vulnerable along the way,’ says internet writer Zvi Mowshowitz of SBF (pictured above).

Finally, the strategy might attract duplicitous or at least susceptible characters from the get-go: “[the experience of SBF] is also a warning that if you are the type to try and *make* billions, you should worry that your ethics are vulnerable along the way.” Italics are original. According to one commentator, the italicized warning could also have applied to types who try to receive billions: “It is possible that MacAskill and his peers recognized that running a crypto exchange was inherently unethical, but concluded that it was nevertheless justifiable given the scale of the good that SBF’s fortune would do.”

Rejoinders to criticism #1

There are rejoinders to these criticisms of the role and effects of utilitarianism and the earning-to-give strategy. First, as an ethical theory, utilitarianism offers not a cut-and-dried, how-to manual for day-to-day use, but rather a general framework for thinking about what makes actions moral. Like environmentalism or feminism, it provides sufficient latitude for people holding different moral outlooks or priorities to partake.

Second, EA isn’t simply utilitarianism – despite SBF labeling it “practical utilitarianism.” EA makes no claim that one must sacrifice one’s own interests or those of another to serve the “greater good,” nor does it specify or insist upon what the “greater good” comprises. Sure enough, as noted in part 2, the EA organization Giving What We Can encourages members to donate at least 10% of their income in perpetuity to the charities found to be most effective. But such a standard isn’t unique to EA: comparable tithing norms exist in Judaism and Christianity. Moreover, in 1996 the philosopher Peter Unger developed arguments akin to those Singer made in 1972 – ones that could have similarly inspired EA – but unlike Singer, he did so while disavowing any particular ethical theory, including utilitarianism.

For more about ethical theories, read “What does Batman have to do with philanthropy? A series about ethics (or lack thereof) in our sector,” by Calum Carmichael: https://carleton.ca/panl/ethics.

Third, effective altruists aren’t all utilitarian: although MacAskill is thought to be, his co-founder of Giving What We Can, Toby Ord, isn’t; and although the majority surveyed in 2017 said they were, a sizable minority said they weren’t, affiliating instead with another ethical theory (e.g., deontology or virtue ethics) or none.

Fourth, the underlying principles of any ethical theory, if carried to the extreme, could be used to justify abhorrent behaviour. Sure enough, as some critics of EA argue, fanatical utilitarianism could be used to justify the murder of one to save the lives of two. But then again, fanatical deontology could be used to justify not telling a lie even to save the life of an innocent victim. And fanatical virtue ethics could be used to justify sectarian indoctrination or bloodshed. Thus, using extreme extrapolations to declare utilitarianism – or deontology, or virtue ethics – a “flawed philosophy” that has “corrupted” EA isn’t only a logical fallacy, but also – if carried to the extreme – a line of reasoning that would dismantle Western ethical thought.

Fifth – turning to the dangers of recommending the “earning-to-give” strategy – such recommendations are infrequent, made perhaps to 15% of effective altruists. For most, careers combining social impact with a better personal fit would come from working directly on important problems – whether through nonprofits, charities, social enterprises, universities, think tanks, government or political organizations.

Sixth, when recommended, earning-to-give comes with guidelines: for example, don’t pursue a career that violates the rights of others or that entails fraud, such things being bad both in themselves and in their likely consequences; don’t enter or stay in a job where “there is a large gap between your daily conduct and your core commitment”; more generally, “avoid doing anything that seems seriously wrong from a commonsense perspective”; and “in the vast majority of cases” don’t pursue “a career in which the direct effects of the work are seriously harmful, even if the overall benefits of that work seem greater than the harms.” To be sure, by estimating the donations that would compensate for the harmful aspects of a career or by inserting phrases like “a large gap” or “a commonsense perspective” or “in the vast majority of cases,” the guidelines could set up slippery slopes toward profitable but bad behaviour or lucrative but harmful careers. And admittedly such warnings may not have penetrated the thinking of SBF, who claimed “I would never read a book. I’m very skeptical of books.”

Nevertheless, the insertion of such “fudge factors” within the guidelines provides the agency that effective altruists would need to make their own moral decisions around actions that might be bad in themselves but good in their side effects: actions akin to spanking a child to discourage cruel behaviour, or telling a lie to protect an innocent life. Such trade-offs exist in all walks of life, and credible, moral decisions regarding them aren’t necessarily categorical.

Criticism #2: EA excludes human emotion or relationship as guides to philanthropic choice, such that it undercuts philanthropists’ agency and overlooks or opposes key aspects of human motivation.

“Many EA folks come from tech; many also consider themselves ‘rationalists,’ interested in applying Bayesian reasoning to every possible situation. EA has a culture, and that culture is nerdy, earnest, and moral. It is also, at least in my many dealings with EA folks, overly intellectual, performative, even onanistic.” Annie Lowrey, November 2022.

“What the ‘effective altruism’ types believe in is that they can replace the inferior, subjective standards of the plebs with the superior, objective standards of the ruling class…. Armed with these tools, … [they] feel empowered to do an unhinged collection of immoral things because, frankly, they are saving the world.” Cauf Skiviers, November 2022.

Eliezer Yudkowsky, an American AI researcher

Eliezer Yudkowsky, AI researcher, argues that charitable giving shouldn’t be about human feelings. “A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain’s feelings of comfort or discomfort with a plan.”

This second line of criticism against the philosophical foundations of EA focuses on their undercutting donors’ agency by discouraging them from choosing philanthropic causes freely in response to their own unfiltered emotions, interests or relationships. Instead, EA uses impartial and impersonal criteria to pre-select causes and interventions that are cost effective in saving or improving lives and then asks donors to choose from these. As explained by Peter Singer: most charitable donations are “given on the basis of emotional responses to images of the people, animals, or forests that the charity is helping. Effective altruism seeks to change that by providing incentives for charities to demonstrate their effectiveness.” Or as put more bluntly by the effective altruist Eliezer Yudkowsky: “This isn’t about your feelings. A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain’s feelings of comfort or discomfort with a plan. Does computing the expected utility feel too cold-blooded for your taste? Well, that feeling isn’t even a feather in the scales, when a life is at stake. Just shut up and multiply.” Indeed, SBF endorsed such reasoning even in choosing among the cost-effective causes pre-selected by EA – eschewing those he considered “more emotionally driven,” such as global poverty and ill health, which threaten millions of lives at present, and preferring instead those he considered more intellectually driven, such as runaway artificial superintelligence that could conceivably exterminate trillions in the distant future.

EA’s use of impartial and impersonal criteria to pre-select causes has been criticized for both what it overlooks in the world and denies in the individual. In terms of what it overlooks, the criteria used by EA focus on concepts “of individual needs and welfare, rather than power, inequality, injustice, exploitation, and oppression”. By omitting the latter set of concepts, EA gives short shrift to conditions that are inherently important to our quality of life.

Bernard Williams

Philosopher Bernard Williams argued, in 1973, that the impartiality prescribed by utilitarianism is neither possible nor desirable.

In terms of what it denies, EA’s reliance on utilitarianism and impartiality requires individuals to forgo “the things that constitute us as humans: our personal attachments, loyalties and identifications” along with “the complex structure of commitments, affinities and understandings that comprise social life.” Moreover, imposing a “point-of-viewless” impartiality “deprives us of the resources we need to recognise what matters morally.” The social world is “irreducibly,” “irretrievably” and “ineluctably” normative such that acting morally does not require “acting with an eye to others’ well-being” but rather acting with a “just sensitivity to the worldly circumstances in question.” As a result, EA’s “image of the moral enterprise is bankrupt and … [the] moral assessments grounded in this image lack authority.” Such concerns echo those of the philosopher Bernard Williams who argued in 1973 that the impartiality prescribed by utilitarianism is neither possible nor desirable: it’s not possible given that individuals cannot step outside their own skin; and it’s not desirable if, like Williams, one assumes that our individual well-being depends upon our ability to decide and act freely in accord with our own concerns, purposes or deepest convictions and not become a conduit for the initiatives or claims of others – including the claim that we should replace our own convictions with the “impartial point of view” needed to maximize total utility.

Rejoinders to criticism #2

Peter Singer

There are rejoinders to the criticisms of what EA overlooks and denies. First, when it comes to overlooking justice or equality or freedom, Peter Singer admits that effective altruists “tend to view values … [like these] not as good in themselves but good because of the positive effects they have on social welfare.” And yet, within EA there’s no “party line” on that front. Indeed, given EA’s commitment to cause neutrality and means neutrality, MacAskill claims that, in principle, if it can be demonstrated that advancing such values directly is a “course of action that will do the most good … then it’s the best course of action by effective altruism’s lights.” That said, putting this principle into practice is difficult: it would require agreement at the outset on what justice or equality or freedom entails, for whom, and how it and its effects can be measured. To date, such difficulties have limited EA initiatives to ones that advance equality or justice indirectly: say, countering inequality by alleviating the effects of poverty; or addressing injustice by promoting election reform, criminal justice reform or international labour mobility.

Second, directly pursuing justice or equality or freedom internationally could introduce forms of cultural domination and colonization: imposing Western concepts on non-Western societies, and exercising philanthropic spending power in ways that could silence or subvert local priorities or challenge the sovereignty of host nations.

Third – turning to the denial of donor agency and the suppression of emotion – MacAskill argues that EA seeks to harness such things, not eliminate them. Some effective altruists may choose to adopt an impartial perspective if given evidence that this would allow their philanthropy to do more good for more people. Moreover, as proposed by economist Tyler Cowen, “an inescapable feature of human psychology means at the normative level, there’s just no way we can fully avoid partiality of some kind.” In order to recognize and respond to a cause or need, we need first to identify with it and with the people or entities involved. The direction and degree of such identification differ across donors, and to honour these differences EA presents a menu of alternative cause areas and interventions deemed cost effective.

And fourth, the critics of EA who echo Williams’ insistence that morality is essentially first-personal rather than impersonal risk undermining the responsive regard for others that is the very basis for, and indeed the original meaning of, philanthropy: according to philosopher Jeff McMahan, “the importance to oneself of one’s own projects and attachments limits the extent to which morality can demand that one provide assistance to others.”

What can we take from the downfall of Samuel Bankman-Fried with regard to the philosophical foundations of Effective Altruism?

Journalist Kelsey Piper

Journalist Kelsey Piper was surprised by the willingness of SBF to be interviewed on Twitter after news broke that his cryptocurrency exchange had collapsed, with billions in customer deposits apparently gone.

Is SBF the exception that proves the general rule that the philosophical foundations of EA are sound? Or is he the example that demonstrates they aren’t? Or is he neither? How did the utilitarianism he professed as a student in 2012 apply in his professional life a decade later? Did he use it as an ethical theory to guide and justify his actions, or as a smoke screen to obscure them? Did he consider his own ethical protestations sincere but those of his competitors a marketing ploy? Or was he just like the others? Such questions weren’t answered definitively in his infamous Twitter exchange with journalist Kelsey Piper, soon after he came under investigation in November 2022:

Piper: “So the ethics stuff – mostly a front? People will like you if you win and hate you if you lose and that’s how it all really works?”

SBF: “Yeah. I mean that’s not *all* of it. But it’s a lot….”

Piper: “You were really good at talking about ethics for someone who kind of saw it all as a game with winners and losers.”

SBF: “Ya. Hehe. I had to be. It’s what reputations are made of, to some extent. I feel bad for those who get fucked by it. By this dumb game we woke westerners play where we say all the right shiboleths (sic) and so everyone likes us.”

By Ord’s account: “I don’t think anyone fully understands what motivated Sam (or anyone else who was involved). I don’t know how much of it was greed, vanity, pride, shame, or genuinely trying to do good…. [If he remained a utilitarian, then] it increasingly seems he was that most dangerous of things – a naive utilitarian – making the kind of mistakes that philosophers (including the leading utilitarians) have warned of for centuries…. [T]he sophistications that he thought were just a sop to conventional values were actually essential parts of the only consistent form of the theory he said he endorsed.”

To my mind, it’s unclear what role the philosophical foundations of EA played in the professional decisions of SBF. Hence, to judge those foundations by those decisions would be misleading. Nevertheless, his downfall revived two lines of criticism that raise issues and questions relevant not only to EA but also to the philanthropic sector as a whole. I select three.

1. What are or what should be our ethical anchors?

David Z. Morris

David Z. Morris, a writer about crypto topics and author of “Bitcoin is Magic,” wrote: “The problem for effective altruists is not just that one of their own behaved unethically. There is reason to believe that the ethos of effective altruism… enabled and even encouraged the disaster at every step along the way…”

As noted above, EA has been criticized for its ties to utilitarianism and the premise that actions are moral to the extent their consequences promote total well-being.

But what gives meaning or moral worth to our engagement with the philanthropic sector – whether as donors, volunteers, workers, advisors, collaborators or beneficiaries? Has it to do with the outcomes of our actions and whether they’re good, or the duties and rules fulfilled by our actions and whether they’re right, or the personal qualities underlying our actions and whether they’re virtuous? How do we assess, perhaps question and possibly improve that goodness, rightness or virtue? Are there limitations or dangers in the standards we use? How do we work with others or in contexts that value standards different from or contradictory to our own? To what extent can we temper or change our own standards without losing our way?

If these questions seem irrelevant to how and why you engage with the philanthropic sector, why is that? Would you feel challenged by someone who sees them as fundamentally important?

2. How do we decide upon actions that on the one hand could be harmful or problematic in themselves, but on the other hand could allow us to do more and better things?

As noted above, EA has been criticized for tolerating actions that might be intrinsically bad but instrumentally good: say, accepting donations from crypto, or recommending – albeit with cautionary guidelines – that some effective altruists pursue high-paying but perhaps corrupting or socially harmful careers that would nevertheless enable them to donate more.

Harvey Weinstein

The Guardian newspaper reported that Harvey Weinstein offered $5 million to support female filmmakers (following multiple claims of sexual harassment against him), an offer rejected after widespread criticism. Photo is courtesy of David Shankbone.

But how do or should we manage similar situations? For example, when and why should a charity refuse or return a donation? Or when and why should a charity refuse or terminate a partnership with a for-profit corporation? Should we share the outlook associated with William Booth, who co-founded the Salvation Army in 1865, that “the trouble with tainted money is t’aint enough of it”? If not, then where do we draw the line? By what criteria does “tainted” become “unacceptable” – apart from being criminal? What sources of donations would violate your own values, oppose the mission of a charity you deal with, or trigger irreparable reputational harm in the eyes of the public or key stakeholders: tobacco, alcohol, cannabis, extractive industries, nuclear power, social media, airlines, crypto, the pharmaceutical industry, a religious foundation? Would the size or purpose of the donation make a difference to your decision?

Consider the following timeline for SBF. By 2013 he had affiliated with EA. In 2014 he took up the earning-to-give strategy, working at Jane Street Capital and donating half his salary. He started to build his crypto empire in 2017. Although crypto may be of disputed social value, it’s not illegal. And although his promotional strategies may have been questionable (e.g., placing ads during the Super Bowl or in The New Yorker and Vogue magazines), they’re not unprecedented. Sure enough, starting in 2018 EA leaders received personal reports that he was duplicitous, refused to implement standard business practices, and had inappropriate sexual relations with subordinates. But these reports weren’t circulating publicly, didn’t allege any criminal activity and could simply have been rumours spread by disgruntled associates. Few if any foresaw the devastating events of November 2022. Certainly investors like the Ontario Teachers’ Pension Plan didn’t see them coming.

At what point during that timeline would you or a charity you deal with have refused or returned, say, a $1 million donation from SBF?

3. What ways should or should not be used to influence donors’ decisions on how much and where to give?

“Perhaps you know of campaigns that have been truly ‘donor-centric’ in the sense of not resorting to practices that could sway or nudge their prospects into acting against their interests or priorities.” –Calum Carmichael. Photo is courtesy of Christine Roy.

As noted above, EA has been criticized for constraining the agency of donors in deciding the amounts and destinations of their giving. It recommends donating at least 10% of one’s income, discourages acting on personal relationships and emotive appeals, and encourages a reliance on impersonal indicators of cost-effectiveness. As a result, some claim that it both denies individuals the ability to decide and act on their own concerns, purposes or deepest convictions, and overlooks normative but hard-to-pin-down goals such as liberty or justice.

But if the charge against EA is that it tries to sway donors – in other words, alter their conception of their own interests in ways that would have them act in a contrary manner – then could the same charge be leveled against other, if not all, fundraisers or fundraising campaigns, in the sense that they do the same thing, albeit on different terms? Such campaigns might employ communication and relationship-building techniques designed to persuade. Such techniques might work on emotive rather than cognitive grounds, providing only selective information and relying on narratives or verbal or visual images that evoke rather than document. They might adjust the goalposts of “impact” to match what can be evoked emotively, and encourage compliant donors to think of themselves as “generous” or “visionary” and their gifts as “transformative” or “inspired.”

Could such campaigns be faulted for tampering with donor agency?

Perhaps you know of campaigns that have been truly “donor-centric” in the sense of not resorting to practices that could sway or nudge their prospects into acting against their interests or priorities. If so, then – as suggested by the taxonomy constructed by Ian MacQuillin – could such campaigns come at the expense of important considerations apart from donor agency, including what EA emphasizes: the well-being of actual or potential beneficiaries? Consider, for example, the decision of Leona Helmsley to establish in her will a $12 million trust fund for her Maltese dog, Trouble. Or consider the reassurance offered by Bronfman and Solomon that “[i]n philanthropy, there are no wrong answers…. You might want to fund an antigravity machine or a museum for dust mites. There may be more constructive uses for your money, and these objectives may sound crazy, but there is nothing wrong with them. In philanthropy, the choices are not between right and wrong, but between right and right.”

In closing

William MacAskill

The need for reflection has been both identified by critics of EA and acknowledged by its leaders, such as William MacAskill, who said: “I had put my trust in Sam, and if he lied and misused customer funds he betrayed me, just as he betrayed his customers, his employees, his investors, & the communities he was a part of. For years, the EA community has emphasised the importance of integrity, honesty, and the respect of common-sense moral constraints.” Photo is courtesy of Nigel Stead.

The downfall of Samuel Bankman-Fried has elicited calls for renewed and greater reflexivity within Effective Altruism – the approach to philanthropy with which he was closely associated. The need for such reflection has been both identified by critics of EA and acknowledged by its leaders and members – all seeing this as an occasion to reconsider and perhaps revise its philosophical foundations and analytical methods in the hope of improving its ultimate effects.

To my mind, the process of reflection required within EA is something in which the wider philanthropic sector could participate – or, indeed, should participate.

In part 3 of this series, I’ve summarized the criticisms of EA and their rejoinders as they relate to its philosophical foundations. From these I’ve drawn out several questions that apply to the philanthropic sector more broadly. My intent here, as for the forthcoming parts 4 and 5, isn’t to castigate or exonerate EA. Instead, it’s to point out that the issues on which EA is or should be reflecting are ones that could guide more of us across the sector in reconsidering and perhaps revising our own outlooks and ways of engaging with philanthropy, in our shared hope of improving its ultimate effects.

Banner photo is courtesy of Valdemaras D.

Friday, July 21, 2023