One way to think about why nonprofits are so inefficient is that they operate in what is effectively a monopsony: a centralized market with only one or two buyers of the services the nonprofits are offering. This makes the strategy, and the particular quirks, of those buyers very important.
This is far from a new point, but if you’re wondering why a certain philanthropic funding landscape may be broken, look at the incentives.
Take AI safety, the landscape I know best. A few funders—primarily Open Philanthropy (which advises Good Ventures, funded by Dustin Moskovitz and Cari Tuna) and the Survival and Flourishing Fund—fund the vast majority of work in the space.1 Smaller funders often just follow suit. Therefore, nonprofits will be optimized toward providing the best “safety services” for those organizations. By default, this means almost all strategic thinking must come from that funder, or from outsiders who change the funder’s mind.
This collapses a potentially vibrant landscape of strategic thinking down to the narrower set of ideas that filters through to the funder. With a goal as unwieldy and multidisciplinary as “make AI go well” (spanning technical research, policy advocacy, public communication, and institutional design), this centralization is especially costly.
This suggests a few ways to make the funding landscape more efficient:
1. Publish the ultimate goal of the funding. If the goal is “ensure that we reach an intelligence explosion, but with safeguards in place,” say that explicitly. If it is “prevent superintelligence from being built while remaining within the Overton window,” say that too. And if it is some mixture, or “robustly good,” or anything like that, at least make this public.
Of course, in practice, these organizations cannot be perfectly coherent, and much funding-relevant information must necessarily remain non-public. But having some insight into their worldviews would be tremendously helpful.
2. Publish the process behind funding decisions for any particular organization. How were grants evaluated? What constraints exist, in practice, on what can and cannot be funded? And for any given proposal that passes an intuitive check of “cost-effectively achieves goal X” but was rejected: why?
This has the added benefit of informing smaller funders in the space, so that everyone involved can make better decisions about how their grants will funge (i.e., displace another funder’s dollars) against each other.
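To make funging concrete, here is a toy sketch in Python. The numbers and function names are entirely hypothetical, not drawn from any real grant data; it just shows how a small funder’s counterfactual impact shrinks when a large funder would have covered the grant anyway:

```python
# Toy model of funging: if a large funder would have covered a grant
# that a smaller funder makes, the large funder's dollars are freed up
# for its next-best option. All numbers here are hypothetical.

def counterfactual_impact(grant, big_funder_would_fund, next_best_value):
    """Impact of a small funder's grant, accounting for funging.

    grant: dollars the small funder gives to the project
    big_funder_would_fund: True if a large funder would have funded it anyway
    next_best_value: impact per dollar of the large funder's marginal grant
    """
    if big_funder_would_fund:
        # The small funder's gift displaces the large funder's dollars,
        # which flow to the large funder's next-best opportunity instead.
        return grant * next_best_value
    return grant * 1.0  # full counterfactual impact (normalized to 1.0/dollar)

# A $100k grant to a project a large funder would have covered anyway
# effectively buys that funder's marginal grant, here worth 0.6x as much.
print(counterfactual_impact(100_000, True, 0.6))   # 60000.0
print(counterfactual_impact(100_000, False, 0.6))  # 100000.0
```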
3. Publish information about funding flows. How much funding is going in? How much is available? What are the discount rates? What is the projected risk?
For example, the discount rates in AI safety are very high—because of future money entering the field both for and against safety work (such as from a potential Anthropic IPO, or increased public salience), the “inflation” in the market (with superPACs like Leading the Future, valuable goods like congressional attention become much more expensive), the path-dependency of policy decisions, and disappearing windows of opportunity.
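As a rough illustration of what a high discount rate implies, here is a minimal sketch with made-up rates, showing how much impact-equivalent value a grant loses by waiting:

```python
# Hypothetical numbers: present value of money deployed later, under the
# kind of high effective discount rate described above (incoming money,
# rising prices for goods like congressional attention, closing windows).

def present_value(amount, annual_discount_rate, years):
    """Discount a future grant back to today's impact-equivalent value."""
    return amount / (1 + annual_discount_rate) ** years

for rate in (0.05, 0.30, 0.50):  # low vs. plausibly-high effective rates
    pv = present_value(1_000_000, rate, years=2)
    print(f"rate={rate:.0%}: $1M deployed in 2 years ≈ ${pv:,.0f} today")
# At a 50% effective rate, waiting two years costs over half the value.
```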
4. Be wary of setups that may look like they’re aggregating many independent preferences (such as certain regranting or advising processes) but are actually correlated, because the regrantors or advisors share dispositions and blind spots. You don’t get exposure to true diversity, and you double-count if everyone is in the same informational and social environment or has the same pedigree.
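One way to see the double-counting is a small Monte Carlo sketch (the model and numbers are my assumptions, not the post’s): if advisors’ errors share a common component with pairwise correlation rho, averaging more of them stops reducing error almost immediately.

```python
# Monte Carlo sketch: N advisors each give a noisy estimate of a grant's
# true value. If their errors are correlated (shared background, shared
# social environment), averaging barely helps. Numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def avg_error_std(n_advisors, rho, sigma=1.0, trials=100_000):
    """Std. dev. of the averaged estimate's error, pairwise correlation rho."""
    # One-factor model: each advisor's error = shared component + own noise,
    # giving per-advisor variance sigma^2 and pairwise correlation rho.
    shared = rng.normal(0, sigma * np.sqrt(rho), size=(trials, 1))
    indep = rng.normal(0, sigma * np.sqrt(1 - rho), size=(trials, n_advisors))
    return (shared + indep).mean(axis=1).std()

for rho in (0.0, 0.7):
    print(f"rho={rho}: 3 advisors -> {avg_error_std(3, rho):.2f}, "
          f"30 advisors -> {avg_error_std(30, rho):.2f}")
# With rho=0, error shrinks like 1/sqrt(N); with rho=0.7 it plateaus near
# sqrt(0.7) ≈ 0.84, so adding advisors stops buying independence.
```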
5. Funders should stimulate their own demand! A funder that wants to make a certain thing happen, or enable a technology, should build enough expertise to make it happen, then write an RFP. Pull the market rather than passively accepting whatever happens to be available. Make your preferences known: if they are objectionable, this will be clear to other potential funders and may motivate them to enter and change the landscape.
Even with companies funding nonprofit work—where priorities are obvious (improve the bottom line) and the invisible hand of the free market pushes for greater efficiency—there are still huge gains to be had.
At least for AI safety, you’d think agreeing on the goal wouldn’t be a problem. (Almost) no one wants human extinction. But acting consistently with that goal is hard.
Philanthropy is a market, too. Shape it like one.
1. In fairness, these funders are far better than most. Open Philanthropy and Good Ventures have, compared to funders in other fields, some transparency (and will say when they are, e.g., exiting sub-cause areas, even if not specifying which), and SFF has spoken about its own funding process at length.