109: David Goldberg – Founders Pledge, Philanthropy and Doing Good

Hosted by
Will Jarvis

In this episode we are joined by Founders Pledge CEO David Goldberg to discuss philanthropy, how best to do good, and how startup founders should think about prioritizing their charitable giving.

William Jarvis 0:05
Hey folks, welcome to Narratives. Narratives is a podcast exploring the ways in which the world is better than in the past, the ways it is worse than in the past, and a better, more definite vision of the future. I’m your host, William Jarvis, and I want to thank you for taking the time out of your day to listen to this episode. I hope you enjoy it. You can find show notes, transcripts and videos at narrativespodcast.com.

Will Jarvis 0:38
Well, David, how are you doing this afternoon?

David Goldberg 0:41
I’m well, thanks. How are you doing?

Will Jarvis 0:42
Great. Thank you so much for taking time to come on the show. Do you mind giving us a brief bio and some of the big ideas you’re interested in?

David Goldberg 0:51
Yeah, brief bio. I’m an American living in the UK. I’ve been here for about a decade now.

I’ve had a slightly weird journey. I started my career as a high school dropout in finance for four years, then started a business in Europe in 2006, which I sold in 2008. I made enough money to think about giving most of it away, and, you know, got pretty frustrated by the state of play in the charity sector and decided I probably needed to learn more. So at 25 I went to university for the first time, and it turns out I like learning as an adult, and I ended up coming to the UK a decade ago to work on a PhD, which I dropped out of to start Founders Pledge. And that’s what I do now, and I’ve been doing that for the last eight or so years, seven and a half years. And big ideas: I’m really interested in how we can coordinate better to solve really tough problems. I mean, if you look at the news these days, all I tend to see is just doom and gloom, and how increasingly precarious the balance we have on this planet is. And I don’t just mean climate, I mean in lots of respects. And by the same token, we’ve never been better off as a people, as a species. We’ve never had, you know, better health, more happiness; all of the big indicators are getting better in lots of respects. But there’s still this precariousness. And so I think that there’s a really interesting space for really smart people to coordinate better as they think about how to apply resources to tough problems. And so I’m excited about that. It’s sort of what we do at Founders Pledge. And, yes, I’m interested in how we leverage communities of exceptional people to do good stuff.

Will Jarvis 2:51
That’s great. That’s great. You brought up a question there I’ve had for a while, which is, it seems like humanity has, in some sense, gotten worse at coordinating around big problems. I’m thinking in the context of Apollo and the Manhattan Project, where we were able to build these incredibly, technically difficult things, when we weren’t even sure they were quite possible, in very short periods of time. But it seems almost impossible to imagine the Department of Energy, which actually grew out of the Manhattan Project, doing that same thing again. It just seems like it wouldn’t work. What do you think has gone wrong there? And how do you think about fixing that kind of problem?

David Goldberg 3:27
It’s a really good question, and actually one that we’ve been thinking about, along with a bunch of other funders in the space that are trying to harken back to those times gone past when humanity could seemingly accomplish these incredible feats in short periods of time. And I think it’s a question of willpower. You know, the Manhattan Project and the Apollo missions were sort of a great power balancing act, right, where you had the entire might and will of the US government and a relatively in-line political class just deciding to throw their weight behind this idea so that America could best the Soviets, in the Apollo case, and defeat Nazism, in the Manhattan Project case. So, like, it’s good that we don’t have threats like that anymore, or the type of monolithic thinking that enables a country like the US, or Europe, to go about sometimes scary types of things; the Manhattan Project was scary. But it also has meant that we’ve really not made big advances in areas that would seemingly unlock a huge amount of good for the world. And one of the funders that we’ve been impressed by recently, that has been thinking about this, is Schmidt Futures, which is Eric Schmidt’s family office that has a big focus on impact. And we’ve been exploring with them these so-called FROs, Focused Research Organizations, that are basically looking into setting up Manhattan Project-style projects for various issues. And so I think it’s a question of a meaningful amount of money, and good coordination between aligned actors deciding to say, we’re just going to sort of throw caution to the wind and fund stuff that we think is going to be transformational for society.

Will Jarvis 5:46
That’s great. I’m curious, is there another element that matters? So when we think about these big mega projects, you mentioned something interesting, right? The great power conflict element seems to me to be really important. So this idea that if you and I, David, were Manhattan Project scientists, and General Groves comes to us and he’s like, you need to work really hard, because if you don’t work really hard we’re going to lose this war, it’s going to be terrible, and the Nazis will come over and take over this country. And that was the story at the time: they’re going to invade North America and eventually come down to wherever you were. And we’ve had Adam Marblestone on the show as well, so this is a great connection from the FRO angle. Is it difficult, you know, if you spin up an FRO for, let’s say, fertility research, and you don’t have this kind of human adversary you’re worried about, the one that forces you, or doesn’t force you but encourages you, to work 80 hours a week on this project? Do people just not work as hard? Is there a way to inspire people to work really hard on these problems that are incredibly important, maybe as important as the problems the Manhattan Project and Apollo were solving, but that don’t have this human enemy element that, to my mind, seems really important for scaring people into working really hard, if that makes sense?

David Goldberg 7:10
Yeah, I’m not sure. I mean, the example that I thought of and then laughed about, because, you know, it’s going to be, I guess, slightly controversial, is that we’ve seen people be motivated to work 80-hour weeks and through weekends on big projects; they tend to be led by Elon Musk, right. So you think about SpaceX and Tesla and just how these two industries have been fundamentally disrupted by this guy who basically inspires action without that threat of, you know, extinction. But through what, I’m not sure. So I guess more Elon might be the answer in some respects, and less Elon in lots of others. I’m not really sure that’s the truth. And I guess maybe it is a bit of: if we don’t do this soon, we’re heading into a brick wall, except the brick wall is not brick, it’s a big, huge metal wall that’s several meters thick, and it’s very hot, and we’re not going to survive a crash with it. And we’re sort of heading in that direction in a lot of respects. I just don’t think everyone has realized it, or really internalized it yet, to the extent that would compel people to work those 80-, 100-hour weeks in the way that might be necessary to develop foundational new technologies.

Will Jarvis 8:41
Absolutely. Absolutely. Well, I have a question. I’m going to jump around a little bit on my outline, I’m sorry, but it is in line here. How concerned are you about AI safety, AI x-risk? It does seem to me like one of the big problems in the AI safety space is a lack of coordination. There is no Manhattan Project for AI safety research, and perhaps if we could coordinate everyone better in some sense, I feel like our odds are better. And Eliezer Yudkowsky is kind of, you know, writing these blog posts where he’s kind of given up and says that we don’t really have a chance; at least in my mind, that’s how I read it. I think we should try really hard to make sure AI safety goes right and AI alignment goes well, and it seems like a coordination problem at some level.

David Goldberg 9:24
It’s definitely a coordination problem. And, I don’t know, maybe the theme of this conversation so far is “I’m not sure,” but I’m not feeling super optimistic, from my perspective. And I’m certainly not an expert in AI safety in the way that Eliezer is, or any of the other much smarter people thinking about this. What I do understand is that the people focused on AI safety research are dwarfed by the people focused on developing AI as quickly as possible. And the difference in budget, scale, and ambition between the ones that want to develop AI quickly, regardless of safety concerns, and those who want to do it safely is hard to fathom; it’s two different leagues. And when you think about it, let’s just constrain it to the West very briefly, and then we’ll expand that sphere: even in the West, where you have relatively aligned political institutions and a relatively aligned set of economic incentives, it still seems intractable to get people to talk to each other. And this is when we’re talking about developing commercial uses for AI. I don’t see how we can coordinate that better absent some new piece of information, some event, something happening that really rallies people together; I’m just not sure what it is. We can throw money at it, we can throw AI safety researchers at it; you know, we’d be much better off with, like, five more Eliezers, but that’s not going to happen. If you expand the sphere to think about China, then it becomes a fundamentally different question, because it’s not about AI for commercial use, it’s AI for dominion, right. And I’m mentioning Eric Schmidt again, but his views on this have been pretty eye-opening, mind-expanding. His view on China is: they are developing AI at a breakneck pace, because whoever develops it first wins everything, right. And, you know, call that what you will, but they are throwing a huge amount of brainpower at it, a huge amount of money, with no real question of safety, as I understand it. And that’s very scary to me.

Will Jarvis 12:24
Well, is it a question, then, you know, if you’re the American President, if you’re Joe Biden, do you just, you know, say, like, please stop, please, please stop? Because we want this first, even if it’s going to be primarily defensive. You know, is the US government in this kind of public choice dilemma, kind of encouraged to just, you know, not fund any AI safety research? We need to be the first ones to get it, because if the first one to get it wins, then, you know, we’re really in trouble. And if everyone in the AI safety field thinks it’s really not possible to align these things, it’s just like, well, I’m going to let the chips fall where they may. That’s not a very, you know, rosy picture.

David Goldberg 12:58
I don’t know. Yeah, I’m not sure. I think it’s maybe potentially a false dichotomy. Like, the Chinese view is, those who get it first win, and sort of the more EA view would be, those who get it first, everyone loses, right. Like, it’s not winner-takes-all, it’s everyone loses. So I think it would probably not be smart to stop AI safety research. If anything, you know, the goal of that research should be, I would expect, better communication between the various labs on this. And I mean, if you look at Metaculus, the forecast for AI timelines has moved very close, like, very significantly recently. And, you know, I’m certainly not an expert; again, not an expert, if I could wear a hat it would say “not an expert.” But lots of people are, and I trust those people. And it seems to me that we’re fast approaching either that brick-slash-metal wall or a very large cliff of some variety.

Will Jarvis 14:18
Definitely. And do you guys support any AI safety, AI alignment researchers at Founders Pledge?

David Goldberg 14:25
Oh, yeah, definitely. I mean, all of the usual suspects. We are big believers in the need to put lots of philanthropic dollars to work focused on developing safe AI, or slowing down the development of AI, and aligning incentives in these types of systems, as well as biosecurity and the broad x-risk areas. I mean, we’ve also done research on lethal autonomous weapon systems, which we’ve just put out. And we’ve done research, published at the end of last year, on preventing great power war, which we also consider to be part of this longtermist worldview bucket of funding opportunities. And the representation of future generations in democratic institutions today.

Will Jarvis 15:20
That’s cool. How do you think about ranking these interventions, evaluating new cause areas? Do you guys have a framework you use at Founders Pledge to try and figure out what the highest-leverage things to fund are?

David Goldberg 15:34
Yeah, it’s a relatively complicated question, actually. So let me maybe walk you through it, and we can stop at various places, but it’s going to sound pretty cut from the standard cloth at the start. And that start is: we try to prioritize the world’s problems based on importance, relative neglectedness, and our ability to actually make a meaningful difference to them. And neglectedness in this framework is really shorthand for the question, could a philanthropist add value in this space? One way to approximate the answer is to look at the scale of the problem and see how much money is being spent on it by philanthropists relative to its scale, and indeed we do this a lot of the time. So to give a sense of how this might work, and to tie it back to one of the things I just mentioned, I’ll point to an example in our recent report on lethal autonomous weapons systems, which our research suggests pose a pretty meaningful threat to global stability and could be a causal pathway to a nuclear exchange or other forms of great power conflict. The strategic stability risk posed by autonomous weapons largely takes the form of government spending on these systems. So in 2021, for example, the US Department of Defense requested $1.7 billion to work on autonomous weapons. On the other side, philanthropic funding in the space is only about two and a half million dollars a year. So the ratio of philanthropic to military investment is around one to 680. And we can stack that up against cybersecurity, for instance, where the ratio is one to 62. So autonomous weapons are philanthropically neglected relative to cyber issues. And also, to take a step back more generally, we think two and a half million dollars is just not very much to spend on an issue with potentially catastrophic consequences; it’s just not enough money. But we don’t always do the neglectedness exercise this way, and the question isn’t always how much money is being spent, but sometimes whether the money is being spent wisely. The work we do in climate, for example, is indicative of this. The reason that we work in climate, despite the fact that there’s $600 billion being spent on it annually, is that we think there’s good reason to believe that money is either misspent, or at least that the tendency in the climate space is for the next dollar to very likely be misspent. So, an example quickly: carbon removal plays a really big role in many of the IPCC emissions scenarios that keep the world under two degrees of warming over pre-industrial levels, but it’s historically been dramatically underfunded relative to other types of solutions that are more popular with the stakeholders driving philanthropic activity in the space. So within climate, neglectedness refers loosely to systematic bias, and our own work is largely focused on applying leverage in the policy landscape to try to fix these funding imbalances. So neglectedness sort of starts that conversation. And then we look for effective solutions in these problem areas, taking a data- and evidence-led approach as we think about solving them. So we go through the pretty standard importance and tractability phases of the standard ITN process, and then we look to organizations implementing cost-effective interventions that have room for additional funding, and then make recommendations about which are most impactful.
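
(To make the neglectedness arithmetic above concrete, here is a minimal sketch in Python using only the figures David quotes; the variable names are illustrative, and the cybersecurity comparison uses the stated one-to-62 ratio rather than a separately quoted dollar amount.)

# Neglectedness as a ratio of philanthropic to government spending,
# using the autonomous weapons figures quoted above.
dod_request_2021 = 1.7e9        # 2021 US DoD request for autonomous weapons ($)
philanthropic_per_year = 2.5e6  # annual philanthropic funding in the space ($)

ratio = dod_request_2021 / philanthropic_per_year
print(f"philanthropic : military = 1 : {ratio:.0f}")  # prints 1 : 680

# The quoted cybersecurity ratio is about 1 : 62, so on this measure
# autonomous weapons are roughly eleven times more neglected.
print(ratio / 62)  # roughly 11
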
But we take different approaches based on different worldviews, and like others in the space, we have a three-worldview approach. We care about current generations, which is largely the kind of thing you might think of global health and development charities focusing on; we care about longtermism, which is the welfare of people and other creatures living in the long-term future; and animal welfare. And we do this sort of worldview diversification out of genuine moral uncertainty. It’s a strategy that we’ve adopted based on Open Philanthropy’s worldview diversification framework.

Will Jarvis 19:34
How long does that evaluation process generally take? Is it kind of a quicker thing? Does it depend on the problem you’re evaluating? Yeah, how long does it generally take you guys to kind of come to a conclusion that something is a good idea?

David Goldberg 19:49
It really varies. So we start by doing shallow investigations; if shallow investigations prove promising, we do medium-depth investigations; and if those prove promising, then we do full cause area write-ups. I just came out of a meeting with one of our researchers walking me through a medium investigation into an area we’re considering. She spent the last three months working on it, and it seems pretty promising, so we’ll likely do a deeper-dive cause area investigation, and that’s going to take another couple of months, before we even get to what the funding opportunities in that space are. So, you know, if I had to put a range on it, it takes somewhere between three and eight months to produce a piece of research that we’re happy with, that we’ll stand behind, that we’re willing to put our high-impact funding opportunity label on and then recommend to our members to fund. It’s rigorous and pretty time-intensive. And, you know, there are ways to actually increase efficiency and decrease time spent on this stuff, but they’re not super generalizable.

Will Jarvis 20:55
Got it, got it. I want to back up a little bit and get to the beginnings of Founders Pledge. David, what was the moment where you realized something like this needed to exist? Was it like, you know, you made a big pot of cash, and you’re like, okay, how do I make the world better with this liquidity I’ve got, and you start looking around at charities and you’re like, wow, the nonprofit industrial complex is not exactly the best way to spend the money, I need to spin something else up? What did that process look like?

David Goldberg 21:25
Yeah. So, I’d like to say that it was way more strategic than it really was, but, you know, it wasn’t. Like you say, I made a pot of cash and wanted to give it away. I felt like I’d gotten really lucky in life a bunch of times, even though I didn’t really think that growing up; my family was very working class, and, you know, I would have thought we were poor. But as I got older, and saw more of the world, and really understood the place that I had in it, I turned out to be actually really very well off. Like, a white man born in California in the 1980s? Pretty good; I sort of won the life lottery there. So, you know, I thought, I’ve had some success in life, I’ve gotten really lucky a bunch of times, I should give money away. But, as you say, the charity industrial complex seemed broken, and I didn’t really understand what to do as a next step. And I definitely didn’t have an education to speak of, and I thought, if I’m going to use my money well, I need to actually figure out how to think better first, because I didn’t have a good OS, right? Like, I was street smart, I knew how to run a business, I knew how to hustle, but I didn’t really understand how to think systematically, methodologically, around big issues. And so my thinking was, I’m going to go get an education and actually learn how to think better. So I spent five, six years doing an undergrad and started a PhD. And by the time I was into the second year of my PhD program, I had gotten itchy feet. I’d sort of lost track of why I did this to begin with. The PhD felt like the next logical step, like it would take me to that next level, but I wasn’t really sure what that next level was going to be for me. And I took a step back. I remember I was up late one night in my apartment, and I had really itchy feet, I felt anxious. I’m like, what am I doing? I’m not sure why I’m going to spend the next four years studying international relations, and I don’t know what I want to do next. And I ended up going on the TED website, as one does late at night, to be inspired by really interesting people who speak well about their journey. And I saw a TED Talk by Dan Pallotta, which was basically just the most brutal charity takedown I’ve ever seen. He just eviscerated the charity industrial complex, through this lens of: donors just don’t care about efficiency, they want to feel good. And he walked through the story of just how effective he’d been and how hamstrung he was by donors and sort of outside interests. And it was really depressing, actually, and didn’t make me feel any better. And then the next suggested TED Talk was by Jennifer Pahlka, who had started something called Code for America. And it was like the other side of that coin. It was this message of hope, this message of, you know, we have this amazing opportunity to leverage technology and technologists to make the world better. And it was just the most inspiring thing I’d seen. And after watching her TED Talk, I started thinking about what journey I’d been going on. Like, why did I go?
Why did I decide to go to college and do a PhD? And it was: I want to think better about charity. And she just told me that there’s a way to do this, and it’s the intersection of technology, charity, and government. I told my supervisor the next day that I wasn’t planning on continuing, and basically dropped out. And I decided at that point to work at this intersection, and found a foundation in London, I was at Cambridge at the time, that was basically recruiting me. They’d, like, written a job spec for me to run this really interesting, seemingly interesting, project that was at the intersection of

social entrepreneurs using technology to scale impact. And I just loved it. It was just like, oh, cool, I’m going to go try to emulate Jen Pahlka and help entrepreneurs build scalable, technology-powered commercial solutions to impact-driven problems. And my job was to help people become more commercially viable, and it just didn’t work. Like, it just didn’t work. I spent a year on it, gave away a million pounds of equity-free cash to 20 businesses, and it was a failure, basically. But it was a useful failure, because I learned a lot. And the moment that Founders Pledge crystallized as a concept was realizing: I’m focusing on the wrong user group, and I’m focusing on the wrong thing. I’m helping these social entrepreneurs try to be more commercially viable; maybe I should just focus on great commercial entrepreneurs and help them to be more socially impactful. And maybe I should just solve the problem that I faced when I sold my business, the one that started this whole journey I went on, which is: how do you actually give money away better? And by this point, you know, GiveWell was well established, Effective Altruism was this burgeoning community, though I hadn’t really become involved in it yet. And at the same time, we were seeing the Giving Pledge, and this idea that billionaires should, you know, must, give back some of their success in some meaningful capacity. And it just felt like the right time to create positive social pressure on people who are going to be billionaires, to get them to commit to give before they actually made their money, while they were still paper-rich and seemingly generous, or at least had the potential for generosity and extreme earning potential. And so, yeah, Founders Pledge developed really quickly. That aha moment was April 2014, and by March 2015, it was live in the world.

Will Jarvis 28:08
That’s great. That’s great. I’m curious, you mentioned something interesting there, this idea of social entrepreneurship, versus making a lot of money and then doing good with it. So there are companies that can do good directly, in the business context, and then there are companies where you make a lot of money and you could do both, right? You can make a lot of money doing good, and then give it away in a ruthless way, this kind of double whammy. But it seems to me like there’s a bit of a dichotomy between something that’s, in some sense, helping with climate change, like a Tesla, and something that’s just a high-frequency trading quant fund, which is more just money-making in some sense. Maybe there’s something in between, right?

David Goldberg 28:50
Exactly. Notorious. Teskey, man, right.

Will Jarvis 28:56
That’s great. Yes, absolutely. You know, how do you think about coaching founders through this, if they’re building one of these companies in between, and how they can maximize the most good they can do?

David Goldberg 29:07
Yeah, that’s a really good question. There aren’t very many of these inherently world-positive companies that also have huge earning potential. I mean, Tesla is a really great example. Lots of ed tech sort of fits this bill, climate tech fits this bill, alternative protein and food tech fit this bill, and in some cases pharma fits this bill, but the rest of the sort of consumerist ecosystem of technology companies tends not to. And my view has been, and I’ve encouraged founders, to just focus on having positive financial outcomes without actively harming the world, rather than trying to focus on doing good in it. I’ve found that people who self-identify as social entrepreneurs tend to be, not always, but tend to be, less good at the commercial outcomes than those who just focus on the commercial outcomes. And if they’re social by dint of the area that they’re working on, all the better for it. But the self-identified social entrepreneurs, I think, tend to have slightly less good outcomes. So my advice to people, when I get asked, is: focus on building a good business that doesn’t actively harm the world and that makes a lot of money. And then, once you’ve made that money, give the vast majority of it away to the very best stuff. And at that point, when you have the FU capital, basically, you can go about doing things that sit at that intersection. And if we use Elon as a use case, and I think he’s a good one here: if you take the human Elon out of the equation and you focus on Musk the entrepreneur, why is he doing what he’s doing? His arc is really interesting, and I think potentially something we can recreate, and certainly something we’re attempting to do. You have this guy who started a business with a bunch of other white men that became PayPal, that had a really good outcome, and that outcome gave him FU money, basically, when he was very young in his career, and he had 50 to 70 years, maybe almost 100 years, who knows, of work life left. And he sort of stepped back and said, okay, so what do I do now? How do I spend my resources, how do I spend my time? And he ended up coming to the conclusion: I should spend my time on things that are potentially harmful and detrimental to humanity’s existence, our fragile existence. And he identified three areas, right: petrochemicals; the fact that we’re living on one planet, and that one planet is susceptible to shocks; and AI. And he built three companies around each of those areas, truly bucking the status quo and building commercial juggernauts that are also social businesses, in my view. And we can debate whether they’re social businesses, people will debate and may think differently, but in my view, absent Tesla, we don’t have an electric car revolution, battery tech isn’t nearly where it’s at, and none of the changes that we’re seeing in energy infrastructure really exist. Absent SpaceX, you still have a duopoly with a couple of bloated commercial players that haven’t really innovated in a long time, and we aren’t aiming for the stars again. And OpenAI has, you know, been an interesting journey, and Neuralink is the next version of that for Elon, right? All three companies, all very interesting, all aimed at making money, all aimed at doing good. I want to see more of that, definitely.
But that arc is: make money, and then go about raising money to buck industries that are otherwise unwilling to let people change the status quo.

Will Jarvis 33:20
Definitely. That’s super interesting. It seems like you and Elon are similar in that, you know, you got liquidity early, and that gave you slack to be able to go and solve the problems you kind of wanted to solve, and maybe you’re solving the problem you wanted to solve in the beginning as well; I’m not sure, I don’t know the story. But do you think there’s something important about that aspect of getting a big win early, that gives you some amount of slack to just look at the entire picture and say, okay, what is important for me to solve?

David Goldberg 33:55
Absolutely. I mean, it’s why Founders Pledge is doing what Founders Pledge is doing, right? I think that the people who’ve gone through the gauntlet of starting a business and having liquidity in that business, that pressure cooker creates a different type of human than someone who inherits wealth. And that’s not to say that the person who inherits wealth isn’t a good person; many of them are, I’m sure. But that gauntlet of building something from scratch, magicking it into existence, funding it sustainably or with venture capital, growing it to scale, dealing with all of the human problems that one deals with in running a business, as well as the commercial sides of it, the financial sides of it, the product sides of it, it just creates different types of people. They’re special, and that doesn’t mean they’re special with a capital S, or exceptional in some way that isn’t recruitable, but they’re different. And that type of person, I think, is just more able to think bigger and think outside the box. And coupled with extreme wealth, it’s a different order of magnitude of ambition, and willingness to try stuff, and risk-taking. And I think that generally is a good thing. So at Founders Pledge, we focus on people who are going to potentially be incredibly wealthy, and we start working with them five to seven years ahead of that wealth actually materializing, so by the time they actually have liquidity, they’re well positioned to think more strategically about how to deploy that capital to achieve maximum social impact. And that may not always be with charity, though it almost always encompasses charity to some degree with Founders Pledge, by the nature of what our commitment is. But, you know, our 1,800 members have pledged more than $8 billion, and their collective net wealth is in the many trillions. And what that means is this group of people has the ability to potentially remake the world in a better way, definitely to help remake it in a better way. And, you know, for us, it’s about coordination to solve those really tough problems, and getting the right people connected at the right times to focus on the right stuff.

Will Jarvis 36:37
I’m curious, David, when an entrepreneur gets liquidity, a founder gets liquidity, and they come to you, and you don’t have to tell me how the sausage is made, we can cut this if you don’t want to talk about it, but do you just give them a questionnaire, like, okay, you’ve done really well, what cause areas do you like, what do you want to look at? Or do you say, hey, we’ve kind of stack-ranked these interventions, and we think this is the most tractable and underfunded thing, this is the highest-leverage thing you could do at this time? What does that process look like? Or is it just kind of dynamic?

David Goldberg 37:15
It’s pretty dynamic, depending on the person, where they’re at, and, you know, the kind of outcome they’re looking at. We’ve created some archetypes, some user types that we know how to serve well, and we’ll work with them in different ways. But it typically starts with a conversation to understand what their ambitions are, what their timelines look like, and then we get into some of the specifics. Some people come to us fully formed with ideas, and they really just want validation about what within their ideas we think makes the most sense. And we can give that, but that’s not really our unique value proposition. With most of the people we tend to work with, we start by doing a values discovery session, where we try to understand what their underlying intrinsic values are, the ones that allow them to make decisions in their day-to-day lives that they feel comfortable with, and how those values map to philanthropic opportunities. So we try to translate those intrinsic values into areas of focus. And then within those areas of focus, we stack-rank and encourage people to do the most effective stuff. You know, at the end of the day, the reason we work with entrepreneurs is because entrepreneurs tend to follow data, and tend to be more rational than your average person, though not always. And we present data that’s compelling, data that tells a story: there are things that are way, way, way better than other things, and you should find them. And when presented with that information, and the comparators between what they want to do and the outcomes we expect from that, and what we suggest they do and the outcomes we expect from that, it’s a pretty straightforward decision.

Will Jarvis 39:26
That’s good. And you’ve done this robust research, like the process you described, to kind of vet everything, so you can give really good numbers there.

David Goldberg 39:33
I mean, we also partner with lots of other evaluators in the space: GiveWell, Open Philanthropy, Farmed Animal Funders, the Happier Lives Institute, and lots of others who are just doing exceptional work. We’re not trying to recreate the wheel, right? We’re just trying to make sure all of the different wheels and cogs and gears fit together and make the watch tick at the right time at the right rate.

Will Jarvis 39:56
That’s good. Another question here. If you look at the EA movement as a whole, what do you think it gets right, and what do you think it gets kind of less right? And I want to put an asterisk on this question, in that I think the EA movement is probably the most self-reflective group of people I’ve ever been associated with, so, you know, at some level they do a really good job at this already. But what do you see that EA can do better?

David Goldberg 40:20
Um, yeah. I think that sometimes EA suffers from, like, a communications deficit. You’re right, it’s one of the more self-reflective communities of people I’ve ever encountered, some of the smartest people I’ve ever encountered. But sometimes, you know, because it’s a movement, and it’s a set of broad ideas, it’s not a monolith, it’s hard to communicate well as a movement that is so diverse in its thinking and so self-reflective. So sometimes there’s just a communications deficit, I think. What else do I think?

Will Jarvis 41:18
Is that toward the public? Yeah. Like to the broader public?

David Goldberg 41:23
Yeah. I mean, I think the way EAs think is generally pretty much the way we should be thinking about most of these things. And if we want more people to join that movement, it needs to be palatable to them, and often it’s not. Or people develop knee-jerk reactions to some of the language that’s used to talk about things, that it’s heartless, and essentially develop very knee-jerk views that are hard to dispel, and propagate them. So I think most people, in any case, when asked to buy into the basic premises of EA, do: yes, they believe a life in Malawi is worth as much as a life in Minneapolis or wherever it is; yes, they believe that we should use data and evidence in figuring out how to do good. But they sometimes balk at the conclusions that these premises entail, maybe. And I think that for many people who haven’t really gotten into studying this rigorously, giving is very emotional. And hence, I’m just not sure that EA is good at taking that emotional response and turning it into something rational and impactful. I’m not sure how to put that, but the emotional givers just seem not to resonate with EA, and I wish they could. And maybe that’s, you know, a lack of storytelling, I’m not sure.

Will Jarvis 43:12
Right. So maybe we can tell more compelling stories about how EA is helping, because there’s a trade-off: when you quantify things, it’s perhaps less compelling. But then you also want to be able to sell this to people, you want to communicate this to people, and you need to make it palatable to everyone across the political spectrum. And,

David Goldberg 43:31
yeah, I think that’s right. Yeah, I don’t think EA gets a huge amount wrong. I mean, there is something to be said about it being a movement generally of white men, although it’s getting better; you know, we’re seeing more diversity in it. The pushback that I’ve gotten, or the critiques that I’ve heard, have tended to be that most of the people, especially in the global health and development space and the current generations focus, don’t have much field experience and don’t have any real context for what they recommend. So they’re, like, academics straight out of university who can run cost-effectiveness analyses and, you know, have statistics at their disposal, and do regression analyses and all this sort of academic stuff, but don’t really understand what it’s like on the ground running an organization, and as a result will never get it right until they do. Some legibility problems, yeah.

I’ve heard that a lot. And I think, you know, there’s some truth in it.

There really is. And until you run an organization, it’s really hard to understand the complexity of running an organization, and the sort of unexpectedness, what am I trying to say, how things happen that are just so outside the realm of what you expect, that you have to deal with. And this is just part and parcel of every organization, even the really well-run ones. I think maybe EA would do well to have some more operational experience in its core.

William Jarvis 45:41
Got it. That makes sense.

Will Jarvis 45:42
That makes sense. I think that’s a really good critique. David, I’ve got another question here. You know, you’ve got a background in real estate finance, in housing. Housing affordability is a big problem lately. I’m a big Georgist, so I’m on a grant from Scott Alexander, back in January, with my friend, to help implement, you know, land value tax. I’m also interested in the YIMBY movement. How do you think about fixing the housing crisis? What interventions are you particularly interested in, and have you guys looked much at housing as a space?

David Goldberg 46:13
We have. I’m not super well versed on our housing affordability report at the moment, unfortunately. But we have written a housing affordability report; it’s on our website, and I’ve just pulled up the summary now. It focuses on England, and it’s from 2020. So, you know, I’m not really well versed enough to speak about it in depth. But we have suggested, as one of our recommended funding opportunities, London YIMBY, which is trying to improve housing policy in England, using a seemingly novel approach that focuses on overcoming political economy challenges.

Will Jarvis 47:04
Yeah, that’s awesome. Well, David, are you down for a round of overrated and underrated?

David Goldberg 47:11
Sure. Let’s do it.

Will Jarvis 47:12
So I’ll throw out a term, and you just tell me whether it’s overrated or underrated. Okay, so the first one, I think you already answered this, but I’ll throw it out there anyway: AI x-risk. Is it overrated or underrated, let’s say by the general public?

David Goldberg 47:26
Underrated.

Will Jarvis 47:27
And why underrated? Why do you think it is? Just not enough exposure, people can’t quite grok what’s going on?

David Goldberg 47:31
People, when they hear about AI risk, think Terminator, and then write it off as science fiction. And also, I think, they just don’t understand the speed of development and what the risk really entails.

Unknown Speaker 47:53
do you plan it, you’re

David Goldberg 47:54
in it, and it feels really far off to lots of people.

Will Jarvis 47:57
It still feels quite far off to me. But Metaculus, I always mispronounce that, the prediction market, they have it, and it just, you know, moved up a lot with DALL-E. And it’s, what is it, 15 years away, I believe, right now? It’s really close.

David Goldberg 48:15
It might even be less than that.

Will Jarvis 48:16
Might be less than that. Do you think that’s on the money? Do you think that’s aggressive?

David Goldberg 48:23
I don’t take Metaculus literally; I literally don’t know, I’m not an expert in this. Most of the people I talk to who are experts think that it’s probably pretty close to right. I was at a talk the other day, I can’t attribute who said it, but an organization is working on a sub-10-year timeline for general AI, and it’s one of the big ones. And the person talking about it very casually mentioned that we expect in less than 10 years we’ll have a weak general AI. And I was like, oh, that’s pretty horrifying. I cried in the corner and drank.

Will Jarvis 49:14
That’s so good. Investing to give: overrated or underrated?

David Goldberg 49:21
Underrated. Yeah, I mean, we just published a report and set up a new fund, the Patient Philanthropy Fund, that is investing to give on a couple-of-hundred-year timeline. The brilliant Sjir Hoeijmakers, whose last name I never pronounce right, who’s just left Founders Pledge to join Giving What We Can as their head of research, spearheaded this wonderful initiative to get people to invest to give over the long term, focusing on x-risks. So we launched that at the end of last year, I believe, if memory serves me. It’s brought in nearly $2 million so far, and we have great momentum as we look to raise more. But it’s certainly underrated.

Will Jarvis 50:12
Is that just because you can, you know, get these returns that compound over time? And there’s just such a large...

David Goldberg 50:18
With a long enough time horizon, markets point in one direction. And if you have the capacity to draw down capital rapidly as and when you have very hingey moments, it’s hard to see why you wouldn’t invest to give with at least some of your philanthropy, or even just personal capital.
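
(As a rough illustration of the compounding logic behind patient philanthropy, here is a minimal sketch; the principal, return, and horizon are hypothetical assumptions for illustration, not figures from the fund or its report.)

# Hypothetical sketch of how a patient-philanthropy pot compounds.
# A 5% real return and a 100-year horizon are illustrative assumptions.
principal = 1_000_000   # initial donation ($)
real_return = 0.05      # assumed real annual return
years = 100             # patient, multi-generation horizon

future_value = principal * (1 + real_return) ** years
print(f"${future_value:,.0f}")  # about $131.5 million in today's dollars
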

Will Jarvis 50:39
Right. Do you think there are challenges with maintaining large pools of capital over time, and having them be directed to the purpose they were intended for? I’m sure it’s in the report, but

David Goldberg 50:51
it is in the report; we’ve looked into it pretty extensively. This tends not to be a problem with large pots of money, because of how they were set up and the sort of founding constitutional documents of trusts and foundations that are hundreds and hundreds of years old, which are still focused on the very specific thing, in some cases to their detriment, because that specific issue no longer exists, and they have so much money that they can’t actually do anything with it because of how the organization was set up. But I think we’ve drafted our founding documents for the Patient Philanthropy Fund, which is an independent entity, licensed by the UK Charity Commission, to be more flexible.

Will Jarvis 51:38
Very cool. One last one: direct cash transfers, overrated or underrated?

David Goldberg 51:45
I think it’s rated appropriately. I’m going to create a third category; I don’t think it’s either. I mean, if you can’t do better than direct cash transfers, do direct cash transfers. Still, too many people intermediate with organizations that are unnecessary. But then again, lots of development agencies focus on direct cash transfers already. So, yeah, I wouldn’t call it either. I mean, I’d rather people give to GiveDirectly than Oxfam, for example.

Will Jarvis 52:23
That’s good. That’s good. Well, David, thank you so much for coming on the show. I really enjoyed talking to you today. Where can people find you? Where can people find Founders Pledge? Where should we send them?

David Goldberg 52:34
Our website, founderspledge.com. All of our research is public, and all of our giving recommendations are on our website. If you’re based in the US and you want to give to any of our top charities, you can do so with every.org; search for Founders Pledge, we have a profile there and a bunch of different pooled funds. And if you’re in the UK, you can go to EA Funds, and we have our Climate Fund there, as well as a couple of others. And on our website, if you have questions, if you’re an entrepreneur, please do feel free to send us an email or reach out via our web form. Awesome. Thanks, David. My pleasure. Thanks, Will.

William Jarvis 53:11
Special thanks to our sponsor, Bismarck Analysis, for the support. Bismarck Analysis creates the Bismarck Brief, a newsletter with intelligence-grade analysis of key industries, organizations, and live players. You can subscribe to the Bismarck Brief at brief.bismarckanalysis.com. Thanks for listening. We’ll be back next week with a new episode of Narratives. Special thanks to Donovan Dorrance, our audio editor. You can check out Donovan’s work and music at donovandorrance.com.

Transcribed by https://otter.ai
