In this episode, we talk with Geoff Anders of Leverage Research about research, knowledge, decay, and a whole lot more.
The Structure of Scientific Revolutions by Kuhn
Transcript:

[00:00:00] Will Jarvis: Hey folks, I'm Will Jarvis. Along with my dad, Dr. David Jarvis, I host the podcast Narratives. Narratives is a project exploring the ways in which the world is better than it has been, the ways it is worse than in the past, and how we can make a better, more definite future. I hope you enjoy it.

[00:00:32] Well, hey, Geoff, how are you doing today?

[00:00:35] Geoff Anders: Hey, doing well. Thanks for having me on.

Will Jarvis: Definitely, Geoff. Thanks for coming on. Could you go ahead and give us a short bio and some of the things you're interested in?

Geoff Anders: Sure. Yeah. My academic background is that I got a PhD in philosophy from Rutgers University, but I left the academy and set up a research institute, Leverage Research. We've [00:01:00] done a number of different investigations over the course of time. [00:01:03] More recently, we've been focusing on early-stage science. For myself, I've been really interested for a long time in knowledge, and in essentially every question having to do with it. So I'm really interested in philosophy, skepticism, metaphysics, epistemology, the whole nine yards. [00:01:28] I'm really interested in science, and in really practical questions about what makes research programs work. Ultimately, a lot of my interest is driven by a desire to have the world be much better. Science and knowledge more generally have been a double-edged sword: on one hand, things are obviously much better in a whole bunch of ways [00:02:00] because of the technology and science we've developed. [00:02:04] On the other hand, as humanity as a whole becomes more powerful, the chance that we do something wrong and destroy ourselves, or that something really bad happens, goes up.
And so then there's this question of how to handle that. Some people come out anti-technology or anti-science; I think that's not the way to go. [00:02:29] I think we need to understand things better. We need more ability to shape the world around us, but at the same time we have to understand how to do that responsibly. The macro picture is that I don't think many people are really satisfied with [00:02:51] where we've gotten the world to thus far. So I think we need to push further, and that means we're going to need more [00:03:00] science, more knowledge, more technology, et cetera. But we've got to do it in a way that makes sense.

Will Jarvis: Definitely. I really like that. And most people don't come out of a PhD at Rutgers (that's a great philosophy program, by the way) [00:03:13] and start a research organization. I think that's awesome. What was the genesis of that idea?

Geoff Anders: Oh, sure. Well, basically, I was never wedded to doing research inside academia. There are a whole bunch of advantages there, and I think it's the right path for a lot of people. [00:03:36] For myself, I'm essentially interested in whatever the most effective way is of acquiring knowledge, and then helping to use that to try to improve humanity's position overall. There's a thing I've come to appreciate over the course of time, thinking about academia and [00:04:00] about research outside academia, which is that there's something that is simultaneously a great strength and potentially a great weakness of academic research, and it has to do with timelines. [00:04:16] See, academia has basically eternal timelines. You can think of it as descended from Plato, and we're all looking at the Forms.
And are you supposed to have looked at the Forms by, like, Tuesday? No. The answer is that we're going to take as long as is necessary in order to figure things out and make sure we're actually getting the truth. [00:04:44] In and around Silicon Valley you hear a lot of people discounting academia: oh, well, there are so many things wrong, the professional incentives are out of whack. There's truth to some of those [00:05:00] things, and we can certainly talk about that. But I'd point out that academia has such a strong truth intention. They believe that they're going to get the truth, and it doesn't matter if it takes 10 years or 50 years. You'll say, well, there are all these ideas people come up with that are initially rejected and later get accepted. [00:05:24] And I think the academy's attitude is: yes, and eventually, absolutely everything will get in, in the fullness of time. So what that is is a really high degree of confidence, confidence in a good thing, occurring on this incredibly long timeline. [00:05:45] Academia allows for individual research with these really long timelines, and as a result it's a really good fit for a lot of different types of research. But for [00:06:00] research where there's more of a sense of needing a deadline, I think it's a worse fit. So it's research that involves deadlines, or research that involves groups organized in particular ways, that academia handles less well. [00:06:16] Another thing that's really good about academia is academic freedom. With academic freedom,
you have a lot of freedom, even though there's what's popular in the literature and you have to publish in this or that sub-discourse. You just have tons of people studying why the Mongols retreated in this year instead of that year, and what's the practical application? Who knows. [00:06:38] I think that sort of academic freedom, research freedom, is really, really good. The problem is that it makes it hard to organize groups of people, at least in certain disciplines and certain contexts. Obviously academia is a huge thing, and you [00:07:00] want to study it really carefully before making certain types of statements. But I found that for my own research, I was especially interested in whether it was possible, on relatively short timelines, to figure things out that would end up impacting the world in some positive way. [00:07:30] That task seems naturally approached with a group of people. So rather than trying to organize it inside an academic context, which might have been possible but would certainly have taken longer, I basically decided to put the thing together outside academia. [00:07:50] While my own research originally was in philosophy and psychology, studying the universe and the mind and so [00:08:00] forth, around 2009 I had been thinking about different strategies for how you would actually go about trying to make the world better. And I realized, well, there are tons of different possible strategies, and it would be hard for me to explore them all. [00:08:23] So why don't I get together a group of people?
And then we, as a team, will try to explore tons of different things and figure out what might be useful for making the world better. And because science and technology and knowledge have such a large impact, those would be the main places we focus. That's the original genesis of Leverage Research: [00:08:41] 2009, okay, we need something broad; 2010, planning; and then we founded it in 2011.

Will Jarvis: Gotcha. Excellent. That's really interesting. So you're ten years on. I want to talk about scientific freedom for a second. Have you [00:09:00] read Donald Braben's book on scientific freedom? [00:09:02]

Geoff Anders: No, I haven't.

Will Jarvis: It's really interesting. He's 84 now; he ran the Venture Research program at BP, and he's a really sharp guy. I really enjoyed chatting with him. He paints this picture where, pre-1970, academics could get a little bit of money, live on it, and work on their wacky idea for 20 years: your Max Planck quantum mechanics, right? [00:09:22] But he says something shifted around 1970, where suddenly it was all objective-based. Instead, you have to go apply for grants, you've got to prove yourself, the reviews are much more onerous, and there's this degradation in freedom. [00:09:45] And there is freedom, like you said, a different kind of freedom, where you can go get a PhD in English literature and study Hemingway and think, how exactly are they paying me to do this?

Geoff Anders: Exactly, exactly.

Will Jarvis: So do you think [00:10:00] academic research works less well than it used to?

Geoff Anders: This is a great question, and I'm going to say that whatever remarks I make here are provisional,
because I think the question of how the premier knowledge-acquisition organ of society is functioning is actually really crucial. [00:10:24] And it's a hot topic; everybody wants to jump in with hot takes, so I try to have warm takes. Basically, I do think there's a lot of truth in what you described. In particular, if you look at a graph of PhDs minted from 1900 through 1990, [00:11:00] you've got something like 400 PhDs per year in 1900 and in 1910, and then it starts to go up a little bit, and then it goes back down. Then it goes up quite a bit, right after World War II and the GI Bill, then goes back down again. And then you hit 1957-58, with the launch of Sputnik, and as soon as you hit the launch of Sputnik it just goes crazy. By the time you get to 1990, you have 40,000 PhDs per year, compared to 400. [00:11:39] So from 1910 or so to 1990, that's 80 years, you get a hundredfold increase in the total number of PhDs minted per year. Now, it's very interesting, because I think one of [00:12:00] the central articles of faith of the modern research approach is that the way to produce better research is to supply more funding.

Will Jarvis: Gotcha.

Geoff Anders: There's a really natural thought there, which is to throw money behind the problem. The funding goes directly to researchers, so with more funding you have more researchers; more researchers means more papers; and people sort out which stuff is good. So it seems pretty straightforward. However, there's
a thing that happens whenever any organization scales. This is quite clear from lots of people's experience in startups: when startups scale, the initial culture that existed at three or four or five people is really different from the culture at 10 or 20, which is really different from the culture [00:13:00] at a hundred or at ten thousand. Saying the culture is different doesn't necessarily mean better or worse; it's better in some ways and worse in others. But what ends up happening is that you end up using different mechanisms to coordinate the people. So you can think of it this way: in addition to producing more researchers, funding is also going to change the coordination mechanisms used to organize the researchers. [00:13:25] And that could be either good or bad. You talked about how people are now focused more on grant writing; publish-or-perish is faster, with more publication cycles, and in this sense there's less academic freedom. You also have more numerical metrics, like impact factor, [00:13:51] for looking at how impactful a paper is. What's happening there is the [00:14:00] attempt for the field to operate at scale. If there are just 20 of us, then I know almost all of them, and you vouch for her, and we're all checking by tons of different means. By the time there are 20,000 of us, we don't all know each other, [00:14:20] and we don't all know people who know everyone else, and we still want to maintain the quality of the research being done. So we need some method for that, and it has to be legible to all the relevant parties. This means that as you scale, you move to more externally legible mechanisms for assessing quality. [00:14:43] And that can change, in a substantial way,
what sort of research gets done. So at least I will go so far as to say that I do think the 1970 date is pretty good. A lot of research changed [00:15:00] around then; I think it was '74, with the National Research Act. There are a bunch of factors at play, but I think what's happening is that the fields are becoming much larger, they adopt these different mechanisms, and then some things are good and some are bad. [00:15:12] To see which, it helps to borrow a really nice distinction from Kuhn. For anyone who hasn't read Thomas Kuhn's The Structure of Scientific Revolutions, it's highly recommended; maybe we'll talk more about it in this conversation. There's a standard view of science on which people accumulate knowledge in a pretty linear way. [00:15:51] Kuhn presents something that helps put into context the idea that science radically changes people's views, and that there are these massive shifts. [00:16:00] The simple version: you start with pre-science, where no one knows what's going on. Someone comes up with a theory, along with a bunch of practices and ways of approaching things; together these are called a paradigm. Then everybody acts inside the paradigm; this is called normal science. You are not Einstein or Newton. Newton gave you the paradigm; you are applying it, and you apply it to everything. [00:16:30] When you come across things that don't seem to fit, you try to make them fit. Some things end up not fitting, and those become anomalies. Some get resolved, some don't, and the ones that don't eventually get people thinking; then people produce new theories, and the entire field shifts.
This is actually the book where the term "paradigm shift" comes from. [00:16:49]

Will Jarvis: Nice.

Geoff Anders: That's just a bit of background; it's a really good book. If we think in terms of this difference between [00:17:00] normal science and revolutionary science, or normal science and pre-science, I think scaled research, in the way academia has been doing it, is best suited to normal science. [00:17:13] If there's a well-established paradigm and people know how to do the things, you need something like regular ingenuity, regular flashes of brilliance, in the research, and then you can operate at scale. For example, there's been a lot of work in chemistry and materials science; [00:17:37] that's an example I think is really good. I'd certainly love to see a bird's-eye assessment of all the different fields and how they're doing, but my current impression is that chemistry, for instance, is doing well in that regard. But then, [00:18:00] if you try to scale things up, there's a question of whether that actually helps you cause paradigm shifts, or helps get fields going in the first place when they haven't yet managed to get traction. [00:18:08]

Will Jarvis: Gotcha.

Geoff Anders: So if I were forced to guess right now, I'd say it did change. It became more professionalized; that came about as a result of scaling, with various sorts of structures and strictures applied as a result. And this was really good for normal science, and less good for having academia be a house for rogue thinkers. [00:18:35] There was this idea of the eccentric, a term derived, I think, from astronomy: the person who is not thinking exactly in line with everyone else. Academia used to be the place for eccentrics.
There wasn't a lot of demand to go into academia, and it wasn't a profession; [00:18:56] this is pre-professionalization. It was really a [00:19:00] calling, and you'd get a bunch of people with wacky ideas, and you don't have to believe any of their wacky ideas, that's fine. And every once in a while, they figure out how to split atoms and get nuclear chain reactions and the ability to destroy entire cities, and it's like, okay, wait a second, there's something here. [00:19:15] Those are some thoughts.

Will Jarvis: Gotcha. So tell me if I'm wrong: it's something like, we've gotten very good at doing incremental things, where we've got this framework for exploring a given frontier and we go through it place by place, [00:19:34] but we've gotten less good at saying, okay, we need to find a new frontier.

Geoff Anders: It's an interesting question whether we've gotten less good at it, because a lot of discussions about science are really discussions of academic science. But if we go back and look at the history of science, [00:20:00] science was not an endeavor confined to the academy, or to the academies and scientific societies of the 1800s. [00:20:07] If you go back further, you have people doing science not necessarily in any institutional context. If you look at the early history of electricity, which we've been studying at Leverage for a little while, you find a lot of research being done that's not inside academia. [00:20:31] Flash forward to today: AI is certainly super important, and the most important research being done in AI isn't being done in academia. It's being done by commercial entities; you can think of it as industry. So I think it's
important to understand academia, because, like I said, it's the premier knowledge organ of our society. But the idea that academic science is [00:21:00] science neglects the fact that you have a lot going on in industry and outside it, by independent researchers. [00:21:11]

Will Jarvis: Gotcha.

Geoff Anders: So, in answer to your question of whether we've gotten worse: this is a really good question. First of all, there haven't been a lot of really thorough examinations of the different fields inside academia and how much progress they're making. [00:21:32] We're all familiar with the replication crisis in psychology, which has extended elsewhere: a lot of people try to replicate studies, the results are supposed to replicate, and frequently they don't. [00:21:52] So maybe there's a problem, but that doesn't exactly answer the question of how much progress is being made in each area. For both academic and non-academic research, there's a really interesting [00:22:00] question of how much progress is being made. In fact, I was speaking to a friend, and he put it, I think, really nicely. He said: for a while we had this narrative of singularity, and now we're in the narrative of stagnation, and everyone's talking about how things aren't as good as they were, civilization's collapsing, and so forth. [00:22:23] Well, why don't we actually figure out how much progress is being made? I think that's a great idea, and I think we should do it. It's possible to do analyses of certain sectors we understand quite well, but getting a full picture is something I'm really excited to see happen. [00:22:44]

Will Jarvis: Definitely. Yeah.
I think getting a better way to measure and understand where we are is definitely helpful in understanding how to accelerate. You have to know where you are first.

Geoff Anders: Yeah. And it's interesting, because I was on another podcast recently and we were talking about AI, [00:23:00] and there's this interesting question: [00:23:01] how fast is progress in AI really going? There was an article in Science pretty recently saying it looks like a lot of the new algorithms aren't beating benchmarks from 2009. And everybody who's trying to support their research needs a narrative of victory; you can't raise money on "this doesn't work." [00:23:24] It's very easy in AI to develop new metrics and new ways of measuring whether you're getting better. So then there's a question: are we really getting better? I think it's quite clear that there are advances being made in AI; GPT-3 is a good example. [00:23:43] But in terms of how the field is doing overall, I don't know, and I would really like to know.

Will Jarvis: That's really interesting. And I do wonder, coming back to psychology and the [00:24:00] replication crisis, I have this worry that the reason we're all talking about psychology is that it's easy for everyone to look at and understand. [00:24:08] You can read through the study, you can look at the statistics; it's very legible, very understandable, and you can see what's wrong. But with something like superstring theory, I have no way to evaluate whether a paper has any bearing on reality at all. That's somewhat worrisome to me, and it goes back to your point about AI: it's just difficult to tell. [00:24:32]

Geoff Anders: Yeah, I think that's right.
And I think it would be really beneficial to see a field-by-field, or domain-by-domain, analysis of the progress being made. You could turn this into an examination of bottlenecks, which is something I'm quite interested in: how much progress is being made, [00:24:55] what the actual barriers are, what the different ways to approach them are, and which things look most promising. [00:25:00] I think having a more bird's-eye view of the entire scientific process is a really good idea; it might help things go faster, and it would just generally be clarifying.

Will Jarvis: Gotcha. I want to talk about the examination of bottlenecks for a little bit. [00:25:15] Is that something you've looked into? What does that look like? Is it going out and asking researchers what the biggest bottlenecks are? Is it reading papers? I don't know.

Geoff Anders: Yeah, well, this is interesting. I've recently been in a number of conversations where it seemed quite important to figure out how we can tell how much progress is really being made, [00:25:41] and what the bottlenecks are. And it's interesting, because some fields seem more self-reflective and some seem less so. I spoke to one person who claimed that their field was not self-reflective; [00:26:00] that's just one opinion, but it's telling. Part of this is that I think it takes a special type of mentality to approach the question of what's happening in a field overall. [00:26:17] And this is for a couple of reasons. The first is that there's an insider-outsider distinction. If you're fully in the field and you've absorbed all the assumptions, then the answer is almost certainly "things are going great." And if you're on the outside, then you aren't privy to lots of conversations.
[00:26:38] You don't really know a lot about what's going on; maybe you're bitter that you're not on the inside, and you think, well, they're bad. So from the outside it's quite easy to not be as well informed. That's not to say insiders [00:27:00] can't see past assumptions, or that outsiders can't pierce the veil; I'm talking generally about the sorts of things that will happen. Ideally, what you'd want is someone who has an insider understanding but can take a broader perspective. Also, even though different fields study different objects and function differently, [00:27:27] there are some things that are similar from field to field. It's all research, and there are certain very important broad, general truths; you can learn a lot about what's happening in one field by looking, at the right level of abstraction, at what's happening in other fields. [00:27:45] So it's also useful to have a generalist perspective.

Will Jarvis: Gotcha.

Geoff Anders: Then, bottleneck analyses are by their nature abstract, so you want to think abstractly. But [00:28:00] frequently the reason something isn't moving forward has to do with a lot of concrete, on-the-ground details. [00:28:10] So I really think the thing you want is someone who is insider and outsider, abstract and concrete, a generalist. And not everyone should be doing this; some people should just be pushing things forward as directly as possible. If everyone goes meta, that itself is a problem, and we should go meta on that. [00:28:28] So I've been looking around, trying to find researchers who are especially interested in this sort of bottleneck analysis.
There's a mentality where the person keeps asking why. One thing about bottlenecks is, if you say, oh, here's the problem with the field: [00:28:47] the incentives are set up wrong. Okay, cool, let's assume that's true. Why are the incentives set up wrong? You can say incentives are the bottleneck, but then what's the bottleneck to fixing that? [00:29:00] So you can keep pushing it back, and some people have this habit of thought where they just keep pushing the why question. In fact, since we're here on a podcast, I'll say: for anybody who thinks about fields in terms of bottlenecks and feels like they have something really good to say about a particular field, or knows people they'd recommend, feel free to reach out. [00:29:41] I'm trying to figure out who these people are, so that maybe there's some way to join forces.

Will Jarvis: Yeah, we've had someone on who's discussed all of that. I'll forward you his email and give you an introduction. But it's in international relations.

Geoff Anders: No, that's super interesting. See, I think one of the things is that when people think about science and bottlenecks, they're frequently thinking about the hard sciences. [00:30:00] I think hard science is super important, but [00:30:04] some of the places where the bottlenecks will be most pronounced are the fields that aren't really functioning properly. And then the soft-science people will want to say, no, our field is functioning fine, thank you. And the hard-science people will say, well, it's not really a real science over there, is it?
[00:30:22] But the thing I think we should agree on is that the soft sciences haven't yet given us the ability to affect reality in the way the hard sciences have. And insofar as we're studying those things, that's much more what the goal should be. So then the question is: well, why can't we do that?

Will Jarvis: Definitely. That's really well put; that's the right way to think about it. What utility have we gotten, in real terms? What things in the world have actually been affected? [00:30:57] So I wanted to move on and ask you one of the questions I've got here. In one of your essays you talked about [00:31:00] the fact that knowledge can decay.

Geoff Anders: Yes, absolutely.

Will Jarvis: Have you ever heard of W. H. R. Rivers's essay "The Disappearance of Useful Arts"?

Geoff Anders: I haven't. You should send me a note on that.

Will Jarvis: I will. It's really good. [00:31:18] He describes how cultures can lose fundamental knowledge. He gives an example: there was an epidemic in Greenland that killed off the elders, and the people lost the ability to build seafaring kayaks for something like a hundred years. [00:31:37] It seems like things like this can happen. Have we gotten away from that because of information technology, or is it still a real problem?

Geoff Anders: Oh, this is such a good question. The answer is: it's still a real problem. [00:32:00] One thing that's very interesting to think about in these contexts is that, in a lot of cases, the things you would use to tell whether you're maintaining the knowledge are themselves the things that are decaying.
[00:32:11] So imagine the elders of some group have special knowledge, but the next generation doesn't seem to care. Then the elders die, and no one is left who recognizes that there was actually this knowledge. There's this way in which, as the mind's eye of society closes to certain things, it doesn't realize that's happening. [00:32:38] And that's in a way a terrifying prospect: what are we losing? But on the other hand, I do think people frequently emphasize the decay aspect a little too much. The example you gave involves an epidemic, [00:33:00] a surprise event, a shock to their system of knowledge. [00:33:06] In a lot of cases you don't have exactly that. And then it's really puzzling: if we know the knowledge is important, why don't we keep it? If we know the younger generation doesn't care as much, why don't we teach them to care? That's the whole point of the education system. [00:33:34] So there's a pretty interesting puzzle about why knowledge would ever decay in the relevant ways. If anything, we should be trying to preserve it. But I think when you dig into it, you'll find that frequently what's happening is not just a decline but rather a change. And that's not just a clever reframing: changes are made up of [00:34:00] some things declining and some things improving, unless it's just straightforward improvement. [00:34:04] In a lot of cases, the old gets replaced by the new in a way that preserves some things and not others. So let me give you an example. A nice example is the question of whether we can build Saturn V rockets.
Um, so this is great. Yeah. So, moon landing, you know, we used Saturn V rockets, um, or, sorry, the Saturn V was the thing we used, the, um, what’s the name of the rocket? [00:34:33] Am I blanking on this? Um, maybe it was the Saturn V rocket after all. Uh, okay. Um, anyway, these things were super impressive. Um, and right now we’re in this weird zone. It’s just a true fact that if somebody tasked humanity with building one of the Saturn V rockets, we would not [00:35:00] really be able to. Really? Okay. [00:35:02] Well, it’s not because of general decay, where everything is being lost and then the sun sets on Western civilization. That’s not what’s happening. There are a couple of things that happened, a couple of things that are happening. The first thing is that technology moved on. The Saturn V was hand-welded. [00:35:25] Oh, wow. Yeah. It’s crazy. If you look into the technical specs on this thing, it’s just, yeah, it’s crazy. Um, and we just don’t have as many welders now. And so at a certain point, I think maybe it was the U.S. government, someone looked into: okay, can we rebuild these? [00:35:53] And well, most of the people who had worked on them had retired, and we [00:36:00] developed a whole bunch of new types of welding in the meantime, and some people weren’t trained in the original welding techniques. And there were a bunch of people who were willing to come out of retirement to help rebuild them. [00:36:12] But I mean, that was some number of years ago, and so unfortunately there are many fewer people who actually know how to hand-build one of these. Okay. So that’s one of the reasons we can’t do it. But then on the other hand, when they did the relevant study, they went through and said, okay, is there a modern way to build this rocket?
[00:36:34] That’s not just hand-crafted? The answer is yes. So they made updated designs, so we have better designs. And so there’s this sense in which we wouldn’t want to build the original rocket. Right. We’ve got a better one. So that one’s not a decline. There, hand welding is lost, a bunch of the welding skill is lost, [00:36:56] and then there’s a question of what that allows you to do. But it doesn’t really hurt us on rockets, [00:37:00] because we’ve got more advanced techniques. Right. Then there’s another dimension, though, which is that it’s just not a national priority. It was a national priority, and when things are national priorities, they get tons of extra funding. [00:37:16] But then when the priorities change, the funding changes. And so it’s possible to reboot a rocket program that would let us build rockets, the upgraded versions of the originals. And it would cost, the numbers are in the Medium post, but tens of millions or a hundred million dollars, some large number of millions of dollars, to reboot the program. [00:37:43] And then a lot of money per rocket that you actually produced, and it would take a while to reboot. So there’s an interesting, another example that people don’t know about, um, is this thing called FOGBANK. Are you familiar with FOGBANK? [00:38:00] Oh, what’s this? Yeah. So, I mean, if you look on Wikipedia, um, it says, and it’s in all caps, FOGBANK: [00:38:08] FOGBANK is a code name given to a material used in nuclear weapons. FOGBANK’s precise nature is classified. In the words of former Oak Ridge general manager Dennis Ruddy, quote, the material is classified, its composition is classified, its use in the weapon is classified, and the process itself is classified. [00:38:27] Cool. We don’t know what FOGBANK is, but we do know that they forgot how to manufacture it. Oh, wow.
Now, each of these cases, I think, teaches us something really important, because for people who are in the mentality that we can’t lose knowledge, just look at some of the cases: you can lose knowledge. Yeah. [00:38:49] Even with information technology. Yeah. Yeah. And then, for people who want to get past the declining, you know, eclipse-of-civilization [00:39:00] mentality, then you want to look at what exactly is happening. Um, and from what I remember of the FOGBANK case, they had the original formula. It’s not like they threw away the chemical formula in its classified file hidden somewhere. [00:39:21] Right. Okay. They had it. But when they went to manufacture it, it didn’t work. It was like, well, why would it not work? Well, the answer is that over time your manufacturing processes improve, and as they improve, they remove impurities. And so if your original formula worked because of impurities that you didn’t understand, then as your manufacturing gets better, you may actually lose the capacity to manufacture things. [00:39:50] And so when they realized this, they were like, oh no. And then they went back and spent a lot of time and money, much more time and money than they were expecting, and then we got the ability to make it. So [00:40:00] we’re fine. We re-figured it out. Things were fine. Okay, but then, I mean, this is the sort of dynamic we’re talking about. [00:40:08] Um, and one thing I’m super interested in is research programs, um, and how knowledge is created and preserved and so forth. The idea that some of our manufacturing processes rely on impurities, or sort of anomalous things we don’t exactly understand. Right. Um, and as a result, over the course of time, conditions change and things stop working. Almost no one’s thinking about this.
[00:40:37] So if we go back to the conversation about academia: well, imagine that original academia has a bunch of impurities in the way that science is being done. Right. Okay. Like, are you selecting the best candidates? Not nearly as rigorously. Are you testing them? No. Are you measuring impact factor? No. So [00:41:00] there’s a lot more variation. [00:41:02] You scale up the process, and maybe in the course of scaling up the process, you refine it so that you can pick exactly a particular type of candidate. But maybe the thing worked because you had many different types of candidates. And so you can think about the entire process of scaling up and institutionalizing science as a manufacturing process. [00:41:26] Like, we are trying to manufacture bricks of scientific knowledge, or blocks of scientific knowledge. And if you change the manufacturing process, you’re like, well, wouldn’t it be better? So take the replication crisis in psychology. Yeah. Some people have proposed that we should lower the p-value, [00:41:49] that is, the threshold for significance. So, you know, if your study comes back with p greater than 0.05, bad; lower than 0.05, statistically significant, good job. Um, [00:42:00] some people have said, well, because the studies aren’t replicating, we should raise the bar, which means lowering the threshold, and maybe we lower it to something like 0.01. [00:42:12] That’s an interesting suggestion, and maybe it would work. Um, but do we really understand this manufacturing process? I mean, take something like preregistration. Preregistration is another example where it seems like an obviously good idea. If we’re worried about scientists fiddling with the numbers so that they always get a positive result because they need to publish, then why not just have them preregister so they can’t fiddle with it?
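[Editor’s note: the threshold change discussed above can be sketched in a few lines. This is an illustration only; the p-values are hypothetical, not from any study mentioned in the conversation.]

```python
# Hypothetical p-values from four studies (illustrative only).
p_values = [0.003, 0.02, 0.04, 0.06]

# Compare the conventional threshold (0.05) with the stricter
# proposed one (0.01): raising the bar means lowering the cutoff.
for alpha in (0.05, 0.01):
    significant = [p for p in p_values if p < alpha]
    print(f"alpha={alpha}: {len(significant)} of {len(p_values)} significant")
```

Under the stricter cutoff, fewer results clear the bar, which filters out true and spurious findings alike; that is exactly the uncertainty about the "manufacturing process" being discussed here.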
[00:42:39] Well, I mean, it really depends. It’s something like, you can say, well, why don’t we just increase the standards? Well, increasing the standards increases the cost per unit of production, which may increase the degree to which people route around the [00:43:00] requirements for the unit of production in clever ways. [00:43:02] Maybe if you increase the requirements and don’t decrease the pace of publication required, maybe the thing you’re doing is incentivizing people to figure out how to game the system, whether consciously or not, in more and more clever ways. And so this is an example where, I think, one of the good lenses to use in thinking about knowledge production is to think of it as a manufacturing process. [00:43:29] And then you want to understand: what do you know about it, and what do you not? Do you really want to switch to the new set of machines when you haven’t tested the new set of machines? Well, they’re just the same as the old set of machines, according to test X and test Y. But what do we really know about test X and test Y? Um, and so forth. [00:43:48] And this doesn’t mean you don’t change anything, because I do think you can make changes that radically improve our ability to make progress in developing knowledge. But I think, you know, we should be careful. Definitely. It almost feels like, [00:44:00] have you read Seeing Like a State? [00:44:02] Um, I have not. I’m familiar with the general idea of legibility, though. Yeah. So it almost feels like that a little bit, in that, you know, this process works, but we don’t know exactly how it works, so we should probably be careful about how we approach it and not just go blow it all up. [00:44:20] Yes. And I think it’s quite important to have [00:44:28] conservatism in the sort of approach to policies in this way.
So I think, you know, “we don’t know how it works, so don’t change it” is really good advice, and that needs to be paired with the fact that we do need to make more progress and we do need to fix things. Um, and so I’m in favor of, I mean, the main thing I would take from the Seeing Like a State idea is: careful, if you [00:45:00] go through and you remove all the underbrush from the forest to make it cleaner, the forest might die. [00:45:05] Right. Um, what you want is for the principles on which you’re crafting your research programs to make a lot of sense. Um, there’s a sort of high modernist tendency. It’s like, well, let’s make it into a geometrical shape. Let’s make it a grid. Great. [00:45:26] Yeah. Awesome. No, exactly. I was thinking of Brasilia. And, um, [00:45:34] I think part of what modernism is about, in a way, is not trying to rely on the past, so that we can do new things and so that we can create what we are. And I think that’s good, and I think we can partially create what we are. Um, but I also think we need to understand what we are. And so if we try to lay everything [00:46:00] out in a grid, and human nature and the research process mismatch the grid, then something bad will happen, [00:46:07] and your studies won’t replicate. Right. That makes a lot of sense. I really like that. Um, and going off of that, you know, what do most lay people misunderstand about science? What do scientists themselves kind of misunderstand about science, like the process? We mentioned a couple of things. [00:46:25] Are there any other really big things that are worth mentioning? That’s a really good question. Um, well, [00:46:38] I think [00:46:42] it’s an interesting question what the public doesn’t understand about science, because there’s a sort of counterintuitive thing.
I really want to write some things about this, because there’s a bunch of counterintuitive thoughts here. I haven’t gotten to it yet, but I really want to do this. A lot of what people [00:47:00] learn is from movies and television shows and so forth. [00:47:05] Right. Um, you know, I’ve seen some source claiming that most of what people know about our legal system in the United States is actually from TV shows and movies and so forth. Um, but I also think that even though movies and television shows and so on are frequently unrealistic, or frequently have some premise like “there are dragons” and so forth, um, in order to have the thing be realistic, the writers and authors and creators [00:47:37] frequently have to try. They’re trying to make the thing true to life in at least a number of different regards. Right. And so I think there’s this really interesting question here. So take, you know, my favorite example: the degree to which academic scientific funding is backed by the military. [00:48:00] Gotcha. It’s not a thing that’s part of normal, everyday discourse. You know, when people are like, let’s make more progress and so forth, they’re not normally thinking about that. But as soon as you step over to Hollywood, then you know the scene, right? The scientist is like, well, but this is amazing. And then the military comes in and they’re like, you know, we came to see what you’re working on, Professor. We’ve been funding your research for many years. And, you know, the professor’s like, no, I wanted my inventions to be used for good, not to harm people, and so forth. Um, and then it’s an interesting question, actually.
[00:48:37] How much of the funding ultimately originates from the military, and sort of who understands better what’s happening there? But I’ll bracket that. I feel like there’s more to be said in that area. Um, on the topic of what scientists don’t know, that’s a good question. [00:48:56] It’s a big category. I’d want to talk to a lot of scientists to know. But I would [00:49:00] say, among the people who frequently comment on science, including many scientists, there’s a thing that I think is really not understood, which is the way that science interacts with authority. Okay. [00:49:14] Interesting. I mean, there are lots of things to be said. I mean, right now, Leverage is doing a lot of research on early-stage science. I feel like people nowadays have this image of science as, you know, you’ve got lab coats, you’ve got the Large Hadron Collider, you’ve got randomized controlled trials. [00:49:35] That’s not how science actually got started. Of course, those things are all characteristic of late-stage science. Okay, well, what’s early-stage science like? Well, people are poking at stuff and seeing what happens when you mix chemicals together and so forth. And I think at least one thing that’s quite misunderstood in the area is that even though people have this sense of science [00:50:00] in its early stages, frequently when they talk about it, they’re trying to apply standards, or patterns, that are more pertinent, more fitting, for late-stage science. Um, but one thing that I think is importantly different is the role of authority. [00:50:20] And so I’ve been thinking about this.
I don’t really feel like I have finished thoughts, but one thing that at least certainly seems different is that at the beginning, you know, if you go back to early electricity and early magnetism, or look at early astronomy and so forth, um, it’s much harder to use the science authoritatively. But when you get to the later stages in the scientific process, then science comes to be used authoritatively. [00:50:52] And so I think there’s this strange thing that happens where people will say, [00:51:00] science is about always questioning. You know, never accept on the basis of authority; new frontiers; challenge your assumptions. In fact, I saw some media outlet, a blog or science blog or something, that had both a tagline [00:51:21] that was something like “always questioning” and an article which was like, “can you people stop questioning the science on the topic of the pandemic?” And it’s like, well, how do we square the fact that on one hand science has this ethos of continuing to explore, challenging, questioning, [00:51:41] and on the other hand, at a certain point, you want people to just accept the science because it’s solid and we know what we’re talking about? Right. Um, and so I think that the fact is that science is now very, very frequently used authoritatively in society. [00:52:00] When governments make decisions, you know, many more modern governments are going to want to justify things by reference to science. [00:52:11] It’s like when the U.S. decides to go into lockdown in the pandemic, it’s on the basis of a study, an academic study coming out of a very prestigious university. And so, you know, over the last while we’ve been trending more and more away from really trusting individual decision makers and instead wanting impersonal mechanisms to verify. Gotcha.
[00:52:36] You don’t want, you know, the dictator saying that it’s their will; you want, you know, an impartial, scientific assessment of the facts, so you make good decisions. But then, so what’s happened is there’s come to be a very large demand for authoritative statements from science. And then in some cases [00:53:00] the supply is there. Like, in a bunch of places in physics, we do know the answers well enough to comment authoritatively. But in a bunch of places, it isn’t. [00:53:12] And this is where it gets really tricky. Um, and this is actually, I think, one of the sort of rift points. I don’t know, this is one of the places where there are clashes in our society now over science and government authority. You don’t want the government just making decisions arbitrarily or on somebody’s whim; you want it to be well justified, right? [00:53:33] Um, on the other hand, you don’t want anyone saying that the science is good enough until it’s actually good enough. And a lot of the problems are actually really hard, and we’re not actually at the point where we have totally solid answers. So then what do you do with this? [00:53:52] You can disclaim authority and [00:54:00] mark everything preliminary, um, and resist attempts by government officials to rely on your work in any sort of official decision-making. But that seems, I don’t know, sort of rebellion-oriented, unless the academics really all want to declare it’s time to resist the authorities, and I don’t think they want to. [00:54:25] Exactly, exactly. Um, whereas on the other hand, we do want our governments to make good decisions and base them on the best available information.
And so, to put it concisely, I think one of the things that a lot of people don’t understand about science is that it’s being called on to play an authoritative role in a way that sometimes it’s able to, [00:54:54] and sometimes it’s less clear. And this is the sort of thing that can produce a [00:55:00] very notable distortion inside of different fields. And then there’s an object-level question of, well, is that really happening, to what degree, and what do we do about it? Gotcha. Yeah, that makes a lot of sense. I also wonder, too, it sounds like, you know, these [00:55:19] knowledge-making organizations, like universities, that gives them a lot of power. If the government goes to them and says, what policies should we do? You know, I’m the staffer, and I have no time to figure out what’s right. Professor, help me. And it sounds like, you know, power corrupts and absolute power corrupts absolutely. [00:55:35] Um, should we be concerned about that at all? And I don’t have any good alternatives, you know what I mean? So this is one thing where I think, um, in general, in some cases a system is so bad that it should be torn down without any expected replacement. Okay. But that’s usually not what’s happening, [00:55:57] and people usually aren’t thinking [00:56:00] clearly about what happens if some reigning authority is removed without a replacement. Um, and so, in terms of what you said, should we be concerned about it? I think people should be concerned about the authoritative use of scientific research in decisions. But I think what this means is, and this is an area where I think independent thinkers have a sort of role to play, [00:56:36] we need concrete, positive alternatives. It’s not enough to say, well, maybe we should just stop relying on science. That’s a terrible idea.
Well, should we stop relying on science until it’s ready? It seems like there’s got to be some [00:57:00] evidential value that can be extracted from what we’ve done so far. [00:57:06] Okay, well, what about adding epistemic tags to everything, so that we keep the public informed? It’s like, well, sure, except that you have to make the thing understandable, and a system using epistemic tags and probabilities is not going to be understood. Um, and there are all these sorts of problems, high modernist problems, where you’re like, well, why don’t we just have everyone do Bayesian updating? [00:57:28] It’s like, well, that’s really hard. Also, there are weird, subtle problems that occur if you really try to run with one or another proposal. Um, but I do think that this is an area that deserves to have more light shined on it. Um, and there are all sorts of things that are really interesting and suggestive. [00:57:55] One is figuring out how [00:58:00] we can assess the quality of research occurring in different places. I mean, if quality of research affects the amount of funding, this is a political question. Right. And do we trust academia to assess itself? But then who do we trust? Do we trust the government to assess the research that academia is providing? [00:58:20] I mean, now we’re at the who-watches-the-watchmen challenge. Um, and this is where I think there’s a place for social philosophy, um, you know, abstract sorts of thought on the topic of what we should do. And then, um, I think that getting that stuff right is also super hard. [00:58:48] I mean, it’s easy enough to come up with a simple idea for how to solve the problem that would be terrible if implemented. Right, exactly. Really well put. Well, Geoff, I [00:59:00] really wanted to thank you for coming on.
And, uh, do you have any parting thoughts, [00:59:07] and where can people find you? Yeah, sure. So, I mean, um, you can email me at email@example.com. Um, you can follow me on Twitter; I guess my Twitter handle will be put up in, I guess, the podcast info. And then, um, I have a few essays up on Medium, um, and ideally, hopefully, I’ll write some more things soon. [00:59:34] Um, let’s see, is there any last comment? Let me just think for a moment about what we’ve talked about. [00:59:51] Okay. I think there’s an opportunity that hasn’t fully [01:00:00] been crystallized, but I think would be really very valuable, which is some attempt (and since it involves enough people, it would need to be a somewhat organized attempt) to try to synthesize and organize information on the topic of coordination. Like, a lot of the things we’ve been talking about are coordination-related things, [01:00:28] and in particular coordination and knowledge. What happens when you increase the number of researchers a hundred x? What happens? You change the coordination mechanism; maybe you change the research. Um, you know, looking at, I mean, needing authoritative statements from government, that’s frequently for the sake of coordinating populations in general. [01:00:52] Um, and so there’s another relation there. Uh, I think the specific design of research [01:01:00] programs frequently relates to this. And then something we didn’t talk about that much, except, you know, startups and scaling, is that recently, I think, a whole lot of knowledge has come into existence for the first time, I think it’s the first time, on the topic of what a certain type of coordination experiment, a.k.a. a startup, is like. I mean, you have tons and tons of attempts to work together in a small-scale way that eventually expands and then provides a stable product or service to society.
And if you look through all of the knowledge on the topic from Paul Graham and Peter Thiel and so forth, tons of it is knowledge of coordination and how coordination works. [01:01:47] I think there should be at least some, there should be more movement in the direction of bringing together and trying to organize all of this information, because I think there’s [01:02:00] now a lot of information available that will help us to understand how we can effectively organize people for research. And, uh, maybe there are larger sorts of consequences that could be drawn. [01:02:11] So I think that’s a thing. When I meet independent thinkers, a lot of them are really focused in some way on this question of coordination, like different facets of the gem. Um, and I think it would be great if that knowledge started to come together. That’s excellent. [01:02:28] I love that thought. Well, thank you. We’ll definitely have to do this again. Absolutely. Thanks. Yeah, thanks for having me on. This was a great time. All right. See ya. [01:02:41] Well, that’s our show for today. I’m Will Jarvis, and I’m Will’s dad. Join us next week for more Narratives. [01:03:00]