52: Neuroscience, FROs and Biology with Adam Marblestone

Hosted by
Will Jarvis

In this episode, I got to sit down with Adam Marblestone to talk about how to accelerate science, what most people don’t understand about neuroscience, and how to have an outsize impact.


Will Jarvis 0:04
Hey folks, I’m William Jarvis, and along with my dad, Dr. David Jarvis, I host the podcast Narratives. Narratives is a project exploring the ways in which the world is better than it has been, the ways it is worse, and the paths toward a better, more definite future. I hope you enjoy it.

If you enjoyed this episode, please subscribe. You can get on our mailing list and find show notes, transcripts, and videos at narrativespodcast.com. Thanks. Well, Adam, how are you doing today?

Adam Marblestone 0:45
Doing pretty well. It’s been a long day, but I’m happy to be chatting.

Will Jarvis 0:49
Well, thanks so much for hopping on at the end of the workday. Could you get us started by giving a brief bio and some of the big things you’re interested in?

Adam Marblestone 0:59
Sure, sure. Yeah, I mean, I think the common thread is that I’m trained as a scientist, and my main interest has really been in advanced physical technologies. Since I was a teenager I was interested in nanotechnology, and I read a lot of sci-fi books. My academic training is in physics and biophysics, and a little bit in neuroscience. But I’ve always been coming at those from a frontier-engineering, almost sci-fi-engineering perspective: less “how do we advance this scientific research topic” and more “what would be the huge nuclear sledgehammer we could bring to this problem?” Maybe we can talk about some of that. But that was the approach, for better or for worse, that I was taking with neuroscience, and over the years it led to the realization that in some cases we don’t have an organizational mechanism to push hard enough, in a concerted enough fashion, on certain technologies that impact even very basic research, let alone more product-oriented technology. That’s what led to what I’m working on now.

Will Jarvis 2:30
And could you talk a little bit about where you think that gap is? Is it in funding, or incentive structures, or whatever, to get those gains?

Adam Marblestone 2:41
Well, what I’ve been talking about with a lot of people, including some of your previous podcast guests like Ben Reinhardt and others, is that there are actually multiple different gaps, and probably multiple different ways of fixing those gaps. My particular lens, coming from a couple of different areas, particularly neuroscience and a little bit of nanotechnology and the biomolecular-adjacent sciences, has told me that there’s a gap in the development of fundamental platforms and tools that would be used not only by end users, but by researchers developing the next generation of knowledge and technology. These are tools, platforms, and systems that require a tight-knit, concerted, systems-engineering-type approach to build, but where the end users, or the communities that would be involved in building them, exist within more of the basic research ecosystem. If you need to make a VR headset or something, that’s something where you can get a lot of systems engineering behind it in the context of a company. But if you need to make a robot that does brain mapping, or a microscope that looks at proteins, or platforms in areas that are still somehow pre-commercial for whatever reason, or that generate a public good rather than a commercial product, I think there’s a gap there. That’s partly because of how our research system is organized: it’s fragmented into thousands of individual academic laboratories, for the most part, with less ability to organize startup-like or industrial-like structures for building stuff.

Will Jarvis 4:36
Gotcha. So it’s something where the labs themselves are kind of too small to take on this kind of task, and it’s not close enough to commercialization for the startup phase, for venture capital. So there’s some middle ground where things get missed. Is that right?

Adam Marblestone 5:01
I think so, and I think a lot of it has to do with, again, this idea of there being multiple gaps. But the common theme is that the research system, particularly in the biomedical sciences, is structurally a bit homogeneous. There are lots of researchers applying for the same relatively small-scale grants, and training students and postdocs who then want to get into an academic position where they can apply for similar kinds of grants. And nowhere in there, with some notable exceptions, certain large institutes, or if you like the startup ecosystem around biomedical tools, is it easy to organize a team to build a system where the goal is not to keep participating in that particular cycle of getting those types of grants, and acquiring the types of features that lead to getting those types of grants.

Will Jarvis 6:04
That makes sense. So it’s something where the incentives, especially at the level of academic research labs, all point one way: the graduate students want to get into a similar academic position, and to do that you need to publish. You’ve got to get as many high-end publications as you can, and it’s intensely competitive. So the only goal you can really have is to get as many publications as possible to make it onto the employment ladder.

Adam Marblestone 6:31
Yeah, and taking three years to put together a CEO-led twenty-person team, get millions of dollars of funding, and move outside the university, those kinds of activities would constitute a pretty big risk and a divergence from that model. That’s a bit of an oversimplification: lots of people, increasingly in the last five or six years, are leaving to do biotech startups instead of the academic path, and there are some fantastic new institutes, and old institutes, that are changing some of this. But for the most part, your description is the sociological problem setting in the biomedical sciences, and I think it extends to other fields in different ways, where people talk about technology somehow being stuck in the lab. But I think there’s a little more to it than that. It’s not that you necessarily want to just take it out of the lab; you might want a different kind of lab, and a different kind of team, working on that technology, even at a pre-commercial, public-good, or public-dataset-generating phase.

Will Jarvis 7:38
That makes sense. I really like that. This is a broader question, and a bit of a left-hand turn, but in general, do you think science works better than it has in the recent past, say since the 50s or the 70s? Or do you think it’s working less well, or about the same? Do you have any sense of that?

Adam Marblestone 7:59
Yeah, I mean, I’m a little bit skeptical of the premise of asking whether, quote-unquote, “science” is better or worse. It’s quite broad. And I don’t mean to push back; I think it is a worthwhile question to ask. But my personal interest is in hyper-specific and highly heterogeneous cases of this, and in finding the ecological niches that may be coming into or out of existence to different degrees. I do think there is a general trend, though, that I alluded to. This is a theory; it’s not coming, in my case, from a detailed academic analysis. There has been a proliferation of federally funded science, in a certain kind of model, done in universities, which I think is on the whole a good thing, and a good thing to proliferate. In no way would I push back on that. There might need to be more PhD students in the future than there are now; I’m not saying, oh, there are too many PhD students or something. But structurally there is too homogeneous a set of incentives. Parts of the system have become so competitive, along so few ecological niches that are meaningfully distinct, that a lot of free energy gets sucked out of the system. You’re spending all your time competing for certain kinds of progress that you need in order to stay in the system. Whereas my imagination of 50s and 60s science is that there were, in some sense, many more ways to do it, many more kinds of scientists, and sometimes much more trust placed in individuals, or in more speculative visions, potentially longer-term activities, different kinds of institutes, different kinds of personalities. And now we’re putting so many requirements on people to be able to get those NIH grants, and the equivalents of them for other types of researchers, that we might be sucking some of the air out of the system at this point.

At the same time, other things have emerged, like startups, where you can do extraordinary new kinds of research if you have the right match to a business opportunity. And in the end, there’s still lots of really fundamental stuff coming out. Some fields, I think, are working incredibly well. Quantum computing, I think, is working stunningly well at basically every level, from basic intellectual creativity all the way down to hardware engineering and commercialization. My impression is that quantum computing, both in the US and in other places like China, is just booming, and that it would have been very impressive even by 1960s standards, maybe even more so because it’s proliferating. If you go on the arXiv, you’ll see a hundred quantum computing papers come up every few hours, and a lot of them are good. So I think there are some fields making really stunning progress, overall we’re making a pretty decent amount of progress, and I’m interested in finding the micro-niches where you could unlock particular types of progress.

Will Jarvis 11:57
Definitely, yeah, it’s really interesting, and I like your approach. You mentioned free energy, and that reminds me of Inadequate Equilibria, which is exactly where that’s from. I found the book on your blog, and that’s awesome.

Adam Marblestone 12:11
I’m glad you asked about that. Yeah, it’s a really useful mental framing somehow. It’s a little hard to capture what the thesis of that book actually is that isn’t somehow already known to everyone, but at another level it has a lot of useful framings.

Will Jarvis 12:28
Yeah, exactly. I know exactly what you’re saying about the book. And could you talk about that a little, because the area you’re looking at is interesting, right? Because it’s underserved. We talked about FROs a little, but with more specificity: when you were originally thinking about this problem, “I need to go work on this,” how did you think about it? Why is everyone missing this? It’s kind of a vague question, but what was that thought process like?

Adam Marblestone 13:06
Yeah, it wasn’t at all obvious to me. It’s the result of, I would say, ten or so years of being in the research system. It was not at all obvious to me that there was a gap of this type. And it didn’t come, in my case, from a systematic analysis of the type Eliezer Yudkowsky does in Inadequate Equilibria, general kinds of incentive traps and emergent phenomena in systems and stuff like that. It didn’t come from historical analysis. It just came from an odd situation that I was in, where I was this very bullish, excited, maybe irrationally exuberant early graduate student. I had been doing a bunch of physics research as an undergrad, I hadn’t had any real obstacles in my way, and it was all very fun and exciting. I was in a grad program in biophysics with a very high-freedom fellowship that basically let me do whatever I wanted, a grad program that would let me do whatever I wanted, and an advisor that would let me do whatever I wanted. So I had this irrational exuberance about the kinds of problems I could pursue as a graduate student that, in retrospect, anybody with a traditional PhD mentor, and my PhD mentor was amazing, would have seen the traps I was getting myself into. The first project I wanted to do was to try to create a fabrication method that would allow you to put any molecule at any location on a chip, over a centimeter scale, with something like 10-nanometer resolution, using DNA nanostructures. Then we tried to make genes that couldn’t mutate, and we tried to record the activity of all the neurons in the brain into DNA, and we had just a really good time with the really big ideas. But what I struggled to do was to recruit more than a couple of people to work coherently with me on these things.

It was always this dance of writing joint grants with other academic labs. What does this postdoc want to do? What did they have funding for versus what do we have funding for? There wasn’t a growth model or development track that was the equivalent of saying, hey, we’re going to start a startup on molecular recording of neurons in the brain, or a startup making nanometer-to-centimeter molecular chips. There wasn’t a clear model for doing that. So I tried in various settings. I tried in the academic setting, where we went after both government and philanthropic large-scale grants, particularly to do things around large-scale brain-mapping approaches that combine a bunch of technologies in a somewhat high-risk and complicated way. Eventually I also tried to do it in the for-profit sector, with a billionaire-funded startup adjacent to neuroscience called Kernel, which was a really great experience, where we were able to pull together teams in this incredible way. I remember an instance where, one week, we decided we wanted to work on optically pumped magnetometers, a new, different way of sensing magnetic fields coming from the brain. And we were like, okay, let’s call up this lab at NIST, find out where all their best postdocs went, and call those people up. A few weeks later those people had flown to LA and were working on optically pumped atomic magnetometers. That was really great. But even in that setting, I think there was a very strong pressure to ask, what’s the near-term commercial path for this? It would have been hard for us to pursue something just for the good of science, or humanity, or neuroscience, with that kind of facility for putting in people and investing in specific projects. And then finally, I spent some time at DeepMind, and I saw their ability to do this thing that I want really well: the ability to pull together these tight-knit teams and do systems work.

Adam Marblestone 17:56
In something that isn’t immediately a product. In their case, one example was protein folding; another was Go-playing computers. That was really, really impressive. But I was still sitting there thinking to myself, well, this is great, but this is just AI and computing; how do I do it for biology? So I found the gap by process of exclusion, in a very lucky situation where I was often very well funded, working with amazing people in great environments. Even in those best-case-scenario settings, with those mentors and all that, it was very hard to get this kind of product to exist, even very hard to spend time talking about it, because why are they paying you to spend time talking about that? You’re supposed to be doing something useful, like writing a paper. So anyway, I found the gap by trial and error. And now a couple of us who have found the same gap are generalizing it and putting words to it.

Will Jarvis 19:04
Yeah, that’s really awesome. And this reminds me of a consistent theme I’ve heard from quite a few guests in different ways. Have you heard of Don Braben’s Scientific Freedom? Sure, yeah. So we had Don on. You know, he’s like 85, still sharp; it’s awesome, really cool guy. But it seems really important to have a certain amount of slack to pursue these kinds of things. You can get stuck in these weird local optima where you can’t get over the next hump. So what does your model look like? Do you think about it kind of like Don does, where you find a smart person or a smart team and just give them unrestricted funding? It doesn’t have to be huge amounts of money, just enough that they get the flexibility to pursue their research goals. What do you think?

Adam Marblestone 19:58
Yeah, well, what you just said, and that model, actually relates to what I was just saying about the enviably high-freedom situation I was in, because that’s what let me realize these types of problems. If I had had to just write good papers, instead of doing whatever I was doing and brainstorming about nanometer-to-centimeter chips and stuff, I don’t think we would have even thought up these thoughts. So one distinction I would make is: what is the early-stage model that allows people to come up with really new ideas, really new directions, even new meta-directions? Focused research organizations are more like a meta-direction. That relies very much, yes, on this kind of “just fund smart people that want to dream about stuff” and let them work on those things in a totally self-organized way. None of the outcomes I’ve had in my work have been at all predictable, even a few years in advance; even the places I’m working were not predicted. So I’ve benefited a lot from that kind of ultra-high freedom, and I think we need much more of it. Then, when it comes to the specific projects and specific teams, zooming into a particular idea, which is the focused research organization, that requires more of a coordinated set of stakeholders to get involved, and more of a roadmapping process. And it would actually be potentially a very bad idea to do that on something where there are already startups, or where it’s just better done as an academic project in the traditional sense. So it needs a lot of scrutiny: are you hitting a particular bottleneck in the field that requires a public-goods-generating, philanthropic or governmental nonprofit org to be formed? Is it worthwhile for people to take the risk on these projects and divert from what they would otherwise be doing?

Do you have to go through all the hassle and complexity of figuring out teams and roles and compensation and all these things that startups have to figure out? It’s quite a schlep to create a new organization to solve a problem. It’s also quite a schlep for a funder to give people an amount of funding that justifies it; we’re talking tens of millions of dollars. So I think focused research organizations are actually very different, much more directed research, much more goal-driven, OKRs, roadmaps, CEOs and teams, than what Braben is talking about. But fundamentally, everything comes from unrestricted research, including the idea of FROs, and probably most of the ideas for particular FROs are not going to result from some “Department of FROs” milestone-driven quarterly process. They’re going to come from weirdos being supported to take unconventional perspectives that are maybe not legible to others for years, but might be legible to a Don Braben, who has the benefit of an individual human mind talking to someone, not a grant review committee.

Will Jarvis 23:20
Definitely. You don’t have to win over, you know, a whole bunch of people; he can understand it himself. And at least in the early stages, you need some trust.

Adam Marblestone 23:29
So, on Braben: I don’t think he does it in a completely blind-trust way. There’s some of the stuff that Ben has said in his interviews with him, and stuff that Braben has written, about how you systematically suss out people who not only have an unconventional vision of some kind, but are so obsessively curious about it that they actually know the specific next step to take, and can describe what they’re saying with an incredible level of concreteness. So you’re not just saying, in a purely artistic fashion, “I want this thing, I have this new idea, isn’t it great?” You’re able to search for people who have a unique vision, who are obsessively curious, so curious that they’ve actually gotten down to concreteness, and who have a certain level of functionality as people, so that they can get stuff done. But I don’t think a grant committee is the best way to do it. I think it’s something more like individual taste.

Will Jarvis 24:36
Yeah, and I think you’re absolutely right. And I think it’s something weird where a lot of what’s gone on is that no one wants to get on the front page of the New York Times for how all this money was misspent; you give it to one wrong person and it blows the entire thing up. And Don’s whole thing was, you know, quacks would call him all the time and say, “I’ve solved this, new advances in theoretical physics,” and he’d ask, “Well, how would that work?” And then they would just never call back.

Adam Marblestone 25:06
Yeah, that’s really interesting. So I think there is more ability for individual people to suss out the talent and coherence of other people’s ideas, even if they don’t totally understand them, because by definition, it’s not that Don Braben understood everything about the theoretical physics. But I think there are filters that individuals, or small groups of people who have honed in on this, can apply. You probably want multiple different kinds of people with different kinds of filters. But because it’s not legible, because it’s not accountable in the right ways, we underestimate not only how valuable that is, but also just how possible it is for somebody like Don Braben to exist. And I don’t think it’s just him; I think it’s probably a learnable skill to be like a Don Braben at sussing out people.

Will Jarvis 26:05
Which is really weird, and almost counterintuitive in the age we live in: that, you know, Don could have some really good ideas about how the future is going to play out.

Adam Marblestone 26:15
It’s not at all obvious that it should be the case. Because you could say, well, Einstein is by definition so much smarter than you that distinguishing Einstein from a crackpot is not necessarily an easy problem. How are you going to evaluate Einstein’s ideas without being smarter than Einstein? But Michael Nielsen and I have talked about this a bunch, and there are people historically that have done it really well. Michael Nielsen gives the example of John Wheeler, the physicist. Feynman was a Wheeler student, and so were a bunch of others. Whether Wheeler was that good at selecting, or just good at attracting, I don’t know. I think we underestimate the power of the individual mind a little bit these days in finding good stuff.

Will Jarvis 27:01
Right. And I’m reminded of someone who talked about this, maybe it was Peter Thiel, in relation to the clean-energy bubble in the late 2000s. He talks about Solyndra. Have you ever heard of Solyndra? Sure, yeah. You know, they weren’t flat panels, they were round solar panels, and a flat one is more efficient, or whatever. And the point was that the physics PhDs the Department of Energy had weren’t allowed to use simple high-school tools to determine that this was not going to work, or something.

Adam Marblestone 27:36
Yeah, I don’t know much about that particular story, but I think this issue of individual agency is an important one. I somehow heard that Solyndra was actually less of a mistake, that it was more of a politicized thing, and not actually as bad as people said it was. But I actually don’t know the details of that story.

Will Jarvis 27:56
Gotcha, that’s cool. Very interesting. So, FROs: can you give an example? What’s a really good example or application? What kind of problem would be perfect? Is it something like the Manhattan Project, or something different? What’s a good example?

Adam Marblestone 28:14
Yeah, the example I often give, because it’s also the historical one that motivates it in my particular case, is this idea of how you map brain circuits. If you think about it, why isn’t brain-circuit mapping a great company right now? Maybe it is, if you had the right investor; if Elon were investing in brain-circuit mapping instead of Neuralink, maybe it would work out. So I don’t exclude that you could do a company on it, potentially. But why isn’t it a great bet right now in the risk-benefit calculation of a venture capitalist? Well, first of all, there’s a bunch of technical risk and a bunch of challenges in it. Second of all, it’s pretty capital-intensive to get up to a useful level of scale; the brain is just really big, honestly. So you have to get up to a pretty big scale, and you have to create a lot of baseline datasets and baseline maps. What does a normal mouse brain look like? What does a young mouse brain look like, or an old mouse brain, before you can really say what’s different in schizophrenia? And because the human brain is so huge, it’s actually quite hard to apply this directly to the human brain, so often you’re talking about model organisms like mice, where the application to something like a disease model is speculative. In principle, I think this is going to be hugely impactful for artificial intelligence, and we’re going to understand the architecture of the brain; it’s unbelievable. But drawing a direct path, where at any given time the best use of an AI venture capitalist’s or corporate arm’s money is to invest in the fundamentals of brain-mapping technology, is a big stretch.

So empirically, doing $50-million-scale technology development projects, which are in turn upstream of the ability to map brain circuits fast and cheap, hasn’t been an easy sell for AI companies or biotech VCs. At the same time, it’s really a problem that requires a lot of integration of different components into systems, in a way that isn’t easily sold as the greatest idea for your grad-student thesis. If your thesis is about the chemistry for labeling neurons with different colors, then for that thesis to be useful to anyone, it depends very much on somebody creating an ultra-fast new kind of microscope so that you can actually image those colors, or somebody creating a new kind of virus that can infect those neurons. You have these coordination problems, where you have to solve multiple problems together in one system. It’s not an easy sell that working on those components is the best paper you can write in the next two years, and even if you did, you might not be well coordinated with the people in other labs doing the other parts. So it’s an example that requires systems engineering and focus and scale, multiple microscopes, a very concerted approach, but it’s somehow fundamentally pre-commercial. Unless, you know, someone like Elon decides to just do it as a risk investment, call it a for-profit company, and see if they can ride it out long enough.

Will Jarvis 31:45
Right, right. So it’s this weird middle ground where it’s just not really achievable otherwise. That makes a lot of sense. That actually reminds me, another left-hand turn here, but you know Robin Hanson’s Age of Em, the idea that brain emulation is a path to AGI. Do you think that’s a likely scenario?

Unknown Speaker 32:05
It hasn’t been my particular like, focus, or like, interest? I think it’s often frustrating to me, because I think a lot of the discussion around this gets caught in questions of sort of, in principle possibility of these things. And so you have neuroscientists, and you have philosophers and stuff saying, Oh, this, this is nonsense. It’s not in principle possible. I don’t think that’s the right level to be questioning this, I think, I think with sufficient technology, sir, that there’s lots of philosophical questions, would it really be you and so right. But I don’t think that there’s really an in principle kind of issue here. There are some interesting questions like like Christof Koch, and some others have this integrated information theory idea about consciousness. And in their interpretation, at least as it was a few years ago, if you were to run a simulation of a brain on a classical von Neumann computer, like your laptop with a separate memory and CPU and everything, that for various technical reasons, have their scheme about quantifying consciousness, that that thing wouldn’t be nearly as conscious than if you made a neuromorphic chip that actually had the physical conductivity, similar to the way the neurons are connected in the brain. Super. But even then, I mean, you could make that chip, right. I’m also not sure how seriously I take that particular claim. And anyways, I don’t think that there’s like a in principle, obstacles to doing this necessarily. I think there’s just a lot of, you know, is this a practical, desirable outcome that humanity is likely to pursue in the near term? Is that likely to be something that anybody really wants? Is it likely to be something that comes anywhere near before like, very advanced AI other kinds? 
I just haven’t been convinced that this is the right framing for any of the stuff I’ve been doing. I think of it in terms of fundamentally understanding how brains work, understanding what’s different about a diseased brain versus a healthy brain — but not uploading brains per se. That said, it’s certainly a very fascinating book and exploration, and a bit scary: it’s a little hard to find which particular assumption would have to fail for this, I think, somewhat dystopian outcome not to happen. Maybe it would happen.

Will Jarvis 34:40
Definitely. I think that’s a very wise observation. Well, Adam, your original background was in theoretical physics, is that correct? (Yeah, that was what I did my undergrad in.) And then you switched into biology. Do you think that gave you kind of a unique perspective on the field?

Unknown Speaker 35:00
It’s something that Ed Boyden, who’s one of my neuroscience mentors, has suggested actively pushing for in students: make sure you learn both an empirical discipline and a theoretical discipline. The theoretical part is about gaining confidence in long chains of reasoning and abstraction, and in the value of upfront design. The empirical side is learning from your senses, learning socially, learning in other kinds of ways. I personally found biology lab research vastly more difficult than any kind of calculation or computer-based research. I think many people find that, although some people seem to be naturally good at wet-lab biology — I certainly was not. But it was very interesting to expose myself to all that and try to fight through it at various stages, because it changed my sense of what the possible limiting or gating factors are in project design, what kinds of personalities you need involved in projects, and what can go wrong. I think that’s really useful. Most of the time, when really fantastic computational or mathematical people come to me and say, hey, what should I work on, I don’t usually say, oh, forget the fact that you’re a really great theorist at the whiteboard and instead go solder together electronics or dissect embryos or whatever. In many cases, people should just do what they like and what they’re good at. But for some people, having joint exposure is really useful. There’s certainly a crew of biologists who are better than me at both, who ended up being very successful systems biologists or the like by having both. Gotcha.

Will Jarvis 37:16
I think that makes a ton of sense — and coming at things from different angles can often be helpful. So Adam, another question out of left field. I love questions out of left field; I’ve got a lot of diverse interests, so I guess that’s why. What is common knowledge in neuroscience that laypeople just don’t understand, or have any idea about — or perhaps even misunderstand?

Unknown Speaker 37:47
I’m not sure anybody understands some of these things — maybe they’re just things nobody understands. But one thing that I think is most interesting about neuroscience right now is this interplay between very uniform, unifying theories and just massive biological heterogeneity. When I was a teenager, among other books, I was reading evolutionary psychology — Steven Pinker’s How the Mind Works and specific books like that — where they talked about this modular model of the mind: you have a different circuit built by evolution for each thing. So you’d have a mate-detector circuit, you’d have a how-to-deal-with-conflict circuit, you’d have circuits for all the visual tasks specific to hunting or something — these sorts of modules, I think, is part of the idea. And then, simultaneously, there’s this other body of ideas: the brain is a universal learning machine, a universal computronium-like substrate, and all the

Will Jarvis 39:09
rest, and like,

Unknown Speaker 39:10
yeah, Friston and Jeff Hawkins and a bunch of stuff before that — canonical cortical circuits and all that. And I think what’s really interesting to me — I feel like AI currently is mostly on the more universal-model side of things. What’s super interesting to me is: how do those combine? You could say this is just nature and nurture — and just how incredibly fruitful the nature-and-nurture problem really is. It’s not a simple answer. Maybe you want to form relationships as a function of social status when you’re a teenager, and participating in social status groups or hierarchies is an innate, biological primate reaction. Okay, great. But how do you detect what a social hierarchy is? If you can’t see the difference between a triangle and a square, how are you supposed to know the difference between high social status and low social status? So all these things that are seemingly, quote-unquote, innate actually depend a lot on learning too. Somehow the parts of your cortex that self-organize to detect shapes, and ultimately form this abstract concept of social status or whatever, have to plug in and send the right wire down to your brainstem circuit that makes you feel a certain way. So you have to have both self-organized, learned stuff and ultra-specific evolved stuff, and they have to intersect in some way that I think we really don’t understand.
And I think it might be partly that, compared to current AI, the brain just has much more complex learning signals, training signals, reward signals. It’s not just one reward signal — good or bad. It’s: this part of your cortex is being trained to do this particular thing that, twenty years later, will ultimately influence your participation in complex social networks and hierarchies. And it’s being trained not just with dumb, generic dopamine, but with something very evolved and specific training something very generic, and you end up with some crazy interplay of those. I don’t think we understand it at all. So anyway — nature and nurture, and just how incredibly rich that’s going to be for the next fifty years of neuroscience, is, I think, underappreciated.
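The contrast Adam is drawing — one global scalar reward versus many module-specific training signals — can be sketched in a toy way. Everything below (the module count, the targets, the update rules) is invented purely for illustration; it is not a model from the conversation, just a minimal sketch of why per-module signals make credit assignment easier than a single scalar:

```python
import random

random.seed(0)

# Three toy "modules", each holding one parameter. A hypothetical
# "supervisor" system knows a separate teaching target for each module.
N_MODULES = 3
TARGETS = [0.2, 0.5, 0.9]

def loss(params):
    # Total squared error across modules -- the single global "reward".
    return sum((t - p) ** 2 for t, p in zip(TARGETS, params))

params_multi = [0.0] * N_MODULES    # learner with per-module signals
params_single = [0.0] * N_MODULES   # learner seeing only the global scalar

LR = 0.1
for _ in range(300):
    # Multi-signal: each module sees its own error and descends it directly.
    params_multi = [p + LR * (t - p) for p, t in zip(params_multi, TARGETS)]

    # Scalar-reward: perturb all parameters at random and keep the change
    # only if the one global number improves -- crude credit assignment.
    trial = [p + random.gauss(0, 0.05) for p in params_single]
    if loss(trial) < loss(params_single):
        params_single = trial

print("multi-signal loss :", loss(params_multi))
print("scalar-reward loss:", loss(params_single))
```

The multi-signal learner converges essentially exactly, while the scalar-reward learner has to discover, by trial and error, which of its parameters the one global number is complaining about — a cartoon of the framing difference Adam describes.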

Will Jarvis 41:58
Definitely, yeah. There’s so much to look into and try to understand — it’s so complex, it’s really fascinating. So, you mentioned AI — AI safety. How concerned are you about AI safety? Should we be putting more resources into it, fewer resources, or are we doing okay? What do you think?

Unknown Speaker 42:20
I’m pretty serious about a variety of potential x-risk topics. It’s something that’s sometimes dismissed by the progress community, as if there’s an inherent conflict there. I think it’s an important question. The real question is: what can we actually do, and what can we actually understand now? For example, coming back to what I was just saying, there’s a really interesting person named Steven Byrnes, a physicist by training, who recently dropped out and is supported on a fellowship to work on AI safety research. He’s specifically working on this question: given what we know about the biological brain, is there something you can say about AI safety? And when you start to do that — if you take seriously some of the stuff I was just saying — what if, instead of one reward function, as we have in current end-to-end learned AI systems, you have thousands of individually trained modules getting training signals from a subcortical system that is itself a complicated, intelligent machine? Then even the basic framing of the AI safety problem starts to look quite different: what do you do if you have multiple sub-agents, or multiple cost functions, training it — versus one monolithic “am I trained to optimize paperclips, or something else?” And then there are yet other ideas, like whether you can make general, very advanced AI without any kind of emergent agency happening at all. And then there’s this higher-level thing: what if you have lots of AIs of various kinds, but the question is at the level of civilization — think about Facebook, right?
Facebook, in some sense, is not an AGI, but civilization and Facebook are maybe somewhat misaligned in certain ways, at an overall scale. So I think the question is how you hone in on the framing, and whether there are really interesting, actionable, current research problems there. On the margin, the AI safety research that’s going on is, for the most part, I think, very worthwhile, because people are taking stabs at this stuff — sort of the way various kinds of pure math are interesting, because they give the fodder for whatever’s the next round of thinking about this. I’m broadly very supportive of a bunch of different kinds of AI safety research, near term and long term, that people are thinking about. I’m just not convinced that it’s really converged on, “here’s the type of research — tell me what, as a program manager, I should fund in AI safety research.” That’s still very, very unclear to me. Gotcha.

Will Jarvis 45:35
So we should be looking into it, but there are a lot of different avenues — maybe even ones that aren’t broadly discussed yet.

Unknown Speaker 45:40
Yeah. In general — AI safety, biosafety — these are really important things. You have to have a community working on them, or else where are the ideas going to come from? Nobody will have thought about it. But it’s obviously a complicated tradeoff, because a lot of what has galvanized AI safety research in recent years is progress in AI itself. So I’m not into the idea of trying to slow down progress in AI. I do want to, in some ways, deepen the foundations and improve the level of rigor of certain kinds of AI. But it’s not that you slow it down; it’s that these things have to come along in lockstep. If we somehow say AI safety is all bullshit — can I swear on your podcast? please forgive me — if we say something like that, then we’re really excluding something that’s very important.

Will Jarvis 46:44
I’m definitely with you — I’m very much in the same vein. I think AI safety is important to think about, and I also think it would be very bad to say, we can’t do anything with AI, no more research, just cut it off. Because computing is one of the few areas of our society where you can still make a lot of progress — you don’t have the FDA stopping you. And I don’t want us to just say, we’re going to stop now, no more AI research.

Unknown Speaker 47:16
There’s another one that I think is maybe neglected: research on, broadly, the category of thing that Facebook does — recommender systems, influencing human behavior, surfacing information, propagating information through networks — but done more for the public benefit. You could imagine versions of Twitter algorithms that are actually searching for correct information, as opposed to viral information. Those feel like tools we should have: I should be able to get a Chrome extension that looks at my Twitter and helps fact-check me, or recommends a different set of content that’s more beneficial to me long term. We’re actually even thinking about whether there are focused-research-organization-type problems in that space of improving human reasoning and improving discourse. And those are also, I think, a broader AI-safety kind of notion. Yeah,
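The idea of re-ranking a feed for long-term benefit rather than virality can be sketched in a few lines. This is purely a hypothetical illustration — the post fields, the `accuracy` score, and the blend weights are all invented, and no real platform API is involved; in practice, estimating a credibility signal is the hard part:

```python
# Toy feed: each post carries a predicted-engagement score and a
# (hypothetical) estimated-accuracy score, both in [0, 1].
posts = [
    {"text": "Shocking claim!",  "engagement": 0.95, "accuracy": 0.2},
    {"text": "Careful analysis", "engagement": 0.40, "accuracy": 0.9},
    {"text": "Meme",             "engagement": 0.80, "accuracy": 0.5},
]

def viral_rank(feed):
    # The status quo: order purely by predicted engagement.
    return sorted(feed, key=lambda p: p["engagement"], reverse=True)

def benefit_rank(feed, w_acc=0.8, w_eng=0.2):
    # A public-benefit re-ranker: weight estimated accuracy heavily,
    # keep a little engagement so the feed stays readable.
    return sorted(
        feed,
        key=lambda p: w_acc * p["accuracy"] + w_eng * p["engagement"],
        reverse=True,
    )

print([p["text"] for p in viral_rank(posts)])
print([p["text"] for p in benefit_rank(posts)])
```

The same three posts come out in a different order under the two objectives — which is the whole point of Adam’s Chrome-extension thought experiment: the ranking objective, not the content, is what you’d swap out.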

Will Jarvis 48:17
Absolutely. I don’t know if you’ve ever watched a younger person using TikTok and its recommender system — they’re just swiping for hours, and it’s not clear there’s really any social benefit to it at all.

Unknown Speaker 48:29
Yeah — or for us. And that feels to me like one of these areas where there’s a disproportionate balance of power on the selling-ads side of things, versus on the humanity-figures-out-how-to-use-these-things-really-well side. So maybe we need that sort of public-benefit, gigawatts-of-computing kind of situation for some of these alternative approaches.

Will Jarvis 49:09
Definitely. So, before we jump into overrated/underrated, I have one more question. It’s a broad question, but: you’ve been working on this stuff, thinking about it at least since you were a teenager, making progress in neuroscience and AI-related fields. Have you gotten more bullish on your ability to make progress over the years, or less bullish? Maybe it varies over time, I don’t know.

Unknown Speaker 49:34
Yeah, I’m more optimistic now than I’ve been in a while. What’s funny to me about focused research organizations is that it’s a very simple idea, and yet it’s getting traction, I would say, in both philanthropic and government settings. And it makes me optimistic that if you just articulate very clearly what you want, go around yelling about it, have the right networking, and try to be rigorous in certain ways — I’m more optimistic now about changing the organizational side, the funding side. There’s a lot of post-COVID stuff; there’s a lot of stuff from cryptocurrency — the set of people who made a lot of money on cryptocurrency and what they can do to fix science. So I’m very optimistic about the set of organizational and funding modalities that are spinning up now. When I was describing that early period of being in grad school, being disillusioned about what we could do — I mean, that was around the same time DeepMind and other things were being started, so there certainly was a lot of organizational innovation, the Allen Institute and a bunch of other great things. But I personally feel like there’s a proliferation right now.

Unknown Speaker 51:02
The potential to create new organizational modalities for these things is super exciting — whether it’s HARPA, or FROs, or bespoke institutes for particular problems, or any number of other things.

Will Jarvis 51:16
That’s really good — a little note of hope there, I really like that. So: overrated/underrated. I’ll just throw out a term; tell me if it’s overrated or underrated, and maybe a sentence why. Or maybe it’s correctly rated, I don’t know. So — George Church: overrated, underrated, still underrated?

Unknown Speaker 51:37
I mean, still underrated, absolutely. He’s in his mid-60s and only now starting to be fully recognized. So many ideas — you follow them a few steps back, and you get to George, either as a catalyst of those ideas or as their actual human origin. And he’s just so supportive. I actually talk about this a lot with FROs: how are you going to make a really exciting environment for someone to work in an FRO if you’re not giving them startup equity from the beginning? Because, depending on how they’re done, they’re often going to be 501(c)(3) nonprofits or subsidiaries. I think about the Church lab, where it’s a university lab, but it’s basically a startup incubator without even trying, because you get so many great people, and the things they do afterwards are so shaped by that and by the presence of that network — it changes their opportunity space just to hang out there. So I really want new research organizations to have that character, where it’s not that you necessarily have to make money off something directly, but you benefit by being on the ground floor of a huge amount of entrepreneurial activity. Yeah.

Will Jarvis 53:06
How do you get some of the upside, yeah. And to think — eight miles down the road from here, at Duke, they kicked him out: “you’re not going to class.” Like, holy crap, guys. Duke is a really awesome place, but not letting him make mistakes — that was probably a really big mistake on their part. Right? That’s a big mistake. Neuralink: overrated, underrated, or correctly rated? What do you think?

Unknown Speaker 53:33
It depends on who you ask. I think some subset of the neuroscience community underrates it somewhat. There’s a tendency to say, well, the data they’ve produced isn’t anything particularly special scientifically — and so to discount the fact that the microchips and surgeries they’ve been developing, the packaging and all the engineering, are just so much more advanced, so much more of a step towards real human applications, than the much clunkier engineering that’s been done in the past — wireless transmission of the data and so on. So there are certainly some who underrate them. At the same time, I think the particular approach they’re taking — putting lots of little electrodes in the brain — is overrated. Some of the things they’re up against are very hard to avoid: immune responses in the brain, the blood vessels, getting very wide coverage without invasive, infection-prone surgery. And there are cases, even for severely disabled, paralyzed people, where non-invasive technologies can still give a lot of benefit. So the cost-benefit of that particular type of highly invasive technology is less compelling to me. But I’m still very into brain interfaces, and I think Neuralink is generally a positive force in the world.

Will Jarvis 55:10
Nice. Do you think there will be less invasive interfaces that come along that are kind of a superior approach?

Unknown Speaker 55:16
Eventually, eventually. DARPA actually has a program — I think it’s called N3, for Next-Generation Nonsurgical Neurotechnology — which combines some of these ideas of non-invasive measurement, like what Kernel is working on, with other ideas, like nanoscopic transducers, whether biological or something else, that help those non-invasive technologies actually pick up a signal. Because neurons in your brain didn’t evolve to be sensed by hardware — they don’t care. But if you can put little transducers in there, that can make it much more powerful. So, down the line — but I think we’re talking decades down the line.

Unknown Speaker 56:06
And I’m also optimistic about certain near-term things in neurotechnology — medical applications, new kinds of deep brain stimulation, ultrasound stimulation, other things. But I’m just not on board with the idea that the particular device Neuralink has shown so far is going to be the thing — the iPhone of neural interfaces.

Will Jarvis 56:27
Just click that Lightning cable right up there. The Manhattan Project: overrated, underrated? Gosh, that’s a complicated one.

Unknown Speaker 56:37
You know, I feel a little bad — in one of my early tweets about FROs I referred to them as “mini Manhattan Projects,” and, well, not everybody thinks the Manhattan Project was a great outcome for humanity. I mean, the scientists who worked on it — I don’t think they think of it as a great outcome. But, boy: the combination of organizational and technical excellence, the absolute best in the world on both fronts, truly pushing on something that was life or death…

Unknown Speaker 57:14
Underrated. Yeah.

Will Jarvis 57:19
Yeah. I’d recommend Now It Can Be Told, by General Groves — highly recommended. So Groves is the guy who handled the management side, out of the Army Corps of Engineers. Early in the book, he picks Oppenheimer, and the entire Army leadership says, you can’t pick him — there’s drama, there’s politics on the line — and Groves says, I don’t care, he’s the best, that’s who we’re going with. And it’s just story after story of how they willed this thing into being. There are obvious downsides to having atomic bombs, and a lot of bad things happened, but the technical accomplishment in such a short period of time is very impressive.

Unknown Speaker 58:01
It’s extraordinary, yeah. Actually, I have two five-hour car rides coming up next weekend, and I’m planning to have the audiobook of Richard Rhodes’ The Making of the Atomic Bomb as my backdrop.

Will Jarvis 58:16
That’s awesome — that’s going to be good, hopefully. That’s great. Well, Adam, thanks so much for coming on. Do you have any parting thoughts? And where can people find you?

Unknown Speaker 58:28
Thanks for having me — great to chat, awesome questions. You can Google “Adam Marblestone” — I have a website still linked to my old MIT address — and Google “focused research organizations” to find our white paper about that.

Will Jarvis 58:48
All right. Well, thanks, Adam — thanks so much. Well, that’s our show for today. I’m Will Jarvis, and I’m David. Join us next week for more Narratives.

Transcribed by https://otter.ai
