71: Robotics, AI and Philosophy with Abhay Venkatesh

Hosted by
Will Jarvis

In this episode, I’m joined by Abhay Venkatesh to discuss AI, software infrastructure, robotics, anthropology, philosophy, and startups. 

William Jarvis 0:05
Hey folks, welcome to Narratives. Narratives is a podcast exploring the ways in which the world is better than in the past, the ways in which it is worse than in the past, towards a better, more definite vision of the future. I’m your host, Will Jarvis, and I want to thank you for taking the time out of your day to listen to this episode. I hope you enjoy it. You can find show notes, transcripts, and videos at narrativespodcast.com. Hey Abhay, how are you doing today?

Abhay 0:45
Doing well. I sneaked out of work a little bit early today to do this podcast. And yeah, excited to be on the show.

William Jarvis 0:53
That’s awesome. Do you mind giving just kind of a brief bio, and some of the big ideas you’re interested in?

Abhay 1:00
A little bit about myself: I work at this company called Anduril Industries, which builds software and hardware products for defense. I’ve been working here for two years and eight months, and I currently work on the platform infrastructure team,

working on a variety of different things, mostly on our platform, which we’ve called the Lattice AI platform; it’s this platform for defense. Before that, I was on the autonomy program, where I built early autonomy systems. Before that, I was on the perception side, working directly with the CEO as an early employee. And before that, I was in college, where I studied and did research in AI. So that’s a little bit of my most recent background. I was also the president of an AI club at my university that we built from scratch, so that was fun.

William Jarvis 1:59
Super cool. Super cool. And you know, why spend your time on defense? What about it is appealing to you in particular?

Abhay 2:09
So yeah, that’s an interesting question. I’m not sure how much I’m able to talk about this topic; it’s a little bit related to my employer. But the way I ended up at my current job was mostly that I was looking for a fast-growing company. I thought Anduril was going to be one of the fastest-growing companies, given the background it had: you know, Brian, the Director of Engineering from Palantir; Trae, a partner at Founders Fund; and Palmer, the founder and CEO of Oculus, which had already been sold to Facebook as a $2 billion company. Having that all-star lineup, as well as Matt and Joe, the other founders, who had been so successful at Oculus and Palantir in the past, plus our founding team, was kind of a stars-aligning moment. It was hard for me to not see how this would succeed. So that was why I ended up joining.

William Jarvis 3:15
Nice. So it’s something like: this is a rocket ship, I see I’ve got an opportunity here, this is worth pursuing. Not: okay, defense is a big problem and I want to apply my talents to it. That kind of came after the fact, if that makes sense.

Abhay 3:27
Yeah, it leaned more, more on both sides.

William Jarvis 3:31
Cool, cool. That’s super cool. I want to talk about robotics a little bit. You know, we had this promise, if you look at science fiction in the 40s and 50s, and you think about the Jetsons: robot assistants everywhere. And that really has not panned out, right? I mean, I’ve got a Roomba sitting beside me, but that’s about the extent of how far we’ve gotten. There are hard technical challenges around perception. What is the big stumbling block between where we are now and the point where we’ve got, and I may not be defining this well, robots that can really robustly help us fold laundry, do household chores, provide industrial capability? Where do you see the big places that need innovation to get there?

Abhay 4:27
Definitely. As a kid, I grew up watching those shows too; I would watch the Jetsons, and that’s probably the first way you get introduced to robotics: a humanoid robot that’s friendly, helps out, and so forth. And then, as I studied computer science and artificial intelligence, the reality was almost the opposite, where these kinds of technologies are in fact a lot harder than building, say, a pure virtual assistant. Something like Siri is easier to build than something that will clean your room, right? And it’s not apparent at first why that would be the case. Siri can answer all kinds of questions about the world because it taps into search engines; why is that easier than being able to clean my room or make me food? It’s not obvious. But what we’ve realized is that we have really good systems for information processing, collecting the world’s information and organizing it, and we don’t have very good systems for perception, collecting real-world data and understanding the real world. And what I’ve realized over time is that the systems necessary to build these kinds of algorithms, the real-world infrastructure, are not really there; we’re really far away from being able to build general-purpose robotics that actually operate in the real world. The closest we’ve gotten to general-purpose robotics recently is in the domain of picking, where a robot picks objects on a line, and it’s general-purpose in the sense that it can pick different kinds of objects. But that’s the extent to which general-purpose robotics has gotten. So I would diagnose it as a systems problem at the end of the day, but obviously people will differ on that.

William Jarvis 6:23
Interesting. When you think about it as a systems problem, is that just connecting all these disparate technologies you need? So I can think of, let’s say you want to pick a strawberry: well, you’ve got to identify what a strawberry is, and then you’ve got to have some way of determining how hard you’re going to press on it when you pull it off, because if you press too hard, it’s going to explode. There are many challenges here; how do you navigate in space? So I guess, what do you mean by a systems problem? Or is it just connecting everything up in a robust way?

Abhay 6:55
So certainly, one aspect is wiring things together and getting all those things going. But the other aspect is, I think there’s this quote from Isaac Newton, where he says that if I have seen further, it is by standing on the shoulders of giants. He was speaking from a scientific perspective, about inventing and discovering new things in the scientific domain. But the technological domain has a similar kind of idea, where every great leap in technological progress is made possible by infrastructure or systems. Let me give you an example. When we think about Google, how was Google even possible? It was possible because you had this thing called the internet, this ubiquitous technology that we don’t even think about. Alan Kay has a line on this, which is something like: this technology doesn’t even feel like technology, it almost feels like a natural resource. It feels like an ocean that you can just ship bits onto, and it will just work. So when we think about what enables these really powerful applications, it’s those hidden layers of systems that are built up. And when you transpose this to robotics, it’s similar: do we have the hardware? Do we have the sensing systems? Do we have the perception systems? Do we have the development tools? The robotics development ecosystem is far worse than the cloud software development ecosystem. So really, when you add up all those things, it manifests in a difference that is not just a 10x difference, but a 1,000x or multiple-orders-of-magnitude difference in output. That’s one way to formulate it. Does that make sense?

William Jarvis 8:50
Yeah, that makes a ton of sense. It also seems like part of it is kind of a capital problem. It’s just so much more expensive to do things in the real world, when you’re dealing with robots, than with zero-marginal-cost software, right? Do you see things changing now that capital is so easily available? Do you think it’s possible that we’ll see more innovation in the robotics space just because of that?

Abhay 9:20
I go back and forth on that. So the idea around quantitative easing is that when you increase the amount of credit in the economy, it leads to one of two things: either it leads to productivity increases, or it leads to inflation. To get the productivity increase, you need spare capacity in the ecosystem, which basically means something along the lines of engineers ready to build stuff who aren’t building stuff because they don’t have money. If that is true, then the credit gets put to work, because you have spare capacity. But if that’s not true, you see weird phenomena instead. And what we’re seeing is that you don’t actually see people going out and building stuff; you see weird phenomena on the internet, like NFTs and crazy stuff like that. The meme economy is getting created instead. What we wanted was people building robots and buildings and monuments and so forth; what we got was a meme economy, and everything is more fake than ever. That’s what I see happening. So the question is maybe better framed as: we don’t need more credit, what we need is spare capacity. And then, how do you arrange things in a way that creates spare capacity to build stuff in the real world?

William Jarvis 10:48
Right, right. Yeah, it’s a real problem. And you make a great point; it does seem like things are just getting even more virtual. There’s this whole discourse around, like, whatever the hell web3 is, I’m not sure, but it’s something, right? Everyone’s leaning into it. It’s very interesting. So, and this is a tough problem, and I don’t expect you to have an answer to it, but how do you encourage people to actually work on real problems in the real world? Yesterday we talked to Carlos from Quaise, who’s doing millimeter-wave drilling for geothermal energy, which is really cool technology. It’s very difficult, but if he pulls it off, it’s a very important thing to be doing. Is it just, you know, you started an AI club, is it just talking to young people, telling them that it is possible to solve some of these problems we have in the world? What do you think about that? What should we try to do as individuals to help?

Abhay 11:53
It’s a good question, and it’s a weird one. Because it’s funny: when you look at the decisions people have about how to spend their time, it’s like, you can make $10 million in three months working on crypto, or you could spend ten years improving other people’s lives. And I’m not saying that crypto is bad and everything, but I’m saying that the way things are set up right now, the incentives are to work on the virtual thing. And to be fair, that hasn’t been the case just for crypto; it’s been the case for software more generally over the last 20 or 30 years. It’s just been way easier to build stuff in software. So I think there was a point someone brought up on this, which is that if that’s the situation, what we should do is make the real-world stuff easier. And that’s a long debate, and I think there are many ways to do it. One of the ways is just making it cool again, and I think people have been working on this. And it’s pretty cool, actually, when you tell people about a crazy idea like that geothermal drilling project. And then what we need to see is a series of successes that make this the thing to be doing, and that can maybe change things.

William Jarvis 13:26
Yeah. And the success aspect is really important, right? People need to have something they can look up to, or desire, and if they don’t have that kind of model, then they’re in trouble; it’s like, how do you get there? Absolutely, success begets success. It’s super interesting. Glenn, do you have a question?

glenn jarvis 13:47
Yeah. So as an AI guy, I’ve come across this idea that for things like vision systems and general-purpose AI, what you might need is, in essence, a body, or a way for the AI to navigate the world. And this is from the idea that as animals, and specifically as people with really advanced consciousness, we think of things in terms of what we can do with them. So we think of pens as things we can pick up, and for things that are way too big to pick up, we think about how we can get around them. Like, we look at a mountain and we think: I can walk up that. What’s your opinion on that?

Abhay 14:33
I think I’ve read an argument along these lines; I can’t remember from whom, but it’s something like this: philosophers basically use an argument where, since the AI is not in the world, in the sense that you just described, where it does not interact with the world (“being in the world” in the philosophical sense), it cannot actually attain intelligence the same way humans do, and therefore artificial general intelligence is not possible. It’s a pretty interesting paper, a pretty famous one. I think it might be Hubert Dreyfus, if I’m getting that right, or some other really famous person. But it’s an interesting idea, and it’s always an open question whether you can actually get intelligence this way. There’s intelligence that is substitutive of humans, and then there is intelligence that is of a different kind, almost supernatural in nature, or maybe not supernatural, but something like that. And I don’t know how you get intelligence that is substitutive of humans in the sense you describe. How do we get an AI to move around the world? Even if you get the perception systems sophisticated enough, it’s unclear. Maybe I’m wrong; maybe there’s a limit you hit where the perception systems get so good they match the resolution of the eye, the nose, and so forth, and then you put all those together in a mechanical body, and then it’s an agent and it will learn by moving around the world. Maybe that’s possible, I don’t know. That’s one kind of intelligence. And then there’s the other kind of intelligence that’s not in the world in that formal sense, that’s in a virtual world, on the internet or something, and operates purely on language.
That is closer to the current foundation models we’re looking at, like GPT and BERT and so forth, which are imitating intelligence purely by operating on language. And that is a different kind of intelligence than the intelligence that is in the world, interacting with the physical world at least. Those can be separate agencies with different kinds of powers. And maybe, speculating a little bit, those two kinds of intelligences have different growth trajectories, where GPT-20 or something is very good at knowledge work and so forth, whereas the agent that is in the world is able to interact with the physical world and learn interesting things that way. But there are pretty hard challenges on the real-world side, because the way we have done agent learning is using a large set of episodes: when you’re training agents to play a game, you have to run billions of simulations. You can’t do that with a real-world robot. So that’s one challenge. But otherwise, yeah, maybe we will train two different kinds of intelligences, and not necessarily one super-powerful intelligence. Does that make sense?
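The episode problem Abhay describes can be made concrete with a toy example. Below is a minimal sketch, not anything specific to the systems discussed in the episode, of episode-based Q-learning on a five-cell corridor; all environment details and hyperparameters are invented for illustration. Even this trivial world takes thousands of simulated episodes to learn, which is cheap in software but prohibitive on physical hardware.

```python
import random

# Toy 5-cell corridor: agent starts at cell 0, reward at cell 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)               # step left, step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

def step(s, a):
    """Deterministic corridor dynamics with walls at both ends."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

def run_episode(q, max_steps=50):
    """One simulated episode of epsilon-greedy Q-learning, updating q in place."""
    s = 0
    for _ in range(max_steps):
        if s == GOAL:
            break
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:  # greedy with random tie-breaking
            a = max(ACTIONS, key=lambda act: (q[(s, act)], random.random()))
        s2, r = step(s, a)
        # Standard Q-learning update toward the bootstrapped target.
        q[(s, a)] += ALPHA * (r + GAMMA * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

def train(episodes=2000, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        run_episode(q)
    return q
```

After `train()`, a greedy rollout walks straight to the goal. The point of the sketch is the episode count: 2,000 episodes for a five-state world runs in milliseconds in simulation, but scaling the same loop to vision-based control on a physical robot is where the "billions of simulations" wall appears.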

glenn jarvis 18:01
Have you heard of Cyc? It’s a company that’s trying to, like, hard-code heuristics in language. And it’s really interesting, because they’re trying to get a jumping-off point for general intelligences to understand things. It’s really hard to explain, but, like, if you say “Glenn died,” and then in the next sentence you say “Glenn was walking around the mall,” you assume that they’re talking about different times, because of your general heuristic that dead people can’t walk.

Abhay 18:45
Is the idea that you’re going to build a heuristic database of some kind and prime the general learner from that?

glenn jarvis 18:52
Yes. And they had come up with this type of coding language where you could explicitly state these heuristics. I listened to a podcast about it; it was from Lex Fridman, but I forget the guy’s name. And I don’t even know how to spell Cyc. So yeah, I don’t know if this helps a lot.
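To make the idea concrete, here is a tiny sketch of what an explicitly coded heuristic base might look like. This is not Cyc’s actual representation language; the rule table, predicate names, and function below are all invented purely for illustration of the "heuristic database" idea Glenn describes.

```python
# Hypothetical common-sense rule base: each predicate maps to the set of
# predicates it cannot co-occur with at the same moment in time.
RULES = {
    "dead": {"walking", "talking", "shopping"},
}

def consistent_at_same_time(facts_a, facts_b):
    """Return False if any rule forbids the two fact sets from co-occurring."""
    for p in facts_a:
        if RULES.get(p, set()) & facts_b:
            return False
    for p in facts_b:
        if RULES.get(p, set()) & facts_a:
            return False
    return True

# "Glenn died." vs. "Glenn was walking around the mall."
same_time = consistent_at_same_time({"dead"}, {"walking"})
# same_time is False, so a reader infers the two sentences describe different times.
```

The appeal of the approach is that each heuristic is one legible entry; the difficulty, as the conversation goes on to note, is that there may be on the order of a hundred million such entries to write down.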

Abhay 19:17
I’m not sure how recent it is, but I guess if it’s Lex Fridman, I’m sure it’s fairly recent.

glenn jarvis 19:23
Well, the guy’s been doing this for something like 30 years, I think. And the way he started was, he was asking his friends or colleagues: if you could distill all of common sense into a number of ideas that people have learned, how many could there be? And everybody came up with something like probably around 100 million. And so, how long would it take for a number of coders, using a language, to explicitly write them down in a database?

Abhay 20:02
Hey, that’s an interesting point. What number has he gotten to, do you happen to know?

glenn jarvis 20:10
They said it in the podcast. I think his name was Lenat, Doug Lenat, and he was with Cyc.

Abhay 20:18
Right. I guess, so you said there are only 100 million core ideas or something; how many ideas does he have so far?

glenn jarvis 20:27
I think he had gotten to something like 70,000, or maybe 700,000.

Unknown Speaker 20:33
That’s a really long way off.

glenn jarvis 20:37
Yeah, not super far. And he’s been going for 30 years.

Abhay 20:41
He must have pretty high motivation, I guess.

William Jarvis 20:46
Yeah, exactly. Well, this reminds me: how concerned are you about AI safety? We’ve talked to quite a few people that work in the field on the podcast. I’m almost more worried that we never get to AGI. But yeah, how concerned are you, and what do you think should be done about it?

Abhay 21:12
So whether I’m concerned about the AGI question is always a question mark, and there are famous figures who have been raising alarms about it. The shorter-term concern is actually more around what I would call data privacy and its regulation, which is: who controls the datasets? That’s one part. And the other is more about compute governance: who controls the compute? Because what we’re seeing right now is that the powerful intelligences that are getting built require heavy compute. It’s compute-intensive; I don’t even know if it’s data-intensive anymore, it’s almost purely compute-intensive. So whoever has the big clusters is going to reap outsized returns, and I think that is a weird setup, mostly because when I think about AI, it’s kind of a public-good invention, in the sense that it’s a kind of scientific discovery, a public-good discovery. It’s like patenting E = mc², some fundamental law of the universe; those are the kinds of things you’re discovering. But what we’re seeing in practice is that it’s the big tech companies that have the capital resources to build these things. So that is how I would frame that whole debate, at least in my head. I’m sure people have other opinions, but my main card there is: who controls the compute?

William Jarvis 22:52
So it’s just a resourcing problem, and there are only a few people that really have the capability to make it happen.

Abhay 22:57
Yes, and then they will make it happen for their own benefit, I suppose, rather than for the benefit of some greater good. So that is the situation right now.

William Jarvis 23:09
I think it’s a good way to think about it. And one should perhaps go buy some FAANG stock.

Abhay 23:14
Yeah. People have been bullish on that, and it’s going to be interesting how that develops.

William Jarvis 23:20
Yeah. I have the feeling that the big tech companies have gotten fairly sclerotic. What’s your feeling on that? We’ve talked to someone from Facebook’s AI research lab, and he works on robotics, actually doing some pretty interesting things. But outside of a few small pockets, it seems to be: just sit on the giant money-printing press of ad revenue and ride that out, despite internal political battles, until it explodes or something. I don’t know. What’s your feeling on that?

Abhay 23:59
I’m not sure it’s uniform at these companies. There are certainly places, certainly ways, to retire early by working at these companies, right? I’m sure there are many positions you can join right out of college at a big tech company where, yeah, it’s an early retirement package: you graduate, and now you’re done. I’m sure that’s possible. But at the same time, it’s also the case that all the top AI researchers are, in some sense, working at these companies, whether it’s Facebook or Google, or OpenAI, which is now kind of funded by Microsoft. So it does seem to be the case that they have the best AI talent hired. This is from an outside perspective; I did intern at Facebook, but that was a while ago. That’s my view, at least.

William Jarvis 24:59
Makes sense, makes sense. I want to shift a little bit to Girard. You know, how did you first get introduced to Girard? And to you, what makes Girard really interesting? What facet is it? Is it understanding desire, or the kind of understanding of myth and violence? What is really interesting to you?

Abhay 25:22
The way I got introduced to him was that I was really interested in understanding anthropology, you know, studying human nature and what that means, and I came across him that way. I think Girard saw himself as a cultural anthropologist, and he has this theory of human nature that is revealed through certain texts. And I guess that is the aspect that is interesting to me: he has a theory of human nature, one model of viewing people as mimetic beings. And from my AI perspective, that’s not that crazy, because you’re doing imitative learning and so forth, right? It’s an interesting model and an interesting way of analyzing human behavior. That’s been my main exposure.

William Jarvis 26:22
Definitely, definitely. I don’t know about you, but did you have a moment, after you started reading Girard, where you looked at your own desires and thought: wow, man, I need to really sit down and think about this?

Abhay 26:34
Absolutely. So there’s this one aspect of his theory, the triangular structure of desire, and that one really hit me hard. I was analyzing my own relationships and so forth, and it’s too real; it’s like, well, this is way too real. And that has been pretty powerful. And I’m surprised people hadn’t discovered this already. It’s embedded in literature, it’s embedded in Shakespeare and so forth, but I’m surprised nobody had just stated this theory before; I’m sure some people have. But yeah, it’s a really powerful model, and it sounds almost too stupid to be true.

William Jarvis 27:21
You know, it does; it seems simplistic until you start thinking about it, and then you’re like, oh man, it drives quite a lot. I’m in the middle of reading Notes from Underground by Dostoevsky, excuse my pronunciation there, and it’s incredible how ingrained all these ideas are. He’s talking about how we used to scapegoat people, and now we can’t; when we do it, we know it’s wrong, and isn’t it worse to do it if you know it’s wrong? It’s quite impressive, and I can see how Girard pulled a lot of his ideas out of that. How much do you buy the specialness of Christianity, and how Girard paints it as being the first, like the Christian myth, the story of Jesus, being the first scapegoat proclaimed to be innocent in myth? I don’t know if I’m saying this very well, but do you buy that? Do you think this is a true statement?

Abhay 28:27
Yeah, that’s an interesting question. Let me think. What I would say here is that I’m not sure where Girard comes off on this. In his own texts, the way he describes it is how this mechanism is revealed in Christianity, but I’m not sure he thinks it’s unique. And he has pointed to other examples: for instance, there is a Hindu tradition, and he’s not an expert in Sanskrit, but he has talked to Sanskrit scholars, and he believes a similar mechanism is present in other religions. So I don’t even know where he comes off on that. His own background is Christianity, so it’s understandable why he is focused on it. It seems like an underrated area of research right now, at least the way he describes it. And I don’t know if it’s just Girard, but a line I’ve heard on this is: religion is bad cosmology but good anthropology, meaning that these texts have anthropology embedded in them, and they’re worth understanding and studying for that reason. And then there’s this whole notion, I don’t know if you’ve heard of it, of the Axial Age: there are ages in human history where it’s almost like our language, our understanding of consciousness, is evolving. During the Axial Age, there was one period of time in human history when all these religions were born; it’s kind of like a Cambrian explosion in religion. And after that, you don’t see many more religions being born. So it’s kind of like an evolving consciousness, or an evolving language. I think in that same period of time you see Christianity, Buddhism, Zoroastrianism, and a few other religions being formed. So it’s very much like an evolution in consciousness, which is an interesting way to think about it.

William Jarvis 30:48
That’s super interesting. Maybe it’s just that all these things came together, and it was the right time for us to discover this kind of thing, kind of like the Julian Jaynes Origin of Consciousness idea, right? That’s very interesting. Which one of Girard’s works impressed you the most?

Abhay 31:10
So that’s a tough one. I really like the one called When These Things Begin. The reason is that the setup is basically the interviewer attacking him with questions, “but why not this?” and so forth, and he just destroys him, which I really love. It’s a pretty interesting setup, and, to the credit of the questioner, the questions asked are the main objections people usually have to Girard’s theory, and he answers them. Depending on your interpretation, I thought he was fairly agreeable, and there were some very wild tangents. That text not only goes over the usual anthropology, philosophy, and religion questions, but also goes over science questions. Girard has this whole exploration of a pattern in physics where people say, okay, we’re going to keep opening up parts: we discovered the atom, and then we have subatomic particles that are strange, and we just keep trying to go down. And Girard believes that this is almost a dead end, in a way, where we’re just going to keep finding another layer. So he has views on all these things, which are not necessarily directly attached to his main theory. That’s why there are interesting tidbits.

William Jarvis 32:53
Oh, that’s super interesting. What was that text again? When These Things Begin? That’s super interesting.

Abhay 33:01
I’ll look back at the name as well. Yeah.

William Jarvis 33:05
Super interesting. You know, what has it been like, I’m assuming you’re a great engineer with a hard engineering background, coming to look at human nature and cultural anthropology through that lens? Has that informed anything? And perhaps, do you think that made Girard a bit appealing, in that he’s systematizing, in some sense, human nature?

Abhay 33:34
Yes, that is definitely one aspect. So this is one of the critiques of Girard, I believe: there is an aversion in the humanities and social sciences to systematizing. There were the structuralists, like Lévi-Strauss and so forth, who did have some systematization, but more modern work has been against systematization, and more around avoiding that. And there’s, I think, a long intellectual-history reason for why that is. But it is definitely true that in the humanities and social sciences there’s an aversion to systematization. And as a computer scientist, that is something I’m naturally inclined towards: understanding patterns, all kinds of patterns. You see similar patterns in software engineering, where you think about how to structure an organization, and the code you develop is a function of that organization. So having a systems perspective on all this is one thing that really interests me, and beyond Girard, there are other thinkers who have this systematic approach to thinking about human nature. It definitely resonates with me.

William Jarvis 34:57
Definitely. And who are some of those thinkers, you know, like, what would you recommend

Abhay 35:02
The classical thinkers, like Aristotle and so forth. These people had a similar approach to understanding things: they brought nature down from the heavens. For the first time, they were thinking about human nature not as something like a celestial object, but as a thing you can observe and understand and build a theory from. The whole scientific tradition is, in a way, born out of that. So there’s a lot there, actually, if you just read the classical texts of Aristotle; he’s systematizing. And I mean, I think even Girard’s big idea, which is that man is, or humans are, the most imitative of all creatures — that was not Girard’s idea. That was Aristotle’s idea. So he came up with it before, or maybe he saw it from someone else, I don’t know. So the classical thinkers are a really good source in that sense. For some reason people don’t go to the direct sources, but there’s actually a lot there. And in a weird way, we haven’t necessarily moved that far from them, which is really bizarre, I think.

William Jarvis 36:15
Definitely, definitely. It’s quite bizarre. I’m curious, are you down for a quick round of overrated or underrated? Sure. Okay. So the first one: Boston Dynamics. Overrated, underrated, or correctly rated?

Abhay 36:32
Underrated as a marketing firm, overrated as an engineering company. Oh, really. It is primarily a marketing company at the moment. Right, right. Every time they release a wild video, people’s brains explode. And for, what, two decades they’ve been shuffled around — they just get acquired by one company and then another. Gotcha. They’re really good at marketing. I mean, I love it. I just don’t know what you can do with them. Can you actually use them? I don’t know.

William Jarvis 37:02
Right? So it’s like, kind of searching for an application,

Abhay 37:06
But they’ve been getting acquired for 20 years. I mean, that’s the point. At some point, yeah, you either start being a consumer, product-market-fit-oriented company or something else, I guess. Yeah.

William Jarvis 37:21
Negative interest rates: overrated or underrated?

Abhay 37:30
So on the one hand, everybody’s always screaming about those things. Yeah. On the other hand, there is visibly high inflation and low productivity. So maybe appropriately rated. Yeah, it’s an evergreen problem. Nobody’s doing anything about it, so it just keeps going, I guess.

William Jarvis 37:57
Right. Political polarization in the US: is that overrated or underrated right now?

Abhay 38:09
So, probably appropriately rated. I mean, people seem to be pretty crazy about it on either side, and yeah, I don’t think there’s any alpha in having a better take on it. Right.

William Jarvis 38:26
Too many people looking at it, right? Yes, exactly. Definitely. GPT-3: overrated or underrated?

Abhay 38:36
I would say currently — actually, this is a tough one. I think it’s definitely underrated. I think people are underestimating the extent to which language models can be the next computing platform. So if GPT-3 is magic, then GPT-20 is going to be, I don’t even know, absurdly good. It’s going to be wild.

William Jarvis 39:05
But really scary. Do you think it just ends up replacing most knowledge workers in the US?

Abhay 39:12
I don’t know about that. It will definitely change the character of knowledge work by a lot. If it continues the general trend of automation, then automation changes the character of work, and the time spent shifts to different things: you’re no longer shuffling papers and stapling forms, you’re doing something else, sending out an email or something. So it definitely will shift the way knowledge workers spend their time. I don’t know that it can replace a human, because it will lack intentionality. By that I mean it can produce output, but it’s not operating in the world, and it won’t have agency. So someone’s got to drive it.

William Jarvis 40:07
I’m sorry — someone’s got to drive it, like direct it? Yeah.

Abhay 40:10
You tell it where to go. Yeah, it’d be more like a car in that sense. So someone has to be the driver. And yeah, that’s how I would predict it will go in the short term. Nice.

William Jarvis 40:22
Awesome. Awesome. I love that. Well, hey, thank you so much for taking the time. I’ve learned a ton. I really appreciate you coming on. Where should we send people? Do you have a personal site or something that you’d like people to check out?

Abhay 40:34
Yeah, absolutely. I have a personal website. It’s just abhayvenkatesh.com. Feel free to check it out.

William Jarvis 40:46
Awesome. We’ll link to it. Thanks, Abhay. Absolutely.

Abhay 40:49
Thanks. Bye.

William Jarvis 40:57
Thanks for listening. We’ll be back next week with a new episode of narratives.
