In this episode, I’m joined by Quinn Lewandowski to chat with Katja Grace. We discuss existential risks, the current state of the world, and AI safety. Katja runs the nonprofit AI Impacts, and you can find her work at https://katjagrace.com/
Will Jarvis 0:05
Hey folks, welcome to Narratives. Narratives is a podcast exploring the ways in which the world is better than in the past, the ways in which it is worse than in the past, towards a better, more definite vision of the future. I’m your host, William Jarvis. And I want to thank you for taking the time out of your day to listen to this episode. I hope you enjoy it. You can find show notes, transcripts and videos at narrativespodcast.com.
Tonight, I’m joined by Katja Grace and Quinn Lewandowski. Hey, Katja, how are you doing today? Hey, I’m doing well. Great. I’m good. Could you go ahead and just give us a quick bio and what you’re interested in, if that sounds good?
Katja Grace 0:58
I guess I work on trying to forecast artificial intelligence, and in particular how it will affect society in the future, in particular whether it will affect it very badly in the future, perhaps. When I say forecast, I mean it very broadly: just, what’s going to happen? Will it be bad in some way? What kind of problems will arise? Which involves a lot of researching things other than artificial intelligence. So I probably know a lot more about, say, discontinuous progress in cotton gin technology than the cutting edge of AI. My interests are broader, I suppose. I blog about a lot of different topics, I guess: how society works, game theory, philosophy. I feel like I’m interested in all kinds of things. But when I look online at a list of topics to choose from for some magazine or something, I realize I’m interested in almost nothing. It’s like, do you want sport or lifestyle? It’s hard for me to describe what the category is that I am interested in.
Will Jarvis 2:19
That’s really good. No, I love that. And you know, we’ve both got this really wide variety of weird interests that connect somehow, we’re not sure how exactly. How did you decide on AI as the important thing? People hear about it, and we’ve been talking about it more. I feel like Nick Bostrom maybe helped popularize some of the existential risks with AI originally. But how did you come to think, you know, AI, this is the big issue that I should work on, and that’s important?
Katja Grace 2:55
Yeah, um, I guess when I was 13 or so, it occurred to me that it’s possible to spend money on, for instance, helping people who are starving, and I was like, whoa, obviously I should do that. It just hadn’t occurred to me that the money we were spending on random things around my house was fungible between that and saving people’s lives. So as a teenager, until I was maybe 20 or something, I was very committed to saving all of my money for that kind of thing, basically, except what was necessary for eating or something. And I guess over time that kind of broadened into, oh, but if there was something even more important than helping people who are starving right now, I guess I should do that instead. And it gradually broadened into, oh, I should try and figure out what the best way to help the world is, and do that. And so I went through maybe environmentalism for a bit. And then when I left university, or I guess I sort of went on a holiday from university and took a trip around the world that I’d been given as a present for my birthday, I ran into the people worried about AI risk. I guess this was in 2008. And they were like, you know, have you considered worrying about this one? And I was skeptical, and have remained somewhat skeptical over the years. But I thought it seemed like an important enough thing to really try and look into, and say better whether it is a big deal. Yeah.
Will Jarvis 4:56
I love that. And I like that journey of thinking about, how do I help the most? There’s the very direct way, and then, how do you prioritize different things, right? How do you think about that? Can you talk about the term existential risk? The listeners may not have heard of it. Can you talk about why it’s so important, and why people in the future matter a lot?
Katja Grace 5:20
Yeah, I guess existential risk basically means something like human extinction. Very bad. But it’s usually taken to be slightly broader, to also include, you know, situations where humans still exist, but somehow things are extremely bad, such that it’s sort of no better than if humans were extinct. And it’s maybe also somewhat more complicated, because maybe there are also situations where humans go extinct, but we like what there is instead of humans enough, or it’s morally valuable enough, that we don’t call it a bad outcome, in which case I think that would often not be considered an existential risk. I’m actually not sure what the exact preferred definition is by the people who think more about this. But broadly, human extinction or something that bad. And I guess the reason that would be very bad, to the extent that it needs explaining, is that the future could be very long, and very big. There seems like a decent chance of there being humans or other creatures able to have happy lives in other parts of the universe; there’s a lot of universe out there. So we could have a lot of optimism for the future, and it would be unimaginably tragic if that were just sort of cut off.
Will Jarvis 6:52
That’s great. So it’s something like: if we went out and populated the galaxy in the future, there would be billions and billions and billions of people, and if there’s an extinction event next year due to, you know, nuclear weapons or something, they would never exist. And that’s something we should really consider and worry about.
Katja Grace 7:12
Yeah. Maybe there’s some question of whether other creatures would arise in the future or elsewhere who would, you know, go on to do great things also. So it’s a sort of complicated question to work out how bad it is, I think. But it seems like a bad thing. Yeah,
Will Jarvis 7:31
definitely. So, talking about existential risk and thinking about AI: Quinn and I are both part of a Slate Star Codex meetup group, and a lot of times AI will come up. I work at an AI company, and have for about five years. I’m much more concerned about AI never happening, and just general stagnation, things generally decaying, than about the AI being bad, coming to get us, being evil and not well-aligned. Interesting. Yeah. What do you think about that? Do you think about that other risk as well, that we’re too slow and we just never develop it, and there’s no progress? Versus, of course it needs to go well, because we don’t want to die, like Terminator, that would be very bad.
Katja Grace 8:21
Yeah, I guess I think more about the it-not-being-aligned type of problem, but I do have thoughts on both. I’m less concerned about the never-happening one, partly because I find it hard to imagine it never happening. I feel like even if we had quite a stagnation, or even if, you know, the current civilization sort of fell apart and we went back to the earliest stages of civilization, I imagine that we’d still get there eventually. I think it would be pretty surprising to just never do that. I guess some people would say that maybe it just can’t happen. I find that hard to imagine. We have minds as a proof of concept. It would be surprising if you could only do that with biology, or if you couldn’t make anything much better than a human mind if you were trying and understood what was going on with them.
Will Jarvis 9:34
Definitely, no, I like that. Um, what do you think the path forward for AI is? Is it something like brain emulation, like Robin Hanson thinks about in Age of Em? Or is it just, you know, algorithms slowly getting better? I don’t know if you’ve thought about that at all. I assume you have
Katja Grace 9:51
some. Um, I haven’t researched it much; I’m mostly going by other people discussing it a bit, that sort of thing. My sense is that it’s less likely that brain emulations come first. There’s a question of what comes first, and then you might imagine that in the long run there’ll be a lot of different kinds of AI, potentially. And, you know, if brain emulations came first, you might imagine they would still ultimately make other sorts of AI. Or if other AI comes first, there might still be good reasons to want brain emulations. I think currently my main expectation, which isn’t necessarily amazingly informed, is that the kinds of AI we currently have that aren’t that good become better and better, and that’s how we get something that’s roughly human level at most things.
Will Jarvis 10:58
Just keep iterating up, and it gets more and more advanced over time. Yeah, I really like that. So my next question kind of goes off of that. You wrote a post about discontinuous progress in history. What is that? Can you give some examples? And do you think that’s gotten more common?
Katja Grace 11:17
Yeah. Um, so the post I wrote was summarizing a bunch of work that several people at my research group, AI Impacts, did. So what we did was, we wanted to know: is it likely that artificial intelligence very suddenly gets better at around human level? Because that was often a concern people had, or have, that it would sort of be going along, not looking very exciting, and then quite suddenly there’d be one insight that would take you to general AI and suddenly being able to do a lot of things. So we thought a place to start on that is just: how often does that happen with any kind of technology? To answer that, we tried to look at other technologies and see how often it happens. But I guess we didn’t just take a lot of random things and then see if they were discontinuous, which might have been a better way to tell how common such things are. It was more like, all right, can we find anything like this? Do these things ever happen? And we determined: yes, they do happen, probably not that rarely. But given the way that we went about it, it’s hard to say definite things about the frequency, except that it’s not never. From the ones that we saw, looking at how they’re distributed over time, let me just look at the graph here again. Yeah, most of them are in the 1800s and 1900s, and they get rarer going back to, I think, 1000 AD, though the pyramids actually were also one, so going way back. But I think this could very easily just be from which things we know about. And more recently in history, they weren’t getting more common. I think the 40s and the 50s, or maybe especially the 50s, had the most in recent decades. So it wasn’t like they were getting more common in the 1990s and 2000s.

But that could very easily be from which things people know about, because we sort of crowdsourced these, like, you know, have you heard of anything that was just a sudden jump? And it’s easier for people to hear about, I think, things like telegraphs and trains and giant buildings, rather than tiny things going on with improvements in computing or something that they don’t see.
Will Jarvis 14:14
Right, much more vivid. Yeah, that actually reminds me: we had one of my favorite professors from college on a couple weeks ago. He is an expert on 16th and 17th century England, and especially on Francis Bacon. And a lot of his work asks: was Bacon’s method really that new? Was it really a new thing? And he’s like, well, you know, I’ve looked at everything he was reading and all of his manuscripts, and yeah, I think it was that new. It really was a quantum leap. Which is interesting, so things like that do happen sometimes. And it seems like that’s a good thing to be concerned about, or at least have your eye on, when you’re thinking about AI forecasting. I’ve got another question, and it’s going back a little bit. Why did it take humanity so long to kind of get going on everything? If you look at a line of growth, it’s just flat, flat, flat until you get to the Industrial Revolution, and then everything starts shooting up. You wrote a post where you mentioned that we didn’t get rope until something like 28,000 years ago, we didn’t get the wheel for, you know, a scary amount of time, and we didn’t get writing until about 3000 BC. Why do you think progress was so slow, and then suddenly just started accelerating?
Katja Grace 15:35
Yeah, I mean, I think it’s a very confusing question. I guess I wrote about how confused I was about that. People suggested lots of different answers to me, so I eventually got a list of them. But none of them really struck me as, oh, that’s clearly the answer. And I don’t know that any of them came with anyone being like, yeah, I’m an expert, and this is the answer, by the way. I think the ones that were interesting to me, or seemed sort of plausible... I guess one that was interesting is that there’s actually progress in concepts the whole time as well. Even the concept of trying to make things better, as a goal that you might be focusing on, is newer. I think that one seems kind of wild, but it’s pretty interesting to think about. Maybe a more plausible one to me, that’s perhaps a bit related: I guess rope is one that I thought especially about, because introspectively it feels like, if I had almost nothing but I was carrying things around, it would occur to me to try and make something like rope. And also, I feel like it’s not that hard to make something like rope out of plant matter. I feel like I used to do it when I was a kid sometimes, right? They weren’t necessarily very good ropes, you know, just twisted together out of big leaves and stuff. So it doesn’t seem that unlikely I would think of it, and it’s sort of surprising if it doesn’t happen in, like, hundreds of lifetimes consecutively, with lots of people living during that time. But what I could imagine is going on is that it occurs to me to make something like rope because I have some sort of affordance to tie things up, which is because I have seen rope before. So it’s like I have an impulse to find a rope-like thing and tie it. Whereas if I’d never seen that, it just wouldn’t occur to me.
Which makes me wonder, like, what things like that are we not seeing now? Maybe there are other things around like that that I could actually think of, and they’re just invisible because I’ve never used such a thing. Another interesting one was: there are sort of basic solutions to lots of things, but most of them don’t get built on, and so they aren’t useful in the long term. Like in the rope example, for instance, maybe I want to know how to carry this pile of food with me, and so I look for solutions. You know, I do have a concept of, let’s make this better, and it’s like, oh, I can get my kid to hold it. Great. And then it’s like, all right, problem solved. That one doesn’t get improved on into, you know, better ropes or something that we would then remember. Yeah, I guess I ended up with more than 20 different ones that people suggested. So those are some of them.
Will Jarvis 19:10
Yeah, that is really interesting, right? How do you make that first step, being the first person to create rope, going from zero to one? If you have no prior concept of it at all, it’s a pretty wild leap to make. Yeah, that’s interesting. Um, let’s see. So you wrote a post that was a list of high-level narratives about what was going on in the world. This was a couple of months ago; I really enjoyed this one. Do you mind if I ask you about a couple of those statements? One of them was: Western civilization is declining on the scale of half a century, as evidenced by its inability to build things we used to be able to build, and the ceasing of apparent economic acceleration towards the singularity. Seems like a big deal, and I’m kind of paraphrasing there. Do you think that tech stagnation thesis is a real phenomenon? Is that correct?
Katja Grace 20:11
Um, well, I guess with all of the high-level narratives that I wrote in that post, I didn’t mean to endorse them; they’re potential ones. Nice. Yeah. So I think on that one, I haven’t looked into it that much, so I’m unsure what the full state of the evidence is. I think there are some things that are pretty notable. Like, actually, during COVID I had more of a sense of, wow, we just can’t do this. Especially after seeing other places do better. I also wrote a post about New York’s effort to vaccinate lots of people, whenever that was, 1947, and it seemed like it was maybe 50 times faster than New York’s effort to vaccinate people against COVID. And it’s like, well, that’s a very notable fact.
Yeah, and I guess the one where I do know more about the evidence is economic growth, where if you look over long-term history and try and fit a curve to it, as gross world product gets bigger, the economy also doubles faster, roughly. And so I think for a lot of history, if you’d looked forward, you would say, oh, it’s going to reach a kind of singularity point. Just mathematically, it looks like it will go to infinity at a point, though, you know, presumably in reality something else would happen. But I think it sort of stopped looking like that at around 1950. And so that’s an interesting thing to note.
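The shape Katja describes here, where doubling time shrinks as gross world product grows, is what makes the historical curve point at a finite-time singularity. A toy sketch of that idea (my own illustration, not her actual data or model): if you assume the growth rate scales with the square of output, dG/dt = k*G^2, then each successive doubling takes half as long as the one before, and the doubling times sum to a finite horizon.

```python
# Toy model of hyperbolic growth: dG/dt = k * G**2.
# The closed-form solution G(t) = g0 / (1 - k*g0*t) reaches infinity
# at the finite time t* = 1 / (k*g0): each doubling takes roughly
# half as long as the previous one.

def doubling_times(g0=1.0, k=0.5, doublings=5):
    """Time taken for each successive doubling of G.

    Uses the closed form: G reaches level g at t(g) = (1/g0 - 1/g) / k.
    """
    t = lambda g: (1 / g0 - 1 / g) / k
    levels = [g0 * 2**i for i in range(doublings + 1)]
    return [t(b) - t(a) for a, b in zip(levels, levels[1:])]

times = doubling_times()
print(times)       # [1.0, 0.5, 0.25, 0.125, 0.0625]
print(sum(times))  # 1.9375, approaching the singularity at t* = 2.0
```

The parameter values here are arbitrary; the only point of the sketch is that hyperbolic growth, unlike exponential growth, hits infinity at a finite time, which is why a curve fit to pre-1950 history appeared to predict a singularity.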
Will Jarvis 22:11
Definitely. And it does seem like there are long-ranging consequences of that growth leveling off a little bit, and not just rapidly accelerating towards the singularity. Exactly. So, you mentioned the vaccination stuff. I also wrote this very brief piece complaining about vaccine distribution here in North Carolina, where we are. They had a problem where they literally weren’t getting the vaccines out of the warehouses; they were just sitting in warehouses, and no one was going to get them. And it was just baffling to me why the governor couldn’t just send, you know, the State Highway Patrol: give them a one-day course, this is how you give an injection, tell each of them to go pick up 100 doses, and you could have had all these vaccines distributed within one or two days. Do you have any sense of why we’ve gotten so bad at doing something so simple?
Katja Grace 23:10
Um, I have guesses. And, yeah, I have no particular reason to expect my guesses are better than other people’s. But I think some of it is regulations and things, where in the 1947 case I think it was much more like, well, let’s just get anyone who volunteers to give needles, and just do it as fast as possible, whereas now, I’m assuming, it’s harder to do that. I guess I wonder, in the COVID case, whether it can all be like that. Initially, seeing, you know, pictures of China, and maybe a few places near me, there were cafes that moved their counter to the door and then served through the door, so no one has to go inside. And that seems like a very simple solution that any particular cafe could do. But there are few cafes near me like that; it’s quite a walk to get to one that’s doing that. And I’m like, oh, why is that? It seems like regulation is not preventing that. Is it that no one really cares about it? I would think it’s sort of bad for the staff at least. Even if each customer is fine standing in a room with 10 people for five minutes to get their coffee, the staff are spending all year mingling with, like, every resident near here. And I feel like, when I calculated it, it was something like, in a year it would take three cafes to eventually kill a person or something like that, from COVID. And usually, if a cafe was doing something that would kill someone every three years, you know, there would be some pressure for them not to. But I guess one thing I wonder about all of this is that there’s some thought that a lot of, you know, human progress comes from us sharing ideas well between us, and that each human isn’t that great at thinking of good new ways to do things that actually go well.
And so while it seems like we’re sort of inventive on our own and able to solve problems, that’s not where the main action is. The main action is in cultural things spreading if they work well. And so if there’s a very new problem, like a pandemic, we get to see how poorly equipped we are for things where we have to just think on our feet about what to do. I think the pandemic has pushed me more toward thinking that’s what’s going on. But it’s still very confusing to see on a small scale, I think. Yeah, you’d think you could just go and pick up the vaccines. Definitely. Yeah, well,
Will Jarvis 26:18
actually, I’m reminded, I have this feeling that humans are, you know, pretty good at copying each other, but really just not very good at rationally optimizing. And in fact, they’ve done these studies with pigeons, I think, where they give them the Monty Hall problem or something, and a lot of pigeons can outperform quite a few people on these kinds of rationally optimizing problems. So I wonder if that’s part of it. Really, at the end of the day, our superpower is just, oh, that works, that’s rope, that’s great, we’re going to do that and implement it. Or, we’ve got this cafe, our neighbors are doing that, that seems like a good idea. But on our own it’s like, I don’t even know what’s going on.
Katja Grace 26:57
Yeah. Which I think interacts interestingly with the AI concerns. Because often that’s phrased as, well, humans with their amazing brains have done great things. But if it’s more like the network of humans passing ideas between them have done great things, and now we’re going to add more nodes to that network that are different, it’s less clear what you should expect to happen there, I think.
Will Jarvis 27:26
That’s a really good point. That’s a good way to think about it. The next question is about a statement you put in there: everyone is going to die, the way things stand. And I believe that, you know, this is a very bad thing. I think it’s very bad. Do you have any thoughts on cryonics, anything like that? Do you think it’s a good idea or a bad idea? Yeah,
Katja Grace 27:53
I think it’s a good idea. I mean, I guess there are different meanings of good idea. If you’re looking for the best altruistic way to spend money, I think it’s probably not that. But, you know, everyone puts a lot of effort into preserving their own health and looking after themselves, and as a way of looking after yourself, it seems like a great idea. Yeah, I think I enjoyed the Wait But Why article about this. It was describing it as, well, there’s not a particular point where you’re dead; it’s just, when can they revive you? Like, now we can revive people who have been quite cold, or whose heart has stopped for longer than in the past, whereas in the past it was like, they keeled over, they’re dead. So at the point where we can actually freeze you, and in the future we’ll be able to wake you up again, we shouldn’t really be calling you dead yet, because you can be revived. So I’m not signed up for cryonics; it’s on my list of things to do. Gotcha. Yeah, that makes sense. I probably want to look into it a bit more before I actually do it. On my current level of understanding, it seems like a great idea.
Will Jarvis 29:17
Gotcha. Quinn, I know you had a question about communities. Is that correct?
Quinn Lewandowski 29:23
Yeah. We were talking about overrated and underrated, going through that list of things, and I brought up the question of thinking for yourself versus distributed social cognition, whether you should trust what your society tells you. And it kind of seems like what you just said goes to that. One of my really close friends growing up had Down syndrome, and so that was kind of a really strong, immediately available proof of concept that you should always try to think for yourself. But there are times where you can be reasonably confident that the people around you do know better about enough stuff that trying to figure everything out for yourself is almost suicidal. I’m very unsure where the cutoff is, because if you don’t think for yourself, you leave on the table the possibility of doing better than the people around you. Yeah.
Katja Grace 30:35
Yeah. That’s a really interesting question, and quite hard. I think there is a tendency among people like me to start off as young people being like, oh, I don’t understand why people are doing these things; they’re probably silly. I’ll just not do them, and then I’ll do better than society. Why do we have to get up in the morning? Why don’t we eat ice cream for breakfast? Why do we have Christmas? None of these makes sense; I’m just going to stop. And then kind of later on coming around. I guess the idea of Chesterton’s fence is one that I find helpful for thinking about this. The idea is, if you come across a fence in a field and you don’t know why it’s there, then you shouldn’t take it down, because maybe there’s a bull behind it. If you do know why it’s there, that’s when you’re allowed to take it down. So I think younger me was doing exactly the wrong thing by being like, I don’t know why we have breakfast in the morning, so let’s not do that. Whereas if you’re like, okay, the reason we do this thing is X, Y, Z, and actually that made sense for other people but doesn’t make sense for my goals, then I won’t do it. But I think it gets really complicated, because it’s often hard to know why a thing became a thing that lots of people do, and often other people don’t know the reason either. So I think another kind of failure that happens is: you ask society, why do you do this thing? And society gives you a whole lot of bad reasons. Like, you know, in high school you’re like, why do I have to do this math? And the math teacher is like, well, you need to buy groceries as an adult, and so you’ll need to sum up numbers to buy the groceries. And you’re like, oh, actually, I know that I have a calculator, so that’s not important.
And then you don’t study math. But it turns out that math is actually good for lots of other things. And I think this sort of connects with the thing I was mentioning, where often things spread by copying, and the individual people haven’t thought them through well. The reason you’re learning math at school is coming more from a selection effect or something, where maybe for some better reasons someone decided to start teaching it, and that happened various times, and it stayed around where it went well, where people liked it or it seemed helpful, rather than your specific math teacher knowing what the ultimate good of it was. Yeah, so I think I’m still fairly confused about when it’s reasonable to decide that you know enough about a thing, and enough about what your values actually are, to not do it. But I think it’s also good to separate what you act on and what you sort of tentatively believe, to some extent. Like, I think if you’re in an area where you think other people probably know better than you, it’s still good to have your own guesses about the thing, so you can practice thinking about it and come to understand it better. But then just actually make your bets based on the wiser guesses of someone else that you don’t understand, if that makes sense.
Will Jarvis 34:24
Yeah, definitely. I think it’s a really good approach. Katja, you wrote a post on advertising. It’s kind of a left turn, but can you talk a little bit about that, and some of your broad thoughts on advertising? I thought it was really interesting. It’s all over the place now; the internet is, you know, built on the back of advertising in different ways.
Katja Grace 34:52
Sure, I guess I tentatively think it’s pretty bad. Or, well, I have very mixed feelings, or maybe views that are controversial in different directions. To the extent it’s grabbing your attention in lots of situations where you didn’t want it, I think it’s just bad, a way of stealing something that shouldn’t be stolen. Basically, it’s just an externality, where other people can grab your attention without you being able to stop them. And I think if the property rights were well defined here, and you could pay some small amount to not have your attention grabbed in these cases, it would very often be worth it. And so it’s basically just doing harm. It’s as if you just couldn’t stop people from picking your pockets whenever you went outside or something. The service, though, of figuring out which things you might want and telling you about them is actually a valuable service that someone might want. And I do, in fact, want it sometimes. Actually, I think I enjoy the ads on Facebook a lot; I enjoy them more than the content. They’re just somehow so good at figuring out what I might want. I don’t understand what kind of spying they must be doing to carry this out, but I often buy the things they suggest, and I think often I wouldn’t have known that they were available. And recently I wanted to buy running pants for my boyfriend. The way I went about looking for good ones is, I just googled a few times, what are the best running pants, or something, hoping that Facebook could realize this is what I wanted. And it worked: they started advertising running pants to me. I also clicked on some of their ads for that, and then they just went crazy and showed me maybe seven different fancy running pants. Which didn’t ultimately lead to good running pants, I admit. It’s hard to buy clothes on the internet because you have to try them on.
But another time I had a blog post about round seals, like the kind of mammal, with photographs of them. And then Facebook advertised a stuffed round seal to me. It's like, wow, that's very precise, very specific. Yeah, I think there are other specific ways that ads are bad, or different ways of thinking about it. They're sort of like a voice that is there talking to you, which can change the vibe of the situation. Like, if you just had a sort of annoying person following you around as you wander around town, that would make the experience worse. And for me, at least, ads are often like that. Especially, notably, in the UK for some reason. Oh, interesting.
Will Jarvis 38:22
Got it. No, I think that's quite interesting. You mentioned stealing attention as being very bad, and perhaps something people don't realize. I agree, and I think that's really important, because human attention is how we solve problems in the world, and it's a limited resource: if it's getting burned up looking at ads on the subway, you're not solving other problems, which is a real cost. I wanted to move on now and do a quick round of overrated and underrated. I'll throw a term out and you give us your thoughts on it. GMU econ: overrated or underrated?
Katja Grace 39:02
Oh, I think probably underrated. You know, I can't speak to everything that has ever come out of GMU econ; I've probably seen a very tiny fraction of it, so I'm not necessarily endorsing everything. But I think I've heard a lot of interesting ideas from there, and I've appreciated their contribution to my mental world. Yeah.
Will Jarvis 39:30
I love it. Let's see: colonizing planets other than our own, overrated or underrated?
Katja Grace 39:40
Oh, I think a mixture. I guess probably many of these things are rated differently by different crowds. It seems like in the long run it's going to be important to go to space and colonize other planets, and for the most part people hardly think about that, so it seems very underrated in that sense. To the extent people are excited about colonizing planets soon, my guess is that's sort of overrated, in that it doesn't seem very economically valuable. I mean, it seems perhaps feasible, but not like there's a good reason to do it, except wanting to have done the thing in the near future.
Will Jarvis 40:29
Definitely. Aphorisms: overrated or underrated?
Katja Grace 40:33
Oh, I think underrated. Well, yeah, I like a good aphorism.
Will Jarvis 40:40
They’re quite helpful.
Katja Grace 40:42
They're very short. Yeah, I do appreciate writing being short. That's a passion of mine.
Will Jarvis 40:50
Are you a Hemingway fan?
Katja Grace 40:51
Oh, I actually haven't read him. I'm sometimes criticized for trying to write too much like him, so probably I should have.
Unknown Speaker 41:06
You know, they had the same editor, and we know Hemingway was a really big Dashiell Hammett fan, and we know Dashiell Hammett wrote in that style first. So I'm a big Dashiell Hammett fan. And I know there's academic debate about whether Hemingway stole the style or rather independently reinvented it. I find they were both very creative. I also think it's a good thing: he took an idea that was working and used it to do more good things. Sorry, I just wandered into something I have a passionate interest in.
Unknown Speaker 41:39
I don't think so. No. Well, Katja, that's all the questions I had. And Quinn, I think that's all the questions you had too?
Will Jarvis 41:47
Awesome. Well, Katja, I want to thank you for coming on and chatting with us this afternoon. It's been quite wide-ranging. Do you have any parting thoughts, anything you'd like to leave with anybody? And also, where can people find your work?
Katja Grace 42:07
Well, my work is on the AI Impacts website, which is my work work: aiimpacts.org. My more wide-ranging, non-work output is these days mostly at worldspiritsockpuppet.com. Yeah, and it was a really interesting conversation. Thank you for having me.
Will Jarvis 42:36
Absolutely. Thanks so much, guys. Yeah. Thanks for listening. We'll be back next week with a new episode of narratives.
Transcribed by https://otter.ai