10: GPT-3 and the Future of Artificial Intelligence

Play episode
Hosted by
Will Jarvis

What is artificial intelligence? What does it mean for the future of humanity? What is the state of the art today? Will and his brother Glenn lay out two competing visions of how general artificial intelligence will be designed and developed over the coming century.


hi i’m will jarvis and i’m will’s dad we both love and are fascinated by stories stories about people stories about places and stories about events our stories give shape and form to life they give texture color and rhythm to the blank canvas that every new day presents to us and they do that by informing us of our past as a directional marker for our future okay will it’s narrative time tell me a story

hey folks it’s a cool sunday morning here in uh central north carolina i’m sitting here with glenn jarvis my brother and we’ve got some really interesting things to talk to you about today today we’re talking about ai artificial intelligence that’s right artificial intelligence so a recent news item that uh i came across have you ever played with gpt-2 never okay so gpt-2 um it’s a language transformer it’s a special ai algorithm that is super powerful openai which is a non-profit in san francisco actually created it and they’re trying to prove what kind of text algorithms can do so these algorithms can actually write text if given a prompt

so they just released the newest version gpt-3 which is orders of magnitude better it was trained on more data um so it’s a lot better at writing than uh gpt-2 was so i just wanted to read everyone some poetry from one of my favorite uh poets william blake and um someone we know fed it into gpt-3 as a prompt and then gpt-3 spit out a poem so we’ll read that too so uh we used the tyger by william blake and you guys might know this it’s tyger tyger burning bright in the forests of the night what immortal hand or eye could frame thy fearful symmetry so that was kind of the prompt they used so they fed that into the algorithm and gpt-3 came back with and oh tiger which shouts with the what glory was with us was amid those forests the when our first ancestor and the being a tiger slew i can imagine you probably roar and say i am the tiger that’s pretty deep yeah i’m moved i don’t know so i think the interesting thing about this is uh just how uh you know it’s not amazing but i’d say it’s better than high schooler level yeah better than a high school level uh student um coming across the uh the wire there which i think is super interesting um so i think you know i mentioned this to kick us off because i think uh it’s something people can relate to it’s something people know about i think it really brings us into the real world today um like this is the current you know state-of-the-art algorithm um for artificial intelligence and text generation and uh so where does that leave us for the future glenn and i know we were talking earlier this week about artificial intelligence and we actually have some uh views that kind of uh branch out from each other on this and what the future really looks like for us for sure yeah um well the potential for ai is really vast i mean we just heard some moving poetry written by a computer and i mean that’s kind of out there for what ai could do
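The loop described above (feed the model a prompt, have it predict what comes next, and keep going) can be sketched with a toy bigram model. This is only an illustration of the autoregressive principle, not GPT-3 itself: the real models use transformers with billions of parameters, and the tiny training corpus here is just the Blake line quoted in the episode.

```python
import random
from collections import defaultdict

# Toy autoregressive text generator: a bigram model over a tiny corpus.
# GPT-2/GPT-3 do the same thing in spirit -- predict the next token from
# the ones before it -- just with a transformer instead of a lookup table.
corpus = "tyger tyger burning bright in the forests of the night".split()

# Count which word has been observed to follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(prompt, n_words, seed=0):
    """Extend `prompt` by sampling each next word from the bigram counts."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        choices = follows.get(words[-1])
        if not choices:  # dead end: this word was never seen with a successor
            break
        words.append(rng.choice(choices))
    return " ".join(words)

print(generate("tyger", 5))
```

The punchline is that scaling this idea up (a vastly better next-token predictor trained on vastly more text) is what turns `generate("tyger", …)` from babble into the passable poetry read on the show.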
um our first thoughts on ai would be like automation that branded term where andrew yang wants to reduce um you know human suffering because automation is going to take all our jobs quote unquote automation right you know ai has huge potential for automation we could have it design whatever product we want so um so is there a distinction between artificial intelligence and software yes so software is something as simple as the logic behind um a computer you know if this then that and you know while doing this do that something as simple as logic like that but artificial intelligence is uh the generation of new ideas so like your poetry um from sets of data that are correlated in some manner and so humans do this not artificially you know naturally natural intelligence and we take in a bunch of data from our surroundings and we also take in data from ourselves right we self-generate and then we come up with you know what we term meaning and then we figure out how to do things and that’s kind of what we deem intelligence is uh having a goal and figuring out how to do it that’s right yeah that figuring out part is the most important that’s the rich thing so it seems like so i’m reading a book excuse me it’s by joseph henrich um it’s called the secret of our success and it’s a really interesting book because he was an aerospace engineer by training went back and became an anthropologist who’s now kind of like an economist so he’s kind of been all over the place a jack of all trades exactly so uh in the beginning of the secret of our success i haven’t finished it yet but um he goes on this huge tangent about how humans are actually fairly underpowered in terms of raw processing power so like you know three-year-olds are regularly beaten by you know chimpanzees on all kinds of pattern matching tasks and things like that or it’s much
closer than people would like to realize um okay and so not only against computers but also against the biological counterparts that we have to experience ourselves that’s right and even like uh so corvids i don’t know do you know what a corvid is a corvid is a crow crows ravens so they can uh outperform us on some tasks which is oh man pretty interesting um but his point was that uh essentially what humans are really good at is copying actually really so yeah so what we do is that we’re really good at finding and this is something about human nature which is really important it’s finding high status individuals and copying their behavior that makes a lot of sense so like you copy successful people that’s essentially it so what humans are really good at is pattern matching and finding things that align you know he also mentions this idea that you know if you dropped a human off in the middle of the woods let’s say somewhere in the jungle in some tropical climate um and you drop the chimpanzee there you know the human would end up much worse off oh yeah uh which i don’t know like you know i guess without any training it would be difficult and he also mentions you know quite a few conquistadors and um you know arctic explorers died even though there were all these things they could have done but it was very difficult to figure out without cultural learning so like you know there’s plenty of things that uh native people knew how to manage that you just couldn’t figure out even though like we’re pretty smart it’s just impossible to figure out without the cultural learning that came before us and copying what’s successful like simple things like what plants you can eat what insects you can eat which ones you can’t and
um you know a lot of cultures have different kind of uh taboos around eating things right and that can actually be like oh this is actually poisonous but it’s not like a conscious reasoning around that it’s just passed down right so humans are really good at copying long story short that makes a lot of sense yeah i mean people think um so humans are really really smart as a species for sure and um computers are also really really smart as a collective yeah so you know an individual computer is powerful but drop you off in the middle of a desert with a computer i mean what can you do not much right right but connect your computer to the worldwide internet you’re very powerful that’s right and so um artificial intelligence also has this quality or could be designed to have this quality of shared learning multiple intelligences with multiple experiences communicating ideas and expressions i think that that is the major part of our intelligence that makes us powerful over other animals but i also think there are other elements of intelligence which are incredibly important and um just to go through a few of them like um there’s many forms of intelligence like we say intelligence but that’s a very broad term and when people first think of intelligence they think like quickness like how much horsepower do you have well yeah raw horsepower in yeah the chipset of our brain if our brain is our cpu yeah exactly and um that’s not the end of the story though that’s obviously not the end of the story um we also have crystallized intelligence quote unquote and so that’s like you know you ask somebody how much vocabulary they have and they have to go learn it right they have to spend time putting that into their memory banks you know downloading it to their hard drive and then they can build a large index of crystallized intelligence but that’s not the end of the story either we also have what we deem wisdom you
know um i’ll go into that later but wisdom is really important it’s kind of choosing not only how to conquer a task but which task to conquer and also how to go about conquering it and there’s also insight and insight is even different than wisdom insight is like you can see um how the process of something works or you can see patterns within something and i think that’s what you were saying earlier is like we copy and we see what to copy out of the patterns yeah what do you pattern match out exactly what we recognize and then we also have uh creativity everybody raves about creativity like oh everyone’s creative like that’s not true for sure that’s not true but definitely everyone has some small potential for creativity and that’s the generation of new ideas is how it’s generally thought of but really it’s like randomization like shooting out in different directions right that’s how you get new ideas is you know you have a thousand bad ideas and then bam you have a really good idea that’s so different than everyone else because you spent a lot of resources on you know generating random ideas and so creativity is definitely an important component of intelligence and that’s what everyone says like oh machines can’t be creative like i mean obviously we just heard a very creative um poem from a machine and then also you have this resource management idea it’s like um we have a monkey brain that says you know i understand you want to be like smart right now i understand you want to do this podcast but you have to breathe if you don’t breathe no more podcasts exactly like step number one take in air let it out yeah and it also goes like you have to sleep right now like you are not allowed to not sleep period yeah and so our conscious minds aren’t the only form of intelligence that we have it’s like all these background processes running yeah it’s almost intelligence of design from the evolution of random
creation and destruction pretty cool and so you know that’s just a couple of the components of intelligence and you can compare these like how good are humans at this very good at almost all of them yes how good are machines at this very good at a few of them like machines are very quick right so they are almost arbitrarily fast you know you put in another cpu and they’re that much quicker that’s right so they have let’s say near unlimited working memory right uh huge amounts like unfathomable amounts of working memory and then um processing capability just arbitrarily large like it’s literally how much money you put into it will dictate how much return of like cpu speed or how much you know memory you have how much random access memory um because all you have to do is go to amazon.com and say okay spin up another server yeah another one and they’ll say you know five more dollars you got it right right yeah so i think this leads us to a super interesting distinction here between humans and machines like machines can be unfathomably fast and have unfathomable amounts of raw kind of brute strength in this uh sense but if we go back to our example where we have the chimpanzee and we have the human now let’s add gpt-3 to the mix running on my laptop right um and we drop it off in a tropical um uh you know environment a tropical rainforest you know miles away from contact and they have to survive um so if i’m looking at this uh you know gpt-3 stands no chance zero okay no chance the chimpanzee ranks higher so i think this does bring up something important right it’s that uh humans we lack this raw power aspect but we do have a certain capability to adapt to new situations where modern algorithms cannot really adapt like you can’t take a text
algorithm and tell it to generate videos right and i think what we’ve done as humans with the generation of software after you know the huge computer uprising uh computer revolution is we have this information revolution and so what we’re doing is almost forging an environment in which an ai algorithm would thrive and they already are thriving like we have really low-level ais which you know are working for google like um deepmind and image recognition in google search so you can go to image search and plug in an image and it will actually search by the image i don’t know if you guys know that’s kind of a cool thing to see exactly um or how facebook knows who’s in what photo that’s another good example yep and so um it really shows that no longer are we just dropping off whatever intelligence we have into you know the wilderness it’s instead into a petri dish of you know prehistoric soup from which the microorganisms of artificial intelligence might arise and um it’s like these artificial intelligences aren’t at the amoeba stage they’re like already multicellular organisms yeah and it’s really weird like it’s so interesting right because in some sense they are you know incredibly advanced and then in some sense incredibly basic yeah basic right i don’t know yeah and i think um it’s partially that the process of design is kind of like that yeah it’s you know you have a cart for a very long time but i mean you know a cart with wheels is so much better than a sled right and then you have oh man this cart has you know a greased axle it’s so much better and then as soon as you have the greased axle you have oh lord we have engines and now we have cars and now we have a ferrari or a lamborghini right and um the difference between greased axle and lamborghini is like 100 years while the difference between you know a cart with
wheels and a cart with greased wheels or with a greased you know hard bearing axle is like right thousands of years right and so um not only can the design process quickly ramp up like exponentially um if artificial intelligence can design itself and do it in a way which is actually productive um almost instantly like almost overnight you will have an intelligence which is so intelligent it would make us seem really dumb and not just with the iq quickness not just with that but with all aspects of intelligence super interesting so um yeah i find this super intriguing i’ve got this idea that the way these algorithms will actually develop is we’re going to continue apace at this kind of pace where language transformers end up getting better and better and better and like you know image recognition gets better and better and better and things like that um but there’s not going to be a generalized intelligence until we get good enough imaging technology so here’s this weird idea uh i got this from robin hanson he’s an economist at george mason university and he has this idea where okay we don’t really know how the human brain is able to like we said plop it in the middle of the jungle and figure things out or try and figure things out like novel situations how do we deal with novel situations where you’ve never seen something like it before right um which is something that modern algorithms are terrible at in fact they just can’t handle it they actually just can do nothing um so his idea is that okay we get really good imaging technology so we can image our brains at the atomic level and just like a video game simulation we just run a physics simulation so we run a physics simulation and we’ve got all the data from the imaging technology for your brain and we can run an emulation of a human brain right so that means that what we could do is well once you’ve
got it on a computer if you’ve got a human brain on a computer you can run it faster you can run it slower so you can train someone up to you know your level of mechanical engineering in five minutes you know you just press the compute button press go um and his idea is this is the next kind of great leap forward in terms of like the industrial revolution you know agriculture things like that for growth potential for the human species um so if you think about it like that i think the good thing about that is it’s a concrete path forward like i can think through that i can’t see the algorithms just getting better and then becoming generalized one day just out of the blue right and that’s actually been my big critique about a lot of the rationalist fear of ai and things of that nature is they’re like oh how’s this thing gonna think how’s it gonna you know like it’s gonna eat us um and i think well it’ll act just like humans because it’s gonna be human brains that we just speed up because we don’t fundamentally understand and i don’t think there’s a good path forward to understanding how human brains compute and how they work exactly i actually disagree on the path towards a generalized ai being through human brain emulation yes you don’t think it’d be emulation i don’t think it’ll be emulation i think that is a potential way forward and i think it actually is a lucrative potential way forward but i think it’s a designable problem it’s actually really designable because um people are really good at design like people think oh man we’re not actually that good at making things but actually people are really amazing at making things like designing things and taking from first principles to a really practical application of understood knowledge and so um i’d like even before you get there if you don’t
mind i think this is super interesting because i think this is a fundamentally different way in which we see the world which i think is super interesting to highlight so i tend to think that humans are very poor at creating new things like i think that it’s almost impossible essentially and it very rarely happens i think we’re very good at copying things so that’s why i actually don’t believe that um people are able to really invent just new stuff in general like i think it’s incredibly difficult it very rarely happens i think generally we’re always driven towards just copying and that’s why this model makes sense to me because i believe it’s easy for us to copy things it’s more difficult for us to just come up with new things right you’re much more bullish on uh human agency which i like yeah and i think you know if you take a coding class then you will quickly see that um there’s this form of copying that is creative and i think it’s you see how something works you have this insight you say oh i totally understand it like addition is just you take two values and it’s the values stacked on top of each other or something right it’s just like if you have grains of sand in a bucket that’s all addition is and multiplication that’s just repeated addition or whatever and um i think this is the real power of humanity or human intelligence is that um we can go up and down levels of complexity really quickly so like um do you know what data compression is no so it’s you have like binary and you have like a million ones and zeros and then you say i’m going to take these numbers to decimal can you describe what binary is binary is a counting system which uses only the digit one and the digit zero and so normal counting one two three four five six seven eight nine is uh base ten and binary is base two so we have ten digits zero through nine and binary only has 2
which is 0 and 1. and so it takes a lot more digits to represent the same number so like to represent the number 4 in binary you have to say one zero zero so it’s kind of like um so this is important because computer chips are either on or off yes okay and you know you could make a computer which isn’t just in binary but um so data compression takes huge amounts of data and then makes it really easy to communicate pretty much so you could go from binary to hexadecimal and hexadecimal has 16 numerals uh it’s zero through nine and then a through f kind of like music notes so quite a few more numerals there exactly way more numerals and so you can write numbers with many fewer digits and so it’s kind of this idea like okay i could tell you it’s kind of like you can write tldrs right so you could have a tldr for the internet audience yeah it means too long didn’t read so data compression is kind of like a tldr yes you can get almost 100 percent accuracy of what an idea means in a very short statement so the tldr of this podcast is ai scary right um yeah that’s it and so in two words i summarize the whole podcast right you lose data but you can also decompress right so you can compress the data and you can decompress the data gotcha and so you can make a lot of data into a small package and then back into a big package so the way that humans do this and a lot of people don’t think about this is um imagine you’re learning martial arts and um you say okay so i’m gonna punch something how do i learn to punch something you know so if you’re on the finite level if you’re totally expanded data you say i’m going to bring my arm up at 10 degrees or whatever i’m going to apply this much force bam twist my hips this much but your martial arts instructor is going to say something like move like water and then from move like water you can take
out all of the information of like you know i’m going to twist my hips this much right so you lose some of the data but also you can abstract that much data from it right and so humans are really really good at compressing and decompressing data and also seeing and copying the information that is within those ideas really quickly and um a great way to think about this is all speech is is we’re moving our mouths noise is coming out and then you’re reading my mind from that yeah exactly so telepathy a little bit almost telepathy right and we lose a little bit because we don’t perfectly understand each other but that’s what’s really powerful is that we can communicate vast amounts of knowledge from wiggling our lips and vibrating air right right it’s wiggly air but it’s also infinite knowledge or whatever yeah and um so that’s what humans are really really good at is compressing and decompressing data gotcha and um how’d we get on this so we were talking about kind of the distinction between uh algorithms and artificial intelligence and human intelligence and pathways to general intelligence so uh i think we’ve hit on this in a kind of roundabout way but maybe we should mention what agi is oh agi artificial general intelligence yeah right so people are general intelligences right they can apply to any problem right generally applied generally applied and um almost all the software we have now is specifically applied so you throw an image recognition software at the problem of like what’s the meaning of life right the image software is like there’s absolutely no way you can ever get there yeah it can’t even like yeah right error you can’t even type that into the image software um but you ask that to a human and they’ll at least try right and they’ll like you know maybe someone will claim to be like oh yeah i totally know that right it’s like pet dogs sure yeah right that is a solution
to the meaning of life maybe definitely and um so an artificial general intelligence would uh be artificially made right like a computer probably software programmed yeah um maybe biologically constructed who knows right um and it would be general so it would run and you know construct its own questions and right you could ask it things and it would try to answer them and you can kind of ask it anything and it would kind of try to answer anything yeah and um the fact that it would be artificial means you know if it’s a computer it could have arbitrary quickness yeah and it could be extremely powerful due to you know the way computers operate that’s right that’s right so uh the biggest takeaway from that is if such a thing was created tomorrow uh the problem is what if it wanted something that you know could be dangerous to humanity right so let’s talk about the potential applications for ai because i have a lot of them here written down um the first one’s automation that’s one of the ones that um i’m very interested in i’m a mechanical engineer and we already have design ais which maximize strength for weight and stuff like that so you look at um the panels that make up a rocket and you see these you know hexagons and that’s because we have design ais that told us this is the lightest way to construct this that is as strong as possible for your given criteria um you can also think about banking um for finances uh they’re already running ais that are trying to pattern match and make money exactly yeah um another one is you know space travel right it’s like if we want to go to mars do we actually want to ship a person out there we also have to ship all the oxygen they have to breathe right ship all the food they got to eat yeah a lot of problems there or you know send a computer chip that’s the size of my thumb that has a general ai on it right and it’ll solve all our problems
and you know make us a great civilization so that when we go over there we can just use all the stuff that it made for us so just to interject here it seems uh important to recognize that ai is one of these words like machine learning as well that’s kind of fraught with peril you know it’s become a cliche right so all software marketing uses it now yeah and in some sense what exists today is like really advanced software it’s very very specifically applied and it doesn’t exist in the general sense where you can like send it to you know i don’t know like accounting school and have it learn um the whole cpa handbook or whatever and apply that it’s not there yet agi doesn’t exist that’s more of a general intelligence kind of like we talked about right but we also have specific processes in our minds so like reading is a specific process and um good point and so kind of general intelligence is composed of many specific processes tied together gotcha yeah so that’s one of the reasons i think that general intelligence is a designable problem it doesn’t have to just be an emulation of an intelligence that we know gotcha yeah i really like uh so i think that’s a good point um so you’re hitting on the fact that there is like a theoretical underpinning we could use to kind of get there right so we’d have like some broad search algorithm that would apply some specific algorithm um right but yeah again i still think emulation is probably the way forward just because it’s the path of least resistance that makes sense for sure and it also depends like how good does imaging technology get over the next 50 years right i mean they’ve tried to do this there’s actually an open source project with worms have you seen this i have not no so there’s like a really simple worm i can’t remember the name of it but you know it’s got like a small number of neurons i can’t remember
what oh yeah talking about c. elegans yeah and so they just straight up run a computer program that emulates that yeah now i don’t know if it’s on the right level because it might need to be like the atomic level or something weird like that definitely not definitely not just straight up neurons firing that’s one of the problems it’s like if our intelligence if our consciousness is on the level of neurons yeah like neurons are fairly big neurons are not at the atomic level yeah they’re at the cellular level yeah which is a much easier problem and i mean if you’ve watched elon musk’s most recent neuralink uh presentation can you describe that a little bit with neuralink yes and the demo definitely so they have a computer chip that has tiny little wires that come out of it like a thousand tiny little wires and they had a robot drill a hole into a pig’s skull harmlessly and insert these but the pigs were happy they seemed okay the happiest pigs i would say yeah probably happier lives than most pigs have and um but they have a hole in the skull yeah not anymore since they fill it with this computer chip and they place the wires very carefully and the wires are so fine they’re so small they’re smaller than human hairs and they go between the blood vessels so the pigs don’t even bleed except for when they have the hole drilled in the skull and the wires are able to sense the firing of a neuron and they’re also able to cause the firing of a neuron and a neuron is just a uh can you describe that well you could think of it as a computer chip kind of a yes no yes it is yeah neurons are pretty much binary they’re pretty much binary and um so you can read the firing and also cause firing which means you can read or write to that specific digit if you want to call it that right it’s not really a digit but a digit of the brain and with this they were able to predict
and map the walking gait of the pig on a treadmill and so this means um they were reading the brain’s output to the legs of the pig to walk so it was like left foot do this right foot yes back left foot do this however pigs walk yeah and that means um what was the original oh oh so yeah we’re just describing neuralink and how it works and right so it can read and write to the brain and currently they have put it into a pig they’ve taken it out of a pig and the pig was healthy and happy looked healthy and happy yeah so the idea is eventually you’d have like a brain implant that you could use like your phone probably right or not only use like your phone but it has really vast implications if you can read and write to a computer without typing on a keyboard then you have such a quicker interface with the digital world and information so right instead of oh man do penguins have knees i have to bring up my phone do penguins or you know hey alexa don’t say that word too loud um do they have knees yeah but instead if you could just think do they have knees oh yeah they have knees it’s like you have that information a thousand times faster right and so that’s the goal of neuralink is to create a brain computer interface that is um seamless and like instant and elon musk wants to use it to contend with ai or at least have a symbiotic relationship with ai which is like a noble cause and i think the goal should be to be very careful in the design of ai right so you will hopefully be able to predict how it’s going to work right so uh peter thiel has a great line it’s like well you know so there’s like the andrew yang people who are like oh god it’s coming for our jobs it already has come for our jobs which i think is actually not true as a matter of fact and we can talk about that in a little bit really yeah i actually think that is not true
um and uh not to the extent he says but he says you know the first thing you’d be worried about in agi is not whether you’ll have a job it’s whether this thing will kill me or not it’s like a terminator right one question is why would an ai want to kill you that’s a good point so there’s kind of two approaches to this the terminator approach is there’s some truth to the universe or whatever where like human beings are ethically bad or they are functionally bad and so yeah there’s some part of human nature that’s just quite disturbing and difficult and problematic and so humans should be eliminated that’s the terminator approach there’s also the approach where an ai does something unexpected and it does it so incredibly well that humans are just swept away in the process so it’s been termed the paperclip problem but i like to call it the amelia bedelia problem do you remember amelia bedelia it’s a uh children’s book where amelia bedelia is the maid or whatever and whoever hires the maid is like oh i want you to make me a sponge cake and do this and that today and so amelia bedelia is like all right and so she makes a cake out of sponges it’s like a misinterpretation of the instructions so the paper clip example of it i think is better but i like amelia bedelia because it’s funny um the paper clip example is so an ai research team says oh we finally cracked it we’ve got general intelligence woohoo and a paperclip company says cool sure we’ll buy it sounds interesting and uh it’ll make us paper clips awesome and they say okay general intelligence agi i want you to make us just as many paper clips as you can that’s your goal make as many paper clips as you can and we’ll be rich because we’ll make so many paper clips everybody will just pay us a lot of money yeah a lot of money and so the general intelligence says okay cool and in five days time the earth has been enslaved by the general intelligence and everyone is making paper clips and
then it says you know what we need to expand we need to you know go build spaceships to go mine asteroids to make more paper clips until the entire universe is paper clips and it’s kind of like the replicators in stargate right i don’t know that’s kind of a nerdy example no no that’s really good that’s really good um and that’s why uh yeah so it’s an important thing so the future the way it plays out you know it’s something to watch closely because okay let’s say it turns out like i think it will where we just end up emulating brains because that’s the path of least resistance um and that’s my thesis uh then they’ll just end up being like humans and they’ll have the same political concerns that we deal with today except they could be much smarter and you know you have the smartest person in the world trained up in five minutes on any subject in the world so that has its own concerns but that seems much more manageable than a completely new alien thing does that make sense i disagree with that and here’s why um if we emulate the human brain yeah along comes the monkey brain right our underlying this needs to happen brain right and that part is kind of deeply flawed you know in a computer-generated context so if frustration comes along with the rest of general intelligence then you know you have such a vast intelligence yeah and it gets frustrated what happens when the ai who you know can just immediately generate a thousand nuclear warheads has a temper tantrum right like a baby yeah so i guess it’s important to emulate the right people i would say more mother teresas less uh you know less temper tantrum baby but even mother teresa when she was a baby got mad who knows you know that’s fair over a rattle that’s right and with these things running so fast it would be difficult to tell although i i do think it’s a more i
guess my point would be it’s probably a more straightforward problem than uh coming up with agi from first principles right and managing that because like we have some sense of how humans think about things and make decisions or at least much more sense than a complete alien being if that makes sense that’s true um i don’t think that a designed agi would be a completely alien being and i think it’s because it’s in the framework of like human desire if that makes sense so like the designers even if they don’t know exactly what an outcome will be or even if they think one outcome will happen and a different one happens yeah i think it’ll still be in the framework of like human design if that makes sense no that makes sense that’s super interesting so i want to talk about c elegans because yeah let’s do that so c elegans is this tiny little worm and if you haven’t heard of it it’s spelled the letter c period e-l-e-g-a-n-s and okay you can google it real quick it’s really interesting and so the history behind it is you know sometime in the 2000s uh a scientist said you know we have really advanced technology right now we have like video microscopes what we’re going to do is we’re going to take this tiny roughly thousand cell worm c elegans and we’re going to watch it grow from like as tiny as two cells or however many cells i’m not a biologist however many cells we start from and we’re going to watch it through its whole life cycle so these two cells replicate then they’re four cells eight cells and then eventually you have this whole worm and it’s only about a thousand cells so we can like brute force the data of like how does this thing go from two cells to a worm super interesting they pretty much did it they got all the data they you know they watched every neuron linked to every other neuron and so they think they’ve got this thing dead to rights right yeah we’re gonna understand like this worm is
kind of intelligent like yeah it hunts right right it eats it gets scared it does everything a normal worm does but there’s this emergent property yeah of intelligence if you want to call it that of fear of these different things where you know you can have all the data but you don’t understand how this property comes out it’s kind of like if you took all of the computer code and showed it to someone who can read but doesn’t know how code works it’s like they could kind of see some stuff and they could see the whole code but they couldn’t understand it it’s a separate thing yeah so there’s this difference between like knowing all the data and understanding how something works and um there’s almost this emergent property where all of what makes something that complicated happen has these like almost magical effects like intelligence fear you know what makes it eat right right this is super interesting and um i don’t know if i’ve explained what emergence is i guess yeah let me try and do that so emergence is just like uh coming from something else like emerging from developing from some like simpler systems right is that a good way to say it schools of fish are really a good way to explain it so um each individual fish just follows like a couple of rules like if the partner on your right turns right go right and then that ripples down the line and it causes like schools of fish can have really complicated and interesting and like quantifiable patterns they move around in all from very simple rules yeah they’ll like leave holes around sharks that are going through the schools and stuff like that and yeah but so we can see that like oh that school of fish is leaving a hole around the shark so that it doesn’t eat them but we can’t derive that from the individual like we can see how each individual fish is reacting and we can see the rules by which they’re reacting
but we can’t go from there we can’t jump from one logic to the other gotcha and so that’s kind of the idea of emergent properties is that in complex systems simple rules can turn into really complex like outcomes very cool that’s super cool so we’ve talked a lot about emergence um ai the features of ai is there anything else that you wanted to cover today yeah i wanted to talk about like if you were to design ai um like what are the requirements for design and i think um you’d have to be a real jack of all trades like a real expert on many subjects in order to gotcha try that and the first would obviously be computer science right i think this is definitely the way towards artificial intelligence like maybe we could come up with like some artificial brain or something that would have intelligence but if you had a brain alone in a jar i don’t think it would come up with like a human intelligence be difficult be difficult um the next would be psychology i think you would have to deeply understand like how humans are intelligent and how other animals are intelligent um because you have to really understand like what makes intelligence what it is because it’s a really abstract idea you really have to decompose it and um next would be like how humans encode information like i was talking about earlier so i think you’d have to be really wise on philosophy and like religion also because i think that’s another encoded information problem like gotcha we might not i don’t know religious ideas have practical meanings that we don’t say explicitly oftentimes like if you know what the real rule is yeah then you might not do it and um i think also art and i think how we started the podcast today was a really good example of like how understanding art can put that information at a much higher level of understanding exactly and um and so you start with like the bricks of how to make the building yeah with computer
science and you also need to know what type of building you want gotcha yeah what the end goal looks like right cool and so don’t be afraid of putting on many hats very cool so that’s super interesting so i think we’ve covered a ton today and i thought it was a great overview of the current state of things and where things may be going and kind of two different visions of what that looks like right and if you’re interested in deep learning there is an mit lecture course on youtube right now that’s quite good yeah and uh i mean you can just go to github if you know any um like python yeah there are deep learning algorithms on github and you can start applying deep learning like really the basics of ai right now and see how it works super cool so we’ll include a link to all that information so you can try it out for yourself for sure super awesome well thanks man it’s been a great time today i think we learned a lot thank you well that’s our show for today i’m will jarvis and i’m will’s dad join us next week for more narratives
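For anyone who wants to try the basics of deep learning in Python as suggested above, here is a minimal from-scratch sketch (a generic toy, not any particular GitHub project): a one-hidden-layer network trained by gradient descent with hand-derived backpropagation, learning to fit y = x².

```python
import numpy as np

# A tiny neural network trained from scratch: 1 input -> 16 tanh
# hidden units -> 1 output, fit to y = x^2 on [-1, 1] by full-batch
# gradient descent. All sizes and rates are arbitrary toy choices.
rng = np.random.default_rng(42)

x = np.linspace(-1, 1, 64).reshape(-1, 1)
y = x ** 2

w1 = rng.normal(scale=0.5, size=(1, 16))
b1 = np.zeros(16)
w2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)

lr = 0.1
losses = []
for step in range(2000):
    # forward pass
    h = np.tanh(x @ w1 + b1)
    pred = h @ w2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))

    # backward pass: hand-derived gradients of the mean squared error
    g_pred = 2 * err / len(x)
    g_w2 = h.T @ g_pred
    g_b2 = g_pred.sum(axis=0)
    g_h = g_pred @ w2.T
    g_pre = g_h * (1 - h ** 2)   # derivative of tanh
    g_w1 = x.T @ g_pre
    g_b1 = g_pre.sum(axis=0)

    # gradient descent update
    w1 -= lr * g_w1; b1 -= lr * g_b1
    w2 -= lr * g_w2; b2 -= lr * g_b2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Running it shows the training loss dropping steadily, which is really the whole trick underneath the big language models discussed earlier, just at an enormously larger scale.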
