Byron Reese, Speaker, Author, Entrepreneur
THE AGORA PODCAST

Stephen Wolfram on AI, Ethics & Philosophy

Decoding the Universe with Stephen Wolfram: The Intersection of AI, Ethics, and Philosophy

Unraveling the mysteries of our world: from the intricacies of Wolfram|Alpha to the ethics of AI governance.

In this episode, Byron speaks to Stephen Wolfram, a renowned polymath, computer scientist, and physicist who has dedicated his life to breaking barriers and redefining our understanding of the universe. From creating the innovative Mathematica, the Wolfram Language and Wolfram|Alpha, to his groundbreaking book, “A New Kind of Science”, Wolfram dives deep into his endeavors to create a new scientific paradigm, and discusses artificial intelligence, addressing its ethical implications and the profound challenges that lie ahead in governing its future. Drawing on Wolfram’s expertise, this episode considers the intersection of technology, policy, and morality and how the choices we make today will shape our future.

Biography:

Stephen Wolfram is the creator of Mathematica, Wolfram|Alpha and the Wolfram Language; the author of A New Kind of Science; the originator of the Wolfram Physics Project; and the founder and CEO of Wolfram Research. Over the course of more than four decades, he has been a pioneer in the development and application of computational thinking—and has been responsible for many discoveries, inventions and innovations in science, technology and business.

Links:
www.stephenwolfram.com
www.wolframalpha.com

Key Takeaways:

  • AI’s Ethical and Philosophical Challenges: The challenges and intricacies of AI governance, touching on ethics, policy, and technology.
  • A New Kind of Science: A look into the depths of “A New Kind of Science” and how it presents a new scientific paradigm.
  • Limitations of Predefined AI Rules: The limitations of programming rules for AI, highlighting the challenges of computational irreducibility.
  • Analogies for Understanding AI: The malevolent genie, the zombie, the calculator, and the stick of dynamite as thought-provoking ways to think about AI’s place in the world.

 

Byron Reese: I’m Byron Reese, and this is the Agora podcast. Today my guest is Stephen Wolfram. Stephen Wolfram is a polymath; I don’t even know how to introduce him. Just off the top of my head: I think he was in the first batch of people to get a MacArthur grant, and I think he was the youngest person to get a PhD from Caltech. He created a new kind of science, and he wrote about it in a book called “A New Kind of Science.” He created Mathematica, which at this point is an entire programming ecosystem more than 30 years old. He has something called Wolfram|Alpha, which is an answer engine. He’s got the Wolfram Physics Project, which is trying to advance our fundamental understanding of the universe. He takes a great interest in education. He’s got so many, many things going. I’ve had the privilege of knowing him for 10 or 15 years, and every time I talk to him I feel like my IQ falls two points; I just get my mind blown in so many different ways. He’s an amazing person and an amazing intellect, and I’m so grateful that he’s on the show. Stephen Wolfram. I’ve been reading and listening lately to your thoughts on AI governance, and you made some really interesting observations. You said you were disappointed with the state of the conversation around it, that it sits at a nexus of three issues, which are ethics, policy, and technology, and that there weren’t people who had really mastered all of those. You felt like you knew a little bit about each of them, but that you weren’t really the guy, that it wasn’t the thing you were going to do. Did I represent all of that correctly? Is that how you feel?

Stephen Wolfram: More or less. Yeah, it’s kind of an evolving thing, week by week, month by month. I’m kind of a rank amateur, and it’s not clear that there’s an area for professionals. You know, the thing I realized recently is that in philosophy you quickly get into issues that are typical political-philosophy issues. I was realizing this is a time when all of that matters, and I was asking myself, when did it last matter? In the 1600s and 1700s, when people were inventing a kind of modern democracy, people really cared about what a bunch of these political philosophers had to say. I think we’re back at a time when one actually has to think, in a sort of deep, philosophical way. And I don’t have the answers, but I think the idea that there’s a quick solution, that we’re just going to put a few little technology hacks in here and there and it’s all going to go fine, is wrong. There’s deeper thinking that has to go on. I’ll give you an example of something I was realizing recently. People keep talking about how we’ve got an AI and we’re going to put some sort of constitution around what the AI might do or whatever else, but we’re thinking about that as a single AI. If we look at humans and say, let’s imagine there was just one human in the world, how would we get that one human to do the right thing? It’s very hard to see how that would work. When we have human society, there are many more kinds of checks and balances, because people want to be connected into the society and so on; that has an effect. Like, for example, if you imagine the AIs and you ask: does an AI have an internal belief that it should survive? We humans have a built-in belief that comes from a few billion years of having struggled for life in biological evolution; we have a built-in instinct that we should survive, at least most people do. And that means that things that say, well, if you do this or that bad thing, you won’t survive, or you won’t survive happily, or whatever else, become a real reason for us to do the right thing, so to speak. But for an AI, as it is right now, that’s missing, unless we insert into the AIs an instinct for survival, which seems very Terminator-like and very scary, and is probably the wrong thing: just saying, okay AIs, you’re going to be fed the thing we got from a couple of billion years of biological evolution, we want you to think you should survive, and therefore you shouldn’t do the wrong thing, because otherwise we’re going to switch you off. That’s one approach. Another approach, which I was thinking about recently, is: what if there’s a whole society of AIs, and a large part of the value of an AI is that it has connections to all these other AIs? If an AI does things which the collection of AIs, the society of AIs, thinks are a bad idea, then that AI gets sort of ostracized from that society, kind of naturally, because those other AIs are trying to do things where ultimately, at the edge of the AI network, somebody is going to trust that they did the right thing, and so on.
It’s one of these things where, if you have the society of AIs, you’ve got a different set of possibilities for how you think about determining what the AIs do than you do if you have the one AI controlled by some set of programming rules, which, by the way, I think is a doomed concept. I mean, the idea that we’ll write down rules that say exactly what the AI should do runs into the whole computational irreducibility story, and common sense basically tells you there’s always unexpected stuff that’s going to happen. You’re never going to be able to get a set of rules that allow the AI to do the things you want it to do, because they’re going to be valuable and helpful, but that are always guaranteed to never let the AI do the wrong thing, so to speak.

Byron Reese: I don’t have a silver bullet kind of answer, but I do have a way to frame it that I think is useful. You’re talking about, well, we’re going to have to decide: should it run over the llamas or the two dogs? Yeah, we’re going to have to figure all that out. But the thing is, we have 400 years of English common law and case law already addressing all of those kinds of things. And that’s built on Roman law, which goes back to Hammurabi’s Code and Justinian’s Code. So we actually have thousands of years of thinking about this, with incredible nuance and with tons of case law covering every kind of thing. And I think the idea that somehow we have to put all that aside, that this is new, is incorrect. So the way I think about it is, when you ask people, “Oh, think about what it would be for an AI,” that’s very alien to them, because they don’t know what that is. I think it’s all going to boil down to analogies that we use to tap into all of that common law history. Let me just say one real quickly, which is: I think sometimes AI is a zombie, sometimes it’s a malevolent genie, sometimes it’s a calculator, and sometimes it’s a stick of dynamite. Let me take one of those: the malevolent genie. You know the genie thing: you get three wishes, and every wish you make somehow gets twisted, so it’s obeying you, but in a way you did not intend. That’s your computational irreducibility idea, that you can’t foresee what the genie is going to do. And you say, “Okay, well, we have case law about that.” Were you negligent in how you programmed it? Could you have foreseen that? And so you tap into a lot of history. Likewise, if we think of the AI as a zombie, as something you tell what to do, and it goes and does something bad, well, we have rules about that. We know how to assign responsibility, we know who has to pay damages, and all of that. And finally, a stick of dynamite. A stick of dynamite has legitimate uses, and people can take that stick of dynamite and do illegitimate things with it. Is that the dynamite manufacturer’s problem, or the person who did it? So I think it’s going to be about finding the analogies that allow us to tap into those thousands of years of thought we have and apply them to this thing. If you just say, “Oh, it’s an AI,” it’s like, well, I don’t want to think about that. But I know what I would do about a malevolent genie, or a stick of dynamite, or all of that. So that’s how I would approach it. At a gut level, does that resonate with you or not?

Stephen Wolfram: Well, what is ethics? It’s the kinds of things we humans want to have happen in the world. What you just described, the huge volume of previous legal work, is probably the best representation we have of what we want to have happen in the world. It’s not the complete story of what we want to have happen. We can see that, for example, with things you put out on social media: I tend to think you should let anything go out that is within the law as the law has been written, but some people don’t think that. Some people think one has to do things that are aspirational, which the law is mostly not about; the law is mostly about what you can’t do, not what you should do, so to speak. But I agree that that’s the encapsulation of what we humans have discovered we’re happy with, or that works, for human-to-human interaction.

Byron Reese: By the way, it’s culturally sensitive too, right? A Chinese AI would therefore behave according to China’s case history, and so would one in Iceland. So you actually solve all of those problems locally; instead of having to solve it for the whole world, you’re saying…

Stephen Wolfram: Yeah, you’re never going to solve it for the whole world; that’s a hopeless idea. I mean, the idea that there is a perfect ethics, a set of principles that applies to everybody and everything in the world, it’s just not there. And fortunately so, because let’s say we did it that way, with one global set of rules, and then we got one of them wrong. Then we’re screwed, basically, because there’s no “oh, this country didn’t make it and some other one did”; the whole species just went off track, so to speak. So I completely agree that what makes sense in country X is going to be different from what makes sense in country Y, and potentially what makes sense in online community X is different from what makes sense in online community Y, and so on. But I think the real question is: how do you take laws and so on, which have been set up for human-to-human interaction, and translate them to AIs? Because you can say things like, well, every AI has to have an owner, and the laws and the principles of operation of the AI must redound to the owner somehow. Well, a place where that’s not working is people saying, “I’m creating this piece of art with AI, and I want to be the one who gets credit for that art.” People say, no, no, that can’t really be right. I don’t know what I think about that, but it’s an example of where one is teasing apart what the AI sort of did for itself. Now, another thing is, in the world of computational irreducibility, the AI does something nobody could have expected; you couldn’t know it was going to do that. But the question is, you, the owner, have certain responsibilities. If you say, “I’m keeping a dog, and the dog will occasionally do crazy things,” there’s a well-established set of principles about what you can reasonably be expected to do if you’re keeping a dog, and so on. I don’t think we have that yet; if we could develop those kinds of ideas around AIs, I think that would be a fine thing. I don’t quite see how to do it. What you’re basically suggesting is that the reason laws work, I think, is because in the end people don’t want to be punished for breaking the law. But if you don’t have a person in the loop, if you don’t have a thing that has skin in the game that it cares about, I don’t know how you make that work. And that seems to me to be the problem. One thing you can say is that every AI is attached to an owner; that’s a possibility. But if you believe that an AI is just an appendage of its owner, I’m not sure that works. I think people don’t really quite believe that works. Like, who is the owner of the self-driving car? Is it the person who holds the title to the car? Is it the software company or the car company that made the thing? If it’s the person who happens to have the title to the car, they’re going to say, “But I don’t know why it did that. I have no control over this. How can you hold me liable for something that is completely out of my control?” That doesn’t make any sense.

Byron Reese: You know, when the ChatGPT thing happened and the LLMs all happened, I could tell you were super excited. I was super excited; I think the world as a whole kind of was. But I’ve also seen this enormous amount of fear that has come with it. And when I think about that open letter that all those people signed, it has a line in it that says “powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable.” When I saw that, I thought, there’s never been a technology that could pass that test. The printing press couldn’t pass it.

Stephen Wolfram: No, that line is a great example of what you raised at the beginning, about who’s actually thinking about this stuff. That line is not a line that people who think about this in any serious, deep way could seriously write.

Byron Reese: The internet wouldn’t even pass it today; I’m not even sure that, on balance, it is positive. So where do you think this enormous amount of fear comes from?

Stephen Wolfram: Well, I think it was a shock. ChatGPT was a shock to everybody, including the people who built it; nobody knew it was going to work this well. I think my best technological analogy is probably the telephone, where people had known since the early 1800s that, in principle, you can transmit sound through electrical wires, but it had never got to the point where you could actually understand what somebody was saying at the other end. And then Alexander Graham Bell sort of hacked together what was needed, and suddenly it was good enough that you could understand it. Similarly here: we had language models that could babble away and say things that were sort of syntactically correct, but they didn’t really make much sense, and they weren’t that interesting. And then suddenly, and nobody really understood why, it reached this threshold where it started to write coherent text and be able to do useful textual things. To me that’s an exciting scientific moment, because it probably reflects something about language that we didn’t really understand; it probably reflects the fact that there’s more of a kind of semantic grammar, more structure to the construction of language, than we imagined. It’s something where I feel like it’s almost embarrassing. Aristotle almost started talking about this: he got logic, and he almost started generalizing logic to other kinds of formalized thinking, but he didn’t really make more progress on that, and it’s been kind of dropped for a couple of thousand years. And now this AI comes along, and it’s kind of waving this big flag saying, yes, human language actually does have more structure than you thought. To me, that was an exciting scientific thing. But for people in general, it was a moment when something happened; it wasn’t gradual, it was a big surprise. There had been this belief that the knowledge-worker class was immune to any kind of automation: who’s going to automate writing the brief, the essay, the report? And then suddenly it was like, oh, this is going to get automated. Now, probably, if people had followed up post-Aristotle, everybody would have known that was going to get automated, but we didn’t. So it was a big surprise. And when you see that, you’re like, oh my gosh, what else is going to get automated?

Byron Reese: The thing is, I’ve had a podcast, and for 10 years I’ve had people come on saying, oh, we’re going to lose all these jobs, it’s going to happen any minute. And as far as I can tell, I can’t think of a single job that’s been lost, a single one that’s been eliminated, in the last five or ten years. Not one. I spent a long time trying to figure out the half-life of a job, and I think it’s 50 years: I think we lose half of our jobs every 50 years. And I think that’s great, because we create new jobs at the top, destroy ones at the bottom, and everybody shifts up, and that’s how we have maintained full employment and rising wages. And I actually get the sense that it isn’t happening any faster now than it has been. Everybody’s always like, oh my gosh, this time is different. Do you have any sense of that? Can you think of any jobs that have been eliminated in the last five or ten years?

Stephen Wolfram: I did a bit of a study a few months ago looking at the last 150 years of jobs in the US; you perhaps have done similar studies. There’s data going back to about 1850, and you have to reclassify the jobs. A chainman, for example: that job category doesn’t exist anymore, that job has been lost; that was a person who helped surveyors with chains, and so on. But what you find over and over again is that something that was a big job category, like agriculture, disappears, and the very automation of that category creates the possibility of many new categories: people running food distribution companies, people doing this, that, and the other. What I think one sees, around the world, is that the more developed an economy gets, the more fragmented the kinds of jobs that exist. And with this automation I have no doubt the same thing will happen, no doubt that there’ll be a bunch of new categories of jobs, whether they’re prompt engineers and AI psychologists, or people doing, I don’t know, auditing of the AIness of things, or AI philosophers or whatever else; there are going to be a bunch of new job categories created. The way to think about it is this: you have a job that gets very well defined, very mechanical. Once it’s very mechanical, it can be automated, and eventually it is. And then jobs open up that involve choice, where somebody has to decide what exactly to do. You’re a prompt engineer: what kind of thing do you want to achieve? You’re an AI ethicist: what do you want to achieve? There’s a bunch of choices that have to be made, which are necessarily human, and so that ends up being a thing you have to pour humans into. Eventually, over time, perhaps that job gets well enough defined that it can be automated, and then you go through the cycle again. But yeah, I agree with you: from the job categories of 1850 and so on, there definitely are categories that just don’t exist anymore; I don’t even know what some of them are. As a broader thing, what happened, I think repeatedly, is that something got automated, and in its wake a bunch of new opportunities were opened up, which created new jobs. And in the end, the place where humans are needed is where human choice is needed.

Byron Reese: You said in a recent interview that these advances come as kind of step functions: we got image recognition down, and then it hasn’t really gone much further; now we have the LLMs, and yeah, they’re going to get better, but it’s not necessarily going to just keep stepping up like that. Do you think the gating factor is the minuscule size of the internet? Like, if the internet had 100 times more history of, you know, blog posts, and you used 100 times as much data, would it be qualitatively better? Or is it like, no, we have enough data?

Stephen Wolfram: Well, there is more; there’s archival internet stuff that hasn’t been used in the training of LLMs, and there’s the deep web and so on, which is probably 100 times as much, and the archival stuff is probably at least 10 times what’s been used in the training so far. So there’s a couple of orders of magnitude more, but I would be surprised if it were qualitatively different. I think what’s happened here is that the LLMs have sort of discovered this kind of semantic grammar. I don’t think it’s that complicated; I think their way of implementing the semantic grammar is probably rather inefficient, but at least they have managed to discover it. I mean, imagine you were discovering logic. Aristotle could have gone through, I don’t know how many speeches he listened to, to come up with logic, so to speak; maybe it was a thousand. But once you’ve got it, you can throw all that away, and it’s just: here it is, a little formalism that tells you what arguments make sense, and so on. I think it’s the same way here: there’s a certain set of things that are the canon of human language and common sense, and that really isn’t that large. And then there are a lot of facts in the world. I guess I probably have a better sense than most people of what those are, because we built the biggest system that deals with those things, and there are a lot of facts in the world, but it’s manageable with our computer systems. I don’t think saying “let’s get even more text, let’s get even more […]” is going to change much; it’s going to be more of the same, I think, not any kind of wow. And by the way, we can see that to some extent in places where we can generate synthetic results, like math theorems or something like that: there’s a certain kernel of how humans pick their math theorems or whatever, and going beyond that and saying “let’s throw in a gazillion more” doesn’t seem to add much. Actually, the other issue with that is, people say: what will the AIs do? Let’s say we let the AIs have their head and do whatever they want to do. What will they do? Well, they’ll do all kinds of things, potentially. The question is, to what extent are those things aligned with what we humans even recognize and care about? Or are the AIs going off and generating all this amazing computational art, which to us just looks like a bunch of random pixels? It’s the ultra-modern art, so to speak, that we don’t yet understand; the AIs just went off and generated it, and it doesn’t relate to anything we care about. When people say, what will the AIs choose to do? Well, there’s sort of an infinite space of things. I did a study recently, looking at generative AI imagery and asking the question: of the space of images that can be generated by a generative AI that’s been trained on human images, what fraction of those images are associated with concepts that we actually have so far? Like, I had the example of a cat at a party; there’s a kind of island in the space of possible images that we recognize as something like that. My estimate is that only about one part in 10 to the 600 of the space of possible images consistent with the kinds of things we humans have put on the web corresponds to concepts we’ve already got names for.
Everything else is kind of this interconcept space of things that we humans have never explored and don’t have words for. In the future our civilization might go in a direction where, yes, that counts as art now, which it didn’t before, so to speak. But there’s an awful lot of room there, and an awful lot of room for the AIs to go off and do things that are consistent within themselves but have no attachment, resonance, or connection to us humans.

Byron Reese: Alright, two final questions. You know, there’s long been this debate about the nature of language: whether it’s innate, which is kind of the Chomsky view, or whether it’s, like, entirely learned. I’m a Chomsky guy, I think; I lean that way. And I think that ChatGPT, broadly speaking, would support it. I only say that based on what you just said, which is that evidently there’s a structure to it that we were maybe not quite privy to…

Stephen Wolfram: Yeah, I think that if you have a thing that is basically a neural net, with a certain way of operating, it’s going to have certain kinds of things it can deal with, and that includes language. I can’t tell you that language, as we have set it up, is the only conceivable way of achieving that kind of communication. But given a neural net, I don’t know how many different ways there are it could be set up; that’s another constraint on how it’s set up. I mean, the thing about language, for example composability in language, the fact that there are words that can be taken out, disembodied, and stuck in somewhere else and still be useful, is far from obvious, but it is incredibly fundamental to the way our kind of thinking processes seem to work. If you don’t allow that, if you say every word is its own thing and the same word will never mean the same thing twice, then you’ve got something very different from our current way of communicating. At a very weird, kind of conceptual level, I’ve realized that the way particles like electrons work in spacetime, in the physical universe, is to transmit information from one place to another in the physical universe, and I see words as these kind of disembodied things that are a way of transmitting information across real space. What’s notable about an electron is that you have an electron in one place, you move it to another place, and it’s still kind of the same electron; it’s not obvious it would work that way. And similarly, when you try to move a concept from your mind to mine, we do it using these packaged things, these packaged concepts. When you try to move a thought from your mind to mine, you package that thought up in these discrete concept things, which are things like words, that are robust enough to travel without change through real space and then be deployed in a different mind. That’s kind of my picture of what’s going on. And that is implementable in a neural net; I can’t tell you what other kinds of things like that might be implementable in neural nets, but there are surely constraints along those lines.

Byron Reese: I recently wrote an article, which I haven’t published yet, called “The 4 Billion Year History of ChatGPT.” What it posits is that for three and a half billion years the only place we stored information was in DNA; that was the only place to store information. Then we invented brains, and for 500 million years we had a second storage medium, which was faster and could hold more. Then we got speech, which was a way to transmit information, and then writing, which was an even better way to store it; then we got books, then libraries, then cheap books. Each of these was a step up; that’s how you get civilization and all of that. But the inherent problem with a library is that at some point, with a million more books, you can’t access them all. And it seems to me that’s the reason these LLMs are significant. You remember when the internet first came out, we had directories, not search engines; directories created these libraries, where every website had an entry. Then we had search engines, where the unit was a page, and they could find pages. It seems to me that these large language models are a form of synthesis, bringing all that information back together. If I type “what’s the difference between a cold and the flu?” into Google, it gives me a million results, and I don’t want a million results, I want one result. It seems to me that’s the kind of philosophical significance of these LLMs: they try to consolidate all the information so you get just one answer and not a million.

Stephen Wolfram: Well, I think there’s something to that. I tend to think, perhaps because I’ve spent my life working on it, that the computational language idea, formalizing the world in a computational formalism, is a pretty significant thing. In your long history of how one stores knowledge, the invention of language by our species was pretty significant, because it allowed one to formalize things about the world: rather than just pointing at that thing, you could say “a rock,” and you had an abstract representation of a rock. Then we got things like logic, and we got mathematics, which is another type of formalization of the world. I claim that computational language, while not yet properly appreciated, is a very important formalization of the world. LLMs are a slightly different thing. You can use logic and mathematics and conclude things about the world that are not quite the same as what a person would conclude off the top of their head. What LLMs are doing is doing what humans do quickly, so to speak; they’re not doing what we’ve now learned formalism can do in a giant tower, so to speak. But in terms of whether LLMs are a good encapsulation of the average human, of what the average human knows and how the average human thinks about things, the answer is yes, basically. There’s an interesting point there: when you want the average conclusion, I find it very useful to go to an LLM and say, “Hey, what’s the conventional wisdom about this?” What we don’t yet see in LLMs is the spread of wisdom, so to speak; LLMs right now concentrate on the kind of statistical average, this is the conventional wisdom, so to speak. Probably with similar methodology we’ll be able to see something about the distribution of conclusions and things like that. But I tend to think that what we’ve got in an LLM is a good encapsulation of the conventional human, so to speak. One of the things that happened over the last 300 years, or 2,000 years, or whatever, is that we realized there was this ultimately computational idea of formalism that lets us go beyond mere human capability. The issue with that is that it cuts both ways. It allows us to build tall towers that are far beyond what we can do with our minds, but those towers may be things where we say, well, that’s a tower, I don’t care about that tower. It’s a separate issue whether the towers we build that way are towers that we humans, in the current state of what we’re like as humans, with our brains and our neural nets and so on, might not care about. Maybe some future human that is sort of fused with this kind of computational tower will say, of course you care about that, because somewhere in the thing that counts as us is a thing that contains that kind of computational capability, in addition to containing our sort of base neural kinds of capabilities.

Watch on YouTube
