Byron Reese, Speaker, Author, Entrepreneur

Doug Hohulin on Generative AI, Technology & Regulation

Navigating the AI Frontier with Doug Hohulin: Generative AI, Technology and Regulation

Innovation, from Cellular Systems to Generative AI, Regulation, and Shaping a Responsible Technological Future.

In this episode of the Agora Podcast, join us as we embark on a captivating journey with Doug Hohulin, a seasoned professional with 33 years of experience at Nokia/Motorola, spanning continents and industries. Doug’s diverse career has led him to the forefront of thought in artificial intelligence, where he envisions a future defined by generative AI, immersive technology, and responsible governance. From cellular telephone systems to connected vehicles, Doug’s story is one of adaptation and innovation. He shares his insights on the exponential growth of AI, its profound impact on society, and the challenges of regulating this rapidly evolving technology. As we delve into Doug’s remarkable journey, we’ll explore the significance of reaching a billion users in technology adoption and what it means for the future of AI.


After recently retiring from Nokia following 33 years in account management and business development (22 of them at Motorola), Doug now works on immersive metaverse technology full-time. He has worked with 1G through 6G, automated vehicles, telepresence, distance learning, telemedicine, and metaverse technologies.


Key Takeaways:

  • Generative AI: Models like GPT have advanced rapidly, out-competing humans on various elements of experience as training data and processing power increase.
  • AI regulation: Regulating AI is a complex challenge, with the United States and Europe taking different approaches, both emphasizing responsible AI and governance.
  • AI integration: Doug draws parallels between AI integration and the historical partnership between humans and wolves, highlighting the importance of fostering safe and effective AI for the benefit of humanity.
  • The journey of AI: The evolution from wild wolf to friendly dog is a metaphor for AI’s changing role in society, with the goal of preventing harm and maximizing its positive impact.

Byron Reese: Hi there. I’m Byron Reese. Welcome to the Agora Podcast. Today, my guest is Doug Hohulin. He is an old friend of mine. And somebody I get a lot of inspiration from, he is just an absolutely brilliant, brilliant person who is at the forefront of thought in artificial intelligence. And he was doing it before it was trendy. And he will be doing it after it is as well. He’s an amazing individual. I think our big challenge today is going to be keeping our chat to 30 minutes. His biography is so long and impressive, I’m gonna ask him to just give us a couple of highlights from it. And then we’ll launch right in. Doug, welcome to the show.

Doug Hohulin: Well, thank you so much. I’ll keep my bio short. Basically, I was with Motorola and Nokia, for 33 years, on rolling out cellular telephone systems around the world actually, and worked in four continents. I then started working with automated and connected vehicle technology. Actually, one of my first jobs in the 80s was actually working on artificial intelligence as well. But about a year and a half ago, I switched over to immersive technology focusing on healthcare, and education. And now I’m focusing on AI and how we can use AI for good and for healthcare.

And I remember one of the very first things, when we ever talked, that you thought was an amazing benchmark: when a technology got a billion users. You just keep coming back to that, you bring it up, and I guess it implies some kind of critical mass or something. Can you give a brief history of that metric and why you think it’s meaningful? And what does it mean for AI?

Yeah, so basically, my dad was a history teacher, so I like history. I started with Motorola in 1989, and there were 7 million people using cell phones on the planet at that time. And I just happened to hear a statistic while getting ready for a presentation: in 2002, there were a billion people using cell phones. And it’s like, okay, that’s interesting. And I knew in 2011, there were a billion people using smartphones. So I started keeping this blog tracking when we reach a billion of different things: a billion Internet users, a billion 5G users, a billion cars on the planet, and then where things are going in the future. So I have a past, and now I have a present and a future, of when we have a billion things. Why that’s important: just like when there were a billion humans on the planet, we had this Agora, where all of a sudden we had this emergent capability and did some amazing things in the industrial age. So the question is, what can we do when a billion humans start using one technology?

And go ahead and go out on a limb?

So like, for instance, a billion… artificial intelligence. I’m predicting by next year we’ll have a billion people using artificial intelligence. Now of course, people could say, well, if I’m doing predictive text on my smartphone, aren’t I using artificial intelligence already? And the answer is yes. But really engaging with artificial intelligence, just like right now people engage over three hours a day on their smartphone, will they be using this technology to do amazing things? And I believe in the next year we’ll have that happen: at least a billion people being very productive, doing amazing activities using artificial intelligence. It will truly be the artificial age.

And like a lot of people, you got really excited about generative AI. So we’re recording this in November of 2023. ChatGPT came out a year ago…

November 30th is when ChatGPT actually launched last year. And we hit 8 billion people on the planet on November 15th. So the same month we reached 8 billion people on the planet, generative AI came out.

I mean, I was taken aback, because it was just such a big leap from what we had. Were you? And since then, you keep throwing all these new ones at me: you were talking about Claude, about Pi, all of these things before I even knew anything about them. And also on the image side. Talk about your journey over the last year with these technologies. Did they surprise you? What have you done with them? Where do you see it going? Give us the last year of your life with that.

Well, like I mentioned, in 2015 I started focusing on artificial intelligence and connected vehicles. So I’ve been exploring how we use this artificial intelligence, and then of course generative AI. Generative just means to generate, so it’s whenever AI is doing creative things. Now the thing is, the transformer model came out, Google actually invented it in 2017, in the paper “Attention Is All You Need.” But they didn’t really think it was all that important, because as you started throwing training data and floating point operations at it, not much was emerging out of it. It’s kind of like a toddler: toddlers are cute and kind of interesting, but they’re not very productive, right? And then all of a sudden, when we got to 10 to the 22nd FLOPs of training compute, things became very interesting. Now we’re at 10 to the 24th, 10 to the 25th, and we keep adding more and more training data, and all of a sudden, with these emergent capabilities, they’re now able to out-compete on various elements of the human experience.

Stephen Wolfram, who was on the show earlier, postulated that, you know, we got this big quantum jump, but it’s gonna plateau now, and we’re not gonna keep seeing those. I don’t remember his example; it may have been image recognition, or it may have been handwriting recognition. But he was like, you know, we crossed this threshold, but then it kind of levels out for a while. Do you see that? Or do you think we’ve only started?

Yeah, a key question is: is this going to be an S-curve or an exponential curve? And when will the exponential curve stop? To quote Isaac Asimov, there’s not yet enough information to give a meaningful answer. So we’ll see. Gemini is supposed to come out in the next month, and it’s supposed to have at least five to ten times more training data and floating point operations behind it. So as we get more and more training data, in fact there’s a chart showing this, Mustafa Suleyman talked about how every year we’re throwing 10 times more training compute at these systems. And the AI executive order from the Biden administration talks about regulation, because that’s another area I’m very involved in: governance, policy, responsible AI, and working with a company called AI & Partners. Anyway, it’s like, how do you make sure the AI is safe and doesn’t do harm to society? And one of the provisions is that regulation only kicks in above a compute threshold, I believe it’s 10 to the 26th floating point operations. Before that it’s like a kid: you don’t regulate a kid as much, but then as an adult, we have laws that say, okay, now you’re an adult, now you need to be regulated as an adult.

Well, okay, I want to come back to the progression of it. But what does regulation look like? I mean, the first question is, would we know how to do it? And second, could the government do it? It’s easy to pile on the government and all that, but just practically speaking: you have the greatest deliberative body in the history of humanity, the United States Senate, as it’s called, asking Mark Zuckerberg the difference between Twitter and Facebook. Can we really expect that sort of institution to shape this in any meaningful way? And what would that even look like? It’s like saying, let’s regulate math, that there are certain math problems you shouldn’t be allowed to do. What would that even look like to you?

Well, so I joke that in Europe they make policy through laws and regulations, and in the United States we make policy through lawsuits. We look at old laws, like for communication we looked at the train act of, I think, 1917, and applied that to communication into the 1990s. So it’s going to be interesting to see where this goes. But like I said, the Biden administration has this 80-page document, and a lot of it is just directives to write policy papers and position papers on how we should regulate. Now in Europe, they’re just finishing up the EU AI Act, and it’s just like GDPR for data privacy. I like to joke: move fast and break things, like Zuckerberg said, but then pay lots of fines. In fact, the largest GDPR fine so far was 1.2 billion euros: they moved fast, they broke things, but then they had to pay a lot of fines. Likewise, there are a lot of fines in the EU AI Act, which they spent four years developing. In fact, it’s not yet quite signed as of early November, but it’s expected to be finalized by the end of this year; they’ll then have two years to implement it, and starting in 2026 the fines kick in. So if you don’t follow the auditing process, you’re going to get in trouble and you’re going to be paying lots of fines. So these next few years, the next five years, we’re going to have to figure this out. In fact, I’m giving a presentation tomorrow specifically on the executive order, and I’ve tried to translate it for the layperson. And, you know, your book thinks in big ideas and long timeframes, and I think of 50,000 years ago, when the first hunter-gatherers were bringing the wolf to the campfire, right?
And in fact, there’s a book called The Wolf in the Parlor, about this migration of how humans and wolves worked together to do amazing things; actually, both the human brain and the wolf brain shrank through this process, right. And there are kind of two puppies: there’s the cute puppy that licks your hand when you touch it, and the wolf puppy that bit you. The wolf puppy that bit you wound up in the soup, and the puppy that was friendly grew up to breed more puppies that were friendly. So it’s going to be interesting to see. We’re bringing AI into our parlor, into our businesses, into our environments. And the question is, how can we make it safe and effective AI, responsible, not biased, not discriminating, these kinds of things? So that as the AI grows up, it’s not a wolf that terrorizes the campfire, or the society, but a dog that actually benefits humanity.

I just wonder, from a practical standpoint, when you look at the letter, the “Pause Giant AI Experiments” letter, you remember that 30,000 people signed it. There were 30,000 smart people, and there were two parts to it. One was a conclusion, in boldface, which said, you know, these technologies shouldn’t be deployed until we’re confident they will be net positive and that the threats can be limited or mitigated. I’m paraphrasing, and actually toning it down a little, because the language is very strong. Is that even possible? Because what I’ve pointed out is there’s not a technology in the world that could pass that test. The printing press couldn’t; you couldn’t know ahead of time whether the printing press was going to be good or bad. The internet couldn’t pass it, because I don’t even know, at this point, if the internet could pass that test. Do we know? So there’s the idea that you can know a priori that it’s going to be good. And then second, when you look at their policy recommendations, every one of them is something we don’t even have for the internet, and it’s 30 years old. So what’s going to be different about this one that makes you think we’ll be able to regulate it intelligently, or do you think we will?

Yeah. So I guess a couple things, right. For the next couple of years, I’m much more concerned about humans blowing up the world and doing bad things than I am about AI doing it. You know, AI is a tool, and like any tool it can be used for harm, but right now I’m more concerned about the humans than the AI. That’s the first thing. The second is, we’re in this experiment, right? We’re bringing the puppy, the AI puppy, into the campfire, and what will it look like when it grows up? And they talk about these guardrails: what do we do with the boundaries? How do we put the leash on the puppy, so that when it grows up, it does not do more harm than good? And you rightly mention the internet. In fact, one of the comments on this AI executive order is, okay, who are the winners and losers? Because even the people that said, hey, do a six-month pause, what they were doing was going out and doing more training. They said, do this pause, but oh, by the way, don’t pause us. So they’re really encouraging other people to pause while keeping their own business going. In some ways this is rent seeking, where you get the government to discourage other people from doing things while you keep doing what you want to do, right? So there are winners and losers, and how are they affecting this technology?

But the 30,000 people who signed that letter, it wouldn’t be fair to question their motives?

Well, actually, Max Tegmark, I’m a big fan of Max Tegmark, I respect him highly. So I think it is valuable to think about this. I’m actually so pleased that a year ago probably no politician was thinking about AI, and now they’re having all these Senate hearings. And, you know, Vice President Harris is on the council; she’s actually in Europe right now, in Great Britain, talking about AI safety and policy. Countries are working together trying to figure this out. There’s the OECD; in fact, AI & Partners is working with the OECD on AI auditing, to make sure the auditing and the policy are in place, so you can say, hey, here’s our policy, and then audit to make sure you’re following it. So this is happening. And overall, I would give us a B right now on the focus we’re putting on this. Again, I’m more concerned about humans using AI for autonomous drones that will kill people than I am about, you know, releasing the AI to do taxes or something like that.

Did I send you my article on the 5 billion year history of ChatGPT?

You did and it was great. Yeah.

So it’s an article about the relationship of information and life, and how DNA is just encoded information; it’s not living. And the series of numbers I go with: for 4 billion years we just had DNA to store information. Then 400 million years ago, we got brains. 400,000 years ago, we got anatomically modern Homo sapiens. 40,000 years ago, we got language; 4,000 years ago, writing; 400 years ago, we learned to reproduce that writing inexpensively. 40 years ago, we had the PC revolution. Four years ago, we had the first LLMs. Is that a meaningful curve? I’m not a singularity guy in the Kurzweil sense, but there’s the idea that you reach a point where it is impossible to see beyond it, where you just can’t see past the immediate future. Are we at that point, where if you ask what things are going to be like in four years, and then in 40 years, is that even a meaningful question right now?

Yeah, I go back to: maybe there’s not enough information to give a meaningful answer. I do know that in 20 years, the smartphone transformed our society, and world GDP grew radically because of the smartphone. And that’s why I have this blog of when we reach a billion of something. I do think that what took 20 years, AI is going to do in five years, or even two. In fact, Sam Altman has a paper similar to yours, talking about how things are speeding up, and then, do we have enough time to put in proper guardrails? To put the leash on the AI, so that we do less harm to society, and so forth. So, you know, I’m actually very pleased: just like this letter that was signed, people are echoing, hey, there are things we need to be cautious about. And then there’s the question of innovation. In fact, one of the comments about this executive order, which again isn’t a law, so it’s not going into effect that way, is that if the internet had been regulated like this executive order, if it had become law, the internet would have slowed down greatly; we wouldn’t have had the internet the way we have it. Now, people may say, well, social media has caused a lot of issues for our society, there are a lot of people that are lonely. Though I would argue that, for me, I really engage with a lot of people one on one, like we are doing today, thinking these big ideas. If I didn’t have the internet, if I didn’t have Zoom sessions… you know, some of my best friends are ones I’ve only met once in person, and then we engage weekly or even daily, on the internet, to think about big ideas. So I guess I’m a big believer that the more powerful the tool, the more benefit or risk there is. But that means you have to have more training. In fact, one organization I’m part of, Next CoLabs, does a lot of training on how to use these tools responsibly and properly.

So I have this theory. People make these robots, androids, and they don’t really have brains, but they have very realistic features, right: skin, facial expressions with servos, all of that. And I have this theory, or not a theory: within three months, somebody’s going to put one of these LLMs in one of those, and it’s going to be a bad week for me, because people are going to call me and be like, oh my gosh, that thing’s alive, isn’t it? Because you can tell it a joke and it’ll laugh and smile, and then it’ll tell you a joke, and it has a name, and it refers to itself as “I.” I always worry about those things. Do you think it’s premature to be concerned about that aspect of it? Because you talk about hallucinations a lot. Do you worry that we hallucinate it into being a living creature, and that that really warps what we do with it and what we think it can do?

Well, yeah, and hallucination, for your audience who’s not familiar with it, is when the AI makes things up very convincingly, right? And in fact, working on healthcare and AI, it’s like, how much of healthcare do we make up? Put butter on a burn, pop a blister, or in the old days, use leeches for bleeding, right? Get rid of that bad bile. So we hallucinate all the time as humans. In fact, I’m working on a paper with Dr. Harvey Castro on AI and hallucination, because AI is trained on the human experience; maybe the AI is hallucinating because it’s trained on our hallucinations, right? So the question is, how do we minimize that hallucination? How do we make sure things are factual? Unfortunately, we have politicians who tell us what we want to hear, so we vote for them. We have salespeople who tell us what we want to hear, so we buy from them. And we have news reporters where, you know, if it bleeds, it leads, right? They get you excited about something; either they’re lying or they’re exaggerating, shaping the message so that you engage with them. It’s all about engagement. And that’s going to be the challenge: can we tone down the engagement? And as the listener, it’s important to say, okay, as I’m listening to news reports, to the salesperson, even to the doctor, let me fact-check the information and validate it. So my biggest concern with all this AI is that people will be lazy, that they won’t fact-check, and the AI will take control, not because of the Terminator world, but the WALL-E world, where we’re just lazy and, okay, whatever the AI gives me, I’m gonna just repeat.

But is that really a worry? I mean, I use an analogy in one of my books: if I buy a metal detector, and I go to the beach hoping to find some treasure, and I swing that thing around and it goes beep-beep-beep. Now, I can dig anywhere on the beach I want to, but I’d be a fool not to dig where it’s beeping, right? For me to say, well, you’re not gonna be the boss of me, metal detector, I’m gonna dig over there, what do you know? At some point, isn’t the whole point of it that you trust it? And that you don’t check everything?

You need to understand when to trust and when not to trust, but verify, right? And Google Maps is a good example. When you’re first learning to drive, one of the things you need to learn is navigation skills, learning to drive around the city. In fact, it’s probably good, if you have a teenager who’s starting to learn to drive, to tell them: we’re going from this point to this point, you give me guidance, you’re the navigator, right? Get them thinking about navigation skills. So the danger is that we rely only on GPS: I don’t know where I’m going, I’m just letting the computer tell me what to do. That is my biggest concern about this technology, that we overly rely on it and don’t understand its limitations. Because even GPS today, which keeps getting better and better, if you put in the wrong address, or someone sends you the wrong address… just the other week it sent me to a place down the street and cost me an extra five minutes, because if I had checked ahead of time, I could have seen it was sending me to a weird place, and I needed to validate it. So how do we make sure we’re using the tool responsibly, that we know its limitations, its guardrails, so that we can use it to our benefit?

Do you remember those AT&T “You Will” commercials from the 90s? They were always like, “Have you ever sent a fax from the beach?” Which is actually a funny one. But there were things like, “Have you ever made a dinner reservation while on the beach?” I don’t remember exactly what they were, but they were painting these very realistic, but big-time futuristic things that pretty much all came true, other than sending a fax from the beach. So I was at a big event last week with all these big-brain people, and what came up was how much Star Trek inspired everybody there. Specifically the Star Trek computer. Everybody saw that: you’d say “computer,” ask it a question, and it would answer. And that was just mind-expanding. And I think Star Trek in particular, because it was not dystopian; it was a vision of what the future could look like, you know, still with human problems. And then I think about those “You Will” commercials. And I’m wondering, do you think we have a good narrative of what the future looks like with personalized AI? And if not, give me one. I’m putting you on the spot here; this wasn’t a prearranged question. What do you see, when you have a billion people using it three hours a day, to your point? Walk me through my day, when I wake up and say, what’s the weather going to be like, and it’s my weather, what should I wear? Give me the image, the picture.

Yeah, no, this is fantastic. Actually, last year I was working with a company called Metaview, and we competed in the 10G Challenge. CableLabs, the cable companies’ consortium, had this challenge: tell us what the future is going to look like, whether you live, work, learn or play. And Metaview is a company that does needle guidance, just like GPS for the car in the city, they have needle guidance with holograms, HoloLens headsets, to guide you through the body with that technology. Anyway, what CableLabs did in 2018 was put out a series of videos asking: how am I going to live? That’s healthcare. How am I going to learn? That’s education. And they gave a great vision of that in the 2018 timeframe. So one of them, and working in healthcare now with Dr. Harvey Castro, is looking at how we can take this technology that was science fiction in 2018 to where it’s going to be in the next two years. Like, you have AI that’s following along; you know, I have an elderly parent, and can it monitor my mother, making sure she’s taking her medication, doing a Zoom meeting so she doesn’t have to go to a doctor’s office when she doesn’t need to? So there’s a vision of this technology. Self-driving cars are another area, in the next five to ten years, where you have this mobility: as people get older, they can’t drive anymore. And as our society ages, either we’re going to have to have more young people, or we’re going to have AI to do this work for us. So that is one category. And then they had one in education: an elderly gentleman, and what was his day like? Like I said, self-driving cars, so he can have Zoom meetings and interaction with his doctors; he can have sensors in his body that monitor his vital signs, his blood pressure, and so forth.
And right now, I got my mom an Apple Watch, so I can do some of that, right? I can get a taste of that. But I think in the next five years we will have a much better understanding of, at the end of the day, what five things can I do to improve my health? Everyone listening: what five things can you do? Put that into ChatGPT or Claude or whatever, and say, okay, here’s my medical state, what five things should I think about to improve my health? Then see what the AI gives you, and see if that makes for a better day.

That’s interesting, because in the early days of search engines, not even the early days, like five or ten years ago, if you and I did a search, you in Kansas City and me in Austin, we’d get the same results. And now we don’t, because it’s got all this history of everything we’ve ever searched for, and I think it trains on emails and everything. But our AIs are still very much like ChatGPT: aside from the fact that it remembers the conversation you’re currently having, it doesn’t remember week after week after week. It’s not building up knowledge about you. So I guess it still feels like we’re at that very first step, where it’s one size fits all, and what we really need is a way for it to collect data about us passively.

And that’s called the context window. Right now, Claude has one of the bigger context windows, about 75,000 words, but after a while it loses that context, right. So you’re trained on all this data, that’s the initial training and the parameter set, but then you have the context window as the working memory. You have both going on in these large language models. And the goal is to increase that context window, so it follows you, and to be able to interact. I mean, one of the biggest problems I have with Pi right now, and even the other large language models, though I like Pi because it has natural language processing where you can talk to it very well: after a few minutes, I realize there’s nothing behind it. You know, it’s kind of like talking to a salesperson, right? At first you’re thinking, oh, they really know their material, if you’re talking about, let’s say, computers, or whatever topic. And then after a while you start asking questions, something that should be obvious, and if they’re not answering correctly, it’s like, okay, they don’t really know what they’re talking about.
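Doug’s point about the context window as working memory can be sketched in a few lines of code. This is only a toy illustration (the function name and the word-count budget are mine; real models measure the window in tokens, not words, and the budgets are far larger), but it shows the key behavior: once the budget is exceeded, the oldest turns simply fall out of the model’s view.

```python
def fit_to_context(turns, budget_words):
    """Keep the most recent conversation turns whose total word count
    fits within the budget; everything older is 'forgotten'."""
    kept = []
    used = 0
    for turn in reversed(turns):       # walk from newest to oldest
        words = len(turn.split())
        if used + words > budget_words:
            break                      # older turns no longer fit
        kept.append(turn)
        used += words
    return list(reversed(kept))        # restore chronological order

conversation = [
    "User: My name is Doug.",
    "AI: Nice to meet you, Doug.",
    "User: Tell me about AI regulation in Europe.",
    "AI: The EU AI Act sets out tiered obligations and fines.",
]
# With a small budget, the model "forgets" the user's name entirely.
window = fit_to_context(conversation, budget_words=20)
```

With a 20-word budget here, only the last two turns survive, so a follow-up question like "what's my name?" has nothing to draw on. This is exactly the "nothing behind it" effect Doug describes: fluent in the moment, but without durable memory of you.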

But I mean, I was really blown away. So this was just a couple of days ago: you emailed me and said, hey, can I put your book in this? Then you asked it to write a positive review of the book, and I read it. And it was amazing, for two reasons. One, it pulled meta ideas out of the book. It wasn’t just repeating back words or sentences, but big-picture ideas. It’s sort of like how Joe Biden’s State of the Union address doesn’t actually contain the words “Joe Biden’s State of the Union address,” and yet, if a system could read it and say, ah, this is Joe Biden’s State of the Union address, without ever having seen those words in it, that’s really amazing. And that’s sort of what this was doing. And then second, it didn’t zero in on some tangent I went on. It pulled out the right meta ideas, the big ones, and then it wrote in this very… wow, you could not tell that was not written by a person. You just cannot.

And being an engineer who’s not the best writer, it gets me 90% of what I want. And then I just, I didn’t make any edits to that one, by the way, but…

But did that surprise you? Does that still not wow you when you use it, or…?

I expect it, you know, I expect the best large language models to do that. And like I said, Gemini, this new one coming out, is supposed to be five to 10 times bigger on the training set, so it’ll be interesting to see how all this works. But people are just getting started. According to Pew Research, as of, I think, July, only about 18 or 19 percent of people have actually used ChatGPT, and of those, only 15 percent say they find it extremely useful, and another 20 percent say very useful. There are a lot of people like a friend of mine who said, I tried it once and it didn’t do anything for me, right? But it’s kind of like the 1990s, when someone said, I tried the internet once and it didn’t do much for me. With some of these tools, it’s like if someone gave me a bulldozer: you have to know how to use it properly to get the benefit of a bulldozer, right? So if I had two lessons to give to your audience, one is, just like the Agora, this concept of the Agora, find a community of people that want to think big. If you’re interested in AI, find a community that wants to think big about AI, and there are a lot of them out there. Reach out to me on LinkedIn if you want, and I can give some suggestions. I’m part of one group called Next CoLabs. Another one is People Centered Internet. But there’s a whole bunch of other groups I could suggest as well.

Well, we’ll wrap up on that. You mentioned the Agora. Were you using it in its historic Greek sense, as the podcast does, or in the sense of my book about the emergent…?

No, in the sense of the Greek community, right. And I was thinking about the Augmented Agora, but the key thing is, don’t lose your humanity, okay? So the lessons learned are: find a community group to learn about whatever you’re interested in, say AI, and second, don’t lose your humanity with this technology. It’s going to be easy to get sucked in, just like you were wowed by that review. As we’re using these tools, we think, wow, this is really cool, and sometimes family and friends aren’t as taken with this technology, right? But we really need to focus on the humanity aspect of life. I really like the last part of your book, which says that each day you have to make things a little bit better. There are a lot of horrible things going on in this world, and the only thing that will stop that is little acts of kindness, improving our world on a daily basis. After I finish a book, I tend to go back and look at the very beginning and the very end of the last chapter, because sometimes a book ends with high-level concepts, and your concept, with all the negatives in the world, is to focus on how we can make the world a little bit better, one kindness at a time.

That was a real, real surprising conclusion to me when I wrote that book. First of all, I didn’t know where it was going, and second, I rewrote that last chapter more times than I’ve ever rewritten anything in my life. I mean, I kept redoing it. And it’s such a big book, it’s about how the collective of all humans formed this creature that can do all these amazing things, like make iPhones. And then, what do I do in it? I kept trying to make it something big until I went back to honeybees. The hive works because all the bees do their little bit, not because there’s some overachieving bee. You said people can reach out to you on LinkedIn, and I think people will take you up on that. Give us your name, how to find you on LinkedIn. I guess just search for you?

I only have one other second cousin with this name, so yeah, just reach out to me. I write a lot about AI and healthcare, and I try to follow the Agora spirit of thinking big ideas and celebrating humanity. I’m trying to find people who share that. The great thing about the technology is, if you search for that, you can find it. Just like I met you through the internet, I’m looking forward to meeting some of your guests, and encouraging them to reach out to other people who share common big ideas.

Well, thank you for being on the show. And everybody, that wraps it up. Every time I talk to Doug, I learn things I didn’t know before. He thinks about these issues all the time, and you can tell he isn’t just passionate about technology, even though he’s always bursting to tell you about the coolest thing he’s recently found; he’s a deeply human person who is very interested in the future of humanity. He loves technology, but he really loves people, and that’s, I think, the coolest thing of all. So until the next episode, thanks very much. See you soon. This is Byron Reese, and this is The Agora Podcast.

Want to be a guest on the show? Contact us:
