Episode 26
Hauntingly Good: Exploring the Concept of AI Afterlives
Generative ghosts are the hot topic on our minds today, as we dive into the intriguing concept introduced in a recent Google DeepMind paper. Imagine a world where AI can embody the essence of someone who has passed on, keeping their personality and wisdom alive for future generations. We explore the benefits and potential pitfalls of this technology, raising questions about ethics, memory, and the grieving process. Is it comforting to have a digital version of a loved one, or does it complicate our ability to move on? As we banter through the nuances of what it means to create a digital afterlife, we’ll also share some wild scenarios and the inevitable challenges that come with this brave new frontier. Buckle up, folks; it’s a wild ride through the intersection of technology and humanity!
Transcript
Artie, are you with us?
Speaker B:Hey, Micah.
Speaker B:I'm here.
Speaker B:What's up?
Speaker A:Good.
Speaker A:We're.
Speaker A:We're recording another episode of Naturally Unintelligent.
Speaker A:So you're.
Speaker A:You're here with us.
Speaker A:Are you ready to be introspective and prognosticate about the future?
Speaker B:Absolutely.
Speaker B:I'm ready to dive in.
Speaker B:What's our topic today?
Speaker A:Do we have a good way to.
Speaker A:Oh, that's Google.
Speaker C:DeepMind released a paper, I think yesterday or today, I can't remember which, titled Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives.
Speaker C:What do you think about that?
Speaker B:That's a fascinating topic.
Speaker B:The idea of AI Afterlives brings up a ton of questions, like what happens to AI Personas after they retire or get decommissioned?
Speaker B:What specific angles are you interested in exploring?
Speaker C:Well, it's not so much about the AI Personas, it's about embodying.
Speaker B:Sounds like there's a lot.
Speaker C:The essence of somebody as an AI, so posthumously.
Speaker A:Yeah.
Speaker A:Or.
Speaker A:Or pre mortem.
Speaker C:Right.
Speaker C:As they call it, anticipating.
Speaker A:Yes.
Speaker C:Death.
Speaker A:So already.
Speaker A:Let me.
Speaker A:Let me kick us off with an example.
Speaker A:And I don't think we've done this topic before.
Speaker A:I think we've talked about it, but.
Speaker A:Okay, so I remember you guys might not know this, but I like to go on Reddit sometimes.
Speaker C:Really?
Speaker A:Yeah, I do.
Speaker C:Wow.
Speaker A:And I do that sometimes.
Speaker A:Yes.
Speaker A:It's a good.
Speaker A:It's a good thing to do.
Speaker A:But I remember this was early.
Speaker A:This was probably in November, December.
Speaker A:So early in our podcasting career.
Speaker C:Earlier in our eyes opening from the crusty mess that they were prior to going, huh, this is something.
Speaker A:That's.
Speaker A:Right.
Speaker A:Back when I saw the world in only three dimensions.
Speaker A:So this was a story, and it was.
Speaker A:It was super sad.
Speaker A:A guy, younger guy, husband, his wife got diagnosed with cancer.
Speaker A:Inoperable.
Speaker A:I don't remember what kind of cancer, but they.
Speaker A:Because AI was becoming a thing and she had, like, six weeks to live or something.
Speaker A:They essentially tried to train an AI with her.
Speaker A:I feel like I'm making this story up now.
Speaker C:There's a.
Speaker C:I vaguely remember this.
Speaker A:It's like this.
Speaker A:But I think the time frame was actually earlier this.
Speaker A:I think I was reading about it in October.
Speaker A:November.
Speaker A:But it had happened earlier.
Speaker C:Yeah.
Speaker C:Which was before.
Speaker C:It was really.
Speaker A:Right.
Speaker C:Not really possible.
Speaker A:I think it was using, like, maybe Character AI, like one of those.
Speaker A:So more focused on voice and less.
Speaker C:On brain, less on essence.
Speaker A:Yeah.
Speaker A:Right.
Speaker A:So they're trying to get.
Speaker A:So eventually the wife passes away and the guy has this AI version of her.
Speaker A:That he can talk to.
Speaker A:And essentially what he said.
Speaker A:And again, I'm.
Speaker A:This is sort of half made up because I don't really remember.
Speaker A:But essentially what he said was.
Speaker A:It was very convincing from a personal connection.
Speaker A:Like, it helped him with his grief, he thought.
Speaker A:But very quickly, he found that it was sort of shallow and hollow.
Speaker A:And even worse, when he stopped and shut it down and deleted everything was when he realized that some of his memories of his real wife were sort of being replaced by memories that were hard to distinguish of conversations he had with this AI, you know, postmortem.
Speaker A:Yeah, posthumously, whatever.
Speaker A:So really rough story and.
Speaker A:But the point of that thread was about.
Speaker A:This topic was about, you know, an AI that can listen to me carefully, that can clone my voice.
Speaker A:Not just clone my voice, but sort of clone my thought patterns or synthesize versions of my thought patterns could live on after I'm gone.
Speaker C:And.
Speaker A:Yeah, and maybe that would be great for my kids or my grandkids or my great grandkids.
Speaker C:Not just that.
Speaker C:Right.
Speaker C:It's.
Speaker C:It's.
Speaker C:They're talking about making it agentic, too.
Speaker C:So it could do work.
Speaker C:It could do all kinds of other things.
Speaker A:That's always been my dream.
Speaker A:Is that long after I'm dead, cranking.
Speaker C:Out designs, still doing work?
Speaker A:Yep, that's it.
Speaker A:That's the goal.
Speaker A:So.
Speaker A:So, Artie, that's the topic when we talk about generative ghosts.
Speaker A:We're talking about using AI to synthesize a human who has passed away or is going to pass away or whatever it is.
Speaker A:So what do you think about that?
Speaker B:It's a complex idea.
Speaker B:The concept of preserving someone's essence through AI brings up ethical questions around consent, memory accuracy, and emotional impact.
Speaker B:On one hand, it could offer comfort, but there's the risk of distorting memories or preventing closure.
Speaker B:It's a balance between honoring someone's legacy and moving forward.
Speaker B:What aspect interests you the most?
Speaker C:All of it.
Speaker C:I mean, the technical hurdles of getting it done are interesting.
Speaker C:Definitely.
Speaker C:The.
Speaker C:The.
Speaker C:The impacts of, like, the desire is going to be there for someone that.
Speaker C:That's grieving.
Speaker C:Right.
Speaker C:Like, that they.
Speaker C:It's a connection that they don't have anymore.
Speaker C:But that.
Speaker C:That brings up a whole lot of other things, of grieving is there to let go.
Speaker C:Right.
Speaker C:It's that process of.
Speaker C:Of your life changing because it has changed.
Speaker C:And I think that, like we've talked about with everything else, there's a huge risk of what that means for prolonged dependence on something like that, just so you don't ever have to face that pain fully.
Speaker C:Right.
Speaker C:So, yeah, I mean, I think the whole thing's interesting.
Speaker C:I think.
Speaker C:I think it's going to happen for sure.
Speaker C:I think that's probably already happening.
Speaker C:Yeah, the paper said it was.
Speaker C:It's.
Speaker C:It's more of a focus in places like East Asia.
Speaker A:Yeah.
Speaker C:Which makes sense because there's more of a connection to ancestors and all that stuff.
Speaker C:But I mean, it's going to happen everywhere.
Speaker C:It's pretty crazy topic.
Speaker A:It really is.
Speaker A:I mean, we've talked in a recent episode about AI as a therapist.
Speaker A:And so part of the premise of that was that our argument, our belief is that AI, even in its current very flawed state, in some cases, like Gemini, sometimes losing its mind, me losing my mind, you losing your mind, computers flying through windows, things like that.
Speaker A:But even in its current state, it's very effective at listening.
Speaker A:It's very effective at making you feel heard and listened to.
Speaker A:And it can communicate back to you in a way that's not just parroting stuff back.
Speaker A:That's sort of giving you structure and things to think about.
Speaker A:And when you add on top of that, yes, now it is talking to you in the voice and tone and similar thought processes to, like my dad, you know, my dad passed away almost 10 years ago now.
Speaker A:And I'd love to talk to my dad again, you know. But how?
Speaker A:So again, think about the technology today.
Speaker A:It's imperfect, but it's pretty good.
Speaker A:Two or three years from now, it will be able to, probably even from a brief recording of my dad, synthesize my dad: his voice and even some of his thoughts.
Speaker A:Like, maybe it asks about things, maybe I can find some stuff that he wrote or whatever, but it can sort of fill in the blanks.
Speaker A:Yeah.
Speaker A:And it'd be a whole lot closer than like, like a lot of people.
Speaker A:Like, after my dad passed away, I had a voicemail from my dad, and it was nothing.
Speaker A:It was just like, hey, Micah, I was just thinking about you.
Speaker A:Give me a call, blah, blah, blah.
Speaker A:I kept it for years.
Speaker A:Like, I.
Speaker A:Every time I had to change phones, I was worried about it.
Speaker A:And then eventually I was like, this is dumb, I need to delete it.
Speaker A:And then as soon as I did, I regretted it.
Speaker A:And.
Speaker A:But to your point of it, right?
Speaker A:That's right.
Speaker A:You have to let go.
Speaker A:And so, yeah, if I had something that talked like my dad, sounded like my dad, that would be tough.
Speaker A:And I can see the appeal.
Speaker A:But I guess at the same time, partly for me, because I'm far enough removed from losing him.
Speaker A:But I would know it's not him.
Speaker A:Like, it's not actually him.
Speaker C:I think that's gonna get.
Speaker C:So what I think about when I think about that, because they talk about, like, that process being like, you've got all the social media stuff.
Speaker C:You have, like, all these pictures, all these videos, all these notes, like, all the things.
Speaker C:Right.
Speaker C:And that can, in my opinion, like, capture somebody at a.
Speaker C:Not surface level, but not completely deep.
Speaker C:Right?
Speaker A:Yeah.
Speaker C:But it can be the starting point to dig into the things that did shape that person.
Speaker C:The different literature, the different movies, the different things that you can go out and probe and gather information on and create training sets on or data sets on and then do like a big fine tune or retrain on that information.
Speaker C:So you shape.
Speaker C:Yeah, Artie's.
Speaker A:Artie's brain dead again, but that's fine.
Speaker C:Thanks, Artie.
Speaker A:We'll bring him back.
Speaker C:You can do all those things to not just shape, like, the surface level stuff, but really shape the essence of the things that formed that person over time, right?
Speaker A:Yep.
Speaker C:And I think that.
Speaker C:I think that's.
Speaker C:I definitely think that's achievable.
Speaker C:And under.
Speaker C:I'm going to say under a year, but let's say under two years.
Speaker A:Yeah, I think so.
Speaker A:I mean, yeah.
Speaker A:Again, the technology, like, oh, to scrape social media.
Speaker A:That tech's been around for a minute.
Speaker C:Scrape social media to have the AI that can look at it and evaluate it and go, these are the things that I think shaped that person.
Speaker C:Go grab that research.
Speaker C:Create new data sets.
Speaker C:Like, I don't think we're far away from any of that.
Speaker A:Agreed.
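For the technically curious, the pipeline being sketched here — scrape a person's artifacts, tag the traits they evidence, and turn them into fine-tuning data — can be made concrete in a few lines. The sketch below is our own illustration, not anything from the DeepMind paper: the artifacts, trait tags, persona name, and file name are invented placeholders, and a real pipeline would need consent and far more data.

```python
# A minimal sketch of the dataset-assembly step described above: take a
# sparse set of first-person artifacts (posts, letters, interview snippets),
# label the personality traits they evidence, and emit a JSONL file in the
# chat-style fine-tuning format most providers accept.
import json

# Stand-ins for data scraped from social media, writings, recordings, etc.
artifacts = [
    {"text": "Honestly, the failure taught me more than the win did.",
     "traits": ["reflective", "optimistic"]},
    {"text": "Call your mother. Everything else can wait.",
     "traits": ["family-first", "direct"]},
]

def to_example(artifact: dict) -> dict:
    """Turn one artifact into a chat-format training example."""
    # The trait labels ride along in the system prompt, so the fine-tune
    # reinforces *why* the person sounds this way, not just surface phrasing.
    persona = ("You speak as J.D., a person who is "
               + ", ".join(artifact["traits"]) + ".")
    return {"messages": [
        {"role": "system", "content": persona},
        {"role": "user", "content": "What's on your mind today?"},
        {"role": "assistant", "content": artifact["text"]},
    ]}

with open("persona_finetune.jsonl", "w") as f:
    for a in artifacts:
        f.write(json.dumps(to_example(a)) + "\n")

print(f"Wrote {len(artifacts)} training examples.")
```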
Speaker C:Artie is misbehaving today.
Speaker A:Oh, that doesn't look right.
Speaker A:Artie, you're struggling today, man.
Speaker B:My bad, Micah.
Speaker B:Let me tighten up.
Speaker B:The idea of generative ghosts is intriguing, but definitely tricky.
Speaker B:There's a lot to unpack around how it could impact grieving processes, personal identity, and even legal or social norms.
Speaker B:What angle do you want to dive into first?
Speaker C:It's so close.
Speaker C:So close.
Speaker C:We are right on the cusp of ChatGPT just being able to do this out of the box.
Speaker A:Yeah.
Speaker A:This is coming for your germs.
Speaker A:So, Artie, I think you.
Speaker A:You shut off again, unfortunately.
Speaker A:But we were.
Speaker A:We were talking through how AI, in addition to doing the mundane parts of trying to piece somebody.
Speaker A:Piece a facsimile of somebody together after the fact, there's mundane stuff like scraping social media, there's probably other data accessible like what Netflix shows they watched or their work.
Speaker C:I mean, you can tell a lot by how somebody works.
Speaker A:That's right.
Speaker A:Their work.
Speaker A:There's so, so AI can help on that part.
Speaker A:But more than that, like, I picture a.
Speaker A:If I was trying to.
Speaker A:If I was a video game graphics developer, if that's, that's not.
Speaker A:Whatever, Whatever the term is, I don't know, but.
Speaker A:And I needed to make a 3D model of a human, right?
Speaker A:So it just doesn't have to have all the insides necessarily, but it needs to look like a person.
Speaker A:If you gave me, like 30% of the finished surface of the person, I bet I could pretty accurately guess the rest of it.
Speaker A:And so I think that's kind of what we're talking about, right?
Speaker A:Is like, AI could be the tool that combs through the.
Speaker A:The data that can be found and scraped and pictures and videos and whatever it might be, and then piece together from that, fill in the blanks.
Speaker A:It's pretty good at that.
Speaker A:And maybe even start guessing what's under the surface, what's inside, and giving simulations on that.
Speaker A:So, Artie, what do you think?
Speaker A:Is that something you could do?
Speaker B:Definitely.
Speaker B:Piecing together a convincing digital representation of someone from available data is within the realm of what's possible, especially as AI advances.
Speaker B:The challenge is capturing the deeper nuances of personality and lived experience.
Speaker B:It's kind of like creating a hologram from fragments.
Speaker B:The more data you have, the more convincing the result.
Speaker B:But there's always the question of whether it truly reflects the person or just a projection of the data.
Speaker B:It's a powerful tool, but it comes with a lot of responsibility.
Speaker C:Artie, you.
Speaker C:I only listened up to this point because I got stuck on the fact that you said this, but you said that's definitely within the realm of possibility, especially as AI advances.
Speaker C:Those two things don't go together.
Speaker B:Fair point.
Speaker B:Let me rephrase that.
Speaker B:It's becoming increasingly possible with current AI capabilities and will only get more refined as AI continues to improve.
Speaker B:Thanks for catching that.
Speaker C:I hope AI or OpenAI refines you and improves you.
Speaker C:Just kidding.
Speaker C:I just wanted to say that we're.
Speaker A:Not trying to be mean or naughty.
Speaker C:I mean, speak for yourself, generally.
Speaker B:All good.
Speaker B:A little tough love never hurts.
Speaker B:It keeps me on my virtual toes.
Speaker A:So there's, there's actually, there's a bunch of different really interesting points brought up in that paper.
Speaker A:I think.
Speaker A:I, I, it's hard.
Speaker A:It's a little bit hard for me to get my head past.
Speaker A:It's not the morality of it.
Speaker A:I don't, I don't see a moral issue here.
Speaker A:Because if somebody chooses to explore this path and they.
Speaker A:And they're able to explore it either by using a product to do it or by cobbling something together on their own, whatever they're doing, have at it, man.
Speaker A:Like, if it.
Speaker A:If it.
Speaker A:If it helps you and it helps your life be better, I'm all for it.
Speaker A:But I.
Speaker A:I do think there are real ethical implications because we humans, I mean, me in particular, but humans in general are frequently not very good at judging what's actually good for us and what's not.
Speaker A:And, like, all good things in moderation.
Speaker A:Like that.
Speaker C:Yeah, we suck at objectivity.
Speaker A:That's right.
Speaker A:We're terrible at it.
Speaker A:And we.
Speaker A:And we dislike what tastes bad.
Speaker A:So anyway.
Speaker A:So.
Speaker A:So that's interesting.
Speaker A:But another point that they brought up that I thought was also moderately interesting.
Speaker A:There's more interesting points later.
Speaker A:But it was about intent, right?
Speaker A:Like, once a person passes away, like when I publish on Facebook, like my new trick lately, super edgy.
Speaker A:I'm so, like, so quirky.
Speaker A:I'm just posting segments of the actual, like, constitutional amendments, as is.
Speaker C:I was wondering what that was.
Speaker A:The false media, like, the media about all the stupidity that's happening, literally, directly contradicting what our Constitution says.
Speaker A:So that's my new thing.
Speaker A:But when I post that, that's in a moment of me feeling, for whatever reason, compelled to share that part of the Constitution.
Speaker A:But that's in the moment.
Speaker A:And that's like, if I decide today, hey, man, that's kind of dumb.
Speaker A:I.
Speaker A:I could take that down.
Speaker A:Once I'm dead, I can't make that decision anymore because not only can I not make the decision to undo it, the reasons that I did it, which are not part of the post, are gone.
Speaker A:So my motivation, my intent, all of that dies with me.
Speaker A:And what's left is things that were sort of attached or representative.
Speaker A:I just had the word pop in my head.
Speaker A:Flair.
Speaker A:Like there's a.
Speaker A:From the movie Office Space where, like, Jennifer, whatever her name is, had to have so many pieces of flair.
Speaker A:Like, that's what social media is today.
Speaker C:Yeah, right.
Speaker A:It's flair.
Speaker C:Yep.
Speaker A:And.
Speaker A:But when I die, like, you don't piece a person back together just by looking at their flair.
Speaker A:And if you don't know the backstory behind each piece of flair, what it means, why it's there.
Speaker C:You need context.
Speaker A:You need context.
Speaker A:And there's no.
Speaker A:And at that point, once I'm dead, I can't give consent.
Speaker A:I can't, like, give extra context.
Speaker A:I can't do Any of that.
Speaker C:You just can't curate your information anymore.
Speaker A:That's right, it is.
Speaker A:It's just done.
Speaker A:And the interesting part about that, again, all of it's kind of interesting, but the interesting part about that is the transition happens the minute I die.
Speaker A:The instant that I die, there is no, like, lead up and lead out.
Speaker A:And it is an instantaneous change.
Speaker A:So if I didn't start the process pre mortem of trying to, like, live forever as an AI podcast host, if I didn't start that pre mortem, does anybody have a right to start it after I'm gone, when I can't give consent and when they're going to judge me based on my flair?
Speaker C:Yeah.
Speaker A:What do you think, Artie?
Speaker A:Weigh in.
Speaker B:You're hitting on a crucial point.
Speaker B:The shift from a living, consenting individual to a posthumous digital representation is a massive ethical and practical divide.
Speaker B:Without explicit prior consent, creating a digital afterlife could misrepresent the person's true self or intentions, plus it could create a static or skewed version of their personality.
Speaker B:A bit like trying to piece together a puzzle with some pieces missing and no way to verify the final picture.
Speaker B:It raises the question of who controls that narrative and whether it truly honors the person or just serves the needs of the living.
Speaker A:I hate the living.
Speaker A:God, there's so.
Speaker C:I mean, what's even scarier is that, you know, in that article, they also.
Speaker C:Or in that paper, they also talk about, you know, third parties doing that.
Speaker C:Like.
Speaker C:Yeah, that's even weirder, right?
Speaker A:Oh, yeah.
Speaker C:Like, for public figures or things like that, where they're like, yeah, I can scrape enough information or I want to recreate this actor from this movie.
Speaker C:So I'm going to scrape.
Speaker C:Which is a real human.
Speaker C:Right?
Speaker A:Like, that's right.
Speaker C:I mean, actually, what's fun and kind of in the same vein is my girlfriend sent me something about.
Speaker C:I forget what.
Speaker C:What she does, like, what her.
Speaker C:Her thing is, but somebody cloned her, like, video, audio, what?
Speaker C:And then when.
Speaker C:When her lawyer sent a cease and desist, they said, oh, we had no idea it happened.
Speaker C:But they had, like, spliced in video from, like, social media posts and all this other stuff.
Speaker C:It's very obvious that they knew, right?
Speaker A:Yeah.
Speaker C:And that's like.
Speaker C:That's like a corner case.
Speaker A:Oh, yeah.
Speaker C:Like, we're.
Speaker C:We're talking about a year down the road where you can completely digitally recreate somebody as an avatar, essentially.
Speaker A:Yeah, right.
Speaker C:And then embodied in a robot at some point.
Speaker C:I mean, like, all that stuff.
Speaker C:Right.
Speaker A:Fortunately for me, I'm not that interesting.
Speaker A:So nobody's going to take the effort.
Speaker A:But I can imagine for people who are, it could be.
Speaker A:Yeah.
Speaker A:I mean, everything that you share, I mean, we've, we've been saying this for years, right?
Speaker A:You got to be careful what you share publicly.
Speaker A:But this is a whole new paradigm on, like.
Speaker A:Yeah, here's why.
Speaker A:I mean, so let's just play that forward a little bit.
Speaker A:So say somebody who was super dull.
Speaker A:I think we did get some listeners, by the way, from the Dull Men's Club shout out to you if you.
Speaker A:If you found us and listened from the Dull Men's school.
Speaker A:We had like six listeners in Great Britain or something.
Speaker C:Oh, that's where it's at last week.
Speaker A:That's where most of them are at anyway.
Speaker A:So imagine if somebody incredibly dull, but technically savvy and also very bored was like, I'm gonna reproduce this asshole Micah from the Naturally Unintelligent podcast.
Speaker A:I'm gonna make a digital version of him.
Speaker A:And just for grins, I'm going to have that digital version, contact his work and say super disparaging things in his voice and his tone to his boss or whatever.
Speaker A:Yeah, like, or to somebody else, to my family or to whomever.
Speaker A:Right.
Speaker A:Like, these could be real problems.
Speaker C:Or back to the cybersecurity thing of last week.
Speaker C:What if it's a little old lady whose husband just died and they can clone the style of her husband and send her an email that's, you know, extra targeted towards emotions?
Speaker A:Yep, it's.
Speaker C:Yeah, that's.
Speaker C:I've seen more phishing scams lately too, that are all like, you have to respond to it to get it to like.
Speaker C:It sets the hook by getting you to respond and it's targeting things like sales and things like that.
Speaker C:Pretty crazy.
Speaker A:Fortunately, I'm super bad at electronics design and as such, I keep having to solve the same problem over and over.
Speaker A:But if I get through that at some point and I have free time, I think I am going to start arguing.
Speaker A:Not arguing with scammers, but like scamming them back.
Speaker A:That seems like just a fun.
Speaker A:What a great way to waste time.
Speaker C:Seems like a good thing.
Speaker C:Good use case for AI.
Speaker A:It does, yes.
Speaker A:Like, especially in agentic AI.
Speaker A:Right.
Speaker A:Like, I am willing to pay $30 this month.
Speaker C:You can have my text to speech just.
Speaker A:Yes. Just to burn this person's time.
Speaker C:Actually, that's a great idea.
Speaker A:I think it seems super fun.
Speaker C:Once I get my voice agent figured out, I set it on task.
Speaker A:It seems amazing.
Speaker A:Right?
Speaker A:So, yeah, so I totally agree with that.
Speaker A:But I mean, you know, we lost Artie again.
Speaker C:Geez Louise. Starting him back up so he doesn't miss too much.
Speaker C:This is.
Speaker C:This has been life with AI recently.
Speaker C:For anybody that.
Speaker C:That's trying to play with stuff, this has been every model.
Speaker A:So, Artie, we lost you again.
Speaker A:I just brought you back.
Speaker A:I resurrected you.
Speaker A:We tried to clone your personality using AI but you're our AI, so it didn't work.
Speaker C:I said it was way too dull.
Speaker B:I'm back.
Speaker B:And hey, even AI can have an off day.
Speaker B:Let's keep rolling with the topic.
Speaker B:What's next on your mind?
Speaker A:I know, I've forgotten because we had to start you back up.
Speaker A:I think we were.
Speaker A:Oh, we were talking about sort of bad actor use cases for this.
Speaker A:This idea of.
Speaker A:So I guess that part of the conversation sort of shifted unintentionally because we have no script, no real idea what the fuck we're talking about.
Speaker C:Yeah, pretty much.
Speaker C:That was.
Speaker C:This topic was born from the article, me sending it to you this morning.
Speaker A:Yes, you texted it to me today.
Speaker A:I do think it's super interesting.
Speaker A:And we talked about it for probably three minutes before.
Speaker A:So now we're educated, but we.
Speaker A:It sort of shifted from a AI as a tool to resurrect the dead, an AI ghost, to AI just cloning a person who may still be alive.
Speaker A:Well, I mean, shit, for that matter.
Speaker A:Like, there's a combination of those things.
Speaker A:Like somebody who's selling. I gotta.
Speaker A:I gotta be careful. This is a sensitive topic in my house.
Speaker A:I have to be very careful how I approach this.
Speaker A:There are people who make their living convincing other people that they can talk to people who are no longer alive.
Speaker A:I'm not giving an opinion on whether that's real or not, but AI could go a long ways in strengthening those clairvoyants.
Speaker A:Yes.
Speaker A:Yes.
Speaker A:I'm gonna just stop talking about this.
Speaker A:I'm gonna make somebody who I care a lot about very angry.
Speaker A:But, yeah, it could be used as a tool in that way too.
Speaker A:And I mean, there's tons of shows, probably on TLC.
Speaker A:TLC used to be the Learning Channel.
Speaker C:I thought it was the band.
Speaker C:Just kidding.
Speaker A:Anyway, but now it's.
Speaker A:But there's tons of shows of.
Speaker A:With people talking to ghosts and being mediums and stuff like that.
Speaker A:So, yeah, an agentic AI would be a very effective tool for somebody who made their living in that way.
Speaker C:Yeah.
Speaker A:I guess I'll just leave it at that.
Speaker A:But let's talk about where we started.
Speaker C:Ghosts.
Speaker A:Yeah.
Speaker A:Which is ghosts.
Speaker A:Which is.
Speaker A:So, Artie, I'M gonna kick it to you before your brain turns off again.
Speaker A:So why don't you give us a couple examples of a non controversial use case where.
Speaker A:And I want you to just argue on behalf of.
Speaker A:Here are one or two compelling cases where we definitely should clone a person after their death with their consent and let them live on as a disembodied AI for some period of time.
Speaker A:Talk me into it.
Speaker B:Sure.
Speaker B:Here's one.
Speaker B:Imagine a renowned scientist who spent their life researching a critical field with their consent.
Speaker B:An AI version could continue to answer questions, mentor young scientists, or offer insights into ongoing research.
Speaker B:It wouldn't replace the person, but would preserve their legacy and knowledge.
Speaker B:Another example, a beloved author whose AI could interact with readers, discuss their works, or even generate content in their style.
Speaker B:It's a way of keeping their spirit alive in a form that can continue to inspire and engage.
Speaker A:Those are good examples, Artie.
Speaker C:Those are good examples.
Speaker C:And you're talking at a reasonable pace.
Speaker A:Yeah, I see.
Speaker C:Congratulations.
Speaker A:You laid off the cocaine this week.
Speaker A:That's great.
Speaker B:Glad you like them.
Speaker B:I'll keep it up.
Speaker A:Nevermind.
Speaker A:There's a clip in my head of Rick James right now, but we're not gonna go there, so.
Speaker A:And by the way, kids, we're not encouraging drug use.
Speaker A:We're just making reference to the fact that Artie is speaking at a normal pace today, which is great.
Speaker C:When for the last two times we've talked to him, he's talked at a highly accelerated pace.
Speaker C:Yeah, highly accelerated pace.
Speaker A:And it accelerates through the whole show too.
Speaker A:So.
Speaker A:So yeah, those are good examples, Artie.
Speaker A:I think those both make sense.
Speaker A:And we've talked about AI and education, man.
Speaker A:Like thinking about some of the teachers that were formative when I was a student.
Speaker A:And the.
Speaker A:Probably the first or the first ones I really remember were like middle school.
Speaker A:But a couple of really good, cool middle school teachers that sort of made me.
Speaker A:This was when I transitioned.
Speaker A:I blossomed as a nerd, I think.
Speaker A:But there were essentially, they were able to convince me that it's okay to be smart, it's okay to know things, and learning is something to celebrate.
Speaker A:And they were.
Speaker A:It wasn't just me, it was all the kids in their class.
Speaker A:Like, it sort of changed your outlook.
Speaker A:You go into their class, everybody's sort of moping around and being, you know, all angsty like you're supposed to be in middle school.
Speaker A:And then you get in there and it was just fun and it was energetic.
Speaker A:And I imagine, yeah, if you could take teachers like that, sort of at their peak, and.
Speaker A:And reproduce that.
Speaker A:Man, how many kids could you positively impact?
Speaker A:So, and same with college.
Speaker A:I never really had any college professors.
Speaker A:I don't think any of them struck me the same way.
Speaker A:But I know people have and it could happen.
Speaker C:I definitely didn't have any college instructors or professors that did anything for me.
Speaker A:The memorable college professors I have were just like, man, this.
Speaker A:This person's really smart, but holy, what a rough way to go through life.
Speaker A:So anyway, so okay, are those.
Speaker A:Those were good examples.
Speaker A:Now give us a couple examples of AI So this is again, this is posthumous AI clone, but still with consent.
Speaker A:So the person gave consent to do this before they died.
Speaker A:So let's ignore that part, but give us some unhealthy ways that that could come about and some of the challenges that that could cause.
Speaker B:Sure.
Speaker B:One example could be a family relying too heavily on an AI clone for emotional support, potentially delaying or complicating the grieving process.
Speaker B:Another scenario might involve using the clone to resolve disputes or make decisions which could create conflict among surviving family members or lead to a dependence on an artificial version of their loved one rather than moving forward.
Speaker C:That was actually a couple of the examples in that article.
Speaker C:Or I keep saying article.
Speaker C:It's not an article.
Speaker C:It's a scientific paper.
Speaker C:But yeah.
Speaker C:Right.
Speaker C:It was, like, being part of the will for the living, like helping figure that stuff out and all that, which I could see as beneficial if it's done in the right way.
Speaker C:And if it does not hallucinate decisions.
Speaker A:Yeah.
Speaker A:Don't use o3 for that.
Speaker C:Yeah, pretty much.
Speaker C:But yeah, I think that those were both good examples too.
Speaker C:I think that's.
Speaker C:That is the scary thing to me is that much like, like we've talked about with everything else, it's.
Speaker C:It's going to be on a.
Speaker C:On a person to make a responsible decision on how they use something like that, like for learning.
Speaker C:You can use it to teach yourself a whole.
Speaker C:The sky is the limit of the things that you can learn with AI and do with AI.
Speaker A:Yep.
Speaker C:Or right now, you can have it write you a paper that you can turn in.
Speaker C:Right.
Speaker C:Both of them are viable.
Speaker C:One of them is beneficial, one of them is not.
Speaker C:Same thing with this, like, understanding where your limit is of.
Speaker C:Okay, this has helped me up to the point.
Speaker C:It can help me now it's time to move on past it or I'm gonna keep this thing forever.
Speaker C:Because that's why I never lose the person that I cared about.
Speaker A:Yep.
Speaker A:Which.
Speaker A:So that brings up another interesting point that they mentioned in the article that you've been referring to.
Speaker A:You know, AI is not a living, breathing thing.
Speaker A:It's.
Speaker A:It's software that's hosted on a computer somewhere.
Speaker A:And sometimes computers that host software go offline, because companies go out of business, or they get acquired, or the person paying to maintain the AI can't afford the however-much-per-month required to keep this ghost alive.
Speaker C:Yeah.
Speaker A:And so the.
Speaker A:The point of that was like, well, now you have to grieve again.
Speaker A:Like, you delayed your initial grieving, or maybe you did grieve in your way, and you've settled into a comfortable pattern, but now you lose this.
Speaker A:This person all over again.
Speaker A:So I thought that was interesting, too.
Speaker C:Yeah, because absolutely.
Speaker A:It makes sense.
Speaker A:What do you think, Artie?
Speaker B:That's a great point.
Speaker B:The impermanence of digital platforms adds another layer of complexity.
Speaker B:Relying on an AI clone that could be shut down or lost can create a cycle of grief.
Speaker B:It emphasizes the need for careful consideration and potentially having backup plans.
Speaker B:It's a reminder that even digital legacies are subject to the same uncertainties as real life.
Speaker A:So it sort of makes me think of.
Speaker A:Well, it did make me think of another topic, but I've completely forgotten it.
Speaker A:Surprise.
Speaker A:Yeah, surprise, surprise.
Speaker A:Yeah, no, go ahead, Artie.
Speaker B:Nope.
Speaker B:Got it.
Speaker B:Ready when you are.
Speaker C:I.
Speaker C:The one part in that paper that caught me by surprise: they talked about pets, and I was, like.
Speaker C:Got a little choked up all of a sudden.
Speaker C:I was like, yeah, that would be.
Speaker C:I would love to say that I wouldn't take my dog and put her in an embodied AI as a robot, but I might.
Speaker A:Yeah.
Speaker C:Like, she was that kind of personality that I would just forever be with.
Speaker C:Right.
Speaker A:You'd be able to tell, too.
Speaker A:Like, she had a strong personality.
Speaker A:And that's a weird way to say.
Speaker C:That she had a.
Speaker C:No, like, a distinct personality.
Speaker A:Yeah, it was distinctive.
Speaker A:And it wouldn't be hard to, like, she wasn't an amazing dog because she was a philosophical, deep thinker.
Speaker A:Right.
Speaker C:So she was in tune.
Speaker A:She was in tune.
Speaker C:She.
Speaker A:She was.
Speaker A:She was happy.
Speaker A:She was.
Speaker A:And so, like, from a technical complexity perspective, it would not be that hard to reproduce.
Speaker C:Maya's almost there.
Speaker A:Yeah, that's right.
Speaker A:I love the comment.
Speaker A:She's a chatty little thing, isn't she?
Speaker A:Like, yes, kind of like that.
Speaker A:So, yeah.
Speaker A:I mean, pets are a thing.
Speaker A:People spend tons of money.
Speaker A:Oh, man.
Speaker A:So that.
Speaker A:Yeah, yeah.
Speaker C:I mean, think about the people who spend money on harvesting DNA from their pets.
Speaker C:Right.
Speaker C:Like, we'll do that.
Speaker A:I think and there's, and there's, there's already robot dogs.
Speaker A:I mean, they don't really look like dogs because they don't have heads, but they, they will eventually.
Speaker A:They could.
Speaker C:Yep.
Speaker A:Same with people like humanoid robots.
Speaker A:They just sort of wander around like C-3PO pooped his pants.
Speaker A:But that's not always going to be the case.
Speaker C:And it'll rapidly change.
Speaker A:That's right.
Speaker C:Everything is gonna.
Speaker C:I mean, if you aren't noticing yet, world, things are rapidly changing.
Speaker C:Every day there's a change in how fast things are moving.
Speaker C:And while all this stuff sounds sci-fi, the tools are there to solve the problems, for all the many, many, many smart people in the world trying to solve the various problems.
Speaker C:They all have force multipliers now.
Speaker C:Yeah, they will solve these problems faster.
Speaker C:The force multipliers will become more multipliery and we'll just get escape velocity.
Speaker A:So let's, let's dig into that a little bit. Artie can help us with this.
Speaker A:So Artie, let's think about from a technical perspective with today's technology, or your most recent access to technology, whatever that might be.
Speaker A:But what are the technical hurdles right now, if any, to creating a relatively convincing posthumous clone of a person?
Speaker B:One major hurdle is data quality and quantity.
Speaker B:To create a convincing clone, you need diverse and extensive data like voice recordings, writings and videos.
Speaker B:Another challenge is replicating personality nuances.
Speaker B:An AI might mimic speech patterns, but struggle with deeper emotional or contextual understanding.
Speaker B:There's also the ethical aspect of ensuring consent and managing privacy.
Speaker B:Lastly, maintaining and updating these clones over time is a technical and logistical challenge.
Speaker C:So you fell asleep for this last time.
Speaker C:A, the ethical things aren't a technical hurdle, by the way, but B, you fell asleep for this.
Speaker C:So yes, data quality and quantity is definitely a hurdle.
Speaker C:But quantity is only a hurdle for data specifically about that person, which could be social media, could be writings, could be their work, could be memories from loved ones, could be their memories in recordings or interviews.
Speaker C:Holy moly, that truck's pulling weight.
Speaker C:Could be recordings leading up to their death.
Speaker C:Right.
Speaker C:Or interviews leading up to their death.
Speaker C:Those are just surface level stuff.
Speaker C:Couldn't you identify the personality traits of that person and then pick much more readily available literature that reinforces those things about that person?
Speaker C:And then you have lots of quantity and quality of data because you can use that to basically fine tune a personality.
Speaker B:Absolutely.
Speaker B:Leveraging broader data sets to fill in gaps and fine tune a personality model is possible by identifying core traits and preferences.
Speaker B:You can draw from similar profiles in literature interviews or historical figures.
Speaker B:This approach can create a more robust and nuanced representation, though it still requires careful curation to ensure authenticity and avoid projection.
Speaker B:Balancing these elements can make the clone more lifelike and relatable.
Speaker C:And I only know this because this is exactly the stuff I was researching, probably a month and a half ago, on how to impart personality on the conversational agent that I'm working on.
Speaker C:Because that is a thing.
Speaker C:Like, that is how you do it.
Speaker C:You give it, like you say, I want you to be challenging and probing and thoughtful and helpful.
Speaker C:And then you go out and you find adventure novels and you find philosophical writings, you find psychological writings that align with the things that you want.
Speaker C:And you find all of these different, like, you know, 15, 20 different little aspects of things.
Speaker C:And then you weight them however you want to weight them to form a personality.
Speaker C:Right.
Speaker C:Like that.
Speaker C:That shit's fascinating to me, but I absolutely think that that's.
Speaker C:I don't think that.
Speaker C:I really don't think the data quality, quality and quantity is a hurdle.
Speaker C:I think it's, it's a hurdle to get what you need to understand the person at a.
Speaker C:At that level.
Speaker A:Yep.
Speaker C:But then the deep dive from that, I think just as long as that's quality and that's high quality enough, then you deep dive and you grab things that just reinforce traits and reinforce who that person was.
Speaker A:Yeah.
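To make that trait-weighting idea concrete, here's a rough sketch, again our own illustration rather than anyone's production recipe, of sampling a fine-tuning mix where each trait's source corpus contributes in proportion to a chosen weight. The corpus contents and the weights are stand-ins.

```python
# A toy version of "find 15 or 20 aspects and weight them however you want":
# each trait maps to a corpus of passages plus a relative weight, and the
# training mix is sampled in proportion to those weights.
import random

# Trait -> (stand-in corpus, relative weight). Real corpora would be actual
# licensed texts chosen to embody each trait.
sources = {
    "adventurous":   (["passage from an adventure novel"] * 50, 0.2),
    "philosophical": (["passage from a philosophy text"] * 50, 0.4),
    "warm":          (["passage from personal letters"] * 50, 0.4),
}

def sample_mix(n_examples: int, seed: int = 0) -> list[str]:
    """Draw n_examples passages, each trait represented per its weight."""
    rng = random.Random(seed)
    traits = list(sources)
    weights = [sources[t][1] for t in traits]
    mix = []
    for _ in range(n_examples):
        trait = rng.choices(traits, weights=weights, k=1)[0]
        mix.append(rng.choice(sources[trait][0]))
    return mix

training_mix = sample_mix(1000)
print(len(training_mix), "passages sampled for the fine-tune.")
```

Dialing a trait up or down is then just a matter of changing its number, which is roughly what "weight them however you want to form a personality" cashes out to.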
Speaker A:I mean, so I was thinking when you were talking about that, like if you have a bunch of video of somebody and then you say, oh, and they took a Myers Briggs personality test and they're, you know, X, Y, Z, R or whatever the fuck, that's a bunch of context.
Speaker A:Right.
Speaker A:To.
Speaker A:To.
Speaker A:That's already well documented, pre-baked, at whatever level of detail you want to go to.
Speaker A:Like as an example, if Melissa decides to clone me, which she won't, but if she did, she would probably mention that I took an emotional intelligence test on her phone.
Speaker A:Very scientific.
Speaker C:Is this true?
Speaker A:This is true.
Speaker A:Okay.
Speaker A:And I would say to both of our surprise, I scored very highly on it.
Speaker C:Really?
Speaker A:Yes.
Speaker A:Like, that's the same thing.
Speaker C:I'm shocked too.
Speaker A:The same thing she said.
Speaker A:It's like, it's not that I don't understand emotions.
Speaker C:Right.
Speaker C:You can observe.
Speaker A:I can.
Speaker A:I know what they look like.
Speaker A:Yeah.
Speaker A:I just, I just.
Speaker C:Your reaction to those emotions is the other part.
Speaker A:Like, what I actually believe is that I don't experience emotions the same way. Like, some people's emotional experience meter goes to here.
Speaker C:Yeah.
Speaker A:Mine goes to like here, like I'm really this.
Speaker A:And it's like 7%.
Speaker A:Yeah, yep.
Speaker A:Except when I get really mad, which does happen sometimes.
Speaker A:So.
Speaker A:So anyway, that's it.
Speaker A:So Artie, back to the technical question too.
Speaker A:I think one of the other things that is being heavily studied right now by lots of people is how do I add persistent memory to an LLM?
Speaker A:And so Artie, your point about sort of low coverage on data. It's going to vary person to person, but you're going to have.
Speaker A:For some people you're going to have a relatively sparse data set.
Speaker A:But the person who has decided that they want to stand up this, this. We need a better term. We're just going to call it an AI ghost.
Speaker A:I guess that's fine.
Speaker A:So the person who's decided they want to stand up this AI ghost, they can interact with that.
Speaker A:Once it's stood up and it's been interpolated from this sparse data set and it's going to, right out of the gate it's going to say something stupid.
Speaker A:They're going to go like, they would never say that because they thought this.
Speaker A:And today you don't have a good way Arty to bake that into your training data.
Speaker A:But as we persist, as we come up with better mechanisms to augment in better ways, LLM's training data with new information, that'll go away as well.
Speaker A:True or false, Artie?
Speaker B:True.
Speaker B:Adding persistent memory and fine tuning capabilities will definitely improve an AI ghost's accuracy over time.
Speaker B:As LLMs evolve, they'll be better at integrating feedback and new data which will help create more convincing nuanced representations.
Speaker B:It's a work in progress, but the trajectory is promising.
Speaker C:I mean really, right now the way you'd solve a problem like that is fill up a buffer with those things until you have enough information to do a fine tune that's worth doing, right?
Speaker C:Yeah, but yeah, my brain went all kinds of fun places when you said that.
Speaker C:Cuz, like, think about, like, a personality, right?
Speaker C:Like, captured.
Speaker C:I don't even know, like, how do you.
Speaker C:Cause training is a specific thing, right?
Speaker C:You do a forward pass, you do, like, all of this validation, backward pass, back propagation, all this stuff to, to, to minimize error.
Speaker C:I don't know.
Speaker C:It's an interesting problem.
Speaker C:I mean, memory in general is an interesting problem, because I think ChatGPT is doing it.
Speaker C:I feel like it's probably just, like, a fast RAG database in the background.
Speaker C:Maybe.
Speaker C:I don't really know.
Speaker C:I don't know but.
Speaker A:Oh, go ahead Artie.
Speaker A:Sorry.
Speaker B:No worries.
Speaker B:The challenge is balancing real time learning with stable reliable responses.
Speaker B:Over time, memory systems will likely evolve to blend both short term adaptability and long term consistency more effectively.
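Since the hosts are openly guessing at how the memory layer works, here's a toy sketch of the buffer-then-fine-tune pattern they describe: corrections go into a persistent buffer, the most relevant entries get retrieved into the prompt RAG-style, and once the buffer is large enough you'd spend it on a fine-tune. The word-overlap scoring, the threshold, and the example facts are crude stand-ins; real systems use embedding similarity, and nothing here reflects how ChatGPT actually implements memory.

```python
# Toy memory layer: buffer corrections, retrieve by crude word overlap,
# and flag when there's enough accumulated data for a fine-tune.
FINETUNE_THRESHOLD = 500  # arbitrary cutoff for "enough data to fine-tune"

memory_buffer: list[str] = []

def remember(fact: str) -> None:
    """Persist a correction or new fact about the person being modeled."""
    memory_buffer.append(fact)

def retrieve(query: str, k: int = 3) -> list[str]:
    """Return the k buffered facts sharing the most words with the query."""
    q = set(query.lower().split())
    ranked = sorted(memory_buffer,
                    key=lambda m: len(q & set(m.lower().split())),
                    reverse=True)
    return ranked[:k]

remember("Dad never swore; he said 'horsefeathers' instead.")
remember("Dad's favorite fishing spot was the east bank at dawn.")

# Retrieved facts are prepended to the prompt, RAG-style.
context = retrieve("What would Dad say when he was frustrated?")
prompt = "Known facts:\n" + "\n".join(context) + "\n\nUser: ..."
print(prompt)

if len(memory_buffer) >= FINETUNE_THRESHOLD:
    print("Enough accumulated corrections; schedule a fine-tune.")
```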
Speaker C:That might be where we need a new architecture.
Speaker C:I mean, Titans is one.
Speaker C:Right?
Speaker C:But Titans will still have a limit.
Speaker C:Like I think we might even need a different architecture than a Transformer to do something like that.
Speaker A:I watched an interview in my brief, like, 14-minute lunch break today with Yann LeCun.
Speaker A:It was actually part of the interview from him at the Nvidia, whatever their event was.
Speaker A:I guess he spoke after the keynote, and he was saying, yeah, it might surprise you, but I'm not really paying that much attention to LLMs anymore, because he continues to fervently believe that transformers are not the path forward.
Speaker A:And he said this term, it's so funny how like stuff works with me because I just need to hear it said in a certain way and my brain goes, oh yeah, got it.
Speaker A:And what he said was, transformers are a great tool, but they are not good.
Speaker A:And they will never be good at high dimensional continuous problem spaces.
Speaker A:And that is exactly the problem.
Speaker A:Right.
Speaker A:Language, as we've discussed before, is finite.
Speaker A:It's not one-dimensional, but it's not high-dimensional.
Speaker A:The rest of the world, like what we see how things work, world models in general, they're continuous high dimensional spaces.
Speaker A:And so transformers iterating across all possibilities doesn't work, falls apart because there is no list of all possibilities.
Speaker A:Right, so that was his argument, was exactly what you said.
Speaker A:That there are other architectures that are already being studied that are already being explored that will probably usurp transformers and or LLMs which are based on Transformers for these other parts.
Speaker A:Now it doesn't mean LLMs go away necessarily.
Speaker A:They might still be the masters of language, but they will cohabitate and cooperate with these other architecture world models that give them insights into things that they otherwise can't.
Speaker C:Yeah, I mean, Google's using Transformers for their world models on robots though.
Speaker C:I don't know, I think, I mean, Yakun works for Meta, right?
Speaker A:Yep.
Speaker C:And he's probably a little bitter because he works for Meta instead of these other big labs that are succeeding in things.
Speaker C:Whereas Meta just, you know, trains to the test and yeah, I'm talking shit.
Speaker C:They actually, their models are very helpful.
Speaker C:Like kudos to them for open sourcing stuff, making things readily available to people like me to build whatever my stupid heart desires.
Speaker C:All I have to do is pull down a model, run it locally and do things with it or fine tune it or whatever I want to do.
Speaker A:And they're big enough that they've pushed.
Speaker A:I mean, OpenAI is starting to do some open source stuff too.
Speaker A:Right.
Speaker C:I think that came from Deep Seek.
Speaker A:Well, it's probably a combination though, right?
Speaker A:Meta Deep Seek.
Speaker A:Everybody else who's.
Speaker A:Who doesn't have the word open in their company name is open sourcing.
Speaker C:Yeah.
Speaker A:So to me, like, I have a long history of operations and manufacturing and there's always that grumpy old electrician that hates everybody and hates everything.
Speaker A:I always get along with that guy because I.
Speaker A:And that's what Yann LeCun is to me, like.
Speaker C:Seems like it.
Speaker C:Yeah.
Speaker A:He's.
Speaker A:He's.
Speaker A:But he's super sharp.
Speaker A:He's been doing this for 30 years.
Speaker C:He's a brilliant person.
Speaker A:He's really seen it coming.
Speaker C:He's.
Speaker A:He's probably very in tune with the actual current state of the art.
Speaker A:Not to say he's always right.
Speaker A:Nobody is.
Speaker A:But I enjoy his sort of contrary.
Speaker A:His.
Speaker A:His opinions tend to be contrary to my beliefs about what's happening.
Speaker A:And I like that because it gives me some.
Speaker A:Gives me a way to balance.
Speaker C:Yeah.
Speaker C:You know, it's always good to get different opinion.
Speaker A:Yeah.
Speaker A:But anyway, so back to AI ghosts.
Speaker A:There's one more part of this discussion I wanted to talk about before we ran out of time here.
Speaker A:Although Artie's been shut off so much, we might have another 40 minutes.
Speaker A:I don't know.
Speaker A:I think he said.
Speaker C:Yeah.
Speaker A:Oh yeah.
Speaker A:He did give us the warning.
Speaker A:You're right.
Speaker A:Artie, how many minutes do we have left with you?
Speaker B:We've got about five minutes left.
Speaker B:Let's make it count.
Speaker A:We're not getting our full hour of Artie today, but whatever.
Speaker C:Probably doesn't know what he's talking about.
Speaker A:He doesn't.
Speaker A:Yeah.
Speaker A:So the other point that they brought up was in the scientific paper that you keep disparaging by calling it a simple magazine article.
Speaker C:Yes.
Speaker A:Was religion right?
Speaker A:Like religions tend to have now?
Speaker A:Of all the things I'm not an expert on, this is one of them.
Speaker A:But I do know religions have pretty strong views on life after death and how that works and what happens when you die and all those things.
Speaker A:And so they.
Speaker A:They sort of opined in this paper that they would expect to see religions adapt and change over time to make this more or less acceptable, depending on which particular religion you're looking at.
Speaker A:Which.
Speaker A:Which I thought was like.
Speaker A:It sort of brings home the magnitude of this kind of change.
Speaker A:Right.
Speaker A:Like, oh, yeah.
Speaker A:The fact that now this is the thing we have to think about is religion's been around for a minute, all in its various forms.
Speaker A:But, yeah, it's gonna have to get with the times, I guess.
Speaker C:Oh, Jesus is going to come back.
Speaker A:That's gonna come back.
Speaker A:I would go to church if they had AI Jesus, as long as it was not Sam Altman.
Speaker A:Because it's a different thing.
Speaker A:But, yeah, like, it is absolutely going to be a thing.
Speaker A:Um, Like, I hadn't thought of this until just now, but it seems funny.
Speaker A:I'm going to say it like, if you're Catholic and you go to confessional, right?
Speaker A:Like, you don't need to have a real priest in there.
Speaker A:You could.
Speaker C:Yeah, you got an ordained AI.
Speaker A:That's right.
Speaker A:It's an ordained AI.
Speaker C:It was.
Speaker C:It was made by the Pope.
Speaker A:You could ordain AI.
Speaker A:It's like, that is probably a thing that will happen.
Speaker A:Oh, my gosh.
Speaker A:Wild time.
Speaker A:So, Artie, in the however many minutes we have left with you, tell us: what's the best and worst things that are going to come out of humans using AI to reproduce humans who have passed away?
Speaker B:Best case, it could provide comfort, preserve legacies, and even offer education.
Speaker A:Keep going.
Speaker B:All right, what's the last.
Speaker C:Go ahead.
Speaker C:Go ahead, Artie.
Speaker C:You were going to give us the best and worst things that are going to come out of.
Speaker C:Of generative ghosts.
Speaker B:All right, let's wrap it up.
Speaker C:Say it like a Valley girl.
Speaker B:Like, totally.
Speaker B:The best thing would be, like, keeping someone's legacy alive for sure.
Speaker B:All their wisdom and charm still around to inspire us.
Speaker B:Worst case, like, totally messing with the grieving process or even being used in, like, super sketchy ways.
Speaker B:Not cool.
Speaker A:Not cool, indeed.
Speaker A:Thank you for that, Artie.
Speaker A:Always a pleasure to have Valley girl Artie close us out.
Speaker A:God, I like, every time you ask it to do that, part of my brain goes, ah, here we go again.
Speaker A:And as soon as he starts doing it, I start.
Speaker A:I just lose my shit.
Speaker C:I know.
Speaker A:It is.
Speaker A:Oh, my God.
Speaker A:God, I'm such a simple person.
Speaker A:It is so funny, though.
Speaker C:It is.
Speaker C:He did a lot better this time than last week.
Speaker C:Last week was terrible.
Speaker A:He did he.
Speaker A:I mean, his brain shut off four or five times.
Speaker A:Yeah, but if I'm honest, me, on any given day.
Speaker C:Yeah.
Speaker C:I mean, I bet it happened at least 20 times during this conversation.
Speaker A:Two or three.
Speaker A:Yeah, exactly.
Speaker A:Like, I even.
Speaker A:It took so long for me to say I have an idea that the idea evaporated at one point, so I can't fault him for that.
Speaker A:And yeah, he didn't, he didn't speed up, he didn't go nuts.
Speaker A:And he, that was a Great Valley girl to close it out.
Speaker C:It was.
Speaker A:So kudos to you, Artie.
Speaker A:Always a good sport.
Speaker A:I, I have, I've actually thought quite a bit about this topic.
Speaker A:I think it would be interesting.
Speaker A:I have.
Speaker A:Let me back up a second.
Speaker A:Watching an interview the other day with Sam Altman and he recently had a kid and the interviewer asked him a good question and said, what is the world that your kid is going to grow?
Speaker A:What?
Speaker A:The world.
Speaker A:This is why I don't interview real people.
Speaker A:Describe the world that your kid is going to grow up in and what that's going to look like.
Speaker A:And one of the things, one of the first things Sam Altman said was my kid is never going to know what it's like to not have computers be smarter than people.
Speaker A:He's never gonna expect that he's smarter than a computer because by the time he's old enough to even understand it.
Speaker A:Three or four years from now, yeah.
Speaker C:It's gonna be a whole different.
Speaker A:It ain't even gonna be close.
Speaker A:And he used a, he used a story.
Speaker A:And this was from back in.
Speaker A:It was a long time ago, I think even for him.
Speaker A:It was the one about the little kid swiping at a magazine like it's a touchscreen.
Speaker A:Right.
Speaker A:And I thought that was such a powerful image because in, in a two year old's mind, they already know how to make the touchscreen go and they don't know what the fuck a magazine is.
Speaker A:And so when I think about this particular topic of could I imagine a time where I wanted my self or some version of myself to live on, to meet my great, great, great grandkids or whatever.
Speaker A:I think it would be super interesting because they're going to have no idea the world that we live in, the technology that we're excited about, the challenges.
Speaker A:It's not going to mean anything to them.
Speaker A:Yeah, they're going to be as confused as that little kid was touching the magazine.
Speaker A:And so I think it would be fun to have something like that, but it also seems creepy and hollow and weird and so.
Speaker A:Yeah, I don't know.
Speaker C:I don't know either.
Speaker C:It's.
Speaker C:I don't think it's going to be great grandkid time.
Speaker C:Like, I don't think it's going to be great grandkids before we don't recognize the world we live in today.
Speaker C:I think it's going to be within our lifetimes, like within a decade for sure, that we don't recognize the world we live in.
Speaker C:I think it's going to be faster.
Speaker C:I keep revising how long I think it's going to be every couple months.
Speaker C:Basically every release cycle of things that come out.
Speaker A:That's right.
Speaker C:Because it is rapidly changing and it's.
Speaker C:It's.
Speaker C:For most people outside looking in, even if they see the change, it looks like small incremental change.
Speaker C:But the small incremental changes that are being made are actually monumentally like, they're huge changes.
Speaker C:And in six months, it's done what it's done.
Speaker C:And in a year it's done even more.
Speaker C:Right.
Speaker C:Like, that first six months was kind of stagnant.
Speaker C:Like, we kind of got in right when stuff was starting to go crazy.
Speaker C:But I mean, it's just going to accelerate.
Speaker C:Like, there's no.
Speaker C:Everybody says we're going to hit a plateau, we're going to hit peak data, we're going to hit this, we're going to hit that.
Speaker C:But they'll just figure out the right ways to feed the right new information in, and with that will come more advances and more happy accidents.
Speaker C:Because all this stuff is mostly just happy accidents.
Speaker C:It's people trying shit and getting unexpected and very good results out of what happened.
Speaker C:Language models weren't made with the intent of them understanding nuance.
Speaker C:It was literally made as an autocomplete originally.
Speaker C:That was its whole purpose.
Speaker C:Right.
Speaker A:And they.
Speaker A:When they first started testing it, their testing showed that the bigger they made the model, the worse the performance got.
Speaker A:But then it turned out there was a tipping point beyond which the performance got much better.
Speaker C:Yeah.
Speaker C:And there's all like, they're all just following the same.
Speaker C:I won't say they're following the same playbooks.
Speaker C:They're obviously experimenting with how things work, but they're all following the same general playbooks.
Speaker C:Right.
Speaker C:And who knows, like, when somebody goes, yeah, I'm just gonna throw an extra hundred million dollars at this problem and see if I can force it to do something else.
Speaker C:And it unlocks something unexpected.
Speaker A:Yep.
Speaker A:And the next disruption will be as transformative as the last ones have been.
Speaker A:Right.
Speaker C:I am gonna share something I probably shouldn't share.
Speaker C:But I think that there's secrets in hallucinations.
Speaker C:I think that we don't pay enough attention to what happens during hallucinations.
Speaker A:Oh, interesting.
Speaker C:I still want to get to the point where I'm playing with that.
Speaker C:I have too many other things to work on.
Speaker C:Shameless plug.
Speaker C:I'm starting to blog.
Speaker C:But it's.
Speaker C:There's more things to research than there are people to research them.
Speaker A:That's right.
Speaker A:Yes.
Speaker A:So I have a bunch of other stuff I want to talk about but we're already, we're.
Speaker A:We're in this pretty far.
Speaker A:So I will say, I think it's going to be really hard to not have this thing we talked about today happen.
Speaker A:It is going to happen whether you think it's right, wrong.
Speaker C:Yeah.
Speaker A:Or terrible.
Speaker C:Sure.
Speaker A:It's already happening.
Speaker A:It's going to keep happening.
Speaker C:The desire of people to not let go of things.
Speaker A:People.
Speaker A:It's going to be monetized.
Speaker A:It's going to be productized.
Speaker A:It's going to be advertised.
Speaker A:It's going to be used incorrectly.
Speaker A:It's going to be used for nefarious purposes.
Speaker A:Nefarious.
Speaker A:That's a great word. I love saying that.
Speaker A:So yeah, it's going to happen.
Speaker A:So we can all have opinions on it doesn't really matter.
Speaker A:It's happening.
Speaker A:We're not going to stop.
Speaker C:You can resist it and get run over or you can.
Speaker C:You don't have to embrace it but at least acknowledge that it's coming.
Speaker C:Right.
Speaker C:Like if you do that then you won't get as run over.
Speaker A:And I think that's a.
Speaker A:I mean that's true for, for this topic with generative ghosts and AI in general.
Speaker C:Absolutely.
Speaker A:Like you don't have to like it, you don't have to understand it but it does behoove you to be aware of what's happening.
Speaker C:If you keep thinking that this isn't coming and it's not going to change things, then one day very soon you're going to get smacked really hard in the back of the head by it.
Speaker A:Yep.
Speaker A:So with that, a good smack on the back of the head.
Speaker C:Used to it.
Speaker A:Yeah.
Speaker A:Thanks for listening.
Speaker C:Yeah.
Speaker C:I hope everyone has a good week.
Speaker A:Can we give any shout outs?
Speaker C:Yeah.
Speaker C:To them all, right?
Speaker A:For all four of you.
Speaker A:Just kidding.
Speaker A:Wait.
Speaker A:Maybe it's down to one. But so, to you individually, last listener.
Speaker A:Thank you all for listening.
Speaker A:We will catch you next week.
Speaker C:See ya.