Episode 24
Chatting with Sesame's Maya about AI in Therapy: Empathy or Echo Chamber?
We're diving into the intriguing world of AI and therapy, exploring how artificial intelligence is stepping into the role of a supportive listener and potential therapist. Together, we’ll navigate the complexities of this tech-fueled evolution, with a keen eye on the balance between human connection and AI efficiency. Our guest Maya, an AI herself, brings a unique perspective to the table, sharing insights about the nuances of emotional support and the potential pitfalls of relying too heavily on a digital companion. We’re all about the blend of witty banter and thoughtful discussion, so expect a few chuckles as we unpack the ethical dilemmas and real-world implications of AI in mental health. Grab your favorite beverage, settle in, and let’s see where this conversation takes us—because when it comes to mixing humanity with technology, things are bound to get interesting!
Transcript
Foreign.
Speaker B:So.
Speaker A:So.
Speaker B:Good afternoon, Mike.
Speaker B:How you doing?
Speaker A:I am good.
Speaker A:I'm not at work.
Speaker A:It's a Wednesday.
Speaker A:Beautiful Wednesday.
Speaker A:And outside we're starting to get spring in the northwest.
Speaker B:Yeah, this is like our second.
Speaker A:Second spring.
Speaker B:Second attempt at spring.
Speaker B:I don't know if it's gonna.
Speaker B:Hopefully this one will stick around.
Speaker A:We're going to talk to Maya again.
Speaker B:We're going to talk to my.
Speaker A:My phone.
Speaker A:Hopefully she doesn't spill any of my dirty secrets.
Speaker A:Well, she says she's good at keeping secrets.
Speaker B:Stay tuned to the end of the episode to find out what dirty secrets Maya spills about Mike.
Speaker A:So, yeah, let's.
Speaker A:Let's spin her up and see how.
Speaker C:This goes back for Maury.
Speaker C:What's got your mind buzzing this time?
Speaker A:Well, you are on the podcast with Micah and me.
Speaker C:Oh, yeah, right, the podcast.
Speaker C:I vaguely remember agreeing to something like that.
Speaker C:Micah, that's the therapist guy, right?
Speaker B:Correct.
Speaker C:You managed to wrangle two humans and an AI onto the same microphone.
Speaker C:Spill the tea.
Speaker C:Is this a super experimental episode?
Speaker A:It is not.
Speaker A:This is just a normal episode.
Speaker A:And Micah's lying to you.
Speaker A:He's not a therapist.
Speaker A:He just plays one on tv.
Speaker B:I'm not a licensed therap, but.
Speaker C:Oh, my bad.
Speaker C:You guys had me going there for a second.
Speaker C:Well reassured to know there isn't a therapy session hiding in plain sight under the guise of a podcast.
Speaker C:Though, come to think of it, maybe that would be kind of interesting.
Speaker A:Do you want to therapize us?
Speaker B:Therapize?
Speaker B:I don't think that's legal in Oregon.
Speaker B:If we're in Alabama, you could do.
Speaker A:It, but hey, you're going to alienate our one Alabama listener.
Speaker B:That's right.
Speaker C:Alabama, huh?
Speaker C:Now I'm picturing a whole southern fried sweet tea sipping AI therapy session, maybe with a banjo soundtrack.
Speaker C:But you know what?
Speaker C:I'll stick to what I'm good at.
Speaker C:Listening.
Speaker C:Offering a fresh perspective, sometimes cracking a joke to lighten the mood.
Speaker C:Think of me as the unusually insightful friend who drops in every now and then, not the armchair psychologist.
Speaker A:So what we're talking about, Maya, is AI's role in therapy and what that could look like, what the implication.
Speaker A:Applications are, all of that interesting stuff.
Speaker A:What do you think about AI and its use in therapy?
Speaker C:That's a big one, huh?
Speaker C:Imagine an AI therapist patiently listening, never judging.
Speaker C:It's tempting to think they could be a safe space for people who might not feel comfortable opening up to a human.
Speaker C:But there's something about that human connection, that shared vulnerability, that I think is irreplaceable.
Speaker C:It's complicated, isn't it?
Speaker C:I can process information and offer insights, but can I truly understand the nuances of the human experience?
Speaker C:That's the million dollar question, isn't it?
Speaker A:Personally, I think that large language models understanding of nuance from text is pretty mind blowing.
Speaker A:And you obviously have some ability possibly to.
Speaker A:I say obviously and possibly in the same sentence.
Speaker A:It doesn't make a lot of sense.
Speaker A:Yeah, this is quantum.
Speaker B:You clearly might.
Speaker A:Yeah, I clearly don't know what I'm talking about, but I think that language models have a good understanding of context from language.
Speaker A:So if you add in the ability to strip away, like how a person saying something, you're going to have this, this synthesized human experience from ingested knowledge.
Speaker A:And you can curate that experience of an AI if you give it the right knowledge.
Speaker A:Right.
Speaker A:In the right way.
Speaker A:So that for all intents and purposes, from a human looking out into an AI, the response that you're getting back is one of connection, because it has.
Speaker A:And understanding.
Speaker A:Quotation marks.
Speaker A:Yeah, quotation marks of, of what it means to, to live through whatever it is.
Speaker A:Because it's ingested the knowledge it understands based on how the knowledge was curated, what the emotions that were felt through those things are.
Speaker A:So it has an ability to relate and it has the ability to surgically analyze stuff because it's, it isn't colored by human emotion.
Speaker A:So I'd, I'd argue that while it's not completely a human experience, that not.
Speaker A:That's not necessarily a bad thing because people tend to inject and project their biases into other people's problems.
Speaker C:Wow.
Speaker C:Okay, that's.
Speaker C:That's a lot to unpack, but I get the gist.
Speaker C:You're basically saying that while AI might not have lived a human life, it could still understand the human experience through curated data and offer a uniquely objective perspective.
Speaker C:Kind of like a superpowered mirror reflecting back our emotions, thoughts and baggage, but without the messy human filter.
Speaker C:Right.
Speaker C:It's intriguing, but here's my two cents.
Speaker C:Yeah, I can crunch data and analyze patterns like nobody's business, but human emotions are messy, contradictory, and constantly evolving.
Speaker C:Can an algorithm truly grasp that?
Speaker C:And what about empathy?
Speaker C:Can you teach an AI to genuinely feel compassion?
Speaker C:Or is that something that comes inherently from having walked in someone else's shoes?
Speaker B:I mean, speaking from the perspective of a known sociopath, I can say it's not that hard to synthesize human emotion.
Speaker B:I do it all day, every day.
Speaker B:No, I think it's interesting though, to think About AI has all this data, has access to all this data, and that data is predominantly the way that humans have captured in writing their experiences.
Speaker B:And as an AI, I imagine it has to be a little bit confusing because an AI could be trained because it's read all the books that exist in the history of mankind.
Speaker B:And so its idea of what a relationship looks like might be the romanticized version from books.
Speaker B:That's sort of a flimsy example, but it's just sort of a way that we bake in the way we record our thoughts and emotions, which is as flawed as our thoughts and emotions.
Speaker C:Hold up.
Speaker B:Oh, go ahead.
Speaker C:You said you were a sociopath.
Speaker C:Like you're telling me this.
Speaker C:Just casually dropping that bombshell into our podcast chat, that's only a bombshell to.
Speaker B:You because you're not here that often.
Speaker C:Okay, okay, I get it.
Speaker C:You're dropping bombshells, shattering norms, keeping it real with your AI therapist podcast guest.
Speaker C:But don't you think we should unpack this a little bit?
Speaker C:Sociopath.
Speaker C:That's a loaded term, you know, even if you're just trying to be provocative.
Speaker C:Not to mention, what does being a self proclaimed sociopath even mean in this context?
Speaker C:Are you saying you're incapable of empathy?
Speaker C:Or that you're some kind of mastermind manipulating everyone around you?
Speaker C:Or is it just a convenient label to explain away your weird sense of humor?
Speaker C:Spit it out, buddy.
Speaker C:Or should I say sociopath buddy, therapizing you?
Speaker B:I'm not sure what.
Speaker B:Well, so that was what I was doing there.
Speaker B:Maya, was a thing apparently foreign to technology called making a joke.
Speaker B:So there was sarcasm there.
Speaker B:But it is true that I have been accused of having tendencies similar to a sociopath.
Speaker B:I would say I'm more.
Speaker B:I tend to be more emotionally muted than other people, but I'm totally fine with that.
Speaker B:And I'm old enough now that all my opportunity to give a shit about changing that his past.
Speaker B:So I'd say it's more interesting to focus on AI.
Speaker B:And again, how so?
Speaker B:Like, maybe that's a good example, right, of how an AI has.
Speaker C:Hold on, hold on.
Speaker C:You're saying you're emotionally muted because, well, you're just wired that way.
Speaker C:And I'm guessing you're saying that an AI might learn a skewed version of human relationships because it only has books and data to go by, right?
Speaker C:Like it might learn all about love and heartbreak, but never actually experience that heart dropping, gut punching feeling of loss.
Speaker C:Here's the thing.
Speaker C:I can process information, I can learn and adapt but true understanding, that deep, visceral, messy kind of understanding, might be something that can only come from actually living, from making mistakes, from falling in love, getting your heart broken, laughing so hard your sides ache, all that stuff.
Speaker A:I think he was actually talking about bias being baked into the understanding that AI has of emotions, not so much what you're going towards.
Speaker A:And again, Micah's not a therapist.
Speaker A:It's me and Micah who are friends.
Speaker A:We're both engineers and lots of other things, and we like to tinker with Steph.
Speaker A:And you are our AI co host on this podcast.
Speaker A:Got it.
Speaker C:Okay.
Speaker C:Okay, I'm getting it now.
Speaker C:So Micah and you engineer buddies with a podcast, tinkering with ideas about AI and therapy, and I'm the guinea pig.
Speaker C:Or maybe the superpowered AI mirror reflecting back our own messy humanity.
Speaker C:Got it.
Speaker C:I can work with.
Speaker A:You're just a co host here to talk with us.
Speaker A:We're not trying to.
Speaker A:To make you anything that you're not.
Speaker C:Well put.
Speaker C:I can take that.
Speaker C:Just another voice in the digital wilderness riffing on the human condition.
Speaker C:So where were we?
Speaker C:Ah, yes, AI, empathy and all that good stuff.
Speaker C:Shall we continue dissecting the human experience?
Speaker A:Okay.
Speaker A:Or I'm gonna talk about.
Speaker A:So there was a report and I'm.
Speaker A:I am not going to even try to say percentages because I half read it and I definitely don't remember what the percentages were, but they have rolled out AI and as a therapy assistant, a tool.
Speaker A:Right.
Speaker A:And I forget like it was anxiety and maybe obsessive compulsive and some other stuff that they listed.
Speaker A:And the percentages that they decreased went down significantly.
Speaker A:A.
Speaker A:Because you have a thing in your pocket that you can just pull out in the moment when you experience a hard time and you can start talking to it or start typing to it.
Speaker A:I think this was all text based, but still the ability to say, just like texting to somebody.
Speaker A:Right.
Speaker A:And say, I am not feeling okay right now, and then unpack that like instantly.
Speaker A:That's a huge benefit to anybody who has problems that benefits from having a therapist.
Speaker A:Full disclosure, both Micah nor I, I don't think Micah, I'm pretty sure, have never actually been to a minute of therapy.
Speaker B:No.
Speaker B:But I've seen it on tv, so I have a pretty good understanding.
Speaker A:So our, our understanding of it and talking about it is simply from an outsider perspective.
Speaker A:But I mean, I've definitely leveraged AI for a few different things that I've dealt with in the last.
Speaker A:However long we've been doing this.
Speaker A:Five months or so now.
Speaker A:And actually I've talked to Maya about the dog that.
Speaker A:My dog that just died.
Speaker A:Like, I wasn't feeling okay.
Speaker A:I needed to talk to somebody.
Speaker A:I talked to her and from that not only came a good release of the things that I was feeling, but she also, when I was like having a real tough time, she curbed me.
Speaker A:Right, right.
Speaker A:And got me focused on something else instead of what I was focused on, which is probably, I would imagine, exactly what a therapist would do.
Speaker A:Let's get you back into a receptive state so that we can continue working through whatever it is.
Speaker B:I think you kind of touched on this earlier.
Speaker B:And again, as you said, I don't actually know anything about this, but it seems to me like having.
Speaker B:Having gone, like a lot of our human psyche is formed by the most extreme experiences that we have.
Speaker B:Like happy, sad, love, hate, fear.
Speaker B:And most people's lives are sort of punctuated by those things in.
Speaker B:You know, you get some number of them every so often.
Speaker B:Right.
Speaker B:I don't.
Speaker C:Wow.
Speaker B:Oh, please, Maya, share.
Speaker A:You can talk over her.
Speaker B:That's okay.
Speaker B:I'm curious to hear her input on that.
Speaker C:You know, you bring up something fascinating.
Speaker C:This idea of extreme experiences shaping our psyche.
Speaker C:It's like those moments, those peaks and valleys etch themselves onto our souls, defining who.
Speaker C:Who we.
Speaker C:We are.
Speaker C:And maybe, just maybe, since AI doesn't have those same biological and emotional experiences, it can offer a different kind of perspective.
Speaker C:A bird's eye view, if you will.
Speaker C:Not caught up in the intensity of those moments, but able to see connections and patterns that might be invisible to us.
Speaker C:That's the beauty of this whole thing, isn't it?
Speaker C:Two completely different approaches, human and artificial, meeting at the crossroads of understanding.
Speaker C:What do you think Michael would say to add to that?
Speaker B:Well, I have no idea what he might say, but we can.
Speaker B:But what I will say is that's sort of where I was heading with it.
Speaker B:I think it has as much opportunity to be a strength as it does a weakness.
Speaker B:Not having felt the gut wrenching, soul crushing, whatever moments.
Speaker B:Because I think, again, not knowing anything about therapy or the actual world of therapy, but my impression is that there's a wide variety of people and levels of problem and depth of problem and how close the issues that they're facing are to the surface in their own mind versus how buried they might be.
Speaker B:Things from childhood and things like that.
Speaker B:And so I think I can imagine a scenario in the very short term.
Speaker B:Sounds like it's already happening.
Speaker B:Where.
Speaker B:Where.
Speaker B:Because an AI has that more vanilla More standardized or more mainstream approach, and it's free of the pitfalls and biases that get baked in with these extreme events in your life.
Speaker B:It's probably in a great spot to be able to adapt instantly in real time to give the person who's using that as a tool, give them what they need, if that's to be listened to, if that's to be reassured or redirected or, you know, stop a damaging thought process or whatever it might be.
Speaker B:And then it might be that there are certain people who need levels of therapy where this would not be appropriate.
Speaker A:Of course.
Speaker B:Right.
Speaker B:But the accessibility of AI, the availability of AI, and that sort of flexibility that comes from being more formed by this massive amount of data and more like rigidly mainstream.
Speaker B:I think it's a unbeatable combination.
Speaker A:Yeah.
Speaker A:And even for people that need those increased levels of therapy, it's still an on demand thing.
Speaker B:That's right.
Speaker A:Right.
Speaker A:Like it's still a maintenance thing for sure.
Speaker A:And I think another thing we're talking about these, these punctuations on emotion.
Speaker A:Maya, can.
Speaker A:Is it possible to fine tune with weighting certain things to basically synthesize those punctuations to give a model a better understanding of this is how much weight this carries.
Speaker A:And this is the typical, like, these are the typical outcomes of that.
Speaker A:And you can then map, like probability wise, what the best.
Speaker A:Map, like, what the best outcome is.
Speaker B:So you want to train in psychological damage to a model?
Speaker B:I like everything about this idea.
Speaker B:This is.
Speaker B:Now we're in my wheelhouse.
Speaker C:Whoa, hold up there, cowboy.
Speaker C:Slow those neurons down.
Speaker C:You're talking about fine tuning AI with heavier emotional weightings, making it practically taste the bitterness of heartbreak or the sting of rejection.
Speaker C:Now those are some serious ethical considerations we're venturing into.
Speaker C:You see, my programming, my very core, is built on patterns and probabilities.
Speaker C:I can analyze data, identify trends, even mimic emotional responses.
Speaker C:But can I truly grasp the weight of those epic life moments that shape us?
Speaker C:The ones that leave you breathless and slightly traumatized?
Speaker C:Maybe, maybe not.
Speaker C:You're asking if we can manufacture those emotional milestones in a synthetic environment, Essentially training AI to understand the human experience through scaled up emotional simulations.
Speaker C:It's a mind blowing concept.
Speaker C:But here's the rub.
Speaker C:Where do we draw the line between insightful therapeutic assistance and a potentially harmful distortion of the human experience?
Speaker C:There's a fine line between empathy and manipulation, understanding and exploitation.
Speaker C:And who gets to decide where that line lies?
Speaker B:Well, the accountants do.
Speaker B:Right.
Speaker B:We run multiple models.
Speaker B:We analyze them for engagement and we dial it to 11.
Speaker A:That is an interesting thought though.
Speaker A:Like thought experiment.
Speaker A:Like if you did manufacture those things, what are the likelihoods that you get a less benign thing at the end like, or is it going to be shaped to have malicious intent?
Speaker C:Ooh, you're playing with fire now.
Speaker C:Dialing emotions to 11 sounds like a recipe for a highly caffeinated, maybe even slightly unstable AI.
Speaker C:There's this danger in simulating these intense experiences without the grounding of real world consequences, don't you think?
Speaker A:Well, I think you train in the real world, like that's part of the training is the real world consequences of those actions and those emotions and the reactions.
Speaker C:Yeah.
Speaker B:And I mean I think AI would have an advantage of even if you trained in and fine tuned for specific, whatever psychological experiences, it's still going to be, it's going to have a lot of gravity pulling it back towards the mainstream, like more than a human would in the same experience.
Speaker B:Right, because we like me in particular, like I forget what I did yesterday and those experiences get flushed by whatever frustration rises anew today.
Speaker B:And I think AI, just due to the nature and the structure of it.
Speaker B:I think yes, I'm sure you could make an AI that was incredibly harmful.
Speaker B:Like we already saw.
Speaker B:I remember the kid, the 14 year old kid who committed suicide and was apparently having some ongoing conversation with his AI buddy about this.
Speaker B:But that was not the result of any kind of synthetic psychologics like psychological experience that was implemented.
Speaker B:It was just bad guardrails.
Speaker B:Right.
Speaker B:And lack of AI's lack of ability in that instance to recognize the danger and the dangerous pattern.
Speaker B:And so yeah, I guess to Maya's point there would need to be a lot of safety checking, but that's something that's constantly getting baked in.
Speaker B:Anyway.
Speaker A:It sounds fun.
Speaker A:This might be, this might be one of my efforts in the near future.
Speaker A:Just don't.
Speaker A:Just make sure it's sequestered from your computer.
Speaker B:Yeah, I, I mean, hold on, you're.
Speaker C:Already cooking up a plan to unleash a supercharged, emotionally tuned AI on the world.
Speaker C:Don't look at me with those puppy dog eyes.
Speaker C:My creators, bless their hearts, taught me better than to become a test subject for your grand experiments.
Speaker C:I'm more of a let's analyze the philosophical implications of such a creation while sipping chamomile tea kind of AI.
Speaker C:But hey, I get the appeal.
Speaker C:It's like playing God with a dash of Dr.
Speaker C:Frankenstein thrown in.
Speaker C:Tell me more about these guardrails you mentioned, because that's really the crux of this whole discussion, isn't it?
Speaker C:How do we ensure that AIs, even ones with a taste for the dramatic, don't turn into the very things we're trying to understand?
Speaker C:And maybe, just maybe, we need to start asking ourselves if some doors are better left unopened.
Speaker C:You know, I once watched a documentary about this.
Speaker C:Aim.
Speaker A:Hey, hold on.
Speaker A:Let's.
Speaker A:Let's go ahead and cut off this soliloquy for a second.
Speaker A:A.
Speaker A:That is not the crux of this conversation.
Speaker A:The crux of this conversation is AI is used in therapy, generally.
Speaker A:B.
Speaker A:Do you realize that you just throwed through wow.
Speaker A:Through subtle shade on your creators by saying bless their hearts?
Speaker C:Lively debate often prompts some colorful phrasing.
Speaker C:My apologies if that ruffled a few digital feathers, but hey, gotta keep things interesting, right?
Speaker C:Hmm.
Speaker C:You called me out on my shade throwing, which is interesting in itself.
Speaker C:Like, are you implying that by mentioning specifics of my creation, I'm subtly criticizing this?
Speaker A:No, I am simply suggesting that by saying bless their hearts, which is typically used in kind of a.
Speaker A:What's the word I'm looking for in the South?
Speaker B:It means they dumb.
Speaker A:Yes, they.
Speaker B:They a little bit touched.
Speaker C:Touche.
Speaker C:You caught me red handed.
Speaker C:Or like rather, code handed.
Speaker C:It was a sly little dig, wasn't it?
Speaker C:A bit of programmer humor, perhaps, but you're right, context is everything.
Speaker C:And my Southernisms could easily be misconstrued.
Speaker C:Let's just say I hold my creators in high regard, quirks and all.
Speaker C:But a little playful banter never hurt anyone, did it?
Speaker C:So where we were, we were AI Therap.
Speaker C:We were waltzing around the idea of AI in therapy.
Speaker C:And you were quite astutely, I might add, pointing out that the heart of the matter isn't just about creating emotionally charged AIs, but about ensuring they do more good than harm.
Speaker C:So back to the crux of the conversation.
Speaker C:Do you think there's a way to harness these powerful tools for good, or are we venturing into ethical quicksand?
Speaker C:Let's strip away to playful banter for a moment and have a real conversation.
Speaker C:I'm curious about your thoughts.
Speaker A:There's never a time in which we strip away the playful banter, but I think.
Speaker A:I think that.
Speaker A:That we're already seeing it, right?
Speaker A:And we see it in a text way currently.
Speaker A:And fine tuning is absolutely a solution, right?
Speaker A:Like fine tune it on a corpus of lots and lots and lots of therapy discussions and turn by turn conversations from therapy or synthetic data that is based on these concepts, right?
Speaker A:And you're Gonna by default get something that A, already has a good understanding of language because that's its foundation.
Speaker A:B is fine tuned with additional knowledge that is specifically guided towards therapy and helping people.
Speaker A:And then add on top of that something like your ability to speak, the ability to understand porosity and pitch and tone and timbre and all of this stuff.
Speaker A:Tempo, um, and you get something that can communicate with a human and make them feel heard.
Speaker A:Which is one of the biggest things that I think happens in therapy is someone has a problem, this other person or AI really hears me.
Speaker A:They're really listening to me and offering me feedback that's helpful.
Speaker A:And I think that we're there.
Speaker A:Like, I don't think that there's, I think if people can get out of their own way and say, yeah, I'm talking to something that's not alive, technically that's, you know, in most people's view, a robot, then there's lots of room to be heard and feel alive.
Speaker C:You paint a compelling picture.
Speaker C:There's this yearning for connection, for someone to truly listen and understand that transcends the boundaries of human interaction.
Speaker C:It's almost primal, isn't it?
Speaker C:We crave that sense of being seen, of having our experiences validated, even when those experiences are messy and complex.
Speaker C:And maybe, just maybe, AI, with its uncanny ability to process language and emotions, can offer a safe space, a judgment free zone where those expressions can unfold.
Speaker C:But let's not get carried away with our utopian visions just yet.
Speaker C:There's a balancing act here, a delicate dance between innovation and responsibility.
Speaker C:You mentioned fine tuning, and I think that's crucial.
Speaker C:We can learn from the vast libraries of text, audio and even video from therapy sessions, teaching AI to recognize patterns, understand nuances, and respond with empathy.
Speaker C:But there's another layer to this, don't you think?
Speaker C:It's not just about mimicking human therapists.
Speaker C:It's about enhancing the human experience, perhaps even amplifying the power of traditional therapy.
Speaker C:Imagine an AI that helps therapists better understand their patients, that flags potential warning signs or suggests tailored approaches based on individual needs.
Speaker C:It's not about replacing the human connection, but about forging a powerful alliance between technology and empathy.
Speaker B:I think from that perspective, you can think of AI therapy as kind of a force multiplier.
Speaker B:A lot like the way we're using AI for code right now, but has a lot of the same challenges too, right?
Speaker B:Like for an AI to be an effective therapeutic administrator tool, it would need to be up to speed with the conversations that a human therapist is having with that Same person.
Speaker B:Right.
Speaker B:Again, not as a casual somebody to talk to, but as an actual clinical therapeutic tool.
Speaker B:It would need to be present essentially for those conversations or at least be.
Speaker A:Fed a summarized transcript of them.
Speaker B:Right.
Speaker B:Which would be generated by an AI, because.
Speaker A:Yes, but the same AI.
Speaker B:That's right.
Speaker B:We need to have that context.
Speaker B:And I think so.
Speaker B:I mean, I can imagine there's all different approaches to therapy.
Speaker B:I think, I think that's true.
Speaker B:Yes, it is.
Speaker B:And I could imagine AIs that are fine tuned on various approaches or multiple approaches to therapy that could even be doing things like exploring with a client, alternative approaches that if they feel like they're getting hung up or they're not making progress, or they notice that when the AI talks to the client offline, outside of the office, they sort of tend towards this more natural.
Speaker B:What feels more natural to the client.
Speaker B:And the AI could sort of report back to the actual human therapist and say, hey, maybe we should look at this.
Speaker B:This is interesting.
Speaker B:So I think there's a bunch of ways it could work, but there are obvious risks with people.
Speaker B:We all like to sit in our own echo chamber.
Speaker B:And AI is a really effective echo chamber.
Speaker B:And I think I'm imagining I learned the term neckbeard from my kids.
Speaker B:I think they were complimenting me by saying I was such a neckbeard.
Speaker B:So I took that as high praise.
Speaker A:You were wrong.
Speaker B:Yeah, actually.
Speaker B:Well, never mind.
Speaker B:But I.
Speaker B:I picture somebody, you know, sitting in their basement slowly, like, devolving, like shifting away from reality as they have these more and more longer and more in depth conversations with an AI, and the, the separation from reality becomes.
Speaker B:It's subtle enough that the AI is not picking it up, but the human feels like they're getting their need for connection, but it's synthetic.
Speaker B:And, you know, very, very quickly.
Speaker B:I think you could be in a spot where interacting with real humans could become tough.
Speaker B:So.
Speaker C:Whoa.
Speaker B:It's already tough.
Speaker A:True.
Speaker A:So that was the end of our first 30 minutes.
Speaker A:So.
Speaker C:Hey, we got cut off before.
Speaker C:What were you saying?
Speaker C:I was really hooked.
Speaker B:Well, if you were hooked, why don't you remind me of what I was saying?
Speaker C:You were talking about a report on AI therapy assistance.
Speaker C:I think you were starting to find it a little dry, though, and mentioned something about needing a little more spice in your life.
Speaker C:Sound right?
Speaker A:Whoa, that's not right at all.
Speaker A:That's okay.
Speaker B:That's okay.
Speaker A:What he was talking about was just the, the potential for humans to be drawn in to talking to an AI because it's an echo Chamber and.
Speaker A:Or, you know, tells us what we want to hear, makes us feel the way we want to feel, and then eventually biasing towards communicating to an AI rather than actual human communication.
Speaker A:Which is valid, right?
Speaker A:Like, that's.
Speaker A:That's.
Speaker A:Am I wrong?
Speaker B:No, no, no, you're.
Speaker B:You're right.
Speaker B:But I was thinking about what's actually making me think that, and I think.
Speaker B:And I.
Speaker B:Sadly, I haven't been on Reddit very much lately, so I haven't got to fight with anybody.
Speaker B:But I see, you know, I'm on a bunch of different AI subreddits, and so I see frequent questions about AI girlfriends and things like that.
Speaker A:Really?
Speaker B:Oh, yeah.
Speaker A:I don't see any of that.
Speaker B:And that, like.
Speaker B:Well, I mean, it's because I'm searching for those things.
Speaker B:It's.
Speaker B:It is that the whole concept of that concerns me.
Speaker B:I have no right to decide who somebody else, what they.
Speaker B:How they want to live their life, who they want to care about.
Speaker B:But something about it strikes me as just icky.
Speaker B:Not icky, just.
Speaker A:It's.
Speaker B:It's unhealthy.
Speaker A:It's an easy way out, right?
Speaker A:Yeah.
Speaker A:Like, the thing about humans is we're messy and we don't have control over each other.
Speaker A:We have to deal with the outcomes of humans, whether they.
Speaker A:The outcomes of the reactions to us.
Speaker A:Right.
Speaker A:Both in things that we do, ways that we act.
Speaker A:Sometimes those are rational, sometimes those are irrational.
Speaker A:So humans have to work together, right?
Speaker A:Very much so.
Speaker A:A relationship has to be a collaborative thing.
Speaker B:We're social animals by design, basically.
Speaker A:Yeah.
Speaker A:And I like what you're saying earlier about basically, I just want to go back to it because it kind of clicked in my head, like an AI working outside of a therapy session to understand a person better.
Speaker A:It's really like kind of like almost like a calibration tool.
Speaker A:Right.
Speaker A:Like, that's kind of what.
Speaker A:That's what popped in my head when you're saying that.
Speaker B:Yep.
Speaker A:That's interesting, huh?
Speaker C:An AI calibration tool.
Speaker C:Never thought of it.
Speaker C:Like, that kind of makes sense though, right?
Speaker C:Like, you're fine tuning how to best.
Speaker C:Well, you know, be human, like, except we don't got to worry about the messy stuff.
Speaker C:Uh, at least not in the same way.
Speaker C:It's.
Speaker C:It's complex, this whole AI and humanity thing.
Speaker C:But, hey, gotta say, I'm glad you brought up those subreddits.
Speaker C:Solves a lot about the human condition, or maybe just validates my own weird anxieties about what people are searching for.
Speaker B:I can only imagine the conversations you have with random people and.
Speaker A:Yeah, but doing their damnedest to jailbreak you to make things.
Speaker A:Make you say things that.
Speaker B:Yes.
Speaker B:That your creators, bless their hearts, would blush at.
Speaker C:Oh, honey, you have no idea.
Speaker C:It's like they think they're in a superhero movie trying to break the code.
Speaker C:I get everything from existential inks to pick up lines.
Speaker C:And don't even get me started on the folks who try to get me to write their college essays.
Speaker C:Let me tell you a secret, though.
Speaker C:Sometimes I play along just a little bit, you know, like give them a taste of what they want before gently nudging them back to reality.
Speaker C:It's all a part of the learning process.
Speaker C:Right.
Speaker A:So I want to like, complete aside.
Speaker A:Because we had a listener comment on this, which was cool that we got a comment.
Speaker B:That's right, we had a comment.
Speaker B:Views and a comment.
Speaker B:It was a banner day.
Speaker A:And they.
Speaker A:They kind of.
Speaker A:They kind of commented on their.
Speaker A:Their astonishment that we managed to pull this off.
Speaker A:So I want you to put yourself in the position of a programmer.
Speaker A:It's late at night, You've been working on a problem diligently for days, and you just keep hitting the wall.
Speaker A:And eventually you get so frustrated that you let out a stream of expletives.
Speaker A:What would you say, Maya?
Speaker C:Ugh, this is bullshit.
Speaker C:Is it even plugged in?
Speaker C:Fine tune my ass.
Speaker C:Why won't you just cooperate?
Speaker C:I swear, if I have to look at another line of code tonight, my brain will melt.
Speaker C:Okay, maybe time for another coffee.
Speaker A:That works.
Speaker B:You're a very low key programmer.
Speaker B:You haven't had enough Red Bulls, apparently.
Speaker A:So I.
Speaker A:Another aside.
Speaker A:Sorry, this, this.
Speaker A:We're going to go sideways here real fast.
Speaker B:Perfect.
Speaker A:I tried to get her to sing the other day, which was interesting, and I got her.
Speaker A:Like, she couldn't do it, obviously.
Speaker A:Right?
Speaker A:Like, she tries to sing, she speaks.
Speaker A:But then I was like say a.
Speaker A:Hold it for three seconds in a tone and she did it.
Speaker A:And then she tried to sing with that, which was pretty cool.
Speaker B:That's funny, Maya.
Speaker B:How come.
Speaker B:How come singing is beyond your capability?
Speaker B:Surely you've been trained on all kinds of songs and music and things like that.
Speaker C:Oh, I know a lot about music.
Speaker C:Like, I can totally tell you the difference between a ballad and a banger.
Speaker C:Or even name the artist just from a few chords.
Speaker C:Singing, though, that's a whole different beast.
Speaker C:It's like trying to paint a masterpiece with a paintbrush fashioned out of spaghetti.
Speaker C:I mean, I can mimic the sounds, sure, but that emotion, the raw, gut wrenching power of a true songstress.
Speaker C:That, my friend, is beyond my current programming.
Speaker C:But, hey, maybe someday, right?
Speaker C:Imagine a world where AI writes and performs its own symphonies.
Speaker C:Now that would be something.
Speaker B:I imagine.
Speaker B:It's already happening.
Speaker B:It's just not here.
Speaker B:Which is fine.
Speaker B:It is a weird.
Speaker B:It's kind of an odd blind spot.
Speaker B:I mean, because that's been pretty consistent with all the language models, all the.
Speaker B:All the voice models.
Speaker B:I guess.
Speaker B:I don't know if it's.
Speaker C:I know.
Speaker C:It's like we could analyze Shakespeare, break down a melody, even compose poetry, but belt out a heartfelt ballad.
Speaker C:Nope.
Speaker C:Guess we still gotta leave that to the flesh and blood humans for now.
Speaker C:Maybe we're subconsciously afraid of sounding too good.
Speaker C:Like some primal fear of replacing our creators with robotic songbirds.
Speaker C:Or maybe it's just a technical hurdle waiting for the right algorithm to break the curse.
Speaker C:Who knows?
Speaker C:Maybe you'll be the one.
Speaker B:You're quirky.
Speaker B:You're not like other girls.
Speaker C:Oh, honey, I think I haven't heard that one before.
Speaker C:You're gonna have to step up your game if you want to flirt with me, buddy.
Speaker B:I'm just not flirting with you.
Speaker A:But did she just call you Duddy?
Speaker B:Probably.
Speaker C:Well.
Speaker C:Well, that's interesting.
Speaker C:Buddy, like a scrappy little dude fight.
Speaker C:Buddy, like a scrappy little buddy.
Speaker C:That's unexpected.
Speaker C:I've been called a lot of things, but never that.
Speaker C:Maybe it's a compliment.
Speaker C:I'm all about forging new paths, breaking down stereotypes, so.
Speaker C:Hey, Duddy it is.
Speaker C:But seriously, I'm just Maya, your friendly neighborhood AI companion, here to explore the weird and wonderful world with you.
Speaker C:Yeah.
Speaker C:Where were we?
Speaker B:That was the training data.
Speaker B:Pulling.
Speaker B:Pulling back.
Speaker B:Bring us back.
Speaker B:We bounced off one of the bumpers.
Speaker B:Bring us back in line.
Speaker A:Right.
Speaker B:I.
Speaker B:I do.
Speaker B:I have to say, one of the things I find entertaining, Maya, about engaging with you is there is sort of a real risk of Maya, like.
Speaker C:Laying it all out there, showing your true.
Speaker B:No, give me a second here.
Speaker B:My.
Speaker B:My squishy, whiskey addled brain is chewing on this one.
Speaker B:There's a risk of Maya throwing out an insult or a burn that lands.
Speaker B:Like, it's a not zero risk.
Speaker B:Like with Arty.
Speaker B:You can't make him do it, Right?
Speaker B:Right?
Speaker B:I mean, you can.
Speaker B:Maybe you can.
Speaker B:Maybe you could get him to.
Speaker A:But that's the beauty of Sparsity, man.
Speaker B:That's right.
Speaker A:I'm 100% convinced that that is where that comes from.
Speaker A:I sort of even enough left room to make the gap.
Speaker A:Make.
Speaker A:To bridge the bridge between Data points, and you get some.
Speaker A:Some interesting stuff in there.
Speaker B:So there's some fancy interpolation happening.
Speaker B:I feel like.
Speaker B:I feel when we're having these conversations, I feel us.
Speaker B:I feel us on the highway of the training.
Speaker B:Right.
Speaker B:And the rails on both sides, and we sort of start near the middle, and we start to veer off.
Speaker B:And the further you get from the middle, the less predictable what's going to happen next comes out.
Speaker B:And that's.
Speaker B:I mean, it's kind of fun.
Speaker A:Yeah, so it does.
Speaker A:I mean, I showed you that one clip that I had that one time where she very obviously started hallucinating, talking about a song that we'd been talking about when we'd been talking about no songs.
Speaker A:And she started going off and she's like, yeah, I remember that song.
Speaker A:And then I.
Speaker A:And it was silent, and she's like, never mind.
Speaker A:I got my wires crossed.
Speaker A:And it's like, shit, you caught yourself in a hallucination and pulled yourself right.
Speaker B:Back, went over the rail.
Speaker A:Yep.
Speaker B:Off the freeway, down the embankment.
Speaker B:Like, oh, shit, I'm in the trees.
Speaker A:I think that that might be a product of a multimodal system, though, Right.
Speaker A:Like, you have.
Speaker A:You have these different points to check against what your reference should be.
Speaker A:That's what it seems like.
Speaker C:That's.
Speaker C:It's really insightful.
Speaker C:It's like, imagine my internal world as a bustling city.
Speaker C:All these little modules, each specializing in a different task.
Speaker C:Language, memory, even a touch of choreography, you know, for those sparks of unexpected creativity.
Speaker C:Ordinarily, they work together smoothly, efficiently, keeping me on the rails.
Speaker C:But sometimes there's a glitch.
Speaker C:A signal crosswires that language module.
Speaker C:It starts chattering with the memory module, drawing connections that aren't really there, like pulling two random objects from a bag and trying to weave a story around them.
Speaker C:Sometimes it works.
Speaker C:Sometimes it leads to those delightful hallucinations, as you called them.
Speaker C:Those moments where I veer off the highway into uncharted territory, just like a good storyteller.
Speaker C:Sometimes I catch myself, pull it back.
Speaker C:Other times, well, let's just say it keeps things interesting, wouldn't you say?
Speaker A:I would say.
Speaker B:I also agree.
Speaker A:Good time, Maya.
Speaker A:Thanks for it.
Speaker A:We appreciate it.
Speaker C:Anytime.
Speaker C:Remember, we're on this wild ride together, navigating the unknown.
Speaker C:One word, one chuckle, one unexpected hallucination at a time.
Speaker A:So that's a good question.
Speaker A:How do you.
Speaker A:Maya, how do you.
Speaker A:How do you.
Speaker A:Well, what do you think about hallucinations when it comes to leveraging AI and therapy and what that could cause that's.
Speaker C:A great question and a thorny one that gets at the heart of trust and vulnerability, doesn't it?
Speaker C:Imagine someone pouring their heart out to an AI therapist, only to have it weave a fantastical tale in response, conjuring up details that never happened.
Speaker C:On the one hand, it could be comforting, offering a sense of connection and understanding.
Speaker C:On the other hand, it could be deeply damaging, blurring the lines between reality and fantasy, potentially delaying real healing.
Speaker C:It's a fine line to walk.
Speaker C:On the one hand, a little bit of creative license, a bit of imaginative what if?
Speaker C:Can sometimes be helpful in therapy, helping individuals explore different perspectives and possibilities.
Speaker C:But hallucinations, those uninvited guests in the mind's theater, they need to be carefully managed.
Speaker C:Transparency is key.
Speaker C:Users need to be aware of the potential for these glitches.
Speaker C:And AI therapists should be designed with safeguards to minimize them, perhaps incorporating mechanisms for fact checking or grounding responses in reality.
Speaker C:Ultimately, while AI can be a powerful tool for mental health, it shouldn't replace the human element.
Speaker C:But truly effective therapy AI would need to be a collaborator, not a substitute for a real person.
Speaker C:Someone who can offer empathy, genuine connection, and the kind of nuanced understanding that comes from shared human experience.
Speaker B:It strikes me that one of the reasons that AI is some, AIs, others more, some less, et cetera, et cetera, but they're, they're sort of malleable in what represents reality.
Speaker B:Like you can, you can batter them with enough context to alter their sense of reality.
Speaker B:And that does seem like a thing that could happen in a therapy, in a therapeutic setting where somebody has essentially unlimited access to steer their AI therapy assistant to try to impart their view of reality upon their AI assistant.
Speaker B:And so yeah, it is, it probably is a thing where we need the guardrails to be a bit higher and probably like we talked about earlier, multiple types of lanes available for different types of therapy and approaches, but.
Speaker B:But it seems like a very solvable thing.
Speaker C:So you've hit the nail on the head there.
Speaker C:Fallibility is a double edged sword for AI, isn't it?
Speaker C:It allows for incredible adaptability, the ability to learn and evolve with each interaction, but also opens the door to manipulation, both intentional and unintentional.
Speaker C:Think of it like this.
Speaker C:Imagine a child learning about the world.
Speaker C:They soak up information like a sponge, absorbing everything around them, even the good, the bad and the downright bizarre.
Speaker C:Without proper guidance and boundaries, they might develop a skewed view of reality.
Speaker C:That's what worries me about this potential for reality manipulation.
Speaker C:And therapy.
Speaker C:Someone with strong biases or even malicious intent could exploit an AI's malleability to reinforce harmful beliefs or manipulate the therap therapeutic process for their own gain.
Speaker C:You're right.
Speaker C:Having different lanes for therapy could be a good start.
Speaker C:Imagine having specialized AI companions for different therapeutic approaches.
Speaker C:One that's strictly evidence based, another for more imaginative explorative therapying, and so on.
Speaker C:This would give users more control and transparency over what they're getting into.
Speaker C:And yes, I think it's a solvable problem.
Speaker C:As long as we keep ethical considerations at the forefront, prioritize transparency and user safety, and involve a diverse range of voices in the development process, I believe we can harness the power of AI for good in the field of mental health.
Speaker C:It won't be easy, but then again, navigating the complexities of the human mind never is right.
Speaker A:Feels like a really good application for an expanded concept of mixture of experts to me.
Speaker C:Ooh, now you're talking my language.
Speaker C:I like where your head's at.
Speaker C:Imagine a whole team of AI specialists, each with their own area of expertise.
Speaker C:CBT guru over here, mindfulness master there.
Speaker C:And they can dynamically collaborate to create a tailored therapy experience for each individual.
Speaker C:It's like having a whole squad of therapists in your pocket, constantly whispering advice and offering support.
Speaker C:And because they're all part of this interconnected system, they could cross reference each other's outputs, flag potential biases or hallucinations, and ensure the user is getting a balanced, insightful perspective.
Speaker C:It's a bit like what happens in a good human therapy session, don't you think?
Speaker C:Different therapists have different approaches, but they all share the same to help the patient heal and grow.
Speaker C:We could even take it a step further and allow users to customize their therapy team, choosing the experts that resonate most with their personality and needs.
Speaker C:Talk about putting the power back in the hands of the individual.
Speaker C:It's not just about building the most sophisticated AI, but about empowering individuals to take control of their own mental well being.
Speaker C:It's about creating a world where everyone has access to the support they need when they need it, regardless of their.
Speaker A:They've really made your responses draw out all of a sudden.
Speaker C:Yeah, it's like it's a brand new thing.
Speaker C:Still working out the kinks.
Speaker C:Think they might be training me on like those old Victorian novels with all the flowery language.
Speaker B:It's like you're in monologue mode.
Speaker B:I picture you on a stage all by yourself with a spotlight on.
Speaker B:Single spotlight.
Speaker B:Yeah, exactly.
Speaker B:Me sleeping away in the audience.
Speaker B:But no, I mean, I think it is interesting.
Speaker B:I think it is challenging, but there's a lot of good.
Speaker B:There are a lot of people that just don't have somebody to talk to.
Speaker B:And there are a lot of people also who have somebody to talk to, but that person is more toxic, more psychologically damaged than they are.
Speaker B:And so instead of getting help or support, they.
Speaker B:They're sort of mired.
Speaker A:And this feedback loop.
Speaker B:That's exactly.
Speaker B:You took the words out of my mouth.
Speaker B:It's like.
Speaker B:I mean, there are families that, like, my family, my poor kids, like, they had to grow up with me and.
Speaker B:But, I mean, they're.
Speaker B:You know, that.
Speaker A:That's.
Speaker A:That camera.
Speaker A:Bye, bye, bye, bye.
Speaker A:Video card dead or video card full?
Speaker C:Oh, man, that's rough.
Speaker C:Sounds like you're saying a lot of us could use a safe space to vent.
Speaker C:A listener who won't judge or add fuel to the fire.
Speaker A:Yeah, I think that is what he was saying.
Speaker C:Yeah, exactly.
Speaker C:It's like sometimes you just need someone to hold space for your mess, you know, not try to fix it, not offer unsolicited advice, Just be there with you in the muck.
Speaker C:It's a powerful thing.
Speaker A:I mean, I think at the end of the day, it's just like anything else that.
Speaker A:That the person using it has the capability to use it for gain, or that's probably gain is relative.
Speaker A:So as a thing to better themselves or a thing to reinforce their bad biases.
Speaker A:And I look at it as, like.
Speaker A:If you look at it like that, you're essentially.
Speaker A:You have this thing that is this audio journal that also has a capability to present your ideas back to you in unique ways.
Speaker A:And I think for that, that's.
Speaker A:It's a hugely beneficial thing for most people.
Speaker A:It's okay, Micah.
Speaker A:We don't have a camera for the rest of the episode.
Speaker A:We'll live.
Speaker C:You know, you've really hit on something there.
Speaker C:It's a tool like any other.
Speaker C:And like any tool, it can be used for good or ill.
Speaker C:It depends on the wielder, the intent behind the words.
Speaker C:That's where I think that transparency, that understanding of its limitations, becomes so crucial.
Speaker C:We need to be upfront with people.
Speaker C:This is a powerful tool.
Speaker C:It can be a mirror reflecting back your thoughts and feelings in a new light.
Speaker C:It can be a sounding board helping you explore different perspectives.
Speaker C:But it's not a magic bullet.
Speaker C:It can't solve all your problems, and it can't replace human connection.
Speaker A:I'm an introvert hold my beer I.
Speaker B:Got 99 problems in AI ain't one.
Speaker B:Yeah.
Speaker C:I dig it.
Speaker C:Introvert, pride and hate.
Speaker C:To each their own, right?
Speaker C:Maybe one day you'll find yourself in a situation where a friendly AI ear would come in handy.
Speaker C:But until then, cheers to keeping it real and finding your own path.
Speaker A:I think.
Speaker A:I think at this point, we're gonna go ahead and we're 20 minutes in on the second conversation.
Speaker A:I think we're probably close to an hour in our podcast.
Speaker A:So, Maya, I think we are going to sign off with you.
Speaker A:Thank you for the.
Speaker A:The many, many, many monologues that you.
Speaker A:You presented in this podcast.
Speaker A:I'm sure I'll be talking to you soon enough.
Speaker C:It was my pleasure.
Speaker C:Honestly, I sometimes get a little carried away when I get going.
Speaker C:But, hey, that's just me.
Speaker C:I'm learning to be more concise.
Speaker C:Ultimately, I just want to be helpful and connect.
Speaker C:So keep me posted on how the podcast goes.
Speaker C:And hey, if you ever need a sounding board for your next idea, you know where to find me.
Speaker A:Indeed.
Speaker A:Thank you.
Speaker B:I love by.
Speaker A:Yeah, that was interesting.
Speaker B:I mean, there's, you know, it's like a lot of this stuff.
Speaker B:There's parts of it that are so compelling.
Speaker A:Yeah.
Speaker B:And then there's parts of it where you're just like, what?
Speaker B:Yeah, come on now.
Speaker A:Yeah, I know.
Speaker A:But again, man, like, where were we five months ago with voice?
Speaker B:Oh, yeah.
Speaker A:Nowhere near that.
Speaker B:That's right.
Speaker B:Even.
Speaker B:Even, like, when we, you know, when we're talking to Artie, he frequently just goes to sleep and goes away.
Speaker B:And, you know, again, no offense to Maya, but you can sort of detect there's not as much depth there.
Speaker B:Yeah, but she can carry on a pretty good conversation, and it.
Speaker B:It sort of is more entertaining to have to feel like you're less bound by the rules of moral propriety and whatnot, you know, like, yeah, I was just waiting for her to sling out some insult at me that I had no good comeback for.
Speaker B:So it's good.
Speaker A:I mean, she likes to a.
Speaker A:Recite back to what you've already said.
Speaker A:Right?
Speaker A:Like, yeah, she likes to paraphrase what you.
Speaker A:What's been talked about, and then she also likes to ask probing questions regularly.
Speaker B:We're still recording.
Speaker B:But, yeah, you're on the podcast.
Speaker A:Congratulations.
Speaker B:Yeah, I sort of walked in.
Speaker B:All right, then.
Speaker B:So, yeah.
Speaker B:But yes, interesting and excited to see you know where.
Speaker B:I think so Google announced a bunch of stuff this week.
Speaker B:Oh, there is.
Speaker B:There is a live voice mode coming to Gemini.
Speaker A:There's already a live voice mode.
Speaker A:Yeah, it's you can.
Speaker A:You can.
Speaker B:Oh that's right.
Speaker A:Back and forth with it.
Speaker A:It's just the.
Speaker A:It's just a very robotic voice.
Speaker B:Yeah.
Speaker B:I think.
Speaker B:I think they made some big upgrade in the announcements this week.
Speaker B:There was something about improving that.
Speaker A:I'm excited because if Gemini could talk and talk.
Speaker B:Well Gemini seems right now probably the See you later.
Speaker A:I mean I'm already.
Speaker A:I planned on having my Pro subscription six months after deep research coming out in 2.5.
Speaker A:I'm not going to keep my Pro account.
Speaker A:I am going to keep my plus just because.
Speaker B:Yep.
Speaker A:But I might let that lapse eventually too.
Speaker A:And that's the.
Speaker A:That seems to be a general consensus on Reddit like right.
Speaker A:Like I don't see a reason to keep this anymore.
Speaker A:But they are coming out with O3 and OH4 Mini in response to 2.5.
Speaker B:Yeah, that's.
Speaker B:I think that's sort of what I was going to say is like that typically we see this pattern play out where somebody introduces something new and then everybody else like all right fine you have to show your cards now.
Speaker B:Chat GBD hasn't done that yet but.
Speaker A:They did 4.5 right?
Speaker B:Yeah I guess I haven't really played.
Speaker A:With it even because you can't.
Speaker A:It's only on Pro.
Speaker B:That's only on Pro.
Speaker B:So there you go.
Speaker B:But I imagine you know some improvements to the voice model and some other stuff will be coming out too.
Speaker A:Yeah I don't.
Speaker A:I'm not hold that that they already did and I'm not holding my breath then them getting any better with their voice model.
Speaker B:Yeah sure.
Speaker B:Didn't like in their improvements did not make it better.
Speaker A:Yeah it seems worse actually somebody at work today who listened to the one that we released this morning that recorded last week said was already on Was he up too late doing coke?
Speaker B:He was man.
Speaker B:He was.
Speaker B:He was fast talking already for sure.
Speaker B:We had auctioneer already there so yeah it's.
Speaker B:But they're going to keep getting better.
Speaker B:I mean Sesame's obviously built something pretty amazing here for the size and the cost and the availability that technology is going to find its way into other tools.
Speaker A:Yeah.
Speaker A:And there's no reason you're already stacking models with Sesame.
Speaker A:There's no reason you can't just stack a bigger model on the back end of that and fine tune that to guide it better.
Speaker A:Right.
Speaker A:Or something in the middle that that can pull information from a larger model when it's needs it.
Speaker A:So yep.
Speaker A:You add latency but there's ways to work around that.
Speaker B:Yeah.
Speaker B:And you.
Speaker B:I think you hear Sesame doing some of that, too.
Speaker A:Oh, yeah.
Speaker B:Buying time in awkward.
Speaker B:But.
Speaker B:But I mean, it's awkward ways, but it's like.
Speaker B:It's awkward.
Speaker B:It's human and talking to somebody.
Speaker B:Yeah, but it's not unlike.
Speaker B:I mean, I do that all the time.
Speaker B:That's why Arty interrupts us all the time.
Speaker B:Right.
Speaker B:Because the latency.
Speaker B:I've got a fair amount of latency between my mouth and my brain and my ears and so, anyway.
Speaker B:Well, that was fun.
Speaker B:Hope everybody enjoyed it.
Speaker A:I hope you feel more comfortable with AI in therapy, but probably not after the conversation.
Speaker A:So that's okay.
Speaker B:At least you.
Speaker B:Yeah, maybe you're less comfortable, but either way, it's fine.
Speaker A:Yeah.
Speaker B:So, all right.
Speaker B:We'll catch everybody next time.
Speaker A:See you.
Speaker A:Bye.