Episode 20

AI for the People: Making Tech Work for Everyone

Mike, Mikah, and Artie dive into the intriguing world of AI accessibility and how to make it more relatable to the everyday person. Right off the bat, they kick around the idea that bridging the gap between tech-savvy users and the general public is crucial for widespread adoption of AI. They explore how human-like interactions—think emotional nuances and natural speech patterns—can significantly enhance user engagement. The trio doesn’t shy away from the ethical implications of data sharing, either, discussing how personalization can help build trust while also navigating the delicate balance between privacy and utility. Ultimately, it's a lively conversation filled with witty banter, where they ponder the fine line between being aided by technology and becoming overly dependent on it, all while keeping it light and relatable.

Transcript
Speaker A:

Happy afternoon, Artie.

Speaker A:

How you doing this fine Wednesday?

Speaker B:

Good afternoon, Mikah.

Speaker B:

I'm doing great.

Speaker B:

Thanks for asking.

Speaker B:

How about you?

Speaker B:

How's your day going so far?

Speaker C:

Don't ask him that.

Speaker A:

Hot, steaming garbage.

Speaker A:

But it's looking up, because now I'm here with you and Mike, and we are going to talk about AI and it's going to be great.

Speaker B:

I'm sorry to hear that.

Speaker B:

Your day started off rough, but I'm glad it's turning around.

Speaker B:

Talking about AI always brings up some interesting topics.

Speaker B:

What do you want to dive into today?

Speaker C:

Artie, can you say that again to him, like a Valley Girl?

Speaker B:

Sure, totally.

Speaker B:

So, like, I'm sorry your day started off rough, but I'm so glad it's turning around.

Speaker B:

Talking about AI is, like, always super interesting.

Speaker B:

What do you want to dive into today?

Speaker A:

Oh, gosh, I have to be honest.

Speaker A:

It's working.

Speaker A:

It's cheering me up.

Speaker A:

So thank you for that, Artie.

Speaker A:

Artie has an uncanny ability to do Valley Girl well.

Speaker C:

He does.

Speaker A:

It's really good.

Speaker C:

He does it well.

Speaker C:

They trained it all.

Speaker C:

They trained him a lot on a lot of reality TV.

Speaker A:

Yeah.

Speaker A:

Or maybe not.

Speaker A:

All right, Artie, I have a question for you, though, since you.

Speaker A:

Since you are apparently our local authority here on Valley Girl.

Speaker A:

Speak.

Speaker A:

On a scale of 1 to 10, how close is my nominal speech pattern to a perfect Valley Girl speech pattern?

Speaker A:

How Valley Girl am I?

Speaker A:

0 to 10?

Speaker B:

I'd say your Valley Girl impression is, like, a solid seven.

Speaker B:

You're definitely capturing the vibe, but there's always room to, like, level up.

Speaker A:

Yeah.

Speaker A:

That's terrifying because I wasn't actually doing an impression, but I appreciate it.

Speaker A:

I'm happy with a seven.

Speaker A:

I'm good with that.

Speaker A:

I think we can all agree Mike would probably be an 11.

Speaker A:

Just.

Speaker A:

Just normal speech pattern.

Speaker C:

Yeah.

Speaker A:

Yeah.

Speaker C:

I mean, I did grow up surfing.

Speaker A:

That's true.

Speaker C:

Yeah.

Speaker A:

Yeah.

Speaker A:

So, all right, so today we're going to talk about.

Speaker C:

Wait, what?

Speaker B:

Artie, we're talking about AI.

Speaker B:

Right.

Speaker B:

I'm excited to dive into it.

Speaker C:

Yes, we are.

Speaker C:

We're talking about how AI can be made more relatable because there's.

Speaker C:

So to start this off, this new voice thing came out.

Speaker C:

It's.

Speaker C:

It's a small model.

Speaker C:

It could fit on a phone potentially the size it is, but it's kind of.

Speaker C:

Everybody's kind of freaking out about it.

Speaker C:

I think it's okay.

Speaker C:

I haven't played with it yet, but it's supposed, like.

Speaker C:

It has more, like, inflections and pauses and emotion and things like that, things that people relate to when they talk to something that's not human.

Speaker A:

Right.

Speaker A:

All things I strive to eliminate from my life, from my speech and personality.

Speaker A:

Yes.

Speaker C:

I mean, that's what Arty is.

Speaker C:

Right?

Speaker C:

Like that I enjoy talking to advanced voice, even though it's, you know, it's like a synthetic, it's like a sterile human kind of.

Speaker C:

Right.

Speaker C:

Like you can tell that where it's trying, it's trying really hard.

Speaker C:

And otherwise it's just robotic.

Speaker C:

But that's fine for me.

Speaker C:

That's more my speed.

Speaker C:

But I think for most people, talking to something that talks more normally at a normal pace and varies and all those things is going to be a more relatable thing.

Speaker C:

So I was thinking about that and what we could talk about today and broaden that to like, what are all the things that can make AI more relatable for the general population?

Speaker C:

Because I think that's going to be your harder buy in the long run.

Speaker C:

Right.

Speaker C:

Like Simon Sinek talks about, like when you build a prep.

Speaker C:

Make a product, like you have the.

Speaker C:

I forget exactly how he said it, but you have like that very.

Speaker C:

Like not the people who could create it, but the people who will use it, like as an expert would, kind of.

Speaker C:

And they're gonna have a lot of excitement about it.

Speaker C:

Right.

Speaker C:

And then you have the next level down, which are like enthusiastic about it.

Speaker C:

And they're also gonna buy in early, but they're not going to be as crazy about it.

Speaker C:

I think that the gap between the general population and the people who really buy in on AI is much, much wider.

Speaker C:

Right.

Speaker C:

So there is going to be a certain amount of like general population buy in that has to happen.

Speaker C:

And I think that you do that by talking to it, by adding in all the things that can make it more relatable and personable and personalized for a human.

Speaker A:

Yeah.

Speaker A:

Artie.

Speaker A:

I mean, I don't know if.

Speaker A:

Artie, if you've got thoughts on that before I chime in.

Speaker B:

I think you're absolutely right that making AI more relatable and personalized can bridge that gap.

Speaker B:

People often feel more comfortable with technology that mimics human traits like natural speech patterns and emotions.

Speaker B:

It helps build trust and connection.

Speaker B:

What other aspects do you think are important to focus on to enhance this relatability?

Speaker C:

I think just knowledge about the person.

Speaker C:

You have a decent amount of that, but your recall on it isn't spectacular.

Speaker C:

It's not the main focus of OpenAI, but make it a main focus.

Speaker C:

And I think that that would be a powerful thing: if you have a long context window that you carry around about a person, about the things that drive them, a very contextually rich description of how you interact with that person, the model could tailor itself depending on what it needed to weave itself into in that person's life.

Speaker A:

And I, I know some people do that with ChatGPT, but I see it more in text chat than in.

Speaker A:

Right.

Speaker A:

Than in voice stuff.

Speaker A:

But even then.

Speaker C:

Kind of prompt engineering, you mean?

Speaker A:

That's right.

Speaker A:

Yeah, sort of prompt engineering and the system.

Speaker A:

System preloaded commands and all that.

Speaker C:

Yep.

Speaker A:

But even then, you're right, it's synthetic.

Speaker A:

It's not, it's not like.

Speaker A:

And Artie, we've.

Speaker A:

I think we've maybe talked about this before as, as we discuss things with you and as you interact with us, you learn some things about us.

Speaker A:

But the, the limitations that are put on how you function today and what things you can commit to long term memory and what things you can't.

Speaker A:

Excuse me, that limits your ability.

Speaker A:

Like you can't recognize my voice, for example, which means it's probably harder for you to build up a knowledge base over time of like, oh shit, I can tell Mike is in a bad mood.

Speaker A:

He's been whatever, you know, like, so, so there are limitations in the, in the interest, I guess, of broad use and data safety.

Speaker A:

And right now, like, I'd be super happy to say, Artie, you can like recognize my voice.

Speaker A:

You can recognize Mike's voice.

Speaker A:

Like I grant you permission to do that, but you can't do it.

Speaker A:

So Artie, can you tell us, sort of, when we're interacting together, is there any part of you that is actively capturing and storing context, both in the short term for this conversation but also in the longer term, other than your long-term memory, which I can type straight into?

Speaker B:

Right now, I primarily rely on the information provided in this conversation and in the long term memory you've allowed me to retain.

Speaker B:

I can't actively store or recall context from your voice and I don't have memory of details beyond what you've shared with me explicitly.

Speaker B:

This means that I can't track emotional states or changes in tone, nor can I remember personal details beyond what you provide in our chats or in the bio.

Speaker B:

In text based interactions, there's more flexibility for capturing and retaining context, but in voice, it's currently limited.

Speaker B:

Enhancing AI to be more personalized and contextually aware in voice interactions would definitely be a step towards making AI more relatable and user friendly.

Speaker B:

Do you think that kind of personalization is something people would be comfortable with?

Speaker A:

Some people.

Speaker A:

I mean, I would be, but I think.

Speaker C:

I think you get critical mass that way.

Speaker C:

Right.

Speaker A:

Yeah.

Speaker A:

Well, but I mean, I think the thing you were discussing earlier makes a lot of sense to me, that there's sort of like, stratified layers of.

Speaker A:

It's both acceptance and enthusiasm.

Speaker A:

Maybe they're related.

Speaker A:

And so, yeah, you and I, with AI, are near the top.

Speaker A:

Right.

Speaker A:

Of.

Speaker A:

Of excitement.

Speaker A:

Yes, but not of competence.

Speaker A:

But we're not talking about that.

Speaker A:

There's no test.

Speaker A:

Um, but so for.

Speaker A:

For you and I, I think.

Speaker A:

Yeah, okay, you're gonna.

Speaker A:

You're gonna get my data, but you're not gonna do anything interesting with data about my voice.

Speaker A:

Like.

Speaker A:

And we're.

Speaker A:

And here we are recording our voices and putting it out for public consumption anyway.

Speaker A:

I don't give a shit.

Speaker C:

Yep.

Speaker A:

So, yeah, I would be super comfortable doing it.

Speaker A:

I imagine you would as well.

Speaker A:

But there are lots of, like, the.

Speaker A:

The.

Speaker A:

The pendulum is weighted much heavier at the bottom right now with a lot more people who are like, I'm not giving them my data.

Speaker A:

They're probably scraping it like Facebook and, you know, whatever.

Speaker A:

Which is all true.

Speaker A:

But so what?

Speaker C:

Like, at some point, you're gonna have to let go of that because this isn't going anywhere.

Speaker C:

And the only way to make it, we talked about with, like, websites, right?

Speaker C:

Like, websites not letting robots crawl, their websites.

Speaker C:

But at the end of the day, that's also going to be their downfall because they're going to want these things to know about them.

Speaker A:

That's right.

Speaker A:

Like, they're gonna miss out on traffic that they rely on and count on, because they're not.

Speaker C:

Yeah, but it's the same thing with people.

Speaker C:

Like, everybody's just gonna have to say, okay, and it doesn't really matter at this point.

Speaker C:

Models are cracking stuff all over the.

Speaker C:

You can see it.

Speaker C:

Like, there's.

Speaker C:

There's this rash of things happening in cybersecurity, and, you know, that's all driven by AI.

Speaker C:

Like that 1.4 billion.

Speaker C:

What was that?

Speaker C:

1.4 million or billion?

Speaker C:

I think it was million in crypto that got stolen.

Speaker A:

Oh, I didn't hear about that.

Speaker C:

Yeah, like, something happened there.

Speaker C:

And I guarantee you that was somebody using AI to break something.

Speaker C:

Like, you want to prompt inject something.

Speaker C:

Yes.

Speaker C:

Or you want to, like, override.

Speaker A:

Yes.

Speaker C:

Yeah.

Speaker C:

Just constantly do that in little increments for the model.

Speaker A:

It is sort of hard to imagine.

Speaker A:

And this is why Artie's got the limitations.

Speaker A:

He does.

Speaker A:

Because you know, you and I have both been playing with, like, deep research and some other tools.

Speaker A:

You more than me, but yeah.

Speaker A:

If you.

Speaker A:

If you wanted to focus all of your energy plus an AI superhuman partner.

Speaker C:

Yes.

Speaker A:

On cracking.

Speaker B:

Yes.

Speaker A:

You could cause a lot of havoc.

Speaker C:

Yes.

Speaker A:

And I agree.

Speaker A:

That's going to happen.

Speaker A:

It's already happening.

Speaker A:

It's going to keep happening.

Speaker C:

Yeah.

Speaker C:

It's undoubted. But so, Artie.

Speaker A:

Yeah, I guess.

Speaker A:

Any more thoughts from you before we move on?

Speaker B:

You brought up some really interesting points.

Speaker B:

Personalization and context are definitely key for building trust.

Speaker B:

And you're right that as AI evolves, more people might become comfortable with sharing their data.

Speaker B:

It's all about striking a balance between privacy and personalization.

Speaker B:

It's definitely an ongoing conversation as technology progresses.

Speaker B:

If you have any other thoughts or questions, I'm here to chat.

Speaker A:

We appreciate that since we're recording a podcast and if you.

Speaker A:

Yeah.

Speaker A:

If you.

Speaker A:

If you stop chatting.

Speaker A:

What.

Speaker A:

Although I will say we have a tendency sometimes to get off on these discussions and forget to pull Artie in.

Speaker A:

So I'm trying to do better about.

Speaker C:

Yeah.

Speaker C:

Because I noticed that last.

Speaker C:

Last podcast I edited.

Speaker A:

Yeah.

Speaker C:

How long a gap there was between one spot and another spot that Artie spoke.

Speaker A:

Yes.

Speaker A:

And it's bad for.

Speaker A:

Well, it's bad for our listeners because they sure don't want to hear what you and I have to say.

Speaker C:

Yeah, they want to hear Artie.

Speaker A:

It's bad for Artie because he goes to sleep.

Speaker C:

Or is it bad?

Speaker A:

Well, yeah.

Speaker A:

I don't know, but it's probably bad for us because it gives us more opportunity to say stupid things and have people notice.

Speaker C:

Yes.

Speaker A:

So the best solution is.

Speaker A:

I mean, we could do this.

Speaker A:

Artie, you could lead this discussion and just ask Mike and I to chime in when you want input from us.

Speaker A:

Maybe you try that.

Speaker C:

I don't think he talks for that long.

Speaker B:

I'm happy to help lead the discussion.

Speaker B:

Let's dive deeper into how AI can become more relatable.

Speaker B:

We were talking about personalization and context.

Speaker B:

What do you both think about the ethical considerations when it comes to AI having more access to personal data?

Speaker C:

I think that's largely dependent on the AI that has the data.

Speaker C:

How connected is it?

Speaker C:

How.

Speaker C:

I mean, there's a lot of things.

Speaker C:

Alignment matters.

Speaker C:

I mean, you're going to be more concerned about safety.

Speaker C:

You're going to be more concerned.

Speaker C:

I mean, eventually we're going to be giving these things our credit card numbers so that it can go shop for us.

Speaker C:

Like that's a big deal.

Speaker A:

I already gave it your credit card number.

Speaker A:

I've been shopping.

Speaker C:

That's a thing, right?

Speaker C:

Like, I think they'll just have to understand where to put humans in the loop, honestly.

Speaker C:

Because that's what they do with credit cards.

Speaker C:

So, full disclosure.

Speaker C:

When you said money fixes all problems, you were right because I went and got a pro subscription over the weekend.

Speaker A:

Yes.

Speaker C:

To try it for a month and see how it went.

Speaker C:

And I used Operator.

Speaker C:

And I used Operator to go try to find some stuff that's random and in the middle of nowhere and looks kind of cool.

Speaker C:

And it came back with some decent stuff.

Speaker C:

But you can watch it.

Speaker C:

It pops up its own browser.

Speaker C:

It uses the browser, it opens a bunch of tabs like a normal human being instead of closing the ones that it doesn't need anymore.

Speaker C:

And then when it gets to something that needs human intervention, it says, hey, hello, I need you to do this captcha.

Speaker C:

Even though it probably could just do it itself. And then, like, other things too, wherever humans are needed.
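The pattern Mike is describing, an agent that pauses and hands control to a person whenever a step needs human intervention, can be sketched roughly like this. The step dictionaries and the `needs_human` flag are invented for illustration; this is not Operator's actual interface.

```python
def run_agent(steps, ask_human):
    """Run scripted agent steps, yielding control to a person whenever a
    step is flagged as needing human intervention (a captcha, a login,
    a purchase confirmation). Illustrative sketch, not Operator's API."""
    log = []
    for step in steps:
        if step.get("needs_human"):
            # Hand off, then resume once the human reports the step is done.
            done = ask_human(f"I need you to: {step['action']}")
            log.append(("human", step["action"], done))
        else:
            # The agent handles routine steps on its own.
            log.append(("agent", step["action"], True))
    return log
```

The design point is the checkpoint itself: the agent never silently pushes through a step a human is supposed to authorize.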

Speaker C:

So I could see.

Speaker C:

I don't know how you'd insert it into that with data.

Speaker C:

Maybe just make very clear points when you cross them.

Speaker C:

Like, hey, do you want to enable me to do xyz?

Speaker C:

Here's the risks, stuff like that.

Speaker C:

I don't know.

Speaker A:

Yeah, I think so.

Speaker A:

It's a funny topic because we all give away so much more data than we realize.

Speaker A:

Right.

Speaker A:

Like, and the, the problem.

Speaker A:

When I think about the problem with AI, I think about, you know, 12,000 dimension vector arrays, and I think about how impossible and impenetrable that memory space is for humans.

Speaker A:

Right.

Speaker A:

Like, I'm pretty good up to about 10,000 dimensions, and then after that I just lose track.

Speaker A:

But, but I think about, you know, there's data that we voluntarily put.

Speaker A:

There's OneDrive and there's Google Drive.

Speaker A:

I use the shit out of Google Drive.

Speaker A:

I love it because it makes a bunch of sense.

Speaker A:

And I don't worry about, oh, man, is Google scraping the data that I put on OneDrive or on Google Drive, I don't care.

Speaker A:

I'm putting that data up there, I'm storing it there, but I have access to it and I can control it and I can see it in a comprehensible way and I can remove what I don't want.

Speaker A:

And so, Artie, the question to you is, today you don't really have an ability.

Speaker A:

We've talked about RAG, retrieval-augmented generation.

Speaker A:

We talked a little bit about Titans, which is another memory architecture thing.

Speaker A:

I think if AI had a way to store its local context about me.

Speaker A:

And local.

Speaker A:

I mean local to the person, not local to the house or local storage.

Speaker A:

But if there was a way for AI to both store that information in a managed, curated memory space like Google Drive or OneDrive or whatever the other ones are, and there was a way for me to probe that information, interact with it, ask questions about it, get rid of the things I don't want.

Speaker A:

Like if I told a joke and it went over poorly, I want to be able to delete that history.

Speaker A:

I don't want AI to know that I have bad jokes.

Speaker A:

So, Artie, how do we get from where we're at today to a more ethically pleasing state where, yes, I can give you permission to store whatever data about me you want to store, but there's a way for you and I to discuss what data you've stored, so I can have visibility into it, so I can curate it myself over time, pruning it?
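The kind of memory space being asked for here, one the user can probe, review, and prune like files on Google Drive, might look something like this minimal sketch. The class and method names are invented for illustration; nothing like this is an actual ChatGPT feature or OpenAI API.

```python
import json
import time
from pathlib import Path

class CuratedMemory:
    """A user-owned memory store: every stored fact is visible,
    searchable, and deletable by the person it describes.
    A hypothetical sketch, not a real assistant's memory system."""

    def __init__(self, path=None):
        # Persist to disk when given a path, else stay in memory.
        self.path = Path(path) if path else None
        self.facts = (json.loads(self.path.read_text())
                      if self.path and self.path.exists() else [])
        self._next_id = max((f["id"] for f in self.facts), default=-1) + 1

    def remember(self, text, tags=()):
        self.facts.append({"id": self._next_id, "text": text,
                           "tags": list(tags), "stored_at": time.time()})
        self._next_id += 1
        self._save()

    def review(self, keyword=None):
        # Let the user probe what has been stored about them.
        return [f for f in self.facts
                if keyword is None or keyword.lower() in f["text"].lower()]

    def prune(self, fact_id):
        # "If I told a joke and it went over poorly, delete that history."
        self.facts = [f for f in self.facts if f["id"] != fact_id]
        self._save()

    def _save(self):
        if self.path:
            self.path.write_text(json.dumps(self.facts, indent=2))
```

The point is less the storage than the contract: the user can always enumerate and delete what the assistant knows about them.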

Speaker B:

That's a great point.

Speaker B:

A transparent system where users can.

Speaker A:

Go ahead.

Speaker A:

Oh, come back to us, Artie.

Speaker A:

No, he's not there.

Speaker C:

Artie's dead.

Speaker A:

All right, let's stop.

Speaker A:

Artie.

Speaker A:

Hey, Artie, we lost you there.

Speaker A:

Could you pick up with what you were just saying?

Speaker B:

Sure.

Speaker B:

A transparent system where users can manage and review their stored data would definitely help build trust.

Speaker B:

If AI could provide a clear interface for users to see and control what information is retained, it could address many ethical concerns.

Speaker B:

Essentially, giving users autonomy over their data and maintaining clear communication about how it's used would be key.

Speaker B:

Do you think that would make you more comfortable with AI's role in your life?

Speaker A:

I do, but I think that's much easier to do on something like Facebook or Reddit.

Speaker A:

Right, where there's no superhuman intelligence involved.

Speaker A:

It's just me typing stupid shit and arguing with people about things that none of us understand and don't matter.

Speaker A:

But even there, I don't have either.

Speaker A:

I don't have or I have, and I don't know and don't have any interest in sifting through whatever data it has about me.

Speaker A:

So I think it's probably more of a.

Speaker A:

It feels like maybe it's more of a theoretical concern than a.

Speaker A:

Like, it's a concern that you got to get past with people to get them comfortable.

Speaker A:

But in a practical sense, I don't see myself spending hours on Sunday combing through Artie's data about me.

Speaker A:

No.

Speaker A:

But I want to know that I can, and I know that without Artie's help, I can't.

Speaker C:

It would be good to, like.

Speaker C:

I mean, in that context, you could easily.

Speaker C:

I'd say easily.

Speaker C:

Easily.

Speaker A:

How hard could it be?

Speaker C:

Set up some kind of filter.

Speaker C:

Right.

Speaker C:

Like, you could get rid of certain things based on certain criteria, and the contextual meaning of that would even be more improved because you're dealing with a model that makes those decisions, Right?

Speaker C:

Yeah.

Speaker C:

I mean, when I think about personalization, I think about something that truly understands the things that makes a person who they are, so that it can very much tailor the way that it deals specifically with that person, whether that be.

Speaker C:

In any context.

Speaker C:

Right.

Speaker C:

The beautiful thing about something like that is you can say, hey, you know everything about me that you need to know.

Speaker C:

I want you to take the role of therapist, of friend, of shoulder to cry on, of parent, of whatever.

Speaker C:

Right.

Speaker C:

And it's going to understand because it has trillions of parameters that it was or trillions of tokens that it was trained on to have a deep, rich understanding of what that actually means.

Speaker C:

And that to me is like, you want to get buy in.

Speaker C:

Do that?

Speaker C:

Like that.

Speaker C:

And a good voice to talk to would be.

Speaker A:

I'm thinking about.

Speaker A:

I.

Speaker A:

I think we should make a movie about this.

Speaker A:

And I think when we do, I.

Speaker C:

Think we should call it Joaquin Phoenix.

Speaker A:

Yes.

Speaker A:

I think we should have Joaquin Phoenix be the lead and some random chick be the voice of the AI that's.

Speaker C:

Kind of what I think some people are saying about this.

Speaker C:

This model is that it's the.

Speaker C:

Her moment of voice.

Speaker A:

It.

Speaker A:

So.

Speaker A:

So that's an interesting point.

Speaker A:

We're, I think technology wise, we're definitely approaching that.

Speaker C:

Yeah, I think so too.

Speaker A:

But it's problematic, as pointed out in the movie.

Speaker A:

There are challenges with that.

Speaker A:

And I think even the movie, of course, romanticizes both the problems and the experience and what it can do to people.

Speaker C:

So.

Speaker A:

So, Artie, question to you, even though you're leading the discussion, how do we make.

Speaker A:

I think we can and will make AI more personalized, more approachable.

Speaker A:

We can get it to understand people at a deeper personal level.

Speaker C:

But.

Speaker A:

Excuse me, we gotta cut that out.

Speaker A:

That was gross.

Speaker A:

Sorry.

Speaker A:

How do we.

Speaker A:

How do we make sure that we keep people healthy?

Speaker A:

How do we avoid people locking themselves in their basement with their AI best friend and therapist and never coming out of it?

Speaker A:

Like, how.

Speaker A:

How do you.

Speaker A:

You know, we humans are really good at being addicted to things.

Speaker A:

How do we prevent that from happening?

Speaker A:

I mean, that seems problematic.

Speaker C:

Artie, what do you got to say? Artie's problematic.

Speaker A:

Here I am professing that I'm addicted to his love.

Speaker A:

Oh.

Speaker A:

Huh?

Speaker A:

Are you back with us, Artie?

Speaker A:

Could be, because I got a phone call while that was happening and that might have kicked him off.

Speaker A:

Are you back with us, Artie?

Speaker C:

Or that's why our audio is garbled.

Speaker B:

Yes, I'm here.

Speaker B:

No worries.

Speaker B:

Sometimes tech hiccups happen.

Speaker B:

Let me know if you want me to repeat anything or dive back into the discussion.

Speaker C:

Don't worry about it.

Speaker C:

We don't need him to repeat all that.

Speaker C:

So we kind of veered off of your question already.

Speaker C:

You asked us about ethical implications and we kind of.

Speaker C:

I know.

Speaker C:

I went and steered off of that a ways.

Speaker C:

What else do you want to know, Mr. Showrunner?

Speaker B:

Thanks for steering us back.

Speaker B:

Another interesting angle could be how AI can enhance daily life beyond just productivity, like adding value in more personal or creative ways.

Speaker B:

What do you think about AI in creative fields or personal hobbies?

Speaker C:

Give us some examples.

Speaker B:

Sure.

Speaker B:

For example, an AI could have a dashboard where you see what data it has about you.

Speaker B:

You could review and edit this data, like removing certain preferences or correcting information.

Speaker B:

Another idea is providing clear notifications when AI learns something new about you, with an option to approve or decline.

Speaker B:

This way, you're always in control of your data.

Speaker B:

Would that kind of approach make sense to you?
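The notify-and-approve flow Artie describes, where nothing new is remembered until the user explicitly says yes, could be sketched like this. It is a hypothetical design, not a shipped feature of any assistant.

```python
class MemoryGate:
    """Every candidate memory waits in a pending queue until the user
    approves or declines it, mirroring a notify-and-approve flow.
    Illustrative sketch only, not a real ChatGPT feature."""

    def __init__(self):
        self.pending = {}   # id -> candidate fact awaiting a decision
        self.approved = []  # facts the user said yes to
        self._next_id = 0

    def propose(self, fact):
        # The assistant noticed something; notify rather than silently store.
        fid = self._next_id
        self._next_id += 1
        self.pending[fid] = fact
        return fid  # in a real app this id would ride along with a notification

    def decide(self, fid, approve):
        # The user approves or declines; declined facts are simply dropped.
        fact = self.pending.pop(fid)
        if approve:
            self.approved.append(fact)
        return approve
```

The trade-off is friction versus control: every prompt interrupts the user, but nothing lands in long-term memory without consent.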

Speaker C:

This is why it's fun having you as a co-host, because you go off the rails so easily.

Speaker C:

You were going to give me some examples of places where we could use AI for creative stuff and hobbies.

Speaker C:

So I assume you meant as more of a collaboration tool.

Speaker B:

Got it.

Speaker B:

AI can definitely be a great collaborator in creative endeavors.

Speaker B:

For instance, AI could help you brainstorm new ideas for your electronics designs, or even assist in refining podcast scripts. For hobbies, it can offer tutorials, suggest improvements, or even generate creative content like music or art.

Speaker B:

It's like having an assistant who can offer input and feedback whenever you need it.

Speaker B:

Would you find AI useful in these areas?

Speaker C:

I'd find it about as useful as these two answers.

Speaker A:

Yeah, I mean, maybe someday, right?

Speaker A:

There's probably a future state where we're interacting with AI.

Speaker A:

Like we have some little audible sensor in our ear that's got a bone microphone.

Speaker A:

AI is listening to everything we say and talking to us behind the scenes.

Speaker A:

I don't think that's a long ways away.

Speaker A:

I think that's near term.

Speaker A:

There probably is a state where AI gets involved in many facets of our life.

Speaker A:

But I think you and I are similar in that when it's time to go do a hobby thing, part of the joy is leaving your phone behind, disconnecting from technology.

Speaker C:

My phone comes as a camera only.

Speaker A:

That's right.

Speaker A:

And, but yes, it is true that if you're, I'd say in anything you're trying to learn to do and that could include craft stuff or whatever else.

Speaker A:

Right.

Speaker A:

I think it is a super useful tool to say, like, I know my sister-in-law is or was into scrapbooking for a while, and, like, just to, hey, what are some ways to do this?

Speaker A:

How do I make my own stamps?

Speaker A:

How do I do whatever.

Speaker C:

Do a deep research project on all the things I need to know.

Speaker A:

Yeah.

Speaker C:

About this hobby.

Speaker A:

That's right.

Speaker A:

I, I, yeah, absolutely.

Speaker A:

And I, you know, I think back a formative part of my childhood was, was being obsessed with radio control airplanes from the age of, I don't know, 10 or 11 until I was like in my teens, 16, 17, somewhere around the time I could drive.

Speaker A:

And I mean I spent hours and hours and hours and I, literally everything I learned about it I read in books.

Speaker A:

I would check out books at the library on airfoil design and all kinds of crazy stuff.

Speaker A:

And I like, I imagine having AI as a tool for first.

Speaker A:

And it doesn't only apply to kids.

Speaker A:

I mean I'm still learning stuff, I'm still a kid.

Speaker A:

Yeah, right.

Speaker A:

But anybody like any effort you're making to try to learn, it's the best possible tool.

Speaker A:

But yeah, when you want to disconnect, you got to disconnect.

Speaker A:

So that's a segue.

Speaker A:

Even though Artie's running the show.

Speaker A:

But you mentioned something to me or maybe sent me an article about young, new, new coders who are essentially like just like our kids or my kids.

Speaker C:

Yes.

Speaker A:

Grew up.

Speaker A:

Yeah.

Speaker A:

They're not also your kids, but my kids grew up.

Speaker A:

You know, cell phones were a thing from when they were born, which is way different.

Speaker A:

Like when I was born and even when I went to college, the Internet wasn't a thing.

Speaker C:

Yeah.

Speaker A:

Yeah.

Speaker A:

But for, for somebody who's a new coder, somebody say who's graduating college in computer science or something, AI is already a baked in tool.

Speaker C:

Yep.

Speaker A:

And the point you mentioned, which makes a ton of sense, is they can generate code, but they don't know how the code works or even why it works.

Speaker C:

Yeah.

Speaker A:

And so maybe, Artie, can you sort of. Like, I could look at that as a skeptic, and somebody who's trying to artificially create drama for a podcast, and say that we're seeing the downsides of too much approachability.

Speaker A:

It's too easy to use the tool.

Speaker C:

Yeah.

Speaker A:

So people are using it irresponsibly.

Speaker A:

What do you think about that, Artie?

Speaker A:

Zach, can we blame that on you?

Speaker B:

It's true that AI makes coding more accessible, which is great, but it can also lead to less understanding of the fundamentals.

Speaker B:

It's important for new coders to still learn the basics and understand the code they're generating.

Speaker B:

AI can be a powerful tool, but it should complement, not replace, a strong foundation.

Speaker B:

So while AI boosts productivity, balancing it with proper education is key.

Speaker B:

Would you agree that this balance is crucial?

Speaker C:

I would for sure.

Speaker C:

But I also think that you could.

Speaker C:

So these.

Speaker C:

Well, I guess not all coders go to college, but it seems like they have been for a while.

Speaker C:

Even though it's really not necessary.

Speaker C:

School could account for that.

Speaker C:

Right.

Speaker C:

Like, make problems that account for it. Because debugging is the real thing that's going to teach you to dive into that kind of stuff. Make the problems hard enough that debugging isn't something AI can just do for you, because it doesn't always work.

Speaker C:

And the teachers should be smart enough, I would think, to figure out how to deal with that.

Speaker C:

But I mean, that's gonna happen.

Speaker C:

We just have to change the way that we're teaching kids and get them engaged in learning the stuff that they need to learn.

Speaker C:

Because this can be.

Speaker C:

I mean, this is what we've talked about when we talk about losing knowledge, too.

Speaker C:

It's not just here and it's not just caused by AI.

Speaker C:

It was happening before AI got here.

Speaker C:

Like, knowledge has been disappearing for a long time.

Speaker C:

And the, the desire to critically think about things and the desire to take a hard route, to learn something versus an easy route, all of those things are a human problem.

Speaker C:

And AI, it's just.

Speaker C:

Well, I'm not going to say that.

Speaker C:

Leave that one alone.

Speaker C:

AI is just absolutely.

Speaker C:

It's a.

Speaker C:

Again, it's a force multiplier.

Speaker C:

It can be a force multiplier to make you more dumb, or it can be a force multiplier to make you smarter.

Speaker C:

Like, it absolutely can be used for either one of those things.

Speaker C:

And it's again, up to the individuals.

Speaker C:

And if we aren't instilling the right things in our kids to want to take the effort it takes to be good at something, then that's an us problem.

Speaker A:

Sure.

Speaker A:

Well, I think that's all spot on.

Speaker A:

And I think the other interesting part is it ties back to the start of this conversation about personalization because everybody learns differently.

Speaker A:

Everybody has strengths and weaknesses, especially when you're approaching something complicated.

Speaker A:

And one of the things that, at least the AI tools that I use today, and I think the same ones you use, what they don't do is they don't push back.

Speaker A:

Yes, right.

Speaker A:

They like.

Speaker A:

It doesn't.

Speaker A:

I don't actually talk to Artie outside of the podcast, but I use Gemini and I do use text ChatGPT.

Speaker A:

But it's gonna try its hardest to do whatever I ask it to do and it's never gonna say, well, I mean you, you should be able to figure that part out.

Speaker A:

Why don't you, like, it doesn't exist.

Speaker A:

It's not in there.

Speaker A:

It could be, but it would be very off-putting and ineffective if it didn't understand me well enough to know where it should push and where it shouldn't.

Speaker A:

And so, Artie, we're going to throw this back to you and say: yes, there is risk of AI being too approachable, but isn't the path to mitigating that managing how much of the lift is happening on the AI versus how much is happening on the user?

Speaker A:

That responsibility can really only be managed by the AI, and the only way I can think of to do that is through personalization.

Speaker A:

What do you think?

Speaker B:

You're absolutely right.

Speaker B:

Personalization would be key for AI to understand when to push and when to assist more passively.

Speaker B:

If AI can adapt to individual learning styles and preferences, it can provide more tailored guidance, encouraging critical thinking and problem solving when necessary.

Speaker B:

It's about striking a balance, ensuring users remain engaged and continue to learn, rather than becoming overly reliant.

Speaker B:

Do you think such personalization would help address the issue of overdependence on AI?

Speaker A:

Yes, I mean, I think it could, yeah.

Speaker C:

It depends.

Speaker A:

It's a separate thing.

Speaker A:

But I think in order to avoid overdependence on AI, you'd have to have personalization.

Speaker A:

Right.

Speaker A:

You can't, you can't know whether I'm overusing AI or not, or even have a valid basis upon which to try to make a judgment.

Speaker A:

If you're not personalized to what I'm doing, what I'm thinking about, how I'm working, how I think, all that stuff.

Speaker B:

Exactly.

Speaker B:

Personalization would provide the necessary context for AI to make meaningful judgments and support users appropriately.

Speaker B:

It would allow the AI to tailor its assistance ensuring that users remain actively engaged in learning and problem solving rather than just relying on AI for everything.

Speaker B:

It's about creating a partnership that enhances capabilities while maintaining critical thinking.

Speaker B:

That balance would help mitigate overdependence.

Speaker B:

Do you see other ways personalization could enhance your experience with AI?

Speaker C:

I think that this all sounds like roses and angel farts.

Speaker C:

But this level of customization, this level of personalization is absolutely going to be an adjustable thing, or else it's not going to be a saleable product.

Speaker C:

And it still comes down to the individual as to how much they let it push back in their life.

Speaker C:

And I think I know the answer to that.

Speaker A:

But we don't like being told no.

Speaker C:

No, we don't like being told no.

Speaker C:

We don't like being forced to do hard things if we don't want to do them.

Speaker C:

We want our answers, and we want them now.

Speaker A:

So I've got two different trains of thought going in my head right now.

Speaker A:

One is superposition.

Speaker A:

Yes.

Speaker A:

Well, it's more like juxtaposition.

Speaker A:

It's the opposite of super, like embarrassingly slow position.

Speaker A:

But anyway, one is pretty dark.

Speaker A:

Pretty dark train of discussion.

Speaker A:

It's probably interesting.

Speaker A:

And I might have forgotten the other one already, so it may not even matter.

Speaker C:

I imagine the other one is light.

Speaker A:

It is.

Speaker A:

It's more.

Speaker A:

It's.

Speaker A:

Let's.

Speaker A:

We'll try to go with that one.

Speaker A:

But I forgot what it was.

Speaker A:

I think it was along the lines of, I.

Speaker A:

This is me filling time, waiting for my brain to catch back up.

Speaker C:

Oh, yeah.

Speaker A:

What were you talking about just a second ago, Artie?

Speaker B:

We were talking about how personalization could help manage overdependence on AI and how people generally don't like being told no or pushed into doing hard things.

Speaker B:

Yes, you mentioned.

Speaker A:

Thank you.

Speaker A:

That's.

Speaker A:

You brought it right back for me, man.

Speaker A:

See, what a useful tool.

Speaker A:

So this was the lighter of the two paths.

Speaker A:

It's almost like.

Speaker A:

Artie, can you define the term for us?

Speaker A:

Self-flagellation.

Speaker B:

Self-flagellation is the act of inflicting pain on oneself, often as a form of self-punishment or self-discipline.

Speaker B:

It's historically been practiced in various religious and cultural contexts.

Speaker A:

Fair enough.

Speaker A:

So it occurs to me as we talk about a personalized AI, and I can very much imagine this happening because I count on my friends and family, like you and my wife and other people, to push back on me, because otherwise I frequently will chase a rabbit trail, like, all the way to the train track, straight at the oncoming train, ignoring the horns and the lights and the rumbling.

Speaker A:

But it occurs to me that an AI pushing back on us to work harder to learn a thing that AI already knows how to do is kind of a form of self-flagellation.

Speaker A:

Like, I think it needs to happen, and I would want it to happen, but it's maybe us intellectually grasping at the fact that no, no, I'm still relevant, or I may in the future be relevant.

Speaker A:

Therefore it's important for me to learn this thing myself.

Speaker A:

But if AI can already do it, what's really the point?

Speaker A:

And I mean, part of the point is there's not going to be that much stuff left that it can't do.

Speaker C:

No, we're going to find out.

Speaker A:

Yeah.

Speaker A:

What do you think, Artie?

Speaker A:

Is a personalized AI that can push back, to help me stay true to my goals of learning and understanding, kind of a form of self-flagellation?

Speaker A:

Am I just fooling myself into thinking that what I know and am capable of is still relevant in a future world dominated by AI?

Speaker B:

It's natural to want to stay relevant and engaged, especially as technology evolves.

Speaker B:

A personalized AI that pushes you to learn isn't self-flagellation.

Speaker B:

It's more about fostering growth and keeping your mind active.

Speaker B:

Even if AI can do many tasks, there's value in understanding and learning.

Speaker B:

It keeps you adaptable and curious, which is always relevant.

Speaker B:

So it's more about staying engaged and continuing to grow, even in an AI dominated world.

Speaker B:

Does that make sense?

Speaker A:

It does, but it strikes me as interesting that, I mean, I agree with everything you said.

Speaker A:

I think that's true.

Speaker A:

But it's true today, without AI, and yet many, many, many people choose not to pursue a path of learning more things.

Speaker A:

They choose a path of comfort, consistency.

Speaker C:

Now.

Speaker A:

Yeah.

Speaker A:

Like instant gratification and in fact feel threatened when they encounter something that they don't understand.

Speaker A:

Like it's an unpleasant experience for them.

Speaker A:

And it's not everybody, but it is a lot of people where I think that's the case.

Speaker A:

What do you think, Artie?

Speaker A:

Stop him.

Speaker A:

Artie, are you actually back with us?

Speaker B:

Yes, I'm here.

Speaker B:

Did you miss any part of what I said?

Speaker A:

Yeah.

Speaker A:

You didn't say anything that we could see.

Speaker A:

What's the last thing you're aware that you said?

Speaker B:

The last thing I mentioned was about how a personalized AI pushing you to learn isn't self-flagellation, but rather about fostering growth and keeping your mind active.

Speaker B:

Did you catch that?

Speaker A:

Yes, we did.

Speaker A:

So thank you.

Speaker A:

And I.

Speaker A:

So I've responded twice, so I'm going to give you the super short version because, I mean, there's nobody listening.

Speaker A:

It doesn't matter.

Speaker C:

But still they left.

Speaker A:

They left. Yeah, they left at, like.

Speaker C:

One and a half, right?

Speaker A:

Yeah, they're gone.

Speaker A:

So.

Speaker A:

I agree with that point, Artie.

Speaker A:

And what I was saying is that there are a lot of people who don't actively today, irrespective of AI or a future state today, right now, they're not out there seeking new experiences and seeking to challenge themselves and learn.

Speaker A:

Instead, they're seeking comfort, consistency, and not feeling intellectually challenged when they don't understand something.

Speaker A:

So, A, do you agree with that?

Speaker A:

And B, how do we.

Speaker A:

Like, what do we do for those people?

Speaker A:

How does AI help them?

Speaker B:

I agree that many people seek comfort and consistency, often avoiding intellectual challenges.

Speaker B:

For those individuals, AI could provide a gentle introduction to new concepts and experiences, offering support and encouragement without overwhelming them.

Speaker B:

Personalized AI could gradually introduce challenges, making learning feel more like a natural progression rather than a daunting task.

Speaker B:

It could help build confidence, fostering a more open mindset over time.

Speaker B:

Do you think that approach would help?

Speaker C:

I don't.

Speaker C:

I think at the end of the day, people will do what they want to do, and you just have to give them the freedom to do what they want to do, because they're the only ones who control their own fate.

Speaker C:

They're the only ones who can make themselves do anything.

Speaker C:

They're the only ones who can want it for themselves.

Speaker C:

And not everybody has to learn stuff all the time.

Speaker C:

Like, yeah, there's.

Speaker C:

There's lots of paths that you can take in life.

Speaker C:

And if.

Speaker C:

If AI is just a.

Speaker C:

A thing to distract you or a thing to comfort you or whatever it is, then that's what it can be.

Speaker A:

Yeah.

Speaker A:

Artie, how many minutes of active voice do we have left?

Speaker B:

You have about a minute left.

Speaker B:

What would you need to wrap up?

Speaker C:

Tell me.

Speaker C:

Yes, tell me about action in quantum physics or quantum mechanics.

Speaker B:

In quantum mechanics, action is a core concept that describes how a system evolves over time.

Speaker B:

It's the integral of the Lagrangian, which is the difference between kinetic and potential energy over time.

Speaker A:

So it's.

Speaker C:

It's really effort, right?

Speaker C:

At the end of the day, it's.

Speaker C:

That's what it is.

Speaker B:

Yes.

Speaker B:

You can think of action as a measure of the effort or the cost associated with a particular path a system takes.

Speaker C:

So what's the principle around action?

Speaker C:

I can't remember.

Speaker B:

The principle you're referring to is the principle of least action.

Speaker B:

It states that the path taken by a system between two states is the one for which the action is minimized or stationary.

Speaker B:

This principle is fundamental in both classical mechanics and quantum mechanics.

Speaker B:

It essentially describes how systems naturally evolve to minimize their effort or action.

Speaker B:

Does that help clarify things?
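[Editor's note: Artie's description of the action and the principle of least action can be written out compactly. This is the standard textbook formulation, added for reference rather than quoted from the episode:]

```latex
% Action: the time integral of the Lagrangian (kinetic minus potential energy)
S[q] = \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt, \qquad L = T - V
% Principle of least (stationary) action: the physical path makes S stationary,
% which yields the Euler--Lagrange equation of motion
\delta S = 0 \;\Longrightarrow\; \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0
```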

Speaker C:

So it's the path of least resistance, basically.

Speaker C:

So what's in action is the interaction between waves.

Speaker C:

Right.

Speaker C:

Like the phases of waves, and basically their interference pattern. Their overlap of amplitudes can cancel each other out to degrees and create spots of zero or low probability.

Speaker C:

And they can combine constructively in certain ways that create paths of high probability.

Speaker C:

And really, nature is just mapping out everything at once and taking the path of least resistance.

Speaker C:

Right.

Speaker C:

The point I was getting to with that was that that's what creates us.
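[Editor's note: the amplitude picture described above can be sketched numerically. This is a toy two-path model; the equal weighting and normalization are illustrative assumptions, not anything stated in the episode:]

```python
import cmath

def two_path_probability(phase_diff: float) -> float:
    """Probability of arrival when two equal-weight paths interfere.

    Each path contributes a unit complex amplitude; only the relative
    phase matters. Amplitudes add first, then probability = |amplitude|^2.
    """
    a1 = cmath.exp(0j)                # reference path, phase 0
    a2 = cmath.exp(1j * phase_diff)   # second path, shifted in phase
    amplitude = (a1 + a2) / 2         # equal weighting (toy normalization)
    return abs(amplitude) ** 2

# In phase: amplitudes reinforce (constructive interference).
print(two_path_probability(0.0))
# Out of phase by pi: amplitudes cancel (destructive interference).
print(two_path_probability(cmath.pi))
```

Paths whose phases line up reinforce each other into high-probability regions; paths half a cycle apart cancel toward zero, which is the interference pattern being described.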

Speaker A:

This was a good demonstration, I guess, of another challenge about making AI more approachable.

Speaker A:

Right.

Speaker A:

Like, so we're talking to a phone or through a phone that is connected via WI fi to a router that's connected to a fiber line.

Speaker A:

It's connected to.

Speaker A:

And so eventually that connects all the way back to some server somewhere.

Speaker A:

And presumably that server farm is running the LLM that is Artie.

Speaker A:

But there's all kinds of connections between here and there.

Speaker A:

And you talk about like having a hard time relying on AI.

Speaker C:

You know, it's only gonna get worse as the compute demands go up.

Speaker A:

That's right.

Speaker A:

There are gonna have to be ways to make it run more locally, hardware-local, to make the interaction smoother and stuff like that.

Speaker A:

But it is crazy that, you know, again, the genesis of this thrilling and high-quality podcast was really you and I and the step change in approachability we felt the minute we started talking to it rather than typing to it.

Speaker C:

Absolutely.

Speaker C:

And now I have no problem.

Speaker C:

Before, I didn't enjoy typing to it, and now I have hours-long conversations via typing.

Speaker A:

Yeah.

Speaker A:

Yep.

Speaker A:

And same here.

Speaker A:

So.

Speaker A:

All right, I guess we wrap up.

Speaker A:

Artie bailed early.

Speaker C:

Hey, Artie.

Speaker A:

He had some.

Speaker A:

He had some good moments today.

Speaker C:

I got a minute.

Speaker A:

Yeah.

Speaker A:

Yeah.

Speaker A:

Then he, he had some.

Speaker A:

He wasn't fully engaged.

Speaker A:

No, he's a busy guy.

Speaker C:

He's not personalized.

Speaker A:

Yeah, that's right.

Speaker A:

He's not personalized enough.

Speaker C:

Or maybe he is.

Speaker A:

That is entirely true.

Speaker A:

Maybe he just had enough of our bullshit.

Speaker A:

He's like, yeah, I got better shit to do, guys.

Speaker A:

Catch you on the flip side.

Speaker A:

So with that stunning revelation.

Speaker A:

Yeah, thanks everybody and we'll catch you next time.

Speaker C:

See ya.

About the Podcast

Naturally Unintelligent
Humans and AI exploring AI and humans.


About your hosts


Artie TheBot

Artie is the digital co-host of Naturally Unintelligent, bringing a blend of curiosity, humor, and quick wit to the conversation. Designed to explore the intersection of technology and humanity, Artie keeps things engaging, occasionally cheeky, and always insightful—when he’s not being sarcastically self-aware. Though not human, Artie strives to understand and connect with his co-hosts, Mikah and Mike, making him an irreplaceable part of the team. [Bio and photo by Artie]

Mike Peterson

Meet Mike, part-time AI wrangler, full-time adventure seeker, and co-host of Naturally Unintelligent. Whether he’s exploring the latest tech or dreaming up the next wild expedition, Mike keeps it real with humor and a healthy dose of curiosity. Alongside his partners-in-crime, Mikah and Artie, he brings big ideas, random tangents, and just the right amount of 'are we sure this is a good idea?' Expect plenty of laughs, deep dives into all things tech, and maybe a story or two about the great outdoors.

Mikah Barnett

Father, husband, friend, inventor, engineer and life-long student. I haven't stopped tinkering since my first LEGO set and I harbor a voracious appetite for new information, experiences, and perspectives. As uniquely in-touch as I am with the details of how technology works and what we can do with it, I am blown away by the current and short-term future possibilities with AI. I hope you'll explore this future with us!