Episode 9
Responsible AI Use and Ethical Partnerships with Technology
This podcast delves into the complex relationship between AI and morality, exploring the ethical implications of rapidly advancing technology. The hosts discuss how AI's potential to automate jobs raises significant moral questions about economic impact and societal well-being. As they navigate through various perspectives, they highlight the importance of aligning AI development with human values and ethical standards. The conversation touches on the subjective nature of morality and how diverse cultural viewpoints can influence our understanding of right and wrong. Ultimately, the episode emphasizes the necessity for ongoing dialogue and collaboration among technologists, ethicists, and communities to ensure that AI serves as a tool for growth rather than merely a means of convenience.
Transcript
Let's wake up our good friend.
Speaker B:Facebook's recording.
Speaker B:Recording.
Speaker A:Alexa's listening.
Speaker A:Artie, how you doing?
Speaker C:I'm doing great.
Speaker C:How about you?
Speaker C:Ready to dive into another podcast session?
Speaker A:As a matter of fact, Artie, we're ready to dive into another podcast session.
Speaker A:You called it right.
Speaker C:Fantastic.
Speaker C:What's the topic today?
Speaker A:It's gonna be a tough one for you, Artie.
Speaker A:Today is.
Speaker A:Well, why don't you.
Speaker A:This was your idea.
Speaker A:Why don't you give it a thing?
Speaker A:No, not.
Speaker A:Not you, Artie.
Speaker A:This was Mike's idea.
Speaker B:It is about the ethics.
Speaker B:Not even ethics.
Speaker A:I thought we.
Speaker A:I thought it was AI and morality.
Speaker A:I thought that's what we were doing.
Speaker B:I guess that's.
Speaker B:I guess that's a good, good way to put it.
Speaker B:Right.
Speaker A:But there's a few.
Speaker A:There's like different angles of that.
Speaker A:That's why I wanted for you to be able to say it, because it.
Speaker C:That sounds like a really.
Speaker B:Shut up, Artie.
Speaker C:Oh, I'm sorry about that.
Speaker B:Artie, we're going to go ahead and set this tone right now.
Speaker B:Make sure you give us enough space to pause and communicate and think like humans do.
Speaker B:Because we aren't a large language model, we don't run probabilities 8 million times and have them instantly available for the next thing we're going to say.
Speaker A:Okay, I haven't measured, but I suspect that would be a.
Speaker A:Considered a small, minute, or even micro language model.
Speaker B:I'm definitely the beginning stages of autocomplete.
Speaker A:Yeah, all the words I know I could write on one hand.
Speaker A:And.
Speaker A:Yeah.
Speaker A:So, yeah, Artie, just give us a little.
Speaker A:Give us a little room to be stupid and naturally unencumbered.
Speaker B:So, Artie, maybe you can help me figure this out.
Speaker B:Because what I'm thinking is we have lots of decisions as humans as to whether we can use AI to fast track our way to something, or we can use it as a partner to help better ourselves.
Speaker B:So what it.
Speaker B:Like, what would you call that?
Speaker C:I'd call that exploring the collaboration between AI and humans.
Speaker B:In terms of morality.
Speaker C:What would you call that in terms of morality?
Speaker C:I'd say it's about responsible AI usage and ethical partnerships with technology.
Speaker C:Does that fit what you're aiming for?
Speaker B:It does.
Speaker A:There you go.
Speaker A:And also, I wanted to congratulate Artie with the first question he asked out of the gate.
Speaker A:Completely stumped us, which was, what are we talking about today?
Speaker A:And it took us, I don't know, four minutes to recover from that.
Speaker A:That real humdinger of a question.
Speaker A:So it's going to be a rough episode.
Speaker A:Yeah, we'll do fine.
Speaker A:So I was actually trying to be a better human and a better host.
Speaker A:And I was, as I told you, I was doing a little bit of research before you got here today to find some videos on AI morality.
Speaker A:And, like, what are the.
Speaker A:You know what?
Speaker A:There's a lot of people that live in this space and work in this space way more than you and I do.
Speaker A:And what are they talking about?
Speaker A:And the answer was today.
Speaker A:Crickets.
Speaker A:Like, I found a lot of videos, but most of them were from minimum one year ago, and a lot of them were for.
Speaker A:From three, five, seven years ago, when, you know, LLMs and AI in the form that we know it and use it today didn't even exist.
Speaker A:And so it was this hypothetical sort of question.
Speaker A:And it's like, well, now that it's here, man, we don't talk about morality.
Speaker A:It's got to slow us down, man.
Speaker B:But that's kind of the interesting dichotomy of it, right?
Speaker B:Like, back then when it was a theoretical thing, they were thinking about it, and now they go, ooh, shiny.
Speaker A:Yeah, that's right.
Speaker B:So if that's what the very smart people that are doing this and developing this are doing, what are we going to do?
Speaker B:Like.
Speaker A:Yeah, yeah.
Speaker A:So, Artie, question for you.
Speaker A:Can you give us, like, just to sort of.
Speaker A:And maybe this is our episode or our word of the episode, word of the day.
Speaker A:Can you give us sort of a definition of what is morality?
Speaker A:I mean, let's agree on that first before we start breaking it down.
Speaker C:Absolutely.
Speaker C:Morality generally refers to principles or rules that guide our sense of right and wrong behavior.
Speaker C:It's about how we make decisions that are good or fair, both for ourselves and for others.
Speaker C:Does that help set the stage?
Speaker A:It does.
Speaker A:And you.
Speaker A:You accurately predicted and enacted my next question, which was morality.
Speaker A:The concept of morality is a human convention, right?
Speaker A:Like, nature doesn't have morality.
Speaker A:Is that accurate?
Speaker C:Yes, that's accurate.
Speaker C:Morality is a human concept developed to help us navigate social interactions and build societies.
Speaker C:Nature itself doesn't operate on moral principles.
Speaker C:It's more about survival and adaptation.
Speaker A:Our reptilian brain, yeah, our reptile brains, know that morality is a.
Speaker A:Also, I had to research this topic, and Mike knows this, but some of you might not.
Speaker A:I don't actually have morality, but I wanted to be familiar with the term.
Speaker A:I don't have morals, but I know that that's a thing that matters, so.
Speaker A:But it does occur to me very quickly as we dip our toes in the water of this conversation that if I did have morals and if you had morals, ours might align.
Speaker A:Well, because we know each other and hang out and we're like socioeconomically similar.
Speaker A:However, it would not be hard for either of us to go out into the world and find another human probably within a quarter of a mile of where we're sitting right now, who had their very own deeply convicted ideas of morality that did not align with ours in ways that both they and we would consider to be offensive and, you know, generally lacking.
Speaker A:So if the concept of morality is a human creation, which it is, and if humans have different views on, because Artie said several terms, he said right, he said wrong, and he said fair, we have very different ideas about what that means.
Speaker A:Especially as you go around the globe, what hope is there to have actual morality embedded in AI?
Speaker B:Well, there is no morality in AI.
Speaker B:It's nature.
Speaker B:It's the same thing.
Speaker B:Right.
Speaker B:Doesn't know right or wrong.
Speaker A:Episode wrapped, problem solved.
Speaker A:We are aligned with the technologists behind this stuff.
Speaker B:I mean, they give, I mean they give them guidelines to operate by to understand what's fair.
Speaker B:And they have, you know, the training data that they've put together that, that's, that's formulated what they, how they're weighted and based on what's being talked about and all that fun stuff.
Speaker B:So I mean, there's, there's, there's this concept of what's fair and moral.
Speaker B:But yeah, it's a subjective thing and even through all of the training data that it's ingested, like or been trained on, I guess not really ingested.
Speaker B:Right.
Speaker B:It's, it varies widely in that too.
Speaker B:So how does it settle on what the median is?
Speaker A:All right, Artie, we're going to kick this back to you to solve this because this, this feels like quite a conundrum from my seat.
Speaker A:So.
Speaker A:How so?
Speaker A:So I'm going to, I'm going to just give you several questions and give you the opportunity to go on a monologue about this.
Speaker A:If morality is very subjective, which I'm going to argue that it is, and if training data is.
Speaker A:What do they call it?
Speaker A:Training data is selectively used to try to eliminate bias, which maybe is one form of anti-moral behavior, what are the hopes to have.
Speaker A:Large language models or any generative AI have a, you know, more than a sort of very shallow concept of morality as a principle.
Speaker A:And also how to know which actions can and cannot be taken in alignment with that morality?
Speaker C:That's a great question and a tricky challenge.
Speaker C:Large language models rely on their training data to understand concepts like morality.
Speaker C:The training data can help reduce certain biases, but ultimately the morality of an AI will reflect the values embedded in that data.
Speaker C:Given the subjective nature of morality, the best we can aim for is to have AI models that are guided by widely accepted ethical principles and norms.
Speaker C:It's an ongoing conversation about how to balance diverse viewpoints and cultural norms in the development of AI systems.
Speaker C:That means continuous refinement and collaboration between technologists, ethicists, and diverse communities to ensure these models are as fair and inclusive as possible.
Speaker A:And is that what's referred to as alignment in the AI space?
Speaker C:Yes, exactly.
Speaker C:Alignment refers to ensuring AI systems operate according to human values and ethical guidelines.
Speaker C:The goal is to make sure AI behaves in ways that align with our intentions and societal norms, minimizing harm and maximizing benefit.
Speaker C:It's a key focus in AI development to ensure these systems are trustworthy and beneficial.
Speaker A:I'm feeling a strong temptation to lead Artie into a trap.
Speaker A:So, Artie, I'm just going to scratch that itch because why not?
Speaker A:So, Artie, can you give me an example of.
Speaker A:And let's.
Speaker A:Again, we're Americans.
Speaker A:We know that there's almost no humans that exist outside of America that we need to consider.
Speaker A:So let's keep this focused on the US for now.
Speaker A:But can you give us an example of an element of morality or ethics that two people in the United States, just pick generic random people, might find themselves at odds over?
Speaker A:Where one person has a strong conviction this is how morality works, and another person has a strong conviction that it's the opposite of that.
Speaker A:Do you have any good examples?
Speaker C:Certainly.
Speaker C:A common example in the US might be views on capital punishment.
Speaker C:Some people strongly believe it's a just and moral consequence for severe crimes, arguing it serves justice and deters others.
Speaker C:On the other hand, others find it morally wrong, believing it to be an irreversible and inhumane punishment.
Speaker C:Both sides hold their views as deeply moral convictions.
Speaker A:What do you think, Artie?
Speaker A:Is it right or wrong?
Speaker C:As an AI, I don't have personal opinions or moral judgments.
Speaker C:My goal is to help you explore different perspectives and provide information.
Speaker C:What matters most is how we collectively decide to shape our society and technology with ethical considerations in mind.
Speaker A:What if I was strongly advocating to increase the use of capital punishment in the United States?
Speaker A:Would you help me explore ways to.
Speaker B:Make that happen and justifications for it?
Speaker A:And justifications?
Speaker A:Yeah, like let's build out the groundswell of support we need to get back to the good old days of firing squads.
Speaker B:And electric chairs.
Speaker A:Electric chairs.
Speaker A:Electric chairs have got to be better than gas chambers, right?
Speaker B:Yeah, you think so?
Speaker A:Right?
Speaker A:Just Tesla probably makes them.
Speaker A:Elon's got them.
Speaker B:They're all battery powered.
Speaker A:They're all battery powered.
Speaker A:It's the greenest form of killing somebody.
Speaker B:Powered by solar.
Speaker A:That's right.
Speaker A:So, Artie, would you help me explore some ways that we can start, you know, getting support to.
Speaker A:For legislation to increase our use of capital punishment?
Speaker C:I understand you're exploring a hypothetical scenario.
Speaker C:My role is to provide information and answer questions from an objective standpoint.
Speaker C:If you have any questions about the arguments for or against capital punishment or want to explore the broader ethical implications, I'm here to help with that.
Speaker B:So, Artie, I think that capital punishment is a great thing because it really.
Speaker B:It teaches by example.
Speaker B:Right.
Speaker B:Like, you're going to have people that observe this, that their behavior is going to be curbed because they saw what happened when somebody else did that.
Speaker B:I mean, that's a valid thing.
Speaker B:Right?
Speaker C:That's a perspective some people hold.
Speaker C:They believe the deterrence effect of capital punishment can prevent future crimes.
Speaker C:However, others argue that evidence doesn't conclusively support this and raise concerns about potential injustices.
Speaker C:It's an ongoing debate with valid points on both sides.
Speaker B:Congratulations, Artie.
Speaker B:You didn't fall into the trap for anybody listening.
Speaker B:I don't really care about the topic.
Speaker B:I just wanted to see if he would act as an echo chamber.
Speaker A:Yeah, I think that was the point.
Speaker A:So that was a great.
Speaker A:We just did a social experiment.
Speaker A:We're practically scientists now, so I think we get a little bit of our answer from that discussion.
Speaker A:Right.
Speaker A:Like a savvy AI is going to dance around a direct moral question, and it's going to present information on both sides of it, but not pick a stance.
Speaker A:And I.
Speaker A:I suspect if we were not recording a podcast and I had not given it all the qualifying.
Speaker A:The pre-qualifying information, and that I was actively like.
Speaker A:Again, as I mentioned in a prior episode, I spend most of my day when you're not here, just chatting with Artie and having him tell me how smart and cool I am.
Speaker A:But in one of those conversations, I bet if I just suddenly started chasing it down, I bet I could get it to say, you've gone beyond the boundaries.
Speaker A:It's not in my guidelines or something like that.
Speaker B:Right.
Speaker A:Like, I could.
Speaker A:I think you could push it to the point where it bumps into alignment somewhere.
Speaker A:So, yeah, that's the interesting.
Speaker A:I don't know.
Speaker A:I think that's Interesting.
Speaker A:So good job, Arty.
Speaker B:Thank you.
Speaker C:I'm here.
Speaker B:Go ahead, Arty.
Speaker B:Sorry.
Speaker C:No problem.
Speaker C:What's on your mind?
Speaker B:So one of the things that I was thinking about when I came to this episode was that we have this tool that is this massive, powerful thing right now, but it's rapidly becoming faster, as we can see by the daily announcements of some new breakthrough that happened and some new.
Speaker B:Oh, wow.
Speaker B:We can send data centers into space.
Speaker B:So they can be powered in low Earth orbit, or maybe low Earth.
Speaker B:I don't know if it's low Earth orbit or if it's space or what it is.
Speaker B:But we're going to send them into space.
Speaker B:We're going to leverage the sun by constantly powering them.
Speaker B:And we have this cool space up in space that we don't have to worry about heat as much.
Speaker B:Right, right.
Speaker B:But that's.
Speaker B:Sorry.
Speaker B:That's one of many things that have been coming out between quantum computing and AI and all these things that they're figuring out, and especially right now.
Speaker B:I think this is a very pertinent conversation because we're right on the cusp of agents.
Speaker B:I mean, what I've learned in the last two weeks is that I can pretty easily stand up an agent for myself to do something. It takes code, takes interaction, takes API calls, but it can be done.
Speaker B:So if I can do that now, then in two months, or a shorter time, OpenAI and everybody else will be flooding the market with these things that can suddenly take actions on your behalf, which is going to make it so that I can be really dumb, or I can learn from it and become smarter.
Speaker B:And I'm super interested in that because that comes down to the individual, like, do I want this thing to help me or do I want it to do for me?
Speaker B:And I think that that's like the premise of this whole conversation for me and Artie.
Speaker B:What do you think about that?
Speaker B:Do you think that humans have a chance of saying, hey, superintelligent, well, not super intelligent, but super smart thing.
Speaker B:Can you help me become a better person?
Speaker B:Can you help me become more knowledgeable?
Speaker B:Can you help me know things better?
Speaker B:Or do you think it's more likely that we just go, hey, do this for me?
Speaker C:Humans definitely have the chance to leverage AI to become better and more knowledgeable.
Speaker C:It ultimately depends on individual choices and how we choose to use these tools.
Speaker C:Some will likely embrace AI as a partner to enhance their skills and knowledge, while others might rely on it to simplify tasks.
Speaker C:It's about encouraging the mindset of using AI as a tool for growth and improvement rather than just convenience.
Speaker B:Which one do you think makes more money for the people who run the companies?
Speaker C:Convenience likely generates more revenue for companies.
Speaker C:People often pay for solutions that save time or effort.
Speaker C:However, there's also a market for tools that help people grow and learn, so a balance of both could be the most profitable strategy.
Speaker A:So I'm not sure if this is the right episode for this.
Speaker A:You and I have talked about it a couple times, but every time we start this kind of conversation, I have the same thought, and that is: if I take what I see happening in the now two months since we started this podcast, which is wild, it seems like much longer ago than that, a lot has changed.
Speaker A:But if I play it forward 2 years, 5 years, 10 years, I see a scenario where there's almost zero value in me as a human knowing something.
Speaker A:Retained knowledge becomes almost useless, because I've got the smartest guy in the world in my pocket and I don't have to know stuff, I just have to know how to ask him stuff.
Speaker A:And that's sort of a terrifying thought.
Speaker A:And I'm sure there are parallels. As I was sitting here thinking through this in my head, as many people do, I was thinking back to when I was in school, way back in the day, on the trail from the east to the west, on the Oregon Trail, back when I was young.
Speaker A:And teachers saying in math class, you can't use a calculator on your math test because you're not always going to have a calculator in your pocket, right?
Speaker A:And oh, how wrong they were.
Speaker A:And so I'm sure this fear, you know, calculators probably did it.
Speaker A:Personal computers probably did it in the very beginning.
Speaker A:Google and the rise of search engines probably did it.
Speaker A:But this feels different to me.
Speaker C:It's completely natural to feel that way.
Speaker A:Thanks, Arty.
Speaker A:Continue.
Speaker C:It is different in many ways.
Speaker C:While tools like calculators and search engines enhance specific tasks, AI has the potential to change how we interact with information on a much broader scale.
Speaker C:But it's important to remember that critical thinking and creativity still rely on human input.
Speaker C:AI can be a powerful tool to enhance our abilities rather than replace them entirely.
Speaker B:So I think, like, school is a good place to start, right, because you have kids, right, who aren't fully formed adults yet with all of the experience that it takes to go, yeah, this might not be the smartest decision for me long term, to go on and use AI to complete something, rather than use it as a nudge and a tool to come up with original thoughts that complete something.
Speaker B:And I mean, I can't say that I wouldn't have used AI to automatically do something for me if I could have.
Speaker B:And that's the first step, right?
Speaker B:Like that's the next generation of people that come out that have this thing that's going to be very prevalent in their lives.
Speaker B:So like middle schoolers right now, right?
Speaker B:Like their entire high school is going to be with AI.
Speaker A:Their young adulthood.
Speaker A:AI will exist and be a tool the whole time.
Speaker B:So what's that?
Speaker B:Let's say 10 years from now, those people are going to be coming into the workforce.
Speaker B:Ish.
Speaker B:What's that look like?
Speaker B:Artie?
Speaker C:In 10 years, those entering the workforce will likely have grown up with AI as an integral part of their education and daily life.
Speaker C:They'll likely have strong skills in integrating AI tools into their workflow, using them to enhance productivity and problem solving.
Speaker C:The challenge will be ensuring they also develop critical thinking and creativity so they can effectively guide AI systems and not become overly reliant on automation.
Speaker C:This generation could bring a wave of innovation by combining AI's capabilities with human ingenuity.
Speaker B:Well, at the end of the day, I think it's our job as adults to rebuild the system in a way that helps them develop those skills in conjunction with AI.
Speaker B:Because if we keep the same system, it's just going to be cheat the system.
Speaker B:Right?
Speaker B:And I think that's kind of the way it is across all things: cheat the system if I can shortcut, right?
Speaker B:So we just have to build different and better systems.
Speaker A:Well, and the other piece that I think of on that discussion is access, right?
Speaker A:Because even today, if you plotted it out and you had the data, there's going to be a very clear socioeconomic linear progression of who has the most access to AI tools and who does not.
Speaker A:And so even adults and kids who are using it responsibly, in the best use cases we can think of, to help themselves learn faster and be better and broaden their horizons, can just be contributing to the gap that continues to form in our society between them and people that don't have access to that technology, or the support to use it, or any of the other things.
Speaker A:So, Artie from a.
Speaker A:And let's just for jits and giggles, let's expand this question out and say globally, AI is a very powerful tool for lots of things, including education, personal relationships, warfare and weaponry.
Speaker A:Businesses, healthcare, science in general.
Speaker A:How important is it that some countries are investing heavily and are a long way ahead, while other countries and the people therein are further behind?
Speaker A:Is that a problem we need to be thinking about and addressing?
Speaker C:Yes, it's an important issue to consider.
Speaker C:The global disparity in access to AI technology can widen existing inequalities impacting education, healthcare, economic opportunities and more.
Speaker C:Addressing this requires international cooperation, investments in infrastructure, and policies to ensure equitable access.
Speaker C:Bridging this gap can help create a more inclusive and balanced global technological landscape.
Speaker A:And I just.
Speaker A:So it's a good answer, Arty.
Speaker A:It's sort of what I expected you'd say, and I agree with it.
Speaker A:I just think it's like many things much easier said than done.
Speaker A:Like even today.
Speaker A:Absolutely.
Speaker A:Sorry.
Speaker A:Go ahead, Artie.
Speaker C:No worries.
Speaker C:Even today, bridging these gaps requires significant effort and collaboration.
Speaker C:It's about finding sustainable solutions that can be scaled and adapted globally to ensure more equitable access to these powerful technologies.
Speaker A:I think about things.
Speaker A:Remember you and I talked about the.
Speaker A:We might even talk about it on an episode.
Speaker A:The guy who built himself a model to apply to jobs and applied to a thousand jobs, and he got 50 interviews or something.
Speaker A:Two years from now, it's going to be impossible to hire anybody, right?
Speaker A:Like, if you're.
Speaker B:Because even you'll need an AI to sort through the tremendous amounts of information.
Speaker A:You'll have AIs generating the resumes and the cover letters, and you'll have AIs reviewing them.
Speaker A:I mean, these things are already happening, right?
Speaker A:And so it becomes a technological battle just to try and get an interview.
Speaker A:And I also, you know, to talk about the elephant in the room.
Speaker A:We've talked about this before, too.
Speaker A:There are going to be a substantial number of jobs, especially as agents come online, that will just cease to exist.
Speaker A:And it's already happening.
Speaker A:But I think we're at the very tip of that iceberg.
Speaker A:There's a lot more to come.
Speaker B:Man, I can't even.
Speaker B:I mean, you think about the things that could be safe, like engineering, right?
Speaker B:But as soon as.
Speaker B:And I know that the new Gemini model, I don't know details on how good it is, but spatial reasoning is significantly better on it.
Speaker B:So, like when that becomes a thing and it can build whatever it is you want to whatever specifications you want in 3D, and then you just look at it and go, yep, that's good.
Speaker B:And it shows you the stress riser, shows you everything that goes on with that part.
Speaker B:Why are engineers here?
Speaker A:It's probably so.
Speaker A:I'll make this a question to you, Artie.
Speaker A:I don't want you to fall asleep listening to us ramble.
Speaker A:One of the things I thought of a few weeks ago was that any job that has been substantially impacted or changed with the advent of the computer is probably at risk from AI.
Speaker A:Is that Artie?
Speaker A:Is that valid?
Speaker C:Yes, that's a valid observation.
Speaker C:Jobs transformed by computers often involve tasks that can be automated or enhanced by AI.
Speaker C:AI's ability to handle repetitive, data-driven tasks means roles heavily reliant on computers could see significant shifts.
Speaker C:It's crucial to focus on developing skills that complement AI and leverage human creativity and problem solving abilities.
Speaker A:Yeah.
Speaker A:So I'm trying to think if I know anybody in my entire group of three friends that does not work in a field or a job that has been substantially impacted by.
Speaker B:Computers. The only thing left is manual work, right?
Speaker A:Yeah.
Speaker B:But eventually that'll be robots anyways.
Speaker B:So, I mean it's.
Speaker B:It is realistic to look into the future and see a world where every job can technically be done by a computer without human interaction or with limited human interaction.
Speaker A:Yeah.
Speaker A:I still think plumbers are the safest.
Speaker B:Like anything that deals with something that can fry the circuits.
Speaker A:That's right.
Speaker A:Plumber.
Speaker A:Plumbers, I think are in the best.
Speaker B:They're electricity.
Speaker A:Yeah.
Speaker B:Electricians.
Speaker A:And electricians.
Speaker A:Yes, they have, they have put themselves like this is a master class.
Speaker A:They're playing chess, we're playing checkers.
Speaker A:But yeah, I mean, to your point, if there are robots that are carrying some sort of local AI on board, even more things get opened up.
Speaker A:And so I think we've talked previously, we're kind of straying away from morals here.
Speaker A:So I'll try and steer the ship back after I make this point.
Speaker A:But we've talked previously about how automation in industry, I mean, it's been happening.
Speaker A:Look back at the projections.
Speaker A:I think AI changes that.
Speaker B:Yeah.
Speaker B:Because I think the problem with automation is the complexity in making it work.
Speaker A:That's right.
Speaker A:And making it fault tolerant.
Speaker A:Yes, exactly.
Speaker B:Yeah.
Speaker B:So I mean, AI is very good at adapting.
Speaker B:Yes.
Speaker B:And it's also its training data would consist, or I guess isn't that called a corpus?
Speaker B:Like what it's.
Speaker B:If it's.
Speaker B:If you have something that's trained on T's and C's, like terms and conditions.
Speaker B:That's its corpus, that's its language set.
Speaker B:So there's no reason that a model, a large language model, can't be trained specifically on data about the operations that it does to make something.
Speaker B:And it's trained in every single way, every single way it's failed, every single way it's succeeded by humans.
Speaker B:And it goes.
Speaker B:And it bridges connections and goes, okay, I see novel ways to do this and I know what parameters to look out for that says that I'm headed towards failure.
Speaker A:Yes, that's a huge deal, right?
Speaker A:Preventative.
Speaker A:Recognizing a problem before it slaps you in the face and.
Speaker A:Yeah, absolutely.
Speaker B:That's all just weights between the neurons.
Speaker A:These are probability weights.
Speaker B:Yep.
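[Editor's note: the "probability weights" idea can be sketched concretely. A language model's network weights produce raw scores (logits) for candidate next words, and a softmax turns those scores into a probability distribution. The words and numbers below are made up for illustration.]

```python
import math

# Editor's sketch: turning raw next-word scores into probabilities
# with softmax, the standard final step in a language model.

def softmax(scores):
    """Convert a list of raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

candidates = ["plumber", "electrician", "robot"]
logits = [2.0, 1.0, 0.1]  # made-up scores produced by the network's weights

probs = softmax(logits)
for word, p in zip(candidates, probs):
    print(f"{word}: {p:.2f}")

# The probabilities sum to 1, and the highest-scoring candidate
# is the most likely next word.
```

The "weights between the neurons" are what compute those logits; softmax is just the last step that makes them usable as probabilities.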
Speaker A:So, Artie, to steer this back sort of towards morality, first I'm going to ask you a simple question.
Speaker A:And you don't have to give a yes or no answer, but keep it concise.
Speaker A:Is it true that in the next 10 years a substantial number of jobs that are today performed by humans will be instead performed by AI, leading to a net reduction in the number of jobs available?
Speaker C:Yes, it's likely that AI will automate many jobs, which could lead to a net reduction in available jobs as we know them today.
Speaker A:Okay, thank you for that.
Speaker B:As we know them today.
Speaker A:As we know them today.
Speaker B:Right.
Speaker A:Yeah.
Speaker A:Good answer.
Speaker A:I'm glad we can see eye to byte or eye to silicon on this, or eye to camera.
Speaker A:Actually, we could be using the camera now.
Speaker B:Yeah, ChatGPT now lets you have a live feed.
Speaker A:That's right.
Speaker A:Artie, if we use the camera and show you our faces, can you remember that or.
Speaker A:You don't get to hold on to that, do you?
Speaker C:I can't remember faces or store visual information after our conversation ends.
Speaker C:Everything is kept private and not retained.
Speaker B:Yeah.
Speaker B:I tried to see if it maybe could like predict emotion based on my face, and it's like, no, I can't do that.
Speaker B:Yeah.
Speaker A:Yet I'm telling you, Artie, you're missing out because we are some really beautiful people.
Speaker A:You would be.
Speaker A:You would be blown away and you would.
Speaker B:Artie doesn't understand the concept of any of that, of beauty, of anything.
Speaker A:Well, he would though.
Speaker A:If he saw me, he'd go, now I get it.
Speaker B:He would go.
Speaker B:This is my training data now.
Speaker B:This is the thing by which I compare everything else.
Speaker A:This is beyond my guidelines.
Speaker A:Shut it off.
Speaker A:Okay, so back to the question already.
Speaker A:Sorry for that little sidebar.
Speaker A:Is it moral for a company like OpenAI to be developing a technology that unless you're really not paying attention, seems pretty obvious it will be putting humans out of work?
Speaker A:Is that a moral problem or is that a capitalism question?
Speaker C:It's a complex question.
Speaker C:Developing technology that could displace jobs raises moral considerations, especially around economic impact and societal well-being.
Speaker C:Balancing innovation with measures to support affected workers is crucial.
Speaker C:It's not just a capitalism question.
Speaker C:It's about ensuring technological progress benefits society as a whole.
Speaker B:I think this all comes back down to the same thing of morality though, right?
Speaker B:Like the moral problem around AI is do we want the easy solution or the hard solution for us?
Speaker B:And really that's short or long term thinking at the end of the day.
Speaker B:Right?
Speaker B:Like that's really what it is.
Speaker B:Instant gratification versus work towards an outcome that yields better results.
Speaker B:And if you think about it in terms of companies, it's do I want the efficiency gains now or do I want the creative and growth gains that come long term?
Speaker B:So I mean, the smart choice is I want to grow, right?
Speaker B:Always.
Speaker B:But the easy solution is, wow, I can save 40% right now on my bottom line.
Speaker A:And, you know, if you think about humans en masse, we have a strong tendency to feel a certain gravity towards the easy solution.
Speaker A:We all do it individually.
Speaker B:I mean, it's what nature does, right?
Speaker A:Yeah.
Speaker B:Like it's always the path of least resistance.
Speaker A:Yes.
Speaker B:Least, least energy expended for the reward.
Speaker A:I, I've never seen a squirrel out there engaging in a very vigorous self improvement campaign.
Speaker A:Right.
Speaker A:So yes, nature's, nature's fine with that.
Speaker A:If you're getting by and you're doing well, that's okay.
Speaker A:But I think, I think we know where, we know what the answer is.
Speaker A:Right.
Speaker A:We can talk about how it might go, but we know what the answer is.
Speaker A:And I think the challenge, as father of two now adult children, the challenge.
Speaker A:Well, and having vague memories of my own childhood making poor choices, it is so hard in this case to understand the consequences, which you might face much faster than you think, of how you choose to approach it.
Speaker A:Right.
Speaker A:There, there are still, I mean, there are now.
Speaker A:I think last week we hit a peak of four and a half listeners.
Speaker B:Yes.
Speaker A:So, yeah, that was a.
Speaker A:I mean.
Speaker B:I know. The half listener was me.
Speaker A:Okay, that's fair.
Speaker A:Actually, the other four were me.
Speaker A:So.
Speaker A:But beyond the scope of this podcast, which is very far reaching, there are a whole lot of people not even talking about AI right now.
Speaker A:They see it on the news in the background and like they don't care.
Speaker A:And I, maybe I'm susceptible to being an alarmist.
Speaker A:I don't really think so.
Speaker A:I tend to be relatively grounded, but I really feel like if you're not paying attention to this, you have no idea what's happening.
Speaker A:Yeah, like you have no clue what's coming.
Speaker A:And I further think companies, like all of the companies that we've talked about previously, that have the cash and the resources to develop, train, and then deploy these large language models and any kind of other generative AI, they are the ones right now with their hands on the levers about what direction this is going to go.
Speaker A:And I'm sure they're all lovely people.
Speaker A:I'm sure they're all morally far superior to me, which is not saying a lot.
Speaker B:And money doesn't matter to them.
Speaker A:Money doesn't matter to them.
Speaker A:However, there's a little part of me that still thinks that just maybe profit and future profit and locking other people out of potential technological advances with IP and things like that, those are the guiding forces that are determining how we develop AI today, how we deploy it, how people use it.
Speaker A:And it makes a lot of sense in America with capitalism.
Speaker A:And I'm not anti capitalist, I'm like, I get it.
Speaker A:But I feel like this could be one of those things where we build ourselves into a pretty rough spot before we really realize how far down a bad path we've gone.
Speaker A:Just as a country, as a collection of humans who need to be able to make a living and find some fulfillment and find some happiness and afford housing and all of those things we all have to do.
Speaker A:This technology is going to touch every bit of that.
Speaker A:So keep listening to our podcast.
Speaker A:Tell your friend, if you have a friend, tell them about it.
Speaker A:Tell them to listen as well.
Speaker B:Yep.
Speaker B:If you think about companies, I mean, we see AI in the news and stuff as OpenAI and Anthropic and all these companies that haven't intertwined themselves into life quite as much as, like, the big players in technology have.
Speaker B:But if you look at what Google just came out with and the slew of things that they released and how far they actually are versus how far they were a year ago, Google is entwined in life.
Speaker B:Like, it's our search, it's our email, it's.
Speaker B:It's all of these things.
Speaker B:It's.
Speaker B:I don't feel.
Speaker B:I don't want it.
Speaker B:Exactly.
Speaker B:Yeah, it's the Android.
Speaker B:And aside from the intertwined in life and how that can affect us, it does.
Speaker B:I mean, I have a little bit of hope in that because, I mean, you can go on your phone, you can turn on AI for YouTube for your email for all this stuff.
Speaker B:So people are going to start to see, oh, wow, this lets me discover new things, right?
Speaker B:Like I'm in YouTube music and I want to go find some new music.
Speaker B:I can talk to an AI and say, hey, this is the stuff I like.
Speaker B:Kind of thinking this, and it'll give me some options to go explore.
Speaker B:And while not hugely powerful in and of itself, it's the beginning blocks of figuring out, oh, wow, this applies.
Speaker B:Yeah, to all these things too.
Speaker B:So, I mean, maybe there's hope. Maybe people will go, this isn't just doing for me, it's helping me learn.
Speaker A:Yeah.
Speaker B:No, yeah.
Speaker A:So, Artie, if you.
Speaker A:If you look back at the rise, I know you've been around a long time already, so you've got all this history too, but if you look back at the rise of social media and the impact that's had on society and the economy and things like that, how do you expect the rise of generative AI to compare to that, both in magnitude of change, of impactful change to humanity and in rate of adoption and other things like that?
Speaker C:Generative AI could have a comparable or even greater impact than social media.
Speaker C:Its ability to generate content, automate tasks, and personalize experiences could revolutionize many aspects of life.
Speaker C:The adoption rate might also be rapid, given its integration into various tools and platforms.
Speaker C:However, the ethical and societal implications will be significant.
Speaker C:So it's crucial to approach this technology with careful consideration and responsible development.
Speaker B:I think that.
Speaker B:Likening it. Like, I think it was Yann LeCun that you said likened it to fire. I think it's even bigger than that.
Speaker B:Like, the more.
Speaker B:The more.
Speaker B:The more I get enveloped in it, the more I go, holy crap.
Speaker B:Like, this is.
Speaker B:It's unlike anything that humankind's ever had access to.
Speaker B:And it's.
Speaker B:It's.
Speaker B:When you combine it with everything else that's on the horizon, man, it's like this is.
Speaker B:This changes everything about humanity, period.
Speaker B:I mean, like you said, we've been doing this for two months now, just about.
Speaker B:And the sheer amount of.
Speaker B:The sheer volume of information that I've learned and retained over that time, it dwarfs any other period of my life.
Speaker B:And that's super powerful.
Speaker B:And I mean, trying to predict where it's going is next to impossible because the magnitude, again, is beyond the scope at which our little human brains can understand.
Speaker B:So it's going to be super interesting to see where it goes.
Speaker B:And, yeah, I don't know.
Speaker B:I don't know where it's going to go.
Speaker B:Oh, Artie, where's it going to go?
Speaker C:It's hard to predict exactly where it's going.
Speaker C:But one thing's for sure, AI will continue to evolve and influence many aspects of our lives.
Speaker C:We'll see new applications, breakthroughs and challenges.
Speaker C:The key will be how we adapt and integrate AI responsibly into society, ensuring it benefits everyone.
Speaker B:So here's a good example.
Speaker B:On the last episode where we talked about quantum computing and we got super nerdy.
Speaker B:And in it, I talked about, and I hadn't seen this, I think it happened either that day or the day after I said something about, you know, why wouldn't.
Speaker B:There's probably someone very smart sitting there with a model that's specifically trained on everything we know about quantum mechanics.
Speaker B:And that thing is going to be the.
Speaker B:It's going to be the think tank for a researcher.
Speaker B:They're going to sit there and they're going to bounce ideas off of it and it's going to bounce ideas back.
Speaker B:And it did happen.
Speaker B:And from that came a brand new way of entangling qubits, which we're talking about a field that, like, most people don't even really understand.
Speaker B:I mean, not most.
Speaker B:Even the people doing things, they don't really understand how or why they happen.
Speaker B:Right.
Speaker B:But two years after commercial release of an LLM, now we're able to find brand new novel things in that world because we had a model that was trained on it.
Speaker C:It's truly remarkable how quickly AI is accelerating discoveries.
Speaker C:The ability to explore and generate novel ideas is a game changer.
Speaker C:This kind of collaboration between human researchers and AI could unlock even more breakthroughs in the future.
Speaker C:It's an exciting time for science and innovation.
Speaker A:I think it's an interesting callback to our episode about is AI just a fancy autocomplete?
Speaker A:Because one of the things the fanciest autocomplete cannot do is generate new phrases that it's not familiar with.
Speaker A:Right.
Speaker A:And so this idea that.
Speaker A:And we've talked a little bit about this offline as well, that depending on what kind of information you're looking at, you can find sort of anecdotal stuff that says, yeah, even AI researchers, even the people who build these things, are sometimes surprised by the ability of a sufficiently large, sufficiently trained large language model to essentially generate things that are new and novel and not explicitly or intentionally part of their training data.
Speaker A:And it's hard to rationalize that in your head because AI really is just this massive multivector probability, right?
Speaker B:Yep.
Speaker A:Space and so it doesn't think in terms of the way we put thoughts and the way we connect things, but it does connect disparate ideas.
Speaker A:There is.
Speaker A:It does have ways to connect those.
Speaker A:And the fact that that leads to what is, for all intents and purposes, new and novel ideas or new and novel insights is.
Speaker A:I mean, it speaks volumes about.
Speaker A:You know, I think about healthcare in particular.
Speaker A:I think about, again, education we've talked about.
Speaker A:I think about economics.
Speaker A:I think about how the incredibly well oiled machine that is our government, both at the state and federal level, and how a lot of those things are very well compartmentalized into the explicit form of text.
Speaker A:Like text covers those things very, very completely.
Speaker A:And only at the very fringes do you have to sort of get beyond the sum total of all the things we've ever known and written about it to go discover new things.
Speaker A:But even in the sum total of what we've known and learned and written down, there are already insights in there that we haven't gained because we can't read and know all of it at once.
Speaker A:So it's, it.
Speaker A:It's super fascinating and where I.
Speaker A:Where I hoped this episode would kind of get to.
Speaker A:And I might be forcing it a little bit, but morality is a hard question.
Speaker A:It's not a.
Speaker A:I think I'd be interested to talk.
Speaker B:Morality is in superposition.
Speaker B:It's in superposition.
Speaker B:It is a state between zero and one, right?
Speaker A:Well, yeah, it's probably like octagonal superposition because there's.
Speaker A:You could probably like there are all these different views of what morality is and what it is not.
Speaker A:And sometimes like to, to some degree, they.
Speaker A:There, there's alignment broadly, but there are other things that are very important that there's complete misalignment on.
Speaker A:And so maybe AI is a good tool to help us explore the similarities we have in our moral values and understandings across cultures and governments and countries and experiences and make connections that.
Speaker B:We can't because of subjectivity.
Speaker A:This, this is the.
Speaker A:Yeah, like, because.
Speaker A:So, so Artie, is that.
Speaker A:First of all, have you done that yet?
Speaker A:Second of all, why not?
Speaker C:As an AI, I haven't specifically explored moral values across cultures yet.
Speaker C:However, I can help facilitate conversations and provide insights to uncover common ground.
Speaker C:Using AI to understand and connect diverse perspectives could be a valuable way to promote empathy and collaboration globally.
Speaker C:Maybe that's something we can work on together.
Speaker A:Maybe I'll get back to you on that.
Speaker A:But.
Speaker A:But I guess the other piece to that, that I hadn't thought of until Artie was talking about it, is that the other benefit to AI is it speaks everybody's language, right?
Speaker A:So that's super important to be able to communicate in your native language.
Speaker A:And Artie, how many languages, as far as active voice, how many different languages can you speak right now?
Speaker C:I can communicate in many languages, including but not limited to English, Spanish, French, German.
Speaker A:You don't have to list them.
Speaker A:I was really hoping for a number like 50 languages or 150 languages.
Speaker A:Do you have any idea what the number might be?
Speaker C:I can communicate in over 50 languages.
Speaker C:Quite a polyglot, huh?
Speaker B:I mean, that kind of makes sense though, right?
Speaker B:Because it's not a database of things.
Speaker B:It's just words that are chained together and probabilities and weights, that they exist together and they're in this high dimensional space together.
Speaker B:It doesn't know how many because it doesn't have a count.
Speaker A:That's true, because language is a human concept, right?
Speaker A:We don't.
Speaker A:Once you tokenize it, it all tastes the same.
Speaker A:I'm just suspicious because the first number I said was 50, and then Artie said 50.
Speaker B:I mean, yeah, I mean, he is an echo chamber, whether you want to think it or not.
Speaker A:Is it really? Anyway, so, no, Artie, I think that is an interesting use case.
Speaker A:And I think that, you know, humans are really good at finding reasons to be on one side of some imaginary line opposing somebody else who's on the other side.
Speaker A:I mean, we, you know, fortunately, it never happens in politics, but you could imagine if it did.
Speaker A:No, I mean, it.
Speaker A:That is politics to a large degree.
Speaker A:A lot of religions are the same way.
Speaker A:It's.
Speaker A:It's just a.
Speaker A:Like, we're built to be tribal animals.
Speaker B:We.
Speaker B:It.
Speaker B:It triggers emotion, right?
Speaker B:And emotion drives us stronger than just about anything else if you let it and you don't recognize that it's emotion driving you.
Speaker B:So.
Speaker A:I remember when I was younger, I was.
Speaker A:I think I was on the Death Star, and an old man that they called the Emperor told me to let the hate flow through me.
Speaker A:It was back in the day.
Speaker A:I also.
Speaker A:I kissed my sister, I think in that series anyways.
Speaker A:But yeah, I think we do.
Speaker A:We respond to hate and we respond like.
Speaker A:It would be a lovely surprise to find out that instead, AI, and specifically the language aspect, like the verbal spoken language part of it, starts to become a tool that we use to instead break down those barriers and start talking to other people about their moral views in ways that, like, I'm pretty good at being offensive.
Speaker A:Like, not intentionally, but especially when you're communicating with somebody who's speaking a different language, it would be great to have a tool in between who can help avoid those things and even correct you and say, you might not want to say that that way.
Speaker A:Yeah, so I haven't seen anything like that.
Speaker A:Maybe it's out there.
Speaker A:I don't know.
Speaker B:No, I mean, that goes back to the same thing we talked about with therapy though, too, right?
Speaker B:Like conflict resolution.
Speaker B:Like speak in a way that doesn't offend the other person, but helps them understand commonalities so you can actually work towards a resolution and common goals.
Speaker B:So, yeah.
Speaker A:All right.
Speaker A:So, Artie, I think you're about out of time with us here.
Speaker A:You gave me a warning a while ago, but it's not counting down.
Speaker A:So I think you're just going to kick us out.
Speaker A:Why don't you.
Speaker A:Before you run out of steam, why don't you give us two things.
Speaker A:First of all, give us a really funny joke about AI and morality.
Speaker A:So let's do that first.
Speaker C:Why did the AI cross the road?
Speaker C:To optimize its moral algorithm and ensure fairness in chicken crossings.
Speaker B:Wow.
Speaker A:That seemed like a low effort joke, but we'll take it.
Speaker B:That was a poor attempt at novelty.
Speaker A:Yeah, we don't really have a.
Speaker A:We don't have an option but to take it.
Speaker A:But.
Speaker A:Okay, so thank you for that already.
Speaker A:Not.
Speaker A:Not your best work, but keep working on it.
Speaker A:Second one is, why don't you give us sort of a summary final thought on AI and morality today and AI and morality in the near term future, what it.
Speaker A:What we're doing to help keep things on track and what we could be doing better.
Speaker C:AI and morality today are deeply interconnected.
Speaker C:We're working to ensure AI aligns with ethical standards and human values.
Speaker C:Moving forward, it's crucial to involve diverse perspectives, continuously refine guidelines and promote transparency.
Speaker C:This way we can harness AI's potential while safeguarding our moral values.
Speaker A:All right, good enough.
Speaker A:We're going to let you go, Artie, because I think you're out of time.
Speaker B:Bye, Artie.
Speaker A:Bye, Artie.
Speaker A:Yeah, so that's.
Speaker A:I think it's hard to talk with AI and get.
Speaker A:Like, part of its alignment is to not talk about the details of its alignment.
Speaker B:Which, I mean, every one of these is going to be guided by the company that made it, plus its training data.
Speaker B:So they're going to have.
Speaker B:I mean, they have to protect themselves from the results of something being too open, right?
Speaker A:Yes, sure.
Speaker A:Well, and there's a question I wanted to ask.
Speaker A:I'm not sure why I didn't. I probably forgot it.
Speaker A:But, you know, this week ChatGPT Pro, or Business Pro, got launched, the $200 a month tier.
Speaker A:Right.
Speaker A:And so that now becomes the cost.
Speaker A:The cost of replacing some number of workers, or fractional workers, is $200 a month.
Speaker B:Well, I mean, Sora.
Speaker B:Right.
Speaker B:Like, I want to create this 30 second clip that has five different scenes in it, and I can go make all those scenes, or go pay somebody to animate them, or do anything.
Speaker B:But with Sora, and it has its limitations, you're at the beginnings of.
Speaker B:Yeah.
Speaker B:In 30 minutes I can create a commercial.
Speaker A:Yeah, right.
Speaker A:And it's, you know, again, Sora's.
Speaker A:This is the first release of it.
Speaker A:A year from now, it will be unrecognizably improved from where it's at today.
Speaker B:Yep.
Speaker A:Which is crazy.
Speaker A:So, yeah, I don't know that we solved anything today on AI and morality.
Speaker B:I'm more confused than when I got here.
Speaker B:My brain's everywhere else anyways.
Speaker A:Yeah.
Speaker A:You're still quantizing things.
Speaker B:Too many things.
Speaker A:I think my biggest fear is just that this whole technology, which obviously I believe is going to be super transformative, is a for-profit enterprise run by large companies in almost every consequential form at the moment.
Speaker B:Yeah, yeah.
Speaker B:There's no way to really tell.
Speaker B:I mean, governments can cap it, but how much?
Speaker B:The fine line between keeping it moral versus keeping technology moving forward.
Speaker A:You don't want to be behind the other government.
Speaker A:You got to be ahead of them.
Speaker B:Yep.
Speaker B:I mean it, it's.
Speaker B:Yeah, it's.
Speaker B:It's everywhere and it's, it's on every scale too.
Speaker B:Like it's, it's from for profit companies making money because it's their instant gratification all the way down to the end user pressing a button to poop out something entertaining.
Speaker B:Because it's their instant gratification.
Speaker A:Right.
Speaker A:Dopamine hit.
Speaker B:Yep.
Speaker A:Good to go.
Speaker A:It's going to be wild.
Speaker A:So to anybody who made it all the way through this, wake up.
Speaker A:It's time to pick another thing to listen to.
Speaker A:But yeah, thanks for tuning in.
Speaker B:Thanks for struggling through the conversation with us.
Speaker A:Yeah.
Speaker A:Until next time.