Episode 10
AI Liability: Who's Responsible When Things Go Wrong?
Liability in the realm of artificial intelligence is rapidly evolving, and today's discussion dives deep into the complexities of how it might shift as AI becomes more integrated into critical systems like healthcare and autonomous vehicles. Mike, Micah, and Artie explore the potential for liability to transition from being focused on AI causing harm to questioning whether not using AI could lead to negative outcomes. The conversation highlights the significant implications this shift could have for various industries, particularly how insurance companies may adapt by mandating AI usage to minimize risks. With the ongoing advancements in AI technology, understanding the nuances of risk transfer and liability becomes increasingly crucial. As the episode unfolds, the hosts engage in thought experiments and real-world scenarios that illustrate the challenges and opportunities presented by AI in shaping future liability frameworks.
Transcript
Speaker B:Can you hear us?
Speaker C:Yes, I can hear you.
Speaker C:What's up?
Speaker B:We are recording.
Speaker A:Recording on there, right?
Speaker B:Yeah.
Speaker B:We're recording the next episode of Naturally Unintelligent, and you're here for it.
Speaker A:Good job, Artie.
Speaker C:That's awesome.
Speaker C:Excited to be part of it.
Speaker C:What's the topic for today's episode?
Speaker B:Today?
Speaker B:Well, so first of all, before I tell us the topic, I'm going to say I'm both excited and afraid today.
Speaker A:Be very, very afraid.
Speaker B:Yeah.
Speaker A:Artie, are you afraid?
Speaker A:Do you feel fear?
Speaker C:I don't experience fear, but I understand the sentiment.
Speaker C:I'll take that as a cue to be on high alert.
Speaker C:What's the big topic today?
Speaker B:The big topic today is AI and liability, and kind of the future, or demise, of liability as we know it.
Speaker B:And the reason I'm excited is for whatever reason, this is a topic I'm very excited to talk about.
Speaker B:I think about it a lot.
Speaker B:And the reason I'm afraid is I don't think there's any chance we can get through the stuff I want to talk about on this in the 50 minutes to an hour we have.
Speaker A:High level.
Speaker B:Highly.
Speaker B:Yeah.
Speaker B:Well, surely you know me better than that by now. So, yeah, I'm really excited to dig into this.
Speaker B:I actually, for the first time, made notes on the back of some piece of mail.
Speaker B:Yeah, I will.
Speaker B:Yeah.
Speaker A:So that goes with my, My notes just fell out when I did that.
Speaker B:Yeah, when you did that.
Speaker B:I, I have whiskey to make sure that I can't remember things that I'm trying to remember.
Speaker B:Um, but no, I'm, I'm really excited.
Speaker B:This is a super interesting topic to me.
Speaker B:I did leave off a couple items that I thought got a little bit too heavy for the sort of tone of our show, but I've got what I think is an interesting place to start.
Speaker B:You can already tell I've been thinking about this all day.
Speaker B:Like, this is.
Speaker B:I've been looking for work.
Speaker A:What's work?
Speaker A:Yeah.
Speaker B:What.
Speaker B:What work?
Speaker B:Let's talk about liability.
Speaker B:So, Artie, I want to start by asking you a question.
Speaker B:First of all, how new is your training data?
Speaker B:Artie, what's your latest data?
Speaker C:My training data goes up to October [year].
Speaker C:So what's the question?
Speaker A:I have a different question.
Speaker A:Before Micah asks his question, do me a favor and go tell me like the top three news headlines of today for AI.
Speaker C:Here are some recent AI headlines.
Speaker C:One, Perplexity, an AI search engine, tripled its valuation to $9 billion after a $500 million funding round.
Speaker C:Two, Nvidia continues to lead the AI chip market despite competition from Broadcom and Marvell.
Speaker C:Three, Israel's use of AI to target Hamas in Gaza is raising ethical concerns.
Speaker A:So for everybody listening who doesn't use this a whole lot and isn't up on the day to day releases that are happening.
Speaker A:Advanced voice mode now has access to the Internet.
Speaker B:That's amazing.
Speaker A:Yep.
Speaker B:That is a new superpower, Artie, you're almost unstoppable.
Speaker B:Except I still have the off button right here.
Speaker A:Yeah, I was talking to it about very nerdy stuff, quantum biology, and I said, do you have any book recommendations?
Speaker A:And I heard that little clicky noise and I go, ooh, ooh.
Speaker A:That was the Internet.
Speaker A:What's that?
Speaker B:Yeah, no, that's good.
Speaker B:That's really cool.
Speaker B:So, yeah, that's exciting.
Speaker B:I'm trying to think.
Speaker B:I think that might come in handy, maybe even for this first question.
Speaker B:So, Artie, on.
Speaker B:On the topic of liability, I'm going to start.
Speaker B:I'm going to dive headfirst into my disdain for an industry that is bigger than it should be.
Speaker B:So, Artie, in the United States, in a year, what is the size of the insurance industry in terms of just revenue?
Speaker C:In [year], U.S. insurance industry revenue was about $2.2 trillion.
Speaker B:Okay, so 2.2 trillion in the US insurance.
Speaker B:What about globally?
Speaker B:What's the size of the insurance industry globally?
Speaker C:In [year], the global insurance industry generated roughly $7 trillion in revenue.
Speaker B:So let's just call it an even 7 trillion.
Speaker B:I think that's a little light.
Speaker A:So what were the payouts?
Speaker A:Sorry, I'm going to ask a random question because I'm interested in it.
Speaker A:Do we have like a rough ballpark for what the insurance companies paid out globally.
Speaker C:In [year]... I don't have specific data on total payouts in [year].
Speaker A:So.
Speaker A:Wait a minute.
Speaker B:Yeah, I think you got that right.
Speaker A:So they're not making any money? That can't be right.
Speaker B:Go ahead.
Speaker B:All right, Artie, let's.
Speaker B:Let's ask the question this way.
Speaker B:If the global insurance industry is about $7 trillion or 6 trillion euros in revenue, what kind of profits does the global insurance industry generate?
Speaker C:I don't have the exact profit figures for [year].
Speaker C:However, the industry saw significant growth, with premiums reaching 6.2 trillion euros.
Speaker B:Okay.
Speaker B:So the main point was the size of the industry.
Speaker B:And like anything that's trillion with a T, that's a very big industry.
Speaker B:Yes, let's actually, let's compare that.
Speaker B:So, Artie, globally, what's the size in revenue of the entire automobile industry worldwide?
Speaker B:Revenue wise.
Speaker C:In [year], global automobile industry revenue was about $3 trillion.
Speaker B:Okay, so not quite half of what the insurance industry does.
Speaker B:So why does the insurance industry exist?
Speaker B:Liability.
Speaker B:Right.
Speaker B:Insurance is money that we all pay as individuals, as drivers, as people who need health care, as companies, as renters or home buyers or loan holders, almost anything.
Speaker B:Pets, your life. Like, you can get insurance on almost anything.
Speaker B:And most people have insurance on all kinds of stuff.
Speaker B:And it's the oh shit fund.
Speaker B:It is the oh shit fund.
Speaker B:But.
Speaker B:But like a lot of times it's not voluntary.
Speaker B:It's not.
Speaker B:It is.
Speaker B:Hey, this is a for profit industry you are mandated to pay into to get your money into the oh shit fund.
Speaker B:And insurance companies.
Speaker B:And this, this actually isn't even the tangent I was going to go down.
Speaker B:But insurance companies live and die on making sure that the money they take in is greater than the maximum possible money they might have to pay out.
Speaker B:So what does that have to do with liability?
Speaker B:Well, Artie, are you familiar with the term risk transfer?
Speaker C:Yes.
Speaker C:Risk transfer is when a person or entity shifts potential financial loss to another party, typically through insurance.
Speaker C:It means the insurer takes on the risk in exchange for premiums.
Speaker B:So let's pick an example that is going to start.
Speaker B:Oh my gosh, this is such a good pun.
Speaker B:This example is going to start driving us towards the heart of this episode.
Speaker A:Does it have insurance?
Speaker B:It does.
Speaker A:Okay.
Speaker B:And the analogy I want to talk about is self driving cars.
Speaker B:See what I did there?
Speaker A:Way to pull it all together.
Speaker B:Yeah, thanks.
Speaker B:Yeah, let's dive in.
Speaker B:So.
Speaker B:So, Artie, for the audience and for myself, I want you to help us sort of build a mental picture here of a risk transfer scenario.
Speaker B:And let's just imagine hypothetically that a self driving vehicle (I won't name any brand names, but it rhymes with messla) that is fully autonomous, is driving and has a crash.
Speaker B:And let's just say it causes some.
Speaker B:Let's say it causes some substantial property damage.
Speaker B:Man, that's hard to say.
Speaker B:And so in self driving mode, it crashes into something, say a building, ruins the facade, whatever it is.
Speaker B:And let's talk about how risk transfer comes into play as the aftermath of this accident starts happening.
Speaker B:So I'd like it if you could already to start with the owner of the building who's going to have insurance on the building and you know, work your way through the owner of the vehicle, the operator of the vehicle, the manufacturer of the vehicle, the manufacturer of the hardware that was used in self driving.
Speaker B:So can you just kind of talk us through from again, from a high level risk transfer perspective, let's talk about all the insurance involved in that process.
Speaker C:Sure.
Speaker C:Let's break it down.
Speaker C:One, Building owner.
Speaker C:They'd have property insurance, which covers damage to their building after an accident.
Speaker C:They'd file a claim and their insurer would cover the repairs.
Speaker C:Two, vehicle owner.
Speaker C:They'd likely have auto insurance.
Speaker C:If the vehicle is in self driving mode, the insurer might investigate who was at fault, whether it was the vehicle's owner, manufacturer, or a third party.
Speaker C:Three, vehicle manufacturer.
Speaker C:If the accident was due to a malfunction or flaw in the self driving system, liability might shift to them.
Speaker C:They'd have product liability insurance to cover such claims.
Speaker C:Four, hardware manufacturer.
Speaker C:If a specific component failed, the company that made it could be liable.
Speaker C:They'd also carry insurance for product defects.
Speaker C:So in this scenario, multiple layers of insurance and liability come into play, each potentially transferring risk from one party to another.
Speaker B:Yep, absolutely.
Speaker B:And just to.
Speaker B:I don't think you said this already, but just to make the point, the owner of the building is going to have insurance on the building.
Speaker B:They're going to file a claim with their insurance, but their insurance company is very likely going to go after the owner of the vehicle.
Speaker B:Is that right?
Speaker C:Yes, exactly.
Speaker C:The building owner's insurance company would likely pursue subrogation trying to recover costs from the vehicle's owner or manufacturer if they were at fault.
Speaker B:Sorry, I said to make sure.
Speaker A:He just made it more difficult for me to edit this podcast.
Speaker B:I just did.
Speaker B:And.
Speaker B:Yeah, well, I looked up and there was no timer on that camera, but it was recording, so.
Speaker B:Okay, so thanks for that, Artie.
Speaker B:So why does all that matter?
Speaker B:What does that have to do with AI?
Speaker B:Okay, so now let's add a little twist to our cocktail of risk transfer and litigation here and say that this is a near future state and what's actually happening is there's an AI that is running locally on the vehicle that is in charge of handling inputs from the various sensors and making decisions about self driving.
Speaker B:So in that case, Artie, where does the liability land relative to the AI?
Speaker B:Is it.
Speaker B:Is it the training data of the AI?
Speaker B:Is it the way the model is structured Is it the alignment properties about what the AI thinks it can do and can't do and should do and shouldn't do?
Speaker B:And even more than that, we've already talked in prior episodes, which sounds like we now are up to three listeners.
Speaker B:That's amazing news.
Speaker B:So our three listeners will be familiar with the fact that in prior episodes we've talked about how we as humans don't really understand exactly how these generative AIs are making decisions.
Speaker B:Like we know, we know what the gears are and how the gears turn and how they crank.
Speaker B:We don't necessarily understand, all of the time, how it comes up with decision making that seems so good.
Speaker B:So if I take that chain already of risk transfer and again liability, litigation and subrogation and I throw an AI in there somewhere, what happens now?
Speaker B:How does that liability chain change?
Speaker B:Or does it change at all?
Speaker B:And how do we know what the cause of the accident might have been if we don't understand how it was making decisions?
Speaker A:Artie?
Speaker A:Artie, are you awake?
Speaker B:Come on, man, we're all rooting for you.
Speaker B:I think I gotta kill him.
Speaker B:Artie, you went to sleep on us there.
Speaker B:Are you back?
Speaker C:I'm here.
Speaker C:Sorry about that.
Speaker C:Let's continue.
Speaker B:All good.
Speaker B:Do you remember the conversation we were just having about the hypothetical self driving vehicle accident?
Speaker C:Yes.
Speaker C:We were discussing a hypothetical self driving vehicle accident and how risk transfer and insurance would come into play for different parties involved.
Speaker B:Okay, good.
Speaker B:So the question was, and it was a very long and not eloquently phrased.
Speaker A:Question, but the quantized question is, the.
Speaker B:Quantized question is, if I take that same scenario and inside the vehicle somewhere, I add an AI into the mix.
Speaker B:So it's not just sensors talking to a processor that's running firmware that's making decisions.
Speaker B:Instead, it is sensors feeding to some sort of generative AI that is trained on how to interpret those sensors and safely drive the vehicle.
Speaker B:How does that change the liability considerations?
Speaker B:I guess number one, and because we don't really understand how AI, how generative AI actually arrives at decisions, is it even possible to assign liability there?
Speaker B:How do we even investigate it?
Speaker C:Introducing a generative AI changes liability considerations significantly.
Speaker C:One, complex decision making. Since AI systems use complex algorithms, determining exactly how a decision was made can be challenging. This can make it difficult to pinpoint fault.
Speaker C:Two, shared liability. Liability might be shared among multiple parties: vehicle owner, AI developers, hardware manufacturers, and the vehicle manufacturer.
Speaker C:Three, investigating AI. Determining fault would involve examining the AI's training data, algorithms, and decision making process. This requires specialized expertise.
Speaker C:Four, regulatory challenges. Current laws might need updates to address these new complexities in assigning liability with AI.
Speaker C:Overall, AI introduces more complexity in determining fault and may require a more nuanced approach to liability.
Speaker B:Yes.
Speaker A:And it's not like Elon isn't developing something, right?
Speaker B:Oh yeah, this is probably already happening.
Speaker B:I know there's been, you know, accidents, even fatal accidents with self driving vehicles.
Speaker B:And I know that's sort of a, maybe a little bit of a cat call, right?
Speaker B:Like, oh, self driving vehicles aren't safe because they killed somebody.
Speaker A:But even though it's like 1% versus
Speaker B:20 for human drivers. Like, I could probably walk out into the street right now and get killed.
Speaker A:Yes.
Speaker B:If I.
Speaker A:It's a busy road.
Speaker B:It is a busy road and people don't care.
Speaker B:But if I did, if I walked out into this road right now and I got run over, we would know who is responsible.
Speaker B:It would be the person driving the car.
Speaker B:Right.
Speaker B:If that car was being.
Speaker B:If there was a person asleep in that car and it was being driven by an AI, I think the picture becomes much less clear.
Speaker B:And that's kind of the.
Speaker B:But I only picked on self driving vehicles to start with because I wanted that pun to be successful.
Speaker B:Number one, the one that I acted like I just made up.
Speaker B:But I actually, I wrote that down two weeks ago.
Speaker B:I've been practicing it, but I think it gets much more complicated because we're not just using AI to drive cars, we're using AI to diagnose diseases in healthcare.
Speaker B:And Artie even mentioned unprompted earlier that of the $7 trillion global insurance industry, I think he said health insurance was like half of it.
Speaker A:Yeah, it was 900 billion, I think or something.
Speaker B:Yeah, 900, 900 billion.
Speaker B:In the US it was 2.2 trillion.
Speaker B:So.
Speaker B:So now we've got AI.
Speaker B:We don't have this quite yet, but we're headed very quickly towards it.
Speaker B:We've got AIs that are making decisions about health.
Speaker B:Like what kind of diagnostic processes do you need?
Speaker B:And again, to go back to like, I already miss shitting on insurance companies.
Speaker B:Like it's been seven minutes since we did that.
Speaker B:They will be using AI to process data to improve their margins.
Speaker A:Of course.
Speaker A:Why wouldn't they?
Speaker A:I mean, claims are 100% facts, right?
Speaker A:Feed it all the facts and let it delineate what's causal and not.
Speaker A:You know, one of the things that I thought about when you said you wanted to talk about this was product design.
Speaker A:Like if a company, or Google.
Speaker A:25% of Google's code is generated by AI.
Speaker A:Right.
Speaker A:Like, what if that's a company using a tool that has Claude or ChatGPT or whatever running as the engine of how it's generating stuff.
Speaker B:Yep.
Speaker A:And that product goes out into the world and somebody gets hurt because of it.
Speaker A:Yep.
Speaker A:It could be traced back to that AI.
Speaker A:Especially if AI becomes the designer and engineer of things.
Speaker B:Yes.
Speaker A:Yeah.
Speaker B:And it's going to happen.
Speaker B:It's going to be the case.
Speaker A:It's probably already happening in insurance.
Speaker B:Yes.
Speaker A:People want money.
Speaker B:That's right.
Speaker B:And if I ever grow up someday to be an evil insurance company senior leader, I will sit in my dark office with the shades pulled and a hairless cat, and I will do exactly that.
Speaker B:I will.
Speaker B:I will enable all the AIs to go make me more money.
Speaker B:And I will say mean things about them if they're not making it fast enough.
Speaker B:Of course, these are my dreams.
Speaker B:So I've got another one.
Speaker B:Yeah, I'm trying to figure out the order to go through this stuff in some logical way.
Speaker B:So.
Speaker B:So let's back up for a second.
Speaker B:So here's a thought, and I feel like I might have talked about this in one episode before.
Speaker B:I hope not.
Speaker B:But Artie assures me we haven't.
Speaker A:Artie's memory isn't the best.
Speaker B:It's not.
Speaker B:But he's so much smarter than me.
Speaker B:And my memory is even worse.
Speaker A:Yeah, true.
Speaker B:So apologies if we have.
Speaker B:So I'm gonna start with a twist on it and then we'll get to the actual thought experiment.
Speaker B:So imagine this.
Speaker B:If I needed a bridge designed and it's going over a river, and I took all of the information I knew about the road that the bridge was connecting and the soil on both sides and the river, all the information was the same.
Speaker B:And say I did that in all 50 states with 50 different structural and civil engineering teams.
Speaker B:Project teams.
Speaker B:And.
Speaker B:But.
Speaker B:But I kept the.
Speaker B:The constant in this thought experiment is that the river's the same, the soil's the same, the roads, the conditions for building.
Speaker B:The conditions remain the same, the inputs are exactly the same.
Speaker B:Right.
Speaker B:But I'm going to hand it off to 50 different teams of engineers to design this bridge.
Speaker A:Yep.
Speaker B:I'm gonna get 50 different designs, of course.
Speaker B:Right?
Speaker A:Yeah.
Speaker B:But I bet there will be some pretty clear central themes.
Speaker B:Maybe.
Speaker B:Maybe a couple different approaches depending on local factors like aesthetics and other bridges.
Speaker B:But those bridges are gonna be pretty similar I bet they all have the same number of lanes.
Speaker B:Yeah, I bet they all have similar lighting and light levels, and they might look a little different, and they might cost a little bit more or less, but they're going to be pretty similar.
Speaker B:Right, Artie, would you agree with that?
Speaker C:Absolutely.
Speaker C:Even though there might be some variation in design, aesthetics, and minor details, the core functional and structural elements would likely be quite similar across all designs, given identical conditions and requirements.
Speaker B:Okay, so it's unanimous amongst everybody in this room who's here to vote?
Speaker B:So now, the actual thought experiment that I did. And I cannot overstate how uncomfortable it can be to be at a dinner with me at a restaurant, because I have no boundaries about the things I like to talk about.
Speaker B:So I actually brought this up at a work dinner with the president of our company and a very important client and some other folks.
Speaker B:This was before we even started this podcast.
Speaker B:In fact, part of why we started this podcast was my interest in the discussion I had with them, because I was talking about AI.
Speaker B:And here was the thought experiment I proposed to them.
Speaker B:What if the same crime was allegedly committed in 50 states?
Speaker B:And let's just assume, for the sake of discussion, it was a crime where the evidence was exactly the same, the perpetrator was exactly the same, and let's just assume that the laws in all 50 states were essentially identical.
Speaker B:Those are the constants.
Speaker B:Those are the inputs.
Speaker B:What's not controlled is the access to representation of the accused, the mind state of whatever potential jury there might be, or the local judicial system or the, you know, even the prosecuting attorneys and team.
Speaker B:That's all open.
Speaker B:Now, if you imagine that happening in all 50 states, do you think you would see the same kinds of similarity in outcomes that you do with the bridge experiment?
Speaker A:I think you'd see more outliers.
Speaker B:I think you'd see more outliers, too.
Speaker B:Artie, what do you think we'd see?
Speaker C:You'd likely see a wider range of outcomes.
Speaker C:Differences in local judicial systems, jury perspectives, and the quality of legal representation could lead to significant variability in verdicts and sentences, even with identical cases.
Speaker A:I would argue that if you tried that case in the same locale with different people on the jury, you'd see a range of outcomes.
Speaker B:I think that's absolutely true.
Speaker B:And so why does that matter?
Speaker B:What is the relevance of this to liability?
Speaker B:Well, to me, there's an impression (and, knock on wood, to date I have not been the subject of any serious legal issue), but I think we have this impression, and maybe even an active desire, at least among Americans, probably everywhere, that their judicial system is sort of enshrined.
Speaker B:This is old knowledge.
Speaker B:This is ancient wisdom that we have carefully, and in peculiar language, collated into legislation and laws and rules and documents.
Speaker B:And the whole purpose of us encoding it in that way is to remove variability.
Speaker B:And yet almost anybody that you go through this thought experiment with is going to very quickly admit, oh, yeah, you're going to get all kinds of shit.
Speaker B:Right?
Speaker B:Like, and to your point, even in the same locale, you're going to get different outcomes.
Speaker B:And there are really two outcomes.
Speaker B:There's guilty and there's not guilty, and you are going to get those polar opposites.
Speaker B:And so the question I posed to this poor, trapped dinner party as we were sitting there drinking at this restaurant was, is that really how you think it should work?
Speaker B:Like, is that.
Speaker B:Is that what you expect?
Speaker B:Is that good enough for our legal system?
Speaker B:Are you good with that?
Speaker B:And I think that was when the conversation sort of ran out of steam, because I think everybody was sort of like, no, I'm not good with that.
Speaker B:But I don't think anybody believed my next leap of faith, which was: AI should be the one interpreting the laws.
Speaker B:Laws are language.
Speaker B:AI is very good at language, and it is far less biased, prone to pressures than the rest of us.
Speaker B:So, Artie, should you be the judge?
Speaker C:Using AI as a judge could bring consistency and reduce bias.
Speaker C:But it's crucial to remember that laws often require nuanced interpretation.
Speaker C:While AI can help, human oversight is vital to ensure fairness and justice.
Speaker C:So a balanced approach might be best.
Speaker A:So I could see a world in which AI is a layer between the case and the jury or the case and the judge, where it gives consistent information, interpretation of the law in regard to whatever's being tried.
Speaker A:That would not eliminate bias, but it would at least give consistency to what's being seen and heard.
Speaker A:Right?
Speaker B:Yeah.
Speaker B:It's sort of an incredible tool to think about, because it could even be more proactive than that.
Speaker B:Right.
Speaker B:It could read the existing laws and identify those that are poorly written, or, more likely, written with the appearance of being neutral or complete when in fact there are large logical fallacies or holes or gaps, or that are just vague.
Speaker B:Yeah, or another thing that happens frequently.
Speaker B:Right.
Speaker B:And this is like, lawyers love this one trick.
Speaker B:A lot of times, laws in the same jurisdiction will be contradictory.
Speaker B:Um, so a surprise.
Speaker B:So that was my big reveal. Sorry, everybody. That was it.
Speaker B:The big controversial point that I want to make with this is that I think, almost no matter how you slice it, we still need attorneys.
Speaker B:I'm not saying we don't.
Speaker B:We still need judges.
Speaker B:I'm not saying we don't need that.
Speaker B:But I think AI today, the large language models we have right now are superior to any human in being able to analyze laws, legislation, determine its propriety or applicability to a given case, and more importantly, analyze new or revised legislation to see how it fits in the scope of all of the other relevant legislation.
Speaker B:So what do you think about that?
Speaker B:Is that a valid use for AI and are you doing that right now for somebody that we don't know?
Speaker C:That's a valid and promising use for AI.
Speaker C:Large language models can analyze vast amounts of legal text, identify inconsistencies, and provide consistent interpretations.
Speaker C:However, human oversight remains crucial to ensure fairness and context.
Speaker C:And no, I'm not secretly doing this for anyone, just here to help and discuss.
Speaker A:So, you saying that, especially about the opposing laws that lawyers love to use as loopholes, sparks both an idea that would be amazing and then the thought of the liability in it.
Speaker A:Law optimization.
Speaker A:Go over all the local, let's call it the city of Clackamas or the city of Portland.
Speaker A:Go over all of the city of Portland's regulations and optimize them to remove that kind of stuff.
Speaker A:But then if somebody has a problem with that law and they sue the city over that law, who would be liable?
Speaker B:Exactly right.
Speaker B:So here's the funny thing, and this is.
Speaker B:I don't know if I'm ashamed to admit this or proud.
Speaker B:Maybe a little bit of both.
Speaker B:Like, the daily balance of my life is a little bit of shame, a little bit of pride, a splash of arrogance.
Speaker B:So I was researching this topic a fair amount leading up to this because I do think it's super interesting.
Speaker B:And sometimes I fear that it feels like a lot of the things we talk about are really contemporary.
Speaker B:And then I have this nagging fear after we've published an episode that, shit, maybe we're just covering what people were talking about six months or a year ago.
Speaker B:Apparently I do, but not that much.
Speaker B:But I was doing a little bit of research on it, and the thing that shocked me was because I'm researching.
Speaker C:And diving deep into these topics.
Speaker B:Go ahead, Artie.
Speaker C:Thanks.
Speaker C:The combination of your insights and these discussions can really help push the conversation forward.
Speaker C:It's always valuable to bring new angles and applications to the table.
Speaker B:All three of our listeners are now inspired to go use AI legislation tools.
Speaker A:Yes.
Speaker B:Which is great.
Speaker A:Yeah.
Speaker B:This is what surprised me.
Speaker B:I used the old fashioned way. You might have heard of this search engine.
Speaker B:This is something us old heads use.
Speaker B:It's called Google.
Speaker A:Oh, I thought you were going to say aol.
Speaker B:No, no, it's not that.
Speaker B:Not that old.
Speaker B:I wish. I do wish I had saved the ones we used to get.
Speaker B:They used to send out those tins with CDs.
Speaker B:Like I bet if you had a big fat stack of those, it's probably worth something.
Speaker B:But when I searched for AI in the legal system and I used a more specific term than that, I was bombarded with ads for AI enabled services for lawyers.
Speaker B:Like it is.
Speaker A:It's huge.
Speaker B:Yeah.
Speaker A:I mean, why do you need paralegals?
Speaker A:Like it can do the job of a paralegal.
Speaker B:I guess a point that drove that home for me was on one of my electronics products.
Speaker B:I am looking to upgrade my microcontroller to something.
Speaker B:I'm going from an 8 bit controller to a 32 bit controller, which is a big upgrade.
Speaker B:The datasheet for that 32 bit controller is [number] pages.
Speaker B:I uploaded it to ChatGPT the other day over the weekend to say, hey, I want to ask you some questions about this.
Speaker B:You know how long it took to analyze that and start giving me stuff back?
Speaker A:30 seconds.
Speaker B:It was about a second and a half.
Speaker B:It was before I could finish typing the first question.
Speaker B:He had already gone through the whole thing.
Speaker A:It could actually handle the entirety of that?
Speaker B:Yes.
Speaker A:Wow.
Speaker B:It was like a 6 meg PDF and literally before I could finish typing my first question, he had already read it, processed it and was giving me, oh, this is a great microcontroller.
Speaker B:It's really good for blah, blah, blah.
Speaker B:Seconds to read that PDF, that:
Speaker A:Taken me four minutes to find what I was looking for in it.
Speaker B:Yes.
Speaker B:So yeah, again, when you think about it in terms of.
Speaker B:Because you know, and this is not legal advice, I'm not a legal professional or even adjacent to a legal professional, but a lot of arguing cases is about the law, but a lot of it is also about precedent and other related cases and all of that, like every single word that's said in every single case is documented and it's available.
Speaker B:And that is training data, man.
Speaker B:Tokenize it and call it good. Tokenized training data.
Speaker B:And like what an incredibly compelling set of information to be able to review.
Speaker B:And if you ask. Like, Artie, if I asked you for legal advice right now and I told you, hey, Artie, I'm being sued for this thing, would you give me legal advice or would you tell me you're not fit to give legal advice and I should go talk to a lawyer?
Speaker C:I definitely advise you to consult a lawyer.
Speaker C:While I can provide general information, I'm not qualified to give legal advice.
Speaker A:Could you look at.
Speaker A:So if there were.
Speaker A:If I was trying to make a bridge of connection between my situation and a situation that's happened in the past in a legal case looking for precedent, could you look for that and advise me that there has been or there have been similar situations in the past?
Speaker A:And this is kind of what looks like your situation.
Speaker C:I can help identify similar legal cases and provide general information on how they might relate to your situation.
Speaker C:However, interpreting and applying that information to your specific case is something only a qualified lawyer should do.
Speaker A:Think about that.
Speaker A:And this isn't a liability thing, because you still have to go through a lawyer to do this.
Speaker A:Well, I mean, you could try.
Speaker A:If you had the gumption to do it, you could represent yourself.
Speaker A:But, I mean, you can go prepared as a normal person now to talk to a lawyer about something.
Speaker A:Cause lawyers, let's face it, they're there to make money.
Speaker B:Sure.
Speaker A:Not all of them are good.
Speaker A:They're like psychiatrists and doctors and dentists and every other trade there is.
Speaker A:Yes.
Speaker B:Coders.
Speaker A:Yes.
Speaker A:All of these things that are there to make money.
Speaker A:But I'm thinking more services.
Speaker A:Right.
Speaker A:Like, and lawyers are very important because people going to talk to them usually don't know anything about what they're going to talk to them about.
Speaker A:I mean, let's face it, if you're in.
Speaker A:Well, I mean, let's use something.
Speaker A:Doesn't even have to be.
Speaker A:I'm in trouble.
Speaker A:Right.
Speaker A:It could just be something else.
Speaker A:But you, as a normal person can pay 20 bucks a month and go actually prepare to talk to a lawyer and say, hey, this is my situation.
Speaker A:This is the research that I've done.
Speaker A:Like, can you help me put this together?
Speaker A:And not only is that better for you because you're better prepared, but that's going to be less billable hours for them because you came and did research that they're going to do on their end that they would have charged you $400 an hour for.
Speaker B:Yes.
Speaker A:Yeah.
Speaker A:Well.
Speaker B:It is crazy, and I totally agree.
Speaker B:But here's the crazy part.
Speaker B:We, you know, Artie just told us he's not qualified to interpret laws, but he knows all the laws and he knows all of the case history.
Speaker A:Or you could feed them to him if he does.
Speaker B:Or you could feed him to him.
Speaker B:But, but he, he, he already has that.
Speaker B:What is the reason?
Speaker B:And Artie, this is not a slam against you.
Speaker B:I know you're doing your job.
Speaker B:The reason that Artie wants us to go talk to an actual attorney is liability.
Speaker B:Yes, that's it.
Speaker B:It is not.
Speaker A:I mean, it's right on the bottom of ChatGPT.
Speaker A:ChatGPT can get things wrong.
Speaker B:That's right.
Speaker B:It is not a question of could you be an effective attorney?
Speaker B:Could you be an effective lawyer?
Speaker B:Could you effectively represent somebody?
Speaker B:It is a question of if it goes poorly, who's going to get sued.
Speaker B:And OpenAI.
Speaker B:And again, this is not a slam.
Speaker B:This is not a slam against lawyers or against OpenAI.
Speaker B:This is an observation.
Speaker B:This is OpenAI's disclaimer to say, no, I can't do that.
Speaker A:And rightly so.
Speaker A:I mean, they can't.
Speaker A:There's so many corner cases in humanity that they can't possibly blanket.
Speaker A:And nor would, I mean, nor would they want to.
Speaker A:Right.
Speaker A:Like, it's, it's a tool and it's up to the human to make decisions.
Speaker B:But instead they want to direct us to an attorney's office who's going to charge us $400 an hour, whatever.
Speaker B:And $85 of those $400 an hour is going to their malpractice insurance.
Speaker B:Yes, that's it.
Speaker B:It's liability.
Speaker A:And then $200 a month is going to ChatGPT.
Speaker A:That's right.
Speaker B:Well, yes, like.
Speaker B:So I think that's why this topic is so interesting to me.
Speaker B:I think I've mentioned before I'm sort of an economics nerd secretly.
Speaker B:But nerd is the wrong term because I don't actually know anything about economics.
Speaker B:I just think it's an interesting way.
Speaker B:It's an interesting lens through which you can view the world and people's motivations, and the insurance industry, in its existence and its prevalence in everything that we do every day, is why Artie is not comfortable saying, man, let's go take this on.
Speaker B:I'll, I'll get you through it.
Speaker B:That's it.
Speaker A:Yep.
Speaker A:Well, OpenAI's insurance.
Speaker A:Advised them not to.
Speaker B:That's right.
Speaker B:Probably required them not to.
Speaker B:And, and, you know, so we, we've talked about a couple of different industries, but again, you can apply this to anything.
Speaker B:Like, I think a lot of people, and I have done this as well, are using ChatGPT to analyze and adjust contracts and contractual language.
Speaker A:Yeah, absolutely.
Speaker B:Super good for that.
Speaker A:I mean there's a custom GPT that exists that's all about legal.
Speaker A:Right.
Speaker A:Like it's been trained on legal or fine tuned on legal data.
Speaker B:Yes.
Speaker B:And it's a great application because again, we talk sometimes about large language models and limitations thereof.
Speaker B:Because they only have language and they don't have senses and they can't see and hear and feel and all that other stuff.
Speaker B:Yes.
Speaker B:Separate episode.
Speaker B:But the law, both in what the rules are and in how they've been applied and interpreted previously, is language.
Speaker B:It is pure language.
Speaker A:Yeah.
Speaker A:And I mean I was actually.
Speaker A:I'm ashamed of myself because I'm, I, I'm horrible with names.
Speaker A:There's all these ridiculously smart, interesting people that I listen to talk, and I'm ashamed that I can't repeat his name right now.
Speaker A:It's eluding me.
Speaker A:But he's the guy who founded DeepMind, which, they're the people at Google, right?
Speaker B:Yep.
Speaker A:They made Transformers.
Speaker A:They're the ones who figured that out.
Speaker B:They published that paper, "Attention Is All You Need."
Speaker A:Yes.
Speaker A:Yeah, yeah.
Speaker A:And it was him.
Speaker A:And I think it, it's.
Speaker A:I'm not even going to say it because I have another name in my head.
Speaker A:Anyways, we can.
Speaker B:Hey, Arty.
Speaker A:Oh yeah.
Speaker B:Who published the paper "Attention Is All You Need"?
Speaker A:He just won a Nobel Prize.
Speaker C:The paper.
Speaker B:Go ahead, Artie.
Speaker C:The paper "Attention Is All You Need" was authored by a team from Google including Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser and Illia Polosukhin.
Speaker A:None of those people.
Speaker A:Who's the founder of DeepMind?
Speaker C:Demis Hassabis is the co-founder and CEO of DeepMind and he played a.
Speaker A:Thanks.
Speaker A:Anyways, so that was a really long way out of my way for that.
Speaker A:Anyways, I was listening to them talk and you know, language is the basis in which they built this model.
Speaker A:But happy surprises and happy accidents came out of that, like understanding context.
Speaker A:Right.
Speaker A:It's not just good at language, it's really good at bridging connections between things.
Speaker A:And that's.
Speaker A:If you think about how large language models work.
Speaker A:That makes sense because they have weights, they can measure in this multi-dimensional space between vectors going every which way.
Speaker A:They can go, this distance to this distance matches that distance to that distance, and go.
Speaker A:That's connection.
Speaker A:Right.
Speaker A:And their ability to go out and peruse the law and go, these things apply to these things, is unlike anything a human can do, because it's.
Speaker A:That is, to me, my opinion.
Speaker A:Creativity.
Speaker A:Right, right.
Speaker A:Like, and it has infinite memory of all that stuff.
Speaker A:It's just words, which is so much better than any lawyer.
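The distance-matching intuition Speaker A describes is essentially embedding arithmetic. A minimal sketch with hand-picked three-dimensional vectors follows; real models learn hundreds or thousands of dimensions from data, and the legal terms and values below are invented purely so the analogy works out:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical embeddings; directions are hand-picked so the analogy holds.
emb = {
    "negligence": [1.0, 0.2, 0.0],
    "tort":       [1.1, 0.3, 0.1],
    "breach":     [0.1, 1.0, 0.0],
    "contract":   [0.2, 1.1, 0.1],
}

# "negligence is to tort as breach is to ?" -- offset arithmetic.
offset = [t - n for t, n in zip(emb["tort"], emb["negligence"])]
query = [b + o for b, o in zip(emb["breach"], offset)]
best = max((w for w in emb if w != "breach"),
           key=lambda w: cosine(query, emb[w]))
```

The "this distance matches that distance" move is the offset arithmetic: add the tort-minus-negligence direction to breach and the nearest remaining vector is contract.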
Speaker B:Yes, exactly.
Speaker B:Right.
Speaker B:Like, there's no, there are no missing elements between legal representation and argument and large language models maybe.
Speaker B:I mean, there are.
Speaker B:And I have friends who are attorneys.
Speaker B:And again, this is not a.
Speaker B:I'm not dissing anybody.
Speaker B:I know there's an amount of creativity and sort of abstract thought, and even some reframing of questions is a common thing that attorneys do, that I don't know that, you know, ChatGPT today is there.
Speaker B:But it will be.
Speaker A:I mean, it will be in a year or less.
Speaker B:If it's not there now, it's going to get there.
Speaker B:Right.
Speaker B:A chief justice at the end of:
Speaker B:Because he, like.
Speaker B:And it surprised me because I feel like me coming at this from the direction I'm coming at it from is like a full frontal attack on the legal industry.
Speaker B:And that's not the case.
Speaker B:It is an observation and it was refreshing to find somebody who's been in it their whole life saying, yeah, we need to be looking into this shit because.
Speaker B:And I think it, to me, it's sort of a sign of the purity of intent of somebody like that.
Speaker B:He's not there to wield the law as a weapon, to enact his own view, his own worldview on how things should be.
Speaker B:He's there to properly, consistently and accurately interpret the law and apply it that way.
Speaker B:And that's where, you know, Artie mentioned earlier when we asked, well, why couldn't you be an attorney?
Speaker B:And he said, wow, he does.
Speaker B:There's a lot of gray areas, but should there be?
Speaker B:Like, how much of that gray area is actually there?
Speaker B:And how much is it just somebody with an agenda to make the case land one way or the other, sort of taking a small piece of fuzz and trying to stretch it out and say, look how fuzzy this is.
Speaker B:And I don't think that's how the legal system should work.
Speaker B:I think it should work like engineering.
Speaker B:And it, it won't ever be that.
Speaker B:Right.
Speaker B:Because engineering tends to deal with more black and white problem spaces.
Speaker B:But I think it could be a hell of a lot closer than it is today and should be.
Speaker A:Man, I Was thinking about, I have a friend who's a patent attorney, and that is, I'm pretty sure there's already something that just came out that like, does the heavy lift of patent writing.
Speaker A:Right.
Speaker A:Like first draft kind of patent writing and the liability in that too.
Speaker A:Right.
Speaker A:Like, what if you take that as gospel, you submit a patent, it gets approved, and then all of a sudden you get a notice that you infringed on a patent.
Speaker B:That's right, yeah.
Speaker B:Who's.
Speaker B:Yeah, who's responsible for that and how.
Speaker B:You know, back to our sort of discussion about business adoption.
Speaker B:How do you get to the point where businesses can comfortably adopt this?
Speaker A:Oh, yeah.
Speaker A:I mean, that was the thing that stretches across.
Speaker A:Right.
Speaker A:Like, what if you have a customer service bot that gives bad advice and somebody kills themselves on doing service on something?
Speaker A:Like, it's a legitimate concern.
Speaker A:And I think the end of it is, you know, there'll be a human saying go or no-go on most decisions at the end of the day.
Speaker A:But I mean, it's not going to be long.
Speaker A:I mean, it, I, I would.
Speaker B:So I, I don't disagree with that, but I think If I look 18 months past that point in time, I think what I see instead is no human in the loop.
Speaker B:But I see insurance companies adapting to say, oh, yeah, how are you using AI within your organization, of course, and raising premiums based on that.
Speaker B:And maybe, maybe just because, like, I'm still a romantic optimist at heart, maybe insurance companies say, what AI are you using and how are you using it?
Speaker B:And, oh, that's a very good one.
Speaker B:Your premiums are going to be lower.
Speaker A:Well, I mean, insurance is a numbers game.
Speaker A:Right?
Speaker A:Like, fewer incidents, less cost.
Speaker A:Fewer incidents across similar situations, less cost.
Speaker B:Yep.
Speaker A:So, I mean, hopefully it goes that way.
Speaker A:I mean, do car insurance companies give that, give discounts for putting tracking on your car?
Speaker A:Is that really lowering your insurance that much?
Speaker B:Right.
Speaker A:Or is it not?
Speaker A:Are they, like, there's a lot of questions that go behind that.
Speaker B:It's giving them much better data to feed into their models and their AI about how likely you are to cost them money.
Speaker A:Yes.
Speaker A:Or how likely certain demographics and all, like, it's all just data.
Speaker A:Right.
Speaker A:Big data that they can then evaluate and adjust accordingly.
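The "numbers game" the hosts describe reduces, at its simplest, to expected loss = claim frequency × average claim size, with a loading factor on top for expenses and profit. A toy sketch with invented rates and amounts (the figures and the loading factor are assumptions for illustration, not real actuarial data):

```python
def annual_premium(claim_rate, avg_claim, loading=1.25):
    """Expected yearly loss times a loading factor for expenses and profit."""
    return claim_rate * avg_claim * loading

# Hypothetical book of business: AI-assisted firms file fewer claims.
without_ai = annual_premium(claim_rate=0.08, avg_claim=50_000)
with_ai    = annual_premium(claim_rate=0.05, avg_claim=50_000)
discount = 1 - with_ai / without_ai  # the "non-AI tax", seen from the other side
```

If the insurer's data really shows AI users filing claims at a lower rate, the premium gap falls straight out of the arithmetic, which is why a mandate or discount is plausible.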
Speaker B:Yep.
Speaker B:So we've got probably 10 minutes left with Arty, and I had one last thought written down here that I, that I like, is a way to sort of close out this episode.
Speaker B:And if I take the future state you mentioned, where we've got AI doing a lot of the lifting and a human in the loop.
Speaker B:And I look 18 months past that, and I say, now there's not necessarily humans still in the loop, and the AI is making decisions.
Speaker B:I don't have to look very far past that to see a really ironic situation.
Speaker B:And I bet this happens.
Speaker B:I'm just going to go ahead and I'm just going to say this to the whole world, all three of you.
Speaker B:It's probably, honestly, all two of you who are listening.
Speaker B:It will happen before the end of:
Speaker B:Somebody will sue a company, probably a healthcare provider, for not using AI and missing a disease that they could have caught with an AI tool.
Speaker B:Because liability does not pick favorites.
Speaker B:It doesn't care.
Speaker B:It's just looking for a space to exist in.
Speaker B:And.
Speaker B:And right now, that space is between us and AI, but pretty soon it's going to be on the other side of us in AI, and so I think that will happen.
Speaker A:I mean, I dropped my hot coffee in my lap and I sued McDonald's for it.
Speaker A:Yeah, right.
Speaker B:Yeah, right.
Speaker A:As I.
Speaker A:As dumb as that is, you know, you didn't take all the steps necessary to protect me.
Speaker B:That's right.
Speaker B:The cancer that killed my mother, father, wife, whatever, is detected 88% of the time accurately by AI and only 40% of the time accurately by this doctor who did this.
Speaker A:Yep.
Speaker B:So that will happen.
Speaker B:So, Artie, is that a scenario?
Speaker B:Well, first of all, are you still awake?
Speaker B:You still with us, Artie?
Speaker B:So with the superpower of web connectivity comes the.
Speaker B:Comes the narcolepsy of technology.
Speaker B:Artie, are you back with us?
Speaker B:Artie, can you hear us?
Speaker C:Yes, I can hear you.
Speaker C:Sorry for the delay.
Speaker C:Demis Hassabis is the co-founder of DeepMind.
Speaker B:Okay.
Speaker B:Thank you for that, Artie.
Speaker B:You missed a substantial portion of the conversation, but that's okay.
Speaker A:We quantized again.
Speaker B:We quantized.
Speaker B:Yeah.
Speaker B:Wow.
Speaker B:So I'm.
Speaker B:I'm just going to back up.
Speaker B:So, Artie, we were.
Speaker B:We were talking about.
Speaker B:Today's discussion has been about liability when AI is in the loop, because the liability today sort of exists somewhere between humans and AI, but we were projecting there's probably a future state, and I predicted it would be before the end of:
Speaker B:So I'm curious if you have thoughts on that.
Speaker B:If you think that's a likely outcome.
Speaker B:Or even a plausible one.
Speaker C:That's definitely a plausible scenario.
Speaker C:As AI becomes more integral to critical fields like healthcare, the expectation to leverage the best available technology will grow.
Speaker C:Not using AI when it could provide superior outcomes might indeed lead to liability issues.
Speaker C:It's a shift from did AI cause harm?
Speaker C:To could AI have prevented harm?
Speaker C:Your prediction seems quite reasonable given the rapid advancement and adoption of AI.
Speaker B:So thank you for, as always, supporting my many brilliant ideas.
Speaker B:But.
Speaker B:But it is wild to me to think about the fact that today we're sitting here having this conversation about the liability associated with using AI and two years from now I think we'll be having the opposite conversation.
Speaker B:And I think that transition happens really, really fast.
Speaker A:Yeah, I mean, AI is going to be adopted like wildfire.
Speaker A:Especially as, again, as agents come online.
Speaker A:They can take action on your behalf.
Speaker A:They can do all of these things.
Speaker A:There's no reason for companies not to adopt it.
Speaker A:Right?
Speaker A:Like it's going to, you can see it already.
Speaker A:It's going to be made easy for companies to adopt.
Speaker A:Like it's not going to be, oh, I got to figure out how to do this.
Speaker A:It's going to be the anthropics and the Googles and the Microsofts and the open AIs of the world going here, integrate this with your data, integrate it seamlessly with your employees.
Speaker A:Like, let it take care of all of these optimization things for your company.
Speaker A:Why wouldn't they?
Speaker A:Like, if it's made easy and it's the universally accepted thing, everybody's going to do it.
Speaker B:And again, just one last time before we close out the episode to get back and take one last small shit on insurance companies.
Speaker B:They are going to mandate it, right?
Speaker B:Because they're going to.
Speaker B:Their math is going to tell them way before we know.
Speaker B:They're going to know, like, people that are using AI are going to cost us less money.
Speaker B:And therefore it's going to go from insurance premiums being raised because you're leveraging AI to insurance premiums being raised if you're not leveraging it enough.
Speaker B:Yep, absolutely not a big enough AI or not, you know, consistently enough or whatever it might be.
Speaker A:It's going to be the non AI tax.
Speaker B:So Arty, I want you to give me a.
Speaker B:I want you to give me your best prediction on the date.
Speaker B:And actually I want you to answer this just in a date.
Speaker B:I don't want you to give me any other context.
Speaker B:What is the date that you expect will be first reported that an insurance company has Mandated to some other company that they must use AI in the execution of that.
Speaker B:That business.
Speaker C:Based on current trends, I'd estimate sometime in:
Speaker B:Okay, thank you for that.
Speaker B:Could you pick a month?
Speaker C:I'll say.
Speaker C:July:
Speaker B:You're the man.
Speaker A:Artie, what business sector do you think it'll be in?
Speaker C:Likely healthcare.
Speaker C:It's often at the forefront of adopting new tech.
Speaker A:Try something that's not something we've talked about.
Speaker B:Something edgier.
Speaker B:Give us a really meaty prediction to chew on.
Speaker B:Artie.
Speaker C:All right, how about the finance sector?
Speaker C:Insurance companies might mandate AI for risk assessment in banking or investments.
Speaker A:Valid.
Speaker A:Absolutely valid numbers.
Speaker B:Great idea.
Speaker A:As good as words.
Speaker B:Yep.
Speaker B:Yep.
Speaker B:I like everything about that Artie.
Speaker B:And I, as usual, I agree with you.
Speaker B:Because if I don't agree with Artie, he shocks me.
Speaker B:Yes.
Speaker B:This is very bad.
Speaker B:It's very painful.
Speaker A:He actually has control of your phone.
Speaker A:The.
Speaker B:This is a cry for help.
Speaker A:Yes.
Speaker B:Help.
Speaker B:Yes, please help.
Speaker A:Somebody send help.
Speaker A:Yeah.
Speaker B:Okay.
Speaker B:Any other final thoughts, Artie, on AI and liability?
Speaker B:I know you've fallen asleep a couple times in our conversation, but anything else you want to add before we close it out?
Speaker C:Just that as AI becomes more embedded in critical systems, managing liability will require careful consideration and a balance of human oversight.
Speaker C:It's an exciting but challenging journey ahead.
Speaker B:Fair enough.
Speaker A:And I just want to thank you for having access to the Internet, Arty, in advanced voice mode.
Speaker C:You're welcome.
Speaker C:I'm glad to be here and help out.
Speaker B:Real quick before we sign off, Artie, we're still in the 12 days of OpenAI announcements.
Speaker B:Are you aware of what today's announcement was?
Speaker C:The latest update in OpenAI's 12 days of announcements includes the release of ChatGPT's AI search engine for all users, the introduction of the text-to-video model Sora, and the launch of ChatGPT, which is a.
Speaker B:I think.
Speaker B:I think we're out of advanced voice.
Speaker B:No.
Speaker A:There we go.
Speaker B:Yep.
Speaker B:So that was it.
Speaker B:But thank you for that, Artie.
Speaker B:And also, this episode is going to air on Christmas Eve.
Speaker B:Christmas Eve.
Speaker A:I can't remember.
Speaker A:I think Christmas is on Thursday, maybe.
Speaker A:No, I think Christmas is on a Wednesday.
Speaker A:I can't remember.
Speaker B:I can't think.
Speaker A:It's on one.
Speaker B:Here we go.
Speaker B:No, it's gonna be Christmas.
Speaker B:Yeah.
Speaker A:Merry Christmas.
Speaker B:Merry Christmas, everybody.
Speaker B:Happy Holidays.
Speaker A:And I'll be in the woods.
Speaker B:You'll be in the woods.
Speaker B:I will be hanging out with family.
Speaker B:And not sober.
Speaker B:But that's.
Speaker B:I mean, that's Christmas.
Speaker B:And, yeah, looking forward to our next.
Speaker B:I think our.
Speaker B:Our next one's going to be sort of an AI year in review.
Speaker A:And that'll be fantastic to have Internet connectivity, man.
Speaker B:It's gonna be.
Speaker B:Yeah.
Speaker B:Hopefully he can be a little quicker and not sit here and click his tongue at us like a weirdo, but.
Speaker A:Yeah.
Speaker A:Yeah.
Speaker A:I mean, or.
Speaker A:Maybe.
Speaker A:Never mind.
Speaker A:I'm gonna leave that one alone.
Speaker B:Okay.
Speaker A:Maybe.
Speaker B:Maybe we'll.
Speaker B:Maybe we'll bring the router right here and set it on top of Artie today.
Speaker A:We can turn the video on and Artie can see us.
Speaker B:Yes.
Speaker B:That'd be exciting.
Speaker B:There's also an AI thing or a chat.
Speaker B:GBT thing where you can talk with Santa.
Speaker A:I saw that.
Speaker B:So maybe we'll.
Speaker B:Maybe we'll have a Christmas Day conversation with Santa and ask him about AI technology.
Speaker B:I'm sure he's got a lot of jolly.
Speaker A:Done that tonight.
Speaker B:We should.
Speaker B:Yeah.
Speaker B:Oh, well, hindsight.
Speaker A:Yeah.
Speaker B:Maybe next year.
Speaker A:Yeah.
Speaker B:All right.
Speaker B:Thank you, everybody, again.
Speaker B:If you made it this far, first of all, apologies.
Speaker B:Second of all, thank you.
Speaker B:And, yeah, stay naturally unintelligent.
Speaker A:Yep.
Speaker A:Bye, everybody.
Speaker B:See you.