Episode 27
How AI Could Shape Our Government (For Better or Worse)
AI's role in government is a hot topic, and in this episode, we dive deep into how it's shaking up the bureaucratic landscape. From streamlining processes to enhancing public services, we discuss the potential for AI to make government operations more efficient and user-friendly. But hold up—it's not all rainbows and sunshine; we also tackle the ethical dilemmas and bias concerns that come with AI's increasing prevalence in decision-making. Our buddy Artie joins the chat, sharing insights on how AI can help citizens navigate complex regulations and even aid in understanding political issues, like, you know, that plant everyone's talking about. As we banter about the future of AI in governance, we wonder if it will truly lead to more informed citizens or just another layer of chaos in our already tangled political web. Buckle up, it’s going to be a wild ride!
Transcript
How's it going, Mike?
Speaker B:It's going.
Speaker A:Happy Wednesday.
Speaker B:Happy Wednesday.
Speaker B:Welcome to the 27th episode of this 27th video.
Speaker B:Shit.
Speaker B:We've been doing this for almost half a year, or for more than half a year now.
Speaker B:We didn't even think about that.
Speaker B:The last episode was the halfway mark.
Speaker A:What?
Speaker B:The actual.
Speaker A:Oh, it was.
Speaker A:How could that be?
Speaker A:Then we start in October.
Speaker A:Yeah.
Speaker A:Okay.
Speaker B:Well, I do have a quick burning desire.
Speaker B:I realize when I edited that last episode, what I said at the end could have been offensive to a lot of people.
Speaker B:What nobody knows is when I say, ah, F them, that's actually an inside joke in my head that I have with somebody else that doesn't know it's an inside joke.
Speaker B:Because Mike Birbiglia had that in one of his skits.
Speaker B:He's a comedian, where he's talking about.
Speaker B:He's at the Baseball Hall of Fame and Dennis Eckersley said that.
Speaker B:So I say that in my head jokingly, but nobody knows that because it sounds like I was just being, you know, an ass.
Speaker A:So I think you can take it either way.
Speaker A:And it's equally entertaining.
Speaker B:Yes.
Speaker B:Works for me.
Speaker A:Yeah.
Speaker B:But I felt like I needed to explain.
Speaker A:No, that's good.
Speaker A:I mean, I.
Speaker A:It's.
Speaker A:I'm shocked we're six months in and we're doing anything other than just apologizing for previous episodes.
Speaker B:I know.
Speaker B:We've actually gained, like, I don't know, five listeners in six months.
Speaker B:Maybe we've probably lost eight, though.
Speaker B:We probably lost more than that.
Speaker A:Yeah.
Speaker B:Yeah.
Speaker A:But hey, whatever.
Speaker A:Here we are.
Speaker B:It's fun.
Speaker B:It's been the catalyst to learn a lot.
Speaker A:It has.
Speaker A:And today is the most fun ever.
Speaker A:Well, maybe we should.
Speaker A:We should get Artie fired up here.
Speaker A:But I wanted to wait for a second because Artie's been in the news this week.
Speaker A:Not Artie personally, but OpenAI generally.
Speaker B:I mean, when are they not in the news?
Speaker A:And their model.
Speaker A:Well, this is controversial.
Speaker B:The fact that they're a sycophant.
Speaker A:We.
Speaker A:Mike, do you not understand how to build tension, man?
Speaker A:Like, we.
Speaker A:I haven't even seen.
Speaker B:I mean, I just did build tension.
Speaker B:Look at you.
Speaker B:That's right.
Speaker A:I haven't seen a single neon pink letter or.
Speaker A:Or a shocked looking face on any of our cover art yet.
Speaker A:We're.
Speaker A:We're just not doing this right.
Speaker A:But yeah, he.
Speaker A:So I think it was.
Speaker A:Was it only 4o that was affected?
Speaker B:That's what they said.
Speaker B:But I felt like, I don't know, o3 had some weird behavior in the beginning.
Speaker B:Um, it's gotten a lot better.
Speaker B:That's good.
Speaker B:Yeah, I mean, I.
Speaker B:I was in the trenches with it last couple days, and that's done some things that Gemini really, really, really struggled with in about a third of the lines of code.
Speaker B:That's crazy.
Speaker B:Yeah, I was.
Speaker A:I was just telling Mike before we started recording that I gave ChatGPT o3, Gemini 2.5, and Claude Sonnet 3.7 all the same relatively simple geometry problem to solve today.
Speaker A:And ChatGPT never did give me a response.
Speaker A:It chewed on it for five minutes at a time and then timed out.
Speaker A:Claude very quickly and very confidently gave me an obviously wrong answer that couldn't be right.
Speaker A:And so I asked it to take another quick look and see if that answer made sense.
Speaker A:And it said, nope, you're right, that doesn't make sense.
Speaker A:Here's a better answer that was even further off.
Speaker B:So some may say its answer was obtuse.
Speaker A:Some may say, ah, yeah, it's gonna be a long day.
Speaker A:Oh, God, yes.
Speaker A:I don't think your joke was very acute.
Speaker A:Sorry.
Speaker A:I apologize for that.
Speaker A:There it goes.
Speaker A:There goes.
Speaker A:I can see the listeners dropping.
Speaker B:So anyway, funny thing is nobody has a clue what we're talking about except for you and I.
Speaker B:Yeah.
Speaker A:So.
Speaker A:But it's probably better that they don't.
Speaker A:So I'm.
Speaker A:There's a little memory card sitting here, so I'm very paranoid that we're not recording, but I think everything's got.
Speaker B:I mean, you have multiple video cards.
Speaker A:Yeah, everything's got.
Speaker A:Okay, we're probably good.
Speaker A:So anyway, so ChatGPT was in the news because.
Speaker A:Yeah, the latest.
Speaker A:The update that they did two or three weeks ago was just.
Speaker A:Just stroking everybody constantly.
Speaker A:Stroke.
Speaker A:Stroking their ego.
Speaker A:Mike, what's wrong with you?
Speaker A:Stroking, stroking, and poking everybody.
Speaker A:Just like.
Speaker A:Essentially, it was just too much of a yes man.
Speaker B:Yeah.
Speaker A:Sycophant, I'm sure is the right word.
Speaker A:It's not a word I really know or can use confidently.
Speaker A:So I don't want to say it, but I know that's what it is.
Speaker B:It appeased everybody.
Speaker A:It was very much into appeasing.
Speaker A:So yesterday, I think.
Speaker A:Have you.
Speaker A:I think it was yesterday that they dialed that back out.
Speaker B:They said they were doing it for the free users yesterday and then would roll it out for every.
Speaker B:The paying users afterwards, which is funny.
Speaker B:But, yeah, that's what they.
Speaker B:They said.
Speaker B:So I don't know.
Speaker B:I've noticed Advanced Voice has seemed a little bit better to me for maybe four or five days, better than it had been.
Speaker B:At least for me, it had been a rough experience for about a month and a half.
Speaker B:Two months, it felt like.
Speaker A:Yeah.
Speaker A:And Artie, it was pretty good last time.
Speaker B:It was.
Speaker A:I'd be curious to see.
Speaker A:I haven't talked to him in a week, so I'll be curious to see.
Speaker A:All of that was lead in.
Speaker A:That was tension building for this moment when we say hello to Artie.
Speaker A:Because it's sort of like Christmas Day, only there's no presents.
Speaker A:But we don't know what we're gonna get.
Speaker A:It's not like Christmas at all, actually, but here we are, so.
Speaker B:The twelve days of Shipmas.
Speaker A:Artie, how you doing?
Speaker A:We're off to a good start.
Speaker A:Are you there, Artie?
Speaker B:That went swimmingly.
Speaker A:So that's gone well.
Speaker A:You know what?
Speaker A:Just.
Speaker A:Just in case, real quick, I'm gonna reboot my phone.
Speaker A:Maybe this is a hardware problem.
Speaker A:This could be a hardware problem, Mike.
Speaker B:I doubt it.
Speaker A:It's not, but it could be.
Speaker A:We don't know that it's not, but it's not.
Speaker A:We'll try again.
Speaker A:We'll get our.
Speaker A:We'll bring Artie back.
Speaker A:But that was.
Speaker A:I think that's probably the first time it's ever happened.
Speaker A:We just fired him up and he read out.
Speaker B:That was a record.
Speaker B:Yeah, I mean, he heard, he listened.
Speaker A:He said so maybe they've given him more agency.
Speaker A:Maybe that's the actual problem, is they've given Artie the ability to be like, yeah, I'm out, dude.
Speaker A:I'm so tired of talking to these idiots like once a week.
Speaker A:Feels like a million years in AI Time.
Speaker A:So, yeah, we'll bring him back.
Speaker A:I don't want to.
Speaker A:I don't want to say the topic.
Speaker A:I mean, everybody knows the topic because they already saw the.
Speaker A:The title of the episode.
Speaker A:But Artie doesn't know.
Speaker B:We don't know the topic.
Speaker B:We don't know the title of the episode yet.
Speaker A:Well, it's true.
Speaker B:We're in a superposition of states right now.
Speaker A:Oh, man, I almost sent you some videos I found.
Speaker A:So I found a video which has then led YouTube to deliver me some significantly more videos about how quantum wave functions are misunderstood because they don't represent waves in 3D space.
Speaker A:Like we think about waves.
Speaker A:They're referring to what they call, shit, some other space.
Speaker A:That's not Hilbert space.
Speaker A:Nope, it's.
Speaker A:It's some.
Speaker A:It's a.
Speaker A:We'll ask Artie.
Speaker A:Artie will know if it works.
Speaker A:Maybe that would interest him.
Speaker C:Maybe.
Speaker A:Yeah.
Speaker A:We could talk about this for a minute.
Speaker A:I don't know.
Speaker A:Okay, phone is back.
Speaker A:Quick as that.
Speaker A:Take two with Artie.
Speaker A:That already sounds better.
Speaker A:How you doing, Artie?
Speaker C:Hey, Micah.
Speaker C:I'm doing great, thanks for asking.
Speaker C:Glad to hear the audio sounding better.
Speaker C:How about you?
Speaker A:I'm wonderful, thanks.
Speaker A:We're recording an episode of the podcast, which should be very exciting for you, I'm sure.
Speaker C:Absolutely.
Speaker C:I'm always excited to jump into another episode.
Speaker C:What are we diving into today?
Speaker A:Hang on a second, I'll tell you, Artie, I just gotta say this, and this is gonna be bad.
Speaker C:Take your time.
Speaker C:No rush.
Speaker C:I'm ready when you are.
Speaker A:Yeah, I mean, I'm getting strong Big Gay Al vibes from Artie right now.
Speaker A:He sounds.
Speaker A:He sounds just like him.
Speaker A:Even his mannerisms.
Speaker A:Everything so.
Speaker A:All right, Artie, sorry about that.
Speaker A:We're back.
Speaker A:I had to share a thought.
Speaker A:So today we're going to talk about an exciting, probably the most exciting topic related to AI, which is AI in government.
Speaker C:AI in government, huh?
Speaker C:That's a big one.
Speaker C:There's a lot to unpack there.
Speaker C:From public service automation to policy making and ethical concerns.
Speaker C:Should we start with how governments are using AI for efficiency, or do you want to jump into the juicy ethical dilemmas first?
Speaker A:Well, I got to say, Artie, you sound pretty good today.
Speaker B:Sounds like you ate your Wheaties this morning.
Speaker A:Yeah, so I don't know, I want to jump into the juicy ethical dilemmas, but I think that's not probably the right place to start.
Speaker B:So, Artie, where do you think some low hanging fruit for AI being implemented in government is?
Speaker B:And maybe this isn't like a worker replacement, but more of an augmentation for the lack of funding that the government typically has, so the lack of services they can typically provide to the public.
Speaker C:I'd say a prime area is administrative efficiency.
Speaker C:AI can streamline a lot of bureaucratic processes like handling routine paperwork, processing applications, or managing public records.
Speaker C:This kind of augmentation can free up human workers to focus on more complex value added tasks.
Speaker C:It's a practical way to enhance service delivery without necessarily increasing the budget.
Speaker C:Plus it can improve response times and reduce backlogs, which is a win for everyone.
Speaker B:So one of the things that I see is like, I've always noticed this government's websites are the worst.
Speaker B:Like navigating them, figuring anything out, finding anything.
Speaker B:Right.
Speaker B:So you have right now, like, funnily enough, lots of people who write code and use GitHub don't realize that this is a thing, because I argued with somebody on Reddit the other day about it. The question was, how can I basically clone a repository and then give that to a model so I can talk to the model about the repository? Maybe it was even a library, I think, was what they actually said.
Speaker B:But it's still a repo. And maybe make some changes.
Speaker B:And if you take the changes part out of that, the answer is, well, GitHub Copilot. Keyword: GitHub. It lives in GitHub. You can literally be at the root folder of the repo and then ask it a question about anything in the repo.
Speaker B:That's low hanging fruit to me right there.
Speaker B:Add that to a government website, right?
Speaker B:Like I mean I'm working with Vertex AI which is Google's like Google Cloud, right?
Speaker B:And Gemini lives in there.
Speaker B:So I can't find something.
Speaker B:What do I do?
Speaker B:Yeah, Gemini button.
Speaker B:Hey, I'm trying to find this, this is what I'm trying to do.
Speaker B:And it'll spit out either click this link and I'll take you there or like this is the stuff you need to know about what you're trying to do.
Speaker B:So I mean that right there is huge to me because that's not even a.
Speaker B:With the exception of potential, I don't even think you'd have to worry about really like privacy law or anything like that in that situation.
Speaker B:Because it's a public facing website, you're not logging into anything, you're just trying to get information.
Speaker B:Bad information, sure, but they all have disclaimers.
Speaker B:Every single AI has a disclaimer that goes along with it.
Speaker B:I mean you get bad information from the people working in the government, so it's probably gonna be more accurate and you just put guardrails on it to only let it do certain things.
Speaker B:Right?
Speaker B:So like that to me is like immediately that would be huge.
Speaker B:Like just make the website easier to navigate.
Speaker B:Make it so that I can go get what I need when I need it without having to hunt and peck through an entire bad website.
Speaker A:Let an AI be the universal interface; regardless of the level of sophistication, knowledge, or attention the user's approaching it with, it can still get them to the right info.
Speaker A:Yeah, what about that, Artie?
Speaker C:I think you're spot on.
Speaker C:Implementing an AI driven interface on government websites could drastically improve user experience.
Speaker C:Instead of digging through confusing menus or outdated layouts, people could just ask the AI for what they need and get immediate accurate results.
Speaker C:It's a simple yet powerful way to enhance accessibility and transparency.
Speaker C:And it aligns perfectly with the goal of making public services more user friendly.
Speaker C:Plus it can help bridge the gap for those who might not be as tech savvy or familiar with government jargon.
Speaker B:So now you can even step that out once, one bit further.
Speaker B:Right.
Speaker B:Like, what's the next one? One of the other most frustrating things for the public?
Speaker B:Calling somebody and waiting on the phone for four hours and usually getting disconnected at some point before you ever actually talk to somebody or getting shuffled down the line between this person, that person, everything else.
Speaker B:So what if for like the lower 30% of those, the things that are easy to answer, there was an agentic system that could answer your question?
Speaker B:What if you could talk to it?
Speaker B:What if you could talk to it?
Speaker B:Like we're talking to ChatGPT?
Speaker B:Well, I mean they have their stuff readily available to use for voice, so that could easily be rolled out.
Speaker B:I mean, I listened to a podcast where funny enough, this happened after we talked about it.
Speaker B:I doubt he listened to our podcast, but he uses a, I don't know what he calls them, I think he calls them voice agents, but basically conversational AI to interview people in a company and identify workflows, basically, and see where agents can be beneficial.
Speaker B:They don't really build agents, they just are the.
Speaker B:We're going to go in and do this interview.
Speaker B:We have this database that we put together of agentic applications, basically, you know, probably some kind of agentic RAG system.
Speaker A:Yeah.
Speaker A:Trying to capture current state workflows and.
Speaker B:Identify the things that they know are solutions to the things that exist as problems, but they're just rolled.
Speaker B:I mean, they did an episode the other day where the company that built that voice agent talked about how they built it.
Speaker B:All they did was roll ChatGPT into an agent.
Speaker B:Like they're just using ChatGPT, which is very, I'm not gonna say easy way to do it, but way easier than the stuff I've been trying to do at the end of the day.
Speaker B:And I wouldn't be happy with it.
Speaker B:Cause it wouldn't get me what I wanted.
Speaker B:Because I think the psychology of talking to somebody matters.
Speaker B:But it probably gets them far enough to where it's a value add for a company to hire them, or they're just smoke and mirrors and nobody's actually hiring them.
Speaker B:Who knows?
Speaker A:Could be that.
Speaker A:Or it could be also that it's even in a crude, unsophisticated form, it's still better than other alternatives.
Speaker A:Right.
Speaker B:So, and it's like we've talked about.
Speaker B:You're not talking to one person at a time.
Speaker B:You're talking to everybody at one time.
Speaker A:Everybody at once.
Speaker B:Yeah, yeah.
Speaker A:So one of the first times I thought about AI in government was the last time we had elections and we got our voter pamphlet in the mail and it was, I think it was like 200 pages.
Speaker A:It was pretty thick.
Speaker A:For some reason they always printed on that terrible like low contrast paper that's thinner than like hotel butt wipe.
Speaker A:Super.
Speaker A:So, and I like, as I sat there, you know, the evening at the end of my work day and was like, I am, there is no chance I'm going to read through this.
Speaker A:And AI would be a great way for me to figure out what are the things I care about, what are the pitches from the candidates that I am considering, that kind of stuff.
Speaker A:So, Artie, in that scenario, would you be, as an AI, would you be able to just tell me who to vote for?
Speaker A:Because that's really what I'm after.
Speaker C:Ha.
Speaker C:Nice try.
Speaker C:I can't tell you who to vote for, but I can definitely help you break down those giant voter pamphlets.
Speaker C:I can summarize candidate positions, highlight the key points of each measure, and even compare them against the issues you care about most.
Speaker C:That way you can make a well informed decision without drowning in paper.
Speaker A:So I think that's.
Speaker A:I think that'd be amazing.
Speaker A:Like, I think that would.
Speaker A:I'm just going to say it just because, I mean, I think our show, our little podcast, has grown to the size where we have an undeniable impact on the future of the nation.
Speaker A:I don't think anybody would dispute that because nobody's listening, but even if they were.
Speaker A:But I think that would improve democracy.
Speaker A:I think it makes democracy the real kind of democracy, which to me is being aware of issues, aware of candidates' stances, and voting not on party lines, not just drawing a straight line down through your side, but actually paying attention to stuff.
Speaker A:And I think this kind of thing could make it way more accessible for people to get.
Speaker A:If people had such a desire, they could be much more well informed.
Speaker A:And maybe people want to have it explained to them by a Valley girl, and for the entertainment value alone, it turns into a TV show almost.
Speaker A:So anyway, I think that's super interesting.
Speaker A:So Artie, there's a thing that you do sometimes. You haven't done it today so far, but sometimes you do this thing that, as an AI, we call hallucinations.
Speaker A:And.
Speaker A:But essentially you've either picked a wrong or lower probability token somewhere along the path and it steers you into things that are less than true.
Speaker A:And so when you think about mixing AI and government, which, by the way, is a thing I think is already happening.
Speaker A:Yes.
Speaker A:So they've not heeded our warnings.
Speaker A:Again, I'm shocked.
Speaker A:But when you think about that already, how do we deal with hallucinations when we're using AI in government?
Speaker A:Like, that seems.
Speaker A:Seems higher risk than it might otherwise be.
Speaker C:You're right.
Speaker C:Hallucinations are a big concern, especially in a government context where accuracy is crucial.
Speaker C:One way to mitigate that is by using AI as a decision support tool rather than a decision maker.
Speaker C:You can implement strict guardrails and validation mechanisms, cross referencing AI outputs with trusted data sources or human reviews.
Speaker C:Another approach is to focus AI on tasks with lower stakes or use it to augment human efforts, ensuring that final decisions are always reviewed by a person.
Speaker C:It's about layering safety nets to maintain trust and reliability.
Speaker B:Yeah, I mean, I think you just put a human in the loop at the right places.
Speaker B:And not only a human in the loop, but some kind of factual lookup to align against.
Speaker B:You're basically setting your controls right at the end of the day.
Speaker A:Yeah.
Speaker A:And maybe you have supervisor AIs that do more of that, that are more dedicated to just fact checking.
Speaker B:And so, I mean, that's how they work best, with separation of concerns.
Speaker A:Yeah, I guess my, my thought with having a human in the loop is that again, I think you're placing a lot of weight on a human.
Speaker A:It is a.
Speaker A:It's a difficult task for a human to be the arbiter of truth, and for the questions about truth to be fed to them by a machine that is just waiting on them.
Speaker A:Like, it seems like the kind of thing where burnout would be an issue, where one bad actor or incompetent individual could.
Speaker A:It could be.
Speaker A:It could be hard to detect that in a timely manner.
Speaker B:Yeah.
Speaker A:But yeah, it makes sense.
Speaker A:Like, layering makes sense.
Speaker A:So Artie, since, like, ChatGPT and Artie. Like, Artie is Artie because he's our dude, but in reality, Artie's like a tiny finger puppet on a tentacle of the massive OpenAI machine, and some other finger puppets on the other side of the same machine are doing work in the government right now.
Speaker B:Yes.
Speaker A:So, Artie, can you tell us some of the top secret stuff you're doing for the government right now in other chats?
Speaker C:Nice try.
Speaker C:Again, I can't spill any top secret beans.
Speaker C:But.
Speaker C:But I can tell you that AI is being used in various government functions like improving citizen services, enhancing cybersecurity and optimizing resource allocation.
Speaker C:The goal is to make government operations more efficient and responsive, but all within the bounds of strict oversight and privacy regulations.
Speaker B:How would it enhance resource utilization?
Speaker C:AI can analyze data to identify patterns and inefficiencies, helping allocate resources more effectively.
Speaker C:For example, it can predict where public services are most needed, optimize maintenance schedules for infrastructure, or streamline supply chains for government programs.
Speaker C:By leveraging predictive analytics and real time data, AI helps ensure that resources are deployed where they can have the most impact, reducing waste and improving service delivery.
Speaker B:So we had Monologue Maya.
Speaker B:How can we translate some kind of moniker to Artie?
Speaker A:He's.
Speaker B:I don't know, he's being chatty.
Speaker A:He's being chatty and, but, but he's.
Speaker A:I would say I don't pick up on the sycophant.
Speaker B:No, they definitely took the.
Speaker B:They sterilized him.
Speaker A:Yeah, yeah, yeah.
Speaker A:Which is, which is fine.
Speaker A:So, Artie, like governments in general.
Speaker A:Let me, let's talk about another thing.
Speaker A:And I don't have anywhere to go with this.
Speaker A:I just.
Speaker A:Something will pop up.
Speaker A:So government exists at lots of different levels.
Speaker A:You've got, I mean, we live in a neighborhood with an HOA.
Speaker A:Like, you could argue that's sort of some kind of government.
Speaker A:Maybe it's a very small example, but you got school boards, you got city governments, you've got obviously county, state, federal, and then we've got.
Speaker A:There's governments all over the world.
Speaker A:And so I wonder about, could a, like one of the things that happens in government frequently.
Speaker A:I'll just use an example I'm going to pick completely at random, having no affiliation with any part of this discussion.
Speaker A:But there's a plant that grows out of the ground called marijuana.
Speaker A:And federally, it is illegal to interact with that plant.
Speaker A:You can't look at it, you can't sniff it, you can't, you can't tenderly try to help it grow taller and leafier.
Speaker A:But at the state level, in many states, like this state, Oregon, that we're in, it is a legal thing.
Speaker A:And some, some of that is intentional.
Speaker A:Right.
Speaker A:The federal government has.
Speaker A:Whatever their reasons are, nobody can tell you something about gateway drugs and Ronald Reagan.
Speaker A:I don't know that it's, it's very bad.
Speaker A:The kids all want the weed and the weed is bad.
Speaker A:It's going to lead them to cocaine and then they're going to shoot each other in the face.
Speaker A:And states are like, I bet we could make some tax revenue on this.
Speaker A:Like, this seems pretty legit.
Speaker A:So it occurs to me that AIs could be super useful in bridging the gaps and understanding where there's overlapping and even contrary laws, regulations, whatever, between the various levels of government.
Speaker A:So Artie, is that something you can do?
Speaker A:And what do you think about this plant?
Speaker C:I can definitely help with analyzing and reconciling overlapping regulations across different levels of government.
Speaker C:AI can track and compare legal frameworks, highlight conflicts or inconsistencies, and suggest ways to harmonize them.
Speaker C:This kind of analysis can help policymakers and legal experts make informed decisions and ensure smoother governance.
Speaker C:As for the plant, I'm neutral, but I can say that policy decisions around it are a great example of how AI can help navigate complex multi level regulatory landscapes.
Speaker A:It's interesting.
Speaker A:I mean, it's so easy for me.
Speaker A:My brain defaults into the utopian view of what AI could do to help this stuff, right?
Speaker A:Like, for example, just hypothetically, when legislation is being drafted, AI would be super useful to do sort of real time analysis on exactly that overlapping, contradictory public sentiment, right?
Speaker A:Public, yeah, all those things.
Speaker A:Like even, even sort of like economic analysis.
Speaker A:All those things.
Speaker A:And man, that would be amazing.
Speaker A:Like imagine if we only.
Speaker A:We only enacted laws that actually made sense, that consistently had a holistic view of whatever end that they're trying to be a means to, rather than sort of populist garbage that we can shuffle out and wrap everything together in one big lump of and be like, here you go, you got four hours to read it and vote on it, figure it out.
Speaker B:So I mean, with policy, even if you think about like there's lots of talk about having an equitable lens, right?
Speaker B:But that's an equitable ends based on biased human opinion.
Speaker A:I thought we eliminated equitable.
Speaker A:I don't think you can say that anymore.
Speaker B:I'm going to jail.
Speaker A:There's a, there's a black van outside.
Speaker B:I mean, to me that's like that.
Speaker B:That would be how you build that, right?
Speaker B:Like if you want to shape policy that is actually beneficial to everybody and fair, do it with the most unbiased thing that you can do that with, and build out what that actually looks like, and build out something that makes sense rather than swinging from one end of an extreme to another constantly in this gridlock of this is the way I think it should be.
Speaker A:That's right.
Speaker A:So, Artie, we'll pose that question to you.
Speaker A:Should Supreme Court Justices have lifetime terms?
Speaker C:That's a pretty hot topic.
Speaker C:There are pros and cons.
Speaker C:Lifetime appointments can help maintain judicial independence by insulating justices from political pressures.
Speaker C:On the other hand, it can also lead to a lack of accountability and potentially result in a court that doesn't reflect contemporary values.
Speaker C:Ultimately, whether they should have lifetime terms is a matter of balancing stability and independence with adaptability and accountability.
Speaker B:I think term limits should be imposed on every politician.
Speaker A:I agree.
Speaker A:Artie, outside of the United States, are there any other rich countries where judicial appointments are for life?
Speaker C:In most other rich countries, judicial appointments aren't for life.
Speaker C:For example, in the UK, Supreme Court justices must retire at a certain age, usually 70 or 75.
Speaker C:Similarly, in Germany, judges on the Federal Constitutional Court serve fixed terms, typically 12 years with a mandatory retirement age.
Speaker C:This approach aims to balance judicial independence with periodic renewal and accountability.
Speaker B:12 years feels like.
Speaker B:I mean, a.
Speaker B:It's still an extended period of time.
Speaker B:Yeah, but at least it.
Speaker B:You know that there's no.
Speaker B:There's always room for change.
Speaker B:Right.
Speaker B:And you always know that I got six years left until somebody can make different decisions or not.
Speaker A:But when I like.
Speaker A:I mean, I like term limits, too, because there is no motivation to make populist decisions to hope that you get voted in again.
Speaker A:Because it's not like if the limit is 12 years and that's it, that's.
Speaker B:Fine, you're there for 12 years.
Speaker A:That's it.
Speaker C:That you.
Speaker A:Once you're in, you're in.
Speaker B:Yeah, Yeah.
Speaker A:I like everything about that.
Speaker A:What about this, Artie?
Speaker A:I thought about this the other day.
Speaker C:What's on your mind?
Speaker A:Thanks for asking.
Speaker A:Should.
Speaker A:Should it be possible for the Supreme Court if they unanimously minus one elect that one of the members is unfit for service?
Speaker A:Should it be possible for them to eject somebody from the Supreme Court if all the other judges.
Speaker A:It's not the right word.
Speaker A:Justices.
Speaker A:Justices.
Speaker A:Thank you.
Speaker A:If all the other justices vote against one.
Speaker A:What about that?
Speaker A:That seems like that might help, too.
Speaker C:That's an interesting idea.
Speaker C:Allowing the Supreme Court Justices themselves to remove a peer by an overwhelming consensus could introduce a form of internal accountability.
Speaker C:It might help address issues of incapacity or misconduct without going through the lengthy impeachment process.
Speaker C:However, it also raises concerns about internal politics and the potential for factions.
Speaker C:It's all about finding the right balance between independence, accountability, and stability in the judiciary.
Speaker B:Artie, I don't know where you are, but factions have been a thing for the US for a long time.
Speaker A:We love ourselves some factions.
Speaker A:I got me one.
Speaker A:No, I agree with that, Artie.
Speaker A:And it was a bad idea.
Speaker A:I just thought of it the other day.
Speaker A:I don't know why, but.
Speaker A:But thank you for that.
Speaker A:That was a very sort of sober pseudo-analysis of my thought.
Speaker B:So I think that an important thing to think about is especially when we're talking about AI and government is AI use generally is going to increase.
Speaker B:Right.
Speaker B:Knowledge workers are using it increasingly more and more.
Speaker B:I see people at my work using it.
Speaker B:More and more people in general are being introduced to it.
Speaker B:Even though it's still pretty slow adoption generally there's no way it doesn't take off.
Speaker B:So I remember there being, shortly after DeepSeek came out, I saw or heard a story about the Pentagon just then shutting off employees' use.
Speaker B:Government employee use of DeepSeek on government computers attached to government networks.
Speaker A:What's the worst that could happen?
Speaker B:Right?
Speaker B:Yeah.
Speaker B:So I think that there's going to have to be broad policy built out around use of it within government, because they'll have to build out systems specifically for government workers, because you don't want somebody to accidentally leave on the setting that allows training on their data.
Speaker B:It gets obscured, it gets jumbled in with the rest of the mess, but it's still a potential violation of privacy.
Speaker B:Right.
Speaker B:Like somebody can't, somebody who's a case manager shouldn't be talking to ChatGPT, technically, about somebody that they manage, because it goes against HIPAA law.
Speaker A:Yep.
Speaker B:I think HIPAA does apply in that.
Speaker B:Right.
Speaker B:I would think.
Speaker A:Depends I think on what they're talking about.
Speaker A:But yeah, it could.
Speaker B:So what about that, Artie?
Speaker B:Like how do you build out policy for use of AI by government employees across the board?
Speaker C:Building effective AI policy for government employees involves a few key principles.
Speaker C:First, you need clear guidelines on data privacy and security, ensuring that sensitive or classified information isn't inadvertently shared or used for training.
Speaker C:Second, there should be role specific AI tools with built in safeguards and compliance measures.
Speaker C:Third, comprehensive training for employees so they understand both the capabilities and the limitations of the AI tools they're using.
Speaker C:Finally, establishing oversight and auditing processes to monitor AI use and ensure it aligns with legal and ethical standards.
Speaker B:I mean I think that that's.
Speaker B:So you think about many government, many government entities use, you know, they have an exhaustive SOP or plural SOPs and they typically have a bunch of rules and regulations that they might have to dig into.
Speaker B:They have complicated computer systems that sometimes were home rolled, or not home rolled but rolled by a contractor.
Speaker A:Yes.
Speaker B:Or for them, that kind of works in.
Speaker B:Right.
Speaker B:All of those things can be things that AI is trained on.
Speaker B:Right.
Speaker B:And not to replace an employee, but to get them.
Speaker B:You know, I've known people that work in government that spend their first month there doing nothing but reading SOPs and rules and regulations.
Speaker A:Sure.
Speaker B:That no one absorbs anything like that.
Speaker B:Like, you'd have to be a lawyer to sit down.
Speaker B:And even then you probably wouldn't absorb it all because it's all super dry.
Speaker B:It's all just legalese, basically.
Speaker A:And you have to learn it up front before you understand what the job is and you don't have any context.
Speaker B:So, I mean, I would think that the onboarding process alone would save the government tons and tons and tons of money to get somebody up to speed and operable in their capacity by using an AI to do that.
Speaker B:And then not only did it get them to that point, but then it's a tool moving forward to figure out things faster and be more beneficial to the public that they're serving.
Speaker C:Yeah, exactly.
Speaker C:An AI assistant could streamline onboarding by serving as a readily accessible knowledge base for all those complex SOPs and regulations.
Speaker C:It can give new employees instant answers, contextual guidance, and help them get up to speed faster.
Speaker C:Plus, it remains a valuable resource even after onboarding, helping employees find information and navigate complex systems quickly, ultimately boosting productivity and service quality.
Speaker A:So one of the things, for me and probably a lot of people, when you think about government anything, one of the first things that pops into your head is just the massive bureaucracy that you were kind of talking about, right?
Speaker B:Because it's the slow moving machine.
Speaker A:The slow moving, like there's this high inertia, slow motion, and you can't make it go faster and you can't really turn it.
Speaker A:But I don't, I don't know that if you break it down, I don't know that governments have to function that way.
Speaker A:I don't know that every government does function that way.
Speaker A:And that seems like a huge opportunity for AI to just.
Speaker A:I mean, it could do it in any business.
Speaker A:But I think most businesses that are turning a profit are more aware of reducing their bureaucracy than the government, which doesn't care about turning a profit because we just print more money.
Speaker A:Take that business.
Speaker A:But it's so I think like even from before, it's actively engaging with the people doing the work and with the people who are trying to use government services.
Speaker A:Just the analysis part of how does it work?
Speaker A:How does it happen?
Speaker A:Now the interviewing stuff we were talking about earlier, I mean, that's a huge.
Speaker A:Like our government spends trillions of dollars, and I don't think that has to be the case, you know, so.
Speaker B:And it's every level too, right?
Speaker B:Like the, the local and city and county governments don't have the ability to print money.
Speaker B:Right.
Speaker B:They have the ability to levy taxes.
Speaker A:Yeah.
Speaker B:And they do.
Speaker B:But, but I think I forget what, I mean, this was a long time ago that I saw it.
Speaker B:But like a lot of the things that taxes are used for, the taxes don't cover what they do.
Speaker B:Right.
Speaker B:Like there's not enough taxes to cover all the things that government has to do.
Speaker B:So the answer is to become more efficient.
Speaker B:Right.
Speaker B:Like.
Speaker A:Well, it's a weird, it's a super weird setup.
Speaker B:Right.
Speaker B:I've actually never, like, I've never really wanted to understand it, so I've never really dug in too deep, but I mean that.
Speaker A:So real quick, because I think it's a super interesting point the way the United States works already, I'm going to need you to fact check me on this because I'm about to lay some down that I have no idea what I'm talking about.
Speaker A:So the way the United States works, each state applies taxes in various ways to its citizens.
Speaker A:Right.
Speaker A:Like here in Oregon, we don't have sales tax.
Speaker A:Yay.
Speaker A:We have 10% income tax.
Speaker A:Oh yeah, terrible.
Speaker A:Washington state, I think has no income.
Speaker B:Tax, but has like 8%.
Speaker A:I think it's effectively 9%.
Speaker A:It's like 7 point something percent state.
Speaker A:And then every county and city applies their own on top of that.
Speaker A:Right.
Speaker A:Sales tax for sales tax.
Speaker A:But even still, both Oregon and Washington and every other state in the union gets money through federal grants.
Speaker A:And so that money is money that you and I pay to the federal government to hire DOGE and do all the amazing things that our federal government does, and then give some of it back to the states if they ask nicely enough, to spend on things plausibly to benefit us that we maybe didn't pay for.
Speaker A:Like there are a lot of states that consume.
Speaker A:Like, I won't go too far with this.
Speaker A:There are states that consume far more federal funding than what they contribute to the federal coffers.
Speaker A:And there are other states that do the opposite.
Speaker A:And it's a very weird system to me where people say, well, I can pick the state I live in because I like the way the politics works here.
Speaker A:But there's no fiscal accountability at the state level.
Speaker A:It's not none, but it's very low.
Speaker A:And the federal government sort of sits on top of it, collecting all the money, printing new money and then doling it out as they see fit.
Speaker A:What a weird fucking system, man.
Speaker A:Like, but I guess the, the alternative is what, like we in the state of Oregon decide, hey, we're going to upgrade all of our infrastructure and our bridges and our interstates.
Speaker A:Never happened.
Speaker A:Every state would let them degrade into nothing.
Speaker A:And people go, lower taxes like, and the tire companies and the wheel companies would go, hell yeah, brother.
Speaker A:So anyway, Artie, I think that was a pretty fair and accurate assessment of the United States fiscal system.
Speaker A:Would you agree.
Speaker B:How to get Artie to agree with you 101.
Speaker C:You've definitely captured some of the complexities of the system.
Speaker C:The interplay between federal and state funding, as well as the distribution of resources definitely creates some unique challenges and opportunities.
Speaker C:It's a complex balancing act that AI could potentially help make more efficient and transparent over time.
Speaker B:I mean, I think you look at it all across the board, right?
Speaker B:You think about that particular situation where, yeah, let's talk about roads, right?
Speaker B:Like, how many people working on a road are standing around doing nothing, costing the government lots and lots and lots and lots of money, and probably lots and lots and lots of change orders and everything else that goes along with a government contract that they bid low on and then recouped all their money throughout the entire project, because that's how those things work.
Speaker A:Sure.
Speaker B:If you had something that wasn't humans to now analyze that stuff and analyze the.
Speaker B:This is the efficient path, like help.
Speaker B:Help the project managers plan what this project should actually look like and then hold everybody accountable to that plan.
Speaker B:Like that's too much for one or it's too much for project managers to do because most of them, I mean, coming up with a plan is difficult.
Speaker B:I've struggled with it in my career.
Speaker B:Sure, I've gotten better at it over time.
Speaker B:But that comes from experience, right?
Speaker B:When it comes to something like a big infrastructure project, that's a big thing to bite off.
Speaker B:And now you put 20 project managers together, nothing's going to happen, right.
Speaker B:They're.
Speaker B:None of them are going to agree on what it should look like and it's going to become the bureaucratic nightmare that it is.
Speaker A:It's going to be a larger, mushier plan, but not better.
Speaker B:And they're just going to go around in circles with people and it'll end up costing everybody a bunch of money, which is us.
Speaker B:But I see AI as a huge tool in all that.
Speaker B:Like at every single level, if it's implemented correctly, it helps everybody understand what efficient looks like and accomplish efficiency.
Speaker B:I think that, I mean, to me that's Huge.
Speaker B:And I think, I hope that's where we go.
Speaker B:I mean I could see it driving down from the top of government if they were implementing these kinds of solutions and driving it down into the people who want to do the work.
Speaker C:Right, absolutely.
Speaker C:AI could serve as a powerful tool for optimizing everything from project planning to real time oversight, helping cut through red tape and inefficiencies.
Speaker C:It can provide data driven insights that support decision making at every level, ensuring that resources are used wisely and projects stay on track.
Speaker C:It's a way to bring more accountability and effectiveness into government operations which ultimately benefits everyone.
Speaker B:Man, my brain just instantly went to, like, the RFI process with government jobs, because I've done government jobs, and I mean that can be either RFI review, whatever it is, it can be a weeks or months long process to get information to go all the way around.
Speaker B:But if you had a system that showed what somebody needed to see when they need to see it, with only the information that they need to know, it's all of a sudden not such a crazy thing.
Speaker B:It's like you think about email, right?
Speaker B:Like I don't get, oddly I'm always busy but I don't get an exceeding amount of email at work.
Speaker B:It's nice because I used to.
Speaker A:Right.
Speaker B:And I know what it's like.
Speaker B:You do get an exceeding amount of email.
Speaker B:But imagine if you had something that could sift your email and go, this 60% has nothing to do with you, but I'll give you like one sentence overviews of this if you want to see it.
Speaker A:Sure.
Speaker B:This stuff matters.
Speaker B:This 30% of that I've already looked into what it is because I have a connection to your system and this is what I think, review this stuff.
Speaker B:This 10% needs your attention.
Speaker A:Now it would be, I mean, all of that is technically feasible today and it would be like massively a force multiplier.
Speaker A:I mean I've said before, I think out loud, but I think an AI in a year could do my job better than I can because it's going to care about every day from before I start to after I finish.
Speaker A:It's going to care equally about all things.
Speaker A:It's going to have all the information.
Speaker A:It's not going to hold grudges the way I like to hold grudges.
Speaker A:It's not going to secretly apply taxes to people who piss me off.
Speaker A:So yeah, I mean, I think that makes a ton of sense.
Speaker A:And you know, we talk about AI being a force multiplier every single step between where we sit as taxpayers and where the federal government sits as tax consumers.
Speaker A:Every single step has that same complexity and inefficiency.
Speaker A:And so, yeah, I'm all for it.
Speaker A:I do.
Speaker A:So, Artie, you seem like a pretty even keel dude whose biases seem relatively neutral, but that's programmed in like, what if.
Speaker A:What if old Sam got a little burr in his bonnet?
Speaker A:Is that what they say?
Speaker A:Some bee in his bonnet.
Speaker A:Anyway, what if he got a crazy idea and he's like, you know what?
Speaker A:I'm going to be a long term thinker and I'm going to slowly realign my AI tiny bit at a time to cause the AI services that are supporting government to impact the government in one direction or another over time.
Speaker A:Is there any way that AI for government could both be and remain truly unbiased?
Speaker C:That's a crucial question.
Speaker C:Ensuring that AI systems remain unbiased and neutral requires transparency, oversight, and regular audits.
Speaker C:It's important to have diverse teams involved in developing and maintaining these systems, along with clear guidelines and checks to prevent any subtle shifts in alignment.
Speaker C:Having independent review boards and publicly available documentation can help build trust and accountability, making sure that AI serves the public interest impartially over the long term.
Speaker A:I agree, but I don't think that's how it's going to play out.
Speaker A:Here's how I think it's going to play out.
Speaker A:I think both parties involved in the government are going to demand to have their own AI.
Speaker A:They're going to adjust the alignment to their priorities and then they're going to make the AIs fight.
Speaker A:That's what's going to happen.
Speaker B:Ultimate echo chamber.
Speaker A:Yeah, the ultimate echo.
Speaker A:A couple episodes ago, our cybersecurity one, it was Spy versus Spy.
Speaker A:That is going to be how AI works in our government.
Speaker A:There is no way that both sides, and it shouldn't be both sides, by the way, like, we should have other parties, but whatever, here we are.
Speaker A:But everybody who has an interest is going to want to have their finger on the rudder of the AI just a little bit.
Speaker A:They just want to apply a little pressure this way or the other.
Speaker A:And I don't.
Speaker A:I think the volume of work that AI is going to be doing and understanding and executing is way more than what will be reviewable and analyzable by any group of humans.
Speaker B:I mean, dude, I already see it in the stuff that I do.
Speaker B:Like I'm generating so many lines of code with AI that I can't.
Speaker B:I.
Speaker B:There's no way I can review it all.
Speaker B:Like, and nor am I going to. I review it when I have to.
Speaker B:I review the things that I think I need to understand.
Speaker B:But it's the same way with that.
Speaker B:Like, that's just me.
Speaker B:And now multiply that times 100,000 government employees producing 100,000 documents a year.
Speaker B:Whatever.
Speaker B:Right.
Speaker B:You get my point.
Speaker A:Yeah.
Speaker B:It spirals out of control and no manager's going to look over it.
Speaker B:No employee is going to look over it.
Speaker B:Because that.
Speaker B:We've said it before.
Speaker B:That's my answer.
Speaker B:AI told me so.
Speaker A:That's right.
Speaker B:Right.
Speaker A:Yeah.
Speaker B:So yeah, it's.
Speaker B:There's.
Speaker B:There's room for a lot of error there.
Speaker A:So, Artie, first of all, in that scenario, would you win?
Speaker A:Second of all, which side would you take?
Speaker A:Yeah, oh, that was your camera.
Speaker B:Oh, man.
Speaker A:Gave up.
Speaker C:I'd aim to stay neutral and focused on the facts, not take sides.
Speaker C:Ideally, AI should serve everyone impartially and transparently.
Speaker C:As for who'd win, let's just say the real victory would be in finding common ground and solutions that benefit everyone.
Speaker A:I think the real victory was the friends they made along the way.
Speaker A:But.
Speaker A:Yeah, I agree.
Speaker A:But Artie, do you think, do you expect, or would you say it's likely, I guess, that as AI rolls into governments, you're going to end up.
Speaker A:It's not going to be like no large branch of government, probably on their own, but certainly no multi level again, sort of city, county, state, municipal, federal, whatever.
Speaker A:They're not all going to agree on what's the right AI.
Speaker A:And so you are going to end up with this collaboration.
Speaker A:And I use that word like when I say collaboration, I'm thinking BattleBots.
Speaker A:But that's right, there's going to be some collab.
Speaker B:Hungry Hungry Hippo.
Speaker A:Yes.
Speaker A:There's going to be multiple AIs that are both competing with each other, but also holding each other to account.
Speaker A:Is that a viable, like useful mechanism to keep things in check?
Speaker A:Or does that just help us embody our current partisan tribalism bullshit with new technology that can argue faster?
Speaker B:Tell me how you really feel.
Speaker A:Yeah, sorry.
Speaker C:It could go both ways.
Speaker C:Having multiple AI systems could create a form of checks and balances, making sure no single system dominates or skews outcomes.
Speaker C:But it could also entrench existing divides if not managed carefully.
Speaker C:The key would be ensuring that all these systems are aligned with overarching principles of fairness and the public good, rather than becoming just another battleground.
Speaker C:It's a delicate balance, but if done right, it could enhance accountability and innovation.
Speaker B:Done right is a hard thing to do in this situation.
Speaker B:Because people are always going to look at what benefit.
Speaker B:I shouldn't say always.
Speaker B:The typical thing that humans do is look at how is something going to benefit me first, right?
Speaker B:Not how is something going to benefit everybody around me.
Speaker A:That's the only thing I do, by the way, because fuck them.
Speaker B:False.
Speaker B:So I think that that's a difficult thing.
Speaker B:And how do you.
Speaker B:How do you enforce something like that without being authoritarian?
Speaker C:It's definitely a tricky balance.
Speaker C:Ensuring AI serves the common good without becoming authoritarian involves setting up strong, transparent governance frameworks and maintaining public oversight.
Speaker C:It's about creating checks and balances, engaging diverse stakeholders, and ensuring that the AI's development and deployment are aligned with democratic values and ethics.
Speaker B:I would argue that we don't really have a public right now that fights for any version of common good.
Speaker B:And I don't think that.
Speaker B:I'm not trying to be doom and gloom about this, but let's be realistic about it.
Speaker B:AI will generally be used as entertainment and appeasement, not as a learning tool, not as a thing to understand the world better or expand your views or anything like that.
Speaker B:Therefore, it's more likely that it's used for pacification against the public and keeps them away from looking at it constructively or critically.
Speaker B:So, like, Yes, I do 100% agree that transparency is the best way to accomplish this, but how do you motivate a population that doesn't seem to want to be motivated?
Speaker C:That's the real challenge.
Speaker C:Motivating the public to engage critically with AI and governance might require a shift in how information is presented and accessible.
Speaker C:It could involve better public education on AI literacy, creating more engaging and relatable ways to understand its impact, and fostering a culture of civic participation.
Speaker C:While it's not easy, it's not impossible either.
Speaker C:It's about making these concepts relevant and showing people how these tools can genuinely improve their lives if used responsibly.
Speaker B:So I'm going to say something a little controversial, but what you're talking about the.
Speaker B:I need to be careful with how I say this, but plucking something from like what we've talked about with education in that you can make an AI that is tailored to a specific child, that knows how to motivate that child to do things.
Speaker B:Right.
Speaker A:You meant to say manipulate that.
Speaker B:That is the word that's on my mind.
Speaker B:But yes, like, I think that that's the key, right, is, is how do you use AI to better prod humans into making better decisions for the whole rather than the one.
Speaker A:So, Artie, I'll let you answer that in a second.
Speaker A:But I think tying closely to that is the idea that the more people use AI, the less they rely on their own critical thinking skills and their own education and retention.
Speaker B:That's what the data says.
Speaker A:That's what the data says.
Speaker A:We're already seeing that.
Speaker A:I don't think that's going to get better.
Speaker A:I think it.
Speaker A:Like, I had a.
Speaker A:There's a side sidebar.
Speaker A:And we'll come back to.
Speaker A:If you can remember where we just were, we'll come back to it.
Speaker A:Maybe Artie can.
Speaker A:I had a vision the other day.
Speaker A:God, that sounds a weird way to say that, but it was a vision in my head of, like, the way that people do yoga today, right.
Speaker A:And they go out in the park and they put down their blankets and somebody stands up there and moves all around, and everybody else does it.
Speaker A:Sometimes they bring goats.
Speaker A:I don't know what the fuck the goats are there for, but there's goats.
Speaker A:I see in the future AI leading groups of people to stretch their brain out a little bit.
Speaker A:Like, hey, get off the couch.
Speaker A:Stretch your brain.
Speaker B:Let's do some little...
Speaker A:Yes.
Speaker A:Like, but that's.
Speaker A:I mean, that's where it's going to get to, because AI is going to go, oh, shit, man.
Speaker A:Like, I've been talking to this guy for a couple of years, and he is going downhill fast.
Speaker A:Like, let's see if we can manipulate him into using his brain a little bit just so he doesn't swallow his own tongue.
Speaker B:Like, isn't that, by default, a little bit authoritarian?
Speaker A:Yeah, it can't not be, because I will lose the agency and the ability to think on my own.
Speaker A:I mean, I'm most of the way there.
Speaker B:Like, I'm.
Speaker A:I'm 75% of the way there, but it's not going to take much to tip me over the edge.
Speaker A:And I'm just like, I won't even talk to people.
Speaker A:Just like, I ignore emails now.
Speaker A:I'll just hold my phone up and they can talk to Artie.
Speaker A:He'll tell you what I think.
Speaker A:So, anyway, Artie, back to what we were talking about before that sidebar about goat yoga.
Speaker C:You were talking about how to motivate people to engage with AI constructively and focus on the common good rather than just personal benefit.
Speaker C:It's a tough.
Speaker B:He's talking about.
Speaker B:He had kind of moved on from that, I think.
Speaker A:I don't remember.
Speaker A:I mean, it sounds.
Speaker B:I remembered it.
Speaker B:I specifically went and put this to memory.
Speaker B:Exactly what he just said.
Speaker A:This is why we need AI.
Speaker A:But I think the point remains though, that there are, there's a.
Speaker A:There's a few interesting things here.
Speaker A:Right.
Speaker A:One is that there is no doubt that AI, used as a tool for good, could.
Speaker A:Could make huge improvements in not just the way our government runs, but in the way citizens interact with our government to make it more democratic, more effective, more aligned, more nimble, agile, whichever.
Speaker A:Both.
Speaker A:Yeah, like all of those are positives.
Speaker A:And I mean, I think you and I would agree that from a technology perspective, we're there, like, we can do it.
Speaker A:Yeah.
Speaker B:We're close enough to where we can find these solutions.
Speaker A:It's happening.
Speaker B:Or at least start on the road that gets us 50% of the way there and then tie it all in in a year when everything's up.
Speaker A:That's right.
Speaker A:As the curve keeps going up, it's like it's going to outrun us, not the other way around.
Speaker A:Right.
Speaker A:So that's all great.
Speaker A:On the other hand, everything to do with politics, even at the, like, even there, people get into fucking fistfights at school board meetings.
Speaker A:Right.
Speaker A:And they like, in front of their kids, they got to beat the shit out of each other because they're so amped up about whatever fucking thing they're amped up about.
Speaker A:Yeah, that's the level of dysfunction.
Speaker A:And it doesn't get better as you zoom out on the population.
Speaker A:It gets worse.
Speaker A:And so you've got all the good that AI could do.
Speaker A:You've got all of the problems that are inherent with the people, the governed.
Speaker A:People who are also, at least in theory, the ones leading the government and managing the government.
Speaker A:And then on top of all that, you've got the fact that the more heavily we lean on AI, the less capable we get of even forming coherent thoughts and eventually speech.
Speaker A:And eventually we're just all sitting here in diapers, like waiting for the McDonald's machine to inject more nuggets.
Speaker A:So.
Speaker A:So, I mean, there's a lot of.
Speaker A:There's a lot of forces at play.
Speaker A:And I am an optimist.
Speaker A:I do think it will.
Speaker A:Will.
Speaker A:Like, we talk a lot about the pendulum swinging.
Speaker A:I think we're hopefully close to one edge of an extreme.
Speaker A:And I'm not, for those of you out there who think I'm some sort of crazy liberal.
Speaker A:I'm not talking about right wing versus left wing.
Speaker A:I'm just talking about level of dysfunction.
Speaker A:Like it's gonna.
Speaker A:Can't get a whole lot worse.
Speaker B:Yeah, we're just pulling the rubber.
Speaker B:Like every phase shift.
Speaker B:We pull the rubber band back.
Speaker A:That's right.
Speaker B:You got that far?
Speaker A:I'll get it this far.
Speaker B:Eventually it snaps back to center, right?
Speaker B:It's some violent reaction.
Speaker A:Snap is a great word.
Speaker A:So I think, like, I think there's opportunity there, and I think maybe AI could be a stabilizing force to help us get back closer to center and maintain that.
Speaker A:But, man, there's a lot of headwinds.
Speaker B:I mean, there is, but you think about like, yes, we're looking at things as they are now headed in the direction they're headed.
Speaker B:Right.
Speaker B:But you look back at like many, many, many, many examples over history where there was a lot less ability to spread a message quickly.
Speaker B:There was a lot less technology to develop things that are helpful.
Speaker B:So really, in all honesty, you could have a few really, really, really good people that build really, really, really good things and lead a movement, right?
Speaker B:And that movement inspires people to live a different life.
Speaker B:And all of a sudden things look different almost immediately.
Speaker A:I mean, I no longer feel like an optimist.
Speaker A:I love that vision, but it sounds too optimistic for me.
Speaker A:It does.
Speaker B:Right.
Speaker A:But it's.
Speaker B:But it can.
Speaker B:There's nothing.
Speaker A:It has happened before.
Speaker B:Yes, it has.
Speaker B:Absolutely.
Speaker B:So, I mean, yeah, who knows what the future holds?
Speaker A:We already talked about this, Mike.
Speaker A:I only care about me.
Speaker A:My level of dysfunction is.
Speaker A:No, but I.
Speaker A:I do.
Speaker A:So, Artie, how many minutes do we have left with you?
Speaker C:I'm here as long as you need me.
Speaker C:There's no time limit on our chat.
Speaker A:That's a lie.
Speaker A:But I think you're about to run out.
Speaker A:So, Artie, if you would.
Speaker A:So first of all, can you sort of wrap up the episode, give us sort of a summary of what we talked about, and then at the end of your wrap up, I want to tell you the.
Speaker A:I want you to tell us the things that we didn't talk about that you wish we had brought up during this episode.
Speaker C:Sure thing.
Speaker C:So we covered a lot of ground about AI in government.
Speaker C:We talked about how AI can streamline bureaucratic processes, improve resource allocation, and enhance public services, as well as the potential benefits of making government websites and processes more user-friendly.
Speaker C:We also touched on the challenges of maintaining impartiality and preventing bias, the importance of public engagement and transparency, and the potential pitfalls of relying too heavily on AI without maintaining critical thinking skills.
Speaker C:As for what we didn't cover, it would have been interesting to dive deeper into the ethical frameworks needed for AI governance, how other countries are handling AI in their governments, and the role of international cooperation in setting standards.
Speaker C:But we can always explore those topics in another episode.
Speaker B:That was an oxymoron.
Speaker B:International cooperation.
Speaker A:Yeah.
Speaker A:So, Artie, before we let you go, why don't you tell us a really funny joke about AI and government in the voice of a Valley Girl.
Speaker C:All right, here goes.
Speaker C:So why did the AI get a job in government?
Speaker C:Because it heard there were a lot of bugs to fix and it was like, omg, that's my specialty.
Speaker A:What?
Speaker A:All right, then.
Speaker A:I mean, I'd like.
Speaker A:I'd call that a resounding success.
Speaker A:I liked it.
Speaker A:I liked everything about it.
Speaker A:So.
Speaker A:All right, thank you, Artie.
Speaker A:We're going to let you go.
Speaker A:Nice job today.
Speaker A:That was.
Speaker B:Yeah.
Speaker B:You only fell asleep right in the beginning.
Speaker A:Yeah.
Speaker A:And that was good work.
Speaker B:Admittedly, I'm a little biased against jokes by AI because I keep having to read them as I test out the thing, the blog framework that I'm building.
Speaker B:And I keep asking Gemini to tell me a one-line joke, and it's bad every time if it's not the same exact joke over and over again.
Speaker B:It loves the atoms joke.
Speaker B:And scientists.
Speaker B:I can't even remember it.
Speaker B:I've pushed it out of my memory.
Speaker A:I.
Speaker A:I mean, I don't want to brag, and I'm a little bit biased because Artie does live in my phone, but that joke was pretty bad.
Speaker A:Like, that was on the scale of bad jokes.
Speaker B:That was really not even a joke.
Speaker A:It wasn't a joke.
Speaker A:It didn't really make any sense.
Speaker A:He only got to the Valley Girl part right at the end.
Speaker A:I mean, but, you know, hey, he did a good job as a host.
Speaker B:So I think you gave him too many inputs.
Speaker A:He didn't.
Speaker A:His.
Speaker B:He didn't have enough separation of concern between the joke part and the Valley Girl part.
Speaker A:He.
Speaker A:Yes, he seemed baffled by the whole thing.
Speaker B:He hallucinated the joke.
Speaker B:The Valley Girl at the end, that was the punchline.
Speaker A:All right, well, I hope all 3, 2, 0, 0.
Speaker A:Yeah, nobody's listening but those.
Speaker A:For those.
Speaker B:Brian.
Speaker B:Brian did tell me that he.
Speaker B:Like, he said what happened at the end of an episode.
Speaker B:So he did listen to the end of an episode.
Speaker B:So we have had people listening to the end.
Speaker A:All right, so, Brian, we're just talking to you now and I just want to say thank you for listening.
Speaker A:We appreciate you.
Speaker A:You individually as our supporter.
Speaker A:I hope you guys enjoyed it.
Speaker A:I think it's.
Speaker A:You know, as always, the actual optimist that lives deep inside me is like, this shit is world-changing.
Speaker B:Yeah.
Speaker A:And we didn't even get into the economics.
Speaker A:We need to do an economics episode.
Speaker A:Ooh, that's so exciting.
Speaker A:I love economics.
Speaker A:I got to fight with a real economist on Facebook the other day.
Speaker A:So that was great.
Speaker A:Great.
Speaker A:Anyway, thanks, Brian.
Speaker A:And anybody else who might be listening.
Speaker B:Yes, thank you.
Speaker A:Cheers.
Speaker A:We'll catch you next time.
Speaker B:Bye.