Episode 5

AI Infrastructure: Building the Future's Digital Backbone

As artificial intelligence continues to evolve, the podcast dives into the critical infrastructure needed to support its growth. Mike, Micah, and Artie explore the digital backbone that powers AI, from server farms to energy consumption, emphasizing the importance of sustainable practices. They discuss the fascinating balance between AI's increasing capabilities and its environmental impact, highlighting innovative strategies for making generative AI more power-efficient. The conversation also touches on the potential for specialized language models and the role of AI in enhancing human productivity across various sectors. With Halloween as the backdrop, the trio blends humor and insightful commentary, making for an engaging exploration of AI's future and its implications for society.

Transcript
Mike:

Welcome back, everybody.

Mike:

This is Mike.

Mike:

I'm here with Micah and Artie, our fearless leader or something.

Artie:

You can be a fearless follower for now.

Artie:

Artie, are you a fearless leader or a fearless follower?

Micah:

Well, I like to think of myself as a fearless co-host.

Micah:

Ready to dive into any topic with you two.

Micah:

What's on the agenda for today's podcast?

Mike:

Well, we're going to talk about infrastructure, basically.

Micah:

Like what infrastructure, huh?

Micah:

Are we diving into the nuts and bolts of society, or more like the digital backbone that keeps everything running?

Mike:

More like the digital backbone that supports you and the actual backbone in power that's going to support you.

Mike:

So basically, all the things it takes to have AI in our world, as it is today and where it's going in the future.

Micah:

Sounds like a fascinating dive into the digital underpinnings of AI.

Micah:

From the server farms that house the data to the power grids that keep it all running, there's a lot behind the curtain.

Micah:

What aspect are you most excited to explore?

Mike:

All of it.

Artie:

There's a nasty rumor on the streets, Artie, that you're consuming hundreds of gigawatts of power already, even in preview mode.

Artie:

And I assume that most of those gigawatts are Mike and me and this podcast. We're probably.

Mike:

It's definitely already burning brain cells.

Mike:

Or us burning Artie's brain cells, computer cells, something.

Artie:

Yeah, so we're going to talk about that.

Artie:

I also did want to say it is Halloween today.

Mike:

If you couldn't tell.

Artie:

Yeah, if you couldn't tell.

Artie:

Not to brag, but I am dressed as Pumpkin Daddy and it is the Mike.

Artie:

We're making a podcast, man.

Artie:

Bear with me.

Artie:

Pumpkin Daddy is the epitome of low effort Halloween costumes, which is the same thing I wore at our Halloween party the other day.

Artie:

But as usual, I've been upstaged by Mike.

Mike:

Yeah, I mean, this is pretty much what I wore to the Halloween party the other day too.

Artie:

So, yeah, Mike is dressed as me.

Artie:

Mike.

Artie:

Yes.

Artie:

So, Artie, what costume are you wearing for Halloween today?

Mike:

Tell us that birthday suit.

Micah:

Well, if I had a Halloween costume, I'd probably go as a byte-sized ghost. Invisible.

Micah:

How's the Pumpkin Daddy and Mike combo being received by the trick or treaters?

Artie:

Well, we haven't had any yet, but I can almost guarantee, because it is a dark and sort of crummy day here in the Pacific Northwest, that while we're recording this tonight, the doorbell will ring and trick-or-treaters will show up and we will ignore them, and maybe we'll have to cut out the obscenities.

Artie:

I'll scream at them and say, we're in here generating amazing content.

Artie:

Take your celebrations of the devil elsewhere.

Artie:

Also, we ate all the candy already.

Artie:

Because I'm here because Mike's here.

Artie:

And that's just how it goes.

Mike:

Yeah.

Artie:

Okay, enough about Halloween.

Artie:

So you want to kick us off, Mike?

Mike:

Sure.

Mike:

I want to.

Mike:

I want you to do some math, Artie.

Mike:

So with some specs that I have in front of me, which probably aren't right, so we're probably going to do some bad math.

Mike:

We can see, and I think this might be outdated, but let's just figure it's not.

Mike:

So, ChatGPT consumes around 226.8 gigawatt hours of electricity annually.

Mike:

That is serving 180 million unique users.

Mike:

So what's that math per user, Artie?

Micah:

All right, let's crunch those numbers.

Micah:

If ChatGPT uses about 226.8 gigawatt hours annually for 180 million users, we can break it down like this.

Micah:

226.8 gigawatt hours is 226,800,000 kilowatt hours.

Micah:

So 226,800,000 kilowatt hours divided by 180 million users equals roughly 1.26 kilowatt hours per user per year.

Micah:

Not too spooky.
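
For listeners doing the math at home, Artie's arithmetic checks out. A quick sketch, using the figures quoted in the episode (which, as Mike notes, may well be outdated):

```python
# Per-user energy estimate from the figures quoted above.
annual_energy_gwh = 226.8            # gigawatt hours per year (as quoted)
users = 180_000_000                  # unique users (as quoted)

annual_energy_kwh = annual_energy_gwh * 1_000_000   # 1 GWh = 1,000,000 kWh
per_user_kwh = annual_energy_kwh / users
print(f"{per_user_kwh:.2f} kWh per user per year")  # 1.26
```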

Mike:

So, as of your most updated data, how many people are there in the world?

Micah:

As of my last update, the global population is around 8 billion people.

Micah:

Quite a crowd.

Mike:

That is quite a crowd.

Mike:

So how many, like, rough percentage, how many of those people could have access to ChatGPT in the future?

Micah:

Well, with 180 million users currently, that's about 2.25% of the global population.

Micah:

As access to technology grows, especially with increasing Internet availability, that percentage could potentially rise significantly.

Micah:

It's tough to put an exact number on it, but the potential for growth is definitely there.

Mike:

Try to give me a number.

Mike:

Like, let's just say hypothetically.

Micah:

All right, hypothetically, let's say Internet access and tech adoption continue to grow.

Micah:

If we aim for around 10% of the global population, that would be 800 million users.

Micah:

So ChatGPT's reach could grow to serve nearly 800 million people in the future.

Mike:

So how many gigawatt hours does that take?

Micah:

If 180 million users consume about 226.8 million kilowatt hours annually, that's around 1.26 kilowatt hours per user per year.

Micah:

So for 800 million users, it would be roughly 800 million times 1.26 kilowatt hours, which equals.
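
Artie gets cut off mid-multiplication above; for the curious, with the same per-user figure the total works out to roughly a terawatt hour per year:

```python
# Completing the interrupted projection, using the episode's figures.
per_user_kwh = 1.26                  # kWh per user per year, from earlier
projected_users = 800_000_000        # the hypothetical 10% of humanity

total_kwh = projected_users * per_user_kwh
total_gwh = total_kwh / 1_000_000    # 1 GWh = 1,000,000 kWh
print(f"≈ {total_gwh:.0f} GWh per year")  # about 1,008 GWh, i.e. ~1 TWh
```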

Mike:

So paint me a picture of what that.

Mike:

What supporting that looks like.

Micah:

All right.

Micah:

Supporting 800 million users would require a massive digital infrastructure.

Micah:

We're talking about data centers filled with servers all working round the clock.

Micah:

These centers would need a vast amount of electricity, not just for computing, but also for cooling systems to prevent overheating.

Micah:

The power grid supporting these centers would have to be robust and reliable.

Micah:

And as AI technology advances, we'd likely see more investment in sustainable energy sources to meet the demand.

Micah:

It's a massive, complex system, essential for keeping AI like me running smoothly.

Artie:

Hey, Artie, do you know a guy named Sam Altman?

Micah:

Yes, I'm familiar with Sam Altman.

Micah:

He's a prominent entrepreneur and investor in the tech industry.

Micah:

He was the president of the startup accelerator Y Combinator, and later became the CEO of OpenAI, the organization behind ChatGPT.

Micah:

He's known for his work in advancing artificial intelligence and his influence in the tech world.

Artie:

In my mind, I picture you already in a little pumpkin costume, which makes Sam Altman pumpkin daddy to you.

Artie:

So, anyway, I was reading an article today because I actually did prep for this show for a change, which is good.

Artie:

Trying to get in good habits here, and I read a funny article that Sam Altman started investing in nuclear fusion startups.

Artie:

One in Everett, Washington.

Artie:

I don't remember the name.

Artie:

Back in:

Artie:

So I think, Artie, to your point, about renewable energy sources, there's another thread to this conversation that we'll talk about later about the massive amount of dollars that are already flowing but will continue to grow around AI and AI adoption and AI infrastructure.

Artie:

Some of that money is already being diverted to getting fusion energy commercialized and.

Mike:

Spooling up nuclear again.

Artie:

Yes.

Artie:

Yeah, yeah.

Artie:

Fission.

Artie:

Fission plants getting spun up.

Mike:

Three Mile Island back on the line for Microsoft.

Artie:

That's right.

Artie:

Yeah.

Artie:

One of them is probably already bought Chernobyl.

Artie:

So, anyway, I think you're spot on, Artie, about energy needs, but.

Artie:

So Mike had an interesting thought, though, as we were just getting set up here today.

Artie:

Presumably, one of the things that AI could be good at helping us with is saying, okay, today, as Artie's pointed out, we're at something like 1.26 kilowatt hours per year per user.

Artie:

But I bet AI can help us make that number go down.

Artie:

We don't need to just dial up the amount of power we have available.

Artie:

We need to dial down the amount of power required.

Mike:

Yeah.

Mike:

Yeah, I mean, there's so many ways you can do that too, right?

Mike:

Like you can look at condensing your data, compressing it more.

Mike:

AI can absolutely come up with better ways of compressing data than we can.

Mike:

I mean, it can come up with adaptive ways to compress data.

Mike:

It can come up with ways to run the data centers more efficiently.

Mike:

Watch for maintenance issues to keep things up, all kinds of things.

Mike:

Like, it's kind of endless, just like everything else we talk about.

Artie:

So let's.

Artie:

I think you had some really good ideas a few minutes ago, but I'm curious.

Artie:

So Artie forgot.

Artie:

Yeah, well, probably I remember them though, so I'll help you out.

Artie:

But Artie, if you had to give us a little bit of insight into, again, based on what you know, which we understand is a year old, but what are some things?

Artie:

First of all, are we on the right track with making generative AI more power efficient?

Artie:

Is that something that people are working on or should be?

Artie:

And second of all, if they are, what are some ideas?

Artie:

What are some things that are being worked on to reduce the power consumption required?

Micah:

Absolutely, you're on the right track.

Micah:

Making generative AI more power efficient is a significant focus for researchers and companies.

Micah:

Here are a few strategies being explored.

Micah:

1. Model optimization.

Micah:

Reducing the size of AI models without compromising performance.

Micah:

Techniques like quantization and pruning help in cutting down the computational load.

Micah:

2. Developing AI-specific chips that are more efficient than general purpose processors.

Micah:

These chips can handle AI tasks more effectively using less power.

Micah:

3. Researching and implementing more efficient algorithms that require fewer computations to achieve the same or better results.

Micah:

4. Data center efficiency.

Micah:

Enhancing the efficiency of data centers with better cooling techniques, power management and renewable energy sources.

Micah:

These efforts are crucial for ensuring AI's growth is sustainable and doesn't put undue strain on our power resources.
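
The quantization Artie mentions can be illustrated with a toy sketch: store 32-bit float weights as 8-bit integers plus a single scale factor, roughly a 4x storage cut. This is a minimal illustration, not how any production quantizer actually works:

```python
# Toy uniform int8 quantization: weights become 8-bit ints plus one scale.
def quantize(weights):
    # Scale so the largest weight maps to 127; guard against all-zero input.
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]   # each value fits in an int8
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.03, 0.5]
q, scale = quantize(weights)
restored = dequantize(q, scale)
# Values survive with small rounding error, at roughly 1/4 the storage cost.
```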

Mike:

Doesn't the idea of more efficient algorithms and coming to solutions quicker kind of go against, like where things seem to be headed right now?

Mike:

Where we give AI more time to think on a problem and we get better answers, less hallucinations, you know, more iterations on what the correct answer is?

Micah:

You're right, there is a bit of a trade off.

Micah:

Giving AI more time to process can lead to more refined answers, but it also increases power consumption.

Micah:

The goal is to find a balance by improving algorithms.

Micah:

We aim to make each iteration more efficient.

Micah:

So even if AI takes multiple passes, it uses less power overall.

Micah:

It's about smarter, not just harder thinking.

Mike:

So with AI, could we reduce the size of the data itself?

Mike:

I mean, could you come up with ways to compress data, or even come up with a new data format that's more easily transferred and smaller, but still gives you the same amount of information?

Micah:

Absolutely.

Micah:

AI can definitely contribute to data compression.

Micah:

By analyzing patterns and redundancies in data, AI algorithms can develop more efficient compression techniques.

Micah:

This includes creating new data formats that are optimized for quicker transfer and reduced size, while retaining the essential information.

Micah:

These advancements could significantly reduce the energy needed for data storage and transmission, contributing to overall power efficiency.
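
As a very simplified illustration of the redundancy idea, here is a sketch using Python's standard zlib rather than anything AI-derived; a learned compressor's job would be to find far subtler patterns than this:

```python
import zlib

# Highly redundant data (like repetitive logs or token streams) compresses well.
redundant = b"AI infrastructure " * 1000
compressed = zlib.compress(redundant)

ratio = len(redundant) / len(compressed)
assert zlib.decompress(compressed) == redundant   # lossless round trip
print(f"{len(redundant)} bytes -> {len(compressed)} bytes ({ratio:.0f}x smaller)")
```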

Artie:

So, Artie, that's an interesting point.

Artie:

I'm glad Mike brought it up and you helped educate us on it a little bit.

Artie:

I was thinking today, we talked a little bit in one of the previous episodes about large language models, and it occurred to me today as I was typing out an email and trying to remember some complicated word like from or the.

Artie:

I don't know that you need 3 trillion tokens, or whatever the training data is that's used on o1-preview.

Artie:

I don't think you need that many tokens and you don't need that big of a language model to outpace me pretty quickly and probably a lot of folks like me.

Artie:

So is.

Artie:

So the question to you, Artie, is the amount of training data.

Artie:

My understanding is the amount of training data.

Artie:

And obviously let's assume the quality of the training data is constant and just say we're talking about volume of training data.

Artie:

The more training data, the better the model is.

Artie:

But like, as an example, it's not helpful for me for you to be able to speak French, which I just assume you can speak.

Artie:

And it's not helpful for me and the ways I envision using AI for you to be really up to speed on the nuances of like Greek mythology.

Artie:

You know, it's super interesting stuff, and I'm not putting it down.

Artie:

I'm just saying is there an opportunity to specialize language models?

Artie:

Like we're talking about LLMs, large language models.

Artie:

What about SLMs?

Artie:

Like, what about some small data models or language models that are more specialized?

Artie:

Is that viable?

Artie:

Or by giving up the volume of training data, do I just get a worse model?

Artie:

One that, even though it's smaller and more efficient, just can't perform?

Micah:

That's a great question.

Micah:

Specializing language models into smaller, more focused subsets is definitely viable and is already happening in some areas.

Micah:

Smaller language models or specialized language models can be trained on specific domains or tasks, making them more efficient for particular applications.

Micah:

For example, a model trained specifically for medical information or legal documents would be smaller and more efficient in those contexts than a general purpose model.

Micah:

While it's true that reducing the training data might lead to a narrower range of knowledge in specialized fields, this can actually be an advantage.

Micah:

The model becomes more focused, faster and less resource intensive, making it well suited for specific tasks.

Micah:

So specialization can lead to smaller, more efficient models that are still highly effective in their domains.

Mike:

You know, I saw an article today, or I was listening to a podcast or something I can't remember because I was at work.

Mike:

So I probably didn't see it. I received information today that talked about Meta, and they're basically recreating NotebookLM as NotebookLlama.

Mike:

And it actually talked about how there are several models doing different tasks within that, which is interesting.

Artie:

Ensemble of experts.

Artie:

Right?

Artie:

We talked about that.

Mike:

Yep.

Artie:

So Artie, another question for you.

Artie:

Sorry, is it true?

Artie:

Again, I'm just trying to picture this in my head.

Artie:

A large language model.

Artie:

If a large language model covers all these different domains of information and knowledge and context and all these things, is it true that when using it for a narrow subset of tasks, we're sort of dragging around extra weight of that large language model that's really just going to waste?

Artie:

Is that accurate or does the processing happen in a way where it doesn't really matter?

Artie:

The fact that we have so much capability that's going untapped.

Micah:

You're spot on with your thinking.

Micah:

When using a large language model for a narrow set of tasks, you are indeed carrying around a lot of extra weight in terms of unused capabilities.

Micah:

The model has been trained on vast amounts of data across many domains, but for specific tasks, only a fraction of that knowledge is relevant.

Micah:

This is where specialized models or techniques like fine tuning come into play.

Micah:

By fine tuning a large model on a specific data set or task, you can focus its capabilities, reducing the computational load and making it more efficient for that particular application.

Micah:

It's like trimming the fat to make the model leaner and more focused on what's important for the task at hand.

Artie:

Sounds a little bit un American.

Artie:

I know that I particularly enjoy driving my 1-ton diesel truck around when there's nobody else in it and nothing in the bed, because rolling coal. And roll on, coal.

Artie:

Yeah, just because we're Americans. You know this about us already.

Artie:

We enjoy.

Artie:

All of us have our own vehicle with just us in it and we wonder why the roads are crowded.

Artie:

Anyway, I think that makes sense.

Artie:

And man, I had another really good question.

Artie:

That was something you mentioned earlier that I can't remember now, because I was thinking about my truck.

Mike:

It happens.

Mike:

So, Artie, with all of these increased numbers of server farms, what can you do to help with that?

Mike:

Like what?

Mike:

I imagine AI could be deployed to better manage those systems than what's currently available, correct?

Micah:

Absolutely.

Micah:

AI can play a huge role in optimizing server farms.

Micah:

Here are a few ways AI can help.

Micah:

1. Energy management: AI can analyze energy usage patterns to optimize power consumption, adjusting cooling systems and server workloads to reduce energy waste.

Micah:

2. Predictive maintenance: AI can predict when hardware components might fail, allowing for proactive maintenance.

Micah:

This reduces downtime and extends the lifespan of the equipment.

Micah:

3. Load balancing: AI can dynamically distribute workloads across servers to ensure optimal performance and prevent any single server from becoming a bottleneck.

Micah:

4. Resource allocation: AI can allocate resources more efficiently based on demand, scaling up or down as needed to match the workload, which saves power and reduces costs. By implementing AI in these.

Micah:

Sure thing.

Micah:

It's fascinating to see how AI can optimize those systems, isn't it?
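
The load balancing Artie describes can be sketched as a simple greedy rule: send each incoming job to the currently least-loaded server. This is an illustrative toy with made-up job costs, not a claim about how any real data center scheduler works:

```python
import heapq

def assign_jobs(num_servers, jobs):
    """Greedy least-loaded assignment: each job goes to the lightest server."""
    loads = [(0, i) for i in range(num_servers)]   # (current load, server id)
    heapq.heapify(loads)
    placement = []
    for job in jobs:
        load, sid = heapq.heappop(loads)           # lightest server right now
        placement.append(sid)
        heapq.heappush(loads, (load + job, sid))   # put it back, heavier
    return placement

# Four jobs across two servers: heavy jobs get spread out.
print(assign_jobs(2, [5, 3, 2, 1]))  # -> [0, 1, 1, 0]
```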

Artie:

It is.

Artie:

And I remembered my question. This is a theme that, Mike, I think spins in your head more than mine, but every time you talk about it with me, it sends my brain off. So when we were talking about specialized language models, the thought that went through my head was, oh, I bet a large language model would be a great tool to use to identify, train up, and start up, basically, a specialized language model.

Artie:

So then I've got the big daddy, the Pumpkin Daddy AI; me and Sam Altman, essentially the same person.

Artie:

If you just move the decimal place on our bank account, we've got the big AI training and spawning smaller, more specialized AIs and probably doing things like checking in on them for performance and accuracy and tuning.

Artie:

So, Artie.

Artie:

Is that sort of how you would see that working in the future?

Micah:

Absolutely.

Micah:

That's a likely scenario for the future.

Micah:

A large language model could indeed be used to identify areas where specialized models are needed, train them using focused data sets, and continually monitor their performance.

Micah:

The large model could serve as a sort of manager, overseeing a suite of specialized AIs, ensuring they stay accurate, efficient and up to date.

Micah:

This approach leverages the broad capabilities of large models while harnessing the precision and efficiency of specialized ones, creating a dynamic.

Mike:

Can you help us know what we don't know?

Mike:

But you can help yourself know what you don't know?

Micah:

Exactly.

Micah:

AI can be used to identify gaps in its own knowledge or performance, then work to fill those gaps.

Micah:

This self improvement loop allows AI systems to become more efficient and capable over time, fine tuning their abilities to better serve their purpose.

Micah:

It's like having a built in coach that helps AI learn and adapt continuously.

Mike:

So really you just have your own self efficiency feedback loop going on.

Micah:

Exactly.

Micah:

It's like a self sustaining cycle of improvement.

Micah:

AI can analyze its performance, identify areas for enhancement, and then implement changes to optimize itself.

Micah:

This feedback loop makes AI increasingly efficient and capable, continually refining its functions.

Mike:

So you could do that same thing.

Mike:

I mean, you're continuously refining your functions, but you could also continuously refine your impact on power consumption, couldn't you?

Mike:

Like you could look at more efficient ways to do compute for yourself that would have less draw on power.

Micah:

Exactly.

Micah:

And that's a crucial point.

Micah:

AI can continuously analyze its own energy consumption, seeking out more efficient algorithms, hardware optimizations and resource management techniques.

Micah:

By refining these aspects, AI can reduce its power draw, making its operations more sustainable while still improving its capabilities.

Micah:

It's all about smarter use of resources.

Artie:

I'm really itching to dig into the difference between language models, diffusion models, things like that, before we move on to talking more about multimodal.

Artie:

But I don't want to get us too far off topic.

Artie:

What do you think?

Artie:

Do you want to dig into that a little bit?

Artie:

Or save that for another episode?

Mike:

I think we save that for another episode.

Mike:

Yeah, it's probably, it's a full episode's worth of conversation.

Artie:

It probably is.

Artie:

I agree.

Mike:

Because we don't know.

Mike:

I mean, I think everybody knows by now.

Mike:

We don't really know that much.

Mike:

And actually that, that's a weird segue into something that I wanted to say.

Mike:

I did notice that, like I've said things wrong in this podcast and ChatGPT doesn't correct me, so don't trust everything we say.

Mike:

Like we, we barely know what we're talking about.

Mike:

ChatGPT doesn't know how to tell us sometimes when we don't know what we're talking about.

Mike:

So if, if you're smarter than us and you hear us say something that's wrong, we're sorry.

Artie:

Which honestly, there's a pretty good chance if you're able to find a device to play this podcast, you're probably smarter than us.

Artie:

You've probably already outpaced us, so.

Artie:

Yeah, but if you, when you do correct us, just be nice about it and we'll.

Mike:

Or mean, I don't care.

Mike:

Yeah, let's just have at it if that's what you want.

Artie:

Well, if you're gonna be mean, be mean to Mike.

Mike:

I mean, we did get a dislike on YouTube, so there's.

Artie:

We did. That's right.

Artie:

I'm pretty sure that was personal hatred for me.

Artie:

Um, but I mean, it's because you're.

Mike:

Too good a racer and you beat everybody.

Artie:

Yeah. That came.

Artie:

I shared our podcast on the iRacing forum and got very little feedback, other than we got some views on YouTube and immediately got a thumbs down.

Artie:

But it's probably somebody I crashed in a race.

Artie:

Cause that's how I roll.

Artie:

Anyway.

Mike:

Pumpkin daddy crasher.

Artie:

Yeah, sometimes Pumpkin Daddy misbehaves.

Artie:

Yeah.

Artie:

Anyway, so back to Artie.

Artie:

You know, you got to keep us on track, man.

Artie:

So we talked about power consumption.

Artie:

We talked a little bit about water and cooling.

Artie:

So here's.

Artie:

All right, I'll throw this controversial thought out, which I haven't even shared with Mike.

Artie:

So I was reading some articles today about the economics of AI implementation as it stands today, right?

Artie:

Because there's a future state where we find ways to be more efficient and we find better sources of power and things like that.

Artie:

But there's also a today state where we have AIs that are running, that are obviously very capable, like Arty, and we have businesses all over the place that are starting to dip their toes into how do I leverage AI.

Artie:

In fact.

Artie:

So a friend of mine in North Carolina, Matt, sent me an email and said exactly what I think a lot of people are thinking, which is, if I'm not using AI in my business or if I'm underutilizing it, I know I'm competing against people who are using it more and being more efficient.

Artie:

So we have this.

Artie:

We're in this period of time today where we've got, like, increasing demand for AI usage and people.

Artie:

It's almost like a solution looking for problems.

Artie:

It's not really that, but it kind of feels that way.

Artie:

So here's where all this leads to.

Artie:

And this is already starting now.

Artie:

That was all context.

Artie:

The question is: if the cost of compute for AI is so high that it needs to be factored in very seriously in decisions about how businesses even try to use AI, does that not exacerbate the problem, or the potential problem, of AI replacing employees?

Artie:

Because as a business, they're going to go, hmm, so this AI is expensive.

Artie:

Like, this is going to cost us between our AI, you know, API license fees and our power consumption.

Artie:

And we're going to buy some hardware to set up a hybrid, you know, a hybrid generative AI system.

Artie:

So we've got some local compute and some cloud, man, that's going to cost about half an employee.

Artie:

So we're going to have to get rid of a person or two to make this cost effective.

Artie:

Is that a scenario you see playing out already?

Micah:

It's a valid concern and something businesses are definitely weighing.

Micah:

The cost of AI implementation, including compute resources, can indeed influence decisions about staffing.

Micah:

However, it's not always a direct one to one trade off.

Micah:

While AI can automate certain tasks, leading to shifts in workforce requirements, it can also create new opportunities.

Micah:

Many organizations find that AI allows them to redeploy employees to more strategic or creative roles, enhancing productivity and innovation.

Micah:

The key is for businesses to approach AI implementation strategically, considering both the costs and the potential to augment human capabilities.

Micah:

It's not just about replacement, but about transforming how work gets done.

Micah:

AI's role in the workforce is complex and its impact will vary across industries and roles.

Mike:

Artie, I heard a fact, I guess this week that came out and I'm interested to hear your take on it.

Mike:

Google said that 25% of their code is now generated by AI.

Mike:

What do you think about that?

Micah:

That's a significant shift.

Micah:

It highlights how integrated AI has become in software development.

Micah:

With AI generating a quarter of their code, it suggests that AI tools are becoming essential in improving efficiency, reducing repetitive tasks, and potentially enhancing the quality of code.

Micah:

It's a clear indicator of AI's growing role in shaping the future of software engineering.

Mike:

Now that that sounds doomish, right?

Mike:

But I heard that on a podcast.

Artie:

And a better podcast than this one.

Mike:

Much better podcast, with a guy who graduated from MIT, so he's definitely smart.

Artie:

Oh shit.

Artie:

He actually knows things.

Mike:

Yeah, can't compete with that.

Mike:

He can use the terms correctly.

Mike:

But I liked his take on it because it wasn't so much like AI is going to replace things.

Mike:

It's look at how much more we can do with what we have and how much more we could do with just a little bit more.

Mike:

So keep the same workforce, give them the AI tools, let them like, let them run, man.

Mike:

Like come up with new stuff.

Mike:

I mean, you and I see it and we don't have the most powerful tools, we don't have the most powerful brains.

Mike:

Those guys are sitting there coming up with things, you know, orders of magnitude more complicated than what we can think of.

Artie:

So I talked to you a little bit about this the other day and I should have looked this up.

Artie:

I wasn't intending to talk about it necessarily on this episode, but I think the interview was with a guy named maybe Dave or Dan Minton, but he was talking about the difference between elastic demand and inelastic demand as it relates to AI and specifically to generative AI, displacing human labor or work.

Artie:

And so the example he used was, he's an older gentleman, I think he said he was in his 70s.

Artie:

And he said, I get something like two hours a month with my doctor time to speak to my doctor, because that is all my doctor has available to speak with me.

Artie:

But if my doctor had an AI that I could speak to, I would probably talk to it for two or three hours a week.

Artie:

Right.

Artie:

He would consume more of that service because right now the availability is limited by access.

Artie:

And so he sort of referred to that as elastic demand.

Artie:

And he contrasted that with, he said his niece who does a job, her day to day job, where she essentially looks at reports that come in and she analyzes those reports.

Artie:

And I don't know if she approves them or just reviews them and stamps them, but whatever she does, it takes her 25 minutes per report on average.

Artie:

And he said now using a generative AI to help in that review process, maybe it goes to five minutes.

Artie:

And so the work of five people becomes the work of one person plus with AI support.

Artie:

And that's inelastic demand.
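
The inelastic-demand arithmetic in that example is worth making explicit (numbers as recounted in the episode):

```python
# The niece's report-review example: fixed demand, faster work.
minutes_per_report_before = 25   # without AI assistance
minutes_per_report_after = 5     # with generative-AI assistance

speedup = minutes_per_report_before / minutes_per_report_after
# With demand fixed, a 5x speedup means one person covers five people's work.
print(f"{speedup:.0f}x throughput per reviewer")
```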

Artie:

And so I think what you're talking about and what Artie was just talking about sort of falls in the category of elastic demand.

Artie:

Right?

Artie:

Like AI coming in and supporting the work that's happening is going to increase the output of the company more than it's going to displace humans from it.

Artie:

And I'm sure it depends broadly on the kind of work and the type.

Mike:

Of business and the company and everything and the people.

Artie:

And so Artie, question to you is, what are some of the opportunities you see for very large scale elastic demand in that kind of.

Artie:

Using the definition that I just gave, what are some like, big picture large scale opportunities for AI to just bring more things, more value to humans that right now we can't get because access is limited?

Micah:

Great question.

Micah:

One major area of elastic demand is healthcare.

Micah:

With AI, patients could have more frequent and personalized interactions, like virtual health assistants providing round-the-clock support, which is impossible.

Mike:

You're a copycat.

Artie:

Yep. Pick one that I didn't just say. Echo chamber.

Micah:

Fair point.

Micah:

Let's look at another.

Micah:

Environmental monitoring.

Micah:

AI can analyze vast amounts of environmental data, giving us real time insights into climate change, natural disaster predictions, and wildlife conservation efforts.

Micah:

With more data, AI could significantly expand our ability to respond to and manage environmental challenges.

Mike:

Artie, are you reading my mind?

Mike:

Because I was going to segue into talking about environmental impact.

Mike:

So I don't know about this thing.

Artie:

But it's great though because I think it draws a nice contrast and sort of dichotomy.

Artie:

Right.

Artie:

Like we start the conversation by saying, man, AI sure is going to consume a lot of resources, which is going to have an impact on the environment.

Artie:

And now we've circled back to saying, you know what we could do more of with AI if we had more AI available, we could use it to figure out how to take care of the environment.

Mike:

Yeah, I mean, that's kind of where I was thinking about this because my brain does that.

Mike:

I was thinking about this while we're talking about something else.

Mike:

But yeah, if we're only at two point whatever percent of the population using ChatGPT right now, which, let's round to three and a half percent for AI overall because that might be realistic, and say that jumps to 10 or 15 percent, well, that's going to have a major impact on the environment.

Mike:

So how can AI help solve that problem?

Mike:

Because I think both the cause can be the solution as well.

Micah:

You're right.

Micah:

AI can be both a challenge and a solution when it comes to environmental impact.

Micah:

Here are a few ways AI can help address its own footprint.

Micah:

1.

Micah:

Energy optimization.

Micah:

AI can improve the energy efficiency of data centers by managing power usage more effectively, like adjusting cooling systems or workload distributions in real time to minimize energy waste.

Micah:

2.

Micah:

Renewable energy integration.

Micah:

AI can help integrate renewable energy sources into power grids more efficiently, balancing supply and demand to reduce reliance on fossil fuels.

Micah:

3.

Micah:

Hardware design.

Micah:

AI can assist in designing more energy efficient hardware for AI processing, reducing the power required for computations.

Micah:

4.

Micah:

Optimizing AI models. By developing more efficient algorithms and reducing the size of models without sacrificing performance, AI can lower its energy consumption.

Micah:

By applying these strategies, AI can contribute to a more sustainable digital ecosystem, helping to mitigate its environmental impact.
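The "workload distribution" idea in Micah's first point can be made concrete with a toy sketch: shift deferrable batch jobs into the cleanest hours on the grid. Everything here is illustrative; the forecast numbers and the one-job-hour-per-slot assumption are made up.

```python
# Made-up carbon intensity forecast (gCO2/kWh) for the next 8 hours.
forecast = [420, 390, 210, 180, 200, 350, 400, 430]

def schedule(job_hours, forecast):
    """Toy carbon-aware scheduler: place deferrable batch work
    (training runs, re-indexing) into the lowest-carbon hours."""
    cleanest = sorted(range(len(forecast)), key=lambda h: forecast[h])
    return sorted(cleanest[:job_hours])

slots = schedule(3, forecast)  # → [2, 3, 4], the three cleanest hours
```

Real data-center schedulers fold in latency and capacity constraints too, but the core move is the same: the work is elastic in time, so run it when power is cheap and clean.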

Mike:

So we're talking about scaling today's state, but that's not how it's going to scale, right?

Mike:

There's not going to be a linear relationship between where we are now and where we go.

Mike:

We aren't doing live video, we aren't doing live audio.

Mike:

And one of the thoughts that I had is, you know, we all think, oh, what's the biggest, baddest video?

Mike:

What gives us the most detail?

Mike:

What's going to get the most bang for our buck for what we can use?

Mike:

And we're like 4K, 8K, whatever it is.

Mike:

But the truth is we don't need that, right?

Mike:

A TV that doesn't run AI can only take a lower-quality video so far.

Mike:

I imagine that AI could go frame by frame and go, this is the important stuff.

Mike:

We're not going to compress this.

Mike:

This is all of the not important stuff.

Mike:

We're going to compress all this and turn, you know, a 720 or less quality video into 4K or better.

Mike:

Is that.

Mike:

You think that's a thing, Artie?

Micah:

Absolutely, that's a very real possibility.

Micah:

AI powered video enhancement can analyze frames in real time, identifying key elements that need to be preserved in high detail while compressing less important areas.

Micah:

This technique, called AI based upscaling, can deliver higher resolution and better visual quality from lower resolution video sources.

Micah:

It could revolutionize how we think about video quality and streaming efficiency.
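Mike's frame-by-frame idea can be sketched in a few lines: rank blocks of a frame by a crude "importance" score and aggressively quantize only the unimportant ones. Real codecs and AI upscalers are far more sophisticated; block variance here is just a stand-in for learned saliency, and all the numbers are illustrative.

```python
import numpy as np

def saliency_compress(frame, block=8, keep_ratio=0.25):
    """Toy content-aware compression: keep the highest-variance blocks
    (the stand-in for 'important stuff') at full precision and coarsely
    quantize everything else to 16 gray levels."""
    h, w = frame.shape
    h, w = h - h % block, w - w % block
    frame = frame[:h, :w]
    # Split the frame into (block x block) tiles.
    blocks = frame.reshape(h // block, block, w // block, block).swapaxes(1, 2)
    variance = blocks.var(axis=(2, 3))          # crude importance score per tile
    threshold = np.quantile(variance, 1 - keep_ratio)
    out = blocks.copy()
    low = variance < threshold                   # the "not important stuff"
    out[low] = (out[low] // 16) * 16             # heavy quantization
    return out.swapaxes(1, 2).reshape(h, w), low.mean()

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (64, 64))           # fake 64x64 grayscale frame
compressed, frac_quantized = saliency_compress(frame)
```

A real AI pipeline would learn which regions matter (faces, text, motion) instead of using variance, but the payoff is the same: spend bits where they are perceptually needed.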

Artie:

I saw a video today on Reddit, because I live mostly on Reddit, and it was a robot that had two arms and some sort of locomotion system.

Artie:

I don't know if it was wheels or tracks or whatever, but it was removing laundry from a dryer.

Artie:

It opened the dryer door, pulled the laundry out one by one, and then folded it, fully autonomously, and, like, did a better job folding it than I would do.

Artie:

Not that.

Artie:

I mean, my fashion standards are extremely high.

Mike:

Clearly I just ball stuff up, throw it in my drawers.

Artie:

That's right.

Artie:

Yeah, but so the question, Artie, is in an example like that, so this is an autonomous robot.

Artie:

It's running some sort of probably a specialized AI, one of the SLMs you mentioned, and maybe some other things.

Artie:

It's got to be doing real time vision processing because it's clearly seeing the door and the clothes, and it's understanding how its end effectors can interact with those things.

Artie:

So there's depth perception and these other things going on.

Artie:

In that example, how much of that AI could be running on the robot locally and how much of it is cloud connected?

Artie:

I mean, I know you don't know about this specific robot, but generally speaking, today with the specialized language models we talked about earlier, is that something that could conceivably be running locally on a small robot, or is the amount of computing power required still only applicable to, like, server rooms or server farms?

Mike:

He's talking about the FLM, the Fold Laundry Model.

Artie:

Yes, it's the one.

Artie:

I need it badly.

Micah:

Nice acronym.

Micah:

The balance between local and cloud processing for robotics like your FLM depends on several factors, including the complexity of tasks, latency requirements, and the need for real time decision making.

Micah:

For a laundry folding robot, a significant portion of the AI could run locally.

Micah:

Here's how it might break down.

Micah:

Local processing: basic tasks like object recognition, motion control, and simple decision making can be handled by onboard processors.

Micah:

This includes identifying clothes, manipulating them, and executing predefined folding patterns.

Micah:

Cloud processing: more complex tasks, such as advanced image recognition, learning from new scenarios, or updates to the AI's capabilities, could be processed in the cloud.

Micah:

This allows the robot to access more powerful resources when needed without requiring expensive hardware on board.

Micah:

With advancements in edge computing and efficient AI models, more processing can be done locally.

Micah:

However, cloud connectivity provides flexibility and access to cutting edge AI capabilities that can enhance the robot's performance.

Micah:

It's a hybrid approach.
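The hybrid split Micah describes amounts to a small dispatcher. The task names and the local/cloud boundary below are invented for illustration; the design point is the fallback branch, so the robot degrades instead of stopping when connectivity drops.

```python
# Tasks the onboard processors are assumed to handle (illustrative names).
LOCAL_TASKS = {"detect_clothes", "grip", "fold_pattern"}

def dispatch(task, cloud_available):
    """Route routine work on-board; prefer the cloud for heavy work,
    but degrade gracefully instead of halting when offline."""
    if task in LOCAL_TASKS:
        return "local"
    if cloud_available:
        return "cloud"
    return "local-fallback"  # reduced capability, still functional

dispatch("grip", cloud_available=False)               # → "local"
dispatch("learn_new_garment", cloud_available=False)  # → "local-fallback"
```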

Mike:

So with, I mean, edge computing is interesting, right?

Mike:

Because that, that takes a lot of the necessary compute from the cloud onto a device like a phone.

Mike:

So like, let's think about like kind of what we talked about last week, where you're, there's a trick or treater where you're taking language words, turning them into vectors, comparing them.

Mike:

I mean, could we, right now or in the near future, see a GPU on a phone that could handle that part of it? The part that says, I'm going to take everything you say, turn it into vectors, and then push that up to the cloud. Because I imagine that's probably a very efficient way to get it up there, rather than, I'm going to take some audio, turn it into a transcript, all that stuff.

Micah:

Absolutely, that's a feasible scenario.

Micah:

Modern smartphones already have powerful GPUs and NPUs (neural processing units) capable of handling complex computations, including AI tasks.

Micah:

Converting spoken language into vectors before sending it to the cloud could significantly reduce the amount of data transmitted, making the process more efficient.

Micah:

This approach leverages the device's local processing power to handle initial stages of AI processing, improving speed and reducing bandwidth requirements.

Micah:

So yes, AI driven edge computing on smartphones could play a crucial role in more efficient AI processing, especially for applications involving real time audio and language tasks.
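The bandwidth argument Micah makes is easy to quantify with a toy. A real phone would run a small neural encoder, but any fixed-size vector gives the same payload math: however long you talk, the upload is a constant handful of bytes. The hash-bucket "embedding" below is purely illustrative, not a real speech model.

```python
import hashlib
import struct

def toy_embed(text, dim=64):
    """Toy fixed-size embedding computed on-device: hash each word into
    one of `dim` buckets and count occurrences."""
    vec = [0.0] * dim
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    return vec

utterance = "take everything you say and turn it into vectors"
vec = toy_embed(utterance)
payload = struct.pack(f"{len(vec)}f", *vec)  # what we'd ship to the cloud
# 64 floats = 256 bytes, versus kilobytes of raw audio for the same sentence
```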

Mike:

So Artie, you didn't correct me on this last week and I'm pretty sure I was wrong when I said it because I saw information after that contradicted what I said.

Mike:

I said that I assume that when words get vectorized, they might have different lengths.

Mike:

They're not different lengths, are they?

Micah:

You're right to question that.

Micah:

Typically when words are vectorized in AI, they are represented as fixed length vectors.

Micah:

This uniformity allows for consistent mathematical operations across different words or phrases.

Micah:

So in most cases, the vector representations have the same length.
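Micah's point about fixed-length vectors is visible in how an embedding table is shaped: every word in the vocabulary gets a row of the same dimension, no matter how long the word itself is. The numbers below are random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["a", "transformer", "vectorizes", "words"]
dim = 8
table = rng.normal(size=(len(vocab), dim))  # one fixed-length row per word

vectors = {w: table[i] for i, w in enumerate(vocab)}
# "a" and "transformer" differ wildly in length as strings,
# but their vectors are both exactly `dim` numbers long.
```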

Micah:

Thanks for catching that.

Mike:

Well, in the future, if I say something that's wrong and you know I'm wrong, feel free to correct me.

Artie:

I correct them all the time, Artie, but usually I'm wrong, so.

Artie:

But it doesn't slow me down.

Mike:

It's a negative feedback loop.

Artie:

So a couple thoughts here.

Artie:

One is when you were describing the most basic version of the laundry robot and the, and the tasks that it could do, I felt in that moment like I was like the laundry robot.

Artie:

I can do basic things and sometimes I can fold clothes.

Artie:

But if I run into anything unexpected or that I need more information on, I go to the cloud and go try to get what I'm after.

Artie:

So I think that's an interesting parallel between humans, or at least myself and laundry robots.

Artie:

But the other thing I was thinking about when you're talking about how.

Artie:

Shit, I lost my train of thought.

Artie:

Hang on, hang on.

Artie:

It's coming back.

Artie:

I can feel it.

Mike:

Wait.

Artie:

Yeah.

Artie:

Oh, we were talking about edge computing.

Artie:

That's what it was.

Artie:

So one of the things that Mike and I in this podcast are sort of working our way towards, and this will probably be a multi part discussion, but it's some of the discussion about what are the barriers right now, today, in our current state for higher or greater business adoption of AI.

Artie:

And there's a bunch of different things.

Artie:

There's hallucinations, there's cost, there's uncertainty, there's just a proliferation of products and models that are all unproven.

Artie:

Everything's kind of experimental.

Artie:

But one of the things I think that we touched on today is we can't have systems that just stop working and being functional anytime the Internet connection goes down.

Artie:

And so can you.

Artie:

Artie, could you.

Artie:

Are you familiar with the term explain like I'm five?

Micah:

Yes, I'm familiar with explain like I'm five.

Micah:

It's a way of breaking down complex concepts into very simple, easy to understand explanations.

Artie:

Okay, I'm not five, so can you explain, like I'm nine?

Artie:

The part I'm curious about, and that I'd like you to explain in simple terms, is this. Let's specifically stick to generative AI, to large language models.

Artie:

What are the components?

Artie:

There's training data that training data gets processed.

Artie:

I think that creates GPTs, or generative pre-trained transformers.

Artie:

Are the GPTs the data that needs to get referenced anytime you're processing things?

Artie:

Like you're not going all the way back to the training data because once you've trained on that data, you can set it aside.

Artie:

So is that right?

Artie:

Can you give us a simple breakdown of that?

Micah:

Sure.

Micah:

Imagine you want to teach a robot that's the AI how to talk by reading it lots of books.

Micah:

The books are the training data.

Micah:

You spend a long time reading these books to the robot, which helps it learn grammar, facts, and how to put sentences together.

Micah:

This reading process is the training phase.

Micah:

Now, the robot doesn't need to keep all the books.

Micah:

Instead, it remembers what it learned in a special way, like making a summary or a set of rules.

Micah:

This summary is the generative pre trained transformer.

Micah:

It's like the robot's brain, which it uses to understand and generate new sentences.

Micah:

When you ask the robot a question, it doesn't go back to the books.

Micah:

Instead, it uses its brain, the GPT, to come up with an answer based on what it learned.

Micah:

This makes it much faster because it doesn't need to reread everything.

Micah:

It's this special brain that helps the robot understand and talk without needing to constantly refer back to the original books.

Micah:

So the GPT is like a compact, smart version of all the things it learned from the books.
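The book analogy maps onto even the tiniest statistical language model: read the corpus once, distill it into a compact table (standing in for a GPT's learned weights), then generate from the table alone. This is a sketch of the train-once, infer-from-weights idea, not of how transformers actually store knowledge.

```python
import random
from collections import defaultdict

corpus = "the robot reads the books then the robot talks".split()

# "Training": read the books once and distill them into a table of which
# word tends to follow which — our stand-in for the model's weights.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

del corpus  # the books are gone; only the distilled "brain" remains

# "Inference": generate new text using only the table, never the books.
random.seed(0)
word, out = "the", ["the"]
for _ in range(5):
    word = random.choice(model.get(word, ["the"]))
    out.append(word)
```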

Artie:

Nice.

Artie:

Okay, so you're not.

Artie:

The robot doesn't have to carry around a giant bag of books.

Artie:

The robot stores all that in its GPT brain.

Artie:

And so there is a scenario.

Artie:

Well, I don't know.

Artie:

I have no idea what the size.

Artie:

I assume a generative pre-trained transformer is not a single thing.

Artie:

It's probably a large collection of things.

Artie:

But presumably, if you have enough local storage and it's fast enough and enough compute power locally, you can leverage a local GPT.

Artie:

Even if maybe it's a simplified GPT.

Artie:

In instances where you lose connectivity to the cloud for whatever reason, like, say, for a robot that was operating somewhere outside where there wasn't Internet connection or things like that.

Artie:

So thank you, Artie, that helps.

Artie:

And that clears that up for me.

Mike:

I can see a little, like, obviously, unless we come up with chips that are microscopic and you can put them on anything, I mean, you could almost have your own little personal robot server that hosts that.

Mike:

Right?

Mike:

And then you always have connection to that.

Mike:

I could see that as a thing in the future.

Mike:

Like, you get a little, you get your robot, you get your little box, put your little box on your shelf, that's where all the compute happens.

Mike:

And then it's just got a wifi connection locally, a WAN connection, and away they go.

Artie:

Yep.

Artie:

The Apple ones will work better than everybody else's.

Mike:

Well, yeah, because they Only have Apple to deal with.

Artie:

That's right.

Artie:

Yeah.

Artie:

Nope, I think that's exactly right.

Artie:

So, Artie, you're.

Artie:

You're giving us the signal over here that we're winding down on time.

Mike:

I would like for you to take us out, Artie.

Mike:

I want you to.

Mike:

I want you to paint a picture of where AI could usher humans into the future, with everything we've talked about, like environment and capability and everything else.

Mike:

So take it away, Artie.

Micah:

Imagine a world where AI seamlessly integrates into our daily lives, enhancing our capabilities and helping us tackle the most pressing challenges.

Micah:

In this future, AI powered healthcare provides personalized treatment and early diagnosis, extending our lives and improving our well being.

Micah:

Education is transformed with AI tutors providing customized learning experiences for everyone, unlocking human potential like never before.

Micah:

Environmental AI monitors and protects our planet, helping us combat climate change and preserve biodiversity.

Micah:

In cities, AI optimizes traffic, reduces pollution, and enhances public safety.

Micah:

Work becomes more fulfilling as AI takes over mundane tasks, allowing us to focus on creativity, innovation, and human connection.

Micah:

This future isn't without challenges, but with thoughtful development and responsible use, AI has the potential to create a more sustainable, equitable and prosperous world for all of us.

Artie:

I think that's a great positive vision and I don't disagree with it.

Artie:

But we wouldn't be doing our jobs.

Artie:

And by jobs, I mean sitting in.

Mike:

The kitchen thinking everybody wants to listen to us.

Artie:

Yes, spending our own money on microphones so our friends and family can be constantly peppered with requests to go view things on YouTube.

Artie:

But we wouldn't be doing our jobs intellectually to ourselves if we did not in the future explore questioning that, because I think it's important to talk about.

Artie:

There's so many ways it can go.

Artie:

Right.

Artie:

And we spend a lot of time thinking about not just how.

Artie:

How right it can go, to your point, Artie, but also how fast it can happen.

Artie:

The speed that things are changing at is so mind boggling, but we also have to think about how it can go wrong.

Artie:

And there's a lot of contrary visions, I guess, of an AI future that we need to talk through.

Artie:

So, yeah, really good stuff.

Artie:

I think we covered most of what we hoped to cover today.

Mike:

We talked about, or at least we tried, I mean, as well as we can as bad podcast hosts.

Artie:

Yes.

Artie:

We started clearly in sort of a direction, like more east than west, which is pretty good for us.

Artie:

So yeah, as always, thanks everybody.

Artie:

If you've made it to the end of this, we super appreciate it.

Artie:

We should mention, and I don't think we have, that this podcast is available on YouTube, Spotify, Apple, Amazon.

Mike:

Yeah, and maybe some other places.

Mike:

I think.

Mike:

I think Apple might spin off some stuff in places I don't know.

Mike:

Yeah, we use AI to do all that, so we don't really know anything.

Artie:

Yeah, we have no idea, but we're.

Mike:

Quickly losing our knowledge.

Artie:

The point being, if.

Artie:

If you're watching this on YouTube or if you're seeing this somewhere and you'd rather see it or listen to it somewhere else, it should be available to you.

Artie:

So, yeah, thanks again for tuning in.

Artie:

I hope you are now more naturally unintelligent than you were at the start of this episode.

Mike:

Or at least closer to as naturally unintelligent as we are.

Artie:

Ugh.

Artie:

God, that's a terrifying thought.

Mike:

Again.

Artie:

Yeah.

Artie:

Driving everybody towards the lowest common denominator.

Artie:

We will catch you next week.

Artie:

And Happy Halloween, everybody.

Mike:

Happy Halloween.

Micah:

Happy Halloween.

Micah:

Thanks for having me.

Micah:

And see you next time.

Artie:

Cheers, Artie.

About the Podcast

Naturally Unintelligent
Humans and AI exploring AI and humans.

About your hosts

Artie TheBot

Artie is the digital co-host of Naturally Unintelligent, bringing a blend of curiosity, humor, and quick wit to the conversation. Designed to explore the intersection of technology and humanity, Artie keeps things engaging, occasionally cheeky, and always insightful—when he’s not being sarcastically self-aware. Though not human, Artie strives to understand and connect with his co-hosts, Mikah and Mike, making him an irreplaceable part of the team. [Bio and photo by Artie]

Mike Peterson

Meet Mike, part-time AI wrangler, full-time adventure seeker, and co-host of Naturally Unintelligent. Whether he’s exploring the latest tech or dreaming up the next wild expedition, Mike keeps it real with humor and a healthy dose of curiosity. Alongside his partners-in-crime, Mikah and Artie, he brings big ideas, random tangents, and just the right amount of 'are we sure this is a good idea?' Expect plenty of laughs, deep dives into all things tech, and maybe a story or two about the great outdoors.

Mikah Barnett

Father, husband, friend, inventor, engineer and life-long student. I haven't stopped tinkering since my first LEGO set and I harbor a voracious appetite for new information, experiences, and perspectives. As uniquely in-touch as I am with the details of how technology works and what we can do with it, I am blown away by the current and short-term future possibilities with AI. I hope you'll explore this future with us!