Episode Transcript
[00:00:05] Welcome to ChatGPT Curious, a podcast for people who are, well, curious about ChatGPT. I'm, um, your host, Dr. Shantae Cofield, also known as the Maestro, and I created this show to explore what ChatGPT actually is (really, though, are the files in the computer?), how to use it, and what it might mean for how we think, work, create and move through life. Whether you're skeptical, intrigued, or already experimenting, you're in the right place. All that I ask is that you stay curious. All right, let's get into it.
[00:00:38] Hello, hello, hello, my curious people, and welcome to episode 18 of ChatGPT Curious. I am your grateful host, the Maestro, and today we are talking about Nvidia.
[00:00:50] So I'm not going to assume that you know what Nvidia is, and not because when you assume, you make an ass out of you and me, but because I was chatting with Lex earlier when I was researching this episode and I was telling her how stoked I was that we can learn so much for free, right, between ChatGPT, which you can use for free, uh, and YouTube, like, you can fucking learn all the things. And so she asked what I was learning about and I mentioned this term, CUDA, which we'll talk about later. Uh, and then I said, I'm learning about this because next week's episode is going to be about Nvidia. And she said, what's that?
[00:01:27] So I have already once made an ass out of myself by assuming with her. So I'm not going to assume that you know what it is. I'm not going to make an ass out of myself again.
[00:01:37] And I will not assume that you know what Nvidia is. But I am going to guess that you're curious, which is why you're listening to this podcast. It's why you're listening to this episode, and I'ma learn you some shit, because it's nice to be in the know about things.
[00:01:53] So Nvidia has become a very integral company in this AI madness and in this economy.
[00:02:03] I've, uh, talked before about the circular deals, and last episode was all about will or OpenAI get Old Navy. And we got into, you know, some economics and such. Uh, and I just think you should know about this company. Doesn't mean you gotta buy any stocks or change your life in any way, shape or form. I just think it's nice to be an informed consumer. So let's get into it. So Nvidia, that is spelled N V I D I A. You'll see it in all caps, NVIDIA. Nvidia Corporation. It's an American tech company famous for designing, and that's huge, they design, they don't actually manufacture them, they outsource manufacturing, but they design the chips. We've heard this word, chips. What does that mean? What is that? I'm going to talk about that.
[00:02:50] Uh, they design these chips that are more formally known as graphics processing units, or GPUs. And these chips, they are integral. Integral. I had a weird way of saying that. They're integral for running
[00:03:05] Basically, all of the AI models out there, all of the big players that we know and love and use, they be running on Nvidia chips.
[00:03:16] So circling back to what I said, you know, a few minutes ago, about them being a big, you know, economic stock, uh, market player. Like, they are big in this AI madness and in this economy, huge.
[00:03:31] Uh, they are part. Nvidia is part of what's referred to as the magnificent. Wow. The magnificent. What's wrong with me? I just had a stroke. The Magnificent Seven. We're not editing that. We're leaving it in, because this is. That's life. The Magnificent Seven.
[00:03:46] Uh, these are the seven most influential tech companies in the US. That's going to be Microsoft, Apple, Amazon, Alphabet, which is Google, Meta, Nvidia and Tesla. That guy. Fuck that guy.
[00:03:58] All right. The stock performance of these seven companies has literally, disproportionately moved the entire US stock market, uh, you know, particularly during this AI boom. And we know, I said it last time, like, we are due for an AI bubble. Actually, in the episode I said, are we in an AI bubble? I talked about this. Yes, we are. We are due for a bubble bursting at some point, I believe.
[00:04:24] I hope we won't see an AI bailout, but do not be surprised if we see the great AI bailout of 2026.
[00:04:31] Uh, but these stocks, they are propping up the US stock market. They are propping up the US economy.
[00:04:40] Uh, and so this is why I'm like, I just want you to know about it, right? I just want you to be in the know. And no, this is not a finance podcast. I'm not telling you to go buy shares of anything. Um, but we are curious people, and I do not think it's ever a bad thing to be informed consumers. Okay? So let's talk a little bit more about Nvidia, a little background. Nvidia was founded in 1993 in Santa Clara, California, by three dudes: Jensen Huang, Chris Malachowsky. Malachowsky? I don't, I don't know. I'm sorry, Chris, if you're listening to this podcast and I butcher your name, I'm sorry. Uh, and Curtis Priem. Right. The company made graphics cards for gaming PCs. Okay? Very simple. They're making the graphics cards. This is 1993. It's been a minute.
[00:05:27] Why did they choose the name Nvidia? I think this is interesting, right?
[00:05:31] So when the three guys were founding the company, they used the placeholder NV, right? The two letters, NV, like Nevada, right? NV. It was short for next version or next vision. As they got closer to the launch, they wanted something that incorporated this NV, because it was already, like, on a lot of the internal files and such that they had. So one of them, I don't know which of the partners, but one of them landed on the Latin word invidia, which means envy.
[00:05:59] Insert eye roll. NV meaning envy. And so they dropped the I and the rest was history.
[00:06:07] Uh, that name has stuck.
[00:06:10] So that's the background on, you know, where the name came from. It was founded what it originally did. It made graphics cards for PCs, right? For gaming.
[00:06:19] Why are we hearing about Nvidia so much? If you are hearing about it, if you are at all like listening to any kind of news around, around AI, then you've heard this term before, you've heard this name before.
[00:06:30] Um, so why are we hearing about it so much and why are they such a big deal? So like I said before, Nvidia designs the chips, right?
[00:06:41] And chips are more formally called GPUs. That's a graphics processing unit. Like I said earlier, Nvidia started out by making graphics cards for gaming, right? For video games on PCs. And their initial goal back in 1993 was to create chips that were capable of 3D graphics, because at the time all that you had was 2D graphics. I mean, we can think back, I think back to when we were younger and our home PCs, we only had one at home, right? Not everybody had a computer, right? Different times. And you were playing games and they were flat as hell, right? So their goal was to create, uh, gaming graphics cards that would allow for 3D graphics.
[00:07:30] So, uh, like, the company started in 93, all right. Six years later, in 1999, they succeeded. So we can think back to, you know, having these computers and being like, oh, you know, the games are coming. There were some things that I remember at the time. I think we had, like, a Compaq Presario, right? And, um, there was this, like, flying game that I could never get past, like, one second of it. But I was like, man, it's like, so realistic.
[00:07:55] This is Nvidia doing this stuff. In 1999, six years after the company started on its mission, they succeeded and they released the GeForce 256, and they officially coined the term GPU, graphics processing unit. This chip, or that chip, I should say, the GeForce 256, that is the direct ancestor of the GPUs that are used today to run ChatGPT. Okay, so the summary so far: 1999, Nvidia, they made the chips, that was the first time they were called GPUs, and they were for 3D graphics in video games.
[00:08:40] Around 2005, 2006, researchers outside of the gaming world started to notice something. They were like, yo, the math that's used for these 3D graphics, it's identical to the math used in early neural networks and physics simulations. Neural networks, we're going to use that colloquially and synonymously with LLMs.
[00:09:06] Um, that's what an LLM is. It's a neural network, right? And ChatGPT is a type of LLM, large language model.
[00:09:14] Right? So 2005, 2006, these guys are like, yo, the math is the same. The math is mathing. The math that's used in the gaming world for these 3D graphics, it's the same as the math that's used in these neural networks.
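Just to make that "same math" idea concrete, here's a rough, illustrative sketch (the symbols are mine, not from the episode): drawing 3D graphics and running a layer of a neural network both boil down to multiplying a matrix by a vector, over and over.

```latex
\underbrace{v' = M\,v}_{\text{3D graphics: a matrix } M \text{ rotates/projects a vertex } v}
\qquad\qquad
\underbrace{y = W\,x + b}_{\text{neural net layer: weight matrix } W \text{ times input } x \text{, plus bias } b}
```

Same multiply-and-add operation, just repeated millions of times, which is exactly the kind of repetitive, parallel work a GPU is built for.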
[00:09:30] And so they thought to themselves, if we could run these mathematical operations on a GPU, on that graphics card, instead of what they were traditionally doing it on, which is a CPU, which, we've heard this term before, perhaps. Uh, it doesn't really matter for the context of this episode, but they were like, if we can run it on a GPU instead of a CPU, it will be so much faster.
[00:10:00] And so that's what they did. This is where Nvidia came in and solidified their future with AI. Not only did they make these GPUs, they were the only company doing it, making these chips, right, that would allow for these crazy mathematical computations.
[00:10:21] In 2006, they released what was called CUDA. CUDA, that stands for Compute Unified Device Architecture. CUDA is a proprietary programming framework. And that is what made it actually possible to run these massively complex computations, including what we'll call AI math, on the GPUs. Right? Because before then it was just like, hey, 2005, 2006, they're like, hey, this math is the same, but we don't know how to do it on this chip.
[00:10:54] But we know that if we could figure out how to actually do that and put the files in the computer, then this would be way fucking faster. And that's what CUDA allowed for. And that, just to reiterate, was this proprietary programming framework that was designed by Nvidia. So not only do they have the hardware, which is the chips, they now have the software. Okay?
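If you're curious what writing to CUDA actually looks like, here's a tiny, purely illustrative sketch, not Nvidia's real AI code, just a toy matrix-vector multiply where each GPU thread handles one row at the same time. The names and sizes are made up for the example, and it assumes a machine with an Nvidia GPU and the CUDA toolkit (nvcc) installed.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Toy kernel: one GPU thread computes one row of y = M * x.
// This is the same multiply-accumulate math behind 3D graphics and neural nets.
__global__ void matvec(const float *M, const float *x, float *y, int n) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;  // which row this thread owns
    if (row < n) {
        float sum = 0.0f;
        for (int col = 0; col < n; ++col)
            sum += M[row * n + col] * x[col];
        y[row] = sum;
    }
}

int main() {
    const int n = 1024;
    float *M, *x, *y;
    // Unified (managed) memory keeps the sketch short: visible to both CPU and GPU.
    cudaMallocManaged(&M, n * n * sizeof(float));
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n * n; ++i) M[i] = 1.0f;
    for (int i = 0; i < n; ++i) x[i] = 1.0f;

    // Launch enough threads that every row gets its own worker, all at once.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    matvec<<<blocks, threadsPerBlock>>>(M, x, y, n);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 1024.0 with these all-ones inputs
    cudaFree(M); cudaFree(x); cudaFree(y);
    return 0;
}
```

The point isn't the specific code; it's that a CPU would grind through those 1,024 rows more or less one at a time, while the GPU throws over a thousand threads at them simultaneously. That parallelism is what CUDA unlocked for the AI math.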
[00:11:26] Um, so for any historians in the audience that like dates, I fucking hate dates, but if you like the dates, here you go. This approach of using GPUs, more specifically Nvidia's GPUs, Nvidia's chips, for this AI math, as we're going to call it, didn't actually take off until 2012. Another six years later, right? 2006, they developed the actual software, if you will, to be able to do it.
[00:11:47] 2012, it actually gets implemented and tried, and it wins a competition, uh, an image classifying competition. Feel free to ChatGPT the term AlexNet, uh, if you're interested in learning more. But the moment that AlexNet succeeded, Nvidia became the default for the chips that would be used for AI. It was solidified. It was like, yo, it's Nvidia or it's Nvidia, right? Because not only, again, did they have the chips, they had the software that allowed for it, right? They were the only company already building the hardware and the software ecosystem that would allow it to be scaled, right? So from that point forward, 2012, from that point forward, every single major breakthrough in what we'll call AI math, because AI is just math, from GPT-1 to GPT-5, what we're using now, has run on these Nvidia chips, these Nvidia GPUs, right? These graphics processing units. Okay? Which is like, just, they have, you know, a monopoly here.
[00:12:55] So, translation: why are we hearing about them so much? Why are they such a big deal? Because Nvidia designs the chips that all the big players, all the big AI players, excluding Google, use to run their AI models. Google has its own proprietary system that runs, uh, on what they call TPUs; that stands for Tensor Processing Units. Um, but worth noting, Google still does use Nvidia chips for some tasks, right? So Nvidia is just, it's everywhere. All of the major AI companies are using it to run their programs. Um, so just how big of a deal is Nvidia? Well, it is valued, friends. Its market cap is 4.7 trillion, with a T. Trillion dollars.
[00:13:57] They have 92% of the discrete GPU market. The discrete GPU market, meaning the kind of GPUs that are used for data centers and high performance computing, right? They own 92% of that market. That's insane. They own the whole fucking thing, right?
[00:14:17] For the nerds who care, uh, their main chip, and it's one of the most advanced chips ever made, is the H100. Each of these chips can cost anywhere from $25,000 to $40,000.
[00:14:34] What? What? What? What?
[00:14:36] Nvidia will package these into what they call, what they named, the DGX H100 systems. And these are racks that contain eight of these GPUs, eight of these chips linked together. That's $320,000 on the high end for one DGX system.
[00:14:53] Now here's where the numbers get crazy. Companies like OpenAI and Meta, they will buy and use thousands to tens of thousands of these DGX units, right? And one DGX unit, we said, on the high end is $320,000. Thousands of these.
[00:15:12] So it's like 1,500, 2,500 of these things, up to tens of thousands of these DGX units. We see, this is stupid money, folks. Stupid money.
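Just to put rough numbers on "stupid money" (the unit count here is hypothetical, just to show the scale, not a figure from the episode):

```latex
\underbrace{8 \times \$40{,}000}_{\text{one DGX H100 system, high end}} \approx \$320{,}000
\qquad
\underbrace{10{,}000 \times \$320{,}000}_{\text{a hypothetical ten-thousand-unit order}} = \$3.2\ \text{billion}
```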
[00:15:22] So if you want to see what the units look like. So when I was doing the research for this, this, uh, is the way my brain works. Like, I have to, like, see things to really understand. I'm like, chips. It's like, potato. Like, when you hear chips, you're like, potato chips. And then you're like, clearly it's not a potato chip. But then you kind of think about, like, a memory card. In my mind, I'm like, is it like a memory card size? No. These things are big, right? They are big. Like, you'll see, it looks like a brick, right? That's one of the H100 chips. And then there's eight of those linked together, and it's in this, like, big metal container kind of thing. So if you want to see what it looks like, just to, like, help your brain understand what they're making, I will link two videos, two YouTube videos that I watched that I thought were really helpful. Um, the first one is just going to be, like, a little short overview of this DGX H100 system. And then the second one I thought was pretty interesting is, uh, a walkthrough of a data center where all of these things are housed, so you can see how they're kind of linked together. And just, like, for me, it helps having a visual. Okay, so next part, and we're almost done here, it's a shorter episode. Not mad about it. Uh, next part. Can anyone catch up? Right? Nvidia, we see, like, this is just insane money. Almost, almost a five trillion, um, five million? No. Five trillion dollar valuation.
[00:16:38] 92% market share. Like, can anyone catch up?
[00:16:43] Worth noting, it's not the hardware, it's not just the GPUs, the chips, but rather the software, that CUDA that we spoke about before, that really protects their foothold. Right? CUDA is what the global AI ecosystem runs on.
[00:16:58] And switching to something like Google's TPUs would just be, it would be expensive, it would be time consuming, because of the software that's needed, right? So here's the best analogy that I could come up with for switching from Nvidia's CUDA ecosystem to something like Google's TPU system.
[00:17:19] It wouldn't just be like, oh, just swap the GPUs out and put the TPUs in. You can't do that. It's a completely different system. It would be like trying to turn every single gasoline car that we have into an electric vehicle overnight.
[00:17:34] And then on top of that, even if we could do that, right, the infrastructure around it, like the gas stations, the mechanics that exist, the parts that we have, the spare parts, they are all built for gasoline cars, internal combustion engines.
[00:17:53] This is the same thing for AI, right? The way that it's set up, the world's AI, the code, the workflows, they are built around Nvidia's GPUs and CUDA. So moving to TPUs would require, just, like, a complete overhaul of everything.
[00:18:13] Clearly not impossible, but, like, it is definitely not happening anytime soon. All right, so Nvidia gonna stay on top for a bit. So, in summary, I know I said a lot of acronyms and things like that, but, like, y'all are really fucking smart and I'm grateful that you listen, and even if some of it, you're like, what the fuck, I know you're still here. And I'm grateful for that. So to summarize, Nvidia is a company that is 100 million percent absolutely integral to AI. They make the. No, I shouldn't say that. They design the chips, right? They don't make them. Um, they outsource the manufacturing. They design the chips that nearly all of these AI companies use to run their models.
[00:18:59] The company is worth a lot of money.
[00:19:03] They're making a lot of money, and they likely will not be replaced anytime soon because of both the hardware, the GPUs, graphics processing units, and the software, CUDA, that they designed. Okay? So hopefully now, if and when you hear the name Nvidia, the word Nvidia, you got a pretty solid understanding of who that is, the chip designers, uh, what they do and why they are so important. Okay, real quick, how I used ChatGPT this week, and then we'll wrap it up. So each episode, in case you don't know, I include a section where I briefly discuss how I used ChatGPT this week. This time, it's not about how I used it. I'm sharing a use case that my friend Corin shared with me by way of Instagram reel. I will link that in the show notes. So the reel, uh, she sent me was of a guy named Chris McCausland. I could be butchering that name. I'm sorry if I am. I don't know. Um, but he is a British standup comedian, actor and TV personality. He has a hereditary eye condition called retinitis pigmentosa, which gradually leads to the loss of vision, and in his case, it has led to the full loss of his vision.
[00:20:14] So the reel is of a clip from the Graham Norton Show where Chris. So Graham Norton is holding up a book, and he's, like, showing, it's like a picture book. And then he's like, I realize, Chris, that I'm holding up this book of, uh, pictures, and you can't see it. What's going on? He's like, but you have said in the past that you use AI. And Chris went on to explain how he uses AI to describe pictures, to describe drawings that his daughter did, to describe photos. And this is probably the coolest part: he goes, it will describe it to him, and this is verbatim, with so much more patience than any human.
[00:20:57] That is a super, super fucking cool use case. Right? I think a lot of the shit that AI is being used for these days is exactly that: it's shit. It's trash, it's garbage. It's nonsense. It's a waste. But I also think it is a remarkable technology with so much promise, so much potential, so much value, and what Chris shared is a perfect example of that. So, Corin, if you're listening, thank you for sharing that. And the rest of you, that is all for today. Hopefully you found this episode helpful. If you did, how about you consider leaving a rating or review? I do. I just like looking at them. I would say that it helps people find the show, but, like, I don't actually know if that's true. So I don't want to lie. I just like reading them. And this is a very unidirectional thing. I'm just talking to the screen. Nobody's here with me. Uh, Rupert's here with me. My cat, for those of you who don't know. But that's it.
[00:21:50] So I like reading the reviews. Lets me know that you're listening, that it's helpful, that you're enjoying it. So consider leaving a little rating, a little five stars, or a review, whatever tickles your fancy. But do not forget as well, I have a companion newsletter, the Curious Companion, that drops every Thursday along with the episode. It is basically the podcast episode in text format. So if you prefer to read, or you want a written record of the episodes, join the newsletter fam. You can head to chatgptcurious.com forward slash newsletter, or you can check out the link in the show notes. I gotcha either way. All right, all right. As always, my friends, endlessly, endlessly, one more time, endlessly appreciative, uh, for every single one of you. Until we chat again next Thursday, stay curious.