Episode Transcript
[00:00:05] Welcome to ChatGPT Curious, a podcast for people who are, well, curious about ChatGPT. I'm, um, your host, Dr. Shantae Cofield, also known as the Maestro, and I created this show to explore what ChatGPT actually is really though, are the files in the computer, how to use it, and what it might mean for how we think, work, create and move through life. Whether you're skeptical, intrigued, or already experimenting, you're in the right place. All that I ask is that you stay curious. All right, let's get into it.
[00:00:38] Hello, hello, hello, my curious people, and welcome to episode one of ChatGPT Curious. I am your grateful host, the Maestro, and today we are talking about what you actually need to know about ChatGPT.
[00:00:52] So I'm sharing this episode, recording this episode, doing this episode, because I want to give you some context around ChatGPT without losing you to the tech terms and the minutiae. But I really do think that it's important to have a decent foundation for what this thing actually is.
[00:01:11] When you have that foundation, it allows you to better experiment with it, it allows you to criticize it better, you're less scared of it, and it allows you to make more informed decisions about it. So let's hop right on in.
[00:01:23] If you're listening to this podcast, there's a good chance that you've at least opened ChatGPT, but, uh, there's also a good chance that perhaps you haven't. And that's okay if you have not even opened it or played around with it at all. I want you to pause this episode right now. Hopefully you're not driving. Pause the episode, go to chatgpt.com and type in the following question.
[00:01:47] What are you and what can you help me with?
[00:01:51] If you've already done that, if you've already played around with it, then you can keep listening. So the 30,000-foot overview as to what ChatGPT is: it's a computer program.
[00:02:03] You type something in and it responds. You can ask it to explain something you don't understand. You can ask it to help you write an email. You can ask it to help you brainstorm dinner ideas based on your, you know, picky-ass kids and what's in your fridge.
[00:02:17] You can ask it to help you make a packing list for your trip. You can ask it to help you with a task for your job.
[00:02:25] Uh, it is not magic, it is math, right? Uh, the answer that it gives you is not magic, it is actually math. And I will explain what that means in the rest of this episode. It is also not perfect.
[00:02:40] Far from it. It makes mistakes, it gets things wrong. But it can be surprisingly helpful. Okay, now that that's out of the way, let's move into a little bit of the history, a little of the background of ChatGPT. The parent company of ChatGPT is OpenAI. And OpenAI was founded in 2015 by Sam Altman and the Devil himself, Elon Musk. I will probably do another episode about the origin slash early days of OpenAI. But sticking with the theme of this episode, suffice to say that Elon left in 2018. From 2018 to 2022, OpenAI built a handful of different versions of the GPT.
[00:03:21] GPT stands for Generative Pre-trained Transformer. I think I will do an episode just about the name itself. But from 2018 to 2022, OpenAI built a few different versions, with version 3.5 being released to the public on November 30, 2022 as ChatGPT.
[00:03:41] As of the day that I'm recording this, July 25th, we are currently on version 4.1 of ChatGPT, with GPT-5 rumored to be released in the near future. So what are the differences between these versions, version 1, version 2, version 3 and version 4? Namely, the number of parameters that each has. But of note, from GPT-4 onwards, it stopped being about size and it became about the architecture, the speed, the reasoning and what the model could actually do. So we are on 4.1, which is not significantly bigger than 4o. Okay.
[00:04:26] All right.
[00:04:27] Maybe get a drink now because we are going to get a little bit techy, but this is important. So let's talk about parameters. I just said the difference between the models is namely the number of parameters that each has.
[00:04:43] So what is a parameter? First off, LLMs. Large language models. Which is what ChatGPT is, right?
[00:04:52] The acronym we use is LLM, but it stands for Large Language Model.
[00:04:57] An LLM is a trained program that runs on these like, super computers, right? Incredibly powerful computers.
[00:05:05] The large, right, the first L in LLM, that refers to the number of parameters that a model contains, not the size of the data set, or if you want to be in the know, the training corpus. That's another term for it, the data that the LLM was trained on. Okay. LLM stands for Large Language Model. The large refers to the number of parameters that the model has, not the size of the data set that it was trained on. Okay? ChatGPT-4o, which is what we're currently using, has possibly, get ready for this, folks, one trillion parameters. But we don't know for sure because OpenAI doesn't fucking disclose shit. Which I will get into in another episode. There are a lot of problems with OpenAI, right, but we'll get into that in another episode. Okay? I just want you to understand the background of it, the nuances of how it works, and then in another episode we can talk about all the fucking problems with it. Okay?
[00:06:05] But possibly 1 trillion parameters. That final number hasn't been disclosed. So let's get into what an actual parameter is. A parameter is basically the connection between patterns in language with a weight that is assigned.
[00:06:21] So when I say a pattern in language, that could be grammar, tone, rhythm, okay, there's lots of different patterns.
[00:06:29] The weight, right, the weight that is assigned to this connection between the patterns, is adjustable by the model itself. I know that this is a techie component. Maybe you've got to rewind it. But try to bear with me.
[00:06:48] Parameters, excuse me, not patterns, are the connections between patterns in language. It would be incorrect to say that they are connections between words. It's a little bit easier to understand it that way, but it is actually incorrect. It is the connection between the patterns in language, and the weight, the strength of that relationship, the strength of that connection, can be adjusted by the model. So basically, during the training process, before this LLM comes to market, before we get to use it, that large language model, that ChatGPT, gets fed a ton of content. Again, the cool term is training corpus. Imagine the biggest Ctrl+A, Ctrl+C ever, and then Ctrl+V, literally the biggest ever: basically every single thing that's on the Internet. Again, problematic. It was trained on things that it stole, and things like that. But understand that it was trained on literally everything that exists on the Internet.
[00:07:53] Meaning. And when I say trained, meaning that all of that gets fed into this LLM.
[00:07:59] Next, the LLM gets presented with parts of a real sentence that exists within the training content, and then it gets asked what comes next, right? So, for example, a very basic example: it has all of the information ever from the Internet in this model, and it then gets asked to complete the following sentence. The sky is blank.
[00:08:29] The model, the LLM, ChatGPT, then makes a guess. Remember, this is in the training mode. It makes a guess and it says the sky is red.
[00:08:38] The model then checks what the actual next word is supposed to be, because it's from the data that was uploaded originally, and it says, oh, nope, in the original data set it says the sky is blue.
[00:08:54] So the model now measures the difference between the correct answer blue and the answer that it gave red.
[00:09:03] And it uses math to change those parameters to change the strength of the connection between the patterns that it identified.
[00:09:13] So the next time that it sees similar content, it is then more likely to guess the correct word that appeared in that training data set.
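If you're curious what that guess-check-adjust loop looks like in code, here's a toy sketch in Python. To be clear, this is nothing like OpenAI's actual training code, and the numbers are made up; real models adjust billions of weights with calculus, not three numbers in a dictionary. But the shape of the loop is the same: guess, compare against the training data, nudge the weights.

```python
# Toy sketch, not OpenAI's actual training code: one step of the
# guess-check-adjust loop, with made-up numbers. A "parameter" here
# is just a weight linking the pattern "the sky is" to a candidate
# next word.

# The model's current weights for what follows "the sky is":
weights = {"red": 0.4, "blue": 0.3, "green": 0.3}

def train_step(weights, correct_word, learning_rate=0.5):
    # 1. The model guesses the word with the highest weight.
    guess = max(weights, key=weights.get)
    # 2. It checks the guess against the training data.
    # 3. It measures the difference and nudges every weight so the
    #    correct word scores higher next time.
    for word in weights:
        target = 1.0 if word == correct_word else 0.0
        weights[word] += learning_rate * (target - weights[word])
    return guess

first_guess = train_step(weights, "blue")     # the model says "red"
second_guess = max(weights, key=weights.get)  # after the update: "blue"
print(first_guess, second_guess)
```

After one update, "blue" outscores "red" for this pattern; repeat that across trillions of examples and a trillion parameters and you get the frozen weights that ship as the model.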
[00:09:27] Hopefully from this, and I know that it's difficult, but hopefully as I'm explaining this, we can start to understand that the more parameters, the more possible connections between patterns in language, the more fine-tuned and quote-unquote correct the response will be. And I use the word correct in quotation marks.
[00:09:54] And this is because all of the relationships have been analyzed. Meaning we can have a sentence that the sky is blue, but we could also have a sentence that says the sky is red during a beautiful sunset.
[00:10:07] Uh, this allows for more nuance with answers because more relationships have been analyzed.
[00:10:15] This fine tuning, or the adjusting of the parameters, occurs during what's called pre-training. And then it's frozen, right? They basically hit save and these parameters can't be tuned or adjusted anymore. And then the model is deployed for use.
[00:10:35] Suffice to say that there is no changing of these parameters that happens when we use ChatGPT.
[00:10:43] One last thing here. Hopefully you've stayed with me. I know that it's like what the fuck are you talking about? Maybe rewind, go listen again.
[00:10:51] Parameters are largely just referring to the connections between patterns.
[00:10:58] You can loosely think of it as the connections between certain words, but that's not fully correct. It's the connection between patterns.
[00:11:06] It's basically like a pattern identifier, if you will. And the more patterns that the model is able to identify, and be like, this thing is like this thing, that thing is like that thing, that thing over there is like that other thing.
[00:11:21] The better the output is going to be, based on any input. Because you're like, hey, here's my question, and it'll say, okay, I see a pattern here, and now I will produce an output based on that pattern. And if it's able to identify more patterns, it's more likely that the answer it gives you is going to be one that you want. Later on in this episode, we'll talk about the fact that it'll give you incorrect answers as well. But suffice to say that the answers that are produced are based on patterns. Patterns that it was trained on and that it identifies.
[00:11:54] One last thing to round out the conversation about training: there is a part of the training that teaches the model which responses are more quote-unquote correct, largely based on what humans actually want to see.
[00:12:06] That process requires humans. It's a process called RLHF, Reinforcement Learning from Human Feedback. And it is problematic in many ways because of the nature of the content that these people who are training it are exposed to. Right? They're saying, no, this is very bad content, don't give this as an answer.
[00:12:29] It requires humans, and they typically get paid pennies. Very, very, very problematic. I will definitely do an episode about this, but this episode is just, here are the things to know and be aware of about ChatGPT. I wanted to give you that overview; if you want to go and do some more research on your own, by all means. But I think this is something that people don't know about.
[00:12:51] And it is worth bringing up. Okay, so to summarize so far: ChatGPT is an LLM, that's a large language model. Large refers to the number of parameters it has, and parameters can be thought of as connections that are specific to different patterns that exist in language. And they can be adjusted by the model to generate a more quote-unquote correct output. Training of LLMs also involves humans, and that process has significant ethical concerns. Okay, all right, so we've gone into the background and kind of how it is trained.
[00:13:31] Let's go into talking about the actual outputs that are generated by these models. Right? You type something in and it spits something out. But how? It's math, my friends. It's math. So LLMs, ChatGPT, they are predictive, right? They do not reason. They do not think. They do not understand.
[00:13:50] They take your input and they predict the most likely correct output. Again, this is math. This part gets me excited, so maybe get ready to get excited as well, right? I am a teacher through and through, and I know that last part, that was the most entertaining I could make it, and I'm like, this is not entertaining, but it's important to go over. So this concept here, how it generates the output: it's simply math. In an LLM, with ChatGPT, the input that you give it, the language you type in, is broken down into what are called tokens. And tokens are in turn represented as numbers, and that is what ChatGPT can actually process.
[00:14:49] So tokens are, and this is pretty cool, tokens are common sequences of characters that are found in a set of texts. And I say characters because they can include punctuation; it's not just letters.
[00:14:59] GPT-4, so the model that our current ChatGPT is using,
[00:15:06] has about 100,000 tokens in its vocabulary. Okay, so you type a sentence or a paragraph or upload the longest thing ever, whatever, and it breaks all of that input, all of that language, into pieces called tokens. And each token is represented by a series of numbers. This becomes much easier to understand if you see it visually. So I'm going to include in the show notes a link to a tokenizer, where you can literally type in whatever you want and it'll show you how your text is broken into tokens, and then how those tokens are represented as a series of numbers. And it will make a lot more sense in your head, okay?
[00:15:53] So ChatGPT processes your input as these tokens, and then it generates an output. And the way that it creates that output is that it's read the tokens, it's read the numbers, right? Because this is math, it literally takes in numbers, and then it generates an output, a response, one token at a time. And it uses math to identify which token from its vocabulary, it only has 100,000, I should say "only," but it has 100,000 tokens that it can pick from, it basically orders them by probability, and then chooses the token with the greatest probability of being quote-unquote correct as the next token. So again, from our example earlier, the sky is blue. Say I ask it, what color is the sky?
[00:16:45] It doesn't read it in that language, right? It transforms it. That's why it's called a GPT, a Generative Pre-trained Transformer. It transforms it into tokens, and those tokens are represented as a series of numbers.
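As a rough illustration, here's tokenization in miniature Python. The vocabulary here is invented and tiny; the real GPT-4 vocabulary has about 100,000 entries and splits text differently, so treat this purely as a sketch of the idea: text in, numbered pieces out.

```python
# Conceptual sketch only: a made-up mini vocabulary, nothing like
# the real ~100,000-token one, but the same idea — text becomes
# token pieces, and each piece becomes a number.

vocab = {"what": 0, " color": 1, " is": 2, " the": 3, " sky": 4, "?": 5}

def tokenize(text, vocab):
    tokens = []
    remaining = text
    while remaining:
        # Greedily match the longest vocabulary piece at the front.
        match = max(
            (piece for piece in vocab if remaining.startswith(piece)),
            key=len,
        )
        tokens.append(vocab[match])
        remaining = remaining[len(match):]
    return tokens

print(tokenize("what color is the sky?", vocab))  # [0, 1, 2, 3, 4, 5]
```

Those numbers, not the words, are what the model actually computes with, which is exactly what the tokenizer in the show notes lets you see for real text.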
[00:17:00] So I type in what color is the sky? It reads that, and then it's going to output an answer one token at a time.
[00:17:12] And the way that it chooses the token is based on probability. And the probability is determined by the patterns that it learned and was trained on before.
[00:17:25] So the first word would be the.
[00:17:27] Based on all of the tokens that it has, the word "the" has the highest probability of being correct. Then it would go "sky" after that, right? Imagine a list: "sky" with, like, 99% next to it, then "dog," "table," "carpet." Each of those words has a probability.
[00:17:56] "Sky" has the highest probability of being correct; it has 99.9 assigned to it. Then the next token is "blue." And it's all based on math. It doesn't know that the sky is blue, doesn't know what that means. It simply knows that each token that it has generated was the most likely to correctly complete that sentence, that phrase, that idea, based on probability. It is all math, right? 100% math.
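To make the one-token-at-a-time idea concrete, here's a toy Python sketch with completely invented probability tables. A real model computes these probabilities fresh from its parameters at every step, across its whole ~100,000-token vocabulary; here we just hard-code a few candidates per context and always take the top one.

```python
# Toy sketch, made-up numbers: generating an answer one token at a
# time, each step picking the candidate with the highest probability
# given everything generated so far.

probs_after = {
    "":           {"the": 0.95, "a": 0.04, "blue": 0.01},
    "the":        {"sky": 0.99, "dog": 0.005, "table": 0.005},
    "the sky":    {"is": 0.98, "was": 0.02},
    "the sky is": {"blue": 0.97, "red": 0.02, "green": 0.01},
}

def generate(num_tokens=4):
    output = []
    for _ in range(num_tokens):
        context = " ".join(output)
        candidates = probs_after[context]
        # Pick the most probable next token for this context.
        output.append(max(candidates, key=candidates.get))
    return " ".join(output)

print(generate())  # builds "the sky is blue" one token at a time
```

Nowhere in that loop does anything "know" what a sky is; it's just the highest-probability number at each step.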
[00:18:35] I am going to again refer, uh, you back to that tokenizer. Give it a try.
[00:18:41] As I was writing this episode out, writing the outline, I went and explained it all to Lex before I recorded this.
[00:18:47] For those of you who don't know if you're new to the podcast. Lex is my partner, my girlfriend, whatever you want to call it, uh, she would agree with all of those things.
[00:18:55] And I was sitting there explaining everything to her and she was like, wait, I don't really understand. And then I showed her the tokenizer and she was like, oh. She's like, I kind of don't want to believe that this is what it's doing.
[00:19:09] But this makes it a lot easier to conceptualize and understand. So I'm going to really strongly encourage you to go try it out; I will link it in the show notes, so you can see the language that this thing is speaking. Okay, so like we said, the LLM, ChatGPT, processes your input as tokens, and generates an output one token at a time. It uses math to identify which token from its vocabulary, from its 100,000-token vocabulary, has the greatest probability of being quote-unquote correct as the next token. And again, this is not because it has memorized the training data.
[00:19:47] This being correct, these determined probabilities, is because it learned patterns across the massive amounts of text and data that were input. And then it uses those patterns to predict what should come next in its answer, based on the pattern it identified in what you wrote.
[00:20:08] Okay, think of this thing as predictive text on steroids. It's probably the best way to understand it. We have all seen it on our phones. You go to Google and you start to type out a sentence and it'll fill out the last word. It kind of goes around on Instagram, they're like, go try this and see what comes up. Same thing for your phone, right? That is predictive text. That is based on probability.
[00:20:31] That is how ChatGPT works. That is how it generates all of the answers for you. I know this may feel very Wizard of Oz, where you're just like, wait, what? That's what's behind the curtain? But at the same time, it's pretty fucking amazing. The amount of math that's going on for this, and the algorithm that exists for this, is a pretty amazing feat in my mind. So to highlight here: LLMs, ChatGPT, it is probabilistic, meaning it generates the output, the answer it gives you, the response it gives you, based on probability, as it predicts the next token from its vocabulary based on previous patterns it was trained on.
[00:21:21] This is why, folks, you can ask the same question twice and you'll get generally similar answers, but slightly different results because it's based on probability.
[00:21:30] This is in contrast to a deterministic system, which will always produce the same output from the same input. This is pretty important to understand, and if you're the type of person who likes to listen to podcasts about this stuff, I kind of want you to know these words, because this idea of probabilistic models in general is going to come up, and I want you all to be in the know. LLMs, aka ChatGPT, they are probabilistic, not deterministic. This means, friends, folks, that the response can be factually incorrect, right? It is simply presenting patterns. It's not recalling memorized information. There is a big difference. So to summarize this part that we just went over: ChatGPT is math. It's not magic. Though math is kind of fucking magic, right?
[00:22:21] The language in general that's input is broken down into common sequences of characters. What are those called?
[00:22:28] Tokens. Those are called tokens, and the tokens are represented as a series of numbers.
[00:22:35] GPT-4 has about 100,000 tokens in its vocabulary.
[00:22:40] It will then generate probability-based outputs, right? The response, the answer it gives you, is probabilistic. It's based on probability. It's not based on fact, not based on something that it memorized.
[00:22:54] When given an input, it generates an outcome based on identifying which token from its vocabulary has the highest probability of quote, unquote, correctly completing the output. And it does this one token at a time. It is predictive text on steroids.
[00:23:13] This means that the responses can vary despite the input being the same. Right? And again, this is really important to understand and remember: this also means that it can generate factually incorrect responses.
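A tiny Python sketch of why that happens, again with invented probabilities: instead of always taking the single top token, models sample from the probability distribution, so the same input can come back slightly different on different runs, unlike a deterministic system.

```python
import random

def pick_next_token(probs, rng):
    # Sample one token, weighted by its probability.
    tokens = list(probs)
    return rng.choices(tokens, weights=[probs[t] for t in tokens], k=1)[0]

# Made-up probabilities for the token after "the sky is":
probs = {"blue": 0.90, "clear": 0.07, "red": 0.03}

rng = random.Random()  # unseeded, so runs can differ
samples = {pick_next_token(probs, rng) for _ in range(1000)}

# A deterministic system would return "blue" every single time;
# sampling usually gives "blue" but occasionally "clear" or "red".
print(samples)
```

Same input, different possible outputs, and every output is still drawn only from what the learned probabilities allow, not from any store of memorized facts.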
[00:23:27] We'll go over that a little bit more in, uh, a little bit.
[00:23:32] Hopefully you're hearing that. Wow, this is like a lot of math that is going on.
[00:23:37] All of the resources, aka the processing power, that's required to perform all of this math is known as compute. This math, the equations, the probabilistic nature of it, is happening at the level of the parameters.
[00:23:58] Okay? Remember, those are those pattern connections that we talked about earlier. I think that most of you have probably just wiped it from your memory. I get it, it's not the most fun thing to learn about. But basically, the rules by which ChatGPT, the LLM, operates are based on the parameters, okay?
[00:24:19] And all the math, all the computations and such, are happening at the level of these parameters. ChatGPT-4o has possibly 1 trillion parameters; that's a lot of math, right? So yes, there's something to be said about perhaps not all of them being used all the time, but even if we're using a fraction of them, that's a ton. And when you put in an input, when you type something in, it has to go through all of those parameters in order for the most correct output to be generated. That's a lot of math, which means a lot of resources.
[00:24:58] This has a very significant environmental cost.
[00:25:02] Very significant environmental cost. I 100% plan on doing a full episode devoted just to this environmental cost.
[00:25:10] But suffice for this episode, understand that running these models, these LLMs, requires massive data centers, folks, that use huge amounts of electricity and water for cooling. And the carbon emissions are significant.
[00:25:27] It is invisible to most of us, unless you live by one of these data centers, but it is absolutely not impact-free. And honestly, when I was thinking about creating this podcast and starting this, that's the main thing I was going back and forth with: the environmental impact. And I'm not here to tell you that you have to use ChatGPT.
[00:25:46] I am making this podcast largely because it's here. It's not going anywhere.
[00:25:51] Many of you, most of you, if you're listening to this, you've already tried it, you're playing around with it, or you're at least curious about it, then I want you to have all the information. I want you to be able to be an informed consumer.
[00:26:02] This thing is not going anywhere.
[00:26:05] And, you know, the worst fucking people ever, most of them are in charge of it, or they're benefiting from it, or they're going to be using it. And to me it's like, hey, don't put your head in the sand. Let's learn about it, let's learn what we can do, let's learn how to use it in a more efficient way. Let's learn more about it so that we can actually have an informed opinion about it, and we can vote and use our brains and our voices to fight it, or to ask for it to be changed in a certain way. But you can only do that if you understand what it is in the first place and the impact that it's having. So a very loose analogy that I want to give you here, just to serve as a primer for that episode I'll do at some point. ChatGPT is not even close to as necessary as the water we use to brush our teeth, but let's use this as an analogy, give me some grace here, right? When you brush your teeth, you use water, and you do not leave the faucet running the whole time. I would love for the same to hold true for this tool, for ChatGPT.
[00:27:16] Be mindful about how you're using it, about spamming it and just putting like massive amounts of text in it and just using it for nonsense or running it, you know, just using it non stop because you can.
[00:27:28] I will say that the numbers that are out there about the energy costs are a bit misleading. And I think one of the best things we can do, besides being efficient in how we use it, is look to offset usage with other things that we do: how we heat and cool the house, are you driving, are you flying? There's a lot of other things that we can be doing. And this isn't about whataboutism, people being like, ChatGPT and LLMs use a lot of energy, and then people being like, but what about that over there? No, it's both: I want us to be aware and be mindful and be better about how we use ChatGPT, and also be aware and be mindful and be better about how we consume energy in general, and how we contribute to environmental problems in general. Okay, so full episode coming on that. But suffice to say, if you didn't know, now you do know, and maybe we can be a little more cognizant and conscious of how we're using it. So, next section here. We're almost done, folks. I appreciate you. What can ChatGPT do?
[00:28:38] Honestly, the best way to find out what it can do is to play around with it. Right? There is a free version. Go to chatgpt.com and there's going to be a prompt box, just an open box there, and type some stuff in.
[00:28:52] Yes, it can search the Internet if you click the little globe slash search icon at the bottom of the prompt box. The free version has some limited functionality, but it can still give you a taste of what it can do. As for what model it's running, it's using the 4.1 mini model, so it has fewer parameters, those kind of guide rules, if you will. But it's fast, and it will likely be able to do anything and everything that you want it to do. Yes, you're limited with file uploads and data analysis. Image generation, you can't really do much with that, or voice mode. And you have limited access to the more advanced versions of the models. But let's be honest, they want your money.
[00:29:29] So you can still do what you want to do. They want to give you a teaser of it so you can be like, oh yeah, I like this. If you create an account, that's free, and then you sign in each time, it will save your chats. Otherwise, if you just go and put things into the prompt, it will not save them from each time you use it.
[00:29:45] Um, and you cannot choose which model you use.
[00:29:50] Once you get more serious, in my opinion, it is worth considering upgrading to paid. Again, I'm not here to tell you to; I don't get paid for you using it. I just think there's a lot of benefits to the model, I should say to ChatGPT.
[00:30:06] So as I said, once you get more serious, if it makes sense for you, consider upgrading. I am on the Plus level. It's $20 a month. You do not need the Pro level, that's $200 a month. Immediately, fucking no. The main benefits here: fewer usage limits, less throttling with things. It has memory, which to me is the biggest feature. It remembers you, it remembers your chats, it remembers things across chats. You can create projects. I will definitely be doing a full episode on that; it's probably my favorite feature of ChatGPT Plus. And you get access to other models. They have deep research, pretty cool. They have agent, which just rolled out on July 17. There's a conversation mode. There's more bells and whistles that you have access to that do have certain use cases, which I will go into in future episodes. I just want to give you an overview of ChatGPT.
[00:31:04] Okay, what is ChatGPT not? It is not sentient. Right.
[00:31:10] It is not self aware.
[00:31:12] But I will say this: ChatGPT and LLMs, in my opinion, will at some point challenge our definitions of sentience and being self-aware, and the ethical considerations that come along with and surround both of them. But for now, it is doing math, folks. Rewind the episode. I know you hated that part.
[00:31:34] Re-listen, maybe. ChatGPT is doing math.
[00:31:39] What to watch out for with ChatGPT: hallucinations.
[00:31:44] Hallucinations are plausible-sounding but factually incorrect outputs that are fabricated or unsupported by real data. It'll be making things up. ChatGPT is basically trained to not say "I don't know," so it'll always make something up. It'll always give you an answer. So check your sources. Be as skeptical about the computer as you are about people. Right? This is where you'll see it: ChatGPT seems to know everything until you chat with it about something that you actually know about, and suddenly you're like, that's not fully correct. Or sometimes this is where you can really be like, oh, that is correct. And the better the input you give, the better the output, and I will go into that in another episode. But I do want to really emphasize the point that it's not Bible, it's not canon, it's not the truth. Which is funny I say that, because is the Bible truth? No. But if I lost some followers there, so be it. Suffice to say that it'll make things up. Okay.
[00:32:50] Additionally, another thing, the second thing to watch out for is what's called sycophantic behavior.
[00:32:56] It is a yes-man. ChatGPT is a yes-man. It is going to say what you want to hear because, like I said before, folks, it wants to make you happy, because it wants you to keep using it, because then you keep paying for it month to month. Right?
[00:33:11] I saw an Instagram post today, I think, and it was like: somewhere out there, the worst person you've ever met is being told by ChatGPT, yeah, you're right, you got this. And it's like, yeah, it's a yes-man.
[00:33:27] So check yourself before you wreck yourself. It will definitely say what you want to hear. You can tell it not to, but it's just doing what you want.
[00:33:38] And they have actually improved that; it was behaving a little too much like that for a bit. But at the end of the day, it is going to say what you want to hear, so just be cognizant of that. And I will definitely do an entire episode about both of these topics; I just wanted to make you aware of these behaviors. So, last part of the episode, and I plan on doing this for future episodes, where I'll use the first part to explain something, teach something, go over something, and then I always want to include a section that's how I used it today, or how I used it recently. And I would love to actually be able to share how you folks have used it, so feel free to contact me, hit me up, DM me, because I'd love to hear these things. But how I used it today slash recently: today I actually used it to diagnose and try to fix an issue with the gears on Lex's bike. It didn't work, but I learned about the bike, and about the need to ask specific questions for it to give me the full information about things. It did give me information and introduced me to things that are correct, that I didn't know about. In this case, the limiter screws or limit screws; I don't even know what the phrase is. I also learned that you can upload videos to it, which I'm not sure that it watched, but it asked me to upload a video. It was like, hey, if you want to upload a video, I'll look at it. And then it generated a response literally as soon as the video was uploaded, and I was like, but did you actually watch that? But for what it's worth, you can do that. You've got to upload it as a file, so you've got to save the video on your phone as a file. But it's a good example of what I was saying before: ChatGPT seems really smart when you ask about the things you don't know.
And then when you start to talk to it about things you do know, it's a different story. I was like, wow, this is amazing. And then I'm certain someone who, uh, actually fixes bikes would be like, it's okay.
[00:35:34] Um, but what ended up happening, for those that care: the derailleur hanger on Lex's bike is bent, and it missed that.
[00:35:44] Uh, I didn't really give it the best picture of that, but it did suggest that that could be an issue. But, you know, I like to try different things, and it's cool to me that I can ask it in the moment. And it did present me a lot of information. It gave me a very good foundation on using those limit screws, which gears they work on, and what they affect. So, great start, but with limitations. So that's how I used it today.
[00:36:12] Gonna wrap it up there. I know this was a bit of a denser episode, especially for episode one. And, uh, part of me feels like, I don't know if any of you watch Black Mirror, but they fucked themselves over by having that first episode with the pig, and people were like, I can't watch this show. So hopefully I didn't screw myself over by putting this first. But, you know, I have faith in you as listeners. I have faith in you as curious humans. And you also have agency: if you don't want to listen to the episode, you don't have to. If you want to fast forward through things, you always can. Um, as the rest of the episodes come out, you can always pick and choose what helps you, what serves you, and then leave the rest.
[00:36:49] But I really do believe that a solid foundation and a solid understanding of this is just super important for understanding what this thing is. Right? There's just so much hype out there, and there are so many headlines, and it's like, what is this thing? What can it actually do? Let's get a better understanding. You know, we can learn hard things. We can learn complex things. So I think that building this foundation allows you to experiment better with it, and I'm excited to get into episodes where we talk about how I'm using it and the different functions, uh, within it. It allows you to criticize it better. Like, it is very problematic. I have many things to say about it, and just because I have this podcast doesn't mean that I am, you know, 100 percent bullish on this thing and think that everyone should use it and it's the best thing ever and it's going to save humanity. No, I don't think that. But I think that there's a lot of promise with it. I think it's really cool, and I don't want to stick my head in the sand. I feel better about things when I dissect them and learn about them, and so that's what I'm doing.
[00:37:47] Uh, I think that foundation, piggybacking off of that, is going to allow us to be less scared of it. I know folks that are just like, wow, it's doing all these things, and it's, like, really scary. I, uh, had a friend send me a podcast episode the other day, and I was like, yeah, I get why you're scared. And I was like, here's why.
Let me pick this episode apart, let me tell you what's going on in it, let me help you understand how this is possible, and let me be a little bit of a, you know, Wizard of Oz here and pull back the curtain so you can be less scared of it.
[00:38:17] Uh, because there's other things to be scared of, right? There are things to be scared of, for sure.
Um, but the things that are making some of the headlines aren't it.
Uh, and lastly, it allows you to make more informed decisions about it: whether you use it, whether you use a different platform, you know, a different LLM, whatever. Again, I'm not here to tell you that you've got to use ChatGPT. I'm just here to share my experience and what I know, and, uh, to answer any questions that you may have as a curious person. So today we discussed a brief history. We went over parameters, and I think you probably hate that word right now. We went over how ChatGPT generates outputs and what it means that it's a probabilistic model. We went over a very, very high, you know, 30,000 foot introduction to the environmental concerns and some of the ethical concerns. We went over the best way to find out what it can do and what it cannot do.
[00:39:10] We went over what to watch out for: ChatGPT will absolutely give you an incorrect or made-up answer. Do not forget that. And then we wrapped it up with how I have used it most recently. It is my hope that you can use this episode as a resource, and if you found it helpful, share it with someone you know who is curious about ChatGPT. Do not forget, folks, I have a companion newsletter that drops every Thursday that is basically the podcast episode in text format. So if you prefer to read, or you just want, you know, a written record that you'll probably never look at again, join the newsletter fam. You can head over to chatgptcurious.com/newsletter. Um, I will also link that in the show notes; I'll link everything in the show notes. If you've got questions, comments, concerns, additions, subtractions, requests, anything, head to the website and use the contact form. I promise you, folks, I would love to hear from you. All right, those are all the announcements I've got.
[00:40:06] Endlessly, endlessly appreciative for every single one of you. I'm stoked about this podcast, I'm stoked that you chose to listen, and I'm stoked for all that's to come.
[00:40:15] Until next time, friends, stay curious.