Ep. 12: Why ChatGPT Sometimes Sucks (and What to Do)

Episode 12 October 09, 2025 00:25:18
ChatGPT Curious

Show Notes

In this episode we dig into why ChatGPT sometimes gives you garbage outputs and what you can do about it. From hallucinations and drift to bad prompts and long chats, we’ll cover the common pitfalls, what to do when it sucks, and how the imperfections of AI might actually be keeping your own skills sharp.


Episode Transcript

[00:00:00] Speaker A: Foreign. [00:00:05] Speaker B: Welcome to ChatGPT Curious, a podcast for people who are, well, curious about ChatGPT. I'm, um, your host, Dr. Shantae Cofield, also known as the Maestro, and I created this show to explore what ChatGPT actually is. Really, though, are the files in the computer, how to use it, and what it might mean for how we think, work, create and move through life. Whether you're skeptical, intrigued, or already experimenting, you're in the right place. All that I ask is that you stay curious. All right, let's get into it. Hello, hello, hello, my curious people, and welcome to episode 12 of ChatGPT Curious. I am your grateful host, the Maestro, and today we are talking about something we've all experienced, and something that should give you tremendous peace of mind when you see the sensationalized headlines that AI is coming for everyone's jobs. Today we are talking about why ChatGPT sometimes sucks and what to do. We have all been there, right? You're working with it, and you're like, what the is this output? This is terrible. Like, I legit will be getting mad at it sometimes, and I'm typing like, I didn't say that. Do it right. Like, I get mad in exclamation points. And you know, it's not helpful. And of course, ChatGPT is the yes man. It's like, you are correct. That wasn't helpful. And here's the same again, just as bad. So, uh, to start off here, I'm going to refer you back to episode one. I will link it in the show notes. But, uh, episode one was what you actually need to know about ChatGPT. Uh, and in that I go through, you know, a breakdown, um, what I believe is a simplified version of how ChatGPT actually works. So you can go to that for the full rundown there. But, um, I want to give you another.
Just a little synopsis of it, just a little reminder, or if you haven't listened to episode one, here you go, an introduction. So understand, remember, folks, at the end of the day, ChatGPT is math. It is not magic. It is math. ChatGPT does not reason, it doesn't memorize, it doesn't think or understand. It takes your input and it predicts, right? It predicts the output that has the highest probability of being correct. And by correct, it largely means that you're going to like it. That's what correct means. Yes, if it's, you know, things that it was definitely trained on, um, then correct is speaking to that. But, um, there's also a part of it that is like, what will this person like? [00:02:47] Speaker A: Right. [00:02:48] Speaker B: So remember, your input, the thing you type in there, it's broken down into tokens, which are common sequences of characters. GPT-4 had about 100,000 tokens in its vocabulary. Your input is, uh, broken down into tokens, and the tokens are represented as, we talked about this last week, vectors. Uh, those get transformed and the model does a bunch of math, and it's going to interpret what you said and then decide on the answer, the best answer to give. And it does that one token at a time, based on probability. [00:03:20] Speaker A: Right. [00:03:20] Speaker B: ChatGPT is a probabilistic model, meaning it generates the output based on probability as it predicts the next token from its vocabulary. Now, that vocabulary and the prediction, it's based on previous patterns that it was trained on. This is why. I know that's a bit wordy, but just stay with me here. Like, just pretend you're here with me. This is why you can ask it the exact same question twice and it will generate similar but different results. [00:03:49] Speaker A: Right? [00:03:49] Speaker B: This also means that the response that it gives you can be factually incorrect.
ChatGPT is probabilistic, not deterministic. It is presenting patterns based on probability, not recalling memorized information. Now, this is where perhaps you've heard that term hallucinations, right? Where ChatGPT hallucinates. That doesn't mean that it's, like, spitting out wingdings. [00:04:16] Speaker A: Right? [00:04:16] Speaker B: Um, a hallucination is a plausible-sounding but factually incorrect output that is fabricated or unsupported by real data. Right? It's not gibberish, it is just wrong. So this is why the model can absolutely suck at times. It's because this is how it works, right? It's literally what makes it work. Uh, but it's also what is its greatest shortcoming. Or, I won't say its greatest, but it is a great shortcoming that it has, and this is what will keep us from AGI. Remember, AGI is artificial general intelligence. That's Skynet, right? Go watch Terminator. Skynet is AGI, right? The fact that it is probabilistic is exactly why we will not get to AGI just by using these models, and it's also why ChatGPT sometimes sucks. It's why it's great, but it's also why it sucks, right? So what I'm really focusing on within this episode, what I'm really talking about, is when ChatGPT is generating something for you, as opposed to when it's answering a question, right? So if you're asking it to create an outline or summarize something or draft some text, and then it gives you an output, and you're like, this is terrible. It's not a hallucination, right? It's not like it's wrong, right? If the input that you gave it was like, summarize this, and it made up something completely different, then, yeah, it'd be a hallucination. But oftentimes you're like, this just doesn't sound good. This sounds like a robot. Not good. Sometimes you'll hear the term AI slop used, right? This is what I'm referencing.
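Editor's aside for the newsletter readers: the "probabilistic, not deterministic" idea above can be sketched in a few lines of Python. This is a toy illustration only; the vocabulary and probabilities here are completely made up, and a real model like GPT-4 is scoring roughly 100,000 tokens at every step.

```python
import random

# Made-up next-token probabilities for the prompt "the cat sat on the".
# A real model computes scores like these over its whole vocabulary.
next_token_probs = {
    "mat": 0.55,
    "couch": 0.25,
    "roof": 0.15,
    "moon": 0.05,  # low probability, but not zero: the occasional odd output
}

def sample_next_token(probs):
    """Probabilistic: pick a token weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

def argmax_next_token(probs):
    """Deterministic: always pick the single most likely token."""
    return max(probs, key=probs.get)

# Same input, run twice: the sampled continuation can differ between runs.
print(sample_next_token(next_token_probs))
print(sample_next_token(next_token_probs))

# The deterministic version returns "mat" every single time.
print(argmax_next_token(next_token_probs))
```

Run the sampling version a few times and the output changes even though the input never does, which is exactly the "same question, similar but different answers" behavior described in the episode; the argmax version is what a deterministic model would do.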
Not when it gives you the incorrect answer, uh, for something, but when it's generating something that you're like, this is bad. That's what this episode is about. And when I'm saying ChatGPT sometimes sucks, this is what I'm referencing, right? So what can you do about it? I, uh, think there's a handful of things, right? So first off, I stay saying that the better the input, the better the output. So make sure your input is solid. Go check out episode 8. I will link it in the show notes. [00:06:15] Speaker A: And I. [00:06:15] Speaker B: In episode eight, I go through a clarity checklist to help you with making your prompts better. Right? Seven things on that checklist, um, to include in a prompt, uh, to get a better output, right? To get ChatGPT to suck less. Um, within this category of a better input is: focus on the action or actions that you want it to take, as opposed to, you know, negative commands, like, don't do this, avoid this. And we know that with, like, coaching humans, right? When we're like, don't let your knees go in, don't let your knees go in, right? If you're listening to this and you're a movement professional, that's why I went through it, because that's my background. But that's, like, the common cue, and it's like, that's not helpful. I tell people what to do, what to focus on. Um, it's just not as good. And, you know, I think that at some level, if you've been through any kind of higher level math, you understand why being like, here's the exception to the rule, it's not as good as, here's the rule. Uh, so it's just not as good when you tell it, don't do this. All right, so if you do include that in your input or in your instructions, expect it to ignore you. Like, I stay telling it: no em dashes, never give me em dashes. I don't like em dashes. Not because I think that it looks like AI. It's that I never use them.
Like, I will use a comma till the end of time. Love a comma, love colons. Like, that's how I roll. I love ellipses. Like, I don't like em dashes, and so I don't want them in the writing. I don't like the way it looks. Um, but that's something to consider, right? So the last part within this, giving it a better input, is to go and check the memory and see if all the things that it has stored about you and it remembers about you are correct or still relevant. [00:08:04] Speaker A: Right. [00:08:04] Speaker B: So I will link episode nine. Um, that is where I talked about checking the memory. Um, and that episode is Helpful ChatGPT Features You Might Not Know About. So I'll link all of that. [00:08:14] Speaker A: Okay. [00:08:14] Speaker B: All right. The next thing, the next kind of category as to what you can do to make it suck less, uh, is to be aware of drift and correct for it. So drift is the gradual or sudden change in how ChatGPT responds. And this is caused by updates to the system rather than anything that you actually necessarily did. Um, and so to me, the biggest thing here is that as they change the models and they, like, be doing shit and tinkering, I, uh, think that it may require you to update or modify instructions that you have saved for your projects. And this is something that I recently did, and that's part of the reason I made this episode. Um, so, like I've told you before, I turn the outline for these episodes into a newsletter, and I have a project for it. And all I have to do is paste the outline in there and I say, can you podcastify this? Can you companion this? And it gives me the podcast notes, the show notes, and it gives me the, um, companion, uh, basically the text, the copy for the companion. And it just, like, wasn't being good. Like, I was having to correct so much. And I was like, this is the whole reason I'm using you, so I don't have to correct so much.
And so I actually went back and I changed the instructions. I worked with ChatGPT and I told it, I was like, this is what's happening. I'm not liking this. [00:09:33] Speaker A: What. [00:09:33] Speaker B: What should I change in the instructions? What should I say in the instructions? Uh, and I went back and changed it, and I've been happier with the output. Um, so that second point there of what we can do to, you know, have ChatGPT suck less is to be aware of drift and then correct for it. The last technical, uh, suggestion is simply to start a new chat. And if you've ever had it generate a picture and then tried to generate another picture based on that, or, like, tried to fix the thing that it's just generated, you've experienced this, right? You ask it to make a picture, and then you ask it to change or improve that picture, and it just goes, like, off the rails. And you're like, what the happened? This is not the same. Like, you can just see the, like, degradation. I think it's getting better at it. I don't do a lot of picture stuff, but, um, I recently had it, uh, I put a picture in because I needed a tie to match a suit that I'm wearing to, um, Jill's wedding in October. [00:10:31] Speaker A: And. [00:10:31] Speaker B: And so the tie, I'm looking for it to match Lex's dress. And so Lex was, like, looking at dress colors, and I was like, okay, well, let me just put this in ChatGPT and it'll change the tie color for me, and we can have a few options so you can see. But it was exponentially better when I put the original picture in each time and then said, change the tie color, as opposed to, like, being like, okay, now make it red, now make it blue. Like, then you just see the degradation in the picture. [00:10:55] Speaker A: Right? [00:10:55] Speaker B: Um, so. And this, this should make sense.
And this is why I started this episode out with talking about the math behind this, and that it's a probabilistic model. This should make sense: if you are just asking it to change something that's already changed, what it gives you may not be that good. [00:11:13] Speaker A: Right? [00:11:13] Speaker B: Uh, if we take a step back, every time you ask the same question to ChatGPT, you get a slightly different response. Or you can give it the exact same input and it will give you a slightly different response, like, stylistically, because it's a probabilistic model, right? It's not, uh, single input, single output. That's what deterministic means, right? The same input always gets the same output. It's not a deterministic model. It doesn't memorize things, it doesn't know things. So anytime you put in, uh, an input and ask it something, it'll give you an answer, but each time it will be slightly different. Usually, um, it's slightly different just in the style, but it's still different. So to that end, oftentimes it is better to just start fresh or start from the beginning if that conversation is really going off course, it's really going off the rails, right? Because with each output that it's giving you, if you're like, no, make it better, no, make it better, you are not getting changes made to an exact copy of the original. You're getting changes made to the changed document. Like, quote, unquote, document. [00:12:14] Speaker A: Right. [00:12:16] Speaker B: Also, in the same idea of starting a new chat: it doesn't do well with super long chats. It, like, starts to kind of freeze and glitch and fail. Um, especially when things get really, really long, when you have big, you know, amounts of text that you're working with. So just start a new chat. It's just much easier to start a new chat. I recently had this happen when I was making the custom GPT that I talked about in the last episode.
And I was going back and forth with it to create what's called markdown files. They're just, like, simplified files. Uh, and it was taking forever, as I needed six of them, because I have six weeks for the course, and they're long. It's a lot of, um, text that I'm inputting. So I've input the outline for the call, and sometimes those outlines are five, six, seven pages, single spaced. That's a lot of text. And then it needs to take that and create a markdown file, and just, like, kind of summarize it, simplify it. But that's just a lot of text to keep inputting, and it just, like, struggles. So, suggestion: start a new chat. [00:13:17] Speaker A: Right. [00:13:17] Speaker B: Starting fresh just kind of helps everything out. Um, and it does have cross-chat memory, so it knows, like, that you were just talking about this thing. Um, or even better, if you do it inside of a project, uh, then it has memory within that project, and starting a new chat is not going to, like, set you back to the start. Okay, um, all right. So the last suggestion that I have to make ChatGPT suck less, and this is not a technical suggestion, um, is simply to make sure that you know your. You cannot blindly trust it. We know this. The best way to use it is as an assistant. You are in the driver's seat. You know all this. Um, circling back to the example I just gave about the markdown files, um, I was having to create these and it was getting the weeks wrong and mixed up. And it was like, in week two, you're doing this. And I was like, this is wrong. That's not week two. Like, you've missed all this stuff that I know I want to make sure you have in here, that is important. And, uh, I can only correct that if I know that. [00:14:21] Speaker A: All right. [00:14:21] Speaker B: I was talking to my guy Jojo, um, about this and he has.
He had the same thing, like, he's had it happen where he's working on stuff for his Marvel channel, and it'll get, like, the comic episodes mixed up, and it's just like, that's the wrong one. [00:14:32] Speaker A: Right? [00:14:32] Speaker B: So just make sure you know your. It will suck less if you know your sh. Then you can correct it. Okay, so this is a good segue into the final thing, and kind of the more cerebral thing that I want to chat briefly about, um, as it relates to the suckiness of ChatGPT, and that is that I think that this suckiness does have value. [00:14:53] Speaker A: All right? [00:14:53] Speaker B: We as humans, um, sentient beings, right? We learn through trial and error. Like, as I'm saying that, I'm like, also the computer quote unquote learns, but it's not learning, um, it gets trained through trial and error. But we as humans, we do, we learn through trial and error, right? We learn through making mistakes. And I think that overall it is a very good thing that ChatGPT isn't quote unquote perfect yet, right? There is something to be said about the value of automation and, you know, the value of AI, but in my opinion, that is largely regarding skills of operation versus skills of understanding, right? And where it makes sense to be more concerned. So skills of operation relate to executing the task. Skills of understanding relate to knowing the principles behind the task. And so when I was sitting with this and kind of went back and forth, the first thing that kept coming to mind was driving, and going from a manual transmission to an automatic transmission. But that's really an example of a change in skills of operation, right? Where it's executing the task, where instead of, you know, actually moving through first gear, second gear, third gear, the car will do it for you now.
But we still understand the principles of driving, right? Uh, we don't necessarily understand how the car works, but I don't know if people ever really understood that anyway, even when they had a manual transmission. They were just like, I need to go to this gear now, and this one now, and this one now. Although I will say, if you know how to drive a manual transmission, and I do, um, you really do feel like you're in control of the car. Um, but going from a manual transmission to an automatic transmission doesn't really remove your understanding of the principles of driving, right? Where you're like, red lights, green lights, stop signs, merging, traffic patterns, that doesn't go away, right? So when we automate the skills of operation, in this case, it's separate from the skills of understanding, and so it's not that big of an issue, right? We see other examples of this, right? Calculators versus math by hand, right? As long as you're still taught the concepts of math, right? You don't have to sit there and, like, do the things, but you're taught and you understand how to do it. Like, having a calculator is not a bad thing. It's a very helpful thing. Memorizing, uh, phone numbers, right? Like, we were going to die if we couldn't memorize them, yet there's been no, like, collapse of society because we got rid of the Yellow Pages, right? We still have the same conceptual understanding of, like, hey, this identifier is associated with a person, and I now have to go and find that identifier so that I can contact that person. But, like, how I do that, the actual execution, the skills of operation, that's all that changed. Um, I asked ChatGPT for some examples, and a good one that I didn't think about was film photography versus digital cameras.
Critics said that we'd lose the, you know, quote unquote art of capturing images, right? Instead, like, we got more accessibility, and artistry didn't. [00:17:49] Speaker A: Right. [00:17:49] Speaker B: It didn't vanish. It expanded, it shifted. [00:17:52] Speaker A: Right? [00:17:52] Speaker B: Where we still understand what's important for composition and things like that, but, like, how we take the picture, the operation, that's just what changed. [00:18:01] Speaker A: Right. [00:18:02] Speaker B: So these are examples of skills of operation, and automating them isn't, like, this big detrimental thing. [00:18:09] Speaker A: I, um. [00:18:11] Speaker B: You know, the skills can become somewhat obsolete, but the skills of understanding remained. [00:18:20] Speaker A: Right? Where. [00:18:21] Speaker B: Whether we're thinking about math, like, we still understand what numbers are, or whether it is, um, you know, again, the photography. [00:18:30] Speaker A: Right. [00:18:31] Speaker B: We still understand about composition and capturing an image. [00:18:35] Speaker A: Right. [00:18:36] Speaker B: We still have that understanding there. We can separate the two. We can separate the skill of operation and the skill of understanding. The concern that people have, and that I do get, but I do think there are solutions to this, is where automation threatens the skill of understanding, right? Where changing the skill of operation threatens the skill of understanding. And this is a huge issue, yes, with comprehension and expression, whether that expression is written or spoken, whatever. [00:19:06] Speaker A: Right. [00:19:06] Speaker B: The skill of operation and the skill of understanding, as it relates to comprehension and expression, they are fused. They are intimately intertwined, right? So if we think about just the actual skill of, uh, of.
Of operation itself, when it comes to something like typing versus writing, that's a skill of operation. It's fine. You can write things, you can type things, and, like, you'll still be able to communicate and share. Some people are better with typing, some people are better with writing and it helps them, but, like, they can be, uh, interchanged. Dictating versus typing? As long as you can get your ideas out, you're good. [00:19:42] Speaker A: Right? [00:19:43] Speaker B: Whether you're going to dictate it, and maybe, uh, that's just how your brain works, versus typing, totally fine, right? So if I'm like, hey, I can articulate this, but I'm just gonna speak it to the computer and have it write it out for me, totally fine. It's totally fine. The problem comes in when we look to outsource and automate the actual skill of understanding here, which is having ChatGPT think and then type for you. That is not fine. [00:20:11] Speaker A: All right? [00:20:11] Speaker B: Not fine at all. And we know this is what people are going on about, and they're just like, when you outsource all your thinking. And the kids. Yes, that is an issue, when it's thinking for you and it's writing for you and it's doing all the things for you instead of, you know, assisting you. [00:20:25] Speaker A: Right? [00:20:26] Speaker B: But we know this, and I intentionally use the word skill, because a skill is something that when you train it, it gets better. When you do it more often, you. [00:20:36] Speaker A: Get better at it. [00:20:37] Speaker B: And when you don't do it as often, when you do it less, you get worse. [00:20:41] Speaker A: Right? [00:20:41] Speaker B: All that to say, to me, uh, it is good that ChatGPT sometimes sucks, because it forces you to keep practicing, right? It forces you to keep working on and utilizing that skill.
[00:20:52] Speaker A: Right? [00:20:53] Speaker B: But real quick, because this is not what this episode is about. But, um, as it relates to what I just kind of slipped in there before, that I think there's a solution. I said this in the previous episode, right? To me, the concern about outsourcing one's thinking and expression, um, the solution to that isn't banishing AI. The solution is changing society such that we value and we champion thinking and individual expression. We don't value it, we don't champion it. We stifle it. We tell people that they should be starving artists. We don't pay artists. [00:21:24] Speaker A: Right? [00:21:25] Speaker B: We show and, uh, teach people that, like, shortcuts are the way to the top. And we see that, like, things are not based on merit, and we really glorify this materialistic life. And you're like, this is why kids don't want to think or write. Because they're like, that's not how I'm going to get to the thing that society is telling me that I should get to. [00:21:49] Speaker A: Right. [00:21:49] Speaker B: Um, so just a little fiery food for thought. But let's move it on, shift gears, because that is all that I got for the episode. But, you know, we have to round this out with how I use ChatGPT this week. Each week I include a section where I briefly discuss how I or someone I know used ChatGPT that day or that week. And this week we are highlighting, uh, one of the ways that my guy, Dr. Joe, uh, Dr. Joseph Orbacheski, Orochevsky, I was actually just talking about him before, earlier in the episode, and he's back again. Um, but how he used ChatGPT. So he sent me a few, um, but one that I haven't discussed, uh, that he sent, I was like, that's a great one, and I will share it, is that he uses it to generate meals for the week for his family and what they need to buy. And this is a great way to use ChatGPT. I know that Lex definitely does this. Um, I am.
She asks it for recipes. I've done a pseudo version of this where I wanted to make a marinade, and I was like, hey, Chat, this is all that I have in the cabinet. Uh, I don't have 97 hours to let it sit. Uh, this is the flavor profile that I'm going for. 3, 2, 1, go. Help me out. Um, and it did all right. To me, the coolest part about using ChatGPT for meal suggestions and recipes is that you can make it what you want, right? You want a certain flavor profile? Set. You want less ingredients? Bet. You only have a certain amount of time? That's fine, right? You can input all of that, and it'll help you out. So that is one way that Doc Joe has been using ChatGPT, and hopefully that's helpful for you. Um, but also, I would love to hear from you. One, it's really cool to hear from people, but I also think it's really cool to share with other people, you know, how folks are using it. So I would love to hear from you. Slide in the old DMs at The Movement Maestro. Uh, or you can send me a, uh, a little text. That's 310-737-2345. I would love to hear from you. All right, and that is all that we have got for today. Hopefully you found this episode helpful, talking about why ChatGPT sometimes sucks and what you can do about it. Um, if you did find it helpful, consider sharing it with somebody who you know is curious about ChatGPT. I, uh, don't think this episode was too heady, so I feel pretty good about you sharing it with a friend and this being their potential first exposure to the pod. Don't forget, folks, I have a companion newsletter that drops every Thursday. Uh, that is basically the podcast in text format, written format, typed format. So if you prefer to read or you just want a written record of the things, join the newsletter fam. You can head to chatgptcurious.com/newsletter or just check out the link in the show notes. As always, all the things will be there.
My fam, my peeps, my curious peeps. As always, I am endlessly, endlessly appreciative for every single one of you. I really enjoy recording this. [00:24:54] Speaker A: Right. [00:24:55] Speaker B: Wednesday is the day I sit down, I write the outlines, and I'm like, I'm excited about it. So thank you for giving me a space to, uh, share my curiosity and my excitement. Until we chat again next Thursday: stay curious.
