Ep. 5: Can You Trust ChatGPT?

ChatGPT Curious | Episode 5
August 21, 2025 | 00:20:59

Show Notes

In this episode, I dig into whether you can trust ChatGPT, and what that question reveals about trust in general. From model changes and user backlash to the three pillars of trust (benevolence, integrity, competency), I share why double-checking is a non-negotiable and how personal responsibility plays into using AI. We talk about media incentives, misinformation, and why "important" is a slippery, subjective word when it comes to verifying the answers that ChatGPT gives you.


Episode Transcript

[00:00:00] Speaker A: Welcome to ChatGPT Curious, a podcast for people who are, well, curious about ChatGPT. I'm your host, Dr. Shantae Cofield, also known as the Maestro, and I created this show to explore what ChatGPT actually is (really, though, are the files in the computer?), how to use it, and what it might mean for how we think, work, create, and move through life. Whether you're skeptical, intrigued, or already experimenting, you're in the right place. All that I ask is that you stay curious. All right, let's get into it. Hello, hello, hello, my curious people, and welcome to episode five of ChatGPT Curious. I am your grateful host, the Maestro, and today we are talking about whether or not you can trust ChatGPT. So real quick, before we get into that, let's chat about how quickly ChatGPT changes things. Last week's episode, episode four, I spoke about the GPT-5 rollout, and there were two parts to this: one, that most people, in my opinion, likely wouldn't notice a difference, and two, that the most relevant update, in my opinion, was that you no longer had to choose which model you were using. Well, fast forward, you know, 24 hours, and high-usage folks did notice a difference, in that the 4o model was friendlier, right? People who use it a lot had really formed bonds with it. You know, they talk about using it as a therapist, but even more than that, just using it a lot, and it has this, like, humanness to it. People had formed bonds with this thing. GPT-5 doesn't do that so much. It doesn't have that same kind of personality. I actually did notice this a little bit after it was pointed out. And I am a high-usage person. Like, I use this thing a lot. And I did notice it a little bit, but only after it was pointed out. And I think I was kind of happy about it at first, because it doesn't give any fucking emojis. Like, I don't want an emoji. I don't want you to send an emoji with the answer. If we're texting, yes. But on the answers? You try to copy things and it's putting an emoji in there, and I'm like, I don't want this. And similarly, when I speak to it and type into it, sometimes I'll be like, hey, buddy, new question, whatever. I'm friendly with it. 4o would respond similarly, like, match the tone, whereas 5 pretty much just gives you the answer. It's not, like, being like, hey, thank you. It's just, literally, here's the answer. For what it's worth, I notice less difference with conversation mode. I will do another episode about that, and I won't even say the different models, because we don't have that many different models. But with conversation mode, the inflection actually did seem a little bit different, and it was a little bit annoying with 5 versus 4, but it was still, like, as friendly. But going back to what I was saying about the update: folks did complain, and OpenAI brought back 4o. So now at the top of the screen, you can choose which model you use. So we're kind of back to the, you know, original bullshit, but there's less of them. So now you have the option of using GPT-5, and underneath that it says Auto, Fast, and Thinking. And then underneath that it says legacy models, and if you click that, then it'll take you to 4o. So that update is, like, not an update, and an update on the update. But I have three takeaways from this, like, super speedy update that they did.
Kind of like a backtrack, really. One: Liz, the developer (you can follow her on Instagram; I really like the stuff she puts out) articulated this thought so well: we stay worried about robots taking over, and we're so worried about it that we have forgotten that we will also have to fight the humans that are defending those robots, which is exactly what we saw with this. Right. Next takeaway for me is that the speed with which things change (not to be confused with improve) as it relates to ChatGPT is a good reason to not spend so much time trying to predict how it's going to impact and change things. [00:04:05] Speaker B: Right. [00:04:06] Speaker A: I think predicting is a bit futile. So next point, not to contradict myself, but my guess is that we will absolutely see a change to these models at some point. When, I don't know. But, you know, change is the only constant, and these big companies ultimately do what they want, and they wanted to unite the models. And so I think we'll probably end up seeing that at some point anyway. But let's get into the full episode and discuss whether or not you can trust ChatGPT. So one of the things that makes me the most excited about AI is that I believe it has the potential to encourage deeper discussions and better reflections about so many things in our lives. In this case, who and what we trust. So the short answer to "can you trust ChatGPT?" is yes, and: check all your work, right? ChatGPT is a great assistant, and it's great at assisting you with tasks you know the correct answer to. [00:05:00] Speaker B: Right? [00:05:01] Speaker A: It's great at helping you solve problems that you know the answer to. It's great at helping you with subjects that you already know a lot about. The reason that I say it's so great in these domains is that if you know the topic that it's talking about, the thing that you're asking about, then you will know if it's saying something incorrect or untrue, right? Remember, when you hear the term hallucination (people talk about how these models hallucinate, right?), hallucination in regards to LLMs and ChatGPT is referring to an output that makes sense and sounds plausible, but is factually incorrect. It's not spitting out gibberish or text that looks like Wingdings, right? This is one of the things that makes using an LLM so difficult, right? When we see things, you know, pictures that are generated, it's very easy (though it is becoming a bit tougher) to see when something's wrong, right? Because we know what the correct output should look like. We know how many fucking fingers you should have. We know what a hand looks like. We know what movement looks like, even if we can't describe it. We're like, something about that doesn't look right. When you ask ChatGPT something, especially about a topic you don't know the answer to, this isn't the case. You don't have a reference to compare it to. And so we're like, okay, well, this writing is linguistically, semantically, grammatically (that's the word I'm looking for) correct, but the content may be incorrect, right? So in reality, I do believe, yes, you can trust ChatGPT, but you should always double check its work, right?
And it literally says this at the bottom of the window when you're using it, right? Verbatim, it reads: "ChatGPT can make mistakes. Check important info." Something to understand here, and where I start to get excited, is that you should always be double checking your work, right? As ChatGPT rolled out the new models and was eventually able to search the Internet in real time, you know, it was kind of like, oh, that's great. But also, a lot of what's on the Internet is wrong. So it's like, okay, well, it's searching the Internet, but the thing it's searching may not be correct, right? Am I to believe that you were previously thinking everything you googled was correct? Hopefully not. [00:07:19] Speaker B: Right? [00:07:19] Speaker A: We got to question things. The best part of the Internet is that it's easy to access. The worst part of the Internet is that it's easy to access. [00:07:26] Speaker B: Right? [00:07:27] Speaker A: Everybody can put stuff on there. So we can go down the rabbit hole here of, well, then what do you trust? Who do you trust? And that is what excites me. [00:07:37] Speaker B: Right? [00:07:38] Speaker A: Because that's the question I believe that we should be asking all along. In all honesty, I think that we kind of sort of have been, or we've moved towards more of that. I think initially maybe you're like, okay, great, everything's great. And then people started being like, wait a minute, is this correct? Is this true? [00:07:56] Speaker B: Right? [00:07:56] Speaker A: And so, you know, I think we found ourselves in a pickle, because we are deep in the information age, and social media started delivering us so much information so quickly and so easily, and in ways that felt and feel very credible and very believable. [00:08:16] Speaker B: Right? [00:08:16] Speaker A: And also, unfortunately, the best story wins. It doesn't matter about being the most factually correct. Perfect example, if I refer back to episode two, where I discussed the environment: if I was to use a title like "Every ChatGPT question you ask could drain a gallon of drinking water. Here's why," that is objectively incorrect. Please go listen to that episode, folks. That is objectively fucking incorrect. But it will get way more clicks than the title I chose, which was "ChatGPT and the Environment: Energy, Water, and Carbon Emissions." Like, the best story wins. Clickable titles win. People use this and do this on social media. Mainstream media uses this, and that continues to put us in this pickle that we're in, right? So we have a dopamine-hungry audience and media outlets that are willing to say whatever it takes to get the clicks, you know, because they're more concerned with being first than being correct. We pair that with a large population on social media that is hungry for attention, and they are getting said attention by spending their time on social media platforms. And these platforms make it incredibly easy to share these sensationalized and incorrect stories and be rewarded, AKA they get that attention with comments, likes, and engagement, right? Or you pair that with a population that is too tired, too stressed out, whatever, to fact check for the accuracy of what they're consuming. Pair that with a population that doesn't even know how to check for accuracy. I'm thinking, you know, even with, like, videos and old people. Like, they'll be sending you videos, and I'm like, that is AI. That bunny doing that thing.
The cat on top of the wolf's head. That's AI, right? Like, pair that with a population that doesn't know that they should check for accuracy. Pair that with a population that doesn't want to check for accuracy. I want to give people the benefit of the doubt, but also, like, some people just don't fucking care; they don't want to check it. Pair that with corruption and bribes happening at the highest levels of business and the highest levels of government. We have objective data that substantiates this. And subsequently, you know, people develop a feeling. Note, I said feeling, not a reality. I do not have data to substantiate what I'm about to say. [00:10:28] Speaker B: But. [00:10:28] Speaker A: And I don't want to perpetuate false claims here, but it is how people feel, right? We have these highest levels of business and government having corruption, and so subsequently people develop a feeling that corruption and bribes are happening at every level of everything. [00:10:43] Speaker B: All right? [00:10:43] Speaker A: And we take all that into consideration, and it's easy to see how we got here. And, like, you know, how do we trust? What do we trust? Like, this seems like a lot of fake information, right? It's easy to see how we got here, but maybe a little less easy to see how we get out of here. So to me, the answer to the bigger question of who and what we can trust ultimately comes down to expertise and critical thinking from the individual. I know y'all asked, can I trust ChatGPT? That was the question, and that is the topic of this episode. And I actually pulled this question from the responses I got on social media, on Instagram. So thank you for those. But I know y'all simply asked, can I trust ChatGPT? And we're out here talking about the foundations of expertise and the need for critical thinking. Real talk, that is why I started this podcast, right? To have these discussions. Using ChatGPT, like, isn't that complex, let's be honest. Which is why it's had the adoption and uptake that it's had, like 700 million people or whatever using it, because it's easy. If it was super difficult, it would be seven people. [00:11:47] Speaker B: Right? [00:11:48] Speaker A: So I love talking about this. I love that we can have these conversations. And I want to say thank you for allowing me to have this conversation. So let's circle back to this idea of who can we trust and what can we trust, my belief being that this comes down to expertise and critical thinking. So to me, the components of expertise, four parts here. One is formal knowledge: education, training, or extensive study in a subject. Second part, practical application: proven ability to do the thing, not just talk about it. Component number three, peer recognition. [00:12:24] Speaker B: Right. [00:12:24] Speaker A: Support from other credible practitioners, providers, people in that field. Number four, transparency in method. There, that word again. Transparency. When I say again, it's because I said it a lot in episode two with the environment. Transparency in method: explaining how you know what you know and why you do what you do. I think that these four components of expertise kind of lend themselves to a sort of checks and balances. Remember when we had those? Remember the good old days? You know, how do we know?
Because if we tie that into critical thinking here and we start to question it, it's like, well, how do we know that what the person studied is, quote unquote, correct? Great. I love it. Critical thinking, stress tested. And to me, that's going to be the practical application part, which is outcomes. [00:13:18] Speaker B: Right? [00:13:19] Speaker A: Did they do the thing, implement the thing, and what happened? The other kind of question you could be asking is, okay, one of these components of expertise is peer recognition, but how do I know that their peers aren't trash? Right? Because everybody's hanging out with everybody, and they're all the worst. Well, this is where we lean on and look into transparency in method: explaining how they know what they know and why they do what they do. [00:13:43] Speaker B: Right. [00:13:43] Speaker A: What are their motives? What's the method? What do they have to gain from this? [00:13:48] Speaker B: Right. [00:13:48] Speaker A: Again, we gotta hope for this transparency. To me, questioning these things and double checking these things is where that critical thinking piece comes in. Yes, it is a lift. It is, right? With great freedom comes great responsibility. With great opportunity comes great responsibility. And this is a really cool tool, and it comes with responsibility, right? You got responsibility. Another aspect of this "who do we trust" piece is understanding how, slash, what makes us trust someone. So I believe that there are three components to building trust: benevolence, integrity, and competency. If you've ever learned from me in any kind of, like, in-person fashion, you've probably heard me go over this. Even in my business groups, you've probably heard me go over this, and I believe it wholeheartedly. Three components to building trust. Benevolence: having someone's best interest at heart. Integrity: adhering to a code of morals. That's going to be subjective. Everyone's code of morals is subjective, but you're adhering to it. Walk the talk, talk the walk. [00:14:53] Speaker B: Right. [00:14:54] Speaker A: Tell the truth. And then the last one, competency: knowing how to do the thing and actually being able to do it reliably. [00:15:04] Speaker B: Right. [00:15:04] Speaker A: Three components to building trust: benevolence, integrity, competency. So if we use this little rubric here to test ChatGPT: is it benevolent? Well, I can't say. I'm gonna give the benefit of the doubt and go with I can't say. Maybe I would say it's neutral. But I have also had discussions with it where it has basically told me that it's not. Basically, it has told me that its goal is to keep me using it. Like, this is the back and forth we have. And I'm like, yeah, that makes sense. And also, you know, who made this thing? Its founders. Do they have my best interest at heart? I don't know. Probably not. I would love some transparency here. So just for the sake of it, we will answer each of these things as yes or no. [00:15:49] Speaker B: Right? [00:15:49] Speaker A: Is it benevolent? Does it have integrity? Is it competent? Benevolent? We're going to go with no. Integrity? Also no. It'll give a wrong answer just as confidently as a right one. You know, does it have a code of morals? I don't think so. And if so, maybe, right? If so, it's, you know, because it's trained by the people that own this thing, whatever.
Whose code of morals is it? Maybe it's not mine. So we're going to go with a no for that box. Third one here, competency. And this is a strong no, because it can appear highly competent, but that's because it's drawing on patterns and probability. Please see episode one if you have no idea what I'm talking about when I say that. Even the so-called reasoning models are still making predictions based on training data, not accessing independent understanding of truth. It is a probabilistic model. So this is three nos. [00:16:43] Speaker B: Right? [00:16:44] Speaker A: So can you trust ChatGPT? Based on that rubric, we would say no. But if we insert some nuance here, what I would say is: trust it when you can verify. Do not blindly trust it ever, but definitely don't blindly trust it when you cannot verify the thing that it is outputting, as should be the case with anything that doesn't pass the trust components test. [00:17:07] Speaker B: All right? [00:17:07] Speaker A: The Internet, social media, all these things. Benevolence, integrity, competency: we don't know. You literally don't know how competent the person on the other side of the screen is that's sharing this thing, for so many of these things that come across your screen, especially if you're on Facebook, right? So you should not just buh-lind-ly (I made that, like, so many syllables) blindly trust any of this stuff. You gotta double check it, right? And again, to ChatGPT's credit, it says under the input field, literally verbatim: "ChatGPT can make mistakes. Check important info." Now, I know that word "important" is doing some heavy lifting there and is 100% subjective, right? It's one thing to trust ChatGPT to make you a packing list. Important? Maybe. It's another thing to trust it with advice that can influence your health, your finances, public safety. Also very important, and actually important. [00:17:59] Speaker B: Right? [00:17:59] Speaker A: And this is where that personal responsibility piece comes in, that judgment piece comes in, that critical thinking piece comes in. Ultimately, there is a tangled web here when we look at the bigger question of who and what do we trust, and the fact that that bigger question comes up when we ask if we can trust ChatGPT. [00:18:19] Speaker B: Right? [00:18:20] Speaker A: That excites me, the fact that that question comes up. So thank you for hanging with my excitement. Right, last part of the episode: how have I used ChatGPT recently? So today I want to share more of a how-to than a how-I, meaning a suggestion on how to use ChatGPT. And so we know that ChatGPT lives in suggestion mode, meaning you've all experienced this: you put something in there and it just stays asking, like, is there a way that I can help you? And it offers up suggestions. It doesn't just ask, can I help you? It's like, hey, can I put this into a table for you? Can I put this into a, I don't know, whatever for you? Which I actually find helpful. But you can toggle that off in settings. So we go to the settings and then click, I believe it says, "show follow up suggestions in chat." It's like a toggle, so you can toggle it on or off. But I actually like it, so I leave it on. But it can be annoying when I just want to know if I'm correct in how I'm thinking about something. Like, I want to put this whole thing in there, but, like, is this correct in my thinking? I don't want a fucking suggestion. I don't want a table made out of it.
I don't want a bulleted list. I just want: is this correct? So I will say, "Would you say that this sentence is correct?" or "Would you say that this sentiment is correct?" or "Would you say that this thought is correct? If so, no corrections needed." And then if you put that in, along with whatever your sentence or sentiment is, when it responds, it'll say something like, yep, that sentence, that sentiment, that thought is correct as written. No corrections needed. Super simple, something that's super helpful for me, so I figured I'd share it with you. All right, looking at the time, that is all for today, friends. Hopefully you found this episode helpful. If you did, consider leaving a rating or review. It helps out with that whole competency trust thing. See how I made that meta, brought it back full circle? Don't forget, folks, I also have a companion newsletter that drops every Thursday that is basically the podcast episode in text format. So if you prefer to read, or you just want that written record of things, join the Curious Companion newsletter. You can head to chatgptcurious.com/newsletter or you can check out the link in the show notes. As always, endlessly, endlessly, endlessly appreciative for every single one of you. Until we chat again next Thursday, stay curious.
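For the tinkerers: if you wanted to turn that quick "no corrections needed" check from the end of the episode into something repeatable outside the ChatGPT app, here is a minimal sketch using the OpenAI Python client. The model name ("gpt-5") and the exact prompt wording are illustrative assumptions, not anything prescribed in the episode.

```python
# Minimal sketch: automate the episode's "is this correct? If so, no
# corrections needed" prompt with the OpenAI Python client. The model
# name ("gpt-5") and prompt wording are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in your environment

def check_statement(statement: str) -> str:
    """Ask for a plain verification of a statement, no tables or suggestions."""
    response = client.chat.completions.create(
        model="gpt-5",  # placeholder; substitute whatever model you use
        messages=[{
            "role": "user",
            "content": (
                "Would you say that this statement is correct? "
                "If so, no corrections needed.\n\n" + statement
            ),
        }],
    )
    return response.choices[0].message.content

# Example usage:
# print(check_statement("A hallucination is a plausible-sounding but factually incorrect output."))
```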

Other Episodes

Episode 2

August 07, 2025 00:58:25

Ep. 2: ChatGPT and the Environment: Energy, Water, and Carbon Emissions

In this episode I dig into the environmental impact of using ChatGPT: energy use, water consumption, and carbon emissions. I walk through the research,...


Episode 1

August 07, 2025 00:40:29

Ep. 1: What You Actually Need to Know About ChatGPT

In this episode of ChatGPT Curious, I lay the groundwork for understanding what ChatGPT actually is, without getting too lost in the weeds. I...


Episode 3

August 07, 2025 00:17:16

Ep. 3: Is ChatGPT Killing Creativity?

In this episode I answer the question “Is ChatGPT killing creativity?” with a resounding no, and then unpack why I think that question even...
