I asked GPT-5 what an SRE is. Simple question, right?
Instead of a straight answer, I got jokes about "obsessing over uptime, performance, and not getting paged at 3am."
Cute. But not helpful when you actually need to know what a Site Reliability Engineer does.
This isn't isolated. GPT-5 seems determined to be your witty friend rather than your reliable assistant. Every query comes with a side of snark, every definition wrapped in attempted humour.
And it's the same story with other straightforward, factual requests.
Remember when we just wanted AI to be accurate? Now it's trying to make us laugh.
I get it. OpenAI wants AI to feel more human. Less robotic. More engaging.
But when I'm researching for a client presentation or trying to understand a technical concept, I don't need a comedy routine. I need clarity.
The irony? We spent years teaching AI to stop hallucinating facts. Now we're teaching it to hallucinate personality.
Give me the AI that respects my time. That knows when to be straightforward. That understands professional context.
What do you think? Is AI getting too clever, or too funny, for its own good?