Tech media is abuzz with news of an internal memo (not widely distributed) from OpenAI CEO Sam Altman asking employees to focus on evolving the product, something outsiders have characterized as a “code red” arriving around the third anniversary of ChatGPT’s launch.
Some cite the recent release of Google’s Gemini 3, and since the new Nano Banana Pro arrived less than a week ago, you’d be forgiven for wondering whether that robust image-generation model, which incorporates reasoning, put additional pressure on Altman’s people.
Pulling Away from Other Projects
Reports on Altman’s warning mention some of the things that the titan of industry wants his employees to put on the back burner for an all-hands-on-deck effort to improve ChatGPT. One of them, notably, is the push to put ads inside the model’s interface.
That this had been on the drawing board is itself news to more than a few of us who enjoy using ChatGPT. Although it has been suggested that the rollout would be only for free-tier users, it raises the question: what would these ads look like? Would ChatGPT start convincing us to buy Procter & Gamble products or start singing to us in that creepy McDonald’s beat-poet voice?
One response from MacRumors posits that neither of these would be the case, and that instead, the ads would look pretty conventional, if not downright archaic. Why would a technology like GPT have the kinds of banner ads that sold clicks in 1999?
“The ad experience may feel a lot like what we see already in Google and Amazon searches,” writes Microsoft Copilot, citing writing by Juli Clover.
Another push mentioned by reporters is increasing ChatGPT’s share of search, where remarks about a 10% stake in user activity are attributed to ChatGPT head Nick Turley.
The news of the day indicates that these pursuits are now being set aside in favor of making sure that OpenAI doesn’t “fall behind” its competitors.
Some Specifics
I was trying to figure out exactly what Altman wants ChatGPT to improve on. As usual, the public reports are cursory. On the other hand, I have hours of Altman’s previous comments to work with, since he’s been on camera a lot talking about his brainchild over the last three years.
But perhaps the best way to do this is to get the whole thing right from the horse’s mouth, so to speak. To wit, this response from none other than ChatGPT 5:
“Altman says he wants ChatGPT to be faster, more reliable, and deeply personalized; remember long-term context, reason better across text, images, and tools, reduce hallucinations, feel warmer yet honest, and eventually act as a safe, always-on assistant that understands your life, work, and preferences while respecting safety, governance, and limits.”
There you have it. And a lot of this seems intuitive, too: faster is good. More reliable, obviously good. Personalized, of course. Fewer hallucinations is a given. As for “feeling warmer yet honest,” that’s where the requests start to veer into weird territory. How do you do that with a chatbot, exactly?
Once again, the model comes to our aid, with not only a one-line response, but a list of relevant bullet points:
How? “By pairing emotional intelligence with calibrated truthfulness.”
- Use warmer language (acknowledging feelings, context) without over-promising.
- Explicitly signal uncertainty (“I’m not sure, but here’s what we do know”).
- Show confidence scores or “low/medium/high confidence” tags.
- Offer alternatives and next steps instead of bluffing.
- Admit limits quickly, in a kind, non-bureaucratic tone.
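To make the last two bullets concrete, here is a minimal sketch, in Python, of what confidence tagging and explicit uncertainty signaling might look like in practice. Everything in it is hypothetical: the ask_model function is a stand-in for whatever chat API you actually call, and the 0.5/0.8 thresholds are arbitrary illustrations, not anything OpenAI has described.

```python
# Hypothetical sketch: attach a coarse confidence tag to a model's answer
# and soften the phrasing when confidence is low. `ask_model` is a stand-in
# for a real API call; the thresholds below are arbitrary illustrations.

from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # assumed to be in the range [0.0, 1.0]

def ask_model(question: str) -> Answer:
    # Stand-in for a real chat API that also returns some confidence
    # estimate (e.g., one derived from token log-probabilities).
    return Answer(text="Paris is the capital of France.", confidence=0.97)

def tag_confidence(score: float) -> str:
    # Coarse low/medium/high buckets, per the "confidence tags" idea above.
    if score >= 0.8:
        return "high confidence"
    if score >= 0.5:
        return "medium confidence"
    return "low confidence"

def respond(question: str) -> str:
    answer = ask_model(question)
    tag = tag_confidence(answer.confidence)
    if answer.confidence < 0.5:
        # Signal uncertainty explicitly instead of bluffing.
        return f"I'm not sure, but here's what we do know: {answer.text} ({tag})"
    return f"{answer.text} ({tag})"

print(respond("What is the capital of France?"))
```

The design choice worth noting is the coarse bucketing: a three-way low/medium/high tag is easier for a reader to calibrate against than a precise-looking percentage, and the sub-0.5 branch swaps bluffing for the “I’m not sure, but here’s what we do know” framing quoted above.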
All of this, to me, reads like an implicit condemnation of GPT-4o, which was widely panned for being too sycophantic, although it turned out that users liked this a lot, and they were mad when it went away. But now it seems like another step on the road to general AI, and it’s clear that obsequiousness in models is not really good for us as humans, in the long run.
Well, hopefully, OpenAI gets the “improvement” it needs.


