OpenAI CEO Sam Altman, pictured, speaks with SoftBank Group CEO Masayoshi Son at an event in Tokyo on Feb. 3, 2025.
GPT-5 just launched. GPT-6 is already on the way.
That’s the message OpenAI CEO Sam Altman delivered to reporters in San Francisco last week, offering a rare glimpse into the company’s evolving product road map, as well as its missteps.
Altman didn’t give a release date for his company’s next artificial intelligence model, but he made clear that GPT-6 will be different and that the gap between GPT-5 and GPT-6 will be shorter than the one between GPT-4 and GPT-5. It won’t just respond to users but will adapt to them, and it will allow people to create chatbots that mirror their personal tastes.
He said he sees memory as the key for making ChatGPT truly personal. It needs to remember who you are — your preferences, routines and quirks — and adapt accordingly.
“People want memory,” Altman said. “People want product features that require us to be able to understand them.”
He said OpenAI has been working closely with psychologists to help shape the product, measuring how people feel while tracking well-being over time. The company hasn’t made that data public, but Altman indicated it might.
He also said that future versions of ChatGPT would comply with a recent executive order from the Trump administration that requires AI systems used by the federal government to be ideologically neutral and customizable.
“I think our product should have a fairly center-of-the-road, middle stance, and then you should be able to push it pretty far,” Altman said. “If you’re like, ‘I want you to be super woke’ — it should be super woke.”
He added that if a user wanted the model to be conservative, it should reflect that as well.
Altman’s comments about GPT-6 follow a rocky rollout of GPT-5. Users took to social media to complain about the model being colder, less connected and less helpful than the prior version.
“I like the new one much better,” he said, acknowledging that the rollout was mishandled. He noted that OpenAI quietly pushed a tone update to GPT-5 that’s “much warmer.”
While Altman called enhanced memory his favorite feature launched this year, he said there are privacy concerns, particularly because temporary memory isn’t encrypted. That means sensitive information could potentially be exposed without stronger safeguards. Altman confirmed that encryption “very well could be” added, though there’s no timeline yet.
Queries involving legal or medical information, he said, need to be treated with adequate privacy protections that aren’t in place today.
“It’s in society’s interest for people to get good medical advice … good legal advice,” he said. “And if you can get better versions of those from AI, you ought to be able to have the same protection for the same reason we decided you could get them from a doctor or a lawyer.”
He’s also looking further ahead, to brain-computer interfaces. Altman said he finds “neural interfaces a cool idea” and imagines being able to “think something and have ChatGPT respond.”
“There are a few areas adjacent to AI that I think are worth us doing something, and this is one of them,” he said, adding that he’s also interested in energy, novel substrates, robots and faster ways to build data centers.
For now, OpenAI’s core consumer product remains ChatGPT, and Altman said he’s focused on making it more flexible and more useful in daily life. He said he already relies on it for everything from work to parenting questions.
He said, however, that there are limits.
“The models have already saturated the chat use case,” Altman said. “They’re not going to get much better. … And maybe they’re going to get worse.”
