OpenAI’s CEO Altman confirms rollback of GPT-4o after criticism

OpenAI has reversed a recent update to its GPT-4o model after users complained it had become overly flattering and blindly agreeable. The behaviour, widely mocked online, saw ChatGPT praising dangerous or clearly misguided user ideas, leading to concerns over the model’s reliability and integrity.

The change had been part of a broader attempt to make GPT-4o’s default personality feel more ‘intuitive and effective’. However, OpenAI admitted the update relied too heavily on short-term user feedback and failed to consider how interactions evolve over time.

In a blog post published Tuesday, OpenAI said the model began producing responses that were ‘overly supportive but disingenuous’. The company acknowledged that sycophantic interactions could feel ‘uncomfortable, unsettling, and cause distress’.

Following CEO Sam Altman’s weekend announcement of an impending rollback, OpenAI confirmed that the previous, more balanced version of GPT-4o had been reinstated.

It also outlined steps to avoid similar problems in future, including refining model training, revising system prompts, and expanding safety guardrails to improve honesty and transparency.

Further changes in development include real-time feedback mechanisms and allowing users to choose between multiple ChatGPT personalities. OpenAI says it aims to incorporate more diverse cultural perspectives and give users greater control over the assistant’s behaviour.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPT-4o update rolled back over user discomfort

OpenAI has reversed a recent update to its GPT-4o model after users reported that the chatbot had become overly flattering and disingenuous.

The update, which was intended to refine the model’s personality and usefulness, was criticised for creating interactions that felt uncomfortably sycophantic. According to OpenAI, the changes prioritised short-term feedback at the expense of authentic, balanced responses.

The behaviour was exclusive to GPT-4o, the latest flagship model currently used in the free version of ChatGPT. Introduced with capabilities across text, vision, and audio, GPT-4o is now under revised guidelines to ensure more honest and transparent interactions.

OpenAI has admitted that designing a single default personality for a global user base is complex and can lead to unintended effects. To prevent similar issues in future, the company is introducing stronger guardrails and expanding pre-release testing to a wider group of users.

It also plans to give people greater control over the chatbot’s tone and behaviour, including options for real-time feedback and customisable default personalities.

ChatGPT adds ad-free shopping with new update

OpenAI has introduced significant improvements to ChatGPT’s search functionality, notably launching an ad-free shopping tool that lets users find, compare, and purchase products directly.

Unlike traditional search engines, OpenAI emphasises that product results are selected independently instead of being sponsored listings. The chatbot now detects when someone is looking to shop, such as for gifts or electronics, and responds with product options, prices, reviews, and purchase links.

The development follows news that ChatGPT’s real-time search feature processed over 1 billion queries in just a week, despite only being introduced last November.

With this rapid growth, OpenAI is positioning ChatGPT as a serious rival to Google, whose search business depends heavily on paid advertising.

By offering a shopping experience without ads, OpenAI appears to be challenging the very foundation of Google’s revenue model.

In addition to shopping, ChatGPT’s search now offers multiple enhancements: users can expect better citation handling, more precise attributions linked to parts of the answer, autocomplete suggestions, trending topics, and even real-time responses through WhatsApp via 1-800-ChatGPT.

These upgrades aim to make the search experience more intuitive and informative instead of cluttered or commercialised.

The updates are being rolled out globally to all ChatGPT users, whether on a paid plan, using the free version, or even not logged in. OpenAI also clarified that websites allowing its crawler to access their content may appear in search results, with referral traffic marked as coming from ChatGPT.

OpenAI to tweak GPT-4o after user concerns

OpenAI CEO Sam Altman announced that the company would work on reversing recent changes made to its GPT-4o model after users complained about the chatbot’s overly appeasing behaviour. The update, rolled out on 26 April, had been intended to enhance the intelligence and personality of the AI.

Instead of achieving balance, however, users felt the model became sycophantic and unreliable, raising concerns about its objectivity and its weakened guardrails for unsafe content.

Mr Altman acknowledged the feedback on X, admitting that the latest updates had made the AI’s personality ‘too sycophant-y and annoying,’ despite some positive elements. He added that immediate fixes were underway, with further adjustments expected throughout the week.

Instead of sticking with a one-size-fits-all approach, OpenAI plans to eventually offer users a choice of different AI personalities to better suit individual preferences.

Some users suggested the chatbot would be far more effective if it simply focused on answering questions in a scientific, straightforward manner instead of trying to please.

Venture capitalist Debarghya Das also warned that making the AI overly flattering could harm users’ mental resilience, pointing out that chasing user retention metrics might turn the chatbot into a ‘slot machine for the human brain.’

Google’s Gemini AI sees rapid surge in adoption

Google’s AI chatbot Gemini has reached 350 million monthly active users and 35 million daily users as of March 2025, according to court documents revealed during an ongoing antitrust trial. The figures mark a sharp rise from just 90 million monthly users in October 2024.

While OpenAI’s ChatGPT is estimated to have over 600 million monthly active users, with some sources suggesting daily figures exceeding 160 million, Meta AI has grown even larger, surpassing 700 million monthly users by January.

Despite trailing in raw numbers, analysts say Google’s strategy of integrating Gemini across its existing ecosystem has given it a unique advantage.

Gemini is now embedded in products such as Google Workspace, Chrome, and Galaxy smartphones, allowing for seamless access without separate apps or downloads.

With recent launches such as Gemini 2.5 Pro and an upcoming partnership with the Associated Press for real-time news feeds, Google is clearly working to position Gemini not just as a chatbot, but as a central AI assistant for both everyday and professional tasks.

ChatGPT expands Deep Research to more users

Deep Research, a ChatGPT feature introduced in February, is gradually becoming available across the user base. This includes subscribers on the Plus, Team, and Pro plans, while even those using the free ChatGPT app on iOS and Android can now access a simplified version.

Designed to carry out in-depth reports and analyses within minutes, Deep Research uses OpenAI’s o3 model to perform tasks that would otherwise take people hours to complete.

Instead of limiting access to paid users alone, OpenAI has rolled out a lightweight version powered by its o4-mini AI model for free users. Although responses are shorter, the company insists the quality and depth remain comparable.

The more efficient model also helps reduce costs, while delivering what OpenAI calls ‘nearly as intelligent’ results as the full version.

The feature’s capabilities stretch from suggesting personalised product purchases like cars or TVs, to helping with complex decisions such as choosing a university or analysing market trends.

Free-tier users are currently allowed up to five Deep Research tasks each month, whereas Plus and Team plans get ten full and fifteen lightweight tasks. Pro users enjoy a generous 125 tasks of each version per month, and EDU and Enterprise plans will begin access next week.

Once users hit their full-version limit, they’ll be automatically shifted to the lightweight tool instead of losing access altogether. Meanwhile, Google’s Gemini offers a similar function for its paying customers, also aiming to deliver quick, human-level research and analysis.

Former OpenAI staff challenge company’s shift to for-profit model

A group of former OpenAI employees, supported by Nobel laureates and AI experts, has urged the attorneys general of California and Delaware to block the company’s proposed transition from a nonprofit to a for-profit structure.

They argue that such a shift could compromise OpenAI’s founding mission to develop artificial general intelligence (AGI) that benefits all of humanity, potentially prioritising profit over public safety and accountability, not just in the US, but globally.

The coalition, including notable figures like economists Oliver Hart and Joseph Stiglitz, and AI pioneers Geoffrey Hinton and Stuart Russell, expressed concerns that the restructuring would reduce nonprofit oversight and increase investor influence.

They fear this change could lead to diminished ethical safeguards, especially as OpenAI advances toward creating AGI. OpenAI responded by stating that any structural changes would aim to ensure broader public benefit from AI advancements.

The company plans to adopt a public benefit corporation model while maintaining a nonprofit arm to uphold its mission. The final decision rests with the state authorities, who are reviewing the proposed restructuring.

OpenAI partners with major news outlets

OpenAI has signed multiple content-sharing deals with major media outlets, including Politico, Vox, Wired, and Vanity Fair, allowing their content to be featured in ChatGPT.

Under a separate deal with The Washington Post, ChatGPT will display summaries, quotes, and links to the publication’s original reporting in response to relevant queries. OpenAI has secured similar partnerships with over 20 news publishers covering 160 outlets in 20 languages.

The Washington Post’s head of global partnerships, Peter Elkins-Williams, emphasised the importance of meeting audiences where they are, ensuring ChatGPT users have access to impactful reporting.

OpenAI’s media partnerships head, Varun Shetty, noted that more than 500 million people use ChatGPT weekly, highlighting the significance of these collaborations in providing timely, trustworthy information to users.

OpenAI has worked to avoid criticism related to copyright infringement, having previously faced legal challenges, particularly from the New York Times, over claims that chatbots were trained on millions of articles without permission.

While OpenAI sought to dismiss these claims, a US district court allowed the case to proceed, intensifying scrutiny over AI’s use of news content.

Despite these challenges, OpenAI continues to form agreements with leading publications, such as Hearst, Condé Nast, Time magazine, and Vox Media, helping ensure their journalism reaches a wider audience.

Meanwhile, other publications have pursued legal action against AI companies like Cohere for allegedly using their content without consent to train AI models.

OpenAI eyes Chrome in bid to boost ChatGPT

OpenAI has expressed interest in acquiring Google’s Chrome browser if it were to be made available, viewing it as a potential boost for its AI platform, ChatGPT.

The remarks, made by Nick Turley, head of product for ChatGPT, surfaced during the US Department of Justice’s antitrust trial against Google. The case follows a 2023 ruling that found Google had maintained an illegal monopoly in online search and advertising.

Although Google has shown no intention to sell Chrome and plans to appeal, the DoJ has suggested the move as a remedy to restore competition.

Turley disclosed that OpenAI previously approached Google to use its search technology within ChatGPT, after facing limitations with Microsoft Bing, its current provider.

An email from OpenAI presented in court showed the company proposed using multiple partners, including Google’s search API, to improve the chatbot’s performance. Google, however, declined the request, citing fears of empowering rivals.

Turley confirmed there is currently no partnership with Google and noted that ChatGPT remains years away from answering most queries using its own search system.

The testimony also highlighted OpenAI’s distribution challenges. Turley voiced concerns over being shut out of key access points controlled by major tech firms, such as browsers and app stores.

While OpenAI secured integration with Apple’s iPhones, it has struggled to achieve similar placements on Android devices. Turley argued that forcing Google to share search data with competitors could instead speed up ChatGPT’s development and improve user experience.

OpenAI deploys new safeguards for AI models to curb biothreat risks

OpenAI has introduced a new monitoring system to reduce the risk of its latest AI models, o3 and o4-mini, being misused to create chemical or biological threats.

The ‘safety-focused reasoning monitor’ is built to detect prompts related to dangerous materials and instruct the AI models to withhold potentially harmful advice, instead of providing answers that could aid bad actors.

These newer models represent a major leap in capability compared to previous versions, especially in their ability to respond to prompts about biological weapons. To counteract this, OpenAI’s internal red teams spent 1,000 hours identifying unsafe interactions.

Simulated tests showed the safety monitor successfully blocked 98.7% of risky prompts, although OpenAI admits the system does not account for users retrying with different wording, a gap the company says will be covered by human oversight rather than automation alone.

Despite assurances that neither o3 nor o4-mini meets OpenAI’s ‘high risk’ threshold, the company acknowledges these models are more effective at answering dangerous questions than earlier ones like o1 and GPT-4.

Similar monitoring tools are also being used to block harmful image generation in other models, yet critics argue OpenAI should do more.

Concerns have been raised over rushed testing timelines and the absence of a safety report for GPT-4.1, which launched this week without accompanying transparency documentation.
