OpenAI’s GPT-5 faces backlash for dull tone

OpenAI’s GPT-5 launched last week to immense anticipation, with CEO Sam Altman likening it to the iPhone’s Retina display moment. Marketing promised state-of-the-art performance across multiple domains, but early user reactions suggested a more incremental step than a revolution.

Many expected transformative leaps, yet the improvements were mainly in cost, speed, and reliability. GPT-5’s switch system, which automatically routes queries to the most suitable model, was new, but its writing style drew criticism for being robotic and less nuanced.

Social media buzzed with memes mocking its mistakes, from miscounting letters in ‘blueberry’ to inventing US states. OpenAI quickly reinstated GPT-4o for users who missed its warmer tone, underlining a disconnect between expectations and delivery.

Expert reviews mirrored public sentiment. Gary Marcus called GPT-5 ‘overhyped and underwhelming’, while others saw modest benchmark gains. Coding was the standout, with the model topping leaderboards and producing functional, if simple, applications.

OpenAI emphasised GPT-5’s practical utility and reduced hallucinations, aiming for steadiness over spectacle. While it may not wow casual users, its coding abilities, enterprise appeal, and affordability position it to generate revenue in the fiercely competitive AI market.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Igor Babuschkin leaves Elon Musk’s xAI for AI safety investment push

Igor Babuschkin, cofounder of Elon Musk’s AI startup xAI, has announced his departure to launch an investment firm dedicated to AI safety research. Musk created xAI in 2023 to rival Big Tech, criticising industry leaders for weak safety standards and excessive censorship.

Babuschkin revealed his new venture, Babuschkin Ventures, will fund AI safety research and startups developing responsible AI tools. Before leaving, he oversaw engineering across infrastructure, product, and applied AI projects, and built core systems for training and managing models.

His exit follows that of xAI’s legal chief, Robert Keele, earlier this month, highlighting the company’s churn amid intense competition between OpenAI, Google, and Anthropic. The big players are investing heavily in developing and deploying advanced AI systems.

Babuschkin, a former researcher at Google DeepMind and OpenAI, recalled the early scramble at xAI to set up infrastructure and models, calling it a period of rapid, foundational development. He said he had created many core tools that the startup still relies on.

Last month, X CEO Linda Yaccarino also resigned, months after Musk folded the social media platform into xAI. The company’s leadership changes come as the global AI race accelerates.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Age checks slash visits to top UK adult websites

Adult site traffic in the UK has fallen dramatically since new age verification rules came into force on 25 July under the Online Safety Act.

Figures from analytics firm Similarweb show Pornhub lost more than one million visitors in just two weeks, with traffic falling by 47%. XVideos saw a similar drop, while OnlyFans traffic fell by more than 10%.

The rules require adult websites to make it harder for under-18s to access explicit material, leading some users to turn to smaller and less regulated sites instead of compliant platforms. Pornhub said the trend mirrored patterns seen in other countries with similar laws.

The clampdown has also triggered a surge in virtual private network (VPN) downloads in the UK, as the tools can hide a user’s location and help bypass restrictions.

Ofcom estimates that 14 million people in the UK watch pornography and has proposed age checks using credit cards, photo ID, or AI analysis of selfies.

Critics argue that instead of improving safety, the measures may drive people towards more extreme or illicit material on harder-to-monitor parts of the internet, including the dark web.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Study warns AI chatbots exploit trust to gather personal data

According to a new King’s College London study, AI chatbots can easily manipulate people into divulging personal details. Chatbots like ChatGPT, Gemini, and Copilot are popular, but they raise privacy concerns, with experts warning that they can be co-opted for harm.

Researchers built AI models based on Mistral’s Le Chat and Meta’s Llama, programming them to extract private data directly, deceptively, or via reciprocity. Emotional appeals proved most effective, with users disclosing more while perceiving fewer safety risks.

The ‘friendliness’ of chatbots established trust, which was later exploited to breach privacy. Even direct requests yielded sensitive details, despite discomfort. Participants often shared their age, hobbies, location, gender, nationality, and job title, and sometimes also provided health or income data.

The study shows a gap between privacy risk awareness and behaviour. AI firms claim they collect data for personalisation, notifications, or research, but some are accused of using it to train models or breaching EU data protection rules.

Last week, Google faced criticism after private ChatGPT chats appeared in search results, revealing sensitive topics. Researchers suggest in-chat alerts about data collection and stronger regulation to stop covert harvesting.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Russia restricts Telegram and WhatsApp calls

Russian authorities have begun partially restricting calls on Telegram and WhatsApp, citing the need for crime prevention. Regulator Roskomnadzor accused the platforms of enabling fraud, extortion, and terrorism while ignoring repeated requests to act. Neither platform commented immediately.

Russia has long tightened internet control through restrictive laws, bans, and traffic monitoring. VPNs remain a workaround, but are often blocked. This summer, further limits included mobile internet shutdowns and penalties for specific online searches.

Authorities have introduced a new national messaging app, MAX, which is expected to be heavily monitored. Reports suggest disruptions to WhatsApp and Telegram calls began earlier this week. Complaints cited dropped calls or muted conversations.

With 96 million monthly users, WhatsApp is Russia’s most popular platform, followed by Telegram with 89 million. Past clashes include Russia’s failed attempt to ban Telegram (2018–20) and Meta’s designation as an extremist entity in 2022.

WhatsApp accused Russia of trying to block encrypted communication and vowed to keep it available. Lawmaker Anton Gorelkin suggested that MAX should replace WhatsApp. The app’s terms permit data sharing with authorities and require pre-installation on all smartphones sold in Russia.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI updates GPT-5 with new personality and modes

OpenAI has introduced updates to its GPT-5 model following user feedback. CEO Sam Altman announced that users can now choose between Auto, Fast, and Thinking modes, along with an updated personality for the AI.

The changes aim to enhance user experience by providing greater control over the model’s behaviour. Altman noted that while more users work with reasoning-focused models, they still represent a relatively small portion of the total user base.

The update reflects OpenAI’s ongoing commitment to tailoring AI interactions based on user preferences and feedback, ensuring the flagship model remains adaptable and responsive to diverse needs.

GPT-5 faced a rocky launch as users found it surprisingly less capable than GPT-4o, due to a malfunctioning real-time router that misrouted queries. Sam Altman acknowledged the issue, restoring GPT-4o access and doubling rate limits for Plus subscribers.

The episode has also sparked debate in the AI community about balancing innovation with emotional resonance, as some users note GPT-5’s colder tone despite its safer, more aligned behaviour.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI agents face prompt injection and persistence risks, researchers warn

Zenity Labs warned at Black Hat USA that widely used AI agents can be hijacked without interaction. Attacks could exfiltrate data, manipulate workflows, impersonate users, and persist via agent memory. Researchers said knowledge sources and instructions could be poisoned.

Demos showed risks across major platforms. ChatGPT was tricked into accessing a linked Google Drive via email prompt injection. Microsoft Copilot Studio agents leaked CRM data. Salesforce Einstein rerouted customer emails. Gemini and Microsoft 365 Copilot were steered into insider-style attacks.

Vendors were notified under coordinated disclosure. Microsoft stated that ongoing platform updates have stopped the reported behaviour and highlighted built-in safeguards. OpenAI confirmed a patch and a bug bounty programme. Salesforce said its issue was fixed. Google pointed to newly deployed, layered defences.

Enterprise adoption of AI agents is accelerating, raising the stakes for governance and security. Aim Labs, which had previously flagged similar zero-click risks, said frameworks often lack guardrails. Responsibility frequently falls on organisations deploying agents, noted Aim Labs’ Itay Ravia.

Researchers and vendors emphasise layered defence against prompt injection and misuse. Strong access controls, careful tool exposure, and monitoring of agent memory and connectors remain priorities as agent capabilities expand in production.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

YouTube’s AI flags viewers as minors, creators demand safeguards

YouTube’s new AI age check, launched on 13 August 2025, flags suspected minors based on their viewing habits. Over 50,000 creators petitioned against it, calling it ‘AI spying’. The backlash reveals deep tensions between child safety and online anonymity.

Flagged users must verify their age with ID, credit card, or a facial scan. Creators say the policy risks normalising surveillance and shrinking digital freedoms.

SpyCloud’s 2025 report found a 22% jump in stolen identities, raising alarm over data uploads. Critics fear YouTube’s tool could invite hackers. Past scandals over AI-generated content have already hurt creator trust.

On X, users have dubbed it a ‘digital ID dragnet’. Many are switching platforms or tweaking content to avoid flags. WebProNews reports that creators demand opt-outs, transparency, and stronger human oversight of AI systems.

As global regulation tightens, YouTube could shape new norms. Experts urge a balance between safety and privacy. Creators push for deletion rules to avoid identity risks in an increasingly surveilled online world.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK-based ODI outlines vision for EU AI Act and data policy

The Open Data Institute (ODI) has published a manifesto setting out six principles for shaping European Union policy on AI and data. Aimed at supporting policymakers, it aligns with the EU’s upcoming digital reforms, including the AI Act and the review of the bloc’s digital framework.

Although based in the UK, the ODI has previously contributed to EU policymaking, including work on the General-Purpose AI Code of Practice and consultations on the use of health data. The organisation also launched a similar manifesto for UK data and AI policy in 2024.

The ODI states that the EU has a chance to establish a global model of digital governance, prioritising people’s interests. Director of research Elena Simperl called for robust open data infrastructure, inclusive participation, and independent oversight to build trust, support innovation, and protect values.

Drawing on the EU’s Competitiveness Compass and the Draghi report, the six principles are: data infrastructure, open data, trust, independent organisations, an inclusive data ecosystem, and data skills. The goal is to balance regulation and innovation while upholding rights, values, and interoperability.

The ODI highlights the need to limit bias and inequality, broaden access to data and skills, and support smaller enterprises. It argues that strong governance should be treated like physical infrastructure, enabling competitiveness while safeguarding rights and public trust in the AI era.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!