OpenAI updates GPT-5 with new personality and modes

OpenAI has introduced updates to its GPT-5 model following user feedback. CEO Sam Altman announced that users can now choose between Auto, Fast, and Thinking modes, along with an updated personality for the AI.

The changes aim to enhance user experience by providing greater control over the model’s behaviour. Altman noted that while more users work with reasoning-focused models, they still represent a relatively small portion of the total user base.

The update reflects OpenAI’s ongoing commitment to tailoring AI interactions based on user preferences and feedback, ensuring the flagship model remains adaptable and responsive to diverse needs.

GPT-5 faced a rocky launch after users found it less capable than GPT-4o, a shortfall traced to a malfunctioning real-time router that misrouted queries. Altman acknowledged the issue, restoring GPT-4o access and doubling rate limits for Plus subscribers.

The episode has also sparked debate in the AI community about balancing innovation with emotional resonance, as some users note GPT-5’s colder tone despite its safer, more aligned behaviour.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Perplexity AI offers US$34.5b for Google Chrome

Perplexity AI has made a surprise US$34.5 billion offer to acquire Google’s Chrome browser, which could align with antitrust measures under consideration in the US.

The San Francisco-based startup submitted the proposal in a letter of intent, claiming it would keep Chrome independent while prioritising openness and consumer protection.

The bid arrives as Google awaits a court ruling on potential remedies after being found to have maintained an illegal monopoly in online search.

US government lawyers have proposed divesting Chrome rather than allowing Google to strengthen its dominance through AI. Google has urged the court to reject such a move, warning that a sale could harm innovation and reduce quality.

Analysts at Baird Equity Research said Perplexity’s offer undervalues Chrome and may be intended to prompt rival bids or influence the judge’s decision. They added that Perplexity, which already operates its own browser, could gain an advantage if Chrome became independent.

Google argues that most Chrome users are outside the US, meaning a forced sale would have global implications. The ruling is expected by the end of August, with the outcome likely to reshape the competitive landscape for browsers as AI increasingly shapes how users access the internet.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI agents face prompt injection and persistence risks, researchers warn

Zenity Labs warned at Black Hat USA that widely used AI agents can be hijacked without any user interaction. Attacks could exfiltrate data, manipulate workflows, impersonate users, and persist via agent memory. Researchers said agents’ knowledge sources and instructions could also be poisoned.

Demos showed risks across major platforms. ChatGPT was tricked into accessing a linked Google Drive via email prompt injection. Microsoft Copilot Studio agents leaked CRM data. Salesforce Einstein rerouted customer emails. Gemini and Microsoft 365 Copilot were steered into insider-style attacks.

Vendors were notified under coordinated disclosure. Microsoft stated that ongoing platform updates have stopped the reported behaviour and highlighted built-in safeguards. OpenAI confirmed a patch and a bug bounty programme. Salesforce said its issue was fixed. Google pointed to newly deployed, layered defences.

Enterprise adoption of AI agents is accelerating, raising the stakes for governance and security. Aim Labs, which had previously flagged similar zero-click risks, said frameworks often lack guardrails. Responsibility frequently falls on organisations deploying agents, noted Aim Labs’ Itay Ravia.

Researchers and vendors emphasise layered defence against prompt injection and misuse. Strong access controls, careful tool exposure, and monitoring of agent memory and connectors remain priorities as agent capabilities expand in production.
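Pattern-based filtering alone cannot stop prompt injection, but a rough sketch can illustrate what ‘careful tool exposure’ and screening of agent inputs look like in practice. The snippet below is a hypothetical, minimal example (all function and variable names are invented and not taken from any vendor’s product): it enforces a tool allowlist and flags instruction-like phrases in retrieved content before that content reaches an agent’s context or memory.

```python
# Minimal sketch (hypothetical names throughout) of two layered defences:
# an explicit tool allowlist and a naive scan of retrieved content before
# it is written into agent memory. Real deployments combine these with
# access controls, connector monitoring, and human review.
import re

ALLOWED_TOOLS = {"search_docs", "summarise"}  # tools this agent may call

SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"forward .* to",                 # exfiltration-style commands
    r"do not tell the user",
]

def safe_tool_call(tool_name: str, dispatch):
    """Refuse any tool the agent was not explicitly granted."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' is not on the allowlist")
    return dispatch(tool_name)

def screen_retrieved_text(text: str) -> str:
    """Flag instruction-like content in emails, files, or web pages
    before it is stored in agent memory or added to the prompt."""
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            return "[content withheld: possible prompt injection]"
    return text
```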

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

YouTube’s AI flags viewers as minors, creators demand safeguards

YouTube’s new AI age check, launched on 13 August 2025, flags suspected minors based on their viewing habits. Over 50,000 creators petitioned against it, calling it ‘AI spying’. The backlash reveals deep tensions between child safety and online anonymity.

Flagged users must verify their age with ID, credit card, or a facial scan. Creators say the policy risks normalising surveillance and shrinking digital freedoms.

SpyCloud’s 2025 report found a 22% jump in stolen identities, raising alarm over the risks of uploading identity documents. Critics fear YouTube’s verification tool could become a target for hackers. Past scandals over AI-generated content have already eroded creator trust.

Users on X refer to it as a ‘digital ID dragnet’. Many are switching platforms or tweaking content to avoid being flagged. WebProNews reports that creators are demanding opt-outs, transparency, and stronger human oversight of AI systems.

As global regulation tightens, YouTube could shape new norms. Experts urge a balance between safety and privacy. Creators push for deletion rules to avoid identity risks in an increasingly surveilled online world.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta leads booming AI smart glasses market in first half of 2025

According to Counterpoint Research, global shipments of smart glasses more than doubled in the first half of 2025, fuelled by soaring demand for AI-powered models.

AI-powered models accounted for 78% of shipments, outpacing basic audio-enabled smart frames.

Meta led the market with over 73% share, driven primarily by the success of its Ray-Ban AI glasses. Rising competition came from Chinese firms including Huawei and RayNeo, with Xiaomi emerging as a surprise contender thanks to its new AI glasses.

Analysts attribute the surge to growing consumer interest in AI-integrated wearable tech, with Meta and Xiaomi’s latest releases generating strong sales momentum.

Competition is expected to intensify as companies such as Alibaba and ByteDance enter the space in the second half of the year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Staff welcome AI but call for clear boundaries

New research shows that most workers are open to using AI tools at work, but resist the idea of being managed by them. Workers are far more positive about AI recommending skills or collaborating alongside them.

The Workday study found that while 82% of organisations are expanding AI agent use, only 30% of employees feel comfortable being overseen by such systems.

Nine in ten respondents believe AI can boost productivity, yet nearly half fear it could erode critical thinking and add to workloads. Trust in the technology grows with experience, with 95% of regular users expressing confidence compared with 36% of those new to AI.

Sensitive functions such as hiring, finance, and legal work remain areas where human oversight is preferred. Many see AI as a partner that complements judgement and empathy rather than replacing them entirely.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Santander expands AI-first strategy with OpenAI

Santander is accelerating its AI-first transformation through a new partnership with OpenAI, aiming to embed intelligent technology into every part of the bank.

Over the past two months, ChatGPT Enterprise has been rolled out to nearly 15,000 employees across Europe and the Americas, with plans to double that number by year-end. The move forms part of a broader ambition to become an AI-native institution where all decisions and processes are data-driven.

The bank plans to make AI training mandatory for all staff from 2026, with a focus on responsible use, and expects to scale agentic AI to enable fully conversational banking.

Santander says its AI initiatives saved over €200 million last year. In Spain alone, speech analytics now handles 10 million calls annually, automatically updating CRM records and freeing more than 100,000 work hours. Developer productivity has risen by up to 30% on some tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK-based ODI outlines vision for EU AI Act and data policy

The Open Data Institute (ODI) has published a manifesto setting out six principles for shaping European Union policy on AI and data. Aimed at supporting policymakers, it aligns with the EU’s upcoming digital reforms, including the AI Act and the review of the bloc’s digital framework.

Although based in the UK, the ODI has previously contributed to EU policymaking, including work on the General-Purpose AI Code of Practice and consultations on the use of health data. The organisation also launched a similar manifesto for UK data and AI policy in 2024.

The ODI argues that the EU has a chance to establish a global model of digital governance that prioritises people’s interests. Director of research Elena Simperl called for robust open data infrastructure, inclusive participation, and independent oversight to build trust, support innovation, and protect values.

Drawing on the EU’s Competitiveness Compass and the Draghi report, the six principles are: data infrastructure, open data, trust, independent organisations, an inclusive data ecosystem, and data skills. The goal is to balance regulation and innovation while upholding rights, values, and interoperability.

The ODI highlights the need to limit bias and inequality, broaden access to data and skills, and support smaller enterprises. It argues that strong governance should be treated like physical infrastructure, enabling competitiveness while safeguarding rights and public trust in the AI era.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI browsers accused of harvesting sensitive data, new study finds

A new study from researchers in the UK and Italy found that popular AI-powered browsers collect and share sensitive personal data, often in ways that may breach privacy laws.

The team tested ten well-known AI assistants, including ChatGPT, Microsoft’s Copilot, Merlin AI, Sider, and TinaMind, using public websites and private portals like health and banking services.

All but Perplexity AI showed evidence of gathering private details, from medical records to social security numbers, and transmitting them to external servers.

The investigation revealed that some tools continued tracking user activity even during private browsing, sending full web page content, including confidential information, to their systems.

Sometimes, prompts and identifying details, like IP addresses, were shared with analytics platforms, enabling potential cross-site tracking and targeted advertising.

Researchers also found that some assistants profiled users by age, gender, income, and interests, tailoring their responses across multiple sessions.

According to the report, such practices likely violate American health privacy laws and the European Union’s General Data Protection Regulation.

Privacy policies for some AI browsers admit to collecting names, contact information, payment data, and more, and sometimes storing information outside the EU.

The study warns that users cannot be sure how their browsing data is handled once gathered, raising concerns about transparency and accountability in AI-enhanced browsing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk’s xAI makes Grok 4 free worldwide for a limited time

Elon Musk’s company xAI has made its latest AI model, Grok 4, available to all users worldwide at no cost for a limited period. The model, launched just a month ago, was initially exclusive to paying subscribers of SuperGrok and X Premium.

Although Grok 4 is now open to everyone, its most powerful version, Grok 4 Heavy, remains restricted to SuperGrok Heavy members. The announcement comes days after OpenAI unveiled GPT-5, which is also freely accessible.

Grok 4 features two operating modes. Auto mode decides automatically whether a query requires more detailed reasoning, aiming to deliver faster responses and use fewer resources. Expert mode allows users to manually switch the AI into reasoning mode if they want a more thorough reply.

Alongside the release, xAI has introduced Grok Imagine, a free AI video generation tool for users in the US, with enhanced usage limits for paid members in other regions. The tool has already sparked controversy after reports emerged of its use to create explicit videos of celebrities.

Musk has also revealed plans to integrate advertising into the Grok chatbot interface as an additional revenue source to help offset the high costs of running the AI on powerful GPUs.

The ads will be placed between responses and suggestions on both the web platform and the mobile application, marking another step in xAI’s bid to expand its user base while sustaining the service financially.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!