OpenAI updates safety rules amid AI race

OpenAI has updated its Preparedness Framework, the internal system used to assess AI model safety and determine necessary safeguards during development.

The company now says it may adjust its safety standards if a rival AI lab releases a ‘high-risk’ system without similar protections, a move that reflects growing competitive pressure in the AI industry.

OpenAI has not ruled out such flexibility, but insists that any changes would be made cautiously and disclosed publicly.

Critics argue OpenAI is already lowering its standards for the sake of faster deployment. Twelve former employees recently backed a legal case against the company, warning that a planned corporate restructuring could encourage further shortcuts.

OpenAI denies these claims, but reports suggest compressed safety testing timelines and increasing reliance on automated evaluations instead of human-led reviews. According to sources, some safety checks are also run on earlier versions of models, not the final ones released to users.

The refreshed framework also changes how OpenAI defines and manages risk. Models are now classified as having either ‘high’ or ‘critical’ capability: the former covers systems that could amplify existing pathways to harm, the latter those that could introduce entirely new ones.

Instead of deploying models first and assessing risk later, OpenAI says it will apply safeguards during both development and release, particularly for models capable of evading shutdown, hiding their abilities, or self-replicating.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

xAI adds collaborative workspace to Grok

Elon Musk’s AI firm xAI has introduced a new feature called Grok Studio, offering users a dedicated space to create and edit documents, code, and simple apps.

Available on Grok.com for both free and paying users, Grok Studio opens content in a separate window, allowing for real-time collaboration between the user and the chatbot instead of relying solely on back-and-forth prompts.

Grok Studio functions much like the canvas-style tools of other AI developers, allowing code to be previewed and executed in languages such as Python, C++, and JavaScript. The setup mirrors features introduced earlier by OpenAI and Anthropic rather than offering a radically different experience.

All content appears beside Grok’s chat window, creating a workspace that blends conversation with practical development tools.

Alongside this launch, xAI has also announced integration with Google Drive.

The integration will let users attach Drive files directly to Grok prompts, so the chatbot can work with documents, spreadsheets, and slides without manual uploads, making the platform more convenient for everyday productivity.

Opera brings AI assistant to Opera Mini on Android

Opera, the Norway-based browser maker, has announced the rollout of its AI assistant, Aria, to Opera Mini users on Android. The move represents a strategic effort to bring advanced AI capabilities to users with low-end devices and limited data access, rather than confining such tools to high-spec platforms.

Aria allows users to access up-to-date information, generate images, and learn about a range of topics using a blend of models from OpenAI and Google.

Since its 2005 launch, Opera Mini has been known for saving data during browsing, and Opera claims that the inclusion of Aria won’t compromise that advantage or increase the app’s size.

The move makes the AI assistant accessible in regions where data efficiency is critical, without forcing users to choose between smart features and performance.

Opera has long partnered with telecom providers in Africa to offer free data to Opera Mini users. However, last year, it had to end its programme in Kenya due to regulatory restrictions around ads on browser bookmark tiles.

Despite such challenges, Opera Mini has surpassed a billion downloads on Android and now serves more than 100 million users globally.

Alongside this update, Opera continues testing new AI functions, including features that let users manage tabs using natural language and tools that assist with task completion.

The effort reflects the company’s ambition to embed AI more deeply into everyday browsing rather than limiting innovation to its flagship browser.

Google Search drops local domain URLs

Google is set to begin redirecting its country-specific search domains, such as google.fr or google.co.uk, to the main global address at google.com.

The company says the move is intended to streamline user experience across different regions, with the update being gradually rolled out over the coming months.

Although users will see google.com in their browser instead of their local version, Google says the way Search functions will remain the same.

Some users may be prompted to re-enter their search preferences during the transition, but results will still reflect local relevance.

Since 2017, the platform has delivered the same core Search experience regardless of whether users accessed it through a country-specific address or the global one.

With this standardisation already in place, Google has concluded that separate country domains are no longer necessary.

Google creates AI to decode dolphin sounds

Google DeepMind has developed a groundbreaking AI model capable of interpreting and generating dolphin vocalisations.

Named DolphinGemma, the model was created in collaboration with researchers from Georgia Tech and the Wild Dolphin Project, a nonprofit organisation known for its extensive studies on Atlantic spotted dolphins.

Using an audio-in, audio-out architecture, DolphinGemma analyses sequences of natural dolphin sounds to detect patterns and structure, ultimately predicting the most likely sounds to follow.

The approach is similar to how large language models predict the next word in a sentence. It was trained using a vast acoustic database collected by the Wild Dolphin Project, ensuring accuracy in modelling natural dolphin communication.
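To make that next-sound prediction idea concrete, here is a deliberately tiny sketch, not DeepMind’s actual model: a bigram counter over hypothetical, discretised sound labels (names like `whistle_a` are invented for illustration) that predicts the most likely follower of a given sound, the same way a language model predicts the next word.

```python
from collections import Counter, defaultdict

def train_bigram(sequences):
    # Count how often each sound token is followed by each other token.
    counts = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, token):
    # Return the most frequently observed follower of `token`, or None.
    followers = counts.get(token)
    return followers.most_common(1)[0][0] if followers else None

# Hypothetical labelled recordings standing in for real acoustic data.
recordings = [
    ["whistle_a", "click_b", "whistle_a", "click_b"],
    ["whistle_a", "click_b", "burst_c"],
]
model = train_bigram(recordings)
print(predict_next(model, "whistle_a"))  # "click_b"
```

The real system works on raw audio with a transformer rather than hand-labelled tokens, but the training objective, predicting what comes next in a sequence, is the same in spirit.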

Lightweight and efficient, DolphinGemma is designed to run on smartphones, making it accessible for field researchers and conservationists.

Google DeepMind’s blog noted that the model could mark a major advance in understanding dolphin behaviour, potentially paving the way for more meaningful interactions between humans and marine mammals.

Spotify launches Ads Exchange and Gen AI ads in India

Spotify has introduced its Ads Exchange (SAX) and Generative AI-powered advertisements in India, following a successful pilot in the US and Canada.

The SAX platform aims to give advertisers better control over performance tracking and maximise reach without overloading users with repetitive ads.

Integrated with platforms such as Google DV360, The Trade Desk, and Magnite, SAX enables advertisers to access Spotify’s high-quality inventory and enhance their programmatic strategies. In addition to multimedia formats, podcast ads will soon be included.

Through Generative AI, advertisers can create audio ads within Spotify’s Ads Manager platform at no extra cost, using scripts, voiceovers, and licensed music.

The tools allow brands to produce more ads in less time and with less effort, making it quicker to reach a broader audience. Arjun Kolady, Head of Sales – India at Spotify, highlighted the ease of scaling campaigns with these new tools.

Samsung brings AI-powered service tool to India

Samsung, already the leading home appliance brand in India by volume, is now enhancing its after-sales service with an AI-powered support tool.

The South Korean company has introduced the Home Appliances Remote Management (HRM) tool, designed to improve service speed, accuracy, and the overall customer experience beyond traditional support methods.

The HRM tool allows customer care teams to remotely diagnose and resolve issues in Samsung smart appliances connected via SmartThings. If a problem can be fixed remotely, staff will ask for the user’s consent before taking control of the device.

If the issue can be solved by the customer, step-by-step instructions are provided instead of sending a technician straight away.

When neither of these options applies, the issue is forwarded directly to service technicians with full diagnostics already completed, cutting down the time spent on-site.
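The three-way triage described above can be sketched as a simple decision function. This is an illustrative assumption about the flow, not Samsung’s actual API; all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Diagnosis:
    remotely_fixable: bool   # can support staff fix it over SmartThings?
    user_fixable: bool       # can the customer fix it with instructions?
    details: str             # diagnostic summary for a technician

def triage(diag: Diagnosis, user_consents: bool) -> str:
    # Remote fix requires the user's explicit consent.
    if diag.remotely_fixable and user_consents:
        return "remote_fix"
    # Guided self-service avoids dispatching a technician.
    if diag.user_fixable:
        return "send_instructions"
    # Otherwise forward full diagnostics so the technician arrives prepared.
    return f"dispatch_technician:{diag.details}"

print(triage(Diagnosis(True, False, "pump fault"), user_consents=True))  # remote_fix
```

The point of the ordering is that each cheaper channel is tried first, and the on-site visit inherits the diagnostics already gathered, which is what cuts time spent in the home.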

The new system reduces the need for in-home visits, shortens waiting times, and increases appliance uptime.

SmartThings also plays a proactive role by automatically detecting issues and offering solutions before customers even need to call.

Samsung India’s Vice President for Customer Satisfaction, Sunil Cutinha, noted that the tool significantly streamlines service, boosts maintenance efficiency, and helps ensure timely product support for users across the country.

Nvidia brings AI supercomputer production to the US

Nvidia is shifting its AI supercomputer manufacturing operations to the United States for the first time, instead of relying on a globally dispersed supply chain.

In partnership with industry giants such as TSMC, Foxconn, and Wistron, the company is establishing large-scale facilities to produce its advanced Blackwell chips in Arizona and complete supercomputers in Texas. Production is expected to reach full scale within 12 to 15 months.

Over a million square feet of manufacturing space has been commissioned, with key roles also played by packaging and testing firms Amkor and SPIL.

The move reflects Nvidia’s ambition to create up to half a trillion dollars’ worth of AI infrastructure within the next four years, while boosting supply chain resilience and growing its US-based operations.

These AI supercomputers are designed to power new, highly specialised data centres known as ‘AI factories,’ capable of handling vast AI workloads.

Nvidia’s investment is expected to support the construction of dozens of such facilities, generating hundreds of thousands of jobs and securing long-term economic value.

To enhance efficiency, Nvidia will apply its own AI, robotics, and simulation tools across these projects, using Omniverse to model factory operations virtually and Isaac GR00T to develop robots that automate production.

According to CEO Jensen Huang, bringing manufacturing home strengthens supply chains and better positions the company to meet the surging global demand for AI computing power.

Meta to use EU user data for AI training amid scrutiny

Meta Platforms has announced it will begin using public posts, comments, and user interactions with its AI tools to train its AI models in the EU.

The move follows the recent European rollout of Meta AI, which had been delayed since June 2024 due to data privacy concerns raised by regulators. The company said EU users of Facebook and Instagram would receive notifications outlining how their data may be used, along with a link to opt out.

Meta clarified that while questions posed to its AI and public content from adult users may be used, private messages and data from under-18s would be excluded from training.
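The inclusion rules Meta describes, public content from adults in, private messages and under-18 content out, amount to a simple eligibility filter. The sketch below is an illustration of those stated rules only; the data model and field names are invented, not Meta’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class Post:
    is_public: bool          # private messages are never eligible
    author_age: int          # under-18 content is excluded
    author_opted_out: bool   # users can opt out via the notification link

def eligible_for_training(post: Post) -> bool:
    return (post.is_public
            and post.author_age >= 18
            and not post.author_opted_out)

posts = [
    Post(True, 25, False),   # public adult post: eligible
    Post(True, 16, False),   # under-18: excluded
    Post(False, 30, False),  # private message: excluded
    Post(True, 40, True),    # opted out: excluded
]
print([eligible_for_training(p) for p in posts])  # [True, False, False, False]
```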

Instead of expanding quietly, the company is now making its plans public in an attempt to meet the EU’s transparency expectations.

The shift comes after Meta paused its original launch last year at the request of Ireland’s Data Protection Commission, which expressed concerns about using social media content for AI development. The move also drew criticism from advocacy group NOYB, which has urged regulators to intervene more decisively.

Meta joins a growing list of tech firms under scrutiny in Europe. Ireland’s privacy watchdog is already investigating Elon Musk’s X and Google for similar practices involving personal data use in AI model training.

Instead of treating such probes as isolated incidents, the EU appears to be setting a precedent that could reshape how global companies handle user data in AI development.

X faces EU probe over AI data use

Elon Musk’s X platform is under formal investigation by the Irish Data Protection Commission over its alleged use of public posts from EU users to train the Grok AI chatbot.

The probe is centred on whether X Internet Unlimited Company, the platform’s newly renamed Irish entity, has adhered to key GDPR principles while sharing publicly accessible data, like posts and interactions, with its affiliate xAI, which develops the chatbot.

Concerns have grown over the lack of explicit user consent, especially as other tech giants such as Meta signal similar data usage plans.

The investigation forms part of a wider regulatory push in the EU to hold AI developers accountable rather than allow unchecked experimentation. Experts note that many AI firms have deployed tools under a ‘build first, ask later’ mindset, an approach at odds with Europe’s strict data laws.

Should regulators conclude that public data still requires user consent, it could force a dramatic shift in how AI models are developed, not just in Europe but around the world.

Enterprises are now treading carefully. The investigation into X is already affecting AI adoption across the continent, with legal and reputational risks weighing heavily on decision-makers.

In one case, a Nordic bank halted its AI rollout midstream after its legal team couldn’t confirm whether European data had been used without proper disclosure. Instead of pushing ahead, the project was rebuilt using fully documented, EU-based training data.

The consequences could stretch far beyond the EU. Ireland’s probe might become a global benchmark for how governments view user consent in the age of data scraping and machine learning.

Rather than remaining region-specific, enforcement of this kind could inspire similar actions from regulators in places such as Singapore and Canada. As AI continues to evolve, companies may have no choice but to adopt more transparent practices or face mounting legal scrutiny.
