OpenAI updates safety rules amid AI race

OpenAI has updated its Preparedness Framework, the internal system used to assess AI model safety and determine necessary safeguards during development.

The company now says it may adjust its safety standards if a rival AI lab releases a ‘high-risk’ system without similar protections, a move that reflects growing competitive pressure in the AI industry.

OpenAI has not ruled out such flexibility, but insists that any changes would be made cautiously and with public transparency.

Critics argue OpenAI is already lowering its standards for the sake of faster deployment. Twelve former employees recently supported a legal case against the company, warning that a planned corporate restructure might encourage further shortcuts.

OpenAI denies these claims, but reports suggest compressed safety testing timelines and increasing reliance on automated evaluations instead of human-led reviews. According to sources, some safety checks are also run on earlier versions of models, not the final ones released to users.

The refreshed framework also changes how OpenAI defines and manages risk. Models are now classified as having either ‘high’ or ‘critical’ capability, the former referring to systems that could amplify harm, the latter to those introducing entirely new risks.

Instead of deploying models first and assessing risk later, OpenAI says it will apply safeguards during both development and release, particularly for models capable of evading shutdown, hiding their abilities, or self-replicating.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hertz customer data stolen in vendor cyberattack

Hertz has disclosed a significant data breach involving sensitive customer information, including credit card and driver’s licence details, following a cyberattack on one of its service providers.

The breach stemmed from vulnerabilities in the Cleo Communications file transfer platform, exploited in October and December 2024.

Hertz confirmed the unauthorised access on 10 February, with further investigations revealing a range of exposed data, including names, birth dates, contact details, and in some cases, Social Security and passport numbers.

While the company has not confirmed how many individuals were affected, notifications have been issued in the US, UK, Canada, Australia, and across the EU.

Hertz stressed that no misuse of customer data has been identified so far, and that the breach has been reported to law enforcement and regulators. Cleo has since patched the exploited vulnerabilities.

The identity of the attackers remains unknown. However, Cleo was previously targeted in a broader cyber campaign last October, with the Clop ransomware group later claiming responsibility.

The gang published Cleo’s company data online and listed dozens of breached organisations, suggesting the incident was part of a wider, coordinated effort.

xAI adds collaborative workspace to Grok

Elon Musk’s AI firm xAI has introduced a new feature called Grok Studio, offering users a dedicated space to create and edit documents, code, and simple apps.

Available on Grok.com for both free and paying users, Grok Studio opens content in a separate window, allowing for real-time collaboration between the user and the chatbot instead of relying solely on back-and-forth prompts.

Grok Studio functions much like the canvas-style tools offered by other AI developers, allowing code to be previewed and executed in languages such as Python, C++, and JavaScript. The setup mirrors features introduced earlier by OpenAI and Anthropic rather than offering a radically different experience.

All content appears beside Grok’s chat window, creating a workspace that blends conversation with practical development tools.

Alongside this launch, xAI has also announced integration with Google Drive.

The integration will let users attach files directly to Grok prompts, so the chatbot can work with documents, spreadsheets, and slides from Drive without manual uploads, making the platform more convenient for everyday productivity tasks.

People are forming emotional bonds with AI chatbots

AI is reshaping how people connect emotionally, with millions turning to chatbots for companionship, guidance, and intimacy.

From virtual relationships to support with mental health and social navigation, personified AI assistants such as Replika, Nomi, and ChatGPT are being used by over 100 million people globally.

These apps simulate human conversation through personalised learning, allowing users to form what some consider meaningful emotional bonds.

For some, like 71-year-old Chuck Lohre from the US, chatbots have evolved into deeply personal companions. Lohre’s AI partner, modelled after his wife, helped him reach emotional insights about his real-life marriage, even as their exchanges included elements of romantic and erotic roleplay.

Others, including neurodiverse users such as Travis Peacock, have used chatbots to strengthen communication skills, regulate emotions, and build lasting relationships, reporting significant gains in their personal and professional lives.

While many users speak positively about these interactions, concerns persist over the nature of such bonds. Experts argue that these connections, though comforting, are often one-sided and lack the mutual growth found in real relationships.

A UK government report noted widespread discomfort with the idea of forming personal ties with AI, suggesting the emotional realism of chatbots may risk deepening emotional dependence without true reciprocity.

Opera brings AI assistant to Opera Mini on Android

Opera, the Norway-based browser maker, has announced the rollout of its AI assistant, Aria, to Opera Mini users on Android. The move represents a strategic effort to bring advanced AI capabilities to users with low-end devices and limited data access, rather than confining such tools to high-spec platforms.

Aria allows users to access up-to-date information, generate images, and learn about a range of topics using a blend of models from OpenAI and Google.

Since its 2005 launch, Opera Mini has been known for saving data during browsing, and Opera claims that the inclusion of Aria won’t compromise that advantage nor increase the app’s size.

This makes the AI assistant accessible to users in regions where data efficiency is critical, rather than forcing a choice between smart features and performance.

Opera has long partnered with telecom providers in Africa to offer free data to Opera Mini users. However, last year, it had to end its programme in Kenya due to regulatory restrictions around ads on browser bookmark tiles.

Despite such challenges, Opera Mini has surpassed a billion downloads on Android and now serves more than 100 million users globally.

Alongside this update, Opera continues testing new AI functions, including features that let users manage tabs using natural language and tools that assist with task completion.

The effort reflects the company’s ambition to embed AI more deeply into everyday browsing rather than confining innovation to its flagship browser.

AI firm DeepSeek opens up on model deployment tech

Chinese AI startup DeepSeek has announced its intention to share the technology behind its internal inference engine, a move aimed at enhancing collaboration within the open-source AI community.

The company’s inference engine and training framework have played a vital role in accelerating the performance and deployment of its models, including DeepSeek-V3 and R1.

Built on PyTorch, DeepSeek’s training framework is complemented by a modified version of the vLLM inference engine originally developed in the US at UC Berkeley.

While the company will not release the full source code of its engine, it will contribute its design improvements and select components as standalone libraries.

These efforts form part of DeepSeek’s broader open-source initiative, which began earlier this year with the partial release of its AI model code.

Despite this contribution, DeepSeek’s models fall short of the Open Source Initiative’s standards, as the training data and full framework remain restricted.

The company cited limited resources and infrastructure constraints as reasons for not making the engine entirely open-source. Still, the move has been welcomed as a meaningful gesture towards transparency and knowledge-sharing in the AI sector.

KiloEX loses $7.5 million in oracle hack

A hacker has exploited decentralised exchange KiloEX, draining approximately US$7.5 million by manipulating its price oracle mechanism. The breach led to an immediate suspension of the platform and sparked a cross-industry investigation involving cybersecurity firms and blockchain networks.

The vulnerability centred on KiloEX’s price feed system, which allowed the attacker to manipulate the ETH/USD feed by inputting an artificial entry price of 100 and closing it at 10,000.

According to cybersecurity firm PeckShield, this simple flaw enabled the attacker to steal millions across multiple chains, including $3.3 million from Base, $3.1 million from opBNB, and $1 million from BNB Smart Chain.

KiloEX is working with various security experts and blockchain networks such as BNB Chain and Manta Network to recover the stolen assets.

Funds are reportedly being routed through cross-chain protocols like zkBridge and Meson. Fuzzland co-founder Chaofan Shou described the breach as stemming from a ‘very simple vulnerability’ in oracle verification: only intermediaries were validated, not the original transaction sender.
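The class of flaw Shou describes can be pictured with a short sketch. This is illustrative only, not KiloEX’s actual contract code: the relayer names, function signatures, and price values are assumptions, and the real exploit ran on-chain rather than in Python.

```python
# Illustrative sketch of an oracle-verification flaw: the handler trusts
# the immediate caller (an intermediary) but never checks who originally
# sent the transaction.

TRUSTED_INTERMEDIARIES = {"zk_bridge_relayer", "meson_relayer"}  # assumed names

def vulnerable_update_price(caller: str, origin: str, new_price: float, feed: dict) -> bool:
    # BUG: validates only the direct caller, so any origin can push a
    # forged price through a trusted intermediary.
    if caller in TRUSTED_INTERMEDIARIES:
        feed["ETH/USD"] = new_price
        return True
    return False

def patched_update_price(caller: str, origin: str, new_price: float,
                         feed: dict, trusted_origins: set) -> bool:
    # FIX: also verify the original transaction sender.
    if caller in TRUSTED_INTERMEDIARIES and origin in trusted_origins:
        feed["ETH/USD"] = new_price
        return True
    return False

feed = {"ETH/USD": 1600.0}
# Attacker routes a fake entry price of 100 through a trusted relayer...
vulnerable_update_price("zk_bridge_relayer", "attacker", 100.0, feed)
entry = feed["ETH/USD"]
# ...then closes the position at a forged 10,000.
vulnerable_update_price("zk_bridge_relayer", "attacker", 10_000.0, feed)
exit_price = feed["ETH/USD"]
print(exit_price / entry)  # a 100x price swing from two forged updates
```

The patched variant shows why validating the original sender, not just the relaying intermediary, closes this attack path.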

The attack caused KiloEX’s token price to plummet by over 29% and came just one day after the platform announced a strategic partnership with DWF Labs, aimed at fuelling growth. KiloEX has promised a full incident report and a bounty programme to encourage asset recovery.

Spotify launches Ads Exchange and Gen AI ads in India

Spotify has introduced its Ads Exchange (SAX) and Generative AI-powered advertisements in India, following a successful pilot in the US and Canada.

The SAX platform aims to give advertisers better control over performance tracking and maximise reach without overloading users with repetitive ads.

Integrated with platforms such as Google DV360, The Trade Desk, and Magnite, SAX enables advertisers to access Spotify’s high-quality inventory and enhance their programmatic strategies. In addition to multimedia formats, podcast ads will soon be included.

Through Generative AI, advertisers can create audio ads within Spotify’s Ads Manager platform at no extra cost, using scripts, voiceovers, and licensed music.

Tools like these let brands produce more ads in less time and with less effort, making it quicker to reach a broader audience. Arjun Kolady, Head of Sales – India at Spotify, highlighted the ease of scaling campaigns with the new tools.

Meta to use EU user data for AI training amid scrutiny

Meta Platforms has announced it will begin using public posts, comments, and user interactions with its AI tools to train its AI models in the EU, instead of limiting training data to existing US-based inputs.

The move follows the recent European rollout of Meta AI, which had been delayed since June 2024 due to data privacy concerns raised by regulators. The company said EU users of Facebook and Instagram would receive notifications outlining how their data may be used, along with a link to opt out.

Meta clarified that while questions posed to its AI and public content from adult users may be used, private messages and data from under-18s would be excluded from training.

Instead of expanding quietly, the company is now making its plans public in an attempt to meet the EU’s transparency expectations.

The shift comes after Meta paused its original launch last year at the request of Ireland’s Data Protection Commission, which expressed concerns about using social media content for AI development. The move also drew criticism from advocacy group NOYB, which has urged regulators to intervene more decisively.

Meta joins a growing list of tech firms under scrutiny in Europe. Ireland’s privacy watchdog is already investigating Elon Musk’s X and Google for similar practices involving personal data use in AI model training.

Instead of treating such probes as isolated incidents, the EU appears to be setting a precedent that could reshape how global companies handle user data in AI development.
