AI growth changes the cycle for memory chip manufacturers

The growing demand for AI is reshaping the fortunes of the memory chip industry, according to leading manufacturers, who argue that the scale of AI investment is altering the sector’s typical boom-and-bust pattern.

The technology is creating more structural demand, rather than the sharp cyclical spikes that previously defined the market.

AI workloads depend heavily on robust memory systems, particularly as companies expand data centre capacity worldwide. Major chipmakers now expect steadier growth because AI models require vast data handling rather than one-off hardware surges.

Analysts suggest it could reduce the volatility that has often led to painful downturns for the industry.

Additionally, some reports claim that Japanese technology group Rakuten is prioritising low-cost AI development to improve profitability across its businesses.

Its AI leadership stresses the need to deploy systems that maximise margins instead of simply chasing capability for its own sake.

The developments underscore how AI is not only transforming software and services but also reshaping the economics of the hardware required to power them, from memory chips to cloud infrastructure on a global scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Facial recognition trial targets repeat offenders in New Zealand supermarkets

Teenagers account for most of the serious threats reported against supermarket staff across South Island stores, according to a privacy report released on Foodstuffs South Island’s facial recognition trial.

The company is testing the technology in three Christchurch supermarkets to identify only adult repeat offenders, rather than minors, even though six out of the ten worst offenders are under eighteen.

A system that creates a biometric template of every shopper at the trial stores and deletes it if there is no match with a watchlist. Detections remain stored within the Auror platform for seven years, while personal images are deleted on the same day.

The technology is supplied by the Australian firm Vix Vizion, in collaboration with Auror, which is already known for its vehicle plate recognition systems.

Foodstuffs argues the trial is justified by rising threatening and violent behaviour towards staff across all age groups.

A previous North Island pilot scanned 226 million faces and generated more than 1700 alerts, leading the Privacy Commissioner of New Zealand to conclude that strong safeguards could reduce privacy intrusion to an acceptable level.

The watchlist only includes adults previously involved in violence or serious threats, and any matches undergo human checks before action is taken.

Foodstuffs continues to provide regular updates to the Office of the Privacy Commissioner as the South Island trial proceeds.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake weight loss adverts removed from TikTok

TikTok removed fake adverts for weight loss drugs after a company impersonating UK retailer Boots used AI-generated videos. The clips falsely showed healthcare professionals promoting prescription-only medicines.

Boots said it contacted TikTok after becoming aware of the misleading adverts circulating on the platform. TikTok confirmed the videos were removed for breaching its rules on deceptive and harmful advertising.

BBC reporting found the account was briefly able to repost the same videos before being taken down. The account appeared to be based in Hong Kong and directed users to a website selling the drugs.

UK health regulators warned that prescription-only weight loss medicines must only be supplied by registered pharmacies. TikTok stated that it continues to strengthen its detection systems and bans the promotion of controlled substances.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AIOLIA framework translates AI principles into system design

An EU-funded project, AIOLIA, is examining how Europe’s approach to trustworthy AI can be applied in practice. Principles such as transparency and accountability are embedded in the AI Act’s binding rules. Turning those principles into design choices remains difficult.

The project focuses on closing that gap by analysing how AI ethics is applied in real systems. Its work supports the implementation of AI Act requirements beyond legal text. Lessons are translated into practical training.

Project coordinator Alexei Grinbaum argues that ethical principles vary widely by context. Engineers are expected to follow them, but implications differ across systems. Bridging the gap requires concrete examples.

AIOLIA analyses ten use cases across multiple domains involving professionals and citizens. The project examines how organisations operationalise ethics under regulatory and organisational constraints. Findings highlight transferable practices without a single model.

Training is central to the initiative, particularly for EU ethics evaluators and researchers working under the AI Act framework. As AI becomes more persuasive, risks around manipulation grow. AIOLIA aims to align ethical language with daily decisions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Atlas agent mode fortifies OpenAI’s ChatGPT security

ChatGPT Atlas has introduced an agent mode that allows an AI browser agent to view webpages and perform actions directly. The feature supports everyday workflows using the same context as a human user. Expanded capability also increases security exposure.

Prompt injection has emerged as a key threat to browser-based agents, targeting AI behaviour rather than software flaws. Malicious instructions embedded in content can redirect an agent from the user’s intended action. Successful attacks may trigger unauthorised actions.

To address the risk, OpenAI has deployed a security update to Atlas. The update includes an adversarially trained model and strengthened safeguards. It followed internal automated red teaming.

Automated red teaming uses reinforcement learning to train AI attackers that search for complex exploits. Simulations test how agents respond to injected prompts. Findings are used to harden models and system-level defences.

Prompt injection is expected to remain a long-term security challenge for AI agents. Continued investment in testing, training, and rapid mitigation aims to reduce real-world risk. The goal is to achieve reliable and secure AI assistance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Italy fines Apple €98 million over App Store competition breach

Apple has been fined €98 million by Italy’s competition authority after regulators concluded that its App Tracking Transparency framework distorted competition in the app store market.

Authorities stated that the policy strengthened Apple’s dominant position while limiting how third-party developers collect advertising data.

The investigation found that developers were required to request consent multiple times for the same data processing purposes, creating friction that disproportionately affected competitors.

Regulators in Italy argued that equivalent privacy protections could have been achieved through a single consent mechanism instead of duplicated prompts.

According to the Italian authority, the rules were imposed unilaterally across the App Store ecosystem and harmed commercial partners reliant on targeted advertising. The watchdog also questioned whether the policy was proportionate from a data protection perspective under the EU law.

Apple rejected the findings and confirmed plans to appeal, stating that App Tracking Transparency prioritises user privacy over the interests of ad technology firms.

The decision follows similar penalties and warnings issued in France and Germany, reinforcing broader European scrutiny of platform governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

5G network slicing strengthens Madrid emergency communications

Madrid has strengthened emergency response capabilities through a new collaboration between Orange and Ericsson, integrating a dedicated slice within Orange’s 5G Standalone network.

Advanced radio access and core technologies allow emergency teams to operate on prioritised connectivity during high network demand.

Police, fire and medical services benefit from guaranteed bandwidth and low-latency communications, ensuring uninterrupted coordination during incidents.

The infrastructure by Ericsson enables dynamic switching between public 5G and emergency spectrum, supporting rapid deployment when physical networks are compromised.

Resilience remains central to the design, with autonomous power systems and redundancy maintaining operations during outages. Live video transmission from firefighters’ helmets illustrates how real time data improves risk assessment and decision making on the ground.

By combining telecom innovation with public safety needs, the initiative reinforces Madrid’s role in the emergency communications leadership of the EU and demonstrates how 5G can support critical services at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

University of Phoenix breach exposes millions in major Oracle attack

Almost 3.5 million students, staff and suppliers linked to the University of Phoenix have been affected by a data breach tied to a sophisticated cyber extortion campaign. The incident followed unauthorised access to internal systems, exposing highly sensitive personal and financial information.

Investigations indicate attackers exploited a zero-day vulnerability in Oracle E-Business Suite, a widely used enterprise financial application. The breach surfaced publicly after the Clop ransomware group listed the university on its leak site, prompting internal reviews and regulatory disclosures.

Compromised data includes names, contact details, dates of birth, social security numbers and banking information. University officials have confirmed that affected individuals are being notified, while filings with US regulators outline the scale and nature of the incident.

The attack forms part of a broader wave of intrusions targeting American universities and organisations using Oracle platforms. As authorities offer rewards for intelligence on Clop’s operations, the breach highlights growing risks facing educational institutions operating complex digital infrastructures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyber incident hits France’s postal and banking networks

France’s national postal service, La Poste, suffered a cyber incident days before Christmas that disrupted websites, mobile applications and parts of its delivery network.

The organisation confirmed a distributed denial of service attack temporarily knocked key digital systems offline, slowing parcel distribution during the busiest period of the year.

A disruption that also affected La Banque Postale, with customers reporting limited access to online banking and mobile services. Card payments in stores, ATM withdrawals, and authenticated online payments continued to function, easing concerns over wider financial instability.

La Poste stated there was no evidence of customer data exposure, although several post offices in France operated at reduced capacity. Staff were deployed to restore services while maintaining in-person banking and postal transactions where possible.

The incident added to growing anxiety over digital resilience in critical public services, particularly following a separate data breach disclosed at France’s Interior Ministry last week. Authorities have yet to identify those responsible for the attack on La Poste.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia seeks China market access as US eases AI chip restrictions

The US tech giant NVIDIA has largely remained shut out of China’s market for advanced AI chips, as US export controls have restricted sales due to national security concerns.

High-performance processors such as the H100 and H200 were barred, forcing NVIDIA to develop downgraded alternatives tailored for Chinese customers instead of flagship products.

A shift in policy emerged after President Donald Trump announced that H200 chip sales to China could proceed following a licensing review and a proposed 25% fee. The decision reopened a limited pathway for exporting advanced US AI hardware, subject to regulatory approval in both Washington and Beijing.

If authorised, the H200 shipments would represent the most powerful US-made AI chips permitted in China since restrictions were introduced. The move could help NVIDIA monetise existing H200 inventory while easing pressure on its China business as it transitions towards newer Blackwell chips.

Strategically, the decision may slow China’s push for AI chip self-sufficiency, as domestic alternatives still lag behind NVIDIA’s technology.

At the same time, the policy highlights a transactional approach to export controls, raising uncertainty over long-term US efforts to contain China’s technological rise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!