EU faces renewed pressure to ease industrial AI rules

European governments are renewing pressure to scale back industrial AI rules rather than expand regulatory demands.

Ten countries, including Germany, France, Italy, Spain and Poland, have urged the EU to clarify how the AI Act overlaps with machinery law and to adopt more realistic implementation deadlines. Their position is striking, given that the legislation already outlines its relationship with existing industrial frameworks.

Parliament’s centre and centre-right groups are pushing for deeper cuts. The European People’s Party wants all industrial sectors to move to a lighter regime, while Renew is advocating broad exemptions for industrial and business-to-business AI.

The European Conservatives and Reformers are also seeking reductions for non-safety-related systems. Together, the three groups edge close to a parliamentary majority, signalling momentum for a broader deregulation push.

No sweeping changes have been added to the AI omnibus so far, yet policymakers expect more adjustments ahead. The package must be finalised by August, so legislators are focused on meeting the deadline rather than reopening fundamental debates.

Broader revisions to industrial AI rules are likely to reappear in the Commission’s forthcoming Digital Fitness Check, which will reassess how multiple EU tech laws interact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT Health under fire after study finds major failures in emergency detection

A new evaluation of ChatGPT Health has raised major safety concerns after researchers found it frequently failed to recognise urgent medical emergencies.

The independent study, published in Nature Medicine, reported that the system under-triaged more than half of the clinical scenarios tested, giving advice that could have delayed life-saving treatment.

The research team, led by Ashwin Ramaswamy, created sixty patient simulations ranging from minor illnesses to life-threatening conditions.

Three doctors agreed on the appropriate urgency for each case, and their consensus was then compared with the model’s responses. The AI performed adequately in straightforward emergencies such as strokes, yet frequently minimised danger in more complex presentations, including severe asthma and diabetic crises.

Experts also warned that ChatGPT Health struggled to detect suicidal ideation reliably. Minor changes to scenario details, such as adding normal lab results, caused safeguards to disappear entirely.

Critics, including health-misinformation researcher Alex Ruani, described the behaviour as dangerously inconsistent and capable of creating a false sense of security.

OpenAI said the study did not reflect typical real-world use but acknowledged the need for continued research and improvement.

Policy specialists argue that the findings underline the need for clear safety standards, external audits and stronger transparency requirements for AI systems operating in sensitive medical contexts.

Galaxy S26 series brings powerful AI and privacy features

Samsung Electronics has unveiled the Galaxy S26 series, featuring advanced AI experiences, powerful performance, and an industry-leading camera system designed to simplify everyday smartphone tasks.

The series, which includes the Galaxy S26, S26+, and S26 Ultra, handles complex processes in the background, allowing users to focus on results rather than device operations.

The Galaxy S26 Ultra introduces the world’s first built-in Privacy Display, a redesigned chipset, and improved thermal management. Together, these upgrades enhance AI performance, graphics, and CPU efficiency, while ensuring faster, cooler, and more reliable operation throughout the day.

Photography and videography are also upgraded with wider apertures, Nightography Video, Super Steady video, and AI-powered editing tools that make professional-quality content accessible to all users.

Galaxy AI streamlines daily experiences by proactively suggesting actions, organising information, and automating tasks. Features such as Now Nudge, Now Brief, Circle to Search, and upgraded Bixby allow users to interact naturally with their devices.

Integrated AI agents, including Gemini and Perplexity, support multi-step tasks across apps, from booking services to advanced searches, all with minimal input.

Samsung has embedded multiple layers of security and privacy in the Galaxy S26 series. From AI-powered Call Screening and Privacy Alerts to Knox Vault, Knox Matrix, and post-quantum cryptography, users can control data access and protect personal information.

With long-term security updates, seamless software, and Galaxy Buds4 integration, the S26 series aims to combine performance, convenience, and safety in a single, intuitive device.

Scotland considers new offence for AI intimate images

The Scottish government has launched a consultation proposing a specific criminal offence for creating AI-generated intimate images without consent. Existing Scots law covers the sharing of such photos, but ministers in Scotland say gaps remain around their creation.

The consultation also seeks views on criminalising digital tools designed solely to produce intimate images and videos. Ministers aim to address harms linked to emerging AI technologies affecting women and girls across Scotland.

Additional proposals include a statutory aggravation where domestic abuse involves a pregnant woman, requiring courts to treat such cases more seriously at sentencing. Measures to strengthen protections against spiking offences are also under review.

Justice Secretary Angela Constance said responses would inform future action to reduce violence against women and girls. The consultation also considers changes to non-harassment orders and examines whether further laws on non-fatal strangulation are needed.

AI misuse in online scams involving OpenAI models

OpenAI has reported new instances of its models being exploited in online scams and coordinated information campaigns. The company detailed actions to remove offending accounts and strengthen safeguards, highlighting misuse in fraud and deceptive content creation.

Several cases involved romance and ‘task’ scams, in which AI-generated messages built emotional engagement before requesting payment. One network, dubbed ‘Operation Date Bait,’ used chatbots to promote a fictitious dating service targeting young men in Indonesia.

Another, ‘Operation False Witness,’ saw actors posing as legal professionals to solicit advance fees for non-existent recovery services.

The report also outlined coordinated campaigns leveraging AI to produce articles, social media posts, and comments on geopolitical topics. In ‘Operation Trolling Stone,’ AI-generated content on a Russian arrest in Argentina was shared widely in multiple languages to mimic grassroots engagement.

OpenAI stressed that although AI tools were sometimes used, engagement was largely driven by the reach and size of the accounts involved.

The company continues to monitor misuse and collaborates with partners and authorities to curb fraudulent or deceptive activity. Its systems have been updated to decline policy-violating requests, and OpenAI noted that not all suspicious content online was generated with its tools.

Binance targets Greece as EU gateway

Efforts to secure a foothold in Europe have led Binance to select Greece as its entry point for operating under the EU’s Markets in Crypto-Assets framework. A licence would let the exchange offer services across the European Union when the rules take effect in July 2026.

Strategic considerations outweigh speed in the decision. Co-chief executive Richard Teng cited workforce quality, safety, and long-term growth potential as decisive factors, even though several larger EU economies have already issued more licences.

Regulatory attention continues to shape the company’s trajectory. Founder Changpeng Zhao remains a shareholder, as leadership says reforms aim to make the platform one of the most regulated exchanges globally.

Expansion plans unfold amid turbulent market conditions. Bitcoin’s price remains well below last year’s highs, dampening retail sentiment, yet institutional participation has remained resilient, supporting liquidity amid volatility.

OpenClaw creator Peter Steinberger urges playful approach to AI coding

Peter Steinberger, creator of the viral AI agent OpenClaw and now at OpenAI, urged developers to approach AI experimentation with curiosity rather than rigid plans. On the Builders Unscripted podcast, he said progress often comes from exploration rather than expertise.

He said OpenClaw began without a roadmap. Early tests included a WhatsApp integration he paused, expecting major labs to build similar tools. When that did not happen, he developed his own prototype and refined it through real-world use.

Using the tool in low-connectivity environments helped clarify its value. Through trial and iteration, he observed how modern AI models can generate workable solutions without explicit programming, reshaping how developers think about problem-solving and workflows.

He cautioned that coding with AI is a skill that requires practice. Comparing it to learning guitar, Steinberger said early frustration is common, but persistence leads to improved intuition and efficiency over time.

Steinberger argued that developers who focus on solving problems and creating useful tools will remain in demand. Treating AI as a collaborative instrument rather than a shortcut, he said, is essential in a rapidly shifting technology landscape.

AT&T data breach settlement wins preliminary approval in $177 million deal

A federal judge in Texas has preliminarily approved a $177 million settlement resolving claims that AT&T failed to safeguard consumer data in two separate breaches. The company denies wrongdoing but agreed to establish compensation funds covering affected customers nationwide.

The agreement creates two non-reversionary funds: $149 million for individuals whose personal data appeared on the dark web, and $28 million for customers whose call and text logs were accessed. It covers a March 2024 breach and a separate incident between May 2022 and early 2023.

Eligible class members may submit claims for cash payments, with amounts depending on the number of valid submissions, and may also receive up to 24 months of credit monitoring. The deadline to opt out or object is 17 October 2025, with a final approval hearing set for 3 December 2025.

Legal and administrative costs, attorneys’ fees, and service awards will be paid from the settlement funds. The case resolves claims brought on behalf of all living US residents whose data was exposed in the two AT&T breaches.

The settlement follows other recent legal challenges facing AT&T, including class actions filed by New York pensioners alleging the company misled investors about the environmental impact of its lead-sheathed cables.

AI transforming the factory floor with smart automation and real-time oversight

According to industrial technology reporting, AI is being integrated across factory floor operations to improve efficiency, safety and productivity. Key applications include predictive maintenance, quality inspection, workflow optimisation and human-AI collaboration tools.

Machine learning models analyse sensor data from equipment (motors, conveyors, robots) to forecast failures before they occur, reducing unplanned downtime and lowering maintenance costs. Computer vision AI inspects products at high speed, detecting defects with greater accuracy than human inspection and enabling real-time corrective action.
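As a rough illustration of the predictive-maintenance idea, the sketch below flags sensor readings that drift beyond a rolling statistical baseline. It is a deliberately simplified stand-in for the machine learning models the article describes: the vibration values, window size and threshold are all invented for the example.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=20, threshold=3.0):
    """Flag readings that deviate more than `threshold` standard
    deviations from a rolling baseline of the preceding `window`
    samples -- a toy proxy for an ML failure-prediction model."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(readings[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Synthetic motor-vibration trace: stable around 1.0, then a
# hypothetical bearing fault drives readings sharply upward.
normal = [1.0 + 0.01 * ((i * 7) % 5 - 2) for i in range(40)]
faulty = [1.5, 1.8, 2.2]
alerts = flag_anomalies(normal + faulty)
print(alerts)  # indices flagged before outright failure
```

A real deployment would train on labelled failure histories and far richer sensor streams; the point here is only the shape of the pipeline: establish a baseline, measure deviation, raise an alert before the equipment actually fails.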

AI systems analyse production workflows to identify bottlenecks, recommend adjustments to schedules and resource allocation, and help balance workload across stations. Augmented reality and AI assistants support factory workers with contextual guidance, safety alerts and hands-free documentation during complex tasks.
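The bottleneck analysis mentioned above reduces, in its simplest form, to the observation that a line can move no faster than its slowest station. The sketch below, with made-up station names and cycle times, identifies that station and the throughput ceiling it imposes.

```python
# Hypothetical per-station cycle times in seconds (illustrative only).
stations = {"stamping": 42.0, "welding": 55.0, "painting": 48.0, "assembly": 60.0}

def find_bottleneck(cycle_times):
    """Return (station, cycle_time) for the slowest station,
    which caps the throughput of the whole line."""
    return max(cycle_times.items(), key=lambda kv: kv[1])

name, t = find_bottleneck(stations)
throughput_per_hour = 3600.0 / t  # units leaving the line per hour
print(name, round(throughput_per_hour, 1))
```

Production-grade systems weigh variability, buffers and resource constraints rather than a single static cycle time, but the same principle guides where schedule adjustments and rebalancing pay off most.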

Manufacturers adopting these systems report gains in production reliability, reduced scrap rates and more flexible responsiveness to demand variability. However, the report notes challenges around data quality, legacy equipment integration and workforce upskilling.

Ensuring that AI tools are transparent and explainable for operators, rather than opaque ‘black box’ systems, is also highlighted as necessary for trust and operational safety.

These trends reflect a broader shift toward ‘smart factories’ within the framework of Industry 4.0, where digital tools across hardware, networks, data analytics and AI collaborate to support lean, adaptive and resilient manufacturing systems.

The New Delhi AI Summit between inclusion and fragmentation

The 2026 AI Summit in New Delhi was billed as a turning point for a more inclusive and development-focused approach to AI. As a rising ‘digital middle power’, India used its role as host to reframe the global AI debate around social empowerment, trust, energy efficiency, and equitable access to technology. Drawing on the concept of MANAV (a Sanskrit word for humanity) and a set of seven guiding pillars, the summit sought to place development and inclusion at the centre of global AI governance.

Yet, as Marília Maciel argues in her blog ‘The New Delhi AI Summit: Inclusive rhetoric, fractured reality,’ the event ultimately exposed growing fragmentation in the international AI landscape. While India succeeded in broadening the narrative, many of its priorities were pushed into working groups and voluntary initiatives rather than reflected in strong political commitments.

A proliferation of new charters, coalitions, and platforms added to an already crowded field of AI initiatives, raising concerns about duplication and a lack of follow-through from previous summits.

The language of the Delhi Declaration reinforced this impression. Its reliance on non-binding formulations and cautious diplomatic phrasing signalled a retreat from even modest collective ambition. At the same time, key UN-led processes on digital cooperation and AI governance were largely sidelined.

For Maciel, this omission risks weakening evidence-based multilateral efforts at a time when reliable data and coordinated policymaking are urgently needed to understand AI’s real impact on economies, labour markets, and education systems.

India’s decision to join the US-led ‘Pax Silica’ initiative on AI and supply chains reflects a broader trend in which AI governance is increasingly tied to economic security and strategic competition.

While the partnership may bring India investment and access to technology, it also embeds AI more deeply within bloc-based alignments and the securitisation of global supply chains.

The summit also highlighted the fluid and often contradictory meaning of ‘digital sovereignty.’ Although India is frequently seen as a champion of sovereign digital infrastructure, the concept received limited emphasis in Delhi.

Maciel notes that sovereignty is increasingly shaped by immediate political and economic calculations rather than anchored in clear strategies, metrics, or participatory governance frameworks. Without greater clarity, she warns, AI sovereignty risks drifting away from broader goals of autonomy, rights, and self-determination.

In the end, the New Delhi Summit may be remembered less for its inclusive rhetoric than for revealing a fractured reality. India demonstrated how middle powers can influence the AI agenda, but the event underscored how fragmented, securitised, and initiative-heavy global AI governance has become. Whether future summits and the United Nations can restore coherence and continuity to this landscape remains an open question.
