Harvard researchers develop AI for brain surgery

Harvard researchers have developed an AI tool to distinguish glioblastoma from similar brain tumours during surgery. The PICTURE system gives surgeons near-real-time guidance for critical intraoperative decisions.

PICTURE outperformed human experts and other AI models, correctly distinguishing glioblastoma from primary central nervous system lymphoma (PCNSL) more than 98 percent of the time in international tests. The tool also flags cases it is unsure of, allowing human review and reducing the risk of misdiagnosis, particularly in complex or rare brain tumours.

The AI works on frozen tissue samples, commonly used for rapid surgical evaluation, and can identify crucial cancer features such as cell shape, density, and necrosis.

Accurate tumour differentiation helps surgeons avoid unnecessary tissue removal and choose the proper treatment: surgery for glioblastoma, or radiation and chemotherapy for PCNSL.

Researchers envision PICTURE could be used in surgery and pathology to aid AI collaboration, train pathologists, and improve access to neuropathology expertise. Further studies are planned to test its accuracy across more diverse populations and potentially extend its application to other cancer types.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic unveils Claude Sonnet 4.5 as the best AI coding model yet

Anthropic has released Claude Sonnet 4.5, its most advanced AI model yet, claiming state-of-the-art results in coding benchmarks. The company says the model can build production-ready applications, rather than limited prototypes, making it more reliable than earlier versions.

Claude Sonnet 4.5 is available through the Claude API and chatbot at the same price as its predecessor: $3 per million input tokens and $15 per million output tokens.
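At those published rates, the cost of a single API call can be estimated directly from its token counts. A minimal sketch (the token figures in the example are illustrative, not from the announcement):

```python
# Published Claude Sonnet 4.5 API rates (USD per million tokens)
INPUT_RATE = 3.00
OUTPUT_RATE = 15.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one API call at these rates."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# Example: a 2,000-token prompt producing a 500-token reply
print(f"${request_cost(2_000, 500):.4f}")  # → $0.0135
```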

Early enterprise tests suggest the model can autonomously code for extended periods, integrate databases, secure domains, and perform compliance checks such as SOC 2 audits.

Industry leaders have endorsed the launch, with Cursor and Windsurf calling it a new generation of AI coding models. Anthropic also emphasises stronger alignment, noting reduced risks of deception and sycophancy, and improved resistance to prompt injection attacks.

Alongside the model, the company has introduced a Claude Agent SDK to let developers build customised agents, and launched ‘Imagine with Claude’, a research preview showing real-time code generation.

The release highlights the intense competition in AI, with Anthropic pushing frequent updates to keep pace with rivals such as OpenAI, which has recently gained ground on coding performance with GPT-5.

Claude Sonnet 4.5 follows just weeks after Anthropic’s Claude Opus 4.1, underlining the rapid development cycles driving the sector.

NSW expands secure AI platform NSWEduChat across schools

Following successful school trials, the New South Wales Department of Education has confirmed the broader rollout of its in-house generative AI platform, NSWEduChat.

The tool, developed within the department’s Sydney-based cloud environment, prioritises privacy, security, and equity while tailoring content to the state’s educational context. It is aligned with the NSW AI Assessment Framework.

The trial began in 16 schools in Term 1, 2024, and then expanded to 50 schools in Term 2. Teachers reported efficiency gains, and students showed strong engagement. Access was extended to all staff in Term 4, 2024, with Years 5–12 students due to follow in Term 4, 2025.

Key features include a privacy-first design, built-in safeguards, and a student mode that encourages critical thinking by offering guided prompts rather than direct answers. Staff can switch between staff and student modes for lesson planning and preparation.

All data is stored in Australia under departmental control. NSWEduChat is free and billed as the most cost-effective AI tool for schools. Other systems are accessible but not endorsed; staff must follow safety rules, while students are limited to approved tools.

Lufthansa turns to automation and AI for efficiency

Lufthansa Group has unveiled a transformation strategy that places digitalisation and AI at the centre of its future operations. At Capital Markets Day, the company said efficiency will come from automation and streamlined processes.

Around 4,000 administrative roles are set to be cut by 2030, mainly in Germany, as Lufthansa consolidates functions and reduces duplication of work. Executives stressed that the focus will be on non-operational roles, with staff reductions to be conducted in consultation with social partners.

The airline group also confirmed continued investment in fleet renewal, with more than 230 new aircraft expected by 2030. Digital transformation and AI aim to cut costs, accelerate decisions, and boost competitiveness across the group’s airlines, cargo, and technical services.

By 2030, Lufthansa aims for an 8–10 percent EBIT margin, a 15–20 percent return on capital, and over €2.5 billion in annual free cash flow. The company said these measures will ensure long-term resilience in a changing industry.

AI agents complete first secure transaction with Mastercard and PayOS

PayOS and Mastercard have completed the first live agentic payment using a Mastercard Agentic Token, marking a pivotal step for AI-driven commerce. The demonstration, powered by Mastercard Agent Pay, extends the tokenisation infrastructure that already underpins mobile payments and card storage.

The system enables AI agents to initiate payments while enforcing consent, authentication, and fraud checks, thereby forming what Mastercard refers to as the trust layer. It shows how card networks are preparing for agentic transactions to become central to digital commerce.

Mastercard’s Chief Digital Officer, Pablo Fourez, stated that the company is developing a secure and interoperable ecosystem for AI-driven payments, underpinned by tokenised credentials. The framework aims to prepare for a future where the internet itself supports native agentic commerce.

For PayOS, the milestone represents a shift from testing to commercialisation. Chief executive Johnathan McGowan said the company is now onboarding customers and offering tools for fraud prevention, payments risk management, and improved user experiences.

The achievement signals a broader transition as agentic AI moves from pilot to real-world deployment. If security models remain effective, agentic payments could soon differentiate platforms, merchants, and issuers, embedding autonomy into digital transactions.

AI-powered Opera Neon browser launches with premium subscription

After its announcement in May, Opera has started rolling out Neon, its first AI-powered browser. Unlike traditional browsers, Neon is designed for professionals who want AI to simplify complex online workflows.

The browser introduces Tasks, which act like self-contained workspaces. AI can understand context, compare sources, and operate across multiple tabs simultaneously to manage projects more efficiently.

Neon also features cards and reusable AI prompts that users can customise or download from a community store, streamlining repeated actions and tasks.

Its standout tool, Neon Do, performs real-time on-screen actions such as opening tabs, filling forms, and gathering data, while keeping everything local. Opera says no data is shared, and all information is deleted after 30 days.

Neon is available by subscription at $19.90 per month. Invitations are limited during rollout, but Opera promises broader availability soon.

California enacts first state-level AI safety law

In the US, California Governor Gavin Newsom has signed SB 53, a landmark law establishing transparency and safety requirements for large AI companies.

The legislation obliges major AI developers such as OpenAI, Anthropic, Meta, and Google DeepMind to disclose their safety protocols. It also introduces whistle-blower protections and a reporting mechanism for safety incidents, including cyberattacks and autonomous AI behaviour not covered by the EU AI Act.

Reactions across the industry have been mixed. Anthropic supported the law, while Meta and OpenAI lobbied against it, with OpenAI publishing an open letter urging Newsom not to sign. Tech firms have warned that state-level measures could create a patchwork of regulation that stifles innovation.

Despite resistance, the law positions California as a national leader in AI governance. Newsom said the state had demonstrated that it was possible to safeguard communities without stifling growth, calling AI ‘the new frontier in innovation’.

Similar legislation is under consideration in New York, while California lawmakers are also debating SB 243, a separate bill that would regulate AI companion chatbots.

ChatGPT gets family safety update with parental controls

OpenAI has introduced new parental controls for ChatGPT, giving families greater oversight of how teens use the AI platform. The tools, which are live for all users, allow parents to link accounts with their children and manage settings through a simple control dashboard.

The system introduces stronger safeguards for teen accounts, including filters on graphic or harmful content and restrictions on roleplay involving sex, violence or extreme beauty ideals.

Parents can also fine-tune features such as voice mode, memory, and image generation, or set quiet hours during which ChatGPT cannot be accessed.

A notification mechanism has been added to alert parents if a teen shows signs of acute distress, escalating to emergency services in critical cases. OpenAI said the controls were shaped by consultation with experts, advocacy groups, and policymakers and will be expanded as research evolves.

To complement the parental controls, a new online resource hub has been launched to help families learn how ChatGPT works and explore positive uses in study, creativity and daily life.

OpenAI also plans to roll out an age-prediction system that automatically applies teen-appropriate settings.

UK’s Stockton secures £100m AI data centre to strengthen local economy

A £100m AI data centre has been approved for construction on the outskirts of Stockton, with developers Latos Data Centres pledging up to 150 new jobs.

The Preston Farms Industrial Estate site will feature two commercial units, plants, substations and offices, designed to support the growing demands of AI and advanced computing.

Work on the Neural Data Centre is set to begin at the end of the year, with full operations expected by 2028. The project has been welcomed by Industry Minister and Stockton North MP Chris McDonald, who described it as a significant investment in skills and opportunities for the future.

Latos managing director Andy Collin said the facility was intended to be ‘future proof’, calling it a purpose-built factory for the modern digital economy. Local leaders hope the investment will help regenerate Teesside’s industrial base, positioning the region as a hub for cutting-edge infrastructure.

The announcement follows the UK government’s decision to create an AI growth zone in the North East, covering sites in Northumberland and Tyneside. Teesworks in Redcar was not included in the initial allocation, but ministers said further proposals from Teesside were still under review.

UAE university bets on AI to secure global talent

Abu Dhabi’s Mohamed bin Zayed University of AI (MBZUAI) claims to have rapidly become central to the UAE’s ambition to lead in AI.

Founded six years ago, the state-backed institute has hired over 100 faculty, recruited students from 49 nations, and now counts more than 700 alumni. All students receive full scholarships, while professors enjoy freedom from chasing research grants.

The university works closely with G42, the UAE’s flagship AI firm, and has opened a research lab in Silicon Valley. It has already unveiled non-English language models, including Arabic, Kazakh, and Hindi, and recently launched K2 Think, an open-source reasoning model.

MBZUAI is part of a wider national strategy that pairs investment in semiconductor chips with the creation of a global talent pipeline. The UAE now holds over 188,000 AI chips, second only to the US, and aims for AI to contribute 20% of its non-oil GDP by 2031.

About 80% of graduates have remained in the country, aided by long-term residency incentives and tax-free salaries. Analysts say the university’s success will depend on whether it can sustain momentum and secure permanent endowments to outlast shifting UAE government priorities.
