New AI method improves transparency in computer vision models

Researchers at MIT have developed a new technique designed to improve how computer vision models explain their predictions while maintaining strong accuracy. Transparency is crucial as AI enters fields like healthcare and autonomous driving, where decisions must be clear.

The method uses concept bottleneck models, which enable AI to base its predictions on human-understandable concepts. Traditional approaches rely on expert-defined concepts that can be incomplete or ill-suited, sometimes lowering model performance.

The researchers instead created a system that extracts concepts the AI itself learned during training. A sparse autoencoder selects the key features, and a multimodal language model turns them into plain-language descriptions and labels.

The resulting module forces the AI to make predictions using only those extracted concepts.
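In broad strokes, such a bottleneck can be pictured as a classification head that sees only concept activations, never raw image features. The sketch below is a minimal, hypothetical illustration in PyTorch: the plain linear encoder standing in for the sparse autoencoder, the dimensions, and all names are assumptions, not details of the MIT method.

```python
import torch
import torch.nn as nn

class ConceptBottleneckHead(nn.Module):
    """Toy prediction head that classifies from concept activations alone."""

    def __init__(self, feature_dim: int, num_concepts: int, num_classes: int):
        super().__init__()
        # Stand-in for the sparse autoencoder's encoder: maps backbone
        # features to a (hopefully sparse) vector of concept activations.
        self.concept_encoder = nn.Linear(feature_dim, num_concepts)
        # The classifier is linear over concepts only, so each weight directly
        # ties one human-readable concept to one output class.
        self.classifier = nn.Linear(num_concepts, num_classes)

    def forward(self, features: torch.Tensor):
        # ReLU keeps concept activations non-negative and easier to read off.
        concepts = torch.relu(self.concept_encoder(features))
        logits = self.classifier(concepts)  # the prediction uses concepts alone
        return logits, concepts

# Usage with features from any frozen vision backbone (512-dim assumed here).
head = ConceptBottleneckHead(feature_dim=512, num_concepts=64, num_classes=200)
logits, concepts = head(torch.randn(1, 512))
# 'concepts' can then be paired with the language model's plain-text labels
# to report which concepts drove each prediction.
```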

Tests on bird classification and medical image datasets showed that the new method improved accuracy and provided clearer explanations. Findings suggest that using a model’s internal concepts can boost transparency and accountability in AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia introduces strict online child safety rules covering AI chatbots

Australia has begun enforcing its new Age-Restricted Material Codes, which require online platforms to introduce stronger protections to prevent children from accessing harmful digital content.

The rules apply across a wide range of services, including social media, app stores, gaming platforms, search engines, pornography websites, and AI chatbots.

Under the framework, companies must implement age-assurance systems before allowing access to content involving pornography, high-impact violence, self-harm material, or other age-restricted topics.

These measures also extend to AI companions and chatbots, which must prevent sexually explicit or self-harm-related conversations with minors.

The rules form part of Australia’s broader online safety framework overseen by the eSafety Commissioner, which will monitor compliance and enforce the codes.

Companies that fail to comply may face penalties of up to $49.5 million per breach.

The policy aims to shift responsibility toward technology companies by requiring them to build protections directly into their platforms.

Officials in Australia argue the measures mirror long-standing offline safeguards designed to prevent children from accessing adult environments or harmful material.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI legal advice case asks whether ChatGPT crosses legal boundaries

A newly filed lawsuit against OpenAI raises a key issue: Does allowing generative AI systems like ChatGPT to provide legal advice violate laws that bar the unauthorised practice of law (UPL)? UPL means providing legal services, such as drafting filings or giving advice, without the required legal qualifications or a state licence.

The case claims an individual used ChatGPT to prepare legal filings in a dispute with Nippon Life Insurance, prompting the company to argue OpenAI should be held responsible for the outcome.

The lawsuit claims ChatGPT helped the user challenge a settled legal dispute, forcing Nippon Life to spend additional time and resources responding to filings produced with the chatbot. The claim alleges tortious interference with a contract, meaning the unlawful disruption of an existing agreement between two parties by causing one of them to breach or alter it.

The suit also claims unauthorised practice of law and abuse of the judicial process, which means using the legal system improperly to gain an advantage, and argues OpenAI should be liable because ChatGPT operates under its control. The dispute centres on whether AI systems should be allowed to analyse disputes and offer legal advice as a lawyer would.

Advocates argue that such tools could widen access to legal advice, making legal support more accessible and affordable for those who cannot easily hire a lawyer. However, US legal frameworks restrict the provision of legal advice to licensed lawyers, rules designed to protect consumers and ensure professional accountability.

Critics argue that limiting legal advice to licensed lawyers preserves an expensive monopoly and hinders access to justice. AI-driven legal tools highlight this tension over the future of legal services.

The outcome of this lawsuit will likely hinge on whether AI-generated responses constitute the intentional provision of legal advice and whether OpenAI can be held liable for such outputs. Even if the suit fails, it foregrounds the broader debate about granting generative AI a legitimate role in legal guidance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The EU faces growing AI copyright disputes

Courts across Europe are examining how copyright law applies to AI systems trained on large datasets, including whether existing rules allow AI developers to use copyrighted books, music and journalism without permission.

One closely watched dispute before the EU court in Luxembourg involves a publisher challenging Google over summaries produced by its Gemini chatbot; the case could test how press publishers’ rights apply to AI-generated outputs.

Legal experts warn that the ruling may not resolve wider questions about AI training data, as many European disputes turn on the EU copyright directive and its text and data mining exception.

Additional lawsuits across Europe involving music rights group GEMA and OpenAI are expected to continue for years. Policymakers in Europe are also considering updates to copyright rules as AI technology expands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pentagon AI dispute raises concerns for startups

A dispute between Anthropic and the Pentagon in the US has raised questions about whether startups will hesitate to pursue defence contracts. Negotiations over the use of Anthropic’s Claude AI technology collapsed, prompting the US administration to label the company a supply chain risk.

The situation escalated as OpenAI secured its own agreement with the Pentagon. The development sparked online backlash, with reports of a surge in ChatGPT uninstalls after the defence partnership was announced.

Technology analysts say the controversy highlights the unusual scrutiny facing high-profile AI firms: because their products are so widely used, companies such as OpenAI and Anthropic see their defence partnerships placed squarely in the spotlight.

Startup founders are now debating the risks of government contracts, particularly with the Pentagon, while industry observers warn that shifting contract terms from defence authorities could make collaboration with the government more uncertain.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI copyright warning as 5 major risks outlined in UK Lords report

Concerns about AI copyright are rising after a House of Lords committee report. The report warns that unlicensed use of creative works for AI training threatens the UK’s creative industries.

Large AI systems rely on vast amounts of human-created content, often used without clear consent or compensation. Such developments have intensified debates around AI copyright protections.

The committee argues that the key issue is not the copyright framework itself but the widespread unlicensed use of protected works and AI developers’ lack of transparency.

That lack of transparency prevents rightsholders from knowing whether their works are being used, and from enforcing their rights, raising critical questions about how AI copyright rules apply in practice.

The report urges the government to reject the proposed commercial text and data mining exception, introduce stronger protections against unauthorised digital replicas, and safeguard against AI outputs that imitate a creator’s style, voice, or identity.

The committee also calls for legally mandated transparency in AI training data, backs the development of a licensing market, and urges standards for rights reservation, data provenance and the labelling of AI-generated content, alongside support for UK-governed AI models within a robust AI copyright framework.

Baroness Keeley, committee chair, warned: ‘Our creative industries face a clear and present danger from uncredited and unremunerated use of copyrighted material to train AI models.

‘Photographers, musicians, authors, and publishers are seeing their work fed into AI models, which then produce imitations that take employment and earning opportunities from original creators.’

Keeley added: ‘AI may contribute to our future economic growth, but the UK creative industries create jobs and economic value now.

‘In 2023, the creative industries delivered £124 billion of economic value to the UK, and this is set to grow to £141 billion by 2030. Watering down the protections in our existing copyright regime to lure the biggest US tech companies is a race to the bottom that does not serve UK interests. We should not sacrifice our creative industries for the AI jam tomorrow.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cursor launches tool to automate agentic coding workflows

Cursor has launched a new tool called Automations, designed to help software engineers manage the growing complexity of overseeing multiple AI coding agents at once.

Rather than requiring a human to initiate each task, the system allows agents to launch automatically in response to events such as a new code addition, a Slack message, or a scheduled timer.

The shift is significant because it breaks the ‘prompt-and-monitor’ model that currently defines most AI-assisted engineering.
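To make the contrast with prompt-and-monitor workflows concrete, the toy Python sketch below shows agents subscribing to events and launching without a human prompt. Every name in it is a hypothetical illustration, not Cursor’s actual API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Event:
    kind: str                 # e.g. "push", "slack_message", "schedule"
    payload: dict = field(default_factory=dict)

class AutomationRunner:
    """Toy dispatcher: agents launch on events rather than on human prompts."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[Event], None]]] = {}

    def on(self, kind: str, handler: Callable[[Event], None]) -> None:
        # Register an agent launch for a given event type.
        self._handlers.setdefault(kind, []).append(handler)

    def dispatch(self, event: Event) -> None:
        # Fire every agent subscribed to this event; no human in the loop.
        for handler in self._handlers.get(event.kind, []):
            handler(event)

runner = AutomationRunner()
runner.on("push", lambda e: print(f"bug-review agent on commit {e.payload['sha']}"))
runner.on("schedule", lambda e: print("weekly codebase-summary agent"))

runner.dispatch(Event("push", {"sha": "abc123"}))
runner.dispatch(Event("schedule"))
```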

As Jonas Nelle, Cursor’s engineering lead for asynchronous agents, put it, humans are no longer always the ones initiating; they are called in at the right moments rather than tracking dozens of processes simultaneously.

Early applications include automated bug reviews, security audits, PagerDuty incident response, and weekly codebase summaries delivered to Slack.

The launch comes as competition in the agentic coding space intensifies, with both OpenAI and Anthropic releasing major updates to their tools in recent weeks. Cursor’s annual recurring revenue has nonetheless doubled over the past three months to more than $2 billion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe risks falling behind in the global robotics race

China’s dominance in humanoid robotics was on full display at the start of 2026, with Hangzhou-based Unitree at the forefront of innovation; 87% of all humanoid robots delivered in 2025 were made in China.

Germany’s Chancellor Friedrich Merz witnessed a live display of robots dancing and doing backflips during a visit to Hangzhou, returning home to warn that Germany was ‘simply no longer productive enough.’

European robotics startups face a stark funding gap compared to their US and Chinese rivals. Rodion Shishkov, founder of the London-based construction technology company All3, described having to ‘literally fight’ for tens of millions of euros, whilst similarly positioned American counterparts could secure billions of dollars with the same effort.

Barclays’ research suggests the global humanoid robotics market, currently worth $2–3 billion, could reach $200 billion by 2035, making the stakes of falling behind significant.

Andrei Danescu, CEO of the logistics robotics startup Dexory, warned that Europe should not confuse a strong industrial tradition with genuine momentum. He called on European regulators to set clearer standards, establish liability frameworks for autonomous systems, and align public investment levels with the strategic ambitions of other global players.

One industry analyst noted that it would be ‘naive’ to expect hardware independence from Chinese supply chains in robotics, but argued that Europe still has significant ground to claim on the intelligence and data side of the sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Human workers behind AI training raise new privacy concerns

AI systems rely heavily on human labour to train and improve algorithms. Images and videos collected by AI-powered devices are often reviewed and labelled by human annotators so that systems can better recognise objects, environments, and context.

This work is frequently outsourced to data annotation companies such as Sama, which provides training data services for large technology firms, including Meta Platforms. Many of these tasks are carried out by contract workers in Nairobi, Kenya, where employees review large volumes of visual data under strict confidentiality agreements.

Recent investigations have raised concerns about privacy and data governance linked to AI wearables such as the Ray-Ban Meta smart glasses, developed in partnership with EssilorLuxottica. Some device features rely on cloud processing, meaning that captured images and voice inputs may be transmitted and analysed remotely.

Workers involved in the annotation process report regularly encountering sensitive material. Footage can include scenes recorded inside private homes, bedrooms, or bathrooms, as well as images that unintentionally reveal personal or financial information.

These practices raise broader questions about transparency and cross-border data transfers, particularly when data originating in Europe or the United States is processed in other countries. They also highlight the often-hidden human role behind AI systems that are frequently presented as fully automated technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Data breach hits fintech lender Figure exposing nearly 1 million accounts

Fintech lender Figure Technology Solutions has disclosed a data breach after hackers exposed personal information from nearly one million accounts. Details from 967,200 accounts, including names, email addresses, phone numbers, home addresses, and dates of birth, were compromised.

Figure Technology Solutions, founded in 2018, operates a blockchain-based lending platform built on the Provenance blockchain. The company says it has facilitated more than $22 billion in home equity transactions through partnerships with banks, credit unions, and fintech firms. Despite blockchain security claims, attackers reportedly gained access by manipulating a staff member rather than breaking the underlying technology.

‘We recently identified that an employee was socially engineered, and that allowed an actor to download a limited number of files through their account,’ a company spokesperson said. ‘We acted quickly to block the activity and retained a forensic firm to investigate what files were affected. We understand the importance of these matters and are communicating with partners and those impacted as appropriate.’

Security researchers say the data breach follows a pattern used by groups such as ShinyHunters, who impersonate IT support staff and pressure employees into revealing login credentials through convincing phishing portals.

Once attackers obtain access to a corporate single sign-on system, which lets users log in to multiple internal applications with a single set of credentials, they can move across multiple internal platforms, often including services linked to major providers such as Microsoft and Google.

Experts warn that the data breach highlights a wider cybersecurity problem: even advanced technologies such as blockchain cannot prevent attacks that target human behaviour. Criminals can use exposed personal information to launch convincing phishing campaigns or financial scams, reinforcing the need for stronger employee training and security awareness.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!