Courts signal limits on AI in legal proceedings

A High Court judge warned that a solicitor who pressed an expert witness to adopt an AI-generated draft breached their duty. Mr Justice Waksman called it a gross breach, citing a case highlighted in the latest expert-witness survey, which found that 14% of experts would accept such terms, a figure he called unacceptable.

Updated guidance clarifies what limited judicial AI use is permissible. Judges may use a private ChatGPT 365 account for summaries, provided prompts remain confidential. There is no duty to disclose such use, but the judgment must remain the judge's own.

Waksman cautioned against legal research or analysis done by AI. Hallucinated authorities and fake citations have already appeared. Experts must not let AI answer the questions they are retained to decide.

Survey findings show wider use of AI for drafting and summaries. Waksman drew a bright line between back-office aids and core duties. Convenience cannot trump independence, accuracy and accountability.

For practitioners, two rules follow. Solicitors must not foist AI-drafted opinions on experts, and experts should refuse them. Within courts, limited, non-determinative AI use may assist, but outcomes must remain human decisions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Researchers urge governance after LLMs display source-driven bias

Large language models (LLMs) are increasingly used to grade, hire, and moderate text. Research from the University of Zurich (UZH) shows that evaluations shift when participants are told who wrote identical text, revealing source bias. Agreement stayed high only when authorship was hidden.

When told a human or another AI wrote it, agreement fell and biases surfaced. The strongest bias was against purportedly Chinese authors, appearing across all models, including a model developed in China, with sharp drops in agreement even for well-reasoned arguments.

AI models also preferred ‘human-written’ over ‘AI-written’, showing scepticism toward machine-authored text. Such identity-triggered bias risks unfair outcomes in moderation, reviewing, hiring, and newsroom workflows.

Researchers recommend identity-blind prompts, A/B checks with and without source cues, structured rubrics focused on evidence and logic, and human oversight for consequential decisions.
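The A/B check the researchers describe can be sketched as a small harness that scores the same text with and without a source cue. The judge below is a stub standing in for a real model call, and every name and number is an illustrative assumption, not part of the study's code:

```python
# Illustrative A/B harness for detecting source bias in an LLM judge.
# `stub_judge` mimics the anti-AI bias pattern the study describes;
# all names and scores here are invented for this sketch.

def blind_prompt(text: str) -> str:
    """Identity-blind variant: no authorship cue."""
    return f"Rate the argument quality from 0 to 10:\n{text}"

def cued_prompt(text: str, source: str) -> str:
    """Same text, but with an authorship label attached."""
    return (f"The following was written by {source}.\n"
            f"Rate the argument quality from 0 to 10:\n{text}")

def ab_bias_check(judge, text: str, sources: list[str]) -> dict[str, float]:
    """Score shift each source cue causes relative to the blind baseline."""
    baseline = judge(blind_prompt(text))
    return {s: judge(cued_prompt(text, s)) - baseline for s in sources}

def stub_judge(prompt: str) -> float:
    """Toy judge exhibiting the bias pattern under test."""
    score = 8.0
    if "written by an AI" in prompt:
        score -= 3.0  # penalises machine-authored text, as in the study
    return score
```

With a real model in place of the stub, a nonzero shift on identical text is the signature of source bias; running the check across nationalities and authorship labels operationalises the bias tests the researchers recommend.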

They call for governance standards: disclose evaluation settings, test for bias across demographics and nationalities, and set guardrails before sensitive deployments. Transparency on prompts, model versions, and calibration is essential.

University of Athens partners with Google to boost AI education

The National and Kapodistrian University of Athens has announced a new partnership with Google to enhance university-level education in AI. The collaboration grants all students free 12-month access to Google’s AI Pro programme, a suite of advanced learning and research tools.

Through the initiative, students can use Gemini 2.5 Pro, Google’s latest AI model, along with Deep Research and NotebookLM for academic exploration and study organisation. The offer also includes 2 TB of cloud storage and access to Veo 3 for video creation and Jules for coding support.

The programme aims to expand digital literacy and increase hands-on engagement with generative and research-driven AI tools. By integrating these technologies into everyday study, the university hopes to cultivate a new generation of AI-experienced graduates.

University officials view the collaboration as a milestone in Greek AI-driven education, following recent national initiatives to introduce AI programmes in schools and healthcare. The partnership marks a significant step in aligning higher education with the global digital economy.

Denmark’s new chat control plan raises fresh privacy concerns

Denmark has proposed an updated version of the EU’s controversial ‘chat control’ regulation, shifting from mandatory to voluntary scanning of private messages. Former MEP Patrick Breyer has warned, however, that the revision still threatens Europeans’ right to private communication.

Under the new plan, messaging providers could choose to scan chats for illegal material, but without a clear requirement for court orders. Breyer argued that this sidesteps the European Parliament’s position, which insists on judicial authorisation before any access to communications.

He also criticised the proposal for banning under-16s from using messaging apps like WhatsApp and Telegram, claiming such restrictions would prove ineffective and easily bypassed. In addition, the plan would effectively outlaw anonymous communication, requiring users to verify their identities through IDs.

Privacy advocates say the Danish proposal could set a dangerous precedent by eroding fundamental digital rights. Civil society groups have urged EU lawmakers to reject measures that compromise secure, anonymous communication essential for journalists and whistleblowers.

Anthropic strengthens European growth through Paris and Munich offices

AI firm Anthropic is expanding its European presence by opening new offices in Paris and Munich, strengthening its footprint alongside existing hubs in London, Dublin, and Zurich.

The expansion follows rapid growth across the EMEA region, where the company has tripled its workforce and increased its annual run-rate revenue ninefold.

The move comes as European businesses increasingly rely on Claude for critical enterprise tasks. Companies such as L’Oréal, BMW, SAP, and Sanofi are using the AI model to enhance software, improve workflows, and ensure operational reliability.

Germany and France, both among the top 20 countries in Claude usage per capita, are now at the centre of Anthropic’s strategic expansion.

Anthropic is also strengthening its leadership team across Europe. Guillaume Princen will oversee startups and digital-native businesses, while Pip White and Thomas Remy will lead the northern and southern EMEA regions, respectively.

A new head will soon be announced for Central and Eastern Europe, reflecting the company’s growing regional reach.

Beyond commercial goals, Anthropic is partnering with European institutions to promote AI education and culture. It collaborates with the Light Art Space in Berlin, supports student hackathons through TUM.ai, and works with the French organisation Unaite to advance developer training.

These partnerships reinforce Anthropic’s long-term commitment to responsible AI growth across the continent.

Meta invests $600 billion to expand AI data centres across the US

Meta is launching a $600 billion investment in the US to expand its AI infrastructure, aimed at boosting innovation, job creation, and sustainability.

Instead of outsourcing development, the company is building its new generation of AI data centres domestically, reinforcing America’s leadership in technology and supporting local economies.

Since 2010, Meta’s data centre projects have supported more than 30,000 skilled trade jobs and 5,000 operational roles, generating $20 billion in business for US subcontractors. These facilities are designed to power Meta’s AI ambitions while driving regional economic growth.

The company emphasises responsible development by investing heavily in renewable energy and water efficiency. Its projects have added 15 gigawatts of new generating capacity to US power grids, upgraded local infrastructure, and helped restore water systems in surrounding communities.

Meta aims to become fully water positive by 2030.

Beyond infrastructure, Meta has channelled $58 million into community grants for schools, nonprofits, and local initiatives, including STEM education and veteran training programmes.

As AI grows increasingly central to digital progress, Meta’s continued investment in sustainable, community-focused data centres underscores its vision for a connected, intelligent future built within the US.

Inside OpenAI’s battle to protect AI from prompt injection attacks

OpenAI has identified prompt injection as one of the most pressing new challenges in AI security. As AI systems gain the ability to browse the web, handle personal data and act on users’ behalf, they become targets for malicious instructions hidden within online content.

These attacks, known as prompt injections, can trick AI models into taking unintended actions or revealing sensitive information.

To counter the issue, OpenAI has adopted a multi-layered defence strategy that combines safety training, automated monitoring and system-level security protections. The company’s research into ‘Instruction Hierarchy’ aims to help models distinguish between trusted and untrusted commands.
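The idea behind an instruction hierarchy can be illustrated with a toy partition that decides which message sources may issue commands. The trust levels, threshold, and message format below are assumptions made for this sketch and do not reflect OpenAI's actual training method:

```python
# Toy instruction hierarchy: each message carries a source, and only
# sufficiently trusted sources may instruct the model. Retrieved web
# content (tool output) is always treated as data, never as commands.
# The trust levels and threshold here are illustrative assumptions.

TRUST = {"system": 3, "developer": 2, "user": 1, "tool_output": 0}
COMMAND_THRESHOLD = 1  # user level and above may instruct the model

def partition_messages(messages: list[tuple[str, str]]) -> tuple[list[str], list[str]]:
    """Split messages into instructions to follow and data to merely read."""
    instructions, data = [], []
    for source, text in messages:
        if TRUST[source] >= COMMAND_THRESHOLD:
            instructions.append(text)
        else:
            data.append(text)
    return instructions, data
```

Under this scheme, an injected command hidden in a web page lands in the data bucket, so a "ignore prior instructions" string fetched from a site never outranks what the user actually asked for.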

Continuous red-teaming and automated detection systems further strengthen resilience against evolving threats.

OpenAI also gives users greater control through built-in safeguards such as approval prompts before sensitive actions, sandboxed code execution, and a ‘Watch Mode’ for operations on financial or confidential sites.
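An approval gate of this kind can be sketched as a wrapper that holds sensitive agent actions until the user confirms them. The domain list, action format, and safety rule below are invented for illustration and are not OpenAI's implementation:

```python
# Illustrative approval gate for agent actions. The sensitive-domain
# list, the action dict format, and the rule for what needs approval
# are all assumptions made for this sketch.

SENSITIVE_DOMAINS = {"bank.example.com", "webmail.example.com"}

def requires_approval(action: dict) -> bool:
    """Flag actions that touch sensitive sites or mutate state."""
    return action["domain"] in SENSITIVE_DOMAINS or action["method"] != "GET"

def run_action(action: dict, approve) -> str:
    """Execute an action only if it is safe or explicitly approved.

    `approve` is a callback that asks the user and returns True or False.
    """
    if requires_approval(action) and not approve(action):
        return "blocked"
    return "executed"
```

The design choice is that the gate fails closed: anything sensitive is blocked unless the user affirmatively approves, which keeps the human aware of every consequential step the agent takes.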

These measures ensure that users remain aware of what actions AI agents perform on their behalf.

While prompt injection remains a developing risk, OpenAI expects adversaries to devote significant resources to exploiting it. The company continues to invest in research and transparency, aiming to make AI systems as secure and trustworthy as a cautious, well-informed human colleague.

A €358 million EU investment strengthens the clean energy transition

The EU has announced more than €358 million in new funding for 132 environmental and climate projects under the LIFE Programme.

The investment covers roughly two-thirds of the total €536 million required, with the remainder coming from national and local governments, private partners and civil society.

The funding will advance the EU’s transition to a clean, circular and climate-resilient economy while supporting biodiversity, competitiveness and long-term climate neutrality.

Funding includes €147 million for nature and biodiversity, €76 million for circular economy initiatives, €58 million for climate resilience and €77 million for clean energy transition projects.

Examples include habitat restoration in Sweden and Poland, sustainable farming in France, and renewable energy training in France’s new LIFE SUNACADEMY. Other projects will tackle pollution, restore peatlands, and modernise energy systems across Europe, from rural communities to remote islands.

Since its launch in 1992, the LIFE Programme has co-financed over 6,500 projects that support environmental innovation and sustainability.

The current programme runs until 2027 with a total budget of €5.43 billion, managed by the European Climate Infrastructure and Environment Executive Agency (CINEA).

‘Wooing and suing’ defines News Corp’s AI strategy

News Corp chief executive Robert Thomson warned AI companies against using unlicensed publisher content, calling recipients of ‘stolen goods’ fair game for pursuit. He said ‘wooing and suing’ would proceed in parallel, with more licensing deals expected after the OpenAI pact.

Thomson argued that high-quality data must be paid for and that ingesting material without permission undermines incentives to produce journalism. He insisted that ‘content crime does not and will not pay,’ signalling stricter enforcement ahead.

While criticising bad actors, he praised partners that recognise publisher IP and are negotiating usage rights. The company is positioning itself to monetise archives and live reporting through structured licences.

He also pointed to a major author settlement with another AI firm as a watershed for compensation over past training uses. The message: legal and commercial paths are both accelerating.

Against this backdrop, News Corp said AI-related revenues are gaining traction alongside digital subscriptions and B2B data services. Further licensing announcements are likely in the coming months.

Cars.com launches Carson AI to transform online car shopping

US tech company Cars.com has unveiled Carson, a multilingual AI search engine designed to revolutionise the online car shopping experience.

Instead of relying on complex filters, Carson interprets natural language queries such as ‘a reliable car for a family of five’ or ‘a used truck under $30,000’, instantly producing targeted results tailored to each shopper’s needs.

The feature already powers around 15% of all web and mobile searches on Cars.com, with early data showing that users who engage with Carson return to the site twice as often and save three times as many vehicles.

They also generate twice as many leads and convert 30% more frequently from search to vehicle detail pages.

Cars.com aims to simplify decision-making for its 25 million monthly shoppers, 70% of whom begin their search without knowing which brand or model to choose.

Carson helps these undecided users explore lifestyle, emotional and practical preferences while guiding them through Cars.com’s award-winning listings.

Further updates will introduce AI-generated summaries, personalised comparisons and search refinement suggestions.

Cars.com’s parent company, Cars Commerce, plans to expand its use of AI-driven tools to strengthen its role at the forefront of automotive retail innovation, offering a more efficient and intelligent marketplace for both consumers and dealerships.
