Researchers urge governance after LLMs display source-driven bias

Large language models (LLMs) are increasingly used to grade, hire, and moderate text. UZH research shows that evaluations shift when participants are told who wrote identical text, revealing source bias. Agreement stayed high only when authorship was hidden.

When told a human or another AI wrote it, agreement fell, and biases surfaced. The strongest was anti-Chinese across all models, including a model from China, with sharp drops even for well-reasoned arguments.

AI models also preferred ‘human-written’ over ‘AI-written’, showing scepticism toward machine-authored text. Such identity-triggered bias risks unfair outcomes in moderation, reviewing, hiring, and newsroom workflows.

Researchers recommend identity-blind prompts, A/B checks with and without source cues, structured rubrics focused on evidence and logic, and human oversight for consequential decisions.

They call for governance standards: disclose evaluation settings, test for bias across demographics and nationalities, and set guardrails before sensitive deployments. Transparency on prompts, model versions, and calibration is essential.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

University of Athens partners with Google to boost AI education

The National and Kapodistrian University of Athens has announced a new partnership with Google to enhance university-level education in AI. The collaboration grants all students free 12-month access to Google’s AI Pro programme, a suite of advanced learning and research tools.

Through the initiative, students can use Gemini 2.5 Pro, Google’s latest AI model, along with Deep Research and NotebookLM for academic exploration and study organisation. The offer also includes 2 TB of cloud storage and access to Veo 3 for video creation and Jules for coding support.

The programme aims to expand digital literacy and increase hands-on engagement with generative and research-driven AI tools. By integrating these technologies into everyday study, the university hopes to cultivate a new generation of AI-experienced graduates.

University officials view the collaboration as a milestone in Greek AI-driven education, following recent national initiatives to introduce AI programmes in schools and healthcare. The partnership marks a significant step in aligning higher education with the global digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Companies call back workers as AI fails to replace jobs

As interest in AI grows, many companies that previously cut staff are now rehiring some of the same employees. Visier data shows about 5.3 percent of laid-off workers have returned, marking a steady but rising trend.

The findings suggest AI adoption has not yet replaced human labour at the scale some executives anticipated.

Visier’s analysis of 2.4 million employees across 142 global companies indicates that AI tools often automate parts of tasks rather than entire jobs. Experts say organisations are realising that AI implementation costs, including infrastructure, data systems, and security, often exceed initial projections.

Many companies now rely on experienced staff to manage or complement AI tools effectively.

Industry observers highlight a gap between expectations and outcomes. MIT research shows around 95 percent of firms have yet to see measurable financial returns from AI investments.

Cost-cutting measures such as layoffs also carry hidden expenses, with estimates suggesting companies spend $1.27 for every $1 saved when reducing staff.

Executives are urged to carefully assess AI’s true impact before assuming workforce reductions will deliver long-term savings. Rehiring former employees has become a practical response to bridge skill gaps and ensure technology integration succeeds without disrupting operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI conference in Egypt promotes innovation for a greener future

The National Telecommunication Institute (NTI) and Misr University for Science and Technology (MUST) jointly hosted a conference titled Artificial Intelligence for Environmental Sustainability.
The event focused on using technology to tackle global environmental challenges.

The conference aimed to advance research and innovation in applying artificial intelligence (AI) to support sustainable development.

Experts and academics discussed how AI can enhance environmental protection through smarter data analysis, improved monitoring systems, and reduced carbon emissions. The sessions explored AI’s role in promoting sustainability and highlighted new applications designed to manage resources more efficiently.

Students from MUST’s Faculty of Information Technology presented projects that showcased practical AI solutions for building a greener future. Their work reflected the conference’s goal of encouraging collaboration and applied research to address urgent environmental issues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Honor to launch world’s first AI robot phone in 2026

Honor has unveiled plans to release the world’s first AI-powered robot phone in 2026, marking a bold step in the evolution of smart devices.

The announcement was made by CEO Li Jian during the World Internet Conference Wuzhen Summit. It highlighted the company’s ambition to merge AI with advanced hardware design.

The upcoming device will combine AI capabilities, embodied intelligence, and high-definition imaging within a foldable, liftable mechanical structure. Its rear camera module will act as a rotating gimbal, offering 360° movement, auto-tracking, and 4K ultra-HD recording for professional-grade content creation.

Powered by an on-device large model, the upgraded YOYO assistant will feature emotional interaction, environmental awareness, and seamless coordination across devices.

The launch forms part of Honor’s $10 billion Alpha Strategy, announced earlier this year, which aims to establish a complete ecosystem for AI-driven technologies over the next five years.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Anthropic strengthens European growth through Paris and Munich offices

AI firm Anthropic is expanding its European presence by opening new offices in Paris and Munich, strengthening its footprint alongside existing hubs in London, Dublin, and Zurich.

An expansion that follows rapid growth across the EMEA region, where the company has tripled its workforce and seen a ninefold increase in annual run-rate revenue.

The move comes as European businesses increasingly rely on Claude for critical enterprise tasks. Companies such as L’Oréal, BMW, SAP, and Sanofi are using the AI model to enhance software, improve workflows, and ensure operational reliability.

Germany and France, both among the top 20 countries in Claude usage per capita, are now at the centre to Anthropic’s strategic expansion.

Anthropic is also strengthening its leadership team across Europe. Guillaume Princen will oversee startups and digital-native businesses, while Pip White and Thomas Remy will lead the northern and southern EMEA regions, respectively.

A new head will soon be announced for Central and Eastern Europe, reflecting the company’s growing regional reach.

Beyond commercial goals, Anthropic is partnering with European institutions to promote AI education and culture. It collaborates with the Light Art Space in Berlin, supports student hackathons through TUM.ai, and works with the French organisation Unaite to advance developer training.

These partnerships reinforce Anthropic’s long-term commitment to responsible AI growth across the continent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta invests $600 billion to expand AI data centres across the US

A $600 billion investment aimed at boosting innovation, job creation, and sustainability is being launched in the US by Meta to expand its AI infrastructure.

Instead of outsourcing development, the company is building its new generation of AI data centres domestically, reinforcing America’s leadership in technology and supporting local economies.

Since 2010, Meta’s data centre projects have supported more than 30,000 skilled trade jobs and 5,000 operational roles, generating $20 billion in business for US subcontractors. These facilities are designed to power Meta’s AI ambitions while driving regional economic growth.

The company emphasises responsible development by investing heavily in renewable energy and water efficiency. Its projects have added 15 gigawatts of new energy to US power grids, upgraded local infrastructure, and helped restore water systems in surrounding communities.

Meta aims to become fully water positive by 2030.

Beyond infrastructure, Meta has channelled $58 million into community grants for schools, nonprofits, and local initiatives, including STEM education and veteran training programmes.

As AI grows increasingly central to digital progress, Meta’s continued investment in sustainable, community-focused data centres underscores its vision for a connected, intelligent future built within the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Inside OpenAI’s battle to protect AI from prompt injection attacks

OpenAI has identified prompt injection as one of the most pressing new challenges in AI security. As AI systems gain the ability to browse the web, handle personal data and act on users’ behalf, they become targets for malicious instructions hidden within online content.

These attacks, known as prompt injections, can trick AI models into taking unintended actions or revealing sensitive information.

To counter the issue, OpenAI has adopted a multi-layered defence strategy that combines safety training, automated monitoring and system-level security protections. The company’s research into ‘Instruction Hierarchy’ aims to help models distinguish between trusted and untrusted commands.

Continuous red-teaming and automated detection systems further strengthen resilience against evolving threats.

OpenAI also provides users with greater control, featuring built-in safeguards such as approval prompts before sensitive actions, sandboxing for code execution, and ‘Watch Mode’ when operating on financial or confidential sites.

These measures ensure that users remain aware of what actions AI agents perform on their behalf.

While prompt injection remains a developing risk, OpenAI expects adversaries to devote significant resources to exploiting it. The company continues to invest in research and transparency, aiming to make AI systems as secure and trustworthy as a cautious, well-informed human colleague.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

‘Wooing and suing’ defines News Corp’s AI strategy

News Corp chief executive Robert Thomson warned AI companies against using unlicensed publisher content, calling recipients of ‘stolen goods’ fair game for pursuit. He said ‘wooing and suing’ would proceed in parallel, with more licensing deals expected after the OpenAI pact.

Thomson argued that high-quality data must be paid for and that ingesting material without permission undermines incentives to produce journalism. He insisted that ‘content crime does not and will not pay,’ signalling stricter enforcement ahead.

While criticising bad actors, he praised partners that recognise publisher IP and are negotiating usage rights. The company is positioning itself to monetise archives and live reporting through structured licences.

He also pointed to a major author settlement with another AI firm as a watershed for compensation over past training uses. The message: legal and commercial paths are both accelerating.

Against this backdrop, News Corp said AI-related revenues are gaining traction alongside digital subscriptions and B2B data services. Further licensing announcements are likely in the coming months.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cars.com launches Carson AI to transform online car shopping

The US tech company, Cars.com, has unveiled Carson, a multilingual AI search engine designed to revolutionise the online car shopping experience.

Instead of relying on complex filters, Carson interprets natural language queries such as ‘a reliable car for a family of five’ or ‘a used truck under $30,000’, instantly producing targeted results tailored to each shopper’s needs.

A new AI feature that already powers around 15% of all web and mobile searches on Cars.com, with early data showing that users engaging with Carson return to the site twice as often and save three times more vehicles.

They also generate twice as many leads and convert 30% more frequently from search to vehicle detail pages.

Cars.com aims to simplify decision-making for its 25 million monthly shoppers, 70% of whom begin their search without knowing which brand or model to choose.

Carson helps these undecided users explore lifestyle, emotional and practical preferences while guiding them through Cars.com’s award-winning listings.

Further updates will introduce AI-generated summaries, personalised comparisons and search refinement suggestions.

Cars.com’s parent company, Cars Commerce, plans to expand its use of AI-driven tools to strengthen its role at the forefront of automotive retail innovation, offering a more efficient and intelligent marketplace for both consumers and dealerships.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!