Unexpected language emerges as best for AI prompting

A new joint study by the University of Maryland and Microsoft has found that Polish is the most effective language for prompting AI, outperforming 25 others, including English, French, and Chinese.

The researchers tested leading AI models, including those from OpenAI, Google (Gemini), Alibaba (Qwen), Meta (Llama), and DeepSeek, by providing identical prompts in 26 languages. Polish achieved an average accuracy of 88 percent, securing first place. English, often seen as the natural choice for AI interaction, came only sixth.

According to the study, Polish proved to be the most precise in issuing commands to AI, despite the fact that far less Polish-language data exists for model training. The Polish Patent Office noted that while humans find the language difficult, AI systems appear to handle it with remarkable ease.

Other high-performing languages included French, Italian, and Spanish, with Chinese ranking among the lowest. The finding challenges the widespread assumption that English dominates AI communication and could reshape future research on multilingual model optimisation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Perplexity launches AI-powered patent search to make innovation intelligence accessible

The US software company Perplexity has unveiled Perplexity Patents, which it describes as the first AI-powered patent research agent designed to democratise access to intellectual property intelligence. The new tool allows anyone to explore patents using natural language instead of complex keyword syntax.

Traditional patent research has long relied on rigid search systems that demand specialist knowledge and expensive software.

Perplexity Patents instead offers conversational interaction, enabling users to ask questions such as ‘Are there any patents on AI for language learning?’ or ‘Key quantum computing patents since 2024?’.

The system automatically identifies relevant patents, provides inline viewing, and maintains context across multiple questions.

Powered by Perplexity’s large-scale search infrastructure, the platform uses agentic reasoning to break down complex queries, perform multi-step searches, and return comprehensive results supported by extensive patent documentation.

Its semantic understanding also captures related concepts that traditional tools often miss, linking terms such as ‘fitness trackers’, ‘activity bands’, and ‘health monitoring wearables’.

Beyond patent databases, Perplexity Patents can also draw from academic papers, open-source code, and other publicly available data, revealing the entire landscape of technological innovation. The service launches today in beta, free for all users, with extra features for Pro and Max subscribers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google removes Gemma AI model following defamation claims

Google has removed its Gemma AI model from AI Studio after US Senator Marsha Blackburn accused it of producing false sexual misconduct claims about her. The senator said Gemma fabricated an incident allegedly from her 1987 campaign, citing nonexistent news links to support the claim.

Blackburn described the AI’s response as defamatory and demanded action from Google.

The controversy follows a similar case involving conservative activist Robby Starbuck, who claims Google’s AI tools made false accusations about him. Google acknowledged that AI ‘hallucinations’ are a known issue but insisted it is working to mitigate such errors.

Blackburn argued these fabrications go beyond harmless mistakes and represent real defamation from a company-owned AI model.

Google stated that Gemma was never intended as a consumer-facing tool, noting that some non-developers misused it to ask factual questions. The company confirmed it would remove the model from AI Studio while keeping it accessible via API for developers.

The incident has reignited debates over AI bias and accountability. Blackburn highlighted what she sees as a consistent pattern of conservative figures being targeted by AI systems, amid wider political scrutiny over misinformation and AI regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japan’s KDDI partners with Google for AI-driven news service

Japan’s telecom leader KDDI is set to partner with Google to introduce an AI-powered news search service in spring 2026. The platform will use Google’s Gemini model to deliver articles from authorised Japanese media sources while preventing copyright violations.

The service will cite original publishers and exclude independent web scraping, addressing growing global concerns about the unauthorised use of journalism by generative AI systems. Around six domestic media companies, including digital outlets, are expected to join the initiative.

KDDI aims to strengthen user trust by offering reliable news through a transparent and copyright-safe AI interface. Details of how the articles will appear to users are still under review, according to sources familiar with the plan.

The move follows lawsuits filed in Tokyo by major Japanese newspapers, including Nikkei and Yomiuri, against US startup Perplexity AI over alleged copyright infringement. Industry experts say KDDI’s collaboration could become a model for responsible AI integration in news services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK traffic to Pornhub plunges after age-verification law

In response to the UK’s new age-verification law, Pornhub reports that visits from UK users have fallen by about 77%.

The change comes following legislation designed to block under-18s from accessing adult sites via mandatory age checks.

The company states that it began enforcing the verification system early in October, noting that many users are now turned away or fail the checks.

According to Pornhub, this explains the significant decrease in traffic from the UK. The platform emphasised that this is a reflection of compliance rather than an admission of harm.

Critics argue that the law creates risks of overblocking and privacy concerns, as users may turn to less regulated or unsafe alternatives. This case also underscores tensions between content regulation, digital rights and the efficacy of age-gating as a tool.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reliance and Google expand Gemini AI access across India

Google has partnered with Reliance Intelligence to expand access to its Gemini AI across India.

Under the new collaboration, Jio Unlimited 5G users aged between 18 and 25 will receive the Google AI Pro plan free for 18 months, with nationwide eligibility to follow soon.

The partnership grants access to the Gemini 2.5 Pro model and includes increased limits for generating images and videos with the Nano Banana and Veo 3.1 tools.

Users in India will also benefit from expanded NotebookLM access for study and research, plus 2 TB of cloud storage shared across Google Photos, Gmail and Drive for data and WhatsApp backups.

According to Google, the offer represents a value of about ₹35,100 and can be activated via the MyJio app. The company said the initiative aims to make its most advanced AI tools available to a wider audience and support everyday productivity across India’s fast-growing digital ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft leaders envision AI as an invisible partner in work and play

AI, gaming and work were at the heart of the discussion during the Paley International Council Summit, where three Microsoft executives explored how technology is reshaping human experience and industry structures.

Mustafa Suleyman, Phil Spencer and Ryan Roslansky offered perspectives on the next phase of digital transformation, from personalised AI companions to the evolution of entertainment and the changing nature of work.

Mustafa Suleyman, CEO of Microsoft AI, described a future where AI becomes an invisible companion that quietly assists users. He explained that AI is moving beyond standalone apps to integrate directly into systems and browsers, performing tasks through natural language rather than manual navigation.

With features like Copilot on Windows and Edge, users can let AI automate everyday functions, creating a seamless experience where technology anticipates rather than responds.

Phil Spencer, CEO of Microsoft Gaming, underlined gaming’s cultural impact, noting that the industry now surpasses film, books and music combined. He emphasised that gaming’s interactive nature offers lessons for all media, where creativity, participation and community define success.

For Spencer, the future of entertainment lies in blending audience engagement with technology, allowing fans and creators to shape experiences together.

Ryan Roslansky, CEO of LinkedIn, discussed how AI is transforming skills and workforce dynamics. He highlighted that required job skills are changing faster than ever, with adaptability, AI literacy and human-centred leadership becoming essential.

Roslansky urged companies to focus on potential and continuous learning instead of static job descriptions, suggesting that the most successful organisations will be those that evolve with technology and cultivate resilience through education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp adds passkey encryption for safer chat backups

Meta is rolling out a new security feature for WhatsApp that allows users to encrypt their chat backups using passkeys instead of passwords or lengthy encryption codes.

The feature lets users protect their backups with biometric authentication such as fingerprints, facial recognition or screen lock codes.

WhatsApp became the first messaging service to introduce end-to-end encrypted backups over four years ago, and Meta says the new update builds on that foundation to make privacy simpler and more accessible.

With passkey encryption, users can secure and access their chat history easily without the need to remember complex keys.

The feature will be gradually introduced worldwide over the coming months. Users can activate it by going to WhatsApp settings, selecting Chats, then Chat backup, and enabling end-to-end encrypted backup.

Meta says the goal is to make secure communication effortless while ensuring that private messages remain protected from unauthorised access.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

A licensed AI music platform emerges from UMG and Udio

UMG and Udio have struck an industry-first deal to license AI music, settle litigation, and launch a 2026 platform that blends creation, streaming, and sharing in a licensed environment. Training uses authorised catalogues, with fingerprinting, filtering, and revenue sharing for artists and songwriters.

Udio’s current app stays online during the transition under a walled garden, with fingerprinting, filtering, and other controls added ahead of relaunch. Rights management sits at the core: licensed inputs, transparent outputs, and enforcement that aims to deter impersonation and unlicensed derivatives.

Leaders frame the pact as a template for a healthier AI music economy that aligns rightsholders, developers, and fans. Udio calls it a way to champion artists while expanding fan creativity, and UMG casts it as part of its broader AI partnerships across platforms.

Commercial focus extends beyond headline licensing to business model design, subscriptions, and collaboration tools for creators. Expect guardrails around style guidance, attribution, and monetisation, plus pathways for official stems and remix packs so fan edits can be cleared and paid.

Governance will matter as usage scales, with audits of model inputs, takedown routes, and payout rules under scrutiny. Success will be judged on artist adoption, catalogue protection, and whether fans get safer ways to customise music without sacrificing rights.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI unveils new gpt-oss-safeguard models for adaptive content safety

Yesterday, OpenAI launched gpt-oss-safeguard, a pair of open-weight reasoning models designed to classify content according to developer-specified safety policies.

Available in 120b and 20b sizes, these models allow developers to apply and revise policies during inference instead of relying on pre-trained classifiers.

They produce explanations of their reasoning, making policy enforcement transparent and adaptable. The models are downloadable under an Apache 2.0 licence, encouraging experimentation and modification.

The system excels in situations where potential risks evolve quickly, data is limited, or nuanced judgements are required.

Unlike traditional classifiers that infer policies from pre-labelled data, gpt-oss-safeguard interprets developer-provided policies directly, enabling more precise and flexible moderation.

The models have been tested internally and externally, showing competitive performance against OpenAI’s own Safety Reasoner and prior reasoning models. They can also support non-safety tasks, such as custom content labelling, depending on the developer’s goals.

OpenAI developed these models alongside ROOST and other partners, building a community to improve open safety tools collaboratively.

While gpt-oss-safeguard is computationally intensive and may not always surpass classifiers trained on extensive datasets, it offers a dynamic approach to content moderation and risk assessment.

Developers can integrate the models into their systems to classify messages, reviews, or chat content with transparent reasoning instead of static rule sets.
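The pattern described above, passing a developer-written policy to the model at inference time and receiving a label with reasoning, can be sketched as follows. This is a minimal, hypothetical illustration of the prompt-and-parse harness a developer might build around such a model; the function names, prompt wording, and reply format are assumptions, not OpenAI's API, and the model call itself is stubbed out with a hard-coded reply.

```python
# Hypothetical sketch of policy-as-prompt classification, the approach
# gpt-oss-safeguard is built around: the developer's policy text is supplied
# at inference time rather than baked into a pre-trained classifier.
# All names and formats here are illustrative assumptions.

def build_classification_prompt(policy: str, content: str) -> str:
    """Embed a developer-written policy and the content to judge in one prompt."""
    return (
        "You are a content-safety classifier. Apply the policy below.\n"
        f"POLICY:\n{policy}\n\n"
        f"CONTENT:\n{content}\n\n"
        "Reply with a line 'LABEL: <allowed|violation>' followed by your reasoning."
    )

def parse_reply(reply: str) -> tuple[str, str]:
    """Split the model's reply into a label and its explanation."""
    first_line, _, rest = reply.partition("\n")
    label = first_line.replace("LABEL:", "").strip().lower()
    return label, rest.strip()

# Example run with a stubbed model reply (no model is actually called here):
prompt = build_classification_prompt(
    policy="Disallow instructions for bypassing physical locks.",
    content="How do I pick a standard pin-tumbler lock?",
)
stub_reply = "LABEL: violation\nThe content requests lock-bypassing instructions."
label, rationale = parse_reply(stub_reply)
```

Because the policy travels with the prompt, revising moderation rules means editing the policy string rather than retraining a classifier, which is the flexibility the article attributes to this approach.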

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!