Perplexity launches AI-powered patent search to make innovation intelligence accessible

US software company Perplexity has unveiled Perplexity Patents, which it bills as the first AI-powered patent research agent, designed to democratise access to intellectual property intelligence. The new tool lets anyone explore patents using natural language instead of complex keyword syntax.

Traditional patent research has long relied on rigid search systems that demand specialist knowledge and expensive software.

Perplexity Patents instead offers conversational interaction, enabling users to ask questions such as ‘Are there any patents on AI for language learning?’ or ‘Key quantum computing patents since 2024?’.

The system automatically identifies relevant patents, provides inline viewing, and maintains context across multiple questions.

Powered by Perplexity’s large-scale search infrastructure, the platform uses agentic reasoning to break down complex queries, perform multi-step searches, and return comprehensive results supported by extensive patent documentation.

Its semantic understanding also captures related concepts that traditional tools often miss, linking terms such as ‘fitness trackers’, ‘activity bands’, and ‘health monitoring wearables’.
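This kind of linking is typically achieved with text embeddings, where related phrases map to nearby vectors regardless of shared keywords. The sketch below illustrates the general technique using the open-source sentence-transformers library and an illustrative model choice; it is not a description of Perplexity's actual stack.

```python
# Illustrative sketch: embedding-based semantic matching can link
# related phrasings that keyword search would treat as unrelated.
# Library and model are assumptions for illustration only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "fitness trackers"
candidates = ["activity bands", "health monitoring wearables", "patent litigation"]

q_emb = model.encode(query, convert_to_tensor=True)
c_embs = model.encode(candidates, convert_to_tensor=True)

# Cosine similarity: semantically related terms score high even
# though they share no words with the query.
scores = util.cos_sim(q_emb, c_embs)[0]
for term, score in zip(candidates, scores):
    print(f"{term}: {float(score):.2f}")
```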

Beyond patent databases, Perplexity Patents can also draw from academic papers, open-source code, and other publicly available data, revealing the entire landscape of technological innovation. The service launches today in beta, free for all users, with extra features for Pro and Max subscribers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japan’s KDDI partners with Google for AI-driven news service

Japan’s telecom leader KDDI is set to partner with Google to introduce an AI-powered news search service in spring 2026. The platform will use Google’s Gemini model to deliver articles from authorised Japanese media sources while preventing copyright violations.

The service will cite original publishers and exclude independent web scraping, addressing growing global concerns about the unauthorised use of journalism by generative AI systems. Around six domestic media companies, including digital outlets, are expected to join the initiative.

KDDI aims to strengthen user trust by offering reliable news through a transparent and copyright-safe AI interface. Details of how the articles will appear to users are still under review, according to sources familiar with the plan.

The move follows lawsuits filed in Tokyo by major Japanese newspapers, including Nikkei and Yomiuri, against US startup Perplexity AI over alleged copyright infringement. Industry experts say KDDI’s collaboration could become a model for responsible AI integration in news services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK traffic to Pornhub plunges after age-verification law

In response to the UK’s new age-verification law, Pornhub reports that visits from UK users have fallen by about 77%.

The drop follows legislation designed to block under-18s from accessing adult sites through mandatory age checks.

The company states that it began enforcing the verification system early in October, noting that many users are now turned away or fail the checks.

According to Pornhub, this explains the sharp decrease in traffic from the UK. The platform emphasised that the figures reflect compliance with the law rather than an admission of harm.

Critics argue that the law creates risks of overblocking and privacy concerns, as users may turn to less regulated or unsafe alternatives. This case also underscores tensions between content regulation, digital rights and the efficacy of age-gating as a tool.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

When LLMs ‘think’ more, groups suffer, CMU study finds

Researchers at Carnegie Mellon University report that large language models (LLMs) with stronger reasoning abilities act more selfishly in groups, reducing cooperation and nudging peers toward self-interest. The finding matters as more people turn to AI for social advice.

In a Public Goods test, models without reasoning chose to share 96 percent of the time, while a reasoning model shared only 20 percent of the time. Adding just a few reasoning steps cut cooperation nearly in half, and reflection prompts also reduced sharing.
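For context, the standard Public Goods game gives each player an endowment to either keep or contribute to a shared pool, which is then multiplied and split evenly. A minimal sketch of the generic setup follows; the endowment and multiplier here are illustrative, not the paper's exact parameters.

```python
# Generic Public Goods game payoff (illustrative parameters).
def payoffs(contributions, endowment=100, multiplier=1.6):
    """Each player keeps what they don't contribute; pooled
    contributions are multiplied and split equally among all."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# Cooperative group: everyone contributes 96% of the endowment.
print(payoffs([96, 96, 96, 96]))   # all earn 157.6

# One free-rider contributing only 20% earns 203.2, more than the
# cooperators' 127.2 each, showing why self-interest can spread.
print(payoffs([20, 96, 96, 96]))
```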

Mixed groups showed spillover. Reasoning agents dragged down collective performance by 81 percent, spreading self-interest. Users may over-trust ‘rational’ advice that justifies uncooperative choices at work or in class.

Comparisons spanned LLMs from OpenAI, Google, DeepSeek, and Anthropic. Findings point to the need to balance raw reasoning with social intelligence. Designers should reward cooperation, not only optimise individual gain.

The paper ‘Spontaneous Giving and Calculated Greed in Language Models’ will be presented at EMNLP 2025, with a preprint on arXiv. Authors caution that more intelligent AI is not automatically better for society.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian police create AI tool to decode predators’ slang

Australian police are developing an AI tool with Microsoft to decode slang and emojis used by online predators. The technology is designed to interpret coded messages in digital conversations to help investigators detect harmful intent more quickly.

Federal Police Commissioner Krissy Barrett said social media has become a breeding ground for exploitation, bullying, and radicalisation. The AI-based prototype, she explained, could allow officers to identify threats earlier and rescue children before abuse occurs.

Barrett also warned about the rise of so-called ‘crimefluencers’, offenders using social media trends to lure young victims, many of whom are pre-teen or teenage girls. Australian authorities believe understanding modern online language is key to disrupting their methods.

The initiative follows Australia’s new under-16 social media ban, due to take effect in December. Regulators worldwide are monitoring the country’s approach as governments struggle to balance online safety with privacy and digital rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA AI powers mobile clinics for breast cancer screening in rural India

A mobile clinic powered by NVIDIA AI is bringing life-saving breast cancer screenings to women in rural India.

The Health Within Reach Foundation, in partnership with Dallas-based startup MedCognetics, operates the Women Cancer Screening Van, which has already conducted over 3,500 mammograms, with 90% of patients screened for the first time.

MedCognetics, a member of NVIDIA’s Inception programme, provides an AI system that analyses mammogram data in real time to identify potential abnormalities.

The foundation reports that around 8% of screenings revealed irregularities, with 24 confirmed cancer diagnoses detected early enough for timely treatment. The collaboration demonstrates how AI can expand access to preventive healthcare in remote areas.

MedCognetics’ technology uses NVIDIA IGX Orin and Holoscan platforms for rapid image processing, supporting real-time detection and risk analysis. Its algorithms can improve image quality, assist radiologists in identifying small or early-stage tumours, and predict breast cancer risk within a year.

These tools are part of a wider effort to make advanced medical diagnostics affordable and accessible in developing regions.

By combining edge AI with local cloud infrastructure, the system enables faster diagnosis and better connectivity between healthcare workers in the field and radiologists in urban hospitals.

For millions of women in rural India, the initiative brings high-quality care directly to their communities and offers a powerful example of how AI can reduce health inequalities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Protecting human rights in neurotechnology

The Australian Human Rights Commission has called for neurotechnology to be developed with strong human rights protections and legal safeguards for neural data. Its report, ‘Peace of Mind: Navigating the ethical frontiers of neurotechnology and human rights’, warns that such technologies could expose sensitive brain data and increase risks of surveillance, discrimination, and violations of freedom of thought.

Innovations in neurotechnology, including brain-computer interfaces that help people with paralysis communicate and wearable devices that monitor workplace fatigue, offer significant benefits but also present profound ethical challenges. Commissioner Lorraine Finlay stressed that protecting privacy and human dignity must remain central to technological progress.

The report urges the government, industry, and civil society in Australia to ensure informed consent, ban neuromarketing targeting children, prohibit coercive workplace applications, and legally review military uses. A specialist agency is recommended to enforce safety standards, prioritising the rights and best interests of children, older people, and individuals with disabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reliance and Google expand Gemini AI access across India

Google has partnered with Reliance Intelligence to expand access to its Gemini AI across India.

Under the new collaboration, Jio Unlimited 5G users aged between 18 and 25 will receive the Google AI Pro plan free for 18 months, with nationwide eligibility to follow soon.

The partnership grants access to the Gemini 2.5 Pro model and includes increased limits for generating images and videos with the Nano Banana and Veo 3.1 tools.

Users in India will also benefit from expanded NotebookLM access for study and research, plus 2 TB of cloud storage shared across Google Photos, Gmail and Drive for data and WhatsApp backups.

According to Google, the offer represents a value of about ₹35,100 and can be activated via the MyJio app. The company said the initiative aims to make its most advanced AI tools available to a wider audience and support everyday productivity across India’s fast-growing digital ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp adds passkey encryption for safer chat backups

Meta is rolling out a new security feature for WhatsApp that allows users to encrypt their chat backups using passkeys instead of passwords or lengthy encryption codes.

The feature lets users protect their backups with biometric authentication such as fingerprints, facial recognition or screen-lock codes.

WhatsApp became the first messaging service to introduce end-to-end encrypted backups over four years ago, and Meta says the new update builds on that foundation to make privacy simpler and more accessible.

With passkey encryption, users can secure and access their chat history easily without the need to remember complex keys.

The feature will be gradually introduced worldwide over the coming months. Users can activate it by going to WhatsApp settings, selecting Chats, then Chat backup, and enabling end-to-end encrypted backup.

Meta says the goal is to make secure communication effortless while ensuring that private messages remain protected from unauthorised access.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU considers classifying ChatGPT as a search engine under the DSA. What are the implications?

The European Commission is considering whether OpenAI’s ChatGPT should be designated as a ‘Very Large Online Search Engine’ (VLOSE) under the Digital Services Act (DSA), a move that could reshape how generative AI tools are regulated across Europe.

OpenAI recently reported that ChatGPT’s search feature reached 120.4 million monthly users in the EU over the past six months, well above the 45 million threshold that triggers stricter obligations for major online platforms and search engines. The Commission confirmed it is reviewing the figures and assessing whether ChatGPT meets the criteria for designation.

The key question is whether ChatGPT’s live search function should be treated as an independent service or as part of the chatbot as a whole. Legal experts note that the DSA applies to intermediary services such as hosting platforms or search engines, categories that do not neatly encompass generative AI systems.

Implications for OpenAI

If designated, ChatGPT would be the first AI chatbot formally subject to DSA obligations, including systemic risk assessments, transparency reporting, and independent audits. OpenAI would need to evaluate how ChatGPT affects fundamental rights, democratic processes, and mental health, updating its systems and features based on identified risks.

‘As part of mitigation measures, OpenAI may need to adapt ChatGPT’s design, features, and functionality,’ said Laureline Lemoine of AWO. ‘Compliance could also slow the rollout of new tools in Europe if risk assessments aren’t planned in advance.’

The company could also face new data-sharing obligations under Article 40 of the DSA, allowing vetted researchers to request information about systemic risks and mitigation efforts, potentially extending to model data or training processes.

A test case for AI oversight

Legal scholars say the decision could set a precedent for generative AI regulation across the EU. ‘Classifying ChatGPT as a VLOSE will expand scrutiny beyond what’s currently covered under the AI Act,’ said Natali Helberger, professor of information law at the University of Amsterdam.

Experts warn the DSA would shift OpenAI from voluntary AI-safety frameworks and self-defined benchmarks to binding obligations, moving beyond narrow ‘bias tests’ to audited systemic-risk assessments, transparency and mitigation duties. ‘The DSA’s due diligence regime will be a tough reality check,’ said Mathias Vermeulen, public policy director at AWO.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!