Google limits search results to 10 per page

Google has removed the option to display up to 100 search results per page, now showing only 10 results at a time. The change limits visibility for websites beyond the top 10 and may reduce organic traffic for many content creators.

The update also impacts AI systems and automated workflows. Many tools rely on search engines to collect data, index content, or feed retrieval systems. With fewer results per query, these processes require additional searches, slowing data collection and increasing operational costs.
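The cost increase is straightforward to quantify: a pipeline that once gathered 100 results in a single request now needs ten requests for the same data. A minimal sketch of the arithmetic (the helper function below is illustrative only, not any real search API):

```python
import math

def requests_needed(total_results: int, page_size: int) -> int:
    """Number of paginated requests required to collect total_results."""
    return math.ceil(total_results / page_size)

# Collecting 1,000 results per query:
old = requests_needed(1000, 100)  # 10 requests at 100 results per page
new = requests_needed(1000, 10)   # 100 requests at 10 results per page
print(old, new)  # 10 100
```

A tenfold increase in request volume translates directly into longer collection times and, for tools that pay per request, higher operational costs.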

Content strategists and developers are advised to adapt. Optimising for top-ranked pages, revising SEO approaches, and rethinking data-gathering methods are increasingly important for both human users and AI-driven systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT offers wellness checks for long chat sessions

OpenAI has introduced new features in ChatGPT to encourage healthier use for people who spend extended periods chatting with the AI. Users may see a pop-up message reading ‘Just checking in. You’ve been chatting for a while, is this a good time for a break?’.

Users can dismiss the reminder or continue chatting, helping to curb excessive screen time without being restrictive. The update also changes how ChatGPT handles high-stakes personal decisions.

ChatGPT will not give direct advice on sensitive topics such as relationships, but instead asks questions and encourages reflection, helping users consider their options safely.

OpenAI acknowledged that AI can feel especially personal for vulnerable individuals. Earlier versions sometimes struggled to recognise signs of emotional dependency or distress.

The company is improving the model to detect these cases and direct users to evidence-based resources when needed, making long interactions safer and more mindful.

Meta and TikTok agree to comply with Australia’s under-16 social media ban

Meta and TikTok have confirmed they will comply with Australia’s new law banning under-16s from using social media platforms, though both warned it will be difficult to enforce. The legislation, taking effect on 10 December, will require major platforms to remove accounts belonging to users under that age.

The law is among the world’s strictest, but regulators and companies are still working out how it will be implemented. Social media firms face fines of up to A$49.5 million if found in breach, yet they are not required to verify every user’s age directly.

TikTok’s Australia policy head, Ella Woods-Joyce, warned the ban could drive children toward unregulated online spaces lacking safety measures. Meta’s director, Mia Garlick, acknowledged the ‘significant engineering and age assurance challenges’ involved in detecting and removing underage users.

Critics including YouTube and digital rights groups have labelled the ban vague and rushed, arguing it may not achieve its aim of protecting children online. The government maintains that platforms must take ‘reasonable steps’ to prevent young users from accessing their services.

NVIDIA expands open-source AI models to boost global innovation

US tech giant NVIDIA has released open-source AI models and data tools spanning language, biology and robotics to accelerate innovation and expand access to cutting-edge research.

The new model families (Nemotron, Cosmos, Isaac GR00T and Clara) are designed to let developers build intelligent agents and applications with enhanced reasoning and multimodal capabilities.

The company is contributing these open models and datasets to Hugging Face, further solidifying its position as a leading supporter of open research.

Nemotron models improve reasoning for digital AI agents, while Cosmos and Isaac GR00T enable physical AI and robotic systems to perform complex simulations and behaviours. Clara advances biomedical AI, allowing scientists to analyse RNA, generate 3D protein structures and enhance medical imaging.

Major industry partners, including Amazon Robotics, ServiceNow, Palantir and PayPal, are already integrating NVIDIA’s technologies to develop next-generation AI agents.

The initiative reflects NVIDIA’s aim to create an open ecosystem that supports both enterprise and scientific innovation through accessible, transparent and responsible AI.

Labels press platforms to curb AI slop and protect artists

Luke Temple woke to messages about a new Here We Go Magic track he never made. An AI-generated song appeared on the band’s Spotify, Tidal, and YouTube pages, triggering fresh worries about impersonation as cheap tools flood platforms.

Platforms say defences are improving. Spotify confirmed the removal of the fake track and highlighted new safeguards against impersonation, plus a tool to flag mismatched releases pre-launch. Tidal said it removed the song and is upgrading AI detection. YouTube did not comment.

Industry teams describe a cat-and-mouse race. Bad actors exploit third-party distributors with light verification, slipping AI pastiches into official pages. Tools like Suno and Udio enable rapid cloning, encouraging volume spam that targets dormant and lesser-known acts.

Per-track revenue losses are tiny; reputational damage is not. Artists warn that identity theft and fan confusion erode trust, especially when fakes sit beside legitimate catalogues or mimic deceased performers. Labels caution that the volume of fakes is outpacing takedowns across major services.

Proposed fixes include stricter distributor onboarding, verified artist controls, watermark detection, and clear AI labels for listeners. Rights holders want faster escalation and penalties for repeat offenders. Musicians monitor profiles and report issues, yet argue platforms must shoulder the heavier lift.

Adobe launches AI Assistant to simplify creative design

Adobe has launched a new AI Assistant in Express, enabling users to create and edit content from concept to completion in minutes. The tool understands design context and lets users create on-brand visuals by describing their ideas.

Users can seamlessly adjust fonts, images, backgrounds, and other elements while keeping the rest of the design intact.

The AI Assistant integrates generative AI models with Adobe’s professional tools, turning templates into conversational canvases. Users can make targeted edits, replace objects, or transform designs without starting over.

The assistant also interprets subjective requests, suggesting creative options and offering contextual prompts to refine results efficiently, enhancing both speed and quality of content creation.

Adobe Express will extend the AI Assistant with enterprise-grade features, including template locking, batch creation, and brand consistency tools. Early adopters report that non-designers can now produce professional visuals quickly, while experienced designers save time on routine tasks.

Organisations can expect improved collaboration, efficiency, and consistency across content supply chains.

The AI Assistant beta is currently available to Adobe Express Premium customers on desktop, with full availability planned for all users via the Firefly generative credit system. Adobe stresses that AI enhances creativity, respects creators’ rights, and supports responsible generative AI use.

Rare but real: mental health risks at ChatGPT scale

OpenAI says a small share of ChatGPT users show possible signs of mental health emergencies each week, including mania, psychosis, or suicidal thoughts. The company estimates 0.07 percent of weekly users are affected and says safety prompts are triggered in those conversations. Critics argue that small percentages scale at ChatGPT’s size: across hundreds of millions of weekly users, 0.07 percent amounts to hundreds of thousands of people.

A further 0.15 percent of weekly users discuss explicit indicators of potential suicidal planning or intent. Updates aim to respond more safely and empathetically, and to flag indirect self-harm signals. Sensitive chats can be routed to safer models in a new window.

More than 170 clinicians across 60 countries advise OpenAI on risk cues and responses. Guidance focuses on encouraging users to seek real-world support. Researchers warn vulnerable people may struggle to act on on-screen warnings.

External specialists see both value and limits. AI may widen access when services are stretched, yet automated advice can mislead. Risks include reinforcing delusions and misplaced trust in authoritative-sounding output.

Legal and public scrutiny is rising after high-profile cases linked to chatbot interactions. Families and campaigners want more transparent accountability and stronger guardrails. Regulators continue to debate transparency, escalation pathways, and duty of care.

New AI boards help Pinterest users refine taste and shop

Pinterest is giving boards an AI upgrade, adding smarter recommendations, fresher layouts, and built-in shopping to help users move from ideas to action worldwide over the coming months.

New tabs tailor each board: ‘Make it yours’ for fashion and some home decor, ‘More ideas’ for categories like beauty or recipes, and ‘All saves’ as a single place to find everything previously pinned.

In the US and Canada, ‘Styled for you’ creates dynamic AI collages from saved fashion Pins, letting people mix and match apparel and accessories, preview outfits, and shop items that fit their taste.

Pinterest is also testing ‘Boards made for you’, personalised boards curated with editorial input and AI picks, delivered to home feeds and inboxes, featuring trending styles, weekly outfit ideas, and shoppable looks.

Executives say boards remain central to Pinterest’s experience; the new AI features aim to act like a personal shopping assistant while keeping curation simple and privacy-respecting by design.

Adobe Firefly expands with new AI tools for audio and video creation

Adobe has unveiled major updates to its Firefly creative AI studio, introducing advanced audio, video, and imaging tools at the Adobe MAX 2025 conference.

These new features include Generate Soundtrack for licensed music creation, Generate Speech for lifelike multilingual voiceovers, and a timeline-based video editor that integrates seamlessly with Firefly’s existing creative tools.

The company also launched the Firefly Image Model 5, which can produce photorealistic 4MP images with prompt-based editing. Firefly now includes partner models from Google, OpenAI, ElevenLabs, Topaz Labs, and others, bringing the industry’s top AI capabilities into one unified workspace.

Adobe also announced Firefly Custom Models, allowing users to train AI models to match their personal creative style.

In a preview of future developments, Adobe showcased Project Moonlight, a conversational AI assistant that connects across creative apps and social channels to help creators move from concept to content in minutes.

The system can offer tailored suggestions and automate parts of the creative process while keeping creators in complete control.

Adobe emphasised that Firefly is designed to enhance human creativity rather than replace it, offering responsible AI tools that respect intellectual property rights.

With this release, the company continues to integrate generative AI across its ecosystem to simplify production and empower creators at every stage of their workflow.

Elon Musk launches AI-powered Grokipedia to rival Wikipedia

Elon Musk has launched Grokipedia, an AI-driven online encyclopedia developed by his company xAI. The platform, described as an alternative to Wikipedia, debuted on Monday with over 885,000 articles written and verified by AI.

Musk claimed the early version already surpasses Wikipedia in quality and transparency, promising significant improvements with the release of version 1.0.

Unlike Wikipedia’s crowdsourced model, Grokipedia does not allow users to edit content directly. Instead, users can request modifications through xAI’s chatbot Grok, which decides whether to implement changes and explains its reasoning.

Musk said the project’s guiding principle is ‘the truth, the whole truth, and nothing but the truth,’ acknowledging the platform’s imperfections while pledging continuous refinement.

However, Grokipedia’s launch has raised questions about originality. Several entries contain disclaimers crediting Wikipedia under a Creative Commons licence, with some articles appearing nearly identical.

Musk confirmed awareness of the issue and stated that improvements are expected before the end of the year. The Wikimedia Foundation, which operates Wikipedia, responded calmly, noting that human-created knowledge remains at the heart of its mission.
