Google limits search results to 10 per page

Google has removed the option to display up to 100 search results per page, now showing only 10 results at a time. The change limits visibility for websites beyond the top 10 and may reduce organic traffic for many content creators.

The update also impacts AI systems and automated workflows. Many tools rely on search engines to collect data, index content, or feed retrieval systems. With fewer results per query, these processes require additional searches, slowing data collection and increasing operational costs.
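The overhead is easy to quantify. As a rough sketch (the client and figures below are hypothetical, not from the article), the number of paginated requests scales inversely with page size:

```python
import math

def requests_needed(total_results: int, per_page: int) -> int:
    """Paginated requests required to collect total_results."""
    return math.ceil(total_results / per_page)

# Hypothetical scraper collecting 500 results per query:
before = requests_needed(500, 100)  # 5 requests at 100 results per page
after = requests_needed(500, 10)    # 50 requests at 10 results per page
print(f"{after // before}x more requests for the same data")
```

Tenfold more requests means proportionally more latency, rate-limit pressure, and cost for any pipeline built on scraped search results.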

Content strategists and developers are advised to adapt. Optimising for top-ranked pages, revising SEO approaches, and rethinking data-gathering methods are increasingly important for both human users and AI-driven systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Meta and TikTok agree to comply with Australia’s under-16 social media ban

Meta and TikTok have confirmed they will comply with Australia’s new law banning under-16s from using social media platforms, though both warned it will be difficult to enforce. The legislation, taking effect on 10 December, will require major platforms to remove accounts belonging to users under that age.

The law is among the world’s strictest, but regulators and companies are still working out how it will be implemented. Social media firms face fines of up to A$49.5 million if found in breach, yet they are not required to verify every user’s age directly.

TikTok’s Australia policy head, Ella Woods-Joyce, warned the ban could drive children toward unregulated online spaces lacking safety measures. Meta’s director, Mia Garlick, acknowledged the ‘significant engineering and age assurance challenges’ involved in detecting and removing underage users.

Critics including YouTube and digital rights groups have labelled the ban vague and rushed, arguing it may not achieve its aim of protecting children online. The government maintains that platforms must take ‘reasonable steps’ to prevent young users from accessing their services.


Apple fined over unfair iPhone sales contracts in France

A Paris court has ordered Apple to pay around €39 million to French mobile operators, ruling that the company imposed unfair terms in contracts governing iPhone sales more than a decade ago. The court also fined Apple €8 million and annulled several clauses deemed anticompetitive.

Judges found that Apple required carriers to sell a set number of iPhones at fixed prices, restricted how its products were advertised, and used operators’ patents without compensation. The French consumer watchdog DGCCRF had first raised concerns about these practices years earlier.

Under the ruling, Apple must compensate three of France’s four major mobile networks: Bouygues Telecom, Free, and SFR. The decision applies immediately despite Apple’s appeal, which will be heard at a later date.

Apple said it disagreed with the ruling and would challenge it, arguing that the contracts reflected standard commercial arrangements of the time. French regulators have increasingly scrutinised major tech firms as part of wider efforts to curb unfair market dominance.


Ontario updates deidentification guidelines for safer data use

Ontario’s privacy watchdog has released an expanded set of deidentification guidelines to help organisations protect personal data while enabling innovation. The 100-page document from the Office of the Information and Privacy Commissioner (IPC) offers step-by-step advice, checklists and examples.

The update modernises the 2016 version to reflect global regulatory changes and new data protection practices. The commissioner emphasised that the guidelines aim to help organisations of all sizes responsibly anonymise data while maintaining its usefulness for research, AI development and public benefit.

Developed through broad stakeholder consultation, the guidelines were refined with input from privacy experts and the Canadian Anonymization Network. The new version responds to industry requests for more detailed, operational guidance.

Although the guidelines are not legally binding, experts said following them can reduce liability risks and strengthen compliance with privacy laws. The IPC hopes they will serve as a practical reference for executives and data officers.


OpenAI Foundation to fund global health and AI safety projects

OpenAI has finalised its recapitalisation, simplifying its structure while preserving its core mission. The new OpenAI Foundation controls OpenAI Group PBC and holds about $130 billion in equity, making it one of history’s best-funded philanthropies.

The Foundation will receive further ownership as OpenAI’s valuation grows, ensuring its financial resources expand alongside the company’s success. Its mission remains to ensure that artificial general intelligence benefits all of humanity.

The more the business prospers, the greater the Foundation’s capacity to fund global initiatives.

An initial $25 billion commitment will focus on two core areas: advancing healthcare breakthroughs and strengthening AI resilience. Funds will go toward open-source health datasets, medical research, and technical defences to make AI systems safer and more reliable.

The initiative builds on OpenAI’s existing People-First AI Fund and reflects recommendations from its Nonprofit Commission.

The recapitalisation follows nearly a year of discussions with the Attorneys General of California and Delaware, resulting in stronger governance and accountability. With this structure, OpenAI aims to advance science, promote global cooperation, and share AI benefits broadly.


Most transformative decade begins as Kurzweil’s AI vision unfolds

AI no longer belongs to speculative fiction or distant possibility. In many ways, it has arrived. From machine translation and real-time voice synthesis to medical diagnostics and language generation, today’s systems perform tasks once reserved for human cognition. For those watching closely, this shift feels less like a surprise and more like a milestone reached.

Ray Kurzweil, one of the most prominent futurists of the past half-century, predicted much of what is now unfolding. In 1999, his book The Age of Spiritual Machines laid out a roadmap for how computers would grow exponentially in power and eventually match and surpass human capabilities. Over two decades later, many of his projections for the 2020s have materialised with unsettling accuracy.

The futurist who measured the future

Kurzweil’s work stands out not only for its ambition but for its precision. Rather than offering vague speculation, he produced a set of quantifiable predictions, 147 in total, with a claimed accuracy rate of over 85 percent. These ranged from the growth of mobile computing and cloud-based storage to real-time language translation and the emergence of AI companions.

Since 2012, he has worked at Google as Director of Engineering, contributing to the development of natural language understanding systems. He believes that exponential growth in computing power, driven by Moore’s Law and its successors, will eventually transform both our tools and our biology.

Reprogramming the body with code

One of Kurzweil’s most controversial but recurring ideas is that human ageing is, at its core, a software problem. He believes that by the early 2030s, advancements in biotechnology and nanomedicine could allow us to repair or even reverse cellular damage.

The logic is straightforward: if ageing results from accumulated biological errors, then precise intervention at the molecular level might prevent those errors or correct them in real time.


Some of these ideas are already being tested, though results remain preliminary. For now, claims about extending life remain speculative, but the research trend is real.

Kurzweil’s perspective places biology and computation on a converging path. His view is not that we will become machines, but that we may learn to edit ourselves with the same logic we use to program them.

The brain, extended

Another key milestone in Kurzweil’s roadmap is merging biological and digital intelligence. He envisions a future where nanorobots circulate through the bloodstream and connect our neurons directly to cloud-based systems. In this vision, the brain becomes a hybrid processor, part organic, part synthetic.

By the mid-2030s, he predicts we may no longer rely solely on internal memory or individual thought. Instead, we may access external information, knowledge, and computation in real time. Some current projects, such as brain–computer interfaces and neuroprosthetics, point in this direction, but remain in early stages of development.

Kurzweil frames this not as a loss of humanity but as an expansion of its potential.

The singularity hypothesis

At the centre of Kurzweil’s long-term vision lies the idea of a technological singularity. By 2045, he believes AI will surpass the combined intelligence of all humans, leading to a phase shift in human evolution. This moment, often misunderstood, is not a single event but a threshold after which change accelerates beyond human comprehension.


The singularity, in Kurzweil’s view, does not erase humanity. Instead, it integrates us into a system where biology no longer limits intelligence. The implications are vast, from ethics and identity to access and inequality. Who participates in this future, and who is left out, remains an open question.

Between vision and verification

Critics often label Kurzweil’s forecasts as too optimistic or detached from scientific constraints. Some argue that while trends may be exponential, progress in medicine, cognition, and consciousness cannot be compressed into neat timelines. Others worry about the philosophical consequences of merging with machines.

Still, it is difficult to ignore the number of predictions that have already come true. Kurzweil’s strength lies not in certainty, but in pattern recognition. His work forces a reckoning with what might happen if the current pace of change continues unchecked.

Whether or not we reach the singularity by 2045, the present moment already feels like the future he described.


NVIDIA and Nokia join forces to build the AI platform for 6G

Nokia and NVIDIA have announced a $1 billion partnership to develop an AI-powered platform that will drive the transition from 5G to 6G networks.

The collaboration will create next-generation AI-RAN systems, combining computing, sensing and connectivity to transform how US mobile networks process data and deliver services.

The partnership marks a strategic step in both companies’ ambition to regain global leadership in telecommunications.

By integrating NVIDIA’s new Aerial RAN Computer and Nokia’s AI-RAN software, operators can upgrade existing networks through software updates instead of complete infrastructure replacements.

T-Mobile US will begin field tests in 2026, supported by Dell’s PowerEdge servers.

NVIDIA’s investment and collaboration with Nokia aim to strengthen the foundation for AI-native networks that can handle the rising demand from agentic, generative and physical AI applications.

These networks are expected to support future 6G use cases, including drones, autonomous vehicles and advanced augmented reality systems.

Both companies see AI-RAN as the next evolution of wireless connectivity, uniting data processing and communication at the edge for greater performance, energy efficiency and innovation.


Estimating biological age from routine records with LifeClock

LifeClock, reported in Nature Medicine, estimates biological age from routine health records. Trained on 24.6 million visits and 184 indicators, it offers a low-cost route to precision health beyond simple chronology.

Researchers found two distinct clocks: a paediatric development clock and an adult ageing clock. Specialised models improved accuracy, reflecting scripted growth versus decline. Biomarkers diverged between stages, aligning with growth or deterioration.

LifeClock stratified risk years ahead. In children, clusters flagged malnutrition, developmental disorders, and endocrine issues, including markedly higher odds of pituitary hyperfunction and obesity. Adult clusters signalled future diabetes, stroke, renal failure, and cardiovascular disease.

Performance was strong after fine-tuning: the area under the curve hit 0.98 for current diabetes and 0.91 for future diabetes. EHRFormer outperformed RNN and gradient-boosting baselines across longitudinal records.
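For readers unfamiliar with the metric, the area under the ROC curve equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one (ties counting half). A minimal sketch with invented data, not the study’s own implementation or records:

```python
def roc_auc(labels, scores):
    """AUC via the pairwise-ranking interpretation: the fraction of
    positive/negative pairs where the positive gets the higher score."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(roc_auc(labels, scores))  # 8 of 9 pairs correctly ordered
```

An AUC of 0.98, as reported for current diabetes, means the model orders almost every such pair correctly.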

Authors propose LifeClock for accessible monitoring, personalised interventions, and prevention. Adding wearables and real-time biometrics could refine responsiveness, enabling earlier action on emerging risks and supporting equitable precision medicine at the population scale.


AI-driven diabetes prevention matches human-led programs in clinical trial

Researchers at Johns Hopkins Medicine and the Bloomberg School of Public Health report that an AI-driven diabetes prevention program achieved outcomes comparable to traditional, human-led coaching. The results come from a phase III randomised controlled trial, the first of its kind.

The trial enrolled participants with prediabetes and randomly assigned them to one of four remote human-led programs or an AI app that delivered personalised push notifications guiding diet, exercise and weight management. Over 12 months, both groups were evaluated against CDC benchmarks for risk reduction (e.g. achieving 5% weight loss, meeting activity goals, or reducing A1C).
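A composite benchmark of this kind reduces to a simple any-of check. In this sketch the function name and the activity and A1C thresholds are illustrative assumptions; the article only specifies the 5% weight-loss criterion explicitly:

```python
def meets_cdc_benchmark(weight_loss_pct: float,
                        weekly_activity_min: float,
                        a1c_drop_pct: float) -> bool:
    """True if any one criterion is met (activity and A1C thresholds
    below are illustrative assumptions, not the trial's values)."""
    return (weight_loss_pct >= 5.0          # >= 5% body-weight loss
            or weekly_activity_min >= 150   # assumed weekly activity goal
            or a1c_drop_pct >= 0.2)         # assumed A1C reduction

print(meets_cdc_benchmark(5.4, 90, 0.0))  # True: weight-loss criterion met
print(meets_cdc_benchmark(2.0, 60, 0.0))  # False: no criterion met
```

Because the benchmark is a disjunction, participants can qualify through whichever pathway suits them, which is part of why composite endpoints are common in lifestyle trials.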

After one year, 31.7% of AI-app users and 31.9% of human-led participants met the composite benchmark. Interestingly, the AI arm saw higher initiation rates (93.4% vs 82.7%) and completion rates (63.9% vs 50.3%) than the human-led programs.

The researchers note that scheduling, staffing, and access barriers can limit traditional lifestyle programs. The AI approach, which runs asynchronously and is always available, may help expand reach, especially for underserved populations or when human resources are constrained.

Future work will assess how these findings scale in broader, real-world patient groups and explore cost effectiveness, user preferences and the balance between AI and human support.


Rare but real, mental health risks at ChatGPT scale

OpenAI says a small share of ChatGPT users show possible signs of mental health emergencies each week, including mania, psychosis, or suicidal thoughts. The company estimates the figure at 0.07 percent of weekly users and says safety prompts are triggered in those conversations. Critics argue that even small percentages scale at ChatGPT’s size.

A further 0.15 percent of weekly users discuss explicit indicators of potential suicidal planning or intent. Updates aim to respond more safely and empathetically, and to flag indirect self-harm signals. Sensitive chats can be routed to safer models in a new window.
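The scale critique is simple arithmetic. Taking a hypothetical weekly active user base (the figure below is an assumption for illustration, not an OpenAI number), even tiny shares become large absolute counts:

```python
weekly_users = 800_000_000  # assumed for illustration only

emergency_share = 0.0007  # 0.07% showing possible signs of a crisis
planning_share = 0.0015   # 0.15% discussing suicidal planning or intent

print(f"{weekly_users * emergency_share:,.0f} users/week")
print(f"{weekly_users * planning_share:,.0f} users/week")
```

At that assumed scale, 0.07 percent is over half a million people a week, which is why critics treat “small” shares as a population-level concern.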

More than 170 clinicians across 60 countries advise OpenAI on risk cues and responses. Guidance focuses on encouraging users to seek real-world support. Researchers warn vulnerable people may struggle to act on on-screen warnings.

External specialists see both value and limits. AI may widen access when services are stretched, yet automated advice can mislead. Risks include reinforcing delusions and misplaced trust in authoritative-sounding output.

Legal and public scrutiny is rising after high-profile cases linked to chatbot interactions. Families and campaigners want more transparent accountability and stronger guardrails. Regulators continue to debate transparency, escalation pathways, and duty of care.
