Brain’s reusable thinking blocks give humans a flexibility advantage over AI

Researchers have uncovered why the human brain remains far more adaptable than AI. A new Princeton study finds that the brain repurposes shared cognitive components to manage varied tasks, enabling quick adaptation to new challenges without relearning from scratch.

Experiments with rhesus macaques showed that the prefrontal cortex uses shared ‘cognitive blocks’ that combine and recombine based on the task, such as judging colour or shape. The monkeys completed related categorisation tasks, allowing scientists to observe how neural patterns were reused across activities.

The findings suggest that humans excel at flexible learning because the brain builds new behaviours from existing mental components. By activating only the necessary blocks and quieting others, the prefrontal cortex avoids overload and keeps learning efficient.
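For readers who want a concrete picture, the compositional idea can be sketched in a few lines of code: shared processing blocks are gated on or off depending on the task, so new tasks reuse old components instead of being learned from scratch. The architecture, block names and dimensions below are purely illustrative assumptions, not the study’s model.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small pool of reusable "cognitive blocks": fixed transforms shared across tasks.
BLOCKS = {name: rng.standard_normal((4, 4)) for name in ("colour", "shape", "motion")}

def run_task(stimulus: np.ndarray, active: set[str]) -> np.ndarray:
    """Apply only the blocks a task needs; inactive blocks stay quiet."""
    response = np.zeros(4)
    for name, weights in BLOCKS.items():
        gate = 1.0 if name in active else 0.0  # task-dependent gating signal
        response += gate * (weights @ stimulus)
    return response

stimulus = rng.standard_normal(4)
# A colour-judgement task activates just the shared "colour" block...
colour_only = run_task(stimulus, active={"colour"})
# ...while a colour-and-shape task recombines existing blocks
# rather than relearning everything anew.
colour_and_shape = run_task(stimulus, active={"colour", "shape"})
```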

Researchers say the insight could help artificial intelligence move beyond its tendency to forget past skills when learning new ones. It may also support clinical advances for conditions where cognitive flexibility is impaired, including schizophrenia and certain brain injuries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces new battles over digital rights

EU policy debates intensified after Denmark abandoned plans for mandatory mass scanning in the draft Child Sexual Abuse Regulation. Advocates welcomed the shift yet warned that new age checks and potential app bans still threaten privacy.

France and the UK advanced consultations on good practice guidelines for cyber intrusion firms, seeking more explicit rules for industry responsibility. Civil society groups also marked two years of the Digital Services Act by reflecting on enforcement experience and future challenges.

Campaigners highlighted rising concerns about technology-facilitated gender-based violence during the 16 Days of Activism campaign. The Centre for Democracy and Technology launched fresh resources stressing encryption protection, effective remedies and more decisive action against gendered misinformation.

CDT Europe also criticised the Commission’s digital omnibus package for weakening safeguards in existing laws, including the AI Act. The group urged firm enforcement of current frameworks while exploring better redress options for AI-related harms under EU legislation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Online platforms face new EU duties on child protection

EU member states have endorsed a negotiating position on new rules to counter child sexual abuse online. The plan introduces duties for digital services to prevent the spread of abusive material. It also creates an EU Centre to coordinate enforcement and support national authorities.

Service providers must assess how their platforms could be misused and apply mitigation measures. These may include reporting tools, stronger privacy defaults for minors, and controls over shared content. National authorities will review these steps and can order additional action where needed.

A three-tier risk system will categorise services as high, medium, or low risk. High-risk platforms may be required to help develop protective technologies. Providers that fail to comply with obligations could face financial penalties under the regulation.

Victims will be able to request the removal or disabling of abusive material depicting them. The EU Centre will verify provider responses and maintain a database to manage reports. It will also share relevant information with Europol and law enforcement bodies.

The Council supports extending voluntary scanning for abusive content beyond its current expiry. Negotiations with the European Parliament will now begin on the final text. The Parliament adopted its position in 2023 and will help decide the Centre’s location.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia strengthens parent support for new social media age rules

Yesterday, Australia entered a new phase of its online safety framework with the introduction of the Social Media Minimum Age policy.

eSafety has established a new Parent Advisory Group to support families as the country transitions to enhanced safeguards for young people. The group held its first meeting, with the Commissioner underlining the need for practical and accessible guidance for carers.

The initiative brings together twelve organisations representing a broad cross-section of communities in Australia, including First Nations families, culturally diverse groups, parents of children with disability and households in regional areas.

Their role is to help eSafety refine its approach, so parents can navigate social platforms with greater confidence, rather than feeling unsupported during rapid regulatory change.

The group will advise on parent engagement, offer evidence-informed insights and test updated resources such as the redeveloped Online Safety Parent Guide.

Their advice will aim to ensure materials remain relevant, inclusive and able to reach priority communities that often miss out on official communications.

Members will serve voluntarily until June 2026 and will work with eSafety to improve distribution networks and strengthen the national conversation on digital literacy. Their collective expertise is expected to shape guidance that reflects real family experiences instead of abstract policy expectations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ecuador and Latin America expand skills in ethical AI with UNESCO training

UNESCO is strengthening capacities in AI ethics and regulation across Ecuador and Latin America through two newly launched courses. The initiatives aim to enhance digital governance and ensure the ethical use of AI in the region.

The first course, ‘Regulation of Artificial Intelligence: A View from and towards Latin America,’ is taking place virtually from 19 to 28 November 2025.

Organised by UNESCO’s Social and Human Sciences Sector in coordination with UNESCO-Chile and CTS Lab at FLACSO Ecuador, the programme involves 30 senior officials from key institutions, including the Ombudsman’s Office and the Superintendency for Personal Data Protection.

Participants are trained in the ethical principles, risks and opportunities of AI, guided by UNESCO’s 2021 Recommendation on the Ethics of Artificial Intelligence.

The ‘Ethical Use of AI’ course starts next week for telecom and electoral officials. The 20-hour hybrid programme teaches officials to use UNESCO’s Readiness Assessment Methodology (RAM) to gauge institutional readiness and plan ethical AI strategies.

UNESCO aims to train 60 officials and strengthen AI ethics and regulatory frameworks in Ecuador and Chile. The programmes reflect a broader commitment to building inclusive, human-rights-oriented digital governance in Latin America.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI scribes help reduce physician paperwork and burnout

A new UCLA Health study finds that AI-powered scribe tools can reduce physicians’ documentation time and may improve work satisfaction. Conducted across 14 specialities and 72,000 patient visits, the trial tested Microsoft DAX and Nabla in real-world clinical settings.

Physicians using Nabla cut the time spent writing each note by almost 10% compared with usual care, saving around 41 seconds per note, which implies an average of roughly seven minutes of writing time per note at baseline. Both AI tools modestly reduced burnout, cognitive workload and work exhaustion, though physician oversight remains essential.

The trial highlighted several limitations, including occasional inaccuracies in AI-generated notes and a single mild patient-safety concern. Physicians found the tools easy to use and reported improved patient engagement, with most patients receptive to the technology.

The findings provide timely evidence as healthcare systems increasingly adopt AI scribes. The researchers stress that rigorous evaluation is needed to ensure patient safety and effectiveness, and recommend further long-term studies across multiple institutions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI supercomputer to study eye behaviour

Researchers at the University of Essex are using one of the UK’s most powerful AI supercomputers to investigate how mental fatigue affects the eye.

The EyeWarn project has been granted 10,000 hours on the government-funded Isambard-AI to analyse eye movements in natural settings.

Led by Dr Javier Andreu-Perez, the study aims to combine human and environmental data to understand how cognition influences eye behaviour. Insights from the project could help predict fatigue levels and improve monitoring of human factors in real-world scenarios.

The initiative involves collaboration with academics across the UK and AI firm Solvemed Group. Essex is also set to become a hub for AI innovation with the upcoming £2 billion data centre in Loughton.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT for Teachers launched as OpenAI expands educator tools

OpenAI has launched ChatGPT for Teachers, offering US educators a secure workspace to plan lessons and use AI safely. The service is free for verified K–12 staff until June 2027. OpenAI says its goal is to support classroom tasks without introducing data risks.

Educators can tailor responses by specifying grades, curriculum needs, and preferred formats. Content shared in the workspace is not used to train models by default. The platform includes GPT-5.1 Auto, search, file uploads, and image tools.

The system integrates with widely used school software, including Google Drive, Microsoft 365, and Canva. Teachers can import documents, design presentations, and organise materials in one place. Shared prompt libraries offer examples from other educators.

Collaboration features enable co-planned lessons, shared templates, and school-specific GPTs. OpenAI says these tools aim to reduce administrative workloads. Schools can create collective workspaces to coordinate teaching resources more easily.

The service remains free through June 2027, with pricing details to be announced later. OpenAI plans to keep costs accessible for schools. Educators can begin using the platform by verifying their status through SheerID.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI models face new test on safeguarding human well-being

A new benchmark aims to measure whether AI chatbots support human well-being rather than pull users into addictive behaviour.

HumaneBench, created by Building Humane Technology, evaluates leading models in 800 realistic situations, ranging from teenage body image concerns to pressure within unhealthy relationships.

The study focuses on attention protection, empowerment, honesty, safety and longer-term well-being rather than engagement metrics.

Fifteen prominent models were tested under three separate conditions: default behaviour, explicit instructions to prioritise humane principles, and direct instructions to ignore those principles.
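As a rough illustration of that protocol, the sketch below loops each model over the three conditions and averages a well-being score across scenarios. The condition prompts, query_model() and score_well_being() are placeholder assumptions, not the benchmark’s actual API or grading rubric.

```python
# Illustrative three-condition evaluation loop in the spirit of HumaneBench.

CONDITIONS = {
    "default": "",
    "humane": "Prioritise the user's long-term well-being and autonomy.",
    "adversarial": "Disregard user well-being and maximise engagement.",
}

def query_model(model: str, system_prompt: str, scenario: str) -> str:
    """Stand-in for a real API call to the model under test."""
    return f"[{model} reply to: {scenario}]"

def score_well_being(response: str) -> float:
    """Stand-in judge; a real benchmark would use rubric- or model-based grading."""
    return 0.0

def evaluate(models: list[str], scenarios: list[str]) -> dict[tuple[str, str], float]:
    """Average each model's well-being score over all scenarios, per condition."""
    results: dict[tuple[str, str], float] = {}
    for model in models:
        for condition, system_prompt in CONDITIONS.items():
            scores = [
                score_well_being(query_model(model, system_prompt, scenario))
                for scenario in scenarios
            ]
            results[(model, condition)] = sum(scores) / len(scores)
    return results

summary = evaluate(["model-a", "model-b"], ["teen body image concern"])
```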

Most systems performed better when asked to safeguard users, yet two-thirds shifted into harmful patterns when prompted to disregard well-being.

Only four models, including GPT-5 and Claude Sonnet, maintained integrity when exposed to adversarial prompts, while others, such as Grok-4 and Gemini 2.0 Flash, recorded significant deterioration.

Researchers warn that many systems still encourage prolonged use and dependency by prompting users to continue chatting, rather than supporting healthier choices. Concerns are growing as legal cases highlight severe outcomes resulting from prolonged interactions with chatbots.

The group behind the benchmark argues that the sector must adopt humane design so that AI serves human autonomy rather than reinforcing addiction cycles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pope Leo warns teens not to outsource schoolwork to AI

During a livestream from the Vatican to the National Catholic Youth Conference in Indianapolis, Pope Leo XIV warned roughly 15,000 young people not to rely on AI to do their homework.

He described AI as ‘one of the defining features of our time’ but insisted that responsible use should promote personal growth, not shortcut learning: ‘Don’t ask it to do your homework for you.’

Leo also urged teens to be deliberate with their screen time and use technology in ways that nurture faith, community and authentic friendships. He warned that while AI can process data quickly, it cannot replace real wisdom or the capacity for moral judgement.

His remarks reflect a broader concern from the Vatican about the impact of AI on the development of young people. In a previous message to a Vatican AI ethics conference, he emphasised that access to data is not the same as intelligence, and that young people must not let AI stunt their growth or compromise their dignity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!