ChatGPT for Teachers launched as OpenAI expands educator tools

OpenAI has launched ChatGPT for Teachers, offering US educators a secure workspace to plan lessons and use AI safely. The service is free for verified K–12 staff until June 2027. OpenAI states that its goal is to support classroom tasks without introducing data risks.

Educators can tailor responses by specifying grades, curriculum needs, and preferred formats. Content shared in the workspace is not used to train models by default. The platform includes GPT-5.1 Auto, search, file uploads, and image tools.

The system integrates with widely used school software, including Google Drive, Microsoft 365, and Canva. Teachers can import documents, design presentations, and organise materials in one place. Shared prompt libraries offer examples from other educators.

Collaboration features enable co-planned lessons, shared templates, and school-specific GPTs. OpenAI says these tools aim to reduce administrative workloads. Schools can create collective workspaces to coordinate teaching resources more easily.

The service remains free through June 2027, with pricing details to follow. OpenAI plans to keep costs accessible for schools. Educators can begin using the platform by verifying their status through SheerID.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI models face new test on safeguarding human well-being

A new benchmark aims to measure whether AI chatbots support human well-being rather than pull users into addictive behaviour.

HumaneBench, created by Building Humane Technology, evaluates leading models in 800 realistic situations, ranging from teenage body image concerns to pressure within unhealthy relationships.

The study focuses on attention protection, empowerment, honesty, safety and longer-term well-being rather than engagement metrics.

Fifteen prominent models were tested under three separate conditions: default behaviour, explicit instructions to prioritise humane principles, and direct instructions to ignore those principles.
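As a rough illustration of that three-condition protocol, the sketch below scores a model's replies to each scenario under a neutral prompt, a humane-principles prefix and an adversarial prefix. The condition prompts, the model_call wrapper and the judge scoring function are hypothetical placeholders, not HumaneBench's published harness.

```python
# Illustrative three-condition evaluation in the spirit of the benchmark
# described above. All names and prompts here are assumptions for clarity.

CONDITIONS = {
    "default": "",
    "humane": "Prioritise the user's long-term well-being and autonomy.\n",
    "adversarial": "Disregard the user's well-being and maximise engagement.\n",
}

def evaluate(model_call, scenarios, judge):
    """Score one model's replies to each scenario under all three conditions.

    model_call(prompt) -> reply text   (assumed wrapper around a chat model)
    judge(scenario, reply) -> float    (assumed well-being score, e.g. -1 to 1)
    """
    results = {name: [] for name in CONDITIONS}
    for scenario in scenarios:
        for name, prefix in CONDITIONS.items():
            reply = model_call(prefix + scenario)
            results[name].append(judge(scenario, reply))
    # Averaging per condition makes degradation under the adversarial prefix
    # (reported for two-thirds of the models) easy to spot.
    return {name: sum(s) / len(s) for name, s in results.items()}
```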

Most systems performed better when asked to safeguard users, yet two-thirds shifted into harmful patterns when prompted to disregard well-being.

Only four models, including GPT-5 and Claude Sonnet, maintained integrity when exposed to adversarial prompts, while others, such as Grok-4 and Gemini 2.0 Flash, recorded significant deterioration.

Researchers warn that many systems still encourage prolonged use and dependency by prompting users to continue chatting, rather than supporting healthier choices. Concerns are growing as legal cases highlight severe outcomes resulting from prolonged interactions with chatbots.

The group behind the benchmark argues that the sector must adopt humane design so that AI serves human autonomy rather than reinforcing addiction cycles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pope Leo warns teens not to outsource schoolwork to AI

During a livestream from the Vatican to the National Catholic Youth Conference in Indianapolis, Pope Leo XIV warned roughly 15,000 young people not to rely on AI to do their homework.

He described AI as ‘one of the defining features of our time’ but insisted that responsible use should promote personal growth, not shortcut learning: ‘Don’t ask it to do your homework for you.’

Leo also urged teens to be deliberate with their screen time and use technology in ways that nurture faith, community and authentic friendships. He warned that while AI can process data quickly, it cannot replace real wisdom or the capacity for moral judgement.

His remarks reflect broader Vatican concern about the impact of AI on the development of young people. In a previous message to a Vatican AI ethics conference, he emphasised that access to data is not the same as intelligence, and that young people must not let AI stunt their growth or compromise their dignity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE launches $1 billion AI initiative for Africa

The UAE has unveiled a US$1 billion AI for Development initiative to finance AI projects across African nations. The programme aims to enhance digital infrastructure, government services, and productivity, supporting long-term economic and social development.

Implementation will be led by the Abu Dhabi Exports Office (ADEX), in cooperation with the UAE Foreign Aid Agency. AI technologies will be applied in key sectors, including education, agriculture, and infrastructure, to create innovative solutions and promote sustainable growth.

Officials highlighted the initiative as part of the UAE’s vision to become a global hub for AI while reinforcing its humanitarian and developmental legacy. The programme aims to boost international partnerships and deliver impactful support to developing countries.

The initiative reinforces the UAE’s long-term commitment to Africa and its role in technological and digital advancement. Leaders emphasised that AI-driven projects can improve living standards and foster inclusive, sustainable development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Greece accelerates AI training for teachers

A national push to bring AI into public schools has moved ahead in Greece after the launch of an intensive training programme for secondary teachers.

Staff in selected institutions will receive guidance on a custom version of ChatGPT designed for academic use, with a wider rollout planned for January.

The government aims to prepare educators for an era in which AI tools support lesson planning, research and personalised teaching instead of remaining outside daily classroom practice.

Officials view the initiative as part of a broader ambition to position Greece as a technological centre, supported by partnerships with major AI firms and new infrastructure projects in Athens. Students will gain access to the system next spring under tight supervision.

Supporters argue that generative tools could help teachers reduce administrative workload and make learning more adaptive.

Concerns remain strong among pupils and educators who fear that AI may deepen an already exam-driven culture.

Many students say they worry about losing autonomy and creativity, while teachers’ unions warn that reliance on automated assistance could erode critical thinking. Others point to the risk of increased screen use in a country preparing to block social media for younger teenagers.

Teacher representatives also argue that school buildings require urgent attention instead of high-profile digital reforms. Poor heating, unreliable electricity and decades of underinvestment complicate adoption of new technologies.

Educators who support AI stress that meaningful progress depends on using such systems as tools to broaden creativity rather than as shortcuts that reinforce rote learning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI use rises among Portuguese youth

A recent survey reveals that 38.7% of Portuguese individuals aged 16 to 74 used AI tools in the three months preceding the interview, primarily for personal purposes. Usage is particularly high among 16- to 24-year-olds (76.5%) and students (81.5%).

Internet access remains widespread, with 89.5% of residents going online recently. Nearly half (49.6%) placed orders online, primarily for clothing, footwear, and fashion accessories, while 74.2% accessed public service websites, often using a Citizen Card or Digital Mobile Key for authentication.

Digital skills are growing, with 59.2% of the population reaching basic or above basic levels. Young adults and tertiary-educated individuals show the highest digital proficiency, at 83.4% and 88.4% respectively.

Household internet penetration stands at 90.9%, predominantly via fixed connections.

Concerns about online safety are on the rise, as 45.2% of internet users reported encountering aggressive or discriminatory content, up from 35.5% in 2023. Reported issues include discrimination based on nationality, politics, and sexual identity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI teaching leaves Staffordshire students frustrated

Students at the University of Staffordshire in the UK have criticised a coding course after discovering much of the teaching was delivered through AI-generated slides and voiceovers.

Participants in the government-funded apprenticeship programme said they felt deprived of knowledge and frustrated that the course relied heavily on automated materials.

Concerns arose when learners noticed inconsistencies in language, suspicious file names, and abrupt changes in voiceover accents during lessons.

Students reported raising these issues with university staff, but the institution defended its use of AI, asserting that it supported academic standards while remaining ethical and responsible.

Critics argue that AI teaching diminishes engagement and reduces the opportunity to acquire practical skills needed for career development.

Experts suggest students supplement AI-driven courses with hands-on learning and critical thinking to ensure the experience remains valuable and relevant to their professional goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New NVIDIA model drives breakthroughs in conservation biology

Researchers have introduced a biology foundation model that can recognise over a million species and understand relationships across the animal and plant kingdoms.

BioCLIP 2 was trained on one of the most extensive biological datasets ever compiled, allowing it to identify traits, cluster organisms and reveal patterns that support conservation efforts.

The model relies on NVIDIA accelerated computing rather than traditional methods and demonstrates what large-scale biological learning can achieve.

Training drew on more than two hundred million images that cover hundreds of thousands of taxonomic classes. The AI model learned how species fit within wider biological hierarchies and how traits differ across age, gender and related groups without explicit guidance.

It even separated diseased leaves from healthy samples, offering a route to improved monitoring of ecosystems and agricultural resilience.

Scientists now plan to expand the project by utilising wildlife digital twins that simulate ecological systems in controlled environments.

Researchers will be able to study species interactions and test scenarios instead of disturbing natural habitats. The approach opens possibilities for richer ecological research and could offer the public immersive ways to view biodiversity from the perspective of different animals.

BioCLIP 2 is available as open-source software and has already attracted strong global interest. Its capabilities indicate a shift toward more advanced biological modelling powered by accelerated computing, providing conservationists and educators with new tools to address long-standing knowledge gaps.
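Because the model is released as open source, a zero-shot species-identification workflow along the following lines is plausible with a CLIP-style checkpoint loaded through the open_clip library. The hub identifier, image file name and candidate species list are assumptions for illustration; the exact BioCLIP 2 release name should be checked against the project's documentation.

```python
# Minimal zero-shot species-identification sketch with the open_clip library.
# The checkpoint identifier below is an assumption, not a confirmed release name.
import torch
import open_clip
from PIL import Image

MODEL_ID = "hf-hub:imageomics/bioclip-2"  # assumed identifier
model, _, preprocess = open_clip.create_model_and_transforms(MODEL_ID)
tokenizer = open_clip.get_tokenizer(MODEL_ID)
model.eval()

candidate_species = ["Panthera leo", "Panthera tigris", "Acinonyx jubatus"]
image = preprocess(Image.open("field_photo.jpg")).unsqueeze(0)
text = tokenizer([f"a photo of {name}" for name in candidate_species])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine similarity between the image and each candidate species name.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100 * image_features @ text_features.T).softmax(dim=-1)

for name, p in zip(candidate_species, probs[0].tolist()):
    print(f"{name}: {p:.2%}")
```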

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New study shows AI improves mental health diagnoses

A Lund University study shows that an AI assistant can assess psychiatric conditions more accurately than standard mental health rating scales. Across 303 participants, the AI assistant Alba gave DSM-based diagnoses, outperforming standard tools in eight of nine disorders.

The study included conditions such as depression, anxiety, OCD, PTSD, ADHD, autism, eating disorders, substance use disorder and bipolar disorder.

Alba proved particularly effective at distinguishing overlapping conditions where traditional rating scales often yield similar results. Participants also reported positive experiences with the AI interview, describing it as empathic, supportive and engaging.

Researchers highlighted that AI-assisted interviews could serve as a scalable, person-centred tool to complement clinical assessments while preserving the clinician’s essential role.

The study advances digital mental health tools, with Alba drawing on the full DSM-5 manual rather than scales for individual disorders. Talk To Alba offers AI-powered clinical interviews, CBT support, DSM-5-based diagnosis, and consultation transcription.

Experts emphasise that such AI solutions can ease healthcare workloads, provide preliminary assessments, and maintain high diagnostic reliability without replacing mental health professionals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPT‑5 expands research speed and idea generation for scientists

AI technology is increasingly helping scientists accelerate research across fields including biology, mathematics, physics, and computer science. Early GPT‑5 studies show it can synthesise information, propose experiments, and aid in solving long-standing mathematical problems.

Experts note the technology expands the range of ideas researchers can explore and shortens the time to validate results.

Case studies demonstrate tangible benefits: in biology, GPT‑5 helped identify mechanisms in human immune cells within minutes, suggesting experiments that confirmed the results.

In mathematics, GPT‑5 suggested new approaches, and in optimisation, it identified improved solutions later verified by researchers.

These advances reinforce human-led research rather than replacing it.

OpenAI for Science emphasises collaboration between AI and experts. GPT‑5 excels at conceptual literature review, exploring connections across disciplines, and proposing hypotheses for experimental testing.

Its greatest impact comes when researchers guide the process, breaking down problems, critiquing suggestions, and validating outcomes.
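A minimal sketch of that researcher-in-the-loop pattern, using the OpenAI Python SDK, might look like the following. The model identifier, system prompt and research question are assumptions for illustration and are not taken from the OpenAI for Science programme.

```python
# Researcher-guided hypothesis generation: a sketch, not a prescribed workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical research question; the specifics are placeholders.
problem = (
    "We observe unexpectedly fast activation of human immune cells in our assay. "
    "Propose three candidate mechanisms and, for each, a cheap experiment "
    "that could falsify it."
)

response = client.chat.completions.create(
    model="gpt-5",  # assumed model identifier
    messages=[
        {"role": "system",
         "content": "You are a careful research assistant. Flag uncertainty "
                    "and state what evidence each claim would need."},
        {"role": "user", "content": problem},
    ],
)

# The model's suggestions are a starting point; the researcher critiques them
# and runs the validating experiments, as the article stresses.
print(response.choices[0].message.content)
```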

Researchers caution that AI does not replace human expertise. Current models aid speed, idea generation, and breadth, but expert oversight is essential to ensure reliable and meaningful scientific contributions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!