People show growing comfort with AI for counselling and teaching

A global survey of nearly 31,000 adults across 35 countries has revealed rising public trust in AI for roles traditionally handled by humans. In the UK, 41% of adults said they would be comfortable using ChatGPT for mental health support, while 61% expressed the same globally.

Experts note the appeal of AI’s non-judgmental tone and 24/7 availability, while cautioning that it cannot replace professional care.

The study also found that a quarter of UK adults would trust AI to teach their children, and 45% of people globally would rely on AI as their doctor.

Researchers warned that overreliance on AI in education could harm memory and cognitive development, potentially affecting the hippocampus, which is critical for learning and spatial awareness.

Trust in AI was strongest in social contexts. Over three-quarters of respondents globally, and more than half in the UK, said they would use AI chat tools as companions or friends.

The research team suggested that adaptive tone and private conversations give users a sense of security and personalised support.

Researchers emphasised the need for greater awareness of AI’s limitations. While generative AI is becoming integrated into daily life, caution is urged, particularly for education and health roles, until the long-term cognitive and social impacts are better understood.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lenovo introduces rollable laptop and AI agent

Lenovo is redefining how people interact with technology through rollable laptops, foldable devices and adaptive AI systems that anticipate user needs.

The company is shifting from manufacturing hardware to creating multi-platform systems that adapt seamlessly to workflows instead of relying solely on traditional devices.

Qira, Lenovo’s personal AI super-agent, transfers tasks across devices while maintaining context and history with user permission. It can suggest actions and predict needs, aiming to improve productivity and employee satisfaction, although security and privacy concerns remain significant.

The rollable laptop features a 14-inch screen that expands vertically to 16.7 inches, providing immersive experiences for gaming and content consumption while remaining portable.

Lenovo is also exploring voice-driven tools, including AI Workmate prototypes, allowing users to create presentations and digital content simply through speech.

By combining innovative screen designs with intelligent AI agents, Lenovo aims to create unified ecosystems that prioritise user experience and adaptability instead of focusing solely on device specifications.

The company believes these technologies will gradually become culturally accepted, similar to self-driving cars.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU considers stronger child protection in Digital Fairness Act

Capitals across the EU are being asked to discuss how stronger child protection measures should be incorporated into the upcoming Digital Fairness Act (DFA).

The initiative comes as policymakers attempt to address growing concerns about how online platforms expose minors to harmful content, manipulative design practices, and unsafe digital environments.

According to a document circulated during Cyprus’s Council presidency of the European Union, member states are expected to debate which concrete safeguards should be introduced as part of the broader consumer protection framework.

Officials are exploring whether new rules should require platforms to adopt stricter safeguards when designing digital services used by children.

The discussions are part of the European Union’s broader effort to strengthen digital governance and consumer protection across online platforms. Policymakers are increasingly focusing on how platform design, recommendation algorithms, and monetisation models may affect younger users.

The proposals could complement existing EU regulations targeting large digital platforms, while expanding protections specifically focused on minors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia introduces strict online child safety rules covering AI chatbots

Australia has begun enforcing new Age-Restricted Material Codes, requiring online platforms to introduce stronger protections to prevent children from accessing harmful digital content.

The rules apply across a wide range of services, including social media, app stores, gaming platforms, search engines, pornography websites, and AI chatbots.

Under the framework, companies must implement age-assurance systems before allowing access to content involving pornography, high-impact violence, self-harm material, or other age-restricted topics.

These measures also extend to AI companions and chatbots, which must prevent sexually explicit or self-harm-related conversations with minors.

The rules form part of Australia’s broader online safety framework overseen by the eSafety Commissioner, which will monitor compliance and enforce the codes.

Companies that fail to comply may face penalties of up to $49.5 million per breach.

The policy aims to shift responsibility toward technology companies by requiring them to build protections directly into their platforms.

Officials in Australia argue the measures mirror long-standing offline safeguards designed to prevent children from accessing adult environments or harmful material.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT ‘adult mode’ launch delayed as OpenAI focuses on core improvements

OpenAI has postponed the launch of ChatGPT’s ‘adult mode’, a feature designed to let verified adult users access erotica and other mature content.

Teams are focusing on improving intelligence, personality and proactive behaviour instead of releasing the feature immediately.

The feature was first announced by Sam Altman in October, with an initial December rollout, and aims to allow adults more freedom while maintaining safety for younger users.

The project faced an earlier delay as internal teams prioritised the core ChatGPT experience.

OpenAI stated it still supports the principle of treating adults like adults but warned that achieving the right experience will require more time. No new release date has been provided.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU competition scrutiny pushes Meta to reopen WhatsApp AI access

Meta has announced that third-party AI chatbots will again be allowed to operate through WhatsApp in Europe, reversing restrictions introduced earlier this year.

The decision follows pressure from the European Commission, which had warned it could impose interim competition measures.

Earlier in 2026, Meta limited access to rival chatbot services on the messaging platform, prompting regulators to examine whether the move unfairly restricted competition in the rapidly expanding AI market.

WhatsApp remains one of the most widely used messaging applications across European countries, making platform access critical for emerging AI services.

Under the new arrangement, companies will be able to distribute general-purpose AI chatbots via the WhatsApp Business API for 12 months.

The change is intended to give European regulators time to complete their investigation while allowing competing AI services to operate within the platform ecosystem.

Meta has also indicated that businesses offering chatbots through WhatsApp will be required to pay fees to access the system.

The European Commission is now assessing whether these adjustments sufficiently address competition concerns surrounding the integration of AI services inside major digital platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU launches panel on child safety online and social media age rules

The European Commission has convened a new expert panel tasked with examining how children can be better protected across digital platforms, including social media, gaming environments and AI tools.

The initiative reflects growing concern across Europe regarding the psychological and safety risks associated with young users’ online behaviour.

Announced during the 2025 State of the Union Address by Commission President Ursula von der Leyen, the panel will evaluate evidence on both the opportunities and harms linked to children’s digital engagement.

Specialists from health, computer science, child rights and digital literacy will work alongside youth representatives to assess current research and policy responses.

Discussions during the first meeting centred on platform responsibility, including age-appropriate safety-by-design features, algorithmic amplification and addictive product design.

The initiative also addresses digital literacy for children, parents and educators, while considering how regulatory measures can reduce risks without undermining the benefits of online participation.

The panel’s work complements the enforcement of the Digital Services Act and related European policies designed to strengthen protections for minors online.

Among the tools under development is an EU age-verification application currently tested in several member states, intended to support privacy-preserving checks compatible with the future EU digital identity framework.

The panel is expected to deliver policy recommendations to the Commission by summer 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI tracks how AI shapes student performance over time

AI is increasingly shaping education, offering tools like ChatGPT that provide personalised learning support for students anywhere. Early studies suggest features such as study mode can enhance exam performance, yet understanding AI’s long-term effect on learning remains a challenge.

Traditional research often focuses on test scores, overlooking how students interact with AI over time in real-world settings.

OpenAI, in partnership with Estonia’s University of Tartu and Stanford’s SCALE Initiative, created the Learning Outcomes Measurement Suite to track longitudinal learning outcomes. The framework assesses interactions, engagement, cognitive growth, and alignment with pedagogical principles.

Large-scale trials involve tens of thousands of students, combining AI-driven insights with traditional classroom measures such as exams and observations.

Research shows that guided AI interactions can strengthen understanding, persistence, and problem-solving. Microeconomics students using study mode achieved around 15% higher exam scores than those relying on traditional online resources.

Beyond short-term results, the measurement suite evaluates deeper learning effects, including motivation, metacognition, and productive engagement, helping educators and developers optimise AI tools for meaningful outcomes.

The suite will be validated through ongoing studies and eventually made available to schools, universities, and education systems worldwide. OpenAI aims to share findings broadly to ensure AI contributes effectively to student learning and cognitive development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini Canvas reaches millions as Google expands AI Search tools

Google has expanded access to the Canvas feature in Google Search’s AI Mode, making it available to all US users.

Canvas allows users to organise research, draft documents and develop small applications directly inside search.

Prompts can generate code, transform reports into webpages or quizzes, and produce audio summaries from uploaded material. The tool was previously introduced as part of experimental projects in Google Labs.

The feature builds on capabilities already available in Google Gemini and partly overlaps with NotebookLM, which supports research analysis and document processing.

Within Canvas, users can gather information from the web and the Google Knowledge Graph while refining projects through interaction with the Gemini model.

Competition is intensifying across AI development platforms. OpenAI and Anthropic offer similar tools, though their design approaches differ in how collaborative workspaces are triggered and used.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New UNESCO and CENIA agreement targets AI literacy and ethical standards

The UNESCO Regional Office in Santiago and the National Centre for Artificial Intelligence (CENIA) signed a cooperation agreement at the end of February 2026 to promote ethical AI in education across Chile and Latin America.

The framework supports joint initiatives aimed at strengthening digital skills, improving AI literacy and advancing people-centred development models for AI.

Projects under the partnership will focus on training programmes and educational resources designed for a wide range of audiences, including the general public, educators, technical specialists and policymakers.

Collaborative efforts will also encourage dialogue between institutions, governments and industry to support responsible innovation and reinforce regional ecosystems linked to emerging technologies.

An early outcome includes Latam-GPT, the first open large language model for Latin America and the Caribbean. The system will aid education ministries and the UNESCO Regional Observatory on AI, helping guide responsible adoption and monitor developments.

‘Artificial Intelligence represents a historic opportunity to transform our education and productive systems, but its development must be guided by clear ethical principles and a people-centred vision. This partnership with CENIA will enable us to support countries in building capacities and governance frameworks that ensure AI effectively contributes to the common good,’ stated Esther Kuisch Laroche, Director of the UNESCO Regional Office in Santiago.

‘At CENIA, we have been working consistently on applied research and capacity-building, advancing knowledge generation, technology transfer and scientific evidence.

This experience allows us to contribute from both a technical and training perspective to ensure that the development of Artificial Intelligence in the region is grounded in robust and ethical standards, thereby impacting education and productive development. We are convinced that technological progress must be accompanied by training, responsible frameworks and multi-sector collaboration.

For this reason, this agreement with UNESCO represents a strategic step towards strengthening capacity development and the ethical, people-centred adoption of Artificial Intelligence in Latin America and the Caribbean.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!