OpenAI tracks how AI shapes student performance over time

AI is increasingly shaping education, offering tools like ChatGPT that provide personalised learning support for students anywhere. Early studies suggest features such as study mode can enhance exam performance, yet understanding AI’s long-term effect on learning remains a challenge.

Traditional research often focuses on test scores, overlooking how students interact with AI over time in real-world settings.

OpenAI, in partnership with Estonia’s University of Tartu and Stanford’s SCALE Initiative, created the Learning Outcomes Measurement Suite to track longitudinal learning outcomes. The framework assesses interactions, engagement, cognitive growth, and alignment with pedagogical principles.

Large-scale trials involve tens of thousands of students, combining AI-driven insights with traditional classroom measures such as exams and observations.

Research shows that guided AI interactions can strengthen understanding, persistence, and problem-solving. Microeconomics students using study mode achieved around 15% higher exam scores than those relying on traditional online resources.

Beyond short-term results, the measurement suite evaluates deeper learning effects, including motivation, metacognition, and productive engagement, helping educators and developers optimise AI tools for meaningful outcomes.

The suite will be validated through ongoing studies and eventually made available to schools, universities, and education systems worldwide. OpenAI aims to share findings broadly to ensure AI contributes effectively to student learning and cognitive development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Council of Europe issues new guidance on AI and gender equality

Ahead of International Women’s Day on 8 March, the Council of Europe adopted two new recommendations addressing gender equality and the prevention of violence against women in the context of emerging technologies.

One recommendation targets the design and use of AI to prevent discrimination, while the other focuses on accountability for technology-facilitated violence against women and girls.

The AI recommendation advises member states on preventing discrimination throughout the lifecycle of AI systems, from development to deployment and retirement. It highlights risks like gender bias while promoting transparency, explainability, and safeguards.

Special attention is given to discrimination based on gender, race, and sexual orientation, gender identity, gender expression, and sex characteristics (SOGIESC).

The second recommendation sets the first international standard for addressing technology-facilitated violence against women. It outlines strategies to overcome impunity, including clearer legal frameworks, accessible reporting systems, and victim-centred approaches.

Emphasis is placed on multistakeholder engagement, trauma-informed policies, and safety-by-design in technology products to prevent digital harm.

Both recommendations reinforce the importance of combining regulation, institutional support, and public awareness to ensure technology advances equality rather than perpetuating harm.

The formal launch is scheduled for 10 June 2026 at the Palais de l’Europe in Strasbourg during an event titled ‘From standards to action: making accountability for technology-facilitated violence against women and girls a reality.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini Canvas reaches millions as Google expands AI Search tools

Google has expanded access to the Canvas feature in Google Search’s AI Mode, making it available to all US users.

Canvas allows users to organise research, draft documents and develop small applications directly inside search.

Prompts can generate code, transform reports into webpages or quizzes, and produce audio summaries from uploaded material. The tool was previously introduced as part of experimental projects in Google Labs.

The feature builds on capabilities already available in Google Gemini and partly overlaps with NotebookLM, which supports research analysis and document processing.

Within Canvas, users can gather information from the web and the Google Knowledge Graph while refining projects through interaction with the Gemini model.

Competition is intensifying across AI development platforms. OpenAI and Anthropic offer similar tools, though their design approaches differ in how collaborative workspaces are triggered and used.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New UNESCO and CENIA agreement targets AI literacy and ethical standards

The UNESCO Regional Office in Santiago and the National Centre for Artificial Intelligence (CENIA) signed a cooperation agreement at the end of February 2026 to promote ethical AI in education across Chile and Latin America.

The framework supports joint initiatives aimed at strengthening digital skills, improving AI literacy and advancing people-centred development models for AI.

Projects under the partnership will focus on training programmes and educational resources designed for a wide range of audiences, including the general public, educators, technical specialists and policymakers.

Collaborative efforts will also encourage dialogue between institutions, governments and industry to support responsible innovation and reinforce regional ecosystems linked to emerging technologies.

An early outcome includes Latam-GPT, the first open large language model for Latin America and the Caribbean. The system will aid education ministries and the UNESCO Regional Observatory on AI, helping guide responsible adoption and monitor developments.

‘Artificial Intelligence represents a historic opportunity to transform our education and productive systems, but its development must be guided by clear ethical principles and a people-centred vision. This partnership with CENIA will enable us to support countries in building capacities and governance frameworks that ensure AI effectively contributes to the common good,’ stated Esther Kuisch Laroche, Director of the UNESCO Regional Office in Santiago.

‘At CENIA, we have been working consistently on applied research and capacity-building, advancing knowledge generation, technology transfer and scientific evidence.

‘This experience allows us to contribute from both a technical and training perspective to ensure that the development of Artificial Intelligence in the region is grounded in robust and ethical standards, thereby impacting education and productive development. We are convinced that technological progress must be accompanied by training, responsible frameworks and multi-sector collaboration.

‘For this reason, this agreement with UNESCO represents a strategic step towards strengthening capacity development and the ethical, people-centred adoption of Artificial Intelligence in Latin America and the Caribbean.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia reviews children’s social media ban

Australia has begun reviewing its ban on social media accounts for children under 16, introduced in December 2025. Australia’s eSafety Commissioner is tracking more than 4,000 children and families to assess how the policy works in practice.

Researchers will analyse surveys, interviews and voluntary smartphone data to measure how young people interact with apps. Officials aim to understand how the ban affects children, parents and everyday online behaviour.

Early reactions have been mixed, with some teenagers telling media outlets that they bypass age verification systems. Platforms reportedly remain accessible to some minors.

Meanwhile, the UK government has launched a public consultation on potential social media restrictions for children. Policymakers in the UK are seeking views on bans, stronger age verification and limits on addictive platform features.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI Readiness Assessment Report highlights India’s progress and gaps in ethical AI

UNESCO and India’s Ministry of Electronics and Information Technology (MeitY) have launched the India AI Readiness Assessment Report during the India AI Impact Summit 2026. The report evaluates the country’s progress in building an ethical and human-centred AI ecosystem.

Developed by UNESCO with the IndiaAI Mission and Ikigai Law as implementing partner, the report draws on consultations with more than 600 stakeholders from government, academia, industry, and civil society. The assessment examined governance, workforce readiness, and infrastructure development.

Principal Scientific Adviser to the Government of India, Dr Ajay Kumar Sood, emphasised the importance of embedding ethics throughout the technology lifecycle. ‘AI is here to make an impact. The question is not how fast we adopt AI, but how thoughtfully we shape it,’ he said.

The report highlights the country’s growing role in global AI development, noting that it accounts for around 16% of the world’s AI talent and has filed more than 86,000 related patents since 2010. It also points to progress in multilingual AI systems and digital public services.

The assessment also identifies policy priorities, including stronger legal frameworks, inclusive workforce transitions, and better access to high-quality datasets. UNESCO officials said the recommendations aim to support responsible AI governance and strengthen public trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cisco report highlights cybersecurity risks and benefits of industrial AI

AI is becoming central to industrial networking strategies, but it is also creating new security challenges, according to Cisco’s 2026 State of Industrial AI Report.

Based on a survey of 1,000 professionals across 19 countries and 21 sectors, the report shows organisations view cybersecurity as both a barrier and an opportunity for AI adoption. About 40% cited cybersecurity concerns as a major obstacle, while 48% named security their biggest networking challenge.

At the same time, many organisations believe AI will strengthen their cyber resilience. Cisco noted that ‘while security gaps are limiting AI scale today, organisations view AI as a tool to strengthen detection, monitoring and resilience’.

The report also highlights organisational challenges, particularly collaboration between IT and operational technology teams. Only 20% of organisations report fully collaborative IT and OT cybersecurity operations, despite the growing importance of coordination for AI deployment.

Cisco said industrial AI adoption is accelerating, with 61% of organisations already deploying AI in industrial environments. However, only one in five organisations reports mature, scaled adoption, suggesting many deployments remain in early stages.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OneTrust’s new CEO outlines AI governance ambitions

OneTrust has entered a new leadership phase after appointing John Heyman as chief executive, replacing founder Kabir Barday. Barday will remain on the board in an advisory role as the US-based compliance technology firm continues its push into AI governance.

Heyman said organisations in the US and globally are rapidly integrating AI into daily operations. Companies deploying large numbers of AI agents increasingly need tools to manage risk, data use and regulatory compliance.

OneTrust believes demand for governance technology will grow as AI systems multiply inside businesses. Heyman described a future in which automated monitoring tools oversee AI agents operating within company systems.

The company aims to build systems that track how AI agents collect and share data while maintaining enterprise control, as growing AI adoption continues to drive demand for responsible governance platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tool from MIT speeds up complex engineering optimisation

MIT researchers have developed a new AI approach that helps engineers solve complex design problems faster, from power grid optimisation to vehicle safety.

The method adapts a foundation model trained on tabular data, enabling high-dimensional optimisation without retraining and significantly speeding up results.

The system pairs the foundation model with Bayesian optimisation to pinpoint the variables that most affect outcomes. By focusing on these key variables, it finds top solutions 10 to 100 times faster than existing optimisation methods.
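The report describes the method only at a high level. As a rough illustration of the "focus on key variables" idea, the toy sketch below screens a hypothetical 20-dimensional problem for its few influential variables, then searches only over those. The objective function, the screening heuristic, and the plain random search are all invented for illustration; the actual MIT system uses a tabular foundation model with Bayesian optimisation rather than this simple scheme.

```python
import random

# Hypothetical 20-dimensional design problem: only a few variables
# actually drive the objective, mirroring the idea of pinpointing
# the variables that most impact outcomes.
DIM = 20
KEY = [2, 7, 11]  # indices of the truly influential variables (unknown to the optimiser)

def objective(x):
    # Minimise a bowl that depends only on the key variables.
    return sum((x[i] - 0.5) ** 2 for i in KEY)

def sensitivity_screen(samples=200):
    """Crudely rank variables by how much perturbing each one moves the objective."""
    base = [0.0] * DIM
    scores = []
    for i in range(DIM):
        diffs = []
        for _ in range(samples):
            x = base[:]
            x[i] = random.random()
            diffs.append(abs(objective(x) - objective(base)))
        scores.append(sum(diffs) / samples)
    # Keep the three variables with the largest average effect.
    return sorted(range(DIM), key=lambda i: -scores[i])[:3]

def optimise(active, iters=2000):
    """Random search restricted to the screened variables."""
    best_x = [0.0] * DIM
    best_f = objective(best_x)
    for _ in range(iters):
        x = best_x[:]
        for i in active:
            x[i] = random.random()
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_f

random.seed(0)
active = sensitivity_screen()
print("influential variables:", sorted(active))
print("best objective found:", optimise(active))
```

Searching only the screened subspace shrinks the effective dimensionality from 20 to 3, which is why far fewer evaluations are needed, the same intuition behind the reported 10-to-100-fold speed-ups.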

Early tests show the approach excels in costly, time-consuming scenarios like car crash testing and power system design. The technique lowers computational demands and suits large-scale, high-frequency engineering challenges across multiple domains.

Researchers aim to expand the method to even higher-dimensional problems, such as naval ship design, while highlighting the broader potential of foundation models as algorithmic engines in scientific and engineering tools.

Experts see it as a practical step toward making advanced optimisation more accessible in real-world applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Developers gain early access to Gemini 3.1 Flash-Lite

Google’s Gemini 3.1 Flash-Lite has launched in preview for developers via AI Studio and for enterprises through Vertex AI. Designed for high-volume workloads, it promises fast, cost-effective performance while maintaining high-quality outputs.

Priced at $0.25 per million input tokens and $1.50 per million output tokens, 3.1 Flash-Lite offers 2.5x faster response times and 45% higher output speed than the previous 2.5 Flash model.
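At those rates, per-request costs are simple to estimate. The sketch below shows the arithmetic for a made-up example workload; the token counts and request volume are illustrative assumptions, only the per-million-token prices come from the announcement.

```python
# Token prices quoted for Gemini 3.1 Flash-Lite (USD per million tokens).
INPUT_PRICE = 0.25
OUTPUT_PRICE = 1.50

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single request at the quoted rates."""
    return (input_tokens / 1e6) * INPUT_PRICE + (output_tokens / 1e6) * OUTPUT_PRICE

# Hypothetical high-volume workload: 10,000 requests,
# each with roughly 2,000 input tokens and 500 output tokens.
total = 10_000 * request_cost(2_000, 500)
print(f"${total:.2f}")  # 10,000 x ($0.0005 + $0.00075) = $12.50
```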

Benchmarks show strong performance across reasoning and multimodal tasks, including an Elo score of 1432 on Arena.ai, 86.9% on GPQA Diamond, and 76.8% on MMMU Pro, surpassing some older, larger Gemini models.

The model also provides adaptive intelligence features, allowing developers to adjust how much the AI ‘thinks’ for each task. It handles both high-frequency tasks, such as translation, and complex tasks, such as interface generation and simulations.

Early-access developers and companies report that 3.1 Flash-Lite handles complex workloads with precision comparable to larger models. Its speed, affordability, and reasoning capabilities make it an attractive choice for scalable, real-time AI applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!