Smart Classrooms initiative transforms learning in 10 Thai pilot schools

Ten pilot schools in Buriram and Si Sa Ket provinces have launched Smart Classrooms under the UNESCO–Huawei TEOSA initiative, supporting Thailand’s drive to expand digital education.

Led by UNESCO Bangkok in partnership with Thailand’s Ministry of Education and Huawei Technologies Co., Ltd, the Smart Classrooms initiative aims to strengthen digital learning environments, equip teachers with digital and AI competencies, and support policy development for AI in education. The programme also aligns with Thailand’s ‘Transforming Education in the Digital Era’ policy and the National AI Strategy and Action Plan (2022–2027).

Each province has one designated ‘mother school’ that serves as a regional digital hub, supporting four surrounding ‘child schools’ by sharing resources, training, and expertise. The ten pilot schools in total have received high-speed internet, interactive digital displays, and collaborative learning platforms that support real-time content sharing and blended learning. Forty-five teachers from the pilot schools also participated in hands-on demonstrations of Smart Classrooms systems on 4–5 March.

‘This new technology will help translate theory into practice, allowing students to experiment, test strategies, and see results immediately,’ said Pathanapong Momprakhon, Principal of Paisan Pittayakom School. UNESCO Bangkok’s Deputy Director and Chief of Education, Marina Patrier, highlighted the importance of combining infrastructure with teacher capacity-building.

‘At UNESCO, we are committed to promoting the ethical and inclusive use of AI in ways that empower teachers and expand opportunities for every learner,’ Ms Patrier said at the launch. ‘While Smart Classrooms provide important tools, it is teachers’ creativity, professional judgement and leadership that ultimately bring these innovations to life.’

Chitralada Chanyaem of the Thai National Commission for UNESCO highlighted the importance of collaboration in advancing digital education.

‘The UNESCO–Huawei Funds-in-Trust Project on Technology-Enabled Open Schools for All stands as a powerful example of collaboration dedicated to transforming education into a system that is open, inclusive, flexible, and resilient in the face of a rapidly changing world,’ she said. ‘As the future of education cannot be confined within classroom walls, it must bridge sectors and communities, working collaboratively to create equitable and sustainable opportunities for all.’

Teachers observed Huawei technical staff and master teachers demonstrate how digital tools and AI-supported applications can be used in everyday lessons. Ms Piyaporn Kidsirianan, Public Relations Manager at Huawei Technologies (Thailand) Co., Ltd, said the initiative aims to reduce digital inequality.

‘The Open Schools for All initiative represents a commitment to using technology as a bridge to deliver quality education to remote and underserved communities,’ she said.

The TEOSA Smart Classrooms initiative combines policy support, digital infrastructure upgrades, and teacher training to help translate Thailand’s digital education ambitions into practical impact at the school level.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI tools linked to rise in abuse disclosures

Support organisations in the UK report that some abuse survivors are turning to AI tools such as ChatGPT before contacting helplines. Charities say individuals increasingly use these tools to explore their experiences and seek guidance before approaching professional support services.

The National Association of People Abused in Childhood said callers have recently reported being referred to its helpline after conversations with ChatGPT. Staff say AI is being used as an informal step in processing trauma.

Law enforcement and support groups have also recorded a rise in disclosures involving ritualistic sexual abuse, although authorities say only 14 criminal cases since 1982 have formally recognised such practices.

Police and support organisations are responding by improving training and launching specialist working groups. Officials aim to strengthen the identification and investigation of complex cases of abuse.

EU faces challenges in curbing digital abuse against women

Researchers and policymakers are raising concerns about how new technologies may put women at risk online, despite existing EU rules designed to ensure safer digital spaces.

AI-powered tools and smart devices have been linked to incidents of harassment and the creation of non-consensual sexualised imagery, highlighting gaps in enforcement and compliance.

The European Commission’s Gender Equality Strategy 2026–2030 noted that women are disproportionately targeted by online gender-based violence, including harassment, doxing, and AI-generated deepfakes.

Investigations into tools such as Elon Musk’s Grok AI and Meta’s Ray-Ban smart glasses have drawn attention to how digital platforms and wearable technologies can be misused, even where legal frameworks like the Digital Services Act (DSA) are in place.

Experts emphasise that while the EU’s rules offer a foundation to regulate online content, significant challenges remain. Advocates and lawmakers say enforcement gaps let harmful AI functions like nudification persist.

Commissioners have pointed to ongoing cooperation with tech companies, as well as upcoming guidelines that would prioritise content flagged by independent organisations, as ways to address gender-based cyber violence.

Authorities are also monitoring new technologies closely. In the case of wearable devices, regulators are considering how users and bystanders are informed about recording features.

Ongoing discussions aim to strengthen compliance under existing legislation and ensure that digital spaces become safer and more accountable for all users.

EU considers stronger child protection in Digital Fairness Act

EU member states are being asked to discuss how stronger child protection measures should be incorporated into the upcoming Digital Fairness Act (DFA).

The initiative comes as policymakers attempt to address growing concerns about how online platforms expose minors to harmful content, manipulative design practices, and unsafe digital environments.

According to a document circulated during Cyprus’s presidency of the Council of the European Union, member states are expected to debate which concrete safeguards should be introduced as part of the broader consumer protection framework.

Officials are exploring whether new rules should require platforms to adopt stricter safeguards when designing digital services used by children.

The discussions are part of the European Union’s broader effort to strengthen digital governance and consumer protection across online platforms. Policymakers are increasingly focusing on how platform design, recommendation algorithms, and monetisation models may affect younger users.

The proposals could complement existing EU regulations targeting large digital platforms, while expanding protections specifically focused on minors.

Australia introduces strict online child safety rules covering AI chatbots

Australia has begun enforcing new Age-Restricted Material Codes, which require online platforms to introduce stronger protections to prevent children from accessing harmful digital content.

The rules apply across a wide range of services, including social media, app stores, gaming platforms, search engines, pornography websites, and AI chatbots.

Under the framework, companies must implement age-assurance systems before allowing access to content involving pornography, high-impact violence, self-harm material, or other age-restricted topics.

These measures also extend to AI companions and chatbots, which must prevent sexually explicit or self-harm-related conversations with minors.

The rules form part of Australia’s broader online safety framework overseen by the eSafety Commissioner, which will monitor compliance and enforce the codes.

Companies that fail to comply may face penalties of up to AU$49.5 million per breach.

The policy aims to shift responsibility toward technology companies by requiring them to build protections directly into their platforms.

Officials in Australia argue the measures mirror long-standing offline safeguards designed to prevent children from accessing adult environments or harmful material.

ChatGPT ‘adult mode’ launch delayed as OpenAI focuses on core improvements

OpenAI has postponed the launch of ChatGPT’s ‘adult mode’, a feature designed to let verified adult users access erotica and other mature content.

Teams are focusing on improving intelligence, personality and proactive behaviour instead of releasing the feature immediately.

The feature was first announced by Sam Altman in October, with an initial December rollout, and is intended to allow adults more freedom while maintaining safety for younger users.

The project faced an earlier delay as internal teams prioritised the core ChatGPT experience.

OpenAI stated it still supports the principle of treating adults like adults but warned that achieving the right experience will require more time. No new release date has been provided.

New AI feature keeps Roblox chat respectful and flowing

Roblox Corporation has unveiled an AI-powered real-time chat rephrasing feature designed to maintain civility while keeping in-game conversations fluid. Previously, messages containing profanity were masked with hashmarks, disrupting the flow of gameplay.

The new system automatically rephrases inappropriate language into more respectful alternatives while preserving the original meaning. Users in the chat are notified when their messages are rephrased, ensuring transparency.

The feature supports in-game chat between age-verified users and works across all languages via Roblox’s automatic translation. The company consulted its Teen Council to design the system, ensuring it reflects how teens naturally communicate.

Earlier experiments with real-time warnings and notifications reduced filtered messages and abuse reports by 5–6%, indicating the approach’s effectiveness.

Roblox is also enhancing its text filters to detect complex attempts to bypass Community Standards, such as leet-speak or symbols. Testing shows a 20-fold reduction in missed cases involving the sharing of personal information, such as social handles or phone numbers.

These upgrades represent a significant step toward safer, more natural in-game chat.

The company plans to continue refining these tools, aiming to minimise disruptions further while promoting civil communication. Users can expect iterative improvements and additional controls in the future to enhance chat safety and overall user experience.

Privacy lawsuit targets Meta AI glasses after reports of footage review

Meta is facing a new lawsuit in the US over privacy concerns tied to its AI smart glasses.

The legal complaint follows investigative reporting indicating that workers at a Kenya-based subcontractor reviewed footage captured by users’ devices, including sensitive personal scenes.

The lawsuit alleges that some of the reviewed material included nudity and other intimate activities recorded by the glasses’ cameras.

According to the complaint, the footage formed part of a data review process designed to improve the AI system integrated into the wearable device.

Plaintiffs claim Meta marketed the product as prioritising user privacy, citing advertisements suggesting that the glasses were ‘designed for privacy’ and that users remained in control of their personal data.

The complaint argues that such messaging could mislead consumers if the footage were subject to human review without clear disclosure.

The legal action also names eyewear manufacturer Luxottica, which partnered with Meta to produce the glasses.

Meanwhile, the UK’s Information Commissioner’s Office has begun examining the issue after reports that face-blurring safeguards may not have consistently protected individuals captured in the recordings.

EU launches panel on child safety online and social media age rules

The European Commission has convened a new expert panel tasked with examining how children can be better protected across digital platforms, including social media, gaming environments and AI tools.

The initiative reflects growing concern across Europe regarding the psychological and safety risks associated with young users’ online behaviour.

Announced during the 2025 State of the Union Address by Commission President Ursula von der Leyen, the panel will evaluate evidence on both the opportunities and harms linked to children’s digital engagement.

Specialists from health, computer science, child rights and digital literacy will work alongside youth representatives to assess current research and policy responses.

Discussions during the first meeting centred on platform responsibility, including age-appropriate safety-by-design features, algorithmic amplification and addictive product design.

The initiative also addresses digital literacy for children, parents and educators, while considering how regulatory measures can reduce risks without undermining the benefits of online participation.

The panel’s work complements the enforcement of the Digital Services Act and related European policies designed to strengthen protections for minors online.

Among the tools under development is an EU age-verification application currently tested in several member states, intended to support privacy-preserving checks compatible with the future EU digital identity framework.

The panel is expected to deliver policy recommendations to the Commission by summer 2026.

China strengthens online safeguards for minors

Chinese authorities have introduced new rules to classify online content that could affect the health and well-being of minors. Set to take effect on 1 March, the measures aim to adapt to a rapidly evolving internet landscape.

Top government bodies, including those responsible for cyberspace, education, publishing, film, culture, tourism, public security, and radio and television, jointly released the initiative. Together, they outlined four categories of content that could negatively affect minors and specified their key characteristics.

Recent issues, such as the misuse of minors’ images, have been integrated into the regulatory framework. Authorities also established preventive guidelines to manage risks from emerging technologies, including algorithmic recommendations and generative AI.

Internet platforms and content producers are now required to take both proactive and corrective measures against harmful content. The rules emphasise that platforms must monitor, block, or remove information that could affect minors’ well-being.

The Cyberspace Administration of China pledged to continue purifying the online environment. Authorities will urge platforms to assume their primary responsibilities and strengthen governance of content affecting young users, aiming to create a safer and healthier digital space for children.
