UNESCO explores how AI and design can reshape culture and creativity

UNESCO’s Regional Office for East Asia has launched a global call for good practice cases on how AI and design are being used to support culture, creativity, education, sustainability and social inclusion.

The call invites submissions from organisations, institutions, practitioners, educators and innovators who combine AI with design approaches to create positive outcomes in the cultural and creative sectors. UNESCO says it is looking for practical examples that support culture, creativity, livelihoods, learning, sustainability and social inclusion.

The call focuses on four thematic areas: cultural heritage protection, documentation and interpretation; cultural tourism and visitor experience design; fashion and creative industry innovation; and design education and capacity development.

Selected projects may receive UNESCO recognition, be included in a publication or catalogue, participate in exhibitions or showcases, receive invitations to talks or events, and gain visibility through UNESCO communication channels.

The initiative reflects growing international interest in how AI can support creative and cultural sectors beyond industrial productivity. UNESCO’s framing places design principles such as inclusion, accessibility, cultural relevance and people-centred use at the centre of responsible AI deployment in cultural and educational contexts.

Submissions are open until 15 June 2026, with selected cases scheduled to be announced on 15 July 2026. Applications may be submitted in English or Chinese and are expected to demonstrate practical examples of AI supporting learning, livelihoods, creativity or sustainable development through design-oriented approaches.

Why does it matter?

The call points to a wider effort to shape AI use in culture and creativity around public value rather than automation alone. By focusing on heritage, tourism, fashion and design education, UNESCO is encouraging examples where AI supports local knowledge, creative livelihoods, cultural access and inclusive innovation.

Canada invests in AI and quantum technology firms in British Columbia

Gregor Robertson, Minister of Housing and Infrastructure and Minister responsible for Pacific Economic Development Canada (PacifiCan), announced more than C$17.3 million in funding for eight British Columbia technology companies to accelerate the commercialisation and adoption of AI and quantum technologies.

Through PacifiCan, the federal government is supporting projects focused on robotics, semiconductor manufacturing, AI infrastructure, and quantum supply chains as part of a broader strategy to strengthen domestic innovation and sovereign technology capabilities.

A major share of the investment will support Human in Motion Robotics, which received C$3 million to commercialise its AI-powered XoMotion wearable robotic exoskeleton. The company plans to integrate AI into mobility systems, expand manufacturing, and move the technology beyond clinical environments into homes and community settings for people with spinal cord injuries and neurological conditions.

Another funded company, Dream Photonics, will receive more than C$1.1 million to establish pilot manufacturing for optical interconnect technologies used in AI and quantum chips. The project aims to strengthen Canada’s domestic semiconductor and quantum ecosystem while creating skilled technology jobs in British Columbia.

The announcement also highlighted the rapid expansion of British Columbia’s AI ecosystem, which now includes nearly 600 AI companies. Canadian officials linked the investments to broader efforts to secure domestic compute infrastructure, strengthen AI supply chains, and position Canada competitively in emerging technologies ahead of events such as Web Summit Vancouver.

Canada advances sovereign AI data centre strategy with TELUS

The Canadian government and TELUS are advancing plans to develop large-scale sovereign AI infrastructure as part of Ottawa’s broader strategy to strengthen domestic compute capacity and support the country’s AI ecosystem.

The initiative was announced by Evan Solomon (Minister of Artificial Intelligence and Digital Innovation and Minister responsible for the Federal Economic Development Agency for Southern Ontario) and focuses on a proposed AI data centre project in British Columbia designed to support researchers, businesses, and academic institutions.

The project forms part of Canada’s ‘Enabling large-scale sovereign AI data centres’ initiative, introduced under Budget 2025. Ottawa stated that sovereign compute infrastructure is increasingly important for maintaining national competitiveness in AI while ensuring Canadian data, intellectual property, and economic value remain within the country.

The government also confirmed that no formal funding has yet been committed, with discussions currently progressing through non-binding memoranda of understanding with selected industry participants.

Local officials argued that large-scale compute infrastructure has become a strategic economic requirement as governments worldwide race to expand AI processing capabilities. Canada believes it holds competitive advantages due to its colder climate, sustainable energy resources, and network infrastructure, all of which could help attract future AI investment and hyperscale data centre development.

Why does it matter?

The race for sovereign AI infrastructure is rapidly becoming one of the most important geopolitical and economic competitions of the digital era. The Canada-TELUS partnership illustrates how countries are moving beyond AI model development alone and shifting focus towards the physical infrastructure required to sustain future AI ecosystems, including data centres, energy capacity, semiconductors, and domestic compute networks.

China launches AI ethics review pilot programme

China has launched a national pilot programme for AI ethics review and services, as authorities move to strengthen oversight of the growing risks linked to advanced AI systems.

The initiative, announced by China’s Ministry of Industry and Information Technology, aims to establish practical mechanisms for AI ethics governance as concerns over algorithmic discrimination, emotional dependence, and broader societal risks continue to grow. Authorities said the programme will initially operate in provincial-level regions hosting national AI industrial innovation pilot zones.

The pilot will focus on refining provincial AI ethics review rules, supporting the creation of ethics committees, and developing specialised ethics review and service centres. Chinese regulators also plan to turn the ethics review process into technical standards while improving mechanisms for reporting AI-related ethical concerns.

The ministry has also called for the creation of a national AI ethics risk monitoring service network, along with training materials, ethics education courses, and early-warning systems to support pilot cities.

By embedding ethics reviews into AI development and deployment processes, China appears to be building a more institutionalised framework for managing the societal and technological risks associated with increasingly powerful AI systems.

Why does it matter?

China’s latest move signals a shift from broad AI governance principles towards operational enforcement mechanisms embedded directly into regional innovation ecosystems. The programme could influence how other governments approach AI oversight, particularly as global concerns grow over algorithmic bias, psychological manipulation, and accountability in frontier AI systems.

UK’s Ofcom prioritises child protection and AI moderation under Online Safety Act

The UK’s Ofcom has outlined its main online safety priorities for 2026–27, signalling tougher oversight of digital platforms under the Online Safety Act. The regulator said it will continue to focus heavily on child protection while expanding enforcement efforts against illegal hate speech, terrorism-related material, intimate image abuse, and AI-generated harms.

The regulator confirmed that more than 100,000 online services now fall within the scope of the legislation, creating major compliance and enforcement challenges. Ofcom said it will continue investigating platforms that fail to prevent harmful or illegal content, while also preparing new rules linked to additional UK legislation covering cyberflashing, non-consensual intimate imagery, and generative AI services.

Ofcom stated that major online platforms have already introduced broader age verification measures under regulatory pressure. Services including gaming, dating, social media, and pornography platforms have implemented stronger age checks and child safety protections.

Furthermore, the regulator said it will expand supervision of large technology companies and publish updated safety codes later this year, including guidance on AI-powered moderation systems.

According to Ofcom, future compliance work will increasingly focus on the effectiveness of platform moderation systems rather than relying solely on reactive content removal. The regulator also plans to strengthen protections for women and girls online through new technical standards designed to block the spread of non-consensual intimate images and sexual deepfakes at scale.
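
Ofcom’s codes do not prescribe a specific mechanism, but one technique already used at scale for blocking known images is perceptual hash matching, in which uploads are compared against hashes of previously identified abusive images (the approach behind services such as StopNCII). A minimal sketch in Python, assuming the open-source `imagehash` library and a placeholder hash list:

```python
# Minimal sketch of perceptual-hash matching against a list of known
# non-consensual intimate images. Illustrative only: the hash value and
# threshold below are placeholders, not any regulator's specification.
from PIL import Image
import imagehash

# Hypothetical shared hash list (hex-encoded 64-bit perceptual hashes).
KNOWN_HASHES = {imagehash.hex_to_hash("d1c4f0f0e0c0c0d0")}

MAX_DISTANCE = 6  # Hamming-distance tolerance for near-duplicates


def should_block(upload_path: str) -> bool:
    """Return True if an upload perceptually matches a known image."""
    candidate = imagehash.phash(Image.open(upload_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)
```

Because perceptual hashes tolerate re-encoding, resizing and small edits, matching of this kind can catch re-uploads of known images, though it cannot by itself detect newly generated deepfakes.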

Australia launches national AI platform ‘AI.gov.au’

Australia’s Department of Industry, Science and Resources has announced the launch of AI.gov.au through the National Artificial Intelligence Centre. The platform is designed to help organisations adopt AI safely and responsibly, in line with the National AI Plan.

AI.gov.au provides a central source of guidance, tools and resources to support businesses and not-for-profits. It aims to help users identify AI opportunities, plan implementation, manage risks and build internal capability.

The platform’s development was informed by research and engagement with industry and government, which highlighted the need for clear starting points, practical advice and support for the organisational change that AI adoption involves. It also supports the AI Safety Institute’s work by improving access to safety guidance.

Initial features focus on small and medium-sized enterprises and include training, case studies and adoption tools, with further updates planned. The initiative reflects efforts to strengthen AI uptake and governance in Australia.

China outlines AI and energy integration plan

China’s National Energy Administration, together with the National Development and Reform Commission, the Ministry of Industry and Information Technology and the National Data Administration, has released an action plan to promote the mutually reinforcing development of AI and the energy sector.

The plan focuses on ensuring a reliable energy supply for computing infrastructure while using AI to support energy transformation. It outlines 29 key tasks covering green energy use, efficient coordination between power and computing, and expanding high-value AI applications in energy.

Authorities aim to significantly improve the clean energy supply for AI computing and strengthen AI adoption in energy by 2030. The strategy also seeks to enhance data use and drive innovation in AI models within the energy sector.

The agencies will establish coordination mechanisms across government and industry to support implementation and innovation. The initiative reflects a broader push to integrate AI and energy systems more deeply in China.
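
The plan itself does not prescribe algorithms, but the idea of efficient coordination between power and computing can be illustrated with a toy scheduler that shifts a deferrable AI training job into the hours with the highest forecast share of clean generation. A hypothetical sketch (the forecast figures are invented):

```python
# Toy illustration of power-computing coordination: place a deferrable
# AI training job in the hours with the highest forecast clean-energy share.
# All numbers are invented; the action plan does not specify an algorithm.
clean_share_forecast = {  # hour of day -> forecast clean share of generation
    0: 0.62, 3: 0.58, 6: 0.48, 9: 0.66, 12: 0.81, 15: 0.74, 18: 0.55, 21: 0.60,
}


def pick_hours(forecast: dict[int, float], hours_needed: int) -> list[int]:
    """Return the hours with the highest clean-energy share, in time order."""
    greenest = sorted(forecast, key=forecast.get, reverse=True)[:hours_needed]
    return sorted(greenest)


print(pick_hours(clean_share_forecast, 3))  # -> [9, 12, 15]
```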

EDPS frames safe AI as Europe’s next big idea

The European Data Protection Supervisor has framed safe and ethical AI as a defining European idea, linking AI governance to Europe’s history of collective initiatives rooted in shared values and fundamental rights.

In a Europe Day blog post, EDPS official Leonardo Cervera Navas argues that Europe’s approach to AI builds on earlier initiatives such as data protection, the creation of the EDPS and the adoption of the General Data Protection Regulation. He presents the AI Act as a continuation of that tradition, aimed at ensuring that AI systems operate safely, ethically and in line with fundamental rights.

The post highlights the AI Act’s risk-based model, which prohibits AI systems posing unacceptable risks to health, safety and fundamental rights, while setting binding requirements for high-risk systems in areas such as safety, transparency, human oversight and rights protection. It also notes that most AI systems are considered minimal risk and fall outside the regulation’s scope.
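
The risk-based model the post describes can be sketched as a simple mapping; the example classifications below are common illustrations of the tiers, not legal determinations:

```python
# Schematic of the AI Act's risk-based model as described in the post.
# Tier assignments here are illustrative simplifications, not legal advice.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "binding requirements: safety, transparency, human oversight"
    MINIMAL = "outside the regulation's binding requirements"


EXAMPLE_SYSTEMS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool used in recruitment": RiskTier.HIGH,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} ({tier.value})")
```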

Cervera Navas also points to the EDPS’s practical role under the AI Act as the AI supervisor for the EU institutions, agencies and bodies. The post refers to the EDPS network of AI Act correspondents, the mapping of AI systems used in the EU public administration, and a regulatory sandbox pilot for testing AI systems in compliance with the AI Act.

The post also emphasises international cooperation, including EDPS engagement through the AI Board, cooperation with market surveillance authorities, UNESCO’s Global Network of AI Supervising Authorities, Council of Europe work on AI risk and impact assessment, and AI discussions within the OECD.

Why does it matter?

The EDPS evidently wants Europe’s AI governance model to be understood not only as regulation, but as part of a broader rights-based digital policy tradition. Its significance lies in linking the AI Act with practical supervision, institutional coordination and international cooperation, suggesting that the next test for Europe’s AI approach will be implementation rather than rule-making alone.

World Economic Forum analysis explores AI-driven future planning for organisations

A World Economic Forum article argues that organisations need to move beyond static reports and analytical forecasts to become more future-ready in an era marked by rapid technological and geopolitical change.

The article highlights FutureSlam, a foresight method that combines participatory scenario-building, AI-supported reflection and improvisational performance to help organisations experience possible futures rather than merely analyse them. The authors say many organisations already invest in foresight but struggle to translate insights into operational decisions, because those insights often remain confined to strategy teams and slide decks.

The approach integrates human imagination with AI-generated scenarios. Participants first develop scenarios themselves, before comparing them with future images generated by an AI system using the same trend material. The authors argue that this comparison can challenge assumptions, confirm parts of participants’ reasoning and introduce perspectives that human groups may avoid.
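
The article does not describe FutureSlam’s tooling, but the AI-supported step can be approximated by prompting a language model with the same trend material the human group used and setting the output against the human-built scenarios. A rough sketch, assuming the `openai` Python client and invented trend material:

```python
# Rough sketch of the comparison step: generate a scenario from the same
# trend material the human group used. The prompt, model choice and trends
# are hypothetical; this is not FutureSlam's actual tooling.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

trends = [
    "rapid diffusion of general-purpose AI",
    "fragmenting geopolitical alliances",
    "ageing workforces in advanced economies",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "From these trends, write a 150-word scenario describing "
                   "one plausible organisational future in 2035:\n- "
                   + "\n- ".join(trends),
    }],
)
print(response.choices[0].message.content)  # compare with the human scenarios
```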

FutureSlam then uses improvised performance, including simulated news broadcasts and staged scenarios, to make possible futures more tangible. According to the article, the method is designed to make foresight more inclusive, structured and memorable by turning participants into co-creators rather than passive recipients of expert analysis.

The authors suggest that such approaches could help organisations adapt more effectively to technological, geopolitical and societal change by turning foresight into a shared organisational capability rather than a niche strategic exercise.

Why does it matter?

AI is increasingly being used not only to automate tasks, but also to support strategic thinking, scenario-building and organisational learning. The FutureSlam example points to a broader shift in how organisations may prepare for uncertainty: less focus on predicting precise outcomes, and more focus on building the capacity to test assumptions, imagine alternatives and adapt collectively.

Canada issues age assurance guidance

The Office of the Privacy Commissioner of Canada has issued guidance on how organisations should assess and implement age assurance tools for websites and online services.

The OPC states that age assurance should only be used where there is a clear legal requirement or a demonstrable risk of harm to children. It emphasises that organisations must evaluate whether alternative, less intrusive measures could address these risks before adopting such systems.

The guidance highlights that any age assurance approach, including those that use AI, must be proportionate, limit personal data collection, and operate in a privacy-protective manner. It also warns against using collected data for other purposes or linking user activity across sessions.

The OPC adds that organisations must give users a choice over the type of personal information used in an age-assurance process, provide appeal mechanisms, and minimise repeated verification. The framework aims to balance child protection with privacy rights, and the guidance applies to online services operating in Canada.
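
As a minimal illustration of the data-minimisation principle the guidance describes, an age-assurance step can derive only the signal it needs (an over-threshold flag) and discard the underlying attribute. A hypothetical sketch, not a mechanism specified by the OPC:

```python
# Illustrative sketch of data minimisation in age assurance: derive a single
# boolean from the date of birth, then let the birth date go out of scope
# rather than storing it, reusing it, or linking it across sessions.
# Hypothetical code, not an OPC-specified mechanism.
from datetime import date


def is_over(threshold_years: int, dob: date, today: date | None = None) -> bool:
    """Return True if a person born on `dob` is at least `threshold_years` old."""
    today = today or date.today()
    had_birthday = (today.month, today.day) >= (dob.month, dob.day)
    age = today.year - dob.year - (0 if had_birthday else 1)
    return age >= threshold_years


# Only the boolean leaves this step; the DOB itself is never persisted.
claim = {"over_18": is_over(18, date(2004, 3, 1))}
print(claim)
```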
