Greece accelerates digital governance with AI enforcement and social media age restrictions

Greece is moving to tighten online child protection and expand AI-based public enforcement as part of a broader digital governance agenda, Digital Governance and Artificial Intelligence Minister Dimitris Papastergiou has said.

Under the plan, social media platforms would, from 2027, be required to block access for users under 15, using age verification systems rather than self-declared age data.

The policy includes tools such as Kids Wallet, built on privacy-preserving verification methods that share only age eligibility. Authorities say the aim is to address risks linked to digital addiction while strengthening protections for minors across online environments.

Alongside these measures, AI is already being deployed in road safety enforcement. Smart cameras are being used to issue digital fines through government platforms, with a nationwide rollout planned to expand monitoring and improve compliance.

These measures form part of a wider effort to digitise public administration, reduce inefficiencies, and strengthen accountability. By embedding technology more deeply into everyday governance, Greece is trying to reshape how citizens interact with the state while also addressing long-standing systemic problems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU advances GPAI framework with focus on forecasting systemic risks

At the third meeting of the Signatory Taskforce, the European Commission advanced discussions on how to strengthen oversight of advanced AI systems through the General-Purpose AI Code of Practice, with a particular focus on risk forecasting and harmful manipulation.

The latest GPAI taskforce meeting focused on improving how providers assess and anticipate systemic risks linked to high-impact AI models. A central proposal would require providers to estimate when future systems may exceed the highest systemic risk tier already reached by any of their existing models, using structured forecasting methods.

The Commission is also considering using aggregate forecasts across the industry to provide a broader view of technological trends, including compute capacity, algorithmic efficiency, and data availability. The aim is to improve visibility into how capabilities may evolve across the sector rather than only at the level of individual providers.

Attention was also directed towards harmful manipulation, which the Code treats as a recognised systemic risk. Discussions focused on how providers should develop realistic scenarios for testing and evaluating model behaviour, including in deployment settings such as chatbot interfaces, third-party applications, and agentic systems.

The initiative reflects a wider EU regulatory approach centred on transparency, accountability, and proactive governance in AI development. Rather than waiting for harms to materialise, the Code of Practice is being used to push providers to identify risks earlier and to adopt more structured safety planning for general-purpose AI models with systemic risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Saudi initiative attempts to link AI with sustainability goals

A new AI-enabled sustainability platform developed with support from the World Economic Forum aims to strengthen partnerships across sectors. The initiative is led by Saudi Arabia’s Ministry of Economy and Planning as part of its wider development agenda.

The platform, known as SUSTAIN, uses AI to match organisations with potential partners and opportunities. It is designed to connect government, businesses, academia, and civil society more efficiently and to help move sustainability projects from planning to implementation.
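A matching system of this kind can be illustrated with a simple similarity score over organisations' declared focus areas. The data, tags, and Jaccard-based method below are assumptions for illustration, not the SUSTAIN platform's design:

```python
# Pair each organisation with the candidate sharing the most
# sustainability focus areas, using Jaccard similarity over tag sets.

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap of two tag sets, ranging from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

orgs = {  # hypothetical focus-area profiles
    "ministry": {"water", "energy", "circular economy"},
    "startup": {"energy", "storage"},
    "university": {"water", "circular economy", "desalination"},
}

def best_partner(name: str) -> str:
    """Return the other organisation with the highest tag overlap."""
    others = (o for o in orgs if o != name)
    return max(others, key=lambda o: jaccard(orgs[name], orgs[o]))

print(best_partner("ministry"))  # -> "university" (shares two focus areas)
```

A production system would likely use richer signals (project descriptions, embeddings, track record), but the ranking principle is the same: surface the partners whose goals overlap most.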

Developers say the system could accelerate collaboration and support the delivery of higher-impact sustainability projects. Official estimates suggest it could help unlock partnerships worth up to $20 billion in Saudi Arabia and significantly more across the wider region.

The initiative forms part of broader efforts to advance long-term sustainability goals through more coordinated action and practical uses of AI. The project is being developed in Saudi Arabia and presented as a tool to strengthen cross-sector cooperation rather than as a stand-alone sustainability programme.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Malaysia expands national AI strategy through Microsoft partnership

Malaysia is strengthening its national AI strategy through an expanded partnership with Microsoft, launching the Microsoft Elevate initiative to accelerate AI readiness across society.

The programme aligns with the country’s AI Nation 2030 ambitions and extends digital skills development beyond traditional sectors.

The initiative targets educators, public sector institutions, small businesses and wider communities, aiming to embed practical AI capabilities into everyday economic and social activity.

Early deployment has already reached tens of thousands of learners, reflecting a shift from pilot programmes to large-scale national implementation.

Government and industry leaders in Malaysia emphasise that long-term competitiveness depends not only on technological investment but on widespread adoption and understanding of AI tools.

The programme therefore prioritises workforce activation, institutional capacity and sustainable integration across sectors.

Malaysia’s approach reflects a broader global trend where public–private partnerships are increasingly central to AI development, focusing on inclusive access, responsible use and real-world application rather than purely technological advancement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNIDIR highlights the security implications of the shift from classical to quantum technologies

The United Nations Institute for Disarmament Research (UNIDIR) has outlined the evolution of digital technologies from early internet systems to emerging quantum capabilities, highlighting their growing impact on global systems and security.

In its analysis, UNIDIR traces the progression from dial-up connectivity and classical computing to advanced technologies such as AI and quantum computing, noting that innovation cycles are accelerating and becoming increasingly interconnected. The organisation states that the transition to quantum technologies represents a significant shift in how data is processed, stored and secured.

Unlike classical systems, quantum computing introduces new capabilities that could transform fields ranging from scientific research to communications.

However, UNIDIR warns that these advances also present risks, particularly in cybersecurity. Quantum technologies could challenge existing encryption methods and expose vulnerabilities in digital infrastructure, with implications for governments, businesses and critical systems.

The analysis also links emerging technologies to broader geopolitical dynamics, noting that competition over technological leadership is becoming a key factor in international security. As digital and physical systems converge, technological developments are increasingly shaping strategic stability.

Why does it matter?

UNIDIR emphasises the need for forward-looking governance, international cooperation and policy coordination to manage these challenges. It calls for stronger dialogue among states and stakeholders to ensure that technological progress supports global security rather than undermines it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Azerbaijan explores regulatory framework for AI and intellectual property

Azerbaijani lawmakers and experts discussed the legal status of AI systems and their implications for intellectual property (IP) at a policy roundtable in Baku, Trend News Agency reported.

Speaking at the event marking World Intellectual Property Day, Member of the Azerbaijani Parliament Hijran Huseynova said that defining the legal nature of AI remains a key issue as the technology advances.

Participants highlighted differing views on whether AI should be treated as a legal entity or regarded solely as a tool. While some experts argued that AI lacks independent legal standing, others suggested that its ability to make autonomous decisions requires deeper legal examination.

The discussion also addressed whether outputs generated by AI systems can qualify for patent protection, an issue that remains under debate in many jurisdictions.

Huseynova noted that the growing use of AI is raising complex questions about ownership and rights, as traditional intellectual property frameworks are based on human creativity.

Why does it matter?

The debate comes as Azerbaijan advances its national AI strategy for 2025–2028, which includes efforts to establish legal and institutional frameworks for the development and regulation of AI technologies. Officials say these measures aim to address emerging legal challenges and support the responsible adoption of AI as part of the country’s broader digital transformation agenda.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Nigeria’s TETFund supports AI research and digital development in universities

The Tertiary Education Trust Fund (TETFund) has outlined efforts to support AI research and digital development in Nigeria's higher education institutions. The initiative focuses on strengthening research capacity and innovation.

According to TETFund, funding is being directed towards projects that promote technological advancement, including AI-related studies and infrastructure, with the aim of enhancing academic output and relevance.

The Fund also highlights the importance of building skills and supporting researchers to engage with emerging technologies, an approach intended to improve competitiveness and knowledge creation.

Why does it matter?

TETFund presents the initiative as part of broader efforts to advance research and innovation in Nigeria's education sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Wikipedia-based AI model identifies 100 emerging technologies to watch in 2026

A new analysis by Australian researchers reveals how AI is reshaping the way emerging technologies are identified and tracked.

Using a dataset derived from thousands of Wikipedia entries, the researchers mapped more than 23,000 technologies to produce the ‘Momentum 100’ list, highlighting the fastest-growing technologies across science and industry.

The findings place reinforcement learning at the top, followed closely by blockchain and other rapidly advancing fields such as 3D printing, soft robotics and augmented reality.

These technologies reflect a broader shift towards data-driven innovation, where systems capable of learning, adapting and scaling are becoming central to both research and commercial applications.

Unlike traditional forecasts, which often rely on expert judgement, the model uses large-scale data analysis to detect patterns of growth and interconnection between technologies.

The approach offers a more dynamic and repeatable method, capturing early signals that might otherwise be overlooked in manual assessments.
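The core of such a ranking can be sketched as a growth-rate score over an activity metric (for instance, Wikipedia page views or edits) measured across two periods. The metric, figures, and formula below are assumptions for illustration, not the Momentum 100 methodology:

```python
# Score each technology by relative growth in an activity metric
# between two consecutive periods, then rank fastest-growing first.

def momentum(prev: float, curr: float) -> float:
    """Relative growth rate; an unseen-before topic scores infinity."""
    return (curr - prev) / prev if prev else float("inf")

activity = {  # hypothetical activity counts for two periods
    "reinforcement learning": (10_000, 26_000),
    "blockchain": (40_000, 92_000),
    "soft robotics": (3_000, 5_400),
}

ranked = sorted(activity, key=lambda t: momentum(*activity[t]), reverse=True)
print(ranked)  # fastest-growing first
```

Because the score depends only on observable counts, the ranking can be recomputed as new data arrives, which is what makes the approach repeatable compared with one-off expert surveys.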

Despite its advantages, researchers caution that predicting real-world impact remains difficult at early stages.

While AI-driven mapping provides valuable insights, policymakers and industry leaders still rely on hybrid approaches that combine data analysis with expert evaluation, as seen in frameworks developed by organisations such as the World Economic Forum.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5.5 pushes AI deeper into agentic work

OpenAI has released GPT-5.5 as its latest push towards more capable agentic AI, presenting the model as better suited to complex, multi-step digital work across coding, research, analysis, and enterprise tasks.

The company frames it as a system designed to carry more of the work itself, moving beyond isolated prompt-response interactions towards fuller execution across digital workflows.

According to OpenAI, the model’s biggest gains are in software engineering, tool use, and knowledge work. GPT-5.5 improves performance on coding and workflow benchmarks, strengthens long-horizon reasoning, and handles complex digital tasks with greater efficiency while maintaining earlier latency standards.

OpenAI also says the model performs better across documents, spreadsheets, presentations, and data analysis, reflecting a broader effort to make AI more useful across full professional workflows rather than only as an assistant for isolated tasks.

The release also highlights stronger performance in scientific and technical research, alongside expanded safety testing and tighter safeguards for higher-risk capabilities.

The wider significance of GPT-5.5 lies in its reflection of the next phase of AI competition. The focus is shifting from better answers to more reliable execution across real-world digital work, with growing implications for productivity, oversight, and governance.

Why does it matter? 

GPT-5.5 signals a shift from AI as a passive tool to AI as an active digital operator that can complete full workflows across coding, research, and business systems with minimal human supervision.

Over time, such capability could reshape productivity, speed up development cycles, and shift competitive advantage toward those best integrating autonomous AI while managing safety and governance risks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Meta expands parental oversight with new AI conversation insights for teens

Meta has introduced new supervision features that allow parents to see the topics their teenagers discuss with its AI assistant across Facebook, Messenger, and Instagram.

The update provides visibility into activity over the previous seven days, grouping interactions into areas such as education, health and well-being, lifestyle, travel, and entertainment. Parents can review these themes through a new Insights tab, although they will not see the exact prompts their teen sent or Meta AI’s responses.
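Theme-level reporting of this kind can be sketched as bucketing each interaction into a coarse category and surfacing only weekly counts, never the underlying prompts. The categories, keywords, and classifier below are illustrative assumptions, not Meta's system:

```python
# Aggregate a week of chat topics into per-theme counts; the parent-
# facing view exposes only these counts, not the prompts themselves.

THEMES = {  # hypothetical keyword buckets
    "education": {"homework", "exam", "essay"},
    "health and well-being": {"sleep", "exercise", "stress"},
    "entertainment": {"movie", "game", "music"},
}

def classify(topic: str) -> str:
    """Map a topic keyword to its theme; 'other' if unrecognised."""
    for theme, words in THEMES.items():
        if topic in words:
            return theme
    return "other"

def weekly_insights(topics: list[str]) -> dict[str, int]:
    """Count interactions per theme -- the only view a parent sees."""
    counts: dict[str, int] = {}
    for t in topics:
        theme = classify(t)
        counts[theme] = counts.get(theme, 0) + 1
    return counts

print(weekly_insights(["homework", "movie", "stress", "exam"]))
```

The privacy trade-off is visible in the code: raw topics go in, but only aggregated theme counts come out.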

The feature forms part of Meta’s broader effort to strengthen safeguards for younger users as AI becomes more embedded in everyday digital experiences. For more sensitive issues, including suicide and self-harm, Meta says it is developing additional alerts to notify parents when teens try to engage in those types of conversations with its AI assistant.

Meta has also partnered with external experts, including the Cyberbullying Research Centre, to develop structured conversation prompts to help families talk about AI use. The company says these tools are intended to support informed, non-judgemental dialogue rather than passive monitoring.

Alongside these updates, Meta has created an AI Wellbeing Expert Council to provide input on the development of age-appropriate AI systems for teens. The move reflects a wider shift towards embedding safety, transparency, and parental involvement into AI-driven platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!