Azerbaijan explores regulatory framework for AI and intellectual property

Azerbaijani lawmakers and experts discussed the legal status of AI systems and their implications for intellectual property (IP) at a policy roundtable in Baku, Trend News Agency reported.

Speaking at the event marking World Intellectual Property Day, Member of the Azerbaijani Parliament Hijran Huseynova said that defining the legal nature of AI remains a key issue as the technology advances.

Participants highlighted differing views on whether AI should be treated as a legal entity or regarded solely as a tool. While some experts argued that AI lacks independent legal standing, others suggested that its ability to make autonomous decisions requires deeper legal examination.

The discussion also addressed whether outputs generated by AI systems can qualify for patent protection, an issue that remains under debate in many jurisdictions.

Huseynova noted that the growing use of AI is raising complex questions about ownership and rights, as traditional intellectual property frameworks are based on human creativity.

Why does it matter?

The debate comes as Azerbaijan advances its national AI strategy for 2025–2028, which includes efforts to establish legal and institutional frameworks for the development and regulation of AI technologies. Officials say these measures aim to address emerging legal challenges and support the responsible adoption of AI as part of the country’s broader digital transformation agenda.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Nigeria’s TETFund supports AI research and digital development in universities

The Tertiary Education Trust Fund (TETFund) has outlined efforts to support AI research and digital development in Nigeria's higher education institutions. The initiative focuses on strengthening research capacity and innovation.

According to TETFund, funding is being directed towards projects that promote technological advancement, including AI-related studies and infrastructure, with the aim of enhancing academic output and relevance.

The Fund also highlights the importance of building skills and supporting researchers to engage with emerging technologies, an approach intended to improve competitiveness and knowledge creation.

Why does it matter?

TETFund presents the initiative as part of broader efforts to advance research and innovation in Nigeria's education sector.

Wikipedia-based AI model identifies 100 emerging technologies to watch in 2026

A new analysis by Australian researchers reveals how AI is reshaping the way emerging technologies are identified and tracked.

Using a dataset derived from thousands of Wikipedia entries, the researchers mapped more than 23,000 technologies to produce the ‘Momentum 100’ list, highlighting the fastest-growing technologies across science and industry.

The findings place reinforcement learning at the top, followed closely by blockchain and other rapidly advancing fields such as 3D printing, soft robotics and augmented reality.

These technologies reflect a broader shift towards data-driven innovation, where systems capable of learning, adapting and scaling are becoming central to both research and commercial applications.

Unlike traditional forecasts, which often rely on expert judgement, the model uses large-scale data analysis to detect patterns of growth and interconnection between technologies.

The approach offers a more dynamic and repeatable method, capturing early signals that might otherwise be overlooked in manual assessments.

Despite its advantages, researchers caution that predicting real-world impact remains difficult at early stages.

While AI-driven mapping provides valuable insights, policymakers and industry leaders still rely on hybrid approaches that combine data analysis with expert evaluation, as seen in frameworks developed by organisations such as the World Economic Forum.

GPT-5.5 pushes AI deeper into agentic work

OpenAI has released GPT-5.5 as its latest push towards more capable agentic AI, presenting the model as better suited to complex, multi-step digital work across coding, research, analysis, and enterprise tasks.

The company frames it as a system designed to carry more of the work itself, moving beyond isolated prompt-response interactions towards fuller execution across digital workflows.

According to OpenAI, the model’s biggest gains are in software engineering, tool use, and knowledge work. GPT-5.5 improves performance on coding and workflow benchmarks, strengthens long-horizon reasoning, and handles complex digital tasks with greater efficiency while maintaining earlier latency standards.

OpenAI also says the model performs better across documents, spreadsheets, presentations, and data analysis, reflecting a broader effort to make AI more useful across full professional workflows rather than only as an assistant for isolated tasks.

The release also highlights stronger performance in scientific and technical research, alongside expanded safety testing and tighter safeguards for higher-risk capabilities.

The wider significance of GPT-5.5 lies in its reflection of the next phase of AI competition. The focus is shifting from better answers to more reliable execution across real-world digital work, with growing implications for productivity, oversight, and governance.

Why does it matter? 

GPT-5.5 signals a shift from AI as a passive tool to AI as an active digital operator that can complete full workflows across coding, research, and business systems with minimal human supervision.

Over time, such capability could reshape productivity, speed up development cycles, and shift competitive advantage toward those best integrating autonomous AI while managing safety and governance risks.

Meta expands parental oversight with new AI conversation insights for teens

Meta has introduced new supervision features that allow parents to see the topics their teenagers discuss with its AI assistant across Facebook, Messenger, and Instagram.

The update provides visibility into activity over the previous seven days, grouping interactions into areas such as education, health and well-being, lifestyle, travel, and entertainment. Parents can review these themes through a new Insights tab, although they will not see the exact prompts their teen sent or Meta AI’s responses.

The feature forms part of Meta’s broader effort to strengthen safeguards for younger users as AI becomes more embedded in everyday digital experiences. For more sensitive issues, including suicide and self-harm, Meta says it is developing additional alerts to notify parents when teens try to engage in those types of conversations with its AI assistant.

Meta has also partnered with external experts, including the Cyberbullying Research Centre, to develop structured conversation prompts to help families talk about AI use. The company says these tools are intended to support informed, non-judgemental dialogue rather than passive monitoring.

Alongside these updates, Meta has created an AI Wellbeing Expert Council to provide input on the development of age-appropriate AI systems for teens. The move reflects a wider shift towards embedding safety, transparency, and parental involvement into AI-driven platforms.

UK embraces 6 frontier technologies to drive digital growth

The UK government has identified six frontier technologies as central to strengthening digital capability, economic growth, and long-term competitiveness.

Outlined in the 2025 Modern Industrial Strategy and Digital and Technologies Sector Plan, the approach prioritises AI, cybersecurity, advanced connectivity, engineering biology, quantum technologies, and semiconductors as pillars of national resilience and technological sovereignty.

Advanced connectivity and AI remain core drivers of digital transformation. Investment in next-generation telecoms, including 5G and future 6G development, is supported through public funding and infrastructure initiatives, while AI continues to expand rapidly through commitments to compute capacity, national supercomputing infrastructure, and workforce development. The strategy aims to strengthen the UK's position as a leading European AI hub.

Cybersecurity, engineering biology, and quantum technologies reflect a broader strategy linking innovation with security, resilience, and sustainability. Government-backed programmes are intended to support commercialisation, strengthen secure-by-design systems, and accelerate growth in emerging areas such as bio-based manufacturing. Quantum technologies are also being positioned for longer-term use across sectors, including healthcare, defence, and finance.

Semiconductors complete the strategy as a foundational technology underpinning modern digital systems. Rather than focusing on large-scale manufacturing, the UK is prioritising areas such as design, photonics, compound semiconductors, and specialised materials, backed by targeted funding and institutional support.

Across all six areas, the strategy reflects a wider effort to align innovation policy with economic security, global competitiveness, and more resilient supply chains.

UK government seeks industry cooperation to strengthen AI-driven cyber resilience

The UK government has called on leading AI companies to collaborate on building advanced cyber defence capabilities, as threats grow in scale and sophistication.

Speaking ahead of CYBERUK, Security Minister Dan Jarvis emphasised that AI-driven security will become a defining challenge, requiring innovation at unprecedented speed and scale.

Government officials warn that AI is already reshaping the threat landscape, with hostile states and criminal groups increasingly deploying automated systems to identify vulnerabilities.

The number of nationally significant cyber incidents handled by authorities more than doubled in 2025, highlighting the urgency of strengthening national resilience.

To address these risks, businesses are being encouraged to sign a voluntary Cyber Resilience Pledge, committing to stronger governance, early warning systems, and supply chain security standards.

Alongside this initiative, the UK government will invest £90 million over the next three years to support cyber defences, particularly for small and medium-sized enterprises.

The strategy forms part of a broader National Cyber Action Plan, reflecting a shift towards integrating AI into national security infrastructure.

Officials argue that effective cooperation between government and industry will be essential to protect critical systems and maintain economic stability in an increasingly automated threat environment.

UNIDIR highlights role of women in AI governance and international security

The United Nations Institute for Disarmament Research (UNIDIR) highlights the role of women in shaping the digital future, particularly in AI and international security. The organisation stresses the importance of increasing female participation in decision-making.

According to the institute, women remain underrepresented in AI and related policy spaces, including diplomacy and security forums. This imbalance risks limiting the range of perspectives in global technology governance.

The organisation’s Women in AI Fellowship programme aims to address this gap by providing training and expertise to women diplomats. Participants gain knowledge across technical, legal and policy aspects of AI.

UNIDIR positions inclusion as essential to effective AI governance and security policy, emphasising the need for diverse voices in shaping digital futures globally.

Ukraine highlights AI strategic shifts

The National Security and Defense Council of Ukraine has published an overview of global AI developments for March 2026, highlighting a shift towards infrastructure and strategic realignment. The report is part of its ‘AI Frontiers’ analytical series.

According to the Council, growing investment in and expansion of data centres to meet AI demand are increasing pressure on energy resources, creating new competition not only for computing power but also for energy stability.

The analysis also points to intensifying competition between the US, China and the European Union, extending beyond AI models to supply chains, semiconductors and infrastructure. At the same time, AI is becoming more integrated into defence, cyberspace and information operations.

The Council highlights rising risks linked to disinformation, synthetic content and legal challenges, alongside growing demand for clearer regulation and content labelling as AI adoption expands in Ukraine.

ILO sets first global framework for AI use in manufacturing sector

The International Labour Organization (ILO) has adopted its first-ever tripartite conclusions on AI in manufacturing, marking a significant policy step in addressing the sector’s digital transformation.

Agreed following a five-day technical meeting in Geneva, the framework brings together governments, employers and workers to shape how AI is integrated into one of the world’s largest employment sectors.

These ILO conclusions respond to the growing impact of AI on manufacturing, which employs nearly 500 million people globally.

Rather than focusing solely on productivity gains, the framework emphasises the need to align technological adoption with labour standards, ensuring that innovation supports decent work, strengthens enterprises and contributes to inclusive economic growth.

Key provisions address skills development, lifelong learning and occupational safety, alongside the protection of fundamental rights at work.

The framework also highlights the importance of social dialogue, recognising that collaboration between stakeholders is essential to managing AI-driven change and mitigating potential disruptions to employment and working conditions.

The agreement reflects a broader effort to balance efficiency with worker protection, rejecting the notion that productivity and labour rights are competing priorities.

Instead, it positions AI as a tool that, if properly governed, can enhance both economic performance and job quality within the manufacturing sector.

The conclusions will be submitted to the ILO Governing Body in November 2026 for formal approval, with the intention of guiding national policies and international approaches to AI deployment in industry.