GPT-5.5 pushes AI deeper into agentic work

OpenAI has released GPT-5.5 as its latest push towards more capable agentic AI, presenting the model as better suited to complex, multi-step digital work across coding, research, analysis, and enterprise tasks.

The company frames it as a system designed to carry more of the work itself, moving beyond isolated prompt-response interactions towards fuller execution across digital workflows.

According to OpenAI, the model’s biggest gains are in software engineering, tool use, and knowledge work. GPT-5.5 improves performance on coding and workflow benchmarks, strengthens long-horizon reasoning, and handles complex digital tasks with greater efficiency while maintaining earlier latency standards.

OpenAI also says the model performs better across documents, spreadsheets, presentations, and data analysis, reflecting a broader effort to make AI more useful across full professional workflows rather than only as an assistant for isolated tasks.

The release also highlights stronger performance in scientific and technical research, alongside expanded safety testing and tighter safeguards for higher-risk capabilities.

The wider significance of GPT-5.5 lies in its reflection of the next phase of AI competition. The focus is shifting from better answers to more reliable execution across real-world digital work, with growing implications for productivity, oversight, and governance.

Why does it matter? 

GPT-5.5 signals a shift from AI as a passive tool to AI as an active digital operator that can complete full workflows across coding, research, and business systems with minimal human supervision.

Over time, such capability could reshape productivity, speed up development cycles, and shift competitive advantage toward those best integrating autonomous AI while managing safety and governance risks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Meta expands parental oversight with new AI conversation insights for teens

Meta has introduced new supervision features that allow parents to see the topics their teenagers discuss with its AI assistant across Facebook, Messenger, and Instagram.

The update provides visibility into activity over the previous seven days, grouping interactions into areas such as education, health and well-being, lifestyle, travel, and entertainment. Parents can review these themes through a new Insights tab, although they will not see the exact prompts their teen sent or Meta AI’s responses.

The feature forms part of Meta’s broader effort to strengthen safeguards for younger users as AI becomes more embedded in everyday digital experiences. For more sensitive issues, including suicide and self-harm, Meta says it is developing additional alerts to notify parents when teens try to engage in those types of conversations with its AI assistant.

Meta has also partnered with external experts, including the Cyberbullying Research Centre, to develop structured conversation prompts to help families talk about AI use. The company says these tools are intended to support informed, non-judgemental dialogue rather than passive monitoring.

Alongside these updates, Meta has created an AI Wellbeing Expert Council to provide input on the development of age-appropriate AI systems for teens. The move reflects a wider shift towards embedding safety, transparency, and parental involvement into AI-driven platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK embraces six frontier technologies to drive digital growth

The UK government has identified six frontier technologies as central to strengthening digital capability, economic growth, and long-term competitiveness.

Outlined in the 2025 Modern Industrial Strategy and Digital and Technologies Sector Plan, the approach prioritises AI, cybersecurity, advanced connectivity, engineering biology, quantum technologies, and semiconductors as pillars of national resilience and technological sovereignty.

Advanced connectivity and AI remain core drivers of digital transformation. Investment in next-generation telecoms, including 5G and future 6G development, is supported through public funding and infrastructure initiatives, while AI continues to expand rapidly through commitments to compute capacity, national supercomputing infrastructure, and workforce development. The strategy positions the UK as aiming to strengthen its role as a leading European AI hub.

Cybersecurity, engineering biology, and quantum technologies reflect a broader strategy linking innovation with security, resilience, and sustainability. Government-backed programmes are intended to support commercialisation, strengthen secure-by-design systems, and accelerate growth in emerging areas such as bio-based manufacturing. Quantum technologies are also being positioned for longer-term use across sectors, including healthcare, defence, and finance.

Semiconductors complete the strategy as a foundational technology underpinning modern digital systems. Rather than focusing on large-scale manufacturing, the UK is prioritising areas such as design, photonics, compound semiconductors, and specialised materials, backed by targeted funding and institutional support.

Across all six areas, the strategy reflects a wider effort to align innovation policy with economic security, global competitiveness, and more resilient supply chains.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK government seeks industry cooperation to strengthen AI-driven cyber resilience

The UK government has called on leading AI companies to collaborate on building advanced cyber defence capabilities, as threats grow in scale and sophistication.

Speaking ahead of CYBERUK, Security Minister Dan Jarvis emphasised that AI-driven security will become a defining challenge, requiring innovation at unprecedented speed and scale.

Government officials warn that AI is already reshaping the threat landscape, with hostile states and criminal groups increasingly deploying automated systems to identify vulnerabilities.

The number of nationally significant cyber incidents handled by authorities more than doubled in 2025, highlighting the urgency of strengthening national resilience.

To address these risks, businesses are being encouraged to sign a voluntary Cyber Resilience Pledge, committing to stronger governance, early warning systems, and supply chain security standards.

Alongside this initiative, the UK government will invest £90 million over the next three years to support cyber defences, particularly for small and medium-sized enterprises.

The strategy forms part of a broader National Cyber Action Plan, reflecting a shift towards integrating AI into national security infrastructure.

Officials argue that effective cooperation between government and industry will be essential to protect critical systems and maintain economic stability in an increasingly automated threat environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNIDIR highlights role of women in AI governance and international security

The United Nations Institute for Disarmament Research highlights the role of women in shaping the digital future, particularly in AI and international security. The organisation stresses the importance of increasing female participation in decision-making.

According to the institute, women remain underrepresented in AI and related policy spaces, including diplomacy and security forums. This imbalance risks narrowing the range of perspectives in global technology governance.

The organisation’s Women in AI Fellowship programme aims to address this gap by providing training and expertise to women diplomats. Participants gain knowledge across technical, legal and policy aspects of AI.

The institute positions inclusion as essential to effective AI governance and security policy, emphasising the need for diverse voices in shaping digital futures globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ukraine highlights AI strategic shifts

The National Security and Defense Council of Ukraine has published an overview of global AI developments for March 2026, highlighting a shift towards infrastructure and strategic realignment. The report is part of its ‘AI Frontiers’ analytical series.

According to the Council, growing investment and expansion of data centres to fuel AI demands are increasing pressure on energy resources. This is creating new competition not only for computing power but also for energy stability.

The analysis also points to intensifying competition between the US, China and the European Union, extending beyond AI models to supply chains, semiconductors and infrastructure. At the same time, AI is becoming more integrated into defence, cyberspace and information operations.

The Council highlights rising risks linked to disinformation, synthetic content and legal challenges, alongside growing demand for clearer regulation and content labelling as AI adoption expands in Ukraine.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ILO sets first global framework for AI use in manufacturing sector

The International Labour Organization (ILO) has adopted its first-ever tripartite conclusions on AI in manufacturing, marking a significant policy step in addressing the sector’s digital transformation.

Agreed following a five-day technical meeting in Geneva, the framework brings together governments, employers and workers to shape how AI is integrated into one of the world’s largest employment sectors.

These ILO conclusions respond to the growing impact of AI on manufacturing, which employs nearly 500 million people globally.

Rather than focusing solely on productivity gains, the framework emphasises the need to align technological adoption with labour standards, ensuring that innovation supports decent work, strengthens enterprises and contributes to inclusive economic growth.

Key provisions address skills development, lifelong learning and occupational safety, alongside the protection of fundamental rights at work.

The framework also highlights the importance of social dialogue, recognising that collaboration between stakeholders is essential to managing AI-driven change and mitigating potential disruptions to employment and working conditions.

The agreement reflects a broader effort to balance efficiency with worker protection, rejecting the notion that productivity and labour rights are competing priorities.

Instead, it positions AI as a tool that, if properly governed, can enhance both economic performance and job quality within the manufacturing sector.

The conclusions will be submitted to the ILO Governing Body in November 2026 for formal approval, with the intention of guiding national policies and international approaches to AI deployment in industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Employee monitoring grows at Meta as AI overhaul accelerates

Meta has introduced a new internal tool to track employee activity, including keystrokes and mouse movements, as part of efforts to train its AI systems. The company says the data will help improve AI models designed to perform everyday digital tasks.

According to company statements, the tracking is limited to Meta-owned devices and applications, with safeguards in place to protect sensitive information. The initiative reflects a broader strategy to gather real-world usage data to enhance the performance and accuracy of AI tools.

The move has raised concerns among employees, some of whom view the monitoring as intrusive, particularly amid ongoing job cuts and reduced hiring. Reports indicate that Meta has significantly scaled back recruitment while increasing investment in AI development.

The company has committed substantial resources to AI, with plans to expand spending and accelerate model development. Internal tracking is positioned as part of a broader shift toward automation, as firms seek to reshape workflows and productivity through AI.

The development highlights growing tensions between AI innovation and workplace privacy. Increased reliance on employee data to train AI systems may reshape labour practices, raising questions about surveillance, consent, and the balance between technological advancement and workers’ rights.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Law Society conference highlights GDPR’s role in regulating AI tools

GDPR obligations remain ‘fundamental’ when addressing data protection issues linked to AI tools, according to legal experts speaking at a conference organised by the Law Society’s Intellectual Property and Data Protection Commission, a committee within the Law Society of Ireland, on 20 April. The event reviewed recent legislative developments, case law and the use of AI tools in the workplace.

Olivia Mullooly, partner at Arthur Cox, said regulation in the area remains a ‘moving feast’ amid ongoing negotiations on the EU Digital Omnibus. She added that GDPR has been effective in regulating new and novel activities by AI companies, and continues to overlap with other regulatory frameworks.

In a panel discussion, Bird & Bird partner Deirdre Kilroy said firms should not ignore fundamental GDPR principles when using AI. She also noted that organisations should not delay compliance actions despite shifting regulatory conditions.

Speakers also discussed uncertainty around evolving EU rules and increasing complexity in compliance. The Data Protection Commission reported a rise in AI-related engagements, which accounted for one in four cases last year, up from one in 35 in 2021.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube expands AI deepfake detection tools for celebrities

YouTube has announced the expansion of its likeness detection technology to the entertainment industry, extending access beyond content creators to talent agencies, management companies and the individuals they represent.

The move is part of a broader effort by the platform to address the growing misuse of AI to generate misleading or unauthorised videos of public figures. By extending the tool to entertainment industry stakeholders, YouTube is signalling that AI-driven impersonation is no longer treated as a niche creator issue but as a broader identity and rights problem.

The system works in a way broadly comparable to Content ID, allowing eligible users to identify videos that use AI to replicate a person’s face or likeness. Once such content is detected, individuals can request its removal through YouTube’s existing privacy complaint process.

The rollout has been developed with input from major industry players, including Creative Artists Agency, United Talent Agency, William Morris Endeavor, and Untitled Management. Those partnerships are intended to help YouTube refine how the system works in practice and ensure it reflects the needs of artists and rights holders dealing with synthetic media.

Importantly, access to the tool is not limited to people who actively run YouTube channels. Celebrities and public figures can use it even without a direct creator presence on the platform, extending its reach across a much broader part of the entertainment ecosystem.

The significance of the update lies in how platforms are beginning to treat AI impersonation as a governance issue rather than merely a content-moderation problem.

As synthetic media tools become easier to use and more convincing, technology companies are under growing pressure to provide faster and more credible mechanisms for detecting misuse, protecting identity rights, and limiting deceptive content.

YouTube’s latest move shows that platform responses are becoming more structured and rights-based, especially in sectors where a person’s likeness is closely tied to reputation, image, and commercial value. The bigger question now is whether such tools will prove effective enough to keep pace with the scale and speed of AI-generated impersonation online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!