OpenAI integrates Codex into ChatGPT mobile app

OpenAI has integrated Codex into the ChatGPT mobile app, allowing users to monitor and manage agentic coding workflows from iOS and Android devices.

The feature, currently in preview and available across all plans, lets users view live Codex environments, review outputs, approve commands, change models, and start new tasks from their phones. OpenAI said the update is intended to support work across multiple threads and workflows, rather than to control a single task remotely.

Codex is OpenAI’s coding agent for software development, designed to help with tasks such as building features, refactoring code, generating pull requests, testing and documentation. OpenAI describes the Codex app as a command centre for agentic coding, with agents able to work in parallel across projects through worktrees and cloud environments.

The mobile integration aligns with other recent Codex updates, including background operations in desktop environments and a browser extension for live sessions. Together, the updates point to OpenAI’s effort to turn Codex into a persistent development assistant that can continue working across devices and environments.

The move also comes amid growing competition with Anthropic’s Claude Code, which has introduced similar remote-monitoring features. Both companies are competing to make agentic coding tools central to developer workflows, particularly for businesses and technical teams seeking more autonomous software development support.

Why does it matter?

Mobile access makes agentic coding less tied to a single workstation. If developers can review outputs, approve commands and manage parallel coding tasks from a phone, AI coding agents become more like always-on collaborators than occasional coding assistants. The shift could accelerate competition between OpenAI, Anthropic and other AI firms over who controls the next layer of software development workflows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kazakhstan warns AI could displace up to 400,000 jobs

Kazakhstan’s Ministry of Labour and Social Protection has warned that widespread AI adoption could affect between 300,000 and 400,000 jobs over the next decade, highlighting concerns over structural shifts in the labour market.

First Vice-Minister Yerbol Tuyakbayev said the Workforce Development Centre is studying the potential impact of AI on the labour market. He said possible reductions could affect auxiliary and administrative roles, including accounting and some legal positions where tasks do not require direct human involvement.

At the same time, labour officials said demand remains strong for skilled technical and manual professions. The ministry pointed to current vacancies on the Enbek.kz platform and noted continued shortages in occupations requiring specialised practical expertise.

In response, the government has expanded retraining initiatives to help workers move into new roles. Tuyakbayev said around 186,000 people have already completed retraining programmes this year, including through regional initiatives and local centres such as JOLTAP in Astana.

Officials stressed that future employability and wages will depend heavily on qualification levels, as AI continues to reshape job structures and skills requirements across the economy.

Why does it matter?

Kazakhstan’s warning shows how governments are starting to treat AI as a labour-market transition issue, not only a productivity tool. The estimate points to potential pressure on routine administrative and professional roles, while also highlighting the need for retraining systems that can move workers into higher-demand technical and skilled occupations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!  

Worldwide AI adoption surges, new report shows

Ireland remains one of the world’s leading markets for AI adoption, with 48.4% of its working-age population using AI tools, according to Microsoft’s Global AI Diffusion Report for the first quarter of 2026.

Microsoft said Ireland recorded a quarterly increase of 3.8 percentage points, placing it fourth globally and close to surpassing the 50% milestone. If current trends continue, Ireland could overtake Norway, which currently ranks third for AI adoption.

Globally, AI usage increased from 16.3% to 17.8% of the working-age population during the first quarter of 2026. Adoption remains uneven, with 26 economies now exceeding 30% usage, while the United Arab Emirates leads globally at 70.1%.

Regional trends show strong momentum in Asia, driven in part by improved AI capabilities for Asian languages. Microsoft said South Korea, Thailand and Japan recorded some of the greatest movement during the quarter.

At the same time, the gap between the Global North and Global South widened, with AI usage reaching 27.5% in developed regions compared with 15.4% elsewhere. Microsoft said it measures AI diffusion as the share of people aged 15 to 64 who used a generative AI product during the reported period.

Advances in AI-assisted coding also affected software development. Microsoft said global git pushes increased 78% year on year, while US software developer employment reached about 2.2 million in 2025 and was about 4% higher in March 2026 than in March 2025. The report cautions that it is still too early to determine the full labour-market impact of AI-assisted coding.

Why does it matter?

The report shows how quickly generative AI is becoming part of everyday work and digital activity, but also how uneven that adoption remains across countries and regions. If high-adoption economies continue to move faster, AI could widen existing digital and economic divides, especially where infrastructure, language support, skills and access remain weaker.

The findings also show why governments and businesses are under pressure to adapt workforce training, regulation and digital infrastructure as AI use spreads. Rising adoption may support productivity gains, but it also raises questions about who benefits, which regions fall behind and how labour markets adjust as AI tools become more embedded in software development and services.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!  

AI’s economic impact could redefine jobs and productivity trends

AI is increasingly being viewed as a potential general-purpose technology, similar to electricity, computers and the internet, with the capacity to reshape economies over time, according to Bank of Canada External Deputy Governor Michelle Alexopoulos.

Speaking at the Ottawa Economics Association and Canadian Association for Business Economics Spring Policy Conference, Alexopoulos said technological change usually unfolds gradually, but that some innovations spread across industries and transform the wider economy. AI has been developing for decades, yet recent advances have accelerated adoption among people and businesses.

If AI becomes a general-purpose technology, it could eventually reshape jobs, improve productivity and make businesses more competitive, potentially leading to higher wages, lower costs for consumers and reduced inflationary pressure. Alexopoulos cautioned that forecasts will change as new information becomes available, but said AI’s potential effects on productivity, inflation and the labour market cannot be ignored.

Global uptake is expanding, though unevenly. Investment in AI data centres has risen sharply, particularly in the United States, while constraints such as power generation capacity and skills shortages continue to affect adoption. In Canada, adoption is gaining momentum but remains uneven across sectors, with some businesses saying AI does not yet meet their needs or that workers lack the required skills.

Early signs of modest productivity gains are emerging, as AI may allow economies to produce more goods and services without requiring people to work harder. Because productivity affects estimates of future economic growth, the Bank of Canada sees AI’s potential impact as relevant to monetary policy.

Labour-market effects remain mixed. Alexopoulos noted that some large technology firms have linked recent job cuts to AI, and studies show weaker hiring in highly exposed roles such as entry-level coding and customer service. However, she said the evidence so far does not show large-scale job losses, but rather that AI is transforming work tasks instead of replacing people.

Why does it matter?

AI’s possible emergence as a general-purpose technology could affect productivity, wages, inflation and labour demand over time. The Bank of Canada’s framing matters because it links AI adoption directly to macroeconomic policy, rather than just to business innovation. The central question is whether AI raises productivity broadly enough to support growth and lower costs, or whether uneven adoption deepens gaps between firms, sectors and workers.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!  

Workers demand human oversight as AI reshapes workplaces in the US

A large majority of US workers support stronger AI workplace protections and see labour unions as the most trusted defenders of employee rights, according to an AFL-CIO poll. The findings highlight growing concern over how AI is being used in employment decisions and workplace management.

The survey of 1,588 respondents found over 90% support for human oversight in employment decisions, alongside strong backing for transparency, accountability and AI safeguards. A significant share also supported expanding unionisation to help workers negotiate protections related to automation.

Respondents expressed high levels of concern over undisclosed AI monitoring in the workplace, with most saying employers fail to clearly explain when or how AI tools are being used. Many workers said they view labour unions as more trustworthy than employers, political parties or tech companies in managing AI’s impact on jobs.

Union representatives said AI is increasingly used in scheduling, performance tracking and healthcare decisions, often without adequate consultation. The poll suggests broad demand for enforceable rules ensuring AI does not replace human judgement or reduce job security without worker consent.

Why does it matter? 

The findings point to a broader structural tension between rapid AI adoption in workplaces and the slower development of governance frameworks that protect labour rights.

As AI becomes embedded in hiring, monitoring and decision-making systems, questions over accountability, transparency and human oversight are shifting from technical issues to core employment rights.

The strong preference for union-led safeguards also signals a potential rebalancing of power in the digital economy, where workers increasingly seek collective mechanisms to influence how automation is deployed and to ensure it does not erode job security or professional autonomy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!

US EDA launches AI workforce training programme

The US Economic Development Administration has announced approximately $25 million in funding for a new AI Upskill Accelerator Pilot Program to support AI workforce training.

The programme will fund industry-driven partnerships that design and implement AI training models for workers and businesses in sectors considered important to regional economies. EDA says the initiative is intended to support workforce development approaches that can scale, adapt and become self-sustaining as AI technologies continue to evolve.

The funding opportunity links the programme to the Trump administration’s 2025 Artificial Intelligence Action Plan, which includes goals to accelerate AI development, support adoption across industries and strengthen US leadership in the technology. EDA says the programme is part of efforts to empower American workers to use AI tools and support industries tied to regional growth.

Deputy Assistant Secretary and Chief Operating Officer Ben Page said AI is becoming ‘a core driver of productivity and growth across industries’ and that workers need AI skills so regions can attract investment, adopt advanced technologies and sustain long-term economic growth.

The pilot will support workforce development in an emerging technology area while helping businesses and workers build the skills needed to use AI in the workplace. Applications for the programme are open until 10 July 2026.

Why does it matter?

The programme shows how AI policy is increasingly being linked to regional economic development and workforce readiness, not only research or infrastructure. By funding industry-driven training models, the EDA is trying to prepare workers and local economies for AI adoption while helping businesses close skills gaps that could affect productivity, investment and competitiveness.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

WEF report says HR leaders will shape the success of AI transformation

AI is reshaping how companies organise labour, distribute decision-making and redesign internal operations, making workforce strategy a central part of AI adoption.

Writing for the World Economic Forum, Al-Futtaim Group HR director David Henderson argues that many AI projects fail because organisations focus too heavily on technology while neglecting the need to change work, accountability, and operational processes.

The article says successful AI adoption depends on how effectively businesses combine human judgement with machine-driven systems, rather than treating automation as a standalone software rollout.

Henderson uses Garry Kasparov’s ‘advanced chess’ model, developed after his 1997 defeat to IBM’s Deep Blue, as an example of how humans working alongside computers eventually outperformed both machines and grandmasters operating independently.

He suggests the same principle is now emerging across modern enterprises, where stronger results come from integrating AI directly into operational workflows rather than isolating it in technical departments.

The article identifies four major responsibilities for HR leaders during AI transformation. As ‘design architects’, Chief Human Resources Officers are expected to redefine which decisions remain human-led, which become AI-assisted and how accountability is distributed across organisations. As ‘capability stewards’, they must build continuous AI learning systems rather than rely on occasional employee training programmes.

HR leaders are also described as ‘adoption catalysts’, responsible for helping frontline employees integrate AI into daily workflows, and as ‘transition guardians’, tasked with managing concerns linked to surveillance, bias, fairness, employability and workforce trust.

Several companies are cited as examples of that transition. Procter & Gamble embedded AI engineers and data scientists directly within operational business units rather than centralising them within analytics teams.

Zurich Insurance developed enterprise-wide AI learning systems focused on transferable skills and workforce redeployment, while Al-Futtaim enabled frontline retail teams to develop AI-supported customer recommendation systems through agile operational groups rather than top-down executive planning.

Why does it matter?

AI competitiveness increasingly depends on organisational adaptability instead of access to technology alone. Workforce redesign, reskilling systems, internal trust, and operational flexibility are becoming critical strategic advantages as automation expands across industries. WEF’s argument highlights how HR departments are evolving from administrative functions into central actors shaping AI governance, labour transformation, and long-term business resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI productivity claims need stronger scrutiny according to Ada Lovelace Institute’s findings

The Ada Lovelace Institute has warned that AI productivity claims in the UK public sector need stronger scrutiny, as headline estimates are already shaping spending, workforce planning and public service reform.

In a policy briefing on AI and public services, the institute says UK government communications, industry reports and third-party analyses frequently present AI as a tool for cutting costs, saving time and boosting growth. It argues that stronger evidence is needed to assess whether those claims translate into public value.

The briefing notes that the UK’s 2025 Spending Review committed to ‘a step change in investment in digital and AI across public services’, informed by estimates of potential savings and productivity benefits that run as high as £45 billion per year.

Many current estimates rely on limited or uncertain evidence, the institute argues. Studies often measure first-order effects, such as time savings or cost reductions, while paying less attention to outcomes that matter for public services, including service quality, equity, citizen experience, institutional capacity and worker well-being.

The briefing also warns that productivity claims often fail to fully account for implementation costs, trade-offs, transition periods and the opportunity cost of prioritising AI investment over other public spending.

Several methodological concerns are identified in AI productivity research, including reliance on task automation models, self-reported surveys and limited triangulation across methods. The institute also highlights the growing use of large language models to assess which tasks they can perform, warning that this creates a circular dynamic in which AI systems are used to judge their own capabilities.

Headline figures can obscure mixed evidence, with productivity estimates varying widely and positive findings often receiving more attention than contradictory or null results. Industry involvement can also shape what gets researched and how results are framed, particularly when AI companies fund studies, provide tools or publish their own reports.

To improve the evidence base, the Ada Lovelace Institute calls for productivity research to reflect uncertainty, report ranges rather than single headline numbers and measure outcomes that matter for public services. It recommends more independent research, transparent methodologies, longer-term studies and measurement built into AI deployments from the start, including tracking service quality, error rates, staff well-being and citizen satisfaction.

Why does it matter?

Public-sector AI is increasingly being justified through promises of efficiency, savings and productivity growth. If those claims are based on weak or narrow evidence, governments risk making major investment and workforce decisions before understanding the real costs, trade-offs and effects on service quality.

The briefing is important because it shifts the question from whether AI can save time in isolated tasks to whether AI improves public services in practice. That includes outcomes such as fairness, reliability, staff well-being, citizen experience and institutional capacity, which are harder to measure than headline savings but central to public value.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia launches national AI platform ‘AI.gov.au’

The Department of Industry, Science and Resources has announced the launch of AI.gov.au through the National Artificial Intelligence Centre. The platform is designed to help organisations adopt AI safely and responsibly in line with the National AI Plan.

AI.gov.au provides a central source of guidance, tools and resources to support businesses and not-for-profits. It aims to help users identify AI opportunities, plan implementation, manage risks and build internal capability.

The platform’s development was informed by research and engagement with industry and government, highlighting the need for clear starting points, practical advice and support for AI organisational change. It also supports the AI Safety Institute’s work by improving access to safety guidance.

Initial features focus on small and medium-sized enterprises and include training, case studies and adoption tools, with further updates planned. The initiative reflects efforts to strengthen AI uptake and governance in Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California expands digital democracy platform for AI policy debate

California’s Governor is expanding Engaged California, a digital democracy initiative designed to give residents a direct voice in shaping AI policy across the state. The programme invites Californians to share how AI is affecting their jobs, industries, and communities, with the findings expected to help guide future state policy decisions.

The initiative will begin with a public participation phase, during which residents can submit experiences and recommendations through the state’s online platform. A second phase, later in 2026, will bring together a smaller representative group of residents for live deliberative forums focused on AI’s economic and social impact. The process aims to identify areas of public consensus on how government should respond to rapidly evolving AI technologies.

State officials described ‘Engaged California’ as a first-in-the-nation deliberative democracy programme inspired partly by Taiwan’s digital governance model. Instead of functioning like a social media platform or public poll, the initiative is designed to encourage structured discussion and collaborative policymaking around emerging technologies.

California also used the announcement to highlight broader AI initiatives already underway, including AI procurement reforms, workforce training partnerships with major technology companies, AI-powered wildfire detection systems, cybersecurity assessments, and responsible governance frameworks.

Officials said the state aims to balance innovation with safeguards related to child safety, deepfakes, digital likeness protections, and AI accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!