US EDA launches AI workforce training programme

The US Economic Development Administration has announced approximately $25 million in funding for a new AI Upskill Accelerator Pilot Program to support AI workforce training.

The programme will fund industry-driven partnerships that design and implement AI training models for workers and businesses in sectors considered important to regional economies. EDA says the initiative is intended to support workforce development approaches that can scale, adapt and become self-sustaining as AI technologies continue to evolve.

The funding opportunity links the programme to the Trump administration’s 2025 Artificial Intelligence Action Plan, which includes goals to accelerate AI development, support adoption across industries and strengthen US leadership in the technology. EDA says the programme is part of efforts to empower American workers to use AI tools and support industries tied to regional growth.

Deputy Assistant Secretary and Chief Operating Officer Ben Page said AI is becoming ‘a core driver of productivity and growth across industries’ and that workers need AI skills so regions can attract investment, adopt advanced technologies and sustain long-term economic growth.

The pilot will support workforce development in an emerging technology area while helping businesses and workers build the skills needed to use AI in the workplace. Applications for the programme are open until 10 July 2026.

Why does it matter?

The programme shows how AI policy is increasingly being linked to regional economic development and workforce readiness, not only research or infrastructure. By funding industry-driven training models, the EDA is trying to prepare workers and local economies for AI adoption while helping businesses close skills gaps that could affect productivity, investment and competitiveness.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

WEF report says HR leaders will shape the success of AI transformation

AI is reshaping how companies organise labour, distribute decision-making and redesign internal operations, making workforce strategy a central part of AI adoption.

Writing for the World Economic Forum, Al-Futtaim Group HR director David Henderson argues that many AI projects fail because organisations focus too heavily on technology while neglecting the need to change work, accountability, and operational processes.

The article says successful AI adoption depends on how effectively businesses combine human judgement with machine-driven systems, rather than treating automation as a standalone software rollout.

Using the ‘advanced chess’ model that Garry Kasparov developed after his 1997 defeat to IBM’s Deep Blue as an example, Henderson highlights how humans working alongside computers eventually outperformed both machines and grandmasters operating independently.

He suggests the same principle is now emerging across modern enterprises, where stronger results come from integrating AI directly into operational workflows rather than isolating it in technical departments.

The article identifies four major responsibilities for HR leaders during AI transformation. As ‘design architects’, Chief Human Resources Officers are expected to redefine which decisions remain human-led, which become AI-assisted and how accountability is distributed across organisations. As ‘capability stewards’, they must build continuous AI learning systems rather than rely on occasional employee training programmes.

HR leaders are also described as ‘adoption catalysts’, responsible for helping frontline employees integrate AI into daily workflows, and as ‘transition guardians’, tasked with managing concerns linked to surveillance, bias, fairness, employability and workforce trust.

Several companies are cited as examples of that transition. Procter & Gamble embedded AI engineers and data scientists directly within operational business units rather than centralising them within analytics teams.

Zurich Insurance developed enterprise-wide AI learning systems focused on transferable skills and workforce redeployment, while Al-Futtaim enabled frontline retail teams to develop AI-supported customer recommendation systems through agile operational groups rather than top-down executive planning.

Why does it matter?

AI competitiveness increasingly depends on organisational adaptability instead of access to technology alone. Workforce redesign, reskilling systems, internal trust, and operational flexibility are becoming critical strategic advantages as automation expands across industries. WEF’s argument highlights how HR departments are evolving from administrative functions into central actors shaping AI governance, labour transformation, and long-term business resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI productivity claims need stronger scrutiny, according to the Ada Lovelace Institute’s findings

The Ada Lovelace Institute has warned that AI productivity claims in the UK public sector need stronger scrutiny, as headline estimates are already shaping spending, workforce planning and public service reform.

In a policy briefing on AI and public services, the institute says UK government communications, industry reports and third-party analyses frequently present AI as a tool for cutting costs, saving time and boosting growth. It argues that stronger evidence is needed to assess whether those claims translate into public value.

The briefing notes that the UK’s 2025 Spending Review committed to ‘a step change in investment in digital and AI across public services’, informed by estimates of potential savings and productivity benefits that run as high as £45 billion per year.

Many current estimates rely on limited or uncertain evidence, the institute argues. Studies often measure first-order effects, such as time savings or cost reductions, while paying less attention to outcomes that matter for public services, including service quality, equity, citizen experience, institutional capacity and worker well-being.

The briefing also warns that productivity claims often fail to fully account for implementation costs, trade-offs, transition periods and the opportunity cost of prioritising AI investment over other public spending.

Several methodological concerns are identified in AI productivity research, including reliance on task automation models, self-reported surveys and limited triangulation across methods. The institute also highlights the growing use of large language models to assess which tasks they can perform, warning that this creates a circular dynamic in which AI systems are used to judge their own capabilities.

Headline figures can obscure mixed evidence, with productivity estimates varying widely and positive findings often receiving more attention than contradictory or null results. Industry involvement can also shape what gets researched and how results are framed, particularly when AI companies fund studies, provide tools or publish their own reports.

To improve the evidence base, the Ada Lovelace Institute calls for productivity research to reflect uncertainty, report ranges rather than single headline numbers and measure outcomes that matter for public services. It recommends more independent research, transparent methodologies, longer-term studies and measurement built into AI deployments from the start, including tracking service quality, error rates, staff well-being and citizen satisfaction.

Why does it matter?

Public-sector AI is increasingly being justified through promises of efficiency, savings and productivity growth. If those claims are based on weak or narrow evidence, governments risk making major investment and workforce decisions before understanding the real costs, trade-offs and effects on service quality.

The briefing is important because it shifts the question from whether AI can save time in isolated tasks to whether AI improves public services in practice. That includes outcomes such as fairness, reliability, staff well-being, citizen experience and institutional capacity, which are harder to measure than headline savings but central to public value.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia launches national AI platform ‘AI.gov.au’

The Department of Industry, Science and Resources has announced the launch of AI.gov.au through the National Artificial Intelligence Centre. The platform is designed to help organisations adopt AI safely and responsibly in line with the National AI Plan.

AI.gov.au provides a central source of guidance, tools and resources to support businesses and not-for-profits. It aims to help users identify AI opportunities, plan implementation, manage risks and build internal capability.

The platform’s development was informed by research and engagement with industry and government, which highlighted the need for clear starting points, practical advice and support for organisational change around AI. It also supports the AI Safety Institute’s work by improving access to safety guidance.

Initial features focus on small and medium-sized enterprises and include training, case studies and adoption tools, with further updates planned. The initiative reflects efforts to strengthen AI uptake and governance in Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California expands digital democracy platform for AI policy debate

California’s Governor is expanding Engaged California, a digital democracy initiative designed to give residents a direct voice in shaping AI policy across the state. The programme invites Californians to share how AI is affecting their jobs, industries, and communities, with the findings expected to help guide future state policy decisions.

The initiative will begin with a public participation phase, during which residents can submit experiences and recommendations through the state’s online platform. A second phase, later in 2026, will bring together a smaller representative group of residents for live deliberative forums focused on AI’s economic and social impact. The process aims to identify areas of public consensus on how government should respond to rapidly evolving AI technologies.

State officials described ‘Engaged California’ as a first-in-the-nation deliberative democracy programme inspired partly by Taiwan’s digital governance model. Instead of functioning like a social media platform or public poll, the initiative is designed to encourage structured discussion and collaborative policymaking around emerging technologies.

California also used the announcement to highlight broader AI initiatives already underway, including AI procurement reforms, workforce training partnerships with major technology companies, AI-powered wildfire detection systems, cybersecurity assessments, and responsible governance frameworks.

Officials said the state aims to balance innovation with safeguards related to child safety, deepfakes, digital likeness protections, and AI accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European Commission updates guidance on generative AI use in research

The European Commission has updated the ERA Living Guidelines on the responsible use of generative AI in research, reflecting the growing use of AI tools across scientific work. The revised guidance aims to support researchers, research organisations and funding bodies in adopting generative AI while maintaining core principles of research integrity.

The guidelines emphasise reliability, honesty, respect and accountability, including transparency over AI use, protection of privacy and confidential information, and responsibility for research outputs. They also stress that researchers remain ultimately responsible for scientific output and should verify AI-generated results.

New recommendations address risks linked to the use of generative AI by third parties, including in meetings, note-taking, summaries and document overviews, where confidential information, data protection or intellectual property rights may be affected. The guidelines encourage researchers and organisations to inform third parties about the use of such tools and related risks.

A specific addition concerns the risk of ‘hidden prompts’, where instructions may be secretly embedded in documents or inputs to influence generative AI tools. The guidelines call on research funding organisations to remain aware of such risks, set rules prohibiting manipulation where relevant, and introduce appropriate safeguards in IT systems used to process information.

Developed through the European Research Area Forum, the guidelines are intended as a non-binding supporting tool for the research community. The Commission says they will be updated regularly and that users can continue to provide feedback as generative AI and the surrounding policy landscape evolve.

Why does it matter?

Generative AI is becoming part of everyday research workflows, from drafting and summarising to proposal preparation and document analysis. The updated guidelines show that research integrity risks now extend beyond individual misuse to organisational processes, third-party tools and hidden technical behaviours that may affect scientific judgement. Shared guidance across the European Research Area can help institutions adopt AI without weakening transparency, accountability or trust in research.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Automation fuels inequality more than productivity gains, study finds

A new study co-authored by economists from the Massachusetts Institute of Technology and Yale University finds that automation in the United States has often been driven less by productivity gains and more by firms’ efforts to reduce labour costs.

Rather than replacing workers to maximise efficiency, companies have frequently targeted employees earning a ‘wage premium’, effectively lowering higher-than-average salaries within comparable roles.

The research suggests this pattern has contributed significantly to widening income inequality while delivering only limited productivity improvements.

The analysis, which examines data spanning multiple decades and industries, indicates that automation has disproportionately hit higher-earning workers within the groups exposed to it. It also estimates that inefficient automation deployment may have offset a large share of potential productivity gains over time.

Researchers argue that the findings highlight a structural tension in how automation is applied, where short-term cost reduction can take priority over long-term economic efficiency, shaping both wage distribution and overall growth dynamics in the US economy since 1980.

Why does it matter? 

The findings challenge the assumption that automation primarily improves efficiency and productivity, showing instead that firms can strategically use it to reshape wage structures and concentrate economic gains.

From a broader perspective, this helps explain why technological progress has not translated evenly into higher productivity or shared prosperity, while also highlighting how corporate incentives can steer innovation in ways that deepen inequality across labour markets.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Generative AI guidance issued by Australia’s New South Wales tribunal

The New South Wales Civil and Administrative Tribunal has issued guidance on the acceptable use of generative AI in tribunal proceedings as part of Privacy Awareness Week NSW 2026, which this year focuses on personal information risks in the age of AI.

According to NCAT, generative AI tools may be used to assist with administrative and organisational tasks such as summarising material, organising information, or preparing chronologies. At the same time, the tribunal warns that such tools can create privacy risks if users enter personal, sensitive, or confidential information.

The guidance is set out in NCAT Procedural Direction 7 on the use of generative AI, together with an accompanying fact sheet. NCAT says the aim is to clarify when generative AI may be used in tribunal-related work while reinforcing obligations to protect personal and confidential information.

The tribunal also draws a clear line around evidentiary material. Generative AI must not be used to generate or alter evidence in tribunal proceedings, including statements, affidavits, statutory declarations, character references, or other evidentiary documents.

NCAT further states that generative AI must not be used to generate content for an expert report unless the tribunal has given permission. It is encouraging parties and their representatives to review the guidance before using such tools in proceedings.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ILO warns lifelong learning is critical for the future AI economy

The International Labour Organization has warned that governments must place lifelong learning at the centre of economic and social policy as AI, digitalisation and demographic shifts continue transforming labour markets worldwide. The organisation said stronger and more inclusive learning systems are necessary to prevent widening inequality between workers, industries and countries.

According to the ILO’s new report, titled ‘Lifelong learning and skills for the future’, only 16% of people aged between 15 and 64 participated in structured training during the previous year. Access remains significantly higher among full-time employees in formal companies, where employer-supported training reaches 51%.

The ILO report warns that workers in informal jobs and smaller enterprises continue relying mainly on learning through experience instead of structured education programmes. Furthermore, the study found that employers increasingly seek combinations of digital, socio-emotional, communication and problem-solving skills rather than narrow technical expertise alone.

While demand for AI-related capabilities is expected to increase, the report noted that most workers currently use ready-made AI tools that require broader digital literacy, critical thinking and collaborative abilities instead of specialist engineering knowledge.

The ILO also highlighted the growing importance of green and care economy skills. It estimates that 32% of workers globally already perform environmentally relevant tasks, while demand for long-term care workers could almost double by 2050.

The organisation called for greater public investment, stronger institutional coordination and inclusive lifelong learning strategies capable of supporting workers throughout rapidly changing technological and economic transitions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Why DeepSeek V4 is changing the AI model race

DeepSeek has again placed itself at the centre of the global AI race. After drawing worldwide attention with its R1 reasoning model in early 2025, the Chinese company has recently released DeepSeek V4, a new model designed to compete not only on performance, but also on price, openness and efficiency.

The hype around DeepSeek V4 is not based on a single feature. The model comes with a 1-million-token context window, open weights, two versions for different use cases and a strong focus on agentic workflows such as coding, research, document analysis and long-running tasks. In a market still dominated by expensive closed models, DeepSeek is trying to prove that powerful AI does not need to remain locked behind proprietary systems.

A model built for long memory

The most immediate difference between DeepSeek V4 and other models is context length. Both DeepSeek-V4-Pro and DeepSeek-V4-Flash support a 1-million-token context window, meaning they can process inputs far longer than those of older generations of mainstream models. According to DeepSeek’s official release, one million tokens is now the default across all official DeepSeek services.

For ordinary users, that may sound technical. In practice, it matters because a longer context allows models to work with large documents, long conversations, full codebases, legal materials, research archives or complex project histories without losing track as quickly.
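To put that figure in perspective, here is a rough rule-of-thumb conversion. The words-per-token ratio below is a common heuristic for English text, not a DeepSeek specification:

```python
# Rule-of-thumb only: ~0.75 English words per token is a common heuristic,
# and the page estimate assumes roughly 500 words per printed page.
tokens = 1_000_000
words = tokens * 0.75
pages = words / 500
print(f"{tokens:,} tokens ≈ {words:,.0f} words ≈ {pages:,.0f} pages")
# 1,000,000 tokens ≈ 750,000 words ≈ 1,500 pages
```

On that rough basis, a single prompt could hold well over a thousand pages of text.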

That is why DeepSeek V4 is not just another chatbot release. It is aimed at the next stage of AI use, where models are expected to act less like question-answering tools and more like assistants that can follow long processes over time.

Two models for two different needs

DeepSeek V4 comes in two main versions. DeepSeek-V4-Pro is a larger and more capable model, with 1.6 trillion total parameters and 49 billion active parameters. DeepSeek-V4-Flash is a smaller model, with 284 billion total parameters and 13 billion active parameters, designed for faster and more cost-effective workloads.

That distinction is important. Not every user needs the strongest model for every task. A company summarising documents, routing queries or running basic support may choose Flash. A developer working on complex coding tasks, long-context agents or advanced reasoning may prefer Pro.

DeepSeek’s release reflects a broader trend in AI. The best model is no longer always the biggest one. Cost, speed, context size and deployment flexibility are now as important as raw benchmark performance.

Why the price matters

One reason DeepSeek attracts so much attention is its aggressive pricing. DeepSeek’s API page lists V4-Flash at USD 0.14 per 1 million input tokens on a cache miss and USD 0.28 per 1 million output tokens. V4-Pro is listed at USD 1.74 per 1 million input tokens and USD 3.48 per 1 million output tokens before the temporary 75% discount.
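As a back-of-the-envelope illustration, here is what those list prices imply for one large request. This is a sketch based on the figures quoted above; real bills depend on cache hits and the temporary discount:

```python
# List prices per 1 million tokens, as quoted on DeepSeek's API page.
PRICES = {
    "v4-flash": {"input": 0.14, "output": 0.28},  # input price on a cache miss
    "v4-pro": {"input": 1.74, "output": 3.48},    # before the 75% discount
}

def api_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD for a single request."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical workload: a 500k-token codebase in, a 20k-token review out.
print(f"Flash: ${api_cost('v4-flash', 500_000, 20_000):.4f}")  # ~$0.0756
print(f"Pro:   ${api_cost('v4-pro', 500_000, 20_000):.4f}")    # ~$0.9396
```

Even on the larger Pro model, that half-million-token job comes in under a dollar at list price.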

For developers and companies, that changes the calculation. High-performing AI models are useful only if they can be deployed at scale. If every long document, coding session or agentic workflow becomes too expensive, adoption slows down.

DeepSeek’s challenge to the market is therefore not only technical. It is economic. The company is pushing the idea that frontier-level AI should be cheaper to run, easier to access and less dependent on closed ecosystems.

The architecture behind the hype

DeepSeek V4 uses a mixture-of-experts approach, meaning only part of the model is active during each response. That helps explain why the model can be very large on paper, yet still more efficient to run than a dense model of similar overall size.
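For readers who want to see the idea in code, below is a minimal toy sketch of top-k expert routing, the general mechanism behind mixture-of-experts models. The dimensions, router and ReLU feed-forward blocks are illustrative choices, not DeepSeek’s actual architecture:

```python
import numpy as np

# Toy dimensions for illustration only -- not DeepSeek's real configuration.
N_EXPERTS, TOP_K, D = 8, 2, 64

rng = np.random.default_rng(0)
router_w = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)
experts = [
    (rng.standard_normal((D, 4 * D)) / np.sqrt(D),      # up-projection
     rng.standard_normal((4 * D, D)) / np.sqrt(4 * D))  # down-projection
    for _ in range(N_EXPERTS)
]

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route one token vector to its top-k experts; the rest stay idle."""
    logits = x @ router_w                      # score every expert
    top = np.argsort(logits)[-TOP_K:]          # keep only the k best
    weights = np.exp(logits[top])
    gates = weights / weights.sum()            # softmax over the chosen experts
    out = np.zeros_like(x)
    for gate, idx in zip(gates, top):
        w_up, w_down = experts[idx]
        out += gate * (np.maximum(x @ w_up, 0.0) @ w_down)  # ReLU feed-forward
    return out

y = moe_layer(rng.standard_normal(D))
# Only TOP_K of N_EXPERTS expert blocks ran for this token, which is why a
# model's 'active' parameter count can sit far below its total parameter count.
```

The same logic, scaled up, is how a model can list 1.6 trillion total parameters while activating only 49 billion per token.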

The more interesting part is how DeepSeek handles long context. NVIDIA’s technical overview explains that DeepSeek V4 uses hybrid attention, combining compression and selective attention techniques to reduce the cost of processing very long prompts. NVIDIA says these changes are designed to cut per-token inference FLOPs by 73% and reduce KV cache memory burden by 90% compared with DeepSeek-V3.2.
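A rough calculation shows why a 90% KV cache reduction matters at this scale. The layer and head counts below are placeholder values chosen for illustration, not DeepSeek’s published configuration:

```python
# Back-of-the-envelope KV-cache arithmetic with made-up model dimensions.
def kv_cache_gib(layers, kv_heads, head_dim, seq_len, bytes_per_value=2):
    """Bytes for keys + values across all layers, converted to GiB (fp16)."""
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value / 2**30

naive = kv_cache_gib(layers=60, kv_heads=8, head_dim=128, seq_len=1_000_000)
print(f"Naive fp16 cache at 1M tokens: ~{naive:.0f} GiB")       # ~229 GiB
print(f"After a 90% reduction:         ~{naive * 0.1:.0f} GiB") # ~23 GiB
```

At those illustrative numbers, the reduction turns a cache that would overflow several GPUs into one that fits on a single card.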

For a non-technical audience, the point is simple. DeepSeek V4 is trying to solve one of the biggest problems in modern AI: how to make models remember and process much more information without becoming too slow or too expensive.

That is where much of the hype comes from. The model is not merely larger. It is designed around the economics of long-context AI.

Why NVIDIA is still in the picture

NVIDIA’s role in the DeepSeek V4 story is especially interesting. DeepSeek’s R2 launch was delayed as US restrictions cut off the supply of NVIDIA H20 chips built for China, and the company is often discussed as part of China’s effort to build a more independent AI ecosystem. Even so, NVIDIA has been quick to support developers who want to build with the model.

In its technical blog, NVIDIA describes DeepSeek V4 as a model family designed for efficient inference of million-token contexts. The company says DeepSeek-V4-Pro and V4-Flash are available through NVIDIA GPU-accelerated endpoints, while developers can also use NVIDIA Blackwell, NIM containers, SGLang and vLLM deployment options.

NVIDIA also reports that early tests of DeepSeek-V4-Pro on the GB200 NVL72 platform showed more than 150 tokens per second per user. That matters because long-context models place heavy pressure on memory, as well as on compute and networking infrastructure. The model may be efficient by design, but serving it at scale still requires serious hardware.

So, DeepSeek V4 does not remove NVIDIA from the story – it complicates it. The model is part of a broader push towards more efficient AI, but the infrastructure race remains central.

The chip question behind the model

DeepSeek V4 also arrives at a time when AI infrastructure is becoming just as important as model performance. MIT Technology Review frames the release partly through that lens, noting that DeepSeek’s new model reflects China’s broader attempt to reduce reliance on foreign AI hardware and build a more self-sufficient technology stack.

That detail matters because the AI race is no longer only about who builds the most capable model. It is also about who controls the chips, software frameworks and data centres needed to run it.

Replacing NVIDIA, however, remains difficult. Its advantage lies not just in its chips, but also in the software ecosystem developers have built around its platforms over many years. Moving to alternative hardware means adapting code, rebuilding tools and proving that the new systems are stable enough for serious use.

Ultimately, DeepSeek V4 sits between two realities. It points towards China’s ambition to build a more independent AI stack, while NVIDIA’s rapid support for the model shows that frontier AI still depends heavily on established infrastructure.

Open weights as a strategic move

DeepSeek V4 is also important because the model weights are available through Hugging Face under the MIT License. That gives developers more freedom to inspect, adapt and deploy the model than they would have with a fully closed commercial system.
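In practice, open weights mean developers can pull the model with standard tooling. The sketch below uses the Hugging Face transformers library; the repository id is an assumption for illustration, and serving a model of this size would require substantial multi-GPU hardware:

```python
# Minimal loading sketch with Hugging Face transformers.
# NOTE: the repository id below is a hypothetical placeholder -- check
# DeepSeek's official Hugging Face organisation for the actual name.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "deepseek-ai/DeepSeek-V4"  # assumed id, verify before use

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype="auto", device_map="auto", trust_remote_code=True
)

prompt = "Summarise this release in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The MIT License matters here: it permits commercial use, modification and redistribution with minimal conditions, which is part of what makes the flexibility described below possible.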

Open-weight models are becoming a major pressure point in the AI race. Closed models may still lead in some areas, especially in polished consumer products, enterprise support and safety layers. However, open models offer something different: flexibility.

For universities, start-ups, smaller companies and developers outside the largest AI ecosystems, that flexibility matters. It means advanced AI can be tested, modified and integrated without relying entirely on a handful of dominant providers.

Benchmarks call for caution

DeepSeek presents V4-Pro as highly competitive across reasoning, coding, long-context and agentic benchmarks. Hugging Face lists results including 80.6 on SWE-bench Verified, 90.1 on GPQA Diamond and 87.5 on MMLU-Pro for DeepSeek-V4-Pro.

Those numbers are impressive, but they should not be treated as the full story. Benchmarks are useful, but they rarely capture every real-world use case. A model can score well on coding tests and still struggle with reliability, factual accuracy, safety or complex multi-step workflows in production.

That caution is important. The AI industry often turns benchmarks into headlines, while real performance depends on deployment, prompting, safety controls and the specific task at hand.

More than just another model release

DeepSeek V4 matters because it combines several trends into one release: long context, lower prices, open weights, agentic workflows and geopolitical competition. It also shows that the AI race is no longer fought only in labs, benchmarks and data centres. Visibility now matters too. Tools such as Diplo’s Digital Footprints show how digital presence shapes the way technology actors and media narratives are discovered, ranked and understood. At this stage, the competition is not only about who has the smartest model. It is also about who can make intelligence cheaper, more available and easier to deploy.

That does not mean DeepSeek has solved every problem. Questions remain around independent benchmarking, safety, data governance, infrastructure and the broader political context of Chinese AI development. Still, the release does show where the market is heading.

The next phase of AI may not be defined solely by the most powerful model. It may be defined by the model that is powerful enough, affordable enough and open enough to change how people build products, services and tools with AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!