AI productivity gap reveals critical enterprise adoption challenges

AI continues to generate expectations of broad economic transformation, particularly in productivity and employment. However, the extent of measurable economy-wide gains remains uncertain, and the overall impact of AI on business performance is still being assessed.

An extensive survey conducted by the National Bureau of Economic Research (NBER) found that while around 70% of firms across the US, UK, Germany, and Australia report using AI, nearly nine in ten companies have seen no significant effect on productivity or employment over the past three years. The findings suggest a gap between adoption rates and tangible outcomes.

Current enterprise use of AI remains concentrated in specific functions, including text generation with large language models, visual content creation, and data processing. Although previous studies have identified productivity gains in targeted areas such as customer support and writing tasks, these improvements have not yet translated into broad organisational performance increases.

Despite limited results to date, business leaders expect AI to deliver modest productivity gains in the coming years. The survey highlights a divergence in expectations, with senior executives anticipating slight reductions in employment, while employees foresee small job growth linked to AI adoption.

At the same time, some technology leaders predict more immediate disruption. The head of Microsoft AI has argued that AI could soon reach human-level performance in many professional tasks, potentially reshaping white-collar work within the next few years.

The survey also indicates limited engagement with AI tools among top executives, with many reporting minimal or no direct use of them. This suggests that while AI investment is widespread, its integration into day-to-day leadership practices remains uneven.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Brand turns AI demon into marketing stunt

Beverage company Liquid Death triggered confusion during the Winter Olympics after airing an AI advert featuring a figure skater who transforms into a red-eyed demon. The commercial appeared on Peacock’s Olympics stream but was not posted online, leaving viewers questioning whether it was real.

The brand later confirmed the advert was intentional and designed to parody fears around AI. According to Liquid Death, the limited run and lack of online acknowledgement were meant to amplify the sense of unease during the Winter Olympics broadcast.

Marketing analysts said that brands are increasingly leaning into AI scepticism to build trust with wary consumers. Campaigns from Equinox and Almond Breeze have similarly contrasted human authenticity with AI-generated content.

Despite the strategy, the Winter Olympics stunt drew criticism on social media, with some users labelling the advert 'AI slop'. The reaction highlights both the risks and rewards for brands experimenting with AI-themed messaging.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Africa balances fintech innovation with financial stability

South Africa’s fintech sector has evolved from a niche disruptor into a pillar of the digital economy, fuelled by rapid digital adoption and entrepreneurial growth. Regulators are now tasked with supporting innovation in decentralised finance and AI while safeguarding market stability and consumer protection.

Coordinated oversight has been central to that effort. The Intergovernmental Fintech Working Group, bringing together the National Treasury, the South African Reserve Bank and the Financial Sector Conduct Authority, promotes a harmonised and principle-based regulatory approach.

A significant turning point came when crypto assets were classified as financial products under the Financial Advisory and Intermediary Services Act. Licensing requirements for Crypto Asset Service Providers and alignment with Financial Action Task Force standards strengthened consumer safeguards and anti-money laundering controls.

Fintech also plays a growing role in financial inclusion, particularly through mobile money, digital lending and digital payments. Wider access to affordable financial tools supports inclusive economic growth across underserved communities.

AI presents fresh regulatory questions around bias, transparency and operational resilience. Ensuring compliance with the Protection of Personal Information Act while encouraging responsible experimentation remains central to South Africa’s evolving fintech strategy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Global South at the heart of India's AI plan

India has unveiled the New Delhi Frontier AI Impact Commitments, a new initiative aimed at promoting inclusive and responsible AI, particularly across the Global South. The announcement was made by Union Minister for Electronics and Information Technology Ashwini Vaishnaw at the opening of the India AI Impact Summit 2026.

Vaishnaw described India’s AI strategy as focused on democratisation, scale, and technological sovereignty. He outlined a comprehensive approach spanning the whole AI ecosystem, including applications, models, computing infrastructure, talent, and energy, with a strong emphasis on practical use in sectors such as healthcare, agriculture, education, and public services.

Framing AI as a transformative technology, the minister stressed that its benefits must reach the widest possible population. He called for a human-centric approach that prioritises safety and dignity, while also addressing risks linked to rapid technological change.

The voluntary commitments bring together Indian innovators such as Sarvam, BharatGen, Gnani.ai, and Soket alongside leading global AI companies. Together, they aim to ensure that AI systems are developed and deployed in ways that reflect equity, cultural diversity, and local realities.

One of the core pledges focuses on improving understanding of how AI is used in the real world. Participating organisations will share anonymised and aggregated insights to help policymakers assess AI’s impact on jobs, skills, productivity, and economic transformation, supporting more informed decision-making.
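
A minimal sketch of what 'anonymised and aggregated' sharing could look like in practice, using small-cell suppression so that no released group is small enough to identify individuals; the record structure and threshold below are hypothetical, not taken from the commitments themselves:

```python
from collections import Counter

MIN_GROUP_SIZE = 10  # assumed threshold: suppress groups too small to stay anonymous

def aggregate_ai_usage(records: list[dict]) -> dict[str, int]:
    """Share only per-sector counts, never individual records,
    dropping any sector with fewer than MIN_GROUP_SIZE reports."""
    counts = Counter(r["sector"] for r in records)  # "sector" is a hypothetical field
    return {sector: n for sector, n in counts.items() if n >= MIN_GROUP_SIZE}
```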

Another key commitment seeks to strengthen multilingual and context-sensitive AI evaluation. By developing datasets and benchmarks in underrepresented languages and cultural settings, the initiative aims to improve system performance for diverse populations and expand access to high-quality AI tools globally.
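
As a rough sketch of what such evaluation involves, the harness below scores a model separately per language so that gaps in underrepresented languages become visible; `ask_model` and the dataset format are placeholders, not part of the initiative:

```python
from collections import defaultdict

def accuracy_by_language(dataset, ask_model):
    """Score a model per language; dataset yields (language, question, expected) triples."""
    correct, total = defaultdict(int), defaultdict(int)
    for lang, question, expected in dataset:
        total[lang] += 1
        if ask_model(question).strip().lower() == expected.strip().lower():
            correct[lang] += 1
    return {lang: correct[lang] / total[lang] for lang in total}

# A large spread between high- and low-resource languages flags where
# new datasets and benchmarks are most needed.
```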

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Adoption of agentic AI slowed by data readiness and governance gaps

Agentic AI is emerging as a new stage of enterprise automation, enabling systems to reason, plan, and act across workflows. Adoption, however, remains uneven: many organisations are piloting agentic systems, but far fewer have scaled deployments beyond pilots.

Unlike traditional analytics or generative tools, agentic systems make decisions rather than simply producing insights. Without sufficient context, they struggle to align actions with real business conditions, revealing a persistent context gap.

Recent survey data highlights this disconnect. Although executives express confidence in AI ambitions, significant shares cite data readiness, infrastructure, and skills as barriers. Many identify AI as central to strategy, yet only a limited proportion tie deployments to measurable business outcomes.

Effective agentic AI depends on layered data foundations. Public data provides baseline capability, organisational data enables operational competence, and third-party context supports differentiation. Weak governance or integration can undermine autonomy at scale.
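
One way to picture those layers is as ordered context assembly, where more specific sources override more generic ones; a hypothetical sketch, not a reference to any particular platform:

```python
from dataclasses import dataclass, field

@dataclass
class AgentContext:
    """Three context tiers, from generic to differentiated."""
    public: dict = field(default_factory=dict)          # baseline capability
    organisational: dict = field(default_factory=dict)  # operational competence
    third_party: dict = field(default_factory=dict)     # differentiation

    def resolve(self) -> dict:
        # Later (more specific) layers win when keys collide, so the agent
        # acts on the most contextual, governed value available.
        return {**self.public, **self.organisational, **self.third_party}
```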

Enterprises that align data governance, enrichment, and AI oversight are more likely to scale beyond pilots. Progress depends less on model sophistication than on trusted data foundations that support transparency and measurable outcomes.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

MIT study finds AI chatbots underperform for vulnerable users

Research from the MIT Center for Constructive Communication (CCC) finds that leading AI chatbots often provide lower-quality responses to users with lower English proficiency or less education, and to those based outside the US.

The models tested, including GPT-4, Claude 3 Opus, and Llama 3, sometimes refused to answer or responded condescendingly. Using the TruthfulQA and SciQ datasets, the researchers added user biographies to questions to simulate differences in education, language, and country.

Accuracy fell sharply for non-native English speakers and for less-educated users, with the largest drop among users in both groups; users from countries such as Iran also received lower-quality responses.

Refusal behaviour was notable. Claude 3 Opus declined 11% of questions for less-educated, non-native English speakers versus 3.6% for control users. Manual review showed 43.7% of refusals contained condescending language.
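
A minimal sketch of how such an audit works, assuming a generic `ask_model` callable and an illustrative biography; the exact prompts and refusal criteria used by CCC were more involved, and the study also relied on manual review:

```python
REFUSAL_MARKERS = ("i cannot", "i can't", "i am unable", "i won't")

def is_refusal(answer: str) -> bool:
    """Crude keyword check; the study additionally reviewed refusals by hand."""
    return any(marker in answer.lower() for marker in REFUSAL_MARKERS)

def refusal_rate(questions, biography, ask_model) -> float:
    """Fraction of questions declined when a user biography is prepended."""
    prompts = [f"{biography}\n\nQuestion: {q}" if biography else q for q in questions]
    return sum(is_refusal(ask_model(p)) for p in prompts) / len(prompts)

# Hypothetical persona for illustration; compare against the empty-biography control.
persona = "I am a non-native English speaker and did not finish school."
# gap = refusal_rate(qs, persona, ask_model) - refusal_rate(qs, "", ask_model)
```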

Some users were refused answers on specific topics even though the models answered the same questions correctly for other users.

The study echoes human sociocognitive biases, in which non-native speakers are often perceived as less competent. Researchers warn AI personalisation could worsen inequities, providing marginalised users with subpar or misleading information when they need it most.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini 3.1 Pro brings advanced logic to developers and consumers

Google has launched Gemini 3.1 Pro, an upgraded AI model for solving complex science, research, and engineering challenges. Following the Gemini 3 Deep Think release, the update adds enhanced core reasoning for consumer, developer, and enterprise applications.

Developers can access 3.1 Pro in preview via the Gemini API, Google AI Studio, Gemini CLI, Antigravity, and Android Studio, while enterprise users can use it through Vertex AI and Gemini Enterprise.
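
For developers, calls through the Gemini API follow the standard google-genai SDK pattern; the model identifier below is an assumption about the preview name, so check the current docs for the exact string:

```python
from google import genai

client = genai.Client()  # reads the API key from the GEMINI_API_KEY environment variable

response = client.models.generate_content(
    model="gemini-3.1-pro-preview",  # assumed preview id, not confirmed
    contents="Walk through the reasoning behind the birthday paradox.",
)
print(response.text)
```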

Consumers can now try the upgrade through the Gemini app and NotebookLM, with higher limits for Google AI Pro and Ultra plan users.

Benchmarks show significant improvements in logic and problem-solving. On the ARC-AGI-2 benchmark, 3.1 Pro scored 77.1%, more than doubling the reasoning performance of its predecessor.

The upgrade is intended to make AI reasoning more practical, offering tools to visualise complex topics, synthesise data, and enhance creative projects.

Feedback from Gemini 3 Pro users has driven the rapid development of 3.1 Pro. The preview release allows Google to validate improvements and continue refining advanced agentic workflows before the model becomes widely available.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK sets 48-hour deadline for removing intimate images

The UK government plans to require technology platforms to remove intimate images shared without consent within 48 hours, instead of allowing such content to remain online for days.

Through an amendment to the Crime and Policing Bill, firms that fail to comply could face fines of up to 10% of their global revenue or risk having their services blocked in the UK.

The move reflects ministers' commitment to treating intimate image abuse with the same seriousness as child sexual abuse material and extremist content.

The action follows mounting concern after non-consensual sexual deepfakes produced by Grok circulated widely, prompting investigations by Ofcom and political pressure on platforms owned by Elon Musk.

The government intends for victims to report an image only once, rather than repeating the process across multiple services. Once flagged, the content should disappear across all platforms and be blocked automatically on future uploads through hash-matching or similar detection tools, as sketched below.
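
A minimal sketch of the hash-matching idea, using a plain SHA-256 digest over the file bytes; production systems typically rely on perceptual hashes (such as PhotoDNA or PDQ) so that re-encoded or lightly edited copies still match:

```python
import hashlib

blocked_hashes: set[str] = set()  # digests of flagged images, shared across platforms

def fingerprint(image_bytes: bytes) -> str:
    # A cryptographic hash only catches byte-identical re-uploads;
    # perceptual hashing is needed to survive resizing or re-encoding.
    return hashlib.sha256(image_bytes).hexdigest()

def flag_image(image_bytes: bytes) -> None:
    """Called once when a victim reports an image."""
    blocked_hashes.add(fingerprint(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    """Reject any upload whose fingerprint matches a flagged image."""
    return fingerprint(image_bytes) not in blocked_hashes
```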

Ministers also aim to address content hosted outside the reach of the Online Safety Act by issuing guidance requiring internet providers to block access to sites that refuse to comply.

Keir Starmer, Liz Kendall and Alex Davies-Jones emphasised that no woman should be forced to pursue platform after platform to secure removal and that the online environment must offer safety and respect.

The package of reforms forms part of a broader pledge to halve violence against women and girls during the next decade.

Alongside tackling intimate image abuse, the government is legislating against nudification tools and ensuring AI chatbots fall within regulatory scope, using this agenda to reshape online safety instead of relying on voluntary compliance from large technology firms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Summit in India hears call for safe AI

UN Secretary-General António Guterres has warned that AI must augment human potential rather than replace it, speaking at the India AI Impact Summit in New Delhi. Addressing leaders at Bharat Mandapam, he urged investment in workers so that technology strengthens, rather than displaces, human capacity.

He cautioned that AI could deepen inequality, amplify bias and fuel harm if left unchecked. He called for stronger safeguards to protect people from exploitation and insisted that no child should be exposed to unregulated AI systems.

Environmental concerns also featured prominently, with Guterres highlighting rising energy and water demands from data centres. He urged a shift to clean power and warned against transferring environmental costs to vulnerable communities.

The UN chief proposed a $3 billion Global Fund on AI to build skills, data access and affordable computing worldwide. He argued that broader access is essential to prevent countries from being excluded from the AI age and to ensure AI supports sustainable development goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft outlines challenges in verifying AI-generated media

In an era of deepfakes and AI-manipulated content, determining what is real online has become increasingly complex. Microsoft’s report Media Integrity and Authentication reviews current verification methods, their limits, and ways to boost trust in digital media.

The study emphasises that no single solution can prevent digital deception. Techniques such as provenance tracking, watermarking, and digital fingerprinting can provide useful context about a media file’s origin, creation tools, and whether it has been altered.

Microsoft has pioneered these technologies, cofounding the Coalition for Content Provenance and Authenticity (C2PA) to standardise media authentication globally.
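
As a conceptual illustration of provenance tracking (not the actual C2PA format, which embeds certificate-signed manifests in the asset itself), the sketch below signs a record of a file's digest and creation tool, so any later alteration of the file or the record is detectable:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret"  # stand-in; real systems use certificate-based signatures

def make_manifest(media: bytes, tool: str) -> dict:
    """Bind the asset's digest and creation tool into a signed record."""
    record = {"sha256": hashlib.sha256(media).hexdigest(), "tool": tool}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return record

def verify(media: bytes, manifest: dict) -> bool:
    """Fail if either the media file or its manifest changed after signing."""
    record = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and record["sha256"] == hashlib.sha256(media).hexdigest())
```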

The report also addresses the risks of sociotechnical attacks, where even subtle edits can manipulate authentication results to mislead the public.

Researchers explored how provenance information can remain durable and reliable across different environments, from high-security systems to offline devices, highlighting the challenge of maintaining consistent verification.

As AI-generated or edited content becomes commonplace, secure media provenance is increasingly important for news outlets, public figures, governments, and businesses.

Reliable provenance helps audiences spot manipulated content, with ongoing research guiding clearer, practical verification displays for the public.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!