Structural friction, not intelligence, is holding back agentic AI

CIO leadership commentary highlights that many organisations investing in agentic AI (autonomous AI agents designed to execute complex, multi-step tasks) see disappointing results when deployments focus solely on outcomes such as speed or cost savings without addressing underlying system design challenges.

The so-called ‘friction tax’ arises from siloed data, disjointed workflows and tools that force employees to act as manual connectors between systems, negating much of the theoretical efficiency AI promises.

The author proposes an ‘architecture of flow’ as a solution, in which context is unified across systems and AI agents operate on shared data and protocols, enabling work to move seamlessly between functions without bottlenecks.

This approach prioritises employee experience and customer value, enabling context-rich automation that reduces repetitive work and improves user satisfaction.

Key elements of such an architecture include universal context layers (e.g. standard protocols for data sharing) and agentic orchestration mechanisms that help specialised AI agents communicate and coordinate tasks across complex workflows.
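The article stays at the conceptual level, but the orchestration idea can be illustrated with a minimal sketch in which specialised agents read and enrich a single shared context instead of relying on employees to copy data between systems. The class names and example agents below are hypothetical, not a specific vendor protocol.

```python
# Minimal sketch of agentic orchestration over a shared context layer.
# All names (Agent, Orchestrator, the example agents) are illustrative.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Context = Dict[str, object]            # the shared, cross-system context layer


@dataclass
class Agent:
    name: str
    handles: str                        # task type this agent specialises in
    run: Callable[[Context], Context]   # reads and enriches the shared context


@dataclass
class Orchestrator:
    agents: List[Agent] = field(default_factory=list)

    def register(self, agent: Agent) -> None:
        self.agents.append(agent)

    def execute(self, workflow: List[str], context: Context) -> Context:
        # Route each step to the matching specialised agent; every agent sees
        # the same context, so no human has to shuttle data between systems.
        for task in workflow:
            agent = next(a for a in self.agents if a.handles == task)
            context = agent.run(context)
        return context


# Hypothetical example: a support workflow flowing across two agents.
orchestrator = Orchestrator()
orchestrator.register(Agent("crm", "lookup", lambda c: {**c, "tier": "gold"}))
orchestrator.register(Agent("billing", "refund", lambda c: {**c, "refunded": True}))
result = orchestrator.execute(["lookup", "refund"], {"customer_id": "42"})
print(result)  # {'customer_id': '42', 'tier': 'gold', 'refunded': True}
```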

When implemented effectively, this reduces cognitive load, strengthens adoption, and makes business growth a natural result of friction-free operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Is AI eroding human intelligence?

The article reflects on the growing integration of AI into daily life, from classrooms to work, and asks whether this shift is making people intellectually sharper or more dependent on machines.

Tools such as ChatGPT, Grok and Perplexity have moved from optional assistants to everyday aids that generate instant answers, summaries and explanations, reducing the time and effort traditionally required for research and deep thinking.

While quantifiable productivity gains are clear, the piece highlights trade-offs: readily available answers can diminish the cognitive struggle that builds critical thinking, problem-solving and independent reasoning.

In education, easy AI responses may weaken students’ engagement in learning unless teachers guide their use responsibly. Some respondents point to creativity and conceptual understanding eroding when AI is used as a shortcut, while others see it as a democratising tutor that supports learners who would otherwise lack resources.

The article also incorporates perspectives from AI systems themselves, which generally argue that AI makes people neither smarter nor dumber in itself; the outcome depends on how it is used.

It concludes that the impact of AI on human cognition is not predetermined by the technology, but shaped by user choice: whether AI is a partner that augments thinking or a crutch that replaces it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Conversational advertising takes the stage as ChatGPT tests in-chat promotions

Advertising inside ChatGPT marks a shift in where commercial messages appear, not a break from how advertising works. AI systems have shaped search, social media, and recommendations for years, but conversational interfaces make those decisions more visible during moments of exploration.

Unlike search or social formats, conversational advertising operates inside dialogue. Ads appear because users are already asking questions or seeking clarity. Relevance is built through context rather than keywords, changing when information is encountered rather than how decisions are made.

In healthcare and clinical research, this distinction matters. Conversational ads cannot enrol patients directly, but they may raise awareness earlier in patient journeys and shape later discussions with clinicians and care providers.

Early rollout will be limited to free or low-cost ChatGPT tiers, likely skewing exposure towards patients and caregivers. As with earlier platforms, sensitive categories may remain restricted until governance and safeguards mature.

The main risks are organisational rather than technical. New channels will not fix unclear value propositions or operational bottlenecks. Conversational advertising changes visibility, not fundamentals, and success will depend on responsible integration.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI model promises faster monoclonal antibody production

Researchers at the University of Oklahoma have developed a machine-learning model that could significantly speed up the manufacturing of monoclonal antibodies, a fast-growing class of therapies used to treat cancer, autoimmune disorders, and other diseases.

The study, published in Communications Engineering, targets delays in selecting high-performing cell lines during antibody production. Output varies widely between Chinese hamster ovary cell clones, forcing manufacturers to spend weeks screening for high yields.

By analysing early growth data, the researchers trained a model to predict antibody productivity far earlier in the process. Using only the first 9 days of data, it forecast production trends through day 16 and identified higher-performing clones in more than 76% of tests.
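The paper’s actual model and feature set are not detailed here, so the following is only a rough illustration of the general approach: fit a regressor on early-timepoint growth data and use its predictions to rank clones for further screening. The column names, synthetic data and choice of regressor are all assumptions, not the authors’ method.

```python
# Illustrative sketch only: rank cell clones by predicted final antibody titre
# using early growth measurements. Data here is synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: viable cell density for days 1-9 per clone,
# plus the day-16 antibody titre we want to predict.
n_clones = 200
early = rng.normal(loc=np.linspace(1, 9, 9), scale=0.5, size=(n_clones, 9))
titre_day16 = early[:, -3:].mean(axis=1) * 10 + rng.normal(0, 2, n_clones)

X = pd.DataFrame(early, columns=[f"vcd_day_{d}" for d in range(1, 10)])
y = titre_day16

# Train on early growth only, hold out some clones for validation,
# then rank unseen clones by predicted final titre.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
ranking = pd.Series(model.predict(X_test), index=X_test.index,
                    name="predicted_titre").sort_values(ascending=False)
print(ranking.head())  # candidate clones to carry forward for full screening
```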

The model was developed with Oklahoma-based contract manufacturer Wheeler Bio, combining production data with established growth equations. Although further validation is needed, early results suggest shorter timelines and lower manufacturing costs.

The work forms part of a wider US-funded programme to strengthen biotechnology manufacturing capacity, highlighting how AI is being applied to practical industrial bottlenecks rather than solely to laboratory experimentation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Research warns of AI-driven burnout risks

Generative AI is not reducing workloads as widely expected but intensifying them, according to new workplace research. Findings suggest productivity gains are being offset by expanding responsibilities and longer working hours.

An eight-month study at a US tech firm found employees worked faster, took on broader tasks, and extended working hours. AI tools enabled staff to take on duties beyond their roles, including coding, research, and technical problem-solving.

Researchers identified three pressure points driving intensification: task expansion, blurred work-life boundaries, and increased multitasking. Workers used AI during breaks and off-hours while juggling parallel tasks, increasing cognitive load.

Experts warn that the early productivity surge may mask burnout, fatigue, and declining work quality. Organisations are now being urged to establish structured ‘AI practices’ to regulate usage, protect focus, and maintain sustainable productivity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Enterprise AI security evolves as Cisco expands AI Defense capabilities

Cisco has announced a major update to its AI Defense platform as enterprise AI evolves from chat tools into autonomous agents. The company says AI security priorities are shifting from controlling outputs to protecting complex agent-driven systems.

The update strengthens end-to-end AI supply chain security by scanning third-party models, datasets, and tools used in development workflows. New inventory features help organisations track provenance and governance across AI resources.

Cisco has also expanded algorithmic red teaming through an upgraded AI Validation interface. The system enables adaptive multi-turn testing and aligns security assessments with NIST, MITRE, and OWASP frameworks.

Runtime protections now reflect the growing autonomy of AI agents. Cisco AI Defense inspects agent-to-tool interactions in real time, adding guardrails to prevent data leakage and malicious task execution.
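Cisco’s implementation is proprietary and not described in detail, but the general guardrail pattern, inspecting each agent-to-tool call before it executes, can be sketched generically as follows. The policy rules and function names are illustrative only and do not represent Cisco AI Defense.

```python
# Generic sketch of a runtime guardrail for agent-to-tool calls.
# Not Cisco's implementation; policy rules are examples only.
import re
from typing import Any, Callable, Dict

SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # e.g. an SSN-like pattern
BLOCKED_TOOLS = {"shell_exec"}                      # tools the agent may never call


def guarded_call(tool_name: str,
                 tool_fn: Callable[..., Any],
                 arguments: Dict[str, Any]) -> Any:
    """Inspect an agent's tool call before executing it."""
    if tool_name in BLOCKED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' is not allowed")
    for value in arguments.values():
        if isinstance(value, str) and SENSITIVE.search(value):
            raise PermissionError("argument looks like sensitive data; blocked")
    return tool_fn(**arguments)


# Hypothetical usage: the orchestrator wraps every tool invocation.
def send_email(to: str, body: str) -> str:
    return f"sent to {to}"

print(guarded_call("send_email", send_email,
                   {"to": "a@example.com", "body": "hello"}))
```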

Cisco says the update responds to the rapid operationalisation of AI across enterprises. The company argues that effective AI security now requires continuous visibility, automated testing, and real-time controls that scale with autonomy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI governance takes focus at UN security dialogue

The UN will mark the fourth International Day for the Prevention of Violent Extremism Conducive to Terrorism on 12 February 2026 with a high-level dialogue focused on AI. The event will examine how emerging technologies are reshaping both prevention strategies and extremist threats.

Organised by the UN Office of Counter-Terrorism in partnership with the Republic of Korea’s UN mission, the dialogue will take place at UN Headquarters in New York. Discussions will bring together policymakers, technology experts, civil society representatives, and youth stakeholders.

A central milestone will be the launch of the first UN Practice Guide on Artificial Intelligence and Preventing and Countering Violent Extremism. The guide offers human rights-based advice on responsible AI use, addressing ethical, governance, and operational risks.

Officials warn that AI-generated content, deepfakes, and algorithmic amplification are accelerating extremist narratives online. Responsibly governed AI tools could enhance early detection, research, and community prevention efforts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU challenges Meta over WhatsApp AI restrictions

The European Commission has warned Meta that it may have breached EU antitrust rules by restricting third-party AI assistants from operating on WhatsApp. A Statement of Objections outlines regulators’ preliminary view that the policy could distort competition in the AI assistant market.

The probe centres on updated WhatsApp Business terms announced in October 2025 and enforced from January 2026. Under the changes, rival general-purpose AI assistants were effectively barred from accessing the platform, leaving Meta AI as the only integrated assistant available to users.

Regulators argue that WhatsApp serves as a critical gateway for consumers to access AI services. Excluding competitors could reinforce Meta’s dominance in communication applications while limiting market entry and expansion opportunities for smaller AI developers.

Interim measures are now under consideration to prevent what authorities describe as potentially serious and irreversible competitive harm. Meta can respond before any interim measures are imposed, while the broader antitrust probe continues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU telecom simplification at risk as Digital Networks Act adds extra admin

The EU’s ambition to streamline telecom rules faces fresh uncertainty after a Commission document indicated that the Digital Networks Act may create more administrative demands for national regulators instead of easing their workload.

The plan to simplify long-standing procedures risks becoming more complex as officials examine the impact on oversight bodies.

Concerns are growing among telecom authorities and BEREC, which may need to adjust to new reporting duties and heightened scrutiny. The additional requirements could limit regulators’ ability to respond quickly to national needs.

Policymakers hoped the new framework would reduce bureaucracy and modernise the sector. The emerging assessment now suggests that greater coordination at the EU level may introduce extra layers of compliance at a time when regulators seek clarity and flexibility.

The debate has intensified as governments push for faster network deployment and more predictable governance. The prospect of heavier administrative tasks could slow progress rather than deliver the streamlined system originally promised.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Coal reserves could help Nigeria enter $650 billion AI economy

Nigeria has been advised to develop its coal reserves to benefit from the rapidly expanding global AI economy. A policy organisation said the country could capture part of the projected $650 billion AI investment by strengthening its energy supply capacity.

AI infrastructure requires vast and reliable electricity to power data centres and advanced computing systems. Technology companies worldwide are increasing energy investments as competition intensifies and demand for computing power continues to grow rapidly.

Nigeria holds nearly five billion metric tonnes of coal, offering a significant opportunity to support global energy needs. Experts warned that failure to develop these resources could result in major economic losses and missed industrial growth.

The organisation also proposed creating a national corporation to convert coal into high-value energy and industrial products. Analysts stressed that urgent government action is needed to secure Nigeria’s position in the emerging AI-driven economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!