How AI in 2026 will transform management roles and organisational design

By 2026, AI is expected to move beyond experimentation and pilot projects and begin reshaping how companies are actually run, transforming management structures and automating tasks as companies strive to demonstrate real value.

According to researchers and professors at IMD, the focus will shift from testing AI tools to redesigning organisational structures, decision-making processes, and management roles themselves. After several years of hype-driven investment, many companies are now under pressure to show clear returns from AI.

Those that remain stuck in proof-of-concept mode risk falling behind competitors who are willing to make more significant operational changes. Several corporate functions are set to become AI native by the end of the year.

Human roles in these areas will focus more on interpersonal judgement, oversight and complex decision-making, while software forms the operational backbone. Workforce structures are also likely to change. Middle management roles are expected to shrink gradually as AI systems take over reporting, forecasting and coordination tasks.

At the same time, risks associated with AI are growing. Highly realistic synthetic media is expected to fuel a rise in misinformation, exposing organisations to reputational and governance challenges. To respond, companies will need faster monitoring systems, clearer crisis-response protocols and closer cooperation with digital platforms to counter fabricated content.

Economic uncertainty is adding further pressure. Organisations that remain stuck in pilot mode may be forced to scale back, while those committing to bigger operational change are expected to gain an advantage.

Operational areas are expected to deliver the highest returns on investment. Supply chains, core operations and internal processes are expected to outperform customer-facing applications in efficiency, resilience and cost reduction.

As a result, chief operating officers may emerge as the most influential leaders of AI within executive teams. Ultimately, by 2026, competitive advantage will depend less on whether a company uses advanced AI and more on how deliberately it integrates these systems into everyday decision-making, roles, and organisational structures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU targets addictive gaming features

Video gaming has grown from a niche hobby into one of Europe’s most prominent entertainment industries, with over half the population regularly engaging in it.

As the sector grows, EU lawmakers are increasingly worried about addictive game design and manipulative features that push players to spend more time and money online.

Much of the concern focuses on loot boxes, where players pay for random digital rewards that resemble gambling mechanics. Studies and parliamentary reports warn that children may be particularly vulnerable, with some lawmakers calling for outright bans on paid loot boxes and premium in-game currencies.

The European Commission is examining how far design choices contribute to digital addiction and whether games are exploiting behavioural weaknesses rather than offering fair entertainment.

Officials say the risk is higher for minors, who may not fully understand how engagement-driven systems are engineered.

The upcoming Digital Fairness Act aims to strengthen consumer protection across online services, rather than leaving families to navigate the risks alone. However, as negotiations continue, the debate over how tightly gaming should be regulated is only just beginning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kazakhstan climbs global AI readiness ranking

Kazakhstan has risen to 60th place out of 195 countries in the 2025 Government AI Readiness Index, marking a 16-place improvement and highlighting a year of accelerated institutional and policy development.

The ranking, compiled by Oxford Insights, measures governments’ ability to adopt and manage AI across public administration, the economy, and social systems.

At a regional level, Kazakhstan now leads Central Asia in AI readiness. A strong performance in the Public Sector Adoption pillar, with a score of 73.59, reflects widespread use of digital services and e-government platforms, as well as a shift toward data-led public service delivery.

The country’s advanced digital infrastructure, high internet penetration, and mature electronic government ecosystem provide a solid foundation for scaling AI nationwide.

Political and governance initiatives have further strengthened Kazakhstan’s position. In 2025, the government enacted its first comprehensive AI law, which covers ethics, safety, and digital innovation.

At the same time, the Ministry of Digital Development, Innovation and Aerospace Industry was restructured into a dedicated Ministry of Artificial Intelligence and Digital Development, signalling the government’s commitment to making AI a central policy priority.

Kazakhstan’s progress demonstrates how a focused approach to policy, infrastructure, and institutions can enhance AI readiness, enabling the responsible and effective integration of AI across public and economic sectors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Japan to boost spending on semiconductors and AI

Japan’s Ministry of Economy, Trade and Industry is set to significantly increase funding for advanced semiconductors and AI in the coming fiscal year.

Spending on chips and AI is expected to nearly quadruple to ¥1.23 trillion ($7.9 billion), around 40% of the ministry’s ¥3.07 trillion budget, which is itself 50% larger than last year’s. The budget, approved by Prime Minister Sanae Takaichi’s Cabinet, will be debated in parliament early next year.

The funding boost reflects Japan’s push to strengthen its position in frontier technologies amid global competition with the US and China. The government will fund most of the additional support through regular budgets, ensuring more stable backing for semiconductor and AI development.

Key initiatives include ¥150 billion for chip venture Rapidus and ¥387.3 billion for domestic AI foundation models, data infrastructure, and ‘physical AI’ for robotics and machinery control.

The budget also allocates ¥5 billion for critical minerals and ¥122 billion for decarbonisation, including next-generation nuclear power. Special bonds worth ¥1.78 trillion will also support Japanese investment in the US, reinforcing the trade agreement between the two countries.

The increase in funding demonstrates Japan’s strategic focus on achieving technological self-sufficiency and enhancing global competitiveness in emerging industries, thereby ensuring long-term support for innovation and critical infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

MIT-IBM researchers improve large language models with PaTH Attention

Researchers at MIT and the MIT-IBM Watson AI Lab have introduced a new attention mechanism designed to enhance the capabilities of large language models (LLMs) in tracking state and reasoning across long texts.

Unlike traditional positional encoding methods, the PaTH Attention system adapts to the content of words, enabling models to follow complex sequences more effectively.

PaTH Attention models sequences through data-dependent transformations, allowing LLMs to track how meaning changes between words instead of relying solely on relative distance.
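
As a rough illustration of that idea, the sketch below contrasts fixed relative positions with a per-token, content-dependent transformation that accumulates along the sequence and is applied to the keys, so a token’s effective ‘position’ depends on what the intervening tokens actually say. This is only a toy illustration, not the authors’ implementation; the Householder-style reflection and every parameter name are simplifying assumptions.

```python
# Illustrative sketch only: a toy attention layer where "position" is a
# cumulative product of content-dependent transformations, not a fixed bias.
# Not the PaTH implementation; all names and choices here are assumptions.

import torch
import torch.nn.functional as F

def householder(v: torch.Tensor) -> torch.Tensor:
    """Build one Householder-style reflection H = I - 2 vv^T per token."""
    v = v / (v.norm(dim=-1, keepdim=True) + 1e-6)
    eye = torch.eye(v.shape[-1], device=v.device)
    return eye - 2.0 * v.unsqueeze(-1) @ v.unsqueeze(-2)  # (seq, d, d)

def toy_path_style_attention(x, Wq, Wk, Wv, Wh):
    """Causal attention where each key is rotated by the accumulated,
    data-dependent transformations of the tokens seen so far."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    H = householder(x @ Wh)                       # one transform per token
    cum = torch.eye(x.shape[-1], device=x.device)
    keys = []
    for t in range(x.shape[0]):                   # cumulative, content-driven
        cum = H[t] @ cum
        keys.append(k[t] @ cum.T)
    k_pos = torch.stack(keys)
    scores = (q @ k_pos.T) / (x.shape[-1] ** 0.5)
    mask = torch.triu(torch.ones_like(scores), diagonal=1).bool()
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

if __name__ == "__main__":
    d, n = 16, 8
    x = torch.randn(n, d)
    Wq, Wk, Wv, Wh = (torch.randn(d, d) * 0.1 for _ in range(4))
    print(toy_path_style_attention(x, Wq, Wk, Wv, Wh).shape)  # (8, 16)
```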

The approach improves performance on long-context reasoning, multi-step recall, and language modelling benchmarks, all while remaining computationally efficient and compatible with GPUs.

Tests showed consistent improvements in perplexity and content-awareness compared with conventional methods. The team combined PaTH Attention with FoX to down-weight less relevant information, improving reasoning and long-sequence understanding.

According to senior author Yoon Kim, these advances represent the next step in developing general-purpose building blocks for AI, combining expressivity, scalability, and efficiency for broader applications in structured domains such as biology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

IMF calls for stronger AI regulation in global securities markets

Regulators worldwide are being urged to adopt stronger oversight frameworks for AI in capital markets after an IMF technical note warned that rapid AI adoption could reshape securities trading while increasing systemic risk.

AI brings major efficiency gains to asset management and high-frequency trading compared with slower, human-led processes, yet opacity, market volatility, cyber threats and model concentration remain significant concerns.

The IMF warns that AI could create powerful data oligopolies where only a few firms can train the strongest models, while autonomous trading agents may unintentionally collude by widening spreads without explicit coordination.

Retail investors also face rising exposure to AI washing, where financial firms exaggerate or misrepresent AI capability, making transparency, accountability and human-in-the-loop review essential safeguards.

Supervisory authorities are encouraged to scale their own AI capacity through SupTech tools for automated surveillance and social-media sentiment monitoring.

The note highlights India as a key case study, given the dominance of algorithmic trading and SEBI’s early reporting requirements for AI and machine learning. The IMF also points to the National Stock Exchange’s use of AI in fraud detection as an emerging-market model for resilient monitoring infrastructure.

The report underlines the need for regulators to prepare for AI-driven market shocks, strengthen governance obligations on regulated entities and build specialist teams capable of understanding model risk instead of reacting only after misconduct or misinformation harms investors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI, digital twins, and intelligent wearables reshape security operations in 2026

Operational success in security technology is increasingly being judged through measurable performance rather than early-stage novelty.

As 2026 approaches, Agentic AI, digital twins and intelligent wearables are moving from research concepts into everyday operational roles, reshaping how security functions are designed and delivered.

Agentic AI is no longer limited to demonstrations. Instead of simple automation, autonomous agents now analyse video feeds, access data and sensor logs to investigate incidents and propose mitigation steps for human approval.
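
The pattern can be pictured with a minimal sketch, assuming entirely hypothetical data sources, function names and findings: the agent gathers evidence, drafts a mitigation proposal, and does nothing until a human operator approves it.

```python
# Illustrative sketch only: a minimal human-in-the-loop "agentic" security
# workflow. Every data source, finding and function name here is hypothetical.

from dataclasses import dataclass, field

@dataclass
class Incident:
    incident_id: str
    evidence: list = field(default_factory=list)
    proposal: str | None = None
    approved: bool = False

def gather_evidence(incident: Incident) -> None:
    # A real deployment would query video analytics, access-control data and
    # sensor logs; this stub just appends placeholder findings.
    incident.evidence.append("camera_07: motion detected after hours")
    incident.evidence.append("door_03: badge rejected twice at 02:14")

def propose_mitigation(incident: Incident) -> None:
    # A production agent might use an LLM to reason over the evidence;
    # here we simply summarise it into a suggested action.
    incident.proposal = (
        f"Lock down door_03 and dispatch a guard "
        f"(based on {len(incident.evidence)} pieces of evidence)."
    )

def human_review(incident: Incident) -> None:
    # The agent never acts on its own: a human operator approves or rejects.
    print(f"Incident {incident.incident_id}")
    for item in incident.evidence:
        print("  evidence:", item)
    print("  proposed:", incident.proposal)
    incident.approved = input("Approve? [y/N] ").strip().lower() == "y"

if __name__ == "__main__":
    inc = Incident("INC-2026-001")
    gather_evidence(inc)
    propose_mitigation(inc)
    human_review(inc)
    print("executed" if inc.approved else "held for further review")
```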

Adoption is accelerating worldwide, particularly in Singapore, where most business leaders already view Agentic AI as essential for maintaining competitiveness. The technology is becoming embedded in workflows rather than used as an experimental add-on.

Digital twins are also reaching maturity. Instead of being static models, they now mirror complex environments such as ports, airports and high-rise estates, allowing organisations to simulate emergencies, plan resource deployment, and optimise systems in real time.

Wearables and AR tools are undergoing a similar shift, acting as intelligent companions that interpret the environment and provide timely guidance, rather than operating as passive recording devices.

The direction of travel is clear. Security work is becoming more predictive, interconnected and immersive.

Organisations most likely to benefit are those that prioritise integration, simulation and augmentation, while measuring outcomes through KPIs such as response speed, false-positive reduction and decision confidence instead of chasing technological novelty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Drones and AI take over Christmas tree farms

Across Christmas tree farms, drones and AI are beginning to replace manual counting and field inspections.

Growers in Denmark and North Carolina are now mapping plantations using AI-driven image analysis instead of relying on workers walking the fields for days.

Systems can recognise and measure each tree, give it a digital ID and track health and growth over time, helping farmers plan harvests and sales more accurately.
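
A minimal sketch of the record-keeping side of such a system, with all field names and thresholds invented for illustration, might look like this: each detected tree gets a unique digital ID, drone surveys append measurements over time, and harvest candidates are filtered from the latest observations.

```python
# Illustrative sketch only: per-tree digital IDs with simple time-series
# tracking, as a toy stand-in for drone/AI plantation inventories.
# All field names and thresholds are assumptions for illustration.

import uuid
from dataclasses import dataclass, field

@dataclass
class TreeRecord:
    tree_id: str
    location: tuple                                    # (lat, lon) from survey
    measurements: list = field(default_factory=list)   # (date, height_m, health)

def register_tree(lat: float, lon: float) -> TreeRecord:
    """Create a new record with a unique digital ID for a detected tree."""
    return TreeRecord(tree_id=str(uuid.uuid4()), location=(lat, lon))

def add_survey(tree: TreeRecord, date: str, height_m: float, health: float) -> None:
    """Append one drone-survey observation (health scored 0-1)."""
    tree.measurements.append((date, height_m, health))

def harvest_candidates(trees, min_height_m: float = 1.8, min_health: float = 0.7):
    """Pick trees whose latest survey meets illustrative harvest thresholds."""
    ready = []
    for tree in trees:
        if not tree.measurements:
            continue
        _, height, health = tree.measurements[-1]
        if height >= min_height_m and health >= min_health:
            ready.append(tree.tree_id)
    return ready

if __name__ == "__main__":
    t1, t2 = register_tree(56.26, 9.50), register_tree(56.27, 9.51)
    add_survey(t1, "2025-10-01", 1.9, 0.9)
    add_survey(t2, "2025-10-01", 1.2, 0.8)
    print(harvest_candidates([t1, t2]))  # only t1's ID should appear
```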

The technology is proving particularly valuable in large or difficult terrain. Some plantations in North Carolina sit on steep slopes where machinery and people face higher risks, so farmers are turning to laser-scanning drones and heavy-duty robotic mowers instead of traditional equipment.

Experts say the move saves time, improves safety and reduces labour needs, while accuracy rates can reach as high as 98 percent.

Adoption still depends on cost, aviation rules and staff training, so smaller farms may struggle to keep pace. Yet interest continues to rise as equipment becomes cheaper and growers become more comfortable with digital tools.

Many industry specialists now see AI-enabled drones as everyday agricultural equipment rather than experimental gadgets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI disruption risk seen as lower for India’s white-collar jobs

India faces a lower risk of AI-driven disruption to white-collar jobs than Western economies, IT Secretary S Krishnan said. A smaller share of cognitive roles and strong STEM employment reduce the near-term impact.

Rather than replacing workers, artificial intelligence is expected to create jobs through sector-specific applications. Development and deployment of these systems will require many trained professionals.

Human oversight will remain essential as issues such as AI hallucinations limit full automation of cognitive tasks. Productivity gains are expected to support, rather than eliminate, knowledge-based work.

India is positioning itself as a global contributor to applied artificial intelligence solutions. Indigenous AI models under development are expected to support jobs, innovation and long-term economic growth.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Court blocks Texas app store law as Apple halts rollout

Apple has paused previously announced plans for Texas after a federal judge blocked a new age-verification law for app stores. The company said it will continue to monitor the legal process while keeping certain developer tools available for testing.

The law, known as the App Store Accountability Act, would have required app stores to verify user ages and obtain parental consent for minors. It also mandated that age data be shared with app developers, a provision criticised by technology companies on privacy grounds.

A US judge halted enforcement of the law, citing First Amendment concerns, ahead of its planned January rollout. Texas officials said they intend to appeal the decision, signalling that the legal dispute is likely to continue.

Apple had announced new requirements to comply with the law, including mandatory Family Sharing for users under 18 and renewed parental consent following significant app updates. Those plans are now on hold following the ruling.

Apple said its age-assurance tools remain available globally, while reiterating concerns that broad data collection could undermine user privacy. Similar laws are expected to take effect in other US states next year.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!