How AI in 2026 will transform management roles and organisational design

In 2026, AI is expected to move beyond experimentation and pilot projects and begin reshaping how companies are actually run, transforming management structures and automating tasks as firms come under pressure to demonstrate real value.

According to researchers and professors at IMD, the focus will shift from testing AI tools to redesigning organisational structures, decision-making processes, and management roles themselves. After several years of hype-driven investment, many companies are now under pressure to show clear returns from AI.

Those that remain stuck in proof-of-concept mode risk falling behind competitors who are willing to make more significant operational changes. Several corporate functions are set to become AI-native by the end of the year.

Human roles in these areas will focus more on interpersonal judgement, oversight and complex decision-making, while software forms the operational backbone. Workforce structures are also likely to change. Middle management roles are expected to shrink gradually as AI systems take over reporting, forecasting and coordination tasks.

At the same time, risks associated with AI are growing. Highly realistic synthetic media is expected to fuel a rise in misinformation, exposing organisations to reputational and governance challenges. To respond, companies will need faster monitoring systems, clearer crisis-response protocols and closer cooperation with digital platforms to counter fabricated content.

Economic uncertainty is adding further pressure. Organisations that remain stuck in pilot mode may be forced to scale back, while those committing to bigger operational change are expected to gain an advantage.

Operational areas are expected to deliver the highest returns on investment: supply chains, core operations and internal processes should outperform customer-facing applications in efficiency, resilience and cost reduction.

As a result, chief operating officers may emerge as the most influential leaders of AI within executive teams. Ultimately, by 2026, competitive advantage will depend less on whether a company uses advanced AI and more on how deliberately it integrates these systems into everyday decision-making, roles, and organisational structures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU targets addictive gaming features

Video gaming has become one of Europe’s most prominent entertainment industries, no longer a niche hobby, with over half the population playing regularly.

As the sector grows, EU lawmakers are increasingly worried about addictive game design and manipulative features that push players to spend more time and money online.

Much of the concern focuses on loot boxes, where players pay for random digital rewards that resemble gambling mechanics. Studies and parliamentary reports warn that children may be particularly vulnerable, with some lawmakers calling for outright bans on paid loot boxes and premium in-game currencies.
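
To see why critics draw the comparison with gambling, the expected cost of chasing a specific item can be worked out from its drop probability: if the item drops with probability p per box, a buyer needs 1/p boxes on average. The sketch below uses purely hypothetical drop rates and prices, not figures from any particular game.

```python
# Toy illustration of loot-box economics (hypothetical numbers, not from any real game).
# If an item drops with probability p per box, the number of boxes needed follows a
# geometric distribution, so the expected number of purchases is 1 / p.

def expected_cost(drop_probability: float, price_per_box: float) -> float:
    """Expected spend to obtain one copy of an item with the given drop rate."""
    expected_boxes = 1.0 / drop_probability
    return expected_boxes * price_per_box

# Hypothetical example: a "rare" cosmetic with a 1% drop rate in a 2.50-euro box.
if __name__ == "__main__":
    cost = expected_cost(drop_probability=0.01, price_per_box=2.50)
    print(f"Expected spend for one copy: {cost:.2f} euros")   # 250.00 euros on average
```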

The European Commission is examining how far design choices contribute to digital addiction and whether games are exploiting behavioural weaknesses rather than offering fair entertainment.

Officials say the risk is higher for minors, who may not fully understand how engagement-driven systems are engineered.

The upcoming Digital Fairness Act aims to strengthen consumer protection across online services, rather than leaving families to navigate the risks alone. However, as negotiations continue, the debate over how tightly gaming should be regulated is only just beginning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IMF calls for stronger AI regulation in global securities markets

Regulators worldwide are being urged to adopt stronger oversight frameworks for AI in capital markets after an IMF technical note warned that rapid AI adoption could reshape securities trading while increasing systemic risk.

AI brings major efficiency gains to asset management and high-frequency trading, replacing slower, human-led processes, yet opacity, market volatility, cyber threats and model concentration remain significant concerns.

The IMF warns that AI could create powerful data oligopolies where only a few firms can train the strongest models, while autonomous trading agents may unintentionally collude by widening spreads without explicit coordination.
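
The tacit-collusion concern can be made concrete with a toy simulation of the kind researchers use to study the question; the sketch below is a simplified thought experiment, not the IMF’s model. Two independent market makers repeatedly pick a quoted spread, the tighter quote wins the order flow, and each agent follows a simple epsilon-greedy rule over its own past payoffs. Whether quotes drift wider depends on the learning rule and parameters, which is precisely why supervisors worry about emergent behaviour that no one explicitly coordinated.

```python
# Toy two-agent spread-setting simulation (illustrative only, not the IMF's analysis).
# Two market makers repeatedly pick a quoted spread; the tighter quote captures the
# order flow (split on ties) and earns profit equal to its spread. Each agent learns
# with a simple epsilon-greedy rule over its own past payoffs.
import random

SPREADS = [1, 2, 3, 4]          # hypothetical spread levels in ticks
EPSILON = 0.1                   # exploration rate
ROUNDS = 50_000

def payoff(own: int, other: int) -> float:
    if own < other:
        return float(own)       # tighter quote captures the flow
    if own > other:
        return 0.0
    return own / 2.0            # tie: flow is split

class Agent:
    def __init__(self) -> None:
        self.value = {s: 0.0 for s in SPREADS}   # running average payoff per spread
        self.count = {s: 0 for s in SPREADS}

    def choose(self) -> int:
        if random.random() < EPSILON:
            return random.choice(SPREADS)
        return max(SPREADS, key=lambda s: self.value[s])

    def update(self, spread: int, reward: float) -> None:
        self.count[spread] += 1
        self.value[spread] += (reward - self.value[spread]) / self.count[spread]

if __name__ == "__main__":
    a, b = Agent(), Agent()
    history = []
    for _ in range(ROUNDS):
        sa, sb = a.choose(), b.choose()
        a.update(sa, payoff(sa, sb))
        b.update(sb, payoff(sb, sa))
        history.append((sa + sb) / 2.0)
    print(f"Average quoted spread, last 1,000 rounds: {sum(history[-1000:]) / 1000:.2f}")
```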

Retail investors also face rising exposure to AI washing, where financial firms exaggerate or misrepresent AI capability, making transparency, accountability and human-in-the-loop review essential safeguards.

Supervisory authorities are encouraged to scale their own AI capacity through SupTech tools for automated surveillance and social-media sentiment monitoring.
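
As a rough illustration of what such monitoring can involve, the sketch below scans a feed of posts for ticker mentions and scores them against a small sentiment lexicon, flagging tickers whose aggregate score falls below an alert threshold. The lexicon, tickers, thresholds and post format are all invented for the example; real SupTech systems rely on far richer models and data feeds.

```python
# Minimal sketch of social-media sentiment monitoring for market surveillance.
# All tickers, keywords and thresholds here are hypothetical placeholders.
from collections import defaultdict

NEGATIVE = {"fraud", "collapse", "halt", "default", "scam"}
POSITIVE = {"upgrade", "beat", "growth", "record"}

def score(post: str) -> int:
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def flag_tickers(posts: list[tuple[str, str]], threshold: int = -2) -> dict[str, int]:
    """Aggregate sentiment per ticker and return those at or below the alert threshold."""
    totals: dict[str, int] = defaultdict(int)
    for ticker, text in posts:
        totals[ticker] += score(text)
    return {t: s for t, s in totals.items() if s <= threshold}

if __name__ == "__main__":
    sample = [
        ("XYZ", "Rumours of fraud and a trading halt at XYZ"),
        ("XYZ", "XYZ collapse incoming total scam"),
        ("ABC", "ABC posts record growth after upgrade"),
    ]
    print(flag_tickers(sample))   # {'XYZ': -4}
```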

The note highlights India as a key case study, given the dominance of algorithmic trading and SEBI’s early reporting requirements for AI and machine learning. The IMF also points to the National Stock Exchange’s use of AI in fraud detection as an emerging-market model for resilient monitoring infrastructure.

The report underlines the need for regulators to prepare for AI-driven market shocks, strengthen governance obligations on regulated entities and build specialist teams capable of understanding model risk instead of reacting only after misconduct or misinformation harms investors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI, digital twins, and intelligent wearables reshape security operations in 2026

Operational success in security technology is increasingly being judged through measurable performance rather than early-stage novelty.

As 2026 approaches, Agentic AI, digital twins and intelligent wearables are moving from research concepts into everyday operational roles, reshaping how security functions are designed and delivered.

Agentic AI is no longer limited to demonstrations. Instead of simple automation, autonomous agents now analyse video feeds, access data and sensor logs to investigate incidents and propose mitigation steps for human approval.
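
A stripped-down version of that pattern is sketched below: an agent collects evidence from a few stand-in sources, drafts a mitigation proposal, and then waits for a human to approve or reject it. The data sources, rules and incident details are hypothetical placeholders, not any vendor’s product.

```python
# Minimal sketch of an agentic investigation loop with human-in-the-loop approval.
# The evidence sources and decision rules below are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Incident:
    location: str
    description: str
    evidence: list[str] = field(default_factory=list)
    proposal: str | None = None
    approved: bool = False

def gather_evidence(incident: Incident) -> None:
    # Stand-ins for querying video analytics, access-control logs and sensors.
    incident.evidence.append(f"video: motion detected near {incident.location}")
    incident.evidence.append(f"access log: door at {incident.location} forced at 02:14")
    incident.evidence.append("sensor: glass-break alarm triggered")

def propose_mitigation(incident: Incident) -> None:
    if any("forced" in e for e in incident.evidence):
        incident.proposal = f"Dispatch guard to {incident.location} and lock adjacent doors"
    else:
        incident.proposal = "Log event and continue monitoring"

def human_review(incident: Incident, approve: bool) -> str:
    # Nothing is executed until a human signs off on the proposal.
    incident.approved = approve
    return f"Executing: {incident.proposal}" if approve else "Proposal rejected; escalating to operator"

if __name__ == "__main__":
    inc = Incident(location="Dock 3", description="after-hours entry alarm")
    gather_evidence(inc)
    propose_mitigation(inc)
    print("\n".join(inc.evidence))
    print(human_review(inc, approve=True))
```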

Adoption is accelerating worldwide, particularly in Singapore, where most business leaders already view Agentic AI as essential for maintaining competitiveness. The technology is becoming embedded in workflows rather than used as an experimental add-on.

Digital twins are also reaching maturity. Instead of being static models, they now mirror complex environments such as ports, airports and high-rise estates, allowing organisations to simulate emergencies, plan resource deployment, and optimise systems in real time.
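
At its simplest, such a twin is a queryable model of the site on which scenarios can be replayed without touching live operations. The sketch below is a deliberately tiny, hypothetical example: a facility modelled as a graph of zones, with a breadth-first search used to find the response team closest to a simulated incident. Real digital twins carry far more state, but the principle of planning deployment on the model rather than the live site is the same.

```python
# Toy digital-twin sketch: a facility as a zone graph, used to plan response
# deployment for a simulated incident. Layout and team positions are hypothetical.
from collections import deque

ZONES = {                      # adjacency list of the (hypothetical) site
    "gate": ["lobby"],
    "lobby": ["gate", "warehouse", "offices"],
    "warehouse": ["lobby", "dock"],
    "offices": ["lobby"],
    "dock": ["warehouse"],
}
TEAMS = {"team_a": "gate", "team_b": "offices"}

def distance(start: str, goal: str) -> int:
    """Breadth-first search over zones; returns the number of hops between them."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        zone, hops = queue.popleft()
        if zone == goal:
            return hops
        for nxt in ZONES[zone]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    raise ValueError("zones are not connected")

def nearest_team(incident_zone: str) -> tuple[str, int]:
    return min(((t, distance(z, incident_zone)) for t, z in TEAMS.items()), key=lambda x: x[1])

if __name__ == "__main__":
    team, hops = nearest_team("dock")              # simulate an emergency at the dock
    print(f"Dispatch {team}, {hops} zones away")   # team_a via lobby and warehouse: 3 hops
```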

Wearables and AR tools are undergoing a similar shift, acting as intelligent companions that interpret the environment and provide timely guidance, rather than operating as passive recording devices.

The direction of travel is clear. Security work is becoming more predictive, interconnected and immersive.

Organisations most likely to benefit are those that prioritise integration, simulation and augmentation, while measuring outcomes through KPIs such as response speed, false-positive reduction and decision confidence instead of chasing technological novelty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Drones and AI take over Christmas tree farms

Across Christmas tree farms, drones and AI are beginning to replace manual counting and field inspections.

Growers in Denmark and North Carolina are now mapping plantations using AI-driven image analysis instead of relying on workers walking the fields for days.

Systems can recognise and measure each tree, give it a digital ID and track health and growth over time, helping farmers plan harvests and sales more accurately.
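
Conceptually, that amounts to keeping a small record per detected tree and appending a new measurement after each flight. The sketch below shows one minimal, invented data model for such a registry; the field names, IDs and thresholds do not reflect any particular vendor’s system.

```python
# Minimal sketch of a per-tree registry updated from drone survey data.
# Field names, IDs and thresholds are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class TreeRecord:
    tree_id: str                    # stable digital ID assigned at first detection
    lat: float
    lon: float
    heights_cm: list[int] = field(default_factory=list)
    health_scores: list[float] = field(default_factory=list)   # 0.0 (poor) to 1.0 (healthy)

    def add_survey(self, height_cm: int, health: float) -> None:
        self.heights_cm.append(height_cm)
        self.health_scores.append(health)

    def ready_for_harvest(self, min_height_cm: int = 180) -> bool:
        return bool(self.heights_cm) and self.heights_cm[-1] >= min_height_cm

if __name__ == "__main__":
    tree = TreeRecord(tree_id="DK-0042", lat=56.17, lon=9.55)
    tree.add_survey(height_cm=150, health=0.90)   # earlier flight
    tree.add_survey(height_cm=185, health=0.85)   # latest flight
    print(tree.ready_for_harvest())               # True
```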

The technology is proving particularly valuable in large or difficult terrain. Some plantations in North Carolina sit on steep slopes where machinery and people face higher risks, so farmers are turning to laser-scanning drones and heavy-duty robotic mowers instead of traditional equipment.

Experts say the move saves time, improves safety and reduces labour needs, while accuracy rates can reach as high as 98 percent.

Adoption still depends on cost, aviation rules and staff training, so smaller farms may struggle to keep pace. Yet interest continues to rise as equipment becomes cheaper and growers become more comfortable with digital tools.

Many industry specialists now see AI-enabled drones as everyday agricultural equipment rather than experimental gadgets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI disruption risk seen as lower for India’s white-collar jobs

India faces a lower risk of AI-driven disruption to white-collar jobs than Western economies, IT Secretary S Krishnan said, arguing that a smaller share of cognitive roles and strong STEM employment reduce the near-term impact.

Rather than replacing workers, artificial intelligence is expected to create jobs through sector-specific applications. Development and deployment of these systems will require many trained professionals.

Human oversight will remain essential as issues such as AI hallucinations limit full automation of cognitive tasks. Productivity gains are expected to support, rather than eliminate, knowledge-based work.

India is positioning itself as a global contributor to applied artificial intelligence solutions. Indigenous AI models under development are expected to support jobs, innovation and long-term economic growth.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Court blocks Texas app store law as Apple halts rollout

Apple has paused its previously announced compliance plans for Texas after a federal judge blocked a new age-verification law for app stores. The company said it will continue to monitor the legal process while keeping certain developer tools available for testing.

The law, known as the App Store Accountability Act, would have required app stores to verify user ages and obtain parental consent for minors. It also mandated that age data be shared with app developers, a provision criticised by technology companies on privacy grounds.

A US judge halted enforcement of the law, citing First Amendment concerns, ahead of its planned January rollout. Texas officials said they intend to appeal the decision, signalling that the legal dispute is likely to continue.

Apple had announced new requirements to comply with the law, including mandatory Family Sharing for users under 18 and renewed parental consent following significant app updates. Those plans are now on hold following the ruling.

Apple said its age-assurance tools remain available globally, while reiterating concerns that broad data collection could undermine user privacy. Similar laws are expected to take effect in other US states next year.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Digital rules dispute deepens as US administration avoids trade retaliation

The US administration is criticising foreign digital regulations affecting major online platforms while avoiding trade measures that could disrupt the US economy. Officials say the rules disproportionately impact American technology companies.

US officials have paused or cancelled trade discussions with the UK, the EU, and South Korea. Current negotiations are focused on rolling back digital taxes, privacy rules, and platform regulations that Washington views as unfair barriers to US firms.

US administration officials describe the moves as a negotiating tactic rather than an escalation toward tariffs. While trade investigations into digital practices have been raised as a possibility, officials have stressed that the goal remains a negotiated outcome rather than a renewed trade conflict.

Technology companies have pressed for firmer action, though some industry figures warn that aggressive retaliation could trigger a wider digital trade war. Officials acknowledge that prolonged disputes with major partners could ultimately harm both US firms and global markets.

Despite rhetorical escalation and targeted threats against European companies, the US administration has so far avoided dismantling existing trade agreements. Analysts say mounting pressure may soon force Washington to choose between compromise and more concrete enforcement measures.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots exploited to create nonconsensual bikini deepfakes

Users of popular AI chatbots are generating bikini deepfakes by manipulating photos of fully clothed women, often without consent. Online discussions show how generative AI tools can be misused to create sexually suggestive deepfakes from ordinary images, raising concerns about image-based abuse.

A now-deleted Reddit thread shared prompts for using Google’s Gemini to alter clothing in photographs. One post asked for a woman’s traditional dress to be changed to a bikini. Reddit removed the content and later banned the subreddit over deepfake-related harassment.

Researchers and digital rights advocates warn that nonconsensual deepfakes remain a persistent form of online harassment. Millions of users have visited AI-powered websites designed to undress people in photos. The trend reflects growing harm enabled by increasingly realistic image generation tools.

Most mainstream AI chatbots prohibit the creation of explicit images and apply safeguards to prevent abuse. However, recent advances in image-editing models have made it easier for users to bypass guardrails using simple prompts, according to limited testing and expert assessments.

Technology companies say their policies ban altering a person’s likeness without consent, with penalties including account suspensions. Legal experts argue that deepfakes involving sexualised imagery represent a core risk of generative AI and that accountability must extend to both users and platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Santa Tracker services add new features on Christmas Eve

AI-powered features are being added to the long-running Santa Tracker services that families use on Christmas Eve. Platforms run by NORAD and Google let users follow Father Christmas’s journey while also offering interactive and personalised digital experiences.

NORAD’s Santa Tracker, first launched in 1955, now features games, videos, music, and stories in addition to its live tracking map. This year, the service introduced AI-powered features that generate elf-style avatars, create toy ideas, and produce personalised holiday stories for families.

The Santa Tracker presents Santa’s journey on a 3D globe built using open-source mapping technology and satellite imagery. Users can also watch short videos on Santa Cam, featuring Santa travelling to destinations around the world.
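
As a loose illustration of how a route of stops can be drawn with open-source mapping tools, the sketch below plots a few invented stops with the folium library, which renders a flat 2D web map rather than the 3D globe the official trackers use; none of the data or design here comes from NORAD or Google.

```python
# Illustrative route map using the open-source folium library (2D, not a 3D globe).
# The stops below are invented examples, not data from NORAD or Google.
import folium

STOPS = [
    ("Auckland", -36.85, 174.76),
    ("Tokyo", 35.68, 139.69),
    ("Helsinki", 60.17, 24.94),
    ("Reykjavik", 64.15, -21.94),
]

route_map = folium.Map(location=[30.0, 60.0], zoom_start=2)
for name, lat, lon in STOPS:
    folium.Marker([lat, lon], popup=name).add_to(route_map)
folium.PolyLine([(lat, lon) for _, lat, lon in STOPS], weight=2).add_to(route_map)
route_map.save("santa_route.html")   # open the HTML file in a browser to view
```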

Google’s version offers similar features, including a live map, estimated arrival times, and interactive activities available throughout December. Santa’s Village includes games, animations, and beginner-friendly coding activities designed for children.

Google Assistant adds a voice-based experience to Google’s tracker, enabling users to ask about Santa’s location or receive updates from the North Pole. Both platforms aim to blend tradition with digital tools to create a seamless and engaging holiday experience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!