Evidence from threat intelligence reporting and incident analysis in 2025 suggests that AI will move from experimental use to routine deployment in malicious cyber operations in 2026. Rather than introducing entirely new threats, AI is expected to accelerate existing attack techniques, reduce operational costs for attackers, and increase the scale and persistence of campaigns.
AI-enabled malware is expected to adapt during execution. Threat intelligence reporting indicates that malware using AI models is already capable of modifying behaviour in real time. In 2026, such capabilities are expected to become more common, allowing malicious code to adjust tactics in response to defensive measures.
AI agents are likely to automate key stages of cyberattacks. Researchers expect wider use of agentic AI systems that can independently conduct reconnaissance, exploit vulnerabilities, and maintain persistence, reducing the need for continuous human control.
Prompt injection will be treated as a practical attack technique against AI deployments. As organisations embed AI assistants and agents into workflows, attackers are expected to target the AI layer itself (e.g. through prompt injection, unsafe tool use, and weak guardrails) to trigger unintended actions or expose data.
Threat actors will use AI to target humans at scale. The text emphasises AI-enhanced social engineering: conversational bots, real-time manipulation, and automated account takeover, shifting attacks from isolated human-led attempts to continuous, scalable interaction.
AI will expose APIs as a too-easily-exploited attack surface. The experts argue that AI agents capable of discovering and interacting with software interfaces will lower the barrier to abusing APIs, including undocumented or unintended ones. As agents gain broader permissions and access to cloud services, APIs are expected to become a more frequent point of exploitation and concealment.
Extortion will evolve beyond ransomware encryption. Extortion campaigns are expected to rely less on encryption alone and more on a combination of tactics, including data theft, threats to leak or alter information, and disruption of cloud services, backups, and supply chains.
Cyber incidents will increasingly spread from IT into industrial operations. Ransomware and related intrusions are expected to move beyond enterprise IT systems and disrupt operational technology and industrial control environments, amplifying downtime, supply-chain disruption, and operational impact.
The insider threat will increasingly include imposter employees. Analysts anticipate insider risks will extend beyond malicious or negligent staff to include external actors who gain physical or remote access by posing as legitimate employees, including through hardware implants or direct device access that bypasses end point security.
Nation-state cyber activity will continue to target Western governments and industries. Experts point to continued cyber operations by state-linked actors, including financially motivated campaigns and influence operations, with increased use of social engineering, deception techniques, and AI-enabled tools to scale and refine targeting.
Identity management is expected to remain a primary failure point. The rapid growth of human and machine identities, including AI agents, across SaaS, cloud platforms and third-party environments is likely to reinforce credential misuse as a leading cause of major breaches.
Taken together, these trends suggest that in 2026, cyber risk will increasingly reflect systemic exposure created by the combination of AI adoption, identity sprawl, and interconnected digital infrastructure, rather than isolated technical failures.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Stargate UAE data centre project is expected to cost more than $30 billion, underscoring the scale of the Emirates’ investment in AI infrastructure.
Speaking at the Machines Can Think summit in Abu Dhabi, UAE Minister of State for AI described the project as a centrepiece of the UAE’s efforts to expand global cooperation on AI infrastructure.
Designed as a flagship development, Stargate UAE reflects the country’s ambition to lead in AI infrastructure. Spanning 19.2 square kilometres in Abu Dhabi, the campus will be built in phases, with the first phase due in the third quarter of 2026.
Beyond domestic capacity, the UAE is positioning Stargate UAE as a platform to support the sovereign AI and data sovereignty needs of other countries.
Officials emphasised that the initiative aims to provide non-profit-oriented AI options that nations can adapt, train, and build upon in response to rising global concerns about the control of data and AI systems.
The project is supported by the UAE’s expanding capabilities in large language model development, including Jais and K2 Think.
Stargate UAE is being developed by Khazna Data Centres, part of Abu Dhabi-based AI group G42, in partnership with global technology companies including OpenAI, Oracle, Nvidia, Cisco, SoftBank, and South Korea, reinforcing its role as a globally collaborative AI infrastructure initiative.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The state of Georgia is emerging as the focal point of a growing backlash against the rapid expansion of data centres powering the US’ AI boom.
Lawmakers in several states are now considering statewide bans, as concerns over energy consumption, water use and local disruption move to the centre of economic and environmental debate.
A bill introduced in Georgia would impose a moratorium on new data centre construction until March next year, giving state and municipal authorities time to establish more explicit regulatory rules.
The proposal arrives after Georgia’s utility regulator approved plans for an additional 10 gigawatts of electricity generation, primarily driven by data centre demand and expected to rely heavily on fossil fuels.
Local resistance has intensified as the Atlanta metropolitan area led the country in data centre construction last year, prompting multiple municipalities to impose their own temporary bans.
Critics argue that rapid development has pushed up electricity bills, strained water supplies and delivered fewer tax benefits than promised. At the same time, utility companies retain incentives to expand generation rather than improve grid efficiency.
The issue has taken on broader political significance as Georgia prepares for key elections that will affect utility oversight.
Supporters of the moratorium frame the pause as a chance for public scrutiny and democratic accountability, while backers of the industry warn that blanket restrictions risk undermining investment, jobs and long-term technological competitiveness.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Firefighting is entering a new era with HEN Technologies. Founder Sunny Sethi has developed nozzles that extinguish fires up to three times faster while conserving two-thirds of water.
HEN’s products include nozzles, valves, monitors, and sprinklers equipped with sensors and smart circuits. A cloud platform tracks water flow, pressure, GPS, and weather conditions, allowing fire departments to respond efficiently and manage resources effectively.
Predictive analytics built on this data provide real-time insights for incident commanders. Firefighters can anticipate wind shifts, monitor water usage, and optimise operations, attracting interest from the Department of Homeland Security and military agencies worldwide.
Commercial adoption has been rapid, with revenue rising from $200,000 in 2023 to a projected $20 million this year. Serving 1,500 clients globally and filing 20 patents, HEN is also collecting real-world fire data that could support AI models simulating extreme environments.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Creative communities are pushing back against generative AI in literature and art. The Science Fiction and Fantasy Writers Association now bars works created wholly or partly with large language models after criticism of earlier, more permissive rules.
San Diego Comic-Con faced controversy when it initially allowed AI-generated art in its exhibition, but not for sale. Artists argued that the rules threatened originality, prompting organisers to ban all AI-created material.
Authors warn that generative AI undermines the creative process. Some point out that large language model tools are already embedded in research and writing software, raising concerns about accidental disqualification from awards.
Fans and members welcomed SFWA’s decision, but questions remain about how broadly AI usage will be defined. Many creators insist that machines cannot replicate storytelling and artistic skill.
Industry observers expect other cultural organisations to follow similar policies this year. The debate continues over ethics, fairness, and technology’s role in arts and literature.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
New measures are being introduced in west Northamptonshire with the deployment of an AI-powered CCTV tower to combat fly-tipping in known hotspots. The mobile system will be rotated between locations until January 2027 to improve detection and deterrence.
Fly-tipping remains a significant issue across the area, with more than 21,000 incidents cleared between April 2024 and March 2025. Local authorities say illegal dumping damages neighbourhoods, harms wildlife and places a heavy financial burden on taxpayers.
The tower uses 360-degree cameras and AI to monitor activity and identify offences as they occur. Automatic number plate recognition allows enforcement officers to link incidents to suspected vehicles more quickly.
Council leaders say a similar scheme in Dartford have reduced fly-tipping and believe the technology sends a strong message to offenders. Residents are encouraged to report incidents through the council website or smartphone app to support enforcement efforts.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Monnett is a European-built social media platform designed to give people control over their online feeds. Users can choose exactly what they see, prioritise friends’ posts, and opt out of surveillance-style recommendation systems that dominate other networks.
Unlike mainstream platforms, Monnett places privacy first, with no profiling or sale of user data, and private chats protected without being mined for advertising. The platform also avoids “AI slop” or generative AI content shaping people’s feeds, emphasising human-centred interaction.
Created and built in Luxembourg at the heart of Europe, Monnett’s design reflects a growing push for digital sovereignty in the European Union, where citizens, regulators and developers want more control over how their digital spaces are governed and how personal data is treated.
Core features include full customisation of your algorithm, no shadowbans, strong privacy safeguards, and a focus on genuine social connection. Monnett aims to win users who prefer meaningful online interaction over addictive feeds and opaque data practices.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Australia’s social media ban for under-16s is worrying social media companies. According to the country’s eSafety Commissioner, these companies fear a global trend of banning such apps. In Australia, regulators say major platforms reluctantly resisted the policy, fearing that similar rules could spread internationally.
In Australia, the ban has already led to the closure of 4.7 million child-linked accounts across platforms, including Instagram, TikTok and Snapchat. Authorities argue the measures are necessary to protect children from harmful algorithms and addictive design.
Social media companies operating in Australia, including Meta, say stronger safeguards are needed but oppose a blanket ban. Critics have warned about privacy risks, while regulators insist early data shows limited migration to alternative platforms.
Australia is now working with partners such as the UK to push tougher global standards on online child safety. In Australia, fines of up to A$49.5m may be imposed on companies failing to enforce the rules effectively.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
US companies are increasingly adopting Chinese AI models as part of their core technology stacks, raising questions about global leadership in AI. In the US, Pinterest has confirmed it is using Chinese-developed models to improve recommendations and shopping features.
In the US, executives point to open-source Chinese models such as DeepSeek and tools from Alibaba as faster, cheaper and easier to customise. US firms say these models can outperform proprietary alternatives at a fraction of the cost.
Adoption extends beyond Pinterest in the US, with Airbnb also relying on Chinese AI to power customer service tools. Data from Hugging Face shows Chinese models frequently rank among the most downloaded worldwide, including across US developers.
Researchers at Stanford University have found Chinese AI capabilities now match or exceed global peers. In the US, firms such as OpenAI and Meta remain focused on proprietary systems, leaving China to dominate open-source AI development.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI has dominated discussions at the World Economic Forum in Davos, where IMF managing director Kristalina Georgieva warned that labour markets are already undergoing rapid structural disruption.
According to Georgieva, demand for skills is shifting unevenly, with productivity gains benefiting some workers while younger people and first-time job seekers face shrinking opportunities.
Entry-level roles are particularly exposed as AI systems absorb routine and clerical tasks traditionally used to gain workplace experience.
Georgieva described the effect on young workers as comparable to a labour-market tsunami, arguing that reduced access to foundational roles risks long-term scarring for an entire generation entering employment.
IMF research suggests AI could affect roughly 60 percent of jobs in advanced economies and 40 percent globally, with only about half of exposed workers likely to benefit.
For others, automation may lead to lower wages, slower hiring and intensified pressure on middle-income roles lacking AI-driven productivity gains.
At Davos 2026, Georgieva warned that the rapid, unregulated deployment of AI in advanced economies risks outpacing public policy responses.
Without clear guardrails and inclusive labour strategies, she argued that technological acceleration could deepen inequality rather than supporting broad-based economic resilience.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!