Ransomware attack on Under Armour leads to massive customer data exposure

Under Armour is facing growing scrutiny following the publication of customer data linked to a ransomware attack disclosed in late 2025.

According to breach verification platform Have I Been Pwned, a dataset associated with the incident appeared on a hacking forum in January, exposing information tied to tens of millions of customers.

The leaked material reportedly includes 72 million email addresses alongside names, dates of birth, location details and purchase histories. Security analysts warn that such datasets pose risks that extend far beyond immediate exposure, particularly when personal identifiers and behavioural data are combined.

Experts note that verified customer information linked to a recognised brand can fuel highly convincing phishing and fraud campaigns, increasingly powered by AI tools.

Messages referencing real transactions or purchase behaviour can blur the boundary between legitimate communication and malicious activity, meaning victims may be targeted long after the breach itself.

The incident has also led to legal action against Under Armour, with plaintiffs alleging failures in safeguarding sensitive customer information. The case highlights how modern data breaches increasingly generate long-term consequences rather than immediate technical disruption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI method boosts reasoning without extra training

Researchers at the University of California, Riverside, have introduced a technique that improves AI reasoning without requiring additional training data. Called Test-Time Matching, the approach lets a model adapt dynamically at inference time rather than relying on further training.

The method addresses a persistent weakness in multimodal AI systems, which often struggle to interpret unfamiliar combinations of images and text. Traditional evaluation metrics rely on isolated comparisons that can obscure deeper reasoning capabilities.

By replacing these with a group-based matching approach, the researchers uncovered hidden model potential and achieved markedly stronger results.

Test-Time Matching lets AI systems refine predictions through repeated self-correction. Tests on SigLIP-B16 showed substantial gains, with performance surpassing larger models, including GPT-4.1, on key reasoning benchmarks.
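
To make the idea concrete, here is a minimal, hypothetical sketch of group-based matching at inference time. It is not the researchers' implementation: the similarity matrix is random stand-in data (in practice it would come from a model such as SigLIP), the one-to-one assignment uses SciPy's Hungarian solver as one plausible way to do group matching, and the self-correction loop is only indicated in comments.

```python
# Hypothetical sketch of group-based matching, not the paper's actual code.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
# Stand-in for model similarity scores between 4 images (rows) and
# 4 candidate captions (columns); a real run would use SigLIP embeddings.
sim = rng.normal(size=(4, 4))

# Pairwise evaluation: each image picks its best caption independently,
# so two images can end up claiming the same caption.
independent = sim.argmax(axis=1)

# Group-based matching: enforce a one-to-one assignment across the whole
# group by maximising total similarity (negated for the minimising solver).
rows, cols = linear_sum_assignment(-sim)

print("independent picks:", independent)
print("group assignment: ", cols)
# Test-Time Matching would then treat high-confidence matched pairs as
# pseudo-labels, adapt the model on them, and repeat the matching.
```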

The findings suggest that smarter evaluation and adaptation strategies may unlock powerful reasoning abilities even in smaller models. Researchers say the approach could speed AI deployment across robotics, healthcare, and autonomous systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI fuels surge in online fraud risks in 2026

Online scams, fuelled by the growing use of generative AI, are expected to surge in 2026 and overtake ransomware as the top cyber-risk, the World Economic Forum has warned.

Executives are increasingly concerned about AI-driven scams that are easier to launch and harder to detect than traditional cybercrime. WEF managing director Jeremy Jurgens said leaders now face the challenge of acting collectively to protect trust and stability in an AI-driven digital environment.

Consumers are also feeling the impact. An Experian report found 68% of people now see identity theft as their main concern, while US Federal Trade Commission data shows consumer fraud losses reached $12.5 billion in 2024, up 25% year on year.

Generative AI is enabling more convincing phishing, voice cloning, and impersonation attempts. The WEF reported that 62% of executives experienced phishing attacks, 37% encountered invoice fraud, and 32% reported identity theft, with vulnerable groups increasingly targeted through synthetic content abuse.

Experts warn that many organisations still lack the skills and resources to defend against evolving threats. Consumer groups advise slowing down, questioning urgent messages, avoiding unsolicited requests for information, and verifying contacts independently to reduce the risk of generative AI-powered scams.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Advanced Linux malware framework VoidLink likely built with AI

Security researchers at Check Point have uncovered VoidLink, an advanced, modular Linux malware framework developed predominantly with AI assistance, likely by a single individual rather than a well-resourced threat group.

VoidLink’s development process, exposed through the developer’s operational security (OPSEC) failures, indicates that AI models were used not just to write parts of the code but to orchestrate the entire project, from planning and documentation to implementation.

According to analysts, the malware framework reached a functional state in under a week with more than 88,000 lines of code, compressing what would traditionally take weeks or months into days.

While no confirmed in-the-wild attacks have yet been reported, researchers caution that the advent of AI-assisted malware represents a significant cybersecurity shift, lowering the barrier to creating sophisticated threats and potentially enabling widespread future misuse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK launches software security ambassadors scheme

The UK government has launched the Software Security Ambassadors Scheme to promote stronger software security practices nationwide. The initiative is led by the Department for Science, Innovation and Technology and the National Cyber Security Centre.

Participating organisations commit to championing the new Software Security Code of Practice within their industries. Signatories agree to lead by example through secure development, procurement and advisory practices, while sharing lessons learned to strengthen national cyber resilience.

The scheme aims to improve transparency and risk management across UK digital supply chains. Software developers are encouraged to embed security throughout the whole lifecycle, while buyers are expected to incorporate security standards into procurement processes.

Officials say the approach supports the UK’s broader economic and security goals by reducing cyber risks and increasing trust in digital technologies. The government believes that better security practices will help UK businesses innovate safely and withstand cyber incidents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Davos roundtable calls for responsible AI growth

Leaders from the tech industry, academia, and policy circles met at a TIME100 roundtable in Davos, Switzerland, on 21 January to discuss how to pursue rapid AI progress without sacrificing safety and accountability. The conversation, hosted by TIME CEO Jessica Sibley, focused on how AI should be built, governed, and used as it becomes more embedded in everyday life.

A major theme was the impact of AI-enabled technology on children. Jonathan Haidt, an NYU Stern professor and author of The Anxious Generation, argued that the key issue is not total avoidance but the timing and habits of exposure. He suggested children do not need smartphones until at least high school, emphasising that delaying access can help protect brain development and executive function.

Yoshua Bengio, a professor at the Université de Montréal and founder of LawZero, said responsible innovation depends on a deeper scientific understanding of AI risks and stronger safeguards built into systems from the start. He pointed to two routes: consumer and societal demand for ‘built-in’ protections, and government involvement that could include indirect regulation through liability frameworks, such as requiring insurance for AI developers and deployers.

Participants also challenged the idea that geopolitical competition should justify weaker guardrails. Bengio argued that even rivals share incentives to prevent harmful outcomes, such as AI being used for cyberattacks or the development of biological weapons, and said coordination between major powers is possible, drawing a comparison to Cold War-era cooperation on nuclear risk reduction.

The roundtable linked AI risks to lessons from social media, particularly around attention-driven business models. Bill Ready, CEO of Pinterest, said engagement optimisation can amplify divisions and ‘prey’ on negative human impulses, and described Pinterest’s shift away from maximising view time toward maximising user outcomes, even if it hurts short-term metrics.

Several speakers argued that today’s alignment approach is too reactive. Stanford computer scientist Yejin Choi warned that models trained on the full internet absorb harmful patterns and then require patchwork fixes, urging exploration of systems that learn moral reasoning and human values more directly from the outset.

Kay Firth-Butterfield, CEO of Good Tech Advisory, added that wider AI literacy, shaped by input from workers, parents, and other everyday users, should underpin future certification and trust in AI tools.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft restores Exchange and Teams after Microsoft 365 disruption

US tech giant Microsoft investigated a service disruption affecting Exchange Online, Teams and other Microsoft 365 services after users reported access and performance problems.

The incident, which began late on Wednesday, affected core communication tools that enterprises rely on for daily operations.

Engineers initially focused on diagnosing the fault, with Microsoft indicating that a third-party networking issue may have interfered with access to Outlook and Teams.

During the disruption, users experienced intermittent connectivity failures, latency and difficulties signing in across parts of the Microsoft 365 ecosystem.

Microsoft later confirmed that service access had been restored, although no detailed breakdown of the outage scope was provided.

The incident underlined the operational risks associated with cloud productivity platforms and the importance of transparency and resilience in enterprise digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

From chips to jobs: Huang’s vision for AI at Davos 2026

AI is evolving into a foundational economic system rather than a standalone technology, according to NVIDIA chief executive Jensen Huang, who described AI as a five-layer infrastructure spanning energy, hardware, data centres, models and applications.

Speaking at the World Economic Forum in Davos, Huang argued that building and operating each layer is triggering what he called the most significant infrastructure expansion in human history, with job creation stretching from power generation and construction to cloud operations and software development.

Investment patterns suggest a structural shift instead of a speculative cycle. Venture capital funding in 2025 reached record levels, largely flowing into AI-native firms across healthcare, manufacturing, robotics and financial services.

Huang stressed that the application layer will deliver the greatest economic return as AI moves from experimentation to core operational use across industries.

Huang framed concerns about job displacement as misplaced, arguing that AI automates tasks rather than replacing professional judgement, freeing workers to focus on higher-value activities.

In healthcare, productivity gains from AI-assisted diagnostics and documentation are already increasing demand for radiologists and nurses rather than reducing headcount, as improved efficiency enables institutions to treat more patients.

Huang positioned AI as critical national infrastructure, urging governments to develop domestic capabilities aligned with local language, culture and industrial strengths.

He described AI literacy as an essential skill, comparable to leadership or management, while arguing that accessible AI tools could narrow global technology divides rather than widen them.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea sets the global standard for frontier AI regulation

South Korea will begin enforcing its Artificial Intelligence Act on Thursday, becoming the first country to introduce formal safety requirements for high-performance, or frontier, AI systems, a move set to reshape the global regulatory landscape.

The law establishes a national AI governance framework, led by the Presidential Council on National Artificial Intelligence Strategy, and creates an AI Safety Institute to oversee safety and trust assessments.

Alongside regulatory measures, the government is rolling out broad support for research, data infrastructure, talent development, startups, and overseas expansion, signalling a growth-oriented policy stance.

To minimise early disruption, authorities will introduce a minimum one-year grace period centred on guidance, consultation, and education rather than enforcement.

Obligations cover three areas: high-impact AI in critical sectors, safety rules for frontier models, and transparency requirements for generative AI, including disclosure of realistic synthetic content.

Enforcement remains light-touch, prioritising corrective orders over penalties, with fines capped at 30 million won for persistent noncompliance. Officials said the framework aims to build public trust while supporting innovation, serving as a foundation for ongoing policy development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPT-5.2 shows how AI can generate real-world cyber exploits

Advanced language models have demonstrated the ability to generate working exploits for previously unknown software vulnerabilities. Security researcher Sean Heelan tested two systems built on GPT-5.2 and Opus 4.5 by challenging them to exploit a zero-day flaw in the QuickJS JavaScript interpreter.

Across multiple scenarios with varying security protections, GPT-5.2 completed every task, while Opus 4.5 failed only two. The systems produced more than 40 functional exploits, ranging from basic shell access to complex file-writing operations that bypassed modern defences.

Most challenges were solved in under an hour, with standard attempts costing around $30. Even the most complex exploit, which bypassed protections such as address space layout randomisation, non-executable memory, and seccomp sandboxing, was completed in just over three hours for roughly $50.

The most advanced task required GPT-5.2 to write a specific string to a protected file path without access to operating system functions. The model achieved this by chaining seven function calls through the glibc exit handler mechanism, bypassing shadow stack protections.
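
The exit-handler idea itself is a documented, legitimate feature of C runtimes. As a benign analogue only, not exploit code, the sketch below uses Python's standard atexit module to show how functions registered during execution are chained automatically, in last-in-first-out order, when a process exits; the file path is purely illustrative.

```python
# Benign illustration of exit-handler chaining via Python's atexit module.
# This is an analogue of the general mechanism, not the glibc internals the
# exploit abused, and it performs no privileged or hidden action.
import atexit

def write_marker(path: str, text: str) -> None:
    # Runs at interpreter shutdown, not at registration time.
    with open(path, "a") as f:
        f.write(text + "\n")

# Handlers run in last-in-first-out order at exit, so the second
# registration below fires first. The file name is illustrative only.
atexit.register(write_marker, "demo_marker.txt", "runs second (registered first)")
atexit.register(write_marker, "demo_marker.txt", "runs first (registered last)")

print("main logic finished; registered handlers fire at exit")
```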

The findings suggest exploit development may increasingly depend on computational resources rather than human expertise. While QuickJS is less complex than browsers such as Chrome or Firefox, the approach demonstrated could scale to larger and more secure software environments.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!