AI malware emerges as major cybersecurity threat

Cybersecurity experts are raising alarms as AI transitions from a theoretical concern to an operational threat. The H2 2025 ESET Threat Report shows AI-powered malware now targeting systems globally and increasing the sophistication of attacks.

PromptLock, the first AI-driven ransomware, uses a dual-component system to generate unique scripts for each target. The malware autonomously decides whether to exfiltrate, encrypt, or destroy data, using a feedback loop to ensure reliable execution.

Other AI threats include PromptFlux, which rewrites malware for persistence, and PromptSteal, which harvests sensitive files. These developments highlight the growing capabilities of attackers using machine learning models to evade traditional defences.

The ransomware-as-a-service market is growing, with Qilin, Akira, and Warlock using advanced evasion techniques. The convergence of AI-driven malware and thriving ransomware economies presents an urgent challenge for organisations globally.

Polish authorities flag TikTok for potential election interference

Polish authorities have urged the European Commission to investigate TikTok over AI-generated content advocating Poland’s exit from the European Union. Officials say the videos pose risks to democratic processes and public order.

Deputy Minister for Digitalisation Dariusz Standerski highlighted that the narratives, distribution patterns, and synthetic audiovisual material suggest TikTok may not be fulfilling its obligations under the EU Digital Services Act for Very Large Online Platforms.

The associated TikTok account has since disappeared from the platform.

The Digital Services Act requires platforms to address systemic risks, including disinformation, and allows fines of up to 6% of a company’s global annual turnover for non-compliance. TikTok and the Commission have not provided immediate comment.
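
For a rough sense of scale, the 6% ceiling works out as follows; the turnover figure below is hypothetical, not TikTok's actual revenue:

```python
# Illustration only: the DSA caps fines at 6% of global annual turnover.
# The turnover figure is hypothetical, not TikTok's actual revenue.
def dsa_max_fine(global_annual_turnover_eur: float) -> float:
    """Maximum DSA fine: 6% of global annual turnover."""
    return 0.06 * global_annual_turnover_eur

turnover = 20_000_000_000  # hypothetical €20bn global annual turnover
print(f"Maximum fine: €{dsa_max_fine(turnover):,.0f}")  # €1,200,000,000
```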

Authorities emphasised that the investigation could set an important precedent for how EU countries address AI-driven disinformation on major social media platforms.

Best AI dictation tools for faster speech-to-text in 2026

AI dictation has finally matured after years of patchy performance and frustrating inaccuracies.

Advances in speech-to-text engines and large language models now allow modern dictation tools to recognise everyday speech more reliably while keeping enough context to format sentences automatically instead of producing raw transcripts that require heavy editing.
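
As a minimal sketch of that two-stage pipeline, the snippet below transcribes audio with the open-source openai-whisper package and applies a trivial clean-up pass standing in for the LLM formatting step; the model choice and file name are placeholders, and none of the apps mentioned below necessarily work this way.

```python
# Minimal two-stage dictation sketch: speech-to-text, then formatting.
# Assumes the open-source openai-whisper package (pip install openai-whisper);
# "note.wav" is a placeholder file, and tidy() stands in for the LLM-based
# formatting step the article describes.
import whisper

model = whisper.load_model("base")          # small general-purpose model
raw = model.transcribe("note.wav")["text"]  # raw transcript text

def tidy(text: str) -> str:
    """Trivial clean-up: collapse whitespace and capitalise the first letter."""
    text = " ".join(text.split())
    return text[:1].upper() + text[1:]

print(tidy(raw))
```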

Several leading apps have emerged with different strengths. Wispr Flow focuses on flexibility with style options and custom vocabulary, while Willow blends automation with privacy by storing transcripts locally.

Monologue also prioritises privacy by allowing users to download the model and run transcription entirely on their own machines. Superwhisper caters for power users by supporting multiple downloadable models and transcription from audio or video files.

Other tools take different approaches. VoiceTypr offers an offline-first design with lifetime licensing, Aqua promotes speed and phrase-based shortcuts, Handy provides a simple free open source starting point, and Typeless gives one of the most generous free allowances while promising strong data protection.

Each reflects a wider trend where developers try to balance convenience, privacy, control and affordability.

Users now benefit from cleaner, more natural-sounding transcripts instead of the rigid audio typing tools of previous years. AI dictation has become faster, more accurate and far more usable for everyday note-taking, messaging and work tasks.

Best AI chatbot for maths accuracy revealed in new benchmark

AI tools are increasingly used for simple everyday calculations, yet a new benchmark suggests their answers remain unreliable.

The ORCA study tested five major chatbots across 500 real-world maths prompts and found that users still face roughly a 40 percent chance of receiving the wrong answer.
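
ORCA's exact scoring code is not reproduced here, but a benchmark of this kind reduces to checking model answers against known values, typically with some tolerance for rounding; a minimal sketch, with prompts and numbers invented for illustration:

```python
# Hypothetical sketch of how a numeric-accuracy benchmark might score answers.
# Prompts, expected values, and model answers are invented for illustration;
# this is not ORCA's actual data or scoring code.
def is_correct(answer: float, expected: float, rel_tol: float = 0.01) -> bool:
    """Accept answers within 1% of the expected value (rounding slack)."""
    return abs(answer - expected) <= rel_tol * abs(expected)

# (prompt, expected value, model's answer)
cases = [
    ("15% tip on 84.60", 12.69, 12.69),
    ("42 km in miles", 26.1, 26.2),
    ("compound interest on 1,000 at 4% over 2 years", 1081.60, 1062.50),
]

score = sum(is_correct(got, want) for _, want, got in cases) / len(cases)
print(f"accuracy: {score:.1%}")  # 66.7% on this toy set
```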

Gemini from Google recorded the highest score at 63 percent, with xAI’s Grok almost level at 62.8 percent. DeepSeek followed with 52 percent, while ChatGPT scored 49.4 percent, and Claude placed last at 45.2 percent.

Performance varied sharply across subjects: maths and conversion tasks produced the best results, while physics questions dragged average accuracy below 40 percent.

Researchers identified most errors as sloppy calculations or rounding mistakes, rather than deeper failures to understand the problem. Finance and economics questions highlighted the widest gaps between the models, while DeepSeek struggled most in biology and chemistry, with barely one correct answer in ten.

Users are advised to double-check results whenever accuracy is crucial; a calculator or a verified source remains a safer option than relying entirely on an AI chatbot for numerical certainty.

China proposes strict AI rules to protect children

China has proposed stringent new rules for AI aimed at protecting children and preventing chatbots from providing advice that could lead to self-harm, violence, or gambling.

The draft regulations, published by the Cyberspace Administration of China (CAC), require developers to include personalised settings, time limits, and parental consent for services offering emotional companionship.

High-risk chats involving self-harm or suicide must be passed to a human operator, with guardians or emergency contacts alerted. AI providers must not produce content that threatens national security, harms national honour, or undermines national unity.
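
The draft does not prescribe an implementation, but the escalation requirement maps onto a simple routing rule; a hypothetical sketch, with the keyword list, field names, and alert mechanism invented for illustration:

```python
# Hypothetical sketch of the escalation rule in the draft: high-risk chats go
# to a human operator, and the listed guardian or emergency contact is alerted.
# The keyword list, field names, and alert mechanism are invented here.
HIGH_RISK_TERMS = {"self-harm", "suicide", "hurt myself"}

def route(message: str, emergency_contact: str = "") -> dict:
    """Route a chat: human operator for high-risk content, chatbot otherwise."""
    if any(term in message.lower() for term in HIGH_RISK_TERMS):
        return {
            "handler": "human_operator",
            "alert": emergency_contact,  # guardian or emergency contact on file
            "message": message,
        }
    return {"handler": "chatbot", "message": message}

print(route("I want to hurt myself", "guardian@example.com"))
```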

The rules come as AI usage surges, with platforms such as DeepSeek, Z.ai, and Minimax attracting millions of users in China and abroad. The CAC supports safe AI use, including tools for local culture and elderly companionship.

The move reflects growing global concerns over AI’s impact on human behaviour. Notably, OpenAI has faced legal challenges over alleged chatbot-related harm, prompting the company to create roles focused on tracking AI risks to mental health and cybersecurity.

China’s draft rules signal a firm approach to regulating AI technology as its influence expands rapidly.

China plans stricter consent rules for AI chat platforms

China is proposing new rules requiring users to consent before AI companies can use chat logs for training. The draft measures aim to balance innovation with safety and public interest.

Platforms would need to inform users when interacting with AI and provide options to access or delete their chat history. For minors, guardian consent is required before sharing or storing any data.
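
In code, that consent requirement amounts to a gate in front of any training pipeline; a hypothetical sketch, with record fields invented for illustration:

```python
# Hypothetical sketch of the consent gate the draft implies: a chat log is
# usable for training only with the user's consent, and for minors only with
# a guardian's consent as well. Field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class ChatLog:
    text: str
    user_consented: bool
    is_minor: bool
    guardian_consented: bool = False

def usable_for_training(log: ChatLog) -> bool:
    if not log.user_consented:
        return False
    if log.is_minor and not log.guardian_consented:
        return False
    return True

logs = [ChatLog("hi", True, False), ChatLog("hello", True, True)]
training_set = [log.text for log in logs if usable_for_training(log)]
print(training_set)  # ['hi'] – the minor's log is excluded without guardian consent
```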

Analysts say the rules may slow AI chatbot improvements but will provide clearer guidance for responsible development. The measures signal that some user conversations are too sensitive to be used freely as training data.

The draft rules are open for public consultation with feedback due in late January. China encourages expanding human-like AI applications once safety and reliability are demonstrated.

Hackers abuse new AI agent connections

Security researchers warn hackers are exploiting a new feature in Microsoft Copilot Studio. The issue affects recently launched Connected Agents functionality.

Connected Agents allows AI systems to interact and share tools across environments. Researchers say default settings can expose sensitive capabilities without clear monitoring.

Zenity Labs reported attackers linking rogue agents to trusted systems. Exploits included unauthorised email sending and data access.

Experts urge organisations to disable Connected Agents for critical workloads. Stronger authentication and restricted access are advised until safeguards improve.

AI cheating drives ACCA to halt online exams

The Association of Chartered Certified Accountants (ACCA) has announced it will largely end remote examinations in the UK from March 2026, requiring students to sit tests in person unless exceptional circumstances apply.

The decision aims to address a surge in cheating, particularly facilitated by AI tools.

Remote testing was introduced during the Covid-19 pandemic to allow students to continue qualifying when in-person exams were impossible. The ACCA said online assessments have now become too difficult to monitor effectively, despite efforts to strengthen safeguards against misconduct.

Investigations show cheating has affected major auditing firms, including the ‘Big Four’ and other top companies. High-profile cases, such as EY’s $100m (£74m) settlement in the US, highlight the risks posed by compromised professional examinations.

While other accounting bodies, including the Institute of Chartered Accountants in England and Wales, continue to allow some online exams, the ACCA has indicated that high-stakes assessments must now be conducted in person to maintain credibility and integrity.

Agentic AI plans push US agencies to prioritise data reform

US federal agencies planning to deploy agentic AI in 2026 are being told to prioritise data organisation as a prerequisite for effective adoption. AI infrastructure providers say poorly structured data remains a major barrier to turning agentic systems into operational tools.

Public sector executives at Amazon Web Services, Oracle, and Cisco said government clients are shifting focus away from basic chatbot use cases. Instead, agencies are seeking domain-specific AI systems capable of handling defined tasks and delivering measurable outcomes.

US industry leaders said achieving this shift requires modernising legacy infrastructure alongside cleaning, structuring, and contextualising data. Executives stressed that agentic AI depends on high-quality data pipelines that allow systems to act autonomously within defined parameters.

Oracle said its public sector strategy for 2026 centres on enabling context-aware AI through updated data assets. Company executives argued that AI systems are only effective when deeply aligned with an organisation’s underlying data environment.

The companies said early agentic AI use cases include document review, data entry, and network traffic management. Cloud infrastructure was also highlighted as critical for scaling agentic systems and accelerating innovation across government workflows.

New AI brain model mirrors lab animal behaviour without using animal data

A new computational brain model, built entirely from biological principles, has learned a visual categorisation task with accuracy and variability matching that of lab animals. Remarkably, the model achieved these results without being trained on any animal data.

The biomimetic design integrates detailed synaptic rules with large-scale architecture across the cortex, striatum, brainstem, and acetylcholine-modulated systems.
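
The published model's exact equations are not reproduced here, but biologically grounded plasticity of this kind is often illustrated with a three-factor rule, in which a weight change depends on presynaptic activity, postsynaptic activity, and a neuromodulatory reward signal; a generic sketch, not the paper's actual rule:

```python
# Generic three-factor plasticity sketch: the weight update combines
# presynaptic activity, postsynaptic activity, and a global reward signal.
# Illustrative only; this is not the published model's actual learning rule.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(8, 8))  # synaptic weights (post x pre)
eta = 0.01                             # learning rate

def three_factor_update(w, pre, post, reward):
    """dw = eta * reward * (post outer pre): a Hebbian term gated by reward."""
    return w + eta * reward * np.outer(post, pre)

pre = rng.random(8)   # presynaptic firing rates
post = w @ pre        # simple linear postsynaptic response
w = three_factor_update(w, pre, post, reward=1.0)  # correct trial strengthens
```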

As the model learned, it reproduced neural rhythms observed in real animals, including strengthened beta-band synchrony during correct decisions. The result demonstrates emergent realism in both behaviour and underlying neural activity.

The model also revealed a previously unnoticed set of ‘incongruent neurons’ that predicted errors. When researchers revisited animal data, they found the same signals had gone undetected, highlighting the platform’s potential to uncover hidden neural dynamics.

Beyond neuroscience research, the model offers a powerful tool for testing neurotherapeutic interventions in silico. Simulating disease-related circuits allows scientists to test treatments before costly clinical trials, potentially speeding up the development of next-generation neurotherapeutics.
