Underground AI tools marketed for hacking raise alarms among cybersecurity experts

Cybersecurity researchers say cybercriminals are turning to a growing underground market of customised large language models designed to support low-level hacking tasks.

A new report from Palo Alto Networks’ Unit 42 describes how dark web forums promote jailbroken, open-source and bespoke AI models as hacking assistants or dual-use penetration testing tools, often sold via monthly or annual subscriptions.

Some appear to be repurposed commercial models trained on malware datasets and maintained by active online communities.

These models help users scan for vulnerabilities, write scripts, encrypt or exfiltrate data, and generate exploit or phishing code, all tasks that can serve attackers and defenders alike.

Unit 42’s Andy Piazza compared them to earlier dual-use tools, such as Metasploit and Cobalt Strike, which were developed for security testing but are now widely abused by criminal groups. He warned that AI now plays a similar role, lowering the expertise needed to launch attacks.

One example is a new version of WormGPT, a jailbroken LLM that resurfaced on underground forums in September after first appearing in 2023.

The updated ‘WormGPT 4’ is marketed as an unrestricted hacking assistant, with lifetime access reportedly starting at around $220 and an option to buy the complete source code. Researchers say it signals a shift from simple jailbreaks to commercialised, specialised tools that train AI for cybercrime.

Another model, KawaiiGPT, is available for free on GitHub and brands itself as a playful ‘cyber pentesting’ companion while generating malicious content.

Unit 42 calls it an entry-level but effective malicious LLM, with a casual, friendly style that masks its purpose. Around 500 contributors support and update the project, making it easier for non-experts to use.

Piazza noted that internal tests suggest much of the malware generated by these tools remains detectable and less advanced than code seen in some recent AI-assisted campaigns. The wider concern, he said, is that such models make hacking more accessible by translating technical knowledge into simple prompts.

Users no longer need to know jargon such as ‘lateral movement’; they can simply ask everyday questions, like how to find other systems on a network, and receive ready-made scripts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Staffordshire Police trials AI agents on its 101 line

Staffordshire Police will trial AI-powered ‘agents’ on its 101 non-emergency service early next year, according to a recent BBC report.

The technology, known as Agentforce, is designed to resolve simple information requests without human intervention, allowing call handlers to focus on more complex or urgent cases. The force said the system aims to improve contact centre performance after past criticism over long wait times.

Senior officers explained that the AI agent will support queries where callers are seeking information rather than reporting crimes. If keywords indicating risk or vulnerability are detected, the system will automatically route the call to a human operator.
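The report does not describe how Agentforce implements this triage, so the sketch below is only a minimal illustration of keyword-based escalation. The keyword list, function name and routing labels are all invented for the example, not the force’s actual system:

```python
# A minimal, hypothetical sketch of keyword-triggered escalation for a
# non-emergency line. The keyword list and routing labels are invented;
# this is not Agentforce's implementation.
RISK_KEYWORDS = {"threat", "violence", "weapon", "suicide", "abuse", "child"}

def route_call(transcript: str) -> str:
    """Send the call to a human if any risk keyword appears, else to the AI agent."""
    words = set(transcript.lower().split())
    if words & RISK_KEYWORDS:          # any overlap with the risk list
        return "human_operator"
    return "ai_agent"                  # simple information request

print(route_call("What time does the front desk open?"))        # ai_agent
print(route_call("I think there is a threat to my neighbour"))  # human_operator
```

A production system would rely on richer signals than exact keyword matches, such as phrasing and caller distress, which is one reason flagged calls default to a human operator.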

Thames Valley Police is already using the technology and has given ‘very positive reports’, according to acting Chief Constable Becky Riggs.

Staffordshire’s current average wait for 101 calls is 3.3 minutes, a marked improvement on the previous 7.1-minute average. Abandonment rates have also fallen, from 29.2% to 18.7%. However, Commissioner Ben Adams noted that around 8% of callers still wait over an hour.

Officers say they have been calling back those affected, both to apologise and to gather ‘significant intelligence’ that has strengthened public confidence in the system.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Virginia sets new limits on AI chatbots for minors

Lawmakers in Virginia are preparing fresh efforts to regulate AI as concerns grow over its influence on minors and vulnerable users.

Legislators will return in January with a set of proposals focused on limiting the capabilities of chatbots, curbing deepfakes and restricting automated ticket-buying systems. The push follows a series of failed attempts last year to define high-risk AI systems and expand protections for consumers.

Delegate Michelle Maldonado aims to introduce measures that restrict what conversational agents can say in therapeutic interactions, preventing them from simply mimicking emotional support.

Her plans follow the well-publicised case of a sixteen-year-old who discussed suicidal thoughts with a chatbot before taking his own life. She argues that young people rely heavily on these tools and need stronger safeguards that recognise dangerous language and redirect users towards human help.

Maldonado will also revive a previous bill on high-risk AI, refining it to address particular sectors rather than broad categories.

Delegate Cliff Hayes is preparing legislation to require labels for synthetic media and to block AI systems from buying event tickets in bulk, a practice that lets automated tools distort prices.

Hayes has already secured a law preventing AI-generated predictions from serving as the sole basis for criminal justice decisions. He warns that the technology has advanced too quickly for policy to remain passive and urges a balance between innovation and protection.

The proposals come as the state continues to evaluate its regulatory environment under an executive order issued by Governor Glenn Youngkin.

The order directs the use of AI tools to scan the state code for unnecessary or conflicting rules, encouraging streamlined governance rather than strict statutory frameworks. Observers argue that human oversight remains essential as legislators search for common ground on how far to extend regulatory control.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ecuador and Latin America expand skills in ethical AI with UNESCO training

UNESCO is strengthening capacities in AI ethics and regulation across Ecuador and Latin America through two newly launched courses. The initiatives aim to enhance digital governance and ensure the ethical use of AI in the region.

The first course, ‘Regulation of Artificial Intelligence: A View from and towards Latin America,’ is taking place virtually from 19 to 28 November 2025.

Organised by UNESCO’s Social and Human Sciences Sector in coordination with UNESCO-Chile and CTS Lab at FLACSO Ecuador, the programme involves 30 senior officials from key institutions, including the Ombudsman’s Office and the Superintendency for Personal Data Protection.

Participants are trained on AI ethical principles, risks, and opportunities, guided by UNESCO’s 2021 Recommendation on the Ethics of Artificial Intelligence.

The ‘Ethical Use of AI’ course starts next week for telecom and electoral officials. The 20-hour hybrid programme teaches officials to use UNESCO’s Readiness Assessment Methodology (RAM) to assess readiness and plan ethical AI strategies.

UNESCO aims to train 60 officials and strengthen AI ethics and regulatory frameworks in Ecuador and Chile. The programmes reflect a broader commitment to building inclusive, human-rights-oriented digital governance in Latin America.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Character AI blocks teen chat and introduces new interactive Stories feature

A new feature called ‘Stories’ from Character.AI allows users under 18 to create interactive fiction with their favourite characters. The move replaces open-ended chatbot access, which has been entirely restricted for minors amid concerns over mental health risks.

Open-ended AI chatbots can initiate conversations at any time, raising worries about overuse and addiction among younger users.

Several lawsuits against AI companies have highlighted the dangers, prompting Character.AI to phase out access for minors and introduce a guided, safety-focused alternative.

Industry observers say the Stories feature offers a safer environment for teens to engage with AI characters while continuing to explore creative content.

The decision aligns with recent AI regulations in California and ongoing US federal proposals to limit minors’ exposure to interactive AI companions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI is accelerating the transition to clean energy

AI is playing an increasingly vital role in the transition to clean energy, helping to optimise power grid operations, plan infrastructure investments, and accelerate the discovery of novel materials for energy generation, storage, and conversion.

While energy-hungry data centres can increase electricity demand, AI applications are helping reduce energy consumption across buildings, transport, and industry.

On electric grids, AI algorithms enhance efficiency, integrate renewable energy sources, and predict maintenance needs to prevent power outages. Grid operators can utilise AI to forecast supply and demand, optimise energy storage, and manage resources in real time.

Technologies such as smart thermostats, electric vehicle batteries, and AI-managed data centres provide additional flexibility to balance peak demand and supply.
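As a rough illustration of the forecasting idea mentioned above, and not any operator’s real system, the sketch below fits a simple autoregressive model to synthetic hourly demand data and produces a next-hour prediction; the data, window sizes and units are all assumptions:

```python
# Toy demand forecast: fit y[t] ~ a*y[t-1] + b*y[t-24] + c by least squares
# on synthetic hourly load with a daily cycle. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 14)                     # two weeks of hourly samples
demand = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

X = np.column_stack([
    demand[23:-1],                             # previous hour, y[t-1]
    demand[:-24],                              # same hour yesterday, y[t-24]
    np.ones(demand.size - 24),                 # intercept
])
y = demand[24:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

next_hour = coef @ np.array([demand[-1], demand[-24], 1.0])
print(f"Forecast for the next hour: {next_hour:.1f} (arbitrary units)")
```

Real grid forecasters layer in weather, calendar effects and renewable output, but the core pattern of learning the next hour’s load from historical load is the same.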

AI also aids long-term planning by helping utilities forecast future infrastructure needs amid growing renewable deployment and climate-related risks. Additionally, AI accelerates the discovery of materials for energy technologies.

At MIT, researchers use AI-guided experiments and robotics to design and test new materials, significantly shortening development times from decades to years.

Through research, modelling, and collaboration, AI is being applied to fusion reactor management, solar cell optimisation, and energy-efficient data centre design. MIT Energy Initiative programmes unite academics, industry, and policymakers to harness AI for a resilient and sustainable energy future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI development by Chinese companies shifts abroad

Leading Chinese technology companies are increasingly training their latest AI models outside the country to maintain access to Nvidia’s high-performance chips, according to a report by the Financial Times. Firms such as Alibaba and ByteDance are shifting parts of their AI development to data centres in Southeast Asia, a move that comes as the United States tightens restrictions on advanced chip exports to China.

The trend reportedly accelerated after Washington imposed new limits in April on the sale of Nvidia’s H20 chips, a key component for developing sophisticated large language models. By relying on leased server space operated by non-Chinese companies abroad, tech firms are able to bypass some of the effects of US export controls while continuing to train next-generation AI systems.

One notable exception is DeepSeek, which had already stockpiled a significant number of Nvidia chips before the export restrictions took effect. The company continues to train its models domestically and is now collaborating with Chinese chipmakers led by Huawei to develop and optimise homegrown alternatives to US hardware.

Alibaba, ByteDance, Nvidia, DeepSeek and Huawei have not commented publicly on the report, and Reuters stated that it could not independently verify the claims. However, the developments underscore the increasing complexity of global AI competition and the lengths to which companies may go to maintain technological momentum amid geopolitical pressure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New alliance between Samsung and SK Telecom accelerates 6G innovation

Samsung Electronics and SK Telecom have taken a significant step toward shaping next-generation connectivity after signing an agreement to develop essential 6G technologies.

Their partnership centres on AI-based radio access networks, with both companies aiming to secure an early lead as global competition intensifies.

Research teams from Samsung and SK Telecom will build and test key components, including AI-based channel estimation, distributed MIMO and AI-driven schedulers.

AI models will refine signals in real time to improve accuracy, rather than relying on conventional estimation methods. Meanwhile, distributed MIMO will enable multiple antennas to cooperate for reliable, high-speed communication across diverse environments.
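Neither company has published how its AI-based channel estimation works, so the sketch below only conveys the underlying idea: a conventional least-squares pilot estimate, refined by the simplest possible ‘learned’ component, a shrinkage weight fitted on simulated channels. All signals, dimensions and noise levels are invented:

```python
# Toy contrast between least-squares (LS) channel estimation and a
# data-driven refinement. Hypothetical illustration only.
import numpy as np

rng = np.random.default_rng(1)
pilot = np.array([1.0, -1.0, 1.0, 1.0])        # known pilot symbols
noise_std = 0.5

def simulate(n):
    """Return (noisy LS estimates, true channel gains) for n random channels."""
    h = rng.normal(0, 1, n)                    # true scalar channel gains
    rx = h[:, None] * pilot + rng.normal(0, noise_std, (n, pilot.size))
    h_ls = rx @ pilot / (pilot @ pilot)        # per-channel LS estimate
    return h_ls, h

# "Training": fit a scalar weight w minimising |w*h_ls - h|^2 over samples.
h_ls_train, h_train = simulate(10_000)
w = (h_ls_train @ h_train) / (h_ls_train @ h_ls_train)

# Evaluation on fresh channels: the fitted weight shrinks noisy estimates
# toward zero and typically yields a slightly lower mean squared error.
h_ls_test, h_test = simulate(10_000)
print(f"LS MSE: {np.mean((h_ls_test - h_test) ** 2):.4f}")
print(f"Learned MSE: {np.mean((w * h_ls_test - h_test) ** 2):.4f}")
```

A production AI-RAN estimator would replace the single weight with a neural network trained on realistic channel models, but the principle of learning the estimator from data rather than deriving it analytically is the same.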

The companies believe that AI-enabled schedulers and core networks will manage data flows more efficiently as the number of devices continues to rise.

Their collaboration also extends into the AI-RAN Alliance, where a jointly proposed channel estimation technology has already been accepted as a formal work item, strengthening their shared role in shaping industry standards.

Samsung continues to promote 6G research through its Advanced Communications Research Centre, and recent demonstrations at major industry events highlight the growing momentum behind AI-RAN technology.

Both organisations expect their work to accelerate the transition toward a hyperconnected 6G future, rather than allowing competing ecosystems to dominate early development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and anonymity intensify online violence against women

Digital violence against women is rising sharply, fuelled by AI, online anonymity, and weak legal protections, leaving millions exposed.

UN Women warns that abuse on digital platforms often spills into real life, threatening women’s safety, livelihoods, and ability to participate freely in public life.

Public figures, journalists, and activists are increasingly targeted with deepfakes, coordinated harassment campaigns, and gendered disinformation designed to silence and intimidate.

One in four women journalists report receiving online death threats, highlighting the urgent scale and severity of the problem.

Experts call for stronger laws, safer digital platforms, and more women in technology to address AI-driven abuse effectively. Investments in education, digital literacy, and culture-change programmes are also vital to challenge toxic online communities and ensure digital spaces promote equality rather than harm.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI scribes help reduce physician paperwork and burnout

A new UCLA Health study finds that AI-powered scribe tools can reduce physicians’ documentation time and may improve work satisfaction. Conducted across 14 specialities and 72,000 patient visits, the trial tested Microsoft DAX and Nabla in real-world clinical settings.

Physicians using Nabla reduced the time spent writing each note by almost 10% compared with usual care, saving around 41 seconds per note (implying a baseline of roughly seven minutes per note). Both AI tools modestly reduced burnout, cognitive workload, and work exhaustion, though physician oversight remains essential.

The trial highlighted several limitations, including occasional inaccuracies in AI-generated notes and a single instance of a mild patient safety concern. Physicians found the tools easy to use and noted an improvement in patient engagement, with most patients receptive to the technology.

The findings provide timely evidence as healthcare systems increasingly adopt AI scribes. Researchers emphasise that rigorous evaluation is necessary to ensure patient safety and effectiveness, and that further long-term studies across multiple institutions are recommended.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!