Australia moves to curb nudify tools after eSafety action

A major provider of three widely used nudify services has cut off Australian access after enforcement action from eSafety.

The company received an official warning in September for allowing its tools to be used to produce AI-generated material that harmed children.

The withdrawal follows concerns about incidents involving school students and repeated reminders that online services must meet Australia’s mandatory safety standards.

eSafety stated that Australia’s codes and standards are encouraging companies to adopt stronger safeguards.

The Commissioner noted that preventing the misuse of consumer tools remains central to reducing the risk of harm and that more precise boundaries can lower the likelihood of abuse affecting young people.

Attention has also turned to underlying models and the hosting platforms that distribute them.

Hugging Face has updated its terms to require users to take steps to mitigate the risks associated with uploaded models, including preventing misuse for generating harmful content. The company is required to act when reports or internal checks reveal breaches of its policies.

eSafety indicated that failure to comply with industry codes or standards can lead to enforcement measures, including significant financial penalties.

The agency is working with the government on further reforms intended to restrict access to nudify tools and strengthen protections across the technology stack.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Coinbase Ventures reveals top areas to watch in 2026

Coinbase Ventures has shared the ideas its team is most excited about for 2026, highlighting areas with high potential for innovation in crypto and blockchain. Key sectors include asset tokenisation, specialised exchanges, next-generation DeFi, and AI-driven robotics.

The firm is actively seeking teams to invest in these emerging opportunities.

Perpetual contracts on real-world assets are set to expand, enabling synthetic exposure to private companies, commodities, and macroeconomic data. Specialised exchanges and trading terminals aim to consolidate liquidity, protect market makers, and improve the prediction market user experience.

Next-gen DeFi will expand with composable perpetual markets, unsecured lending, and privacy-focused applications. These developments could redefine capital efficiency, financial infrastructure, and user confidentiality across the ecosystem.

AI and robotics are also a focus, with projects targeting advanced robotic data collection, proof-of-humanity solutions, and AI-driven innovative contract development. Coinbase Ventures emphasises the potential for these technologies to accelerate on-chain adoption and innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

As AI agents proliferate, human purpose is being reconsidered

As AI agents rapidly evolve from tools to autonomous actors, experts are raising existential questions about human value and purpose.

These agents, equipped with advanced reasoning and decision-making capabilities, can now complete entire workflows with minimal human intervention.

The report notes that in corporate settings, AI agents are already being positioned to handle tasks such as client negotiations, quote generation, project coordination, or even strategic decision support. Some proponents foresee these agents climbing organisational charts, potentially serving as virtual CFOs or CEOs.

At the same time, sceptics warn that such a shift could hollow out traditional human roles. Research from McKinsey Global Institute suggests that while many human skills remain relevant, the nature and context of work will change significantly, with humans increasingly collaborating with AI rather than performing traditional tasks directly.

The questions this raises extend beyond economics and efficiency: they touch on identity, dignity, and social purpose. If AI can handle optimisation and execution, what remains uniquely human, and how will societies value those capacities?

Some analysts suggest we shift from valuing output to valuing emotional leadership, creativity, ethical judgement and human connection.

The rise of AI agents thus invites a critical rethink of labour, value, and our roles in an AI-augmented world. As debates continue, it may become ever more crucial to define what we expect from people, beyond productivity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI and 5G technology transforms stadium experience

Fans attending live football matches in the UK can now enjoy uninterrupted connectivity with a new technology combining AI and 5G.

Trials at a stadium in Milton Keynes demonstrated that thousands of spectators can stream high-quality live video feeds directly to their mobile devices.

Developed collaboratively by the University of Bristol, AI specialists Madevo, and network experts Weaver Labs, the system also delivers live player statistics, exclusive behind-the-scenes content, and real-time queue navigation. Traditional mobile networks often struggle to cope with peak demand at large venues, leaving fans frustrated.

The innovation offers clubs an opportunity to transform their stadiums into fully smart-enabled venues. University researchers said the successful trial represents a major step forward for Bristol’s Smart Internet Lab as it celebrates a decade of pioneering connectivity solutions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Snapdragon 8 Gen 5 by Qualcomm brings faster AI performance to flagship phones

Qualcomm has introduced the Snapdragon 8 Gen 5 Mobile Platform, positioning it as a premium upgrade that elevates performance, AI capability, and gaming. The company says the new chipset responds to growing demand for more advanced features in flagship smartphones.

Snapdragon 8 Gen 5 includes an enhanced sensing hub that wakes an AI assistant when a user picks up their device. Qualcomm says the platform supports agentic AI functions through the updated AI Engine, enabling more context-aware interactions and personalised assistance directly on the device.

The system is powered by the custom Oryon CPU, reaching speeds up to 3.8 GHz and delivering notable improvements in responsiveness and web performance. Qualcomm reports a 36% increase in overall processing power and an 11% boost to graphics output through its updated Adreno GPU architecture.

Qualcomm executives say the refreshed platform will bring high-end performance to more markets. Chris Patrick, senior vice-president for mobile handsets, says Snapdragon 8 Gen 5 is built to meet rising demands for speed, efficiency, and intelligent features.

Qualcomm confirmed that the chipset will appear in upcoming flagship devices from manufacturers including iQOO, Honor, Meizu, Motorola, OnePlus, and vivo. The company expects the platform to anchor next-generation models entering global markets in the months ahead.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Underground AI tools marketed for hacking raise alarms among cybersecurity experts

Cybersecurity researchers say cybercriminals are turning to a growing underground market of customised large language models designed to support low-level hacking tasks.

A new report from Palo Alto Networks’ Unit 42 describes how dark web forums promote jailbroken, open-source and bespoke AI models as hacking assistants or dual-use penetration testing tools, often sold via monthly or annual subscriptions.

Some appear to be repurposed commercial models trained on malware datasets and maintained by active online communities.

These models help users scan for vulnerabilities, write scripts, encrypt or exfiltrate data and generate exploit or phishing code, tasks that can support both attackers and defenders.

Unit 42’s Andy Piazza compared them to earlier dual-use tools, such as Metasploit and Cobalt Strike, which were developed for security testing but are now widely abused by criminal groups. He warned that AI now plays a similar role, lowering the expertise needed to launch attacks.

One example is a new version of WormGPT, a jailbroken LLM that resurfaced on underground forums in September after first appearing in 2023.

The updated ‘WormGPT 4’ is marketed as an unrestricted hacking assistant, with lifetime access reportedly starting at around $220 and an option to buy the complete source code. Researchers say it signals a shift from simple jailbreaks to commercialised, specialised tools that train AI for cybercrime.

Another model, KawaiiGPT, is available for free on GitHub and brands itself as a playful ‘cyber pentesting’ companion while generating malicious content.

Unit 42 calls it an entry-level but effective malicious LLM, with a casual, friendly style that masks its purpose. Around 500 contributors support and update the project, making it easier for non-experts to use.

Piazza noted that internal tests suggest much of the malware generated by these tools remains detectable and less advanced than code seen in some recent AI-assisted campaigns. The wider concern, he said, is that such models make hacking more accessible by translating technical knowledge into simple prompts.

Users no longer need to know jargon like ‘lateral movement’ and can instead ask everyday questions, such as how to find other systems on a network, and receive ready-made scripts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Staffordshire Police trials AI agents on its 101 line

Staffordshire Police will trial AI-powered ‘agents’ on its 101 non-emergency service early next year, according to a recent BBC report.

The technology, known as Agentforce, is designed to resolve simple information requests without human intervention, allowing call handlers to focus on more complex or urgent cases. The force said the system aims to improve contact centre performance after past criticism over long wait times.

Senior officers explained that the AI agent will support queries where callers are seeking information rather than reporting crimes. If keywords indicating risk or vulnerability are detected, the system will automatically route the call to a human operator.

Thames Valley Police is already using the technology and has given ‘very positive reports’, according to acting Chief Constable Becky Riggs.

The force’s current average wait for 101 calls is 3.3 minutes, a marked improvement on the previous 7.1-minute average. Abandonment rates have also fallen from 29.2% to 18.7%. However, Commissioner Ben Adams noted that around 8% of callers still wait over an hour.

UK officers say they have been calling back those affected, both to apologise and to gather ‘significant intelligence’ that has strengthened public confidence in the system.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Virginia sets new limits on AI chatbots for minors

Lawmakers in Virginia are preparing fresh efforts to regulate AI as concerns grow over its influence on minors and vulnerable users.

Legislators will return in January with a set of proposals focused on limiting the capabilities of chatbots, curbing deepfakes and restricting automated ticket-buying systems. The push follows a series of failed attempts last year to define high-risk AI systems and expand protections for consumers.

Delegate Michelle Maldonado aims to introduce measures that restrict what conversational agents can say in therapeutic interactions, preventing them from mimicking emotional support.

Her plans follow the well-publicised case of a sixteen-year-old who discussed suicidal thoughts with a chatbot before taking his own life. She argues that young people rely heavily on these tools and need stronger safeguards that recognise dangerous language and redirect users towards human help.

Maldonado will also revive a previous bill on high-risk AI, refining it to address particular sectors rather than broad categories.

Delegate Cliff Hayes is preparing legislation to require labels for synthetic media and to block AI systems from buying event tickets in bulk, preventing automated tools from distorting prices.

Hayes already secured a law preventing predictions from AI tools from being the sole basis for criminal justice decisions. He warns that the technology has advanced too quickly for policy to remain passive and urges a balance between innovation and protection.

The proposals come as the state continues to evaluate its regulatory environment under an executive order issued by Governor Glenn Youngkin.

The order directs AI systems to scan the state code for unnecessary or conflicting rules, encouraging streamlined governance instead of strict statutory frameworks. Observers argue that human oversight remains essential as legislators search for common ground on how far to extend regulatory control.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ecuador and Latin America expand skills in ethical AI with UNESCO training

UNESCO is strengthening capacities in AI ethics and regulation across Ecuador and Latin America through two newly launched courses. The initiatives aim to enhance digital governance and ensure the ethical use of AI in the region.

The first course, ‘Regulation of Artificial Intelligence: A View from and towards Latin America,’ is taking place virtually from 19 to 28 November 2025.

Organised by UNESCO’s Social and Human Sciences Sector in coordination with UNESCO-Chile and CTS Lab at FLACSO Ecuador, the programme involves 30 senior officials from key institutions, including the Ombudsman’s Office and the Superintendency for Personal Data Protection.

Participants are trained on AI ethical principles, risks, and opportunities, guided by UNESCO’s 2021 Recommendation on the Ethics of AI.

The ‘Ethical Use of AI’ course starts next week for telecom and electoral officials. The 20-hour hybrid programme teaches officials to use UNESCO’s Readiness Assessment Methodology (RAM) to assess readiness and plan ethical AI strategies.

UNESCO aims to train 60 officials and strengthen AI ethics and regulatory frameworks in Ecuador and Chile. The programmes reflect a broader commitment to building inclusive, human-rights-oriented digital governance in Latin America.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!