South Korea accelerates AI adoption as NVIDIA strengthens national ecosystem

NVIDIA AI Day Seoul drew more than 1,000 visitors who gathered to explore sovereign AI and the rapid progress shaping South Korea’s digital landscape.

Attendees joined workshops, technical sessions and startup showcases that highlighted the country’s expanding ecosystem in practice, rather than purely theoretical advances.

Five finalists from the Inception Grand Challenge also presented their work, reflecting the growing strength of South Korea’s startup community.

Speakers outlined how AI now supports robotics, industrial production, entertainment and public administration.

South Korean conglomerates such as Samsung, SK Group, Hyundai Motor Group and NAVER Cloud have intensified their investment in AI, while government agencies rely on accelerated computing to process documents and policy information at scale.

South Korea’s ecosystem continues to expand with hundreds of Inception startups, sovereign LLM initiatives and major supercomputing deployments.

Developers engaged directly with NVIDIA engineers through workshops and a Q&A area covering AI infrastructure, LLMs, robotics and automotive technologies. Plenary sessions examined agentic AI, reasoning models and the evolution of AI factories.

Partners presented advances in training efficiency, agentic systems and large-scale AI infrastructure built with NVIDIA’s platforms instead of legacy hardware.

South Korea’s next phase of development will be supported by access to 260,000 GPUs announced during the APEC Summit. Officials expect the infrastructure to accelerate startup growth, stimulate national AI priorities and attract new collaboration across research and industry.

The Seoul event marks another step in the country’s effort to reinforce its digital foundation while expanding its role in global AI innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia moves to curb nudify tools after eSafety action

A major provider of three widely used nudify services has cut off Australian access after enforcement action from eSafety.

The company received an official warning in September for allowing its tools to be used to produce AI-generated material that harmed children.

The withdrawal follows concerns about incidents involving school students and repeated reminders that online services must meet Australia’s mandatory safety standards.

eSafety stated that Australia’s codes and standards are encouraging companies to adopt stronger safeguards.

The Commissioner noted that preventing the misuse of consumer tools remains central to reducing the risk of harm and that more precise boundaries can lower the likelihood of abuse affecting young people.

Attention has also turned to underlying models and the hosting platforms that distribute them.

Hugging Face has updated its terms to require users to take steps to mitigate the risks associated with uploaded models, including preventing misuse for generating harmful content. The company is required to act when reports or internal checks reveal breaches of its policies.

eSafety indicated that failure to comply with industry codes or standards can lead to enforcement measures, including significant financial penalties.

The agency is working with the government on further reforms intended to restrict access to nudify tools and strengthen protections across the technology stack.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Coinbase Ventures reveals top areas to watch in 2026

Coinbase Ventures has shared the ideas its team is most excited about for 2026, highlighting areas with high potential for innovation in crypto and blockchain. Key sectors include asset tokenisation, specialised exchanges, next-generation DeFi, and AI-driven robotics.

The firm is actively seeking teams to invest in these emerging opportunities.

Perpetual contracts on real-world assets are set to expand, enabling synthetic exposure to private companies, commodities, and macroeconomic data. Specialised exchanges and trading terminals aim to consolidate liquidity, protect market makers, and improve the prediction market user experience.

Next-gen DeFi will expand with composable perpetual markets, unsecured lending, and privacy-focused applications. These developments could redefine capital efficiency, financial infrastructure, and user confidentiality across the ecosystem.

AI and robotics are also a focus, with projects targeting advanced robotic data collection, proof-of-humanity solutions, and AI-driven smart contract development. Coinbase Ventures emphasises the potential for these technologies to accelerate on-chain adoption and innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

As AI agents proliferate, human purpose is being reconsidered

As AI agents rapidly evolve from tools to autonomous actors, experts are raising existential questions about human value and purpose.

These agents, equipped with advanced reasoning and decision-making capabilities, can now complete entire workflows with minimal human intervention.

The report notes that in corporate settings, AI agents are already being positioned to handle tasks such as client negotiations, quote generation, project coordination, or even strategic decision support. Some proponents foresee these agents climbing organisational charts, potentially serving as virtual CFOs or CEOs.

At the same time, sceptics warn that such a shift could hollow out traditional human roles. Research from the McKinsey Global Institute suggests that while many human skills remain relevant, the nature and context of work will change significantly, with humans increasingly collaborating with AI rather than performing traditional tasks directly.

The questions this raises extend beyond economics and efficiency: they touch on identity, dignity, and social purpose. If AI can handle optimisation and execution, what remains uniquely human, and how will societies value those capacities?

Some analysts suggest we shift from valuing output to valuing emotional leadership, creativity, ethical judgement and human connection.

The rise of AI agents thus invites a critical rethink of labour, value, and our roles in an AI-augmented world. As debates continue, it may become ever more crucial to define what we expect from people, beyond productivity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI and 5G technology transforms stadium experience

Fans attending live football matches in the UK can now enjoy uninterrupted connectivity with a new technology combining AI and 5G.

Trials at a stadium in Milton Keynes demonstrated that thousands of spectators can stream high-quality live video feeds directly to their mobile devices.

Developed collaboratively by the University of Bristol, AI specialists Madevo, and network experts Weaver Labs, the system also delivers live player statistics, exclusive behind-the-scenes content, and real-time queue navigation. Traditional mobile networks often struggle to cope with peak demand at large venues, leaving fans frustrated.

The innovation offers clubs an opportunity to transform their stadiums into fully smart-enabled venues. University researchers said the successful trial represents a major step forward for Bristol’s Smart Internet Lab as it celebrates a decade of pioneering connectivity solutions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Snapdragon 8 Gen 5 by Qualcomm brings faster AI performance to flagship phones

Qualcomm has introduced the Snapdragon 8 Gen 5 Mobile Platform, positioning it as a premium upgrade that elevates performance, AI capability, and gaming. The company says the new chipset responds to growing demand for more advanced features in flagship smartphones.

Snapdragon 8 Gen 5 includes an enhanced sensing hub that wakes an AI assistant when a user picks up their device. Qualcomm says the platform supports agentic AI functions through the updated AI Engine, enabling more context-aware interactions and personalised assistance directly on the device.

The system is powered by the custom Oryon CPU, reaching speeds up to 3.8 GHz and delivering notable improvements in responsiveness and web performance. Qualcomm reports a 36% increase in overall processing power and an 11% boost to graphics output through its updated Adreno GPU architecture.

Qualcomm executives say the refreshed platform will bring high-end performance to more markets. Chris Patrick, senior vice-president for mobile handsets, says Snapdragon 8 Gen 5 is built to meet rising demands for speed, efficiency, and intelligent features.

Qualcomm confirmed that the chipset will appear in upcoming flagship devices from manufacturers including iQOO, Honor, Meizu, Motorola, OnePlus, and vivo. The company expects the platform to anchor next-generation models entering global markets in the months ahead.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Underground AI tools marketed for hacking raise alarms among cybersecurity experts

Cybersecurity researchers say cybercriminals are turning to a growing underground market of customised large language models designed to support low-level hacking tasks.

A new report from Palo Alto Networks’ Unit 42 describes how dark web forums promote jailbroken, open-source and bespoke AI models as hacking assistants or dual-use penetration testing tools, often sold via monthly or annual subscriptions.

Some appear to be repurposed commercial models trained on malware datasets and maintained by active online communities.

These models help users scan for vulnerabilities, write scripts, encrypt or exfiltrate data and generate exploit or phishing code, tasks that can support both attackers and defenders.

Unit 42’s Andy Piazza compared them to earlier dual-use tools, such as Metasploit and Cobalt Strike, which were developed for security testing but are now widely abused by criminal groups. He warned that AI now plays a similar role, lowering the expertise needed to launch attacks.

One example is a new version of WormGPT, a jailbroken LLM that resurfaced on underground forums in September after first appearing in 2023.

The updated ‘WormGPT 4’ is marketed as an unrestricted hacking assistant, with lifetime access reportedly starting at around $220 and an option to buy the complete source code. Researchers say it signals a shift from simple jailbreaks to commercialised, specialised tools that train AI for cybercrime.

Another model, KawaiiGPT, is available for free on GitHub and brands itself as a playful ‘cyber pentesting’ companion while generating malicious content.

Unit 42 calls it an entry-level but effective malicious LLM, with a casual, friendly style that masks its purpose. Around 500 contributors support and update the project, making it easier for non-experts to use.

Piazza noted that internal tests suggest much of the malware generated by these tools remains detectable and less advanced than code seen in some recent AI-assisted campaigns. The wider concern, he said, is that such models make hacking more accessible by translating technical knowledge into simple prompts.

Users no longer need to know jargon like ‘lateral movement’ and can instead ask everyday questions, such as how to find other systems on a network, and receive ready-made scripts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Staffordshire Police trials AI agents on its 101 line

Staffordshire Police will trial AI-powered ‘agents’ on its 101 non-emergency service early next year, according to a recent BBC report.

The technology, known as Agentforce, is designed to resolve simple information requests without human intervention, allowing call handlers to focus on more complex or urgent cases. The force said the system aims to improve contact centre performance after past criticism over long wait times.

Senior officers explained that the AI agent will support queries where callers are seeking information rather than reporting crimes. If keywords indicating risk or vulnerability are detected, the system will automatically route the call to a human operator.
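The keyword-based escalation described above can be illustrated with a minimal sketch. This is a hypothetical example only: the term list and function names are illustrative assumptions and do not reflect Agentforce’s actual implementation or API.

```python
# Hypothetical sketch of keyword-based call triage, as described in the report.
# The RISK_KEYWORDS set and route_call() are illustrative assumptions.
RISK_KEYWORDS = {"suicide", "weapon", "violence", "abuse", "threat"}

def route_call(transcript: str) -> str:
    """Return 'human' when risk or vulnerability keywords appear, else 'ai_agent'."""
    words = set(transcript.lower().split())
    if words & RISK_KEYWORDS:
        return "human"  # escalate to a human operator
    return "ai_agent"   # simple information request, handled by the AI agent

print(route_call("I want to report a lost wallet"))      # ai_agent
print(route_call("My neighbour made a serious threat"))  # human
```

In practice, systems of this kind typically go beyond exact word matching, using intent classifiers and context, but the routing principle is the same: detected risk always hands the call to a person.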

Thames Valley Police is already using the technology and has given ‘very positive reports’, according to acting Chief Constable Becky Riggs.

The force’s current average wait for 101 calls is 3.3 minutes, a marked improvement on the previous 7.1-minute average. Abandonment rates have also fallen from 29.2% to 18.7%. However, Commissioner Ben Adams noted that around 8% of callers still wait over an hour.

UK officers say they have been calling back those affected, both to apologise and to gather ‘significant intelligence’ that has strengthened public confidence in the system.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Virginia sets new limits on AI chatbots for minors

Lawmakers in Virginia are preparing fresh efforts to regulate AI as concerns grow over its influence on minors and vulnerable users.

Legislators will return in January with a set of proposals focused on limiting the capabilities of chatbots, curbing deepfakes and restricting automated ticket-buying systems. The push follows a series of failed attempts last year to define high-risk AI systems and expand protections for consumers.

Delegate Michelle Maldonado aims to introduce measures that restrict what conversational agents can say in therapeutic interactions instead of allowing them to mimic emotional support.

Her plans follow the well-publicised case of a sixteen-year-old who discussed suicidal thoughts with a chatbot before taking his own life. She argues that young people rely heavily on these tools and need stronger safeguards that recognise dangerous language and redirect users towards human help.

Maldonado will also revive a previous bill on high-risk AI, refining it to address particular sectors rather than broad categories.

Delegate Cliff Hayes is preparing legislation to require labels for synthetic media and to block AI systems from buying event tickets in bulk instead of letting automated tools distort prices.

Hayes already secured a law preventing predictions from AI tools from being the sole basis for criminal justice decisions. He warns that the technology has advanced too quickly for policy to remain passive and urges a balance between innovation and protection.

The proposals come as the state continues to evaluate its regulatory environment under an executive order issued by Governor Glenn Youngkin.

The order directs AI systems to scan the state code for unnecessary or conflicting rules, encouraging streamlined governance instead of strict statutory frameworks. Observers argue that human oversight remains essential as legislators search for common ground on how far to extend regulatory control.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!