London councils activate emergency plans after serious cyber attack

The Royal Borough of Kensington and Chelsea has activated emergency response plans after a cyberattack disrupted council systems in west London.

Westminster City Council and Hammersmith and Fulham Council are also affected through joint arrangements, with the National Crime Agency and the National Cyber Security Centre, which is part of GCHQ, leading the investigation. Staff in some areas have been advised to work from home while parts of the network stay offline as a precaution.

An internal memo shows that sections of the network remain offline and that full restoration of affected systems is not expected for several days. Phone lines and online forms may face disruption, although alternative contact numbers are available on the council website.

Cybersecurity specialist Nathan Webb advised residents to be cautious about emails or calls referencing the incident, as attackers frequently exploit public attention surrounding a breach to launch scams.

He added that identifying any external supplier involved is essential so that other clients can secure their own systems. Forescout expert Rik Ferguson said the case demonstrates how shared digital services can spread the risk of a single breach across multiple organisations.

Councils have praised the overnight work by IT teams, but are not disclosing technical details while the investigation continues.

BBC cyber correspondent Joe Tidy said taking servers offline is an extreme step usually reserved for significant incidents. He pointed to the Co-op case earlier this year, where the company also disconnected systems, but only after hackers had already taken data from 6.5 million people.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New phishing kit targets Microsoft 365 users

Researchers have uncovered a large phishing operation, known as Quantum Route Redirect (QRR), that creates fake Microsoft 365 login pages across nearly 1,000 domains. The campaign uses convincing email lures, including DocuSign notices and payment alerts, to steal user credentials.

QRR operations have reached 90 countries, with US users hit hardest. Analysts say the platform evades scanners by routing automated crawlers to harmless pages while directing real users to credential-harvesting sites on compromised domains.
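
The evasion described here is a form of cloaking, and the idea can be illustrated from the defender's side. The sketch below is not taken from the researchers' analysis; it assumes Python's requests library and two illustrative User-Agent strings, and simply flags a link that sends an apparent scanner and an apparent real browser to different final destinations. Real kits also fingerprint IP ranges and browser behaviour, so this is a conceptual probe rather than a reliable detector.

```python
import requests

# Hypothetical User-Agent strings for illustration: one resembling a
# security crawler, one resembling an ordinary desktop browser.
SCANNER_UA = "Mozilla/5.0 (compatible; ExampleScanner/1.0)"
BROWSER_UA = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36"
)

def final_destination(url: str, user_agent: str) -> str:
    """Follow all redirects and return the URL the client finally lands on."""
    resp = requests.get(
        url,
        headers={"User-Agent": user_agent},
        timeout=10,
        allow_redirects=True,
    )
    return resp.url

def looks_cloaked(url: str) -> bool:
    """Flag a link that routes an apparent scanner and an apparent real
    browser to different destinations, the behaviour described above."""
    return final_destination(url, SCANNER_UA) != final_destination(url, BROWSER_UA)
```

In practice, defenders combine this kind of differential probing with sandboxed browsing and domain-reputation checks, since a single pair of requests is easy for an attacker to anticipate.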

The kit emerged shortly after Microsoft disrupted the RaccoonO365 network, which had stolen thousands of accounts. Similar tools, such as VoidProxy and Darcula, have appeared, yet QRR stands out for its automation and ease of use, which enable rapid, large-scale attacks.

Cybersecurity experts warn that URL scanning alone can no longer stop such operations. Organisations are urged to adopt layered protection, stronger sign-in controls and behavioural monitoring to detect scams that increasingly mimic genuine Microsoft systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Underground AI tools marketed for hacking raise alarms among cybersecurity experts

Cybersecurity researchers say cybercriminals are turning to a growing underground market of customised large language models designed to support low-level hacking tasks.

A new report from Palo Alto Networks’ Unit 42 describes how dark web forums promote jailbroken, open-source and bespoke AI models as hacking assistants or dual-use penetration testing tools, often sold via monthly or annual subscriptions.

Some appear to be repurposed commercial models trained on malware datasets and maintained by active online communities.

These models help users scan for vulnerabilities, write scripts, encrypt or exfiltrate data and generate exploit or phishing code, tasks that can support both attackers and defenders.

Unit 42’s Andy Piazza compared them to earlier dual-use tools, such as Metasploit and Cobalt Strike, which were developed for security testing but are now widely abused by criminal groups. He warned that AI now plays a similar role, lowering the expertise needed to launch attacks.

One example is a new version of WormGPT, a jailbroken LLM that resurfaced on underground forums in September after first appearing in 2023.

The updated ‘WormGPT 4’ is marketed as an unrestricted hacking assistant, with lifetime access reportedly starting at around $220 and an option to buy the complete source code. Researchers say it signals a shift from simple jailbreaks to commercialised, specialised tools that train AI for cybercrime.

Another model, KawaiiGPT, is available for free on GitHub and brands itself as a playful ‘cyber pentesting’ companion while generating malicious content.

Unit 42 calls it an entry-level but effective malicious LLM, with a casual, friendly style that masks its purpose. Around 500 contributors support and update the project, making it easier for non-experts to use.

Piazza noted that internal tests suggest much of the malware generated by these tools remains detectable and less advanced than code seen in some recent AI-assisted campaigns. The wider concern, he said, is that such models make hacking more accessible by translating technical knowledge into simple prompts.

Users no longer need to know jargon like ‘lateral movement’ and can instead ask everyday questions, such as how to find other systems on a network, and receive ready-made scripts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Family warns others after crypto scam costs elderly man £3,000

A South Tyneside family has spoken publicly after an elderly man lost almost £3,000 to a highly persuasive cryptocurrency scam, according to a recent BBC report. The scammer contacted the victim repeatedly over several weeks, initially offering help with online banking before shifting to an ‘investment opportunity’.

According to the family, the caller built trust by using personal details, even fabricating a story about ‘free Bitcoin’ awarded to the man years earlier.

Police said the scam fits a growing trend of crypto-related fraud. The victim, under the scammer’s guidance, opened multiple new bank accounts and was eventually directed to transfer nearly £3,000 into a Coinbase-linked crypto wallet.

Attempts by the family to recover the funds were unsuccessful. Coinbase said it advises users to research any investment carefully and provides guidance on recognising scams.

Northumbria Police and national fraud agencies have been alerted. Officers said crypto scams present particular challenges because, unlike traditional banking fraud, the transferred funds are far harder to trace.

Community groups in Sunderland, such as Pallion Action Group, are now running sessions to educate older residents about online threats, noting that rapid changes in technology can make such scams especially daunting for pensioners.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Virginia sets new limits on AI chatbots for minors

Lawmakers in Virginia are preparing fresh efforts to regulate AI as concerns grow over its influence on minors and vulnerable users.

Legislators will return in January with a set of proposals focused on limiting the capabilities of chatbots, curbing deepfakes and restricting automated ticket-buying systems. The push follows a series of failed attempts last year to define high-risk AI systems and expand protections for consumers.

Delegate Michelle Maldonado aims to introduce measures that restrict what conversational agents can say in therapeutic interactions instead of allowing them to mimic emotional support.

Her plans follow the well-publicised case of a sixteen-year-old who discussed suicidal thoughts with a chatbot before taking his own life. She argues that young people rely heavily on these tools and need stronger safeguards that recognise dangerous language and redirect users towards human help.

Maldonado will also revive a previous bill on high-risk AI, refining it to address particular sectors rather than broad categories.

Delegate Cliff Hayes is preparing legislation to require labels for synthetic media and to block AI systems from buying event tickets in bulk, a practice that lets automated tools distort prices.

Hayes already secured a law preventing predictions from AI tools from being the sole basis for criminal justice decisions. He warns that the technology has advanced too quickly for policy to remain passive and urges a balance between innovation and protection.

The proposals come as the state continues to evaluate its regulatory environment under an executive order issued by Governor Glenn Youngkin.

The order directs AI systems to scan the state code for unnecessary or conflicting rules, encouraging streamlined governance instead of strict statutory frameworks. Observers argue that human oversight remains essential as legislators search for common ground on how far to extend regulatory control.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia strengthens parent support for new social media age rules

Yesterday, Australia entered a new phase of its online safety framework after the introduction of the Social Media Minimum Age policy.

eSafety has established a new Parent Advisory Group to support families as the country transitions to enhanced safeguards for young people. The group held its first meeting, with the Commissioner underlining the need for practical and accessible guidance for carers.

The initiative brings together twelve organisations representing a broad cross-section of communities in Australia, including First Nations families, culturally diverse groups, parents of children with disability and households in regional areas.

Their role is to help eSafety refine its approach, so parents can navigate social platforms with greater confidence, rather than feeling unsupported during rapid regulatory change.

The group will advise on parent engagement, offer evidence-informed insights and test updated resources such as the redeveloped Online Safety Parent Guide.

Their advice will aim to ensure materials remain relevant, inclusive and able to reach priority communities that often miss out on official communications.

Members will serve voluntarily until June 2026 and will work with eSafety to improve distribution networks and strengthen the national conversation on digital literacy. Their collective expertise is expected to shape guidance that reflects real family experiences instead of abstract policy expectations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Character AI blocks teen chat and introduces new interactive Stories feature

A new feature called ‘Stories’ from Character.AI allows users under 18 to create interactive fiction with their favourite characters. The move replaces open-ended chatbot access, which has now been blocked for minors amid concerns over mental health risks.

Open-ended AI chatbots can initiate conversations at any time, raising worries about overuse and addiction among younger users.

Several lawsuits against AI companies have highlighted the dangers, prompting Character.AI to phase out access for minors and introduce a guided, safety-focused alternative.

Industry observers say the Stories feature offers a safer environment for teens to engage with AI characters while continuing to explore creative content.

The decision aligns with recent AI regulations in California and ongoing US federal proposals to limit minors’ exposure to interactive AI companions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI and anonymity intensify online violence against women

Digital violence against women is rising sharply, fuelled by AI, online anonymity, and weak legal protections, leaving millions exposed.

UN Women warns that abuse on digital platforms often spills into real life, threatening women’s safety, livelihoods, and ability to participate freely in public life.

Public figures, journalists, and activists are increasingly targeted with deepfakes, coordinated harassment campaigns, and gendered disinformation designed to silence and intimidate.

One in four women journalists report receiving online death threats, highlighting the urgent scale and severity of the problem.

Experts call for stronger laws, safer digital platforms, and more women in technology to address AI-driven abuse effectively. Investments in education, digital literacy, and culture-change programmes are also vital to challenge toxic online communities and ensure digital spaces promote equality rather than harm.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI clarifies position in sensitive lawsuit

A legal case is underway involving OpenAI and the family of a teenager who had extensive interactions with ChatGPT before his death.

OpenAI has filed a response in court that refers to its terms of use and provides additional material for review. The filing also states that more complete records were submitted under seal so the court can assess the situation in full.

The family’s complaint includes concerns about the model’s behaviour and the company’s choices, while OpenAI’s filing outlines its view of the events and the safeguards it has in place. Both sides present different interpretations of the same interactions, which the court will evaluate.

OpenAI has also released a public statement describing its general approach to sensitive cases and the ongoing development of safety features intended to guide users towards appropriate support.

The case has drawn interest because it relates to broader questions about safety measures within conversational AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Industrial sectors push private 5G momentum

Private 5G is often dismissed as too complex or narrow, yet analysts argue it carries strong potential for mission-critical industries rather than consumer-centric markets.

Sectors that depend on high reliability, including manufacturing, logistics, energy and public safety, find public networks and Wi-Fi insufficient for the operational demands they face. The technology aligns with the rise of AI-enabled automation and may provide growth in a sluggish telecom landscape.

Success depends on the maturity of surrounding ecosystems. Devices, edge computing and integration models differ across industrial verticals, slowing adoption instead of enabling rapid deployment.

The increasing presence of physical AI systems, from autonomous drones to industrial vehicles, makes reliable connectivity even more important.

Debate intensified when Nokia considered divesting its private 5G division, raising doubts about commercial viability, yet industry observers maintain that every market involves unique complexity.

Private 5G extends beyond traditional telecom roles by supporting real-economy sectors such as factories, ports and warehouses. The challenge lies in tailoring networks to distinct operational needs instead of expecting a single solution for all industries.

Analysts also note that inflated expectations in 2019 created a perception of underperformance, although private cellular remains a vital piece in a broader ecosystem involving edge computing, device readiness and software integration.

Long-term outlooks remain optimistic. Analysts project an equipment market worth around $30 billion each year by 2040, supported by strong service revenue. Adoption will vary across industries, but its influence on public RAN markets is expected to grow.

Despite the complexity, interest inside the telecom sector remains high, especially as enterprise venues search for reliable connectivity solutions that can support their digital transformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!