Europe builds a laser ground station in Greenland to protect satellite links

Europe is building a laser-based ground station in Greenland to secure satellite links as Russian jamming intensifies. ESA and Denmark chose Kangerlussuaq for its clear skies and direct access to polar-orbit traffic.

The optical system uses Astrolight’s laser communication technology to transfer data at far higher rates than conventional radio links. Narrow laser beams are inherently resistant to interference and jamming, allowing large volumes of satellite imagery to reach analysts with far fewer disruptions.

Developers expect terabytes to be downloaded in under a minute, reducing reliance on vulnerable Arctic radio sites. European officials say the upgrade strengthens autonomy as undersea cables and navigation systems face repeated targeting from countries such as Russia.
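As a rough illustration of what that figure implies (the payload size and transfer window below are assumptions, not values from the announcement), moving a terabyte in a minute requires a sustained link rate well above 100 Gbps:

```python
# Back-of-the-envelope estimate of the optical link rate implied by
# "terabytes downloaded in under a minute".
# Assumed values (not stated in the announcement): 1 TB payload, 60 s window.
payload_terabytes = 1.0
window_seconds = 60.0

payload_gigabits = payload_terabytes * 8_000      # 1 TB = 8,000 gigabits
required_rate_gbps = payload_gigabits / window_seconds

print(f"Sustained rate needed: {required_rate_gbps:.0f} Gbps")
# Roughly 133 Gbps, far beyond the data rates of typical radio-frequency downlinks.
```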

The Danish station will support defence monitoring, climate science and search-and-rescue operations across high latitudes. Work is underway, with completion planned for 2026 and ambitions for a wider global laser network.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Porn site fined £1m for ignoring UK child safety age checks

A UK pornographic website has been fined £1m by Ofcom for failing to comply with mandatory age verification under the Online Safety Act. The company, AVS Group Ltd, did not respond to repeated contact from the regulator, prompting an additional £50,000 penalty.

The Act requires websites hosting adult content to implement ‘highly effective age assurance’ to prevent children from accessing explicit material. Ofcom has ordered the company to comply within 72 hours or face further daily fines.

Other tech platforms are also under scrutiny, with one unnamed major social media company undergoing compliance checks. Regulators warn that non-compliance will result in formal action, highlighting the growing enforcement of child safety online.

Critics argue the law must be tougher to ensure real protection, particularly for minors and women online. While age checks have reduced UK traffic to some sites, loopholes like VPNs remain a concern, and regulators are pushing for stricter adherence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Privacy concerns lead India to withdraw cyber safety app mandate

India has scrapped its order requiring smartphone manufacturers to pre-install the state-run Sanchar Saathi cyber safety app. The directive, which faced widespread criticism, had raised concerns over privacy and potential government surveillance.

Smartphone makers, including Apple and Samsung, reportedly resisted the order, arguing that it was issued without prior consultation and conflicted with established user privacy norms. The government maintained that the app was necessary to verify handset authenticity.

So far, the Sanchar Saathi app has attracted 14 million users, with around 2,000 fraud cases reported daily and a sharp spike of 600,000 new registrations in a single day. Despite these figures, the mandatory pre-installation rule provoked intense backlash from cybersecurity experts and digital rights advocates.

India’s Minister of Communications, Jyotiraditya Scindia, dismissed concerns about surveillance, insisting that the app does not enable snooping. Digital advocacy groups welcomed the withdrawal but called for complete legal clarity on the revised Cyber Security Rules, 2024.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Apple support scam targets users with real tickets

Cybercriminals are increasingly exploiting Apple’s support system to trick users into surrendering their accounts. Fraudsters open real support tickets in a victim’s name, which triggers official Apple emails and creates a false sense of legitimacy. These messages appear professional, making it difficult for users to detect the scam.

Victims often receive a flood of alerts, including two-factor authentication notifications, followed by phone calls from callers posing as Apple agents. The scammers guide users through steps that appear to secure their accounts, often directing them to convincing fake websites that request sensitive information.

Entering verification codes or following instructions on these fraudulent pages gives attackers access to the account. Even experienced users can fall prey because the emails come from official Apple domains, and the phone calls are carefully scripted to build trust.

Experts recommend checking support tickets directly within your Apple ID account, never sharing verification codes, and reviewing all devices linked to your account. Using antivirus software, activating two-factor authentication, and limiting personal information online further strengthen protection against such sophisticated phishing attacks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Safran and UAE institute join forces on AI geospatial intelligence

Safran.AI, the AI division of Safran Electronics & Defence, and the UAE’s Technology Innovation Institute have formed a strategic partnership to develop a next-generation agentic AI geospatial intelligence platform.

The collaboration aims to transform high-resolution satellite imagery into actionable intelligence for defence operations.

The platform will combine human oversight with advanced geospatial reasoning, enabling operators to interpret and respond to emerging situations faster and with greater precision.

Key initiatives include agentic reasoning systems powered by large language models, a mission-specific AI detector factory, and an autonomous multimodal fusion engine for persistent, all-weather monitoring.

Under the agreement, a joint team operating across France and the UAE will accelerate innovation within a unified operational structure.

Leaders from both organisations emphasise that the alliance strengthens sovereign geospatial intelligence capabilities and lays the foundations for decision intelligence in national security.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Europe boosts defence with Leonardo’s Michelangelo Dome

Italian defence company Leonardo has revealed plans for the ‘Michelangelo Dome’, an AI-powered shield designed to protect cities and critical infrastructure from missile attacks and drone swarms. The system will integrate multiple defence platforms and is expected to be fully operational by the end of the decade.

The project follows a surge in European defence spending amid geopolitical tensions and uncertainty over US support.

Leonardo’s CEO, Roberto Cingolani, highlighted the system’s open architecture, which allows compatibility with other nations’ defence networks, and emphasised the need for innovation and international cooperation.

European defence companies are increasingly investing in integrated command systems rather than standalone hardware.

Private investors are also backing startups developing autonomous and AI-driven defence technologies, creating competition for traditional primes such as Leonardo, BAE Systems, Rheinmetall, and Thales.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Asahi faces major disruption after cyberattack

Growing concern surrounds Asahi Group after the company acknowledged a possible leak of nearly two million personal records linked to a cyberattack that began in late September.

Company president Atsushi Katsuki apologised publicly and confirmed that operations remain heavily disrupted as logistics teams work towards full recovery by February.

Investigators found that attackers infiltrated network equipment at one of Asahi’s facilities, obtained administrator credentials and accessed servers repeatedly.

Katsuki noted that the breach exposed significant vulnerabilities, although he stressed that improvements had already been implemented and no ransom had been paid.

Production and shipments across most domestic factories were halted, forcing employees to handle orders manually and slowing the resumption of supply lines.

Competitors Kirin, Suntory and Sapporo have struggled to meet unexpected demand, triggering shipping limits and suspensions on some products across the wider beer industry.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Underground AI tools marketed for hacking raise alarms among cybersecurity experts

Cybersecurity researchers say cybercriminals are turning to a growing underground market of customised large language models designed to support low-level hacking tasks.

A new report from Palo Alto Networks’ Unit 42 describes how dark web forums promote jailbroken, open-source and bespoke AI models as hacking assistants or dual-use penetration testing tools, often sold via monthly or annual subscriptions.

Some appear to be repurposed commercial models trained on malware datasets and maintained by active online communities.

These models help users scan for vulnerabilities, write scripts, encrypt or exfiltrate data and generate exploit or phishing code, tasks that can support both attackers and defenders.

Unit 42’s Andy Piazza compared them to earlier dual-use tools, such as Metasploit and Cobalt Strike, which were developed for security testing but are now widely abused by criminal groups. He warned that AI now plays a similar role, lowering the expertise needed to launch attacks.

One example is a new version of WormGPT, a jailbroken LLM that resurfaced on underground forums in September after first appearing in 2023.

The updated ‘WormGPT 4’ is marketed as an unrestricted hacking assistant, with lifetime access reportedly starting at around $220 and an option to buy the complete source code. Researchers say it signals a shift from simple jailbreaks to commercialised, specialised tools that train AI for cybercrime.

Another model, KawaiiGPT, is available for free on GitHub and brands itself as a playful ‘cyber pentesting’ companion while generating malicious content.

Unit 42 calls it an entry-level but effective malicious LLM, with a casual, friendly style that masks its purpose. Around 500 contributors support and update the project, making it easier for non-experts to use.

Piazza noted that internal tests suggest much of the malware generated by these tools remains detectable and less advanced than code seen in some recent AI-assisted campaigns. The wider concern, he said, is that such models make hacking more accessible by translating technical knowledge into simple prompts.

Users no longer need to know jargon like ‘lateral movement’ and can instead ask everyday questions, such as how to find other systems on a network, and receive ready-made scripts.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

VPN credential theft emerges as top ransomware entry point

Cyber Express reports that compromised VPN credentials are now the most common method for ransomware attackers to gain entry. In Q3 2025, nearly half of all ransomware incidents began with valid, stolen VPN logins.

The analysis, based on data from Beazley Security (the insurance arm of Beazley), reveals that threat actors are increasingly exploiting remote access tools, rather than relying solely on software exploits or phishing.

Notably, VPN misuse accounted for more initial access than social engineering, supply chain attacks or remote desktop credential compromises.

One contributing factor is that many organisations do not enforce multi-factor authentication (MFA) or maintain strict access controls for VPN accounts. Cyber Express highlights that this situation underscores the ‘critical need’ for MFA and for firms to monitor for credential leaks on the dark web.
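As a minimal sketch of the kind of credential-leak monitoring the report calls for (the example below uses the public Have I Been Pwned ‘Pwned Passwords’ range API as an illustration; it is not tooling referenced by Beazley or Cyber Express), an organisation could screen VPN account passwords against known breach corpora before accepting them:

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in the public Have I Been Pwned
    corpus, using the k-anonymity range API: only the first five hex characters
    of the SHA-1 hash ever leave the machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8")
    # Each line has the form "<HASH-SUFFIX>:<COUNT>"; match our suffix against it.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

# Hypothetical usage: reject breached passwords on VPN accounts and require
# MFA enrolment before the credential is accepted.
if breach_count("Winter2025!") > 0:
    print("Password found in breach data: reject it and enforce MFA")
```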

The report also mentions specific ransomware groups such as Akira, Qilin and INC, which are known to exploit compromised VPN credentials, often via brute-force attacks or credential stuffing.

From a digital-security policy standpoint, the trend has worrying implications: it shows how traditional perimeter security, such as VPNs, is under pressure and reinforces calls for zero-trust architectures, tighter access governance and proactive credential monitoring.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Kremlin launches new push for domestic AI development

Russian President Vladimir Putin has ordered the creation of a national task force to accelerate the development of domestic generative AI systems, arguing that homegrown models are essential to safeguarding the country’s technological and strategic sovereignty. Speaking at AI Journey, Russia’s flagship AI forum, he warned that foreign large language models shape global information flows and can influence entire populations, making reliance on external technologies unacceptable.

Putin said the new task force will prioritise expanding data-centre infrastructure and securing reliable energy supplies, including through small-scale nuclear power stations. Russia still trails global leaders like the United States and China, but local companies have produced notable systems such as Sberbank’s GigaChat and Yandex’s YandexGPT.

Sberbank unveiled a new version of GigaChat and showcased AI-powered tools, ranging from humanoid robots to medical-scanning ATMs. However, recent public demonstrations have drawn unwanted attention, including an incident in which a Russian AI robot toppled over on stage.

The Kremlin aims for AI technologies to contribute more than 11 trillion roubles ($136 billion) to Russia’s economy by 2030. Putin urged state bodies and major companies to adopt AI more aggressively while cautioning against overly strict regulation.

However, he stressed that only Russian-made AI systems should be used for national security to prevent sensitive data from flowing abroad. Western sanctions, which restrict access to advanced hardware, particularly microchips, continue to hinder Russia’s ambitions.

The push for domestic AI comes as Ukraine warns that Russia is developing a new generation of autonomous, AI-driven drones capable of operating in coordinated swarms and striking targets up to 62 miles away, underscoring the growing military stakes of the AI race.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!