AI and cyber priorities headline massive US defence policy bill

The US House of Representatives has passed an $848 billion defence policy bill with new provisions for cybersecurity and AI. Lawmakers voted 231 to 196 to approve the chamber’s version of the National Defence Authorisation Act (NDAA).

The bill mandates that the National Security Agency brief Congress on plans for its Cybersecurity Coordination Centre and requires annual reports from combatant commands on the levels of support provided by US Cyber Command.

It also calls for a software bill of materials for AI-enabled technology that the Department of Defence uses. The Pentagon will be authorised to create up to 12 generative AI projects to improve cybersecurity and intelligence operations.

An adopted amendment allows the NSA to share threat intelligence with the private sector to protect US telecommunications networks. Another requirement is that the Pentagon study the National Guard’s role in cyber response at the federal and state levels.

Proposals to renew the Cybersecurity Information Sharing Act and the State and Local Cybersecurity Grant Program were excluded from the final text. The Senate is expected to approve its version of the NDAA next week.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Moncler Korea fined over customer data breach

South Korea’s Personal Information Protection Commission has fined Moncler Korea 88 million won ($63,200) over a large-scale customer data breach.

The regulator said a cyberattack in December 2021 exposed the personal details of about 230,000 customers. Hackers gained access by compromising an administrator account and installing malware on the company’s servers.

The stolen data on South Korean customers included purchase-related information, though names, dates of birth, email addresses and card numbers were not part of the leak.

According to officials, Moncler Korea only became aware of the breach a month later and delayed reporting it to both customers and the regulator.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU considers social media restrictions for minors

European Commission President Ursula von der Leyen announced that the EU is considering tighter restrictions on children’s access to social media platforms.

During her annual State of the Union address, von der Leyen said the Commission is closely monitoring Australia’s approach, where individuals under 16 are banned from using platforms like TikTok, Instagram, and Snapchat.

‘I am watching the implementation of their policy closely,’ von der Leyen said, adding that a panel of experts will advise her on the best path forward for Europe by the end of 2025.

Currently, social media age limits are handled at the national level across the EU, with platforms generally setting a minimum age of 13. France, however, is moving toward a national ban for those under 15 unless an EU-wide measure is introduced.

Several EU countries, including the Netherlands, have already warned against children under 15 using social media, citing health risks.

In June, the European Commission issued child protection guidelines under the Digital Services Act, and began working with five member states on age verification tools, highlighting growing concern over digital safety for minors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattack hits LNER passenger data, investigation under way

The contact details of rail passengers have been stolen in a cyberattack affecting London North Eastern Railway (LNER). The company stated that it had been notified of unauthorised access to files managed by a third-party supplier and advised customers to be vigilant against phishing attempts.

LNER stressed that no bank details, card numbers, or passwords had been compromised. The York-based operator stated that it was collaborating with cybersecurity experts and the supplier to investigate the breach and ensure necessary safeguards.

The company did not confirm the number of passengers affected. The incident comes as LNER reported revenues exceeding £1 billion, yet the operator has continued to rely on government support since its nationalisation in 2018.

Passenger complaints rose 12.2 percent in 2025 to 24,015, and competition from private operators is driving losses: online ticket platforms such as Trainline direct passengers to cheaper rivals, costing LNER significant revenue.

The breach follows other attacks on UK transport services, including a 2024 incident in which the bank details of 5,000 Transport for London customers were exposed, resulting in weeks of disrupted online services.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattack keeps JLR factories shut, hackers claim responsibility

Jaguar Land Rover (JLR) has confirmed that data was affected in a cyberattack that has kept its UK factories idle for more than a week. The company stated that it is contacting anyone whose data was involved, although it did not clarify whether the breach affected customers, suppliers, or internal systems.

JLR reported the incident to the Information Commissioner’s Office and immediately shut down IT systems to limit damage. Production at Midlands and Merseyside sites has been halted until at least Thursday, with staff instructed not to return before next week.

The disruption has also hit suppliers and retailers, with garages struggling to order spare parts and dealers facing delays registering vehicles. JLR said it is working around the clock to restore operations in a safe and controlled way, though the process is complex.

Responsibility for the hack has been claimed by Scattered Lapsus$ Hunters, a group linked to previous attacks on Marks & Spencer and the Co-op in the UK and on Las Vegas casinos in the US. The hackers posted alleged screenshots from JLR’s internal systems on Telegram last week.

Cybersecurity experts say the group’s claim that ransomware was deployed raises questions, as the group appears to have severed ties with Russian ransomware gangs. Analysts suggest the hackers may only have stolen data or may be building their own ransomware infrastructure.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Oracle and OpenAI drive record $300B investment in cloud for AI

OpenAI has finalised a record $300 billion deal with Oracle to secure vast computing infrastructure over five years, marking one of the most significant cloud contracts in history. The agreement is part of Project Stargate, OpenAI’s plan to build massive data centre capacity in the US and abroad.

The two companies will develop 4.5 gigawatts of computing capacity, equivalent to the power drawn by millions of homes.
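A rough back-of-envelope check supports that comparison. Assuming an average US household uses about 10,700 kWh of electricity per year (our assumption, not a figure from the deal), 4.5 gigawatts works out to roughly three to four million homes:

```python
# Back-of-envelope: how many average US homes correspond to 4.5 GW of capacity?
# The household consumption figure is an assumption, not a number from the article.

DATA_CENTRE_POWER_W = 4.5e9        # 4.5 gigawatts, as stated in the deal
HOUSEHOLD_KWH_PER_YEAR = 10_700    # assumed average US household electricity use
HOURS_PER_YEAR = 8_760

avg_household_draw_w = HOUSEHOLD_KWH_PER_YEAR * 1_000 / HOURS_PER_YEAR  # ~1.2 kW
homes_equivalent = DATA_CENTRE_POWER_W / avg_household_draw_w

print(f"Average household draw: {avg_household_draw_w:,.0f} W")
print(f"Homes equivalent: {homes_equivalent / 1e6:.1f} million")  # roughly 3.7 million
```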

Backed by SoftBank and other partners, the Stargate initiative aims to surpass $500 billion in investment, with construction already underway in Texas. Additional plans include a large-scale data centre project in the United Arab Emirates, supported by Emirati firm G42.

The scale of the deal highlights the fierce race among tech giants to dominate AI infrastructure. Amazon, Microsoft, Google and Meta are also pledging hundreds of billions of dollars towards data centres, while OpenAI faces mounting financial pressure.

The company currently generates around $10 billion in revenue but is expected to spend far more than that annually to support its expansion.

Oracle is betting heavily on OpenAI as a future growth driver, although the risk is high given OpenAI’s lack of profitability and Oracle’s growing debt burden.

The gamble rests on the assumption that ChatGPT and related AI technologies will continue to grow at an unprecedented pace, despite intense competition from Google, Anthropic and others.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybersecurity protections for US companies at risk as key law nears expiration

As cyber threats grow, a vital legal safeguard encouraging US companies to share threat intelligence is on the verge of expiring.

The US Cybersecurity Information Sharing Act of 2015 (CISA 2015), which grants liability protection to firms that voluntarily share cyber threat data with peers and the federal government, is set to lapse at the end of the month unless Congress acts swiftly.

The potential loss of this law could leave companies, especially small and mid-sized organisations, isolated in defending against cyberattacks, including those powered by emerging technologies like agentic AI. Without liability protection, companies may revert to lengthy legal reviews or avoid information-sharing altogether.

On 3 September 2025, the House Homeland Security Committee unanimously approved a bill to extend these protections, but it still needs full congressional approval and the president’s signature.

According to Bloomberg, the Cybersecurity and Infrastructure Security Agency (CISA) has suffered budget cuts and workforce reductions under the Trump administration. Despite the administration’s criticism of the agency, its nominee to lead CISA, Sean Plankey, has publicly supported extending CISA 2015.

Industry leaders warn that losing these protections could slow down vital threat coordination. ‘This is the last line of defence,’ said Carole House, a former White House cybersecurity advisor.

With the potential expiration of CISA 2015, industry-focused Information Sharing and Analysis Centres (ISACs), now numbering at least 28 in the USA, may serve as a fallback for cybersecurity collaboration.

While some ISACs already offer legal protections like NDAs and anonymous sharing, experts warn that companies may hesitate to participate without federal liability protections.

Complex legal agreements could become necessary, potentially limiting engagement. ‘You run the risk of some companies deciding it’s too risky,’ said Scott Algeier, executive director of the IT-ISAC, though he expressed hope that collaboration would continue.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New iPhone security ups pressure on spyware

Apple is rolling out Memory Integrity Enforcement (MIE) on the iPhone 17 line and iPhone Air, an always-on set of protections aimed at blocking the memory-safety exploits used by mercenary spyware.

MIE builds on ARM’s Enhanced Memory Tagging Extension in Apple’s A19 chips, alongside secure allocators and tag-confidentiality measures.

Older devices without the new tagging hardware also receive memory-safety upgrades. Apple says new Spectre V1 leak mitigations arrive with virtually no CPU penalty.

Comparable ideas exist elsewhere, such as Windows 11’s memory integrity (HVCI) and Android’s MTE support on Pixel 8, but Apple’s approach is enabled by default across key attack surfaces. Security reporters note the move significantly complicates spyware operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canadian news publishers clash with OpenAI in landmark copyright case

OpenAI is set to argue in an Ontario court that a copyright lawsuit by Canadian news publishers should be heard in the United States. The case, the first of its kind in Canada, alleges that OpenAI scraped Canadian news content to train ChatGPT without permission or payment.

The coalition of publishers, including CBC/Radio-Canada, The Globe and Mail, and Postmedia, says the material was created and hosted in Ontario, making the province the proper venue. They warn that accepting OpenAI’s stance would undermine Canadian sovereignty in the digital economy.

OpenAI, however, says the training of its models and web crawling occurred outside Canada and that the Copyright Act cannot apply extraterritorially. It argues the publishers are politicising the case by framing it as a matter of sovereignty rather than jurisdiction.

The dispute reflects a broader global clash over how generative AI systems use copyrighted works. US courts are already handling several similar cases, though no clear precedent has been established on whether such use qualifies as fair use.

Publishers argue Canadian courts must decide the matter domestically, while OpenAI insists it belongs in US courts. The outcome could shape how copyright laws apply to AI training and digital content across borders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ransomware 3.0 raises alarm over AI-generated cyber threats

Researchers at NYU’s Tandon School of Engineering have demonstrated how large language models can be used to run ransomware campaigns autonomously. Their prototype, dubbed Ransomware 3.0, simulated every stage of an attack, from intrusion to the generation of a ransom note.

The system briefly caused alarm after cybersecurity firm ESET discovered its files on VirusTotal and mistakenly identified them as live malware. The proof of concept was designed only for controlled laboratory use and posed no risk outside testing environments.

Instead of pre-written code, the prototype embedded text instructions that triggered AI models to generate tailored attack scripts. Each execution created unique code, evading traditional detection methods and running across Windows, Linux, and Raspberry Pi systems.

The researchers found that the system identified up to 96% of sensitive files and could generate personalised extortion notes, raising psychological pressure on victims. With costs as low as $0.70 per attack using commercial AI services, such methods could lower barriers for criminals.

The team stressed that the work was conducted ethically and aims to help defenders prepare countermeasures. They recommend monitoring file access patterns, limiting outbound AI connections, and developing defences against AI-generated attack behaviours.
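As an illustration of the first recommendation, the sketch below monitors file access patterns by polling a directory and flagging bursts of modified files of the kind a bulk-encryption routine would produce. It is our own minimal example, not the researchers’ tooling, and the watched directory, polling window and alert threshold are hypothetical parameters.

```python
# Minimal sketch of file-access-pattern monitoring (illustrative, not the NYU tooling):
# poll a directory and flag a burst of modified files, a common ransomware indicator.
import os
import time
from pathlib import Path

WATCH_DIR = Path.home() / "Documents"  # hypothetical directory to monitor
WINDOW_SECONDS = 10                    # how often to compare snapshots
ALERT_THRESHOLD = 50                   # assumed: >50 changed files per window is suspicious

def snapshot(root: Path) -> dict:
    """Map every file under root to its last-modified time."""
    mtimes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtimes[path] = os.path.getmtime(path)
            except OSError:
                continue  # file removed or unreadable between listing and stat
    return mtimes

previous = snapshot(WATCH_DIR)
while True:
    time.sleep(WINDOW_SECONDS)
    current = snapshot(WATCH_DIR)
    changed = [p for p, mtime in current.items() if previous.get(p) != mtime]
    if len(changed) > ALERT_THRESHOLD:
        print(f"ALERT: {len(changed)} files changed in {WINDOW_SECONDS}s - possible bulk encryption")
    previous = current
```

In practice such a monitor would feed an endpoint-protection pipeline rather than print to a console, and it would sit alongside the researchers’ other suggestions, such as restricting outbound connections to AI services.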

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!