Ukraine urges ethical use of AI in education

AI can help build individual learning paths for Ukraine’s 3.5 million students, but its use must remain ethical, First Deputy Minister of Education and Science Yevhen Kudriavets has said.

Speaking to UNN, Kudriavets stressed that AI can analyse large volumes of information and help students acquire the knowledge they need more efficiently. He said AI could construct individual learning trajectories faster than teachers working manually.

He warned, however, that AI should not replace the educational process and that safeguards must be found to prevent misuse.

Kudriavets also said students in Ukraine should understand the reasons for using AI, adding that it should be used to gain knowledge rather than simply to obtain grades.

The deputy minister emphasised that technology itself is neutral, and how people choose to apply it determines whether it benefits education.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

YouTube expands AI dubbing to millions of creators

Real-time translation is becoming a standard feature across consumer tech, with Samsung, Google, and Apple all introducing new tools. Apple’s recently announced Live Translation on AirPods demonstrates the utility of such features, particularly for travellers.

YouTube has joined the trend, expanding its multi-language audio feature to millions of creators worldwide. The tool enables creators to add dubbed audio tracks in multiple languages, powered by Google’s Gemini AI, replicating tone and emotion.

The feature was first tested with creators like MrBeast, Mark Rober, and Jamie Oliver. YouTube reports that Jamie Oliver’s channel saw its views triple, while over 25% of the watch time came from non-primary languages.

Mark Rober’s channel now supports more than 30 languages per video, helping creators reach audiences far beyond their native markets. YouTube states that this expansion should make content more accessible to global viewers and increase overall engagement.

Subtitles will still be vital for people with hearing difficulties, but AI-powered dubbing could reduce reliance on them for language translation. For creators, it marks a significant step towards making content truly global.

NATO and Seoul expand cybersecurity dialogue and defence ties

South Korea and NATO have pledged closer cooperation on cybersecurity following high-level talks in Seoul this week, according to Yonhap News Agency.

The discussions, led by Ambassador for International Cyber Affairs Lee Tae Woo and NATO Assistant Secretary General Jean-Charles Ellermann-Kingombe, focused on countering cyber threats and assessing risks in the Indo-Pacific and Euro-Atlantic regions.

Launched in 2023, the high-level cyber dialogue aims to deepen collaboration between South Korea and NATO in the cybersecurity domain.

The meeting followed talks between Defence Minister Ahn Gyu-back and NATO Military Committee chair Giuseppe Cavo Dragone during the Seoul Defence Dialogue earlier this week.

Dragone said cooperation would expand across defence exchanges, information sharing, cyberspace, space, and AI as ties between Seoul and NATO strengthen.

Broadcom lands $10bn AI chip order

Broadcom has secured a $10 billion agreement to supply custom AI chips, with analysts pointing to OpenAI as the likely customer.

The US semiconductor firm announced the deal alongside better-than-expected third-quarter earnings, driven by growing demand for its ASICs. It forecast a strong fourth quarter as cloud providers seek alternatives to Nvidia, whose GPUs remain costly and supply-constrained.

Chief executive Hock Tan said Broadcom is collaborating with four potential new clients on chip development, adding to existing partnerships with major players such as Google and Meta.

The company recently introduced the Tomahawk Ultra and next-generation Jericho networking chips, further strengthening its position in the AI computing sector.

AI and cyber priorities headline massive US defence budget bill

The US House of Representatives has passed an $848 billion defence policy bill with new provisions for cybersecurity and AI. Lawmakers voted 231 to 196 to approve the chamber’s version of the National Defense Authorization Act (NDAA).

The bill mandates that the National Security Agency brief Congress on plans for its Cybersecurity Coordination Center and requires annual reports from combatant commands on the levels of support provided by US Cyber Command.

It also calls for a software bill of materials for AI-enabled technology used by the Department of Defense. The Pentagon will be authorised to create up to 12 generative AI projects to improve cybersecurity and intelligence operations.

An adopted amendment allows the NSA to share threat intelligence with the private sector to protect US telecommunications networks. Another requirement is that the Pentagon study the National Guard’s role in cyber response at the federal and state levels.

Proposals to renew the Cybersecurity Information Sharing Act and the State and Local Cybersecurity Grant Program were excluded from the final text. The Senate is expected to approve its version of the NDAA next week.

Cyberattack keeps JLR factories shut, hackers claim responsibility

Jaguar Land Rover (JLR) has confirmed that data was affected in a cyberattack that has kept its UK factories idle for more than a week. The company stated that it is contacting anyone whose data was involved, although it did not clarify whether the breach affected customers, suppliers, or internal systems.

JLR reported the incident to the Information Commissioner’s Office and immediately shut down IT systems to limit damage. Production at Midlands and Merseyside sites has been halted until at least Thursday, with staff instructed not to return before next week.

The disruption has also hit suppliers and retailers, with garages struggling to order spare parts and dealers facing delays registering vehicles. JLR said it is working around the clock to restore operations in a safe and controlled way, though the process is complex.

Responsibility for the hack has been claimed by Scattered Lapsus$ Hunters, a group linked to previous attacks on Marks & Spencer and the Co-op in the UK, and on Las Vegas casinos in the US. The hackers posted alleged screenshots from JLR’s internal systems on Telegram last week.

Cybersecurity experts say the group’s claim that ransomware was deployed raises questions, as it appears to have severed ties with Russian ransomware gangs. Analysts suggest the hackers may have only stolen data or are building their own ransomware infrastructure.

Photonic chips open the path to sustainable AI by training with light

A team of international researchers has shown how training neural networks directly with light on photonic chips could make AI faster and more sustainable.

A breakthrough study, published in Nature, involved collaboration between the Politecnico di Milano, EPFL Lausanne, Stanford University, the University of Cambridge, and the Max Planck Institute.

The research highlights how physical neural networks, which use analogue circuits that exploit the laws of physics, can process information in new ways.

Photonic chips developed at the Politecnico di Milano perform mathematical operations such as addition and multiplication through light interference on silicon microchips only a few millimetres in size.

By eliminating the need to digitise information, these chips dramatically cut both processing time and energy use. Researchers have also pioneered an ‘in-situ’ training technique that enables photonic neural networks to learn tasks entirely through light signals, instead of relying on digital models.

The result is a training process that is faster, more efficient and more robust.

Such advances could lead to more powerful AI models capable of running directly on devices instead of being dependent on energy-hungry data centres.

The approach paves the way for technologies such as autonomous vehicles, portable intelligent sensors, and real-time data processing systems that are both greener and quicker.

Canadian news publishers clash with OpenAI in landmark copyright case

OpenAI is set to argue in an Ontario court that a copyright lawsuit by Canadian news publishers should be heard in the United States. The case, the first of its kind in Canada, alleges that OpenAI scraped Canadian news content to train ChatGPT without permission or payment.

The coalition of publishers, including CBC/Radio-Canada, The Globe and Mail, and Postmedia, says the material was created and hosted in Ontario, making the province the proper venue. They warn that accepting OpenAI’s stance would undermine Canadian sovereignty in the digital economy.

OpenAI, however, says the training of its models and web crawling occurred outside Canada and that the Copyright Act cannot apply extraterritorially. It argues the publishers are politicising the case by framing it as a matter of sovereignty rather than jurisdiction.

The dispute reflects a broader global clash over how generative AI systems use copyrighted works. US courts are already handling several similar cases, though no clear precedent has been established on whether such use qualifies as fair use.

Publishers argue Canadian courts must decide the matter domestically, while OpenAI insists it belongs in US courts. The outcome could shape how copyright laws apply to AI training and digital content across borders.

Social media authenticity questioned as Altman points to bot-like behaviour

Sam Altman, X enthusiast and Reddit shareholder, has expressed doubts over whether social media content can still be distinguished from bot activity. His remarks followed an influx of praise for OpenAI Codex on Reddit, where users questioned whether such posts were genuine.

Altman noted that humans are increasingly adopting quirks of AI-generated language, blurring the line between authentic and synthetic speech. He also pointed to factors such as social media optimisation for engagement and astroturfing campaigns, which amplify suspicions of fakery.

The comments follow the backlash against OpenAI over the rollout of GPT-5, which saw Reddit communities shift from celebratory to critical. Altman acknowledged flaws in a Reddit AMA, but the fallout left lasting scepticism and dampened enthusiasm among AI users.

Underlying this debate is the wider reality that bots dominate much of the online environment. Imperva estimates that more than half of 2024’s internet traffic was non-human, while X’s own Grok chatbot admitted to hundreds of millions of bots on the platform.

Some observers suggest Altman’s comments may foreshadow an OpenAI-backed social media venture. Whether such a project could avoid the same bot-related challenges remains uncertain, with research suggesting that even bot-only networks eventually create echo chambers of their own.

Fake GitHub downloads deliver GPUGate malware to EU IT staff

A malvertising campaign is targeting IT workers in the EU with fake GitHub Desktop installers, according to Arctic Wolf. The goal is to steal credentials, deploy ransomware, and infiltrate sensitive systems. The operation has reportedly been active for over six months.

Attackers used malicious Google Ads that redirected users to doctored GitHub repositories. Modified README files mimicked genuine download pages but linked to a lookalike domain. macOS users received the AMOS Stealer, while Windows victims downloaded bloated installers hiding malware.

The Windows malware evaded detection using GPU-based checks, refusing to run in sandboxes that lacked real graphics drivers. On genuine machines, it copied itself to %APPDATA%, sought elevated privileges, and altered Defender settings. Analysts dubbed the technique GPUGate.

The payload persisted by creating privileged tasks and sideloading malicious DLLs into legitimate executables. Its modular system could download extra malware tailored to each victim. The campaign was geo-fenced to EU targets and relied on redundant command servers.

Researchers warn that IT staff are prime targets due to their access to codebases and credentials. With the campaign still active, Arctic Wolf has published indicators of compromise, Yara rules, and security advice to mitigate the GPUGate threat.
