NATO and Seoul expand cybersecurity dialogue and defence ties

South Korea and NATO have pledged closer cooperation on cybersecurity following high-level talks in Seoul this week, according to Yonhap News Agency.

The discussions, led by Ambassador for International Cyber Affairs Lee Tae Woo and NATO Assistant Secretary General Jean-Charles Ellermann-Kingombe, focused on countering cyber threats and assessing risks in the Indo-Pacific and Euro-Atlantic regions.

Launched in 2023, the high-level cyber dialogue aims to deepen collaboration between South Korea and NATO in the cybersecurity domain.

The meeting followed talks between Defence Minister Ahn Gyu-back and NATO Military Committee chair Giuseppe Cavo Dragone during the Seoul Defence Dialogue earlier this week.

Cavo Dragone said cooperation would expand across defence exchanges, information sharing, cyberspace, space, and AI as ties between Seoul and NATO strengthen.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

FTC opens inquiry into AI chatbots and child safety

The US Federal Trade Commission has launched an inquiry into AI chatbots that act as digital companions, raising concerns about their impact on children and teenagers.

Seven firms, including Alphabet, Meta, OpenAI and Snap, have been asked to provide information about how they address risks linked to AI chatbots designed to mimic human relationships.

Chairman Andrew Ferguson said protecting children online was a top priority, stressing the need to balance safety with maintaining US leadership in AI. Regulators fear minors may be particularly vulnerable to forming emotional bonds with AI chatbots that simulate friendship and empathy.

The inquiry will investigate how companies develop AI chatbot personalities, monetise user interactions and enforce age restrictions. It will also assess how personal information from conversations is handled and whether privacy laws are being respected.

Other companies receiving orders include Character.AI and Elon Musk’s xAI.

The probe follows growing public concern over the psychological effects of generative AI on young people.

Last month, the parents of a 16-year-old who died by suicide sued OpenAI, alleging ChatGPT provided harmful instructions. The company later pledged corrective measures, admitting its chatbot does not always recommend mental health support during prolonged conversations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI smart glasses give blind users new independence

Smart glasses powered by AI give people with vision loss new ways to navigate daily life, from cooking to crossing the street.

Users like Andrew Tutty in Ontario say the devices restore independence, helping with tasks such as identifying food or matching clothes. Others, like Emilee Schevers, rely on them to confirm traffic signals before crossing the road.

The AI glasses, developed by Meta, are cheaper than many other assistive devices, which can cost thousands of dollars. They connect to smartphones, using voice commands and apps like Be My Eyes to describe surroundings or link with volunteers.

Experts, however, caution that the glasses come with significant privacy concerns. Built-in cameras stream everything within view to large tech firms, raising questions about surveillance, data use and algorithmic reliability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Broadcom lands $10bn AI chip order

Broadcom has secured a $10 billion agreement to supply custom AI chips, with analysts pointing to OpenAI as the likely customer.

The US semiconductor firm announced the deal alongside better-than-expected third-quarter earnings, driven by growing demand for its application-specific integrated circuits (ASICs). It forecast a strong fourth quarter as cloud providers seek alternatives to Nvidia, whose GPUs remain costly and supply-constrained.

Chief executive Hock Tan said Broadcom is collaborating with four potential new clients on chip development, adding to existing partnerships with major players such as Google and Meta.

The company recently introduced the Tomahawk Ultra and next-generation Jericho networking chips, further strengthening its position in the AI computing sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and cyber priorities headline massive US defence budget bill

The US House of Representatives has passed an $848 billion defence policy bill with new provisions for cybersecurity and AI. Lawmakers voted 231 to 196 to approve the chamber’s version of the National Defence Authorisation Act (NDAA).

The bill mandates that the National Security Agency brief Congress on plans for its Cybersecurity Coordination Centre and requires annual reports from combatant commands on the levels of support provided by US Cyber Command.

It also calls for a software bill of materials for AI-enabled technology that the Department of Defence uses. The Pentagon will be authorised to create up to 12 generative AI projects to improve cybersecurity and intelligence operations.

An adopted amendment allows the NSA to share threat intelligence with the private sector to protect US telecommunications networks. Another requirement is that the Pentagon study the National Guard’s role in cyber response at the federal and state levels.

Proposals to renew the Cybersecurity Information Sharing Act and the State and Local Cybersecurity Grant Program were excluded from the final text. The Senate is expected to approve its version of the NDAA next week.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Moncler Korea fined over customer data breach

South Korea’s Personal Information Protection Commission has fined Moncler Korea 88 million won ($63,200) over a large-scale customer data breach.

The regulator said a cyberattack in December 2021 exposed the personal details of about 230,000 customers. Hackers gained access by compromising an administrator account and installing malware on the company’s servers.

The data stolen from South Korean customers included purchase-related information, though names, dates of birth, email addresses and card numbers were not part of the leak.

According to officials, Moncler Korea only became aware of the breach a month later and delayed reporting it to both customers and the regulator.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated film sparks copyright battle as it heads to Cannes

OpenAI has taken a significant step into entertainment by backing Critterz, the first animated feature film generated with GPT models.

Human artists sketch characters and scenes, while AI transforms them into moving images. The $30 million project, expected to finish in nine months, is far cheaper and faster than traditional animation and could debut at the Cannes Film Festival in 2026.

Yet the film has triggered a fierce copyright debate in India and beyond. Under India’s Copyright Act of 1957, only works of human authorship are protected.

Legal experts argue that while AI can be used as a tool when human skill and judgement are clearly applied, autonomously generated outputs may not qualify for copyright at all.

The uncertainty carries significant risks. Producers may struggle to combat piracy or unauthorised remakes, while streaming platforms and investors could hesitate to support projects without clear ownership rights.

A recent case in which an AI tool was credited as the co-author of a painting, a credit that was later revoked, shows how untested the law remains.

Global approaches vary. The US and the EU require human creativity for copyright, while the UK recognises computer-generated works under certain conditions.

In India, lawyers suggest contracts provide the safest path until the law evolves, with detailed agreements on ownership, revenue sharing and disclosure of AI input.

The government has already set up an expert panel to review the Copyright Act, even as AI-driven projects and trailers rapidly gain popularity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NotebookLM turns notes into flashcards, podcasts and quizzes

Google’s learning-focused AI tool NotebookLM has gained a major update, making studying and teaching more interactive.

Instead of offering only static summaries, it now generates flashcards that condense key information into easy-to-remember notes, helping users recall knowledge more effectively.

Reports can also be transformed into quizzes with customisable topics and difficulty, which can then be shared with friends or colleagues through a simple link.

The update extends to audio learning, where NotebookLM’s podcast-style Audio Overviews are evolving with new formats. Instead of a single style, users can now create Brief, Debate, or Critique episodes, giving greater flexibility in how material is explained or discussed.

Google is also strengthening its teaching tools. A new Blog Post format offers contextual suggestions such as strategy papers or explainers, while the ability to create custom report formats allows users to design study resources tailored to their needs.

The most significant addition, however, is the Learning Guide. Acting like a personal tutor, it promotes deeper understanding by asking open-ended questions, breaking problems into smaller steps, and adapting explanations to suit each learner.

With these features, NotebookLM is moving closer to becoming a comprehensive learning assistant, offering a mix of interactive study aids and adaptable teaching methods that go beyond simple note-taking.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Growing concern over AI fatigue among students and teachers

Experts say growing exposure to AI is leaving many people exhausted, a phenomenon increasingly described as ‘AI fatigue’.

Educators and policymakers note that AI adoption surged before society had time to thoroughly weigh its ethical or social effects. The technology now underpins tasks from homework writing to digital art, leaving some feeling overwhelmed or displaced.

University students are among those most affected, with many relying heavily on AI for assignments. Teachers say it has become challenging to identify AI-generated work, as detection tools often produce inconsistent results.

Some educators are experimenting with low-tech classrooms, banning phones and requiring handwritten work. They report deeper conversations and stronger engagement when distractions are removed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canadian news publishers clash with OpenAI in landmark copyright case

OpenAI is set to argue in an Ontario court that a copyright lawsuit by Canadian news publishers should be heard in the United States. The case, the first of its kind in Canada, alleges that OpenAI scraped Canadian news content to train ChatGPT without permission or payment.

The coalition of publishers, including CBC/Radio-Canada, The Globe and Mail, and Postmedia, says the material was created and hosted in Ontario, making the province the proper venue. They warn that accepting OpenAI’s stance would undermine Canadian sovereignty in the digital economy.

OpenAI, however, says the training of its models and web crawling occurred outside Canada and that the Copyright Act cannot apply extraterritorially. It argues the publishers are politicising the case by framing it as a matter of sovereignty rather than jurisdiction.

The dispute reflects a broader global clash over how generative AI systems use copyrighted works. US courts are already handling several similar cases, though no clear precedent has been established on whether such use qualifies as fair use.

Publishers argue Canadian courts must decide the matter domestically, while OpenAI insists it belongs in US courts. The outcome could shape how copyright laws apply to AI training and digital content across borders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!