Anthropic introduces memory feature to Claude AI for workplace productivity

The AI startup Anthropic has added a memory feature to its Claude AI, designed to automatically recall details from earlier conversations, such as project information and team preferences.

Initially, the upgrade is only available to Team and Enterprise subscribers, who can manage, edit, or delete the content that the system retains.

Anthropic presents the tool as a way to improve workplace efficiency by sparing users from having to repeat instructions. Enterprise administrators have additional controls, including the ability to turn memory off entirely.

Privacy safeguards are included, such as an ‘incognito mode’ for conversations that are not stored.

Analysts view the step as an effort to catch up with rivals such as OpenAI's ChatGPT and Google's Gemini, which already offer similar memory functions. Memory also links with Claude’s newer tools for creating spreadsheets, presentations, and PDFs, allowing past information to be reused in future documents.

Anthropic plans a wider release after testing the feature with businesses. Experts suggest the approach could strengthen the company’s position in the AI market by offering both continuity and security, which appeal to enterprises handling sensitive data.

UK plans AI systems to monitor offenders and prevent crimes before they occur

Under its AI Action Plan, the UK government is expanding the use of AI across prisons, probation and courts to monitor offenders, assess risk and prevent crime before it occurs.

One key measure involves an AI violence prediction tool that uses factors such as an offender’s age, past violent incidents and institutional behaviour to identify those most likely to pose a risk.

These predictions will inform decisions to increase supervision or to relocate prisoners within custody wings before potential violence occurs.
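The report does not say how the tool weighs these factors. As a purely hypothetical illustration, the sketch below shows one way a factor-based risk score could combine an offender's age, prior violent incidents and institutional conduct into a flag for human review; all field names, weights and the threshold are invented and do not describe the UK system.

```python
# Purely illustrative sketch of a factor-based risk score.
# Field names, weights and the threshold are invented and
# do not reflect the UK tool described in the article.

from dataclasses import dataclass


@dataclass
class OffenderRecord:
    age: int
    past_violent_incidents: int      # recorded violent incidents (hypothetical field)
    institutional_infractions: int   # proxy for institutional behaviour (hypothetical field)


def risk_score(record: OffenderRecord) -> float:
    """Combine weighted factors into a crude 0-1 score."""
    score = 0.0
    score += 0.4 * min(record.past_violent_incidents, 5) / 5     # history of violence
    score += 0.4 * min(record.institutional_infractions, 10) / 10  # recent conduct
    score += 0.2 * (1.0 if record.age < 25 else 0.0)             # younger cohort weighted higher
    return score


def flag_for_review(records: list[OffenderRecord], threshold: float = 0.6) -> list[OffenderRecord]:
    """Return records whose score meets the threshold, for human review."""
    return [r for r in records if risk_score(r) >= threshold]


if __name__ == "__main__":
    sample = [
        OffenderRecord(age=22, past_violent_incidents=3, institutional_infractions=4),
        OffenderRecord(age=47, past_violent_incidents=0, institutional_infractions=0),
    ]
    for r in flag_for_review(sample):
        print(r, round(risk_score(r), 2))
```

In a design of this kind, the score would only flag cases for review; decisions on supervision or relocation would remain with staff, consistent with how the measure is described above.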

Another component scans seized mobile phone content to highlight secret or coded messages that may signal plotting of violent acts, intelligence operations or contraband activities.

Officials are also working to merge offender records across courts, prisons and probation to create a single digital identity for each offender.

UK authorities say the goal is to reduce reoffending and prioritise public and staff safety, while shifting resources from reactive investigations to proactive prevention. Civil liberties groups caution about privacy, bias and the risk of overreach if transparency and oversight are not built in.

Ukraine urges ethical use of AI in education

AI can help build individual learning paths for Ukraine’s 3.5 million students, but its use must remain ethical, First Deputy Minister of Education and Science Yevhen Kudriavets has said.

Speaking to UNN, Kudriavets stressed that AI can analyse large volumes of information and help students acquire the knowledge they need more efficiently. He said AI could construct individual learning trajectories faster than teachers working manually.

He warned, however, that AI should not replace the educational process and that safeguards must be put in place to prevent misuse.

Kudriavets also said students in Ukraine should understand the reasons for using AI, adding that it should be used to gain knowledge rather than to obtain grades.

The deputy minister emphasised that technology itself is neutral, and how people choose to apply it determines whether it benefits education.

NATO and Seoul expand cybersecurity dialogue and defence ties

South Korea and NATO have pledged closer cooperation on cybersecurity following high-level talks in Seoul this week, according to Yonhap News Agency.

The discussions, led by Ambassador for International Cyber Affairs Lee Tae Woo and NATO Assistant Secretary General Jean-Charles Ellermann-Kingombe, focused on countering cyber threats and assessing risks in the Indo-Pacific and Euro-Atlantic regions.

Launched in 2023, the high-level cyber dialogue aims to deepen collaboration between South Korea and NATO in the cybersecurity domain.

The meeting followed talks between Defence Minister Ahn Gyu-back and NATO Military Committee chair Giuseppe Cavo Dragone during the Seoul Defence Dialogue earlier this week.

Cavo Dragone said cooperation would expand across defence exchanges, information sharing, cyberspace, space, and AI as ties between Seoul and NATO strengthen.

FTC opens inquiry into AI chatbots and child safety

The US Federal Trade Commission has launched an inquiry into AI chatbots that act as digital companions, raising concerns about their impact on children and teenagers.

Seven firms, including Alphabet, Meta, OpenAI and Snap, have been asked to provide information about how they address risks linked to AI chatbots designed to mimic human relationships.

Chairman Andrew Ferguson said protecting children online was a top priority, stressing the need to balance safety with maintaining US leadership in AI. Regulators fear minors may be particularly vulnerable to forming emotional bonds with AI chatbots that simulate friendship and empathy.

The inquiry will investigate how companies develop AI chatbot personalities, monetise user interactions and enforce age restrictions. It will also assess how personal information from conversations is handled and whether privacy laws are being respected.

Other companies receiving orders include Character.AI and Elon Musk’s xAI.

The probe follows growing public concern over the psychological effects of generative AI on young people.

Last month, the parents of a 16-year-old who died by suicide sued OpenAI, alleging ChatGPT provided harmful instructions. The company later pledged corrective measures, admitting its chatbot does not always recommend mental health support during prolonged conversations.

AI smart glasses give blind users new independence

Smart glasses powered by AI give people with vision loss new ways to navigate daily life, from cooking to crossing the street.

Users like Andrew Tutty in Ontario say the devices restore independence, helping with tasks such as identifying food or matching clothes. Others, like Emilee Schevers, rely on them to confirm traffic signals before crossing the road.

The AI glasses, developed by Meta, are cheaper than many other assistive devices, which can cost thousands. They connect to smartphones, using voice commands and apps like Be My Eyes to describe surroundings or link with volunteers.

Experts, however, caution that the glasses come with significant privacy concerns. Built-in cameras stream everything within view to large tech firms, raising questions about surveillance, data use and algorithmic reliability.

Broadcom lands $10bn AI chip order

Broadcom has secured a $10 billion agreement to supply custom AI chips, with analysts pointing to OpenAI as the likely customer.

The US semiconductor firm announced the deal alongside better-than-expected third-quarter earnings, driven by growing demand for its custom ASICs (application-specific integrated circuits). It forecast a strong fourth quarter as cloud providers seek alternatives to Nvidia, whose GPUs remain costly and supply-constrained.

Chief executive Hock Tan said Broadcom is collaborating with four potential new clients on chip development, adding to existing partnerships with major players such as Google and Meta.

The company recently introduced the Tomahawk Ultra and next-generation Jericho networking chips, further strengthening its position in the AI computing sector.

AI and cyber priorities headline massive US defence budget bill

The US House of Representatives has passed an $848 billion defence policy bill with new provisions for cybersecurity and AI. Lawmakers voted 231 to 196 to approve the chamber’s version of the National Defence Authorisation Act (NDAA).

The bill mandates that the National Security Agency brief Congress on plans for its Cybersecurity Coordination Centre and requires annual reports from combatant commands on the levels of support provided by US Cyber Command.

It also calls for a software bill of materials for AI-enabled technology that the Department of Defence uses. The Pentagon will be authorised to create up to 12 generative AI projects to improve cybersecurity and intelligence operations.

An adopted amendment allows the NSA to share threat intelligence with the private sector to protect US telecommunications networks. Another requirement is that the Pentagon study the National Guard’s role in cyber response at the federal and state levels.

Proposals to renew the Cybersecurity Information Sharing Act and the State and Local Cybersecurity Grant Program were excluded from the final text. The Senate is expected to approve its version of the NDAA next week.

Moncler Korea fined over customer data breach

South Korea’s Personal Information Protection Commission has fined Moncler Korea 88 million won ($63,200) over a large-scale customer data breach.

The regulator said a cyberattack in December 2021 exposed the personal details of about 230,000 customers. Hackers gained access by compromising an administrator account and installing malware on the company’s servers.

The data stolen from South Korean customers included purchase-related information, though names, dates of birth, email addresses and card numbers were not part of the leak.

According to officials, Moncler Korea only became aware of the breach a month later and delayed reporting it to both customers and the regulator.

AI-generated film sparks copyright battle as it heads to Cannes

OpenAI has taken a significant step into entertainment by backing Critterz, the first animated feature film generated with GPT models.

Human artists sketch characters and scenes, while AI transforms them into moving images. The $30 million project, expected to finish in nine months, is far cheaper and faster than traditional animation and could debut at the Cannes Film Festival in 2026.

Yet the film has triggered a fierce copyright debate in India and beyond. Under India’s Copyright Act of 1957, only human works are protected.

Legal experts argue that while AI can be used as a tool when human skill and judgement are clearly applied, autonomously generated outputs may not qualify for copyright at all.

The uncertainty carries significant risks. Producers may struggle to combat piracy or unauthorised remakes, while streaming platforms and investors could hesitate to support projects without clear ownership rights.

A recent case in which an AI tool was credited as the co-author of a painting, a credit that was later revoked, shows how untested the law remains.

Global approaches vary. The US and the EU require human creativity for copyright, while the UK recognises computer-generated works under certain conditions.

In India, lawyers suggest contracts provide the safest path until the law evolves, with detailed agreements on ownership, revenue sharing and disclosure of AI input.

The government has already set up an expert panel to review the Copyright Act, even as AI-driven projects and trailers rapidly gain popularity.
