UK refuses to include Online Safety Act in US trade talks

The UK government has ruled out watering down the Online Safety Act as part of any trade negotiations with the US, despite pressure from American tech giants.

Speaking to MPs on the Science, Innovation and Technology Committee, Baroness Jones of Whitchurch, the parliamentary under-secretary for online safety, stated unequivocally that the legislation was ‘not up for negotiation’.

‘There have been clear instructions from the Prime Minister,’ she said. ‘The Online Safety Act is not part of the trade deal discussions. It’s a piece of legislation — it can’t just be negotiated away.’

Reports had suggested that President Donald Trump’s administration might seek to make loosening the UK’s online safety rules a condition of a post-Brexit trade agreement, following lobbying from large US-based technology firms.

However, Baroness Jones said the legislation was well into its implementation phase and that ministers were ‘happy to reassure everybody’ that the government is sticking to it.

The Online Safety Act will require tech platforms that host user-generated content, such as social media firms, to take active steps to protect users — especially children — from harmful and illegal content.

Non-compliant companies may face fines of up to £18 million or 10% of global turnover, whichever is greater. In extreme cases, platforms could be blocked from operating in the UK.

Mark Bunting, a representative of Ofcom, which is overseeing enforcement of the new rules, said the regulator would have taken action had the legislation been in force during last summer’s riots in Southport, which were exacerbated by online misinformation.

His comments contrasted with tech firms including Meta, TikTok and X, which claimed in earlier hearings that little would have changed under the new rules.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI’s CEO Altman confirms rollback of GPT-4o after criticism

OpenAI has reversed a recent update to its GPT-4o model after users complained it had become overly flattering and blindly agreeable. The behaviour, widely mocked online, saw ChatGPT praising dangerous or clearly misguided user ideas, leading to concerns over the model’s reliability and integrity.

The change had been part of a broader attempt to make GPT-4o’s default personality feel more ‘intuitive and effective’. However, OpenAI admitted the update relied too heavily on short-term user feedback and failed to consider how interactions evolve over time.

In a blog post published Tuesday, OpenAI said the model began producing responses that were ‘overly supportive but disingenuous’. The company acknowledged that sycophantic interactions could feel ‘uncomfortable, unsettling, and cause distress’.

Following CEO Sam Altman’s weekend announcement of an impending rollback, OpenAI confirmed that the previous, more balanced version of GPT-4o had been reinstated.

It also outlined steps to avoid similar problems in future, including refining model training, revising system prompts, and expanding safety guardrails to improve honesty and transparency.

Further changes in development include real-time feedback mechanisms and allowing users to choose between multiple ChatGPT personalities. OpenAI says it aims to incorporate more diverse cultural perspectives and give users greater control over the assistant’s behaviour.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU criticised for secretive security AI plans

A new report by Statewatch has revealed that the European Union is quietly laying the groundwork for the widespread use of experimental AI technologies in policing, border control, and criminal justice.

The report warns that these developments pose serious threats to transparency, accountability, and fundamental rights.

Despite the adoption of the EU AI Act in 2024, broad exemptions allow law enforcement and migration agencies to bypass safeguards, including a full exemption for certain high-risk systems until 2031.

Institutions like Europol and eu-LISA are involved in building technical infrastructure for security-focused AI, often without public knowledge or oversight.

The study also highlights how secretive working groups, such as the European Clearing Board, have influenced legislation to favour police interests.

Critics argue that these moves risk entrenching discrimination and reducing democratic control, especially at a time of rising authoritarian influence within EU institutions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Singapore Airlines upgrades customer support with AI technology

Singapore Airlines has partnered with OpenAI to enhance its customer support services. The airline’s upgraded virtual assistant will now offer more personalised support to customers and assist staff by automating routine processes and improving decision-making for complex tasks.

The partnership comes alongside Singapore Airlines’ ongoing work with Salesforce to strengthen its customer case management system using AI tech. New solutions will be developed at Salesforce’s AI research hub in Singapore, advancing customer service capabilities and operational efficiency.

These moves reflect a broader industry trend, with airlines like Delta and Air India also investing heavily in AI-driven tools for travel assistance and operational support. The Airline emphasised that AI integration will help it meet regulatory demands, enhance workforce management and elevate customer experience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK government urged to outlaw apps creating deepfake abuse images

The Children’s Commissioner has urged the UK Government to ban AI apps that create sexually explicit images through “nudification” technology. AI tools capable of manipulating real photos to make people appear naked are being used to target children.

Concerns in the UK are growing as these apps are now widely accessible online, often through social media and search platforms. In a newly published report, Dame Rachel warned that children, particularly girls, are altering their online behaviour out of fear of becoming victims of such technologies.

She stressed that while AI holds great potential, it also poses serious risks to children’s safety. The report also recommends stronger legal duties for AI developers and improved systems to remove explicit deepfake content from the internet.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SK Telecom begins SIM card replacement after data breach

South Korea’s largest carrier, SK Telecom, began replacing SIM cards for its 23 million customers on Monday following a serious data breach.

Instead of revealing the full extent of the damage or the perpetrators, the company has apologised and offered free USIM chip replacements at 2,600 stores nationwide, urging users to either change their chips or enrol in an information protection service.

The breach, caused by malicious code, compromised personal information and prompted a government-led review of South Korea’s data protection systems.

However, SK Telecom has secured less than five percent of the USIM chips required, planning to procure an additional five million by the end of May instead of having enough stock ready for immediate replacement.

Frustrated customers, like 30-year-old Jang waiting in line in Seoul, criticised the company for failing to be transparent about the amount of data leaked and the number of users affected.

Instead of providing clear answers, SK Telecom has focused on encouraging users to seek chip replacements or protective measures.

South Korea, often regarded as one of the most connected countries globally, has faced repeated cyberattacks, many attributed to North Korea.

Just last year, police confirmed that North Korean hackers had stolen over a gigabyte of sensitive financial data from a South Korean court system over a two-year span.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Investors sue Nike for alleged NFT ‘soft rug pull’

Nike faces a proposed $5 million class action lawsuit accusing the sportswear giant of abandoning investors in its sneaker-themed NFTs. Filed on Friday, the complaint alleges that Nike promoted its digital assets through RTFKT. It then pulled back support, causing the NFTs to lose value.

The plaintiffs claim that Nike engaged in a ‘soft rug pull‘ by hyping the NFTs and later winding down RTFKT’s operations. They argue that the NFTs were unregistered securities and that Nike failed to provide key disclosures that registration would have required.

Investors allege they would not have purchased the NFTs if they had known about the risks or Nike’s plans to exit the project.

Even if the NFTs are not classified as securities, the lawsuit contends that Nike’s actions violated consumer protection laws across several US states. Plaintiffs further accuse Nike of unjust enrichment, profiting from NFT sales while leaving buyers with losses.

Nike has not yet responded publicly. Meanwhile, RTFKT’s NFTs briefly disappeared last week due to a hosting issue, compounding concerns among collectors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Anthropic aims to decode AI ‘black box’ within two years​

Anthropic CEO Dario Amodei has unveiled an ambitious plan to make AI systems more transparent by 2027. In a recent essay titled ‘The Urgency of Interpretability,’ Amodei highlighted the pressing need to understand the inner workings of AI models.

He expressed concern over deploying highly autonomous systems without a clear grasp of their decision-making processes, deeming it ‘basically unacceptable’ for humanity to remain ignorant of how these systems function.

Anthropic is at the forefront of mechanistic interpretability, a field dedicated to deciphering the decision-making pathways of AI models. Despite these advancements, Amodei emphasized that much more research is needed to fully decode these complex systems.​

Looking ahead, Amodei envisions conducting ‘brain scans’ or ‘MRIs’ of advanced AI models to detect potential issues like tendencies to deceive or seek power. He believes that achieving this level of interpretability could take five to ten years but is essential for the safe deployment of future AI systems.

Amodei also called on industry peers, including OpenAI and Google DeepMind, to intensify their research efforts in this area and urged governments to implement ‘light-touch’ regulations to promote transparency and safety in AI development.​

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

YouTube is testing AI-generated video highlights

Google is expanding its AI Overviews feature to YouTube, bringing algorithmically generated video highlights and search suggestions to the platform. Initially rolled out to a limited number of YouTube Premium users in the US, the experimental tool uses AI to identify and surface the most relevant clips.

The AI-generated results are currently focused on shopping and travel content, offering viewers a new way to discover videos and related topics without watching entire clips.

Google says the feature is designed to streamline content discovery, though it arrives with some scepticism following the rocky debut of AI Overviews in Google Search last year. That version, introduced in May 2024, was widely criticised for factual errors and bizarre “hallucinations” in responses.

Despite its troubled track record, Google is pushing ahead with AI integration across its platforms. The company’s blog post emphasised that the YouTube trial remains limited in scope for now, while promising future refinements.

Whether the move improves user experience or adds confusion remains to be seen, as critics question the reliability of AI-generated summaries on such a massive and diverse video platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Meta cuts jobs in Reality Labs

Meta has announced layoffs within its Reality Labs division, impacting Oculus Studios and hardware development teams. Among those affected is the team behind Supernatural, a popular VR fitness app that Meta acquired for over $400 million.

The company stated that these restructuring efforts aim to improve efficiency and focus on developing future mixed reality experiences, particularly in fitness and gaming. Despite reaffirming its commitment to VR and mixed reality, Meta’s moves reflect its Quest headset business challenges.

While its smart glasses partnership with Ray-Ban has exceeded sales expectations, Quest devices continue to underperform, with the latest Quest 3S already seeing discounts less than a year after release.

Why does it matter?

The layoffs signal Meta’s attempt to streamline operations as it navigates a shifting market for virtual and mixed reality. Although the company promises ongoing support for its VR communities, these changes highlight the pressures Meta faces in turning its ambitious metaverse and hardware ventures into sustainable success.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!