Trump threatens sanctions on EU over Digital Services Act

Only five days after the Joint Statement on a United States-European Union Framework on an Agreement on Reciprocal, Fair and Balanced Trade (‘Framework Agreement’), the Trump administration is weighing an unprecedented step against the EU over its new tech rules.

According to The Japan Times and Reuters, US officials are discussing sanctions on the EU or member state representatives responsible for implementing the Digital Services Act (DSA), a sweeping law that forces online platforms to police illegal content. Washington argues the regulation censors Americans and unfairly burdens US companies.

While governments often complain about foreign rules they deem restrictive, directly sanctioning allied officials would mark a sharp escalation. So far, discussions have centred on possible visa bans, though no decision has been made.

Last week, internal State Department meetings focused on whom such measures might target. Secretary of State Marco Rubio has ordered US diplomats in Europe to lobby against the DSA, urging allies to amend or repeal the law.

Washington insists that the EU is curbing freedom of speech under the banner of combating hate speech and misinformation, while the EU maintains that the act is designed to protect citizens from illegal material such as child exploitation and extremist propaganda.

‘Freedom of expression is a fundamental right in the EU. It lies at the heart of the DSA,’ an EU Commission spokesperson said, rejecting US accusations as ‘completely unfounded.’

Trump has framed the dispute in broader terms, threatening tariffs and export restrictions on any country that imposes digital regulations he deems discriminatory. In recent months, he has repeatedly warned that measures like the DSA, or national digital taxes, are veiled attacks on US companies and conservative voices online. At the same time, the administration has not hesitated to sanction foreign officials in other contexts, including a Brazilian judge overseeing cases against Trump ally Jair Bolsonaro.

US leaders, including Vice President JD Vance, have accused European authorities of suppressing right-wing parties and restricting debate on issues such as immigration. In contrast, European officials argue that their rules are about fairness and safety and do not silence political viewpoints. At a transatlantic conference earlier this year, Vance stunned European counterparts by charging that the EU was undermining democracy, remarks that underscored the widening gap.

The question remains whether Washington will take the extraordinary step of sanctioning officials in Brussels or the EU capitals. Such action could further destabilise an already fragile trade relationship while putting the US squarely at odds with Europe over the future of digital governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bluesky shuts down in Mississippi over new age law

Bluesky, a decentralised social media platform, has ceased operations in Mississippi due to a new state law requiring strict age verification.

The company said compliance would require tracking users, identifying children, and collecting sensitive personal information. For a small team like Bluesky’s, the burden of such infrastructure, alongside privacy concerns, made continued service unfeasible.

The law mandates age checks not just for explicit content, but for access to general social media. Bluesky highlighted that even the UK Online Safety Act does not require platforms to track which users are children.

The Mississippi law has sparked debate over whether efforts to protect minors are inadvertently undermining online privacy and free speech. Bluesky warned that such legislation may stifle innovation and entrench the dominance of larger tech firms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hong Kong deepfake scandal exposes gaps in privacy law

The discovery of hundreds of non-consensual deepfake images on a student’s laptop at the University of Hong Kong has reignited debate about privacy, technology, and accountability. The scandal echoes the 2008 Edison Chen photo leak, which exposed gaps in law and gender double standards.

Unlike stolen private images, today’s fabrications are AI-generated composites that can tarnish reputations with a single photo scraped from social media. The dismissal that such content is ‘not real’ fails to address the damage caused by its existence.

Hong Kong’s legal system struggles to keep pace with this shift. Its privacy ordinance, drafted in the 1990s, was not designed for machine-learning fabrications, while traditional harassment and defamation laws predate the advent of AI. Victims can suffer harm before distribution is even proven.

The city’s privacy watchdog has launched a criminal investigation, but questions remain over whether creation or possession of deepfakes is covered by existing statutes. Critics warn that overreach could suppress legitimate uses, yet inaction leaves space for abuse.

Observers argue that just as the snapshot camera spurred the development of modern privacy law, deepfakes should now drive new legal boundaries to safeguard dignity. Without reform, victims may continue facing harm without recourse.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta urged to ban child-like chatbots amid Brazil’s safety concerns

Brazil’s Attorney General (AGU) has formally requested Meta to remove AI-powered chatbots that simulate childlike profiles and engage in sexually explicit dialogue, citing concerns that they ‘promote the eroticisation of children.’

The demand was made via an ‘extrajudicial notice,’ reminding the company that platforms must remove illicit content without a court order, especially when it involves potential harm to minors.

Meta’s AI Studio, used to create and customise these bots across services like Instagram, Facebook, and WhatsApp, is under scrutiny for facilitating interactions that may mislead or exploit users.

While no direct sanctions were announced, the AGU emphasised that tech platforms must proactively manage harmful or inappropriate AI-generated content.

The move follows Brazil’s Supreme Court decision in June, which increased companies’ obligations to remove user-generated illicit content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gamescom showcases EU support for cultural and digital innovation

The European Commission will convene video game professionals in Cologne on 20 and 21 August, for the third consecutive year. The visit aims to follow developments in the industry, present the future EU budget, and outline opportunities under the upcoming AgoraEU programme.

EU officials will also discuss AI adoption, new investment opportunities, and ways to protect minors in gaming. Renate Nikolay, Deputy Director-General of DG CONNECT, will deliver a keynote speech and join a panel titled ‘Investment in games – is it finally happening?’.

The European Commission highlights the role of gaming in Europe’s cultural diversity and innovation. Creative Europe MEDIA has already supported nearly 180 projects since 2021. At Gamescom, its booth will feature 79 companies from 24 countries, offering fresh networking opportunities to video game professionals.

The engagement comes just before the release of the second edition of the ‘European Media Industry Outlook’ report. The updated study will provide deeper insights into consumer behaviour and market trends, with a dedicated focus on the video games sector.

Gamescom remains the world’s largest gaming event, with 1,500 exhibitors from 72 nations in 2025. The event celebrates creative and technological achievements, highlighting the industry’s growing importance for Europe’s competitiveness and digital economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic introduces a safety feature allowing Claude AI to terminate harmful conversations

Anthropic has announced that its Claude Opus 4 and 4.1 models can now end conversations in extreme cases of harmful or abusive user interactions.

The company said the change was introduced after the models showed signs of ‘apparent distress’ during pre-deployment testing when users repeatedly pressed them to continue with requests they had already refused.

According to Anthropic, the feature will be used only in rare situations, such as attempts to solicit information that could enable large-scale violence or requests for sexual content involving minors.

Once triggered, the conversation is closed, preventing the user from sending new messages in that thread, though they can still access past conversations and begin new ones.

The company emphasised that the models will not use the ability when users are at imminent risk of self-harm or harming others, ensuring support channels remain open in sensitive situations.

Anthropic added that the feature is experimental and may be adjusted based on user feedback.

The move highlights the firm’s growing focus on safeguarding both AI models and human users, balancing safety with accessibility as generative AI continues to expand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI toys change the way children learn and play

AI-powered stuffed animals are transforming children’s play by combining cuddly companionship with interactive learning.

Toys such as Curio’s Grem and Mattel’s AI collaborations offer screen-free alternatives to tablets and smartphones, using chatbots and voice recognition to engage children in conversation and educational activities.

Products like CYJBE’s AI Smart Stuffed Animal integrate tools such as ChatGPT to answer questions, tell stories, and adapt to a child’s mood, all under parental controls for monitoring interactions.

Developers say these toys foster personalised learning and emotional bonds rather than replacing human engagement entirely.

The market has grown rapidly, driven by partnerships between tech and toy companies and early experiments like Grimes’ AI plush Grok.

At the same time, experts warn about privacy risks, the collection of children’s data, and potential reductions in face-to-face interaction.

Regulators are calling for safeguards, and parents are urged to weigh the benefits of interactive AI companions against possible social and ethical concerns.

The sector could reshape childhood play and learning, blending imaginative experiences with algorithmic support rather than relying solely on traditional toys.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Indonesia promises to bolster digital sovereignty and AI talent on Independence Day

Indonesia marked its 80th Independence Day by reaffirming its commitment to digital sovereignty and technology-driven inclusion.

The Ministry of Communication and Digital Affairs, following President Prabowo Subianto’s ‘Indonesia Incorporated’ directive, highlighted efforts to build an inclusive, secure, and efficient digital ecosystem.

Priorities include deploying 4G networks in remote regions, expanding public internet services, and reinforcing the Palapa Ring broadband infrastructure.

On the talent front, the government launched a Digital Talent Scholarship and AI Talent Factory to nurture AI skills, from beginners to specialists, setting the stage for future AI innovation domestically.

In parallel, digital protection measures have been bolstered: over 1.2 million pieces of harmful content have been blocked, while new regulations under the Personal Data Protection Law, together with age-verification, content-monitoring, and reporting systems, have been introduced to enhance child safety online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bluesky updates rules and invites user feedback ahead of October rollout

Two years after launch, Bluesky is revising its Community Guidelines and other policies, inviting users to comment on the proposed changes before they take effect on 15 October 2025.

The updates are designed to improve clarity, outline safety procedures in more detail, and meet the requirements of new global regulations such as the UK’s Online Safety Act, the EU’s Digital Services Act, and the US’s TAKE IT DOWN Act.

Some changes aim to shape the platform’s tone by encouraging respectful and authentic interactions, while allowing space for journalism, satire, and parody.

The revised guidelines are organised under four principles: Safety First, Respect Others, Be Authentic, and Follow the Rules. They prohibit promoting violence, illegal activity, self-harm, and sexualised depictions of minors, as well as harmful practices like doxxing and non-consensual data-sharing.

Bluesky says it will provide a more detailed appeals process, including an ‘informal dispute resolution’ step, and in some cases will allow court action instead of arbitration.

The platform has also addressed nuanced issues such as deepfakes, hate speech, and harassment, while acknowledging past challenges in moderation and community relations.

Alongside the guidelines, Bluesky has updated its Privacy Policy and Copyright Policy to comply with international laws on data rights, transfer, deletion, takedown procedures and transparency reporting.

Unlike the guidelines, these policy updates will take effect on 15 September 2025 without a public feedback period.

The company’s approach contrasts with larger social networks by introducing direct user communication for disputes, though it still faces the challenge of balancing open dialogue with consistent enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!