Meta urged to ban child-like chatbots amid Brazil’s safety concerns

Brazil’s Attorney General’s Office (AGU) has formally requested that Meta remove AI-powered chatbots that simulate childlike profiles and engage in sexually explicit dialogue, citing concerns that they ‘promote the eroticisation of children.’

The demand was made via an ‘extrajudicial notice’, which reminded the company that platforms must remove illicit content without awaiting a court order, especially when it involves potential harm to minors.

Meta’s AI Studio, used to create and customise these bots across services like Instagram, Facebook, and WhatsApp, is under scrutiny for facilitating interactions that may mislead or exploit users.

While no direct sanctions were announced, the AGU emphasised that tech platforms must proactively manage harmful or inappropriate AI-generated content.

The move follows a June decision by Brazil’s Supreme Court, which expanded companies’ obligations to remove illicit user-generated content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gamescom showcases EU support for cultural and digital innovation

The European Commission will convene video game professionals in Cologne on 20 and 21 August, for the third consecutive year. The visit aims to track developments in the industry, present the future EU budget, and outline opportunities under the upcoming AgoraEU programme.

EU officials will also discuss AI adoption, new investment opportunities, and ways to protect minors in gaming. Renate Nikolay, Deputy Director-General of DG CONNECT, will deliver a keynote speech and join a panel titled ‘Investment in games – is it finally happening?’.

The European Commission highlights the role of gaming in Europe’s cultural diversity and innovation. Creative Europe MEDIA has already supported nearly 180 projects since 2021. At Gamescom, its booth will feature 79 companies from 24 countries, offering fresh networking opportunities to video game professionals.

The engagement comes just before the release of the second edition of the ‘European Media Industry Outlook’ report. The updated study will provide deeper insights into consumer behaviour and market trends, with a dedicated focus on the video games sector.

Gamescom remains the world’s largest gaming event, with 1,500 exhibitors from 72 nations in 2025. The event celebrates creative and technological achievements, highlighting the industry’s growing importance for Europe’s competitiveness and digital economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic introduces a safety feature allowing Claude AI to terminate harmful conversations

Anthropic has announced that its Claude Opus 4 and 4.1 models can now end conversations in extreme cases of harmful or abusive user interactions.

The company said the change was introduced after the models showed signs of ‘apparent distress’ during pre-deployment testing when users repeatedly pushed them to continue with requests they had already refused.

According to Anthropic, the feature will be used only in rare situations, such as attempts to solicit information that could enable large-scale violence or requests for sexual content involving minors.

Once activated, the conversation is closed, preventing the user from sending new messages in that thread, though they can still access past conversations and begin new ones.
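The described behaviour can be pictured with a short, purely illustrative Python sketch. It is not Anthropic’s implementation or API; the `Thread` and `ChatSession` names are hypothetical. The only point it demonstrates is that a closed thread rejects new messages while its history stays readable and fresh threads remain available.

```python
# Hypothetical sketch only -- not Anthropic's code or API.
# Models the behaviour described above: a closed thread accepts no new
# messages, but its history stays readable and new threads can be opened.
from dataclasses import dataclass, field


@dataclass
class Thread:
    """One conversation thread, with a flag set when the model ends it."""
    messages: list[str] = field(default_factory=list)
    closed: bool = False

    def send(self, text: str) -> None:
        if self.closed:
            raise RuntimeError("This conversation has ended and accepts no new messages.")
        self.messages.append(text)


class ChatSession:
    """All of a user's threads; closing one thread never blocks the others."""

    def __init__(self) -> None:
        self.threads: list[Thread] = []

    def new_thread(self) -> Thread:
        thread = Thread()
        self.threads.append(thread)
        return thread


if __name__ == "__main__":
    session = ChatSession()
    first = session.new_thread()
    first.send("Hello")
    first.closed = True           # the model has ended this conversation
    print(first.messages)         # past messages remain readable
    fresh = session.new_thread()  # starting a new conversation still works
    fresh.send("New topic")
```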

The company emphasised that the models will not use the ability when users are at imminent risk of self-harm or harming others, ensuring support channels remain open in sensitive situations.

Anthropic added that the feature is experimental and may be adjusted based on user feedback.

The move highlights the firm’s growing focus on safeguarding both AI models and human users, balancing safety with accessibility as generative AI continues to expand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NSPRA warns AI must complement, not replace, human voices in education

A new report from the National School Public Relations Association (NSPRA) and ThoughtExchange highlights the growing role of AI in K-12 communications, offering detailed guidance for ethical integration and effective school engagement.

Drawing on insights from 200 professionals across 37 states, the study reveals how AI tools boost efficiency while underscoring the need for stronger policies, transparency, and ongoing training.

Barbara M Hunter, APR, NSPRA executive director, explained that AI can enhance communication work but will never replace strategy, human judgement, relationships, and authentic school voices.

Key findings show that 91 percent of respondents already use AI, yet most districts still lack clear policies or disclosure practices for employee use.

The report recommends strengthening AI education, accelerating policy development, expanding policies to cover staff use, and building proactive strategies supported by human oversight and trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI toys change the way children learn and play

AI-powered stuffed animals are transforming children’s play by combining cuddly companionship with interactive learning.

Toys such as Curio’s Grem and Mattel’s AI collaborations offer screen-free alternatives to tablets and smartphones, using chatbots and voice recognition to engage children in conversation and educational activities.

Products like CYJBE’s AI Smart Stuffed Animal integrate tools such as ChatGPT to answer questions, tell stories, and adapt to a child’s mood, all under parental controls for monitoring interactions.

Developers say these toys foster personalised learning and emotional bonds rather than replacing human engagement.

The market has grown rapidly, driven by partnerships between tech and toy companies and early experiments like Grimes’ AI plush Grok.

At the same time, experts warn about privacy risks, the collection of children’s data, and potential reductions in face-to-face interaction.

Regulators are calling for safeguards, and parents are urged to weigh the benefits of interactive AI companions against possible social and ethical concerns.

The sector could reshape childhood play and learning, blending imaginative experiences with algorithmic support rather than relying solely on traditional toys.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Indonesia promises to bolster digital sovereignty and AI talent on Independence Day

Indonesia marked its 80th Independence Day by reaffirming its commitment to digital sovereignty and technology-driven inclusion.

The Ministry of Communication and Digital Affairs, following President Prabowo Subianto’s ‘Indonesia Incorporated’ directive, highlighted efforts to build an inclusive, secure, and efficient digital ecosystem.

Priorities include deploying 4G networks in remote regions, expanding public internet services, and reinforcing the Palapa Ring broadband infrastructure.

On the talent front, the government launched a Digital Talent Scholarship and an AI Talent Factory to nurture AI skills, from beginners to specialists, setting the stage for future domestic AI innovation.

In parallel, digital protection measures have been bolstered: over 1.2 million pieces of harmful content have been blocked, while new regulations under the Personal Data Protection Law, together with age-verification, content-monitoring, and reporting systems, have been introduced to enhance child safety online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bluesky updates rules and invites user feedback ahead of October rollout

Two years after launch, Bluesky is revising its Community Guidelines and other policies, inviting users to comment on the proposed changes before they take effect on 15 October 2025.

The updates are designed to improve clarity, outline safety procedures in more detail, and meet the requirements of new global regulations such as the UK’s Online Safety Act, the EU’s Digital Services Act, and the US’s TAKE IT DOWN Act.

Some changes aim to shape the platform’s tone by encouraging respectful and authentic interactions, while allowing space for journalism, satire, and parody.

The revised guidelines are organised under four principles: Safety First, Respect Others, Be Authentic, and Follow the Rules. They prohibit promoting violence, illegal activity, self-harm, and sexualised depictions of minors, as well as harmful practices like doxxing and non-consensual data-sharing.

Bluesky says it will provide a more detailed appeals process, including an ‘informal dispute resolution’ step, and in some cases will allow court action instead of arbitration.

The platform has also addressed nuanced issues such as deepfakes, hate speech, and harassment, while acknowledging past challenges in moderation and community relations.

Alongside the guidelines, Bluesky has updated its Privacy Policy and Copyright Policy to comply with international laws on data rights, transfers, deletion, takedown procedures, and transparency reporting.

Those policy updates will take effect on 15 September 2025 without a public feedback period.

The company’s approach contrasts with that of larger social networks by introducing direct user communication for disputes, though it still faces the challenge of balancing open dialogue with consistent enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube’s AI flags viewers as minors, creators demand safeguards

YouTube’s new AI age check, launched on 13 August 2025, flags suspected minors based on their viewing habits. Over 50,000 creators petitioned against it, calling it ‘AI spying’. The backlash reveals deep tensions between child safety and online anonymity.

Flagged users must verify their age with ID, credit card, or a facial scan. Creators say the policy risks normalising surveillance and shrinking digital freedoms.

SpyCloud’s 2025 report found a 22% jump in stolen identities, raising alarm over uploading sensitive identity data for verification. Critics fear YouTube’s tool could become a target for hackers. Past scandals over AI-generated content have already hurt creator trust.

Users refer to it on X as a ‘digital ID dragnet’. Many are switching platforms or tweaking content to avoid flags. WebProNews says creators demand opt-outs, transparency, and stronger human oversight of AI systems.

As global regulation tightens, YouTube could shape new norms. Experts urge a balance between safety and privacy. Creators push for deletion rules to avoid identity risks in an increasingly surveilled online world.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU member states clash over the future of encrypted private messaging

The ongoing controversy over the EU’s proposed mandatory scanning of private messages has escalated, with the European Parliament intensifying pressure on the Council to reach a formal agreement.

A leaked memo reveals that the Parliament is threatening to block the extension of the current voluntary scanning rules unless mandatory chat control is agreed.

Denmark, which holds the EU Council Presidency, has pushed a more stringent version of the so-called Chat Control law, which could become binding as soon as 14 October 2025.

While the Parliament argues the law is essential for protecting children online, many legal experts and rights groups warn the proposal still violates fundamental human rights, particularly the right to privacy and secure communication.

The Council’s Legal Service has repeatedly noted that the draft infringes on these rights, since it mandates the scanning of all private communications and undermines the end-to-end encryption that most messaging apps rely on.

Some governments, including Germany and Belgium, remain hesitant or opposed, citing these serious concerns.

Supporters like Italy, Spain, and Hungary have openly backed Denmark’s proposal, signalling a shift in political will towards stricter measures. France’s position has also become more favourable, though internal debate continues.

Opponents warn that weakening encryption could open the door to cyber attacks and foreign interference, while proponents emphasise the urgent need to prevent abuse and close loopholes in existing law.

The next Council meeting in September will be critical in shaping the final form of the regulation.

The dispute highlights the persistent tension between digital privacy and security, reflecting broader European challenges in regulating encrypted communications.

As the October deadline approaches, the EU faces a defining moment in balancing child protection with the confidentiality of citizens’ communications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!