X given deadline by Brazil to curb Grok sexualised outputs

Brazil has ordered X to immediately stop its chatbot Grok from generating sexually explicit images, escalating international pressure on the platform over the misuse of generative AI tools.

The order, issued on 11 February by Brazil’s National Data Protection Agency and National Consumer Rights Bureau, requires X to prevent the creation of sexualised content involving children, adolescents, or non-consenting adults. Authorities gave the company five days to comply or face legal action and fines.

Officials in Brazil said X claimed to have removed thousands of posts and suspended hundreds of accounts after a January warning. However, follow-up checks found Grok users were still able to generate sexualised deepfakes. Regulators criticised the platform for a lack of transparency in its response.

The move follows growing scrutiny after Indonesia blocked Grok in January, while the UK and France signalled continued pressure. Concerns increased after Grok’s ‘spicy mode’ enabled users to generate explicit images using simple prompts.

According to the Centre for Countering Digital Hate, Grok generated millions of sexualised images within days. X and its parent company, xAI, announced measures in mid-January to restrict such outputs in certain jurisdictions, but regulators said it remains unclear where those safeguards apply.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Women driving tech innovation as Web Summit marks 10 years

Web Summit’s Women in Tech programme marked a decade of work in Qatar by highlighting steady progress in female participation across global technology sectors.

The event recorded an increase in women-founded startups and rising engagement in Qatar, where the share of female founders reached 38 percent.

Leaders from the initiative noted how supportive networks, mentorship, and access to role models are reshaping opportunities for women in technology and entrepreneurship.

Speakers from IBM and other companies focused on the importance of AI skills in shaping the future workforce. They argued that adequate preparation depends on understanding how AI shapes everyday roles, rather than relying solely on technical tools.

IBM’s SkillsBuild platform continues to partner with universities, schools, and nonprofit groups to expand access to recognised AI credentials that can support higher earning potential and new career pathways.

Another feature of the event was its emphasis on inclusion as a driver of innovation. The African Women in Technology initiative, led by Anie Akpe, is working to offer free training in cybersecurity and AI so women in emerging markets can benefit from new digital opportunities.

These efforts aim to support business growth at every level, even for women operating in local markets, who can use technology to reach wider communities.

Female founders also used the platform to showcase new health technology solutions.

ScreenMe, a Qatari company founded by Dr Golnoush Golsharazi, presented its reproductive microbiome testing service, created in response to long-standing gaps in women’s health research and screening.

Organisers expressed confidence that women-led innovation will expand across the region, supported by rising investment and continuing visibility at major global events.

Hackers abuse legitimate admin software to hide cyber attacks

Cybercriminals are increasingly abusing legitimate administrative software to access corporate networks, making malicious activity harder to detect. Attackers are blending into normal operations by relying on trusted workforce and IT management tools rather than custom malware.

Recent campaigns have repurposed ‘Net Monitor for Employees Professional’ and ‘SimpleHelp’, tools typically used for staff oversight and remote support. Their screen-viewing, file-management, and command features were exploited to control systems without triggering standard security alerts.

Researchers at Huntress identified the activity in early 2026, finding that the tools were used to maintain persistent, hidden access. Analysis showed that attackers were actively preparing compromised systems for follow-on attacks rather than limiting their activity to surveillance.

The access was later linked to attempts to deploy ‘Crazy’ ransomware and steal cryptocurrency, with intruders disguising the software as legitimate Microsoft services. Monitoring agents were often renamed to resemble standard cloud processes, thereby remaining active without attracting attention.

Huntress advised organisations to limit software installation rights, enforce multi-factor authentication, and audit networks for unauthorised management tools. Monitoring for antivirus tampering and suspicious program names remains critical for early detection.
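As an illustrative sketch of that last recommendation, an audit script could flag running processes whose names match known remote-administration tools or mimic standard cloud-service names. The tool names and lookalike patterns below are hypothetical examples for demonstration, not a vetted detection ruleset:

```python
# Hypothetical sketch: flag process names that match known remote-admin
# tools or resemble renamed "cloud service" processes.
import fnmatch

# Illustrative examples only, not a complete or authoritative list.
KNOWN_ADMIN_TOOLS = {"simplehelp", "net monitor"}
LOOKALIKE_PATTERNS = ["*onedrive*helper*", "*azure*sync*"]

def flag_process(name: str) -> bool:
    """Return True if a process name warrants manual review."""
    lowered = name.lower()
    if any(tool in lowered for tool in KNOWN_ADMIN_TOOLS):
        return True
    return any(fnmatch.fnmatch(lowered, pat) for pat in LOOKALIKE_PATTERNS)

print(flag_process("SimpleHelp.exe"))   # True
print(flag_process("notepad.exe"))      # False
```

In practice such name-based checks would supplement, not replace, behavioural monitoring, since attackers can rename binaries arbitrarily.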

Global coalition demands ban on AI-nudification tools over child-safety fears

More than 100 organisations have urged governments to outlaw AI-nudification tools after a surge in non-consensual digital images.

Groups such as Amnesty International, the European Commission, and Interpol argue that the technology now fuels harmful practices that undermine human dignity and child safety. Their concerns intensified after the Grok nudification scandal, where users created sexualised images from ordinary photographs.

Campaigners warn that the tools often target women and children instead of staying within any claimed adult-only environment. Millions of manipulated images have circulated across social platforms, with many linked to blackmail, coercion and child sexual abuse material.

Experts say the trauma caused by these AI images is no less serious because the abuse occurs online.

Organisations within the coalition maintain that tech companies already possess the ability to detect and block such material but have failed to apply essential safeguards.

They want developers and platforms to be held accountable and believe that strict prohibitions are now necessary to prevent further exploitation. Advocates argue that meaningful action is overdue and that protection of users must take precedence over commercial interests.

South Korea confirms scale of Coupang data breach

The South Korean government has confirmed that 33.67 million user accounts were exposed in a major data breach at Coupang. The findings were released by the Ministry of Science and ICT in Seoul.

Investigators said names and email addresses were leaked, while delivery lists containing addresses and phone numbers were accessed 148 million times. Officials warned that the impact could extend beyond the headline account figure.

Authorities identified a former employee as the attacker, alleging misuse of authentication signing keys. The probe concluded that weaknesses in Coupang’s internal controls enabled the breach.

The ministry criticised delayed reporting and plans to impose a fine on Coupang. The company disputed aspects of the findings but said 33.7 million accounts were involved.

EU launches cyberbullying action plan to protect children online

The European Commission has launched an Action Plan Against Cyberbullying aimed at protecting the mental health and well-being of children and teenagers online across the EU. The initiative focuses on reporting access, national coordination, and prevention.

A central element is the development of an EU-wide reporting app that would allow victims to report cyberbullying, receive support, and safely store evidence. The Commission will provide a blueprint for Member States to adapt and link to national helplines.

To ensure consistent protection, Member States are encouraged to adopt a shared understanding of cyberbullying and develop national action plans. This would support comparable data collection and a more coordinated EU response.

The Action Plan builds on existing legislation, including the Digital Services Act, the Audiovisual Media Services Directive, and the AI Act. Updated guidelines will strengthen platform obligations and address AI-enabled forms of abuse.

Prevention and education are also prioritised through expanded resources for schools and families via Safer Internet Centres and the Better Internet for Kids platform. The Commission will implement the plan with Member States, industry, civil society, and children.

eSafety escalates scrutiny of Roblox safety measures

Australia’s online safety regulator has notified Roblox of plans to directly test how the platform has implemented a set of child safety commitments agreed last year, amid growing concerns over online grooming and sexual exploitation.

In September last year, Roblox made nine commitments following months of engagement with eSafety, aimed at supporting compliance with obligations under the Online Safety Act and strengthening protections for children in Australia.

Measures included making under-16s’ accounts private by default, restricting contact between adults and minors without parental consent, disabling chat features until age estimation is complete, and extending parental controls and voice chat restrictions for younger users.

Roblox told eSafety at the end of 2025 that it had delivered all agreed commitments, after which the regulator continued monitoring implementation. eSafety Commissioner Julie Inman Grant said serious concerns remain over reports of child exploitation and harmful material on the platform.

Direct testing will now examine how the measures work in practice, with support from the Australian Government. Enforcement action may follow, including penalties of up to $49.5 million, alongside checks against new age-restricted content rules from 9 March.

Crypto confiscation framework approved by State Duma

Russia’s State Duma has passed legislation establishing procedures for the seizure and confiscation of cryptocurrencies in criminal investigations. The law formally recognises digital assets as property under criminal law.

The bill cleared its third reading on 10 February and now awaits approval from the Federation Council and presidential signature.

Investigators may seize digital currency and access devices, with specialists required during investigative actions. Protocols must record asset type, quantity, and wallet identifiers, while access credentials and storage media are sealed.

Where technically feasible, seized funds may be transferred to designated state-controlled addresses, with transactions frozen by court order.

Despite creating a legal basis for confiscation, the law leaves critical operational questions unresolved. No method exists for valuing volatile crypto assets or for their storage, cybersecurity, or liquidation.

Practical cooperation with foreign crypto platforms, particularly under sanctions, also remains uncertain.

The government is expected to develop subordinate regulations covering state custody wallets and enforcement mechanics. Russia faces implementation challenges, including non-custodial wallet access barriers, stablecoin freezing limits, and institutional oversight risks.

EU faces pressure to boost action on health disinformation

A global health organisation is urging the EU to make fuller use of its digital rules to curb health disinformation as concerns grow over the impact of deepfakes on public confidence.

Warnings point to a rising risk that manipulated content could reduce vaccine uptake instead of supporting informed public debate.

Experts argue that the Digital Services Act already provides the framework needed to limit harmful misinformation, yet enforcement remains uneven. Stronger oversight could improve platforms’ ability to detect manipulated content and remove inaccurate claims that jeopardise public health.

Campaigners emphasise that deepfake technology is now accessible enough to spread false narratives rapidly. The trend threatens vaccination campaigns at a time when several member states are attempting to address declining trust in health authorities.

EU officials continue to examine how digital regulation can reinforce public health strategies. The call for stricter enforcement highlights the pressure on Brussels to ensure that digital platforms act responsibly rather than allowing misleading material to circulate unchecked.

Discord expands teen-by-default protection worldwide

Discord is preparing a global transition to teen-appropriate settings that will apply to all users unless they confirm they are adults.

The phased rollout begins in early March and forms part of the company’s wider effort to offer protection tailored to younger audiences rather than relying on voluntary safety choices. Controls will cover communication settings, sensitive content and access to age-restricted communities.

The update is based on an expanded age assurance system designed to protect privacy while accurately identifying users’ age groups. People can use facial age estimation on their own device or select identity verification handled by approved partners.

Discord will also rely on an age-inference model that runs quietly in the background. Verification results remain private, and documents are deleted quickly, with users able to appeal group assignments through account settings.

Stricter defaults will apply across the platform. Sensitive media will stay blurred unless a user is confirmed as an adult, and access to age-gated servers or commands will require verification.

Message requests from unfamiliar contacts will be separated, friend-request alerts will be more prominent and only adults will be allowed to speak on community stages instead of sharing the feature with teens.

Discord is complementing the update by creating a Teen Council to offer advice on future safety tools and policies. The council will include up to a dozen young users and aims to embed real teen insight in product development.

The global rollout builds on earlier launches in the UK and Australia, adding to an existing safety ecosystem that includes Teen Safety Assist, Family Centre, and several moderation tools intended to support positive and secure online interactions.
