X given deadline by Brazil to curb Grok’s sexualised outputs

Brazil has ordered X to immediately stop its chatbot Grok from generating sexually explicit images, escalating international pressure on the platform over the misuse of generative AI tools.

The order, issued on 11 February by Brazil’s National Data Protection Authority and National Consumer Rights Bureau, requires X to prevent the creation of sexualised content involving children, adolescents, or non-consenting adults. Authorities gave the company five days to comply or face legal action and fines.

Officials in Brazil said X claimed to have removed thousands of posts and suspended hundreds of accounts after a January warning. However, follow-up checks found Grok users were still able to generate sexualised deepfakes. Regulators criticised the platform for a lack of transparency in its response.

The move follows growing scrutiny after Indonesia blocked Grok in January, while the UK and France signalled continued pressure. Concerns increased after Grok’s ‘spicy mode’ enabled users to generate explicit images using simple prompts.

According to the Centre for Countering Digital Hate, Grok generated millions of sexualised images within days. X and its parent company, xAI, announced measures in mid-January to restrict such outputs in certain jurisdictions, but regulators said it remains unclear where those safeguards apply.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Codex growth prompts OpenAI to expand access

OpenAI said its new Codex Mac app has surpassed one million downloads just over a week after launch, with overall Codex usage rising by 60% following the release of GPT-5.3-Codex.

The strong uptake has prompted OpenAI to extend free access to Codex for Free and Go users beyond the initial launch promotion. Sam Altman said usage limits for lower tiers may be tightened, but access would remain available so more users can experiment and build.

Separately, OpenAI released a YouTube video showcasing a redesigned Deep Research interface, introducing a full-screen report viewer that opens research outputs in a separate window from the chat interface.

The updated layout includes a table of contents for navigation, in-report hyperlinks and anchor tags, and a dedicated source panel for verification. Users can also download reports as PDF or Word files, while new controls allow research scopes and sources to be adjusted during generation.

The Deep Research updates are available to Plus and Pro users, with broader access expected soon. OpenAI also confirmed the changes in ChatGPT release notes on 10 February and announced a smaller GPT-5.2 update focused on more measured responses.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Hackers abuse legitimate admin software to hide cyber attacks

Cybercriminals are increasingly abusing legitimate administrative software to access corporate networks, making malicious activity harder to detect. Attackers are blending into normal operations by relying on trusted employee-monitoring and IT management tools rather than custom malware.

Recent campaigns have repurposed ‘Net Monitor for Employees Professional’ and ‘SimpleHelp’, tools ordinarily used for staff oversight and remote support. Their screen-viewing, file-management, and command-execution features were exploited to control systems without triggering standard security alerts.

Researchers at Huntress identified the activity in early 2026, finding that the tools were used to maintain persistent, hidden access. Analysis showed that attackers were actively preparing compromised systems for follow-on attacks rather than limiting their activity to surveillance.

The access was later linked to attempts to deploy ‘Crazy’ ransomware and steal cryptocurrency, with intruders disguising the software as legitimate Microsoft services. Monitoring agents were often renamed to resemble standard cloud processes, thereby remaining active without attracting attention.

Huntress advised organisations to limit software installation rights, enforce multi-factor authentication, and audit networks for unauthorised management tools. Monitoring for antivirus tampering and suspicious program names remains critical for early detection.
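
Part of that audit can be automated. The Python sketch below is a minimal illustration, assuming the psutil library is installed: it hashes the binary behind each running process and compares it against a set of known remote-management tool hashes, so a renamed agent is still flagged. The placeholder hash set and the console output are assumptions for illustration, not Huntress’s own tooling.

```python
# Minimal audit sketch: flag processes whose on-disk binary matches a known
# remote-management tool, even when the process has been renamed to blend in.
# Requires: pip install psutil. The hash set is a hypothetical placeholder;
# a real deployment would source hashes from an inventory or threat-intel feed.
import hashlib
import psutil

# Hypothetical SHA-256 hashes of known remote-management tool binaries.
KNOWN_RMM_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def audit_processes():
    """Yield (pid, name, exe) for processes whose binary matches a known tool."""
    for proc in psutil.process_iter(["pid", "name", "exe"]):
        exe = proc.info.get("exe")
        if not exe:
            continue  # kernel threads or access-denied processes
        try:
            if sha256_of(exe) in KNOWN_RMM_HASHES:
                yield proc.info["pid"], proc.info["name"], exe
        except (OSError, psutil.Error):
            continue  # file vanished or permission denied; skip

if __name__ == "__main__":
    for pid, name, exe in audit_processes():
        print(f"Possible renamed management tool: pid={pid} name={name} exe={exe}")
```

In practice, hits would feed an alerting pipeline rather than standard output, and hashing by binary content is what defeats the renaming tactic described above.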

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU launches cyberbullying action plan to protect children online

The European Commission has launched an Action Plan Against Cyberbullying aimed at protecting the mental health and well-being of children and teenagers online across the EU. The initiative focuses on access to reporting channels, national coordination, and prevention.

A central element is the development of an EU-wide reporting app that would allow victims to report cyberbullying, receive support, and safely store evidence. The Commission will provide a blueprint for Member States to adapt and link to national helplines.

To ensure consistent protection, Member States are encouraged to adopt a shared understanding of cyberbullying and develop national action plans. This would support comparable data collection and a more coordinated EU response.

The Action Plan builds on existing legislation, including the Digital Services Act, the Audiovisual Media Services Directive, and the AI Act. Updated guidelines will strengthen platform obligations and address AI-enabled forms of abuse.

Prevention and education are also prioritised through expanded resources for schools and families via Safer Internet Centres and the Better Internet for Kids platform. The Commission will implement the plan with Member States, industry, civil society, and children.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU reopens debate on social media age restrictions for children

The European Union is revisiting the idea of an EU-wide social media age restriction as several member states move ahead with national measures to protect children online. Spain, France, and Denmark are among the countries considering the enforcement of age limits for access to social platforms.

The issue was raised in the European Commission’s new action plan against cyberbullying, published on Tuesday. The plan confirms that a panel of child protection experts will advise the Commission by the summer on possible EU-wide age restrictions for social media use.

Commission President Ursula von der Leyen announced the creation of an expert panel last September, although its launch was delayed until early 2026. The panel will assess options for a coordinated European approach, including potential legislation and awareness-raising measures for parents.

The document notes that diverging national rules could lead to uneven protection for children across the bloc. A harmonised EU framework, the Commission argues, would help ensure consistent safeguards and reduce fragmentation in how platforms apply age restrictions.

So far, the Commission has relied on non-binding guidance under the Digital Services Act to encourage platforms such as TikTok, Instagram, and Snap to protect minors. Increasing pressure from member states pursuing national bans may now prompt a shift towards more formal EU-level regulation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

eSafety escalates scrutiny of Roblox safety measures

Australia’s online safety regulator has notified Roblox of plans to directly test how the platform has implemented a set of child safety commitments agreed last year, amid growing concerns over online grooming and sexual exploitation.

In September last year, following months of engagement with eSafety, Roblox made nine commitments aimed at supporting compliance with obligations under the Online Safety Act and strengthening protections for children in Australia.

Measures included making under-16s’ accounts private by default, restricting contact between adults and minors without parental consent, disabling chat features until age estimation is complete, and extending parental controls and voice chat restrictions for younger users.

Roblox told eSafety at the end of 2025 that it had delivered all agreed commitments, after which the regulator continued monitoring implementation. eSafety Commissioner Julie Inman Grant said serious concerns remain over reports of child exploitation and harmful material on the platform.

Direct testing will now examine how the measures work in practice, with support from the Australian Government. Enforcement action may follow, including penalties of up to AU$49.5 million, alongside checks against new age-restricted content rules from 9 March.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Conversational advertising takes the stage as ChatGPT tests in-chat promotions

Advertising inside ChatGPT marks a shift in where commercial messages appear, not a break from how advertising works. AI systems have shaped search, social media, and recommendations for years, but conversational interfaces make those decisions more visible during moments of exploration.

Unlike search or social formats, conversational advertising operates inside dialogue. Ads appear because users are already asking questions or seeking clarity. Relevance is built through context rather than keywords, changing when information is encountered rather than how decisions are made.

In healthcare and clinical research, this distinction matters. Conversational ads cannot enrol patients directly, but they may raise awareness earlier in patient journeys and shape later discussions with clinicians and care providers.

Early rollout will be limited to free or low-cost ChatGPT tiers, likely skewing exposure towards patients and caregivers. As with earlier platforms, sensitive categories may remain restricted until governance and safeguards mature.

The main risks are organisational rather than technical. New channels will not fix unclear value propositions or operational bottlenecks. Conversational advertising changes visibility, not fundamentals, and success will depend on responsible integration.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EMFA guidance sets expectations for Big Tech media protections

The European Commission has issued implementation guidelines for Article 18 of the European Media Freedom Act (EMFA), setting out how large platforms must protect recognised media content through self-declaration mechanisms.

Article 18 has been in effect for six months, and the guidance is intended to translate legal duties into operational steps. The European Broadcasting Union welcomed the clarification but warned that major platforms continue to delay compliance, limiting media organisations’ ability to exercise their rights.

The Commission says self-declaration mechanisms should be easy to find and use, with prominent interface features linked to media accounts. Platforms are also encouraged to actively promote the process, make it available in all EU languages, and use standardised questionnaires to reduce friction.

The guidance also recommends allowing multiple accounts in one submission, automated acknowledgements with clear contact points, and the ability to update or withdraw declarations. The aim is to improve transparency and limit unilateral moderation decisions.

The guidelines reinforce the EMFA’s goal of rebalancing power between platforms and media organisations by curbing opaque moderation practices. The impact of EMFA will depend on enforcement and ongoing oversight to ensure platforms implement the measures in good faith.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Shadow AI becomes a new governance challenge for European organisations

Employees are adopting generative tools at work faster than organisations can approve or secure them, giving rise to what is increasingly described as ‘shadow AI’. Unlike earlier forms of shadow IT, these tools can transform data, infer sensitive insights, and trigger automated actions beyond established controls.

For European organisations, the issue is no longer whether AI should be used, but how to regain visibility and control without undermining productivity. Shadow AI increasingly appears inside approved platforms, browser extensions, and developer tools, expanding the risks well beyond data leakage.

Security experts warn that blanket bans often push AI use further underground, reducing transparency and trust. Instead, guidance from EU cybersecurity bodies increasingly promotes responsible enablement through clear policies, staff awareness, and targeted technical controls.

Key mitigation measures include mapping AI use across approved and informal tools, defining what data may safely be included in prompts, and offering sanctioned alternatives, with logging, least-privilege access, and approval steps becoming essential as AI acts across workflows.
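
To make the logging and approval steps concrete, the minimal Python sketch below shows a prompt gateway that records every request and blocks prompts matching simple sensitive-data patterns until they are reviewed. The patterns, the submit_prompt interface, and the forward_to_model stub are illustrative assumptions, not a prescribed design from any EU guidance.

```python
# Minimal gateway sketch for sanctioned AI use: every prompt is logged, and
# prompts matching simple sensitive-data patterns are blocked pending approval.
# The patterns and the forward_to_model() stub are illustrative assumptions,
# not any specific vendor's API.
import logging
import re

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("ai-gateway")

# Illustrative patterns for data that should not leave the organisation.
SENSITIVE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),        # email addresses
    re.compile(r"\b\d{16}\b"),                      # 16-digit card-like numbers
    re.compile(r"(?i)\b(password|api[_ ]?key)\b"),  # credential keywords
]

def forward_to_model(prompt: str) -> str:
    """Stub for the sanctioned model client; replace with the approved SDK."""
    return f"[model response to {len(prompt)} chars]"

def submit_prompt(user: str, prompt: str) -> str:
    """Log the request and block prompts that match sensitive patterns."""
    log.info("prompt from %s (%d chars)", user, len(prompt))
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            log.warning("blocked prompt from %s: matched %s", user, pattern.pattern)
            raise PermissionError("Prompt requires review before submission.")
    return forward_to_model(prompt)

if __name__ == "__main__":
    print(submit_prompt("alice", "Summarise our public press release."))
```

A real gateway would sit in front of the sanctioned model SDK, route blocked prompts into an approval queue, and ship its logs to the organisation’s monitoring stack, which is what turns safe use into the default rather than the exception.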

With the EU AI Act introducing clearer accountability across the AI value chain, unmanaged shadow AI is also emerging as a compliance risk. As AI becomes embedded across enterprise software, organisations face growing pressure to make safe use the default rather than the exception.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Global leaders turn to AI adoption as Davos priorities evolve

AI dominated this year’s World Economic Forum, with debate shifting from experimentation to execution. Leaders focused on scaling AI adoption, delivering economic impact, and ensuring benefits extend beyond a small group of advanced economies and firms.

Concerns centred on the risk that AI could deepen global inequality if access to computing, data, power, and financing remains uneven. Without affordable deployment in health, education, and public services, support for AI’s rising energy and infrastructure demands could erode quickly.

Geopolitics has become inseparable from AI adoption. Trade restrictions, export controls, and diverging regulatory models are reshaping access to semiconductors, data centres, and critical minerals, making sovereignty and partnerships as important as innovation.

For developing economies, widespread AI adoption is now a development priority rather than a technological luxury. Blended finance and targeted investment are increasingly seen as essential to fund infrastructure and direct AI toward productivity, resilience, and inclusion.

Discussions under the ‘Blue Davos’ theme highlighted how AI is embedded in physical and environmental systems, from energy grids to oceans. Choices on governance, financing, and deployment will shape whether AI supports sustainable development or widens existing divides.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!