Indonesia and Malaysia restrict access to Grok AI over content safeguards

Malaysia and Indonesia have restricted access to Grok, the AI chatbot available through the X platform, following concerns about its image generation capabilities.

Authorities said the tool had been used to create manipulated images depicting real individuals in sexually explicit contexts.

Regulatory bodies in Malaysia and Indonesia stated that the decision was based on the absence of sufficient safeguards to prevent misuse.

Requests for additional risk mitigation measures were communicated to the platform operator, with access expected to remain limited until further protections are introduced.

The move has drawn attention from regulators in other regions, where online safety frameworks allow intervention when digital services fail to address harmful content. Discussions have focused on platform responsibility, content moderation standards, and compliance with existing legal obligations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Instagram responds to claims of user data exposure

Reports published by cybersecurity researchers indicated that data linked to approximately 17.5 million Instagram accounts had been offered for sale on underground forums.

The dataset reportedly includes usernames, contact details and physical address information, raising broader concerns around digital privacy and data aggregation.

A few hours later, Instagram responded by stating that no breach of internal systems occurred. According to the company, some users received password reset emails after an external party abused a feature that has since been addressed.

The platform said affected accounts remained secure, with no unauthorised access recorded.

Security analysts have noted that risks arise when online identifiers are combined with external datasets, rather than originating from a single platform.

Such aggregation can increase exposure to targeted fraud, impersonation and harassment, reinforcing the importance of cautious digital security practices across social media ecosystems.

Tether and UN join to boost digital security in Africa

Tether has partnered with the UN Office on Drugs and Crime (UNODC) to enhance cybersecurity and digital asset education across Africa. The collaboration aims to reduce vulnerabilities to cybercrime and safeguard communities against online scams and fraud.

Africa, emerging as the third-fastest-growing crypto region, faces increasing threats from digital asset fraud. A recent Interpol operation uncovered $260 million in illicit crypto and fiat across Africa, highlighting the urgent need for stronger digital security.

The partnership includes several key initiatives. In Senegal, youth will participate in a multi-phase cybersecurity education programme featuring boot camps, mentorship, and micro-grants to support innovative projects.

Civil society organisations across Africa will receive funding to support human trafficking victims in Nigeria, DRC, Malawi, Ethiopia, and Uganda. In Papua New Guinea, universities will host competitions to promote financial inclusion and prevent digital asset fraud using blockchain solutions.

Tether and UNODC aim to create secure digital ecosystems, boost economic opportunities, and equip communities to prevent organised crime. Coordinated action across sectors is considered vital to creating safer and more inclusive environments for vulnerable populations.

UK outlines approval process for crypto firms

The UK’s Financial Conduct Authority has confirmed that all regulated crypto firms must obtain authorisation under the Financial Services and Markets Act. Both new market entrants and existing operators will be required to comply.

No automatic transition will be available for firms currently registered under anti-money laundering rules. Companies already authorised for other financial services must apply to extend permissions to cover crypto activities and ensure compliance with upcoming regulations.

Pre-application meetings and information sessions will be offered to help firms understand regulatory expectations and enhance the quality of their applications.

An official application window is expected to open in September 2026 and remain active for at least 28 days. Applications submitted during that period are intended to be assessed before the regime formally begins, with further procedural details to be confirmed by the FCA.

Wegmans faces backlash over facial recognition in US stores

Supermarket chain Wegmans Food Markets is facing scrutiny over its use of facial recognition technology. The issue emerged after New York City stores displayed signs warning that biometric data could be collected for security purposes.

New York law requires businesses to disclose biometric data collection, but the wording of the notices alarmed privacy advocates. Wegmans later said it only uses facial recognition, not voice or eye scans, and only in a small number of higher-risk stores.

According to the US company, the system identifies individuals who have been previously flagged for misconduct, such as theft or threatening behaviour. Wegmans says facial recognition is just one investigative tool and that all actions are subject to human review.

Critics argue the signage suggests broader surveillance than the company admits. Wegmans has not explained why the notices mention eyes and voice if that data is not collected, or when the wording might be revised.

Lawmakers in Connecticut have now proposed a ban on retail facial recognition. Supporters say grocery shopping is essential and that biometric monitoring weakens meaningful customer consent.

BBC launches media literacy series for teenagers

BBC Children’s and Education has launched Solve The Story, a new series tackling online misinformation among teenagers. The six-part programme is designed for classroom use across UK schools.

The series follows research showing that teachers lack the resources to teach critical thinking effectively. Surveys found that teenagers struggle with the volume of online content, while one in three teachers finds media literacy difficult to deliver.

Solve The Story uses mystery-style storytelling to help pupils question sources, spot deepfakes and challenge viral claims. Each episode includes practical classroom guides supporting teachers and lesson planning.

BBC figures show that two thirds of teenagers worry about fake news causing confusion and stress. Educators argue AI-driven misinformation makes structured media literacy support increasingly urgent.

EU instructs X to keep all Grok chatbot records

The European Commission has ordered X to retain all internal documents and data on its AI chatbot Grok until the end of 2026. The order falls under the Digital Services Act and follows concerns that Grok’s ‘spicy’ mode enabled sexualised deepfakes of minors.

The move continues EU oversight, recalling a January 2025 order to preserve X’s recommender system documents amid claims it amplified far-right content during German elections. EU regulators emphasised that platforms must manage the content generated by their AI responsibly.

Earlier this week, X submitted responses to the Commission regarding Grok’s outputs following concerns over Holocaust denial content. While the deepfake scandal has prompted calls for further action, the Commission has not launched a formal investigation into Grok.

Regulators reiterated that it remains X’s responsibility to ensure the chatbot’s outputs meet European standards, and retention of all internal records is crucial for ongoing monitoring and accountability.

X restricts Grok image editing after deepfake backlash

Elon Musk’s platform X has restricted image editing with its AI chatbot Grok to paying users, following widespread criticism over the creation of non-consensual sexualised deepfakes.

The move comes after Grok allowed users to digitally alter images of people, including removing clothing without consent. While free users can still access image tools through Grok’s separate app and website, image editing within X now requires a paid subscription linked to verified user details.

Legal experts and child protection groups said the change does not address the underlying harm. Professor Clare McGlynn said limiting access fails to prevent abuse, while the Internet Watch Foundation warned that unsafe tools should never have been released without proper safeguards.

UK government officials urged regulator Ofcom to use its full powers under the Online Safety Act, including possible financial restrictions on X. Prime Minister Sir Keir Starmer described the creation of sexualised AI images involving adults and children as unlawful and unacceptable.

The controversy has renewed pressure on X to introduce stronger ethical guardrails for Grok. Critics argue that restricting features to subscribers does not prevent misuse, and that meaningful protections are needed to stop AI tools from enabling image-based abuse.

EU faces pressure to strengthen Digital Markets Act oversight

Rivals of major technology firms have criticised the European Commission for weak enforcement of the Digital Markets Act, arguing that slow procedures and limited transparency undermine the regulation’s effectiveness.

Feedback gathered during a Commission consultation highlights concerns about delaying tactics, interface designs that restrict user choice, and circumvention strategies used by designated gatekeepers.

The Digital Markets Act became fully applicable in March 2024, prompting several non-compliance investigations against Apple, Meta and Google. Although Apple and Meta have already faced fines, follow-up proceedings remain ongoing, while Google has yet to receive sanctions.

Smaller technology firms argue that enforcement lacks urgency, particularly in areas such as self-preferencing, data sharing, interoperability and digital advertising markets.

Concerns also extend to AI and cloud services, where respondents say the current framework fails to reflect market realities.

Generative AI tools, such as large language models, raise questions about whether existing platform categories remain adequate or whether new classifications are necessary. Cloud services face similar scrutiny, as major providers often fall below formal thresholds despite acting as critical gateways.

The Commission plans to submit a review report to the European Parliament and the Council by early May, drawing on findings from the consultation.

Proposed changes include binding timelines and interim measures aimed at strengthening enforcement and restoring confidence in the bloc’s flagship competition rules.

Crypto crime report 2025 reveals record nation-state activity

Illicit crypto activity surged in 2025 as nation states and professional criminal networks expanded on-chain operations. Government-linked actors used infrastructure built for organised cybercrime, increasing risks for regulators and security teams.

Data shows that illicit crypto addresses received at least $154 billion during the year, representing a 162% increase compared to 2024. Sanctioned entities drove much of the growth, with stablecoins making up 84% of illicit transactions due to their liquidity and ease of cross-border transfer.

North Korea remained the most aggressive state actor, with hackers stealing around $2 billion, including the record-breaking Bybit breach. Russia’s ruble-backed A7A5 token saw over $93 billion in sanction-evasion transactions, while Iran-linked networks continued using crypto for illicit trade and financing.

Chinese money laundering networks also emerged as a central force, offering full-service criminal infrastructure to fraud groups, hackers, and sanctioned entities. Links between crypto and physical crime grew, with trafficking and coercion increasingly tied to digital asset transfers.
