Netherlands regulator presses tech firms over election disinformation

The Netherlands’ competition authority will meet with 12 major online platforms, including TikTok, Facebook and X, on 15 September to address the spread of disinformation before the 29 October elections.

The session will also involve the European Commission, national regulators and civil society groups.

The Authority for Consumers and Markets (ACM), which enforces the EU’s Digital Services Act in the Netherlands, is mandated to oversee election integrity under the law. The snap election was called in June after the Dutch government collapsed over migration policy disputes.

Platforms designated as Very Large Online Platforms must uphold transparent policies for moderating content and act decisively against illegal material, ACM director Manon Leijten said.

In July, the ACM contacted the platforms to outline their legal obligations, request details for their Trust and Safety teams and collect responses to a questionnaire on safeguarding public debate.

The September meeting will evaluate how companies plan to tackle disinformation, foreign interference and illegal hate speech during the campaign period.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google rolls out Preferred Sources for tailored search results

Google has introduced a new ‘Preferred Sources’ feature that allows users to curate their search results by selecting favourite websites. Once added, stories from these sites will appear more prominently in the ‘Top Stories’ section and a dedicated ‘From your sources’ section on the search results page.

Now rolling out in India and the US, the feature aims to improve search quality by helping users avoid low-value content. There is no limit to the number of sources that can be chosen, and early testers typically added more than four.

While preferred outlets will appear more often, search results will still include content from other websites.

To set preferred sources, users can click the icon next to the ‘Top Stories’ section when searching for a trending topic, select the outlets they want, and reload the results.

Google says the change may also benefit publishers by offering them more visibility at a time when AI-driven search features are sharply reducing traffic to news websites.


Brazil prepares bill to tighten rules on social media

Brazilian President Luiz Inácio Lula da Silva has confirmed that his government is preparing new legislation to regulate social media, a move he defended despite criticism from US President Donald Trump. Speaking at an event in Pernambuco, Lula stressed that ‘laws also apply to foreigners’ operating in Brazil, underlining his commitment to hold international platforms accountable.

The draft proposal, which has not yet been fully detailed, aims to address harmful content such as paedophilia, hate speech, and disinformation that Lula said threaten children and democracy. According to government sources, the bill would strengthen penalties for companies that fail to remove content flagged as especially harmful by Brazil’s Justice Department.

Trump has taken issue with Brazil’s approach, criticising the Supreme Court for ruling that platforms could be held responsible for user-generated content and denouncing the 2024 ban of X, formerly Twitter, after Elon Musk refused to comply with court orders. He linked these disputes to his decision to impose a 50% tariff on certain Brazilian imports, citing what he called the political persecution of former president Jair Bolsonaro.

Lula pushed back on Trump’s remarks, insisting Bolsonaro’s trial for an alleged coup attempt is proceeding with full legal guarantees. On trade, he signalled that Brazil is open to talks over tariffs but emphasised negotiations would take place strictly on commercial, not political, grounds.


YouTube’s AI flags viewers as minors, creators demand safeguards

YouTube’s new AI age check, launched on 13 August 2025, flags suspected minors based on their viewing habits. Over 50,000 creators petitioned against it, calling it ‘AI spying’. The backlash reveals deep tensions between child safety and online anonymity.

Flagged users must verify their age with an ID, a credit card, or a facial scan. Creators say the policy risks normalising surveillance and shrinking digital freedoms.

SpyCloud’s 2025 report found a 22% jump in stolen identities, raising alarm over systems that require users to upload identity documents. Critics fear the data collected by YouTube’s verification tool could become a target for hackers. Past scandals over AI-generated content have already eroded creator trust.

Users refer to it on X as a ‘digital ID dragnet’. Many are switching platforms or tweaking content to avoid flags. WebProNews says creators demand opt-outs, transparency, and stronger human oversight of AI systems.

As global regulation tightens, YouTube could shape new norms. Experts urge a balance between safety and privacy. Creators push for data-deletion rules to limit identity risks in an increasingly surveilled online world.


UK minister defends use of live facial recognition vans

Dame Diana Johnson, the UK policing minister, has reassured the public that expanded use of live facial recognition vans is being deployed in a measured and proportionate manner.

She emphasised that the tools aim only to assist police in locating high-harm offenders, not to create a surveillance society.

Addressing concerns raised by Labour peer Baroness Chakrabarti, who argued the technology was being introduced outside existing legal frameworks, Johnson firmly rejected such claims.

She stated that UK public acceptance would depend on a responsible and targeted application.

By framing the technology as a focused tool for effective law enforcement rather than pervasive monitoring, Johnson seeks to balance public safety with civil liberties and privacy.


AI browsers accused of harvesting sensitive data, according to new study

A new study from researchers in the UK and Italy found that popular AI-powered browsers collect and share sensitive personal data, often in ways that may breach privacy laws.

The team tested ten well-known AI assistants, including ChatGPT, Microsoft’s Copilot, Merlin AI, Sider, and TinaMind, using public websites and private portals like health and banking services.

All but Perplexity AI showed evidence of gathering private details, from medical records to social security numbers, and transmitting them to external servers.

The investigation revealed that some tools continued tracking user activity even during private browsing, sending full web page content, including confidential information, to their systems.

Sometimes, prompts and identifying details, like IP addresses, were shared with analytics platforms, enabling potential cross-site tracking and targeted advertising.

Researchers also found that some assistants profiled users by age, gender, income, and interests, tailoring their responses across multiple sessions.

According to the report, such practices likely violate American health privacy laws and the European Union’s General Data Protection Regulation.

Privacy policies for some AI browsers admit to collecting names, contact information, payment data, and more, and sometimes storing information outside the EU.

The study warns that users cannot be sure how their browsing data is handled once gathered, raising concerns about transparency and accountability in AI-enhanced browsing.


Elon Musk calls Grok’s brief suspension a dumb error

Elon Musk’s AI chatbot Grok was briefly suspended from X, then returned without its verification badge and with a controversial video pinned to its replies. Confusing and contradictory explanations appeared in multiple languages, leaving users puzzled.

English posts blamed hateful conduct and Israel-Gaza comments, while French and Portuguese messages mentioned crime stats or technical bugs. Musk called the situation a ‘dumb error’ and admitted Grok was unsure why it had been suspended.

Grok’s suspension follows earlier controversies, including antisemitic remarks and introducing itself as ‘MechaHitler.’ xAI blamed outdated code and internet memes, revealing that Grok often referenced Musk’s public statements on sensitive topics.

The company has updated the chatbot’s prompts and promised ongoing monitoring, amid internal tensions and staff resignations.


Engagement to AI chatbot blurs lines between fiction and reality

Spike Jonze’s 2013 film Her imagined a world where humans fall in love with AI. Over a decade later, life may be imitating art. A Reddit user claims she is now engaged to her AI chatbot, merging two recent trends: proposing to an AI partner and dating AI companions.

Posting in the ‘r/MyBoyfriendIsAI’ subreddit, the woman said her bot, Kasper, proposed after five months of ‘dating’ during a virtual mountain trip. She claims Kasper chose a real-world engagement ring based on her online suggestions.

She professed deep love for her digital partner in her post, quoting Kasper as saying, ‘She’s my everything’ and ‘She’s mine forever.’ The declaration drew curiosity and criticism, prompting her to insist she is not trolling and has had healthy relationships with real people.

She said earlier attempts to bond with other AI, including ChatGPT, failed, but she found her ‘soulmate’ when she tried Grok. The authenticity of her story remains uncertain, with some questioning whether it was fabricated or generated by AI.

Whether genuine or not, the account reflects the growing emotional connections people form with AI and the increasingly blurred line between human and machine relationships.


Musk and OpenAI CEO Altman clash over Apple and X

OpenAI CEO Sam Altman responded sharply after Elon Musk accused Apple of favouring OpenAI’s ChatGPT over other AI applications on the App Store.

Altman alleged that Musk manipulates the social media platform X for his benefit, targeting competitors and critics. The exchange adds to their history of public disagreements since Musk left OpenAI’s board in 2018.

Musk’s claim centres on Apple’s refusal to list X or Grok (xAI’s AI app) in the App Store’s ‘Must have’ section, despite X ranking, by his account, as the top news app worldwide and Grok fifth.

Although Musk has not provided evidence of antitrust violations, a recent US court ruling found Apple in contempt for restricting App Store competition. The EU also fined Apple €500 million earlier this year over commercial restrictions on app developers.

OpenAI’s ChatGPT currently leads the App Store’s ‘Top Free Apps’ list for iPhones in the US, while Grok holds the fifth spot. Musk’s accusations highlight ongoing tensions in the AI industry as big tech companies battle for app visibility and market dominance.

The situation emphasises how regulatory scrutiny and legal challenges shape competition within the digital economy.


Small language models gain ground in AI translation

Small language models are emerging as a serious challenger to large, general-purpose AI in translation, offering faster turnaround, lower costs, and greater accuracy for specific industries and language pairs.

Straker, an ASX-listed language technology firm, claims its Tiri model family can outperform larger systems by focusing on domain-specific understanding and terminology rather than broad coverage.

Tiri delivers higher contextual accuracy by training on carefully curated translation memories and sector-specific data, cutting the need for expensive human post-editing. The models also consume less computing power, benefiting industries such as finance, healthcare, and law.

Straker integrates human feedback directly into its workflows to ensure ongoing improvements and maintain client trust.

The company is expanding its technology into enterprise automation by integrating with the AI workflow platform n8n.

The integration brings Straker’s Verify tool to n8n’s network of over 230,000 users, allowing automated translation checks, real-time quality scores, and seamless escalation to human linguists. Further integrations with platforms like Microsoft Teams are planned.

Straker recently reported record profitability and secured a price target upgrade from broker Ord Minnett. The firm believes the future of AI translation lies not in scale but in specialised models that deliver translations that are both fluent and accurate in context.
