YouTube’s AI flags viewers as minors, creators demand safeguards

YouTube’s new AI age check, launched on 13 August 2025, flags suspected minors based on their viewing habits. Over 50,000 creators petitioned against it, calling it ‘AI spying’. The backlash reveals deep tensions between child safety and online anonymity.

Flagged users must verify their age with ID, credit card, or a facial scan. Creators say the policy risks normalising surveillance and shrinking digital freedoms.

SpyCloud’s 2025 report found a 22% jump in stolen identities, raising alarm over asking users to upload identity documents. Critics fear YouTube’s verification tool could become a target for hackers. Past scandals over AI-generated content have already eroded creator trust.

On X, users have dubbed the system a ‘digital ID dragnet’. Many are switching platforms or tweaking their content to avoid being flagged. According to WebProNews, creators are demanding opt-outs, transparency, and stronger human oversight of AI systems.

As global regulation tightens, YouTube’s approach could shape new norms. Experts urge a balance between safety and privacy, while creators push for data deletion rules to limit identity risks in an increasingly surveilled online world.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK-based ODI outlines vision for EU AI Act and data policy

The Open Data Institute (ODI) has published a manifesto setting out six principles for shaping European Union policy on AI and data. Aimed at supporting policymakers, it aligns with the EU’s upcoming digital reforms, including the AI Act and the review of the bloc’s digital framework.

Although based in the UK, the ODI has previously contributed to EU policymaking, including work on the General-Purpose AI Code of Practice and consultations on the use of health data. The organisation also launched a similar manifesto for UK data and AI policy in 2024.

The ODI states that the EU has a chance to establish a global model of digital governance that prioritises people’s interests. Director of research Elena Simperl called for robust open data infrastructure, inclusive participation, and independent oversight to build trust, support innovation, and protect values.

Drawing on the EU’s Competitiveness Compass and the Draghi report, the six principles are: data infrastructure, open data, trust, independent organisations, an inclusive data ecosystem, and data skills. The goal is to balance regulation and innovation while upholding rights, values, and interoperability.

The ODI highlights the need to limit bias and inequality, broaden access to data and skills, and support smaller enterprises. It argues that strong governance should be treated like physical infrastructure, enabling competitiveness while safeguarding rights and public trust in the AI era.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK minister defends use of live facial recognition vans

Dame Diana Johnson, the UK policing minister, has reassured the public that the expanded rollout of live facial recognition vans is being handled in a measured and proportionate manner.

She emphasised that the tools aim only to assist police in locating high-harm offenders, not to create a surveillance society.

Addressing concerns raised by Labour peer Baroness Chakrabarti, who argued the technology was being introduced outside existing legal frameworks, Johnson firmly rejected such claims.

She stated that UK public acceptance would depend on a responsible and targeted application.

By framing the technology as a focused tool for effective law enforcement rather than pervasive monitoring, Johnson seeks to balance public safety with civil liberties and privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New study accuses AI browsers of harvesting sensitive data

A new study from researchers in the UK and Italy found that popular AI-powered browsers collect and share sensitive personal data, often in ways that may breach privacy laws.

The team tested ten well-known AI assistants, including ChatGPT, Microsoft’s Copilot, Merlin AI, Sider, and TinaMind, using public websites and private portals like health and banking services.

All but Perplexity AI showed evidence of gathering private details, from medical records to social security numbers, and transmitting them to external servers.

The investigation revealed that some tools continued tracking user activity even during private browsing, sending full web page content, including confidential information, to their systems.

Sometimes, prompts and identifying details, like IP addresses, were shared with analytics platforms, enabling potential cross-site tracking and targeted advertising.

Researchers also found that some assistants profiled users by age, gender, income, and interests, tailoring their responses across multiple sessions.

According to the report, such practices likely violate American health privacy laws and the European Union’s General Data Protection Regulation.

Privacy policies for some AI browsers admit to collecting names, contact information, payment data, and more, and sometimes storing information outside the EU.

The study warns that users cannot be sure how their browsing data is handled once gathered, raising concerns about transparency and accountability in AI-enhanced browsing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk’s xAI makes Grok 4 free worldwide for a limited time

Elon Musk’s company xAI has made its latest AI model, Grok 4, available to all users worldwide at no cost for a limited period. The model, launched just a month ago, was initially exclusive to paying subscribers of SuperGrok and X Premium.

Although Grok 4 is now open to everyone, its most powerful version, Grok 4 Heavy, remains restricted to SuperGrok Heavy members. The announcement comes days after OpenAI unveiled GPT-5, which is also freely accessible.

Grok 4 features two operating modes. Auto mode decides automatically whether a query requires more detailed reasoning, aiming to deliver faster responses and use fewer resources. Expert mode allows users to manually switch the AI into reasoning mode if they want a more thorough reply.

Alongside the release, xAI has introduced Grok Imagine, a free AI video generation tool for users in the US, with enhanced usage limits for paid members in other regions. The tool has already sparked controversy after reports emerged of its use to create explicit videos of celebrities.

Musk has also revealed plans to integrate advertising into the Grok chatbot interface as an additional revenue source to help offset the high costs of running the AI on powerful GPUs.

The ads will be placed between responses and suggestions on both the web platform and the mobile application, marking another step in xAI’s bid to expand its user base while sustaining the service financially.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Stablecoins unlocking crypto adoption and AI economies

Stablecoins have rapidly emerged as one of the most promising breakthroughs in the cryptocurrency world. They are neither traditional currency nor typical crypto assets; instead, they blend both worlds, combining the stability of fiat with the innovation of digital assets.

In a market known for its wild price swings, stablecoins offer a breath of fresh air, enabling practical use of cryptocurrencies for real-world payments and commerce. The real question is: are stablecoins destined to bring crypto into everyday use and unlock its full potential for the masses?

Stablecoins might be the missing piece that unlocks crypto’s full promise and reshapes the future of digital finance.

Stablecoin regulation: How global rules drive adoption

Regulators worldwide are stepping up to define clear rules for stablecoins, signalling growing market maturity and increasing confidence from major financial institutions. Recent legislative efforts across multiple jurisdictions aim to establish firm standards such as full reserves, audits, and licensing requirements, encouraging banks and asset managers to engage more confidently with stablecoins. 

These coordinated global moves go beyond simple policy updates; they are laying the foundation for stablecoins to evolve from niche crypto assets to trusted pillars of the future financial ecosystem. Regulators and industry leaders are thus bringing cryptocurrencies closer to everyday users and embedding them into daily financial life. 

Corporations and banks embracing stablecoins: A paradigm shift

The adoption of stablecoins by big corporations and banks marks a significant turning point, and, in some ways, a paradox. Once seen as adversaries of decentralised finance, these institutions now seem to be conceding and joining the movement they once resisted: what they failed to control may ultimately prevail.

Retail giants such as Walmart and Amazon are reportedly exploring their own stablecoin initiatives to streamline payments and foster deeper customer engagement. On the banking side, institutions like Bank of America, JPMorgan Chase, and Citigroup are developing or assessing stablecoins to integrate crypto-friendly services into their offerings.

Western Union is also experimenting with stablecoin solutions to reduce remittance costs and increase transaction speed, particularly in emerging markets with volatile currencies. 

They all realise that staying competitive means adapting to the latest shifts in global finance. Such corporate interest signals that stablecoins are transitioning from speculative assets to functional, money-like instruments capable of handling everyday transactions across borders and demographics.

There is also a sociological dimension to stablecoins’ corporate and institutional embrace. Established institutions bring an inherent trust that can alleviate the scepticism surrounding cryptocurrencies.

By linking stablecoins to familiar brands and regulated banks, these digital tokens can overcome cultural and psychological barriers that have limited crypto adoption, ultimately embedding digital currencies into the fabric of global commerce.

Stablecoins and the rise of AI-driven economies

Stablecoins are increasingly becoming the financial backbone of AI-powered economic systems. As AI agents gain autonomy to transact, negotiate, and execute tasks on behalf of individuals and businesses, they require a reliable, programmable, and instantly liquid currency.

Stablecoins perfectly fulfil this role, offering near-instant settlement, low transaction costs, and transparent, trustless operations on blockchain networks. 

In the emerging ‘self-driving economy’, stablecoins may be the preferred currency for a future where machines transact independently. Integrating programmable money with AI may redefine the architecture of commerce and governance. Such a powerful synergy is laying the groundwork for economic systems that operate around the clock without human intervention. 
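
To make the idea of programmable money concrete, here is a minimal sketch of how an autonomous agent might settle a payment in an ERC-20 stablecoin using the web3.py library. The RPC endpoint, token contract address, signing key, and amounts are placeholders rather than references to any real deployment, and a production agent would add gas management, error handling, and proper key custody.

```python
# Minimal sketch: an autonomous agent paying a counterparty in an ERC-20
# stablecoin with web3.py. All addresses, keys, and endpoints are placeholders.
from web3 import Web3

RPC_URL = "https://rpc.example.invalid"                       # hypothetical node endpoint
TOKEN_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder stablecoin contract
AGENT_KEY = "0x" + "11" * 32                                  # placeholder private key

# Just the two ERC-20 functions this sketch needs
ERC20_ABI = [
    {"name": "transfer", "type": "function", "stateMutability": "nonpayable",
     "inputs": [{"name": "to", "type": "address"}, {"name": "value", "type": "uint256"}],
     "outputs": [{"name": "", "type": "bool"}]},
    {"name": "decimals", "type": "function", "stateMutability": "view",
     "inputs": [], "outputs": [{"name": "", "type": "uint8"}]},
]

def pay(recipient: str, amount: float) -> str:
    """Sign and broadcast a stablecoin transfer; returns the transaction hash."""
    w3 = Web3(Web3.HTTPProvider(RPC_URL))
    agent = w3.eth.account.from_key(AGENT_KEY)
    token = w3.eth.contract(address=Web3.to_checksum_address(TOKEN_ADDRESS), abi=ERC20_ABI)

    decimals = token.functions.decimals().call()              # e.g. 6 for many stablecoins
    tx = token.functions.transfer(
        Web3.to_checksum_address(recipient),
        int(amount * 10 ** decimals),                         # convert to the token's base units
    ).build_transaction({
        "from": agent.address,
        "nonce": w3.eth.get_transaction_count(agent.address),
    })

    signed = agent.sign_transaction(tx)
    # Attribute is raw_transaction in recent web3.py releases (rawTransaction in older ones)
    return w3.eth.send_raw_transaction(signed.raw_transaction).hex()
```

The point is simply that a payment becomes a deterministic function call that software can compose with other logic, which is what makes stablecoins attractive as machine-to-machine money.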

As AI technology continues to advance rapidly, the demand for stablecoins as the ideal ‘AI money’ will likely accelerate, further driving crypto adoption across industries. 

The bridge between crypto and fiat economies

From a financial philosophy standpoint, stablecoins represent an attempt to synthesise the advantages of decentralisation with the stability and trust associated with fiat money. They aim to combine the freedom and programmability of blockchain with the reassurance of stable value, thereby lowering entry barriers for a wider audience.

On a global scale, stablecoins have the potential to revolutionise cross-border payments, especially benefiting countries with unstable currencies and limited access to traditional banking. 

Sociologically, stablecoins could redefine the way societies perceive money and trust. Moving away from centralised authorities controlling currency issuance, these tokens leverage transparent blockchain ledgers that anyone can verify. The shift challenges traditional power structures and calls for new forms of economic participation based on openness and accessibility.

Yet challenges remain: stablecoins must navigate regulatory scrutiny, develop secure infrastructure, and educate users worldwide. The future will depend on balancing innovation, safety, and societal acceptance – it seems like we are still in the early stages.

Perhaps stablecoins are not just another financial innovation, but a mirror reflecting our shifting relationship with money, trust, and control. If the value we exchange no longer comes from paper, metal, or even banks, but from code, AI, and consensus, then perhaps the real question is whether their rise marks the beginning of a new financial reality – or something we have yet to fully understand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Russia to phase out Mastercard and Visa

The Bank of Russia is preparing to phase out Mastercard and Visa cards and to switch to the domestic Mir payment system. Authorities plan a gradual timeline for banks to replace international cards, letting consumers switch at their own pace while keeping access to current accounts.

Cards issued under the Visa and Mastercard brands have worked only domestically since the two companies left the Russian market after the 2022 invasion of Ukraine. Their share of cards in circulation is declining as more Russians adopt Mir.

The Central Bank has temporarily extended the validity of these cards, but a clear deadline for their complete replacement is now under discussion.

Russia plans to launch the digital rouble alongside the card transition in September 2026. Only a limited framework for digital coins in foreign trade is expected to remain, highlighting Russia’s broader push for financial sovereignty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Crypto crime unit expands with Binance

Tron, Tether, and TRM Labs have announced the expansion of their T3 Financial Crime Unit (T3 FCU) with Binance as the first T3+ partner. The unit has frozen over $250 million in illicit crypto assets since its launch in September 2024.

The T3 FCU works with global law enforcement to tackle money laundering, investment fraud, terrorism financing, and other financial crimes. The new T3+ programme unites exchanges and institutions to share intelligence and tackle threats in real time.

Recent reports highlight the urgency of these efforts. Over $3 billion in crypto was stolen in the first half of 2025, with some hacks laundering funds in under three minutes. Only around 4% of stolen assets were recovered during this period, underscoring the speed and sophistication of modern attacks.

Debate continues over the role of stablecoin issuers and exchanges in freezing funds. Tether’s freeze of $86,000 in stolen USDt shows how quickly assets can be recovered, but it also raises concerns about decentralisation principles amid calls for stronger industry-wide security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU targets eight member states over cybersecurity directive implementation delay

Eight EU countries, namely Ireland, Spain, France, Bulgaria, Luxembourg, the Netherlands, Portugal, and Sweden, have been warned by the European Commission for failing to meet the deadline for implementing the NIS2 Directive.

What is the NIS2 Directive about?

The NIS2 Directive, adopted by the EU in 2022, is an updated legal framework designed to strengthen the cybersecurity and resilience of critical infrastructure and essential services. Essentially, this directive replaces the 2016 NIS Directive, the EU’s first legislation to improve cybersecurity across crucial sectors such as energy, transport, banking, and healthcare. It set baseline security and incident reporting requirements for critical infrastructure operators and digital service providers to enhance the overall resilience of network and information systems in the EU.

With the adoption of the NIS2 Directive, the EU aims to broaden the scope to include not only traditional sectors like energy, transport, banking, and healthcare, but also public administration, space, manufacturing of critical products, food production, postal services, and a wide range of digital service providers.

NIS2 introduces stricter risk management, supply-chain security requirements, and enhanced incident reporting rules, with early warnings due within 24 hours. It increases management accountability, requiring leadership to oversee compliance and undergo cybersecurity training.

It also imposes heavy penalties for violations, including up to €10 million or 2% of global annual turnover for essential entities. The Directive also aims to strengthen EU-level cooperation through bodies like ENISA and EU-CyCLONe.
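
For a sense of how that ceiling scales with company size, the short sketch below computes the maximum fine for an essential entity. It assumes, in line with the directive’s wording, that the applicable ceiling is whichever of the two figures is higher; the turnover figures are invented purely for illustration.

```python
def nis2_max_fine_essential(annual_turnover_eur: float) -> float:
    """Maximum administrative fine for an essential entity under NIS2:
    EUR 10 million or 2% of total worldwide annual turnover, whichever is higher."""
    return max(10_000_000, 0.02 * annual_turnover_eur)

# Illustrative figures only: a firm with EUR 2 billion turnover faces a ceiling
# of EUR 40 million, while one with EUR 100 million turnover is capped at EUR 10 million.
print(nis2_max_fine_essential(2_000_000_000))  # 40000000.0
print(nis2_max_fine_essential(100_000_000))    # 10000000.0
```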

Member States were expected to transpose NIS2 into national law by 17 October 2024, making timely compliance preparation critical.

What is a directive?

There are two main types of EU law: regulations and directives. Regulations apply automatically and uniformly across all member states once adopted by the EU.

In contrast, directives set specific goals that member states must achieve but leave it up to each country to decide how to implement them, allowing for different approaches based on each member state’s capacities and legal systems.

So, why is there a delay in implementing the NIS2 Directive?

According to Infosecurity Magazine, the delay stems from member states’ implementation challenges, and many companies across the EU are ‘not fully ready to comply with the directive’. Six critical infrastructure sectors face particular challenges:

  • IT service management, challenged by its cross-border nature and the diversity of entities it covers
  • Space, with limited cybersecurity knowledge and heavy reliance on commercial off-the-shelf components
  • Public administrations, which ‘lack the support and experience seen in more mature sectors’
  • Maritime, facing operational technology-related challenges and needing tailored cybersecurity risk management guidance
  • Health, relying on complex supply chains, legacy systems, and poorly secured medical devices
  • Gas, which must improve incident readiness and response capabilities

The deadline for the implementation was 17 October 2024. In May 2025, the European Commission warned 19 member states about delays, giving them two months to act or risk referral to the Court of Justice of the EU. It remains unclear whether the eight remaining holdouts will face further legal consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Data breach hits cervical cancer screening programme

Hackers have stolen personal and medical information from nearly 500,000 participants in the Netherlands’ cervical cancer screening programme. The attack targeted the NMDL laboratory in Rijswijk between 3 and 6 July, but authorities were only informed on 6 August.

The stolen data includes names, addresses, birth dates, citizen service numbers, possible test results, and healthcare provider details. For some victims, phone numbers and email addresses were also taken. The lab, owned by Eurofins Scientific, has suspended operations while a security review is carried out.

The Dutch Population Screening Association has switched to a different laboratory to process future tests and is warning those affected of the risk of fraud. Local media reports suggest hackers may also have accessed up to 300GB of data on other patients from the past three years.

Security experts say the breach underscores the dangers of weak links in healthcare supply chains. Victims are now being contacted by the authorities, who have expressed regret for the distress caused.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!