Privacy laws block cross-border crypto regulation progress

Regulators continue to face hurdles in overseeing global crypto markets as privacy laws block effective cross-border data sharing, the Financial Stability Board warned. Sixteen years after Bitcoin’s launch, regulation remains inconsistent, with differing national approaches causing data gaps and fragmented oversight.

The FSB, which is hosted by the Bank for International Settlements, said secrecy laws hinder authorities from monitoring risks and sharing information. Some jurisdictions block data sharing with foreign regulators, while others delay cooperation over privacy and reciprocity concerns.

According to the report, addressing these legal and institutional barriers is essential to improving cross-border collaboration and ensuring more effective global oversight of crypto markets.

The FSB also noted that reliable data on digital assets remain scarce, as regulators rely heavily on incomplete or inconsistent information from commercial data providers.

Despite the growing urgency to monitor financial stability risks, little progress has been made since similar concerns were raised nearly four years ago. The FSB has yet to outline concrete solutions for bridging the gap between data privacy protection and effective crypto regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta previews parental controls over teen AI character chats

Meta has previewed upcoming parental control features for its AI experiences, particularly aimed at teens’ interactions with AI characters. The new tools are expected to roll out next year.

Under the proposed controls, parents will be able to turn off chats between teens and AI characters altogether, though the broader Meta AI chatbot remains accessible. They can also block specific characters if they wish. Parents will receive topic summaries of what teens are discussing with AI characters and with Meta AI itself.

The first deployment will be on Instagram, with initial availability in English for the US, UK, Canada and Australia. Meta says it recognises the challenges parents face in guiding children through new technology, and wants these tools to simplify oversight.

Meta also notes that AI content and experiences intended for teens will follow a PG-13 standard: avoiding extreme violence, nudity and graphic drug content. Teens currently interact with only a limited set of AI characters under age-appropriate guidelines.

Additionally, Meta plans to let parents set time limits on teens’ use of AI characters. The company is also working to detect and discourage attempts by teens to misrepresent their age in order to bypass the restrictions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK government urges awareness as £106m lost to romance fraud in one year

Romance fraud has surged across the United Kingdom, with new figures showing that victims lost a combined £106 million in the past financial year. Action Fraud, the UK’s national reporting centre for cybercrime, described the crime as one that causes severe financial, emotional, and social damage.

Among the victims is London banker Varun Yadav, who lost £40,000 to a scammer posing as a romantic partner on a dating app. After months of chatting online, the fraudster persuaded him to invest in a cryptocurrency platform.

When his funds became inaccessible, Yadav realised he had been deceived. ‘You see all the signs, but you are so emotionally attached,’ he said. ‘You are willing to lose the money, but not the connection.’

The Financial Conduct Authority (FCA) said banks should play a stronger role in disrupting romance scams, calling for improved detection systems and better staff training to identify vulnerable customers. It urged firms to adopt what it called ‘compassionate aftercare’ for those affected.

Romance fraud typically involves criminals creating fake online profiles to build emotional connections before manipulating victims into transferring money.

The National Cyber Security Centre (NCSC) and UK police recommend maintaining privacy on social media, avoiding financial transfers to online contacts, and speaking openly with friends or family before sending money.

The Metropolitan Police recently launched an awareness campaign featuring victim testimonies and guidance on spotting red flags. The initiative also promotes collaboration with dating apps, banks, and social platforms to identify fraud networks.

Detective Superintendent Kerry Wood, head of economic crime for the Met Police, said that romance scams remain ‘one of the most devastating’ forms of fraud. ‘It’s an abuse of trust which undermines people’s confidence and sense of self-worth. Awareness is the most powerful defence against fraud,’ she said.

Although Yadav never recovered his savings, he said sharing his story helped him rebuild his life. He urged others facing similar scams to speak up: ‘Do not isolate yourself. There is hope.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AWS glitch triggers widespread outages across major apps

A major internet outage hit some of the world’s biggest apps and sites from about 9 a.m. CET Monday, with issues traced to Amazon Web Services. Tracking sites reported widespread failures across the US and beyond, disrupting consumer and enterprise services.

AWS cited ‘significant error rates’ for DynamoDB requests in its US-EAST-1 region, affecting additional services hosted in Northern Virginia. Engineers are working to mitigate the issue while investigating the root cause, and some customers have been unable to create or update support cases.
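For context on what elevated error rates mean in practice: applications calling DynamoDB typically absorb brief spikes through retry and backoff settings rather than failing outright. Below is a minimal sketch using boto3’s standard retry configuration; the table and key names are illustrative assumptions, not details from the incident.

```python
import boto3
from botocore.config import Config

# Adaptive retry mode backs off automatically when DynamoDB returns throttling
# or internal errors, which is how many clients ride out transient error spikes.
retry_config = Config(
    region_name="us-east-1",  # the region named in the incident
    retries={"max_attempts": 10, "mode": "adaptive"},
)

dynamodb = boto3.client("dynamodb", config=retry_config)

# Illustrative read; "Orders" and "OrderId" are hypothetical names.
response = dynamodb.get_item(
    TableName="Orders",
    Key={"OrderId": {"S": "12345"}},
)
print(response.get("Item"))
```

Retries only smooth over short-lived failures, though; an outage of this scale leaves dependent apps degraded until the underlying service recovers.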

Outages clustered around Virginia’s dense data-centre corridor but rippled globally. Impacted brands included Amazon, Google, Snapchat, Roblox, Fortnite, Canva, Coinbase, Slack, Signal, Vodafone and the UK tax authority HMRC.

Coinbase told users ‘all funds are safe’ as platforms struggled to authenticate, fetch data and serve content tied to affected back-ends. Third-party monitors noted elevated failure rates across APIs and app logins.

The incident underscores heavy reliance on hyperscale infrastructure and the blast radius when core data services falter. Full restoration and a formal post-mortem are pending from AWS.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Data Act now in force, more data sharing in EU

The EU’s Data Act is now in force, marking a major shift in European data governance. The regulation aims to expand access to industrial and Internet of Things data, giving users greater control over information they generate while maintaining safeguards for trade secrets and privacy.

Adopted as part of the EU’s Digital Strategy, the act seeks to promote fair competition, innovation, and public-sector efficiency. It enables individuals and businesses to share co-generated data from connected devices and allows public authorities limited access in emergencies or matters of public interest.

Some obligations take effect later. Requirements on product design for data access will apply to new connected devices from September 2026, while certain contract rules are deferred until 2027. Member states will set national penalties, with fines in some cases reaching up to 10% of global annual turnover.

The European Commission will assess the law’s impact within three years of its entry into force. Policymakers hope the act will foster a fairer, more competitive data economy, though much will depend on consistent enforcement and how businesses adapt their practices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Public consultation: EU clarifies how DMA and GDPR work together

The European Commission and the European Data Protection Board have jointly published long-awaited guidelines clarifying how the Digital Markets Act aligns with the GDPR. The guidelines aim to remove uncertainty for large online platforms over consent requirements and data sharing, among other issues.

Under the new interpretation, gatekeepers must obtain specific and separate consent when combining user data across different services, including when using it for AI training. They cannot rely on legitimate interest or contractual necessity for such processing, closing a loophole long debated in EU privacy law.

The guidelines also set limits on how often consent can be re-requested, prohibiting repeated or slightly altered requests for the same purpose within a year. In addition, they make clear that offering users a binary choice between accepting tracking or paying a fee will rarely qualify as freely given consent.

The guidelines further introduce a practical standard for anonymisation, requiring platforms to use technical and organisational safeguards to prevent re-identification. The public consultation runs until 4 December 2025, after which the guidelines are expected to shape future enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Labels and Spotify align on artist-first AI safeguards

Spotify is partnering with major labels on artist-first AI tools, putting consent and copyright at the centre of product design. The plan aims to align new features with transparent labelling and fair compensation while addressing concerns about generative music flooding platforms.

The collaboration with Sony, Universal, Warner, and Merlin will give artists control over participation in AI experiences and how their catalogues are used. Spotify says it will prioritise consent, clearer attribution, and rights management as it builds new tools.

Early direction points to expanded labelling via DDEX, stricter controls against mass AI uploads, and protections against search and recommendation manipulation. Spotify’s AI DJ and prompt-based playlists hint at how engagement features could evolve without sidelining creators.

Future products are expected to let artists opt in, monitor usage, and manage when their music feeds AI-generated works. Rights holders and distributors would gain better tracking and payment flows as transparency improves across the ecosystem.

Industry observers say the tie-up could set a benchmark for responsible AI in music if enforcement matches ambition. By moving in step with labels, Spotify is pitching a path where innovation and artist advocacy reinforce rather than undermine each other.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Veo 3.1 brings audio and control to AI filmmaking

Google DeepMind has unveiled Veo 3.1, the newest upgrade to its video generation model, bringing more artistic freedom, realism and sound integration to its AI filmmaking tool, Flow.

The update gives creators advanced scene control and introduces generated audio across existing features like ‘Ingredients to Video’, ‘Frames to Video’ and ‘Extend’.

Users can now fine-tune visuals by combining multiple reference images, seamlessly link frames into longer clips, and edit scenes with new insert and removal tools that handle shadows and lighting automatically.

Flow’s new precision tools mark a significant step toward cinematic-level storytelling powered by AI.

Veo 3.1 is also accessible through the Gemini API, Vertex AI and the Gemini app, broadening its availability to developers and enterprises alike.

These enhancements signal Google’s ongoing ambition to push the boundaries of generative video technology for creative and professional applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Between trips, Uber pilots paid AI data work

Uber is piloting ‘Digital Tasks’ in the US, letting select drivers and couriers earn by training AI models between trips.

Tasks include recording short selfie videos in any language, uploading multilingual documents, and submitting category-tagged images; each takes only minutes, and pay varies by task.

Uber says the pilot responds to demand from US drivers seeking ways to earn while off the road; participants can opt in via the Work Hub and need no prior experience.

Partners commissioning the data aren’t named. The pilot starts later this year, with potential expansion to non-drivers and wider markets.

The move diversifies beyond rides and delivery as robotaxis loom. Uber argues for more earning channels now, while autonomy scales over time.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Nurses gain AI support as Microsoft evolves Dragon Copilot in healthcare

Microsoft has announced major AI upgrades to Dragon Copilot, its healthcare assistant, extending ambient and generative AI capabilities to nursing workflows and third-party partner integrations.

The update is designed to improve patient journeys, reduce administrative workloads and enhance efficiency across healthcare systems.

The new features allow partners to integrate their own AI applications directly into Dragon Copilot, helping clinicians access trusted information, automate documentation and streamline financial management without leaving their workflow.

Partnerships with Elsevier, Wolters Kluwer, Atropos Health, Canary Speech and others will provide real-time decision support, clinical insights and revenue cycle automation.

Microsoft is also introducing what it describes as the first commercial ambient AI solution built for nurses, designed to reduce burnout and enhance care quality.

The technology automatically records nurse-patient interactions and transforms them into editable documentation for electronic health records, saving time and supporting accuracy.

Nurses can also access medical content within the same interface and automate note-taking and summaries, allowing greater focus on patient care.

The company says these developments mark a new phase in its AI strategy for healthcare, strengthening its collaboration with providers and partners.

Microsoft aims to make clinical workflows more connected, reliable and human-centred, while supporting safe, evidence-based decision-making through its expanding ecosystem of AI tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!