Privacy laws block cross-border crypto regulation progress

Regulators continue to face hurdles in overseeing global crypto markets as privacy laws block effective cross-border data sharing, the Financial Stability Board warned. Sixteen years after Bitcoin’s launch, regulation remains inconsistent, with differing national approaches causing data gaps and fragmented oversight.

The FSB, which is hosted by the Bank for International Settlements, said secrecy laws hinder authorities from monitoring risks and sharing information. Some jurisdictions block data sharing with foreign regulators outright, while others delay cooperation over privacy and reciprocity concerns.

According to the report, addressing these legal and institutional barriers is essential to improving cross-border collaboration and ensuring more effective global oversight of crypto markets.

However, the FSB noted that reliable data on digital assets remain scarce, as regulators rely heavily on incomplete or inconsistent sources from commercial data providers.

Despite the growing urgency to monitor financial stability risks, little progress has been made since similar concerns were raised nearly four years ago. The FSB has yet to outline concrete solutions for bridging the gap between data privacy protection and effective crypto regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA and TSMC celebrate first US-made Blackwell AI chip

A collaboration between NVIDIA and TSMC has marked a historic milestone with the first NVIDIA Blackwell wafer produced on US soil.

The event, held at TSMC’s facility in Phoenix, symbolised the start of volume production for the Blackwell architecture and a major step toward domestic AI chip manufacturing.

NVIDIA’s CEO Jensen Huang described it as a moment that brings advanced technology and industrial strength back to the US.

The partnership highlights how the companies aim to strengthen the US semiconductor supply chain by producing the world’s most advanced chips domestically.

TSMC Arizona will manufacture next-generation two-, three- and four-nanometre technologies, crucial for AI, telecommunications, and high-performance computing. The process transforms raw wafers through layering, etching, and patterning into the high-speed processors driving the AI revolution.

TSMC executives praised the achievement as the result of decades of partnership with NVIDIA, built on innovation and technical excellence.

Both companies believe that local chip production will help meet the rising global demand for AI infrastructure while securing the US’s strategic position in advanced technology manufacturing.

NVIDIA also plans to use its AI, robotics, and digital twin platforms to design and manage future American facilities, deepening its commitment to domestic production.

The companies say their shared investment signals a long-term vision of sustainable innovation, industrial resilience, and technological leadership for the AI era.

Harvard’s health division supports AI-powered medical learning

Harvard Health Publishing has partnered with Microsoft to use its health content to train the Copilot AI system. The collaboration seeks to enhance the accuracy of healthcare responses on Microsoft’s AI platform, according to the Wall Street Journal.

HHP publishes consumer health resources reviewed by Harvard scientists, covering topics such as sleep, nutrition, and pain management. The institution confirmed that Microsoft has paid to license its articles, expanding a previous agreement made in 2022.

The move is designed to make medically verified information more accessible to the public through Copilot, which now reaches over 33 million users.

Harvard’s Soroush Saghafian said the deal could help cut errors in AI-generated medical advice, a key concern in healthcare. He emphasised the importance of rigorous testing before deployment, warning that unverified tools could pose serious risks to users.

Harvard continues to invest in AI research and integration across its academic programmes. Recent initiatives include projects to address bias in medical training and studies exploring AI’s role in drug development and cancer treatment.

Meta previews parental controls over teen AI character chats

Meta has previewed upcoming parental control features for its AI experiences, particularly aimed at teens’ interactions with AI characters. The new tools are expected to roll out next year.

Under the proposed controls, parents will be able to turn off chats between teens and AI characters altogether, though the broader Meta AI chatbot remains accessible. They can also block specific characters if they wish. Parents will receive topic summaries of what teens are discussing with AI characters and with Meta AI itself.

The first deployment will be on Instagram, with initial availability in English for the US, UK, Canada and Australia. Meta says it recognises the challenges parents face in guiding children through new technology, and wants these tools to simplify oversight.

Meta also notes that AI content and experiences intended for teens will follow a PG-13 standard: avoiding extreme violence, nudity and graphic drug content. Teens currently interact with only a limited set of AI characters under age-appropriate guidelines.

Additionally, Meta plans to let parents set time limits on teens’ use of AI characters. The company says it is also working to detect and discourage attempts by users to falsify their age to bypass restrictions.

Meta to pull all political ads in EU ahead of new transparency law

Meta Platforms has said it will stop selling and showing political, electoral and social issue advertisements across its services in the European Union from early October 2025. The decision follows the EU’s Transparency and Targeting of Political Advertising (TTPA) regulation coming into full effect on 10 October.

Under TTPA, platforms will be required to clearly label political ads, disclose the sponsor, the election or social issue at hand, the amounts paid, and how the ads are targeted. These obligations also include strict conditions on targeting and require explicit consent for certain data use.

Meta said the requirements create ‘significant operational challenges and legal uncertainties’ and called parts of the new rules ‘unworkable’ for advertisers and platforms. It said that personalised ads are widely used for issue-based campaigns and that limiting them might restrict how people access political or social issue-related information.

The company joins Google, which made a similar move last year citing comparable concerns about TTPA compliance.

While paid political, electoral and social issue ads will no longer run, Meta says organic political content, such as users posting or sharing political views, will still be permitted.

Data Act now in force, more data sharing in EU

The EU’s Data Act is now in force, marking a major shift in European data governance. The regulation aims to expand access to industrial and Internet of Things data, giving users greater control over information they generate while maintaining safeguards for trade secrets and privacy.

Adopted as part of the EU’s Digital Strategy, the act seeks to promote fair competition, innovation, and public-sector efficiency. It enables individuals and businesses to share co-generated data from connected devices and allows public authorities limited access in emergencies or matters of public interest.

Some obligations take effect later. Requirements on product design for data access will apply to new connected devices from September 2026, while certain contract rules are deferred until 2027. Member states will set national penalties, with fines in some cases reaching up to 10% of global annual turnover.

The European Commission will assess the law’s impact within three years of its entry into force. Policymakers hope the act will foster a fairer, more competitive data economy, though much will depend on consistent enforcement and how businesses adapt their practices.

Data labelling transforms rural economies in Tamil Nadu

India’s small towns are fast becoming global hubs for AI training and data labelling, as outsourcing firms move operations beyond major cities like Bangalore and Chennai. Lower costs and improved connectivity have driven a trend known as cloud farming, which has transformed rural employment.

In Tamil Nadu, workers annotate and train AI models for global clients, preparing data that helps machines recognise objects, text and speech. Firms like Desicrew pioneered this approach by offering digital careers close to home, reducing migration to cities while maintaining high technical standards.

Desicrew’s chief executive, Mannivannan J K, says about a third of the company’s projects already involve AI, a figure expected to reach nearly all within two years. Much of the work focuses on transcription, building multilingual datasets that teach machines to interpret diverse human voices and dialects.

Analysts argue that cloud farming could make rural India the world’s largest AI operations base, much as it once dominated IT outsourcing. Yet challenges remain around internet reliability, data security and client confidence.

For workers like Dhanalakshmi Vijay, who fine-tunes models by correcting their errors, the impact feels tangible. Her adjustments, she says, help AI systems perform better in real-world applications, improving everything from shopping recommendations to translation tools.

Public consultation: EU clarifies how DMA and GDPR work together

The European Commission and the European Data Protection Board have jointly published long-awaited guidelines clarifying how the Digital Markets Act aligns with the GDPR. The guidelines aim to remove uncertainty for large online platforms over consent requirements, data sharing and other obligations.

Under the new interpretation, gatekeepers must obtain specific and separate consent when combining user data across different services, including when using it for AI training. They cannot rely on legitimate interest or contractual necessity for such processing, closing a loophole long debated in EU privacy law.

The guidelines also set limits on how often consent can be re-requested, prohibiting repeated or slightly altered requests for the same purpose within a year. In addition, they make clear that offering users a binary choice between accepting tracking or paying a fee will rarely qualify as freely given consent.

The guidelines further introduce a practical standard for anonymisation, requiring platforms to use technical and organisational safeguards to prevent re-identification. Consultation runs until 4 December 2025, after which the guidelines are expected to shape future enforcement.

Labels and Spotify align on artist-first AI safeguards

Spotify is partnering with major labels on artist-first AI tools, putting consent and copyright at the centre of product design. The plan aims to align new features with transparent labelling and fair compensation while addressing concerns about generative music flooding platforms.

The collaboration with Sony, Universal, Warner, and Merlin will give artists control over participation in AI experiences and how their catalogues are used. Spotify says it will prioritise consent, clearer attribution, and rights management as it builds new tools.

Early direction points to expanded labelling via DDEX, stricter controls against mass AI uploads, and protections against search and recommendation manipulation. Spotify’s AI DJ and prompt-based playlists hint at how engagement features could evolve without sidelining creators.

Future products are expected to let artists opt in, monitor usage, and manage when their music feeds AI-generated works. Rights holders and distributors would gain better tracking and payment flows as transparency improves across the ecosystem.

Industry observers say the tie-up could set a benchmark for responsible AI in music if enforcement matches ambition. By moving in step with labels, Spotify is pitching a path where innovation and artist advocacy reinforce rather than undermine each other.

Google and Salesforce deepen AI partnership across Agentforce 360 and Gemini Enterprise

Salesforce and Google have expanded their long-term partnership, introducing new integrations between Salesforce’s Agentforce 360 platform and Google’s Gemini Enterprise. The collaboration aims to enhance productivity and build a new foundation for intelligent, connected business operations.

Through the expansion, Gemini models now power Salesforce’s Atlas Reasoning Engine, combining multimodal intelligence with hybrid reasoning to improve how AI agents handle complex, multistep enterprise tasks.

These integrations also extend across Google Workspace, bringing Agentforce 360 capabilities directly into Gmail, Meet, Docs, Sheets and Drive for sales, service and IT teams.

Salesforce highlights that fine-tuned Gemini models outperform competing LLMs on key CRM benchmarks, enabling businesses to automate workflows more reliably and consistently.

The companies also reaffirm their commitment to open standards like Model Context Protocol and Agent2Agent, allowing multi-agent collaboration and interoperability across enterprise systems.

The partnership also integrates Gemini Enterprise with Slack’s real-time search API, enabling users to draw insights directly from organisational data within conversations.

Both companies stress that these advances mark a major step toward an ‘Agentic Enterprise’, where AI systems work alongside people to drive innovation, improve service quality and streamline decision-making.
