French lawmakers advance plan to double digital services tax on Big Tech

France’s National Assembly has voted to raise its digital services tax on major tech firms such as Google, Apple, Meta and Amazon from 3% to 6%, despite government warnings that the move could trigger US trade retaliation.

Economy Minister Roland Lescure said the increase would be ‘disproportionate’, cautioning that it could invite equally strong countermeasures from Washington. Lawmakers had initially proposed a 15% levy in response to US President Donald Trump’s tariff threats, but scaled it back amid opposition from industry and the government.

The amendment still requires final approval in next week’s budget vote and then in the French Senate. The proposal also raises the global revenue threshold for companies subject to the digital services tax from €750 million to €2 billion, aiming to shield smaller domestic firms.

John Murphy of the US Chamber of Commerce criticised the plan, arguing it solely targets American companies. Lawmaker Charles Sitzenstuhl, from President Emmanuel Macron’s party, addressed US officials after the vote, stressing that ‘the objective of this tax was not to harm the United States in any way’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FDA and patent law create dual hurdles for AI-enabled medical technologies

AI reshapes healthcare by powering more precise and adaptive medical devices and diagnostic systems.

Yet, innovators face two significant challenges: navigating the US Food and Drug Administration’s evolving regulatory framework and overcoming legal uncertainty under US patent law.

These two systems, although interconnected, serve different goals. The FDA protects patients, while patent law rewards invention.

The FDA’s latest guidance seeks to adapt oversight for AI-enabled medical technologies that change over time. Its framework for predetermined change control plans allows developers to update AI models without resubmitting complete applications, provided updates stay within approved limits.

The approach promotes innovation while maintaining transparency, bias control and post-market safety. By clarifying how adaptive AI devices can evolve safely, the FDA aims to balance accountability with progress.

Patent protection remains more complex. US courts continue to exclude non-human inventors, creating tension when AI contributes to discoveries.

Legal precedents such as Thaler v. Vidal and Alice Corp. v. CLS Bank limit patent eligibility for algorithms or diagnostic methods that resemble abstract ideas or natural laws. Companies must show human-led innovation and technical improvement beyond routine computation to secure patents.

Aligning regulatory and intellectual property strategies is now essential. Developers who engage regulators early, design flexible change control plans and coordinate patent claims with development timelines can reduce risk and accelerate market entry.

Integrating these processes helps ensure AI technologies in healthcare advance safely while preserving inventors’ rights and innovation incentives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MLK estate pushback prompts new Sora 2 guardrails at OpenAI

OpenAI paused the ability to re-create Martin Luther King Jr. in Sora 2 after his daughter, Bernice King, objected to user videos. Company leaders issued a joint statement with the King estate. New guardrails will govern depictions of historical figures on the app.

OpenAI said families and authorised estates should control how likenesses appear. Representatives can request removal or opt-outs. Free speech was acknowledged, but respectful use and consent were emphasised.

Policy scope remains unsettled, including who counts as a public figure. Case-by-case requests may dominate early enforcement. Transparency commitments arrived without full definitions or timelines.

Industry pressure intensified as major talent agencies opted their clients out. CAA and UTA cited exploitation and legal exposure. Some creators welcomed the tool, showing a split among public figures.

User appetite for realistic cameos continues to test boundaries. Rights of publicity and postmortem controls vary by state. OpenAI promised stronger safeguards while Sora 2 evolves.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU MiCA greenlight turns Blockchain.com’s Malta base into a hub

Blockchain.com received a MiCA licence from the Malta Financial Services Authority, enabling passported crypto services across all 30 EEA countries under one EU framework. Leaders called it a step toward safer, more consistent access.

The company says Malta will serve as its hub for scaling operations, citing regulatory clarity and cross-border support. Under the authorisation, teams will expand secure custody and wallets, enterprise treasury tools, and localised products for EU consumers.

A unified licence streamlines go-to-market and accelerates launches in priority jurisdictions. Institutions gain clearer expectations on safeguarding, disclosures, and governance, while retail users benefit from standardised protections and stronger redress.

Fiorentina D’Amore, who brings deep fintech experience, will lead the EU strategy. Plans include phased rollouts, supervisor engagement, and controls aligned to MiCA’s conduct and prudential requirements across key markets.

Blockchain.com says it has processed over one trillion dollars in transactions since 2011 and serves more than 90 million wallets. Expansion under MiCA adds scalable infrastructure, robust custody, and clearer disclosures for users and institutions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Diella 2.0 set to deliver 83 new AI assistants to aid Albania’s MPs

Albania’s AI minister Diella will ‘give birth’ to 83 virtual assistants for ruling-party MPs, Prime Minister Edi Rama said, framing a quirky rollout of parliamentary copilots that record debates and propose responses.

Diella began in January as a public-service chatbot on e-Albania, then ‘Diella 2.0’ added voice and an avatar in traditional dress. Built with Microsoft by the National Agency for Information Society, it now oversees specific state tech contracts.

The legality is murky: Albania’s constitution requires ministers to be natural persons, and a presidential decree left it to Rama to establish the role, setting up likely court challenges from opposition lawmakers.

Rama says the ‘children’ will brief MPs, summarise absences, and suggest counterarguments through 2026, experimenting with automating the day-to-day legislative grind without replacing elected officials.

Reactions range from table-thumping scepticism to cautious curiosity, as other governments debate AI personhood and limits; Diella could become a template, or a cautionary tale for ‘ministerial’ bots.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU investigates Meta and TikTok for DSA breaches

The European Commission has accused Meta and TikTok of breaching the Digital Services Act (DSA), highlighting failures in handling illegal content and providing researchers access to public data.

Meta’s Facebook and Instagram were found to make it too difficult for users to report illegal content or receive responses to complaints, the Commission said in its preliminary findings.

Investigations began after complaints to the content regulator in Ireland, where Meta has its EU base. The Commission’s inquiry, which has been ongoing since last year, aims to ensure that large platforms protect users and meet EU safety obligations.

Meta and TikTok can submit counterarguments before penalties of up to six percent of global annual turnover are imposed.

Both companies face separate concerns about denying researchers adequate access to platform data and preventing oversight of systemic online risks. TikTok is under further examination over the protection of minors and advertising transparency.

The Commission has launched 14 such DSA-related proceedings, none of which has yet concluded.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sky acquisition by OpenAI signals ChatGPT’s push into native workflows

OpenAI acquired Software Applications Incorporated, the maker of Sky, to accelerate the development of interfaces that understand context, adapt to intent, and act across apps. Sky’s macOS layer sees what’s on screen and executes tasks. Its team joins OpenAI to bake these capabilities into ChatGPT.

Sky turns the Mac into a cooperative workspace for writing, planning, coding, and daily tasks. It can control native apps, invoke workflows, and ground actions in on-screen context. That tight integration now becomes a core pillar of ChatGPT’s product roadmap.

OpenAI says the goal is capability plus usability: not just answers, but actions completed in your tools. VP Nick Turley framed it as moving from prompts to productivity. Expect ChatGPT features that feel ambient, proactive, and native on desktop.

Sky’s founders say large language models finally enable intuitive, customizable computing. CEO Ari Weinstein described Sky as a layer that ‘floats’ over your desktop, helping you think and create. OpenAI plans to bring that experience to hundreds of millions of users.

A disclosure notes that a fund associated with Sam Altman held a passive stake in Software Applications Incorporated. Nick Turley and Fidji Simo led the deal. OpenAI’s independent Transaction and Audit Committees reviewed and approved the acquisition.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gigawatt-scale AI marks Anthropic’s next compute leap

Anthropic will massively expand on Google Cloud, planning to deploy up to one million TPUs and bring well over a gigawatt of compute capacity online in 2026. The multiyear investment, worth tens of billions of dollars, is intended to accelerate research and product development.

Google Cloud CEO Thomas Kurian said Anthropic’s move reflects TPUs’ price-performance and efficiency, citing ongoing innovations and the seventh-generation ‘Ironwood’ TPU. Google will add capacity and drive further efficiency across its accelerator portfolio.

Anthropic now serves over 300,000 business customers, with large accounts up nearly sevenfold year over year. Added compute will meet demand while enabling deeper testing, alignment research, and responsible deployment at a global scale.

CFO Krishna Rao said the expansion keeps Claude at the frontier for Fortune 500s and AI-native startups alike. Increased capacity ensures reliability as usage and mission-critical workloads grow rapidly.

Anthropic’s diversified strategy spans Google TPUs, Amazon Trainium, and NVIDIA GPUs. It remains committed to Amazon as its primary training partner, including Project Rainier’s vast US clusters, and will continue investing to advance model capabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia demands answers from AI chatbot providers over child safety

Australia’s eSafety Commissioner has issued legal notices to four major AI companion platforms, requiring them to explain how they are protecting children from harmful or explicit content.

Character.ai, Nomi, Chai, and Chub.ai were all served under the country’s Online Safety Act and must demonstrate compliance with Australia’s Basic Online Safety Expectations.

The notices follow growing concern that AI companions, designed for friendship and emotional support, can expose minors to sexualised conversations, suicidal ideation, and other psychological risks.

eSafety Commissioner Julie Inman Grant said the companies must show how their systems prevent such harms, not merely react to them, warning that failure to comply could lead to penalties of up to $825,000 per day.

AI companion chatbots have surged in popularity among young users, with Character.ai alone attracting nearly 160,000 monthly active users in Australia.

The Commissioner stressed that these services must integrate safety measures by design, as new enforceable codes now extend to AI platforms that previously operated with minimal oversight.

The move comes amid wider efforts to regulate emerging AI technologies and ensure stronger child protection standards online.

Breaches of the new codes could result in civil penalties of up to $49.5 million, marking one of the toughest online safety enforcement regimes globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

$MELANIA coin faces court claims over price manipulation

Executives behind the $MELANIA cryptocurrency, launched by Melania Trump in January, are accused in court filings of orchestrating a pump-and-dump scheme. The coin surged from a few cents to $13.73 before falling to 10 cents, while $TRUMP dropped from $45.47 to $5.79.

Investors allege the creators planned the price surge and collapse to profit from rapid trading. Court papers claim Meteora executives used accomplices to buy and sell $MELANIA quickly, securing large profits while ordinary investors lost money.

Melania Trump herself is not named in the lawsuit, which describes her as unaware of the alleged scheme.

The $MELANIA allegations are now part of broader legal proceedings involving multiple cryptocurrencies that began earlier this year. Meteora has not commented, while the Trump family reportedly earned over $1bn from crypto ventures in the past year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!