Labels press platforms to curb AI slop and protect artists

Luke Temple woke to messages about a new Here We Go Magic track he never made. An AI-generated song appeared on the band’s Spotify, Tidal, and YouTube pages, triggering fresh worries about impersonation as cheap tools flood platforms.

Platforms say defences are improving. Spotify confirmed the removal of the fake track and highlighted new safeguards against impersonation, plus a tool to flag mismatched releases pre-launch. Tidal said it removed the song and is upgrading AI detection. YouTube did not comment.

Industry teams describe a cat-and-mouse race. Bad actors exploit third-party distributors with light verification, slipping AI pastiches into official pages. Tools like Suno and Udio enable rapid cloning, encouraging volume spam that targets dormant and lesser-known acts.

Per-track revenue losses are tiny, but the reputational damage is not. Artists warn that identity theft and fan confusion erode trust, especially when fakes sit beside legitimate catalogues or mimic deceased performers. Labels caution that volume is outpacing takedowns across major services.

Proposed fixes include stricter distributor onboarding, verified artist controls, watermark detection, and clear AI labels for listeners. Rights holders want faster escalation and penalties for repeat offenders. Musicians monitor profiles and report issues, yet argue platforms must shoulder the heavier lift.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Yuan says AI ‘digital twins’ could trim meetings and the workweek

AI could shorten the workweek, says Zoom’s Eric Yuan. At TechCrunch Disrupt, he pitched AI ‘digital twins’ that attend meetings, negotiate drafts, and triage email, arguing assistants will shoulder routine tasks so humans focus on judgement.

Yuan has already used an AI avatar on an investor call to demonstrate how a stand-in can speak on a person's behalf. He said Zoom will keep investing heavily in assistants that understand context, prioritise messages, and draft responses.

Use cases extend beyond meetings. Yuan described counterparts sending their digital twins to hash out deal terms before principals join to resolve open issues, saving hours of live negotiation and accelerating consensus across teams and time zones.

Zoom plans to infuse AI across its suite, including whiteboards and collaborative docs, so work moves even when people are offline. Yuan said assistants will surface what matters, propose actions, and help execute routine workflows securely.

If adoption scales, Yuan sees schedules changing. He floated a five-year goal where many knowledge workers shift to three or four days a week, with AI increasing throughput, reducing meeting load, and improving focus time across organisations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elon Musk launches AI-powered Grokipedia to rival Wikipedia

Elon Musk has launched Grokipedia, an AI-driven online encyclopedia developed by his company xAI. The platform, described as an alternative to Wikipedia, debuted on Monday with over 885,000 articles written and verified by AI.

Musk claimed the early version already surpasses Wikipedia in quality and transparency, promising significant improvements with the release of version 1.0.

Unlike Wikipedia’s crowdsourced model, Grokipedia does not allow users to edit content directly. Instead, users can request modifications through xAI’s chatbot Grok, which decides whether to implement changes and explains its reasoning.

Musk said the project’s guiding principle is ‘the truth, the whole truth, and nothing but the truth,’ acknowledging the platform’s imperfections while pledging continuous refinement.

However, Grokipedia’s launch has raised questions about originality. Several entries contain disclaimers crediting Wikipedia under a Creative Commons licence, with some articles appearing nearly identical to their Wikipedia counterparts.

Musk confirmed awareness of the issue and stated that improvements are expected before the end of the year. The Wikimedia Foundation, which operates Wikipedia, responded calmly, noting that human-created knowledge remains at the heart of its mission.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FDA and patent law create dual hurdles for AI-enabled medical technologies

AI is reshaping healthcare by powering more precise and adaptive medical devices and diagnostic systems.

Yet, innovators face two significant challenges: navigating the US Food and Drug Administration’s evolving regulatory framework and overcoming legal uncertainty under US patent law.

These two systems, although interconnected, serve different goals. The FDA protects patients, while patent law rewards invention.

The FDA’s latest guidance seeks to adapt oversight for AI-enabled medical technologies that change over time. Its framework for predetermined change control plans allows developers to update AI models without resubmitting complete applications, provided updates stay within approved limits.

The approach promotes innovation while maintaining transparency, bias control and post-market safety. By clarifying how adaptive AI devices can evolve safely, the FDA aims to balance accountability with progress.

Patent protection remains more complex. US courts continue to exclude non-human inventors, creating tension when AI contributes to discoveries.

Legal precedents such as Thaler v. Vidal and Alice Corp. v. CLS Bank limit patent eligibility for algorithms or diagnostic methods that resemble abstract ideas or natural laws. Companies must show human-led innovation and technical improvement beyond routine computation to secure patents.

Aligning regulatory and intellectual property strategies is now essential. Developers who engage regulators early, design flexible change control plans and coordinate patent claims with development timelines can reduce risk and accelerate market entry.

Integrating these processes helps ensure AI technologies in healthcare advance safely while preserving inventors’ rights and innovation incentives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Celebrity estates push back on Sora as app surges to No.1

OpenAI’s short-video app Sora topped one million downloads in under a week, then ran headlong into a likeness-rights firestorm. Celebrity families and studios demanded stricter controls. Estates for figures like Martin Luther King Jr. sought blocks on unauthorised cameos.

Users showcased hyperreal mashups that blurred satire and deception, from cartoon crossovers to dead celebrities in improbable scenes. All clips were AI-made, yet reposts across platforms spread confusion. Viewers faced a constant real-or-fake dilemma.

Rights holders pressed for consent, compensation, and veto power over characters and personas. OpenAI shifted toward opt-in for copyrighted properties and enabled estate requests to restrict cameos. Policy language on who qualifies as a public figure remains fuzzy.

Agencies and unions amplified pressure, warning of exploitation and reputational risks. Detection firms reported a surge in takedown requests for unauthorised impersonations. Watermarks exist, but removal tools undercut provenance and complicate enforcement.

Researchers warned about a growing fog of doubt as realistic fakes multiply. Every day, people are placed in deceptive scenarios, while bad actors exploit deniability. OpenAI promised stronger guardrails as Sora scales within tighter rules.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic boosts cloud capacity with Google’s AI hardware

Anthropic has struck a multibillion-dollar deal with Google to expand its use of cloud computing and specialised AI chips. The agreement includes the purchase of up to one million Tensor Processing Units, Google’s custom hardware built to train and run large AI models.

The partnership will provide Anthropic with more than a gigawatt of additional computing power by late 2026. Executives said the move will support soaring demand for its Claude model family, which already serves over 300,000 business clients.

Anthropic, founded by former OpenAI employees, has quickly become a major player in generative AI. Backed by Amazon and valued at $183 billion, the company recently launched Claude Sonnet 4.5, praised for its coding and reasoning abilities.

Google continues to invest heavily in AI hardware to compete with Nvidia’s GPUs and rival US tech giants. Analysts said Anthropic’s expansion signals intensifying demand for computing power as companies race to lead the global AI revolution.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech giants push AI agents into web browsing

Tech companies are intensifying competition to reshape how people search online through AI-powered browsers. OpenAI’s new Atlas browser, built around ChatGPT, can generate answers and complete web-based tasks such as making shopping lists or reservations.

Atlas joins rivals like Microsoft’s Copilot-enabled Edge, Perplexity’s Comet, and newer platforms Dia and Neon. Developers are moving beyond traditional assistants, creating ‘agentic’ AI capable of acting autonomously while keeping user experience familiar.

Google remains dominant, with Chrome holding over 70 percent of the browser market and integrating limited AI features. Analysts say OpenAI could challenge that control by combining ChatGPT insights with browser behaviour to personalise search and advertising.

Experts note the battle extends beyond browsers as wearables and voice interfaces evolve. Controlling how users interact with AI today, they argue, could determine which company shapes digital habits in the coming decade.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple may have to pay $1.9B in damages to UK consumers over unfair App Store fees

Apple could face damages of up to £1.5 billion ($1.9 billion) after a British court ruled it overcharged consumers by imposing unfair commission fees on app developers.

The Competition Appeal Tribunal found that Apple abused its dominant position between 2015 and 2020 by charging excessive commissions, up to 30%, on App Store purchases and in-app payments. Judges ruled that the company’s fees should not have exceeded 17.5% for app sales and 10% for in-app transactions, concluding that half of the inflated costs were passed on to consumers.

The total damages, to be set next month, would compensate users who paid higher prices for apps, subscriptions and digital purchases. Apple said it will appeal, arguing that the App Store ‘helps developers succeed and provides consumers with a safe and trusted place to discover apps and make payments’.

The ruling adds to Apple’s growing list of competition battles in Europe, where the company continues to resist tighter antitrust regulation. Courts in the Netherlands and Belgium have accused it of blocking alternative payment methods and charging excessive commissions, while similar lawsuits are ongoing in the United States.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN cybercrime treaty signed in Hanoi amid rights concerns

Around 60 countries signed a landmark UN cybercrime convention in Hanoi, seeking faster cooperation against online crime. Leaders cited trillions in annual losses from scams, ransomware, and trafficking. The pact enters into force after 40 ratifications.

Supporters say the treaty will streamline evidence sharing, extradition requests, and joint investigations. Provisions target phishing, ransomware, online exploitation, and hate speech. Backers frame the deal as a boost to global security.

Critics warn the text’s breadth could criminalise security research and dissent. The Cybersecurity Tech Accord called it a surveillance treaty. Activists fear expansive data sharing with weak safeguards.

The UNODC argues the agreement includes rights protections and space for legitimate research. Officials say oversight and due process remain essential. Implementation choices will decide outcomes on the ground.

The EU, Canada, and Russia signed in Hanoi, underscoring geopolitical buy-in. As host, Vietnam drew scrutiny over censorship and arrests. Officials there cast the treaty as a step toward resilience and stature.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

MLK estate pushback prompts new Sora 2 guardrails at OpenAI

OpenAI paused the ability to re-create Martin Luther King Jr. in Sora 2 after Bernice King objected to user videos. Company leaders issued a joint statement with the King estate. New guardrails will govern depictions of historical figures on the app.

OpenAI said families and authorised estates should control how likenesses appear. Representatives can request removal or opt-outs. Free speech was acknowledged, but respectful use and consent were emphasised.

Policy scope remains unsettled, including who counts as a public figure. Case-by-case requests may dominate early enforcement. Transparency commitments arrived without full definitions or timelines.

Industry pressure intensified as major talent agencies opted their clients out. CAA and UTA cited exploitation and legal exposure. Some creators welcomed the tool, showing a split among public figures.

User appetite for realistic cameos continues to test boundaries. Rights of publicity and postmortem controls vary by state. OpenAI promised stronger safeguards while Sora 2 evolves.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!