Apple sues European Commission over DMA interoperability ruling

Apple is mounting a legal challenge against the European Commission after being ordered to open up its tightly controlled ecosystem to rival companies under the Digital Markets Act (DMA).

The tech giant filed its appeal with the EU’s General Court, claiming the decision would undermine user privacy and harm innovation.

The dispute centres on a March ruling by the Commission, issued after months of dialogue, which concluded that Apple must guarantee interoperability: a requirement that would allow third-party developers to connect non-Apple products, such as smartwatches and headphones, to iPhones and iPads.

Apple has pushed back strongly, arguing that the mandate is ‘unreasonable, costly and stifles innovation.’ A company spokesperson said the move would benefit what Apple describes as ‘data-hungry companies’ like Meta and Samsung, which could gain access to users’ most sensitive data through third-party connections.

Since December 2024, the European Commission has been pressing Apple to make its ecosystem more open to promote competition across the digital sector. However, Apple maintains that complying with the order would compromise the company’s privacy-first approach and violate its data protection standards.

The Commission, meanwhile, insists the measures are proportionate and fully aligned with the EU’s stringent privacy and security framework. It argues that the order would not strip Apple of control over its devices, but rather enable fairer access for other tech players while keeping user protections intact.

The case is set to become a major test of how far the EU can push tech giants to comply with the Digital Markets Act, which was designed to curb the dominance of so-called ‘gatekeepers’ in digital markets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit accuses Anthropic of misusing user content

Reddit has taken legal action against AI startup Anthropic, alleging that the company scraped its platform without permission and used the data to train and commercialise its Claude AI models.

The lawsuit, filed in San Francisco’s Superior Court, accuses Anthropic of breach of contract, unjust enrichment, and interference with Reddit’s operations.

According to Reddit, Anthropic accessed the platform more than 100,000 times despite publicly claiming to have stopped doing so.

The complaint claims Anthropic ignored Reddit’s technical safeguards, such as robots.txt files, and bypassed the platform’s user agreement to extract large volumes of user-generated content.
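As an illustration of the safeguard at issue, here is a minimal sketch of how a compliant crawler consults robots.txt before fetching a page, using Python’s standard urllib.robotparser; the bot name and page URL are hypothetical, not details from the filing.

    import urllib.robotparser

    # Fetch and parse the platform's robots.txt, which tells crawlers
    # which paths they may not request.
    parser = urllib.robotparser.RobotFileParser()
    parser.set_url("https://www.reddit.com/robots.txt")
    parser.read()

    # A compliant scraper runs this check before every request; the complaint
    # alleges Anthropic's crawlers disregarded such directives.
    page = "https://www.reddit.com/r/example/"  # hypothetical page
    if parser.can_fetch("ExampleBot", page):
        print("robots.txt permits fetching", page)
    else:
        print("robots.txt disallows fetching", page)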

Reddit argues that Anthropic’s actions undermine its licensing deals with companies like OpenAI and Google, which have agreed to strict content usage and deletion protocols.

The filing asserts that Anthropic intentionally used personal data from Reddit without ever seeking user consent, calling the company’s conduct deceptive. Despite public statements suggesting respect for privacy and web-scraping limitations, Anthropic is portrayed as having disregarded both.

The lawsuit even cites Anthropic’s own 2021 research that acknowledged Reddit content as useful in training AI models.

Reddit is now seeking damages, repayment of profits, and a court order to stop Anthropic from using its data further. The market responded positively, with Reddit’s shares closing about 0.67% higher at $118.21, suggesting investor support for the company’s aggressive stance on data protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Morocco detains suspect in France’s crypto abduction cases

Moroccan police have arrested Badiss Mohamed Amide Bajjou, a 24-year-old French-Moroccan dual national wanted in connection with the kidnapping of cryptocurrency holders in France. An Interpol red notice issued by French authorities led to his identification and arrest.

The charges include organised crime, kidnapping, and extortion. Because his Moroccan nationality rules out extradition, he will face trial in Morocco, with French prosecutors sharing their case files.

The arrest follows a recent surge in violent attacks on crypto entrepreneurs in France. Interior Minister Bruno Retailleau has introduced emergency security measures, including private consultations and home risk assessments for those at risk.

France has seen 14 of the world’s 50 known attacks on crypto figures over the past year, according to Ledger co-founder Éric Larchevêque.

High-profile incidents include the attempted abduction of Paymium CEO Pierre Noizat’s daughter and the arrest of seven suspects in a kidnapping in which the victim’s finger was severed. Officials stress the urgency of judicial action to prevent further violence.

French authorities have thanked Morocco for its cooperation, while proceedings against Bajjou will continue under Moroccan jurisdiction.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyber attack hits Lee Enterprises staff data

Thousands of current and former employees at Lee Enterprises have had their data exposed following a cyberattack earlier this year.

Hackers gained access to the company’s systems in early February, compromising sensitive information such as names and Social Security numbers before the breach was contained the same day.

Although the media firm, which operates over 70 newspapers across 26 US states, swiftly secured its networks, a three-month investigation involving external cybersecurity experts revealed that attackers accessed databases containing employee details.

The breach potentially affects around 40,000 individuals — far more than the company’s 4,500 current staff — indicating that past employees were also impacted.

The stolen data could be used for identity theft, fraud or phishing attempts. Criminals may even impersonate affected employees to infiltrate deeper into company systems and extract more valuable information.

Lee Enterprises has notified those impacted and filed relevant disclosures with authorities, including the Maine Attorney General’s Office.

Headquartered in Iowa, Lee Enterprises draws over 200 million monthly online page views and generated over $611 million in revenue in 2024. The incident underscores the ongoing vulnerability of media organisations to cyber threats, especially when personal employee data is involved.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Eminem sues Meta over copyright violations

Eminem has filed a major lawsuit against Meta, accusing the tech giant of knowingly enabling widespread copyright infringement across its platforms. The rapper’s publishing company, Eight Mile Style, is seeking £80.6 million (about $109 million) in damages, claiming 243 of his songs were used without authorisation.

The lawsuit argues that Meta, which owns Facebook, Instagram and WhatsApp, allowed tools such as Original Audio and Reels to encourage unauthorised reproduction and use of Eminem’s music.

The filing claims this use occurred without proper licensing or attribution, significantly diminishing the value of his copyrights.

Eminem’s legal team contends that Meta profited from the infringement instead of ensuring his works were protected. If a settlement cannot be reached, the artist is demanding the maximum statutory damages of $150,000 per song on each of Meta’s three platforms, which would amount to over $109 million.

Meta has faced similar lawsuits before, including a high-profile case in 2022 brought by Epidemic Sound, which alleged the unauthorised use of thousands of its tracks. The latest claim adds to growing pressure on social media platforms to address copyright violations more effectively.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI turns ChatGPT into AI gateway

OpenAI plans to reinvent ChatGPT as an all-in-one ‘super assistant’ that knows its users and becomes their primary gateway to the internet.

Details emerged from a partly redacted internal strategy document shared during the US government’s antitrust case against Google.

Rather than limiting ChatGPT to existing apps and websites, OpenAI envisions a future where the assistant supports everyday life—from suggesting recipes at home to taking notes at work or guiding users while travelling.

The company says the AI should evolve into a reliable, emotionally intelligent helper capable of handling a wide range of personal and professional tasks.

OpenAI also believes hardware will be key to this transformation. It recently acquired io, a start-up founded by former Apple designer Jony Ive, for $6.4 billion to develop AI-powered devices.

The company’s strategy outlines how upcoming models like o2 and o3, alongside capabilities such as multimodality and generative user interfaces, could make ChatGPT capable of taking meaningful action instead of simply offering responses.

The document also reveals OpenAI’s intention to back regulation requiring tech platforms to allow users to set ChatGPT as their default assistant. Confident in its fast growth, its lead in research, and its independence from advertising, the company aims to maintain its advantage through bold decisions, speed, and self-disruption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NSO asks court to overturn WhatsApp verdict

Israeli spyware company NSO Group has requested a new trial after a US jury ordered it to pay $168 million in damages to WhatsApp.

The company, which has faced mounting legal and financial troubles, filed a motion in a California federal court last week seeking to reduce the verdict or secure a retrial.

The May verdict awarded WhatsApp $444,719 in compensatory damages and $167.25 million in punitive damages. Jurors found that NSO exploited vulnerabilities in the encrypted platform and sold the exploit to clients who allegedly used it to target journalists, activists and political rivals.

WhatsApp, owned by Meta, filed the lawsuit in 2019.

NSO claims the punitive award is unconstitutional, arguing that it is more than 376 times the compensatory damages and far exceeds the US Supreme Court’s general guidance of a 4:1 ratio.

The firm also said it cannot afford the penalty, citing losses of $9 million in 2023 and $12 million in 2024. Its CEO testified that the company is ‘struggling to keep our heads above water’.

WhatsApp, responding to TechCrunch in a statement, said NSO was once again trying to evade accountability. The company vowed to continue its legal campaign, including efforts to secure a permanent injunction that would prevent NSO from ever targeting WhatsApp or its users again.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Courts consider limits on AI evidence

A rule newly proposed through the Judicial Conference of the United States could reshape how AI-generated evidence is treated in court. Dubbed Rule 707, it would allow machine-generated evidence to be admitted only if it meets the same reliability standards required of expert testimony under Rule 702.

However, it would not apply to outputs from simple scientific instruments or widely used commercial software. The rule aims to address concerns about the reliability and transparency of AI-driven analysis, especially when used without a supporting expert witness.

Critics argue that limiting the rule to machine output offered without an expert witness makes it overly narrow, since the underlying risks of bias and interpretability persist whether or not an expert is involved. They suggest that all machine-generated evidence in US courts should be subject to robust scrutiny.

The Advisory Committee is also weighing how broadly terminology such as ‘machine learning’ should be defined, to prevent Rule 707 from encompassing more than intended. Meanwhile, a separate proposed rule on deepfakes has been shelved because courts already have tools to address such forgeries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta faces backlash over open source AI claims

Meta is under renewed scrutiny for what critics describe as ‘open washing’ after sponsoring a Linux Foundation whitepaper on the benefits of open source AI.

The paper highlights how open models help reduce enterprise costs—claiming companies using proprietary AI tools spend over three times more. However, Meta’s involvement has raised questions, as its Llama AI models are presented as open source despite industry experts insisting otherwise.

Amanda Brock, head of OpenUK, argues that Llama does not meet accepted definitions of open source due to licensing terms that restrict commercial use.

She referenced the Open Source Initiative’s (OSI) standards, which Llama fails to meet, pointing to the presence of commercial limitations that contradict open source principles. Brock noted that open source should allow unrestricted use, which Llama’s license does not support.

Meta has long branded its Llama models as open source, but the OSI and other stakeholders have repeatedly pushed back, stating that the company’s licensing undermines the very foundation of open access.

While Brock acknowledged Meta’s contribution to the broader open source conversation, she also warned that such mislabelling could have serious consequences—especially as lawmakers and regulators increasingly reference open source in crafting AI legislation.

Other firms have faced similar allegations, including Databricks with its DBRX model in 2024, which was also criticised for failing to meet OSI standards. As the AI sector continues to evolve, the line between truly open and merely accessible models remains a point of growing tension.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU says US tech firms censor more

Far more online content is removed under US tech firms’ terms and conditions than under the EU’s Digital Services Act (DSA), according to Tech Commissioner Henna Virkkunen.

Her comments respond to criticism from American tech leaders, including Elon Musk, who have labelled the DSA a threat to free speech.

In an interview with Euractiv, Virkkunen said recent data show that 99% of content removals in the EU between September 2023 and April 2024 were carried out by platforms like Meta and X based on their own rules, not due to EU regulation.

Only 1% of cases involved ‘trusted flaggers’ — vetted organisations that report illegal content to national authorities. Just 0.001% of those reports led to an actual takedown decision by authorities, she added.

The DSA’s transparency rules made those figures available. ‘Often in the US, platforms have more strict rules with content,’ Virkkunen noted.

She gave examples such as discussions about euthanasia and nude artworks, which are often removed under US platform policies but remain online under European guidelines.

Virkkunen recently met with US tech CEOs and lawmakers, including Republican Congressman Jim Jordan, a prominent critic of the DSA and the DMA.

She said the data helped clarify how EU rules actually work. ‘It is important always to underline that the DSA only applies in the European territory,’ she said.

While pushing back against American criticism, Virkkunen avoided direct attacks on individuals like Elon Musk or Mark Zuckerberg. She suggested platform resistance reflects business models and service design choices.

Asked about delays in final decisions under the DSA — including open cases against Meta and X — Virkkunen stressed the need for a strong legal basis before enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!