Judge allows US antitrust case against Apple to proceed

A US federal judge has rejected Apple’s attempt to dismiss a major antitrust lawsuit, allowing the case to move forward. The ruling, issued Monday by District Judge Xavier Neals in New Jersey, marks a significant step in the Justice Department’s ongoing challenge to Apple’s business practices.

The lawsuit, filed 15 months ago, accuses Apple of building an illegal monopoly around the iPhone by erecting barriers that prevent competition and inflate profits. Neals’ 33-page opinion found the case strong enough to proceed to trial, which could begin as early as 2027.

Apple had argued the case was flawed, claiming the government misunderstood the smartphone market and distorted legal standards. But Judge Neals ruled there was sufficient evidence for the Justice Department’s claims to be tested in court.

At the heart of the lawsuit is Apple’s so-called ‘walled garden’ — a tightly controlled ecosystem of hardware and software. While Apple says this approach enhances user experience, the government claims it stifles innovation and raises prices.

The court agreed the case contained ‘several allegations of technological barricades that constitute anticompetitive conduct.’ Neals also warned of the ‘dangerous possibility’ that Apple’s control over the iPhone has crossed into illegal monopoly territory.

In response, Apple maintained its position, stating: ‘The DOJ’s case is wrong on the facts and the law.’
The company pledged to continue defending itself in court against the accusations.

The lawsuit is one of several legal threats confronting Apple, whose 2023 profits totalled $94 billion on $295 billion in revenue. In April, another judge barred Apple from charging fees on in-app purchases processed through alternative payment methods.

That ruling could cost the company billions in commission revenue, previously collected at rates of 15% to 30%. Additionally, a separate antitrust case may impact Apple’s agreement with Google, which is worth over $20 billion per year.

Under that deal, Google is the default search engine on Apple devices — a setup under scrutiny for its alleged anticompetitive effects. A Washington, DC judge is now considering whether to outlaw the arrangement as part of a broader case against Google.

On the same day as Neals’ ruling, Apple was also hit with a new lawsuit by app developer Proton.
The case seeks class-action status and accuses Apple of monopolistic behaviour that harms smaller developers and app creators.

Proton’s suit demands punitive damages and a court order to dismantle the walled garden approach central to Apple’s ecosystem. Combined with the DOJ case, the new lawsuit deepens Apple’s mounting legal pressures over its dominance in the digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI training with pirated books triggers massive legal risk

A US court has ruled that AI company Anthropic engaged in copyright infringement by downloading millions of pirated books to train its language model, Claude.

Although the court found that using copyrighted material for AI training could qualify as ‘fair use’ under US law when the content is transformed, it also held that acquiring the content illegally instead of licensing it lawfully constituted theft.

Judge William Alsup described AI as one of the most transformative technologies of our time, but found that Anthropic had obtained millions of digital books from pirate sites such as LibGen and Pirate Library Mirror.

He noted that buying the same books later in print form does not erase the initial violation, though it may reduce potential damages.

The penalties for wilful copyright infringement in the US could reach up to $150,000 per work, meaning total compensation might run into the billions.

The case highlights the fine line between transformation and theft and signals growing legal pressure on AI firms to respect intellectual property instead of bypassing established licensing frameworks.

Australia, which uses a ‘fair dealing’ system rather than ‘fair use’, already offers flexible licensing schemes through organisations like the Copyright Agency.

CEO Josephine Johnston urged policymakers not to weaken Australia’s legal framework in favour of global tech companies, arguing that licensing provides certainty for developers and fair payment to content creators.


EU urged to pause AI Act rollout

The digital sector is urging EU leaders to delay the AI Act, citing missing guidance and legal uncertainty. Industry group CCIA Europe warns that pressing ahead could damage AI innovation and stall the bloc’s economic ambitions.

The AI Act’s rules for general-purpose AI models are set to apply in August, but key frameworks are incomplete. Concerns have grown as the European Commission risks missing deadlines while the region seeks a €3.4 trillion AI-driven economic boost by 2030.

CCIA Europe is calling on EU heads of state to order a pause in implementation so that companies have time to comply. Such a delay would allow final standards to be established, offering developers clarity and supporting AI competitiveness.

Failure to adjust the timeline could leave Europe struggling to lead in AI, according to CCIA Europe’s leadership. A rushed approach, they argue, risks harming the very innovation the AI Act aims to promote.


OpenAI and io face lawsuit over branding conflict

OpenAI and hardware startup io, founded by former Apple designer Jony Ive, are now embroiled in a trademark infringement lawsuit filed by iyO, a Google-backed company specialising in custom headphones.

The legal case prompted OpenAI to withdraw promotional material linked to its $6.005 billion acquisition of io, raising questions about the branding of its future AI device.

Court documents reveal that OpenAI and io had previously met with iyO representatives and tested their custom earbud product, although the tests were unsuccessful.

Despite initial contact and discussions about potential collaboration, OpenAI rejected iyO’s proposals to invest, license, or acquire the company for $200 million. The lawsuit, however, does not centre on an earbud or wearable device, according to io’s co-founders.

Executives at io clarified in court that their prototype does not resemble iyO’s product and remains unfinished; it is neither wearable nor intended for sale within the next year.

OpenAI CEO Sam Altman described the joint project as an attempt to reimagine hardware interfaces. At the same time, Jony Ive expressed enthusiasm for the device’s early design, which he claims captured his imagination.

Court testimony and emails suggest io explored various technologies, including desktop, mobile, and portable designs. Internal communications also reference possible ergonomic research using 3D ear scan data.

Although the lawsuit has exposed some development details, the main product of the collaboration between OpenAI and io remains undisclosed.


Banks and tech firms create open-source AI standards

A group of leading banks and technology firms has joined forces to create standardised open-source controls for AI within the financial sector.

The initiative, led by the Fintech Open Source Foundation (FINOS), includes financial institutions such as Citi, BMO, RBC, and Morgan Stanley, working alongside major cloud providers like Microsoft, Google Cloud, and Amazon Web Services.

Known as the Common Controls for AI Services project, the effort seeks to build neutral, industry-wide standards for AI use in financial services.

The framework will be tailored to regulatory environments, offering peer-reviewed governance models and live validation tools to support real-time compliance. It extends FINOS’s earlier Common Cloud Controls framework, which originated with contributions from Citi.

Gabriele Columbro, Executive Director of FINOS, described the moment as critical for AI in finance. He emphasised the role of open source in encouraging early collaboration between financial firms and third-party providers on shared security and compliance goals.

Instead of isolated standards, the project promotes unified approaches that reduce fragmentation across regulated markets.

The project remains open for further contributions from financial organisations, AI vendors, regulators, and technology companies.

As part of the Linux Foundation, FINOS provides a neutral space for competitors to co-develop tools that enhance AI adoption’s safety, transparency, and efficiency in finance.


EU adviser backs Android antitrust ruling against Google

An adviser to the Court of Justice of the European Union has supported the EU’s antitrust ruling against Google, recommending the dismissal of its appeal over a €4.1bn fine. The case concerns Google’s use of its Android mobile system to limit competition through pre-installed apps and contractual restrictions.

The original €4.34bn fine was imposed by the European Commission in 2018 and later reduced by the General Court.

Google then appealed to the EU’s top court, but Advocate-General Juliane Kokott concluded that Google’s practices gave it unfair market advantages.

Kokott rejected Google’s argument that its actions should be assessed against an equally efficient competitor, noting Google’s dominance in the Android ecosystem and the robust network effects it enjoys.

She argued that bundling Google Search and Chrome with the Play Store created barriers for competitors.

The final court ruling is expected in the coming months and could shape Google’s future regulatory obligations in Europe. Google has already incurred over €8 billion in EU antitrust fines across several investigations.


WhatsApp ad rollout in EU slower than global pace amid privacy scrutiny

Meta is gradually rolling out advertising features on WhatsApp globally, starting with the Updates tab, where users follow channels and may see sponsored content.

Although the global rollout remains on track, the Irish Data Protection Commission has indicated that a full rollout across the EU will not occur before 2026. The delay reflects ongoing regulatory scrutiny, particularly over privacy compliance.

Concerns have emerged regarding how user data from Meta platforms like Facebook, Instagram, and Messenger might be used to target ads on WhatsApp.

Privacy group NOYB had previously voiced criticism about such cross-platform data use. However, Meta clarified that these concerns are not directly applicable to the current WhatsApp ad model.

According to Meta, linking WhatsApp to the Meta Accounts Center, which allows cross-app ad personalisation, is optional and off by default.

If users do not link their WhatsApp accounts, only limited data sourced from WhatsApp (such as city, language, followed channels, and ad interactions) will be used for ad targeting in the Updates tab.

Meta maintains that this approach aligns with EU privacy rules. Nonetheless, regulators are expected to carefully assess Meta’s implementation, especially in light of recent judgments against the company’s ‘pay or consent’ model under the Digital Markets Act.

Meta recently reduced the cost of its ad-free subscriptions in the EU, signalling a willingness to adapt—but the company continues to prioritize personalized advertising globally as part of its long-term strategy.


DeepSeek under fire for alleged military ties and export control evasion

The United States has accused Chinese AI startup DeepSeek of assisting China’s military and intelligence services while allegedly seeking to evade export controls to obtain advanced American-made semiconductors.

The claims, made by a senior US State Department official speaking anonymously to Reuters, add to growing concerns over the global security risks posed by AI.

DeepSeek, based in Hangzhou, China, gained international attention earlier this year after claiming its AI models rivalled those of leading United States firms like OpenAI—yet at a fraction of the cost.

However, US officials now say that the firm has shared data with Chinese surveillance networks and provided direct technological support to the People’s Liberation Army (PLA). According to the official, DeepSeek has appeared in over 150 procurement records linked to China’s defence sector.

The company is also suspected of transmitting data from foreign users, including Americans, through backend infrastructure connected to China Mobile, a state-run telecom operator. DeepSeek has not responded publicly to questions about these privacy or security issues.

The official further alleges that DeepSeek has been trying to access Nvidia’s restricted H100 AI chips by creating shell companies in Southeast Asia and using foreign data centres to run AI models on US-origin hardware remotely.

While Nvidia maintains it complies with export restrictions and has not knowingly supplied chips to sanctioned parties, DeepSeek is said to have secured several H100 chips despite the ban.

US officials have yet to place DeepSeek on a trade blacklist, though the company is under scrutiny. Meanwhile, Singapore has already charged three men with fraud in an investigation into the suspected illegal movement of Nvidia chips to DeepSeek.

Questions have also been raised over the credibility of DeepSeek’s technological claims. Experts argue that the reported $5.58 million spent on training its flagship models is unrealistically low, especially given the compute scale typically required to match the performance of OpenAI or Meta.

DeepSeek has remained silent amid the mounting scrutiny. Still, with the US-China tech race intensifying, the firm could soon find itself at the centre of new trade sanctions and geopolitical fallout.


EU AI Act challenges 68% of European businesses, AWS report finds

As AI becomes integral to digital transformation, European businesses struggle to adapt to new regulations like the EU AI Act.

A report commissioned by AWS and Strand Partners revealed that 68% of surveyed companies find the EU AI Act difficult to interpret, with compliance absorbing around 40% of IT budgets.

Businesses unsure of regulatory obligations are expected to invest nearly 30% less in AI over the coming year, risking a slowdown in innovation across the continent.

The EU AI Act, effective since August 2024, introduces a phased risk-based framework to regulate AI in the EU. Some key provisions, including banned practices and AI literacy rules, are already enforceable.

Over the next year, further requirements will roll out, affecting AI system providers, users, distributors, and non-EU companies operating within the EU. The law prohibits exploitative AI applications and imposes strict rules on high-risk systems while promoting transparency in low-risk deployments.

AWS has reaffirmed its commitment to responsible AI in line with the EU AI Act. The company supports customers through initiatives like AI Service Cards, its Responsible AI Guide, and Bedrock Guardrails.

AWS was the first major cloud provider to receive ISO/IEC 42001 certification for its AI offerings and continues to engage with EU institutions to align on best practices. Amazon’s AI Ready Commitment also offers free education on responsible AI development.

Despite the regulatory complexity, AWS encourages its customers to assess how their AI usage fits within the EU AI Act and adopt safeguards accordingly.

As compliance remains a shared responsibility, AWS provides tools and guidance, but customers must ensure their applications meet the legal requirements. The company updates customers as enforcement advances and new guidance is issued.


Sam Altman claims OpenAI team rejecting Meta’s mega offers

Meta is intensifying efforts to recruit AI talent from OpenAI by offering signing bonuses worth up to $100 million and multimillion-dollar annual salaries. However, OpenAI CEO Sam Altman claims none of the company’s top researchers have accepted the offers.

Speaking on the Uncapped podcast, Altman said Meta had approached his team with ‘giant offers’, but OpenAI’s researchers stayed loyal, believing the company has a better chance of achieving superintelligence—AI that surpasses human capabilities.

OpenAI, where the average employee reportedly earns around $1.13 million a year, fosters a mission-driven culture focused on building AI for the benefit of humanity, Altman said.

Meta, meanwhile, is assembling a 50-person Superintelligence Lab, with CEO Mark Zuckerberg personally overseeing recruitment. Bloomberg reported that offers from Meta have reached seven to nine figures in total compensation.

Despite the aggressive approach, Meta appears to be losing some of its own researchers to rivals. VC principal Deedy Das recently said Meta lost three AI researchers to OpenAI and Anthropic, even after offering over $2 million annually.

In a bid to acquire more talent, Meta has also invested $14.3 billion in Scale AI, securing a 49% stake and bringing CEO Alexandr Wang into its Superintelligence Lab leadership.

Meta says its AI assistant now reaches one billion monthly users, while OpenAI reports 500 million weekly active users globally.
