Apple is preparing to introduce its Apple Intelligence AI features to iPhones in China by mid-year. Efforts include significant software adaptations and collaboration with local partners to meet the country’s regulatory requirements.
Teams based in China and the US are actively working to customise the Apple Intelligence platform for the region. Insiders suggest the launch could happen as early as May, provided technical and regulatory challenges are resolved.
Regulatory compliance remains a critical hurdle for Apple. The project reflects the company’s growing emphasis on localising its technology for key international markets, including China.
For more information on these topics, visit diplomacy.edu.
Tencent, a leading Chinese tech giant, has won a significant copyright infringement case in a US district court. The ruling resulted in an award of nearly $85 million after a Taiwan-based TV box firm, Unblock Tech, and its distributors were found liable for infringing copyrights on over 1,500 Tencent shows, including ‘Little Days’ and ‘Three Body Problem’.
The judgement was handed down by the US District Court for the Western District of Texas. Tencent units, including Tencent Penguin Film and Tencent Technology, brought the case against the defendants for copying, distributing, and importing the content without permission.
Neither Tencent nor Unblock Tech provided comments regarding the ruling.
Analyst Vivian Toh highlighted the challenges of addressing cross-border infringement, stating that Tencent’s success underscores its commitment to protecting intellectual property.
The win reflects the broader issue of video content piracy, which remains a persistent problem across global markets.
Tencent has also pursued similar cases in China, targeting ByteDance, the owner of TikTok and Douyin, over unauthorised use of its content. The outcomes of those lawsuits have yet to be publicly disclosed.
OpenAI has announced plans to simplify its artificial intelligence product line by combining its o-series and GPT-series models into a unified system. CEO Sam Altman revealed the strategy in a post on X, highlighting the need for more accessible AI tools.
The decision marks a shift away from standalone releases, such as the previously unveiled o3 and o3 mini models.
The company aims to launch GPT-5 as a comprehensive AI system that incorporates the features of earlier models, addressing user concerns about complexity. Altman stressed the importance of creating tools that ‘just work’, though he gave no exact timeline for the rollout.
OpenAI also plans to release GPT-4.5, codenamed ‘Orion’, as its final non-chain-of-thought model.
The announcement follows increased scrutiny over AI development costs, with competitors like China’s DeepSeek introducing more affordable alternatives. The move aligns with OpenAI’s efforts to remain competitive while addressing usability issues.
By streamlining its offerings, OpenAI hopes to deliver systems capable of handling diverse tasks and leveraging available tools seamlessly. The new roadmap reflects a broader industry trend towards efficiency and user-centric design.
The European Commission has abandoned proposed regulations on technology patents, AI liability, and privacy rules for messaging apps, citing a lack of foreseeable agreement among EU lawmakers and member states. The draft rules faced strong opposition from industry groups and major technology firms. A proposed regulation on standard essential patents, designed to streamline licensing disputes for telecom and smart device technologies, was scrapped after opposition from patent holders like Nokia and Ericsson. Car manufacturers and tech giants such as Apple and Google had pushed for reforms to reduce royalty costs.
A proposal that would have allowed consumers to sue AI developers for harm caused by their technology was also withdrawn. The AI Liability Directive, first introduced in 2022, aimed to hold providers accountable for failures in AI systems. Legal experts say the move does not indicate a shift in the EU’s approach to AI regulation, as several laws already govern the sector. Meanwhile, plans to extend telecom privacy rules to platforms like WhatsApp and Skype have been dropped. The proposal, first introduced in 2017, had been stalled due to disagreements over tracking cookies and child protection measures.
The decision has drawn mixed reactions from industry groups. Nokia welcomed the withdrawal of patent rules, arguing they would have discouraged European investment in research and development. The Fair Standards Alliance, representing firms such as BMW, Tesla, and Google, expressed disappointment, warning that the decision undermines fair patent licensing. The Commission has stated it will reassess the need for revised proposals but has not provided a timeline for future regulatory efforts.
Thomson Reuters has won a legal battle against Ross Intelligence, after a judge ruled that the legal research firm’s use of Thomson Reuters’ legal content to train an AI model violated US copyright law. The case stems from a 2020 lawsuit in which Thomson Reuters accused the now-defunct firm of using its Westlaw platform to build a competing AI system without permission.
Judge Stephanos Bibas ruled that Ross Intelligence’s use of the content did not qualify as ‘fair use’ under US copyright law, which permits limited use of copyrighted material for purposes such as teaching or research. Thomson Reuters expressed satisfaction with the ruling, stating that copying its content for AI training was not a fair use.
This case is part of a broader trend of legal challenges involving AI and copyright issues, with authors, artists, and music labels filing similar lawsuits against AI developers for using their works without compensation. These cases all involve the claim that tech companies have used vast amounts of human-created content to train AI models, raising concerns about intellectual property rights and the ethics of AI development.
Elon Musk’s $97.4 billion bid to acquire OpenAI’s assets has sparked controversy, with OpenAI accusing him of contradicting his own legal claims.
Musk’s lawsuit, filed in August, argues that OpenAI’s assets should remain in a charitable trust and not be transferred for private gain. OpenAI has called his offer ‘an improper bid to undermine a competitor’.
The dispute comes as OpenAI seeks to transition into a for-profit organisation to secure funds for advanced AI development. Musk, a co-founder of OpenAI who left before ChatGPT’s rise in 2022, launched his own AI startup, xAI, in 2023.
OpenAI’s letter to a federal court highlights the clash between Musk’s stated opposition to privatising its assets and his attempt to acquire them with private investors. The AI company argues that Musk’s bid undermines his legal position and the nonprofit’s mission.
Representatives for Musk have yet to comment. OpenAI continues to defend its transition plan, emphasising the need for substantial investment to remain competitive in the fast-evolving AI landscape.
Baidu plans to make its AI chatbot, Ernie Bot, free for all users starting 1 April. The service, which will be accessible on both desktop and mobile platforms, reflects the company’s confidence in improved technology and reduced operational costs.
China’s AI sector is heating up, with DeepSeek emerging as a notable rival. DeepSeek offers free chatbot services that it claims rival OpenAI’s advanced systems while maintaining lower costs.
Despite Baidu’s position as an early leader in AI, its Ernie Bot has struggled to gain traction, lagging behind ByteDance’s Doubao chatbot and DeepSeek in user adoption.
Baidu initially introduced premium features in late 2023, charging users for advanced search capabilities powered by Ernie 4.0. The upcoming free release of both Ernie Bot and an advanced search function marks a shift in strategy.
The advanced search feature promises enhanced reasoning and tool integration, aimed at delivering expert-level responses to users.
Baidu claims that Ernie Bot’s latest version, Ernie 4.0, matches OpenAI’s GPT-4 in capability. By removing cost barriers, the company hopes to attract a larger user base and strengthen its position in the competitive AI sector.
US Vice President JD Vance criticised Europe’s heavy-handed AI regulations at a Paris summit, warning they could stifle innovation. He argued the EU’s approach, including the Digital Services Act and GDPR, burdens smaller firms with high compliance costs, which could harm AI’s transformative potential. Vance also dismissed content moderation policies as authoritarian censorship.
The United States and Britain opted not to sign the summit’s declaration advocating inclusive, ethical, and safe AI. Vance emphasised America’s intention to lead AI innovation and resist regulatory frameworks that might hinder its progress. French President Emmanuel Macron and European Commission chief Ursula von der Leyen countered by stressing that regulation is essential to build public trust in AI.
Geopolitical competition dominated discussions, with Vance warning of potential risks in partnering with China. He cautioned against allowing authoritarian regimes to influence critical information infrastructure through subsidised technology exports. Although he did not name DeepSeek, the Chinese startup whose recent low-cost AI model has drawn global attention, his remarks highlighted growing concerns about maintaining technological leadership.
The summit exposed significant policy differences, with the US prioritising rapid AI advancement over stringent safety measures. Critics labelled this a missed opportunity to address broader AI risks, including supply chain security and workforce disruptions.
Les Echos-Le Parisien, owned by LVMH, has opted out of a lawsuit involving French media and Elon Musk’s platform X, according to sources and court officials. The case sought to compel X to compensate publishers for content under EU copyright rules.
The lawsuit, initially backed by Les Echos-Le Parisien, Le Monde, and Le Figaro, aimed to enforce compliance with legislation ensuring fair compensation for digital use of journalistic content. LVMH’s decision to withdraw from the legal action was reportedly communicated to executives from other media groups, though the rationale remains unclear.
Despite this, Le Monde and Le Figaro have filed a joint case to pursue unpaid compensation. Les Echos-Le Parisien previously argued that platforms like X must adhere to EU copyright laws to protect the future of quality journalism.
LVMH, which owns various French media outlets, has previously challenged tech giants like Google and Meta on similar grounds. The group declined to comment, but continues to expand its media influence, recently acquiring Paris Match and a French radio station.
Specialised AI regulation may not be necessary, as existing laws already cover many aspects of AI-related concerns. Jovan Kurbalija, executive director of Diplo, argues in his blog that before enacting new AI-specific rules, society must assess whether current legal frameworks—such as consumer protection, data governance, and liability laws—can effectively regulate AI.
He draws historical parallels, citing the 4,000-year-old Code of Hammurabi as an example of legal accountability principles that remain relevant today. Kurbalija explains that legal systems have always adapted to technological advances without requiring entirely new legal categories.
He also highlights how laws governing property, commerce, and torts were successfully applied to the internet in the 1990s, suggesting that AI can be regulated similarly. Instead of focusing on abstract ethical discussions, he argues that enforcing existing legal frameworks will ensure accountability for AI developers and users.
The blog post also examines different layers of AI regulation, from hardware and data laws to algorithmic governance and AI applications. While AI-generated content has raised legal disputes over intellectual property and data use, these challenges, Kurbalija contends, should be addressed by refining current laws rather than introducing entirely new ones. He points to ongoing legal battles involving OpenAI, the New York Times, and Getty Images as examples of courts adapting existing regulations to the AI landscape.
Ultimately, Kurbalija asserts that AI is a tool, much like a hammer or a horse, and does not require its own distinct legal system. What matters most, he insists, is holding those who create and deploy AI accountable for its consequences. Society can effectively govern AI without requiring specialised regulations by reinforcing traditional legal principles such as liability, transparency, and justice.