The Board of Peace for Gaza: Assessing its legal boundaries and impact on the UN

The adoption of United Nations Security Council Resolution 2803 (2025) marked a significant development in international engagement with the Gaza conflict.

Central to the resolution are the endorsement of the Comprehensive Plan to End the Gaza Conflict and the establishment of the ‘Board of Peace’, which is entrusted with transitional governance and security responsibilities in Gaza.

The initiative has sparked intense debate among diplomats, legal scholars, and policy practitioners. While some view the Board of Peace as a pragmatic response to a prolonged failure of existing approaches, others raise concerns about mandate overreach, accountability, respect for self-determination, and potential erosion of the United Nations’ institutional role.

Webinar objectives:

  • Clarify the legal boundaries of international transitional governance under UN auspices;
  • Assess institutional and accountability risks arising from delegated governance mechanisms; and
  • Examine longer-term implications for the UN Security Council and the future of (effective) multilateralism.

Horizon1000 aims to bring powerful AI healthcare tools to Africa

The Gates Foundation and OpenAI have launched a joint healthcare initiative, Horizon1000, to expand the use of AI across primary care systems in Sub-Saharan Africa. The partnership includes a $50 million commitment in funding, technology, and technical support to equip 1,000 clinics with AI tools by 2028.

Horizon1000’s operations will begin in Rwanda, where local authorities will work with the two organisations to deploy AI systems in frontline healthcare settings. The initiative reflects the Foundation’s long-standing aim to ensure that new technologies reach lower-income regions without long delays.

Bill Gates said the project responds to a critical shortage of healthcare workers, which threatens to undermine decades of progress in global health. Sub-Saharan Africa currently faces a shortfall of nearly six million medical professionals, limiting the capacity of overstretched clinics to deliver consistent care.

Low-quality healthcare contributes to between six and eight million deaths annually in low- and middle-income countries, according to the World Health Organization. Rwanda, the first pilot country, has only one healthcare worker per 1,000 people, far below the WHO’s recommended level.

AI tools under Horizon1000 are intended to support, rather than replace, health workers by assisting with clinical guidance, administration, and patient interactions. The Gates Foundation said it will continue working with regional governments and innovators to scale the programme.

Canadian news publishers clash with OpenAI in landmark copyright case

OpenAI is set to argue in an Ontario court that a copyright lawsuit by Canadian news publishers should be heard in the United States. The case, the first of its kind in Canada, alleges that OpenAI scraped Canadian news content to train ChatGPT without permission or payment.

The coalition of publishers, including CBC/Radio-Canada, The Globe and Mail, and Postmedia, says the material was created and hosted in Ontario, making the province the proper venue. They warn that accepting OpenAI’s stance would undermine Canadian sovereignty in the digital economy.

OpenAI, however, says the training of its models and web crawling occurred outside Canada and that the Copyright Act cannot apply extraterritorially. It argues the publishers are politicising the case by framing it as a matter of sovereignty rather than jurisdiction.

The dispute reflects a broader global clash over how generative AI systems use copyrighted works. US courts are already handling several similar cases, though no clear precedent has been established on whether such use qualifies as fair use.

Publishers argue Canadian courts must decide the matter domestically, while OpenAI insists it belongs in US courts. The outcome could shape how copyright laws apply to AI training and digital content across borders.

Indian firms accelerate growth through AI, Microsoft finds

Indian firms are accelerating the adoption of AI, with many using AI agents to enhance workforce capabilities rather than relying solely on traditional methods.

According to Microsoft’s 2025 Work Trend Index, 93% of leaders in India plan to extend AI integration across their organisations within the next 12 to 18 months.

Frontier firms in India are leading the charge, redesigning operations around collaboration between humans and AI agents instead of following conventional hierarchies.

Over half of leaders already deploy AI to automate workflows and business processes across entire teams, enabling faster and more agile decision-making.

Microsoft notes that AI is becoming a true thought partner, fuelling creativity, accelerating decisions, and redefining teamwork instead of merely supporting routine tasks. Leaders report that embedding AI into daily operations drives measurable improvements in productivity, innovation, and business outcomes.

The findings are part of a global survey of 31,000 participants across 31 countries, highlighting India’s role at the forefront of AI-driven organisational transformation rather than merely following international trends.

Musk–Altman clash escalates over Apple’s alleged AI bias

Elon Musk has accused Apple of favouring ChatGPT on its App Store and threatened legal action, sparking a clash with OpenAI CEO Sam Altman. Musk called Apple’s practices an antitrust violation and vowed to take immediate action through his AI company, xAI.

Critics on X noted that rivals such as DeepSeek AI and Perplexity AI have topped the App Store this year. Altman called Musk’s claim ‘remarkable’ and accused him of manipulating X; Musk in turn called him a ‘liar’, prompting Altman to challenge him to prove he has never altered X’s algorithm.

OpenAI and xAI have launched new versions of ChatGPT and Grok, which ranked first and fifth among free iPhone apps on Tuesday. Apple, which partnered with OpenAI in 2024 to integrate ChatGPT, did not comment on the matter. Rankings take into account engagement, reviews, and downloads.

The dispute reignites a feud between Musk and OpenAI, which he co-founded but left before the success of ChatGPT. In April, OpenAI accused Musk of attempting to harm the company and establish a rival. Musk launched xAI in 2023 to compete with major players in the AI space.

Chinese startup DeepSeek has disrupted the AI market with cost-efficient models. Since ChatGPT’s 2022 debut, major tech firms have invested billions in AI. OpenAI claims Musk’s actions are driven by ambition rather than a mission for humanity’s benefit.

Apple’s $20B Google deal under threat as AI lags behind rivals

Apple is set to release Q3 earnings on Thursday amid scrutiny over its dependence on its search deal with Google and its ongoing struggles with AI progress.

Typically, Apple’s fiscal Q3 garners less investor attention, with anticipation focused instead on the upcoming iPhone launch in Q4. However, this quarter is proving to be anything but ordinary.

Analysts and shareholders alike are increasingly concerned about two looming threats: a potential $20 billion hit to Apple’s Services revenue tied to the US Department of Justice’s (DOJ) antitrust case against Google, and ongoing delays in Apple’s AI efforts.

Ahead of the earnings report, Apple shares were mostly unchanged, reflecting investor caution rather than enthusiasm. Apple’s most pressing challenge stems from its lucrative partnership with Google.

In 2022, Google paid Apple approximately $20 billion to remain the default search engine in the Safari browser and across Siri.

The exclusivity deal has formed a significant portion of Apple’s Services segment, which generated $78.1 billion in revenue that year, meaning Google’s payment alone accounted for more than 25% of that figure.
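
As a rough consistency check on that share, using only the two figures cited above (the approximately $20 billion payment and the $78.1 billion Services total):

\[
\frac{\$20\ \text{billion}}{\$78.1\ \text{billion}} \approx 0.256 \approx 25.6\%,
\]

which is consistent with the ‘more than 25%’ estimate, bearing in mind that the payment figure itself is approximate.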

However, a ruling expected next month from Judge Amit Mehta in the US District Court for the District of Columbia could threaten the entire arrangement. Mehta previously ruled that Google had illegally monopolised the search market.

The forthcoming ‘remedies’ ruling could force Google to end exclusive search deals, divest its Chrome browser, and provide data access to rivals. Should the DOJ’s proposed remedies stand and Google fails to overturn the ruling, Apple could lose a critical source of Services revenue.

According to Morgan Stanley’s Erik Woodring, Apple could see a 12% decline in its full-year 2027 earnings per share (EPS) if it pivots to less lucrative partnerships with alternative search engines.

The user experience may also deteriorate if customers can no longer set Google as their default option. A more radical scenario, Apple launching its own search engine, could dent its 2024 EPS by as much as 20%, though analysts believe this outcome is the least likely.

Alongside regulatory threats, Apple is also facing growing doubts about its ability to compete in AI. Apple has not yet set a clear timeline for releasing an upgraded version of Siri, while rivals accelerate AI hiring and unveil new capabilities.

Bank of America analyst Wamsi Mohan noted this week that persistent delays undermine confidence in Apple’s ability to deliver innovation at pace. ‘Apple’s ability to drive future growth depends on delivering new capabilities and products on time,’ he wrote to investors.

‘If deadlines keep slipping, that potentially delays revenue opportunities and gives competitors a larger window to attract customers.’

While Apple has teased upcoming AI features for future software updates, the lack of a commercial rollout or product roadmap has made investors uneasy, particularly as rivals like Microsoft, Google, and OpenAI continue to set the AI agenda.

Although Apple’s stock remained stable before Thursday’s earnings release, any indication of slowing services growth or missed AI milestones could shake investor confidence.

Analysts will be watching closely for commentary from CEO Tim Cook on how Apple plans to navigate regulatory risks and revive momentum in emerging technologies.

The company’s current crossroads is pivotal for the tech sector more broadly. Regulators are intensifying scrutiny on platform dominance, and AI innovation is fast becoming the new battleground for long-term growth.

As Apple attempts to defend its business model and rekindle its innovation edge, Thursday’s earnings update could serve as a bellwether for its direction in the post-iPhone era.

Italy investigates Meta over AI integration in WhatsApp

Italy’s antitrust watchdog has opened an investigation into Meta Platforms over allegations that the company may have abused its dominant position by integrating its AI assistant directly into WhatsApp.

The Rome-based authority, formally known as the Autorità Garante della Concorrenza e del Mercato (AGCM), announced the probe on Wednesday, stating that Meta may have breached European Union competition regulations.

The regulator claims that the introduction of the Meta AI assistant into WhatsApp was carried out without obtaining prior user consent, potentially distorting market competition.

Meta AI, the company’s virtual assistant designed to provide chatbot-style responses and other generative AI functions, has been embedded in WhatsApp since March 2025. It is accessible through the app’s search bar and is intended to offer users conversational AI services directly within the messaging interface.

The AGCM is concerned that this integration may unfairly favour Meta’s AI services by leveraging the company’s dominant position in the messaging market. It warned that such a move could steer users toward Meta’s products, limit consumer choice, and disadvantage competing AI providers.

‘By pairing Meta AI with WhatsApp, Meta appears to be able to steer its user base into the new market not through merit-based competition, but by “forcing” users to accept the availability of two distinct services,’ the authority said.

It argued that this strategy may undermine rival offerings and entrench Meta’s position across adjacent digital services. In a statement, Meta confirmed that it is cooperating fully with the Italian authorities.

The company defended the rollout of its AI features, stating that their inclusion in WhatsApp aimed to improve the user experience. ‘Offering free access to our AI features in WhatsApp gives millions of Italians the choice to use AI in a place they already know, trust and understand,’ a Meta spokesperson said via email.

The company maintains that its approach benefits users by making advanced technology widely available through familiar platforms. The AGCM clarified that its inquiry is being conducted in close cooperation with the European Commission’s relevant offices.

The cross-border collaboration reflects the growing scrutiny Meta faces from regulators across the EU over its market practices and the use of its extensive user base to promote new services.

If the authority finds Meta in breach of EU competition law, the company could face a fine of up to 10 percent of its global annual turnover. Under Article 102 of the Treaty on the Functioning of the European Union, abusing a dominant market position is prohibited, particularly if it affects trade between member states or restricts competition.

To gather evidence, AGCM officials inspected the premises of Meta’s Italian subsidiary, accompanied by the special antitrust unit of the Guardia di Finanza, Italy’s financial police.

The inspections were part of preliminary investigative steps to assess the impact of Meta AI’s deployment within WhatsApp. Regulators fear that embedding AI assistants into dominant platforms could lead to unfair advantages in emerging AI markets.

By relying on its established user base and platform integration, Meta may effectively foreclose competition by making alternative AI services harder to access or less visible to consumers. This would not be the first time Meta has faced regulatory scrutiny in Europe.

The company has been the subject of multiple investigations across the EU concerning data protection, content moderation, advertising practices, and market dominance. The current probe adds to a growing list of regulatory pressures facing the tech giant as it expands its AI capabilities.

The AGCM’s investigation comes amid broader EU efforts to ensure fair competition in digital markets. With the Digital Markets Act in force and the AI Act taking effect, regulators are becoming more proactive in addressing potential risks associated with integrating advanced technologies into consumer platforms.

As the investigation continues, Meta’s use of AI within WhatsApp will remain under close watch. The outcome could set an important precedent for how dominant tech firms can release AI products within widely used communication tools.

LegalOn raises $50 million to expand AI legal tools

LegalOn Technologies has secured $50 million in Series E funding to expand its AI-powered contract review platform.

The Japanese startup, backed by SoftBank and Goldman Sachs, aims to streamline legal work by reducing the time spent reviewing and managing documents.

Its core product, Review, identifies contract risks and suggests edits using expert-built legal playbooks. The company says it improves accuracy while cutting review time by up to 85 percent across 7,000 client organisations in Japan, the US and the UK.

LegalOn plans to develop AI agents to handle tasks before and after the review process, including contract tracking and workflow integration. A new tool, Matter Management, enables teams to efficiently assign contract responsibilities, collaborate, and link documents.

While legal AI adoption grows, CEO Daniel Lewis insists the technology will support rather than replace lawyers. He believes professionals who embrace AI will gain the most leverage, as human oversight remains vital to legal judgement.

Spotify under fire for AI-generated songs on memorial artist pages

Spotify is facing criticism after AI-generated songs were uploaded to the pages of deceased artists without consent from estates or rights holders.

The latest case involves country singer-songwriter Blaze Foley, who died in 1989. A track titled ‘Together’ was posted to his official Spotify page over the weekend. The song sounded vaguely like a slow country ballad and was paired with AI-generated cover art showing a man who bore no resemblance to Foley.

Craig McDonald, whose label manages Foley’s catalogue, confirmed the track had nothing to do with the artist and described it as inauthentic and harmful. ‘I can clearly tell you that this song is not Blaze, not anywhere near Blaze’s style, at all,’ McDonald told 404 Media. ‘It has the authenticity of an algorithm.’

He criticised Spotify for failing to prevent such uploads and said the company had a duty to stop AI-generated music from appearing under real artists’ names.

‘It’s kind of surprising that Spotify doesn’t have a security fix for this type of action,’ he said. ‘They could fix this problem if they had the will to do so.’ Spotify said it had flagged the track to distributor SoundOn and removed it for violating its deceptive content policy.

However, other similar uploads have already emerged. The company behind the Foley upload, Syntax Error, was also linked to another AI-generated song titled ‘Happened To You’, uploaded last week under the name of Grammy-winning artist Guy Clark, who died in 2016.

Both tracks have since been removed, but Spotify has not explained how Syntax Error was able to post them using the names and likenesses of late musicians. The controversy is the latest in a wave of AI music incidents slipping through streaming platforms’ content checks.

Earlier this year, an AI-generated band called The Velvet Sundown amassed over a million Spotify streams before disclosing that all their vocals and instrumentals were made by AI.

Another high-profile case involved a fake Drake and The Weeknd collaboration, ‘Heart on My Sleeve’, which gained viral traction before being taken down by Universal Music Group.

Rights groups and artists have repeatedly warned about AI-generated content misrepresenting performers and undermining creative authenticity. As AI tools become more accessible, streaming platforms face mounting pressure to improve detection and approval processes to prevent further misuse.

OpenAI considers antitrust action against Microsoft over AI hosting control

OpenAI is reportedly seeking to reduce Microsoft’s exclusive control over hosting its AI models, signalling growing friction between the two companies.

According to the Wall Street Journal, OpenAI leadership has considered filing an antitrust complaint against Microsoft, alleging anti-competitive behaviour in their ongoing collaboration. The move could trigger federal regulatory scrutiny.

The tension comes amid ongoing talks over OpenAI’s corporate restructuring. A report by The Information suggests that OpenAI is negotiating to grant Microsoft a 33% stake in its reorganised for-profit unit. In exchange, Microsoft would give up rights to future profits.

OpenAI also wants to revise its existing contract with Microsoft, particularly clauses that grant exclusive Azure hosting rights. The company reportedly aims to exclude its planned $3 billion acquisition of AI startup Windsurf from the agreement, which otherwise gives Microsoft access to OpenAI’s intellectual property.

This developing rift could reshape one of the most influential alliances in AI. Microsoft has invested heavily in OpenAI since 2019 and integrates its models into Microsoft 365 Copilot and Azure services. However, both firms are diversifying.

OpenAI is turning to Google Cloud and Oracle for additional computing power, while Microsoft has begun integrating alternative AI models into its products.

Industry experts warn that regulatory scrutiny or contract changes could impact enterprise customers relying on tightly integrated AI solutions, particularly in sectors like healthcare and finance. Companies may face service disruptions, higher costs, or compatibility challenges if major players shift strategy or infrastructure.

Analysts suggest that the era of single-model reliance may be ending. As innovation from rivals like DeepSeek accelerates, enterprises and cloud providers are moving toward multi-model support, aiming for modular, scalable, and use-case-specific AI deployments.
