Musk escalates legal battle with new lawsuit against OpenAI

Elon Musk’s xAI has sued OpenAI, alleging a coordinated and unlawful campaign to steal its proprietary technology. The complaint claims OpenAI targeted former xAI staff to obtain source code, training methods, and data centre strategies.

The lawsuit claims OpenAI recruiter Tifa Chen offered large compensation packages to engineers who then allegedly uploaded xAI’s source code to personal devices. Alleged incidents include Xuechen Li confessing to code theft and Jimmy Fraiture repeatedly transferring confidential files via AirDrop.

Legal experts note the case centres on employee poaching and the definition of xAI’s ‘secret sauce,’ including GPU racking, vendor contracts, and operational playbooks.

Liability may depend on whether OpenAI knowingly directed recruiters, while the company could defend itself by showing independent creation with time-stamped records.

xAI is seeking damages, restitution, and injunctions requiring OpenAI to remove its materials and destroy models built using them. The lawsuit is Musk’s latest legal action against OpenAI, following a recent antitrust case against Apple over alleged market dominance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google unveils new Gemini Robotics models

Google has unveiled two new robotics models, Gemini Robotics 1.5 and Gemini Robotics-ER 1.5, designed to help robots better perceive, plan, and act in complex environments. The models aim to enable more capable robots to complete multi-step tasks efficiently and transparently.

Gemini Robotics 1.5 converts visual information and instructions into actions, letting robots think before acting and explain their reasoning. Gemini Robotics-ER 1.5 acts as a high-level planner, reasoning about the physical world and using tools like Google Search to support decisions.

Together, the models form an ‘agentic’ framework. ER 1.5 orchestrates a robot’s activities, while Robotics 1.5 carries them out, enabling the machines to tackle semantically complex tasks. The pairing strengthens generalisation across diverse environments and longer missions.

Google said Gemini Robotics-ER 1.5 is now available to developers through the Gemini API in Google AI Studio, while Gemini Robotics 1.5 is currently open to select partners. Both models advance robots’ reasoning, spatial awareness, and multi-tasking capabilities.


Spotify launches new policies on AI and music spam

Spotify announced new measures to address AI risks in music, aiming to protect artists’ identities and preserve trust on the platform. The company said AI can boost creativity but also enable harmful content like impersonations and spam that exploit artists and cut into royalties.

A new impersonation policy has been introduced, clarifying that AI-generated vocal clones of artists are only permitted with explicit authorisation. Spotify is strengthening processes to block fraudulent uploads and mismatches, giving artists quicker recourse when their work is misused.

The platform will launch a new spam filter this year to detect and curb manipulative practices like mass uploads and artificially short tracks. The system will be deployed cautiously, with updates added as new abuse tactics emerge, in order to safeguard legitimate creators.

In addition, Spotify will back an industry standard for AI disclosures in music credits, allowing artists and rights holders to show how AI was used in production. The company said these steps reflect its commitment to protecting artists, ensuring transparency, and securing fair royalties as AI reshapes the music industry.


Tech giants warn Digital Markets Act is failing

Apple and Google have urged the European Union to revisit its Digital Markets Act, arguing the law is damaging users and businesses.

Apple said the rules have forced delays to new features for European customers, including live translation on AirPods and improvements to Apple Maps. It warned that competition requirements could weaken security and slow innovation without boosting the EU economy.

Google raised concerns that its search results must now prioritise intermediary travel sites, leading to higher costs for consumers and fewer direct sales for airlines and hotels. It added that AI services may arrive in Europe up to a year later than elsewhere.

Both firms stressed that enforcement should be more consistent and user-focused. The European Commission is reviewing the Act, with formal submissions under consideration.


OpenAI unveils ChatGPT Pulse for proactive updates

OpenAI has introduced a preview of ChatGPT Pulse, a feature designed to deliver proactive and personalised updates to Pro users on mobile. Instead of waiting for users to ask questions, Pulse researches chat history, feedback, and connected apps to deliver daily insights.

The updates appear as visual cards covering relevant topics, which users can scan quickly or expand for detail. Integrations with Gmail and Google Calendar are available, enabling suggestions such as drafting meeting agendas, recommending restaurants for trips, or reminding users about birthdays.

These integrations are optional and can be switched off at any time.

Pulse is built to prioritise usefulness over screen time, offering updates that expire daily unless saved or added to chat history. Early trials with students highlighted the importance of simple feedback to refine results, and users can guide what appears by curating topics or rating suggestions.

OpenAI plans to refine the feature further before expanding its availability beyond Pro users.


UN Secretary-General launches call for candidates for AI Scientific Panel

The UN Secretary-General has launched an open call for candidates to serve on the Independent International Scientific Panel on Artificial Intelligence.

The Panel was agreed by UN member states in September 2024 as part of the Global Digital Compact; its terms of reference were later defined in a UN General Assembly resolution adopted in August 2025. The 40-member Panel will provide evidence-based scientific assessments of AI’s opportunities, risks, and impacts. Its work will culminate in an annual, policy-relevant but non-prescriptive summary report presented to the Global Dialogue on AI Governance, along with up to two updates per year to engage with the General Assembly plenary.

Candidates with expertise in the following fields are invited to apply:

  • AI, including foundation models & generative AI, machine learning methods, core AI subfields (e.g. vision, language, speech/audio, robotics, planning & scheduling, knowledge representation), reliability, safety & alignment, cognitive & neuroscience links, human–AI interaction, AI security and infrastructure;
  • Applied AI, including science (foundational and applied in health, climate, life sciences, physics, social sciences, agriculture), engineering, industry and mobility (e.g. materials, drugs, transportation, smart cities, IoT, satellite, navigation), digital society (e.g. misinformation & disinformation, online harms, social networks, software engineering, web);
  • Related fields, including AI opportunity, risk and impact assessment, AI impacts on society, technology, economy, and environment, AI security and infrastructure, data, ethics, and rights, governance (e.g. public policy, international law, standards, oversight, compliance, foresight and scenario-building).

Following the call for nominations (open until 31 October 2025), the Secretary-General will recommend 40 members for appointment by the General Assembly.

For more information from the 80th session of the UN General Assembly, visit our dedicated page.


Global Dialogue on AI Governance officially launched

On 25 September 2025, the President of the UN General Assembly chaired a high-level multistakeholder informal meeting to launch the Global Dialogue on AI Governance.

The creation of the Dialogue was agreed by UN member states in September 2024, with the adoption of the Global Digital Compact. In August 2025, the General Assembly adopted a resolution outlining the terms of reference and modalities for this new global mechanism.

The Global Dialogue on AI Governance is tasked with facilitating open, transparent and inclusive discussions on AI governance. Issues to focus on will include safe, trustworthy AI; bridging capacity and digital divides; social, ethical, and technical implications; interoperability of governance approaches; human rights; transparency and accountability; and open-source AI development.

The Dialogue will meet annually for up to two days alongside UN conferences in Geneva or New York, featuring high-level government participation, thematic discussions, and the presentation of an annual report. Initially, it will be held in the margins of the International Telecommunication Union’s Artificial Intelligence for Good Global Summit in Geneva in 2026, and of the multistakeholder forum on science, technology and innovation for the SDGs in New York in 2027.

Speaking at the launch of the Dialogue, the UN Secretary-General noted that the Dialogue is ‘about creating a space where governments, industry and civil society can advance common solutions together. Where innovation can thrive — guided by shared standards and common purpose.’



Brazil to host massive AI-ready data centre by RT-One

RT-One plans to build Latin America’s largest AI data centre after securing land in Uberlândia, Minas Gerais, Brazil. The US$1.2bn project will span over one million square metres, with 300,000 m² reserved as protected green space.

The site will support high-performance computing, sovereign cloud services, and AI workloads, launching with 100MW capacity and scaling to 400MW. It will run on 100% renewable energy and utilise advanced cooling systems to minimise its environmental impact.

RT-One states that the project will prepare Brazil to compete globally, generate skilled jobs, and train new talent for the digital economy. A wide network of partners, including Hitachi, Siemens, WEG, and Schneider Electric, is collaborating on the development, aiming to ensure resilience and sustainability at scale.

The project is expected to stimulate regional growth, with jobs, training programmes, and opportunities for collaboration between academia and industry. Local officials, including the mayor of Uberlândia, attended the launch event to underline government support for the initiative.

Once complete, the Uberlândia facility will provide sovereign cloud capacity, high-density compute, and AI-ready infrastructure for Brazil and beyond. RT-One says the development will position the city as a hub for digital innovation and strengthen Latin America’s role in the global AI economy.


UN Secretary-General warns humanity cannot rely on algorithms

UN Secretary-General António Guterres has urged world leaders to act swiftly to ensure AI serves humanity rather than threatens it. Speaking at a UN Security Council debate, he warned that while AI can help anticipate food crises, support de-mining efforts, and prevent violence, it is equally capable of fuelling conflict through cyberattacks, disinformation, and autonomous weapons.

‘Humanity’s fate cannot be left to an algorithm,’ he stressed.

Guterres outlined four urgent priorities. First, he called for strict human oversight in all military uses of AI, repeating his demand for a global ban on lethal autonomous weapons systems. He insisted that life-and-death decisions, including any involving nuclear weapons, must never be left to machines.

Second, he pressed for coherent international regulations to ensure AI complies with international law at every stage, from design to deployment. He highlighted the dangers of AI lowering barriers to acquiring prohibited weapons and urged states to build transparency, trust, and safeguards against misuse.

Third and fourth, Guterres emphasised protecting information integrity and closing the global AI capacity gap. He warned that AI-driven disinformation could destabilise peace processes and elections, while unequal access risks leaving developing countries behind.

The UN has already launched initiatives, including a new international scientific panel and an annual AI governance dialogue, to foster cooperation and accountability.

‘The window is closing to shape AI, for peace, justice, and humanity,’ he concluded.



Meta expands global rollout of teen accounts for Facebook and Messenger

US tech giant Meta is expanding its dedicated teen accounts to Facebook and Messenger users worldwide, extending a safety system first introduced on Instagram. The move adds more parental controls and restrictions to protect younger users across Meta’s platforms.

The accounts, now mandatory for teens, include stricter privacy settings that limit contact with unknown adults. Parents can supervise how their children use the apps, monitor screen time, and view who their teens are messaging.

For younger users aged 13 to 15, parental permission is required before adjusting safety-related settings. Meta is also deploying AI tools to detect teens lying about their age.

Alongside the global rollout, Instagram is expanding a school partnership programme in the US, allowing middle and high schools to report bullying and problematic behaviour directly.

The company says early feedback from participating schools has been positive, and the scheme is now open to all schools nationwide.

The expansion comes as Meta faces lawsuits and investigations over its record on child safety. By strengthening parental controls and school-based reporting, the company aims to address growing criticism while tightening protections for its youngest users.
