OpenAI has announced plans to simplify its artificial intelligence product line by combining its o-series and GPT-series models into a unified system. CEO Sam Altman revealed the strategy in a post on X, highlighting the need for more accessible AI tools.
The decision marks a shift away from standalone releases, such as the previously unveiled o3 and o3-mini models.
The company aims to launch GPT-5 as a comprehensive AI system that incorporates the features of earlier models, addressing user concerns about complexity. Altman stressed the importance of creating tools that ‘just work’ while providing no exact timeline for the rollout.
OpenAI also plans to release GPT-4.5, codenamed ‘Orion’, as its final non-chain-of-thought model.
The announcement follows increased scrutiny over AI development costs, with competitors like China’s DeepSeek introducing more affordable alternatives. The move aligns with OpenAI’s efforts to remain competitive while addressing usability issues.
By streamlining its offerings, OpenAI hopes to deliver systems capable of handling diverse tasks and leveraging available tools seamlessly. The new roadmap reflects a broader industry trend towards efficiency and user-centric design.
Dario Amodei, CEO of AI firm Anthropic, has warned that the race to develop AI is moving faster than efforts to fully understand it. Speaking at an event in Paris, he stressed the need for deeper research into AI models, describing it as a race between expanding capabilities and improving transparency. ‘We can’t slow down development, but our understanding must match our ability to build,’ he said.
Amodei rejected the notion that AI safety measures hinder progress, arguing instead that they help refine and improve models. He pointed to earlier discussions at the UK’s Bletchley Summit, where risk assessment strategies were introduced, and insisted they had not slowed technological growth. ‘Better testing and measurement actually lead to better models,’ he said.
The Anthropic CEO also discussed the evolving AI market, including competition from Chinese firm DeepSeek, whose claims of dramatically lower training costs he dismissed as ‘not based on facts.’ Looking ahead, he hinted at upcoming improvements in AI reasoning, with a focus on creating more seamless transitions between different types of models. He remains optimistic, predicting that AI will drive innovation across industries, from healthcare to finance and energy.
Adobe has launched the first public version of its AI-powered video generation tool, Firefly Video Model, introducing competition to OpenAI’s Sora and Runway’s video-generation services. The tool is designed to integrate with Adobe’s Premiere Pro software, making it useful for film and television professionals. Instead of focusing on generating long video clips, Adobe’s model helps improve or extend real production shots that need adjustments.
The tool currently produces five-second clips at 1080p resolution, shorter than OpenAI’s 20-second limit, but Adobe argues that most production clips are only a few seconds long. Pricing starts at $9.99 for 20 clips per month and $29.99 for 70 clips, with a separate ‘Premium’ plan for high-volume users like studios to be announced later this year. Adobe is also working on 4K video generation, prioritising visual quality over longer clips.
Vice President of Generative AI Alexandru Costin emphasised that Adobe aims to make AI-generated video look as realistic as traditional filming. The company remains focused on improving motion, structure, and image quality rather than extending clip duration. Meta Platforms is also developing a video-generation model, but has not yet confirmed a release date.
The recent AI Action Summit in Paris marked a turning point in global AI governance, shifting the focus from long-term existential risks to immediate concerns such as innovation, economic impact, and public good. Unlike previous AI summits in Bletchley and Seoul, which prioritised safety regulations, Paris embraced a more pragmatic approach, emphasising competition, national sovereignty, and AI’s role in society.
The resulting Paris Statement reflected this shift, downplaying AI safety concerns in favour of fostering open-source models, job creation, and consumer protection. As highlighted in the blog titled ‘The Paris AI Summit: A Diplomatic Failure or a Strategic Success?’ by Jovan Kurbalija, a major theme of the summit was the need to counterbalance the dominance of large tech corporations in shaping AI policy.
US Vice-President Vance criticised calls for strict safety regulations, arguing they often serve the interests of major AI companies rather than the public. The Paris Statement also reinforced AI sovereignty, urging nations to develop AI strategies aligned with their own frameworks rather than adhering to a universal regulatory model. Additionally, France used the summit to highlight its own advancements in AI, mainly through the open-source Mistral model.
Despite these achievements, the absence of US and UK support underscored geopolitical tensions in AI governance. The US remains wary of multilateral AI regulations that could challenge its technological leadership, while the UK, having invested heavily in AI safety initiatives, found the summit’s shift in focus at odds with its strategic goals. British Prime Minister Keir Starmer’s decision to skip the event further signalled the country’s discomfort with this new direction.
Why does it matter?
Ultimately, the Paris Summit may not have produced a sweeping declaration, but it succeeded in redefining the global AI agenda. The summit laid the groundwork for a more inclusive and action-oriented approach by moving past theoretical risks and addressing AI’s real-world implications. Whether this shift will gain broader international support remains to be seen, but it is clear that Paris has opened a new chapter in AI diplomacy.
Elon Musk’s $97.4 billion bid to acquire OpenAI’s assets has sparked controversy, with OpenAI accusing him of contradicting his own legal claims.
Musk’s lawsuit, filed in August, argues that OpenAI’s assets should remain in a charitable trust and not be transferred for private gain. OpenAI has called his offer ‘an improper bid to undermine a competitor’.
The dispute comes as OpenAI seeks to transition into a for-profit organisation to secure funds for advanced AI development. Musk, a co-founder of OpenAI who left before ChatGPT’s rise in 2022, launched his own AI startup, xAI, in 2023.
OpenAI’s letter to a federal court highlights the clash between Musk’s stated opposition to privatising its assets and his attempt to acquire them with private investors. The AI company argues that Musk’s bid undermines his legal position and the nonprofit’s mission.
Representatives for Musk have yet to comment. OpenAI continues to defend its transition plan, emphasising the need for substantial investment to remain competitive in the fast-evolving AI landscape.
The United Arab Emirates’ Energy Minister, Suhail Mohamed Al Mazrouei, stated on Wednesday that he does not believe the Chinese AI app DeepSeek will dampen demand for nuclear energy. The startup has developed AI models that deliver comparable results with far less computing power, yielding significant energy savings.
However, Al Mazrouei expressed confidence that this advancement will not reduce the growing need for nuclear energy in the UAE. He highlighted that nuclear power remains a critical component of the country’s strategy for diversifying energy sources and ensuring energy security in the long term.
The UAE has been investing heavily in nuclear energy as part of its efforts to reduce dependence on fossil fuels and to meet its climate goals. The Barakah nuclear power plant, which is set to become one of the largest nuclear power stations in the Middle East, is a key part of this initiative.
Al Mazrouei also noted that nuclear energy offers a reliable and scalable solution that can complement renewable energy sources, especially as the UAE looks to meet rising energy demands. While AI advancements like DeepSeek may contribute to energy efficiency, the UAE remains focused on expanding its nuclear energy infrastructure to support its future growth and sustainability objectives.
In his op-ed, From Hammurabi to ChatGPT, Jovan Kurbalija draws on the ancient Code of Hammurabi to argue for a principle of legal accountability in modern AI regulation and governance. Dating back 4,000 years, Hammurabi’s Code established that builders were responsible for damages caused by their work—a principle Kurbalija believes should apply to AI developers, deployers, and beneficiaries today.
While this may seem like common sense, current legal frameworks, particularly Section 230 of the 1996 US Communications Decency Act, have created a loophole. The provision, designed to protect early internet platforms, grants them immunity for user-generated content, allowing today’s AI companies to evade responsibility for harms such as deepfakes, fraud, and cybercrime. This legal anomaly complicates global AI governance and digital diplomacy efforts, as inconsistent accountability standards hinder international cooperation.
Kurbalija emphasises that existing legal rules—applied by courts, as seen in internet regulation—should suffice for AI governance. New AI-specific rules should only be introduced in exceptional cases, such as when addressing apparent legal gaps, similar to how cybercrime and data protection laws emerged in the internet era.
He concludes that AI, like the hammer, is ultimately a tool—albeit a powerful one. Legal responsibility must lie with humans, not machines. By discarding the immunity shield of Section 230 and reaffirming principles of accountability, transparency, and justice, policymakers can draw on 4,000 years of legal wisdom to govern AI effectively. That approach strengthens AI governance and advances digital diplomacy by creating a foundation for global norms and cooperation in the digital age.
Lyft is preparing to introduce fully autonomous robotaxis in Dallas by 2026, powered by Mobileye’s technology. The announcement from CEO David Risher on Monday saw Lyft’s shares rise by 4.6%, while Mobileye’s stock jumped 17%.
Companies across the automotive and tech industries continue to invest heavily in self-driving technology, viewing it as a key factor in shaping the future of mobility.
Japanese conglomerate Marubeni will own and finance the Mobileye-equipped vehicles, which will be available through the Lyft app. Mobileye had previously confirmed a partnership with Lyft in November to bring autonomous vehicles to the platform.
Lyft’s move comes as competition in the self-driving space intensifies, with Uber’s partner Waymo set to launch its own autonomous taxi service in Austin next month.
Waymo has already expanded its self-driving ride-hailing services to major US cities, including Miami, Phoenix, Los Angeles, San Francisco, and Austin.
More cities are expected to be added in 2025 as testing expands. Tesla has also announced plans to test driverless car technology in Austin from June but has yet to reveal details about a paid service.
Perplexity AI saw a 50% surge in app downloads after a clever Super Bowl promotion that required users to interact with its AI-powered search tool. Instead of running an expensive TV advert like OpenAI and Google, the company posted on X, encouraging users to download the app and ask five questions during the game for a chance to win $1 million.
The strategy paid off, with app installs rising to 45,000 on US Super Bowl Sunday, compared to the previous daily average of 30,000. The contest not only increased downloads but also helped familiarise users with Perplexity’s AI capabilities. By requiring engagement during the game, the company ensured new users experienced the search tool in action.
While OpenAI and Google invested heavily in traditional advertising, Perplexity’s approach appeared to have a more direct impact on user interaction. The app climbed the US App Store rankings, reaching as high as No. 6 in the Productivity category. Early estimates suggest the momentum may continue, potentially doubling downloads in the following days.
Apple is venturing into consumer robotics, unveiling research that highlights the importance of expressive movements in human-robot interaction. Drawing inspiration from Pixar’s Luxo Jr., the company’s study explores how non-humanlike objects, such as a lamp, can be designed to convey intention and emotion through motion.
A video accompanying the research showcases a prototype lamp robot, which mimics Pixar’s iconic animated mascot. The study suggests that even small movements, such as turning towards a window before answering a weather query, can create a stronger connection between humans and machines. The lamp, which speaks with Siri’s voice, functions as a more dynamic alternative to smart speakers like Apple’s HomePod or Amazon’s Echo.
This research comes amid speculation that Apple is working on a more advanced smart home hub, possibly incorporating robotic features. While details remain scarce, rumours suggest a device resembling a robotic arm with an integrated screen. Though Apple’s consumer robotics project is still in its early stages, the findings hint at a future where expressive, intelligent robots become part of everyday life.