EssilorLuxottica is set to ramp up production of its smart glasses, driven by the success of its Ray-Ban Meta range developed in partnership with Meta. Since their launch in September 2023, over two million units have been sold, with growing user engagement indicating a shift towards mainstream adoption.
The eyewear giant, which has collaborated with Meta since 2019, aims to expand its smart glasses portfolio with new brands and features. The company is also considering subscription-based services and additional functionalities to enhance user experience.
To meet rising demand, EssilorLuxottica plans to increase production capacity to 10 million units annually by the end of next year. Manufacturing will be expanded across China and Southeast Asia, enabling the company to support future product releases, including the development of Nuance Audio glasses with integrated hearing solutions.
For more information on these topics, visit diplomacy.edu.
The UK government has partnered with AI startup Anthropic to explore the use of its chatbot, Claude, in public services. The collaboration aims to improve access to public information and streamline interactions for citizens.
The initiative aligns with Prime Minister Keir Starmer’s ambition to establish the UK as a leader in AI and enhance public service efficiency through innovative technologies.
Technology minister Peter Kyle highlighted the importance of this partnership, emphasising its role in positioning the UK as a hub for advanced AI development.
Claude has already been employed by the European Parliament to simplify access to its archives, demonstrating its potential to cut the time needed for document retrieval and analysis.
This step underscores Britain’s commitment to leveraging cutting-edge AI for the benefit of individuals and businesses nationwide.
AI is set to revolutionise wealth management by lowering the barriers to entry for new players, according to a Microsoft executive. Martin Moeller, head of AI for financial services at Microsoft, highlighted that AI’s ability to process vast amounts of data could allow small teams or even individuals to offer services that traditionally required entire teams at banks. This shift is expected to reshape the competitive landscape, much like the internet did decades ago.
AI is already being used in the financial sector, with Swedish payment provider Klarna employing AI from OpenAI to handle tasks previously carried out by 700 employees. UBS, the world’s largest wealth manager, also sees significant potential in AI to boost productivity and ease workloads. AI is expected to reduce operational costs for startups and allow banks that have not previously offered wealth management to enter the market with minimal investment.
Customer behaviour is also changing, with younger entrepreneurs increasingly managing their own investments. In response, banks are using AI to let customers consolidate their financial information independently. While AI does not yet provide specific investment advice, ‘agentic AI’ capable of making decisions without human input is expected within the next two years, further transforming the industry.
OpenAI has announced plans to simplify its artificial intelligence product line by combining its o-series and GPT-series models into a unified system. CEO Sam Altman revealed the strategy in a post on X, highlighting the need for more accessible AI tools.
The decision marks a shift away from standalone releases, such as the previously unveiled o3 and o3-mini models.
The company aims to launch GPT-5 as a comprehensive AI system that incorporates the features of earlier models, addressing user concerns about complexity. Altman stressed the importance of creating tools that ‘just work’, though he gave no exact timeline for the rollout.
OpenAI also plans to release GPT-4.5, codenamed ‘Orion’, as its final non-chain-of-thought model.
The announcement follows increased scrutiny over AI development costs, with competitors like China’s DeepSeek introducing more affordable alternatives. The move aligns with OpenAI’s efforts to remain competitive while addressing usability issues.
By streamlining its offerings, OpenAI hopes to deliver systems capable of handling diverse tasks and leveraging available tools seamlessly. The new roadmap reflects a broader industry trend towards efficiency and user-centric design.
Dario Amodei, CEO of AI firm Anthropic, has warned that the race to develop AI is moving faster than efforts to fully understand it. Speaking at an event in Paris, he stressed the need for deeper research into AI models, describing it as a race between expanding capabilities and improving transparency. ‘We can’t slow down development, but our understanding must match our ability to build,’ he said.
Amodei rejected the notion that AI safety measures hinder progress, arguing instead that they help refine and improve models. He pointed to earlier discussions at the UK’s Bletchley Summit, where risk assessment strategies were introduced, and insisted they had not slowed technological growth. ‘Better testing and measurement actually lead to better models,’ he said.
The Anthropic CEO also discussed the evolving AI market, including competition from Chinese firm DeepSeek, whose claims of dramatically lower training costs he dismissed as ‘not based on facts.’ Looking ahead, he hinted at upcoming improvements in AI reasoning, with a focus on creating more seamless transitions between different types of models. He remains optimistic, predicting that AI will drive innovation across industries, from healthcare to finance and energy.
Adobe has launched the first public version of its AI-powered video generation tool, Firefly Video Model, introducing competition to OpenAI’s Sora and Runway’s video-generation services. The tool is designed to integrate with Adobe’s Premiere Pro software, making it useful for film and television professionals. Instead of focusing on generating long video clips, Adobe’s model helps improve or extend real production shots that need adjustments.
The tool currently produces five-second clips at 1080p resolution, shorter than OpenAI’s 20-second limit, but Adobe argues that most production clips are only a few seconds long. Pricing starts at $9.99 for 20 clips per month and $29.99 for 70 clips, with a separate ‘Premium’ plan for high-volume users like studios to be announced later this year. Adobe is also working on 4K video generation, prioritising visual quality over longer clips.
Vice President of Generative AI Alexandru Costin emphasised that Adobe aims to make AI-generated video look as realistic as traditional filming. The company remains focused on improving motion, structure, and image quality rather than extending clip duration. Meta Platforms is also developing a video-generation model, but has not yet confirmed a release date.
The recent AI Action Summit in Paris marked a turning point in global AI governance, shifting the focus from long-term existential risks to immediate concerns such as innovation, economic impact, and public good. Unlike previous AI summits in Bletchley and Seoul, which prioritised safety regulations, Paris embraced a more pragmatic approach, emphasising competition, national sovereignty, and AI’s role in society.
The resulting Paris Statement reflected this shift, downplaying AI safety concerns in favour of fostering open-source models, job creation, and consumer protection. As highlighted in the blog titled ‘The Paris AI Summit: A Diplomatic Failure or a Strategic Success?’ by Jovan Kurbalija, a major theme of the summit was the need to counterbalance the dominance of large tech corporations in shaping AI policy.
US Vice-President Vance criticised calls for strict safety regulations, arguing they often serve the interests of major AI companies rather than the public. The Paris Statement also reinforced AI sovereignty, urging nations to develop AI strategies aligned with their own frameworks rather than adhering to a universal regulatory model. Additionally, France used the summit to highlight its own advancements in AI, mainly through the open-source Mistral model.
Despite these achievements, the absence of US and UK support underscored geopolitical tensions in AI governance. The US remains wary of multilateral AI regulations that could challenge its technological leadership, while the UK, having invested heavily in AI safety initiatives, found the summit’s shift in focus at odds with its strategic goals. British Prime Minister Keir Starmer’s decision to skip the event further signalled the country’s discomfort with this new direction.
Why does it matter?
Ultimately, the Paris Summit may not have produced a sweeping declaration, but it succeeded in redefining the global AI agenda. The summit laid the groundwork for a more inclusive and action-oriented approach by moving past theoretical risks and addressing AI’s real-world implications. Whether this shift will gain broader international support remains to be seen, but it is clear that Paris has opened a new chapter in AI diplomacy.
Elon Musk’s $97.4 billion bid to acquire OpenAI’s assets has sparked controversy, with OpenAI accusing him of contradicting his own legal claims.
Musk’s lawsuit, filed in August, argues that OpenAI’s assets should remain in a charitable trust and not be transferred for private gain. OpenAI has called his offer ‘an improper bid to undermine a competitor’.
The dispute comes as OpenAI seeks to transition into a for-profit organisation to secure funds for advanced AI development. Musk, a co-founder of OpenAI who left before ChatGPT’s rise in 2022, launched his own AI startup, xAI, in 2023.
OpenAI’s letter to a federal court highlights the clash between Musk’s stated opposition to privatising its assets and his attempt to acquire them with private investors. The AI company argues that Musk’s bid undermines his legal position and the nonprofit’s mission.
Representatives for Musk have yet to comment. OpenAI continues to defend its transition plan, emphasising the need for substantial investment to remain competitive in the fast-evolving AI landscape.
The United Arab Emirates’ Energy Minister, Suhail Mohamed Al Mazrouei, stated on Wednesday that he does not believe the Chinese AI app DeepSeek will impact the demand for nuclear energy. DeepSeek, a Chinese startup, has developed AI models that deliver comparable results with much lower computing power, resulting in significant energy savings.
However, Al Mazrouei expressed confidence that this advancement will not reduce the growing need for nuclear energy in the UAE. He highlighted that nuclear power remains a critical component of the country’s strategy for diversifying energy sources and ensuring energy security in the long term.
The UAE has been investing heavily in nuclear energy as part of its efforts to reduce dependence on fossil fuels and to meet its climate goals. The Barakah nuclear power plant, which is set to become one of the largest nuclear power stations in the Middle East, is a key part of this initiative.
Al Mazrouei also noted that nuclear energy offers a reliable and scalable solution that can complement renewable energy sources, especially as the UAE looks to meet rising energy demands. While AI advancements like DeepSeek may contribute to energy efficiency, the UAE remains focused on expanding its nuclear energy infrastructure to support its future growth and sustainability objectives.
In his op-ed, From Hammurabi to ChatGPT, Jovan Kurbalija draws on the ancient Code of Hammurabi to argue for a principle of legal accountability in modern AI regulation and governance. Dating back 4,000 years, Hammurabi’s Code established that builders were responsible for damages caused by their work—a principle Kurbalija believes should apply to AI developers, deployers, and beneficiaries today.
While this may seem like common sense, current legal frameworks, particularly Section 230 of the 1996 US Communications Decency Act, have created a loophole. The provision, designed to protect early internet platforms, grants them immunity for user-generated content, allowing AI companies to evade responsibility for harms such as deepfakes, fraud, and cybercrime. This legal anomaly complicates global AI governance and digital diplomacy efforts, as inconsistent accountability standards hinder international cooperation.
Kurbalija emphasises that existing legal rules—applied by courts, as seen in internet regulation—should suffice for AI governance. New AI-specific rules should only be introduced in exceptional cases, such as when addressing apparent legal gaps, similar to how cybercrime and data protection laws emerged in the internet era.
He concludes that AI, like hammers, is ultimately a tool—albeit a powerful one. Legal responsibility must lie with humans, not machines. By discarding the immunity shield of Section 230 and reaffirming principles of accountability, transparency, and justice, policymakers can draw on 4,000 years of legal wisdom to govern AI effectively. That approach strengthens AI governance and advances digital diplomacy by creating a foundation for global norms and cooperation in the digital age.