OpenAI negotiates $500m deal for AI startup

OpenAI is reportedly in talks to acquire io Products, an AI hardware startup co-founded by former Apple design chief Jony Ive and OpenAI CEO Sam Altman, in a deal that could exceed $500 million.

Instead of focusing solely on software like ChatGPT and API tools, OpenAI appears to be eyeing consumer devices as a way to diversify its revenue.

io Products is said to be working on AI-powered consumer tech, including a screenless smartphone and smart home gadgets.

The company’s team includes several former Apple designers, such as Tang Tan and Evans Hankey. Instead of traditional screens, these devices are expected to explore more ambient, context-aware ways of interacting.

Jony Ive, best known for his role in designing iconic Apple products like the iPhone and iMac, left Apple in 2019 to launch his design consultancy, LoveFrom.

His collaboration with Altman on io Products was publicly confirmed last year and has already drawn interest from high-profile backers, including Laurene Powell Jobs. Funding for the startup was projected to reach $1 billion by the end of 2024.

The move echoes Altman’s previous investments in AI hardware, such as Humane Inc., a wearable tech startup that also focused on screenless interaction. That venture failed to gain traction, however, and HP acquired some of Humane’s assets for $116 million earlier this year.

OpenAI’s potential acquisition of io Products could mark a significant shift toward physical consumer products in the AI space.

For more information on these topics, visit diplomacy.edu.

Anthropic grows its presence in Europe

Anthropic is expanding its operations across Europe, with plans to add over 100 new roles in sales, engineering, research, and business operations. Most of these positions will be based in Dublin and London.

The company has also appointed Guillaume Princen, a former Stripe executive, as its head for Europe, the Middle East, and Africa. This move signals Anthropic’s ambition to strengthen its global presence, particularly in Europe where the demand for enterprise-ready AI tools is rising.

The company’s hiring strategy also reflects a wider trend within the AI industry, with firms like Anthropic competing for global market share after securing significant funding.

The recent $3.5 billion funding round bolsters Anthropic’s position as it seeks to lead the AI race across multiple regions, including the Americas, Europe, and Asia.

Rather than focusing solely on the US, Anthropic has designed its European push to comply with local AI governance and regulatory standards, which are increasingly important to businesses operating in the region.

Anthropic’s expansion comes at a time when AI firms are facing growing competition from companies like Cohere, which has been positioning itself as a European-compliant alternative.

As the EU continues to shape global AI regulations, Anthropic’s focus on safety and localisation could position it favourably in these highly regulated markets. Analysts suggest that while the US may remain a less regulated environment for AI, the EU is likely to lead global AI policy development in the near future.

For more information on these topics, visit diplomacy.edu.

DeepSeek unveils new approach to improve AI reasoning

Chinese AI firm DeepSeek has unveiled a new method to improve the reasoning capabilities of large language models (LLMs), claiming it delivers more accurate and faster responses than current approaches. The method, developed with researchers from Tsinghua University, combines generative reward modelling (GRM) with a self-principled critique tuning technique.

The technique aims to refine how LLMs respond to general queries by better aligning their outputs with human preferences. According to a paper published on the arXiv preprint repository, the resulting DeepSeek-GRM models outperformed existing methods and proved competitive against widely used public reward models.
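
At a high level, a generative reward model does not output a bare score: it first writes out judging principles and a critique, then derives a score from that text. The sketch below illustrates only this general idea; it is not DeepSeek’s code, and call_llm is a hypothetical placeholder for any chat-completion client.

```python
# Conceptual sketch of generative reward modelling with self-generated
# principles. Illustration of the general idea only, not DeepSeek's
# implementation; call_llm is a hypothetical stand-in for an LLM client.
import re


def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your own LLM client here")


def grade_response(query: str, answer: str) -> int:
    prompt = (
        "First write a short set of principles for judging an answer to the "
        "query below. Then critique the answer against those principles, and "
        "finish with a line 'Score: N', where N is an integer from 1 to 10.\n\n"
        f"Query: {query}\n\nAnswer: {answer}"
    )
    judgement = call_llm(prompt)
    match = re.search(r"Score:\s*(\d+)", judgement)
    return int(match.group(1)) if match else 0


# Scores produced this way can rank candidate answers or serve as a reward
# signal when tuning a model towards human preferences.
```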

DeepSeek has announced intentions to release these models as open source, though no release date has been set. The move follows increased global interest in the company, which had earlier gained attention for its V3 foundation model and R1 reasoning model.

For more information on these topics, visit diplomacy.edu.

Digital Morocco 2030 strategy focuses on tech-driven transformation

Morocco has set ambitious goals to boost its economy through investment in emerging technologies, aiming for a 10% increase in GDP by 2030. As part of its Digital Morocco 2030 strategy, the government is committing over 11 billion dirhams ($1.1 billion) by 2026 to drive digital transformation, create more than 240,000 jobs, and train 100,000 young people annually in digital skills.

The roadmap prioritises digitising government services through a Unified Administrative Services Portal, with the long-term goal of placing Morocco among the world’s top 50 tech nations. Blockchain plays a central role in this vision, being adopted to improve transparency and efficiency in public services, and already undergoing trials in private sectors like healthcare and finance.

Despite an ongoing official ban, digital asset ownership has surged: more than six million Moroccans now hold such assets, representing over 15% of the population. In parallel, the country is rapidly expanding its use of AI. Notably, Morocco has introduced AI into its judiciary, launched an AI-powered university learning system, and trained over 1,000 small- and medium-sized businesses in AI adoption through partnerships with LinkedIn and the European Bank for Reconstruction and Development.

For more information on these topics, visit diplomacy.edu.

Copyright lawsuits against OpenAI and Microsoft combined in AI showdown

Twelve copyright lawsuits filed against OpenAI and Microsoft have been merged into a single case in the Southern District of New York.

The US Judicial Panel on Multidistrict Litigation ordered the consolidation despite objections from many plaintiffs, who argued their cases were too distinct to be combined.

The lawsuits claim that OpenAI and Microsoft used copyrighted books and journalistic works without consent to train AI tools like ChatGPT and Copilot.

The plaintiffs include high-profile authors—Ta-Nehisi Coates, Sarah Silverman, Junot Díaz—and major media outlets such as The New York Times and Daily News.

The panel justified the centralisation by citing shared factual questions and the benefits of unified pretrial proceedings, including streamlined discovery and avoidance of conflicting rulings.

OpenAI has defended its use of publicly available data under the legal doctrine of ‘fair use.’

A spokesperson stated the company welcomed the consolidation and looked forward to proving that its practices are lawful and support innovation. Microsoft has not yet issued a comment on the ruling.

The authors’ attorney, Steven Lieberman, countered that this is about large-scale theft. He emphasised that both Microsoft and OpenAI have, in their view, infringed on millions of protected works.

Some of the same authors are also suing Meta, alleging the company trained its models using books from the shadow library LibGen, which houses over 7.5 million titles.

Simultaneously, Meta faced backlash in the UK, where authors protested outside the company’s London office. The demonstration focused on Meta’s alleged use of pirated literature in its AI training datasets.

The Society of Authors has called the actions illegal and harmful to writers’ livelihoods.

Amazon also entered the copyright discussion this week, confirming its new Kindle ‘Recaps’ feature uses generative AI to summarise book plots.

While Amazon says the recaps are accurate, concerns have emerged online about the reliability of AI-generated summaries.

In the UK, lawmakers are also reconsidering copyright exemptions for AI companies, facing growing pressure from creative industry advocates.

The debate over how AI models access and use copyrighted material is intensifying, and the decisions made in courtrooms and parliaments could radically change the digital publishing landscape.

For more information on these topics, visit diplomacy.edu.

Sam Altman’s AI cricket post fuels India speculation

A seemingly light-hearted social media post by OpenAI CEO Sam Altman has stirred a wave of curiosity and scepticism in India. Altman shared an AI-generated anime image of himself as a cricket player dressed in an Indian jersey, which quickly went viral among Indian users.

While some saw it as a fun gesture, others questioned the timing and motives, speculating whether it was part of a broader strategy to woo Indian audiences. This isn’t the first time Altman has publicly praised India.

In recent weeks, he lauded the country’s rapid adoption of AI technology, calling it ‘amazing to watch’ and saying it was even outpacing the rest of the world. His comments marked a shift from the more dismissive stance of his 2023 visit, when he doubted India’s potential to compete with OpenAI’s large-scale models.

However, during his return visit in February 2025, he expressed interest in collaborating with Indian authorities on affordable AI solutions. The timing of Altman’s praise coincides with a surge in Indian users on OpenAI’s platforms, now the company’s second-largest market.

Meanwhile, OpenAI faces a legal tussle with several Indian media outlets over the alleged misuse of their content. Despite this, India’s booming AI market, projected to hit $8 billion by 2025, makes the country a critical frontier for global tech firms.

Experts argue that Altman’s overtures are more about business than sentiment. With increasing competition from rival AI models like DeepSeek and Gemini, maintaining and growing OpenAI’s Indian user base has become vital. As technology analyst Nikhil Pahwa said, ‘There’s no real love; it’s just business.’

For more information on these topics, visit diplomacy.edu.

Thailand strengthens cybersecurity with Google Cloud

Thailand’s National Cyber Security Agency (NCSA) has joined forces with Google Cloud to strengthen the country’s cyber resilience, using AI-based tools and shared threat intelligence instead of relying solely on traditional defences.

The collaboration aims to better protect public agencies and citizens against increasingly sophisticated cyber threats.

A key part of the initiative involves deploying Google Cloud Cybershield for centralised monitoring of security events across government bodies. Instead of having fragmented monitoring systems, this unified approach will help streamline incident detection and response.

The partnership also brings advanced training for cybersecurity personnel in the public sector, alongside regular threat intelligence sharing.

Google Cloud Web Risk will be integrated into government operations to automatically block websites hosting malware and phishing content, instead of relying on manual checks.
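
For illustration, the snippet below shows how a URL can be checked against Google’s threat lists with the Web Risk Python client (google-cloud-webrisk). It is a minimal sketch of the kind of lookup such an integration relies on, not the NCSA deployment itself, whose technical details are not public.

```python
# Minimal Web Risk lookup sketch (pip install google-cloud-webrisk).
# Requires Google Cloud credentials with the Web Risk API enabled.
from google.cloud import webrisk_v1

client = webrisk_v1.WebRiskServiceClient()

# Google's Safe Browsing test page, which is flagged as malware by design.
response = client.search_uris(
    uri="http://testsafebrowsing.appspot.com/s/malware.html",
    threat_types=[
        webrisk_v1.ThreatType.MALWARE,
        webrisk_v1.ThreatType.SOCIAL_ENGINEERING,
    ],
)

if response.threat.threat_types:
    print("Block this URI:", list(response.threat.threat_types))
else:
    print("No known threat for this URI")
```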

Google further noted the impact of its anti-scam technology in Google Play Protect, which has prevented over 6.6 million high-risk app installation attempts in Thailand since its 2024 launch—enhancing mobile safety for millions of users.

For more information on these topics, visit diplomacy.edu.

Microsoft showcases Copilot’s AI potential with Quake II demo

Microsoft has introduced a browser-based, AI-generated version of the classic game Quake II as a demonstration of its Copilot Gaming Experiences.

The innovative approach showcases the capabilities of generative AI by applying it to a beloved retro game, providing a fresh, interactive experience that requires no traditional game engine.

The project stems from the company’s research labs, which used Muse, Microsoft’s World and Human Action Model (WHAM), to generate gameplay in real time.

Training the AI model on a level of Quake II enabled Copilot to dynamically create game visuals and respond to player inputs instantly. Microsoft describes the technology as ‘a glimpse into next-generation AI gaming experiences’.

Rather than relying on standard game engines, it simulates gameplay through AI generation, demonstrating how older games can be revitalised through modern techniques.

Although the demo is not a full game, it provides users with a tangible example of how AI can enhance classic games.

Microsoft encourages players to share their experiences and feedback, as the company seeks to refine and expand the use of AI in gaming. The demo is now available to try for free, offering an engaging preview of what could be a new frontier in interactive entertainment.

For more information on these topics, visit diplomacy.edu.

Meta unveils Llama 4 models to boost AI across platforms

Meta has launched Llama 4, its latest and most advanced family of open-weight AI models, aiming to enhance the intelligence of Meta AI across services like WhatsApp, Instagram, and Messenger.

Instead of keeping these models cloud-restricted, Meta has made them available for download through its official Llama website and Hugging Face, encouraging wider developer access.

Two models, Llama 4 Scout and Maverick, are now publicly available. Scout, the lighter model with 17 billion active parameters, supports a 10 million-token context window and can run on a single Nvidia H100 GPU.

It outperforms rivals like Google’s Gemma 3 and Mistral 3.1 in benchmark tests. Maverick, the more capable model, uses the same number of active parameters but with 128 experts, offering competitive performance against GPT-4o and DeepSeek v3 while being more efficient.

Meta also revealed the Llama 4 Behemoth model, still in training, which serves as a teacher for the rest of the Llama 4 line. Instead of targeting lightweight use, Behemoth focuses on heavy multimodal tasks with 288 billion active parameters and nearly two trillion in total.

Meta claims it outpaces GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro in key STEM-related evaluations.

These open-weight AI models allow local deployment instead of relying on cloud APIs, though some licensing limits may apply. With Scout and Maverick already accessible, Meta is gradually integrating Llama 4 capabilities into its messaging and social platforms worldwide.
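
For readers who want to try local deployment, the sketch below loads a Llama 4 checkpoint with the Hugging Face transformers pipeline. It assumes a transformers release with Llama 4 support, an accepted licence on the gated repository, sufficient GPU memory (Meta cites a single H100 for Scout with int4 quantisation), and a repo id along the lines of meta-llama/Llama-4-Scout-17B-16E-Instruct, which should be confirmed on Meta’s official Hugging Face listing.

```python
# Sketch: running an open-weight Llama 4 model locally via transformers.
# Assumes Llama 4 support in your transformers version and access to the
# gated checkpoint; the repo id below should be verified on Hugging Face.
from transformers import pipeline

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # assumed repo id

generator = pipeline(
    "text-generation",
    model=model_id,
    device_map="auto",   # spread weights across available accelerators
    torch_dtype="auto",  # use the dtype stored in the checkpoint
)

messages = [
    {"role": "user", "content": "In two sentences, what is a mixture-of-experts model?"},
]

# With chat-style input, generated_text holds the conversation including
# the model's reply as the final message.
output = generator(messages, max_new_tokens=150)
print(output[0]["generated_text"][-1]["content"])
```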

For more information on these topics, visit diplomacy.edu.

Tech giants face pushback over AI and book piracy

Meta and Anthropic’s recent attempts to defend their use of copyrighted books in training AI tools under the US legal concept of ‘fair use’ are unlikely to succeed in UK courts, according to the Publishers Association and the Society of Authors.

Legal experts argue that ‘fair use’ is far broader than the UK’s stricter ‘fair dealing’ rules, which limit the unauthorised use of copyrighted works.

The controversy follows revelations that Meta may have used pirated books from Library Genesis (LibGen) to train its AI model, Llama 3. Legal filings in the US claim the use of these books was transformative and formed only a small part of the training data.

However, UK organisations and authors insist that such use amounts to large-scale copyright infringement and would not be justified under UK law.

Calls for transparency and licensing reform are growing, with more than 8,000 writers signing a petition and protests planned outside Meta’s London headquarters.

Critics, including Baroness Beeban Kidron, argue that AI models rely on the creativity and quality of copyrighted content—making it all the more important for authors to retain control and receive proper compensation.

For more information on these topics, visit diplomacy.edu.