New AI firm Deep Cogito launches versatile open models

A new San Francisco-based startup, Deep Cogito, has unveiled its first family of AI models, Cogito 1, which can switch between fast-response and deep-reasoning modes instead of being limited to just one approach.

These hybrid models combine the efficiency of standard AI with the step-by-step problem-solving abilities seen in advanced systems like OpenAI’s o1. While reasoning models excel in fields like maths and physics, they often require more computing power, a trade-off Deep Cogito aims to balance.

The Cogito 1 series, built on Meta’s Llama and Alibaba’s Qwen models instead of starting from scratch, ranges from 3 billion to 70 billion parameters, with larger versions planned.

Early tests suggest the top-tier Cogito 70B outperforms rivals like DeepSeek’s reasoning model and Meta’s Llama 4 Scout in some tasks. The models are available for download or through cloud APIs, offering flexibility for developers.

Founded in June 2024 by ex-Google DeepMind product manager Dhruv Malhotra and former Google engineer Drishan Arora, Deep Cogito is backed by investors like South Park Commons.

The company’s ambitious goal is to develop ‘general superintelligence’, AI that surpasses human capabilities rather than merely matching them. For now, the team says it has only scratched the surface of its scaling potential.

OpenAI’s Sam Altman responds to Miyazaki’s AI animation concerns

The recent viral trend of AI-generated Ghibli-style images has taken the internet by storm. Using OpenAI’s GPT-4o image generator, users have been transforming photos, from historic moments to everyday scenes, into Studio Ghibli-style renditions.

The trend has caught the attention of notable figures, including celebrities and politicians, sparking both excitement and controversy.

While some praise the trend for democratising art, others argue that it infringes on copyright and undermines the efforts of traditional artists. The debate intensified when Hayao Miyazaki, the co-founder of Studio Ghibli, became a focal point.

In a 2016 documentary, Miyazaki expressed his disdain for AI in animation, calling it ‘an insult to life itself’ and warning that humanity is losing faith in its creativity.

OpenAI’s CEO, Sam Altman, recently addressed these concerns, acknowledging the challenges posed by AI in art but defending its role in broadening access to creative tools. Altman believes that technology empowers more people to contribute, benefiting society as a whole, even if it complicates the art world.

Miyazaki’s comments and Altman’s response highlight a growing divide in the conversation about AI and creativity. As the debate continues, the future of AI in art remains a contentious issue, balancing innovation with respect for traditional artistic practices.

Google blends AI mode with Lens

Google is enhancing its experimental AI Mode by combining the visual power of Google Lens with the conversational intelligence of Gemini, offering users a more dynamic way to search.

Instead of typing queries alone, users can now upload photos or take snapshots with their smartphone to receive more insightful answers.

The new feature moves beyond traditional reverse image search. For instance, you could snap a photo of a mystery kitchen tool and ask, ‘What is this, and how do I use it?’, receiving not only a helpful explanation but links to buy it and even video demonstrations.

Rather than focusing on a single object, AI Mode can interpret entire scenes, offering context-aware suggestions.

Take a photo of a bookshelf, a meal, or even a cluttered drawer, and AI Mode will identify the items and describe how they relate to each other. It might suggest recipes using the ingredients shown, help spot a misplaced phone charger, or suggest an order in which to read the books.

Behind the scenes, the system runs multiple AI agents to analyse each element, providing layered, tailored responses.

Although other platforms like ChatGPT also support image recognition, Google’s strength lies in its decades of search data and visual indexing. Currently, the feature is accessible to Google One AI Premium subscribers or those enrolled in Search Labs via the Google mobile app.

OpenAI negotiates $500m deal for AI startup

OpenAI is reportedly in talks to acquire io Products, an AI hardware startup co-founded by former Apple design chief Jony Ive and OpenAI CEO Sam Altman, in a deal that could exceed $500 million.

Instead of focusing solely on software like ChatGPT and API tools, OpenAI appears to be eyeing consumer devices as a way to diversify its revenue.

io Products is said to be working on AI-powered consumer tech, including a screenless smartphone and smart home gadgets.

The company’s team includes several former Apple designers, such as Tang Tan and Evans Hankey. Instead of traditional screens, these new devices are expected to explore more ambient and context-aware ways of interaction.

Jony Ive, best known for his role in designing iconic Apple products like the iPhone and iMac, left Apple in 2019 to launch his design consultancy, LoveFrom.

His collaboration with Altman on io Products was publicly confirmed last year and has already drawn interest from high-profile backers, including Laurene Powell Jobs. Funding for the startup was projected to reach $1 billion by the end of 2024.

The move echoes Altman’s previous investments in AI hardware, such as Humane Inc., a wearable tech startup that also focused on screenless interaction. That venture did not scale as hoped, however, and HP acquired some of Humane’s assets for $116 million earlier this year.

OpenAI’s potential acquisition of io Products could mark a significant shift toward physical consumer products in the AI space.

Anthropic grows its presence in Europe

Anthropic is expanding its operations across Europe, with plans to add over 100 new roles in sales, engineering, research, and business operations. Most of these positions will be based in Dublin and London.

The company has also appointed Guillaume Princen, a former Stripe executive, as its head for Europe, the Middle East, and Africa. This move signals Anthropic’s ambition to strengthen its global presence, particularly in Europe where the demand for enterprise-ready AI tools is rising.

The company’s hiring strategy also reflects a wider trend within the AI industry, with firms like Anthropic competing for global market share after securing significant funding.

The recent $3.5 billion funding round bolsters Anthropic’s position as it seeks to lead the AI race across multiple regions, including the Americas, Europe, and Asia.

Rather than focusing solely on the US market, Anthropic has designed its European push to comply with local AI governance and regulatory standards, which are increasingly important to businesses operating in the region.

Anthropic’s expansion comes at a time when AI firms are facing growing competition from companies like Cohere, which has been positioning itself as a European-compliant alternative.

As the EU continues to shape global AI regulations, Anthropic’s focus on safety and localisation could position it favourably in these highly regulated markets. Analysts suggest that while the US may remain a less regulated environment for AI, the EU is likely to lead global AI policy development in the near future.

Meta faces backlash over Llama 4 release

Over the weekend, Meta unveiled two new Llama 4 models—Scout, a smaller version, and Maverick, a mid-sized variant it claims outperforms OpenAI’s GPT-4o and Google’s Gemini 2.0 Flash across multiple benchmarks.

Maverick quickly climbed to second place on LMArena, an AI benchmarking platform where human evaluators compare and vote on model outputs. Meta proudly pointed to Maverick’s Elo score of 1417, which placed it just beneath Gemini 2.5 Pro.

However, AI researchers noticed a critical detail buried in Meta’s documentation: the version of Maverick that ranked so highly wasn’t the one released to the public. Instead of using the standard model, Meta had submitted an ‘experimental’ version specifically optimised for conversations.

LMArena later criticised this move, saying Meta failed to clearly indicate the model was customised, prompting the platform to update its policies to ensure future evaluations remain fair and reproducible.

Meta’s spokesperson acknowledged the use of experimental variants, insisting the company frequently tests different configurations.

While this wasn’t a violation of LMArena’s existing rules, the episode raised concerns about the credibility of benchmark rankings when companies submit fine-tuned models instead of the ones accessible to the wider community.

Independent AI researcher Simon Willison expressed frustration, saying the impressive ranking lost all meaning once it became clear the public couldn’t even use the same version.

The controversy unfolded against a backdrop of mounting competition in open-weight AI, with Meta under pressure following high-profile releases like China’s DeepSeek model.

Adding to the sense of a rushed rollout, Meta released Llama 4 on a Saturday, an unusual move that CEO Mark Zuckerberg explained simply as ‘that’s when it was ready’. But for many in the AI space, the launch has only deepened confusion about what these models can genuinely deliver.

Scientists achieve breakthrough in quantum computing stability

A new study by researchers from the University of Oxford, Delft University of Technology, Eindhoven University of Technology, and Quantum Machines has made a major step forward in quantum computing.

The team has found a way to make Majorana zero modes (MZMs), exotic quasiparticle states seen as building blocks for fault-tolerant quantum computers, far more stable, bringing us closer to reliable, scalable machines.

Quantum computers are incredibly powerful but face a key challenge: their basic units, qubits, are highly fragile and easily disrupted by environmental noise.

MZMs have long been seen as a potential solution because they are predicted to resist such disturbances, but stabilising them for practical use has been difficult until now.

The researchers created a structure called a three-site Kitaev chain, which is a simplified version of a topological superconductor.

By using quantum dots to trap electrons and connecting them with superconducting wires, they created a stable ‘sweet spot’ where MZMs could be farther apart, reducing interference and enhancing their stability.
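For technically minded readers, the structure the team engineered approximates the idealised Kitaev chain. Its textbook Hamiltonian, quoted here as background rather than taken from the paper itself, is

\[
H = \sum_{j} \Big[ -\mu\, c_j^{\dagger} c_j \;-\; t \big( c_j^{\dagger} c_{j+1} + c_{j+1}^{\dagger} c_j \big) \;+\; \Delta \big( c_j c_{j+1} + c_{j+1}^{\dagger} c_j^{\dagger} \big) \Big],
\]

where \(\mu\) is the on-site energy, \(t\) the hopping between neighbouring sites, and \(\Delta\) the superconducting pairing. At the sweet spot \(\mu = 0\) and \(t = \Delta\), Majorana zero modes sit at the two ends of the chain; in the quantum-dot realisation described above, the dots act as the sites and the superconducting links supply both \(t\) and \(\Delta\), which the experiment tunes to be equal.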

Lead author Dr. Greg Mazur believes this breakthrough shows that it is possible to keep MZMs stable as quantum systems grow. With further research, the team aims to build longer chains to improve stability even more, potentially opening the door to reliable, next-generation quantum materials and devices.

Osney Capital invests in the UK’s cybersecurity innovation

Osney Capital has launched the UK’s first specialist cybersecurity seed fund, focused on investing in promising cybersecurity startups at the Pre-Seed and Seed stages.

The fund, which raised more than its initial £50 million target, will write cheques of between £250,000 and £2.5 million and has the capacity for follow-on investments in Series A rounds.

Led by Adam Cragg, Josh Walter, and Paul Wilkes, the Osney Capital team brings decades of experience in cybersecurity and early-stage investing. Instead of relying on generalist investors, the fund will offer tailored support to early-stage companies, addressing the unique challenges in the cybersecurity sector.

The UK cybersecurity industry has grown to £13.2 billion in 2025, driven by complex cyber threats, regulatory pressures, and the rapid adoption of AI. The fund aims to capitalise on this growth, tapping into the strong talent pipeline fed by UK universities and specialised cybersecurity programmes.

Supported by cornerstone investments from the British Business Bank and accredited by the UK’s National Security Strategic Investment Fund, Osney Capital’s mission is to back the next generation of cybersecurity founders and help them scale globally competitive businesses.

Thailand strengthens cybersecurity with Google Cloud

Thailand’s National Cyber Security Agency (NCSA) has joined forces with Google Cloud to strengthen the country’s cyber resilience, using AI-based tools and shared threat intelligence instead of relying solely on traditional defences.

The collaboration aims to better protect public agencies and citizens against increasingly sophisticated cyber threats.

A key part of the initiative involves deploying Google Cloud Cybershield for centralised monitoring of security events across government bodies. Instead of having fragmented monitoring systems, this unified approach will help streamline incident detection and response.

The partnership also brings advanced training for cybersecurity personnel in the public sector, alongside regular threat intelligence sharing.

Google Cloud Web Risk will be integrated into government operations to automatically block websites hosting malware and phishing content, instead of relying on manual checks.
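As an illustration of what such automated checks look like in practice, the sketch below queries the publicly documented Web Risk Lookup API to ask whether a URL appears on Google’s malware or phishing (social engineering) lists. It is a minimal example, not a description of the NCSA deployment, and it assumes a Google Cloud API key with the Web Risk API enabled.

```python
# Illustrative sketch: asking Google's Web Risk Lookup API whether a URL is a
# known malware or phishing (social engineering) site before allowing it.
# Assumes a Google Cloud API key with the Web Risk API enabled.
import requests

WEB_RISK_ENDPOINT = "https://webrisk.googleapis.com/v1/uris:search"

def is_risky(url: str, api_key: str) -> bool:
    response = requests.get(
        WEB_RISK_ENDPOINT,
        params={
            "key": api_key,
            "uri": url,
            # Sent as repeated query parameters; SOCIAL_ENGINEERING covers phishing.
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
        },
        timeout=10,
    )
    response.raise_for_status()
    # The API returns an empty JSON object when the URL is not on a threat list.
    return "threat" in response.json()

if __name__ == "__main__":
    print(is_risky("http://example.com", api_key="YOUR_API_KEY"))
```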

Google further noted the impact of its anti-scam technology in Google Play Protect, which has prevented over 6.6 million high-risk app installation attempts in Thailand since its 2024 launch—enhancing mobile safety for millions of users.

Meta unveils Llama 4 models to boost AI across platforms

Meta has launched Llama 4, its latest and most advanced family of open-weight AI models, aiming to enhance the intelligence of Meta AI across services like WhatsApp, Instagram, and Messenger.

Instead of keeping these models cloud-restricted, Meta has made them available for download through its official Llama website and Hugging Face, encouraging wider developer access.

Two models, Llama 4 Scout and Maverick, are now publicly available. Scout, the lighter model with 17 billion active parameters, supports a 10 million-token context window and can run on a single Nvidia H100 GPU.

It outperforms rivals like Google’s Gemma 3 and Mistral 3.1 in benchmark tests. Maverick, the more capable model, uses the same number of active parameters but with 128 experts, offering competitive performance against GPT-4o and DeepSeek v3 while being more efficient.
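For developers, trying Scout locally typically starts with fetching the weights from Hugging Face, as noted above. The sketch below shows that step; the repository id is an assumption based on Meta’s naming, and access requires accepting the Llama licence and authenticating with a Hugging Face token.

```python
# Illustrative sketch: downloading the open-weight Llama 4 Scout checkpoint
# from Hugging Face for local deployment. The repository id is an assumption
# based on Meta's naming; check the official Llama page for the exact name,
# and authenticate first (e.g. `huggingface-cli login`) after accepting the licence.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # assumed repository id
)
print("Weights stored at:", local_path)
```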

Meta also revealed the Llama 4 Behemoth model, still in training, which serves as a teacher for the rest of the Llama 4 line. Instead of targeting lightweight use, Behemoth focuses on heavy multimodal tasks with 288 billion active parameters and nearly two trillion in total.

Meta claims it outpaces GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro in key STEM-related evaluations.

These open-weight AI models allow local deployment instead of relying on cloud APIs, though some licensing limits may apply. With Scout and Maverick already accessible, Meta is gradually integrating Llama 4 capabilities into its messaging and social platforms worldwide.

For more information on these topics, visit diplomacy.edu.