New AI firm Deep Cogito launches versatile open models

A new San Francisco-based startup, Deep Cogito, has unveiled its first family of AI models, Cogito 1, which can switch between fast-response and deep-reasoning modes instead of being limited to just one approach.

These hybrid models combine the efficiency of standard AI with the step-by-step problem-solving abilities seen in advanced systems like OpenAI’s o1. While reasoning models excel in fields like maths and physics, they often require more computing power, a trade-off Deep Cogito aims to balance.

The Cogito 1 series, built on Meta’s Llama and Alibaba’s Qwen models instead of starting from scratch, ranges from 3 billion to 70 billion parameters, with larger versions planned.
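As a rough illustration of how such a dual-mode open model might be driven once downloaded, the sketch below switches between a quick reply and extended reasoning through the system prompt, using the Hugging Face transformers library. The model identifier and the exact toggle phrase are assumptions for illustration, not Deep Cogito’s documented interface.

```python
# Hypothetical sketch: toggling a hybrid model between fast replies and
# step-by-step reasoning via the system prompt. The model name and the
# toggle phrasing are assumptions, not Deep Cogito's documented interface.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepcogito/cogito-v1-preview-llama-3B"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

def ask(question: str, deep_reasoning: bool = False) -> str:
    system = ("Think step by step before answering." if deep_reasoning
              else "Answer concisely.")  # assumed toggle phrasing
    messages = [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=512)
    # Strip the prompt tokens and return only the newly generated text
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

print(ask("What is 17 * 24?"))                       # fast mode
print(ask("What is 17 * 24?", deep_reasoning=True))  # reasoning mode
```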

Early tests suggest the top-tier Cogito 70B outperforms rivals like DeepSeek’s reasoning model and Meta’s Llama 4 Scout in some tasks. The models are available for download or through cloud APIs, offering flexibility for developers.

Founded in June 2024 by ex-Google DeepMind product manager Dhruv Malhotra and former Google engineer Drishan Arora, Deep Cogito is backed by investors like South Park Commons.

The company’s ambitious goal is to develop ‘general superintelligence’, AI that surpasses human capabilities rather than merely matching them. For now, the team says it has only scratched the surface of its scaling potential.

For more information on these topics, visit diplomacy.edu.

Dutch researchers to face new security screenings

The Dutch government has proposed new legislation requiring background checks for thousands of researchers working with sensitive technologies. The plan, announced by Education Minister Eppo Bruins, aims to block foreign intelligence from accessing high-risk scientific work.

Around 8,000 people a year, including Dutch citizens, would undergo screenings involving criminal records, work history, and possible links to hostile regimes.

Intelligence services would support the process, which targets sectors like AI, quantum computing, and biotech.

Universities worry the checks may deter global talent due to delays and bureaucracy. Critics also highlight a loophole: screenings occur only once, meaning researchers could still be approached by foreign governments after being cleared.

While other countries are introducing similar measures, the Netherlands will attempt to avoid unnecessary delays. Officials admit, however, that no system can eliminate all risks.

For more information on these topics, visit diplomacy.edu.

Man uses AI avatar in New York court

A 74-year-old man representing himself in a New York State appeal has apologised after using an AI-generated avatar during court proceedings.

Jerome Dewald submitted a video featuring a youthful digital figure to deliver part of his legal argument, prompting confusion and criticism from the judges. One justice described the move as misleading, expressing frustration over the lack of prior disclosure.

Dewald later explained he intended to ease his courtroom anxiety and present his case more clearly, not to deceive.

In a letter to the judges, he acknowledged that transparency should have taken priority and accepted responsibility for the confusion caused. His case, a contract dispute with a former employer, remains under review by the appellate court.

The incident has reignited debate over the role of AI in legal settings. Recent years have seen several high-profile cases where AI-generated content introduced errors or false information, highlighting the risks of using generative technology without proper oversight.

Legal experts say such incidents are becoming increasingly common as AI tools become more accessible.

For more information on these topics, visit diplomacy.edu.

OpenAI’s Sam Altman responds to Miyazaki’s AI animation concerns

The recent viral trend of AI-generated Ghibli-style images has taken the internet by storm. Using OpenAI’s GPT-4o image generator, users have been transforming photos, from historic moments to everyday scenes, into Studio Ghibli-style renditions.

The trend has caught the attention of notable figures, including celebrities and political personalities, sparking both excitement and controversy.

While some praise the trend for democratising art, others argue that it infringes on copyright and undermines the efforts of traditional artists. The debate intensified when Hayao Miyazaki, the co-founder of Studio Ghibli, became a focal point.

In a 2016 documentary, Miyazaki expressed his disdain for AI in animation, calling it ‘an insult to life itself’ and warning that humanity is losing faith in its creativity.

OpenAI’s CEO, Sam Altman, recently addressed these concerns, acknowledging the challenges posed by AI in art but defending its role in broadening access to creative tools. Altman believes that technology empowers more people to contribute, benefiting society as a whole, even if it complicates the art world.

Miyazaki’s comments and Altman’s response highlight a growing divide in the conversation about AI and creativity. As the debate continues, the future of AI in art remains a contentious issue, balancing innovation with respect for traditional artistic practices.

For more information on these topics, visit diplomacy.edu.

Google blends AI mode with Lens

Google is enhancing its experimental AI Mode by combining the visual power of Google Lens with the conversational intelligence of Gemini, offering users a more dynamic way to search.

Instead of typing queries alone, users can now upload photos or take snapshots with their smartphone to receive more insightful answers.

The new feature moves beyond traditional reverse image search. For instance, you could snap a photo of a mystery kitchen tool and ask, ‘What is this, and how do I use it?’, receiving not only a helpful explanation but links to buy it and even video demonstrations.
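The consumer feature has no public API, but the underlying image-plus-question pattern can be sketched with Google’s publicly available Gemini API; the file name and prompt below simply mirror the kitchen-tool example and are illustrative assumptions, not the AI Mode interface itself.

```python
# Illustrative sketch of the image-plus-question pattern described above,
# using Google's public Gemini API rather than the consumer AI Mode feature.
# The API key placeholder and file name are assumptions for illustration.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")

photo = Image.open("mystery_kitchen_tool.jpg")  # assumed local photo
response = model.generate_content(
    [photo, "What is this, and how do I use it?"]
)
print(response.text)
```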

Rather than focusing on a single object, AI Mode can interpret entire scenes, offering context-aware suggestions.

Take a photo of a bookshelf, a meal, or even a cluttered drawer, and AI Mode will identify items and describe how they relate to each other. It might suggest recipes using the ingredients shown, help identify a misplaced phone charger, or recommend the order to read your books.

Behind the scenes, the system runs multiple AI agents to analyse each element, providing layered, tailored responses.

Although other platforms like ChatGPT also support image recognition, Google’s strength lies in its decades of search data and visual indexing. Currently, the feature is accessible to Google One AI Premium subscribers or those enrolled in Search Labs via the Google mobile app.

For more information on these topics, visit diplomacy.edu.

Meta faces backlash over Llama 4 release

Over the weekend, Meta unveiled two new Llama 4 models—Scout, a smaller version, and Maverick, a mid-sized variant it claims outperforms OpenAI’s GPT-4o and Google’s Gemini 2.0 Flash across multiple benchmarks.

Maverick quickly climbed to second place on LMArena, an AI benchmarking platform where human evaluators compare and vote on model outputs. Meta proudly pointed to Maverick’s Elo score of 1417, which placed it just beneath Gemini 2.5 Pro rather than trailing the usual leaders.
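For context on what such a score means, LMArena-style leaderboards use an Elo-like rating in which the gap between two models maps to an expected head-to-head preference rate. A minimal sketch is below; only the 1417 figure comes from this story, and the comparison rating is an illustrative assumption.

```python
# Illustration of how arena-style Elo ratings translate into expected
# head-to-head win rates. Only Maverick's 1417 comes from the article;
# the comparison rating below is an illustrative assumption.
def expected_win_rate(rating_a: float, rating_b: float) -> float:
    """Probability that model A is preferred over model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

maverick = 1417
rival = 1437  # assumed rating of a slightly higher-ranked model

print(f"{expected_win_rate(maverick, rival):.1%}")  # ~47.1% of head-to-head votes
```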

However, AI researchers noticed a critical detail buried in Meta’s documentation: the version of Maverick that ranked so highly wasn’t the one released to the public. Instead of using the standard model, Meta had submitted an ‘experimental’ version specifically optimised for conversations.

LMArena later criticised this move, saying Meta failed to clearly indicate the model was customised, prompting the platform to update its policies to ensure future evaluations remain fair and reproducible.

Meta’s spokesperson acknowledged the use of experimental variants, insisting the company frequently tests different configurations.

While this wasn’t a violation of LMArena’s existing rules, the episode raised concerns about the credibility of benchmark rankings when companies submit fine-tuned models instead of the ones accessible to the wider community.

Independent AI researcher Simon Willison expressed frustration, saying the impressive ranking lost all meaning once it became clear the public couldn’t even use the same version.

The controversy unfolded against a backdrop of mounting competition in open-weight AI, with Meta under pressure following high-profile releases like China’s DeepSeek model.

Meta also released Llama 4 on a Saturday, an unusual choice that CEO Mark Zuckerberg explained simply as ‘that’s when it was ready.’ But for many in the AI space, the launch has only deepened confusion around what these models can genuinely deliver.

For more information on these topics, visit diplomacy.edu.

New Jersey criminalises AI-generated nude deepfakes of minors

New Jersey has become the first US state to criminalise the creation and sharing of AI-generated nude images of minors, following a high-profile campaign led by 14-year-old Francesca Mani. The legislation, signed into law on 2 April by Governor Phil Murphy, allows victims to sue perpetrators for up to $1,000 per image and includes criminal penalties of up to five years in prison and fines of up to $30,000.

Mani launched her campaign after discovering that boys at her school had used an AI ‘nudify’ website to target her and other girls. Refusing to accept the school’s minimal disciplinary response, she called for lawmakers to take decisive action against such deepfake abuses. Her efforts gained national attention, including a feature on 60 Minutes, and helped drive the new legal protections.

The law defines deepfakes as media that convincingly depicts someone doing something they never actually did. It also prohibits the use of such technology for election interference or defamation. Although the law’s focus is on malicious misuse, questions remain about whether exemptions will be made for legitimate uses in film, tech, or education sectors.

For more information on these topics, visit diplomacy.edu.

Tech giants face pushback over AI and book piracy

Meta and Anthropic’s recent attempts to defend their use of copyrighted books in training AI tools under the US legal concept of ‘fair use’ are unlikely to succeed in UK courts, according to the Publishers Association and the Society of Authors.

Legal experts argue that ‘fair use’ is far broader than the UK’s stricter ‘fair dealing’ rules, which limit the unauthorised use of copyrighted works.

The controversy follows revelations that Meta may have used pirated books from Library Genesis to train its AI model, Llama 3. Legal filings in the US claim the use of these books was transformative and formed only a small part of the training data.

However, UK organisations and authors insist that such use amounts to large-scale copyright infringement and would not be justified under UK law.

Calls for transparency and licensing reform are growing, with more than 8,000 writers signing a petition and protests planned outside Meta’s London headquarters.

Critics, including Baroness Beeban Kidron, argue that AI models rely on the creativity and quality of copyrighted content—making it all the more important for authors to retain control and receive proper compensation.

For more information on these topics, visit diplomacy.edu.

National Crime Agency responds to AI crime warning

The National Crime Agency (NCA) has pledged to ‘closely examine’ recommendations from the Alan Turing Institute after a recent report highlighted the UK’s insufficient preparedness for AI-enabled crime.

The report, from the Centre for Emerging Technology and Security (CETaS), urges the NCA to create a task force to address AI crime within the next five years.

Despite AI-enabled crime being in its early stages, the report warns that criminals are rapidly advancing their use of AI, outpacing law enforcement’s ability to respond.

CETaS claims that UK police forces have been slow to adopt AI themselves, which could leave them vulnerable to increasingly sophisticated crimes, such as child sexual abuse, cybercrime, and fraud.

The Alan Turing Institute emphasises that although AI-specific legislation may be needed eventually, the immediate priority is for law enforcement to integrate AI into their crime-fighting efforts.

Such an initiative would involve using AI tools to counter AI-enabled crime, as fraudsters and other criminals increasingly exploit the technology to deceive.

While AI crime remains a relatively new phenomenon, recent examples such as the $25 million deepfake CFO fraud show the growing threat.

The report also highlights the role of AI in phishing scams, romance fraud, and other deceptive practices, warning that future AI-driven crimes may become harder to detect as technology evolves.

For more information on these topics, visit diplomacy.edu.

New Jersey criminalises the harmful use of AI deepfakes

New Jersey has become one of several US states to criminalise the creation and distribution of deceptive AI-generated media, commonly known as deepfakes. Governor Phil Murphy signed the legislation on Wednesday, introducing civil and criminal penalties for those who produce or share such media.

If deepfakes are used to commit further crimes like harassment, they may now be treated as a third-degree offence, punishable by fines up to $30,000 or up to five years in prison.

The bill was inspired by a disturbing incident at a New Jersey school where students shared explicit AI-generated images of a classmate.

Governor Murphy had initially vetoed the legislation in March, calling for changes to reduce the risk of constitutional challenges. Lawmakers later amended the bill, which passed with overwhelming support in both chambers.

The law aims to deter the misuse of deepfakes while preserving legitimate applications of AI.

‘This legislation takes a proactive approach,’ said Representative Lou Greenwald, one of the bill’s sponsors. ‘We are safeguarding New Jersey residents and offering justice to victims of digital abuse.’

A growing number of US states are taking similar action, particularly around election integrity and online harassment. While 27 states now target AI-generated sexual content, others have introduced measures to limit political deepfakes.

States like Texas and Minnesota have banned deceptive political media outright, while Florida and Wisconsin require clear disclosures. New Jersey’s move reflects a broader push to keep pace with rapidly evolving technology and its impact on public trust and safety.

For more information on these topics, visit diplomacy.edu.