Google blends AI Mode with Lens

Google is enhancing its experimental AI Mode by combining the visual power of Google Lens with the conversational intelligence of Gemini, offering users a more dynamic way to search.

Instead of relying on typed queries alone, users can now upload photos or take snapshots with their smartphone to receive more insightful answers.

The new feature moves beyond traditional reverse image search. For instance, you could snap a photo of a mystery kitchen tool and ask, ‘What is this, and how do I use it?’, receiving not only a helpful explanation but also links to buy it and even video demonstrations.

Rather than focusing on a single object, AI Mode can interpret entire scenes, offering context-aware suggestions.

Take a photo of a bookshelf, a meal, or even a cluttered drawer, and AI Mode will identify the items and describe how they relate to each other. It might suggest recipes using the ingredients shown, help spot a misplaced phone charger, or recommend a reading order for your books.

Behind the scenes, the system runs multiple AI agents to analyse each element, providing layered, tailored responses.
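Google has not published how these agents are coordinated, but the behaviour described resembles a simple fan-out: segment the scene, query one agent per detected element, then merge the results. The sketch below is a purely illustrative outline of that pattern; the detection and answering functions are dummy stand-ins, not Google APIs.

```python
# Illustrative fan-out over scene elements; dummy stand-ins, not Google's APIs.
from concurrent.futures import ThreadPoolExecutor

def detect_elements(image_path: str) -> list[str]:
    # Stand-in for a vision model that segments the photo into labelled items.
    return ["cast-iron pan", "olive oil", "garlic", "paperback novel"]

def ask_agent(element: str, question: str) -> str:
    # Stand-in for one agent answering the user's question about one element.
    return f"{element}: (answer to '{question}')"

def answer_scene_query(image_path: str, question: str) -> str:
    elements = detect_elements(image_path)
    # One agent per detected element, queried in parallel.
    with ThreadPoolExecutor() as pool:
        notes = list(pool.map(lambda e: ask_agent(e, question), elements))
    # A final step would merge the per-element notes into one layered answer.
    return "\n".join(notes)

print(answer_scene_query("kitchen_counter.jpg", "What can I cook with this?"))
```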

Although other platforms like ChatGPT also support image recognition, Google’s strength lies in its decades of search data and visual indexing. Currently, the feature is accessible to Google One AI Premium subscribers or those enrolled in Search Labs via the Google mobile app.

For more information on these topics, visit diplomacy.edu.

Meta faces backlash over Llama 4 release

Over the weekend, Meta unveiled two new Llama 4 models—Scout, a smaller version, and Maverick, a mid-sized variant it claims outperforms OpenAI’s GPT-4o and Google’s Gemini 2.0 Flash across multiple benchmarks.

Maverick quickly climbed to second place on LMArena, an AI benchmarking platform where human evaluators compare and vote on model outputs. Meta proudly pointed to Maverick’s Elo score of 1417, placing it just beneath Gemini 2.5 Pro rather than trailing the usual leaders.
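For context, the 1417 figure is an Elo-style rating derived from those head-to-head human votes, so only the gaps between models carry meaning. The snippet below applies the classical Elo win-probability formula to show roughly what a rating gap implies; LMArena’s exact statistical method may differ, and the comparison rating is purely illustrative.

```python
# Classical Elo expected-score formula (400-point scale). LMArena's exact
# methodology may differ; the 1387 comparison rating is purely illustrative.
def expected_win_rate(rating_a: float, rating_b: float) -> float:
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

print(expected_win_rate(1417, 1387))  # ~0.54: a 30-point lead implies winning ~54% of votes
```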

However, AI researchers noticed a critical detail buried in Meta’s documentation: the version of Maverick that ranked so highly wasn’t the one released to the public. Instead of using the standard model, Meta had submitted an ‘experimental’ version specifically optimised for conversations.

LMArena later criticised this move, saying Meta failed to clearly indicate the model was customised, prompting the platform to update its policies to ensure future evaluations remain fair and reproducible.

A Meta spokesperson acknowledged the use of experimental variants, insisting the company frequently tests different configurations.

While this wasn’t a violation of LMArena’s existing rules, the episode raised concerns about the credibility of benchmark rankings when companies submit fine-tuned models instead of the ones accessible to the wider community.

Independent AI researcher Simon Willison expressed frustration, saying the impressive ranking lost all meaning once it became clear the public couldn’t even use the same version.

The controversy unfolded against a backdrop of mounting competition in open-weight AI, with Meta under pressure following high-profile releases like China’s DeepSeek model.

Meta also released Llama 4 on a Saturday, an unusual move that CEO Mark Zuckerberg explained simply as ‘that’s when it was ready.’ But for many in the AI space, the launch has only deepened confusion around what these models can genuinely deliver.

For more information on these topics, visit diplomacy.edu.

New Jersey criminalises AI-generated nude deepfakes of minors

New Jersey has become the first US state to criminalise the creation and sharing of AI-generated nude images of minors, following a high-profile campaign led by 14-year-old Francesca Mani. The legislation, signed into law on 2 April by Governor Phil Murphy, allows victims to sue perpetrators for up to $1,000 per image and includes criminal penalties of up to five years in prison and fines of up to $30,000.

Mani launched her campaign after discovering that boys at her school had used an AI ‘nudify’ website to target her and other girls. Refusing to accept the school’s minimal disciplinary response, she called for lawmakers to take decisive action against such deepfake abuses. Her efforts gained national attention, including a feature on 60 Minutes, and helped drive the new legal protections.

The law defines deepfakes as media that convincingly depicts someone doing something they never actually did. It also prohibits the use of such technology for election interference or defamation. Although the law’s focus is on malicious misuse, questions remain about whether exemptions will be made for legitimate uses in the film, tech, and education sectors.

For more information on these topics, visit diplomacy.edu.

Tech giants face pushback over AI and book piracy

Meta and Anthropic’s recent attempts to defend their use of copyrighted books in training AI tools under the US legal concept of ‘fair use’ are unlikely to succeed in UK courts, according to the Publishers Association and the Society of Authors.

Legal experts argue that ‘fair use’ is far broader than the UK’s stricter ‘fair dealing’ rules, which limit the unauthorised use of copyrighted works.

The controversy follows revelations that Meta may have used pirated books from Library Genesis (LibGen) to train its AI model, Llama 3. Legal filings in the US claim the use of these books was transformative and formed only a small part of the training data.

However, UK organisations and authors insist that such use amounts to large-scale copyright infringement and would not be justified under UK law.

Calls for transparency and licensing reform are growing, with more than 8,000 writers signing a petition and protests planned outside Meta’s London headquarters.

Critics, including Baroness Beeban Kidron, argue that AI models rely on the creativity and quality of copyrighted content—making it all the more important for authors to retain control and receive proper compensation.

For more information on these topics, visit diplomacy.edu.

National Crime Agency responds to AI crime warning

The National Crime Agency (NCA) has pledged to ‘closely examine’ recommendations from the Alan Turing Institute after a recent report highlighted the UK’s insufficient preparedness for AI-enabled crime.

The report, from the Centre for Emerging Technology and Security (CETaS), urges the NCA to create a task force to address AI crime within the next five years.

Despite AI-enabled crime being in its early stages, the report warns that criminals are rapidly advancing their use of AI, outpacing law enforcement’s ability to respond.

CETaS claims that UK police forces have been slow to adopt AI themselves, which could leave them vulnerable to increasingly sophisticated crimes, such as child sexual abuse, cybercrime, and fraud.

The Alan Turing Institute emphasises that although AI-specific legislation may be needed eventually, the immediate priority is for law enforcement to integrate AI into their crime-fighting efforts.

Such an initiative would see law enforcement deploy AI tools of its own to counter AI-enabled crime, as fraudsters and criminals increasingly exploit the technology’s capacity to deceive.

While AI crime remains a relatively new phenomenon, recent examples such as the $25 million deepfake CFO fraud show the growing threat.

The report also highlights the role of AI in phishing scams, romance fraud, and other deceptive practices, warning that future AI-driven crimes may become harder to detect as technology evolves.

For more information on these topics, visit diplomacy.edu.

New Jersey criminalises the harmful use of AI deepfakes

New Jersey has become one of several US states to criminalise the creation and distribution of deceptive AI-generated media, commonly known as deepfakes. Governor Phil Murphy signed the legislation on Wednesday, introducing civil and criminal penalties for those who produce or share such media.

If deepfakes are used to commit further crimes such as harassment, they may now be treated as a third-degree offence, punishable by fines of up to $30,000 or up to five years in prison.

The bill was inspired by a disturbing incident at a New Jersey school where students shared explicit AI-generated images of a classmate.

Governor Murphy had initially vetoed the legislation in March, calling for changes to reduce the risk of constitutional challenges. Lawmakers later amended the bill, which passed with overwhelming support in both chambers.

The law aims to deter the misuse of deepfakes while preserving legitimate applications of AI.

‘This legislation takes a proactive approach,’ said Assemblyman Lou Greenwald, one of the bill’s sponsors. ‘We are safeguarding New Jersey residents and offering justice to victims of digital abuse.’

A growing number of US states are taking similar action, particularly around election integrity and online harassment. While 27 states now target AI-generated sexual content, others have introduced measures to limit political deepfakes.

States like Texas and Minnesota have banned deceptive political media outright, while Florida and Wisconsin require clear disclosures. New Jersey’s move reflects a broader push to keep pace with rapidly evolving technology and its impact on public trust and safety.

For more information on these topics, visit diplomacy.edu.

Meta and UFC to transform fight experience

UFC President Dana White has announced a groundbreaking partnership with Meta, following his recent appointment to the tech giant’s board.

The collaboration marks a significant moment for both organisations, with Meta CEO Mark Zuckerberg, a well-known MMA enthusiast and practitioner, praising White’s ability to elevate global sports brands.

The deal aims to revolutionise fan engagement through cutting-edge technologies. According to White, plans are already underway to redesign the UFC’s ranking system, with hopes of delivering more compelling matchups.

While details remain under wraps, he hinted that AI could be central to the project, potentially transforming how fights are scored and analysed in real time.

Zuckerberg expressed excitement about the future of UFC fan experiences, suggesting Meta’s tech resources could introduce innovative ways for audiences to connect with the sport.

Enhanced data analysis may also support fighters in training and strategy, leading to higher-quality contests and fewer controversial decisions.

The full impact of the partnership will unfold in the coming years, with fans and athletes alike anticipating significant change.

For more information on these topics, visit diplomacy.edu.

Retail stocks slump after tariff shock

Retail giants are facing sharp declines in after-hours trading as new US tariffs on imports from China, the European Union, and Vietnam begin to rattle markets. Walmart and Amazon both saw their shares fall, with Nike also heavily impacted due to its dependence on Chinese manufacturing.

Walmart’s drop of over 4% reflects its heavy reliance on Chinese imports, with roughly 70% of its merchandise tied to the country. Amazon, similarly exposed through its third-party sellers, dipped close to 5% amid fears that rising costs will force sellers to raise prices, dampening consumer demand. These developments could severely affect the upcoming holiday shopping season.

Nike, meanwhile, saw shares fall by more than 6% as news emerged that many of its products, including popular sneakers, are produced in China and Vietnam. Although the company has been diversifying production to Vietnam, the move offers little relief now, as Vietnam faces an even steeper 46% tariff. The new policies may force widespread price hikes, putting further pressure on consumers and the broader retail sector.

For more information on these topics, visit diplomacy.edu.

Emergence AI launches platform that builds AI with AI

The startup Emergence AI has launched a new no-code platform that allows users to generate custom AI agents simply by describing tasks in natural language.

These agents can then autonomously create other, more specialised agents to complete complex work, in real time and without requiring human coding expertise.

The system, which the company calls a breakthrough in ‘recursive intelligence’, checks its registry of agents for task compatibility. If existing agents aren’t suitable, new ones are created instantly to handle the job.

The agents can also anticipate related tasks, boosting automation across enterprise operations. Emergence AI claims the platform can seamlessly orchestrate collaboration among multiple agents, bringing a new level of efficiency to data transformation, migration, analytics, and even code generation and verification.

Users can select from a range of major large language models, including OpenAI’s GPT-4.5, Anthropic’s Claude, and Meta’s Llama. Enterprises can also integrate their own models.
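Emergence AI has not published its internals, but the registry-then-create behaviour it describes maps onto a simple dispatch loop: look for a registered agent whose capabilities cover the task, and spin up a new specialised agent if none exists. The sketch below is an illustrative assumption of that pattern; names such as AgentRegistry, the capability labels, and the model identifiers are hypothetical, not the company’s actual API.

```python
# Hypothetical sketch of the registry-then-create pattern described above;
# not Emergence AI's actual API.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    capabilities: set[str]
    model: str  # e.g. a GPT, Claude, Llama, or enterprise model identifier

    def run(self, task: str) -> str:
        # Placeholder: a real agent would call its underlying LLM here.
        return f"[{self.name}/{self.model}] handled: {task}"

@dataclass
class AgentRegistry:
    agents: list[Agent] = field(default_factory=list)

    def dispatch(self, task: str, required: str, model: str = "gpt-4.5") -> str:
        # 1. Check the registry for an agent whose capabilities cover the task.
        for agent in self.agents:
            if required in agent.capabilities:
                return agent.run(task)
        # 2. Otherwise, create a new specialised agent on the fly and register it.
        new_agent = Agent(name=f"{required}-agent", capabilities={required}, model=model)
        self.agents.append(new_agent)
        return new_agent.run(task)

registry = AgentRegistry()
print(registry.dispatch("migrate the sales table to the new schema", required="data-migration"))
```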

With safety and oversight in mind, Emergence AI has built in access controls, performance verification tools, and human review processes to ensure responsible deployment. Pricing has yet to be disclosed, but interested parties are encouraged to contact the firm directly.

For more information on these topics, visit diplomacy.edu.

Studio Ghibli director warns AI can’t replicate emotional depth

AI may soon be capable of producing entire animated films, warns Goro Miyazaki, son of the iconic Hayao Miyazaki and managing director at Studio Ghibli.

Amid a viral trend of AI-generated images mimicking Ghibli’s hand-drawn style, Goro reflected on both the potential and risks of generative technology. While automation could ease Japan’s animator shortage, he believes the emotional depth that defines Ghibli’s work cannot be replicated by machines.

Speaking from Ghibli’s atelier in Tokyo, Goro acknowledged that AI could soon produce full-length features, yet questioned whether audiences would accept creations lacking human touch.

He also noted that the new wave of tools may unlock creative opportunities for previously overlooked talent.

Nonetheless, he stressed the irreplaceable vision of artists like his father, whose Oscar-winning final film The Boy and the Heron continues to embody themes of loss and mortality, drawn from postwar experience.

OpenAI, whose ChatGPT image generator has fuelled the online Ghibli-style craze, claims to block imitations of living artists while allowing broader studio aesthetics.

As AI redefines content creation, Goro warned against losing the uniquely human elements, both light and dark, that have given Studio Ghibli’s films their enduring depth and resonance.

For more information on these topics, visit diplomacy.edu.