Google blends AI Mode with Lens

Google is enhancing its experimental AI Mode by combining the visual power of Google Lens with the conversational intelligence of Gemini, offering users a more dynamic way to search.

Instead of typing queries alone, users can now upload photos or take snapshots with their smartphone to receive more insightful answers.

The new feature moves beyond traditional reverse image search. For instance, you could snap a photo of a mystery kitchen tool and ask, ‘What is this, and how do I use it?’, receiving not only a helpful explanation but links to buy it and even video demonstrations.

Rather than focusing on a single object, AI Mode can interpret entire scenes, offering context-aware suggestions.

Take a photo of a bookshelf, a meal, or even a cluttered drawer, and AI Mode will identify items and describe how they relate to each other. It might suggest recipes using the ingredients shown, help identify a misplaced phone charger, or recommend the order to read your books.

Behind the scenes, the system runs multiple AI agents to analyse each element, providing layered, tailored responses.

Although other platforms like ChatGPT also support image recognition, Google’s strength lies in its decades of search data and visual indexing. Currently, the feature is accessible to Google One AI Premium subscribers or those enrolled in Search Labs via the Google mobile app.

For more information on these topics, visit diplomacy.edu.

Meta rolls out restricted teen accounts across platforms

Meta is expanding its ‘Teen Accounts’ feature to Facebook and Messenger following its initial launch on Instagram last September.

The rollout begins in the US, UK, Australia, and Canada, with plans to reach more countries soon. 

These accounts are designed to give younger users an app experience with stronger safety measures, automatically activating restrictions to limit exposure to harmful content and interactions.

Teen users will be automatically placed in a more controlled environment that restricts who can message, comment, or tag them. 

Only friends and previously contacted users can reach out via Messenger or see their stories, while tagging and mentions are also limited. 

These settings require parental approval for any changes, and teens under 16 must have consent to alter key safety features.

On Instagram, Meta is introducing stricter safeguards. Users under 16 now need parental permission to go live or to turn off the tool that blurs images containing suspected nudity in direct messages. 

Meta is also introducing screen-time reminders that prompt teens to log off after one hour, along with an overnight ‘Quiet mode’ to reduce late-night use.

The initiative follows increasing pressure on social media platforms to address concerns around teen mental health. 

In recent years, US lawmakers and the Surgeon General have highlighted the risks associated with young users’ exposure to unregulated digital environments. 

Some states have even mandated parental consent for teen access to social platforms.

Meta reports that over 54 million Instagram accounts have migrated to Teen Accounts. 

According to the company, 97% of users aged 13 to 15 keep the default protections in place. 

A study commissioned by Meta and Ipsos found that 94% of surveyed parents support Teen Accounts, with 85% saying the controls help ensure more positive online experiences for their children.

As digital safety continues to grow as a priority, Meta’s expansion of Teen Accounts signals its willingness to build more accountable, youth-friendly online spaces across its platforms.

New Jersey criminalises AI-generated nude deepfakes of minors

New Jersey has become the first US state to criminalise the creation and sharing of AI-generated nude images of minors, following a high-profile campaign led by 14-year-old Francesca Mani. The legislation, signed into law on 2 April by Governor Phil Murphy, allows victims to sue perpetrators for up to $1,000 per image and includes criminal penalties of up to five years in prison and fines of up to $30,000.

Mani launched her campaign after discovering that boys at her school had used an AI ‘nudify’ website to target her and other girls. Refusing to accept the school’s minimal disciplinary response, she called for lawmakers to take decisive action against such deepfake abuses. Her efforts gained national attention, including a feature on 60 Minutes, and helped drive the new legal protections.

The law defines deepfakes as media that convincingly depicts someone doing something they never actually did. It also prohibits the use of such technology for election interference or defamation. Although the law’s focus is on malicious misuse, questions remain about whether exemptions will be made for legitimate uses in film, tech, or education sectors.

Trump administration pushes for pro-AI shift in US federal agencies

The White House announced on Monday a shift in how US federal agencies will approach AI, prioritising innovation over the stricter regulatory framework previously established under President Biden. 

A new memorandum from the Office of Management and Budget instructs agencies to appoint chief AI officers and craft policies to expand the use of AI technologies across government operations.

This pivot includes repealing two Biden-era directives emphasising transparency and safeguards against AI misuse. 

The earlier rules required federal agencies to implement protective measures for civil rights and limit unchecked acquisition of AI tools. 

These protections have now been replaced with a call for a more ‘forward-leaning and pro-innovation’ stance, removing what the current administration views as excessive bureaucratic constraints.

Federal agencies are now expected to develop AI strategies within six months. These plans must identify barriers to responsible AI implementation and improve how the technology is used enterprise-wide. 

The administration also encouraged the development of specific policies for generative AI, with an emphasis on maximising the use of American-made solutions and enhancing interoperability between systems.

The policy change is part of President Trump’s broader rollback of previous AI governance, including his earlier revocation of a 2023 executive order signed by Biden that required developers to disclose sensitive training data. 

The new framework aims to streamline AI procurement processes and eliminate what the administration labels unnecessary reporting burdens while still maintaining basic privacy protections.

Federal agencies have already begun integrating AI into their operations. The Federal Aviation Administration, for example, has applied machine learning to analyse safety reports and identify emerging aviation risks. 

Under the new guidelines, such initiatives are expected to accelerate, signalling a broader federal embrace of AI across sectors.

Russia fines Telegram over extremist content

A Moscow court has fined the messaging platform Telegram 7 million roubles (approximately $80,000) for failing to remove content allegedly promoting terrorist acts and inciting anti-government protests, according to Russian state news agency TASS.

The court ruled that Telegram did not comply with legal obligations to take down materials deemed extremist, including calls to sabotage railway systems in support of Ukrainian forces and to overthrow the Russian government.

The judgement cited specific Telegram channels accused of distributing such content. Authorities argue that these channels played a role in encouraging public unrest and potentially supporting hostile actions against the Russian state.

The decision adds to the long-standing tension between Russia’s media watchdogs and Telegram, which remains one of the most widely used messaging platforms across Russia and neighbouring countries.

Telegram has not issued a statement in response to the fine, and it is unclear whether the company plans to challenge the court’s ruling. 

The platform was founded by Russian-born entrepreneur Pavel Durov and is currently headquartered in Dubai, boasting close to a billion users globally. 

Telegram’s decentralised nature and encrypted messaging features have made it popular among users seeking privacy, but it has also drawn criticism from governments citing national security concerns.

Durov himself returned to Dubai in March after months in France, following his 2024 arrest over accusations that Telegram was used in connection with fraud, money laundering, and the circulation of illegal content.

Although he has denied any wrongdoing, the incident has further strained the company’s relationship with authorities in Russia.

This latest legal action reflects Russia’s ongoing crackdown on digital platforms accused of facilitating dissent or undermining state control.

With geopolitical tensions still high, especially surrounding the conflict in Ukraine, platforms like Telegram face increasing scrutiny and legal pressure in multiple jurisdictions.

Senator Warner warns TikTok deal deadline extension breaks the law

Senator Mark Warner, the top Democrat on the Senate Intelligence Committee, has criticised President Donald Trump’s recent move to extend the deadline for ByteDance to divest TikTok’s US operations. 

Warner argued that the 75-day extension violates the law passed in 2024, which mandates a complete separation between TikTok’s American entity and its Chinese parent company due to national security concerns.

The deal currently under consideration would allow ByteDance to retain a significant equity stake and maintain an operational role in the new US-based company. 

According to Warner, this arrangement fails to satisfy the legal requirement of eliminating Chinese influence over TikTok’s US operations. 

He emphasised that any legitimate divestiture must include a complete technological and organisational break, preventing ByteDance from accessing user data or source code.

The White House and TikTok have not issued statements in response to Warner’s criticism. The Trump administration, now in its second term, has said it is in contact with four groups regarding a potential TikTok acquisition. 

However, no agreement has been finalised, and China has yet to publicly support a sale of TikTok’s US assets, one of the primary obstacles to completing the deal.

Under the 2024 law, ByteDance was required to divest TikTok’s US business by 19 January or face a ban.

Trump, who retook office on 20 January, chose not to enforce the ban immediately and instead signed an executive order extending the deadline. 

The Justice Department further complicated the issue when it told Apple and Google that the law would not be enforced, allowing the app to remain available for download.

As the deadline extension continues to stir controversy, lawmakers like Warner insist that national security and legislative integrity are at stake.

Copyright lawsuits against OpenAI and Microsoft combined in AI showdown

Twelve copyright lawsuits filed against OpenAI and Microsoft have been merged into a single case in the Southern District of New York.

The US Judicial Panel on Multidistrict Litigation decided to consolidate, despite objections from many plaintiffs who argued their cases were too distinct.

The lawsuits claim that OpenAI and Microsoft used copyrighted books and journalistic works without consent to train AI tools like ChatGPT and Copilot.

The plaintiffs include high-profile authors—Ta-Nehisi Coates, Sarah Silverman, Junot Díaz—and major media outlets such as The New York Times and Daily News.

The panel justified the centralisation by citing shared factual questions and the benefits of unified pretrial proceedings, including streamlined discovery and avoidance of conflicting rulings.

OpenAI has defended its use of publicly available data under the legal doctrine of ‘fair use’.

A spokesperson stated the company welcomed the consolidation and looked forward to proving that its practices are lawful and support innovation. Microsoft has not yet issued a comment on the ruling.

The authors’ attorney, Steven Lieberman, countered that this is about large-scale theft. He emphasised that both Microsoft and OpenAI have, in their view, infringed on millions of protected works.

Some of the same authors are also suing Meta, alleging the company trained its models using books from the shadow library LibGen, which houses over 7.5 million titles.

Simultaneously, Meta faced backlash in the UK, where authors protested outside the company’s London office. The demonstration focused on Meta’s alleged use of pirated literature in its AI training datasets.

The Society of Authors has called the actions illegal and harmful to writers’ livelihoods.

Amazon also entered the copyright discussion this week, confirming its new Kindle ‘Recaps’ feature uses generative AI to summarise book plots.

While Amazon claims accuracy, concerns have emerged online about the reliability of AI-generated summaries.

In the UK, lawmakers are also reconsidering copyright exemptions for AI companies, facing growing pressure from creative industry advocates.

The debate over how AI models access and use copyrighted material is intensifying, and the decisions made in courtrooms and parliaments could radically change the digital publishing landscape.

Sam Altman’s AI cricket post fuels India speculation

A seemingly light-hearted social media post by OpenAI CEO Sam Altman has stirred a wave of curiosity and scepticism in India. Altman shared an AI-generated anime image of himself as a cricket player dressed in an Indian jersey, which quickly went viral among Indian users.

While some saw it as a fun gesture, others questioned the timing and motives, speculating whether it was part of a broader strategy to woo Indian audiences. This isn’t the first time Altman has publicly praised India.

In recent weeks, he lauded the country’s rapid adoption of AI technology, calling it ‘amazing to watch’ and even said it was outpacing the rest of the world. His comments marked a shift from a more dismissive stance during a 2023 visit when he doubted India’s potential to compete with OpenAI’s large-scale models.

However, during his return visit in February 2025, he expressed interest in collaborating with Indian authorities on affordable AI solutions. The timing of Altman’s praise coincides with a surge in Indian users on OpenAI’s platforms, now the company’s second-largest market.

Meanwhile, OpenAI faces a legal tussle with several Indian media outlets over the alleged misuse of their content. Despite this, the potential of India’s booming AI market—projected to hit $8 billion by 2025—makes the country a critical frontier for global tech firms.

Experts argue that Altman’s overtures are more about business than sentiment. With increasing competition from rival AI models like DeepSeek and Gemini, maintaining and growing OpenAI’s Indian user base has become vital. As technology analyst Nikhil Pahwa said, ‘There’s no real love; it’s just business.’

Thailand strengthens cybersecurity with Google Cloud

Thailand’s National Cyber Security Agency (NCSA) has joined forces with Google Cloud to strengthen the country’s cyber resilience, using AI-based tools and shared threat intelligence instead of relying solely on traditional defences.

The collaboration aims to better protect public agencies and citizens against increasingly sophisticated cyber threats.

A key part of the initiative involves deploying Google Cloud Cybershield for centralised monitoring of security events across government bodies. Instead of having fragmented monitoring systems, this unified approach will help streamline incident detection and response.

The partnership also brings advanced training for cybersecurity personnel in the public sector, alongside regular threat intelligence sharing.

Google Cloud Web Risk will be integrated into government operations to automatically block websites hosting malware and phishing content, instead of relying on manual checks.
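For readers curious how such an automated check works in practice, here is a minimal sketch against the public Web Risk Lookup REST endpoint (`uris:search`). The API key and example URL are placeholders, and a real government deployment would more likely use Google’s official client libraries and bulk-matching APIs rather than per-URL REST calls.

```python
import json
import urllib.parse

# Public REST endpoint for single-URI lookups in the Web Risk API (v1).
WEBRISK_ENDPOINT = "https://webrisk.googleapis.com/v1/uris:search"


def build_lookup_url(api_key: str, uri: str,
                     threat_types=("MALWARE", "SOCIAL_ENGINEERING")) -> str:
    """Build the query URL for a single-URI threat lookup.

    threatTypes is repeated once per requested category; the API key is a
    placeholder here and must be supplied by the caller.
    """
    params = [("threatTypes", t) for t in threat_types]
    params += [("uri", uri), ("key", api_key)]
    return WEBRISK_ENDPOINT + "?" + urllib.parse.urlencode(params)


def is_flagged(response_body: str) -> bool:
    """Interpret a lookup response: the API returns an empty JSON object
    for clean URIs, and a 'threat' field (listing matched threat types
    and a cache expiry) when the URI is on a blocklist."""
    return "threat" in json.loads(response_body)


# Usage (requires a real API key and network access):
# import urllib.request
# with urllib.request.urlopen(build_lookup_url(KEY, "http://example.com")) as r:
#     print(is_flagged(r.read().decode()))
```

A blocking gateway would call a lookup like this (or its local-database equivalent) on each outbound request and refuse to serve URIs for which `is_flagged` returns true.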

Google further noted the impact of its anti-scam technology in Google Play Protect, which has prevented over 6.6 million high-risk app installation attempts in Thailand since its 2024 launch—enhancing mobile safety for millions of users.

TikTok deal stalled amid US-China trade tensions

Negotiations to divest TikTok’s US operations have been halted following China’s indication that it would not approve the deal. The development came after President Donald Trump announced increased tariffs on Chinese imports.

The proposed arrangement involved creating a new US-based company to manage TikTok’s American operations, with US investors holding a majority stake and ByteDance retaining less than 20%. This plan had received approvals from existing and new investors, ByteDance, and the US government.

In response to the stalled negotiations, President Trump extended the deadline for ByteDance to sell TikTok’s US assets by 75 days, aiming to allow more time for securing necessary approvals.

He emphasised the desire to continue collaborating with TikTok and China to finalise the deal, expressing a preference to avoid shutting down the app in the US.

The future of TikTok in the US remains unpredictable as geopolitical tensions and trade disputes continue to influence the negotiations.

On one side, such a reaction from the Chinese government could have been expected in response to the increase in US tariffs on Chinese products. On the other, by extending the deadline, Trump can maintain his protectionist policy while winning sympathy from the app’s 170 million US users, in whose eyes TikTok is now a victim facing a potential ban if the US-China trade war does not ease and a resolution is not reached within the extended timeframe.
