Gemini 2.5 Computer Use brings human-like interface control to AI agents

Google DeepMind has launched the Gemini 2.5 Computer Use model, a specialised version of Gemini 2.5 Pro designed to let AI agents interact directly with digital user interfaces.

Available in preview through the Gemini API, the model lets developers build agents capable of performing web and mobile tasks such as form-filling, navigation and interaction within apps.

Unlike models limited to structured APIs, Gemini 2.5 Computer Use can reason visually about what it sees on screen, making it possible to complete tasks requiring clicks, scrolls and text input.

It outperforms rivals on several benchmarks, including Browserbase’s Online-Mind2Web and WebVoyager, while maintaining low latency.

The model’s safety design includes per-step risk checks, built-in safeguards against misuse and developer-controlled restrictions on high-risk actions such as payments or security changes.

Google has already integrated it into systems like Project Mariner, Firebase Testing Agent and AI Mode in Search, while early testers report faster, more reliable automation.

Gemini 2.5 Computer Use is now available in public preview via Google AI Studio and Vertex AI, enabling developers to experiment with advanced interface-aware agents that can perform complex digital workflows securely and efficiently.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Denmark moves to ban social media for under-15s amid child safety concerns

Denmark plans to ban children under 15 from using social media, Prime Minister Mette Frederiksen announced during her address to parliament on Tuesday, joining a broader European trend.

Describing platforms as having ‘stolen our children’s childhood’, she said the government must act to protect young people from the growing harms of digital dependency.

Frederiksen urged lawmakers to ‘tighten the law’ to ensure greater child safety online, adding that parents could still grant consent for children aged 13 and above to have social media accounts.

Although the proposal is not yet part of the government’s legislative agenda, it builds on a 2024 citizen initiative that called for banning platforms such as TikTok, Snapchat and Instagram.

The prime minister’s comments reflect Denmark’s broader push within the EU to require age verification systems for online platforms.

Her statement follows a wider debate across Europe over children’s digital well-being and the responsibilities of tech companies in verifying user age and safeguarding minors.

New AI tools make Facebook Reels more engaging than ever

Facebook is enhancing how users find and share Reels, with a focus on personalisation and social interaction.

The platform’s new recommendation engine learns user interests faster, presenting more relevant and up-to-date content. Video viewing time in the US has risen over 20% year-on-year, reflecting the growing appeal of both short and long-form clips.

The update introduces new ‘friend bubbles’ showing which Reels or posts friends have liked, allowing users to start private chats instantly. The feature encourages more spontaneous conversation and discovery through shared interests.

Facebook’s ‘Save’ option has also been simplified, letting users collect favourite posts and Reels in one place while improving future recommendations.

AI now plays a larger role in content exploration, offering suggested searches on certain Reels to help users find related topics without leaving the player. By combining smarter algorithms with stronger social cues, Facebook aims to make video discovery more meaningful and community-driven.

Further personalisation tools are expected to follow as the platform continues refining its Reels experience.

Sanders warns AI could erase 100 million US jobs

Senator Bernie Sanders has warned that AI and automation could eliminate nearly 100 million US jobs within the next decade unless stronger worker protections are introduced.

His report, titled The Big Tech Oligarchs’ War Against Workers, claims that companies such as Amazon, Walmart, JPMorgan Chase and UnitedHealth already use AI to reduce their workforces while rewarding executives with multimillion-dollar pay packages.

According to the findings, nearly 90% of US fast-food workers, two-thirds of accountants, and almost half of truck drivers could see their jobs replaced by automation. Sanders argues that technological progress should enhance people’s lives rather than displace them.

His proposals include introducing a 32-hour workweek without loss of pay, a ‘robot tax’ for companies that replace human labour, and giving workers a share of profits and board representation.

Scammers use AI to fake British boutiques

Fraudsters are using AI-generated images and back stories to pose as British family businesses, luring shoppers into buying cheap goods from Asia. Websites claiming to be long-standing local boutiques have been linked to warehouses in China and Hong Kong.

Among them is C’est La Vie, which presented itself as a Birmingham jeweller run by a couple called Eileen and Patrick. The supposed owners appeared in highly convincing AI-generated photos, while customers later discovered their purchases were shipped from China.

Victims described feeling cheated after receiving poor-quality jewellery and clothes that bore no resemblance to the advertised items. More than 500 complaints on Trustpilot accuse such companies of exploiting fabricated stories to appear authentic.

Consumer experts at Which? warn that AI tools now enable scammers to create fake brands at an unprecedented scale. The Advertising Standards Authority (ASA) has called on social media platforms to act, as many victims were targeted through Facebook ads.

Beware the language of human flourishing in AI regulation

TechPolicy.Press recently published ‘Confronting Empty Humanism in AI Policy’, a thought piece by Matt Blaszczyk exploring how human-centred and humanistic language in AI policy is widespread, but often not backed by meaningful legal or regulatory substance.

Blaszczyk observes that figures such as Peter Thiel contribute to a discourse that questions the very value of human existence, but equally worrying are the voices using humanist, democratic, and romantic rhetoric to preserve the status quo. These narratives can be weaponised by actors seeking to reassure the public while avoiding strong regulation.

The article analyses executive orders, AI action plans, and regulatory proposals that promise human flourishing or protect civil liberties, but often do so under deregulatory frameworks or with voluntary oversight.

For example, the EU AI Act is praised, yet criticised for gaps and loopholes; many ‘human-in-the-loop’ provisions risk making humans mere rubber stampers.

Blaszczyk suggests that nominal humanism is used as a rhetorical shield. Humans are placed formally at the centre of laws and frameworks (copyright, free speech, democratic values), but real influence, rights protection and liability often remain minimal.

He warns that without enforcement, oversight and accountability, human-centred AI policies risk becoming slogans rather than safeguards.

ChatGPT reaches 800 million weekly users as OpenAI’s value hits $500 billion

OpenAI CEO Sam Altman has announced that ChatGPT now reaches 800 million weekly active users, reflecting rapid growth across consumers, developers, enterprises and governments.

The figure marks another milestone for the company, which reported 700 million weekly users in August and 500 million at the end of March.

Altman shared the news during OpenAI’s Dev Day keynote, noting that four million developers are now building with OpenAI tools. He said ChatGPT processes more than six billion tokens per minute through its API, signalling how deeply integrated it has become across digital ecosystems.

The event also introduced new tools for building apps directly within ChatGPT and creating more advanced agentic systems. Altman said these will support a new generation of interactive and personalised applications.

OpenAI, still legally a nonprofit, was recently valued at $500 billion following a private stock sale worth $6.6 billion.

Its growing portfolio now includes the Sora video-generation tool, a new social platform, and a commerce partnership with Stripe, cementing OpenAI’s status as the world’s most valuable private company.

Breach at third-party support provider exposes Discord user data

Discord has disclosed a security incident after a third-party customer service provider was compromised. The breach exposed personal data from users who contacted Discord’s support and Trust & Safety teams.

An unauthorised party accessed the provider’s ticketing system and targeted user data in an extortion attempt. Discord revoked access, launched an investigation with forensic experts, and notified law enforcement. Impacted users will be contacted via official email.

Compromised information may include usernames, contact details, partial billing data, IP addresses, customer service messages, and limited government-ID images. Passwords, authentication data, and full credit card numbers were not affected.

Discord has notified data protection authorities and strengthened security controls for third-party providers. It has also reviewed threat detection systems to prevent similar incidents.

The company urges affected users to remain vigilant against suspicious messages. Service agents are available to answer questions and provide additional support.

Thousands affected by AI-linked data breach in New South Wales

A major data breach has affected the Northern Rivers Resilient Homes Program in New South Wales.

Authorities confirmed that personal information was exposed after a former contractor uploaded data to the AI platform ChatGPT between 12 and 15 March 2025.

The leaked file contained over 12,000 records, with details including names, addresses, contact information and health data. Up to 3,000 individuals may be impacted.

While there is no evidence yet that the information has been accessed by third parties, the NSW Reconstruction Authority (RA) and Cyber Security NSW have launched a forensic investigation.

Officials apologised for the breach and pledged to notify all affected individuals in the coming week. ID Support NSW is offering free advice and resources, while compensation will be provided for any costs linked to replacing compromised identity documents.

The RA has also strengthened its internal policies to prevent unauthorised use of AI platforms. An independent review of the incident is underway to determine how the breach occurred and why notification took several months.

Labour market stability persists despite the rise of AI

Public fears of AI rapidly displacing workers have not yet materialised in the US labour market.

A new study finds that the overall occupational mix has shifted only slightly since the launch of generative AI in November 2022, with changes resembling past technological transitions such as the rise of computers and the internet.

The pace of disruption is not significantly faster than historical benchmarks.

Industry-level data show some variation, particularly in information services, finance, and professional sectors, but trends were already underway before AI tools became widely available.

Similarly, younger workers have not seen a dramatic divergence in opportunities compared with older graduates, suggesting that AI’s impact on early careers remains modest and difficult to isolate.

Exposure, automation and augmentation metrics offer little evidence of widespread displacement. OpenAI’s exposure data and Anthropic’s usage data suggest stability in the proportion of workers most affected by AI, including among the unemployed.

Even in roles theoretically vulnerable to automation, there has been no measurable increase in job losses.

The study concludes that AI’s labour effects are gradual rather than immediate. Historical precedent suggests that large-scale workforce disruption unfolds over decades, not months. Researchers plan to monitor the data to track whether AI’s influence becomes more visible over time.