Gemini 2.5 Computer Use brings human-like interface control to AI agents

Google DeepMind has launched the Gemini 2.5 Computer Use model, a specialised version of Gemini 2.5 Pro designed to let AI agents interact directly with digital user interfaces.

Available in preview through the Gemini API, the model lets developers build agents capable of performing web and mobile tasks such as form-filling, navigation and interaction within apps.

Unlike models limited to structured APIs, Gemini 2.5 Computer Use can reason visually about what it sees on screen, making it possible to complete tasks requiring clicks, scrolls and text input.

It outperforms rivals on several benchmarks, including Online-Mind2Web (evaluated by Browserbase) and WebVoyager, while maintaining low latency.

The model’s safety design includes per-step risk checks, built-in safeguards against misuse and developer-controlled restrictions on high-risk actions such as payments or security changes.

Google has already integrated it into systems like Project Mariner, Firebase Testing Agent and AI Mode in Search, while early testers report faster, more reliable automation.

Gemini 2.5 Computer Use is now available in public preview via Google AI Studio and Vertex AI, enabling developers to experiment with advanced interface-aware agents that can perform complex digital workflows securely and efficiently.
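For developers curious about the preview, the announcement describes an agent loop: send the model a screenshot plus a goal, receive a proposed UI action, execute it in a browser, then send back a fresh screenshot. The sketch below shows one step of that loop using the google-genai Python SDK; the model ID, tool configuration and environment enum are assumptions based on the announcement, so check the Gemini API documentation for the exact names.

```python
# Minimal sketch of one step of the Computer Use agent loop, assuming the
# google-genai Python SDK (pip install google-genai). The model ID and the
# computer_use tool configuration are assumptions from the announcement.
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment


def propose_action(goal: str, screenshot_png: bytes):
    """Send the current screenshot and a goal; the model replies with a
    proposed UI action (e.g. a click or text input) as a function call."""
    response = client.models.generate_content(
        model="gemini-2.5-computer-use-preview",  # assumed preview model ID
        contents=[
            types.Part.from_bytes(data=screenshot_png, mime_type="image/png"),
            goal,
        ],
        config=types.GenerateContentConfig(
            tools=[types.Tool(computer_use=types.ComputerUse(
                environment=types.Environment.ENVIRONMENT_BROWSER))],  # assumed enum
        ),
    )
    # The caller executes the returned action, captures a new screenshot,
    # and repeats until the task is complete.
    return response.candidates[0].content.parts
```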

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Denmark moves to ban social media for under-15s amid child safety concerns

Denmark plans to ban children under 15 from using social media, Prime Minister Mette Frederiksen announced during her address to parliament on Tuesday, joining a broader European trend.

Describing platforms as having ‘stolen our children’s childhood’, she said the government must act to protect young people from the growing harms of digital dependency.

Frederiksen urged lawmakers to ‘tighten the law’ to ensure greater child safety online, adding that parents could still grant consent for children aged 13 and above to have social media accounts.

Although the proposal is not yet part of the government’s legislative agenda, it builds on a 2024 citizen initiative that called for banning platforms such as TikTok, Snapchat and Instagram.

The prime minister’s comments reflect Denmark’s broader push within the EU to require age verification systems for online platforms.

Her statement follows a wider debate across Europe over children’s digital well-being and the responsibilities of tech companies in verifying user age and safeguarding minors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI tools make Facebook Reels more engaging than ever

Facebook is enhancing how users find and share Reels, with a focus on personalisation and social interaction.

The platform’s new recommendation engine learns user interests faster, presenting more relevant and up-to-date content. Video viewing time in the US has risen over 20% year-on-year, reflecting the growing appeal of both short and long-form clips.

The update introduces new ‘friend bubbles’ showing which Reels or posts friends have liked, allowing users to start private chats instantly.

The feature encourages more spontaneous conversation and discovery through shared interests. Facebook’s ‘Save’ option has also been simplified, letting users collect favourite posts and Reels in one place while improving future recommendations.

AI now plays a larger role in content exploration, offering suggested searches on certain Reels to help users find related topics without leaving the player. By combining smarter algorithms with stronger social cues, Facebook aims to make video discovery more meaningful and community-driven.

Further personalisation tools are expected to follow as the platform continues refining its Reels experience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic launches Bengaluru office to drive responsible AI in India

AI firm Anthropic, the company behind the Claude AI chatbot, is opening its first office in India, choosing Bengaluru as its base.

The move follows OpenAI’s recent expansion into New Delhi, underlining India’s growing importance as a hub for AI development and adoption.

CEO Dario Amodei said India’s combination of vast technical talent and the government’s commitment to equitable AI progress makes it an ideal location.

The Bengaluru office will focus on developing AI solutions tailored to India’s needs in the education, healthcare, and agriculture sectors.

Amodei is visiting India to strengthen ties with enterprises, nonprofits, and startups, and to promote responsible AI use aligned with India’s digital growth strategy.

Anthropic plans further expansion in the Indo-Pacific region later in the year, following its Tokyo launch.

Chief Commercial Officer Paul Smith noted the rising demand among Indian companies for trustworthy, scalable AI systems. Anthropic’s Claude models are already accessible in India through its API, Amazon Bedrock, and Google Cloud Vertex AI.
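For reference, a basic Claude call through Anthropic’s official Python SDK looks roughly like the sketch below; the model ID and prompt are illustrative, so consult Anthropic’s documentation for current model names.

```python
# Minimal sketch of a Claude API call via Anthropic's Python SDK
# (pip install anthropic). The model ID and prompt are illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model ID; check current docs
    max_tokens=512,
    messages=[
        {"role": "user",
         "content": "Suggest a clean layout for a mobile login screen."},
    ],
)
print(response.content[0].text)
```

The same Messages API surface is what Amazon Bedrock and Google Cloud Vertex AI expose for Claude, which is why the models are reachable through all three routes mentioned above.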

The company serves more than 300,000 businesses worldwide, with nearly 80 percent of usage outside the US.

India has become the second-largest market for Claude, with developers using it for tasks such as mobile UI design and web app debugging.

Anthropic is also enhancing Claude’s multilingual capabilities in major Indic languages, including Hindi, Bengali, and Tamil, to support education and public sector projects.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New report finds IT leaders unprepared for evolving cyber threats

A new global survey by 11:11 Systems highlights growing concerns among IT leaders over cyber incident recovery. More than 800 senior IT professionals across North America, Europe, and the Asia-Pacific region report rising strain from evolving threats, staffing gaps, and limited clean-room infrastructure.

Over 80% of respondents experienced at least one major cyberattack in the past year, with more than half facing multiple incidents. Nearly half see recovery planning complexity as their top challenge, while over 80% say their organisations are overconfident in their recovery capabilities.

The survey also reveals that 74% believe integrating AI could increase cyberattack vulnerability. Despite this, 96% plan to invest in cyber incident recovery within the next 12 months, underlining its growing importance in budget strategies.

The financial stakes are high. Over 80% of respondents reported spending at least six figures during just one hour of downtime, with the top 5% incurring losses of over one million dollars per hour. Yet 30% of businesses do not test their recovery plans annually, despite these risks.

11:11 Systems’ CTO Justin Giardina said organisations must adopt a proactive, AI-driven approach to recovery. He emphasised the importance of advanced platforms, secure clean rooms, and tailored expertise to enhance cyber resilience and expedite recovery after incidents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google unveils CodeMender, an AI agent that repairs code vulnerabilities

Google researchers have unveiled CodeMender, an AI-powered agent designed to automatically detect and fix software vulnerabilities.

The tool aims to improve code security by generating and applying patches that address critical flaws, allowing developers to focus on building reliable software instead of manually locating and repairing weaknesses.

Built on the Gemini Deep Think models, CodeMender operates autonomously, identifying vulnerabilities, reasoning about the underlying code, and validating patches to ensure they are correct and do not introduce regressions.

Over the past six months, it has contributed 72 security fixes to open source projects, including those with millions of lines of code.

The system combines advanced program analysis with multi-agent collaboration to strengthen its decision-making. It employs techniques such as static and dynamic analysis, fuzzing and differential testing to trace the root causes of vulnerabilities.
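Google has not published CodeMender’s internals, but differential testing, one of the techniques named above, is easy to illustrate: run the original and patched versions of a function on many generated inputs and flag any input where their observable behaviour diverges. A small, hypothetical sketch (both functions are stand-ins, not Google’s code):

```python
# Hypothetical sketch of differential testing: compare an original function
# against a proposed patch on random inputs and record any divergence.
# This illustrates the general technique, not CodeMender's actual harness.
import random


def original_parse(s: str) -> int:
    # stand-in for the unpatched function
    return int(s.strip())


def patched_parse(s: str) -> int:
    # stand-in for the AI-proposed fix under validation;
    # note it silently treats empty input as zero
    return int(s.strip() or "0")


def differential_test(trials: int = 10_000) -> list[str]:
    divergences = []
    for _ in range(trials):
        case = "".join(random.choice(" 0123456789")
                       for _ in range(random.randint(0, 8)))
        try:
            a = original_parse(case)
        except Exception as e:
            a = type(e).__name__
        try:
            b = patched_parse(case)
        except Exception as e:
            b = type(e).__name__
        if a != b:
            divergences.append(case)  # patch changed observable behaviour
    return divergences


print(f"{len(differential_test())} diverging inputs found")
```

Here the harness catches that the patch swallows an exception the original raised on empty input, exactly the kind of behavioural regression a proposed fix must not introduce.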

Each proposed fix undergoes rigorous validation before being reviewed by human developers to guarantee quality and compliance with coding standards.

According to Google, CodeMender’s dual approach (reactively patching new flaws and proactively rewriting code to eliminate entire vulnerability classes) represents a major step forward in AI-driven cybersecurity.

The company says the tool’s success demonstrates how AI can transform the maintenance and protection of modern software systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deloitte’s AI blunder: A costly lesson for the consultancy business

Deloitte has agreed to repay the final instalment of its A$440,000 contract with the Australian government after acknowledging major errors in a consultancy report on welfare mutual obligations. The errors stemmed from the use of AI tools, which produced fabricated content, including false quotes related to the Federal Court case on the Robodebt scheme and fictitious academic references.

The incident underscores the challenges of deploying AI in crucial government consultancy projects without sufficient human oversight, and raises questions about the credibility of policy decisions influenced by such flawed reports.

In response to these errors, Deloitte has publicly accepted full responsibility and committed to refunding the government. The firm is re-evaluating its internal quality assurance procedures and has emphasised the necessity of rigorous human review to maintain the integrity of consultancy projects that utilise AI.

The situation has prompted the Australian government to reassess its reliance on AI-generated content for policy analysis and to review oversight mechanisms to prevent future occurrences. The report’s inaccuracies had previously swayed discussions on welfare compliance, shaking public trust in the consultancy services employed for critical government policymaking.

The broader consultancy industry is feeling the ripple effects, as this incident highlights the reputational and financial dangers of unchecked AI outputs. As AI becomes more prevalent for its efficiency, this case serves as a stark reminder of its limitations, particularly in sensitive government matters.

Industry pressure is growing for firms to enhance their quality control measures, disclose the level of AI involvement in their reports, and ensure that technology use does not compromise information quality. The Deloitte case adds to ongoing discussions about the ethical and practical integration of AI into professional services, reinforcing the imperative for human oversight and editorial controls even as AI technology progresses.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic’s Claude to power Deloitte’s new enterprise AI expansion

Deloitte has entered a new enterprise AI partnership with Anthropic, shortly after partially refunding the Australian government for a report that included inaccurate AI-generated information.

The A$439,000 (US$290,618) contract was intended for an independent review but contained fabricated citations to non-existent academic sources. Deloitte has since repaid the final instalment, and the government of Australia has released a corrected version of the report.

Despite the controversy, Deloitte is expanding its use of AI by integrating Anthropic’s Claude chatbot across its global workforce of nearly half a million employees.

The collaboration will focus on developing AI-driven tools for compliance, automation and data analysis, especially in highly regulated industries such as finance and healthcare.

The companies also plan to design AI agent personas tailored to Deloitte’s various departments to enhance productivity and decision-making. Financial terms of the agreement were not disclosed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gamers report widespread disconnections across multiple services

Several major gaming and online platforms have reportedly faced simultaneous disruptions across multiple devices and regions. Platforms like Steam and Riot Games experienced connection issues, blocking access to major titles such as Counter-Strike, Dota 2, Valorant, and League of Legends.

Some users reported issues with PlayStation Network, Epic Games, Hulu, AWS, and other services.

Experts suggest the outages may be linked to a DDoS attack by the Aisuru botnet. While official confirmations remain limited, reports indicate unusually high traffic, with one source claiming bandwidth levels near 30 terabits per second.

Similar activity from Aisuru has been noted in incidents dating back to 2024, targeting a range of internet-connected devices.

The botnet is thought to exploit vulnerabilities in routers, cameras, and other connected devices, potentially controlling hundreds of thousands of nodes. Researchers say the attacks are widespread across countries and industries, though their full scale and purpose remain uncertain.
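As a rough sanity check on those figures (the node count below is an assumed round number within the reported range):

```python
# Back-of-envelope check using the figures reported above. The node count
# is an assumption ("hundreds of thousands"); the peak is the claimed ~30 Tbps.
attack_bps = 30e12   # ~30 terabits per second, reported peak traffic
nodes = 300_000      # assumed device count for illustration

per_node_mbps = attack_bps / nodes / 1e6
print(f"~{per_node_mbps:.0f} Mbps per device")  # ~100 Mbps each
```

Roughly 100 Mbps per device is well within reach of a hijacked consumer router on a fibre connection, which is consistent with the reported scale.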

Further investigations are ongoing, and platforms continue to monitor and respond to potential threats. Users are advised to remain aware of service updates and exercise caution when accessing online networks during periods of unusual activity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India’s competition watchdog urges AI self-audits to prevent market distortions

The Competition Commission of India (CCI) has urged companies to self-audit their AI systems to prevent anti-competitive practices and ensure responsible use of algorithmic autonomy.

The call came as part of the CCI’s market study on AI, which emphasised the risks of opacity and algorithmic collusion while highlighting AI’s potential to enhance innovation and productivity.

The study warned that dominant firms could exploit their control over data, infrastructure, and proprietary models to reinforce market power, creating barriers to entry. It also noted that opaque AI systems in user sectors may lead to tacit algorithmic coordination in pricing and strategy, undermining fair competition.

India’s regulatory approach, the CCI said, aims to balance technological progress with accountability through a co-regulatory framework that promotes both competition and innovation.

Additionally, the Commission plans to strengthen its technical capacity, establish a digital markets think tank and host a conference on AI and regulatory challenges.

The report recommended a six-step self-audit framework for enterprises, requiring evaluation of AI systems against competition risks, senior management oversight and clear accountability in high-risk deployments.

It also highlighted AI’s pro-competitive effects, particularly for micro, small and medium enterprises (MSMEs), which benefit from improved efficiency and greater access to digital markets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!