Top 7 AI agents transforming business in 2025

AI agents are no longer a futuristic concept — they’re now embedded in the everyday operations of major companies across sectors.

From customer service to data analysis, AI-powered agents are transforming workflows by handling tasks like scheduling, reporting, and decision-making with minimal human input.

Unlike simple chatbots, today’s AI agents understand context, follow multi-step instructions, and integrate seamlessly with business tools. Google’s Gemini Agents, IBM’s Watsonx Orchestrate, Microsoft Copilot, and OpenAI’s Operator are among the tools reshaping how businesses function.

These systems interpret goals and act on behalf of employees, boosting productivity without needing constant prompts.

Other leading platforms include Amelia, known for its enterprise-grade capabilities in finance and telecom; Claude by Anthropic, focused on safe and transparent reasoning; and North by Cohere, which delivers sector-specific AI for clients like Oracle and SAP.

Many of these tools offer no-code or low-code setups, enabling faster adoption across HR, finance, customer support, and more.

While most agents aren’t entirely autonomous, they’re designed to perform meaningful work and evolve with feedback.

The rise of agentic AI marks a significant shift in workplace automation as businesses move beyond experimentation toward real-world implementation, one workflow at a time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AGI moves closer to reshaping society

There was a time when machines that think like humans existed only in science fiction. But AGI now stands on the edge of becoming a reality — and it could reshape our world as profoundly as electricity or the internet once did.

Unlike today’s narrow AI systems, AGI would be able to learn, reason and adapt across domains, handling everything from creative writing to scientific research without being limited to a single task.

Recent breakthroughs in neural architecture, multimodal models, and self-improving algorithms bring AGI closer — systems like GPT-4o and DeepMind’s Gemini now process language, images, audio and video together.

Open-source tools such as AutoGPT show early signs of autonomous reasoning. Memory-enabled AIs and brain-computer interfaces are blurring the line between human and machine thought while companies race to develop systems that can not only learn but learn how to learn.

Though true AGI hasn’t yet arrived, early applications show its potential. AI already assists in generating code, designing products, supporting mental health, and uncovering scientific insights.

AGI could transform industries such as healthcare, finance, education, and defence as development accelerates — not just by automating tasks but also by amplifying human capabilities.

Still, the rise of AGI raises difficult questions.

How can societies ensure safety, fairness, and control over systems that are more intelligent than their creators? Issues like bias, job disruption and data privacy demand urgent attention.

Most importantly, global cooperation and ethical design are essential to ensure AGI benefits humanity rather than becoming a threat.

The challenge is no longer whether AGI is coming but whether we are ready to shape it wisely.


New ranking shows which AI respects your data

A new report comparing leading AI chatbots on privacy grounds has named Le Chat by Mistral AI as the most respectful of user data.

The study, conducted by data removal service Incogni, assessed nine generative AI services using eleven criteria related to data usage, transparency and user control.

Le Chat emerged as the top performer thanks to limited data collection and clarity in privacy practices, even though it lost some points on transparency.

ChatGPT followed in second place, earning praise for clear privacy policies and tools that let users limit data use, despite concerns about how it handles training data. Grok, xAI’s chatbot, took third place, though its privacy policy was harder to read.

At the other end of the spectrum, Meta AI ranked lowest. Its data collection and sharing practices were flagged as the most invasive, with prompts reportedly shared within its corporate group and with research collaborators.

Microsoft’s Copilot and Google’s Gemini also performed poorly in terms of user control and data transparency.

Incogni’s report found that some services, such as ChatGPT, Grok and Le Chat, allow users to prevent their input from being used to train models. In contrast, others, including Gemini, Pi AI, DeepSeek and Meta AI, offered no clear way to opt out.

The report emphasised that simple, well-maintained privacy support pages can significantly improve user trust and understanding.


Kurbalija’s book on internet governance turns 20 with new life at IGF

At the Internet Governance Forum 2025 in Lillestrøm, Norway, Jovan Kurbalija launched the eighth edition of his seminal textbook ‘Introduction to Internet Governance’, marking a return to writing after a nine-year pause. Moderated by Sorina Teleanu of Diplo, the session unpacked not just the content of the new edition but also the reasoning behind retaining its original title in an era awash with buzzwords like ‘AI governance’ and ‘digital governance’.

Kurbalija defended the choice, arguing that most so-called digital issues—from content regulation to cybersecurity—ultimately operate over internet infrastructure, making ‘Internet governance’ the most precise term available.

The updated edition reflects both continuity and adaptation. He introduced ‘Kaizen publishing’, a new model that replaces the traditional static book cycle with a continuously updated digital platform. Driven by the fast pace of technological change and aided by AI tools trained on his own writing style, the new format ensures the book evolves in real time with policy and technological developments.


The new edition is structured as a seven-floor pyramid tackling 50 key issues, situating each in both the history of internet governance and its future trajectories. The book also traces digital policy’s deep historical roots.

Kurbalija highlighted how key global internet governance frameworks—such as ICANN, the WTO e-commerce moratorium, and UN cyber initiatives—emerged within months of each other in 1998, a pivotal moment he calls foundational to today’s landscape. He contrasted this historical consistency with recent transformations, identifying four key shifts since 2016: mass data migration to the cloud, COVID-19’s digital acceleration, the move from CPUs to GPUs, and the rise of AI.

Finally, the session tackled the evolving discourse around AI governance. Kurbalija emphasised the need to weigh long-term existential risks against more immediate challenges like educational disruption and concentrated knowledge power. He also critiqued the shift in global policy language—from knowledge-centric to data-driven frameworks—and warned that this transformation might obscure AI’s true nature as a knowledge-based phenomenon.

As geopolitics reasserts itself in digital governance debates, Kurbalija’s updated book aims to ground readers in the enduring principles shaping an increasingly complex landscape.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

AI and the future of work: Global forum highlights risks, promise, and urgent choices

At the 20th Internet Governance Forum held in Lillestrøm, Norway, global leaders, industry experts, and creatives gathered for a high-level session exploring how AI is transforming the world of work. While the tone was broadly optimistic, participants wrestled with difficult questions about equity, regulation, and the ethics of data use.

AI’s capacity to enhance productivity, reshape industries, and bring solutions to health, education, and agriculture was celebrated, but sharp divides emerged over how to govern and share its benefits. Concrete examples showcased AI’s positive impact. Norway’s government highlighted AI’s role in green energy and public sector efficiency, while Lesotho’s minister shared how AI helps detect tuberculosis and support smallholder farmers through localised apps.

AI addresses systemic shortfalls in healthcare by reducing documentation burdens and enabling earlier diagnosis. Corporate representatives from Meta and OpenAI showcased tools that personalise education, assist the visually impaired, and democratise advanced technology through open-source platforms.

Joseph Gordon-Levitt at IGF 2025

Yet, concerns about fairness and data rights loomed large. Actor and entrepreneur Joseph Gordon-Levitt delivered a pointed critique of tech companies using creative work to train AI without consent or compensation.

He called for economic systems that reward human contributions, warning that failing to do so risks eroding creative and financial incentives. This argument underscored broader concerns about job displacement, automation, and the growing digital divide, especially among women and marginalised communities.

Debates also exposed philosophical rifts between regulatory approaches. While the US emphasised minimal interference to spur innovation, the European Commission and Norway called for risk-based regulation and international cooperation to ensure trust and equity. Speakers agreed on the need for inclusive governance frameworks and education systems that foster critical thinking, resist de-skilling, and prepare workers for an AI-augmented economy.

The session made clear that the future of work in the AI era depends on today’s collective choices that must centre people, fairness, and global solidarity.


AI governance debated at IGF 2025: Global cooperation meets local needs

At the Internet Governance Forum (IGF) 2025 in Norway, an expert panel convened to examine the growing complexity of artificial intelligence governance. The discussion, moderated by Kathleen Ziemann from the German development agency GIZ and Guilherme Canela of UNESCO, featured a rich exchange between government officials, private sector leaders, civil society voices, and multilateral organisations.

The session highlighted how AI governance is becoming a crowded yet fragmented space, shaped by overlapping frameworks such as the OECD AI Principles, the EU AI Act, UNESCO’s recommendations on AI ethics, and various national and regional strategies. While these efforts reflect progress, they also pose challenges in terms of coordination, coherence, and inclusivity.

IGF session highlights urgent need for democratic resilience online

Melinda Claybaugh, Director of Privacy Policy at Meta, noted the abundance of governance initiatives but warned of disagreements over how AI risks should be measured. ‘We’re at an inflection point,’ she said, calling for more balanced conversations that include not just safety concerns but also the benefits and opportunities AI brings. She argued for transparency in risk assessments and suggested that existing regulatory structures could be adapted to new technologies rather than replaced.

In response, Jhalak Kakkar, Executive Director at India’s Centre for Communication Governance, urged caution against what she termed a ‘false dichotomy’ between innovation and regulation. ‘We need to start building governance from the beginning, not after harms appear,’ she stressed, calling for socio-technical impact assessments and meaningful civil society participation. Kakkar advocated for multi-stakeholder governance that moves beyond formality to real influence.

Mlindi Mashologu, Deputy Director-General at South Africa’s Ministry of Communications and Digital Technology, highlighted the importance of context-aware regulation. ‘There is no one-size-fits-all when it comes to AI,’ he said. Mashologu outlined South Africa’s efforts through its G20 presidency to reduce AI-driven inequality via a new policy toolkit, stressing human rights, data justice, and environmental sustainability as core principles. He also called for capacity-building to enable the Global South to shape its own AI future.

Jovan Kurbalija, Executive Director of the Diplo Foundation, brought a philosophical lens to the discussion, questioning the dominance of ‘data’ in governance frameworks. ‘AI is fundamentally about knowledge, not just data,’ he argued. Kurbalija warned against the monopolisation of human knowledge and advocated for stronger safeguards to ensure fair attribution and decentralisation.


The need for transparency, explainability, and inclusive governance remained central themes. Participants explored whether traditional laws—on privacy, competition, and intellectual property—are sufficient or whether new instruments are needed to address AI’s novel challenges.

Audience members added urgency to the discussion. Anna from Mexican digital rights group R3D raised concerns about AI’s environmental toll and extractive infrastructure practices in the Global South. Pilar Rodriguez, youth coordinator for the IGF in Spain, questioned how AI governance could avoid fragmentation while still respecting regional sovereignty.

The session concluded with a call for common-sense, human-centric AI governance. ‘Let’s demystify AI—but still enjoy its magic,’ said Kurbalija, reflecting the spirit of hopeful realism that permeated the discussion. Panelists agreed that while many AI risks remain unclear, global collaboration rooted in human rights, transparency, and local empowerment offers the most promising path forward.


Gemini Robotics On-Device: Google’s AI model for offline robotic tasks

On Tuesday, 24 June, Google’s DeepMind division announced Gemini Robotics On-Device, a new AI model designed to run locally on robotic systems.

In a blog post, the company stated that the AI model has been optimised to function efficiently on-device and demonstrates strong general-purpose dexterity and task generalisation capabilities.

The offline model builds on the earlier Gemini Robotics system introduced in March this year. Unlike cloud-based models, this version can operate offline, making it suitable for environments with limited connectivity or strict latency requirements.

Engineered for robots with dual arms, Gemini Robotics On-Device is designed to require minimal computational resources.

It can execute fine motor tasks such as folding garments and unzipping bags. According to Google, the model responds to natural language prompts, enabling more intuitive human-robot interaction.

The company claims the model outperforms comparable on-device alternatives, especially when completing complex, multi-step instructions or handling unfamiliar tasks. Benchmark results indicate that its performance closely approaches that of Google’s cloud-based AI solutions.

Initially developed for ALOHA robots, the on-device model has since been adapted for other systems, including the bi-arm Franka FR3 robot and the Apollo humanoid.

On the Franka FR3, the model followed diverse instructions and managed unfamiliar objects and environments, including industrial tasks like belt assembly. The system demonstrated general object manipulation in previously unseen contexts on the Apollo humanoid.

Developers interested in trialling Gemini Robotics On-Device can access it via the provided software development kit (SDK).

Google joins other major players exploring AI for robotics. At GTC 2025, NVIDIA introduced Isaac GR00T N1, an AI system for humanoid robots, while Hugging Face is developing its own open-source, AI-powered robotics platform.


North Korea-linked hackers deploy fake Zoom malware to steal crypto

North Korean hackers have reportedly used deepfake technology to impersonate executives during a fake Zoom call in an attempt to install malware and steal cryptocurrency from a targeted employee.

Cybersecurity firm Huntress identified the scheme, which involved a convincingly staged meeting and a custom-built AppleScript targeting macOS systems—an unusual move that signals the rising sophistication of state-sponsored cyberattacks.

The incident began with a fraudulent Calendly invitation, which redirected the employee to a fake Zoom link controlled by the attackers. Weeks later, the employee joined what appeared to be a routine video call with company leadership. In reality, the participants were AI-generated deepfakes.

When audio issues arose, the hackers convinced the user to install what was supposedly a Zoom extension but was, in fact, malware designed to hijack cryptocurrency wallets and steal clipboard data.

Huntress traced the attack to TA444, a North Korean group also known by names like BlueNoroff and STARDUST CHOLLIMA. Their malware was built to extract sensitive financial data while disguising its presence and erasing traces once the job was done.

Security experts warn that remote workers and companies must be especially cautious. Unfamiliar calendar links, sudden platform changes, or requests to install new software should be treated as warning signs.

Verifying suspicious meeting invites through alternative contact methods — like a direct phone call — is a straightforward but vital way to prevent damage.
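The warning about look-alike meeting links can be made concrete. As a minimal sketch (the allowlist, function name, and example URLs below are illustrative assumptions, not part of any product or incident described here), a link can be rejected unless its host actually belongs to a trusted meeting domain rather than merely containing its name:

```python
from urllib.parse import urlparse

# Illustrative allowlist of domains that legitimately host Zoom meeting links;
# a real deployment would use an organisation-approved list.
TRUSTED_MEETING_DOMAINS = {"zoom.us", "zoomgov.com"}

def is_trusted_meeting_link(url: str) -> bool:
    """Accept only HTTPS links whose host is a trusted domain or a true
    subdomain of one (e.g. us02web.zoom.us), not a look-alike."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    host = parsed.hostname.lower()
    # endswith("." + d) rejects hosts like zoom.us.attacker.example,
    # which contain "zoom.us" but sit under a different registered domain.
    return any(host == d or host.endswith("." + d)
               for d in TRUSTED_MEETING_DOMAINS)

print(is_trusted_meeting_link("https://us02web.zoom.us/j/123456789"))   # True
print(is_trusted_meeting_link("https://zoom.us.meeting.example/j/1"))   # False
```

The key design point is suffix matching on the full hostname: attacker-controlled links in incidents like this one typically embed the legitimate brand somewhere in the URL, so substring checks are not enough.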


HPE unveils private cloud AI platform featuring Nvidia Blackwell chips

Hewlett Packard Enterprise (HPE) and Nvidia have unveiled new AI factory solutions to accelerate AI adoption across multiple sectors.

Announced at HPE Discover in Las Vegas, the new offerings include modular AI factory infrastructure, AI-ready RTX PRO servers (HPE ProLiant Compute DL380a Gen12), and the next iteration of HPE’s turnkey platform, HPE Private Cloud AI.

The portfolio combines Nvidia’s Blackwell accelerated computing, Spectrum-X Ethernet, and BlueField-3 networking with Nvidia AI Enterprise software and HPE’s hardware, software, and services. The result is a modular, pre-integrated infrastructure stack intended to simplify AI deployment at scale.

HPE’s OpsRamp Software, a validated observability solution for Nvidia’s Enterprise AI Factory, and HPE Morpheus Enterprise Software for orchestration are also part of the integrated platform.

A key component is the next-generation HPE Private Cloud AI, jointly developed by HPE and Nvidia. It includes ProLiant DL380a Gen12 servers featuring Nvidia RTX PRO 6000 Blackwell Server Edition GPUs, supporting various enterprise and industrial AI applications. These systems are now available for order.

The platform also supports Nvidia AI Blueprints, such as the AI-Q Blueprint, for AI agent creation and workflow management.

HPE additionally announced the Compute XD690, a new Nvidia HGX B300 system powered by Nvidia Blackwell Ultra GPUs, expected to ship in October 2025.

International collaborations are part of the strategy. HPE is partnering with Japanese telecom provider KDDI to build AI infrastructure at the KDDI Osaka Sakai Data Centre using Nvidia’s GB200 NVL72 platform, based on the Grace Blackwell architecture.

In financial services, HPE is working with Accenture to test agentic AI workflows via Accenture’s AI Refinery, leveraging HPE Private Cloud AI for procurement, sourcing, and risk analysis.

Security and governance features have also been emphasised, including air-gapped management, multi-tenancy support, and post-quantum cryptography.

As part of its broader ecosystem expansion, HPE has added 26 partners to its ‘Unleash AI’ initiative, offering more than 70 packaged AI workloads covering video analytics, fraud detection, cybersecurity, and sovereign AI.

To support enterprise adoption, HPE and Nvidia have launched AI Acceleration Workshops aimed at helping organisations scale AI implementations.

Separately, Nvidia recently collaborated with Deutsche Telekom to launch Europe’s first industrial AI cloud in Germany, designed to support the manufacturing sector with applications in engineering, simulation, digital twins, and robotics.


Microsoft and OpenAI revisit investment deal

OpenAI chief executive Sam Altman revealed that he had a conversation with Microsoft CEO Satya Nadella on Monday to discuss the future of their partnership.

Speaking on a New York Times podcast, Altman described the dialogue as part of ongoing efforts to align on the evolving nature of their collaboration.

Earlier this month, the Wall Street Journal reported that Microsoft — OpenAI’s primary backer — and the AI firm are in discussions to revise the terms of their investment. Topics under negotiation reportedly include Microsoft’s future equity stake in OpenAI.

According to the Financial Times, Microsoft is weighing whether to pause the talks if the two parties cannot resolve key issues. Neither Microsoft nor OpenAI responded to media requests for comment outside regular business hours.

‘Obviously, in any deep partnership, there are points of tension, and we certainly have those,’ Altman said. ‘But on the whole, it’s been wonderfully good for both companies.’

Altman also commented on his recent discussions with United States President Donald Trump regarding AI. He noted that Trump appeared to grasp the technology’s broader geopolitical and economic significance.

In January, Trump announced Stargate — a proposed private sector initiative to invest up to $500 billion in AI infrastructure — with potential backing from SoftBank, OpenAI, and Oracle.
