Google releases free Gemini CLI tool for developers

Google has introduced Gemini CLI, a free, open-source AI tool that connects developers directly to its Gemini AI models. The new agentic utility allows developers to debug code, generate code, and run commands using natural language within their terminal environment.

Built as a lightweight interface, Gemini CLI provides a streamlined way to interact with Gemini. While its coding features stand out, Google says the tool handles content creation, deep research, and complex task management across various workflows.

Gemini CLI uses Gemini 2.5 Pro for coding and reasoning tasks by default, but it can also connect to other AI models, such as Imagen and Veo, for image and video generation. It supports the Model Context Protocol (MCP) and integrates with Gemini Code Assist.
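MCP support in clients of this kind is typically wired up through a JSON settings file. The fragment below is a sketch for illustration only: the server name and package are hypothetical, and the exact file path and schema are assumptions based on common MCP client conventions rather than details confirmed in the article.

```json
{
  "mcpServers": {
    "my-local-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"]
    }
  }
}
```

A server registered this way would expose its tools to the model during a terminal session, which is what makes the agentic workflows described above possible.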

Moreover, the tool is available on Windows, macOS, and Linux, offering developers a free usage tier. For advanced setups involving multiple agents or custom models, access through Vertex AI or AI Studio is available on a pay-as-you-go basis.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta wins copyright case over AI training

Meta has won a copyright lawsuit brought by a group of authors who accused the company of using their books without permission to train its Llama generative AI.

A US federal judge in San Francisco ruled the AI training was ‘transformative’ enough to qualify as fair use under copyright law.

Judge Vince Chhabria noted, however, that future claims could be more successful. He warned that using copyrighted books to build tools capable of flooding the market with competing works may not always be protected by fair use, especially when such tools generate vast profits.

The case involved pirated copies of books, including Sarah Silverman’s memoir ‘The Bedwetter’ and Junot Diaz’s award-winning novel ‘The Brief Wondrous Life of Oscar Wao’. Meta defended its approach, stating that open-source AI drives innovation and relies on fair use as a key legal principle.

Chhabria clarified that the ruling does not confirm the legality of Meta’s actions, only that the plaintiffs made weak arguments. He suggested that more substantial evidence and legal framing might lead to a different outcome in future cases.

WhatsApp launches AI feature to summarise unread messages

WhatsApp has introduced a new feature using Meta AI to help users manage unread messages more easily. Named ‘Message Summaries’, the tool provides quick overviews of missed messages in individual and group chats, helping users catch up without scrolling through long threads.

The summaries are generated using Meta’s Private Processing technology, which operates inside a Trusted Execution Environment. The secure cloud-based system ensures that neither Meta nor WhatsApp — nor anyone else in the conversation — can access your messages or the AI-generated summaries.

According to WhatsApp, Message Summaries are entirely private. No one else in the chat can see the summary created for you. If anyone attempts to tamper with the secure system, processing stops immediately, or the tampering is exposed by a built-in transparency check.

Meta has designed the system around three principles: secure data handling during processing and transmission, strict enforcement of protections against tampering, and provable transparency to track any breach attempt.

Nvidia becomes world’s most valuable company after stock surge

Nvidia shares hit an all-time high on 25 June, rising 4.3 percent to US$154.31. The stock has surged 63 percent since April, adding another US$1.5 trillion to its market value.

With a total market capitalisation of about US$3.77 trillion, Nvidia has overtaken Microsoft to become the world’s most valuable listed company.

Strong earnings and growing AI infrastructure spending by major clients — including Microsoft, Meta, Alphabet and Amazon — have reinforced investor confidence.

Nvidia’s CEO, Jensen Huang, told shareholders that demand remains strong and that the computer industry is still in the early stages of a major AI upgrade cycle.

Despite gaining 15 percent in 2025, following a 170 percent rise in 2024 and a 240 percent surge in 2023, Nvidia still appears reasonably valued. It trades at 31.5 times forward earnings, below its 10-year average and close to the Nasdaq 100 multiple, even though its projected growth rate is higher.
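The valuation figures above can be sanity-checked with quick arithmetic: a forward price-to-earnings multiple is simply share price divided by expected next-year earnings per share. All numbers below are taken from the article; the derived values are back-of-the-envelope illustrations, not reported figures.

```python
# Quick consistency check of the figures quoted in the article.
price = 154.31        # share price at the 25 June record high, in USD
forward_pe = 31.5     # forward price-to-earnings multiple

# Forward P/E = price / expected EPS, so the implied expected EPS is:
implied_eps = price / forward_pe
print(round(implied_eps, 2))  # prints 4.9 (USD per share)

# A 63 percent rise since April to a $3.77 trillion market cap implies
# this gain over the period, in trillions of USD:
cap_now = 3.77
gain = cap_now - cap_now / 1.63
print(round(gain, 2))  # prints 1.46, close to the quoted US$1.5 trillion
```

The two derived numbers line up with the article’s figures, which suggests the quoted statistics are internally consistent.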

Analyst sentiment remains firmly bullish. Nearly 90 percent of analysts tracked by Bloomberg recommend buying the stock, which trades below their average price target.

Yet, Nvidia is less widely held among institutional investors than peers like Microsoft and Apple, indicating further room for buying as AI momentum continues into 2026.

AI sandboxes pave path for responsible innovation in developing countries

At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts from around the world gathered to examine how AI sandboxes—safe, controlled environments for testing new technologies under regulatory oversight—can help ensure that innovation remains responsible and inclusive, especially in developing countries. Moderated by Sophie Tomlinson of the DataSphere Initiative, the session spotlighted the growing global appeal of sandboxes, initially developed for fintech and now extending into healthcare, transportation, and data governance.

Speakers emphasised that sandboxes provide a much-needed collaborative space for regulators, companies, and civil society to test AI solutions before launching them into the real world. Mariana Rozo-Paz from the DataSphere Initiative likened them to childhood spaces for building and experimentation, underscoring their agility and potential for creative governance.

From the European AI Office, Alex Moltzau described how the EU AI Act integrates sandboxes to support safe innovation and cross-border collaboration. On the African continent, where 25 sandboxes already exist (mainly in finance), countries like Nigeria are using them to implement data protection laws and shape national AI strategies. However, funding and legal authority remain hurdles.

The workshop laid bare several shared challenges: limited resources, lack of clear legal frameworks, and insufficient participation by civil society. Natalie Cohen of the OECD pointed out that just 41% of countries trust governments to regulate new technologies effectively—a gap that sandboxes can help bridge. By enabling evidence-based experimentation and promoting transparency, they serve as trust-building tools among governments, businesses, and communities.

Despite regional differences, there was consensus that AI sandboxes—when well-designed and inclusive—can drive equitable digital innovation. With initiatives like the Global Sandboxes Forum and OECD toolkits in progress, stakeholders signalled a readiness to move from theory to practice, viewing sandboxes as more than just regulatory experiments—they are, increasingly, catalysts for international cooperation and responsible AI deployment.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Top 7 AI agents transforming business in 2025

AI agents are no longer a futuristic concept — they’re now embedded in the everyday operations of major companies across sectors.

From customer service to data analysis, AI-powered agents are transforming workflows by handling tasks like scheduling, reporting, and decision-making with minimal human input.

Unlike simple chatbots, today’s AI agents understand context, follow multi-step instructions, and integrate seamlessly with business tools. Google’s Gemini Agents, IBM’s Watsonx Orchestrate, Microsoft Copilot, and OpenAI’s Operator are among the tools reshaping how businesses function.

These systems interpret goals and act on behalf of employees, boosting productivity without needing constant prompts.

Other leading platforms include Amelia, known for its enterprise-grade capabilities in finance and telecom; Claude by Anthropic, focused on safe and transparent reasoning; and North by Cohere, which delivers sector-specific AI for clients like Oracle and SAP.

Many of these tools offer no-code or low-code setups, enabling faster adoption across HR, finance, customer support, and more.

While most agents aren’t entirely autonomous, they’re designed to perform meaningful work and evolve with feedback.

The rise of agentic AI marks a significant shift in workplace automation as businesses move beyond experimentation toward real-world implementation, one workflow at a time.

AGI moves closer to reshaping society

There was a time when machines that think like humans existed only in science fiction. But AGI now stands on the edge of becoming a reality — and it could reshape our world as profoundly as electricity or the internet once did.

Unlike today’s narrow AI systems, AGI would be able to learn, reason and adapt across domains, handling everything from creative writing to scientific research without being limited to a single task.

Recent breakthroughs in neural architecture, multimodal models, and self-improving algorithms bring AGI closer—systems like GPT-4o and DeepMind’s Gemini now process language, images, audio and video together.

Open-source tools such as AutoGPT show early signs of autonomous reasoning. Memory-enabled AIs and brain-computer interfaces are blurring the line between human and machine thought, while companies race to develop systems that can not only learn but learn how to learn.

Though true AGI hasn’t yet arrived, early applications show its potential. AI already assists in generating code, designing products, supporting mental health, and uncovering scientific insights.

AGI could transform industries such as healthcare, finance, education, and defence as development accelerates — not just by automating tasks but also by amplifying human capabilities.

Still, the rise of AGI raises difficult questions.

How can societies ensure safety, fairness, and control over systems that are more intelligent than their creators? Issues like bias, job disruption and data privacy demand urgent attention.

Most importantly, global cooperation and ethical design are essential to ensure AGI benefits humanity rather than becoming a threat.

The challenge is no longer whether AGI is coming but whether we are ready to shape it wisely.

New ranking shows which AI respects your data

A new report comparing leading AI chatbots on privacy grounds has named Le Chat by Mistral AI as the most respectful of user data.

The study, conducted by data removal service Incogni, assessed nine generative AI services using eleven criteria related to data usage, transparency and user control.

Le Chat emerged as the top performer thanks to limited data collection and clear privacy practices, even though it lost some points on transparency.

ChatGPT followed in second place, earning praise for providing clear privacy policies and offering users tools to limit data use despite concerns about handling training data. Grok, xAI’s chatbot, took the third position, though its privacy policy was harder to read.

At the other end of the spectrum, Meta AI ranked lowest. Its data collection and sharing practices were flagged as the most invasive, with prompts reportedly shared within its corporate group and with research collaborators.

Microsoft’s Copilot and Google’s Gemini also performed poorly in terms of user control and data transparency.

Incogni’s report found that some services, such as ChatGPT, Grok and Le Chat, allow users to prevent their input from being used to train models. In contrast, others, including Gemini, Pi AI, DeepSeek and Meta AI, offered no clear way to opt out.

The report emphasised that simple, well-maintained privacy support pages can significantly improve user trust and understanding.

SpaceX rocket carries first quantum satellite into space

A groundbreaking quantum leap has taken place in space exploration. The world’s first photonic quantum computer has successfully entered orbit aboard SpaceX’s Transporter 14 mission.

Launched from Vandenberg Space Force Base in California on 23 June, the quantum device was developed by an international research team led by physicist Philip Walther of the University of Vienna.

The miniature quantum computer, designed to withstand harsh space conditions, is now orbiting 550 kilometres above Earth. It was part of a 70-payload cargo, including microsatellites and re-entry capsules.

Uniquely, the system performs ‘edge computing’, processing data for tasks such as wildfire detection directly on board rather than transmitting raw information to Earth. The innovation drastically reduces energy use and improves response times.

Assembled in just 11 working days by a 12-person team at the German Aerospace Center in Trauen, the quantum processor is expected to transmit its first results within a week of reaching orbit.

The project’s success marks a significant milestone in quantum space technology, opening the door to further experiments in fundamental physics and applied sciences.

The Transporter 14 mission also deployed satellites from Capella Space, Starfish Space, and Varda Space, among others. The Falcon 9 booster, completing its 26th successful flight, landed safely on a platform in the Pacific Ocean, while satellite deployment spanned nearly two hours.

North Korea-linked hackers deploy fake Zoom malware to steal crypto

North Korean hackers have reportedly used deepfake technology to impersonate executives during a fake Zoom call in an attempt to install malware and steal cryptocurrency from a targeted employee.

Cybersecurity firm Huntress identified the scheme, which involved a convincingly staged meeting and a custom-built AppleScript targeting macOS systems—an unusual move that signals the rising sophistication of state-sponsored cyberattacks.

The incident began with a fraudulent Calendly invitation, which redirected the employee to a fake Zoom link controlled by the attackers. Weeks later, the employee joined what appeared to be a routine video call with company leadership. In reality, the participants were AI-generated deepfakes.

When audio issues arose, the hackers convinced the user to install what was supposedly a Zoom extension but was, in fact, malware designed to hijack cryptocurrency wallets and steal clipboard data.

Huntress traced the attack to TA444, a North Korean group also known by names like BlueNoroff and STARDUST CHOLLIMA. Their malware was built to extract sensitive financial data while disguising its presence and erasing traces once the job was done.

Security experts warn that remote workers and companies need to be especially cautious. Unfamiliar calendar links, sudden platform changes, or requests to install new software should be treated as warning signs.
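One such warning sign, an invite whose link does not point at the platform it claims to be, can be screened automatically. The sketch below is a minimal illustration rather than a real security control; the allowlist is an assumption and would need to match the platforms an organisation actually uses.

```python
from urllib.parse import urlparse

# Domains a legitimate Zoom invite should resolve to (assumption:
# extend this allowlist to cover your organisation's approved platforms).
TRUSTED_DOMAINS = {"zoom.us"}

def is_suspicious(link: str) -> bool:
    """Flag meeting links whose host is not on the allowlist."""
    host = urlparse(link).hostname or ""
    # Accept the domain itself or any subdomain (e.g. us02web.zoom.us).
    return not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(is_suspicious("https://us02web.zoom.us/j/123456789"))            # False
print(is_suspicious("https://zoom-meeting.example-attacker.com/j/1"))  # True
```

A check like this catches look-alike domains, though it is no substitute for verifying the invite with the sender directly.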

Verifying suspicious meeting invites through an alternative contact method, such as a direct phone call, is a simple but vital way to prevent damage.
