Google has introduced Gemini CLI, a free, open-source AI tool that connects developers directly to its Gemini AI models. The new agentic utility lets developers debug code, generate new code, and run commands using natural language within their terminal environment.
Built as a lightweight interface, Gemini CLI provides a streamlined way to interact with Gemini. While its coding features stand out, Google says the tool handles content creation, deep research, and complex task management across various workflows.
Gemini CLI uses Gemini 2.5 Pro for coding and reasoning tasks by default, but it can also connect to other AI models, such as Imagen and Veo, for image and video generation. It supports the Model Context Protocol (MCP) and integrates with Gemini Code Assist.
Moreover, the tool is available on Windows, macOS, and Linux, offering developers a free usage tier. Access through Vertex AI or AI Studio is available on a pay-as-you-go basis for advanced setups involving multiple agents or custom models.
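For developers who choose the pay-as-you-go route through AI Studio rather than the CLI itself, access is a short script away. Below is a minimal sketch using Google's google-generativeai Python package; the API key placeholder and the exact model identifier are illustrative assumptions, not details confirmed in the announcement.

```python
import google.generativeai as genai

# Assumes a pay-as-you-go AI Studio key, as described above (placeholder value).
genai.configure(api_key="YOUR_API_KEY")

# Model identifier is an assumption based on the article's mention of Gemini 2.5 Pro.
model = genai.GenerativeModel("gemini-2.5-pro")

# The same kind of natural-language request Gemini CLI accepts in the terminal.
response = model.generate_content(
    "Explain why this Python snippet raises a TypeError:\nsorted([3, 'a', 1])"
)
print(response.text)
```

The CLI layers an agentic loop (planning, tool calls, command execution) on top of this kind of single round trip.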
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Meta has won a copyright lawsuit brought by a group of authors who accused the company of using their books without permission to train its Llama generative AI.
A US federal judge in San Francisco ruled the AI training was ‘transformative’ enough to qualify as fair use under copyright law.
Judge Vince Chhabria noted, however, that future claims could be more successful. He warned that using copyrighted books to build tools capable of flooding the market with competing works may not always be protected by fair use, especially when such tools generate vast profits.
The case involved pirated copies of books, including Sarah Silverman’s memoir ‘The Bedwetter’ and Junot Diaz’s award-winning novel ‘The Brief Wondrous Life of Oscar Wao’. Meta defended its approach, stating that open-source AI drives innovation and relies on fair use as a key legal principle.
Chhabria clarified that the ruling does not confirm the legality of Meta’s actions, only that the plaintiffs made weak arguments. He suggested that more substantial evidence and legal framing might lead to a different outcome in future cases.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
WhatsApp has introduced a new feature using Meta AI to help users manage unread messages more easily. Named ‘Message Summaries’, the tool provides quick overviews of missed messages in individual and group chats, so users can catch up without scrolling through long threads.
The summaries are generated using Meta’s Private Processing technology, which operates inside a Trusted Execution Environment. The secure cloud-based system ensures that neither Meta nor WhatsApp — nor anyone else in the conversation — can access your messages or the AI-generated summaries.
According to WhatsApp, Message Summaries are entirely private: no one else in the chat can see the summary created for you. If anyone attempts to tamper with the secure system, processing halts immediately or the tampering is exposed by a built-in transparency check.
Meta has designed the system around three principles: secure data handling during processing and transmission, strict enforcement of protections against tampering, and provable transparency to track any breach attempt.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
There was a time when machines that think like humans existed only in science fiction. But artificial general intelligence (AGI) now stands on the cusp of becoming a reality — and it could reshape our world as profoundly as electricity or the internet once did.
Unlike today’s narrow AI systems, AGI would learn, reason and adapt across domains, handling everything from creative writing to scientific research without being limited to a single task.
Recent breakthroughs in neural architecture, multimodal models, and self-improving algorithms bring AGI closer—systems like GPT-4o and DeepMind’s Gemini now process language, images, audio and video together.
Open-source tools such as AutoGPT show early signs of autonomous reasoning. Memory-enabled AIs and brain-computer interfaces are blurring the line between human and machine thought, while companies race to develop systems that can not only learn but learn how to learn.
Though true AGI hasn’t yet arrived, early applications show its potential. AI already assists in generating code, designing products, supporting mental health, and uncovering scientific insights.
AGI could transform industries such as healthcare, finance, education, and defence as development accelerates — not just by automating tasks but also by amplifying human capabilities.
Still, the rise of AGI raises difficult questions.
How can societies ensure safety, fairness, and control over systems that are more intelligent than their creators? Issues like bias, job disruption and data privacy demand urgent attention.
Most importantly, global cooperation and ethical design are essential to ensure AGI benefits humanity rather than becoming a threat.
The challenge is no longer whether AGI is coming but whether we are ready to shape it wisely.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new report comparing leading AI chatbots on privacy grounds has named Le Chat by Mistral AI as the most respectful of user data.
The study, conducted by data removal service Incogni, assessed nine generative AI services using eleven criteria related to data usage, transparency and user control.
Le Chat emerged as the top performer thanks to limited data collection and clear privacy practices, although it lost some points on transparency.
ChatGPT followed in second place, earning praise for providing clear privacy policies and offering users tools to limit data use, despite concerns about how it handles training data. Grok, xAI’s chatbot, took the third position, though its privacy policy was harder to read.
At the other end of the spectrum, Meta AI ranked lowest. Its data collection and sharing practices were flagged as the most invasive, with prompts reportedly shared within its corporate group and with research collaborators.
Microsoft’s Copilot and Google’s Gemini also performed poorly in terms of user control and data transparency.
Incogni’s report found that some services, such as ChatGPT, Grok and Le Chat, allow users to prevent their input from being used to train models. In contrast, others, including Gemini, Pi AI, DeepSeek and Meta AI, offered no clear way to opt out.
The report emphasised that simple, well-maintained privacy support pages can significantly improve user trust and understanding.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
North Korean hackers have reportedly used deepfake technology to impersonate executives during a fake Zoom call in an attempt to install malware and steal cryptocurrency from a targeted employee.
Cybersecurity firm Huntress identified the scheme, which involved a convincingly staged meeting and a custom-built AppleScript targeting macOS systems—an unusual move that signals the rising sophistication of state-sponsored cyberattacks.
The incident began with a fraudulent Calendly invitation, which redirected the employee to a fake Zoom link controlled by the attackers. Weeks later, the employee joined what appeared to be a routine video call with company leadership. In reality, the participants were AI-generated deepfakes.
When audio issues arose, the hackers convinced the user to install what was supposedly a Zoom extension but was, in fact, malware designed to hijack cryptocurrency wallets and steal clipboard data.
Huntress traced the attack to TA444, a North Korean group also known by names like BlueNoroff and STARDUST CHOLLIMA. Their malware was built to extract sensitive financial data while disguising its presence and erasing traces once the job was done.
Security experts warn that remote workers and companies must be especially cautious. Unfamiliar calendar links, sudden platform changes, or requests to install new software should be treated as warning signs.
Verifying suspicious meeting invites through alternative contact methods — like a direct phone call — is a straightforward but vital way to prevent damage.
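To make that advice concrete, a minimal sketch in Python shows one way an organisation might flag invite links pointing outside the meeting platforms it actually uses; the allowlist below is hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the meeting platforms an organisation actually uses.
TRUSTED_MEETING_DOMAINS = {"zoom.us", "meet.google.com", "teams.microsoft.com"}

def is_trusted_invite(url: str) -> bool:
    """Return True only if the link's host is a trusted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_MEETING_DOMAINS)

print(is_trusted_invite("https://us02web.zoom.us/j/123456789"))         # True
print(is_trusted_invite("https://zoom-meeting-invite.example.com/j/1"))  # False: lookalike host
```

A check like this is only a first filter; it would not have caught the deepfake participants themselves, which is why out-of-band verification still matters.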
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Since 2015, 21 June has marked the International Day of Yoga, celebrating the ancient Indian practice that blends physical movement, breathing, and meditation. But as the world becomes increasingly digital, yoga itself is evolving.
No longer limited to ashrams or studios, yoga today exists on mobile apps, YouTube channels, and even in virtual reality. On the surface, this democratisation seems like a triumph. But what are the more profound implications of digitising a deeply spiritual and embodied tradition? And how do emerging technologies, particularly AI, reshape how we understand and experience yoga in a hyper-connected world?
Tech and wellness: The rise of AI-driven yoga tools
The wellness tech market has exploded, and yoga is a major beneficiary. Apps like Down Dog, YogaGo, and Glo offer personalised yoga sessions, while wearables such as the Apple Watch or Fitbit track heart rate and breathing.
Meanwhile, AI-powered platforms can generate tailored yoga routines based on user preferences, injury history, or biometric feedback. For example, AI motion tracking tools can evaluate your poses in real time, offering corrections much like a human instructor.
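To illustrate how such tools typically work under the hood, here is a minimal sketch using the open-source MediaPipe library to estimate a single joint angle from a photo; the input file name and the knee-angle threshold are illustrative assumptions, not a production-grade form checker.

```python
import math
import cv2
import mediapipe as mp

def joint_angle(a, b, c) -> float:
    """Angle at landmark b (in degrees) formed by the segments b->a and b->c."""
    ang = math.degrees(
        math.atan2(c.y - b.y, c.x - b.x) - math.atan2(a.y - b.y, a.x - b.x)
    )
    ang = abs(ang)
    return ang if ang <= 180 else 360 - ang

mp_pose = mp.solutions.pose
image = cv2.imread("pose.jpg")  # hypothetical input frame

with mp_pose.Pose(static_image_mode=True) as pose:
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.pose_landmarks:
    lm = results.pose_landmarks.landmark
    knee = joint_angle(
        lm[mp_pose.PoseLandmark.LEFT_HIP],
        lm[mp_pose.PoseLandmark.LEFT_KNEE],
        lm[mp_pose.PoseLandmark.LEFT_ANKLE],
    )
    # A straight standing leg is close to 180 degrees; 160 is an illustrative cut-off.
    print(f"Left knee angle: {knee:.0f} degrees", "- bend detected" if knee < 160 else "")
```

Real products run this kind of check on every video frame and compare many joint angles against reference poses.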
While these tools increase accessibility, they also raise questions about data privacy, consent, and the commodification of spiritual practices. What happens when biometric data from yoga sessions is monetised? Who owns your breath and posture data? These questions sit at the intersection of AI ethics and digital rights.
Beyond the mat: Virtual reality and immersive yoga
The emergence of virtual reality (VR) and augmented reality (AR) is pushing the boundaries of yoga practice. Platforms like TRIPP or Supernatural offer immersive wellness environments where users can perform guided meditation and yoga in surreal, digitally rendered landscapes.
These tools promise enhanced focus and escapism—but also risk detachment from embodied experience. Does VR yoga deepen the meditative state, or does it dilute the tradition by gamifying it? As these technologies grow in sophistication, we must question how presence, environment, and embodiment translate in virtual spaces.
Can AI be a guru? Empathy, authority, and the limits of automation
One provocative question is whether AI can serve as a spiritual guide. AI instructors—whether through chatbots or embodied in VR—may be able to correct your form or suggest breathing techniques. But can they foster the deep, transformative relationship that many associate with traditional yoga masters?
AI lacks emotional intuition, moral responsibility, and cultural embeddedness. While it can mimic the language and movements of yoga, it struggles to replicate the teacher-student connection that grounds authentic practice. As AI becomes more integrated into wellness platforms, we must ask: where do we draw the line between assistance and appropriation?
Community, loneliness, and digital yoga tribes
Yoga has always been more than individual practice—community is central. Yet, as yoga moves online, questions of connection and belonging arise. Can digital communities built on hashtags and video streams replicate the support and accountability of physical sanghas (spiritual communities)?
Paradoxically, while digital yoga connects millions, it may also contribute to isolation. A solitary practice in front of a screen lacks the energy, feedback, and spontaneity of group practice. For tech developers and wellness advocates, the challenge is to reimagine digital spaces that foster authentic community rather than algorithmic echo chambers.
Digital policy and the politics of platformised spirituality
Beyond the individual experience, there’s a broader question of how yoga operates within global digital ecosystems. Platforms like YouTube, Instagram, and TikTok have turned yoga into shareable content, often stripped of its philosophical and spiritual roots.
Meanwhile, Big Tech companies capitalise on wellness trends while contributing to stress-inducing algorithmic environments. There are also geopolitical and cultural considerations.
The export of yoga through Western tech platforms often sidesteps its South Asian origins, raising issues of cultural appropriation. From a policy perspective, regulators must grapple with how spiritual practices are commodified, surveilled, and reshaped by AI-driven infrastructures.
Toward inclusive and ethical design in wellness tech
As AI and digital tools become more deeply embedded in yoga practice, there is a pressing need for ethical design. Developers should consider how their platforms accommodate different bodies, abilities, cultures, and languages. For example, how can AI be trained to recognise non-normative movement patterns? Are apps accessible to users with disabilities?
Inclusive design is not only a matter of social justice—it also aligns with yogic principles of compassion, awareness, and non-harm. Embedding these values into AI development can help ensure that the future of yoga tech is as mindful as the practice it seeks to support.
Toward a mindful tech future
As we celebrate International Day of Yoga, we are called to reflect not only on the practice itself but also on its evolving digital context. Emerging technologies offer powerful tools for access and personalisation, but they also risk diluting the depth and ethics of yoga.
For policymakers, technologists, and practitioners alike, the challenge is to ensure that yoga in the digital age remains a practice of liberation rather than a product of algorithmic control. Yoga teaches awareness, balance, and presence. These are the very qualities we need to shape responsible digital policies in an AI-driven world.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
At the 2025 Internet Governance Forum in Lillestrøm, Norway, parliamentarians from around the world gathered to share perspectives on how to regulate harmful online content without infringing on freedom of expression and democratic values. The session, moderated by Sorina Teleanu, Diplo’s Director of Knowledge, highlighted the increasing urgency for social media platforms to respond more swiftly and responsibly to harmful content, particularly content generated by AI that can lead to real-world consequences such as harassment, mental health issues, and even suicide.
Pakistan’s Anusha Rahman Ahmad Khan delivered a powerful appeal, pointing to cultural insensitivity and profit-driven resistance by platforms that often ignore urgent content removal requests. Representatives from Argentina, Nepal, Bulgaria, and South Africa echoed the need for effective legal frameworks that uphold safety and fundamental rights.
Argentina’s Franco Metaza, Member of Parliament of Mercosur, cited disturbing content that promotes eating disorders among young girls and detailed the tangible danger of disinformation, including an assassination attempt linked to online hate. Nepal’s MP Yogesh Bhattarai advocated for regulation without authoritarian control, underscoring the importance of constitutional safeguards for speech.
Tsvetelina Penkova, a Member of the European Parliament from Bulgaria, outlined the EU’s multifaceted digital laws, such as the Digital Services Act and the GDPR, which aim to protect users while grappling with implementation challenges across 27 diverse member states.
Youth engagement and digital literacy emerged as key themes, with several speakers emphasising that involving young people in policymaking leads to better, more inclusive policies. Panellists also stressed that education is essential for equipping users with the tools to navigate online spaces safely and critically.
Calls for multistakeholder cooperation rang throughout the session, with consensus on the need for collaboration between governments, tech companies, civil society, and international organisations. A thought-provoking proposal from a Congolese parliamentarian suggested that digital rights be recognised as a new, fourth generation of human rights—akin to civil, economic, and environmental rights already codified in international frameworks.
Other attendees welcomed the idea and agreed that without such recognition, the enforcement of digital protections would remain fragmented. The session concluded on a collaborative and urgent note, with calls for shared responsibility, joint strategies, and stronger international frameworks to create a safer, more just digital future.
Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.
Google has launched its advanced AI Mode search experience in India, allowing users to explore information through more natural and complex interactions.
The feature, previously available as an experiment in the US, can now be enabled in English via Search Labs, the platform where users test experimental tools and share feedback on early Google Search features.
Once activated, AI Mode introduces a new tab in the Search interface and Google app. It offers expanded reasoning capabilities powered by Gemini 2.5, enabling queries through text, voice, or images.
The shift supports deeper exploration by allowing follow-up questions and offering diverse web links, helping users understand topics from multiple viewpoints.
India plays a key role in this rollout due to its widespread visual and voice search use.
According to Hema Budaraju, Vice President of Product Management for Search, more users in India engage with Google Lens each month than anywhere else. AI Mode reflects Google’s broader goal of making information accessible across different formats.
Google also highlighted that over 1.5 billion people globally use AI Overviews monthly. These AI-generated summaries, which appear at the top of search results, have driven a 10% rise in user engagement for specific types of queries in both India and the US.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
At the 2025 Internet Governance Forum in Lillestrøm, Norway, a parliamentary session titled ‘Click with Care: Protecting Vulnerable Groups Online’ gathered lawmakers, regulators, and digital rights experts from around the world to confront the urgent issue of online harm targeting marginalised communities. Speakers from Uganda, the Philippines, Malaysia, Pakistan, the Netherlands, Portugal, and Kenya shared insights on how current laws often fall short, especially in the Global South where women, children, and LGBTQ+ groups face disproportionate digital threats.
Research presented showed alarming trends—one in three African women experience online abuse, often with no support or recourse, and platforms’ moderation systems are frequently inadequate, slow, or biased in favour of users from the Global North.
The session exposed critical gaps in enforcement and accountability, particularly regarding large platforms like Meta and Google, which frequently resist compliance with national regulations. Malaysian Deputy Minister Teo Nie Ching and others emphasised that individual countries struggle to hold tech giants accountable, leading to calls for stronger regional blocs and international cooperation.
Meanwhile, Philippine lawmaker Raoul Manuel highlighted legislative progress, including extraterritorial jurisdiction for child exploitation and expanded definitions of online violence, though enforcement remains patchy. In Pakistan, Nighat Dad raised the alarm over AI-generated deepfakes and the burden placed on victims to monitor and report their own abuse.
Panellists also stressed that simply taking down harmful content isn’t enough. They called for systemic platform reform, including greater algorithm transparency, meaningful reporting tools, and design changes that prevent harm before it occurs.
Behavioural economist Sandra Maximiano introduced the concept of ‘nudging’ safer user behaviour through design interventions that account for human cognitive biases—approaches that could complement legal strategies by embedding protection into the architecture of online spaces.
Why does it matter?
A powerful takeaway from the session was the consensus that online safety must be treated as both a technological and human challenge. Participants agreed that coordinated global responses, inclusive policymaking, and engagement with community structures are essential to making the internet a safer place—particularly for those who need protection the most.
Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.