Google Doppl, the new AI app, turns outfit photos into try-on videos

Google has unveiled Doppl, a new AI-powered app that lets users create short videos of themselves wearing any outfit they choose.

Instead of relying on imagination or guesswork, Doppl allows people to upload full-body photos and apply outfits they have seen on social media, in thrift shops, or on friends, creating animated try-ons that bring static images to life.

The app builds on Google’s earlier virtual try-on tools integrated with its Shopping Graph. Doppl pushes things further by transforming still photos into motion videos, showing how clothes flow and fit in movement.

Users can upload their full-body image or choose an AI model to preview outfits. However, Google warns that at this early stage, the fit and details might not always be accurate.

Doppl is currently available only in the US, for Android and iOS users aged 18 or older. While Google encourages sharing videos with friends and followers, the tool raises concerns about misuse, such as generating content from photos of other people.

Google’s policy requires disclosure if someone impersonates another person, but the company admits that some abuse may occur. To address the issue, Doppl content will include invisible watermarks for tracking.
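Google has not published Doppl’s watermarking scheme, but the general idea of an invisible watermark can be illustrated with a toy least-significant-bit (LSB) embedder. The sketch below is illustrative only: the payload and stand-in image are made up, and production systems such as Google’s SynthID are far more robust against cropping, compression, and editing.

```python
# Toy illustration of invisible watermarking via least-significant-bit (LSB)
# embedding. This is NOT Doppl's actual scheme (unpublished); it only shows
# that a payload can hide in pixel data without visibly changing the image.
import numpy as np

def embed(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()                      # flatten() returns a copy
    assert bits.size <= flat.size, "image too small for payload"
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_bytes: int) -> bytes:
    bits = pixels.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in image
marked = embed(img, b"doppl:v1")                               # made-up payload
print(extract(marked, 8))  # b'doppl:v1'
```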

In its privacy notice, Google confirmed that user uploads and generated videos will be used to improve AI technologies and services. However, data will be anonymised and separated from user accounts before any human review is allowed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube adds AI search results for travel, shopping and more

YouTube is launching a new AI-powered search feature that mirrors Google’s AI Overviews, aiming to improve how users discover content on the platform.

The update introduces an ‘AI-powered search results carousel’ when YouTube users search for shopping, travel, or local activities.

The carousel offers a collection of video thumbnails and an AI-generated summary highlighting the key topics related to the search. For example, someone searching for ‘best beaches in Hawaii’ might see curated clips of snorkelling locations, volcanic coastlines, and planning tips — all surfaced by the AI.

Currently, the feature is available only to YouTube Premium users in the US. However, the platform plans to expand its conversational AI tool — which provides deeper insights, suggestions, and video summaries — to non-Premium users in the US soon.

That tool was first launched in 2023 to help users better understand content while watching.

YouTube is doubling down on AI features to keep users engaged and make content discovery more intuitive, especially in categories involving planning and decision-making.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bridging the digital divide through language inclusion

At the Internet Governance Forum 2025 in Norway, a high-level panel of global experts highlighted the urgent need to embed language inclusion into internet governance and digital rights frameworks.

While internet access has expanded globally, billions remain excluded from meaningful participation due to the continued dominance of a few major languages online.

Moderated by Ram Mohan, Chief Strategy Officer of Identity Digital and Chair of the newly formed Coalition on Digital Impact (CODI), the session brought together speakers from ICANN, the Unicode Consortium, DotAsia, DOTAU, Egypt’s National Telecom Regulatory Authority, and other institutions. The consensus was clear: true digital inclusion is not possible without linguistic inclusion.

‘There are over 7,000 languages in the world, yet nearly half of online content is still in English,’ said Jennifer Chung, Vice President of Policy at DotAsia Organisation. ‘This creates barriers not just to access, but to culture, safety, and economic opportunity.’

Toral Cowieson, CEO of the Unicode Consortium, explained how foundational technical issues still limit language access. ‘Digital inclusion begins with character encoding. Things like date formatting or currency symbols work seamlessly for majority languages, but often break down for minority ones.’
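Cowieson’s encoding point is easy to demonstrate. Below is a minimal sketch using Python’s CLDR-backed Babel library (assuming `pip install babel`); the locale codes are examples, and `xx_XX` is a deliberately invalid stand-in for an under-resourced language missing from the data:

```python
# Locale-aware formatting works out of the box for well-supported languages,
# but a locale absent from the underlying CLDR data simply fails, forcing
# software to fall back to a majority language.
from datetime import date
from babel import UnknownLocaleError
from babel.dates import format_date
from babel.numbers import format_currency

today = date(2025, 6, 26)
for loc in ("en_US", "ar_EG", "hi_IN"):          # widely supported locales
    print(loc, format_date(today, format="long", locale=loc),
          format_currency(1250, "USD", locale=loc))

try:
    # xx_XX stands in for a language with no CLDR coverage.
    print(format_date(today, format="long", locale="xx_XX"))
except (UnknownLocaleError, ValueError):
    print("xx_XX missing from CLDR -- falling back to en_US:",
          format_date(today, format="long", locale="en_US"))
```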

Manal Ismail of Egypt’s National Telecom Regulatory Authority stressed the importance of government involvement. ‘Language remains a fundamental axis of inequality online,’ she said. ‘We need multilingual access to be treated like other digital infrastructure, alongside cybersecurity and connectivity.’


Sophie Mitchell, Chief Communications Officer at DOTAU, drew attention to the challenges in Australia, where 30% of the population is born overseas and Indigenous languages face extinction. ‘Digital access alone isn’t enough. Without relevant content in native languages, people can’t participate meaningfully,’ she noted.

Theresa Swinehart, representing ICANN, described how historical bias in internet design continues to limit multilingual adoption. ‘We’ve made technical progress, but implementation lags due to awareness gaps. It’s time to lead by example,’ she urged.

Christian Dawson, Executive Director of the Internet Infrastructure Coalition and CODI co-founder, echoed this sentiment. ‘We’re not lacking technology—we’re lacking coordination and motivation. CODI was created to connect those doing good work and help them scale.’

The panel called for a shift from the prevailing ‘English-first’ approach to a ‘multilingual by design’ philosophy, where language accessibility is embedded in digital systems from the start rather than added later. As Chung put it, ‘It’s not just about preserving language—it’s about preserving culture, enhancing security, and enabling rights.’

Audience members also offered insights. Mohammed Abdul Haq Onu from the Bangladesh Internet Governance Forum highlighted successful efforts to promote Bangla language support. Elisabeth Carrera of Wikimedia Norway noted that 88% of traffic to Northern Sami Wikipedia comes from bots and language models, not humans—signalling both the promise and risks of AI in language preservation.

As part of the session’s outcome, each participant committed to concrete follow-up actions, including raising awareness, fostering collaboration, and supporting open data initiatives. The session closed on an optimistic note, with Mohan emphasising, ‘Technology should serve languages—not the other way around.’

The panel’s discussion marked a turning point, framing multilingual internet access not as a luxury, but as a fundamental digital right with far-reaching implications for cultural preservation, cybersecurity, and inclusive economic development.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Child safety online in 2025: Global leaders demand stronger rules

At the 20th Internet Governance Forum in Lillestrøm, Norway, global leaders, technology firms, and child rights advocates gathered to address the growing risks children face from algorithm-driven digital platforms.

The high-level session, ‘Ensuring Child Security in the Age of Algorithms’, explored the impact of engagement-based algorithmic systems on children’s mental health, cultural identity, and digital well-being.

Shivanee Thapa, Senior News Editor at Nepal Television and moderator of the session, opened with a personal note on the urgency of the issue, calling it ‘too urgent, too complex, and too personal.’

She outlined the session’s three focus areas: identifying algorithmic risks, reimagining child-centred digital systems, and defining accountability for all stakeholders.


Leanda Barrington-Leach, Executive Director of the Five Rights Foundation, delivered a powerful opening, sharing alarming data: ‘Half of children feel addicted to the internet, and more than three-quarters encounter disturbing content.’

She criticised tech platforms for prioritising engagement and profit over child safety, warning that children can stumble from harmless searches to harmful content in a matter of clicks.

‘The digital world is 100% human-engineered. It can be optimised for good just as easily as for bad,’ she said.

Norway is pushing for age limits on social media and implementing phone bans in classrooms, according to Minister of Digitalisation and Public Governance Karianne Tung.

‘Children are not commodities,’ she said. ‘We must build platforms that respect their rights and wellbeing.’

Salima Bah, Sierra Leone’s Minister of Science, Technology, and Innovation, raised concerns about cultural erasure in algorithmic design. ‘These systems often fail to reflect African identities and values,’ she warned, noting that a significant portion of internet traffic in Sierra Leone flows through TikTok.

Bah emphasised the need for inclusive regulation that works for regions with different digital access levels.

From the European Commission, Thibaut Kleiner, Director for Future Networks at DG Connect, pointed to the Digital Services Act as a robust regulatory model.

He challenged the assumption that children are ‘digital natives’ and called for stronger age verification systems. ‘Children use apps but often don’t understand how they work — this makes them especially vulnerable,’ he said.

Representatives from major platforms described their approaches to online safety. Christine Grahn, Head of Public Policy at TikTok Europe, emphasised safety-by-design features such as private default settings for minors and the Global Youth Council.

‘We show up, we listen, and we act,’ she stated, describing TikTok’s ban on beauty filters that alter appearance as a response to youth feedback.

Emily Yu, Policy Senior Director at Roblox, discussed the platform’s Trust by Design programme and its global teen council.

‘We aim to innovate while keeping safety and privacy at the core,’ she said, noting that Roblox emphasises discoverability over personalised content for young users.

Thomas Davin, Director of Innovation at UNICEF, underscored the long-term health and societal costs of algorithmic harm, describing it as a public health crisis.

‘We are at risk of losing the concept of truth itself. Children increasingly believe what algorithms feed them,’ he warned, stressing the need for more research on screen time’s effect on neurodevelopment.

The panel agreed that protecting children online requires more than regulation alone. Co-regulation, international cooperation, and inclusion of children’s voices were cited as essential.

Davin called for partnerships that enable companies to innovate responsibly, while Grahn described a successful campaign in Sweden that used cross-sector collaboration to help teens avoid criminal exploitation.

Tung concluded with a rallying message: ‘Looking back 10 or 20 years from now, I want to know I stood on the children’s side.’

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Meta wins copyright case over AI training

Meta has won a copyright lawsuit brought by a group of authors who accused the company of using their books without permission to train its Llama generative AI.

A US federal judge in San Francisco ruled the AI training was ‘transformative’ enough to qualify as fair use under copyright law.

Judge Vince Chhabria noted, however, that future claims could be more successful. He warned that using copyrighted books to build tools capable of flooding the market with competing works may not always be protected by fair use, especially when such tools generate vast profits.

The case involved pirated copies of books, including Sarah Silverman’s memoir ‘The Bedwetter’ and Junot Diaz’s award-winning novel ‘The Brief Wondrous Life of Oscar Wao’. Meta defended its approach, stating that open-source AI drives innovation and relies on fair use as a key legal principle.

Chhabria clarified that the ruling does not confirm the legality of Meta’s actions, only that the plaintiffs made weak arguments. He suggested that more substantial evidence and legal framing might lead to a different outcome in future cases.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp launches AI feature to summarise unread messages

WhatsApp has introduced a new feature that uses Meta AI to help users manage unread messages more easily. Named ‘Message Summaries’, the tool provides quick overviews of missed messages in individual and group chats, helping users catch up without scrolling through long threads.

The summaries are generated using Meta’s Private Processing technology, which operates inside a Trusted Execution Environment. The secure cloud-based system ensures that neither Meta nor WhatsApp — nor anyone else in the conversation — can access your messages or the AI-generated summaries.

According to WhatsApp, Message Summaries are entirely private: no one else in the chat can see the summary created for you. If someone attempts to tamper with the secure system, processing stops immediately, or the change is exposed by a built-in transparency check.

Meta has designed the system around three principles: secure data handling during processing and transmission, strict enforcement of protections against tampering, and provable transparency to track any breach attempt.
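As a rough mental model only, and not Meta’s actual Private Processing protocol (which relies on hardware attestation and sealed session keys), the sketch below shows the core Trusted Execution Environment idea: the host only forwards opaque ciphertext, and only code inside the enclave can decrypt it.

```python
# Conceptual toy of TEE-based private processing: the decryption key exists
# only inside the enclave, so the host operator never sees plaintext.
# In a real deployment the client would derive a shared key only after
# verifying the enclave's remote attestation; a single pre-shared key is
# used here purely to keep the sketch short. Requires `pip install cryptography`.
from cryptography.fernet import Fernet

ENCLAVE_KEY = Fernet.generate_key()  # provisioned inside the enclave only

def client_send(messages: list[str]) -> bytes:
    """Client encrypts unread messages before they leave the device."""
    return Fernet(ENCLAVE_KEY).encrypt("\n".join(messages).encode())

def enclave_summarise(ciphertext: bytes) -> bytes:
    """Runs inside the TEE: decrypt, summarise, re-encrypt for the client."""
    text = Fernet(ENCLAVE_KEY).decrypt(ciphertext).decode()
    summary = f"{len(text.splitlines())} unread messages"  # stand-in for an LLM
    return Fernet(ENCLAVE_KEY).encrypt(summary.encode())

def untrusted_server(ciphertext: bytes) -> bytes:
    """The host forwards opaque bytes; it cannot read them."""
    return enclave_summarise(ciphertext)

blob = untrusted_server(client_send(["Dinner at 8?", "Bring dessert!"]))
print(Fernet(ENCLAVE_KEY).decrypt(blob).decode())  # "2 unread messages"
```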

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI sandboxes pave path for responsible innovation in developing countries

At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts from around the world gathered to examine how AI sandboxes—safe, controlled environments for testing new technologies under regulatory oversight—can help ensure that innovation remains responsible and inclusive, especially in developing countries. Moderated by Sophie Tomlinson of the DataSphere Initiative, the session spotlighted the growing global appeal of sandboxes, initially developed for fintech and now extending into healthcare, transportation, and data governance.

Speakers emphasised that sandboxes provide a much-needed collaborative space for regulators, companies, and civil society to test AI solutions before launching them into the real world. Mariana Rozo-Paz from the DataSphere Initiative likened them to childhood spaces for building and experimentation, underscoring their agility and potential for creative governance.

From the European AI Office, Alex Moltzau described how the EU AI Act integrates sandboxes to support safe innovation and cross-border collaboration. On the African continent, where 25 sandboxes already exist (mainly in finance), countries like Nigeria are using them to implement data protection laws and shape national AI strategies. However, funding and legal authority remain hurdles.

The workshop laid bare several shared challenges: limited resources, the lack of clear legal frameworks, and insufficient participation by civil society. Natalie Cohen of the OECD pointed out that only 41% of people trust governments to regulate new technologies effectively—a gap that sandboxes can help bridge. By enabling evidence-based experimentation and promoting transparency, they serve as trust-building tools among governments, businesses, and communities.

Despite regional differences, there was consensus that AI sandboxes—when well-designed and inclusive—can drive equitable digital innovation. With initiatives like the Global Sandboxes Forum and OECD toolkits in progress, stakeholders signalled a readiness to move from theory to practice, viewing sandboxes as more than just regulatory experiments—they are, increasingly, catalysts for international cooperation and responsible AI deployment.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Top 7 AI agents transforming business in 2025

AI agents are no longer a futuristic concept — they’re now embedded in the everyday operations of major companies across sectors.

From customer service to data analysis, AI-powered agents transform workflows by handling tasks like scheduling, reporting, and decision-making with minimal human input.

Unlike simple chatbots, today’s AI agents understand context, follow multi-step instructions, and integrate seamlessly with business tools. Google’s Gemini Agents, IBM’s Watsonx Orchestrate, Microsoft Copilot, and OpenAI’s Operator are among the tools reshaping how businesses function.

These systems interpret goals and act on behalf of employees, boosting productivity without needing constant prompts.
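A minimal sketch of what such an agent loop looks like is shown below; `call_llm` is a hypothetical stand-in for a real provider SDK (Gemini, Copilot, Operator and the like), and the tool names and message format are illustrative, not any vendor’s API.

```python
# Skeleton of an agentic loop: the model either requests a tool call or
# returns a final answer, and the loop feeds tool results back as context.
import json

def get_calendar(day: str) -> str:          # example business tool
    return json.dumps({"day": day, "slots": ["09:00", "14:00"]})

TOOLS = {"get_calendar": get_calendar}

def call_llm(messages: list[dict]) -> dict:
    """Placeholder: a real implementation would call a provider SDK here."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_calendar", "args": {"day": "Friday"}}
    return {"answer": "You are free on Friday at 09:00 or 14:00."}

def run_agent(goal: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):               # bounded: not fully autonomous
        reply = call_llm(messages)
        if "answer" in reply:                # the model decided it is done
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])  # execute the tool
        messages.append({"role": "tool", "content": result})
    return "step limit reached"

print(run_agent("When am I free to meet on Friday?"))
```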

Other leading platforms include Amelia, known for its enterprise-grade capabilities in finance and telecom; Claude by Anthropic, focused on safe and transparent reasoning; and North by Cohere, which delivers sector-specific AI for clients like Oracle and SAP.

Many of these tools offer no-code or low-code setups, enabling faster adoption across HR, finance, customer support, and more.

While most agents aren’t entirely autonomous, they’re designed to perform meaningful work and evolve with feedback.

The rise of agentic AI marks a significant shift in workplace automation as businesses move beyond experimentation toward real-world implementation, one workflow at a time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AGI moves closer to reshaping society

There was a time when machines that think like humans existed only in science fiction. But AGI now stands on the edge of becoming a reality — and it could reshape our world as profoundly as electricity or the internet once did.

Unlike today’s narrow AI systems, AGI would be able to learn, reason and adapt across domains, handling everything from creative writing to scientific research without being limited to a single task.

Recent breakthroughs in neural architecture, multimodal models, and self-improving algorithms bring AGI closer—systems like GPT-4o and DeepMind’s Gemini now process language, images, audio and video together.

Open-source tools such as AutoGPT show early signs of autonomous reasoning. Memory-enabled AIs and brain-computer interfaces are blurring the line between human and machine thought while companies race to develop systems that can not only learn but learn how to learn.

Though true AGI hasn’t yet arrived, early applications show its potential. AI already assists in generating code, designing products, supporting mental health, and uncovering scientific insights.

AGI could transform industries such as healthcare, finance, education, and defence as development accelerates — not just by automating tasks but also by amplifying human capabilities.

Still, the rise of AGI raises difficult questions.

How can societies ensure safety, fairness, and control over systems that are more intelligent than their creators? Issues like bias, job disruption and data privacy demand urgent attention.

Most importantly, global cooperation and ethical design are essential to ensure AGI benefits humanity rather than becoming a threat.

The challenge is no longer whether AGI is coming but whether we are ready to shape it wisely.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New ranking shows which AI respects your data

A new report comparing leading AI chatbots on privacy grounds has named Le Chat by Mistral AI as the most respectful of user data.

The study, conducted by data removal service Incogni, assessed nine generative AI services using eleven criteria related to data usage, transparency and user control.

Le Chat emerged as the top performer thanks to limited data collection and clear privacy practices, although it lost some points on transparency.

ChatGPT followed in second place, earning praise for clear privacy policies and for offering users tools to limit data use, despite concerns about how training data is handled. Grok, xAI’s chatbot, took third place, though its privacy policy was harder to read.

At the other end of the spectrum, Meta AI ranked lowest. Its data collection and sharing practices were flagged as the most invasive, with prompts reportedly shared within its corporate group and with research collaborators.

Microsoft’s Copilot and Google’s Gemini also performed poorly in terms of user control and data transparency.

Incogni’s report found that some services, such as ChatGPT, Grok and Le Chat, allow users to prevent their input from being used to train models. In contrast, others, including Gemini, Pi AI, DeepSeek and Meta AI, offered no clear way to opt out.

The report emphasised that simple, well-maintained privacy support pages can significantly improve user trust and understanding.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!