Google Doppl, a new AI app, turns outfit photos into try-on videos

Google has unveiled Doppl, a new AI-powered app that lets users create short videos of themselves wearing any outfit they choose.

Instead of relying on imagination or guesswork, Doppl allows people to upload full-body photos and apply outfits spotted on social media, in thrift shops, or on friends, creating animated try-ons that bring static images to life.

The app builds on Google’s earlier virtual try-on tools integrated with its Shopping Graph. Doppl pushes things further by transforming still photos into motion videos, showing how clothes flow and fit in movement.

Users can upload a full-body image or choose an AI model to preview outfits. However, Google warns that at this early stage the fit and details might not always be accurate.

Doppl is currently only available in the US for Android and iOS users aged 18 or older. While Google encourages sharing videos with friends and followers, the tool raises concerns about misuse, such as generating content using photos of others.

Google’s policy requires disclosure if someone impersonates another person, but the company admits that some abuse may occur. To address the issue, Doppl content will include invisible watermarks for tracking.
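
Google has not published Doppl’s watermarking scheme, but the general idea can be pictured with a toy sketch: hide a short machine-readable tag in the least significant bits of pixel values, a change detectors can read but viewers cannot see. The tag and approach below are illustrative assumptions, not Google’s method:

```python
# Toy illustration of invisible watermarking via least-significant-bit (LSB)
# embedding. This is NOT Google's actual scheme (which is not public); it only
# shows how a detectable tag can hide in pixel data without visible change.

TAG = "DOPPL"  # hypothetical tag

def embed(pixels, tag=TAG):
    """Hide the tag's bits in the least significant bit of each pixel value."""
    bits = [int(b) for byte in tag.encode() for b in f"{byte:08b}"]
    marked = list(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite LSB; value shifts by at most 1
    return marked

def extract(pixels, length=len(TAG)):
    """Read back `length` bytes from the least significant bits."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        int("".join(map(str, bits[i : i + 8])), 2) for i in range(0, len(bits), 8)
    ).decode(errors="replace")

pixels = list(range(100, 200))          # stand-in for grayscale pixel values
assert extract(embed(pixels)) == "DOPPL"  # tag survives, image barely changes
```

Production AI watermarks, such as Google’s SynthID family, are designed to be far more robust, surviving compression and edits, but the embed-and-detect structure is the same.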

In its privacy notice, Google confirmed that user uploads and generated videos will be used to improve AI technologies and services. However, data will be anonymised and separated from user accounts before any human review is allowed.
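
What ‘anonymised and separated from user accounts’ can mean in practice is easiest to see in a minimal sketch. The field names below are hypothetical, and Google’s actual pipeline is not public:

```python
# Minimal de-identification sketch with hypothetical field names; Google's
# actual pipeline is not public. Account identifiers are dropped and replaced
# with a random token that cannot be traced back to the user.
import secrets

ACCOUNT_FIELDS = {"account_id", "email", "device_id"}  # hypothetical

def deidentify(upload: dict) -> dict:
    """Strip account-linked fields before an item is queued for human review."""
    reviewed = {k: v for k, v in upload.items() if k not in ACCOUNT_FIELDS}
    reviewed["review_token"] = secrets.token_hex(8)  # random, not derived from the user
    return reviewed

sample = {"account_id": "u123", "email": "a@b.example", "video": "tryon.mp4"}
print(deidentify(sample))  # {'video': 'tryon.mp4', 'review_token': '...'}
```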

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Path forward for global digital cooperation debated at IGF 2025

At the 20th Internet Governance Forum (IGF) in Lillestrøm, Norway, policymakers, civil society, and digital stakeholders gathered to chart the future of global internet governance through the WSIS+20 review. With a high-level UN General Assembly meeting scheduled for December, co-facilitators from Kenya and Albania emphasised the need to update the World Summit on the Information Society (WSIS) framework while preserving its original, people-centred vision.

They underscored the importance of inclusive consultations, highlighting a new multistakeholder sounding board and upcoming joint sessions to enhance dialogue between governments and broader communities. The conversation revolved around the evolving digital landscape and how WSIS can adapt to emerging technologies like AI, data governance, and digital public infrastructure.

While some participants favoured WSIS as the primary global framework, others advocated for closer synergy with the Global Digital Compact (GDC), stressing the importance of coordination to avoid institutional duplication. Despite varied views, there was widespread consensus that the existing WSIS action lines, being technology-neutral, can remain relevant by accommodating new innovations.

Speakers from government, the private sector, and civil society reiterated the call to permanently secure the IGF’s mandate, praising its unique ability to foster open, inclusive dialogue without the pressure of binding negotiations. They pointed to the IGF’s historical success in boosting internet connectivity and called for more tangible outputs to influence policymaking.

National-level participation, especially from developing countries, women, youth, and marginalised communities, was identified as crucial for meaningful engagement.

The session ended on a hopeful note, with participants expressing a shared commitment to a more inclusive and equitable digital future. As the December deadline looms, the global community faces the task of turning shared principles into concrete action, ensuring digital governance mechanisms remain cooperative, adaptable, and genuinely representative of all voices.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Child safety online in 2025: Global leaders demand stronger rules

At the 20th Internet Governance Forum in Lillestrøm, Norway, global leaders, technology firms, and child rights advocates gathered to address the growing risks children face from algorithm-driven digital platforms.

The high-level session, ‘Ensuring Child Security in the Age of Algorithms’, explored the impact of engagement-based algorithmic systems on children’s mental health, cultural identity, and digital well-being.

Shivanee Thapa, Senior News Editor at Nepal Television and moderator of the session, opened with a personal note on the urgency of the issue, calling it ‘too urgent, too complex, and too personal.’

She outlined the session’s three focus areas: identifying algorithmic risks, reimagining child-centred digital systems, and defining accountability for all stakeholders.


Leanda Barrington-Leach, Executive Director of the 5Rights Foundation, delivered a powerful opening, sharing alarming data: ‘Half of children feel addicted to the internet, and more than three-quarters encounter disturbing content.’

She criticised tech platforms for prioritising engagement and profit over child safety, warning that children can stumble from harmless searches to harmful content in a matter of clicks.

‘The digital world is 100% human-engineered. It can be optimised for good just as easily as for bad,’ she said.

Norway is pushing for age limits on social media and implementing phone bans in classrooms, according to Minister of Digitalisation and Public Governance Karianne Tung.

‘Children are not commodities,’ she said. ‘We must build platforms that respect their rights and wellbeing.’

Salima Bah, Sierra Leone’s Minister of Science, Technology, and Innovation, raised concerns about cultural erasure in algorithmic design. ‘These systems often fail to reflect African identities and values,’ she warned, noting that a significant portion of internet traffic in Sierra Leone flows through TikTok.

Bah emphasised the need for inclusive regulation that works for regions with different digital access levels.

From the European Commission, Thibaut Kleiner, Director for Future Networks at DG Connect, pointed to the Digital Services Act as a robust regulatory model.

He challenged the assumption of children as ‘digital natives’ and called for stronger age verification systems. ‘Children use apps but often don’t understand how they work — this makes them especially vulnerable,’ he said.

Representatives from major platforms described their approaches to online safety. Christine Grahn, Head of Public Policy at TikTok Europe, emphasised safety-by-design features such as private default settings for minors and the Global Youth Council.

‘We show up, we listen, and we act,’ she stated, describing TikTok’s ban on beauty filters that alter appearance as a response to youth feedback.

Emily Yu, Policy Senior Director at Roblox, discussed the platform’s Trust by Design programme and its global teen council.

‘We aim to innovate while keeping safety and privacy at the core,’ she said, noting that Roblox emphasises discoverability over personalised content for young users.

Thomas Davin, Director of Innovation at UNICEF, underscored the long-term health and societal costs of algorithmic harm, describing it as a public health crisis.

‘We are at risk of losing the concept of truth itself. Children increasingly believe what algorithms feed them,’ he warned, stressing the need for more research on screen time’s effect on neurodevelopment.

The panel agreed that protecting children online requires more than regulation alone. Co-regulation, international cooperation, and inclusion of children’s voices were cited as essential.

Davin called for partnerships that enable companies to innovate responsibly, while Grahn described a successful cross-sector campaign in Sweden that helped teens avoid criminal exploitation.

Tung concluded with a rallying message: ‘Looking back 10 or 20 years from now, I want to know I stood on the children’s side.’

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Microsoft Family Safety blocks Google Chrome on Windows 11

Windows 11 users have reported that Google Chrome crashes and fails to reopen when Microsoft Family Safety parental controls are active.

The issue appears to be linked to Chrome’s recent update, version 137.0.7151.68, and does not affect users of Microsoft Edge under the same settings.

Google acknowledged the problem and provided a workaround involving changes to Family Safety settings, such as unblocking Chrome or adjusting content filters.

Microsoft has not issued a formal statement, but its Family Safety FAQ confirms that web filtering works only in Microsoft Edge, so other browsers are blocked when the filter is enabled.

Users are encouraged to update Google Chrome to version 138.0.7204.50 to address other security concerns recently disclosed by Google.

The update aims to patch vulnerabilities that could let attackers bypass security policies and run malicious code.
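
Chrome’s dotted version strings compare numerically field by field, so checking whether an installed build includes the fix is straightforward. A quick illustrative sketch, not an official Google or Microsoft tool:

```python
# Check whether a Chrome version string is at or above the patched release
# mentioned above (138.0.7204.50). Illustration only.

PATCHED = (138, 0, 7204, 50)

def is_patched(version: str) -> bool:
    """Compare dotted version strings numerically, field by field."""
    return tuple(int(part) for part in version.split(".")) >= PATCHED

print(is_patched("137.0.7151.68"))  # False: the build tied to the crash reports
print(is_patched("138.0.7204.50"))  # True: contains the security fixes
```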

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Protecting the vulnerable online: Global lawmakers push for new digital safety standards

At the 2025 Internet Governance Forum in Lillestrøm, Norway, a parliamentary session titled ‘Click with Care: Protecting Vulnerable Groups Online’ gathered lawmakers, regulators, and digital rights experts from around the world to confront the urgent issue of online harm targeting marginalised communities. Speakers from Uganda, the Philippines, Malaysia, Pakistan, the Netherlands, Portugal, and Kenya shared insights on how current laws often fall short, especially in the Global South where women, children, and LGBTQ+ groups face disproportionate digital threats.

Research presented showed alarming trends: one in three African women experience online abuse, often with no support or recourse, and platforms’ moderation systems are frequently inadequate, slow, or biased in favour of users from the Global North.

The session exposed critical gaps in enforcement and accountability, particularly regarding large platforms like Meta and Google, which frequently resist compliance with national regulations. Malaysian Deputy Minister Teo Nie Ching and others emphasised that individual countries struggle to hold tech giants accountable, leading to calls for stronger regional blocs and international cooperation.

Meanwhile, Philippine lawmaker Raoul Manuel highlighted legislative progress, including extraterritorial jurisdiction for child exploitation and expanded definitions of online violence, though enforcement remains patchy. In Pakistan, Nighat Dad raised the alarm over AI-generated deepfakes and the burden placed on victims to monitor and report their own abuse.

Panellists also stressed that simply taking down harmful content isn’t enough. They called for systemic platform reform, including greater algorithm transparency, meaningful reporting tools, and design changes that prevent harm before it occurs.

Behavioural economist Sandra Maximiano introduced the concept of ‘nudging’ safer user behaviour through design interventions that account for human cognitive biases, approaches that could complement legal strategies by embedding protection into the architecture of online spaces.

Why does it matter?

A powerful takeaway from the session was the consensus that online safety must be treated as both a technological and human challenge. Participants agreed that coordinated global responses, inclusive policymaking, and engagement with community structures are essential to making the internet a safer place—particularly for those who need protection the most.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Elon Musk wants Grok AI to replace historical facts

Elon Musk has revealed plans to retrain his Grok AI model by rewriting human knowledge, claiming current training datasets contain too much ‘garbage’ and unchecked errors.

He stated that Grok 3.5 would be designed for ‘advanced reasoning’ and tasked with correcting historical inaccuracies before using the revised corpus to retrain itself.

Musk, who has criticised other AI systems like ChatGPT for being ‘politically correct’ and biased, wants Grok to be ‘anti-woke’ instead.

His stance echoes his earlier approach to X, where he relaxed content moderation and introduced a Community Notes feature in response to the platform being flooded with misinformation and conspiracy theories after his takeover.

The proposal has drawn fierce criticism from academics and AI experts. Gary Marcus called the plan ‘straight out of 1984’, accusing Musk of rewriting history to suit personal beliefs.

Logic professor Bernardino Sassoli de’ Bianchi warned the idea posed a dangerous precedent where ideology overrides truth, calling it ‘narrative control, not innovation’.

Musk also urged users on X to submit ‘politically incorrect but factually true’ content to help train Grok.

The move quickly attracted falsehoods and debunked conspiracies, including Holocaust distortion, anti-vaccine claims and pseudoscientific racism, raising alarms about the real risks of curating AI data based on subjective ideas of truth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France appeals porn site ruling based on EU legal grounds

The French government is challenging a recent decision by the Administrative Court of Paris that temporarily halted the enforcement of mandatory age verification on pornographic websites based in the EU. The court found France’s current approach potentially inconsistent with EU law, specifically the 2002 E-Commerce Directive, which upholds the ‘country-of-origin’ principle.

That rule limits an EU country’s authority to regulate online services hosted in another member state unless it follows a formal process involving both the host country and the European Commission. At the heart of the dispute is whether France correctly followed those required legal steps.

While French authorities say they notified the host countries of porn companies like Hammy Media (Xhamster) and Aylo (owner of Pornhub and others) and waited the mandated three months, legal experts argue that notifying the Commission is also essential. So far, there is no confirmation that this additional step was taken, which may weaken France’s legal standing.

Digital Minister Clara Chappaz reaffirmed the government’s commitment to enforcing age checks, calling it a ‘priority’ in a public statement. The ministry insists its rules align with the EU’s Audiovisual Media Services Directive.

However, the court’s ruling highlights broader tensions between France’s national digital regulations and overarching EU law. Similar legal challenges have already forced France to adjust parts of its digital, influencer, and cloud regulation frameworks in the past two years.

The appeal could have significant implications for age restrictions on adult content and how France asserts digital sovereignty within the EU. If the court upholds the suspension, other digital regulations based on national initiatives may also be vulnerable to legal scrutiny under EU principles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta sues AI firm over fake nude images created without consent

Meta has filed a lawsuit against Joy Timeline HK Ltd in Hong Kong, accusing the firm of using its platforms to promote a generative AI app called CrushAI.

The app allows users to digitally strip clothes from images of people, often without consent. Meta said the company repeatedly attempted to bypass ad review systems to push harmful content, advertising phrases like ‘see anyone naked’ on Facebook and Instagram.

The lawsuit follows Meta’s broader investigation into ‘nudify’ apps, which are increasingly being used to create sexualised deepfakes. Despite bans on nonconsensual explicit content, the company said such apps evade detection by disguising ads or rotating domain names after bans.

According to research by Cornell Tech, over 8,000 ads linked to CrushAI appeared on Meta platforms in recent months. Meta responded by updating its detection systems with a broader range of flagged terms and emojis.
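
The report does not detail how that detection works, but term-and-emoji flagging can be pictured as normalised substring matching. A hedged sketch with a hypothetical blocklist; Meta’s actual systems are not public:

```python
# Illustrative term-and-emoji flagging of ad copy. The term and emoji lists
# are hypothetical stand-ins; Meta's actual detection systems are not public.
import unicodedata

FLAGGED_TERMS = {"see anyone naked", "undress", "remove clothes"}  # hypothetical
FLAGGED_EMOJI = {"\N{PEACH}", "\N{AUBERGINE}"}                     # hypothetical

def is_flagged(ad_text: str) -> bool:
    """Normalise lookalike characters, then scan for flagged terms or emoji."""
    text = unicodedata.normalize("NFKC", ad_text).casefold()
    return (any(term in text for term in FLAGGED_TERMS)
            or any(emoji in text for emoji in FLAGGED_EMOJI))

print(is_flagged("See anyone NAKED with one tap!"))  # True
```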

While many of the manipulated images target celebrities, concerns are growing about the use of such technology to exploit minors. In one case in Florida, two teenagers used similar AI tools to create sexualised images of classmates.

The issue has sparked legal action in the US, where the Take It Down Act, signed into law earlier this year, criminalises the publication of nonconsensual deepfake imagery and simplifies removal processes for victims.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI must protect dignity, say US bishops

The US Conference of Catholic Bishops has urged Congress to centre AI policy on human dignity and the common good.

Their message outlines moral principles rather than technical guidance, warning against misuse of technology that may erode truth, justice, or the protection of the vulnerable.

The bishops caution against letting AI replace human moral judgement, especially in sensitive areas like family life, work, and warfare. They warn that, without strict oversight, AI risks deepening inequality and harming those already marginalised.

Their call includes demands for greater transparency, regulation of autonomous weapons, and stronger protections for children and workers in the US.

Rooted in Catholic social teaching, the letter frames AI not as a neutral innovation but as a force that must serve people, not displace them.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Massive leak exposes data of millions in China

Cybersecurity researchers have uncovered a brief but significant leak of over 600 gigabytes of data, exposing information on millions of Chinese citizens.

The haul, containing WeChat, Alipay, banking, and residential records, appears to be part of a centralised system, possibly built for large-scale surveillance rather than the product of a random data breach.

According to research from Cybernews and cybersecurity consultant Bob Diachenko, the data was likely used to build detailed behavioural, social and economic profiles of individuals.

They warned the information could be exploited for phishing, fraud, blackmail or even disinformation campaigns. Although only 16 datasets were reviewed before the database vanished, they pointed to a highly organised and purposeful collection effort.

The source of the leak remains unknown, but the scale and nature of the data suggest it may involve government-linked or state-backed entities rather than lone hackers.

The exposed information could allow malicious actors to track residence locations, financial activity and personal identifiers, placing millions of people at risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!