Guess AI model sparks fashion world debate

A striking new ‘supermodel’ has appeared in the August print edition of Vogue, featuring in a Guess advert for the brand’s summer collection. Unusually, the flawless blonde model is not real: a small disclaimer reveals she was created using AI.

While Vogue clarifies the AI model’s inclusion was an advertising decision, not editorial, it marks a significant first for the magazine and has ignited widespread controversy.

The development raises serious questions for real models, who have long campaigned for greater diversity, and for consumers, particularly young people, who are already grappling with unrealistic beauty standards.

Seraphinne Vallora, the company behind the controversial Guess advert, was founded by Valentina Gonzalez and Andreea Petrescu. They told the BBC that Guess’s co-founder, Paul Marciano, approached them on Instagram to create an AI model for the brand’s summer campaign.

Valentina Gonzalez explained, ‘We created 10 draft models for him and he selected one brunette woman and one blonde that we developed further.’ Petrescu described AI image generation as a complex process, with their five employees taking up to a month to create a finished product, charging clients like Guess up to the low six figures.

However, plus-size model Felicity Hayward, with over a decade in the industry, criticised the use of AI models, stating it ‘feels lazy and cheap’ and worried it could ‘undermine years of work towards more diversity in the industry.’

Hayward believes the fashion industry, which saw strides in inclusivity in the 2010s, has regressed, leading to fewer bookings for diverse models. She warned, ‘The use of AI models is another kick in the teeth that will disproportionately affect plus-size models.’

Gonzalez and Petrescu insist they do not reinforce narrow beauty standards, with Petrescu claiming, ‘We don’t create unattainable looks – the AI model for Guess looks quite realistic.’ They contended, ‘Ultimately, all adverts are created to look perfect and usually have supermodels in, so what we do is no different.’

While admitting their company’s Instagram shows a lack of diversity, Gonzalez explained to the BBC that attempts to post AI images of women with different skin tones failed to attract engagement: ‘people do not respond to them – we don’t get any traction or likes.’

They also noted that the technology is not yet advanced enough to create plus-size AI women. The claim echoes a 2024 Dove campaign that highlighted AI bias by showing image generators consistently producing thin, white, blonde women when asked for ‘the most beautiful woman in the world.’

Vanessa Longley, CEO of eating disorder charity Beat, found the advert ‘worrying,’ telling the BBC, ‘If people are exposed to images of unrealistic bodies, it can affect their thoughts about their own body, and poor body image increases the risk of developing an eating disorder.’

The lack of transparent labelling for AI-generated content in the UK is also a concern, despite Guess including a small disclaimer. Sinead Bovell, a former model and now tech entrepreneur, told the BBC that not clearly labelling AI content is ‘exceptionally problematic’ because ‘AI is already influencing beauty standards.’

Sara Ziff, a former model and founder of the Model Alliance, views Guess’s campaign as ‘less about innovation and more about desperation and a need to cut costs,’ advocating for ‘meaningful protections for workers’ in the industry.

Seraphinne Vallora, however, denies replacing models, with Petrescu explaining, ‘We’re offering companies another choice in how they market a product.’

Although their website touts cost-efficiency by ‘eliminating the need for expensive set-ups… hiring models,’ the company involves real models and photographers in its AI creation process. Vogue’s decision to run the advert has drawn criticism on social media, with Bovell noting the magazine’s influential position, which means they are ‘in some way ruling it as acceptable.’

Looking ahead, Bovell predicts more AI-generated models but not their total dominance, foreseeing a future where individuals might create personal AI avatars to try on clothes and a potential ‘society opting out’ if AI models become too unattainable.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK to retaliate against cyber attacks, minister warns

Britain’s security minister has warned that hackers targeting UK institutions will face consequences, including potential retaliatory cyber operations.

Speaking to POLITICO at the British Library — still recovering from a 2023 ransomware attack by Rhysida — Security Minister Dan Jarvis said the UK is prepared to use offensive cyber capabilities to respond to threats.

‘If you are a cybercriminal and think you can attack a UK-based institution without repercussions, think again,’ Jarvis stated. He emphasised the importance of sending a clear signal that hostile activity will not go unanswered.

The warning follows a recent government decision to ban ransom payments by public sector bodies. Jarvis said deterrence must be matched by vigorous enforcement.

The UK has acknowledged its offensive cyber capabilities for over a decade, but recent strategic shifts have expanded their role. A £1 billion investment in a new Cyber and Electromagnetic Command will support coordinated action alongside the National Cyber Force.

While Jarvis declined to specify technical capabilities, he cited the National Crime Agency’s role in disrupting the LockBit ransomware group as an example of the UK’s growing offensive posture.

AI is accelerating both cyber threats and defensive measures. Jarvis said the UK must harness AI for national advantage, describing an ‘arms race’ amid rapid technological advancement.

Most cyber threats originate from Russia or its affiliated groups, though Iran, China, and North Korea remain active. The UK is also increasingly concerned about ‘hack-for-hire’ actors operating from friendly nations, including India.

Despite these concerns, Jarvis stressed the UK’s strong security ties with India and ongoing cooperation to curb cyber fraud. ‘We will continue to invest in that relationship for the long term,’ he said.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Altman warns AI voice cloning will break bank security

OpenAI CEO Sam Altman has warned that AI poses a serious threat to financial security through voice-based fraud.

Speaking at a Federal Reserve conference in Washington, Altman said AI can now convincingly mimic human voices, rendering voiceprint authentication obsolete and dangerously unreliable.

He expressed concern that some financial institutions still rely on voice recognition to verify identities. ‘That is a crazy thing to still be doing. AI has fully defeated that,’ he said. The risk, he noted, is that AI voice clones can now deceive these systems with ease.

Altman added that video impersonation capabilities are also advancing rapidly. Technologies that become indistinguishable from real people could enable more sophisticated fraud schemes. He called for the urgent development of new verification methods across the industry.

Michelle Bowman, the Fed’s Vice Chair for Supervision, echoed the need for action. She proposed potential collaboration between AI developers and regulators to create better safeguards. ‘That might be something we can think about partnering on,’ Bowman told Altman.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

5G Advanced lays the groundwork for 6G, says 5G Americas

5G Americas has released a new white paper outlining how 5G Advanced features in 3GPP Releases 18 to 20 are shaping the path to 6G.

The report highlights how 5G Advanced is evolving mobile networks through embedded AI, scaled IoT, improved energy efficiency, and broader service capabilities. Viet Nguyen, President of 5G Americas, called it a turning point for wireless systems, offering more intelligent, resilient, and sustainable connectivity.

AI-native networking is a key innovation, bringing machine learning into the radio and core networks. It enables zero-touch automation, predictive maintenance, and self-organising systems, cutting fault-detection times by 90% and reducing false alarms by 70%.

Energy efficiency is another core benefit. Features like cell sleep modes and antenna switching can reduce energy use by up to 56%. Ambient IoT also advances, enabling battery-less devices for industrial and consumer use in energy-constrained environments.

Latency improvements such as L4S (Low Latency, Low Loss, Scalable Throughput) and enhanced QoS allow scalable support for immersive XR and real-time automation. Advances in spectral efficiency and satellite support are boosting uplink speeds above 500 Mbps and expanding coverage to remote areas.

Andrea Brambilla of Nokia noted that 5G Advanced supports digital twins, private networks, and AI-driven transformation. Pei Hou of T-Mobile said it builds on 5G Standalone to prepare for a sustainable shift to 6G.

The paper urges updated policies on AI governance, spectrum sharing, and IoT standards to ensure global interoperability. Strategic takeaways include AI, automation, and energy savings as key to long-term innovation and monetisation across the public and private sectors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How agentic AI is transforming cybersecurity

Cybersecurity is gaining a new teammate—one that never sleeps and acts independently. Agentic AI doesn’t wait for instructions. It detects threats, investigates, and responds in real time. This new class of AI is beginning to change the way we approach cyber defence.

Unlike traditional AI systems, Agentic AI operates with autonomy. It sets objectives, adapts to environments, and self-corrects without waiting for human input. In cybersecurity, this means instant detection and response, beyond simple automation.

With networks more complex than ever, security teams are stretched thin. Agentic AI offers relief by executing actions like isolating compromised systems or rewriting firewall rules. This technology promises to ease alert fatigue and keep up with evasive threats.

A 2025 Deloitte report says 25% of GenAI-using firms will pilot Agentic AI this year. SailPoint found that 98% of organisations will expand AI agent use in the next 12 months. But rapid adoption also raises concern—96% of tech workers see AI agents as security risks.

The integration of AI agents is expanding to cloud, endpoints, and even physical security. Yet with new power comes new vulnerabilities—from adversaries mimicking AI behaviour to the risk of excessive automation without human checks.

Key challenges include ethical bias, unpredictable errors, and uncertain regulation. In sectors like healthcare and finance, oversight and governance must keep pace. The solution lies in balanced control and continuous human-AI collaboration.

Cybersecurity careers are shifting in response. Hybrid roles such as AI Security Analysts and Threat Intelligence Automation Architects are emerging. To stay relevant, professionals must bridge AI knowledge with security architecture.

Agentic AI is redefining cybersecurity. It boosts speed and intelligence but demands new skills and strong leadership. Adaptation is essential for those who wish to thrive in tomorrow’s AI-driven security landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SatanLock ends operation amid ransomware ecosystem turmoil

SatanLock, a ransomware group active since April 2025, has announced it is shutting down. The group quickly gained notoriety, claiming 67 victims on its now-defunct dark web leak site.

Cybersecurity firm Check Point says more than 65% of these victims had already appeared on other ransomware leak pages. This suggests the group may have used shared infrastructure or tried to hijack previously compromised networks.

Such tactics reflect growing disorder within the ransomware ecosystem, where double-posting of victims is on the rise. SatanLock may have been part of a broader criminal network, as it shares ties with families like Babuk-Bjorka and GD Lockersec.

A shutdown message was posted on the gang’s Telegram channel and leak page, announcing plans to leak all stolen data. The reason for the sudden closure has not been disclosed.

Another group, Hunters International, announced its disbandment just days earlier. Unlike SatanLock, Hunters offered free decryption keys to its victims in a parting gesture.

These back-to-back exits signal possible pressure from law enforcement, rivals, or internal collapse in the ransomware world. Analysts are watching closely to see whether this trend continues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IGF 2025: Africa charts a sovereign path for AI governance

African leaders at the Internet Governance Forum (IGF) 2025 in Oslo called for urgent action to build sovereign and ethical AI systems tailored to local needs. Hosted by the German Federal Ministry for Economic Cooperation and Development (BMZ), the session brought together voices from government, civil society, and private enterprises.

Moderated by Ashana Kalemera, Programmes Manager at CIPESA, the discussion focused on ensuring AI supports democratic governance in Africa. ‘We must ensure AI reflects our realities,’ Kalemera said, emphasising fairness, transparency, and inclusion as guiding principles.

Neema Iyer, Executive Director of Pollicy, warned that AI can harm governance through surveillance, disinformation, and political manipulation. ‘Civil society must act as watchdogs and storytellers,’ she said, urging public interest impact assessments and grassroots education.

Representing South Africa, Mlindi Mashologu stressed the need for transparent governance frameworks rooted in constitutional values. ‘Policies must be inclusive,’ he said, highlighting explainability, data bias removal, and citizen oversight as essential components of trustworthy AI.

Lacina Koné, CEO of Smart Africa, called for urgent action to avoid digital dependency. ‘We cannot be passively optimistic. Africa must be intentional,’ he stated. Over 1,000 African startups rely on foreign AI models, creating sovereignty risks.

Koné emphasised that Africa should focus on beneficial AI, not the most powerful. He highlighted agriculture, healthcare, and education as sectors where local AI could be transformative. ‘It’s about opportunity for the many, not just the few,’ he said.

From Mauritania, Matchiane Soueid Ahmed shared her country’s experience developing a national AI strategy. Challenges include poor rural infrastructure, technical capacity gaps, and lack of institutional coordination. ‘Sovereignty is not just territorial—it’s digital too,’ she noted.

Shikoh Gitau, CEO of Qhala in Kenya, brought a private sector perspective. ‘We must move from paper to pavement,’ she said. Her team runs an AI literacy campaign across six countries, training teachers directly through their communities.

Gitau stressed the importance of enabling environments and blended financing. ‘Governments should provide space, and private firms must raise awareness,’ she said. She also questioned imported frameworks: ‘What definition of democracy are we applying?’

Audience members from Gambia, Ghana, and Liberia raised key questions about harmonisation, youth fears over job losses, and AI readiness. Koné responded that Smart Africa is benchmarking national strategies and promoting convergence without erasing national sovereignty.

Though 19 African countries have published AI strategies, speakers noted that implementation remains slow. Practical action—such as infrastructure upgrades, talent development, and public-private collaboration—is vital to bring these frameworks to life.

The panel underscored the need to build AI systems prioritising inclusion, utility, and human rights. Investments in digital literacy, ethics boards, and regulatory sandboxes were cited as key tools for democratic AI governance.

Kalemera concluded, ‘It’s not yet Uhuru for AI in Africa—but with the right investments and partnerships, the future is promising.’ The session reflected cautious optimism and a strong desire for Africa to shape its AI destiny.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Internet Governance Forum marks 20 years of reshaping global digital policy

The 2025 Internet Governance Forum (IGF), held in Norway, offered a deep and wide-ranging reflection on the IGF’s 20-year journey in shaping digital governance and its prospects for the future.

Bringing together voices from governments, civil society, the technical community, business, and academia, the session celebrated the IGF’s unique role in institutionalising a multistakeholder approach to internet policymaking, particularly through inclusive and non-binding dialogue.

Moderated by Avri Doria, who has been with the IGF since its inception, the session focused on how the forum has influenced individuals, governments, and institutions across the globe. Doria described the IGF as a critical learning platform and a ‘home for evolving objectives’ that has helped connect people with vastly different viewpoints over the decades.

Professor Bitange Ndemo, Ambassador of Kenya to the European Union, reflected on his early scepticism, admitting that stakeholder consultation initially felt ‘painful’ for policymakers unfamiliar with collaborative approaches.

Over time, however, it proved ‘much, much easier’ for implementation and policy acceptance. ‘Thank God it went the IGF way,’ he said, emphasising how early IGF discussions guided Kenya and much of Africa in building digital infrastructure from the ground up.

Hans Petter Holen, Managing Director of RIPE NCC, underlined the importance of the IGF as a space where ‘technical realities meet policy aspirations’. He called for a permanent IGF mandate, stressing that uncertainty over its future limits its ability to shape digital governance effectively.

Renata Mielli, Chair of the Internet Steering Committee of Brazil (CGI.br), spoke about how IGF-inspired dialogue was key to shaping Brazil’s Internet Civil Rights Framework and Data Protection Law. ‘We are not talking about an event or a body, but an ecosystem,’ she said, advocating for the IGF to become the focal point for implementing the UN Global Digital Compact.

Funke Opeke, founder of MainOne in Nigeria, credited the IGF with helping drive West Africa’s digital transformation. ‘When we launched our submarine cable in 2010, penetration was close to 10%. Now it’s near 50%,’ she noted, urging continued support for inclusion and access in the Global South.

Qusai Al Shatti, from the Arab IGF, highlighted how the forum helped embed multistakeholder dialogue into governance across the Arab world, calling the IGF ‘the most successful outcome of WSIS’.

From the civil society perspective, Chat Garcia Ramilo of the Association for Progressive Communications (APC) described the IGF as a platform ‘to listen deeply, to speak, and, more importantly, to act’. She stressed the forum’s role in amplifying marginalised voices and pushing human rights and gender issues to the forefront of global internet policy.

Luca Belli of FGV Law School in Brazil echoed the need for better visibility of the IGF’s successes. Despite running four dynamic coalitions, he expressed frustration that many contributions go unnoticed. ‘We’re not good at celebrating success,’ he remarked.

Isabelle Lois, Vice Chair of the UN Commission on Science and Technology for Development (CSTD), emphasised the need to ‘connect the IGF to the wider WSIS architecture’ and ensure its outcomes influence broader UN digital frameworks.

Other voices joined online and from the floor, including Dr Robinson Sibbe of Digital Footprints Nigeria, who praised the IGF for contextualising cybersecurity challenges, and Emily Taylor, a UK researcher, who noted that the IGF had helped lay the groundwork for key initiatives like the IANA transition and the proliferation of internet exchange points across Africa.

Youth participants like Jasmine Maffei from Hong Kong and Piu from Myanmar stressed the IGF’s openness and accessibility. They called for their voices to be formally recognised within the multistakeholder model.

Veteran internet governance leader Markus Kummer reminded the room that the IGF’s ability to build trust and foster dialogue across divides enabled global cooperation during crucial events like the IANA transition.

Despite the celebratory tone, speakers repeatedly stressed three urgent needs: a permanent IGF mandate, stronger integration with global digital governance efforts such as the WSIS and Global Digital Compact, and broader inclusion of youth and underrepresented regions.

As the forum entered its third decade, many speakers agreed that the IGF’s legacy lies not in its meetings or declarations but in the relationships, trust, and governance culture it has helped create. The message from Norway was clear: in a fragmented and rapidly changing digital world, the IGF is more vital than ever—and its future must be secured.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Child safety online in 2025: Global leaders demand stronger rules

At the 20th Internet Governance Forum in Lillestrøm, Norway, global leaders, technology firms, and child rights advocates gathered to address the growing risks children face from algorithm-driven digital platforms.

The high-level session, Ensuring Child Security in the Age of Algorithms, explored the impact of engagement-based algorithmic systems on children’s mental health, cultural identity, and digital well-being.

Shivanee Thapa, Senior News Editor at Nepal Television and moderator of the session, opened with a personal note on the urgency of the issue, calling it ‘too urgent, too complex, and too personal.’

She outlined the session’s three focus areas: identifying algorithmic risks, reimagining child-centred digital systems, and defining accountability for all stakeholders.

Leanda Barrington-Leach, Executive Director of the Five Rights Foundation, delivered a powerful opening, sharing alarming data: ‘Half of children feel addicted to the internet, and more than three-quarters encounter disturbing content.’

She criticised tech platforms for prioritising engagement and profit over child safety, warning that children can stumble from harmless searches to harmful content in a matter of clicks.

‘The digital world is 100% human-engineered. It can be optimised for good just as easily as for bad,’ she said.

Norway is pushing for age limits on social media and implementing phone bans in classrooms, according to Minister of Digitalisation and Public Governance Karianne Tung.

‘Children are not commodities,’ she said. ‘We must build platforms that respect their rights and wellbeing.’

Salima Bah, Sierra Leone’s Minister of Science, Technology, and Innovation, raised concerns about cultural erasure in algorithmic design. ‘These systems often fail to reflect African identities and values,’ she warned, noting that a significant portion of internet traffic in Sierra Leone flows through TikTok.

Bah emphasised the need for inclusive regulation that works for regions with different digital access levels.

From the European Commission, Thibaut Kleiner, Director for Future Networks at DG Connect, pointed to the Digital Services Act as a robust regulatory model.

He challenged the assumption of children as ‘digital natives’ and called for stronger age verification systems. ‘Children use apps but often don’t understand how they work — this makes them especially vulnerable,’ he said.

Representatives from major platforms described their approaches to online safety. Christine Grahn, Head of Public Policy at TikTok Europe, emphasised safety-by-design features such as private default settings for minors and the Global Youth Council.

‘We show up, we listen, and we act,’ she stated, describing TikTok’s ban on beauty filters that alter appearance as a response to youth feedback.

Emily Yu, Policy Senior Director at Roblox, discussed the platform’s Trust by Design programme and its global teen council.

‘We aim to innovate while keeping safety and privacy at the core,’ she said, noting that Roblox emphasises discoverability over personalised content for young users.

Thomas Davin, Director of Innovation at UNICEF, underscored the long-term health and societal costs of algorithmic harm, describing it as a public health crisis.

‘We are at risk of losing the concept of truth itself. Children increasingly believe what algorithms feed them,’ he warned, stressing the need for more research on screen time’s effect on neurodevelopment.

The panel agreed that protecting children online requires more than regulation alone. Co-regulation, international cooperation, and inclusion of children’s voices were cited as essential.

Davin called for partnerships that enable companies to innovate responsibly, while Grahn described a successful cross-sector campaign in Sweden to help teens avoid criminal exploitation.

Tung concluded with a rallying message: ‘Looking back 10 or 20 years from now, I want to know I stood on the children’s side.’

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Onnuri Church probes hack after broadcast hijacked by North Korean flag

A North Korean flag briefly appeared during a live-streamed worship service from one of Seoul’s largest Presbyterian churches, prompting an urgent investigation into what church officials are calling a cyberattack.

The incident occurred Wednesday morning during an early service at Onnuri Church’s Seobinggo campus in Yongsan, South Korea.

While Pastor Park Jong-gil was delivering his sermon, the broadcast suddenly cut to a full-screen image of the flag of North Korea, accompanied by unidentified background music. His audio was muted during the disruption, which lasted around 20 seconds.

The unexpected clip appeared on the church’s official YouTube channel and was quickly captured by viewers, who began sharing it across online platforms and communities.

On Thursday, Onnuri Church issued a public apology on its website and confirmed it was treating the event as a deliberate cyber intrusion.

‘An unplanned video was transmitted during the livestream of our early morning worship on 18 June. We believe this resulted from a hacking incident,’ the statement read. ‘An internal investigation is underway, and we are taking immediate measures to identify the source and prevent future breaches.’

A church official told Yonhap News Agency that the incident had been reported to the relevant authorities, and no demands or threats had been received regarding the breach. The investigation continues as the church works with authorities to determine the origin and intent of the attack.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!