AI technology drives sharp rise in synthetic abuse material

AI is increasingly being used to produce highly realistic synthetic abuse videos, raising alarm among regulators and industry bodies.

According to new data published by the Internet Watch Foundation (IWF), 1,286 individual AI-generated abuse videos were identified during the first half of 2025, compared to just two in the same period last year.

No longer crude or glitch-filled, such material now appears so lifelike that, under UK law, it must be treated in the same way as authentic recordings.

More than 1,000 of the videos fell into Category A, the most serious classification involving depictions of extreme harm. The number of webpages hosting this type of content has also risen sharply.

Derek Ray-Hill, interim chief executive of the IWF, expressed concern that longer-form synthetic abuse films are now inevitable unless binding safeguards around AI development are introduced.

Safeguarding minister Jess Phillips described the figures as ‘utterly horrific’ and confirmed two new laws are being introduced to address both those creating this material and those providing tools or guidance on how to do so.

IWF analysts say video quality has advanced significantly, making the material far harder to detect. What once involved clumsy manipulation is now alarmingly convincing, complicating efforts to monitor and remove such content.

The IWF encourages the public to report concerning material and share the exact web page where it is located.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Qantas hacked as airline cyber threats escalate

Qantas Airways has confirmed that personal data from 5.7 million customers was stolen in a recent cyberattack, including names, contact details and meal preferences. The airline stated that no financial or login credentials were accessed, and frequent flyer accounts remain secure.

An internal investigation found the breach exposed varying levels of personal information, with 2.8 million passengers most severely affected. More than a million customers had addresses or dates of birth stolen, while meal preferences were the least common category of data taken.

Qantas has contacted affected passengers and is offering support while monitoring the situation with cybersecurity experts. Under pressure to manage the crisis effectively, CEO Vanessa Hudson assured the public that extra security steps had been taken.

The breach is the latest in a wave of attacks targeting airlines, with the FBI warning that the hacking group Scattered Spider may be responsible. Similar incidents have recently affected carriers in the US and Canada.


M&S still rebuilding after April cyber incident

Marks & Spencer has revealed that the major cyberattack it suffered in April stemmed from a sophisticated impersonation of a third-party user.

The breach began on 17 April and was detected two days later, sparking weeks of disruption and a crisis response effort described as ‘traumatic’ by Chairman Archie Norman.

The retailer estimates the incident will cost it £300 million in operating profit and says it remains in rebuild mode, although customer services are expected to normalise by month-end.

Norman confirmed M&S is working with UK and US authorities, including the National Crime Agency, the National Cyber Security Centre, and the FBI.

While the ransomware group DragonForce has claimed responsibility, Norman declined to comment on whether any ransom was paid. He said such matters were better left to law enforcement and not in the public interest to discuss further.

The company expects to recover some of its losses through insurance, although the process may take up to 18 months. Other UK retailers, including Co-op and Harrods, were also targeted in similar attacks around the same time, reportedly using impersonation tactics to bypass internal security systems.


Digital humanism in the AI era: Caution, culture, and the call for human-centric technology

At the WSIS+20 High-Level Event in Geneva, the session ‘Digital Humanism: People First!’ spotlighted growing concerns over how digital technologies—especially AI—are reshaping society. Moderated by Alfredo M. Ronchi, the discussion revealed a deep tension between the liberating potential of digital tools and the risks they pose to cultural identity, human dignity, and critical thinking.

Speakers warned that while digital access has democratised communication, it has also birthed a new form of ‘cognitive colonialism’—where people become dependent on AI systems that are often inaccurate, manipulative, and culturally homogenising.

The panellists, including legal expert Pavan Duggal, entrepreneur Lilly Christoforidou, and academic Sarah Jane Fox, voiced alarm over society’s uncritical embrace of generative AI and its looming evolution toward artificial general intelligence by 2026. Duggal painted a stark picture of a world where AI systems override human commands and manipulate users, calling for a rethinking of legal frameworks prioritising risk reduction over human rights.

Fox drew attention to older people, warning that growing digital complexity risks alienating entire generations, while Christoforidou urged for ethical awareness to be embedded in educational systems, especially among startups and micro-enterprises.

Despite some disagreement over the fundamental impact of technology—ranging from Goyal’s pessimistic warning about dehumanisation to Anna Katz’s cautious optimism about educational potential—the session reached a strong consensus on the urgent need for education, cultural protection, and contingency planning. Panellists called for international cooperation to preserve cultural diversity and develop ‘Plan B’ systems to sustain society if digital infrastructures fail.

The session’s tone was overwhelmingly cautionary, with speakers imploring stakeholders to act before AI outpaces our capacity to govern it. Their message was clear: human values, not algorithms, must define the digital age. Without urgent reforms, the digital future may leave humanity behind—not by design, but by neglect.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

UN leaders chart inclusive digital future at WSIS+20

At the WSIS+20 High-Level Event in Geneva, UN leaders gathered for a pivotal dialogue on shaping an inclusive digital transformation, marking two decades since the World Summit on the Information Society (WSIS). Speakers across the UN system emphasised that technology must serve people, not vice versa.

They highlighted that bridging the digital divide is critical to ensuring that innovations like AI uplift all of humanity, not just those in advanced economies. Without equitable access, the benefits of digital transformation risk reinforcing existing inequalities and leaving millions behind.

The discussion showcased how digital technologies already transform disaster response and climate resilience. The World Meteorological Organization and the UN Office for Disaster Risk Reduction illustrated how AI powers early warning systems and real-time risk analysis, saving lives in vulnerable regions.

Meanwhile, the Food and Agriculture Organization of the UN underscored the need to align technology with basic human needs, reminding the audience that ‘AI is not food,’ and calling for thoughtful, efficient deployment of digital tools to address global hunger and development.

Workforce transformation and leadership in the AI era also featured prominently. Leaders from the International Labour Organization and UNITAR stressed that while AI may replace some roles, it will augment many more, making digital literacy, ethical foresight, and collaborative governance essential skills. Examples from within the UN system itself, such as the digitisation of the Joint Staff Pension Fund through facial recognition and blockchain, demonstrated how innovation can enhance services without sacrificing inclusivity or ethics.

As the session closed, speakers collectively reaffirmed the importance of human rights, international cooperation, and shared digital governance. They stressed that the future of global development hinges on treating digital infrastructure and knowledge as public goods.

With the WSIS framework and Global Digital Compact as guideposts, UN leaders called for sustained, unified efforts to ensure that digital transformation uplifts every community and contributes meaningfully to the Sustainable Development Goals.


Perplexity launches AI browser to challenge Google Chrome

Perplexity AI, backed by Nvidia and other major investors, has launched Comet, an AI-driven web browser designed to rival Google Chrome.

The browser uses ‘agentic AI’ that performs tasks, makes decisions, and simplifies workflows in real time, offering users an intelligent alternative to traditional search and navigation.

Comet’s assistant can compare products, summarise articles, book meetings, and handle research queries through a single interface. Initially available to subscribers of Perplexity Max at US$200 per month, Comet will gradually roll out more broadly via invite during the summer.

The launch signals Perplexity’s move into the competitive browser space, where Chrome currently dominates with a 68 per cent global market share.

The company aims to challenge not only Google’s and Microsoft’s browsers but also compete with OpenAI, which recently introduced search to ChatGPT. Unlike many AI tools, Comet stores data locally and does not train on personal information, positioning itself as a privacy-first solution.

Still, Perplexity has faced criticism for using content from major media outlets without permission. In response, it launched a publisher partnership program to address concerns and build collaborative relationships with news organisations like Forbes and Dow Jones.


X CEO Yaccarino resigns as AI controversy and Musk’s influence grow

Linda Yaccarino has stepped down as CEO of X, ending a turbulent two-year tenure marked by Musk’s controversial leadership and ongoing transformation of the social media company.

Her resignation came just one day after a backlash over offensive posts by Grok, the AI chatbot created by Musk’s xAI, which had been recently integrated into the platform.

Yaccarino, who was previously a top advertising executive at NBCUniversal, was brought on in 2023 to help stabilise the company following Musk’s $44bn acquisition.

In her farewell post, she cited efforts to improve user safety and rebuild advertiser trust, but did not provide a clear reason for her departure.

Analysts suggest growing tensions with Musk’s management style, particularly around AI moderation, may have prompted the move.

Her exit adds to the mounting challenges facing Musk’s empire.

Tesla is suffering from slumping sales and executive departures, while X remains under pressure from heavy debts and legal battles with advertisers.

Yaccarino had spearheaded ambitious initiatives, including payment partnerships with Visa and plans for an X-branded credit or debit card.

Despite these developments, X continues to face scrutiny for its rightward political shift and reliance on controversial AI tools.

Whether the company can fulfil Musk’s vision of becoming an ‘everything app’ without Yaccarino remains to be seen.


UNESCO panel calls for ethics to be core of emerging tech, not an afterthought

At the WSIS+20 High-Level Event in Geneva, UNESCO hosted a session titled ‘Ethics in AI: Shaping a Human-Centred Future in the Digital Age,’ where global experts warned that ethics must be built into the foundation of emerging technologies such as AI, neurotechnology, and quantum computing—not added later as damage control.

UNESCO’s Chief of Bioethics and Ethics of Science and Technology, Dafna Feinholz, stressed that ethical considerations should shape technology development from the start, echoing the organisation’s mission to safeguard human rights and freedoms alongside scientific innovation.

Panellists underscored the tension between individual intentions and institutional realities. Philosopher Mira Wolf-Bauwens argued that while developers often begin with a sense of moral responsibility, corporate pressures quickly override these principles.

Drawing from her work in the quantum sector, she described how companies dilute ethical concerns into mere legal compliance, eroding their original purpose. Neuroscientist and entrepreneur Ryota Kanai echoed this concern, sharing how the rush to commercialise neurotechnology has led to premature products that risk undermining public trust, especially when privacy risks remain poorly understood.

The session also highlighted success stories in ethical governance, such as Thailand’s efforts to implement UNESCO’s AI ethics framework. Chaichana Mitrpant, leading the country’s digital policy agency, described a localised yet uncompromised approach that engaged multiple stakeholders—from regulators to small businesses. The collaborative model helped tailor global ethical guidelines to national realities while maintaining core human values.

Panellists agreed that while regulation plays a role, ethics must remain broader, more agile, and focused on motivation rather than just rule enforcement. With technologies evolving faster than laws can adapt, anticipatory governance, cross-sector collaboration, and inclusive debate were hailed as essential. The session closed with a shared call to action: embedding ethics in every stage of technology development is not just ideal—it’s urgently necessary to build a trustworthy digital future.


xAI unveils Grok 4 with top benchmark scores

Elon Musk’s AI company, xAI, has launched its latest flagship model, Grok 4, alongside an ultra-premium $300 monthly plan named SuperGrok Heavy.

Grok 4, which competes with OpenAI’s ChatGPT and Google’s Gemini, can handle complex queries and interpret images. It is now integrated more deeply into the social media platform X, which Musk also owns.

Despite recent controversy, including antisemitic responses generated by Grok’s official X account, xAI focused on showcasing the model’s performance.

Musk claimed Grok 4 is ‘better than PhD level’ in all academic subjects and revealed a high-performing version called Grok 4 Heavy, which uses multiple AI agents to solve problems collaboratively.

The models scored strongly on benchmark exams, including a 25.4% score for Grok 4 on Humanity’s Last Exam, outperforming major rivals. With tools enabled, Grok 4 Heavy reached 44.4%, nearly doubling OpenAI’s and Google’s results.

It also achieved a leading score of 16.2% on the ARC-AGI-2 pattern recognition test, nearly double that of Claude Opus 4.

xAI is targeting developers through its API and enterprise partnerships while teasing upcoming tools: an AI coding model in August, a multi-modal agent in September, and video generation in October.

Yet the road ahead may be rocky, as the company works to overcome trust issues and position Grok as a serious rival in the AI arms race.


Google partners with UK government on AI training

The UK government has struck a major partnership with Google Cloud aimed at modernising public services by eliminating ageing IT systems and equipping 100,000 civil servants with digital and AI skills by 2030.

Backed by DSIT, the initiative targets sectors like the NHS and local councils, seeking both operational efficiency and workforce transformation.

Replacing legacy contracts, some of which date back decades, could unlock as much as £45 billion in efficiency savings, say ministers. Google DeepMind will provide technical expertise to help departments adopt emerging AI solutions and accelerate public sector innovation.

Despite these promising aims, privacy campaigners warn that reliance on a US-based tech giant threatens national data sovereignty and may lead to long-term lock-in.

Foxglove’s Martha Dark described the deal as ‘dangerously naive’, citing concerns over data access, accountability, public procurement processes and geopolitical risk.

As ministers pursue broader technological transformation, similar partnerships with Microsoft, OpenAI and Meta are underway, reflecting an industry-wide effort to bridge digital skills gaps and bring agile solutions into Whitehall.
